General Question


How do computers work (see details)?

Asked by PhiNotPi (12686 points), February 23rd, 2011

I know about the CPU, RAM, ROM, hard drive, etc. I know about programs. I also know about transistors, logic gates, logic circuits, etc. But what I am having trouble understanding is how, given instructions, the CPU changes its function, like changing from adding numbers to dividing to whatever else it does. So, basically, I am asking how higher function arises out of the basic logic circuits of a computer.


13 Answers

koanhead

This is a complex question, and I don’t pretend to know computer science well enough to give you a comprehensive answer. The link below is probably the best place to start:

http://en.wikipedia.org/wiki/Turing_machine

Turing was the mathematician who first came up with a general model of an imaginary “machine” that could follow algorithmic steps to reach a solution for any computable problem.
Digital computers are, in effect, a special case of the universal Turing machine, usually built on a von Neumann architecture.

Zaku

Changing from adding to dividing, at the lowest level, is a matter of two different instructions in the CPU. The CPU performs operations by reading a list of instructions, and some of those instructions are mathematical operations on values stored in specified places in the computer (typically memory addresses or registers). Almost everything else the CPU does boils down to a list of possible instructions (the CPU instruction set). Forty years ago, people mainly programmed the CPU directly; now there are several other layers on top of that which present different languages to write code in, though they all eventually result in CPU instructions being executed.
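A minimal sketch of that idea in Python (the opcode values are invented for illustration, not from any real instruction set): the operation bits of an instruction simply select which operation the arithmetic unit applies, so “changing from adding to dividing” is just a different opcode.

    # Toy ALU: the opcode selects which operation the "circuit" performs.
    # Opcode values here are made up for illustration.
    ALU_OPS = {
        0b000: lambda a, b: a + b,   # ADD
        0b001: lambda a, b: a - b,   # SUB
        0b010: lambda a, b: a * b,   # MUL
        0b011: lambda a, b: a // b,  # DIV (integer division)
    }

    def alu(opcode, a, b):
        return ALU_OPS[opcode](a, b)

    print(alu(0b000, 6, 7))   # 13: the same machinery, told to add
    print(alu(0b011, 42, 6))  # 7: the same machinery, told to divide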

Zyx

I get how that can be confusing, but it’s not the part I’m confused about.

Any output device serving any function has a limited number of states (as @koanhead said, see Turing machine). Producing the actual output is a matter of dividing all the needed states across the aforementioned math, which gets a little large here simply because of the large number of states most output devices require. (For your screen, for example, the number of states is the number of colours each pixel can display (including off, for convenience) raised to the power of the number of pixels in your screen, since every pixel varies independently.)

If by higher functions you meant something else, well, then I’m confused.
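To put a number on the screen example above: because every pixel varies independently, the counts multiply rather than add. A quick Python check, using a made-up tiny display:

    # Each pixel independently shows one of `colours` values, so the screen
    # as a whole has colours ** pixels distinct states.
    pixels = 2 * 2   # a toy 2x2 "screen"
    colours = 4      # three colours plus "off"
    print(colours ** pixels)  # 256 distinct images, even for this tiny display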

Zaku

Well, much the same way a bunch of flattened wood pulp, string, glue, and a pen can be turned into a novel. People write programs (generally operating-system programs, then other layers of programs that build on those, and so on, but essentially it’s all just programs) with a consistent context the programmers had in mind, and that adds up to something other people can interpret as having a higher function, and find useful or interesting or fun.

Back in the 1970s, my dad had a digital calculator and a paperback book of games you could play using the calculator. Things like doing some math and ending up with 07734, which, if you turned it upside down, you could be tickled pink to find looked like the word HELLO, kind of. Modern programming is just lots more complicated than that, and layered, but it’s the same fundamental phenomenon.

jerv

One thing I feel I should point out here, to expand on @Zyx’s answer, is that the sheer number of transistors in a modern computer allows for many states. And when I say “many”, I mean a number that you might have difficulty wrapping your head around, and will probably only understand by calling it magic.
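To put a rough figure on “many” (the numbers below are illustrative, not a count of any real chip):

    # N transistors, each storing one bit, give 2**N possible machine states.
    n = 300                  # just 300 bits of state...
    print(len(str(2 ** n)))  # -> 91: already a 91-digit number of states
    # For comparison, the observable universe holds roughly 10**80 atoms.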

Vortico

My father asked the same thing a couple of years ago, and I had to spread the explanation over the course of a few days. Since this is such a complex subject, I would agree with @koanhead and recommend you learn about the fundamental computer science formulated by mathematicians and computer scientists in the 1930s and ’40s. Also, if you were to learn the basics of assembly language and how it relates to machine code, your question may be satisfied.
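As a taste of that relationship, here is a toy “assembler” in Python; the mnemonics and opcode bytes are invented for illustration, but the principle is real: each line of assembly text maps mechanically to machine-code bytes.

    # Toy assembler: one made-up opcode byte plus one operand byte per line.
    OPCODES = {"LOAD": 0x01, "ADD": 0x02, "DIV": 0x03, "HALT": 0xFF}

    def assemble(lines):
        code = bytearray()
        for line in lines:
            mnemonic, _, operand = line.partition(" ")
            code.append(OPCODES[mnemonic])    # the operation
            code.append(int(operand or "0"))  # its argument, if any
        return bytes(code)

    print(assemble(["LOAD 6", "ADD 4", "DIV 2", "HALT"]).hex(" "))
    # -> 01 06 02 04 03 02 ff 00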

Odysseus

You didn’t mention the word binary: 01100101 01110110 01100101 01110010 01111001 01110100 01101000 01101001 01101110 01100111 00100000 01100001 00100000 01100011 01101111 01101101 01110000 01110101 01110100 01100101 01110010 00100000 01100100 01101111 01100101 01110011 00100000 01101001 01110011 00100000 01100010 01100001 01110011 01100101 01100100 00100000 01101111 01101110 00100000 01100010 01101001 01101100 01101100 01101001 01101111 01101110 01110011 00100000 01101111 01100110 00100000 00100111 01110011 01110111 01101001 01110100 01100011 01101000 01100101 01110011 00100111 00100000 01110010 01100101 01110000 01110010 01100101 01110011 01100101 01101110 01110100 01100101 01100100 00100000 01101111 01101110 01100101 01110011 00100000 01100001 01101110 01100100 00100000 01111010 01100101 01110010 01101111 01110011

jerv

@Odysseus… which makes them a two-symbol, N-state Turing machine where N is a huge number.

0a 4a 75 73 74 20 77 61 69 74 20 75 6e 74 69 6c 20 6d 61 63 68 69 6e 65 73 20 73 74 61 72 74 20 77 6f 72 6b 69 6e 67 20 69 6e 20 74 72 69 6e 61 72 79 20 3b 29
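(Both posts above are ordinary ASCII text written out as raw bits and as hex bytes; a couple of lines of standard Python will decode them.)

    def decode_binary(s):
        # each space-separated group of 8 bits is one ASCII character
        return "".join(chr(int(byte, 2)) for byte in s.split())

    def decode_hex(s):
        # each pair of hex digits is one byte
        return bytes.fromhex(s.replace(" ", "")).decode("ascii")

    print(decode_binary("01001000 01101001"))  # -> Hi
    print(decode_hex("48 69"))                 # -> Hi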

LostInParadise

I have seen an explanation of this, but being a software person and not into the hardware, I only vaguely remember it. There is a pointer register in the computer (the program counter) that addresses a particular place in program memory. Driving the computer is a clock that, every so many nanoseconds, causes another device to use the pointer to grab the instruction at that location. Originally the bits of the instruction mapped directly to a series of gates that executed the instruction; now the bits are interpreted by micro-instructions. The address pointer automatically increments to the next sequential memory location unless the instruction changes it to branch to another location.
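A minimal sketch of that fetch-and-execute loop in Python (the instruction names are invented for illustration): note how the pointer auto-increments each cycle, and how a jump simply overwrites it.

    # Tiny fetch-execute loop with a program counter and one accumulator.
    def run(program):
        pc = 0   # the "pointer register" (program counter)
        acc = 0  # a single accumulator register
        while True:
            op, arg = program[pc]  # fetch the instruction at the pointer
            pc += 1                # auto-increment to the next instruction
            if op == "LOAD":
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "DIV":
                acc //= arg
            elif op == "JMP":
                pc = arg           # a branch overwrites the pointer
            elif op == "HALT":
                return acc

    print(run([("LOAD", 6), ("ADD", 4), ("DIV", 2), ("HALT", 0)]))  # -> 5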

PhiNotPi

@Odysseus Binary would be included with logic gates and logic circuits, since they operate on purely binary code.

stratman37

For the answers to this and more questions like it, visit howstuffworks.com

ratboy

Generally, the state is stored in registers (dedicated memory locations); it consists of the location of the next instruction, the locations of the tops of the stacks, faults (e.g. attempts to access a non-existent memory location), errors (overflow, etc.), interrupts, paging, and so on. When a program is loaded, the values of the appropriate registers are pushed onto a stack, those registers are initialized, and the instruction counter is set to point at the first instruction of the program. Each instruction consists of an operation code specifying the operation to be performed, along with the location of the required data. When an instruction is executed, the instruction counter is loaded with the address of the next instruction, the indicated operation is performed on the data, the result is stored in memory, and the effects of the instruction’s execution are recorded in the registers. This process repeats until the program ends. This is a skeletal outline that omits many details, and I won’t vouch for the correctness of those that are included.
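To illustrate “an operation code... along with the location of the required data”, here is one way a single instruction word might pack both fields; the field widths are invented for illustration.

    # Pack a 4-bit opcode and a 12-bit memory address into one 16-bit word.
    def encode(opcode, address):
        return (opcode << 12) | (address & 0xFFF)

    def decode(word):
        return word >> 12, word & 0xFFF

    word = encode(0b0011, 0x2A8)  # "opcode 3, operand at memory cell 0x2A8"
    print(decode(word))           # -> (3, 680)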

Odysseus

@jerv, trinary. My word, it blows my mind.

I read in New Scientist about “quantum computing and beyond”, something crazy about an atom existing in more than one place at the same time. (I suspect they were delving into superstring theory, but it got way over my head.)
And isn’t graphene such a cool invention? It keeps Moore’s law on track.

Ha, to think something so revolutionary was discovered with the use of good old cello-tape!
