Paste number 139933: How does code get translated to voltage inside a CPU transistor?
Pasted by: pjb
When: 11 years, 3 months ago
How does code get translated to voltage inside a CPU transistor?


It does not, if by "translated" you mean "converted" or "processed".

A compiler translates source code into object code.  This is a
software process that translates from one code form to another code
form, keeping the meaning identical.

But there is no such process that takes code as input and produces
voltages inside a CPU.

This is a category error.


The processes performed by a material electronic computer are
physical processes: mostly conversions of energy, conversions between
voltages, currents, charges, inductions, and magnetic fields.
Different parts of an electronic component use all of those
electromagnetic phenomena, since they are closely intertwined through
Maxwell's equations.


So, when you power up a computer, all its electronic components are
flooded with electrical energy, and for a short time they are in some
random state.  But some circuits are designed to fall into a
determined electrical state, so they can drive the other circuits to a
wanted initial state.  What this means is that after a short time, the
charges, voltages and currents are configured so that the evolution of
this electrical state is determined.  Basically, the CPU is designed
so that it will start driving its memory bus (setting the voltages of
the various lines at the right times) according to a protocol such
that a specific configuration of voltages is transmitted on a bus to
the ROM, along with a latch signal.
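To make "determined initial state" a little more concrete, here is a
minimal sketch in Python; the 16-line bus width and the reset vector
value of 0 are assumptions made for the illustration, not any real
chip's protocol:

    # Minimal sketch: after reset, the CPU drives every address line to a
    # determined voltage level, here the levels encoding a "reset vector" of 0.
    # The 16-line bus width and the reset vector value are assumed.

    A = 16                # number of address lines (assumed)
    RESET_VECTOR = 0      # the value the reset circuitry forces onto the bus

    def drive(width, value):
        """Voltage level (0 or 1) of each bus line for a given value."""
        return [(value >> i) & 1 for i in range(width)]

    address_lines = drive(A, RESET_VECTOR)
    print(address_lines)  # [0, 0, ..., 0]: a determined configuration the ROM sees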

A ROM (let's consider a true ROM here) contains a circuit called a
"decoder", which has n input lines and 2^n "word" lines.  Each input
line can be at one of two voltage levels.  Each combination of voltage
levels on the input lines corresponds to a single word line, which
means that this single word line is at one given voltage level, while
all the other word lines are at the other voltage level.

Each word line can be connected or not to a set of output lines.
When the word line that is at the selected voltage level is connected
to an output line, this voltage level is transmitted to the output
line.  If it is not connected, then the output line is forced to the
other voltage level (by the "pull-ups").
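To make the decoder, the word lines, and the "connected or not"
behaviour concrete, here is a minimal Python sketch.  The
one-out-of-2^n selection follows the description above; the sizes and
the particular connection pattern are invented for the illustration.

    # Sketch of the ROM described above: the decoder activates exactly one of
    # the 2^n word lines, and each output line takes the active level if it is
    # connected to that word line, otherwise it is forced to the other level.

    def decode(input_lines):
        """n input levels -> 2^n word lines, exactly one of them active (1)."""
        selected = sum(bit << i for i, bit in enumerate(input_lines))
        return [1 if w == selected else 0 for w in range(2 ** len(input_lines))]

    # connections[w][o] is True when word line w is wired to output line o
    # (this pattern is arbitrary; it is the "material configuration").
    connections = [
        [True,  False, True ],    # word line 0
        [False, True,  True ],    # word line 1
        [True,  True,  False],    # word line 2
        [False, False, False],    # word line 3
    ]

    def rom_outputs(input_lines):
        word_lines = decode(input_lines)
        w = word_lines.index(1)          # the single active word line
        return [1 if connections[w][o] else 0 for o in range(3)]

    print(rom_outputs([0, 1]))   # lines (a0=0, a1=1) select word line 2 -> [1, 1, 0]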


Those output lines are then connected back to the processor, on
another bus, and the processor uses those voltage levels internally to
determine which electronic circuit to activate next, and which voltage
level to transmit where.

We could describe all of this in great detail, talking only about
electromagnetic phenomena.

The important point is that all that occurs is the deterministic
transmission of electrical energy through the electrical circuits that
make up the hardware computer.


I've detailed the workings of the ROM because here we have something
that is slightly outside of the electronic realm: the connection or
disconnection between a word line and the output lines.  This is a
material configuration, not something that is configured
electronically.  It is something that a human being can configure
manually, by cutting or soldering wires, or possibly by flipping
switches.


However, and this is the important thing here: it remains something
entirely material.  There is no code here, only conductors, connected
or not, through which electrical energy may or may not pass.


Now, in a processor or in a whole computer, there are between millions
and trillions of transistors (or their equivalents in RAM cells and
hard disk magnetic domains), and correspondingly billions of
interconnections between those transistors and electronic circuits.

So describing the workings of the hardware computer in terms of
voltages, transistors and circuits would be very tedious.  Even
considering each component and subcomponent separately, describing
them at this electronic level would lose anybody in too many details.

On the other hand, there are a lot of repeated patterns all over the
place: flip-flops, memory cells, gates, registers, multiplexers,
demultiplexers, buses, etc.
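Each of those patterns can be described once and then reused wherever
it appears.  Here is a minimal, purely illustrative sketch of two of
them, a NAND gate and an SR latch built from two NAND gates, evaluated
one time step at a time (the settling of the feedback is ignored):

    # Two of the repeated patterns above, described once as functions.

    def nand(a, b):
        """A NAND gate: output is 0 only when both inputs are 1."""
        return 0 if (a and b) else 1

    def sr_latch(s, r, q):
        """One step of a cross-coupled NAND latch; s and r are active-low.
        q is the previously stored level; returns the new stored level."""
        return nand(s, nand(r, q))

    q = 0
    q = sr_latch(0, 1, q)   # set (pull S low)  -> q becomes 1
    q = sr_latch(1, 1, q)   # hold              -> q stays 1
    print(q)                # 1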

So we could give a description of the hardware machine in those
abstract terms.  We can explain the ROM as I did above, in terms of a
decoder, of lines and gates.  Further, we can distinguish the
abstraction of the address bus (the input lines of the ROM) and the
data bus (the output lines of the ROM), which connect to the
processor.

And instead of talking of voltage levels on the lines of the address
or data bus, we can add a further level of abstraction, by numbering
those lines from 0 to A-1 for the address bus, and from 0 to D-1 for
the data bus.  We can introduce the abstraction of saying that one
voltage level is 1, and the other is 0.  

And to describe concisely a configuration of a bus, we can compute the
numbers:

           A-1
        a = Σ aᵢ × 2ⁱ
           i=0


           D-1
        d = Σ dᵢ × 2ⁱ
           i=0


aᵢ being the voltage of the line i of the address bus,
dᵢ being the voltage of the line i of the data bus.

Notice that those numbers depend on the order in which we number the
lines of the buses.  If you swapped two lines, you would get a
different voltage configuration, and therefore a different number.

(A is the number of lines or "bits"  of the address bus,
 D is the number of lines or "bits" of the data bus).

So now we can say the ROM is a big array that gives a number _d_
between 0 and 2^D-1 for each number _a_ between 0 and 2^A-1.
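Written out, with arbitrary bus widths and arbitrary ROM contents, the
abstraction looks like this (a Python sketch):

    # The numbering abstraction above: a bus is a list of voltage levels (0 or 1),
    # line i contributing 2^i; the ROM is then just a big array of numbers.
    # The bus widths and the ROM contents here are arbitrary.

    def bus_to_number(levels):
        """a = sum of a_i * 2^i, as in the formula above."""
        return sum(level << i for i, level in enumerate(levels))

    def number_to_bus(value, width):
        """The inverse: the voltage configuration representing a number."""
        return [(value >> i) & 1 for i in range(width)]

    A, D = 4, 8                    # address and data bus widths (arbitrary)
    rom = [0] * (2 ** A)           # "a big array": one number per address
    rom[5] = 208                   # whatever happened to be wired in at address 5

    a = bus_to_number(number_to_bus(5, A))   # a voltage configuration, read as a number
    print(a, rom[a])                         # 5 208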


Similarly, when the processor receives a certain voltage
configuration on its data bus at the right time, it will use this
electrical configuration to deterministically evolve in a certain way.
The number of different ways it can evolve is big but finite.  We can
simplify the description of how the processor evolves by remarking
that there are families of evolutions, and that the family of
evolution depends on the voltage levels on the data bus at certain
times.  We can then give names to those families and say, for example,
that when the processor receives the number 208, it will evolve in
such a way that one register ends up with a configuration of voltages
that depends on its previous configuration and on the configuration of
voltages of another register, so that, given a numbering of the
circuits in those registers, the numbers obtained by a formula like
the one above are such that:

       new_r1 ← old_r1 + r2

and we may call "ADD r1,r2"  the number 208.
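As a toy sketch of this naming of families (the number 208 and the
registers r1 and r2 come from the example above; everything else about
this little machine, such as the 8-bit register width, is invented):

    # The number received on the data bus selects which family of evolutions
    # the register voltages follow.  Only the one family named above is shown.

    registers = {"r1": 3, "r2": 4}

    def step(opcode, registers):
        if opcode == 208:       # the family we named "ADD r1,r2"
            registers["r1"] = (registers["r1"] + registers["r2"]) % 256  # 8-bit assumed
        # ...other numbers would select other families of evolutions...
        return registers

    print(step(208, registers))  # {'r1': 7, 'r2': 4}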

As humans, we may have an easier time talking of ADD r1,r2 and
remembering that it means new_r1 ← old_r1 + r2 than talking of numbers
and registers, or of voltage levels and electrical circuits.  If only
because our short-term memory is limited to about 7 elements.
ADD r1,r2 is 3 elements: addition, register 1, and register 2.  So
we're left with 4 free brain cells to think about the rest of the code
:-)

But if we had to think about the voltage levels and electrical
circuits, we'd be overwhelmed.

However, in reality, that's all there is to a hardware electronic
computer.




So perhaps now we can answer, in a way, this question: how does code
get translated to voltage inside a CPU transistor?

Code doesn't get translated to voltage; it already is voltage (or some
other electrical circuit configuration) in the hardware computer.

It actually starts as connections between lines in the ROM.

And those connections were originally made by human hand, following an
abstract plan, a program or "code" imagined by a human programmer.
Actually, the early computers were programmed this way, by wiring a
"patch board", which was essentially a ROM made of cables.  Those
patch boards were interchangeable, so you could run different programs
on the same computer.  Back then, there was no difference between
"code" and "voltage".
http://ibm-1401.info/403LeftSide-wPatchPanel-.jpg

Once this bootstrapping stage has passed, we have computers with ROMs
containing programs (wiring) that drive the electrical configurations
of the electronic computer hardware, and there is still no code.

When you press a key on the keyboard, there's a little electrical
configuration change that deterministically determines the evolution
of the electrical circuits in the keyboard and in the computer.

So, as a human being, you may be thinking you're typing the letter
"A", but what you're really doing is pushing down a key on the
keyboard, and this produces determined variations of voltages and
currents and charges and magnetic fields inside the computer hardware.

Similarly when you type other characters, such as "ADD r1,r2".  This,
according to the programs (electrical configurations) running in (that
is, driving the evolution of) the computer, may change the electrical
configuration of the RAM or of the hard disk's magnetic areas.

However, it is most probable that those electrical configurations
don't make a meaningful program for the machine.

Note that some early computers were designed so that the electrical
configurations corresponding to your idea of letters and numbers also
corresponded directly to executable instructions: reading the punched
cards bearing those "characters", i.e. material holes in the card,
which were translated into electrical currents on certain wires, and
then into the orientation of the magnetic fields in core memories,
would directly drive the processor to evolve its electrical
configuration in a way that meant something useful to the programmers.
But this is not generally the case.  (There are some text strings
that, when stored in memory using the ASCII code, make interesting x86
programs, but they're rare enough.  For example, there's the EICAR
virus test file, which is 68 characters that can be executed directly
on x86 MS-DOS/MS-Windows machines.)

So you've typed on keys on the keyboard, and this has configured the
charges in a few cells of the DRAM memory of your computer, in such a
way that we could say there's stored the string "ADD r1,r2".  How do
we obtain a memory cell with the charge configuration corresponding to
the number 208, one that would drive the processor to evolve in such a
way that its register r1 gets a configuration of voltages that we
would interpret as the sum of its previous configuration and the
configuration of voltages of the register r2?

Well, by writing a program to do it!  But this program will take a
sequence of numbers as input, interpreting them as characters encoded
in a given code (ASCII or UTF-8 nowadays, EBCDIC or some other code in
the old days), and by processing those numbers it will output the
number 208 (and perhaps a few others).  Then another program will be
able to "load" those numbers into memory and to "jump" to them to have
them executed by the processor.
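As a toy sketch of such a program (the mapping of "ADD r1,r2" to 208
comes from the example above; a real assembler handles many mnemonics,
operands and addresses):

    # From the character codes stored in memory to the number 208 that a
    # loader could then place in memory for the processor to execute.

    opcode_table = {"ADD r1,r2": 208}            # the single example from above

    def assemble(character_codes):
        source = "".join(chr(c) for c in character_codes)    # numbers -> text
        return [opcode_table[line.strip()]
                for line in source.splitlines() if line.strip()]

    keystrokes = [ord(c) for c in "ADD r1,r2\n"]  # what the typing left in DRAM
    print(keystrokes)                # [65, 68, 68, 32, 114, 49, 44, 114, 50, 10]
    print(assemble(keystrokes))      # [208]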

Err, no way!  

Actually it's voltage configurations making the processor change
voltage configurations, in such a way that the new voltage
configurations are used by the processor to determine how it will
evolve further.

Yep.  That's the truth, but it's too complex to be understood.

Therefore we talk at higher levels of abstraction, introducing those
"codes".  Machine code, source code, program specifications, etc.

The codes don't get translated, because they're abstractions that
exist only in human minds.  Those abstractions may be reflected,
"represented", as otherwise meaningless electrical configurations in
the computer.  They can be processed by special electrical
configurations called compilers and transformed into electrical
configurations that can be used by the processor.  We as humans
imagine that the source code is compiled into object code.  And we
imagine that the object code, a sequence of numbers, is executed by a
processor that has registers, an ALU, an FPU, and so on.  But this is
only something we imagine in order to understand what's happening in
an electrical circuit that is fast becoming almost as complex as our
brains, and that we simply do not have the capability to understand at
its actual material level.


There is no translation of code toward voltage.

There is the processing done by the hardware computer, which
transforms electrical configurations of one kind into electrical
configurations of another kind, and which we imagine to be the
transformation of source code (text describing some processing) into
object code (a sequence of numbers describing the same processing for
the machine).

There can be no such translation of code toward voltage because they
are two entirely different categories, living in two entirely
different universes.  Voltages are in the physical world.  Code is in
the world of mathematical beings.  Voltages will end with the
universe, anywhere between 15 billion and 200 billion years from now,
with the heat death.  Code lives eternally, like any mathematical
concept.

They have parallel lives.  The brain of a programmer finds the
mathematical being, the algorithm.  He expresses it in a language as
source code, and uses another mathematical being, another algorithm,
to translate this source code into some object code.  In parallel, the
programmer has moved keys on a keyboard, and electrical configurations
have changed in the machine.  Since the programmer of the compiler did
the same for the compiler program, there is in the machine an
electrical configuration that lives in parallel with the compiler, and
that can be used to transform the electrical configuration parallel to
the source code into an electrical configuration parallel to the
object code.  And this latter electrical configuration can be used by
the processor to evolve its electrical configurations in a way that
parallels the process described by the algorithm of the program.

Each level of abstraction stays in its own world.

We could say that the code gets translated into voltage when the
programmer wires the lines of the ROM, or when he presses the keys of
his keyboard.  Those would be the closest things to "translating"
steps, since those are the only times when the computer hardware is
manipulated to enforce the parallelism between the code world imagined
by the programmer and the hardware of the machine in the physical
world.

But since the electrical configurations that are executed by the
computer are usually generated by the computer itself, following the
electrical configurations corresponding to the compiler program, you
can see that this is a bit of a stretched interpretation.

Things in the physical world tend to fail: stars explode into novae,
atoms don't stay in place, things break, etc.  But as long as the
hardware works as designed, you can just forget it, and live in the
world of mathematical abstractions and source code.
