Honors Engineering and Design
Period 3
The Microprocessor
Presented to: Mr. Mugno
Presented by: Zachary Job
Due Date: 1/14/2011
Zachary Job
Mr. Mugno
Period 3
12/20/10
The Microprocessor
In 1968, one of the most important technological triumphs of our age began with the founding of a small company by Robert Noyce and Gordon Moore. Although unknown to nearly anyone at the time, Intel was a rising star: it began as a humble memory-chip producer, and only a short time later it invented the device that gave mankind the personal computer. Just a year later, in 1969, the Computer Terminal Corporation visited Intel, launching the small company on a task that would give the world an incredible technological device: the microprocessor (Libes).
Originally, processors consisted of large, complex systems that used thousands of transistors to translate input signals into the needed binary outputs. These processors were room-sized and usually capable of only four-bit architecture, a measure of the amount of information the chip could handle at a time along with the functions it could provide (Edelson). Seeing the benefits of a smaller yet more efficient device, Computer Terminal Corporation sought the help of a small memory-chip, or RAM, maker named Intel. Since CTC could supply the information on the workings of a processor, it wanted Intel to shrink the design down to a few devices noticeably smaller than the bulky processors of the day. Instead of simply accepting, Intel stated it could do better: it could put everything CTC desired onto one small chip. Once CTC accepted, Intel created one of the very first microprocessors, the 8008. Now, instead of thousands of discrete transistors, each chip had a control unit, an arithmetic logic unit (ALU), and register units, all consisting of semiconductive connections that functioned as the many bulky processor transistors had (Polsson). While the control unit determined what information flowed into the chip, the ALU performed the actual processing, and the registers held values so the chip could perform calculations without writing out to external memory. Although it seems simple, this advancement was a mind-blowing achievement that started the race to constantly improve the arithmetic chip. Not only was having all the processes occur in one device an incredible advantage; there was no longer a need for rooms of equipment to perform a task that a chip smaller than a quarter could provide.
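The division of labor just described (control unit, ALU, and registers) can be sketched as a toy machine in Python. The instruction set below is invented purely for illustration; it does not model any real Intel design.

```python
# A toy register machine illustrating the three parts of an early
# microprocessor: a control unit (the dispatch loop), an ALU (the
# arithmetic operations), and registers (small on-chip storage).
# The instruction names here are hypothetical, for illustration only.

def alu(op, a, b):
    """The ALU performs the actual arithmetic."""
    if op == "ADD":
        return a + b
    if op == "SUB":
        return a - b
    raise ValueError(f"unknown ALU op: {op}")

def run(program):
    regs = {"A": 0, "B": 0}        # register units hold intermediate values
    for instr in program:          # the control unit decodes each instruction
        op, *args = instr
        if op == "LOAD":           # LOAD <reg> <value>
            reg, value = args
            regs[reg] = value
        else:                      # arithmetic is routed through the ALU
            dest, src = args
            regs[dest] = alu(op, regs[dest], regs[src])
    return regs

# Compute (5 + 3) - 2 entirely in registers, never touching "external memory":
result = run([("LOAD", "A", 5),
              ("LOAD", "B", 3),
              ("ADD", "A", "B"),
              ("LOAD", "B", 2),
              ("SUB", "A", "B")])
print(result["A"])  # 6
```

The point of the sketch is the separation of roles: the loop only routes data, while all computation happens in one place, just as the 8008 concentrated what had been rooms of transistors into a few functional units.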
The Intel 4004 is the genuine ancestor of all modern processors for one simple reason: it was programmable, a multi-purpose chip that could be adapted to nearly any task. With this ability, the 4004 and chips like it could serve purposes ranging from basic calculators to basic word processors. With the dawn of this technology, the power and architecture of chips were the only barriers in the way of advancing processors to what they are today. Even with the introduction of many new Central Processing Unit, or CPU, architectures, which have vastly improved specialized processors (chips built to perform fewer tasks extremely efficiently), a new issue has arisen that may threaten the very existence of titans like Intel. The problem arose with the incredible advancement of the chip called the Graphics Processing Unit, or GPU. Although recent chips by companies like Intel and AMD are well known, one created by a group of tech titans has been underappreciated, and may soon be forgotten because of a possible GPU revolution. The Cell processor was created by Sony, Toshiba, and IBM to be a chip that could perform incredible feats with such efficiency, in both price and power, that the competition would have no grounds to compete. The resulting Cell chip was placed in Sony's PlayStation 3, which shows the true power the chip offers. It proved so powerful that a sixty-four-bit variant could have allowed IBM to compete with Intel. The reason for such success was that the chip was specialized to fit modern computing patterns, giving it leaps and bounds of efficiency over high-power chips that are unspecialized. Most chips are built simply to function as multi-purpose devices ("The Cell Project at IBM Research"). That is acceptable, but if a chip is specialized to work efficiently with select languages, less power is wasted on monotonous translation tasks (Kurzweil). Such an innovation seemed like the best thing since sliced bread to some developers, but IBM decided the chip was actually obsolete. Crazy as it seems, IBM chose to take a leap of faith on the evolving concept of GPU supercomputing. Even though the Cell rivaled the power offered by million-dollar supercomputers, the GPU has shown that it can yield even greater results using the same specialization concept. Both IBM and Nvidia, a major GPU producer, have begun research into the benefits of using a GPU to handle all that a CPU would and more.
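One way to picture the specialization argument is to compare a routine that re-decodes its operations on every pass, the way a general-purpose chip translates instructions, with one that has the work wired in ahead of time. The Python sketch below is a loose analogy, not a model of the Cell's actual hardware; the "decode" dictionary stands in for translation overhead.

```python
# Specialization analogy: both functions compute the same sum of
# squares, but the first re-looks-up (decodes) its operations on
# every iteration, like a general-purpose chip, while the second
# has the operations "wired in," like a specialized chip.

def general_purpose(values):
    """Decode each 'instruction' every time it is used."""
    ops = {"SQUARE": lambda x: x * x,
           "ADD": lambda a, b: a + b}
    total = 0
    for v in values:
        squared = ops["SQUARE"](v)      # decode step repeated per pass
        total = ops["ADD"](total, squared)
    return total

def specialized(values):
    """The same work with no per-step decoding."""
    return sum(v * v for v in values)

print(general_purpose([1, 2, 3]))  # 14
print(specialized([1, 2, 3]))      # 14
```

Both produce the same answer; the difference is how much effort is spent deciding what to do versus doing it, which is the efficiency the essay attributes to specialized chips like the Cell.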
While a CPU handles many arithmetic-based languages with roots earlier in technological history, the GPU handles fewer languages with an efficiency the CPU could never match. As a bonus, computing has begun to revolve around visualization, so a GPU, which specializes in visual computing, has incredible advantages for such uses as well. As of now, the GPU is relatively cheap to make, and it contains hundreds of cores where a typical CPU today has two, four, or, in expensive models, eight (Nguyen and Parrish). Another issue with pairing a CPU and a GPU is the lag caused by the two chips needing to communicate. Instead of one GPU handling all processes quickly on a single chip, the two must constantly transfer data to work together. Beyond that obvious cost, applications today deal largely with graphics and high-level languages rather than basic computing. A GPU-type CPU would therefore be an even more viable option, since many intensive applications run on languages that GPUs handle well but that the CPU would normally process. Finally, if a GPU were modified to function as both CPU and GPU, the computer would be far cheaper, since only one inexpensive device would be needed to handle all the computer's functions ("What is GPU computing"). This would make computers so affordable that prices could drop by hundreds of dollars, changing who could afford a computer (Kurzweil).
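The core-count contrast above is really a contrast between serial and data-parallel execution: a CPU core walks through data one item at a time, while a GPU applies one instruction across many lanes at once. The Python sketch below simulates that style; the lane count and the pixel-brightening workload are invented for illustration, and real GPU lanes run in hardware rather than in a loop.

```python
# CPUs and GPUs spend their transistors differently: a CPU has a few
# powerful cores running one instruction stream each, while a GPU has
# hundreds of simple lanes that all run the SAME instruction on
# different data. Both functions below brighten a row of pixels.

def cpu_style(pixels):
    """One core processes the data serially, one pixel per step."""
    out = []
    for p in pixels:
        out.append(min(255, p * 2))   # brighten, clamped to 8-bit range
    return out

def gpu_style(pixels, lanes=8):
    """Many simulated lanes apply one operation to a whole batch at once."""
    out = []
    for i in range(0, len(pixels), lanes):
        batch = pixels[i:i + lanes]
        # one instruction, many data items (the data-parallel pattern)
        out.extend(min(255, p * 2) for p in batch)
    return out

pixels = [10, 100, 200, 30]
print(cpu_style(pixels))  # [20, 200, 255, 60]
print(gpu_style(pixels))  # same result, computed batch-wise
```

The results are identical; the difference is that the batch-wise version scales with the number of lanes, which is why hundreds of simple GPU cores can outrun a handful of complex CPU cores on this kind of uniform work.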
While a GPU-as-CPU sounds almost like a dream, one major set of issues plagues the progress this development requires. All known languages and basic instruction codes have been built around a CPU-core system. To swap the current technologies, all programs would have to be "ported," or rewritten, which would take years before applications could run on the new system ("Intel's Future in GPUs"). Also, older applications would no longer be supported, as they were written in languages that might not be fully compatible with a GPU. Simply put, switching to a GPU system would mean reinventing the wheel. Worse, a fully working GPU system has not been thoroughly investigated, let alone built, so there is no way to test how feasible one would be. Only Nvidia has created something remotely similar (the Tesla GPU), but it still relies on cooperation with the CPU to receive instructions. Even with the unbelievable capabilities of a GPU system, the price of creating one and porting programs might outweigh the long-term benefits it holds. Only a collaborative effort by nearly every chip titan could achieve it. Perhaps another Noyce and Moore could reach this goal, but for now the possibility remains only a possibility. From humble beginnings to technological marvels, the CPU remains a key element in everyday computers. Whether a GPU computer is close to its dawn, or whether the CPU is here to stay, computers have advanced remarkably with the improvement of CPUs and GPUs. With the small miracle chips born from the geniuses of the twentieth century, computers have become an integral part of everyday life. Without the miracle device called the microprocessor, life as we know it simply could not exist.
This overview displays the basics behind modern chips: a network of transistors and copper links combine to create what we know as the CPU core.
One of the most advanced chips to date, the Cell CPU core by IBM uses specialization to perform incredible feats that other processors could only dream of.
Demonstrating an obvious GPU advantage, this picture shows how GPUs have advanced by adding processing units, while CPUs tend to advance through architecture, speed, and a few additional cores.
This is a shot of a first-generation GPU-type CPU, which handles a great deal of the CPU load by processing its native languages that stress a CPU.
WORKS CITED
"The Cell Project at IBM Research." IBM.com. N.p., n.d. Web. 20 Dec. 2010. <http://www.research.ibm.com/cell/>.
Edelson, Ed. "Real Computers You Can Assemble Yourself." Popular Science 1 Dec. 1976: 82-83. Google Books. Web. 13 Jan. 2011. <http://books.google.com/books?id=7wAAAAAAMBAJ&pg=PA83&dq=intel+4004&hl=en&ei=x5gvTfzkLYqcgQf6ytxa&sa=X&oi=book_result&ct=result&resnum=2&ved=0CCcQ6AEwAQ#v=onepage&q=intel%204004&f=false>.
"Intel's Future in GPUs." Intel.com. N.p., n.d. Web. 20 Dec. 2010. <http://www.intel.com/technology/visual/microarch.htm>.
Kurzweil, Ray. The Age of Intelligent Machines. Cambridge, Massachusetts: MIT Press, 1990. Print.
Kurzweil, Ray. The Age of Spiritual Machines. New York, New York: Viking, 1999. Print.
Libes, Sol. "Bits and Bytes." InfoWorld 8 Dec. 1980: 15, 35. Google Books. Web. 13 Jan. 2011. <http://books.google.com/books?id=mD4EAAAAMBAJ&pg=PT34&lpg=PT34&dq=intel+8008+periodical&source=bl&ots=iulXQNA0Ry&sig=HVoVFyrUeeZhWNyCoiiG8knXy1I&hl=en&ei=TpcvTf-UFcTngQe97NRa&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBcQ6AEwAA#v=onepage&q=intel%208008%5C%20periodical&f=false>.
Nguyen, Tuan, and Kevin Parrish. "Intel Shows How a CPU Is Made." Tom's Hardware. N.p., 18 July 2009. Web. 20 Dec. 2010. <http://www.tomshardware.com/picturestory/514-intel-cpu-processor-core-i7.html>.
Polsson, Ken. "Chronology of Personal Computers." About.com. N.p., 1 Jan. 2011. Web. 20 Dec. 2010. <http://inventors.about.com/gi/dynamic/offsite.htm?site=http://www.islandnet.com/%257Ekpolsson/comphist/>.
"What Is GPU Computing." NVidia.com. N.p., n.d. Web. 20 Dec. 2010. <http://www.nvidia.com/object/GPU_Computing.html>.
Picture A: http://www.old-computers.com/history/detail.asp?n=69&t=3