ICT HANDOUT FOR PRELIM
A. Computer History
What is a Computer?
In its most basic form a computer is any device which aids humans in performing various kinds of computations or
calculations. In that respect the earliest computer was the abacus, used to perform basic arithmetic operations. Every
computer supports some form of input, processing, and output. This is less obvious on a primitive device such as the abacus
where input, output and processing are simply the act of moving the pebbles into new positions, seeing the changed
positions, and counting. Regardless, this is what computing is all about, in a nutshell. We input information, the computer
processes it according to its basic logic or the program currently running, and outputs the results. Modern computers do this
electronically, which enables them to perform a vastly greater number of calculations or computations in less time. Despite
the fact that we currently use computers to process images, sound, text and other non-numerical forms of data, all of it
depends on nothing more than basic numerical calculations. Graphics, sound etc. are merely abstractions of the numbers
being crunched within the machine; in digital computers these are the ones and zeros, representing electrical on and off
states, and endless combinations of those. In other words, every image, every sound, and every word has a corresponding
binary code. While the abacus may technically have been the first computer, most people today associate the word “computer”
with the electronic computers invented in the last century, which have evolved into the modern computers we know
today.
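The idea that every character has a corresponding binary code can be illustrated with a short program. This is only a demonstration sketch (Python is used here as a convenient tool); the 8-bit codes it prints are the standard ASCII values.

```python
# Show the binary code behind each character of a piece of text,
# using Python's built-in ord() (character -> number) and
# format() (number -> fixed-width binary string).
def to_binary(text):
    """Return the 8-bit binary code for each character in `text`."""
    return [format(ord(ch), "08b") for ch in text]

codes = to_binary("Hi")
print(codes)  # ['01001000', '01101001'] — 'H' is 72, 'i' is 105
```

For example, the word “Hi” is stored inside the machine as the two codes 01001000 and 01101001.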
First Generation Computers (1940s – 1950s)
ENIAC
The first electronic computers used vacuum tubes, and they were huge and complex. The first general purpose electronic
computer was the ENIAC (Electronic Numerical Integrator And Computer). It was digital, although it didn’t operate with binary
code, and was reprogrammable to solve a complete range of computing problems. It was programmed using plugboards and
switches, supporting input from an IBM card reader and output to an IBM card punch. It took up 167 square meters, weighed
27 tons, and consumed 150 kilowatts of power. It used thousands of vacuum tubes, crystal diodes, relays, resistors, and
capacitors.
The first non-general purpose computer was the ABC (Atanasoff–Berry Computer), and other computers of this era
included the German Z3, the ten British Colossus computers, LEO, the Harvard Mark I, and UNIVAC.
Second Generation Computers (1955 – 1960)
IBM 1401
The second generation of computers came about thanks to the invention of the transistor, which then started replacing
vacuum tubes in computer design. Transistor computers consumed far less power, produced far less heat, and were much
smaller compared to the first generation, albeit still big by today’s standards.
The first transistor computer was created at the University of Manchester in 1953. The most popular transistor computer
was the IBM 1401. IBM also created the first disk drive in 1956, the IBM 350 RAMAC.
Third Generation Computers (1960s)
IBM System/360
The invention of the integrated circuits (ICs), also known as microchips, paved the way for computers as we know them
today. Making circuits out of single pieces of silicon, which is a semiconductor, allowed them to be much smaller and more
practical to produce. This also started the ongoing process of integrating an ever larger number of transistors onto a single
microchip. During the sixties microchips started making their way into computers, but the process was gradual, and the
second generation of computers still held on.
Minicomputers appeared first; the earliest were still based on non-microchip transistors, and later versions were hybrids
based on both transistors and microchips, such as IBM’s System/360. They were much smaller and cheaper
than the first and second generation computers, also known as mainframes. Minicomputers can be seen as a bridge between
mainframes and microcomputers, which came later as the proliferation of microchips in computers grew.
Fourth Generation Computers (1971 – present)
The first microchip-based central processing units consisted of multiple microchips for different CPU components. The drive for
ever greater integration and miniaturization led toward single-chip CPUs, where all of the necessary CPU components were
put onto a single microchip, called a microprocessor. The first single-chip CPU, or microprocessor, was the Intel 4004.
The advent of the microprocessor spawned the evolution of the microcomputers, the kind that would eventually become
personal computers that we are familiar with today.
First Generation of Microcomputers (1971 – 1976)
Altair 8800
The first microcomputers were a weird bunch. They often came in kits, and many were essentially just boxes with lights and
switches, usable only to engineers and hobbyists who could understand binary code. Some, however, did come with a
keyboard and/or a monitor, bearing somewhat more resemblance to modern computers.
It is arguable which of the early microcomputers could be called the first. The CTC Datapoint 2200 is one candidate, although it
didn’t actually contain a microprocessor (being based on a multi-chip CPU design instead) and wasn’t meant to be a
standalone computer, but merely a terminal for mainframes. The reason some might consider it the first microcomputer is
that it could be used as a de facto standalone computer, it was small enough, and its multi-chip CPU architecture
actually became the basis for the x86 architecture later used in the IBM PC and its descendants. Plus, it even came with a
keyboard and a monitor, an exception in those days.
Second Generation Microcomputers (1977 – present)
Commodore PET 2001 (Image by Tomislav Medak, licensed under CC-BY-SA).
As microcomputers continued to evolve they became easier to operate, making them accessible to a larger audience. They
typically came with a keyboard and a monitor, or could be easily connected to a TV, and they supported visual
representation of text and numbers on the screen.
In other words, lights and switches were replaced by screens and keyboards, and the necessity to understand binary code
was diminished as they increasingly came with programs that could be used by issuing more easily understandable
commands. Famous early examples of such computers include Commodore PET, Apple II, and in the 80s the IBM PC.
The nature of the underlying electronic components didn’t change between these computers and the modern computers we
know today, but what did change was the number of circuits that could be put onto a single microchip. Intel’s co-founder
Gordon Moore predicted that the number of transistors on a single chip would double every two years, which became known as
“Moore’s Law”, and this trend has roughly held for over 30 years thanks to advancing manufacturing processes and
microprocessor designs.
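Moore’s prediction can be illustrated with a small calculation. This sketch assumes the commonly cited figure of 2,300 transistors for the Intel 4004 in 1971; the later values it prints are projections of the doubling trend, not actual chip specifications.

```python
# Project transistor counts under Moore's Law: a doubling every
# `doubling_period` years starting from a known chip.
def projected_transistors(start_count, start_year, year, doubling_period=2):
    doublings = (year - start_year) // doubling_period
    return start_count * 2 ** doublings

# Starting from the Intel 4004 (2,300 transistors, 1971),
# ten doublings later (1991) the trend predicts:
print(projected_transistors(2300, 1971, 1991))  # 2355200
```

Ten doublings multiply the count by 1,024, which is why the numbers grow so quickly.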
Graphical User Interface (GUI)
Macintosh 128k (Image by All About Apple museum licensed under CC-BY-SA-2.5-it)
Possibly the most significant of those shifts was the invention of the graphical user interface, and
the mouse as a way of controlling it. Doug Engelbart and his team at the Stanford Research Institute
developed the first mouse and a graphical user interface, demonstrated in 1968. They were a few years ahead of the
personal computer revolution that the Altair 8800 would spark, so their idea didn’t take hold.
Instead it was picked up and improved upon by researchers at the Xerox PARC research center, who in 1973 developed the
Xerox Alto, the first computer with a mouse-driven GUI. It never became a commercial product, however, as Xerox
management wasn’t ready to dive into the computer market and didn’t see the potential of what they had early enough.
It took Steve Jobs negotiating a stock deal with Xerox in exchange for a tour of their research center to finally bring the
user-friendly graphical user interface, as well as the mouse, to the masses. Steve Jobs was shown what the Xerox PARC team had
developed, and directed Apple to improve upon it. In 1984 Apple introduced the Macintosh, the first mass-market computer
with a graphical user interface and a mouse.
Portable Computers
PowerBook 150 (Image by Dana Sibera, licensed under CC-BY-SA).
As it turned out, the idea of a laptop-like portable computer existed even before it was possible to create one; it was
developed at Xerox PARC by Alan Kay, who called it the Dynabook and intended it for children. The first portable computer
actually created was the Xerox NoteTaker, but only 10 were produced.
The first commercialized laptop was the Osborne 1 in 1981, with a small 5″ CRT monitor and a keyboard that sat inside
the lid when closed. It ran CP/M (the OS on which Microsoft’s DOS was later modeled). Later portable computers included the
Bondwell 2, released in 1985, also running CP/M, which was among the first with a hinge-mounted LCD display. The Compaq
Portable was the first IBM PC compatible portable computer; it ran MS-DOS but was less portable than the Bondwell 2. Other
examples of early portable computers included the Epson HX-20, GRiD Compass, Dulmont Magnum, Kyotronic 85, Commodore
SX-64, IBM PC Convertible, and Toshiba T1100, T1000, and T1200.
The first portable computers that resembled modern laptops in features were Apple’s PowerBooks, which first introduced a
built-in trackball, and later a trackpad and optional color LCD screens. IBM’s ThinkPad was largely inspired by the PowerBook’s
design, and the evolution of the two led to laptops and notebook computers as we know them. PowerBooks were eventually
replaced by the modern MacBook Pro line.
Of course, much of the evolution of portable computers was enabled by the evolution of microprocessors, LCD displays,
battery technology and so on. This evolution ultimately allowed computers even smaller and more portable than laptops,
such as PDAs, tablets, and smartphones.
B. Computer Hardware Components: CPU, Memory, and Input/Output Device
Basic Concepts of Computer Hardware
This model of the typical digital computer is often called the von Neumann computer.
• Programs and data are stored in the same memory: primary memory.
• The computer can only perform one instruction at a time.
Input/Output (I/O): Refers to the process of getting information into and out of the computer.
• Input: Those parts of the computer that receive information for programs.
[Diagram: Input Units → CPU (Central Processing Unit) ↔ Primary Memory; CPU → Output Units]
• Output: Those parts of the computer that provide the results of computation to the person using the computer.
Two types of data stored within a computer:
• Original data or information: Data being introduced to a computing system for the first time.
Computers cannot deal directly with printed text, pictures, sound, and other common types of
information; input hardware must first convert them to binary form.
• Previously stored data or information: Data that has already been processed by a computer and is
being stored for later use.
These are forms of binary data useful only to the computer.
Examples: Floppy disks, DVD disks, and music CDs.
Two categories of input hardware:
• Those that deal with original data.
• Those that handle previously stored data.
Input hardware: Those that deal with original data.
• Keyboard
• Mouse
• Voice recognition hardware
• Scanner
• Digital camera
• Digitizing: The process of taking a visual image or audio recording and converting it to binary form for
the computer.
• Used as data for programs to display, play or manipulate the digitized data.
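The digitizing step above can be sketched in a few lines of code. The 3-bit sample depth used here is purely illustrative (real audio commonly uses 16 bits per sample); the point is that each continuous analog value becomes a binary number the computer can store.

```python
# A minimal sketch of digitizing: quantize continuous (analog) sample
# values into fixed-width binary numbers, as a sound card or scanner does.
def digitize(samples, bits=3):
    """Quantize values in the range 0.0–1.0 to `bits`-bit binary strings."""
    levels = 2 ** bits - 1                      # highest representable level
    return [format(round(s * levels), f"0{bits}b") for s in samples]

print(digitize([0.0, 0.5, 1.0]))  # ['000', '100', '111']
```

Once digitized, the data can be used by programs to display, play, or manipulate the original image or sound.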
Connecting Hardware to the computer:
• Hardware needs access through some general input/output connection.
Port: The pathway for data to go into and out of the computer from external devices such as
keyboards.
There are many standard ports as well as custom electronic ports designed for special purposes.
Ports follow standards that define their use.
SCSI, USB: Multiple peripheral devices (chain).
RS-232, IDE: Individual peripheral devices.
Peripheral device: A piece of hardware, like a printer or disk drive, that is outside the main
computer.
• Hardware needs software on the computer that can service the device.
– Device driver: Software addition to the operating system that will allow the computer to
communicate with a particular device.
• Common Basic Technologies for Storing Binary Information: (Electronic, Magnetic, Optical)
Electronic Circuits
• Most expensive of the three forms for storing binary information.
• A flip-flop circuit has either one electronic status or the other. It is said to flip-flop from one to the other.
• Electronic circuits come in two forms:
– Permanent
– Non-permanent
Magnetic Technology
• Two parts to most of the magnetic forms of information storage:
– The medium that stores the magnetic information.
• Example: Floppy disk. Tiny spots on the disk are magnetized to represent 0s and 1s.
– The device that can “read” that information from the medium.
• The drive spins the disk.
• It has a magnetic sensing arm that moves over the disk.
• Performs nondestructive reading.
Optical
• Uses lasers to “read” the binary information from the medium, usually a disc.
– Millions of tiny holes are “burned” into the surface of the disc.
– The holes are interpreted as 1s; the absence of a hole is interpreted as a 0.
Secondary Memory Input Devices
• These input devices are used by a computer to store information and then to retrieve that information as
needed.
– External to the computer.
– Commonly consists of floppy disks, hard disk drives, or CD-ROMs.
• Secondary memory uses binary.
– The usual measurement is the byte.
A byte consists of 8 binary digits (bits). The byte is a standard unit.
The four most important characteristics of storage devices:
Speed (Access time) - How fast information can be taken from or stored onto the computer memory device’s
medium.
• Electronic circuits: Fastest to access.
– 40 billionths of a second.
• Floppy disks: Very slow in comparison.
– Takes up to 1/2 second to reach full speed before access is even possible.
Cost
• Megabyte: A Million bytes.
• Gigabyte: A billion bytes.
• Two parts to a removable secondary storage device:
– The cost of the medium. (Cheaper if bought in quantity)
– The cost of the drive.
Examples:                 Cost for drive    Cost for medium
Floppy drive (1.4 MB)     59.00             0.50
Zip 100 (100 MB)          99.00             10.00
CD-RW (650 MB)            360.00 and up     1.00
Capacity - The amount of information that can be stored on the medium
Unit          Description                                    Approximate Size
1 bit         1 binary digit
1 nibble      4 bits
1 byte        8 bits                                         1 character
1 kilobyte    1,024 bytes                                    1/2 page, double spaced
1 megabyte    1,048,576 bytes (about 1 million bytes)        500,000 pages
1 gigabyte    1,073,741,824 bytes (about 1 billion bytes)    5 million pages
1 terabyte    about 1 trillion bytes                         5 billion pages
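The byte counts in the units above follow directly from the powers-of-two definitions the handout uses, and can be double-checked with a few lines of code:

```python
# Binary (powers-of-two) definitions of the storage units.
KILOBYTE = 2 ** 10   # 1,024 bytes
MEGABYTE = 2 ** 20   # 1,048,576 bytes
GIGABYTE = 2 ** 30   # 1,073,741,824 bytes

print(MEGABYTE)              # 1048576
print(GIGABYTE // MEGABYTE)  # 1024 — megabytes in one gigabyte
```

Each unit is 1,024 times the one before it, which is why the byte counts are slightly larger than the round “million” and “billion” figures.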
The Central Processing Unit (CPU)
• Often referred to as the “brain” of the computer.
• Responsible for controlling all activities of the computer system.
• The three major components of the CPU are:
1. Arithmetic Unit (Computations performed)
Accumulator (Results of computations kept here)
2. Control Unit (Has two locations where numbers are kept)
Instruction Register (Instruction placed here for analysis)
Program Counter (Which instruction will be performed next?)
3. Instruction Decoding Unit (Decodes the instruction)
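The interplay of these components can be sketched as a toy fetch-decode-execute loop: the program counter fetches each instruction, the instruction register holds it while it is decoded, and the accumulator keeps the running result. The instruction set (ADD, SUB, HALT) is invented purely for illustration; real CPUs work on binary machine code.

```python
# A toy CPU: fetch, decode, and execute instructions one at a time,
# keeping results in the accumulator.
def run(program):
    accumulator = 0
    program_counter = 0
    while True:
        instruction_register = program[program_counter]  # fetch
        op, arg = instruction_register                   # decode
        program_counter += 1                             # point at next instruction
        if op == "ADD":                                  # execute
            accumulator += arg
        elif op == "SUB":
            accumulator -= arg
        elif op == "HALT":
            return accumulator

print(run([("ADD", 5), ("ADD", 3), ("SUB", 2), ("HALT", None)]))  # 6
```

Note how the machine performs only one instruction at a time, exactly as the von Neumann model above describes.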
Motherboard: The place where most of the electronics including the CPU are mounted.
Output Devices:
Output units store and display information (calculated results and other messages) for us to see and use.
• Floppy disk drives and Hard disk drives.
• Display monitors: Hi-resolution monitors come in several types:
– Cathode ray tube (CRT) - Streams of electrons make phosphor glow on a large vacuum
tube.
– Liquid crystal display (LCD) - A flat panel display that uses crystals to let varying amounts of
different colored light pass through it.
– Light emitting diode (LED) - A flat panel display that uses LED lights.
• Developed primarily for portable computers
Audio Output Devices
• Windows machines need a special audio card for audio output.
• Macintosh computers have audio playback built in.
• Audio output is useful for:
– Music
• A CD player is itself a computer.
• Most personal computers have CD drives that can access both music CDs and CD-ROMs.
– Voice synthesis (becoming more human-sounding)
– Multimedia
– Specialized tasks (e.g., an elevator’s floor announcements)
Optical Disks: CD-ROM and DVD
• CD-ROM (Compact Disk - Read Only Memory)
– By its definition, CD-ROM is Read Only.
– Special CD drives “burn” information into blank CDs.
• Burn: A laser is used to “burn” craters into the surface to represent a binary 1.
• Two main types of CDs:
» CD-R (Compact Disk - Recordable)
» CD-RW (Compact Disk - ReWritable)
– It takes longer to write to a CD-R than to a hard drive.
– Special software is needed to record.
DVD (Digital Versatile Disk)
• Allows up to 17 gigabytes of storage (from 4.7 GB to 17 GB).
• Compatible with older CD-ROM technology.
C. Technology in the classroom (Evolution and Developments)
There are various types of technologies currently used in traditional classrooms. Among these are:
Chalk/Board Activity: A traditional way of presenting activities using the mounted board and chalk or
marking materials.
Flash Card: A card bearing words, numbers, or pictures that is briefly displayed (as by a teacher to a class)
as a learning aid.
Charts/Flip Charts: Sheets of visual aids, such as graphs and diagrams (graphic designs that explain rather
than merely represent, e.g., drawings that show the arrangement and relations of parts), fastened together and
usually used in a traditional classroom.
Mock-Up Models: The use of a full-sized structural model built to scale chiefly for study, testing, or display.
Role Plays: To represent in action; for example, students act out the thoughts and feelings of each character
and convey the ideas as they understand them.
Games and Experiments:
Game is a physical or mental competition conducted according to rules with the participants in direct
opposition to each other. Game is also used as a method of transmitting knowledge in a classroom setting.
Suggested activity: students are grouped accordingly; each group chooses a topic from their major field of
specialization and then constructs or designs a game.
Experiment is a tentative procedure or operation carried out under controlled conditions in order to discover
an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.
Resource Speakers: A method of delivering knowledge inside the classroom using an individual other than
the teacher, commonly someone who performs the work in real life.
Audio/Audio Recordings: Sounds such as audiobooks (recordings of a book or magazine being read),
audio-cassettes (audiotape recordings mounted in a cassette), audio-lingual drills (routines of listening and
speaking in language learning), and other audio materials are utilized for classroom instruction.
Audio-Visual Shows: Involve the planning, preparation, and use of devices and materials that employ sight,
sound, or both, designed to aid the learning or teaching process. Among the devices used are still and motion
pictures, filmstrips, television, transparencies, audio-animatronics (lifelike electromechanical figures of a
person or animal with synchronized movement and sound), records, teaching machines, computers, and
videodiscs. The growth of audiovisual technology has reflected developments in both technology and
learning theory.
Computer in the Classroom: Having a computer in the classroom is an asset to any teacher. With a
computer in the classroom, teachers are able to demonstrate a new lesson, present new material, illustrate
how to use new programs, and show new websites.
Class Website: An easy way to display your students' work is to create a web page designed for your class.
Once a web page is designed, teachers can post homework assignments, student work, famous quotes,
trivia games, and much more. In today's society, children know how to use a computer and navigate
their way through a website, so why not give them one where they can be published authors? Just be careful,
as most districts maintain strict policies governing official websites for a school or classroom. Also, most
school districts provide teacher webpages that can easily be viewed through the school district's website.
Class Blogs and Wikis: There are a variety of Web 2.0 tools that are currently being implemented in the
classroom. Blogs (weblogs) allow students to maintain a running dialogue, such as a journal of thoughts,
ideas, and assignments that also provide for student comment and reflection. Wikis are more group focused
to allow multiple members of the group to edit a single document and create a truly collaborative and
carefully edited finished product.
Wireless Classroom Microphones: Noisy classrooms are a daily occurrence, and with the help of
microphones, students are able to hear their teachers more clearly. Children learn better when they hear the
teacher clearly. The benefit for teachers is that they no longer lose their voices at the end of the day.
Mobile Devices: Mobile devices such as clickers or smartphones can be used to enhance the classroom
experience by giving professors a way to get feedback. (M-Learning or Mobile Learning)
Interactive Whiteboards: An interactive whiteboard provides touch control of computer applications. These
enhance the classroom experience by showing anything that can appear on a computer screen. This not only
aids visual learning; because the board is interactive, students can draw, write, or manipulate images
directly on it.
Online Media: Streamed video websites can be utilized to enhance a classroom lesson (e.g. United
Streaming, Teacher Tube, YouTube etc.)
Digital Games: The field of educational games and serious games has been growing significantly over the
last few years. The digital games are being provided as tools for the classroom and have a lot of positive
feedback including higher motivation for students.
There are many other tools being utilized depending on the local school board and funds available. These
may include: digital cameras, video cameras, interactive whiteboard tools, document cameras, or LCD projectors.
Podcasts: Podcasting, the use of a personal computer to create a “radio show” that users can download and play
on their computer or portable music player, became the “bleeding edge” of personal publishing in 2005.
Podcasting is a relatively recent invention that allows anybody to publish files to the Internet, where individuals
can subscribe and receive new files by subscription. The primary benefit of podcasting for educators
is quite simple. It enables teachers to reach students through a medium that is both "cool" and a part of their daily
lives. For a technology that only requires a computer, microphone and internet connection, podcasting has the
capacity of advancing a student’s education beyond the classroom. When students listen to the podcasts of other
students as well as their own, they can quickly demonstrate their capacities to identify and define "quality." This
can be a great tool for learning and developing literacy inside and outside the classroom. Podcasting can help
sharpen students’ vocabulary, writing, editing, public speaking, and presentation skills. Students will also learn
skills that will be valuable in the working world, such as communication, time management, and problem-solving.
(Note: Submit a USB mass storage device to your professor to secure a copy of the MS WORD lecture)
Prepared by:
OMAR M. JACALNE
ICT Professor