PIPELINE AND
MEMORY
SUBJECT: COMPUTER ORGANIZATION
PIPELINING AND VECTOR PROCESSING
• Parallel Processing
• Pipelining
• Arithmetic Pipeline
• Instruction Pipeline
• RISC Pipeline
• Vector Processing
• Array Processors
PARALLEL PROCESSING
Execution of concurrent events in the computing
process to achieve faster computational speed
Levels of Parallel Processing
- Job or Program level
- Task or Procedure level
- Inter-Instruction level
- Intra-Instruction level
Parallel Processing
PARALLEL COMPUTERS
Architectural Classification - Flynn's classification
• Based on the multiplicity of Instruction Streams and Data Streams

                           Number of Data Streams
                           Single      Multiple
Number of       Single     SISD        SIMD
Instruction
Streams         Multiple   MISD        MIMD

• Instruction Stream
- Sequence of instructions read from memory
• Data Stream
- Operations performed on the data in the processor
COMPUTER ARCHITECTURES FOR PARALLEL
PROCESSING
Von Neumann-based:
- SISD: Superscalar processors, Superpipelined processors, VLIW
- MISD: Nonexistent
- SIMD: Array processors, Systolic arrays, Associative processors
- MIMD: Shared-memory multiprocessors (bus-based, crossbar-switch-based,
  multistage-IN-based); Message-passing multicomputers (hypercube, mesh,
  reconfigurable)
Dataflow
Reduction
SISD COMPUTER SYSTEMS
[Diagram: Control Unit drives the Processor Unit with the instruction stream; the Processor Unit exchanges the data stream with Memory]

Characteristics
- Standard von Neumann machine
- Instructions and data are stored in memory
- One operation at a time

Limitations
Von Neumann bottleneck: the maximum speed of the system is limited by the
memory bandwidth (bits/sec or bytes/sec)
- Memory bandwidth is the limiting factor
- Memory is shared by CPU and I/O
SISD PERFORMANCE IMPROVEMENTS
• Multiprogramming
• Spooling
• Multifunction processor
• Pipelining
• Exploiting instruction-level parallelism
- Superscalar
- Superpipelining
- VLIW (Very Long Instruction Word)
MISD COMPUTER SYSTEMS
[Diagram: multiple control units (CU), each driving a processor (P), all operating on a single data stream from a shared memory]

Characteristics
- There is no computer at present that can be
classified as MISD
SIMD COMPUTER SYSTEMS
[Diagram: a single Control Unit broadcasts the instruction stream over a data bus to processor units (P), which reach memory modules (M) through an alignment network]

Characteristics
- Only one copy of the program exists
- A single controller executes one instruction at a time
TYPES OF SIMD COMPUTERS
Array Processors
- The control unit broadcasts instructions to all PEs,
and all active PEs execute the same instructions
- ILLIAC IV, GF-11, Connection Machine, DAP, MPP
Systolic Arrays
- Regular arrangement of a large number of
very simple processors constructed on
VLSI circuits
- CMU Warp, Purdue CHiP
Associative Processors
- Content addressing
- Data transformation operations over many sets
of arguments with a single instruction
- STARAN, PEPE
MIMD COMPUTER SYSTEMS
[Diagram: processor-memory pairs (P, M) connected through an interconnection network to shared memory]

Characteristics
- Multiple processing units
- Execution of multiple instructions on multiple data

Types of MIMD computer systems
- Shared-memory multiprocessors
- Message-passing multicomputers
SHARED MEMORY MULTIPROCESSORS
Characteristics
All processors have equally direct access to
one large memory address space
Example systems
Bus and cache-based systems
- Sequent Balance, Encore Multimax
Multistage IN-based systems
- Ultracomputer, Butterfly, RP3, HEP
Crossbar switch-based systems
- C.mmp, Alliant FX/8
Limitations
Memory access latency
Hot spot problem
[Diagram: processors (P) and memory modules (M) connected by an interconnection network (IN): buses, multistage IN, or crossbar switch]
MESSAGE-PASSING MULTICOMPUTER
Characteristics
- Interconnected computers
- Each processor has its own memory, and processors
communicate via message passing

Example systems
- Tree structure: Teradata, DADO
- Mesh-connected: Rediflow, Series 2010, J-Machine
- Hypercube: Cosmic Cube, iPSC, NCUBE, FPS T Series, Mark III

Limitations
- Communication overhead
- Hard to program

[Diagram: processors (P), each with its own memory (M), connected by point-to-point links in a message-passing network]
PIPELINING
A technique of decomposing a sequential process
into suboperations, with each subprocess being
executed in a dedicated segment that operates
concurrently with all other segments.

Example: Ai * Bi + Ci for i = 1, 2, 3, ..., 7

Segment 1: R1 ← Ai, R2 ← Bi       Load Ai and Bi
Segment 2: R3 ← R1 * R2, R4 ← Ci  Multiply and load Ci
Segment 3: R5 ← R3 + R4           Add

[Diagram: memory supplies Ai, Bi, Ci; segment 1 holds R1 and R2, segment 2 a multiplier with R3 and R4, segment 3 an adder with R5]
OPERATIONS IN EACH PIPELINE STAGE
Clock
Pulse
Segment 1 Segment 2 Segment 3
Number R1 R2 R3 R4 R5
1 A1 B1
2 A2 B2 A1 * B1 C1
3 A3 B3 A2 * B2 C2 A1 * B1 + C1
4 A4 B4 A3 * B3 C3 A2 * B2 + C2
5 A5 B5 A4 * B4 C4 A3 * B3 + C3
6 A6 B6 A5 * B5 C5 A4 * B4 + C4
7 A7 B7 A6 * B6 C6 A5 * B5 + C5
8 A7 * B7 C7 A6 * B6 + C6
9 A7 * B7 + C7
Pipelining
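The clock-by-clock table above can be sketched in code. This is a minimal illustrative sketch (function name and data layout are assumptions, not part of the original design): each clock pulse shifts one (Ai, Bi, Ci) set to the next segment, so n = 3 tasks finish in k + n - 1 = 5 pulses.

```python
# Minimal sketch of the 3-segment pipeline computing A[i]*B[i] + C[i]:
# segment 1 loads Ai and Bi, segment 2 multiplies and loads Ci,
# segment 3 adds. Names and data layout are illustrative.
def pipeline_abc(A, B, C):
    n = len(A)
    results = []
    s1 = s2 = None                      # contents of segments 1 and 2
    for clock in range(n + 2):          # k + n - 1 = 3 + n - 1 clock pulses
        if s2 is not None:
            results.append(s2[0] + s2[1])            # segment 3: R5 <- R3 + R4
        s2 = (s1[0] * s1[1], s1[2]) if s1 else None  # segment 2: R3 <- R1*R2, R4 <- Ci
        s1 = (A[clock], B[clock], C[clock]) if clock < n else None  # segment 1
    return results

print(pipeline_abc([1, 2, 3], [4, 5, 6], [7, 8, 9]))  # [11, 18, 27]
```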
GENERAL PIPELINE
General Structure of a 4-Segment Pipeline
Input → [S1|R1] → [S2|R2] → [S3|R3] → [S4|R4]   (all segments driven by a common clock)

Space-Time Diagram
[Diagram: clock cycles 1-9 on the horizontal axis and segments 1-4 on the vertical axis; task T1 occupies segment 1 in cycle 1, segment 2 in cycle 2, and so on; tasks T1-T6 enter one per cycle, so the six tasks complete in 6 + 4 - 1 = 9 cycles]
PIPELINE SPEEDUP
n: Number of tasks to be performed

Conventional Machine (Non-Pipelined)
tn: Clock cycle
τ1: Time required to complete the n tasks
τ1 = n * tn

Pipelined Machine (k stages)
tp: Clock cycle (time to complete each suboperation)
τk: Time required to complete the n tasks
τk = (k + n - 1) * tp

Speedup
Sk: Speedup
Sk = n*tn / ((k + n - 1)*tp)

lim (n → ∞) Sk = tn / tp   ( = k, if tn = k * tp )
PIPELINE AND MULTIPLE FUNCTION UNITS
[Diagram: four functional units P1-P4 operating in parallel on instructions Ii, Ii+1, Ii+2, Ii+3]

Multiple Functional Units

Example
- 4-stage pipeline
- suboperation in each stage: tp = 20 ns
- 100 tasks to be executed
- 1 task in non-pipelined system: 20*4 = 80 ns

Pipelined System
(k + n - 1)*tp = (4 + 99) * 20 = 2060 ns

Non-Pipelined System
n*k*tp = 100 * 80 = 8000 ns

Speedup
Sk = 8000 / 2060 = 3.88

A 4-stage pipeline is basically equivalent to a system
with 4 identical functional units
Pipelining
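The arithmetic in this example can be checked with a short sketch (function and parameter names are illustrative):

```python
# Sketch of the speedup formula: k stages, n tasks, pipeline clock tp.
# If tn is not given, assume tn = k*tp (one non-pipelined task = k stages).
def pipeline_speedup(k, n, tp, tn=None):
    tn = tn if tn is not None else k * tp
    pipelined = (k + n - 1) * tp        # (k + n - 1)*tp
    nonpipelined = n * tn               # n*tn
    return nonpipelined / pipelined

print(round(pipeline_speedup(k=4, n=100, tp=20), 2))  # 3.88
```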
ARITHMETIC PIPELINE
Floating-point adder
[1] Compare the exponents
[2] Align the mantissa
[3] Add/sub the mantissa
[4] Normalize the result
X = A x 2^a
Y = B x 2^b

[Diagram:
Segment 1: compare exponents a and b by subtraction;
Segment 2: choose the larger exponent and align the mantissa of the smaller number by the exponent difference;
Segment 3: add or subtract the mantissas;
Segment 4: normalize the result and adjust the exponent.
Registers (R) separate the segments.]
Arithmetic Pipeline
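The four adder steps can be sketched as ordinary operations on (fraction, exponent) pairs with base-2 exponents. This is an illustrative software sketch, not the hardware datapath, and the input values are made up:

```python
# Sketch of the floating-point adder steps on (fraction, exponent) pairs.
def fp_add(a_frac, a_exp, b_frac, b_exp):
    # [1] Compare the exponents by subtraction.
    diff = a_exp - b_exp
    # [2] Align the mantissa of the number with the smaller exponent.
    if diff >= 0:
        exp, b_frac = a_exp, b_frac / (2 ** diff)
    else:
        exp, a_frac = b_exp, a_frac / (2 ** -diff)
    # [3] Add the mantissas.
    frac = a_frac + b_frac
    # [4] Normalize so that 0.5 <= |frac| < 1, adjusting the exponent.
    while abs(frac) >= 1:
        frac, exp = frac / 2, exp + 1
    while 0 < abs(frac) < 0.5:
        frac, exp = frac * 2, exp - 1
    return frac, exp

frac, exp = fp_add(0.9504, 3, 0.8200, 2)
print(frac, exp)
```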
4-STAGE FLOATING POINT ADDER
A = a x 2^p,  B = b x 2^q

[Diagram:
Stage S1: exponent subtractor (t = |p - q|, r = max(p, q)) and fraction selector pick the fraction with min(p, q);
Stage S2: right shifter shifts that fraction by t, alongside the other fraction;
Stage S3: fraction adder produces c; leading-zero counter;
Stage S4: left shifter normalizes c to d, exponent adder adjusts r to s.]

C = A + B = c x 2^r = d x 2^s
(r = max(p, q), 0.5 ≤ d < 1)
INSTRUCTION CYCLE
Six Phases* in an Instruction Cycle
[1] Fetch an instruction from memory
[2] Decode the instruction
[3] Calculate the effective address of the operand
[4] Fetch the operands from memory
[5] Execute the operation
[6] Store the result in the proper place
* Some instructions skip some phases
* Effective address calculation can be done as part
of the decoding phase
* Storage of the operation result into a register
is done automatically in the execution phase
==> 4-Stage Pipeline
[1] FI: Fetch an instruction from memory
[2] DA: Decode the instruction and calculate
the effective address of the operand
[3] FO: Fetch the operand
[4] EX: Execute the operation
Instruction Pipeline
INSTRUCTION PIPELINE
Execution of Three Instructions in a 4-Stage Pipeline
[Diagram: conventionally, instruction i completes all four stages (FI DA FO EX) before i+1 starts; pipelined, instructions i, i+1, i+2 overlap, each entering FI one cycle after the previous]
INSTRUCTION EXECUTION IN A 4-STAGE PIPELINE
[Timing diagram, steps 1-13: instructions 1-7 move through FI, DA, FO, EX one step apart; instruction 3 is a branch, so subsequent fetches are delayed until its EX stage completes]

Segment 1: Fetch instruction from memory
Segment 2: Decode instruction and calculate effective address;
           if branch, update PC and empty the pipe
Segment 3: Fetch operand from memory
Segment 4: Execute instruction; if interrupt, handle it and
           empty the pipe; otherwise update PC and continue
MAJOR HAZARDS IN PIPELINED EXECUTION

Structural hazards (Resource Conflicts)
- Hardware resources required by the instructions in
simultaneous overlapped execution cannot be met

Data hazards (Data Dependency Conflicts)
- An instruction scheduled to be executed in the pipeline requires the
result of a previous instruction, which is not yet available
- Example: R1 <- B + C followed by R1 <- R1 + 1; the INC must wait
(bubble) in DA for the ADD's result - a data dependency

Control hazards
- Branches and other instructions that change the PC
delay the fetch of the next instruction
- Example: after a JMP, the fetch stalls (bubble) until the branch
address is known - a branch address dependency

Hazards in pipelines may make it necessary to stall the pipeline
Pipeline Interlock: detect hazards and stall until they are cleared
STRUCTURAL HAZARDS
Structural Hazards
Occur when some resource has not been
duplicated enough to allow all combinations
of instructions in the pipeline to execute
Example: With one memory port, a data fetch and an instruction fetch
cannot be initiated in the same clock cycle

The pipeline is stalled for a structural hazard:
- With a one-port memory, instruction i+2's FI must stall until
instruction i's FO completes
- A two-port memory would serve both without a stall
DATA HAZARDS
Data Hazards
Occur when the execution of an instruction
depends on the result of a previous instruction
ADD R1, R2, R3
SUB R4, R1, R5

Hardware Techniques
Interlock
- Hardware detects the data dependencies and delays the scheduling
of the dependent instruction by stalling enough clock cycles
Forwarding (bypassing, short-circuiting)
- Accomplished by a data path that routes a value from a source
(usually an ALU) to a user, bypassing a designated register. This
allows the value to be used at an earlier stage in the
pipeline than would otherwise be possible

Software Technique
Instruction scheduling (compiler) for delayed load

Data hazards can be dealt with by either hardware
or software techniques
FORWARDING HARDWARE
[Diagram: the ALU result buffer feeds a bypass path back to the ALU-input MUXes, alongside the register file and the result write bus, so SUB can use ADD's result before it is written to R1]
Example:
ADD R1, R2, R3
SUB R4, R1, R5
3-stage Pipeline
I: Instruction Fetch
A: Decode, Read Registers,
ALU Operations
E: Write the result to the
destination register
ADD:  I A E
SUB:  I A E   Without bypassing, SUB's A stage must wait for ADD's E
SUB:  I A E   With bypassing, the ALU result is forwarded to SUB's A stage
INSTRUCTION SCHEDULING
a = b + c;
d = e - f;

Delayed Load
A load requiring that the following instruction not use its result

Unscheduled code:
LW Rb, b
LW Rc, c
ADD Ra, Rb, Rc
SW a, Ra
LW Re, e
LW Rf, f
SUB Rd, Re, Rf
SW d, Rd

Scheduled code:
LW Rb, b
LW Rc, c
LW Re, e
ADD Ra, Rb, Rc
LW Rf, f
SW a, Ra
SUB Rd, Re, Rf
SW d, Rd
Instruction Pipeline
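What the compiler's scheduler must detect can be sketched as a load-use check, assuming a simple (op, dest, src1, src2) tuple encoding of the instructions above (the encoding itself is an assumption for illustration):

```python
# Sketch: find load-use hazards, i.e. the pattern that delayed load
# forbids - an LW immediately followed by an instruction using its result.
def load_use_hazards(prog):
    hazards = []
    for i in range(len(prog) - 1):
        op, dest, *_ = prog[i]
        nxt = prog[i + 1]
        if op == "LW" and dest in nxt[2:]:
            hazards.append(i + 1)       # next instruction uses the loaded value
    return hazards

unscheduled = [
    ("LW", "Rb", "b", None), ("LW", "Rc", "c", None),
    ("ADD", "Ra", "Rb", "Rc"), ("SW", "a", "Ra", None),
    ("LW", "Re", "e", None), ("LW", "Rf", "f", None),
    ("SUB", "Rd", "Re", "Rf"), ("SW", "d", "Rd", None),
]
print(load_use_hazards(unscheduled))   # [2, 6]: ADD uses Rc, SUB uses Rf
```

Running the same check on the scheduled version of the code returns an empty list, which is exactly why the compiler reorders the loads.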
CONTROL HAZARDS
Branch Instructions
- Branch target address is not known until
the branch instruction is completed
- Stall -> waste of cycle times
[Diagram: the branch instruction's target address becomes available only after its EX stage, so the next instruction's FI must wait; the stalled cycles are wasted]
Dealing with Control Hazards
* Prefetch Target Instruction
* Branch Target Buffer
* Loop Buffer
* Branch Prediction
* Delayed Branch
Prefetch Target Instruction
− Fetch instructions in both streams, branch not taken and branch taken
− Both are saved until branch is executed. Then, select the right
instruction stream and discard the wrong stream
Branch Target Buffer (BTB; associative memory)
− Entry: address of a previously executed branch; target instruction
and the next few instructions
− When fetching an instruction, search the BTB
− If found, fetch the instruction stream from the BTB;
if not, fetch the new stream and update the BTB
Loop Buffer (high-speed register file)
− Stores an entire loop, allowing it to be executed without accessing memory
Branch Prediction
− Guessing the branch condition, and fetch an instruction stream based on
the guess. Correct guess eliminates the branch penalty
Delayed Branch
− Compiler detects the branch and rearranges the instruction sequence
by inserting useful instructions that keep the pipeline busy
in the presence of a branch instruction
RISC PIPELINE
Instruction Cycles of Three-Stage Instruction Pipeline
RISC Pipeline
RISC
- Machine with a very fast clock cycle that
executes at the rate of one instruction per cycle
<- Simple Instruction Set
Fixed Length Instruction Format
Register-to-Register Operations
Data Manipulation Instructions
I: Instruction Fetch
A: Decode, Read Registers, ALU Operations
E: Write a Register
Load and Store Instructions
I: Instruction Fetch
A: Decode, Evaluate Effective Address
E: Register-to-Memory or Memory-to-Register
Program Control Instructions
I: Instruction Fetch
A: Decode, Evaluate Branch Address
E: Write Register(PC)
DELAYED LOAD
Three-segment pipeline timing
Pipeline timing with data conflict
clock cycle   1  2  3  4  5  6
Load R1       I  A  E
Load R2          I  A  E
Add R1+R2           I  A  E
Store R3               I  A  E

Pipeline timing with delayed load
clock cycle   1  2  3  4  5  6  7
Load R1       I  A  E
Load R2          I  A  E
NOP                 I  A  E
Add R1+R2              I  A  E
Store R3                  I  A  E
LOAD: R1 ← M[address 1]
LOAD: R2 ← M[address 2]
ADD: R3 ← R1 + R2
STORE: M[address 3] ← R3
The data dependency is taken
care of by the compiler rather
than the hardware
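The compiler fix shown in the delayed-load tables can be sketched as a pass that inserts a NOP after any load whose result the next instruction uses (the (op, dest, srcs) encoding is an illustrative assumption):

```python
# Sketch: fill the load delay slot with a NOP when the instruction after
# a LOAD reads the register the LOAD writes.
def insert_delay_slots(prog):
    out = []
    for i, (op, dest, srcs) in enumerate(prog):
        out.append((op, dest, srcs))
        nxt = prog[i + 1] if i + 1 < len(prog) else None
        if op == "LOAD" and nxt is not None and dest in nxt[2]:
            out.append(("NOP", None, ()))     # fill the load delay slot
    return out

prog = [("LOAD", "R1", ()), ("LOAD", "R2", ()),
        ("ADD", "R3", ("R1", "R2")), ("STORE", "M3", ("R3",))]
scheduled = insert_delay_slots(prog)
print([op for op, _, _ in scheduled])  # ['LOAD', 'LOAD', 'NOP', 'ADD', 'STORE']
```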
DELAYED BRANCH

Using no-operation instructions:
[Timing diagram, clock cycles 1-10: 1. Load A, 2. Increment, 3. Add,
4. Subtract, 5. Branch to X, 6. NOP, 7. NOP, 8. Instruction in X -
two NOPs fill the branch delay slots]

Rearranging the instructions:
[Timing diagram, clock cycles 1-8: 1. Load A, 2. Increment,
3. Branch to X, 4. Add, 5. Subtract, 6. Instruction in X -
the add and subtract execute in the delay slots, saving two cycles]

The compiler analyzes the instructions before and after
the branch and rearranges the program sequence by
inserting useful instructions in the delay steps
RISC Pipeline
VECTOR PROCESSING
Vector Processing
Vector Processing Applications
• Problems that can be efficiently formulated in terms of vectors
− Long-range weather forecasting
− Petroleum explorations
− Seismic data analysis
− Medical diagnosis
− Aerodynamics and space flight simulations
− Artificial intelligence and expert systems
− Mapping the human genome
− Image processing
Vector Processor (computer)
Ability to process vectors, and related data structures such as matrices
and multi-dimensional arrays, much faster than conventional computers
Vector Processors may also be pipelined
VECTOR PROGRAMMING
DO 20 I = 1, 100
20 C(I) = B(I) + A(I)
Conventional computer
Initialize I = 0
20 Read A(I)
Read B(I)
Store C(I) = A(I) + B(I)
Increment I = I + 1
If I ≤ 100 goto 20
Vector computer
C(1:100) = A(1:100) + B(1:100)
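The contrast between the two programs can be sketched with NumPy, whose elementwise add stands in for a single vector instruction (the use of NumPy here is purely illustrative):

```python
# Scalar loop vs. one "vector instruction" for C(1:100) = A(1:100) + B(1:100).
import numpy as np

A = np.arange(1, 101)          # A(1:100)
B = np.arange(101, 201)        # B(1:100)

# Conventional computer: one element per loop iteration.
C_scalar = [A[i] + B[i] for i in range(100)]

# Vector computer: the whole range in one operation.
C_vector = A + B

print(np.array_equal(C_scalar, C_vector))   # True
```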
VECTOR INSTRUCTIONS
f1: V → V
f2: V → S
f3: V x V → V
f4: V x S → V
V: Vector operand
S: Scalar operand

Type  Mnemonic  Description (I = 1, ..., n)
f1    VSQR      Vector square root     B(I) ← SQR(A(I))
      VSIN      Vector sine            B(I) ← sin(A(I))
      VCOM      Vector complement      A(I) ← complement of A(I)
f2    VSUM      Vector summation       S ← Σ A(I)
      VMAX      Vector maximum         S ← max{A(I)}
f3    VADD      Vector add             C(I) ← A(I) + B(I)
      VMPY      Vector multiply        C(I) ← A(I) * B(I)
      VAND      Vector AND             C(I) ← A(I) . B(I)
      VLAR      Vector larger          C(I) ← max(A(I), B(I))
      VTGE      Vector test >          C(I) ← 0 if A(I) < B(I); C(I) ← 1 if A(I) > B(I)
f4    SADD      Vector-scalar add      B(I) ← S + A(I)
      SDIV      Vector-scalar divide   B(I) ← A(I) / S
VECTOR INSTRUCTION FORMAT
Vector Instruction Format:

| Operation code | Base address source 1 | Base address source 2 | Base address destination | Vector length |

[Diagram: pipeline for inner product - Source A and Source B feed a multiplier pipeline whose output feeds an adder pipeline]
MULTIPLE MEMORY MODULE AND INTERLEAVING
Multiple Module Memory
Address Interleaving
Different sets of addresses are assigned to
different memory modules
[Diagram: four memory modules M0-M3, each with its own address register (AR), memory array, and data register (DR), all connected to a common address bus and data bus]
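Low-order address interleaving across the four modules can be sketched in a few lines (the function name is illustrative): consecutive addresses fall in consecutive modules, which is what lets a vector access keep all four modules busy at once.

```python
# Sketch of low-order interleaving across four memory modules:
# the low address bits select the module, the rest select the word.
def module_of(addr, modules=4):
    return addr % modules, addr // modules   # (module number, word within module)

for addr in range(8):
    print(addr, module_of(addr))
# Addresses 0,1,2,3 map to modules 0,1,2,3; address 4 wraps back to module 0.
```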
MEMORY ORGANISATION

MEMORY HIERARCHY
[Diagram: the memory hierarchy connecting the CPU and the I/O processor to the levels of memory]
MEMORY HIERARCHY
• CPU logic is usually faster than main memory access
time, with the result that processing speed is limited
primarily by the speed of main memory.
• The cache is used for storing segments of programs
currently being executed in the CPU and temporary data
frequently needed in the present calculations.
• The typical access time ratio between cache and main
memory is about 1 to 7-10.
MEMORY HIERARCHY
• The memory hierarchy system consists of all
storage devices employed in a computer system,
from the slow but high-capacity auxiliary memory, to
a relatively faster main memory, to an even smaller
and faster cache memory.
[Diagram: the hierarchy runs from auxiliary (secondary) memory through primary memory to cache, in increasing order of speed and decreasing order of access time]
ACCESS METHODS
• A memory is a collection of memory locations. Accessing the memory means finding
and reaching the desired location and then reading the information from it. The
information in these locations can be accessed as follows:
1. Random access
2. Sequential access
3. Direct access
• Random Access: It is the access mode where each memory location has a unique address.
Using these unique addresses, each memory location can be addressed independently, in any
order, in an equal amount of time. Generally, main memories are random access memories (RAM).
ACCESS METHODS
• Sequential Access: If storage locations can be accessed only in a
certain predetermined sequence, the access method is known as
serial or sequential access.
• The opposite of RAM is Serial Access Memory (SAM). SAM works very well for
memory buffers, where the data is normally stored in the order in which
it will be used (good examples are the texture buffer memory on a video
card, magnetic tapes, etc.).
• Direct Access: In this mode, information is stored on tracks and
each track has a separate read/write head. This feature makes it
a semi-random access mode, which is generally used in magnetic disks.
MAIN MEMORY
• Most of the main memory in a general-purpose computer is
made up of RAM integrated circuit chips, but a portion of the
memory may be constructed with ROM chips.
• RAM - Random Access Memory
• Integrated RAM chips are available in two possible operating
modes: static and dynamic.
• ROM - Read Only Memory
WHAT IS RAM?
RANDOM ACCESS MEMORY
(RAM)
• RAM is used for storing bulk of programs and data
that is subject to change.
Typical 128 x 8 RAM chip:
[Diagram: a 128 x 8 RAM (128 words of 8 bits (one byte) each) with chip selects CS1 and CS2, read (RD) and write (WR) inputs, a 7-bit address (AD 7), and an 8-bit bidirectional data bus]

CS1  CS2  RD  WR   Memory function   State of data bus
 0    0    x   x   Inhibit           High-impedance
 0    1    x   x   Inhibit           High-impedance
 1    0    0   0   Inhibit           High-impedance
 1    0    0   1   Write             Input data to RAM
 1    0    1   x   Read              Output data from RAM
 1    1    x   x   Inhibit           High-impedance
TYPES OF RANDOM ACCESS MEMORY (RAM)
• Static RAM (SRAM)
• Each cell stores a bit with a six-transistor circuit.
• Retains its value indefinitely, as long as it is kept powered.
• Faster (8-16 times faster) and more expensive (8-16 times more expensive as well)
than DRAM.
• Dynamic RAM (DRAM)
• Each cell stores a bit with a capacitor and a transistor.
• Value must be refreshed every 10-100 ms.
• Slower and cheaper than SRAM. Has reduced power consumption and a large storage
capacity.
In contrast to SRAM and DRAM:
• Non-Volatile RAM (NVRAM)
• Retains its information when power is turned off (non-volatile).
• The best-known form of NVRAM today is flash memory.
Virtually all desktop and server computers since 1975 have used DRAM for main memory
and SRAM for cache.
READ ONLY MEMORY
(ROM)
• It is non-volatile memory, which retains the data even when power is
removed from this memory. Programs and data that can not be altered
are stored in ROM.
• ROM is used for storing programs that are PERMANENTLY resident in
the computer and for tables of constants that do not change in value
once the production of the computer is completed.
• The ROM portion of main memory is needed for storing an initial program
called the bootstrap loader, whose function is to start the computer operating
system when power is turned on.
READ ONLY MEMORY
(ROM)
• Typical ROM chip:
• Since the ROM can only be read, the data bus can only be in output mode.
• There is no need for READ and WRITE controls.
• For the same-sized chip, it is possible to have more bits of ROM
than of RAM, because the internal binary cells in ROM occupy less
space than in RAM.
[Diagram: a 512 x 8 ROM chip with chip selects CS1 and CS2, a 9-bit address (AD 9), and an 8-bit output data bus]
TYPES OF ROM
• The required paths in a ROM may be programmed in four different
ways:
1. Mask Programming: It is done by the company during the fabrication
process of the unit. The procedure for fabricating a ROM requires that
the customer fill out the truth table he wishes the ROM to satisfy.
2. Programmable Read-Only Memory (PROM):
PROMs contain all fuses intact, giving all 1's in the bits of the stored
words. A blown fuse defines a binary 0 state and an intact fuse gives a
binary 1 state. This allows the user to program the PROM by using a
special instrument called a PROM programmer.
TYPES OF ROM
3. Erasable PROM (EPROM): In a PROM, once a pattern is fixed it is
permanent and cannot be altered. The EPROM can be
restored to its initial state even though it has been
programmed previously. When an EPROM is placed under a special
ultraviolet light for a given period of time, all the data are erased.
After erasure, the EPROM returns to its initial state and can be
programmed with a new set of values.
4. Electrically Erasable PROM (EEPROM): It is similar to EPROM
except that the previously programmed connections can be erased
with an electrical signal instead of ultraviolet light. The advantage
is that the device can be erased without removing it from its socket.
AUXILIARY MEMORY
• Also called secondary memory; used to store large chunks
of data at a lower cost per byte than primary memory, and
for backup.
• It does not lose the data when the device is powered down;
it is non-volatile.
• It is not directly accessible by the CPU; it is accessed via
the input/output channels.
• The most common forms of auxiliary memory in
consumer systems are flash memory, optical discs,
magnetic disks, and magnetic tapes.
TYPES OF AUXILIARY MEMORY
• Flash memory: An electronic non-volatile computer
storage device that can be electrically erased and
reprogrammed, and works without any moving parts.
Examples are USB flash drives and solid-state drives.
• Optical disc: It is a storage medium from which data is
read, and to which it is written, by lasers. There are three
basic types of optical disks: CD-ROM (read-only),
WORM (write-once read-many), and EO (erasable optical
disks).
TYPES OF AUXILIARY MEMORY
• Magnetic tapes: A magnetic tape unit consists of electrical,
mechanical, and electronic components that provide the
parts and control mechanism for the unit.
• The tape itself is a strip of plastic coated with a magnetic
recording medium. Bits are recorded as magnetic spots on the
tape along several tracks; information is grouped into records
and blocks separated by inter-record gaps (IRG), with an EOF
mark ending each file.
[Diagram: a tape file laid out as records R1-R6 grouped into blocks 1-3, separated by inter-record gaps (IRG) and terminated by EOF]
• R/W heads are mounted on each track so that
data can be recorded and read as a
sequence of characters.
• The tape can be stopped, started moving forward
or in reverse, or rewound, but it cannot be
stopped fast enough between individual
characters.
TYPES OF AUXILIARY MEMORY
Magnetic Disk:
• A magnetic disk is a circular plate
constructed of metal or plastic coated with
magnetized material.
• Both sides of the disk are used, and
several disks may be stacked on one
spindle, with read/write heads available on
each surface.
• Bits are stored on the magnetized surface in
spots along concentric circles called
tracks. Tracks are commonly divided into
sections called sectors.
• Disks that are permanently attached and
cannot be removed by the occasional user are
called hard disks.
ASSOCIATIVE MEMORY
• A memory unit accessed by content is called an
associative memory or content addressable
memory (CAM).
• This type of memory is accessed simultaneously
and in parallel on the basis of data content rather
than by specific address or location.
READ/WRITE OPERATION IN
CAM
• Write operation:
 When a word is written in an associative memory, no address is
given.
 The memory is capable of finding an unused location to store the
word.
• Read operation:
 When a word is to be read from an associative memory, the
contents of the word, or a part of the word is specified.
 The memory locates all the words which match the specified
content and marks them for reading.
HARDWARE ORGANISATION
Argument register (A): It contains the
word to be searched. It has n bits (one for
each bit of the word).
Key register (K): It provides a mask for
choosing a particular field or key in the
argument word. It also has n bits.
Associative memory array: It contains
the words which are to be compared
with the argument word.
Match register (M): It has m bits, one bit
corresponding to each word in the
memory array. After the matching
process, the bits corresponding to
matching words in the match register are set
to 1.
MATCHING PROCESS
• The entire argument word is compared with each memory
word if the key register contains all 1's. Otherwise, only
those bits in the argument that have 1's in their corresponding
positions of the key register are compared.
• Thus the key provides a mask or identifying piece of
information which specifies how the reference to memory is
made.
• To illustrate with a numerical example, suppose that only the
three leftmost bits of the key register K are 1's. Then only the
three leftmost bits of the argument register A are compared
with the memory words, because K has 1's in these three positions
only.
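The masked comparison can be sketched in a few lines; the argument, key, and stored words below are illustrative values, not from the original slides:

```python
# Sketch of the CAM matching process: a word matches when its bits agree
# with the argument A in every position where the key K holds a 1.
def cam_match(A, K, words):
    return [1 if (w ^ A) & K == 0 else 0 for w in words]

A = 0b101_111100
K = 0b111_000000          # mask: compare only the three leftmost bits
words = [0b100_111100, 0b101_000001]
print(cam_match(A, K, words))   # [0, 1]: only the second word matches
```

The XOR exposes every disagreeing bit; ANDing with the key then ignores disagreements outside the masked field, mirroring how the match register bits are set.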
DISADVANTAGE
• An associative memory is more expensive than a
random access memory because each cell must have
an extra storage capability as well as logic circuits for
matching its content with an external argument.
• For this reason, associative memories are used in
applications where the search time is very critical and
must be very short.
CACHE MEMORY
• If the active portions of the program and data are
placed in a fast small memory, the average memory
access time can be reduced.
• Thus reducing the total execution time of the
program
• Such a fast small memory is referred to as cache
memory
• The cache is the fastest component in the memory
hierarchy and approaches the speed of the CPU
BASIC OPERATIONS OF CACHE
• When CPU needs to access memory, the cache is examined.
• If the word is found in the cache, it is read from the cache memory.
• If the word addressed by the CPU is not found in the cache, the main
memory is accessed to read the word.
• A block of words containing the one just accessed is then transferred
from main memory to cache memory.
• If the cache is full, an existing block is replaced according to the
replacement algorithm being used, to make room for the new block.
HIT RATIO
• When the CPU refers to memory and finds the word
in cache, it is said to produce a hit
• Otherwise, it is a miss
• The performance of cache memory is frequently
measured in terms of a quantity called hit ratio
Hit ratio = hit /(hit+miss)
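The hit ratio can be sketched together with the average access time it implies, h·tc + (1 − h)·tm; the formula extension and the access times below are illustrative assumptions, not figures from the slides:

```python
# Sketch: hit ratio and the average memory access time it implies,
# assuming cache access time tc and main-memory access time tm.
def hit_ratio(hits, misses):
    return hits / (hits + misses)

def avg_access_time(h, tc, tm):
    return h * tc + (1 - h) * tm    # misses pay the main-memory time

h = hit_ratio(hits=900, misses=100)
print(h)                                     # 0.9
print(round(avg_access_time(h, tc=10, tm=100), 1))   # 19.0
```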
MAPPING PROCESS
•The transformation of data from main memory to
cache memory is referred to as a mapping
process, there are three types of mapping:
 Associative mapping
 Direct mapping
 Set-associative mapping
• To help understand the mapping procedures, consider a main
memory addressed by n = 15 bits and a cache addressed by
k = 9 bits, as in the example below.
ASSOCIATIVE MAPPING
• The fastest and most flexible cache
organization uses an associative
memory.
• The associative memory stores both
the address and data of the memory
word.
• This permits any location in cache to
store a word from main memory.
• The address value of 15 bits is stored in the cache together
with its data word.
DIRECT MAPPING
• Associative memory is
expensive compared to
RAM.
• In general case, there are
2^k words in cache
memory and 2^n words in
main memory (in our case,
k=9, n=15).
• The n bit memory address
is divided into two fields:
k-bits for the index and
n-k bits for the tag field.
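The index/tag split for this example (n = 15, k = 9) can be sketched directly; the sample address is illustrative:

```python
# Sketch of the direct-mapping address split: the low k bits index the
# cache line, the remaining n - k bits form the tag (here n=15, k=9).
def split_address(addr, n=15, k=9):
    index = addr & ((1 << k) - 1)     # low k bits select the cache line
    tag = addr >> k                   # remaining n - k bits are the tag
    return tag, index

tag, index = split_address(0o02777)   # a 15-bit address written in octal
print(tag, index)                     # 2 511
```

Two addresses with the same index but different tags map to the same cache line, which is exactly the conflict that set-associative mapping (next section) relieves.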
SET – ASSOCIATIVE MAPPING
• The disadvantage of direct
mapping is that two words with
the same index in their
address but with different tag
values cannot reside in cache
memory at the same time.
• Set-associative mapping is an
improvement over direct
mapping in that each word of
cache can store two or more
words of memory under the same
index address.
REPLACEMENT ALGORITHMS
• Optimal replacement algorithm – find the block for
replacement that has minimum chance to be
referenced next time.
• Two algorithms:
• FIFO: Selects the item which has been in the set
the longest.
• LRU: Selects the item which has been least
recently used by the CPU.
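LRU bookkeeping for one cache set can be sketched with a recency list (FIFO would instead evict in insertion order); the set size and block tags are illustrative:

```python
# Sketch: LRU replacement within one 2-way set. The list holds resident
# block tags, least recently used first, most recently used last.
def lru_access(set_blocks, tag, ways=2):
    if tag in set_blocks:
        set_blocks.remove(tag)        # hit: refresh recency
    elif len(set_blocks) == ways:
        set_blocks.pop(0)             # miss, set full: evict least recent
    set_blocks.append(tag)            # tag is now most recently used
    return set_blocks

s = []
for t in ["A", "B", "A", "C"]:        # C evicts B, since A was just reused
    s = lru_access(s, t)
print(s)                              # ['A', 'C']
```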
VIRTUAL MEMORY
• Virtual memory is a common part of the operating system on
desktop computers.
• The term Virtual Memory refers to something which
appears to be present but actually is not.
• This technique allows users to use more memory for a
program than the real memory of a computer.
NEED OF VIRTUAL MEMORY
• Virtual memory is an imaginary memory which we assume
or use when we have a program that exceeds the available
physical memory.
 Virtual memory is
temporary memory
which is used along
with the RAM of the
system.
ADVANTAGES
• Allows processes whose aggregate memory requirement is
greater than the amount of physical memory to run, as infrequently
used pages can reside on the disk.
• Virtual memory allows speed gain when only a particular
segment of the program is required for the execution of the
program.
• This concept is very helpful in implementing
multiprogramming environment.
DISADVANTAGES
• Applications run rather slower when they are using
virtual memory.
• It takes more time to switch between applications.
• Reduces system stability.
 

Viewers also liked

Pipelining and vector processing
Pipelining and vector processingPipelining and vector processing
Pipelining and vector processingKamal Acharya
 
Introduction Cell Processor
Introduction Cell ProcessorIntroduction Cell Processor
Introduction Cell Processorcoolmirza143
 
central processing unit and pipeline
central processing unit and pipelinecentral processing unit and pipeline
central processing unit and pipelineRai University
 
Coa swetappt copy
Coa swetappt   copyCoa swetappt   copy
Coa swetappt copysweta_pari
 
Multivector and multiprocessor
Multivector and multiprocessorMultivector and multiprocessor
Multivector and multiprocessorKishan Panara
 
Project pptVLSI ARCHITECTURE FOR AN IMAGE COMPRESSION SYSTEM USING VECTOR QUA...
Project pptVLSI ARCHITECTURE FOR AN IMAGE COMPRESSION SYSTEM USING VECTOR QUA...Project pptVLSI ARCHITECTURE FOR AN IMAGE COMPRESSION SYSTEM USING VECTOR QUA...
Project pptVLSI ARCHITECTURE FOR AN IMAGE COMPRESSION SYSTEM USING VECTOR QUA...saumyatapu
 
FD.IO Vector Packet Processing
FD.IO Vector Packet ProcessingFD.IO Vector Packet Processing
FD.IO Vector Packet ProcessingKernel TLV
 
Introduction to Apache Beam & No Shard Left Behind: APIs for Massive Parallel...
Introduction to Apache Beam & No Shard Left Behind: APIs for Massive Parallel...Introduction to Apache Beam & No Shard Left Behind: APIs for Massive Parallel...
Introduction to Apache Beam & No Shard Left Behind: APIs for Massive Parallel...Dan Halperin
 
Parallel Processors (SIMD)
Parallel Processors (SIMD) Parallel Processors (SIMD)
Parallel Processors (SIMD) Ali Raza
 
Random scan displays and raster scan displays
Random scan displays and raster scan displaysRandom scan displays and raster scan displays
Random scan displays and raster scan displaysSomya Bagai
 
Speech Recognition System By Matlab
Speech Recognition System By MatlabSpeech Recognition System By Matlab
Speech Recognition System By MatlabAnkit Gujrati
 

Viewers also liked (16)

Pipelining and vector processing
Pipelining and vector processingPipelining and vector processing
Pipelining and vector processing
 
Computer Architecture
Computer ArchitectureComputer Architecture
Computer Architecture
 
Introduction Cell Processor
Introduction Cell ProcessorIntroduction Cell Processor
Introduction Cell Processor
 
central processing unit and pipeline
central processing unit and pipelinecentral processing unit and pipeline
central processing unit and pipeline
 
Coa swetappt copy
Coa swetappt   copyCoa swetappt   copy
Coa swetappt copy
 
Aca2 08 new
Aca2 08 newAca2 08 new
Aca2 08 new
 
Multivector and multiprocessor
Multivector and multiprocessorMultivector and multiprocessor
Multivector and multiprocessor
 
Project pptVLSI ARCHITECTURE FOR AN IMAGE COMPRESSION SYSTEM USING VECTOR QUA...
Project pptVLSI ARCHITECTURE FOR AN IMAGE COMPRESSION SYSTEM USING VECTOR QUA...Project pptVLSI ARCHITECTURE FOR AN IMAGE COMPRESSION SYSTEM USING VECTOR QUA...
Project pptVLSI ARCHITECTURE FOR AN IMAGE COMPRESSION SYSTEM USING VECTOR QUA...
 
FD.IO Vector Packet Processing
FD.IO Vector Packet ProcessingFD.IO Vector Packet Processing
FD.IO Vector Packet Processing
 
Introduction to Apache Beam & No Shard Left Behind: APIs for Massive Parallel...
Introduction to Apache Beam & No Shard Left Behind: APIs for Massive Parallel...Introduction to Apache Beam & No Shard Left Behind: APIs for Massive Parallel...
Introduction to Apache Beam & No Shard Left Behind: APIs for Massive Parallel...
 
Array Processor
Array ProcessorArray Processor
Array Processor
 
Parallel Processors (SIMD)
Parallel Processors (SIMD) Parallel Processors (SIMD)
Parallel Processors (SIMD)
 
Random scan displays and raster scan displays
Random scan displays and raster scan displaysRandom scan displays and raster scan displays
Random scan displays and raster scan displays
 
Chapter 1(4)SCALAR AND VECTOR
Chapter 1(4)SCALAR AND VECTORChapter 1(4)SCALAR AND VECTOR
Chapter 1(4)SCALAR AND VECTOR
 
Android workShop
Android workShopAndroid workShop
Android workShop
 
Speech Recognition System By Matlab
Speech Recognition System By MatlabSpeech Recognition System By Matlab
Speech Recognition System By Matlab
 

Similar to Computer Organozation

CS304PC:Computer Organization and Architecture Session 33 demo 1 ppt.pdf
CS304PC:Computer Organization and Architecture  Session 33 demo 1 ppt.pdfCS304PC:Computer Organization and Architecture  Session 33 demo 1 ppt.pdf
CS304PC:Computer Organization and Architecture Session 33 demo 1 ppt.pdfAsst.prof M.Gokilavani
 
Parallel Processing Techniques Pipelining
Parallel Processing Techniques PipeliningParallel Processing Techniques Pipelining
Parallel Processing Techniques PipeliningRNShukla7
 
Computer_Architecture_3rd_Edition_by_Moris_Mano_Ch_09.ppt
Computer_Architecture_3rd_Edition_by_Moris_Mano_Ch_09.pptComputer_Architecture_3rd_Edition_by_Moris_Mano_Ch_09.ppt
Computer_Architecture_3rd_Edition_by_Moris_Mano_Ch_09.pptHermellaGashaw
 
Pipelining And Vector Processing
Pipelining And Vector ProcessingPipelining And Vector Processing
Pipelining And Vector ProcessingTheInnocentTuber
 
pipelining ppt.pdf
pipelining ppt.pdfpipelining ppt.pdf
pipelining ppt.pdfWilliamTom9
 
Pipelining of Processors Computer Architecture
Pipelining of  Processors Computer ArchitecturePipelining of  Processors Computer Architecture
Pipelining of Processors Computer ArchitectureHaris456
 
Topic2a ss pipelines
Topic2a ss pipelinesTopic2a ss pipelines
Topic2a ss pipelinesturki_09
 
Lecutre-6 Datapath Design.ppt
Lecutre-6 Datapath Design.pptLecutre-6 Datapath Design.ppt
Lecutre-6 Datapath Design.pptRaJibRaju3
 
CMPN301-Pipelining_V2.pptx
CMPN301-Pipelining_V2.pptxCMPN301-Pipelining_V2.pptx
CMPN301-Pipelining_V2.pptxNadaAAmin
 
CPU Pipelining and Hazards - An Introduction
CPU Pipelining and Hazards - An IntroductionCPU Pipelining and Hazards - An Introduction
CPU Pipelining and Hazards - An IntroductionDilum Bandara
 
Embedded computing platform design
Embedded computing platform designEmbedded computing platform design
Embedded computing platform designRAMPRAKASHT1
 
Design pipeline architecture for various stage pipelines
Design pipeline architecture for various stage pipelinesDesign pipeline architecture for various stage pipelines
Design pipeline architecture for various stage pipelinesMahmudul Hasan
 

Similar to Computer Organozation (20)

CS304PC:Computer Organization and Architecture Session 33 demo 1 ppt.pdf
CS304PC:Computer Organization and Architecture  Session 33 demo 1 ppt.pdfCS304PC:Computer Organization and Architecture  Session 33 demo 1 ppt.pdf
CS304PC:Computer Organization and Architecture Session 33 demo 1 ppt.pdf
 
Parallel Processing Techniques Pipelining
Parallel Processing Techniques PipeliningParallel Processing Techniques Pipelining
Parallel Processing Techniques Pipelining
 
Computer_Architecture_3rd_Edition_by_Moris_Mano_Ch_09.ppt
Computer_Architecture_3rd_Edition_by_Moris_Mano_Ch_09.pptComputer_Architecture_3rd_Edition_by_Moris_Mano_Ch_09.ppt
Computer_Architecture_3rd_Edition_by_Moris_Mano_Ch_09.ppt
 
BTCS501_MM_Ch9.pptx
BTCS501_MM_Ch9.pptxBTCS501_MM_Ch9.pptx
BTCS501_MM_Ch9.pptx
 
Pipelining And Vector Processing
Pipelining And Vector ProcessingPipelining And Vector Processing
Pipelining And Vector Processing
 
Parallel processing and pipelining
Parallel processing and pipeliningParallel processing and pipelining
Parallel processing and pipelining
 
pipelining ppt.pdf
pipelining ppt.pdfpipelining ppt.pdf
pipelining ppt.pdf
 
Unit 4 COA.pptx
Unit 4 COA.pptxUnit 4 COA.pptx
Unit 4 COA.pptx
 
Pipelining of Processors Computer Architecture
Pipelining of  Processors Computer ArchitecturePipelining of  Processors Computer Architecture
Pipelining of Processors Computer Architecture
 
CA UNIT III.pptx
CA UNIT III.pptxCA UNIT III.pptx
CA UNIT III.pptx
 
COA Unit-5.pptx
COA Unit-5.pptxCOA Unit-5.pptx
COA Unit-5.pptx
 
Pipeline r014
Pipeline   r014Pipeline   r014
Pipeline r014
 
Topic2a ss pipelines
Topic2a ss pipelinesTopic2a ss pipelines
Topic2a ss pipelines
 
Lecutre-6 Datapath Design.ppt
Lecutre-6 Datapath Design.pptLecutre-6 Datapath Design.ppt
Lecutre-6 Datapath Design.ppt
 
CMPN301-Pipelining_V2.pptx
CMPN301-Pipelining_V2.pptxCMPN301-Pipelining_V2.pptx
CMPN301-Pipelining_V2.pptx
 
Pipelining slides
Pipelining slides Pipelining slides
Pipelining slides
 
Coa.ppt2
Coa.ppt2Coa.ppt2
Coa.ppt2
 
CPU Pipelining and Hazards - An Introduction
CPU Pipelining and Hazards - An IntroductionCPU Pipelining and Hazards - An Introduction
CPU Pipelining and Hazards - An Introduction
 
Embedded computing platform design
Embedded computing platform designEmbedded computing platform design
Embedded computing platform design
 
Design pipeline architecture for various stage pipelines
Design pipeline architecture for various stage pipelinesDesign pipeline architecture for various stage pipelines
Design pipeline architecture for various stage pipelines
 

Recently uploaded

Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...121011101441
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...asadnawaz62
 
Class 1 | NFPA 72 | Overview Fire Alarm System
Class 1 | NFPA 72 | Overview Fire Alarm SystemClass 1 | NFPA 72 | Overview Fire Alarm System
Class 1 | NFPA 72 | Overview Fire Alarm Systemirfanmechengr
 
Energy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptxEnergy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptxsiddharthjain2303
 
Industrial Safety Unit-I SAFETY TERMINOLOGIES
Industrial Safety Unit-I SAFETY TERMINOLOGIESIndustrial Safety Unit-I SAFETY TERMINOLOGIES
Industrial Safety Unit-I SAFETY TERMINOLOGIESNarmatha D
 
Input Output Management in Operating System
Input Output Management in Operating SystemInput Output Management in Operating System
Input Output Management in Operating SystemRashmi Bhat
 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfROCENODodongVILLACER
 
Industrial Safety Unit-IV workplace health and safety.ppt
Industrial Safety Unit-IV workplace health and safety.pptIndustrial Safety Unit-IV workplace health and safety.ppt
Industrial Safety Unit-IV workplace health and safety.pptNarmatha D
 
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor CatchersTechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catcherssdickerson1
 
Steel Structures - Building technology.pptx
Steel Structures - Building technology.pptxSteel Structures - Building technology.pptx
Steel Structures - Building technology.pptxNikhil Raut
 
Solving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptSolving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptJasonTagapanGulla
 
Vishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documentsVishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documentsSachinPawar510423
 
Concrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxConcrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxKartikeyaDwivedi3
 
System Simulation and Modelling with types and Event Scheduling
System Simulation and Modelling with types and Event SchedulingSystem Simulation and Modelling with types and Event Scheduling
System Simulation and Modelling with types and Event SchedulingBootNeck1
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girlsssuser7cb4ff
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
Mine Environment II Lab_MI10448MI__________.pptx
Mine Environment II Lab_MI10448MI__________.pptxMine Environment II Lab_MI10448MI__________.pptx
Mine Environment II Lab_MI10448MI__________.pptxRomil Mishra
 
Internet of things -Arshdeep Bahga .pptx
Internet of things -Arshdeep Bahga .pptxInternet of things -Arshdeep Bahga .pptx
Internet of things -Arshdeep Bahga .pptxVelmuruganTECE
 

Recently uploaded (20)

Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...
 
Class 1 | NFPA 72 | Overview Fire Alarm System
Class 1 | NFPA 72 | Overview Fire Alarm SystemClass 1 | NFPA 72 | Overview Fire Alarm System
Class 1 | NFPA 72 | Overview Fire Alarm System
 
Energy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptxEnergy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptx
 
Industrial Safety Unit-I SAFETY TERMINOLOGIES
Industrial Safety Unit-I SAFETY TERMINOLOGIESIndustrial Safety Unit-I SAFETY TERMINOLOGIES
Industrial Safety Unit-I SAFETY TERMINOLOGIES
 
Input Output Management in Operating System
Input Output Management in Operating SystemInput Output Management in Operating System
Input Output Management in Operating System
 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdf
 
Industrial Safety Unit-IV workplace health and safety.ppt
Industrial Safety Unit-IV workplace health and safety.pptIndustrial Safety Unit-IV workplace health and safety.ppt
Industrial Safety Unit-IV workplace health and safety.ppt
 
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor CatchersTechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
 
Steel Structures - Building technology.pptx
Steel Structures - Building technology.pptxSteel Structures - Building technology.pptx
Steel Structures - Building technology.pptx
 
Solving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptSolving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.ppt
 
Vishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documentsVishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documents
 
Concrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxConcrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptx
 
System Simulation and Modelling with types and Event Scheduling
System Simulation and Modelling with types and Event SchedulingSystem Simulation and Modelling with types and Event Scheduling
System Simulation and Modelling with types and Event Scheduling
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girls
 
Design and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdfDesign and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdf
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
Mine Environment II Lab_MI10448MI__________.pptx
Mine Environment II Lab_MI10448MI__________.pptxMine Environment II Lab_MI10448MI__________.pptx
Mine Environment II Lab_MI10448MI__________.pptx
 
Internet of things -Arshdeep Bahga .pptx
Internet of things -Arshdeep Bahga .pptxInternet of things -Arshdeep Bahga .pptx
Internet of things -Arshdeep Bahga .pptx
 

Computer Organization

PIPELINING AND VECTOR PROCESSING
• Parallel Processing
• Pipelining
• Arithmetic Pipeline
• Instruction Pipeline
• RISC Pipeline
• Vector Processing
• Array Processors
PARALLEL PROCESSING
Execution of concurrent events in the computing process to achieve faster computational speed.
Levels of parallel processing:
- Job or Program level
- Task or Procedure level
- Inter-Instruction level
- Intra-Instruction level
PARALLEL COMPUTERS
Architectural classification — Flynn's classification is based on the multiplicity of instruction streams and data streams:
• Instruction stream: sequence of instructions read from memory
• Data stream: operations performed on the data in the processor

                            Number of Data Streams
                            Single      Multiple
Number of      Single       SISD        SIMD
Instruction
Streams        Multiple     MISD        MIMD
COMPUTER ARCHITECTURES FOR PARALLEL PROCESSING
Von Neumann based:
- SISD: superscalar processors, superpipelined processors, VLIW
- MISD: nonexistence (no practical machines)
- SIMD: array processors, systolic arrays, associative processors
- MIMD: shared-memory multiprocessors (bus based, crossbar switch based, multistage IN based); message-passing multicomputers (hypercube, mesh, reconfigurable)
Dataflow
Reduction
SISD COMPUTER SYSTEMS
[Diagram: Control Unit → Processor Unit → Memory, connected by an instruction stream and a data stream]
Characteristics
- Standard von Neumann machine
- Instructions and data are stored in memory
- One operation at a time
Limitations: the von Neumann bottleneck
- The maximum speed of the system is limited by the memory bandwidth (bits/sec or bytes/sec)
- Memory is shared by the CPU and I/O
SISD PERFORMANCE IMPROVEMENTS
• Multiprogramming
• Spooling
• Multifunction processor
• Pipelining
• Exploiting instruction-level parallelism
  - Superscalar
  - Superpipelining
  - VLIW (Very Long Instruction Word)
MISD COMPUTER SYSTEMS
[Diagram: multiple Control Unit / Processor (CU, P) pairs sharing one memory; several instruction streams operate on a single data stream]
Characteristics
- There is no computer at present that can be classified as MISD
SIMD COMPUTER SYSTEMS
[Diagram: one Control Unit broadcasting an instruction stream over a data bus to processor units P, connected through an alignment network to memory modules M]
Characteristics
- Only one copy of the program exists
- A single controller executes one instruction at a time
TYPES OF SIMD COMPUTERS
Array Processors
- The control unit broadcasts instructions to all PEs, and all active PEs execute the same instructions
- Examples: ILLIAC IV, GF-11, Connection Machine, DAP, MPP
Systolic Arrays
- Regular arrangement of a large number of very simple processors constructed on VLSI circuits
- Examples: CMU Warp, Purdue CHiP
Associative Processors
- Content addressing
- Data transformation operations over many sets of arguments with a single instruction
- Examples: STARAN, PEPE
MIMD COMPUTER SYSTEMS
[Diagram: processor/memory pairs (P, M) connected through an interconnection network to shared memory]
Characteristics
- Multiple processing units
- Execution of multiple instructions on multiple data
Types of MIMD computer systems
- Shared-memory multiprocessors
- Message-passing multicomputers
SHARED MEMORY MULTIPROCESSORS
[Diagram: processors P connected to memory modules M through an interconnection network (IN): buses, multistage IN, or crossbar switch]
Characteristics
- All processors have equally direct access to one large memory address space
Example systems
- Bus and cache based: Sequent Balance, Encore Multimax
- Multistage IN based: Ultracomputer, Butterfly, RP3, HEP
- Crossbar switch based: C.mmp, Alliant FX/8
Limitations
- Memory access latency
- Hot-spot problem
MESSAGE-PASSING MULTICOMPUTERS
[Diagram: processor/memory nodes (P, M) joined by point-to-point connections in a message-passing network]
Characteristics
- Interconnected computers
- Each processor has its own memory, and processors communicate via message passing
Example systems
- Tree structure: Teradata, DADO
- Mesh connected: Rediflow, Series 2010, J-Machine
- Hypercube: Cosmic Cube, iPSC, NCUBE, FPS T Series, Mark III
Limitations
- Communication overhead
- Hard to program
PIPELINING
A technique of decomposing a sequential process into suboperations, with each subprocess executed in a dedicated segment that operates concurrently with all other segments.
Example: compute Ai * Bi + Ci for i = 1, 2, 3, ..., 7
- Segment 1: R1 ← Ai, R2 ← Bi (load Ai and Bi)
- Segment 2: R3 ← R1 * R2, R4 ← Ci (multiply and load Ci)
- Segment 3: R5 ← R3 + R4 (add)
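The three-segment example above can be sketched as a cycle-by-cycle simulation. The register names and the 7-element operands follow the slide; the loop structure is illustrative, not a hardware model.

```python
# Minimal sketch of the 3-segment pipeline:
#   segment 1: R1 <- Ai, R2 <- Bi
#   segment 2: R3 <- R1 * R2, R4 <- Ci
#   segment 3: R5 <- R3 + R4
def simulate(A, B, C):
    n = len(A)
    results = []
    s1 = s2 = None                        # latches between segments
    for clock in range(n + 2):            # k + n - 1 = 3 + 7 - 1 = 9 cycles
        if s2 is not None:                # segment 3 consumes last cycle's (R3, R4)
            r3, r4 = s2
            results.append(r3 + r4)
        # segment 2 consumes last cycle's (R1, R2) and loads Ci
        s2 = (s1[0] * s1[1], C[clock - 1]) if s1 is not None else None
        # segment 1 loads the next (Ai, Bi), if any remain
        s1 = (A[clock], B[clock]) if clock < n else None
    return results

A = [1, 2, 3, 4, 5, 6, 7]
B = [10, 20, 30, 40, 50, 60, 70]
C = [100, 200, 300, 400, 500, 600, 700]
print(simulate(A, B, C))   # Ai*Bi + Ci for each i, in 9 clock cycles
```

Note that all seven results emerge in k + n - 1 = 9 cycles, matching the clock-pulse table on the next slide.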
OPERATIONS IN EACH PIPELINE STAGE

Clock    Segment 1    Segment 2         Segment 3
pulse    R1    R2     R3        R4      R5
1        A1    B1
2        A2    B2     A1*B1     C1
3        A3    B3     A2*B2     C2      A1*B1 + C1
4        A4    B4     A3*B3     C3      A2*B2 + C2
5        A5    B5     A4*B4     C4      A3*B3 + C3
6        A6    B6     A5*B5     C5      A4*B4 + C4
7        A7    B7     A6*B6     C6      A5*B5 + C5
8                     A7*B7     C7      A6*B6 + C6
9                                       A7*B7 + C7
GENERAL PIPELINE
General structure of a 4-segment pipeline: Input → S1/R1 → S2/R2 → S3/R3 → S4/R4, with a common clock driving all segment registers.
Space-time diagram (tasks T1..T6 on the 4 segments):

Clock cycle:  1  2  3  4  5  6  7  8  9
Segment 1:    T1 T2 T3 T4 T5 T6
Segment 2:       T1 T2 T3 T4 T5 T6
Segment 3:          T1 T2 T3 T4 T5 T6
Segment 4:             T1 T2 T3 T4 T5 T6
PIPELINE SPEEDUP
n: number of tasks to be performed
Conventional (non-pipelined) machine:
- tn: clock cycle
- τ1 = n · tn: time required to complete the n tasks
Pipelined machine (k stages):
- tp: clock cycle (time to complete each suboperation)
- τk = (k + n − 1) · tp: time required to complete the n tasks
Speedup:
- Sk = n · tn / ((k + n − 1) · tp)
- As n → ∞, Sk → tn / tp  (= k, if tn = k · tp)
PIPELINE AND MULTIPLE FUNCTION UNITS
In the limit, a 4-stage pipeline is basically identical to a system with 4 identical function units P1..P4 operating on instructions Ii..Ii+3 in parallel.
Example
- 4-stage pipeline; suboperation in each stage takes tp = 20 ns
- 100 tasks to be executed; 1 task in the non-pipelined system takes 4 × 20 = 80 ns
- Pipelined system: (k + n − 1) · tp = (4 + 99) × 20 = 2060 ns
- Non-pipelined system: n · k · tp = 100 × 80 = 8000 ns
- Speedup: Sk = 8000 / 2060 = 3.88
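The numbers in the example follow directly from the speedup formula; a quick sketch to reproduce them:

```python
# Sk = n*tn / ((k + n - 1) * tp), with tn = k * tp as in the example.
def pipeline_speedup(n, k, tp):
    tn = k * tp                    # one task on the non-pipelined machine
    t_nonpipe = n * tn             # n tasks, one after another
    t_pipe = (k + n - 1) * tp      # fill the pipe, then one result per cycle
    return t_nonpipe, t_pipe, t_nonpipe / t_pipe

t1, tk, sk = pipeline_speedup(n=100, k=4, tp=20)   # tp in ns
print(t1, tk, round(sk, 2))   # 8000 2060 3.88
```

As n grows, sk approaches k = 4, the limit stated on the previous slide.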
ARITHMETIC PIPELINE
Floating-point adder, X = A × 2^a, Y = B × 2^b:
- Segment 1: compare the exponents by subtraction
- Segment 2: choose the exponent and align the mantissas
- Segment 3: add or subtract the mantissas
- Segment 4: normalize the result and adjust the exponent
Registers (R) between segments hold the exponents, mantissas, exponent difference, and intermediate results.
4-STAGE FLOATING POINT ADDER
A = a × 2^p, B = b × 2^q
- S1: exponent subtractor computes t = |p − q|; fraction selector picks the fraction with min(p, q)
- S2: right shifter aligns that fraction by t positions; r = max(p, q)
- S3: fraction adder produces c, so C = A + B = c × 2^r
- S4: leading-zero counter, left shifter, and exponent adder normalize the result: C = d × 2^s with 0.5 ≤ d < 1
INSTRUCTION CYCLE
Six phases* in an instruction cycle:
[1] Fetch an instruction from memory
[2] Decode the instruction
[3] Calculate the effective address of the operand
[4] Fetch the operands from memory
[5] Execute the operation
[6] Store the result in the proper place
* Some instructions skip some phases
* Effective-address calculation can be done as part of the decoding phase
* Storage of the result into a register is done automatically in the execution phase
==> 4-stage pipeline:
[1] FI: fetch an instruction from memory
[2] DA: decode the instruction and calculate the effective address of the operand
[3] FO: fetch the operand
[4] EX: execute the operation
INSTRUCTION PIPELINE
Execution of three instructions in a 4-stage pipeline.
Conventional: each instruction i runs FI, DA, FO, EX to completion before instruction i+1 begins.
Pipelined: instruction i+1 starts FI while instruction i is in DA, so the four stages of consecutive instructions overlap.
INSTRUCTION EXECUTION IN A 4-STAGE PIPELINE
Per-segment behavior:
- Segment 1: fetch instruction from memory and update the PC; on a taken branch, empty the pipe
- Segment 2: decode instruction and calculate the effective address; detect branches
- Segment 3: fetch operand from memory
- Segment 4: execute instruction; check for interrupts and handle them
Timing for seven instructions, where instruction 3 is a branch: instructions 1-3 flow normally (FI DA FO EX, one step apart). The instruction fetched at step 4 is discarded, the pipe idles while the branch completes, and fetching resumes from the branch target at step 7, after the branch's EX.
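For the hazard-free case, the space-time diagram of the 4-stage pipeline can be generated mechanically; this sketch assumes no branches, stalls, or interrupts.

```python
# Print the FI/DA/FO/EX space-time diagram for n instructions.
STAGES = ["FI", "DA", "FO", "EX"]

def space_time(n_instructions):
    """Return one row per instruction; row[c] is the stage busy at clock c."""
    k = len(STAGES)
    rows = []
    for i in range(n_instructions):
        row = ["--"] * (k + n_instructions - 1)   # k + n - 1 clocks in total
        for s, stage in enumerate(STAGES):
            row[i + s] = stage                    # instruction i enters stage s at clock i + s
        rows.append(row)
    return rows

for i, row in enumerate(space_time(4), start=1):
    print(f"I{i}: " + " ".join(row))
```

Branches break this regular pattern: the rows after a branch must be shifted right (stalled) until the target address is known, which is what the hazard slides that follow are about.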
MAJOR HAZARDS IN PIPELINED EXECUTION
Structural hazards (resource conflicts)
- Hardware resources required by instructions in simultaneous overlapped execution cannot be met
Data hazards (data dependency conflicts)
- An instruction scheduled to be executed in the pipeline requires the result of a previous instruction, which is not yet available
- Example: R1 ← B + C followed by R1 ← R1 + 1 forces a bubble between the ADD and the INC
Control hazards
- Branches and other instructions that change the PC delay the fetch of the next instruction
- Example: a JMP leaves a bubble in the pipe until the branch address (PC + offset) is available
Hazards in pipelines may make it necessary to stall the pipeline.
Pipeline interlock: detect hazards and stall until the hazard is cleared.
STRUCTURAL HAZARDS
Occur when some resource has not been duplicated enough to allow all combinations of instructions in the pipeline to execute.
Example: with one memory port, a data fetch and an instruction fetch cannot be initiated in the same clock, so the pipeline is stalled:

i:    FI DA FO EX
i+1:     FI DA FO EX
i+2:        -- -- FI DA FO EX   (two stalls while the loads use the memory port)

A two-port memory would serve without stalls.
DATA HAZARDS
Occur when the execution of an instruction depends on the result of a previous instruction:
ADD R1, R2, R3
SUB R4, R1, R5
Data hazards can be dealt with by either hardware or software techniques.
Hardware techniques
- Interlock: hardware detects the data dependency and delays the scheduling of the dependent instruction by stalling enough clock cycles
- Forwarding (bypassing, short-circuiting): a data path routes a value from its source (usually an ALU) directly to its user, bypassing the designated register; this allows the produced value to be used at an earlier pipeline stage than would otherwise be possible
Software technique
- Instruction scheduling by the compiler, for delayed load
FORWARDING HARDWARE
[Diagram: register file, ALU with MUXes on its inputs, ALU result buffer, result write bus, and a bypass path feeding R4's MUX]
Example, on a 3-stage pipeline (I: instruction fetch; A: decode, read registers, ALU operation; E: write the result to the destination register):
ADD R1, R2, R3
SUB R4, R1, R5
Without bypassing, SUB's A stage must wait until ADD's E stage has written R1 into the register file. With bypassing, SUB's A stage can run in parallel with ADD's E stage, taking R1's value directly from the bypass path.
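The hazard in this example can be detected mechanically: the producer's destination register appears among the consumer's sources. This sketch uses an assumed (dest, src1, src2) tuple encoding; the register names come from the slide.

```python
# RAW (read-after-write) hazard check between two adjacent instructions.
def raw_hazard(producer, consumer):
    dest, *_ = producer
    _, s1, s2 = consumer
    return dest in (s1, s2)

ADD = ("R1", "R2", "R3")    # R1 <- R2 + R3
SUB = ("R4", "R1", "R5")    # R4 <- R1 - R5, reads R1

print(raw_hazard(ADD, SUB))   # True: interlock would stall here;
                              # forwarding instead bypasses the register file
```

A real interlock unit compares register numbers in exactly this way, between the instruction in the decode stage and those still in flight.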
INSTRUCTION SCHEDULING
Delayed load: a load requiring that the following instruction not use its result.
Compile a = b + c; d = e - f;

Unscheduled code:        Scheduled code:
LW  Rb, b                LW  Rb, b
LW  Rc, c                LW  Rc, c
ADD Ra, Rb, Rc           LW  Re, e
SW  a, Ra                ADD Ra, Rb, Rc
LW  Re, e                LW  Rf, f
LW  Rf, f                SW  a, Ra
SUB Rd, Re, Rf           SUB Rd, Re, Rf
SW  d, Rd                SW  d, Rd
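The scheduler's goal can be checked programmatically: no instruction may use a register loaded by the instruction immediately before it. This sketch encodes the two code sequences above as (op, dest, srcs) tuples — an assumed encoding for the demo.

```python
# Find load-use conflicts under the delayed-load rule.
def load_use_conflicts(code):
    conflicts = []
    for a, b in zip(code, code[1:]):
        if a[0] == "LW" and a[1] in b[2]:   # next instruction reads the loaded reg
            conflicts.append((a, b))
    return conflicts

unscheduled = [
    ("LW", "Rb", ()), ("LW", "Rc", ()),
    ("ADD", "Ra", ("Rb", "Rc")),            # uses Rc right after its load
    ("SW", "a", ("Ra",)),
    ("LW", "Re", ()), ("LW", "Rf", ()),
    ("SUB", "Rd", ("Re", "Rf")),            # uses Rf right after its load
    ("SW", "d", ("Rd",)),
]
scheduled = [
    ("LW", "Rb", ()), ("LW", "Rc", ()), ("LW", "Re", ()),
    ("ADD", "Ra", ("Rb", "Rc")), ("LW", "Rf", ()),
    ("SW", "a", ("Ra",)), ("SUB", "Rd", ("Re", "Rf")),
    ("SW", "d", ("Rd",)),
]
print(len(load_use_conflicts(unscheduled)),
      len(load_use_conflicts(scheduled)))   # 2 0
```

The unscheduled sequence has two load-use conflicts; the compiler's reordering removes both without adding any instructions.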
CONTROL HAZARDS
Branch instructions
- The branch target address is not known until the branch instruction is completed
- Stalling wastes cycle times:

Branch instruction:  FI DA FO EX
Next instruction:                FI DA FO EX
                     (target address available only after EX)

Dealing with control hazards:
• Prefetch target instruction
• Branch target buffer
• Loop buffer
• Branch prediction
• Delayed branch
CONTROL HAZARDS
Prefetch target instruction
- Fetch instructions in both streams, branch not taken and branch taken
- Both are saved until the branch is executed; then the right instruction stream is selected and the wrong stream discarded
Branch target buffer (BTB; associative memory)
- Entry: address of a previously executed branch; the target instruction and the next few instructions
- When fetching an instruction, search the BTB; if found, fetch the instruction stream from the BTB; if not, fetch the new stream and update the BTB
Loop buffer (high-speed register file)
- Stores an entire loop, allowing it to execute without accessing memory
Branch prediction
- Guess the branch condition and fetch an instruction stream based on the guess; a correct guess eliminates the branch penalty
Delayed branch
- The compiler detects the branch and rearranges the instruction sequence, inserting useful instructions that keep the pipeline busy in the presence of a branch instruction
  • 31. RISC PIPELINE Instruction Cycles of Three-Stage Instruction Pipeline RISC Pipeline RISC - Machine with a very fast clock cycle that executes at the rate of one instruction per cycle <- Simple Instruction Set Fixed Length Instruction Format Register-to-Register Operations Data Manipulation Instructions I: Instruction Fetch A: Decode, Read Registers, ALU Operations E: Write a Register Load and Store Instructions I: Instruction Fetch A: Decode, Evaluate Effective Address E: Register-to-Memory or Memory-to-Register Program Control Instructions I: Instruction Fetch A: Decode, Evaluate Branch Address E: Write Register(PC)
  • 32. DELAYED LOAD Three-segment pipeline timing for the sequence:
LOAD: R1 ← M[address 1]
LOAD: R2 ← M[address 2]
ADD: R3 ← R1 + R2
STORE: M[address 3] ← R3
Pipeline timing with data conflict:
clock cycle:  1 2 3 4 5 6
Load R1       I A E
Load R2         I A E
Add R1+R2         I A E
Store R3            I A E
Pipeline timing with delayed load:
clock cycle:  1 2 3 4 5 6 7
Load R1       I A E
Load R2         I A E
NOP               I A E
Add R1+R2           I A E
Store R3              I A E
The data dependency is taken care of by the compiler rather than the hardware.
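The two timing diagrams can be checked with the usual pipeline cycle count. This is a sketch assuming an ideal in-order pipeline with no stalls; the helper name is ours:

```python
# An ideal in-order pipeline finishes n instructions in (stages + n - 1)
# cycles when there are no stalls. The delayed-load NOP simply adds one
# instruction to the stream, which is what the second diagram shows.

def pipeline_cycles(num_instructions, stages=3):
    """Total cycles for an ideal in-order pipeline with no stalls."""
    return stages + num_instructions - 1

# Original sequence: Load R1, Load R2, Add, Store  -> 4 instructions
# With the compiler-inserted NOP after Load R2     -> 5 instructions
print(pipeline_cycles(4), pipeline_cycles(5))  # 6 and 7, as in the diagrams
```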
  • 33. DELAYED BRANCH The compiler analyzes the instructions before and after the branch and rearranges the program sequence by inserting useful instructions in the delay steps.
Using no-operation instructions:
clock cycles:   1 2 3 4 5 6 7 8 9 10
1. Load         I A E
2. Increment      I A E
3. Add              I A E
4. Subtract           I A E
5. Branch to X          I A E
6. NOP                    I A E
7. NOP                      I A E
8. Instr. in X                I A E
Rearranging the instructions:
clock cycles:   1 2 3 4 5 6 7 8
1. Load         I A E
2. Increment      I A E
3. Branch to X      I A E
4. Add                I A E
5. Subtract             I A E
6. Instr. in X            I A E
  • 34. VECTOR PROCESSING Vector Processing Vector Processing Applications  Problems that can be efficiently formulated in terms of vectors − Long-range weather forecasting − Petroleum explorations − Seismic data analysis − Medical diagnosis − Aerodynamics and space flight simulations − Artificial intelligence and expert systems − Mapping the human genome − Image processing Vector Processor (computer) Ability to process vectors, and related data structures such as matrices and multi-dimensional arrays, much faster than conventional computers Vector Processors may also be pipelined
  • 35. VECTOR PROGRAMMING
DO 20 I = 1, 100
20 C(I) = B(I) + A(I)
Conventional computer:
Initialize I = 0
20 Read A(I)
Read B(I)
Store C(I) = A(I) + B(I)
Increment I = I + 1
If I ≤ 100 goto 20
Vector computer:
C(1:100) = A(1:100) + B(1:100)
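The contrast between the scalar loop and the one-line vector form can be modelled directly. A minimal sketch, with illustrative input data:

```python
# The DO 20 loop versus C(1:100) = A(1:100) + B(1:100).
# Input vectors are arbitrary illustrative data.
A = list(range(1, 101))
B = [2 * x for x in A]

# Conventional (scalar) computer: one element per loop iteration,
# with explicit index increment and the conditional branch back to 20.
C_scalar = []
for i in range(100):
    C_scalar.append(A[i] + B[i])

# Vector computer: the whole operation expressed as a single vector
# statement, modelled here as one whole-array expression.
C_vector = [a + b for a, b in zip(A, B)]

print(C_scalar == C_vector)  # True: same result, no per-element loop overhead
```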
  • 36. VECTOR INSTRUCTIONS
f1: V → V   f2: V → S   f3: V x V → V   f4: V x S → V
V: Vector operand   S: Scalar operand
Type  Mnemonic  Description (I = 1, ..., n)
f1    VSQR      Vector square root    B(I) ← SQR(A(I))
      VSIN      Vector sine           B(I) ← sin(A(I))
      VCOM      Vector complement     A(I) ← A(I)′
f2    VSUM      Vector summation      S ← Σ A(I)
      VMAX      Vector maximum        S ← max{A(I)}
f3    VADD      Vector add            C(I) ← A(I) + B(I)
      VMPY      Vector multiply       C(I) ← A(I) * B(I)
      VAND      Vector AND            C(I) ← A(I) . B(I)
      VLAR      Vector larger         C(I) ← max(A(I), B(I))
      VTGE      Vector test >         C(I) ← 0 if A(I) < B(I); C(I) ← 1 if A(I) > B(I)
f4    SADD      Vector-scalar add     B(I) ← S + A(I)
      SDIV      Vector-scalar divide  B(I) ← A(I) / S
  • 37. VECTOR INSTRUCTION FORMAT
Vector Instruction Format: | Operation code | Base address source 1 | Base address source 2 | Base address destination | Vector length |
Pipeline for inner product: Source A and Source B feed a multiplier pipeline, whose results feed an adder pipeline.
  • 38. MULTIPLE MEMORY MODULE AND INTERLEAVING Multiple Module Memory: [Diagram: an address bus and a data bus connect four modules M0-M3, each with its own address register (AR), memory array, and data register (DR)] Address Interleaving: different sets of addresses are assigned to different memory modules.
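One common interleaving scheme assigns consecutive addresses to successive modules by bit-slicing the address. A minimal sketch assuming four modules and low-order interleaving (the function name is ours):

```python
# Low-order address interleaving across 4 modules (M0..M3): the two least
# significant address bits select the module, so consecutive addresses
# land in different modules and their accesses can overlap in time.

NUM_MODULES = 4  # must be a power of two for this bit-slicing scheme

def module_and_offset(address):
    module = address & (NUM_MODULES - 1)   # low-order bits select the module
    offset = address >> 2                  # remaining bits address within it
    return module, offset

for addr in range(8):
    print(addr, module_and_offset(addr))
# Addresses 0,1,2,3 hit M0,M1,M2,M3; 4,5,6,7 wrap around to M0..M3 again.
```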
  • 40. MEMORY HIERARCHY [Diagram: CPU, I/O processor, and the levels of the memory hierarchy] Memory Organisation 40
  • 41. MEMORY HIERARCHY • CPU logic is usually faster than the main-memory access time, with the result that processing speed is limited primarily by the speed of main memory. • The cache is used for storing segments of programs currently being executed in the CPU and temporary data frequently needed in the present calculations. • The typical access-time ratio between cache and main memory is about 1 to 7-10.
  • 42. MEMORY HIERARCHY • The memory hierarchy system consists of all storage devices employed in a computer system, from the slow but high-capacity auxiliary memory, to a relatively faster main memory, to an even smaller and faster cache memory. [Diagram: auxiliary (secondary) memory at the bottom, primary memory above it, in increasing order of access-time ratio]
  • 43. ACCESS METHODS • Each memory is a collection of memory locations. Accessing the memory means finding and reaching the desired location and then reading the information from it. The information in these locations can be accessed as follows: 1. Random access 2. Sequential access 3. Direct access • Random Access: the access mode where each memory location has a unique address. Using these unique addresses, each memory location can be addressed independently, in any order, in an equal amount of time. Generally, main memories are random access memories (RAM).
  • 44. ACCESS METHODS • Sequential Access: if storage locations can be accessed only in a certain predetermined sequence, the access method is known as serial or sequential access. • The opposite of RAM is serial access memory (SAM). SAM works very well for memory buffers, where data is normally stored in the order in which it will be used (good examples are the texture buffer memory on a video card, magnetic tapes, etc.). • Direct Access: information is stored on tracks, and each track has a separate read/write head. This feature makes it a semi-random mode, generally used in magnetic disks.
  • 45. MAIN MEMORY • Most of the main memory in a general-purpose computer is made up of RAM integrated circuit chips, but a portion of the memory may be constructed with ROM chips. • RAM – Random Access Memory • Integrated RAM chips are available in two possible operating modes: static and dynamic. • ROM – Read Only Memory
  • 46. WHAT IS RAM?? Memory Organisation 46
  • 47. RANDOM ACCESS MEMORY (RAM) • RAM is used for storing the bulk of programs and data that are subject to change. Typical RAM chip: 128 x 8 RAM, i.e. 128 words of 8 bits (one byte) each, with a 7-bit address (AD7), an 8-bit bidirectional data bus, two chip selects (CS1, CS2), and Read (RD) / Write (WR) controls.
CS1 CS2 RD WR | Memory function | State of data bus
 0   0   x  x | Inhibit         | High-impedance
 0   1   x  x | Inhibit         | High-impedance
 1   0   0  0 | Inhibit         | High-impedance
 1   0   0  1 | Write           | Input data to RAM
 1   0   1  x | Read            | Output data from RAM
 1   1   x  x | Inhibit         | High-impedance
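The chip's function table can be encoded as a small decoder to see how the control lines interact. A minimal sketch mirroring the table above (the function name is ours):

```python
# Decoder for the 128 x 8 RAM chip's function table: the chip is selected
# only when CS1 = 1 and CS2 = 0; RD = 1 then reads (WR is a don't-care),
# WR = 1 with RD = 0 writes, and every other combination inhibits the
# chip, leaving the data bus in high impedance.

def ram_chip_function(cs1, cs2, rd, wr):
    if (cs1, cs2) != (1, 0):
        return "inhibit"          # chip not selected: bus stays high-impedance
    if rd == 1:
        return "read"             # output data from RAM onto the bus
    if wr == 1:
        return "write"            # input data on the bus stored into RAM
    return "inhibit"              # selected but neither control asserted

print(ram_chip_function(1, 0, 0, 1))  # write
print(ram_chip_function(1, 0, 1, 0))  # read
```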
  • 48. TYPES OF RANDOM ACCESS MEMORY (RAM) • Static RAM (SRAM) • Each cell stores a bit with a six-transistor circuit. • Retains its value indefinitely, as long as it is kept powered. • Faster (8-16 times) and more expensive (8-16 times as well) than DRAM. • Dynamic RAM (DRAM) • Each cell stores a bit with a capacitor and a transistor. • The value must be refreshed every 10-100 ms. • Slower and cheaper than SRAM; has reduced power consumption and a large storage capacity. In contrast to SRAM and DRAM:  Non-Volatile RAM (NVRAM) • retains its information when power is turned off (non-volatile). • The best-known form of NVRAM today is flash memory. Virtually all desktop and server computers since 1975 have used DRAM for main memory and SRAM for cache.
  • 49. READ ONLY MEMORY (ROM) • It is non-volatile memory, which retains its data even when power is removed. Programs and data that cannot be altered are stored in ROM. • ROM is used for storing programs that are PERMANENTLY resident in the computer and for tables of constants that do not change in value once production of the computer is completed. • The ROM portion of main memory is needed for storing an initial program called the bootstrap loader, which starts the computer's operating system when power is turned on.
  • 50. READ ONLY MEMORY (ROM) • Typical ROM chip: 512 x 8 ROM, with a 9-bit address (AD9), an 8-bit data bus, and chip selects CS1 and CS2. • Since the ROM can only be read, the data bus can only operate in output mode. • No READ or WRITE controls are needed. • For the same chip size, it is possible to have more bits of ROM than of RAM, because the internal binary cells in ROM occupy less space than in RAM.
  • 51. TYPES OF ROM • The required paths in a ROM may be programmed in four different ways: 1. Mask Programming: done by the company during the fabrication process of the unit. The procedure for fabricating a ROM requires that the customer fill out the truth table he wishes the ROM to satisfy. 2. Programmable Read-Only Memory (PROM): a PROM comes with all fuses intact, giving all 1's in the bits of the stored words. A blown fuse defines a binary 0 state and an intact fuse gives a binary 1 state. This allows the user to program the PROM using a special instrument called a PROM programmer.
  • 52. TYPES OF ROM 3. Erasable PROM (EPROM): in a PROM, the pattern, once fixed, is permanent and cannot be altered. The EPROM can be restored to its initial state even though it has been programmed previously. When an EPROM is placed under a special ultraviolet light for a given period of time, all the data are erased. After erasure, the EPROM returns to its initial state and can be programmed with a new set of values. 4. Electrically Erasable PROM (EEPROM): similar to EPROM, except that the previously programmed connections can be erased with an electrical signal instead of ultraviolet light. The advantage is that the device can be erased without removing it from its socket.
  • 53. AUXILIARY MEMORY • Also called secondary memory; used to store large chunks of data at a lower cost per byte than primary memory, and for backup. • It does not lose its data when the device is powered down; it is non-volatile. • It is not directly accessible by the CPU; it is accessed via the input/output channels. • The most common forms of auxiliary memory in consumer systems are flash memory, optical discs, magnetic disks, and magnetic tapes.
  • 54. TYPES OF AUXILIARY MEMORY • Flash memory:  An electronic non-volatile computer storage device that can be electrically erased and reprogrammed, and works without any moving parts. Examples are USB flash drives and solid-state drives. • Optical disc:  A storage medium from which data is read, and to which it is written, by lasers. There are three basic types of optical disks: CD-ROM (read-only), WORM (write-once read-many), and EO (erasable optical disks).
  • 55. TYPES OF AUXILIARY MEMORY • Magnetic tapes:  A magnetic tape unit consists of electrical, mechanical, and electronic components that provide the parts and control mechanism for the unit. • The tape itself is a strip of plastic coated with a magnetic recording medium. Bits are recorded as magnetic spots on the tape along several tracks; information is grouped into blocks called RECORDS. [Diagram: a file of records R1-R6 organised into blocks, separated by inter-record gaps (IRG) and terminated by an EOF mark] • R/W heads are mounted over each track so that data can be recorded and read as a sequence of characters. • The tape can be stopped, started moving forward or in reverse, or rewound, but cannot be stopped fast enough between individual characters.
  • 56. TYPES OF AUXILIARY MEMORY Magnetic Disk: • A magnetic disk is a circular plate constructed of metal or plastic coated with magnetized material. • Both sides of the disk are used, and several disks may be stacked on one spindle, with read/write heads available on each surface. • Bits are stored on the magnetized surface in spots along concentric circles called tracks. Tracks are commonly divided into sections called sectors. • Disks that are permanently attached and cannot be removed by the occasional user are called hard disks.
  • 57. ASSOCIATIVE MEMORY • A memory unit accessed by content is called an associative memory or content-addressable memory (CAM). • This type of memory is accessed simultaneously and in parallel on the basis of data content rather than by a specific address or location.
  • 58. READ/WRITE OPERATION IN CAM • Write operation:  When a word is written into an associative memory, no address is given.  The memory is capable of finding an unused location in which to store the word. • Read operation:  When a word is to be read from an associative memory, the contents of the word, or a part of the word, are specified.  The memory locates all the words which match the specified content and marks them for reading.
  • 59. HARDWARE ORGANISATION Argument register (A): contains the word to be searched. It has n bits (one for each bit of the word). Key register (K): provides a mask for choosing a particular field or key in the argument word. It also has n bits. Associative memory array: contains the words which are to be compared with the argument word. Match register (M): has m bits, one bit corresponding to each word in the memory array. After the matching process, the bits corresponding to matching words in the match register are set to 1.
  • 60. MATCHING PROCESS • The entire argument word is compared with each memory word if the key register contains all 1's. Otherwise, only those bits in the argument that have 1's in their corresponding positions of the key register are compared. • Thus the key provides a mask, or identifying piece of information, which specifies how the reference to memory is made. • To illustrate with a numerical example, suppose that the argument register A and the key register K have the bit configurations shown below. • Only the three leftmost bits of A are compared with the memory words, because K has 1's in these three positions only.
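The masked compare can be expressed in a few lines of bitwise logic. A minimal sketch; the 8-bit word width, the example values, and the function name are illustrative assumptions (the slide's original bit patterns are not reproduced here):

```python
# Masked compare in a CAM: only the bit positions where the key register
# K holds 1s take part in the comparison against the argument register A.
# Every memory word is compared in parallel in hardware; here we loop.

def cam_match(argument, key, memory_words):
    """Return the indices of words matching the argument in the key-masked bits,
    i.e. the positions of the 1s that would be set in the match register M."""
    matches = []
    for i, word in enumerate(memory_words):
        if (word & key) == (argument & key):
            matches.append(i)
    return matches

# Illustrative 8-bit example: compare only the three leftmost bits.
A = 0b10100000                      # argument register
K = 0b11100000                      # key register: mask in the top 3 bits
words = [0b10100111, 0b10101100, 0b01100000]
print(cam_match(A, K, words))       # [0, 1]: words 0 and 1 match on the key bits
```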
  • 61. DISADVANTAGE • An associative memory is more expensive than a random access memory because each cell must have an extra storage capability as well as logic circuits for matching its content with an external argument. • For this reason, associative memories are used in applications where the search time is very critical and must be very short.
  • 62. CACHE MEMORY • If the active portions of the program and data are placed in a fast small memory, the average memory access time can be reduced. • Thus reducing the total execution time of the program • Such a fast small memory is referred to as cache memory • The cache is the fastest component in the memory hierarchy and approaches the speed of CPU component
  • 63. BASIC OPERATIONS OF CACHE • When the CPU needs to access memory, the cache is examined. • If the word is found in the cache, it is read from the cache memory. • If the word addressed by the CPU is not found in the cache, the main memory is accessed to read the word. • A block of words containing the one just accessed is then transferred from main memory to cache memory. • If the cache is full, a resident block is replaced, according to the replacement algorithm in use, to make room for the new block.
  • 64. HIT RATIO • When the CPU refers to memory and finds the word in the cache, it is said to produce a hit. • Otherwise, it is a miss. • The performance of cache memory is frequently measured in terms of a quantity called the hit ratio: Hit ratio = hits / (hits + misses)
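The formula is simple enough to state as a one-line helper. A minimal sketch with illustrative counts:

```python
# Hit ratio as defined on the slide: the fraction of memory references
# that the cache satisfies.

def hit_ratio(hits, misses):
    return hits / (hits + misses)

# e.g. 950 hits out of 1000 references
print(hit_ratio(950, 50))  # 0.95
```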
  • 65. MAPPING PROCESS • The transformation of data from main memory to cache memory is referred to as a mapping process; there are three types of mapping:  Associative mapping  Direct mapping  Set-associative mapping • To help understand the mapping procedures, the following example configuration is used: a main memory of 32K words addressed by 15 bits, and a cache of 512 words addressed by 9 bits.
  • 66. ASSOCIATIVE MAPPING • The fastest and most flexible cache organization uses an associative memory. • The associative memory stores both the address and the data of the memory word. • This permits any location in cache to store a word from main memory. • The 15-bit address value is placed in the argument register, and the associative memory is searched for a matching address.
  • 67. DIRECT MAPPING • Associative memory is expensive compared to RAM. • In the general case, there are 2^k words in cache memory and 2^n words in main memory (in our case, k = 9, n = 15). • The n-bit memory address is divided into two fields: k bits for the index and n-k bits for the tag field.
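The index/tag split is just bit slicing. A minimal sketch using the slide's parameters (k = 9, n = 15); the helper name and the example address are ours:

```python
# Direct-mapping address split: a 15-bit main-memory address (n = 15)
# with a 9-bit cache index (k = 9), leaving a 6-bit tag.

K_BITS = 9            # index: selects one of the 512 cache words

def split_address(address):
    index = address & ((1 << K_BITS) - 1)   # low k bits: cache line index
    tag = address >> K_BITS                 # high n-k bits: stored as the tag
    return tag, index

print(split_address(10940))   # (21, 188): 21 * 512 + 188 == 10940
# Two addresses with the same index but different tags collide in a
# direct-mapped cache: split_address(188) also yields index 188, tag 0.
```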
  • 68. SET-ASSOCIATIVE MAPPING • The disadvantage of direct mapping is that two words with the same index in their addresses but with different tag values cannot reside in cache memory at the same time. • Set-associative mapping is an improvement over direct mapping in that each word of cache can store two or more words of memory under the same index address.
  • 69. REPLACEMENT ALGORITHMS • Optimal replacement algorithm: find, for replacement, the block that has the minimum chance of being referenced next. • Two practical algorithms: • FIFO: selects the item which has been in the set the longest. • LRU: selects the item which has been least recently used by the CPU.
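The difference between the two policies shows up on even a short reference string. A minimal sketch that abstracts away block and set structure (each entry stands for one cached block; names and the reference string are illustrative):

```python
# FIFO evicts the oldest resident block regardless of use; LRU evicts
# the block that has gone unreferenced the longest. Both are simulated
# on the same reference string with a 3-block cache.

def fifo(refs, capacity):
    cache, order, hits = set(), [], 0
    for r in refs:
        if r in cache:
            hits += 1                        # a hit does not reorder FIFO
        else:
            if len(cache) == capacity:
                cache.discard(order.pop(0))  # evict the oldest resident
            cache.add(r)
            order.append(r)
    return hits

def lru(refs, capacity):
    cache, hits = [], 0                      # front of list = most recent
    for r in refs:
        if r in cache:
            hits += 1
            cache.remove(r)                  # will be re-inserted at front
        elif len(cache) == capacity:
            cache.pop()                      # evict the least recently used
        cache.insert(0, r)
    return hits

refs = [1, 2, 3, 1, 4, 1, 2]
print(fifo(refs, 3), lru(refs, 3))  # 1 2: LRU keeps the re-used block 1
```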
  • 70. VIRTUAL MEMORY • Virtual memory is a common part of the operating system on desktop computers. • The term virtual memory refers to something which appears to be present but actually is not. • This technique allows users to use more memory for a program than the real memory of the computer.
  • 72. NEED OF VIRTUAL MEMORY • Virtual memory is an imaginary memory which we assume, or use, when we have material that exceeds the available memory.  Virtual memory is temporary memory which is used along with the RAM of the system.
  • 73. ADVANTAGES • Allows processes whose aggregate memory requirement is greater than the amount of physical memory to run, as infrequently used pages can reside on disk. • Virtual memory allows a speed gain when only a particular segment of the program is required for its execution. • This concept is very helpful in implementing a multiprogramming environment.
  • 74. DISADVANTAGES • Applications run rather more slowly when they are using virtual memory. • It takes more time to switch between applications. • It can reduce system stability.