The document discusses cache memory and provides information on various aspects of cache memory including:
- Introduction to cache memory including its purpose and levels.
- Cache structure and organization including cache row entries, cache blocks, and mapping techniques.
- Performance of cache memory including factors like cycle count and hit ratio.
- Cache coherence in multiprocessor systems and coherence protocols.
- Synchronization mechanisms used in multiprocessor systems for cache coherence.
- Paging techniques, including address translation using page tables and TLBs.
- Replacement algorithms used to determine which cache blocks to replace when the cache is full.
The presentation explains cache memory with a diagram and demonstrates hit ratio and miss penalty with an example. It discusses the different types of cache mapping (direct mapping, fully associative mapping, and set-associative mapping), temporal and spatial locality of reference, and the cache write policies (write-through and write-back), and it shows the differences between a unified cache and a split cache.
In these slides, register organization and stack organization are discussed in detail. Stack organization is presented with the aid of animation to help the user understand it more easily.
Memory organization
Memory organization in computer architecture: a memory unit is a collection of storage units or devices. The memory unit stores binary information in the form of bits. ... Volatile memory loses its data when power is switched off.
(Ref : Computer System Architecture by Morris Mano 3rd edition) : Microprogrammed Control unit, micro instructions, micro operations, symbolic and binary microprogram.
Modern processors are faster than memory, so a processor may waste time waiting on memory accesses.
The purpose of a cache is to make the main memory appear to the processor to be much faster than it actually is.
An addressing mode is the way of addressing a memory location in an instruction. A microcontroller needs data or operands on which an operation is to be performed, and the method of specifying the source of an operand and the destination of a result in an instruction is known as the addressing mode.
There are various methods of giving source and destination addresses in an instruction, and thus there are various types of addressing modes. The addressing modes supported by the 8051 microcontroller are explained below:
1. Register Addressing Mode
2. Direct Addressing Mode
3. Register Indirect Addressing Mode
4. Immediate Addressing Mode
5. Index Addressing Mode
Explanation:
Register Addressing Mode: In this addressing mode, the source of the data or the destination of the result is a register. The name of the register from which the data is to be read, or in which the result is to be stored, is given in the instruction.
Example: ADD A, R5 (adds the data in register R5 to the data in the accumulator)
Direct Addressing Mode: In this addressing mode, the address of the data to be read is given directly in the instruction. When storing a result, the address given in the instruction is used as the destination.
Example: MOV A, 46H (moves the contents of memory location 46H to the accumulator)
Register Indirect Addressing Mode: As its name suggests, the data is read or stored indirectly through a register. The instruction names a register that holds the address of the memory location where the data is stored or is to be stored.
Example: MOV A, @R0 (moves data to the accumulator from the memory location whose address is stored in register R0)
Immediate Addressing Mode: In immediate addressing mode, the data immediately follows the opcode; the data to be used is given in the instruction itself.
Example: MOV A, #25H (moves the data 25H to the accumulator; the # sign shows that the term following it is data, not an address)
Index Addressing Mode: An offset is added to a base register to form the effective address of the memory location. This addressing mode is used for reading lookup tables in program memory: the address of the exact location in the table is formed by adding the accumulator contents to the base pointer.
Example: MOVC A, @A+DPTR (moves data from program memory to the accumulator; the address is formed by adding the contents of the accumulator and the data pointer)
Memory management is the act of managing computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time.
Cache Memory
1. CACHE MEMORY
ONLINE PRESENTATION
BY SUBID BISWAS
MAIL: subidbiswas2061@gmail.com
CS401 - Computer Architecture & Organisation
2. GROUP MEMBERS
1. A: Introduction, working and levels of cache memory
2. B: Cache structure and organization, conclusion
3. C: Performance of cache memory
4. D: Cache coherence
5. E: Synchronization mechanism
6. F: Mapping techniques of cache memory
7. G: Paging techniques in cache memory
8. H: Replacement algorithm
9. I: Cache write policies
10. J: Cache memory hierarchy
CSE
4. What is Cache Memory?
Cache memory is a small, high-speed RAM buffer located between the CPU and main memory.
• Cache memory holds a copy of the instructions or data currently being used by the CPU.
• The main purpose of a cache memory is to accelerate the computer while keeping its price low.
• It stores and retains data only while the computer is powered on.
• The cache memory stores copies of the data from frequently used main memory locations.
Cache memory is faster than RAM, and because it is located closer to the CPU, it can fetch instructions and data and start processing them much more quickly.
CACHE MEMORY
A buffer contains data that is stored for a short amount of time, typically in the computer's memory (RAM).
The purpose of a buffer is to hold data right before it is used.
5. • Why is cache needed?
Cache memory is required to balance the speed mismatch between the main memory and the CPU. The processor clock is very fast, while the main memory access time is comparatively slow; without a cache, the effective processing speed would be limited by the speed of the main memory.
• How does it differ from RAM?
Cache memory is a type of super-fast RAM designed to make a computer or device run more efficiently. By itself this may not be particularly useful, but cache memory plays a key role in computing when used with the other parts of the memory system.
ADVANTAGES & DISADVANTAGES
Advantages of cache memory:
• It is faster than main memory.
• It provides a path for fast data transfer, so it needs less access time compared to main memory.
• It stores frequently accessed data that can be retrieved within a short period of time.
Disadvantages of cache memory:
• It has a limited capacity.
• It is very expensive compared to main memory (RAM) and hard disk.
8. LEVELS OF CACHE MEMORY
Levels of memory:
Level 1 or Register -
It is a type of memory in which data is stored and accessed directly by the CPU. The most commonly used registers are the accumulator, program counter, address register, etc.
Level 2 or Cache Memory -
It is a very fast memory with a shorter access time than main memory, in which data is temporarily stored for faster access.
Level 3 or Main Memory -
It is the memory on which the computer currently works. It is small in size compared to secondary memory, and once power is off, data no longer stays in it.
Level 4 or Secondary Memory -
It is external memory, which is not as fast as main memory, but data stays in it permanently.
10. HOW IT WORKS?
• How does cache memory work?
MEMORY ORGANIZATION
The CPU (Central Processing Unit) is like the brain of a computer; it performs the arithmetic and logical operations of the system by carrying out the instructions in the code.
The memory organization of a system is shown below:
1. At the core is the CPU,
2. then the cache,
3. then RAM,
4. then the storage device.
12. Cache Structure:
Cache row entries usually have the following structure:
CACHE STRUCTURE AND ORGANIZATION
Tag Data Block Flag Bits
An effective memory address which goes along with the cache line (memory block) is split into the tag,
the index and the block offset.
Tag Index Block Offset
The data block (cache line) contains the actual data fetched from the main memory.
The tag contains (part of) the address of the actual data fetched from the main memory.
An instruction cache requires only one flag bit per cache row entry: a valid bit.
The index identifies the cache set into which the data has been placed.
The block offset specifies the desired data within the data block stored in the cache row.
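To make this split concrete, here is a minimal sketch in C; the 64-byte block size and 128-set geometry are illustrative assumptions, not values taken from the slides.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed geometry: 64-byte blocks and 128 sets (both powers of two). */
#define BLOCK_SIZE 64u   /* bytes per cache line -> 6 offset bits */
#define NUM_SETS   128u  /* sets in the cache    -> 7 index bits  */

int main(void) {
    uint32_t addr = 0x12345678u;  /* an arbitrary effective address */

    uint32_t offset = addr % BLOCK_SIZE;               /* block offset   */
    uint32_t index  = (addr / BLOCK_SIZE) % NUM_SETS;  /* set index      */
    uint32_t tag    = addr / (BLOCK_SIZE * NUM_SETS);  /* remaining bits */

    printf("tag=0x%x index=%u offset=%u\n", tag, index, offset);
    return 0;
}
```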
13. Cache Organization:
The cache organization is about mapping data in memory to a location in the cache.
One way to do this mapping is to take the last few bits of the long memory address to form a small cache address, and place the data at that address.
The problem with this approach is that we lose the information in the high-order bits and have no way of finding out which higher-order bits the stored lower-order bits belong to.
CACHE STRUCTURE AND ORGANIZATION
14. • To handle the above problem, more information is stored in the cache to tell which block of memory is stored in a cache line. This additional information is stored as the tag.
CACHE STRUCTURE AND ORGANIZATION
• What is a Cache Block?
Since programs have spatial locality (once a location is retrieved, it is highly probable that nearby locations will be retrieved in the near future), a cache is organized in the form of blocks. Typical cache block sizes are 32 bytes or 64 bytes.
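A tiny C loop illustrates why block-based organization pays off; the array size here is arbitrary.

```c
#include <stdio.h>

#define N 1024

int main(void) {
    static int a[N];
    int sum = 0;  /* "sum" is reused every iteration: temporal locality */

    /* Consecutive elements share a cache block, so after the miss on
       a[0] the rest of that block is already cached: spatial locality. */
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %d\n", sum);
    return 0;
}
```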
16. Issues of Cache Memory Performance:
The performance of cache memory concerns mainly two aspects:
1. Cycle Count
2. Hit Ratio
Cycle count: the cache speed is affected by the underlying static or dynamic RAM technology, the cache organization, and the cache hit ratio.
Hit ratio: it is affected by the cache size and by the block size.
Effect of block size: with a fixed cache size, cache performance is rather sensitive to block size.
Effect of set number: for a fixed cache capacity, the hit ratio may increase as the number of sets increases.
PERFORMANCE OF CACHE MEMORY
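The interplay of hit ratio and miss penalty can be made concrete with the standard average memory access time formula, AMAT = hit time + miss rate * miss penalty. The numbers below are illustrative assumptions, not figures from the slides.

```c
#include <stdio.h>

int main(void) {
    double hit_time     = 1.0;   /* ns per cache hit, assumed        */
    double hit_ratio    = 0.90;  /* 90% of accesses hit, assumed     */
    double miss_penalty = 50.0;  /* ns to fetch from memory, assumed */

    /* AMAT = hit_time + miss_rate * miss_penalty */
    double amat = hit_time + (1.0 - hit_ratio) * miss_penalty;
    printf("AMAT = %.1f ns\n", amat);  /* 1 + 0.1 * 50 = 6.0 ns */
    return 0;
}
```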
17. Techniques to Improve the Performance of Cache Memory:
Technique 1: Larger block size.
Technique 2: Larger cache, to reduce the miss rate.
Technique 3: Higher associativity, to reduce the miss rate.
Technique 4: Multi-level caches. The cache closest to the processor should be fast, to keep pace with the speed of the processor, while the lower-level caches should be larger, to overcome the widening gap between the processor and main memory.
Technique 5: Prioritize read misses, to reduce the miss penalty.
Technique 6: Avoid address translation when indexing, to reduce the hit time.
PERFORMANCE OF CACHE MEMORY
19. Cache Coherence:
Cache coherence is the uniformity of shared-resource data that ends up stored in multiple local caches.
CACHE COHERENCE
• The cache and the main memory may have inconsistent copies of the same object.
• In a multiprocessor system, data inconsistency may occur among adjacent levels or within the same level of the memory hierarchy.
20. There are three types of coherence mechanisms:
• Directory-based: In a directory-based system, the data being shared is placed in a common directory that maintains the coherence between caches.
• Snooping: Snooping is a process where the individual caches monitor address lines for accesses to memory locations that they have cached.
• Snarfing: A mechanism where a cache controller watches both address and data in an attempt to update its own copy of a memory location when a second master modifies a location in main memory.
CACHE COHERENCE
21. CACHE COHERENCE
There are three distinct levels of cache coherence:
● Every write operation appears to occur instantaneously.
● All processors see exactly the same sequence of changes of values for each separate operand.
● Different processors may see an operation and assume different sequences of values; this is known as non-coherent behavior.
There are various cache coherence protocols in multiprocessor systems. These are:
● MSI protocol (Modified, Shared, Invalid)
● MOSI protocol (Modified, Owned, Shared, Invalid)
● MESI protocol (Modified, Exclusive, Shared, Invalid)
● MOESI protocol (Modified, Owned, Exclusive, Shared, Invalid)
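As a rough illustration of how such a protocol is specified, here is a simplified MSI state-transition sketch. It abstracts away the bus transactions a real implementation must issue, so treat it as a teaching aid under those assumptions rather than a complete protocol.

```c
/* A highly simplified sketch of MSI state transitions for one cache
   line, reacting to local processor events and snooped bus events.
   Real protocols also generate bus transactions; omitted here. */
typedef enum { INVALID, SHARED, MODIFIED } msi_state_t;

typedef enum { PR_READ, PR_WRITE, BUS_READ, BUS_WRITE } msi_event_t;

msi_state_t msi_next(msi_state_t s, msi_event_t e) {
    switch (e) {
    case PR_READ:   return (s == INVALID) ? SHARED : s;   /* fetch a copy    */
    case PR_WRITE:  return MODIFIED;       /* other copies get invalidated   */
    case BUS_READ:  return (s == MODIFIED) ? SHARED : s;  /* supply, downgrade */
    case BUS_WRITE: return INVALID;        /* another core is writing        */
    }
    return s;
}
```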
23. Hardware Synchronization Mechanisms:
Synchronization is a special form of communication in which, instead of data, control information is exchanged between communicating processes residing in the same or different processors.
Multiprocessor systems use hardware mechanisms to implement low-level synchronization operations. Most multiprocessors have hardware mechanisms to impose atomic operations such as memory read, write, or read-modify-write operations to implement some synchronization primitives. Besides atomic memory operations, some inter-processor interrupts are also used for synchronization purposes.
SYNCHRONIZATION MECHANISM
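As a concrete illustration of an atomic read-modify-write primitive, the sketch below builds a test-and-set spinlock from C11 atomics. It is a generic textbook example, not a mechanism taken from the slides.

```c
#include <stdatomic.h>

/* A test-and-set spinlock: the lock is acquired by atomically
   reading the old flag value and setting it in one operation. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* Spin until the previous value of the flag was "clear". */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;  /* busy-wait */
}

void release(void) {
    atomic_flag_clear_explicit(&lock, memory_order_release);
}
```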
25. What is Cache Mapping?
Cache mapping is a technique by which the contents of main memory are brought into the cache memory.
Cache Mapping Techniques
Cache mapping is performed using the following three techniques:
1. Direct Mapping
2. Fully Associative Mapping
3. K-way Set Associative Mapping
CACHE MAPPING TECHNIQUES
Figure: Cache mapping is divided into direct mapping, fully associative mapping, and set associative mapping.
26. • Direct Mapping Technique:
In direct mapping, each memory block is assigned to a specific line in the cache. If a line is already occupied by a memory block when a new block needs to be loaded, the old block is evicted. The address is split into a tag field and an index field; the cache stores the tag alongside the data block and compares it on each access. The performance of direct mapping is directly proportional to the hit ratio.
DIRECT MAPPING TECHNIQUE
Tag Line number Block Offset
The line number of the cache to which a particular block maps is given by:
Cache line number = (Main memory block address) modulo (Number of lines in the cache)
In direct mapping, the physical address is divided as shown above (tag, line number, block offset).
Figure: Example of direct mapping
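A small sketch of the modulo calculation above; the 16-line cache and 64-byte block size are assumed values for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 64u  /* bytes per block, assumed    */
#define NUM_LINES  16u  /* lines in the cache, assumed */

int main(void) {
    uint32_t addr  = 0x1F40u;
    uint32_t block = addr / BLOCK_SIZE;  /* main memory block address     */
    uint32_t line  = block % NUM_LINES;  /* cache line number             */
    uint32_t tag   = block / NUM_LINES;  /* stored and compared on access */

    printf("block %u maps to line %u with tag %u\n", block, line, tag);
    return 0;
}
```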
27. • Full Associative Mapping Technique:
In this type of mapping, an associative memory is used to store both the content and the address of the memory word. Any block can go into any line of the cache. The word bits are used to identify which word in the block is needed, and the tag becomes all of the remaining bits. This enables the placement of any block at any place in the cache memory. It is considered to be the fastest and the most flexible form of mapping.
FULL ASSOCIATIVE MAPPING
Block No./Tag | Block/Line Offset
In fully associative mapping, the physical address is divided as shown above.
Figure: Example of fully associative mapping
28. • Set Associative Mapping Technique:
Set associative mapping addresses the problem of possible thrashing in the direct mapping method. Instead of having exactly one line that a block can map to in the cache, a few lines are grouped together, creating a set. A block in memory can then map to any one of the lines of a specific set. Set-associative mapping thus allows two or more blocks from main memory to reside in the cache under the same index address. Set associative cache mapping combines the best of the direct and associative cache mapping techniques.
SET ASSOCIATIVE MAPPING
Tag | Set No. | Block/Line Offset
The set of the cache to which a particular block of main memory maps is given by:
Cache set number = (Main memory block address) modulo (Number of sets in the cache)
In set associative mapping, the physical address is divided as shown above.
Figure: Example of 2-way set associative mapping
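The lookup within a set can be sketched as below; the geometry (8 sets, 2 ways, 64-byte blocks) and the linear tag search are illustrative assumptions, and valid bits are omitted for brevity.

```c
#include <stdint.h>

#define BLOCK_SIZE 64u  /* bytes per block, assumed   */
#define NUM_SETS    8u  /* sets in the cache, assumed */
#define WAYS        2u  /* 2-way set associative      */

/* Tags currently held in each set (hypothetical contents;
   valid bits are omitted for brevity). */
static uint32_t tags[NUM_SETS][WAYS];

/* Returns 1 on a cache hit, 0 on a miss. */
int lookup(uint32_t addr) {
    uint32_t block = addr / BLOCK_SIZE;
    uint32_t set   = block % NUM_SETS;  /* cache set number */
    uint32_t tag   = block / NUM_SETS;

    for (uint32_t way = 0; way < WAYS; way++)
        if (tags[set][way] == tag)
            return 1;  /* tag matched one of the lines in the set */
    return 0;          /* miss: a replacement policy picks a victim */
}
```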
30. What is Paging Technique?
Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory; it permits the physical address space of a process to be non-contiguous. The mapping from virtual to physical addresses is done by the memory management unit (MMU), a hardware device, and this mapping is known as the paging technique.
The physical address space is conceptually divided into a number of fixed-size blocks, called frames.
The logical address space is also split into fixed-size blocks, called pages.
Page size = frame size.
Let us consider an example:
Physical address = 12 bits, so the physical address space = 4 K words.
Logical address = 13 bits, so the logical address space = 8 K words.
Page size = frame size = 1 K words (assumption).
PAGING TECHNIQUES
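Using the slide's own numbers (a 13-bit logical address and 1 K-word pages, so a 3-bit page number and a 10-bit offset), the split can be sketched as follows; the sample address is made up.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 1024u  /* 1 K words, as in the example above */

int main(void) {
    uint32_t logical = 0x1A7Fu;        /* a sample 13-bit logical address */
    uint32_t p = logical / PAGE_SIZE;  /* page number (3 bits)  */
    uint32_t d = logical % PAGE_SIZE;  /* page offset (10 bits) */

    printf("page number p = %u, page offset d = %u\n", p, d);
    return 0;
}
```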
31. The address generated by the CPU is divided into:
Page number (p): the number of bits required to represent a page in the logical address space; the page number.
Page offset (d): the number of bits required to represent a particular word in a page; the page offset.
The physical address is divided into:
Frame number (f): the number of bits required to represent a frame of the physical address space; the frame number.
Frame offset (d): the number of bits required to represent a particular word in a frame; the frame offset.
ADDRESS SPACE IN PAGING TECHNIQUES
Main memory access time = m.
If the page table is kept in main memory:
Effective access time = m (to access the page table entry) + m (to access the required word) = 2m.
32. TLB IN PAGING TECHNIQUES
The page table can be implemented in hardware using dedicated registers, but using registers for the page table is satisfactory only if the page table is small. If the page table contains a large number of entries, we can use a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.
The TLB is an associative, high-speed memory.
Each entry in the TLB consists of two parts: a tag and a value.
When this memory is used, an item is compared with all tags simultaneously. If the item is found, the corresponding value is returned.
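Combining the two formulas (2m without a TLB, and a weighted average with one) gives a small sketch; the TLB lookup time and hit ratio below are assumed values.

```c
#include <stdio.h>

int main(void) {
    double m = 100.0;  /* main memory access time in ns, assumed */
    double t = 10.0;   /* TLB lookup time in ns, assumed         */
    double h = 0.95;   /* TLB hit ratio, assumed                 */

    /* Page table in main memory, no TLB: one access for the
       page table entry plus one for the word itself. */
    double eat_no_tlb = 2.0 * m;

    /* With a TLB: on a hit the page-table access is skipped. */
    double eat_tlb = h * (t + m) + (1.0 - h) * (t + 2.0 * m);

    printf("EAT without TLB = %.0f ns\n", eat_no_tlb);  /* 200 ns   */
    printf("EAT with TLB    = %.1f ns\n", eat_tlb);     /* 115.0 ns */
    return 0;
}
```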
33. PAGE TABLE IN PAGING TECHNIQUES
A page table consists of page table entries (PTEs), where each entry stores a frame number and optional status bits (such as protection bits). Many status bits are used in the virtual memory system; the most important field in a PTE is the frame number.
A page table entry has the following information:
Caching in the page table:
Caching enabled/disabled: sometimes we need fresh data. Say the user is typing some information on the keyboard and the program should run according to that input. The information then arrives in main memory, so main memory contains the latest information typed by the user. If that page were served from the cache, the cache would show old information. So whenever freshness is required, we do not want to go through the cache or the many levels of the memory hierarchy.
The information present in the level closest to the CPU and the information just supplied by the user might differ, so we want the information to be consistent: whatever the user has entered, the CPU should be able to see it as soon as possible. That is why caching may need to be disabled; this bit enables or disables caching of the page.
34. ADVANTAGES & DISADVANTAGES
Advantages of paging:
• The paging technique is easy to implement.
• The paging technique makes efficient utilization of memory.
• The paging technique supports time-sharing systems.
• The paging technique supports non-contiguous memory allocation.
Disadvantages of paging:
• Paging may encounter a problem called page break.
• When the number of pages in virtual memory is quite large, maintaining the page table becomes hectic.
36. What is Replacement Algorithm?
In computing, cache algorithms (also frequently called cache replacement algorithms or cache replacement policies) are optimizing instructions, or algorithms, that a computer program or a hardware-maintained structure can use to manage a cache of information stored on the computer. Caching improves performance by keeping recent or often-used data items in memory locations that are faster, or computationally cheaper to access, than normal memory stores. When the cache is full, the algorithm must choose which items to discard to make room for the new ones.
There are two primary figures of merit for a cache: the latency and the hit rate. A number of secondary factors also affect cache performance.
The "hit ratio" of a cache describes how often a searched-for item is actually found in the cache. More efficient replacement policies keep track of more usage information in order to improve the hit rate for a given cache size.
The "latency" of a cache describes how long after requesting a desired item the cache can return that item (on a hit). Faster replacement strategies typically keep track of less usage information (or, in the case of a direct-mapped cache, no information) to reduce the time required to update that information.
Each replacement strategy is a compromise between hit rate and latency.
REPLACEMENT ALGORITHM
37. Different Replacement Algorithms:
Bélády's algorithm:
The most efficient caching algorithm would be to always discard the information that will not be needed for the longest time in the future. This optimal result is referred to as Bélády's optimal algorithm, the optimal replacement policy, or the clairvoyant algorithm. Since it is generally impossible to predict how far in the future information will be needed, it is generally not implementable in practice. The practical minimum can be calculated only after experimentation, and the effectiveness of the actually chosen cache algorithm can then be compared against it.
First in, first out (FIFO):
Using this algorithm, the cache behaves in the same way as a FIFO queue: it evicts blocks in the order they were added, without any regard to how often or how many times they were accessed before.
DIFFERENT REPLACEMENT ALGORITHM
38. Different Replacement Algorithms:
Least-frequently used (LFU): Counts how often an item is needed; those that are used least often are discarded first. This works very similarly to LRU, except that instead of storing how recently a block was accessed, we store how many times it was accessed. While running an access sequence, we replace the block that was used the fewest times. E.g., if A was used (accessed) 5 times, B was used 3 times, and C and D were used 10 times each, we replace B.
Random replacement (RR): Randomly selects a candidate item and discards it to make space when necessary. This algorithm does not require keeping any information about the access history. For its simplicity, it has been used in ARM processors, and it admits efficient stochastic simulation.
Least recently used (LRU): Discards the least recently used items first. This algorithm requires keeping track of what was used when, which is expensive if one wants to make sure the algorithm always discards the least recently used item. General implementations of this technique require keeping "age bits" for the cache lines and tracking the least recently used cache line based on those age bits. In such an implementation, every time a cache line is used, the age of all the other cache lines changes.
DIFFERENT REPLACEMENT ALGORITHM
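A compact sketch of LRU using age counters, in the spirit of the "age bits" mentioned above; the cache size and reference string are made up for illustration.

```c
#include <stdio.h>

#define WAYS 4  /* fully associative cache with 4 lines, assumed */

static int      blocks[WAYS];  /* block held by each line (-1 = empty) */
static unsigned age[WAYS];     /* higher age = less recently used      */

/* Access a block; returns 1 on hit, 0 on miss (with LRU eviction). */
int access_block(int block) {
    int victim = 0;
    for (int i = 0; i < WAYS; i++) age[i]++;
    for (int i = 0; i < WAYS; i++) {
        if (blocks[i] == block) { age[i] = 0; return 1; }  /* hit */
        if (age[i] > age[victim]) victim = i;  /* track oldest line */
    }
    blocks[victim] = block;  /* miss: evict the least recently used */
    age[victim] = 0;
    return 0;
}

int main(void) {
    for (int i = 0; i < WAYS; i++) blocks[i] = -1;
    int refs[] = {1, 2, 3, 4, 1, 5, 2};  /* made-up reference string */
    for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
        printf("block %d: %s\n", refs[i],
               access_block(refs[i]) ? "hit" : "miss");
    return 0;
}
```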
40. CACHE WRITE POLICIES
• What is Cache Write Policy?
Caching is a technique of storing a copy of data temporarily in rapidly accessible storage memory. The cache stores the most recently used words in a small memory to increase the speed at which data is accessed. It acts like a buffer between the RAM and the CPU, and thus increases the speed at which data is available to the processor.
There are two main cache write policies:
1. Write-through policy.
2. Write-back policy.
41. TYPES OF CACHE WRITE POLICIES
1. Write-through policy:
The write-through policy is the most commonly used method of writing into the cache memory.
In the write-through method, when the cache memory is updated, the main memory is updated simultaneously. Thus at any given time, the main memory contains the same data that is available in the cache memory.
It is to be noted that write-through is a slow technique, as every write needs to access main memory.
2. Write-back policy:
The write-back policy can also be used for cache writing.
With the write-back method, during a write operation only the cache location is updated. When an update in the cache occurs, the updated location is marked by a flag, known as the modified or dirty bit.
When the word is replaced from the cache, it is written into main memory if its flag bit is set. The logic behind this technique is that during cache write operations, the word present in the cache may be accessed several times; this method helps reduce the number of references to main memory.
43. CACHE MEMORY HIERARCHY
Hierarchy List
• Registers
• L1 Cache
• L2 Cache
• Main memory
• Disk cache
• Disk
• Optical
• Tape
As one goes down the hierarchy:
• Decreasing cost per bit
• Increasing capacity
• Increasing access time
• Decreasing frequency of access of the memory by the processor (locality of reference)
44. CACHE MEMORY HIERARCHY
Semiconductor memory comes in two broad families:
RAM (Random Access Memory):
• Misnamed, as all semiconductor memory is random access
• Read/write
• Volatile
• Temporary storage
• Two main types: static or dynamic
ROM (Read-Only Memory):
• Permanent storage
• Microprogramming
• Library subroutines
• System programs (BIOS)
• Function tables
46. QUESTIONS
1. The number of successful accesses to memory, stated as a fraction, is called _____.
2. A cache line is 64 bytes. The main memory has a latency of 32 ns and a bandwidth of 1 GB/s. The time required to fetch the entire cache line from the main memory is _____?
3. Consider a 4-way set associative mapped cache with a block size of 4 KB. The size of the main memory is 16 GB and there are 10 bits in the tag. Find the size of the cache memory.
4. A computer has a 256 KByte, 4-way set associative, write-back data cache with a block size of 32 bytes. The processor sends 32-bit addresses to the cache controller. Each cache tag directory entry contains, in addition to the address tag, 2 valid bits, 1 modified bit and 1 replacement bit. The number of bits in the tag field of an address is
a) 11 b) 14 c) 16 d) 27
5. Consider a direct mapped cache of size 512 KB with a block size of 1 KB. There are 7 bits in the tag. Find the size of the main memory.
a) 32 MB b) 60 MB c) 64 KB d) 64 MB
47. QUESTIONS
6. The memory management technique in which the system stores and retrieves data from secondary storage for use in main memory is called ______
a) Fragmentation b) paging
c) Mapping d) none of the mentioned
7. The address of a page table in memory is pointed to by ____________
a) stack pointer b) page table base register c) page register d) program counter
8. The page table contains ____________
a) base address of each page in physical memory
b) page offset
c) page size
d) none of the mentioned
48. QUESTIONS
9. Operating System maintains the page table for ____________
a) each process b) each thread
c) each instruction d) each address
10. LRU provides very bad performance when
a) The blocks being accessed are sequential
b) The blocks are randomized
c) The consecutive blocks accessed are at the extremes
d) None of the mentioned
11. The algorithm which removes the most recently used page first is ________
a) LRU b) MRU
c) OFM d) None of the mentioned
49. ONLINE PRESENTATION
BY SUBID BISWAS
MAIL: subidbiswas2061@gmail.com