OPERATING SYSTEMS
iii- 1
UNIT –III
STORAGE MANAGEMENT
UNIT –III STORAGE MANAGEMENT
Memory Management: background-swapping-contiguous memory allocation-paging-
segmentation-segmentation with paging. Virtual memory: background-Demand paging-
Process creation-Page replacement-Allocation of frames-Thrashing. Case study: Memory
management in Linux
Chapter 9 Storage Management
Memory Management
The main purpose of a computer system is to execute programs. During execution, programs must be in main memory along with the data they access.
In a multiprogramming environment, main memory is one of the most precious
resources. Memory management is concerned with the allocation of physical memory of
finite capacity to the requesting processes. Its four main functions are:
1. Keeping track of the status of each memory location – whether it is allocated or free.
2. Policy for allocation – which process should be allocated memory, how much, when and
where. It should also handle conflicting requests from processes.
3. Memory allocation – when a process requests memory, a specific location must be
selected and allocated as per the policy, and the status information updated.
4. De-allocation – the allocated memory may be either reclaimed by memory
management or released by process management. After de-allocation, the status
information must be updated.
Base and Limit Registers
A pair of base and limit registers defines the logical address space. The memory-management techniques discussed in this unit are:
1. Swapping
2. Contiguous Memory allocation
3. Paging
4. Segmentation
5. Segmentation with paging
Binding of Instructions and Data to Memory
Address binding of instructions and data to memory addresses can happen at three
different stages
Compile time: If memory location known a priori, absolute code can be
generated; must recompile code if starting location changes
Load time: Must generate relocatable code if memory location is not known
at compile time
Execution time: Binding delayed until run time if the process can be moved
during its execution from one memory segment to another. Need hardware
support for address maps (e.g., base and limit registers)
Logical vs. Physical Address Space
The concept of a logical address space that is bound to a separate physical address
space is central to proper memory management
Logical address – generated by the CPU; also referred to as virtual address
Physical address – address seen by the memory unit
Logical and physical addresses are the same in compile-time and load-time address-
binding schemes; logical (virtual) and physical addresses differ in execution-time
address-binding scheme
Memory-Management Unit (MMU)
 Hardware device that maps virtual to physical address
 In MMU scheme, the value in the relocation register is added to every address
generated by a user process at the time it is sent to memory
 The user program deals with logical addresses; it never sees the real physical
addresses
Dynamic relocation using relocation register
Dynamic Loading
 Routine is not loaded until it is called
 Better memory-space utilization; unused routine is never loaded
 Useful when large amounts of code are needed to handle infrequently
occurring cases
 No special support from the operating system is required implemented
through program design
Dynamic Linking
 Linking postponed until execution time
 Small piece of code, stub, used to locate the appropriate memory-resident
library routine
 Stub replaces itself with the address of the routine, and executes the routine
 Operating system needed to check if routine is in processes’ memory address
 Dynamic linking is particularly useful for libraries
 System also known as shared libraries
9.1 Swapping
 A process can be swapped temporarily out of memory to a backing store, and
then brought back into memory for continued execution
 Backing store – fast disk large enough to accommodate copies of all memory
images for all users; must provide direct access to these memory images
 Roll out, roll in – swapping variant used for priority-based scheduling
algorithms; lower-priority process is swapped out so higher-priority process
can be loaded and executed
 Major part of swap time is transfer time; total transfer time is directly
proportional to the amount of memory swapped
 Modified versions of swapping are found on many systems (i.e., UNIX,
Linux, and Windows)
 System maintains a ready queue of ready-to-run processes which have
memory images on disk
Constraints on swapping are:
To swap a process, it should be completely idle.
Never swap a process with pending I/O, or perform I/O only into
operating-system buffers.
For efficient CPU utilization, the number of swaps should be kept small. Swapping is
used in UNIX and Windows NT.
Advantages:
 Swapping helps the CPU run more jobs by keeping in memory only those
programs currently required by the system, with the rest swapped out to
secondary storage.
 Allows more jobs to be run than can fit into memory at a time.
Schematic View of Swapping
9.2 Contiguous Allocation
 Main memory is usually divided into two partitions:
o Resident operating system, usually held in low memory with interrupt
vector
o User processes then held in high memory
 Relocation registers used to protect user processes from each other, and from
changing operating-system code and data
o Base register contains value of smallest physical address
o Limit register contains range of logical addresses – each logical address
must be less than the limit register
o MMU maps logical address dynamically
Memory partitions (top of memory to bottom):
FREE SPACE
USER PROGRAMS
OPERATING SYSTEM
SINGLE PARTITION ALLOCATION:
In this scheme very little hardware support is required. To ensure that user
processes do not trespass into the operating-system area, relocation and limit
registers and user/supervisor-mode operation are used. Usually the operating
system is placed in the lowest memory area, and the relocation register holds the
smallest physical address used by the operating system.
In user mode, each memory address computed by a process is compared with the
contents of the relocation and limit registers. Any attempt to access the
protected area occupied by the operating system is detected, and the violating
process is aborted.
HW address protection with base and limit registers
Logical address=346
Relocation register=14000
Physical address=14346
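The translation above can be sketched in Python (a minimal illustration; the limit value of 3000 is an assumption, not from the notes):

```python
# Dynamic relocation with a relocation (base) register and a limit register.
# The LIMIT value is assumed for illustration.

RELOCATION = 14000   # smallest physical address given to this process
LIMIT = 3000         # size of the logical address space (assumed)

def translate(logical_addr):
    """Map a CPU-generated logical address to a physical address."""
    if logical_addr >= LIMIT:
        # the hardware would raise a trap: addressing error
        raise MemoryError(f"trap: logical address {logical_addr} out of range")
    return RELOCATION + logical_addr   # relocation register added on every access

print(translate(346))   # 14346, as in the example above
```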
Advantages:
 Simplicity
 Does not require expertise to understand or use such a system
 Used for small, inexpensive computing systems.
Disadvantages:
 Resources are not managed in an efficient manner.
 Memory is not fully utilized leaving some wasted space.
Contiguous Allocation (Cont.)
Multiple-partition allocation
Hole – block of available memory; holes of various size are scattered
throughout memory
When a process arrives, it is allocated memory from a hole large enough to
accommodate it
Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
1) Static partitioning (MFT)
Memory is divided into fixed partitions. The size and number of partitions are
determined during system setup, taking into account the degree of
multiprogramming, the available memory and the typical size of frequently run
processes. In some systems these partitions can be defined manually. The current
status of each partition and its attributes (partition number, location and size)
are stored in a static partition table (SPST).
The limit register contains the maximum value of the range of logical addresses; the relocation
register contains the value of the smallest physical address.
Dynamic Storage-Allocation Problem
In this scheme, partitions are created during job processing so as to match
partition sizes to process sizes. Here two separate tables are used:
An allocated-partition status table for allocated areas.
A free-area status table for free areas.
Initially, all memory is available for user processes and is considered one large
block of available memory – a hole.
A hole large enough for that process is searched for. If such a hole is found, the
needed memory is allocated from it, keeping the rest available to satisfy future
requests.
For example, assume there is 2560K of memory available and an operating system of
400K. This leaves 2160K for user processes, as shown below. Consider the following
processes. The question is how to satisfy a request of size n from a list of free holes.
OPERATING SYSTEM (400K)
2160K free for user processes
Assuming FCFS scheduling, memory space can be allocated to processes P1, P2 and
P3, creating a memory map as in the figure.
OS
P1
P2
P3
 First-fit: Allocate the first hole that is big enough. The search can begin from the
start of the list or where the previous first-fit search ended, and stops as soon as a
large-enough hole is found.
 Best-fit: Allocate the smallest hole that is big enough; must search entire list,
unless ordered by size
o Produces the smallest leftover hole
 Worst-fit: Allocate the largest hole; must also search entire list
o Produces the largest leftover hole
First-fit and best-fit better than worst-fit in terms of speed and storage utilization
Problem
PROCESS   MEMORY   TIME
P1        600K     10
P2        100K     5
P3        300K     20
P4        700K     8
P5        500K     15
Given memory partitions of 100K, 500K, 200K, 300K and 600K (in order), how would each of the
first-fit, best-fit and worst-fit algorithms place processes of 212K, 417K, 112K and 426K (in order)?
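The exercise can be checked with a short simulation of the three hole-selection policies (a sketch, not part of the original notes; the helper names are mine):

```python
# First-fit, best-fit and worst-fit applied to the exercise above.
# Holes shrink in place after each allocation; a process that fits nowhere "waits".

def allocate(holes, procs, choose):
    holes = holes[:]                      # work on a copy
    placement = []
    for p in procs:
        candidates = [i for i, h in enumerate(holes) if h >= p]
        if not candidates:
            placement.append(None)        # process must wait
            continue
        i = choose(candidates, holes)
        placement.append(i)
        holes[i] -= p                     # leftover becomes a smaller hole
    return placement

first_fit = lambda c, h: c[0]                        # first large-enough hole
best_fit  = lambda c, h: min(c, key=lambda i: h[i])  # smallest large-enough hole
worst_fit = lambda c, h: max(c, key=lambda i: h[i])  # largest hole

holes = [100, 500, 200, 300, 600]
procs = [212, 417, 112, 426]
print(allocate(holes, procs, first_fit))  # [1, 4, 1, None] -> 426K must wait
print(allocate(holes, procs, best_fit))   # [3, 1, 2, 4]    -> all placed
print(allocate(holes, procs, worst_fit))  # [4, 1, 4, None] -> 426K must wait
```

Best-fit is the only policy that places all four processes here, which answers the efficiency question posed in the exercises below.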
Fragmentation
 External Fragmentation – total memory space exists to satisfy a
request, but it is not contiguous
 Internal Fragmentation – allocated memory may be slightly
larger than requested memory; this size difference is memory
internal to a partition, but not being used
 Reduce external fragmentation by compaction
 Shuffle memory contents to place all free memory together in one
large block
 Compaction is possible only if relocation is dynamic, and is done
at execution time
 I/O problem
 Latch job in memory while it is involved in I/O
 Do I/O only into OS buffers
9.3 Paging
 Logical address space of a process can be noncontiguous; process is allocated
physical memory whenever the latter is available
 Divide physical memory into fixed-sized blocks called frames (size is power of 2,
between 512 bytes and 8,192 bytes)
 Divide logical memory into blocks of same size called pages
 Keep track of all free frames
 To run a program of size n pages, need to find n free frames and load program
 Set up a page table to translate logical to physical addresses
 Internal fragmentation
 Address generated by CPU is divided into:
 Page number (p) – used as an index into a page table which contains base
address of each page in physical memory
 Page offset (d) – combined with base address to define the physical memory
address that is sent to the memory unit
Address Translation Scheme
 For a given logical address space of size 2^m and page size 2^n:
Paging Hardware
Paging Model of Logical and Physical Memory
page number p: m − n bits | page offset d: n bits
Paging Example
32-byte memory and 4-byte page
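The translation for this example can be sketched as follows; the page-table contents (which page maps to which frame) are assumed for illustration:

```python
# Paging translation for a 32-byte physical memory with 4-byte pages.
# The page-table contents (page -> frame) are assumed for illustration.

PAGE_SIZE = 4                 # page size 2^n with n = 2
page_table = [5, 6, 1, 2]     # page number -> frame number (assumed)

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)    # page number p, page offset d
    return page_table[p] * PAGE_SIZE + d

print(translate(0))    # page 0, offset 0 -> frame 5 -> physical address 20
print(translate(13))   # page 3, offset 1 -> frame 2 -> physical address 9
```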
Free Frames
Before allocation | After allocation
Implementation of Page Table
 Page table is kept in main memory
 Page-table base register (PTBR) points to the page table
 Page-table length register (PTLR) indicates the size of the page table
 In this scheme every data/instruction access requires two memory accesses. One
for the page table and one for the data/instruction.
 The two memory access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffers
(TLBs)
 Some TLBs store address-space identifiers (ASIDs) in each TLB entry –
uniquely identifies each process to provide address-space protection for that
process
Associative Memory
Associative memory – parallel search over (Page #, Frame #) pairs
Address translation (p, d)
o If p is in associative register, get frame # out
o Otherwise get frame # from page table in memory
Paging Hardware with TLB
Advantage of paging
 Paging avoids the problem of fitting the varying sized memory chunks onto
the backing store.
 Permits a programs memory space to be non-contiguous
Drawbacks
 Address mapping involves extra overhead
 A process is loaded into memory only if all its pages can be loaded
 It is difficult to select the optimum page size
Effective Access Time
 Associative lookup = ε time units
 Assume memory cycle time is 1 microsecond
 Hit ratio – percentage of times that a page number is found in the associative
registers; the ratio is related to the number of associative registers
 Hit ratio = α
 Effective Access Time (EAT)
 EAT = (1 + ε)α + (2 + ε)(1 − α)
 = 2 + ε − α
Memory Protection
 Memory protection implemented by associating protection bit with each frame
 Valid-invalid bit attached to each entry in the page table:
o “valid” indicates that the associated page is in the process’ logical address
space, and is thus a legal page
o “invalid” indicates that the page is not in the process’ logical address
space
Valid (v) or Invalid (i) Bit In A Page Table
Shared Pages
 Shared code
o One copy of read-only (reentrant) code shared among processes (i.e., text
editors, compilers, window systems).
o Shared code must appear in same location in the logical address space of
all processes
 Private code and data
o Each process keeps a separate copy of the code and data
o The pages for the private code and data can appear anywhere in the logical
address space
Shared Pages Example
Structure of the Page Table
 Hierarchical Paging
 Hashed Page Tables
 Inverted Page Tables
Hierarchical Page Tables
 Break up the logical address space into multiple page tables
 A simple technique is a two-level page table
Two-Level Page-Table Scheme
Two-Level Paging Example
 A logical address (on 32-bit machine with 1K page size) is divided into:
o a page number consisting of 22 bits
o a page offset consisting of 10 bits
 Since the page table is paged, the page number is further divided into:
o a 12-bit page number
o a 10-bit page offset
 Thus, a logical address is as follows:
p1 (12 bits) | p2 (10 bits) | d (10 bits)
where p1 is an index into the outer page table, and p2 is the displacement within the
page of the outer page table
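The bit split can be sketched as follows (the sample address is arbitrary):

```python
# Split a 32-bit logical address into p1 (12 bits), p2 (10 bits), d (10 bits)
# for the two-level example above (1K pages).

def split(addr):
    d  = addr & 0x3FF           # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF   # next 10 bits: index into inner page table
    p1 = addr >> 20             # top 12 bits: index into outer page table
    return p1, p2, d

print(split(0x00403005))   # -> (4, 12, 5)
```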
Address-Translation Scheme
Three-level Paging Scheme
Hashed Page Tables
 Common in address spaces > 32 bits
 The virtual page number is hashed into a page table. This page table contains a
chain of elements hashing to the same location.
 Virtual page numbers are compared in this chain searching for a match. If a match
is found, the corresponding physical frame is extracted.
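The chained lookup can be sketched with bucketed lists (the table size and mappings below are assumptions):

```python
# Hashed page table sketch: each bucket chains (virtual page number, frame)
# pairs that hash to the same slot. Table size and mappings are assumed.

NBUCKETS = 8
table = [[] for _ in range(NBUCKETS)]

def insert(vpn, frame):
    table[vpn % NBUCKETS].append((vpn, frame))

def lookup(vpn):
    for v, f in table[vpn % NBUCKETS]:   # walk the collision chain
        if v == vpn:
            return f                     # match: extract the physical frame
    return None                          # no match -> page fault

insert(0x1234, 7)
insert(0x1234 + NBUCKETS, 9)   # lands in the same bucket (a collision)
print(lookup(0x1234))          # 7
print(lookup(0x9999))          # None (would cause a page fault)
```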
Hashed Page Table
Inverted Page Table
 One entry for each real page of memory
 Entry consists of the virtual address of the page stored in that real memory
location, with information about the process that owns that page
 Decreases memory needed to store each page table, but increases time needed
to search the table when a page reference occurs
 Use hash table to limit the search to one — or at most a few — page-table
entries
Inverted Page Table Architecture
9.4 Segmentation
 Memory-management scheme that supports user view of memory
 A program is a collection of segments. A segment is a logical unit such as:
 main program,
 procedure,
 function,
 method,
 object,
 local variables, global variables,
 common block,
 stack,
 symbol table, arrays
User’s View of a Program
The loader takes all these segments and assigns them segment numbers.
The logical address space is a collection of segments. Each segment has a
name and a length. An address specifies both the segment name and the
offset within the segment. Therefore the user specifies each address by
two quantities:
- a segment name (segment number)
- an offset
The logical address therefore consists of
<segment_no, offset>
Every segment resides in one contiguous region of physical memory, with
each program segment compiled as if starting from address 0. A process
may run only if its current segment is in memory.
The mapping of the logical address to the physical address is effected by a
segment table. Each entry of the segment table has a segment base and a
segment limit. The segment base contains the starting physical address
where the segment resides in memory, whereas the segment limit
specifies the length of the segment.
Logical View of Segmentation
Segments of the user's logical view are mapped to scattered,
noncontiguous regions of physical memory.
Segmentation Architecture
 Logical address consists of a two tuple:
 <segment-number, offset>,
 Segment table – maps two-dimensional physical addresses; each table entry has:
o base – contains the starting physical address where the segments reside in
memory
o limit – specifies the length of the segment
 Segment-table base register (STBR) points to the segment table’s location in
memory
 Segment-table length register (STLR) indicates number of segments used by a
program;
 segment number s is legal if s <
STLR
Segmentation Architecture (Cont.)
 Protection
o With each entry in segment table associate:
 validation bit = 0 ⇒ illegal segment
 read/write/execute privileges
 Protection bits associated with segments; code sharing occurs at segment level
 Since segments vary in length, memory allocation is a dynamic storage-allocation
problem
 A segmentation example is shown in the following diagram
Segmentation Hardware
A logical address consists of two parts
 A segment nos(s)
 Offset into that segment(d)
The segment number is used as an index into the segment table. The offset d
of the logical address must be between 0 and the segment limit. If it is not,
a trap is sent to the operating system indicating an addressing error; if the
offset is legal, it is added to the segment base to produce the address in
physical memory of the desired byte. The segment table is essentially an
array of base-limit register pairs.
Example of Segmentation
There are five segments numbered from 0 to 4. The segments are stored in
physical memory as shown. The segment table has a separate entry for
each segment, giving the beginning address of the segment in physical
memory (the base) and the length of that segment (the limit).
For example, segment 2 is 400 bytes long and begins at location 4300. Thus a
reference to byte 53 of segment 2 is mapped onto location
4300 + 53 = 4353.
Segment 3, byte 852 => 3200 + 852 = 4052
Segment 0, byte 1222 => addressing error (the offset exceeds the segment limit)
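The example can be sketched with the segment table as an array of base-limit pairs; the entries for segments 0, 1 and 4 follow the usual five-segment figure and are assumptions here:

```python
# Segment-table lookup for the example above: (base, limit) per segment.
# Segments 2 and 3 match the text; the other entries are assumed.

segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400),
                 3: (3200, 1100), 4: (4700, 1000)}

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:
        # the hardware traps to the OS: addressing error
        raise MemoryError(f"trap: offset {d} beyond limit of segment {s}")
    return base + d

print(translate(2, 53))    # 4300 + 53 = 4353
print(translate(3, 852))   # 3200 + 852 = 4052
try:
    translate(0, 1222)     # offset beyond segment 0's limit
except MemoryError as e:
    print(e)               # addressing error trap
```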
9.5 Paging Vs Segmentation
Paging and segmentation are functionally similar. A page is a physical slice of the address
space, and all pages are of equal size. A segment is a logical module of the address space
and has arbitrary length. In paging, each module (or even an instruction) may itself
be split among several pages, whereas a segment corresponding to a dynamic data
structure may grow or shrink as the data structure varies. Sharing segments is quite
straightforward when compared to sharing in a pure paging system.
16 Mark questions
1. Describe the following allocation algorithms:
a. First fit
b. Best fit
c. Worst fit
2. Why are segmentation and paging sometimes combined into one scheme? Explain.
3. Given memory partitions of 100KB, 500KB, 200KB, 300KB and 600KB (in order), how
would each of the first-fit, best-fit and worst-fit algorithms place processes of
212KB, 417KB, 112KB and 426KB (in order)? Which algorithm makes the most efficient
use of memory?
4. Consider the following segment table:

Segment   Base   Length
0         219    600
1         2300   14
2         90     100
3         1327   580
4         1952   96

What are the physical addresses for the following logical addresses?
a. 0,430
b. 1,10
c. 2,500
d. 3,400
e. 4,112
5. In the IBM/370, memory protection is provided through the use of keys. A key is a 4-bit
quantity. Each 2KB block of memory has a key (the storage key) associated with it. The
CPU also has a key (the protection key) associated with it. A store operation is allowed
only if both keys are equal, or if either is zero. Which of the following
memory-management schemes could be used successfully with this hardware?
 Single user system
 Multiprogramming with a fixed number of processes
 Multiprogramming with a variable number of processes
 Paging
 segmentation
Chapter 10: Virtual Memory
Virtual memory is a technique which allows the execution of processes that may not be
completely in memory. It allows the execution of a process even if its logical address
space is greater than the physically available memory. Hence programs larger than physical
memory can be executed. The virtual-memory technique frees programmers from memory-storage
limitations. It is a complex technique and not easy to implement.
Demand Paging
Process creation
Page Replacement
Allocation of Frames
Thrashing
Virtual memory – separation of user logical memory from physical memory.
o Only part of the program needs to be in memory for execution
o Logical address space can therefore be much larger than physical
address space
o Allows address spaces to be shared by several processes
o Allows for more efficient process creation
1. Virtual memory can be implemented via:
a. Demand paging
b. Demand segmentation
Scheme of virtual memory
The implementation of virtual memory involves at least two storage levels-main memory
and secondary storage.
Virtual memory is a memory-management technique which splits a
program into a number of pieces and also performs swapping. The basic idea behind virtual
memory is that the combined size of the program and data may exceed the amount of
physical memory. The operating system keeps in memory those parts of the program
which are required during execution, and the rest on the disk.
Example:
A 5MB program can run on a 640KB RAM machine by carefully choosing 640KB to be
kept in memory at each instant and swapping pieces of a program between the disk and
memory as needed.
Function of virtual memory
The function of virtual memory may be characterized as follows:
An address generated by a programmer is referred to as a virtual address (or name) and
the set of such addresses is called a virtual address space or name space.
An address of a word (or a byte) in the physical memory is referred to as memory address
(or real address) and the set of all such addresses is called memory space (or real address
space)
Usage:
The purpose of the virtual-memory mechanism is to realize the address-mapping
function, which performs the necessary mapping: either generating a memory address, if the
item at the required virtual address is present in memory, or generating a
missing-item fault otherwise.
In case the required item is present in memory, main memory
can be accessed; otherwise, on a missing-item fault, the program that generated the address
is temporarily suspended. The cost of such missing-item faults is kept low by the principle
of locality.
10.1 Demand paging
A demand paging technique is similar to paging with swapping. Usually processes reside
on secondary memory, when the process is required, it is swapped into memory using a
LAZY swapper it never swaps a page into memory until the page will be needed.
The swapper is used to swap processes whereas a pager is concerned with the individual
pages of a process.
When a process is to be swapped in the pager guesses which pages will be used before
the process is swapped out again and the pager brings only those necessary pages into
memory .Hence it avoids reading into memory pages that will not be used anyway,
decreasing the swap time and the amount of physical memory needed.
A hardware scheme is used to distinguish these pages that are in memory and on the disk
pages that are in memory and on the disk .This can be implemented by using valid-
invalid bit scheme. if this bit is set to valid then the associated page is both legal and in
memory. If the bit is invalid then the page is not valid or is not in memory. For pages
brought into memory, the page table entry is set as usual, but the page table entry is either
invalid and may contain address space on disk. This situation shown below
Page Table When Some Pages Are Not in Main Memory
Any access to a page marked invalid causes a page-fault trap: when a process tries to
access a page that was not swapped in, the paging hardware, while translating the
address through the page table, notices that the invalid bit is set and causes a trap to
the operating system.
The page-fault trap means that the operating system has not yet brought the desired
page into memory; it does not necessarily indicate an invalid address.
The procedure for handling page fault is illustrated by the following diagram.
Steps in Handling a Page Fault
 Check the page table(usually kept with PCB) for this process, to determine whether
the reference was a valid or invalid memory access.
 If the reference was invalid, the process terminates. If it was valid, but the page is not
yet in memory then swap in that page.
 Find a free frame(from the free frame list)
 Schedule a disk operation to read the desired page into the newly allocated frame
 When the disk read is complete, modify the internal table kept with the process and
the page table to indicate that the page is now memory.
 Restart the instruction that was interrupted by the illegal address trap.
 The process can now access the page as though it had always been in memory.
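The steps above can be sketched as a toy fault handler (all names and table contents here are hypothetical):

```python
# Toy page-fault handler following the steps above: a page table with
# valid bits, a free-frame list, and a handler that loads the page.

free_frames = [3, 7]                       # assumed free-frame list
# page -> [frame, valid]; pages 1 and 2 are on disk (invalid bit)
page_table = {0: [5, True], 1: [None, False], 2: [None, False]}

def access(page):
    frame, valid = page_table[page]
    if valid:
        return frame                       # legal and in memory: no fault
    # page fault: take a free frame and (conceptually) schedule the disk read
    frame = free_frames.pop(0)
    page_table[page] = [frame, True]       # the page is now in memory
    return frame                           # restart the faulting instruction

print(access(0))   # 5 (already resident)
print(access(1))   # 3 (fault serviced from the free-frame list)
```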
Pure demand paging:
If a process starts executing with no pages in memory, then when the first
instruction is fetched, the process immediately faults for that page. After the page is
brought into memory, the process continues to execute, faulting as necessary until
every page that it needs is in memory. This is pure demand paging: never bring a page
into memory until it is required.
Hardware support
Page table
This table has the ability to mark an entry invalid through a valid-invalid bit or special
value of protection bits.
Secondary memory
This memory holds those pages that are not present in main memory. The secondary
memory is usually a high speed disk. It is known as the swap device and the section of
the disk used for this purpose is known as swap space or backing store.
10.3 Page Replacement
Page replacement takes the following approach. If no frame is free, find one that is not
currently being used and free it. The frame can be freed by writing its contents to swap
space and changing the page table and other tables. The freed frame can now be used to
hold the page for which the process faulted.
The page fault routine is now modified to include page replacement.
 Find the location of the desired page on the disk.
 Find free frame
1. If there is a free frame use it
2. Otherwise use a page replacement algorithm to select a victim frame
3. Write the victim page to the disk, change the page and frame tables
4. Read the desired page into the (newly)free frame, change the page and
frame tables
5. Restart the user processes.
6. If no frames are free then two page transfers are required. This situation
effectively doubles the page fault service time.
7. This two-page-transfer overhead can be reduced by using a modify bit or
dirty bit. Every page frame has a modify bit associated with it, at the
hardware level.
8. The modify bit is set by the hardware whenever any byte or word is written into
the page, indicating that the page has been modified. When a page is
selected for replacement and its modify bit is not set, the page has not been
changed since it was read in, so it need not be written back: the copy on disk is
already current. This scheme can reduce the time to service a page fault by half
if the page is not modified.
10.4 Allocation of frames
Page replacement algorithm
Practically every OS has its own unique replacement scheme, and there are many different
page-replacement algorithms. The main criterion for selecting a replacement algorithm
is the lowest page-fault rate. These algorithms are evaluated by running them on a
particular string of memory references (known as a reference string) and computing the
number of page faults. Reference strings are generated using a random-number generator
or by tracing a real system.
For a given page size, only the page number is considered, not the entire address. If
there is a reference to page P, then any immediately following references to page P will
never cause a page fault.
There are several page replacement algorithms some of which are the following
1. FIFO algorithm
2. Optimal algorithm
3. LRU algorithm
4. Counting algorithm
5. Page buffering
FIFO algorithm
The FIFO algorithm is the simplest page-replacement algorithm. It associates with every
page its arrival time (when that page was brought into memory). To replace a page, the
oldest page is chosen. It is not strictly necessary to record the arrival time; instead, a FIFO
queue can be maintained holding all pages in memory. The page at the head of the
queue is replaced, and the head pointer is moved to the next element in the queue. When a
page is brought into memory, it is inserted at the tail of the queue.
First-In-First-Out (FIFO) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
3 frames (3 pages can be in memory at a time per process)
4 frames
Belady's Anomaly: more frames ⇒ more page faults
With 3 frames, this reference string causes 9 page faults; with 4 frames, it causes 10 page faults.
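The FIFO fault counts on this reference string can be verified with a short simulation (a sketch, not part of the original notes):

```python
# FIFO page replacement on the reference string above, showing Belady's
# anomaly: 3 frames -> 9 faults, 4 frames -> 10 faults.
from collections import deque

def fifo_faults(refs, nframes):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()       # evict the oldest (head of the queue)
            frames.append(page)        # insert at the tail of the queue
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10
```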
FIFO Page Replacement
FIFO Illustrating Belady's Anomaly
Optimal algorithm
An optimal page-replacement algorithm, called OPT or MIN, has the lowest page-fault
rate of all algorithms. It is simply stated as:
"Replace the page that will not be used for the longest period of time"
To illustrate, consider the sample reference string. The optimal algorithm yields only nine
page faults, which is much better than the FIFO algorithm (15 faults). However the optimal
page-replacement algorithm is difficult to implement, because it requires future
knowledge of the reference string. As a result, the optimal algorithm is used mainly for
comparison studies.
Optimal Algorithm
Replace the page that will not be used for the longest period of time.
4-frame example: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
How do you know this? The algorithm is used for measuring how well other algorithms perform.
On this string with 4 frames, the optimal algorithm causes only 6 page faults.
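The optimal policy can be simulated by looking ahead in the reference string (a sketch for comparison purposes, not from the original notes):

```python
# Optimal (OPT/MIN) replacement: evict the page whose next use lies farthest
# in the future. Requires knowing the rest of the reference string.

def opt_faults(refs, nframes):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))   # farthest next use
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 4))   # 6 faults, versus 10 for FIFO with 4 frames
```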
LRU algorithm
Since the optimal page-replacement algorithm is not feasible, an approximation to it is
used. The FIFO algorithm uses the time when a page was brought into memory; the
LRU (Least Recently Used) algorithm instead uses the recent past as an approximation
of the near future: it replaces the page that has not been used for the longest period of
time.
During LRU replacement, when a page needs to be replaced, the algorithm chooses the
page that has not been used for the longest period of time. The result of applying LRU
to the reference string is shown below.
In practice, the LRU policy is considered quite good and is used as a page-replacement
algorithm. But it requires hardware assistance to determine an order for the frames
defined by the time of last use. This can be implemented using a stack or counters.
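LRU on the same reference string can be sketched with an ordered dictionary standing in for the hardware-maintained use order (an illustration, not from the notes):

```python
# LRU replacement: evict the page whose last use lies furthest in the past.
# An OrderedDict keeps the pages ordered from least to most recently used.
from collections import OrderedDict

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)      # mark as most recently used
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)    # evict the least recently used
        frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))   # 8 faults: between OPT's 6 and FIFO's 10
```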
LRU Page Replacement
Least Recently Used (LRU) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Counter implementation: every page-table entry has a counter; every time the page is
referenced through this entry, the clock is copied into the counter. When a page needs
to be replaced, the counters are examined to determine which page to change.
Counting Algorithms
A counter is kept in the page table which stores the number of references, and these
counters can be used to implement the following two schemes.
LFU algorithm:
The least frequently used page replacement algorithm requires that the page with the
smallest count be replaced, on the argument that an actively used page should have a
large reference count.
The disadvantage of this algorithm is that if a page is used heavily during the initial phase
of a process but is never used again, it retains its large count and remains in memory
even though it is no longer needed.
MFU Algorithm
The most frequently used page replacement algorithm is based on the argument that the
page with the smallest count was probably just brought in and has yet to be used. Neither
MFU nor LFU replacement is common, as implementation of these algorithms is fairly
expensive.
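The two counting schemes differ only in whether the smallest or the largest count marks the victim. A minimal sketch follows; the function name is mine, and letting reference counts persist after a page is evicted is an assumption of this sketch, not something the text specifies:

```python
from collections import Counter

def counting_faults(refs, frames, policy="LFU"):
    """Simulate LFU/MFU replacement; reference counts persist across evictions."""
    memory, counts, faults = [], Counter(), 0
    for page in refs:
        counts[page] += 1
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue
        pick = min if policy == "LFU" else max   # smallest vs. largest count
        victim = pick(memory, key=lambda p: counts[p])
        memory[memory.index(victim)] = page
    return faults
```

With enough frames both policies incur only the compulsory faults; they diverge only once eviction decisions have to be made.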
Page Buffering algorithm:
In addition to a page replacement algorithm, systems commonly employ some
techniques which improve performance:
 Systems commonly keep a pool of free frames or page buffers. When
a page fault occurs, a victim frame is chosen as before, but before the
victim is written out, the desired page is read into a free frame from the
pool. This allows the process to restart immediately, without
waiting for the victim page to be written out; later, after the victim is
written out, its frame is added to the free-frame pool.
LRU Approximation Algorithm:
Few computer systems provide sufficient hardware support for true LRU
replacement; many systems provide help in the form of a reference bit. The
reference bit for a page is set by the hardware whenever that page is
referenced. Reference bits are associated with each entry in the page table.
Initially, all bits are cleared to zero by the operating system. As a user
process executes, the bit associated with each referenced page is set to 1 by
the hardware. After some time, it can be determined which pages were
used by examining the reference bits. The order of use is not
known, but it is possible to know whether a page was used or not. This
information leads to the LRU approximation algorithms.
Additional Reference Bits Algorithm
This is implemented by keeping an 8-bit byte for each page in a table in
memory. At regular intervals (say, every 100 ms), a timer interrupt transfers
control to the operating system. The operating system shifts the reference
bit for each page into the high-order bit of its 8-bit byte, shifting the other
bits right by 1 and discarding the low-order bit. These 8-bit shift registers
contain the history of page use for the last eight time periods. A page that is
used at least once in each period has the shift-register value 1111 1111.
A page with a history-register value of 1100 0100 has been used more
recently than a page with the value 0111 0111. If these 8-bit bytes are
interpreted as unsigned integers, the page with the lowest number is the
LRU page, and it can be replaced.
Consider the following instance with three page frames. At time interval 1,
page frame 0 is referenced; at interval 2, page frames 1 and 2 are
referenced; and at interval 3, only page frames 0 and 2 are referenced.
After these events, the counter values of the page frames are as shown below.
Events Page Frame 0 Page frame 1 Page Frame 2
Before clock 0000 0000 0000 0000 0000 0000
At time Interval 1 1000 0000 0000 0000 0000 0000
At time Interval 2 0100 0000 1000 0000 1000 0000
At time Interval 3 1010 0000 0100 0000 1100 0000
The page with the lowest value is the LRU page and hence can be
replaced. Therefore frame 1 is selected as the victim for replacement.
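The counter values in the table can be reproduced by simulating the shift-register update at each timer tick. The age helper below is a name invented for this sketch:

```python
def age(registers, referenced, bits=8):
    """One timer tick: shift each register right and put the reference bit
    (1 if the frame was used during this interval) into the high-order bit."""
    high = 1 << (bits - 1)
    return [(high if frame in referenced else 0) | (reg >> 1)
            for frame, reg in enumerate(registers)]

regs = [0, 0, 0]                        # three page frames, history cleared
for used in ([0], [1, 2], [0, 2]):      # references in intervals 1, 2 and 3
    regs = age(regs, set(used))

print([format(r, "08b") for r in regs])      # ['10100000', '01000000', '11000000']
print(min(range(3), key=lambda f: regs[f]))  # 1 -> frame 1 is the LRU victim
```

The final register values match the last row of the table, and the smallest unsigned value (frame 1) identifies the replacement victim.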
b) Second Chance Algorithm
The basis of the second-chance algorithm is FIFO replacement. When a page has been
selected, its reference bit is checked. If the value is zero, the page is replaced. If the
reference bit is 1, however, the page is given a second chance: its reference bit is
cleared and its arrival time is reset to the current time. Thus a page that is given a second
chance will not be replaced until all other pages have been replaced or given second
chances. One way to implement this algorithm is by using a circular queue.
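One such circular-queue ("clock") implementation can be sketched as follows: a hand sweeps over the frames, clearing reference bits until it finds one that is already 0 (the function name is mine, not from the text):

```python
def second_chance(refs, frames):
    """Second-chance (clock) replacement; return the number of page faults."""
    memory = [None] * frames     # circular buffer of resident pages
    ref_bit = [0] * frames
    hand, faults = 0, 0
    for page in refs:
        if page in memory:
            ref_bit[memory.index(page)] = 1   # referenced again: set the bit
            continue
        faults += 1
        while ref_bit[hand]:                  # bit set: give a second chance,
            ref_bit[hand] = 0                 # clear it and advance the hand
            hand = (hand + 1) % frames
        memory[hand] = page                   # bit was 0: replace this page
        hand = (hand + 1) % frames
    return faults

print(second_chance([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))
```

When every reference bit is 0, the hand never skips anything and the policy degenerates to plain FIFO.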
Second-Chance (Clock) Page-Replacement Algorithm
Global vs. Local Allocation
Global replacement – process selects a replacement frame from the set of all
frames; one process can take a frame from another
Local replacement – each process selects from only its own set of allocated frames
10.5 Thrashing
Thrashing occurs when a process does not have enough frames to execute. If the
process lacks the number of frames it needs, it will very quickly page-fault. At this point,
it must replace some page. However, since all its pages are in active use, it must replace a
page that will be needed again right away. Consequently, it quickly faults again and
again, repeatedly replacing pages that it must immediately bring back in.
This high paging activity is called thrashing. A process is thrashing if it is spending more
time paging than executing.
Causes of Thrashing
If CPU utilization is too low, the degree of multiprogramming is increased by
introducing a new process to the system. Suppose a global replacement algorithm is used,
replacing pages with no regard to the process to which they belong. If a process needs
more frames, it starts faulting and takes frames away from other processes; those
processes then fault too and queue up waiting for the paging device. As the ready queue
empties, CPU utilization decreases.
The CPU scheduler sees this and increases the degree of multiprogramming, which causes
more page faults and a longer queue for the paging device. As a result, CPU
utilization drops even further, and the CPU scheduler tries to increase the degree of
multiprogramming even more. Thrashing has occurred, and system throughput
plunges because the processes are spending all their time paging.
The effects of thrashing can be limited by using a local replacement algorithm: if a
process starts thrashing, it cannot steal frames from another process and cause the
latter to thrash, since pages are replaced only with regard to the process of which they are a part.
Working-Set Strategy:
To prevent thrashing, a process must be allocated as many frames as it needs. This
strategy starts by looking at how many frames a process is actually using. It relies on
locality: a locality is the set of pages actively used together, and a process executes by
moving from locality to locality.
1. Δ ≡ working-set window ≡ a fixed number of page references
Example: 10,000 instructions
2. WSSi (working set of process Pi) =
total number of pages referenced in the most recent Δ (varies in time)
a. if Δ too small, will not encompass entire locality
b. if Δ too large, will encompass several localities
c. if Δ = ∞, will encompass entire program
3. D = Σ WSSi ≡ total demand frames
4. if D > m ⇒ thrashing
5. Policy: if D > m, then suspend one of the processes
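The window computation itself is just a sliding set over the reference trace. A small sketch, with an invented helper name and an arbitrary example trace:

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent `delta` references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 3, 2, 1, 4, 4, 4, 5, 5]
ws = working_set(refs, t=7, delta=5)
print(ws, len(ws))   # {1, 2, 4} 3  -> WSS at time 7 is 3

# Total demand D is the sum of the working-set sizes across all processes;
# if D exceeds the number of available frames m, suspend a process.
```

Choosing delta is the whole game: too small and ws misses part of the current locality, too large and it spans several localities, exactly as the list above states.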
Keeping track of the working set:
1. Approximate with an interval timer + a reference bit
2. Example: Δ = 10,000
a. Timer interrupts after every 5,000 time units
b. Keep 2 bits in memory for each page
c. Whenever the timer interrupts, copy and then set the values of all
reference bits to 0
d. If one of the bits in memory = 1 ⇒ page is in the working set
3. Why is this not completely accurate?
4. Improvement: 10 bits and an interrupt every 1,000 time units
Working-set model
The principle of locality states that processes tend to refer to storage in
non-uniform but highly localized patterns.
Locality can be expressed in terms of both time and space. Locality with reference to
time is called temporal locality: a recently referenced storage location is likely to be
referenced again soon.
Page-Fault Frequency Scheme
a. Establish “acceptable” page-fault rate
i. If actual rate too low, process loses frame
ii. If actual rate too high, process gains frame
It is a direct approach. If the page-fault rate is high, the process needs more frames; if it
is low, the process may have too many frames. Upper and lower bounds on the desired
page-fault rate can be established. If the actual page-fault rate exceeds the upper limit,
the process is allocated another frame. If the page-fault rate falls below the lower limit,
a frame is removed from that process. Thus the page-fault rate can be directly measured
and controlled to prevent thrashing.
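The control loop reduces to two threshold comparisons per measurement interval. The function name and the bound values below are illustrative choices, not ones given in the text:

```python
def pff_adjust(frames, fault_rate, lower=0.02, upper=0.10):
    """Adjust a process's frame allocation from its measured page-fault rate."""
    if fault_rate > upper:
        return frames + 1          # faulting too often: grant another frame
    if fault_rate < lower and frames > 1:
        return frames - 1          # plenty of frames: reclaim one
    return frames                  # rate within bounds: leave allocation alone

print(pff_adjust(8, 0.25))    # 9
print(pff_adjust(8, 0.001))   # 7
```

If a process needs an extra frame and none are free, it can be suspended and swapped out, freeing its frames for the remaining processes.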
************************

Weitere ähnliche Inhalte

Was ist angesagt?

Memory management early_systems
Memory management early_systemsMemory management early_systems
Memory management early_systemsMybej Che
 
Introduction of Memory Management
Introduction of Memory Management Introduction of Memory Management
Introduction of Memory Management Maitree Patel
 
Chapter 1 - Introduction
Chapter 1 - IntroductionChapter 1 - Introduction
Chapter 1 - IntroductionWayne Jones Jnr
 
Understanding memory management
Understanding memory managementUnderstanding memory management
Understanding memory managementGokul Vasan
 
Memory Management in OS
Memory Management in OSMemory Management in OS
Memory Management in OSvampugani
 
Chapter 8 : Memory
Chapter 8 : MemoryChapter 8 : Memory
Chapter 8 : MemoryAmin Omi
 
Computer memory management
Computer memory managementComputer memory management
Computer memory managementKumar
 
Chapter 2 part 1
Chapter 2 part 1Chapter 2 part 1
Chapter 2 part 1rohassanie
 
Memory management
Memory managementMemory management
Memory managementcpjcollege
 
Memory management ppt
Memory management pptMemory management ppt
Memory management pptManishaJha43
 
Ch9 OS
Ch9 OSCh9 OS
Ch9 OSC.U
 
Ios103 ios102 iv-operating-system-memory-management_wk4
Ios103 ios102 iv-operating-system-memory-management_wk4Ios103 ios102 iv-operating-system-memory-management_wk4
Ios103 ios102 iv-operating-system-memory-management_wk4Anwal Mirza
 
Overview of Distributed Systems
Overview of Distributed SystemsOverview of Distributed Systems
Overview of Distributed Systemsvampugani
 

Was ist angesagt? (20)

Memory management early_systems
Memory management early_systemsMemory management early_systems
Memory management early_systems
 
Memory Management
Memory ManagementMemory Management
Memory Management
 
Introduction of Memory Management
Introduction of Memory Management Introduction of Memory Management
Introduction of Memory Management
 
Chapter 1 - Introduction
Chapter 1 - IntroductionChapter 1 - Introduction
Chapter 1 - Introduction
 
Understanding memory management
Understanding memory managementUnderstanding memory management
Understanding memory management
 
Memory Management in OS
Memory Management in OSMemory Management in OS
Memory Management in OS
 
Chapter 8 : Memory
Chapter 8 : MemoryChapter 8 : Memory
Chapter 8 : Memory
 
OSCh14
OSCh14OSCh14
OSCh14
 
Memory management
Memory managementMemory management
Memory management
 
Computer memory management
Computer memory managementComputer memory management
Computer memory management
 
Chapter 2 part 1
Chapter 2 part 1Chapter 2 part 1
Chapter 2 part 1
 
Memory management
Memory managementMemory management
Memory management
 
Memory management ppt
Memory management pptMemory management ppt
Memory management ppt
 
Cs8493 unit 3
Cs8493 unit 3Cs8493 unit 3
Cs8493 unit 3
 
Ch9 OS
Ch9 OSCh9 OS
Ch9 OS
 
Os unit 3
Os unit 3Os unit 3
Os unit 3
 
Ios103 ios102 iv-operating-system-memory-management_wk4
Ios103 ios102 iv-operating-system-memory-management_wk4Ios103 ios102 iv-operating-system-memory-management_wk4
Ios103 ios102 iv-operating-system-memory-management_wk4
 
Overview of Distributed Systems
Overview of Distributed SystemsOverview of Distributed Systems
Overview of Distributed Systems
 
CS6401 OPERATING SYSTEMS Unit 3
CS6401 OPERATING SYSTEMS Unit 3CS6401 OPERATING SYSTEMS Unit 3
CS6401 OPERATING SYSTEMS Unit 3
 
Memory management
Memory managementMemory management
Memory management
 

Ähnlich wie Unit iiios Storage Management

Chapter 9 OS
Chapter 9 OSChapter 9 OS
Chapter 9 OSC.U
 
Bab 4
Bab 4Bab 4
Bab 4n k
 
Memory Management in Operating Systems for all
Memory Management in Operating Systems for allMemory Management in Operating Systems for all
Memory Management in Operating Systems for allVSKAMCSPSGCT
 
Paging +Algorithem+Segmentation+memory management
Paging +Algorithem+Segmentation+memory managementPaging +Algorithem+Segmentation+memory management
Paging +Algorithem+Segmentation+memory managementkazim Hussain
 
Operating system Memory management
Operating system Memory management Operating system Memory management
Operating system Memory management Shashank Asthana
 
Operating system memory management
Operating system memory managementOperating system memory management
Operating system memory managementrprajat007
 
ch8 Memory Management OS.pptx
ch8 Memory Management OS.pptxch8 Memory Management OS.pptx
ch8 Memory Management OS.pptxIndhu Periys
 
M20CA1030_391_2_Part2.pptx
M20CA1030_391_2_Part2.pptxM20CA1030_391_2_Part2.pptx
M20CA1030_391_2_Part2.pptxHarikishnaKNHk
 

Ähnlich wie Unit iiios Storage Management (20)

Chapter 9 OS
Chapter 9 OSChapter 9 OS
Chapter 9 OS
 
Operating System
Operating SystemOperating System
Operating System
 
Opetating System Memory management
Opetating System Memory managementOpetating System Memory management
Opetating System Memory management
 
CH08.pdf
CH08.pdfCH08.pdf
CH08.pdf
 
Memory Management
Memory ManagementMemory Management
Memory Management
 
Cs8493 unit 3
Cs8493 unit 3Cs8493 unit 3
Cs8493 unit 3
 
Bab 4
Bab 4Bab 4
Bab 4
 
UNIT-2 OS.pptx
UNIT-2 OS.pptxUNIT-2 OS.pptx
UNIT-2 OS.pptx
 
Memory management OS
Memory management OSMemory management OS
Memory management OS
 
Memory Management in Operating Systems for all
Memory Management in Operating Systems for allMemory Management in Operating Systems for all
Memory Management in Operating Systems for all
 
Ch8
Ch8Ch8
Ch8
 
Paging +Algorithem+Segmentation+memory management
Paging +Algorithem+Segmentation+memory managementPaging +Algorithem+Segmentation+memory management
Paging +Algorithem+Segmentation+memory management
 
Operating system
Operating systemOperating system
Operating system
 
Memory Management
Memory ManagementMemory Management
Memory Management
 
Memory comp
Memory compMemory comp
Memory comp
 
Operating system Memory management
Operating system Memory management Operating system Memory management
Operating system Memory management
 
Operating system memory management
Operating system memory managementOperating system memory management
Operating system memory management
 
ch8 Memory Management OS.pptx
ch8 Memory Management OS.pptxch8 Memory Management OS.pptx
ch8 Memory Management OS.pptx
 
Ch8 main memory
Ch8   main memoryCh8   main memory
Ch8 main memory
 
M20CA1030_391_2_Part2.pptx
M20CA1030_391_2_Part2.pptxM20CA1030_391_2_Part2.pptx
M20CA1030_391_2_Part2.pptx
 

Mehr von donny101

Unit vos - File systems
Unit vos - File systemsUnit vos - File systems
Unit vos - File systemsdonny101
 
Unit ivos - file systems
Unit ivos - file systemsUnit ivos - file systems
Unit ivos - file systemsdonny101
 
Unit iios process scheduling and synchronization
Unit iios process scheduling and synchronizationUnit iios process scheduling and synchronization
Unit iios process scheduling and synchronizationdonny101
 
Unit 1os processes and threads
Unit 1os processes and threadsUnit 1os processes and threads
Unit 1os processes and threadsdonny101
 

Mehr von donny101 (9)

Unit v
Unit vUnit v
Unit v
 
Unit iv
Unit ivUnit iv
Unit iv
 
Unit iii
Unit iiiUnit iii
Unit iii
 
Unit ii
Unit   iiUnit   ii
Unit ii
 
Unit 1
Unit  1Unit  1
Unit 1
 
Unit vos - File systems
Unit vos - File systemsUnit vos - File systems
Unit vos - File systems
 
Unit ivos - file systems
Unit ivos - file systemsUnit ivos - file systems
Unit ivos - file systems
 
Unit iios process scheduling and synchronization
Unit iios process scheduling and synchronizationUnit iios process scheduling and synchronization
Unit iios process scheduling and synchronization
 
Unit 1os processes and threads
Unit 1os processes and threadsUnit 1os processes and threads
Unit 1os processes and threads
 

Kürzlich hochgeladen

Work-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxWork-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxJuliansyahHarahap1
 
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptxOrlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptxMuhammadAsimMuhammad6
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXssuser89054b
 
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...Call Girls Mumbai
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptDineshKumar4165
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayEpec Engineered Technologies
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfJiananWang21
 
PE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and propertiesPE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and propertiessarkmank1
 
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptxA CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptxmaisarahman1
 
Design For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the startDesign For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the startQuintin Balsdon
 
DC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equationDC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equationBhangaleSonal
 
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxHOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxSCMS School of Architecture
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptxJIT KUMAR GUPTA
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VDineshKumar4165
 
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdfAldoGarca30
 
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLEGEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLEselvakumar948
 
Engineering Drawing focus on projection of planes
Engineering Drawing focus on projection of planesEngineering Drawing focus on projection of planes
Engineering Drawing focus on projection of planesRAJNEESHKUMAR341697
 
Computer Lecture 01.pptxIntroduction to Computers
Computer Lecture 01.pptxIntroduction to ComputersComputer Lecture 01.pptxIntroduction to Computers
Computer Lecture 01.pptxIntroduction to ComputersMairaAshraf6
 

Kürzlich hochgeladen (20)

Work-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxWork-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptx
 
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptxOrlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.ppt
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power Play
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdf
 
PE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and propertiesPE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and properties
 
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptxA CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
 
Design For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the startDesign For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the start
 
DC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equationDC MACHINE-Motoring and generation, Armature circuit equation
DC MACHINE-Motoring and generation, Armature circuit equation
 
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptxHOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
HOA1&2 - Module 3 - PREHISTORCI ARCHITECTURE OF KERALA.pptx
 
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced LoadsFEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - V
 
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
 
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLEGEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
 
Engineering Drawing focus on projection of planes
Engineering Drawing focus on projection of planesEngineering Drawing focus on projection of planes
Engineering Drawing focus on projection of planes
 
Computer Lecture 01.pptxIntroduction to Computers
Computer Lecture 01.pptxIntroduction to ComputersComputer Lecture 01.pptxIntroduction to Computers
Computer Lecture 01.pptxIntroduction to Computers
 

Unit iiios Storage Management

  • 1. OPERATING SYSTEMS iii- 1 UNIT –III STORAGE MANAGEMENT
  • 2. OPERATING SYSTEMS iii- 2 UNIT –III STORAGE MANAGEMENT Memory Management: background-swapping-contiguous memory allocation-paging- segmentation-segmentation with paging. Virtual memory: background-Demand paging- Process creation-Page replacement-Allocation of frames-Thrashing. Case study: Memory management in Linux Chapter 9 Storage Management Memory Management The main purpose of a computer system is executing program .During execution programs should be in main memory along with data they access. In multiprogramming environment, the main memory is one of the most precious resources. Memory management is concerned with the allocation of physical memory of finite capacity to the requesting resources. Its main four functions are 1. Keeping track of the status of each memory location-whether it is allocated of free. 2. Policy for allocation-which process should be allocated memory, how much when and where. Also it should handle conflicting request from the processes. 3. Memory allocation: When the process request fro memory, specific location must be selected and allocated as per a policy and status information updates. 4. De-allocation-The allocated memory may be either reclaimed by memory management or released by the process management. After de-allocation, the status information must be updated. Base and Limit Registers A pair of base and limit registers define the logical address space 1. Swapping 2. Contiguous Memory allocation 3. Paging 4. Segmentation 5. Segmentation with paging
  • 3. OPERATING SYSTEMS iii- 3 Binding of Instructions and Data to Memory Address binding of instructions and data to memory addresses can happen at three different stages Compile time: If memory location known a priori, absolute code can be generated; must recompile code if starting location changes Load time: Must generate relocatable code if memory location is not known at compile time Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Need hardware support for address maps (e.g., base and limit registers) Logical vs. Physical Address Space The concept of a logical address space that is bound to a separate physical address space is central to proper memory management
  • 4. OPERATING SYSTEMS iii- 4 Logical address – generated by the CPU; also referred to as virtual address Physical address – address seen by the memory unit Logical and physical addresses are the same in compile-time and load-time address- binding schemes; logical (virtual) and physical addresses differ in execution-time address-binding scheme Memory-Management Unit (MMU)  Hardware device that maps virtual to physical address  In MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory  The user program deals with logical addresses; it never sees the real physical addresses Dynamic relocation using relocation register Dynamic Loading  Routine is not loaded until it is called  Better memory-space utilization; unused routine is never loaded  Useful when large amounts of code are needed to handle infrequently occurring cases  No special support from the operating system is required implemented through program design
  • 5. OPERATING SYSTEMS iii- 5 Dynamic Linking  Linking postponed until execution time  Small piece of code, stub, used to locate the appropriate memory-resident library routine  Stub replaces itself with the address of the routine, and executes the routine  Operating system needed to check if routine is in processes’ memory address  Dynamic linking is particularly useful for libraries  System also known as shared libraries 9.1 Swapping  A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution  Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images  Roll out, roll in – swapping variant used for priority-based scheduling algorithms; lower-priority process is swapped out so higher-priority process can be loaded and executed  Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped  Modified versions of swapping are found on many systems (i.e., UNIX, Linux, and Windows)  System maintains a ready queue of ready-to-run processes which have memory images on disk Constraints on swapping are: To swap a process, it should completely idle. Never swap a process with pending I/O or execute I/O operation only into operating system buffers. For efficient CPU utilization, the number of swapping should be less. Swapping is used in UNIX and WINDOWS NT. Advatnages:  Swapping helps in running more jobs by CPU keeping only those programs in memory which are currently required by the system and the rest of the programs to be swapped out to the secondary storage.  Allows more jobs to be run, than that can fit into memory at a time.
  • 6. OPERATING SYSTEMS iii- 6 Schematic View of Swapping 9.2Contiguous Allocation  Main memory usually into two partitions: o Resident operating system, usually held in low memory with interrupt vector o User processes then held in high memory  Relocation registers used to protect user processes from each other, and from changing operating-system code and data o Base register contains value of smallest physical address o Limit register contains range of logical addresses – each logical address must be less than the limit register o MMU maps logical address dynamically Memory partition FREE SPACE USER PROGGRAMS OPERATING SYSTEM
  • 7. OPERATING SYSTEMS iii- 7 SINGLE PARTITION ALLOCATION: In this scheme very little h/w support is required. To ensure that the user’s process do not trespass into the operating system area. Relocation registers and limit registers an user/supervisor mode operation are used. Usually the operating system is placed in the lowest memory area and the relocation register has the value of smallest physical address by the operating system. In user mode, each memory address computed by a process is compared with the contents of the relocation registers or limit registers. Any attempt to access the protected area occupied by the operating system may be detected and violating process is aborted. HW address protection with base and limit registers Logical address=346 Relocation register=14000 Physical address=14346 Advantages:  Simplicity  Does not require expertise to understand or use such a system  Used for small, inexpensive computing systems. Disadvantages:  resources are not managed in an efficient manner.  Memory is not fully utilized leaving some wasted space.
Contiguous Allocation (Cont.)
Multiple-partition allocation
Hole – a block of available memory; holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it. The operating system maintains information about (a) allocated partitions and (b) free partitions (holes).

1) Static partitioning (MFT)
Memory is divided into fixed partitions. The size and number of the partitions are determined during system setup, taking into account the degree of multiprogramming, the available memory and the typical sizes of the processes run most frequently. In certain systems these partitions can be defined manually. The current status of each partition and its attributes (partition number, location and size) are stored in a static partition table (SPST). The limit register contains the maximum value of the range of logical addresses; the relocation register contains the value of the smallest physical address. The MMU adds the relocation register to each legal logical address to form the physical address sent to memory.

2) Dynamic partitioning – the dynamic storage-allocation problem
In this scheme partitions are created during job processing so as to match partition sizes to process sizes. Two separate tables are used: an allocated-partition status table for allocated areas and a free-area status table for free areas. Initially all memory is available for user processes and is considered one large block of available memory – a hole. When a process arrives and needs memory,
a hole large enough for that process is searched for. If such a hole is found, the needed memory is allocated from it, keeping the rest available to satisfy future requests.

For example, assume there is 2560K of memory available and an operating system of 400K. This leaves 2160K for user processes. Consider the following processes:

Process | Memory | Time
P1 | 600K | 10
P2 | 100K | 5
P3 | 300K | 20
P4 | 700K | 8
P5 | 500K | 15

Assuming FCFS scheduling, memory space can be allocated to processes P1, P2 and P3, creating a memory map with the operating system followed by P1, P2, P3 and a remaining free hole.

How to satisfy a request of size n from a list of free holes:
- First-fit: allocate the first hole that is big enough. The search can begin at the start of the list or where the previous first-fit search ended, and stops as soon as a large-enough hole is found.
- Best-fit: allocate the smallest hole that is big enough; the entire list must be searched, unless it is ordered by size. Produces the smallest leftover hole.
- Worst-fit: allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
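The three placement policies can be sketched over a list of hole sizes. This is a minimal model (the helper name and list representation are ours); it is run here on the partition sizes from the exercise that follows.

```python
def allocate(holes, size, policy):
    """Return the index of the hole chosen for a request of `size`,
    or None if no hole is large enough."""
    candidates = [(i, h) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if policy == "first":   # first hole that is big enough
        return candidates[0][0]
    if policy == "best":    # smallest hole that is big enough
        return min(candidates, key=lambda c: c[1])[0]
    if policy == "worst":   # largest hole
        return max(candidates, key=lambda c: c[1])[0]

# Best-fit on partitions of 100K, 500K, 200K, 300K, 600K
holes = [100, 500, 200, 300, 600]
for req in (212, 417, 112, 426):
    i = allocate(holes, req, "best")
    holes[i] -= req        # shrink the chosen hole by the allocation
print(holes)               # leftover hole sizes after all four requests
```

Under best-fit all four processes are placed; first-fit and worst-fit can be compared by changing the policy string.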
Problem: Given memory partitions of 100K, 500K, 200K, 300K and 600K (in order), how would each of the first-fit, best-fit and worst-fit algorithms place processes of 212K, 417K, 112K and 426K (in order)?

Fragmentation
- External fragmentation – enough total memory space exists to satisfy a request, but it is not contiguous.
- Internal fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
External fragmentation can be reduced by compaction: shuffle the memory contents to place all free memory together in one large block. Compaction is possible only if relocation is dynamic and done at execution time. The I/O problem: either latch the job in memory while it is involved in I/O, or do I/O only into OS buffers.

9.3 Paging
- The logical address space of a process can be noncontiguous; the process is allocated physical memory wherever it is available.
- Divide physical memory into fixed-sized blocks called frames (size is a power of 2, between 512 bytes and 8,192 bytes).
- Divide logical memory into blocks of the same size, called pages.
- Keep track of all free frames; to run a program of size n pages, find n free frames and load the program.
- Set up a page table to translate logical to physical addresses. Paging suffers from internal fragmentation.
- An address generated by the CPU is divided into:
  - Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory.
  - Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit.
Address Translation Scheme
For a logical address space of size 2^m and page size 2^n, a logical address is split into a page number of m − n bits (the high-order bits) and a page offset of n bits (the low-order bits).

Paging Hardware
Paging Model of Logical and Physical Memory
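The split can be demonstrated with the textbook's 32-byte memory and 4-byte pages; the page-to-frame mapping below is the standard example table and should be treated as illustrative.

```python
PAGE_BITS = 2                            # 4-byte pages -> n = 2 offset bits
PAGE_SIZE = 1 << PAGE_BITS

page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # page -> frame (example mapping)

def to_physical(logical_addr):
    p = logical_addr >> PAGE_BITS        # page number: high-order bits
    d = logical_addr & (PAGE_SIZE - 1)   # page offset: low-order bits
    return page_table[p] * PAGE_SIZE + d

# Logical address 13 = page 3, offset 1 -> frame 2 -> physical 2*4 + 1 = 9
print(to_physical(13))
```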
  • 12. OPERATING SYSTEMS iii- 12 Paging Example 32-byte memory and 4-byte page Free Frames
Before allocation / after allocation

Implementation of Page Table
- The page table is kept in main memory.
- The page-table base register (PTBR) points to the page table.
- The page-table length register (PTLR) indicates the size of the page table.
- In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction.
- The two-memory-access problem can be solved by a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB).
- Some TLBs store address-space identifiers (ASIDs) in each TLB entry; an ASID uniquely identifies each process, providing address-space protection for that process.

Associative Memory
Associative memory supports parallel search over (page number, frame number) pairs.
Address translation for (p, d):
- If p is in an associative register, get the frame number out directly.
- Otherwise get the frame number from the page table in memory.

Paging Hardware with TLB

Advantages of paging
- Paging avoids the problem of fitting varying-sized memory chunks onto the backing store.
- It permits a program's memory space to be noncontiguous.
Drawbacks
- Address mapping involves extra overhead.
- A process is loaded into memory only if all its pages can be loaded.
- It is difficult to select the optimum page size.

Effective Access Time
- Associative lookup takes ε time units; assume the memory cycle time is 1 microsecond.
- Hit ratio α – the percentage of times that a page number is found in the associative registers; related to the number of associative registers.
- Effective access time: EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α
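The EAT formula can be evaluated directly. Here ε is the TLB lookup time and α the hit ratio; the 20 ns lookup and 80% hit ratio are sample values, not figures from the text.

```python
def effective_access_time(eps, alpha, mem=1.0):
    """EAT = (mem + eps)*alpha + (2*mem + eps)*(1 - alpha),
    with all times in microseconds."""
    return (mem + eps) * alpha + (2 * mem + eps) * (1 - alpha)

# 20 ns TLB lookup (0.02 us), 80% hit ratio, 1 us memory cycle:
# EAT = 2 + 0.02 - 0.80 = 1.22 microseconds
print(effective_access_time(0.02, 0.80))
```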
  • 15. OPERATING SYSTEMS iii- 15 Memory Protection  Memory protection implemented by associating protection bit with each frame  Valid-invalid bit attached to each entry in the page table: o “valid” indicates that the associated page is in the process’ logical address space, and is thus a legal page o “invalid” indicates that the page is not in the process’ logical address space Valid (v) or Invalid (i) Bit In A Page Table Shared Pages  Shared code o One copy of read-only (reentrant) code shared among processes (i.e., text editors, compilers, window systems). o Shared code must appear in same location in the logical address space of all processes  Private code and data o Each process keeps a separate copy of the code and data o The pages for the private code and data can appear anywhere in the logical address space
  • 16. OPERATING SYSTEMS iii- 16 Shared Pages Example Structure of the Page Table  Hierarchical Paging  Hashed Page Tables  Inverted Page Tables Hierarchical Page Tables  Break up the logical address space into multiple page tables  A simple technique is a two-level page table
Two-Level Page-Table Scheme
Two-Level Paging Example
- A logical address (on a 32-bit machine with a 1K page size) is divided into a page number of 22 bits and a page offset of 10 bits.
- Since the page table is itself paged, the page number is further divided into a 12-bit outer page number and a 10-bit inner page number.
- Thus a logical address is divided as p1 (12 bits), p2 (10 bits), d (10 bits), where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
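The 12/10/10 decomposition can be sketched with shifts and masks (the function name is ours):

```python
def split_two_level(addr):
    """Decompose a 32-bit logical address into (p1, p2, d):
    12-bit outer index, 10-bit inner index, 10-bit page offset."""
    d  = addr & 0x3FF             # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF     # next 10 bits: inner page-table index
    p1 = (addr >> 20) & 0xFFF     # top 12 bits: outer page-table index
    return p1, p2, d

addr = (3 << 20) | (5 << 10) | 7
print(split_two_level(addr))      # (3, 5, 7)
```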
  • 18. OPERATING SYSTEMS iii- 18 Address-Translation Scheme Three-level Paging Scheme Hashed Page Tables  Common in address spaces > 32 bits
  • 19. OPERATING SYSTEMS iii- 19  The virtual page number is hashed into a page table. This page table contains a chain of elements hashing to the same location.  Virtual page numbers are compared in this chain searching for a match. If a match is found, the corresponding physical frame is extracted. Hashed Page Table Inverted Page Table  One entry for each real page of memory  Entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page  Decreases memory needed to store each page table, but increases time needed to search the table when a page reference occurs  Use hash table to limit the search to one — or at most a few — page-table entries Inverted Page Table Architecture
9.4 Segmentation
Segmentation is a memory-management scheme that supports the user's view of memory. A program is a collection of segments; a segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays.

User's View of a Program
The loader takes all these segments and assigns them segment numbers. The logical address space is a collection of segments; each segment has a name and a length. An address specifies both the segment name and the offset within the segment, so the user specifies each address by two quantities: a segment name (segment number) and an offset. The logical address therefore consists of <segment-number, offset>.

Logical View of Segmentation: segments 1–4 of the user space map onto scattered regions of the physical memory space.

Every segment resides in one contiguous region of physical memory, with each program segment compiled as if starting from address 0. A process may run only if its current segment is in memory. The mapping of logical addresses to physical addresses is effected by a segment table. Each entry of the segment table has a segment base and a
segment limit. The segment base contains the starting physical address where the segment resides in memory, whereas the segment limit specifies the length of the segment.

Segmentation Architecture
- A logical address consists of a two-tuple: <segment-number, offset>.
- Segment table – maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
  - base – the starting physical address where the segment resides in memory
  - limit – the length of the segment
- The segment-table base register (STBR) points to the segment table's location in memory.
- The segment-table length register (STLR) indicates the number of segments used by a program; a segment number s is legal if s < STLR.

Segmentation Architecture (Cont.)
- Protection: with each entry in the segment table associate a validation bit (0 means an illegal segment) and read/write/execute privileges.
- Protection bits are associated with segments; code sharing occurs at the segment level.
- Since segments vary in length, memory allocation is a dynamic storage-allocation problem.
- A segmentation example is shown in the following diagram.
Segmentation Hardware
A logical address consists of two parts: a segment number (s) and an offset into that segment (d). The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit; if it is not, a trap is sent to the operating system indicating an addressing error. If the offset is legal, it is added to the segment base to produce the physical-memory address of the desired byte. The segment table is essentially an array of base–limit register pairs.

Example of Segmentation
There are five segments, numbered from 0 to 4, stored in physical memory as shown. The segment table has a separate entry for each segment, giving the beginning address of the segment in physical memory (the base) and the length of the segment (the limit). For example, segment 2 is 400 bytes long and begins at location 4300; thus a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353. A reference to byte 852 of segment 3 maps to 3200 + 852 = 4052. A reference to byte 1222 of segment 0 results in an addressing error, since the offset exceeds segment 0's limit.

9.5 Paging vs Segmentation
Paging and segmentation are functionally similar. A page is a physical slice of the address space and all pages are of equal size; a segment is a logical module of the address space and has arbitrary length. In paging, each module (or even instruction) may itself be split among several pages, whereas a segment corresponding to a dynamic data structure may grow or shrink as the data structure varies. Sharing segments is quite straightforward compared with sharing in a pure paging system.

16 Mark questions
1. Describe the following allocation algorithms: (a) first fit, (b) best fit, (c) worst fit.
2. Why are segmentation and paging sometimes combined into one scheme? Explain.
3. Given memory partitions of 100KB, 500KB, 200KB, 300KB and 600KB (in order), how would each of the first-fit, best-fit and worst-fit algorithms place processes of 212KB, 417KB, 112KB and 426KB (in order)? Which algorithm makes the most efficient use of memory?
4. Consider the following segment table:

Segment | Base | Length
0 | 219 | 600
1 | 2300 | 14
2 | 90 | 100
3 | 1327 | 580
4 | 1952 | 96

What are the physical addresses for the following logical addresses?
a. 0,430  b. 1,10  c. 2,500  d. 3,400  e. 4,112
5. In the IBM/370, memory protection is provided through the use of keys. A key is a 4-bit quantity. Each 2KB block of memory has a key (the storage key) associated with it. The CPU also has a key (the protection key); a store is allowed only if the two keys are equal, or if either is zero. Which of the following memory-management schemes could be used successfully with this hardware?
- A single-user system
- Multiprogramming with a fixed number of processes
- Multiprogramming with a variable number of processes
- Paging
- Segmentation

Chapter 10: Virtual Memory
Virtual memory is a technique which allows the execution of processes that may not be completely in memory. It allows a process to execute even when its logical address space is greater than the physically available memory; hence programs larger than physical memory can be executed. The virtual-memory technique frees programmers from memory-storage limitations. It is a complex technique and not easy to implement.
Topics: demand paging, process creation, page replacement, allocation of frames, thrashing.
Virtual memory – separation of user logical memory from physical memory.
- Only part of the program needs to be in memory for execution.
- The logical address space can therefore be much larger than the physical address space.
- Allows address spaces to be shared by several processes.
- Allows for more efficient process creation.
Virtual memory can be implemented via (a) demand paging or (b) demand segmentation.

Scheme of virtual memory
The implementation of virtual memory involves at least two storage levels: main memory and secondary storage. Virtual memory is a memory-management technique which splits a program into a number of pieces and swaps them as needed. The basic idea behind virtual memory is that the combined size of the program and its data may exceed the amount of physical memory. The operating system keeps those parts of the program that are required during execution in memory, and the rest on disk.
Example: a 5MB program can run on a 640KB RAM machine by carefully choosing the 640KB to be kept in memory at each instant, swapping pieces of the program between disk and memory as needed.

Function of virtual memory
The function of virtual memory may be characterized as follows: an address generated by a programmer is referred to as a virtual address (or name), and the set of such addresses is called the virtual address space (or name space). An address of a word (or byte) in physical memory is referred to as a memory address (or real address), and the set of all such addresses is called the memory space (or real address space).
Usage: the purpose of the virtual-memory mechanism is to realize the address-mapping function, either generating a memory address if the item at the required virtual address is present in memory, or generating a missing-item fault otherwise. If the required item is present in memory, main memory can be accessed directly; on a missing-item fault, the program that generated the address is temporarily suspended while the item is brought in. The cost of missing-item faults is kept low thanks to the principle of locality.
10.1 Demand paging
Demand paging is similar to paging with swapping. Processes normally reside on secondary memory; when a process is required, it is swapped into memory using a lazy swapper, which never swaps a page into memory until that page is needed. A swapper swaps entire processes, whereas a pager is concerned with the individual pages of a process. When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again and brings only those necessary pages into memory. It thus avoids reading in pages that will not be used anyway, decreasing both the swap time and the amount of physical memory needed.

A hardware scheme is needed to distinguish the pages that are in memory from the pages that are on disk. This is implemented with the valid–invalid bit scheme: if the bit is set to valid, the associated page is both legal and in memory; if the bit is invalid, the page either is not valid or is valid but currently on disk. For pages brought into memory the page-table entry is set as usual; for pages not in memory the entry is either simply marked invalid or contains the address of the page on disk. This situation is shown below.

Page Table When Some Pages Are Not in Main Memory
Any access to a page marked invalid causes a page-fault trap: when a process tries to access a page that was not previously swapped in, the paging hardware, while translating the address through the page table, notices that the invalid bit is set and traps to the operating system. The page-fault trap means that the operating system has failed to bring the desired page into memory; it does not indicate an invalid address. The procedure for handling a page fault is illustrated by the following diagram.

Steps in Handling a Page Fault
- Check the page table (usually kept with the PCB) for this process, to determine whether the reference was a valid or an invalid memory access.
- If the reference was invalid, terminate the process. If it was valid but the page is not yet in memory, swap in that page.
- Find a free frame (from the free-frame list).
- Schedule a disk operation to read the desired page into the newly allocated frame.
- When the disk read is complete, modify the internal table kept with the process and the page table to indicate that the page is now in memory.
- Restart the instruction that was interrupted by the illegal-address trap. The process can now access the page as though it had always been in memory.

Pure demand paging: a process starts executing with no pages in memory. When the first instruction executes, the process immediately faults for its page; after that page is brought into memory, the process continues to execute, faulting as necessary until every page it needs is in memory. This is pure demand paging: never bring a page into memory until it is required.

Hardware support
- Page table: has the ability to mark an entry invalid through a valid–invalid bit or a special value of the protection bits.
- Secondary memory: holds those pages that are not present in main memory; usually a high-speed disk. It is known as the swap device, and the section of the disk used for this purpose is known as swap space or backing store.
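The steps above can be modelled with a toy page table and free-frame list. All names and the three-frame limit are illustrative; a real kernel would also schedule the disk read and restart the faulting instruction.

```python
from collections import deque

N_FRAMES = 3
free_frames = deque(range(N_FRAMES))   # free-frame list
page_table = {}                        # page -> frame for valid (resident) pages

def access(page):
    """Return the frame holding `page`, servicing a page fault if needed."""
    if page in page_table:             # valid bit set: no fault
        return page_table[page]
    # Page fault: grab a free frame, "read" the page in, mark the entry valid.
    if not free_frames:
        raise RuntimeError("no free frame: a page-replacement victim is needed")
    frame = free_frames.popleft()
    page_table[page] = frame
    return frame                       # the faulting access is then restarted

print(access(0), access(1), access(0))   # 0 1 0 -- the third access is a hit
```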
10.3 Page Replacement
Page replacement takes the following approach: if no frame is free, find one that is not currently being used and free it. A frame is freed by writing its contents to swap space and changing the page table and other tables. The freed frame can now be used to hold the page for which the process faulted. The page-fault routine, modified to include page replacement:
1. Find the location of the desired page on the disk.
2. Find a free frame: if there is one, use it; otherwise use a page-replacement algorithm to select a victim frame, write the victim page to disk, and change the page and frame tables accordingly.
3. Read the desired page into the (newly) free frame; change the page and frame tables.
4. Restart the user process.
If no frames are free, two page transfers are required, which effectively doubles the page-fault service time. This overhead can be reduced by using a modify bit (dirty bit): every page frame has a modify bit associated with it at the hardware level. The bit is set by the hardware whenever any byte or word is written into the page, indicating that the page has been modified. When a page is selected for replacement and its modify bit is not set, the page need not be written out, since an identical copy is already on disk. This scheme can reduce the time to service a page fault by half when the page has not been modified.
10.4 Allocation of Frames

Page replacement algorithms
Practically every operating system has its own replacement scheme, and there are many different page-replacement algorithms. The main criterion for selecting a replacement algorithm is the lowest page-fault rate. These algorithms are evaluated by running them on a particular string of memory references (known as a reference string) and computing the number of page faults. Reference strings are generated by using a random-number generator or by tracing a running system. For a given page size, only the page number is considered, not the entire address. If there is a reference to page p, then any immediately following references to page p will never cause a page fault. Common page-replacement algorithms include:
1. FIFO algorithm
2. Optimal algorithm
3. LRU algorithm
4. Counting algorithms
5. Page buffering

FIFO algorithm
The FIFO algorithm is the simplest page-replacement algorithm. It associates with every page its arrival time (when the page was brought into memory); to replace a page, the oldest page is chosen. It is not strictly necessary to record the arrival time: instead, a FIFO queue can be created to hold all pages in memory. The page at the head of the queue is replaced and the head pointer moves to the next element in the queue; when a page is brought into memory for an empty frame, it is inserted at the tail of the queue.

First-In-First-Out (FIFO) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- With 3 frames (3 pages can be in memory at a time per process): 9 page faults.
- With 4 frames: 10 page faults.
Belady's Anomaly: more frames can mean more page faults.
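The two fault counts, and with them Belady's anomaly, can be reproduced with a short simulation:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                   # head of the queue = oldest page
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()       # evict the oldest page
            frames.append(page)        # newcomer joins the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 -- more frames, yet more faults
```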
FIFO Page Replacement
FIFO Illustrating Belady's Anomaly
Optimal algorithm
The optimal page-replacement algorithm, called OPT or MIN, has the lowest page-fault rate of all algorithms. It is simply stated as: "Replace the page that will not be used for the longest period of time." On the sample reference string this algorithm yields only nine page faults, much better than the FIFO algorithm's fifteen. However, the optimal algorithm is difficult to implement, because it requires future knowledge of the reference string; as a result it is used mainly for comparison studies, to measure how well other algorithms perform.

Optimal Algorithm
Replace the page that will not be used for the longest period of time. For the 4-frame example with reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, OPT incurs only 6 page faults.
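OPT can be simulated by looking ahead in the reference string, which is exactly why it is unrealizable online but useful as a yardstick:

```python
def opt_faults(refs, n_frames):
    """Count page faults under optimal (OPT/MIN) replacement."""
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            def next_use(p):              # position of p's next reference
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")   # never used again: ideal victim
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

print(opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))   # 6
```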
LRU algorithm
Since the optimal page-replacement algorithm is not feasible, an approximation to it is used. Where the FIFO algorithm uses the time when a page was brought in, the LRU (Least Recently Used) algorithm uses the recent past as an approximation of the near future: when a page must be replaced, it chooses the page that has not been used for the longest period of time. The result of applying LRU to the reference string is shown below. In practice the LRU policy is considered quite good and is widely used as a page-replacement algorithm, but it requires hardware assistance to determine an order for the frames defined by their time of use. This can be implemented using a stack or counters.

Optimal Page Replacement
LRU Page Replacement
Least Recently Used (LRU) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Counter implementation: every page-table entry has a counter; every time the page is referenced through this entry, the clock is copied into the counter. When a page needs to be replaced, the counters are examined to find the least recently used page.
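LRU itself is easy to simulate by keeping the resident pages ordered by recency; a real implementation would rely on the counter or stack hardware described above.

```python
def lru_faults(refs, n_frames):
    """Count page faults under LRU replacement."""
    frames = []                        # index 0 = least recently used
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)        # hit: refresh the page's recency
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)          # evict the least recently used page
        frames.append(page)            # page is now the most recently used
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))   # 8
```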
Counting Algorithms
A counter kept in the page table stores the number of references to each page; these counters can be used to implement the following two schemes.
LFU algorithm: the least frequently used page-replacement algorithm requires that the page with the smallest count be replaced, on the argument that an actively used page should have a large reference count. Its disadvantage is that a page used heavily during the initial phase of a process, but never used again, retains its large count and remains in memory even though it is no longer needed.
MFU algorithm: the most frequently used page-replacement algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used. Neither MFU nor LFU replacement is common, as implementing these algorithms is fairly expensive.
Page-buffering algorithm: in addition to a page-replacement algorithm, systems commonly keep supporting routines that improve performance. For example, systems keep a pool of free frames (page buffers). When a page fault occurs, a victim frame is chosen as before, but the desired page is read into a free frame from the pool before the victim is written out. This allows the process to restart immediately, without waiting for the victim page to be written; later, after the victim is written out, its frame is added to the free-frame pool.
LRU Approximation Algorithms: few computer systems provide full hardware support for LRU; many provide support in the form of a reference bit. The reference bit for a page is set by the hardware whenever that page is referenced. Reference bits are associated with each entry in the page table.
Initially all bits are cleared to zero by the operating system. As a user process executes, the bit associated with each referenced page is set to 1 by the hardware. After some time, it can be determined whether a page was used or not by examining its reference bit. The order of usage is not known, but it is possible to know whether each page was used. This information leads to the LRU approximation algorithms.

a) Additional-reference-bits algorithm
This is implemented using an 8-bit byte for each page in a table in memory. At regular intervals (say every 100 ms) a timer interrupt transfers control to the operating system. The operating system shifts the reference bit of each page into the high-order bit of its 8-bit byte, shifting the other bits right by one and discarding the low-order bit. These 8-bit shift registers contain the history of page use for the last eight time periods. A page used at least once in each period has the value 1111 1111. A page with history value 1100 0100 has been used more recently than a page with value 0111 0111. If these 8-bit bytes are interpreted as unsigned integers, the page with the lowest number is the LRU page, and it can be replaced.

Consider an instance with three page frames: at time interval 1, page frame 0 is referenced; at interval 2, frames 1 and 2 are referenced; at interval 3, frames 0 and 2 are referenced. The counter values are then:

Event | Page frame 0 | Page frame 1 | Page frame 2
Before clock | 0000 0000 | 0000 0000 | 0000 0000
At time interval 1 | 1000 0000 | 0000 0000 | 0000 0000
At time interval 2 | 0100 0000 | 1000 0000 | 1000 0000
At time interval 3 | 1010 0000 | 0100 0000 | 1100 0000

The page with the lowest value is the LRU page; here frame 1 is selected as the victim for replacement.
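The shift-register table above can be reproduced mechanically (the helper name is ours):

```python
def age(counters, referenced):
    """One timer tick: shift each 8-bit history right and inject the
    current interval's reference bit into the high-order position."""
    return [((0x80 if i in referenced else 0) | (c >> 1))
            for i, c in enumerate(counters)]

counters = [0, 0, 0]                     # three page frames, history cleared
for refs in ({0}, {1, 2}, {0, 2}):       # the three intervals from the text
    counters = age(counters, refs)

print([format(c, "08b") for c in counters])
# ['10100000', '01000000', '11000000'] -> frame 1 has the lowest value (LRU)
```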
b) Second-Chance Algorithm
The basis of the second-chance algorithm is FIFO replacement. When a page has been selected, its reference bit is checked. If the value is zero, the page is replaced. If the reference bit is 1, the page is given a second chance: its reference bit is cleared and its arrival time is reset to the current time. Thus a page that is given a second chance will not be replaced until all other pages have been replaced or given second chances. One way to implement this algorithm is with a circular queue.

Second-Chance (Clock) Page-Replacement Algorithm

Global vs. Local Allocation
- Global replacement – a process selects a replacement frame from the set of all frames; one process can take a frame from another.
- Local replacement – each process selects only from its own set of allocated frames.

10.5 Thrashing
Thrashing occurs when a process does not have enough frames to execute. If a process does not have the number of frames it needs, it will very quickly page-fault. At that point it must replace some page; however, since all its pages are in active use, it must replace a page that will be needed again right away. Consequently it quickly faults again and again, replacing pages for which it will fault once more. This high paging activity is called thrashing: a process is thrashing if it is spending more time paging than executing.
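Second chance can be sketched as a FIFO queue of (page, reference-bit) pairs; here the circular queue is modelled by moving pages from the head to the tail of a list.

```python
def second_chance_faults(refs, n_frames):
    """Count page faults under the second-chance (clock) algorithm."""
    queue = []                     # entries are [page, ref_bit]; head = oldest
    faults = 0
    for page in refs:
        entry = next((e for e in queue if e[0] == page), None)
        if entry is not None:
            entry[1] = 1           # hardware sets the reference bit on use
            continue
        faults += 1
        if len(queue) < n_frames:
            queue.append([page, 0])
            continue
        while queue[0][1] == 1:    # second chance: clear the bit, re-queue
            old = queue.pop(0)
            old[1] = 0
            queue.append(old)
        queue.pop(0)               # reference bit is 0: this page is the victim
        queue.append([page, 0])
    return faults

print(second_chance_faults([1, 2, 1, 3, 4], 3))   # 4 -- page 1's bit saves it
```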
Causes of Thrashing
If CPU utilization is too low, the degree of multiprogramming is increased by introducing new processes to the system. Suppose a global replacement algorithm is used, replacing pages with no regard to which process they belong to. If a process needs more frames, it starts faulting and takes frames away from other processes; those processes then fault as well and queue up waiting for the paging device. The ready queue empties, so CPU utilization decreases. The CPU scheduler sees this and increases the degree of multiprogramming, which causes more page faults and lengthens the queue for the paging device. CPU utilization drops even further, the CPU scheduler tries to raise the degree of multiprogramming yet again, and thrashing has occurred: system throughput plunges because processes are spending all their time paging.

The effects of thrashing can be limited by using a local replacement algorithm: if a process starts thrashing, it cannot steal frames from another process and cause the latter to thrash, since pages are replaced only from among the frames of the process they belong to.

Thrashing (Cont.)
Working-Set Strategy
To prevent thrashing, a process must be allocated as many frames as it needs. This strategy starts by looking at how many frames a process is actually using. It exploits locality: the set of pages actively used together; a process executes by moving from locality to locality.
1. Δ ≡ working-set window, a fixed number of page references (example: 10,000 instructions).
2. WSSi (working-set size of process Pi) = total number of distinct pages referenced in the most recent Δ references (this varies in time):
   a. if Δ is too small, it will not encompass the entire locality;
   b. if Δ is too large, it will encompass several localities;
   c. if Δ = ∞, it will encompass the entire program.
3. D = Σ WSSi ≡ total demand for frames.
4. If D > m (the number of available frames), thrashing occurs.
5. Policy: if D > m, suspend one of the processes.
The working set can be approximated with an interval timer plus a reference bit. Example, with Δ = 10,000:
a. the timer interrupts after every 5,000 time units;
b. keep 2 history bits in memory for each page;
c. whenever the timer interrupts, copy and then clear (set to 0) the reference-bit values of all pages;
d. if one of the bits in memory is 1, the page is in the working set.
Why is this not completely accurate? The exact moment of a reference within an interval is unknown. Improvement: use 10 history bits and interrupt every 1,000 time units.

Working-set model
The principle of locality states that processes have the tendency to refer to storage in non-uniform but highly localized patterns. Locality can be expressed in terms of both time and space. Locality with reference to time is called temporal locality: a recently referenced set of locations is likely to be referenced again in the near future.

Page-Fault Frequency Scheme
a. Establish an "acceptable" page-fault rate:
   i. if the actual rate is too low, the process loses a frame;
   ii. if the actual rate is too high, the process gains a frame.
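The working set defined above is just the distinct pages in the last Δ references; a direct (non-approximated) computation of WSS can be sketched as follows, with the reference string chosen purely for illustration.

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent `delta` references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 3, 3, 2]
ws = working_set(refs, len(refs) - 1, 5)   # window of the last 5 references
print(sorted(ws), "WSS =", len(ws))        # [2, 3, 4] WSS = 3
```

Summing WSS over all processes gives D; if D exceeds the number of available frames m, the policy above suspends a process.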
The page-fault frequency scheme is a direct approach: if the page-fault rate is high, the process needs more frames; if it is low, the process has too many frames. An upper and a lower bound on the desired page-fault rate can be established. If the actual page-fault rate exceeds the upper limit, the process is allocated another frame; if it falls below the lower limit, a frame is removed from the process. Thus the page-fault rate can be controlled directly to prevent thrashing.
************************