BITS Pilani, Pilani Campus
BITS Pilani
Pilani Campus
Operating Systems
Dr. Mayuri Digalwar
Computer Science and Information Systems Department
BITS, Pilani
Operating Systems: Virtual Memory
Lecture 27
(06/04/2018)
Virtual Memory
• Virtual memory is a technique that allows the
execution of processes that are not completely
in memory.
• Benefit is – Programs can be larger than
physical memory.
• Programmer no longer has to worry about the
amount of memory available.
CS C372 OPERATING SYSTEMS 3
Background
• One basic requirement to execute a program is that
instructions need to be in memory before execution, but
entire program is rarely used
– E.g. Error code, unusual routines, large data structures
• Entire program code not needed at same time
• Consider ability to execute partially-loaded program
– Program size will no longer be constrained by limits of size
of physical memory
– Program / programs could be larger than physical memory
– Also less I/O would be needed to load or swap user
programs into memory, so each user program would run
faster
Virtual Memory That is
Larger Than Physical Memory
Virtual-address Space
• Virtual address space of a process refers to the logical
view of how a process is stored in memory.
• But physical memory is organized in page frames and
these page frames are not necessarily contiguous.
• The large blank hole between heap and stack is part of
virtual address space but physical pages will only be
allocated when heap or stack grows.
• Such a virtual address space that includes holes is
known as a sparse address space.
Demand Paging
• There are two options for loading an executable program into memory from disk
– Bring the entire process into memory at execution time
• There are problems with this approach
– Bring a page into memory only when it is needed (Demand Paging). With
this approach pages are loaded only when they are demanded during program
execution. Pages that are never accessed are never loaded into memory.
• Less I/O needed, no unnecessary I/O
• Less memory needed
• When a page is needed -> reference to it
– invalid reference -> abort
– not-in-memory -> bring into memory
• Demand paging is similar to a paging system with swapping.
• Lazy swapper – never swaps a page into memory unless that page will be
needed
– A swapper that deals with pages is a pager
Transfer of a Paged Memory to contiguous Disk Space
Valid-Invalid Bit
• With each page table entry a valid–invalid bit is associated
(v -> in-memory / memory resident, i -> not-in-memory)
• Initially the valid–invalid bit is set to i on all entries
• Example of a page table snapshot: a frame # column with a
valid–invalid bit per entry, e.g. v, v, v, v, i, i, i, …
• During address translation, if the valid–invalid bit in the page table entry
is i -> page fault
Page table when some pages are not in main memory
Page Fault
• The first reference to a page that is not in memory will
trap to the operating system:
page fault
1. Operating system looks at an internal table to decide:
– Invalid reference -> abort
– Just not in memory -> continue below
2. Get an empty frame
3. Swap the page into the frame via a scheduled disk operation
4. Reset tables to indicate the page is now in memory
Set validation bit = v
5. Restart the instruction that caused the page fault
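The fault-handling steps above can be sketched as a toy demand pager. All names here (`PageTable`, `DemandPager`, `access`) are illustrative, and the disk read and replacement policy are deliberately left out — this only models the valid-bit check and table update:

```python
# Toy demand pager: each page-table entry holds a frame number and a valid bit.
class PageTable:
    def __init__(self, num_pages):
        # Every entry starts invalid (bit = i), as on the slide.
        self.entries = {p: {"frame": None, "valid": False} for p in range(num_pages)}

class DemandPager:
    def __init__(self, num_pages, num_frames):
        self.table = PageTable(num_pages)
        self.free_frames = list(range(num_frames))
        self.faults = 0

    def access(self, page):
        entry = self.table.entries.get(page)
        if entry is None:
            raise MemoryError("invalid reference -> abort")   # step 1: illegal access
        if not entry["valid"]:                                # valid bit = i -> page fault
            self.faults += 1
            frame = self.free_frames.pop(0)                   # step 2: get an empty frame
            # step 3: the disk read of the page into `frame` would happen here
            entry["frame"], entry["valid"] = frame, True      # step 4: reset tables, bit = v
        return entry["frame"]                                  # step 5: restart instruction

pager = DemandPager(num_pages=8, num_frames=4)
pager.access(0); pager.access(1); pager.access(0)
print(pager.faults)  # 2: pages 0 and 1 each fault once; the re-access does not
```

What happens when `free_frames` runs dry is exactly the page-replacement question the later slides address.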
Aspects of Demand Paging
• Extreme case – start process with no pages in memory
– OS sets instruction pointer to the first instruction of the process; it is
non-memory-resident -> page fault
– Page faults likewise occur on the first access to every other page
– Pure demand paging
• Actually, a given instruction could access multiple pages -> multiple
page faults
– Consider fetch and decode of an instruction that adds 2 numbers
from memory and stores the result back to memory
– Pain decreased because of locality of reference
• Hardware support needed for demand paging
– Page table with valid / invalid bit
– Secondary memory (swap device with swap space)
– Instruction restart
Steps in Handling a Page Fault
Performance of Demand Paging
• Stages in Demand Paging (worst case)
1. Trap to the operating system
2. Save the user registers and process state
3. Determine that the interrupt was a page fault
4. Check that the page reference was legal and determine the location of the page on the
disk
5. Issue a read from the disk to a free frame:
1. Wait in a queue for this device until the read request is serviced
2. Wait for the device seek and/or latency time
3. Begin the transfer of the page to a free frame
6. While waiting, allocate the CPU to some other user
7. Receive an interrupt from the disk I/O subsystem (I/O completed)
8. Save the registers and process state for the other user
9. Determine that the interrupt was from the disk
10. Correct the page table and other tables to show page is now in memory
11. Wait for the CPU to be allocated to this process again
12. Restore the user registers, process state, and new page table, and then resume the
interrupted instruction
Performance of Demand Paging (Cont.)
• Three major activities
– Service the interrupt – careful coding means just several hundred instructions
needed
– Read the page – lots of time
– Restart the process – again just a small amount of time
• Page Fault Rate 0 ≤ p ≤ 1
– if p = 0, no page faults
– if p = 1, every reference is a fault
• Effective Access Time (EAT)
EAT = (1 – p) x memory access
+ p x (page fault overhead
+ swap page out
+ swap page in)
• Example: memory access time = 200 nanoseconds,
average page-fault service time = 8 milliseconds
EAT = (1 – p) x 200 + p x 8,000,000 ns
= 200 + p x 7,999,800 ns
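The EAT formula with the slide's numbers can be checked directly; a minimal sketch (the function name `eat_ns` is illustrative):

```python
# Effective access time for demand paging, in nanoseconds.
# Slide's numbers: memory access = 200 ns, page-fault service = 8 ms = 8,000,000 ns.
def eat_ns(p, mem_ns=200, fault_ns=8_000_000):
    """EAT = (1 - p) * memory access + p * page-fault service time."""
    return (1 - p) * mem_ns + p * fault_ns

print(eat_ns(0.0))    # 200.0 ns: no faults, pure memory speed
print(eat_ns(0.001))  # 8199.8 ns: one fault per 1000 accesses slows memory ~40x
```

The second line illustrates why the fault rate must be kept tiny: even p = 0.001 is dominated by the 8 ms disk service time.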
What Happens if There is no Free Frame?
• All of memory is used up by process pages
• Memory is also in demand from the kernel, I/O buffers, etc.
• How much to allocate to each?
• Page replacement – find some page in memory,
but not really in use, page it out
– Algorithm – terminate? swap out? replace the page?
– Performance – want an algorithm which will result in
minimum number of page faults
• Same page may be brought into memory several
times
Page Replacement
• Use modify (dirty) bit to reduce overhead of
page transfers – only modified pages are written
to disk
• Page replacement completes separation between
logical memory and physical memory – large
virtual memory can be provided on a smaller
physical memory
Need For Page Replacement
Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page replacement algorithm to
select a victim frame
- Write victim frame to disk if dirty
3. Bring the desired page into the (newly) free frame; update the page
and frame tables
4. Continue the process by restarting the instruction that caused the
trap
Note now potentially 2 page transfers for page fault – increasing EAT
Page Replacement
Page and Frame Replacement Algorithms
• Frame-allocation algorithm determines
– How many frames to give each process
– Which frames to replace
• Page-replacement algorithm
– Want lowest page-fault rate on both first access and re-access
• Evaluate algorithm by running it on a particular string of
memory references (reference string) and computing the
number of page faults on that string
– String is just page numbers, not full addresses
– Repeated access to the same page does not cause a page fault
– Results depend on number of frames available
• In all our examples, the reference string of referenced page
numbers is
7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
Graph of Page Faults Versus The Number of Frames
First-In-First-Out (FIFO) Algorithm
• Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
• 3 frames (3 pages can be in memory at a time per process)
• Result: 15 page faults
• Can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5
– Adding more frames can cause more page faults!
• Belady’s Anomaly
• How to track ages of pages?
– Just use a FIFO queue
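The FIFO policy is easy to simulate; a minimal sketch (the function name `fifo_faults` is illustrative) reproducing the 15 faults above and Belady's Anomaly:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with `frames` frames."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page in memory:
            continue                      # repeated access: no fault
        faults += 1
        if len(memory) == frames:         # no free frame: evict the oldest page
            memory.discard(queue.popleft())
        memory.add(page)
        queue.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))               # 15 faults, as on the slide

# Belady's Anomaly: adding a frame makes this string fault MORE often.
belady = [1,2,3,4,1,2,5,1,2,3,4,5]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))  # 9 vs 10
```

The FIFO queue tracks arrival order only, so a heavily used page is evicted just as readily as a dead one — which is exactly what the anomaly exploits.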
FIFO Illustrating Belady’s Anomaly
Optimal Algorithm
• Replace the page that will not be used for the longest period of time
– 9 page faults is optimal for the example above
• How do you know this?
– Can’t read the future
• Used for measuring how well your algorithm performs
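Even though OPT cannot be implemented online, it can be simulated offline on a recorded reference string; a minimal sketch (`opt_faults` is an illustrative name):

```python
def opt_faults(refs, frames):
    """Optimal replacement: evict the page whose next use is farthest in the future."""
    memory, faults = set(), 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            def next_use(p):
                # Position of the next reference to p; never-used-again wins eviction.
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")
            memory.discard(max(memory, key=next_use))
        memory.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(refs, 3))  # 9 faults: the optimum quoted on the slide
```

This is the benchmark a real algorithm is measured against: LRU's 12 faults on the same string sit between FIFO's 15 and this lower bound of 9.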
Page Replacement Algorithms: Virtual
Memory
Lecture 28
9/4/2018
FIFO Vs. OPT Algorithm
FIFO Algorithm
• Looks backward while searching for a victim page
• Uses the time when the page was brought into memory
OPT Algorithm
• Looks forward while searching for a victim page
• Uses the time when the page will next be used
• LRU combines the logic of the above two algorithms
• Looks backward in time
• Selects a page that has not been used recently
Least Recently Used (LRU) Algorithm
• Use past knowledge rather than future
• Replace the page that has not been used for the longest period of time
• Associate the time of last use with each page
• 12 faults – better than FIFO but worse than OPT
• Generally a good algorithm and frequently used
• But how to implement?
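Before looking at real implementations, the policy itself can be simulated; a minimal sketch (`lru_faults` is an illustrative name) confirming the 12 faults:

```python
def lru_faults(refs, frames):
    """LRU replacement: evict the page unused for the longest time."""
    memory, faults = [], 0   # list ordered least- to most-recently used
    for page in refs:
        if page in memory:
            memory.remove(page)          # re-reference: move to the MRU end
            memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:
            memory.pop(0)                # evict the LRU page at the front
        memory.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))  # 12 faults: between FIFO (15) and OPT (9)
```

The `memory.remove` on every hit is exactly the cost the counter and stack implementations on the next slides try to manage in hardware.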
LRU Algorithm (Cont.)
• Counter implementation and Stack implementation
• Counter implementation:
– A time-of-use field (or counter) is associated with each entry in the page table
– Every time a page is referenced, a logical clock is incremented
– The clock value is copied into the counter field of the page table entry
– This gives the time of last reference for each page
– The smallest counter value identifies the least recently used page
– This requires the system to write the time value to the page table at every
reference
– A search through the whole table is required to find the smallest value
LRU Algorithm (Cont.)
• Stack Implementation
– Keep a stack of page numbers in a doubly linked list
– When a page is referenced:
• It is removed from the stack and put on top. This results in:
– Most recently used page on top of the stack
– Least recently used page at the bottom of the stack
– Requires 6 pointers to be changed
– Each update is a little expensive, but no search is required for
replacement
• LRU and OPT are examples of stack algorithms, which do not suffer from Belady’s
Anomaly
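The stack of page numbers can be sketched with Python's `OrderedDict`, whose internal doubly linked list gives the same O(1) unlink-and-push-on-top behavior (the class name `LRUStack` is illustrative):

```python
from collections import OrderedDict

class LRUStack:
    """The slide's stack of page numbers: front = LRU (bottom), back = MRU (top)."""
    def __init__(self, frames):
        self.frames = frames
        self.stack = OrderedDict()

    def reference(self, page):
        """Record a reference; return True if it caused a page fault."""
        if page in self.stack:
            self.stack.move_to_end(page)    # unlink + put on top: O(1), no search
            return False
        if len(self.stack) == self.frames:
            self.stack.popitem(last=False)  # victim = bottom of the stack (LRU)
        self.stack[page] = True
        return True

s = LRUStack(3)
refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(sum(s.reference(p) for p in refs))  # 12, matching the LRU example
```

Each update touches only a handful of pointers (the slide counts 6 in a raw doubly linked list), and replacement reads the bottom entry directly instead of searching.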
Use Of A Stack to Record Most Recent Page References
Page Replacement and Allocation
Algorithms: Virtual Memory
Lecture 29
(11/4/2018)
LRU Approximation Algorithms
• LRU needs special hardware and is still slow
• Reference bit
– With each page associate a bit, initially = 0
– When the page is referenced, the bit is set to 1
– Replace any page with reference bit = 0 (if one exists)
• We do not know the order, however
• Additional-Reference-Bits Algorithm
– Gains ordering information by recording the reference bits at regular intervals
– Keep an 8-bit byte (the width can vary) for each page table entry in a shift
register
– This gives reference-bit ordering information for the last eight intervals
– 00000000 means the page was not referenced in the last eight intervals, so it can be
replaced; 11111111 means the page was referenced in every interval
– Interpreting the byte as an unsigned integer, the smallest number indicates the page
to be replaced
– But this needs extra hardware. Let’s see if a single reference bit can provide
ordering information
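The shift-register aging step can be sketched in a few lines (the function name `age_pages` and the three-tick scenario are illustrative):

```python
# Additional-reference-bits sketch: each page keeps an 8-bit history byte.
# At each timer interval the OS shifts the page's current reference bit
# into the high-order bit of the byte, pushing older history toward bit 0.

def age_pages(history, ref_bits):
    """One timer tick: shift each page's reference bit into its history byte."""
    return {p: ((history[p] >> 1) | (ref_bits.get(p, 0) << 7)) & 0xFF
            for p in history}

history = {0: 0, 1: 0, 2: 0}
# Three intervals: page 1 referenced every time, page 0 once, page 2 never.
for refs in ({0: 1, 1: 1}, {1: 1}, {1: 1}):
    history = age_pages(history, refs)

victim = min(history, key=history.get)
print(history, victim)  # page 2 keeps byte 0b00000000, so it is replaced first
```

Reading the bytes as unsigned integers orders the pages: recently referenced pages have large values (high bits set), idle pages decay toward zero.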
Second Chance Algorithm
• Generally FIFO, plus a hardware-provided reference bit. Also
called the clock replacement algorithm.
• Initially, when a page is first referenced, its reference bit is set to 0
and it is brought into memory
• When it is referenced next, its reference bit is set to 1, and there is
no page fault
• If the page to be replaced has
– Reference bit = 0 -> replace it
– Reference bit = 1:
• Give this page a second chance to stay in memory
• Set reference bit to 0, leave the page in memory
• Replace the next page, subject to the same rules
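The rules above can be sketched as a clock-hand simulation (the name `clock_replace` is illustrative; as on the slide, a page's reference bit starts at 0 on load and is set on re-reference):

```python
def clock_replace(refs, frames):
    """Second-chance (clock): FIFO order, but a set reference bit buys a reprieve."""
    pages, bits, hand, faults = [], [], 0, 0
    for page in refs:
        if page in pages:
            bits[pages.index(page)] = 1      # re-reference: set ref bit, no fault
            continue
        faults += 1
        if len(pages) < frames:              # free frame still available
            pages.append(page); bits.append(0)
            continue
        while bits[hand] == 1:               # second chance: clear bit, advance hand
            bits[hand] = 0
            hand = (hand + 1) % frames
        pages[hand], bits[hand] = page, 0    # victim had bit 0: replace it
        hand = (hand + 1) % frames
    return faults

print(clock_replace([1, 2, 3, 1, 4], 3))  # 4 faults: page 1's set bit spares it, page 2 is evicted
```

With every bit set, the hand sweeps the whole circle clearing bits and the algorithm degenerates to plain FIFO, which is the worst case noted for clock schemes.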
Second-Chance (clock) Page-Replacement
Algorithm
Enhanced Second-Chance Algorithm
• Improve algorithm by using reference bit and modify bit (if
available) in combination.
• Take the ordered pair (reference, modify)
1. (0, 0) neither recently used nor modified – best page to replace
2. (0, 1) not recently used but modified – not quite as good; must
write out before replacement
3. (1, 0) recently used but clean – probably will be used again soon
4. (1, 1) recently used and modified – probably will be used again
soon and needs to be written out before replacement
• When page replacement is called for, use the clock scheme but with
the four classes: replace a page in the lowest non-empty class
– Might need to search the circular queue several times
Counting Algorithms
• Keep a counter of the number of references that have been
made to each page
– Not common
• Least Frequently Used (LFU) Algorithm: replaces the page
with the smallest count
• Most Frequently Used (MFU) Algorithm: based on the
argument that the page with the smallest count was probably
just brought in and has yet to be used, so replace page with
largest count
Allocation of Frames: Virtual
Memory
Lecture 30
Allocation Algorithms
• With multiple processes in memory, the OS needs to decide how many
frames to allocate to each process.
• Designing frame-allocation and page-replacement algorithms is
important because
– Disk I/O is very expensive in terms of time
– Slight improvements in demand-paging methods yield large
gains in system performance
Allocation of Frames
• Each process needs a minimum number of frames
• Reasons
– Performance: as the number of frames allocated to a process decreases,
the page-fault rate increases, slowing the process’s execution
– An instruction needs to be restarted after the page fault is handled
• The minimum number of frames is defined by the computer architecture
• The maximum, of course, is the total frames in the system
• In between, we can make significant choices in frame allocation
• Two major allocation schemes
– Fixed Allocation
• Equal allocation: with m frames and n processes, each process is
allocated m/n frames
• Proportional allocation
– Based on the size of the process
Fixed Allocation
• Equal allocation – For example, if there are 100 frames (after allocating frames for
the OS) and 5 processes, give each process 20 frames
– Keep some as a free-frame buffer pool
• Proportional allocation – Allocate according to the size of the process
– Dynamic, as the degree of multiprogramming and process sizes change
s_i = size of process p_i
S = Σ s_i
m = total number of frames
a_i = allocation for p_i = (s_i / S) × m
Example: m = 62, s_1 = 10, s_2 = 127
a_1 = (10 / 137) × 62 ≈ 4
a_2 = (127 / 137) × 62 ≈ 57
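The proportional-allocation arithmetic can be checked in one line of code; a minimal sketch (the function name `proportional` is illustrative, and it truncates to integers as the slide's example does):

```python
# Proportional allocation: a_i = (s_i / S) * m, truncated to an integer.
# Slide's numbers: m = 62 frames, process sizes s1 = 10 and s2 = 127.
def proportional(sizes, m):
    S = sum(sizes)
    return [s * m // S for s in sizes]   # integer frames per process

print(proportional([10, 127], 62))  # [4, 57]
```

Truncation leaves one of the 62 frames unassigned here (4 + 57 = 61); a real allocator would hand leftovers out by some tie-breaking rule or keep them in a free pool.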
Priority Allocation
• Use a proportional allocation scheme using
priorities rather than size
• If process Pi generates a page fault,
– select for replacement one of its own frames, or
– select for replacement a frame from a process
with a lower priority number
Global vs. Local Allocation
• Global replacement – It allows a process to select a
replacement frame from the set of all frames; one process
can take a frame from another
– Higher priority process can use the frames allocated to
lower priority process
• Local replacement – each process selects from only its
own set of allocated frames
– More consistent per-process performance
– But possibly underutilized memory
Non-Uniform Memory Access
• So far all memory accessed equally
• Many systems are NUMA – speed of access to memory varies
– Consider system boards containing CPUs and memory, interconnected over
a system bus
• Optimal performance comes from allocating memory “close to” the CPU on
which the thread is scheduled
– And modifying the scheduler to schedule the thread on the same system
board when possible
Thrashing
• If a process does not have “enough” pages, the page-fault
rate is very high
– Page fault to get page
– Replace existing frame
– But quickly need replaced frame back
– This leads to:
• Low CPU utilization
• Operating system thinks that it needs to increase the degree of
multiprogramming
• Another process added to the system
• Thrashing ≡ a process is busy swapping pages in and out
Cause of Thrashing
• Increasing Degree of Multiprogramming
• Use global page replacement
Thrashing (contd…)
• We can limit thrashing
– by using a local page-replacement algorithm
• If a process is thrashing, it cannot cause other processes to thrash
– but the average service time for a page fault is still large
• This affects the effective access time even of a process that is not thrashing
• Thrashing can be prevented if a process that is thrashing is given the number of
frames it needs
– How do we know its requirement?
– Working-set strategy, based on the locality model
• Locality model of process execution: as a process executes, it moves from
locality to locality
• A locality is a set of pages that are actively used together.
• A program is generally composed of several different localities
• For example, when a function is called, the process moves to a new locality
Locality Model: Thrashing
• Localities are defined by
– the program’s structure
– its data structures
– and the basic memory-reference pattern the program
exhibits while executing a function within a locality
• Page faults (and thus thrashing) can be avoided if enough
frames are allocated to a process to accommodate its
current locality, since the pages belonging to the current
locality are the pages it is actively using
Working Set Model
• Δ ≡ working-set window ≡ a fixed number of page references
• WSSi (working set size of process Pi) =
total number of distinct pages referenced in the most recent Δ references (varies in time)
– if Δ too small, it will not encompass the entire locality
– if Δ too large, it will encompass several localities
– if Δ = ∞, it will encompass the entire program
• The Working Set model is an approximation of the program’s locality
• For example, Δ = 10
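The definition above translates directly into code; a minimal sketch (the function name `working_set` and the sample reference string are illustrative):

```python
# Working set under the slide's model: the set of distinct pages referenced
# in the Delta most recent references ending at time t.
def working_set(refs, t, delta):
    window = refs[max(0, t - delta + 1): t + 1]   # the Delta most recent references
    return set(window)

refs = [1, 2, 1, 5, 7, 7, 7, 5, 1, 1]
ws = working_set(refs, t=9, delta=10)
print(sorted(ws), len(ws))  # working set {1, 2, 5, 7}, so WSS = 4
```

With Δ = 10 and this string the working set is {1, 2, 5, 7}, so the process would need at least 4 frames at time t = 9 to hold its current locality.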
Working Set Model
• Once Δ has been selected, the OS monitors the working set of each
process and allocates it enough frames
• If the sum of the working-set sizes exceeds the total
number of available frames:
– the OS selects a process to suspend
– pages belonging to the suspended process are swapped out
– the resulting free frames are allocated to the other
processes
– later, the suspended process can be restarted
• The working-set strategy prevents thrashing while keeping the
degree of multiprogramming as high as possible and
maintaining CPU utilization
• Queries ?
CS C372 OPERATING SYSTEMS 51

More Related Content

Similar to Lect 27 28_29_30_virtual_memory

16. PagingImplementIssused.pptx
16. PagingImplementIssused.pptx16. PagingImplementIssused.pptx
16. PagingImplementIssused.pptxMyName1sJeff
 
Unit 2chapter 2 memory mgmt complete
Unit 2chapter 2  memory mgmt completeUnit 2chapter 2  memory mgmt complete
Unit 2chapter 2 memory mgmt completeKalai Selvi
 
Chapter 9 - Virtual Memory
Chapter 9 - Virtual MemoryChapter 9 - Virtual Memory
Chapter 9 - Virtual MemoryWayne Jones Jnr
 
Virtual Memory Management
Virtual Memory ManagementVirtual Memory Management
Virtual Memory ManagementRahul Jamwal
 
Computer architecture virtual memory
Computer architecture virtual memoryComputer architecture virtual memory
Computer architecture virtual memoryMazin Alwaaly
 
381 ccs chapter7_updated(1)
381 ccs chapter7_updated(1)381 ccs chapter7_updated(1)
381 ccs chapter7_updated(1)Rabie Masoud
 
Operating system 38 page replacement
Operating system 38 page replacementOperating system 38 page replacement
Operating system 38 page replacementVaibhav Khanna
 
08 virtual memory
08 virtual memory08 virtual memory
08 virtual memoryKamal Singh
 
Virtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdfVirtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdfHarika Pudugosula
 
Memory Management-Muhammad Ahmad.ppt
Memory Management-Muhammad Ahmad.pptMemory Management-Muhammad Ahmad.ppt
Memory Management-Muhammad Ahmad.pptAliyanAbbas1
 
Windows Internal - Ch9 memory management
Windows Internal - Ch9 memory managementWindows Internal - Ch9 memory management
Windows Internal - Ch9 memory managementKent Huang
 
Ch10 OS
Ch10 OSCh10 OS
Ch10 OSC.U
 

Similar to Lect 27 28_29_30_virtual_memory (20)

Demand paging
Demand pagingDemand paging
Demand paging
 
CH09.pdf
CH09.pdfCH09.pdf
CH09.pdf
 
16. PagingImplementIssused.pptx
16. PagingImplementIssused.pptx16. PagingImplementIssused.pptx
16. PagingImplementIssused.pptx
 
Unit 2chapter 2 memory mgmt complete
Unit 2chapter 2  memory mgmt completeUnit 2chapter 2  memory mgmt complete
Unit 2chapter 2 memory mgmt complete
 
Chapter 9 - Virtual Memory
Chapter 9 - Virtual MemoryChapter 9 - Virtual Memory
Chapter 9 - Virtual Memory
 
Virtual memory
Virtual memoryVirtual memory
Virtual memory
 
Virtual Memory Management
Virtual Memory ManagementVirtual Memory Management
Virtual Memory Management
 
Sucet os module_4_notes
Sucet os module_4_notesSucet os module_4_notes
Sucet os module_4_notes
 
Operating System
Operating SystemOperating System
Operating System
 
Computer architecture virtual memory
Computer architecture virtual memoryComputer architecture virtual memory
Computer architecture virtual memory
 
381 ccs chapter7_updated(1)
381 ccs chapter7_updated(1)381 ccs chapter7_updated(1)
381 ccs chapter7_updated(1)
 
Operating system 38 page replacement
Operating system 38 page replacementOperating system 38 page replacement
Operating system 38 page replacement
 
08 virtual memory
08 virtual memory08 virtual memory
08 virtual memory
 
Distributed Operating System_3
Distributed Operating System_3Distributed Operating System_3
Distributed Operating System_3
 
Virtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdfVirtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdf
 
Memory Management-Muhammad Ahmad.ppt
Memory Management-Muhammad Ahmad.pptMemory Management-Muhammad Ahmad.ppt
Memory Management-Muhammad Ahmad.ppt
 
Windows Internal - Ch9 memory management
Windows Internal - Ch9 memory managementWindows Internal - Ch9 memory management
Windows Internal - Ch9 memory management
 
OSCh10
OSCh10OSCh10
OSCh10
 
Ch10 OS
Ch10 OSCh10 OS
Ch10 OS
 
OS_Ch10
OS_Ch10OS_Ch10
OS_Ch10
 

Recently uploaded

Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxk795866
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx959SahilShah
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptSAURABHKUMAR892774
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfAsst.prof M.Gokilavani
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort servicejennyeacort
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxbritheesh05
 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfROCENODodongVILLACER
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...asadnawaz62
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
 
Comparative Analysis of Text Summarization Techniques
Comparative Analysis of Text Summarization TechniquesComparative Analysis of Text Summarization Techniques
Comparative Analysis of Text Summarization Techniquesugginaramesh
 
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor CatchersTechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catcherssdickerson1
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girlsssuser7cb4ff
 
8251 universal synchronous asynchronous receiver transmitter
8251 universal synchronous asynchronous receiver transmitter8251 universal synchronous asynchronous receiver transmitter
8251 universal synchronous asynchronous receiver transmitterShivangiSharma879191
 
computer application and construction management
computer application and construction managementcomputer application and construction management
computer application and construction managementMariconPadriquez1
 
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEINFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEroselinkalist12
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 

Recently uploaded (20)

Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptx
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.ppt
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
POWER SYSTEMS-1 Complete notes examples
POWER SYSTEMS-1 Complete notes  examplesPOWER SYSTEMS-1 Complete notes  examples
POWER SYSTEMS-1 Complete notes examples
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptx
 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdf
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...
 
Lect 27 28_29_30_virtual_memory
– Bring a page into memory only when it is needed (Demand Paging): with this approach, pages are loaded only when they are demanded during program execution; pages that are never accessed are never loaded into memory
• Less I/O needed, no unnecessary I/O
• Less memory needed
• When a page is needed → reference it
– invalid reference → abort
– not-in-memory → bring to memory
• Demand paging is similar to a paging system with swapping
• Lazy swapper – never swaps a page into memory unless the page will be needed
– A swapper that deals with pages is a pager
Transfer of a Paged Memory to Contiguous Disk Space
Valid-Invalid Bit
• With each page-table entry a valid–invalid bit is associated (v → in-memory, i.e. memory resident; i → not-in-memory)
• Initially the valid–invalid bit is set to i on all entries
• Example page-table snapshot: each entry holds a frame # and a valid–invalid bit; the resident pages are marked v and the rest i
• During address translation, if the valid–invalid bit in the page-table entry is i → page fault
Page Table When Some Pages Are Not in Main Memory
Page Fault
• If there is a reference to a page, the first reference to that page will trap to the operating system: a page fault
1. The operating system looks at another table to decide:
– Invalid reference → abort
– Just not in memory → continue below
2. Get an empty frame
3. Swap the page into the frame via a scheduled disk operation
4. Reset tables to indicate the page is now in memory; set the validation bit = v
5. Restart the instruction that caused the page fault
Aspects of Demand Paging
• Extreme case – start a process with no pages in memory
– OS sets the instruction pointer to the first instruction of the process, which is not memory resident → page fault
– Every other page of the process is likewise faulted in on first access
– This is pure demand paging
• Actually, a given instruction could access multiple pages → multiple page faults
– Consider the fetch and decode of an instruction which adds 2 numbers from memory and stores the result back to memory
– The pain is decreased because of locality of reference
• Hardware support needed for demand paging
– Page table with valid / invalid bit
– Secondary memory (swap device with swap space)
– Instruction restart
Steps in Handling a Page Fault
Performance of Demand Paging
• Stages in demand paging (worst case):
1. Trap to the operating system
2. Save the user registers and process state
3. Determine that the interrupt was a page fault
4. Check that the page reference was legal and determine the location of the page on the disk
5. Issue a read from the disk to a free frame:
 a. Wait in a queue for this device until the read request is serviced
 b. Wait for the device seek and/or latency time
 c. Begin the transfer of the page to a free frame
6. While waiting, allocate the CPU to some other user
7. Receive an interrupt from the disk I/O subsystem (I/O completed)
8. Save the registers and process state for the other user
9. Determine that the interrupt was from the disk
10. Correct the page table and other tables to show the page is now in memory
11. Wait for the CPU to be allocated to this process again
12. Restore the user registers, process state, and new page table, and then resume the interrupted instruction
Performance of Demand Paging (Cont.)
• Three major activities
– Service the interrupt – careful coding means just several hundred instructions needed
– Read the page – lots of time
– Restart the process – again just a small amount of time
• Page-fault rate 0 ≤ p ≤ 1
– if p = 0, no page faults
– if p = 1, every reference is a fault
• Effective Access Time (EAT)
EAT = (1 – p) × memory access time + p × (page fault overhead + swap page out + swap page in)
• Example:
Memory access time = 200 nanoseconds
Average page-fault service time = 8 milliseconds
EAT = (1 – p) × 200 + p × 8,000,000
    = 200 + p × 7,999,800 nanoseconds
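The EAT formula above can be checked with a small sketch (Python is not part of the slides; the function name is ours):

```python
def effective_access_time(p, mem_ns=200, fault_service_ns=8_000_000):
    """EAT = (1 - p) * memory access time + p * page-fault service time, in ns."""
    return (1 - p) * mem_ns + p * fault_service_ns

print(effective_access_time(0.0))    # no faults: 200.0 ns
print(effective_access_time(0.001))  # one fault per 1000 accesses: 8199.8 ns
```

Note how even one fault per thousand references slows memory access by a factor of about 40, which is why keeping p small matters so much.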
What Happens if There is No Free Frame?
• All frames are used up by process pages
• Frames are also in demand from the kernel, I/O buffers, etc.
• How much memory to allocate to each?
• Page replacement – find some page in memory, but not really in use, and page it out
– Algorithm choices – terminate the process? swap it out? replace a page?
– Performance – want an algorithm which will result in the minimum number of page faults
• The same page may be brought into memory several times
Page Replacement
• Use a modify (dirty) bit to reduce the overhead of page transfers – only modified pages are written back to disk
• Page replacement completes the separation between logical memory and physical memory – a large virtual memory can be provided on a smaller physical memory
Need For Page Replacement
Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame:
– If there is a free frame, use it
– If there is no free frame, use a page-replacement algorithm to select a victim frame
– Write the victim frame to disk if dirty
3. Bring the desired page into the (newly) free frame; update the page and frame tables
4. Continue the process by restarting the instruction that caused the trap
• Note that there are now potentially 2 page transfers per page fault – increasing the EAT
Page Replacement
Page and Frame Replacement Algorithms
• Frame-allocation algorithm determines
– How many frames to give each process
– Which frames to replace
• Page-replacement algorithm
– Want the lowest page-fault rate on both first access and re-access
• Evaluate an algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string
– The string is just page numbers, not full addresses
– Repeated access to the same page does not cause a page fault
– Results depend on the number of frames available
• In all our examples, the reference string of referenced page numbers is
7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
Graph of Page Faults Versus the Number of Frames
First-In-First-Out (FIFO) Algorithm
• Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
• 3 frames (3 pages can be in memory at a time per process) → 15 page faults
• Results can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5
– Adding more frames can cause more page faults!
• This is Belady's Anomaly
• How to track the ages of pages?
– Just use a FIFO queue
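A minimal FIFO simulator (a sketch, not part of the slides) reproduces both the 15-fault count and Belady's Anomaly:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement: evict the oldest resident page."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.discard(queue.popleft())  # evict the page loaded earliest
            frames.add(page)
            queue.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))      # 15 faults, as on the slide

belady = [1,2,3,4,1,2,5,1,2,3,4,5]
print(fifo_faults(belady, 3))    # 9 faults
print(fifo_faults(belady, 4))    # 10 faults: more frames, MORE faults
```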
FIFO Illustrating Belady's Anomaly
Optimal Algorithm
• Replace the page that will not be used for the longest period of time
– 9 faults is optimal for the example reference string
• How do you know which page that is?
– You can't read the future
• Used as a benchmark for measuring how well your algorithm performs
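Although OPT is unrealizable online, it can be simulated offline on a known reference string by looking ahead. A sketch (our own helper, assuming Python):

```python
def opt_faults(refs, n_frames):
    """Optimal replacement: evict the page whose next use is farthest in the future."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            future = refs[i + 1:]
            def next_use(p):
                # pages never used again are the best victims (distance = infinity)
                return future.index(p) if p in future else float('inf')
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(refs, 3))  # 9 faults -- the optimum for this string
```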
Page Replacement Algorithms: Virtual Memory
Lecture 28 (09/04/2018)
FIFO vs. OPT Algorithm
• FIFO Algorithm
– Looks backward when searching for a victim page
– Uses the time when the page was brought into memory
• OPT Algorithm
– Looks forward when searching for a victim page
– Uses the time when the page will next be used during execution
• LRU combines the logic of the two algorithms above
– Looks backward in time
– Selects a page that has not been used recently
Least Recently Used (LRU) Algorithm
• Use past knowledge rather than the future
• Replace the page that has not been used for the longest period of time
• Associate the time of last use with each page
• 12 faults on the example reference string – better than FIFO but worse than OPT
• Generally a good algorithm and frequently used
• But how to implement it?
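An LRU simulation (again a sketch of our own, not from the slides) confirms the 12-fault count by tracking each page's time of last use, exactly as the counter implementation described next does:

```python
def lru_faults(refs, n_frames):
    """Count faults when the least recently used resident page is evicted."""
    frames, last_used, faults = set(), {}, 0
    for t, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                # victim = resident page with the smallest time-of-use "counter"
                frames.discard(min(frames, key=last_used.get))
            frames.add(page)
        last_used[page] = t   # update the page's time of last use
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))  # 12 faults
```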
LRU Algorithm (Cont.)
• Two implementations: counter implementation and stack implementation
• Counter implementation:
– A time-of-use field (or counter) is associated with each page-table entry
– Every time a page is referenced, a logical clock is incremented
– The clock value is copied into the counter field of the page-table entry
– This gives the time of the last reference to each page
– The smallest value of this counter identifies the least recently used page
– This requires the system to write the time value to the page table at every reference
– A search through the whole table is required to find the smallest value
LRU Algorithm (Cont.)
• Stack implementation
– Keep a stack of page numbers in a doubly linked list
– When a page is referenced, it is removed from the stack and put on the top. As a result:
• The most recently used page is on top of the stack
• The least recently used page is at the bottom of the stack
– Each reference requires up to 6 pointers to be changed
– Each update is a little expensive, but no search is required for replacement
• LRU and OPT are cases of stack algorithms that don't have Belady's Anomaly
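The stack idea maps naturally onto an ordered dictionary, which plays the role of the doubly linked list: a referenced page moves to one end in O(1), and the victim is taken from the other end without any search. A sketch (class name and structure are ours):

```python
from collections import OrderedDict

class LRUStack:
    """Stack of page numbers: most recent at one end, LRU victim at the other."""
    def __init__(self, n_frames):
        self.n_frames = n_frames
        self.stack = OrderedDict()     # insertion order tracks recency

    def reference(self, page):
        """Process one reference; return True if it caused a page fault."""
        if page in self.stack:
            self.stack.move_to_end(page)    # move to top of stack, no search
            return False
        if len(self.stack) == self.n_frames:
            self.stack.popitem(last=False)  # evict LRU page from the bottom
        self.stack[page] = True
        return True

mem = LRUStack(3)
refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
faults = sum(mem.reference(p) for p in refs)
print(faults)  # 12 -- same count as the counter implementation
```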
Use of a Stack to Record the Most Recent Page References
Page Replacement and Allocation Algorithms: Virtual Memory
Lecture 29 (11/04/2018)
LRU Approximation Algorithms
• Exact LRU needs special hardware and is still slow
• Reference bit
– With each page associate a bit, initially = 0
– When the page is referenced, the bit is set to 1
– Replace any page with reference bit = 0 (if one exists)
– We do not know the order of use, however
• Additional-Reference-Bits Algorithm
– Gain ordering information by recording the reference bits at regular intervals
– Keep an 8-bit byte (the width can vary) for each page-table entry in a shift register
– This gives reference-bit ordering information for the last eight intervals
– 00000000 means the page was not referenced in the last eight intervals, so it can be replaced; 11111111 means the page was referenced in every interval
– Interpreting the byte as an unsigned integer, the smallest number indicates the page to be replaced
– But this needs extra hardware. Let's see if a single reference bit can provide ordering information
Second Chance Algorithm
• Generally FIFO, plus a hardware-provided reference bit. Also called the clock replacement algorithm
• Initially, when a page is first brought into memory, its reference bit is set to 0
• When it is referenced next, its reference bit is set to 1, and there is no page fault
• If the page to be replaced has
– Reference bit = 0 → replace it
– Reference bit = 1:
• Give this page a second chance to stay in memory
• Set its reference bit to 0 and leave the page in memory
• Consider the next page, subject to the same rules
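The clock scheme above can be sketched as follows. This is our own illustration, and it follows the slide's convention that a newly loaded page starts with reference bit 0 (some texts instead load it with the bit set to 1, which changes the fault counts):

```python
def second_chance_faults(refs, n_frames):
    """Clock algorithm: FIFO order plus one reference bit per resident page."""
    pages, ref_bit, hand, faults = [], {}, 0, 0
    for page in refs:
        if page in ref_bit:
            ref_bit[page] = 1       # hit: hardware sets the reference bit
            continue
        faults += 1
        if len(pages) < n_frames:
            pages.append(page)      # free frame still available
        else:
            while ref_bit[pages[hand]] == 1:   # second chance: clear bit, advance
                ref_bit[pages[hand]] = 0
                hand = (hand + 1) % n_frames
            del ref_bit[pages[hand]]
            pages[hand] = page                 # replace the victim in place
            hand = (hand + 1) % n_frames
        ref_bit[page] = 0   # per the slide, a newly loaded page starts with bit 0
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(second_chance_faults(refs, 3))  # 11 faults under this convention
```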
Second-Chance (Clock) Page-Replacement Algorithm
Enhanced Second-Chance Algorithm
• Improve the algorithm by using the reference bit and modify bit (if available) in combination
• Take the ordered pair (reference, modify):
1. (0, 0) neither recently used nor modified – best page to replace
2. (0, 1) not recently used but modified – not quite as good, must write out before replacement
3. (1, 0) recently used but clean – probably will be used again soon
4. (1, 1) recently used and modified – probably will be used again soon and needs to be written out before replacement
• When page replacement is called for, use the clock scheme but with these four classes: replace a page in the lowest non-empty class
– Might need to search the circular queue several times
Counting Algorithms
• Keep a counter of the number of references that have been made to each page
– Not common
• Least Frequently Used (LFU) Algorithm: replaces the page with the smallest count
• Most Frequently Used (MFU) Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used, so replace the page with the largest count
Allocation of Frames: Virtual Memory
Lecture 30
Allocation Algorithms
• With multiple processes in memory, the OS needs to decide how many frames to allocate to each process
• Designing frame-allocation and page-replacement algorithms is important because
– Disk I/O is so expensive in terms of time
– A slight improvement in demand-paging methods yields a large gain in system performance
Allocation of Frames
• Each process needs a minimum number of frames
• Reasons
– Performance: as the number of frames allocated to a process decreases, the page-fault rate increases, slowing the process's execution
– An instruction must be restarted after a page fault is handled, so every page the instruction touches must be able to reside in memory at once
• The minimum number of frames is defined by the computer architecture
• The maximum, of course, is the total number of frames in the system
• In between, we can make significant choices in frame allocation
• Two major allocation schemes
– Fixed allocation
• Equal allocation: with m frames and n processes, each process is allocated m/n frames
• Proportional allocation – based on the size of the process
Fixed Allocation
• Equal allocation
– For example, if there are 100 frames (after allocating frames for the OS) and 5 processes, give each process 20 frames
– Keep some as a free-frame buffer pool
• Proportional allocation
– Allocate according to the size of the process
– Dynamic, as the degree of multiprogramming and process sizes change
– si = size of process pi, S = Σ si, m = total number of frames
– Allocation for pi: ai = (si / S) × m
– Example: m = 62, s1 = 10, s2 = 127
a1 = (10 / 137) × 62 ≈ 4
a2 = (127 / 137) × 62 ≈ 57
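The proportional-allocation arithmetic can be sketched directly (a helper of our own, truncating as the slide's example does):

```python
def proportional_allocation(sizes, m):
    """a_i = (s_i / S) * m, truncated to an integer, as in the slide's example."""
    S = sum(sizes)
    return [s * m // S for s in sizes]

# m = 62 frames, process sizes 10 and 127 pages (S = 137):
print(proportional_allocation([10, 127], 62))  # [4, 57]
```

Note that truncation leaves one frame of the 62 unassigned here; a real allocator would hand leftover frames out by some tie-breaking rule or keep them in a free pool.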
Priority Allocation
• Use a proportional allocation scheme based on priorities rather than size
• If process Pi generates a page fault,
– select for replacement one of its own frames, or
– select for replacement a frame from a process with a lower priority number
Global vs. Local Allocation
• Global replacement
– Allows a process to select a replacement frame from the set of all frames; one process can take a frame from another
– A higher-priority process can use the frames allocated to a lower-priority process
• Local replacement
– Each process selects only from its own set of allocated frames
– More consistent per-process performance
– But possibly underutilized memory
Non-Uniform Memory Access
• So far, all memory has been treated as equally fast to access
• Many systems are NUMA – the speed of access to memory varies
– Consider system boards containing CPUs and memory, interconnected over a system bus
• Optimal performance comes from allocating memory "close to" the CPU on which the thread is scheduled
– And modifying the scheduler to schedule the thread on the same system board when possible
Thrashing
• If a process does not have "enough" pages, the page-fault rate is very high
– Page fault to get a page
– Replace an existing frame
– But quickly need the replaced frame back
– This leads to:
• Low CPU utilization
• The operating system thinking it needs to increase the degree of multiprogramming
• Another process being added to the system, making things worse
• Thrashing ≡ a process is busy swapping pages in and out
Cause of Thrashing
• Increasing the degree of multiprogramming
• Use of global page replacement
Thrashing (contd.)
• We can limit thrashing
– By using a local page-replacement algorithm
• If a process is thrashing, it cannot cause other processes to thrash
– But the average service time for a page fault is still large
• This affects the effective access time even for a process that is not thrashing
• Thrashing can be prevented if a thrashing process is given the number of frames it needs
– How do we know its requirement?
– The working-set strategy, based on the locality model
• Locality model of process execution: as a process executes, it moves from locality to locality
• A locality is a set of pages that are actively used together
• A program is generally composed of several different localities
• For example, when a function is called, the process moves to a new locality
Locality Model: Thrashing
• Localities are defined by
– The program's structure
– Its data structures
– And the basic memory-reference pattern the program exhibits while executing a function in a locality
• Page faults (and hence thrashing) can be stopped if enough frames are allocated to a process to accommodate its current locality, since the pages belonging to the current locality are the pages it is actively using
Working-Set Model
• Δ ≡ working-set window ≡ a fixed number of page references
• WSSi (working-set size of process Pi) = total number of distinct pages referenced in the most recent Δ references (varies in time)
– if Δ is too small, it will not encompass the entire locality
– if Δ is too large, it will encompass several localities
– if Δ = ∞, it will encompass the entire program
• The working-set model is an approximation of the program's locality
• Example: Δ = 10
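The definition is easy to compute from a reference trace: the working set at time t is just the set of distinct pages in the last Δ references. A sketch with Δ = 10 on a hypothetical trace of our own (the slide's figure is not reproduced here):

```python
def working_set(refs, t, delta):
    """Working set at time t: distinct pages in the last delta references."""
    window = refs[max(0, t - delta + 1): t + 1]
    return set(window)

# Hypothetical reference trace; WSS varies as the process changes locality.
refs = [1,2,1,5,7,7,7,7,5,1, 6,2,3,4,1,2,3,4,4,4]
ws = working_set(refs, 9, 10)      # working set just after the 10th reference
print(sorted(ws), len(ws))         # [1, 2, 5, 7] 4  -> WSS = 4
```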
Working-Set Model (Cont.)
• Once Δ has been selected, the OS monitors the working set of each process and allocates it enough frames
• If the sum of the working-set sizes increases, exceeding the total number of available frames,
– the OS selects a process to suspend
– pages belonging to the suspended process are swapped out
– the resulting free frames are allocated to the other processes
– later, the suspended process can be restarted
• The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible, maintaining CPU utilization
• Queries?