Memory Management




                    1
Process
• One or more threads of execution
• Resources required for execution
  – Memory (RAM)
    • Program code (“text”)
    • Data (initialised, uninitialised, stack)
    • Buffers held in the kernel on behalf of the process
  – Others
    • CPU time
    • Files, disk space, printers, etc.
                                                       2
Some Goals of an Operating
            System
•   Maximise memory utilisation
•   Maximise CPU utilisation
•   Minimise response time
•   Prioritise “important” processes

• Note: Conflicting goals ⇒ tradeoffs
    – E.g. maximising CPU utilisation (by running
      many processes) increases (degrades)
      system response time.
                                                    3
Memory Management
• Keeps track of what memory is in use and
  what memory is free
• Allocates free memory to processes when
  needed
  – And deallocates it when they no longer need it
• Manages the transfer of memory between
  RAM and disk.


                                         4
Memory Hierarchy
• Ideally, programmers
  want memory that is
   – Fast
   – Large
   – Nonvolatile
• Not possible
• Memory manager
  coordinates how
  memory hierarchy is
  used.
   – Focus usually on
     RAM ⇔ Disk


                                      5
Memory Management
• Two broad classes of memory
  management systems
  – Those that transfer processes to and from
    disk during execution.
    • Called swapping or paging
  – Those that don’t
    • Simple
    • Might find this scheme in an embedded device,
      phone, smartcard, or PDA.

                                                      6
Basic Memory Management
Monoprogramming without Swapping or Paging




Three simple ways of organizing memory
- an operating system with one user process                    7
Monoprogramming
• Okay if
  – Only have one thing to do
  – Memory available approximately equates to
    memory required
• Otherwise,
  – Poor CPU utilisation in the presence of I/O
    waiting
  – Poor memory utilisation with a varied job mix

                                                8
Idea
• Subdivide memory and run more than one
  process at once!!!!
  – Multiprogramming, Multitasking




                                      9
Modeling Multiprogramming




                   Degree of multiprogramming


CPU utilization as a function of the number of processes in memory     10
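The missing plot here is, presumably, the standard multiprogramming model: if each process spends a fraction p of its time waiting for I/O, then with n independent processes in memory the CPU is idle only when all n are waiting, so utilisation ≈ 1 − p^n. A minimal sketch of that curve (the 80% I/O-wait figure is an assumption, not from the slides):

    /* Sketch (assumption: the classic multiprogramming model behind the
     * missing figure): with n processes each waiting on I/O a fraction p
     * of the time, CPU utilisation is approximately 1 - p^n. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double io_wait = 0.8;   /* assumed: each process waits on I/O 80% of the time */

        for (int n = 1; n <= 10; n++)
            printf("n=%2d  utilisation=%3.0f%%\n", n, (1.0 - pow(io_wait, n)) * 100.0);
        return 0;
    }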
Problem: How to divide memory
• One approach
  – divide memory into fixed
    equal-sized partitions
  – Any process <= partition
    size can be loaded into
    any partition

  (figure: memory divided into four fixed, equal-sized
   partitions holding Processes A, B, C and D)
                                       11
Simple MM: Fixed, equal-sized
          partitions
• Any unused space in the
  partition is wasted
  – Called internal
    fragmentation
• Processes smaller than
  main memory, but larger
  than a partition cannot
  run.

  (figure: the same four fixed partitions, Processes A–D)
                                    12
Simple MM: Fixed, variable-sized
              partitions
• Multiple Queues:
   – Place process in queue for smallest
     partition that it fits in.




                                           13
• Issue
  – Some partitions may
    be idle
     • Small jobs available,
       but only large partition
       free




                                  14
• Single queue, search
  for any job that fits
     • Small jobs in large
       partition if necessary
  – Increases internal
    memory fragmentation




                                15
Fixed Partition Summary
• Simple
• Easy to implement
• Can result in poor memory utilisation
  – Due to internal fragmentation
• Used on OS/360 operating system
  (OS/MFT)
  – Old mainframe batch system
• Still applicable for simple embedded
  systems
                                          16
Dynamic Partitioning
• Partitions are of variable length
• Process is allocated exactly what it needs
  – Assume a process knows what it needs




                                           17
Dynamic Partitioning
• In previous diagram
   – We have 16 meg free in total, but it can’t be used to
     run any more processes requiring > 6 meg as it is
     fragmented
   – Called external fragmentation
• We end up with unusable holes
• Reduce external fragmentation by compaction
   – Shuffle memory contents to place all free memory together in
     one large block.
   – Compaction is possible only if relocation is dynamic, and is done
     at execution time.




                                                                  20
Recap: Fragmentation
• External Fragmentation:
  – The space wasted external to the allocated memory
    regions.
  – Memory space exists to satisfy a request, but it is
    unusable as it is not contiguous.
• Internal Fragmentation:
  – The space wasted internal to the allocated memory
    regions.
  – allocated memory may be slightly larger than
    requested memory; this size difference is wasted
    memory internal to a partition.
                                                      21
Dynamic Partition Allocation
         Algorithms
• Basic Requirements
  – Quickly locate a free partition satisfying the
    request
  – Minimise external fragmentation
  – Efficiently support merging two adjacent free
    partitions into a larger partition




                                                 22
Classic Approach
• Represent available memory as a linked
  list of available “holes”.
  – Base, size
  – Kept in order of increasing address
    • Simplifies merging of adjacent holes into larger
      holes.

       (hole-list diagram: four nodes, each holding Address, Size, Link)

                                                         23
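As a concrete illustration of the list just described (a sketch, not code from the slides), each hole can be represented by a node carrying its base, size and a link to the next hole at a higher address:

    /* Illustrative only: one node of the address-ordered hole list. */
    struct hole {
        unsigned long base;     /* start address of this free partition */
        unsigned long size;     /* length of this free partition in bytes */
        struct hole  *next;     /* next hole, at a strictly higher address */
    };

    static struct hole *free_list;  /* head of the list, lowest address first */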
Coalescing Free Partitions with Linked
                Lists




Four neighbor combinations for the terminating
  process X
                                                 24
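A hedged sketch of the coalescing step, reusing the struct hole / free_list declarations above; because the list is address-ordered, only the immediate predecessor and successor holes need to be checked, which covers the four neighbour cases in the figure:

    #include <stdlib.h>   /* malloc/free for the list nodes (error checks omitted) */

    /* Return partition [base, base + size) to the free list, merging with the
     * neighbouring hole below and/or above it when they are adjacent. */
    void free_partition(unsigned long base, unsigned long size)
    {
        struct hole *prev = NULL, *next = free_list;

        while (next && next->base < base) {      /* walk to the insertion point */
            prev = next;
            next = next->next;
        }

        if (prev && prev->base + prev->size == base) {
            prev->size += size;                  /* case: merge with the hole below */
        } else {
            struct hole *h = malloc(sizeof *h);  /* case: new isolated hole */
            h->base = base;
            h->size = size;
            h->next = next;
            if (prev) prev->next = h; else free_list = h;
            prev = h;
        }

        if (next && prev->base + prev->size == next->base) {
            prev->size += next->size;            /* case: also merge with the hole above */
            prev->next  = next->next;
            free(next);
        }
    }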
Dynamic Partitioning Placement
         Algorithm
• First-fit algorithm
   – Scan the list for the first entry that fits
      • If greater in size, break it into an allocated and free part
      • Intent: Minimise amount of searching performed
   – Aims to find a match quickly
   – Generally can result in holes at the front end of
     memory that must be searched over when trying to
     find a free block.
   – May have lots of unusable holes at the beginning.
      • External fragmentation
   – Tends to preserve larger blocks at the end of memory
             (hole-list diagram as on slide 23)
                                                                       25
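First-fit over the same hole list might look like the following sketch (illustrative only; the chosen hole is split by trimming the allocation off its front):

    /* Return the base of an allocated block of `size` bytes, or 0 on failure
     * (0 is treated as "no memory" in this sketch).  Exhausted holes are left
     * in the list with size 0 rather than being unlinked, to keep it short. */
    unsigned long alloc_first_fit(unsigned long size)
    {
        for (struct hole *h = free_list; h != NULL; h = h->next) {
            if (h->size >= size) {               /* first hole that is big enough */
                unsigned long base = h->base;
                h->base += size;                 /* remaining free part */
                h->size -= size;
                return base;
            }
        }
        return 0;                                /* no hole large enough */
    }

Next-fit would be the same loop, except that the scan resumes from where the previous allocation succeeded rather than from the head of the list.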
Dynamic Partitioning Placement
         Algorithm
• Next-fit
   – Like first-fit, except it begins its search from the point
     in list where the last request succeeded instead of at
     the beginning.
      • Spread allocation more uniformly over entire memory
          – More often allocates a block of memory at the end of memory
            where the largest block is found
      • The largest block of memory is broken up into smaller blocks
           – May not be able to service large requests as well as first-fit



              (hole-list diagram as on slide 23)
                                                                              26
Dynamic Partitioning Placement
         Algorithm
• Best-fit algorithm
   – Chooses the block that is closest in size to the
     request
   – Poor performer
      • Has to search complete list
          – does more work than first- or next-fit
      • Since the block closest in size to the request is chosen, the leftover
        free fragment is as small as possible
          – Creates lots of tiny, unusable holes

             (hole-list diagram as on slide 23)
                                                                      27
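For comparison, a best-fit sketch over the same hole list; worst-fit is the identical loop with the size comparison reversed (pick the largest hole that fits):

    /* Scan the whole list and pick the smallest hole that still fits. */
    unsigned long alloc_best_fit(unsigned long size)
    {
        struct hole *best = NULL;

        for (struct hole *h = free_list; h != NULL; h = h->next)
            if (h->size >= size && (best == NULL || h->size < best->size))
                best = h;

        if (best == NULL)
            return 0;                            /* no hole large enough */

        unsigned long base = best->base;
        best->base += size;                      /* carve the request off the front */
        best->size -= size;
        return base;
    }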
Dynamic Partitioning Placement
         Algorithm
• Worst-fit algorithm
  – Chooses the block that is largest in size (worst-fit)
      • (whimsical) idea is to leave a leftover fragment that is still large enough to be usable
  – Poor performer
     • Has to do more work (like best fit) to search complete list
     • Does not result in significantly less fragmentation




            (hole-list diagram as on slide 23)
                                                                     28
Dynamic Partition Allocation
           Algorithm
• Summary
  – First-fit and next-fit are generally better than the
    others and easiest to implement
• Note: Used rarely these days
  – Typical in-kernel allocators are lazy buddy and
    slab allocators
     • Might go through these later in the extended course
• You should be aware of them
  – useful as a simple allocator for simple systems


                                                           30
Compaction
• We can reduce external fragmentation
  by compaction
   – Only if we can relocate
     running programs
   – Generally requires
     hardware support

  (figure: Processes A–D before and after compaction –
   afterwards the free memory is one contiguous block)
                                                31
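A simplified sketch of what compaction involves (the layout assumptions are mine, not the slide's): every resident process image is slid down so free memory coalesces into one block, and each process's base register is updated to its new location, which is why run-time relocation is a prerequisite:

    #include <string.h>   /* memmove */

    struct proc {
        unsigned long base;   /* current start address within physical memory */
        unsigned long size;   /* size of the process image in bytes */
    };

    /* Slide every process image down so that all free memory becomes one
     * contiguous block at the top.  `procs` is assumed to be sorted by base. */
    void compact(char *ram, struct proc *procs, int nprocs)
    {
        unsigned long next_free = 0;             /* pack from address 0 upwards */

        for (int i = 0; i < nprocs; i++) {
            if (procs[i].base != next_free) {
                memmove(ram + next_free, ram + procs[i].base, procs[i].size);
                procs[i].base = next_free;       /* i.e. reload its base register */
            }
            next_free += procs[i].size;
        }
        /* everything from ram + next_free upwards is now one large hole */
    }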
Some Remaining Issues with Dynamic
          Partitioning
• We have ignored
  – Relocation
    • How does a process run in
      different locations in memory?
  – Protection
    • How do we prevent processes from
      interfering with each other?

  (figure: Processes A–D resident in memory)
                                               32
Example Logical Address-Space
   Layout

• Logical
  addresses refer
  to specific
  locations within
  the program
• Once running,
  these addresses
  must refer to real
  physical memory
• When are logical
  addresses bound
  to physical?

  (figure: a process's logical address space from 0x0000 to 0xFFFF)

                                       33
When are memory
addresses bound?
• Compile/link time
   – Compiler/Linker binds the
     addresses
   – Must know “run” location at
     compile time
   – Recompile if location changes
• Load time
   – Compiler generates relocatable
     code
   – Loader binds the addresses at
     load time
• Run time
   – Logical compile-time addresses
     translated to physical addresses
     by special hardware.               34
Hardware Support for Runtime
Binding and Protection

• For process B to run using logical
  addresses
   – Need to add an appropriate offset to its
     logical addresses
       • Achieve relocation
       • Protect memory “lower” than B
   – Must limit the maximum logical address B
     can generate
       • Protect memory “higher” than B

  (figure: physical memory from 0x0000 to 0xFFFF holding Processes A–D,
   with base and limit registers bracketing Process B)
                                                                 35
Hardware Support for Relocation and
          Limit Registers




                               36
Base and Limit Registers

• Also called
  – Base and bound registers
  – Relocation and limit registers
• Base and limit registers
  – Restrict and relocate the currently
    active process
  – Base and limit registers must be
    changed at
     • Load time
     • Relocation (compaction time)
     • On a context switch

  (figure: Process B with base = 0x8000 and limit = 0x2000 – its logical
   addresses 0x0000–0x1FFF map to physical addresses 0x8000–0x9FFF)
                                                                          37
Base and Limit Registers

• Also called
  – Base and bound registers
  – Relocation and limit registers
• Base and limit registers
  – Restrict and relocate the currently
    active process
  – Base and limit registers must be
    changed at
     • Load time
     • Relocation (compaction time)
     • On a context switch

  (figure: the same scheme, now for Process C with base = 0x4000 and
   limit = 0x3000 – logical 0x0000–0x2FFF maps to physical 0x4000–0x6FFF)
                                                                          38
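What the relocation/limit hardware does on every memory reference can be sketched in software, here using the base = 0x8000, limit = 0x2000 values from slide 37 (a real MMU performs this check in hardware, not in C):

    #include <stdio.h>
    #include <stdlib.h>

    static unsigned long base_reg  = 0x8000;   /* loaded by the OS on each context switch */
    static unsigned long limit_reg = 0x2000;

    unsigned long translate(unsigned long logical)
    {
        if (logical >= limit_reg) {            /* beyond the limit: protection fault */
            fprintf(stderr, "protection fault at 0x%lx\n", logical);
            exit(1);
        }
        return base_reg + logical;             /* relocation */
    }

    int main(void)
    {
        printf("logical 0x1000 -> physical 0x%lx\n", translate(0x1000));  /* 0x9000 */
        translate(0x3000);                     /* outside [0, limit): faults */
        return 0;
    }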
Base and Limit Registers
• Cons
  – Physical memory allocation must still be
    contiguous
  – The entire process must be in memory
  – Do not support partial sharing of address
    spaces




                                                39
Timesharing        0xFFFF



• Thus far, we have a system                  Process A
  suitable for a batch system
  – Limited number of dynamically
    allocated processes
     • Enough to keep CPU utilised
  – Relocated at runtime                      Process B
  – Protected from each other
• But what about timesharing?
  – We need more than just a small            Process C
    number of processes running at
    once
                                              Process D
                                                      40
                                     0x0000
Swapping
• A process can be swapped temporarily out of memory to
  a backing store, and then brought back into memory for
  continued execution.
• Backing store – fast disk large enough to accommodate
  copies of all memory images for all users; must provide
  direct access to these memory images.
• Can prioritize – lower-priority process is swapped out so
  higher-priority process can be loaded and executed.
• Major part of swap time is transfer time; total transfer
  time is directly proportional to the amount of memory
  swapped.
   – slow




                                                        41
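As a rough worked example (the figures are assumed for illustration, not from the slide): swapping out a 100 MB process over a disk sustaining 50 MB/s takes about 2 seconds, and bringing another 100 MB image in takes about as long again – several orders of magnitude slower than a context switch between already-resident processes.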
Schematic View of Swapping




                         42
So far we have assumed a
process is smaller than memory
• What can we do if a process is larger than
  main memory?




                                          43
Overlays
• Keep in memory only those instructions
  and data that are needed at any given
  time.
• Implemented by user, no special support
  needed from operating system
• Programming design of overlay structure
  is complex


                                        44
Overlays for a Two-Pass Assembler




                                45
Virtual Memory
• Developed to address the issues identified with
  the simple schemes covered thus far.
• Two classic variants
  – Paging
  – Segmentation


• Paging is now the dominant one of the two
• Some architectures support hybrids of the two
  schemes


                                                  46
Virtual Memory - Paging
• Partition physical memory into small
  equal sized chunks
    – Called frames
• Divide each process’s virtual (logical)
  address space into same size chunks
    – Called pages
    – Virtual memory addresses consist of a
      page number and offset within the page
• OS maintains a page table
    – contains the frame location for each page
     – Used to translate each virtual address
       to a physical address
     – The relation between
       virtual addresses and physical memory
       addresses is given by the page table
• Process’s physical memory does not
  have to be contiguous


                                                  47
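A sketch of the translation the page table supports, assuming 4 KB pages (so the low 12 bits of a virtual address are the offset) and a single-level table; the table contents and validity/bounds checks are omitted here, and real hardware caches these lookups in a TLB:

    #include <stdint.h>

    #define PAGE_SHIFT 12                        /* 4 KB pages */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NPAGES     16                        /* e.g. the 16-page toy MMU shown later */

    static uint32_t page_table[NPAGES];          /* page number -> frame number */

    uint32_t vaddr_to_paddr(uint32_t vaddr)
    {
        uint32_t page   = vaddr >> PAGE_SHIFT;          /* which page */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);      /* offset within the page */
        uint32_t frame  = page_table[page];             /* frame holding that page */
        return (frame << PAGE_SHIFT) | offset;          /* physical address */
    }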
Paging
• No external fragmentation
• Small internal fragmentation (in last page)
• Allows sharing by mapping several pages
  to the same frame
• Abstracts physical organisation
  – Programmers only deal with virtual addresses
• Minimal support for logical organisation
  – Each unit is one or more pages
                                              50
Memory Management Unit
(which contains the Translation Look-aside Buffer – TLB)




   The position and function of the MMU
                                               51
MMU Operation

Assume for now that
the page table is
contained wholly in
registers within the
MMU – in practice it
is not




  Internal operation of simplified MMU with 16 4 KB pages
                                                            52
Virtual Memory - Segmentation
• Memory-management scheme
  that supports user’s view of
  memory.
• A program is a collection of
  segments. A segment is a
  logical unit such as:
   – main program, procedure,
     function, method, object, local
     variables, global variables,
     common block, stack, symbol
     table, arrays




                                       53
Logical View of Segmentation
  (figure: segments 1–4 laid out contiguously in the user's logical view,
   mapped to non-contiguous regions of physical memory)

    user space       physical memory space



                                             54
Segmentation Architecture
• Logical address consists of a two-tuple: <segment-
  number, offset>
   – Addresses identify a segment and an offset within that segment
• Segment table – each table entry has:
   – base – contains the starting physical address where the
     segments reside in memory.
   – limit – specifies the length of the segment.
• Segment-table base register (STBR) points to the
  segment table’s location in memory.
• Segment-table length register (STLR) indicates number
  of segments used by a program;
            segment number s is legal if s < STLR.

                                                               55
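A sketch of the translation implied by the segment table just described (the table size and contents are assumed); the offset is validated against the segment's limit before the base is added:

    #include <stdint.h>

    struct seg_entry {
        uint32_t base;    /* starting physical address of the segment */
        uint32_t limit;   /* length of the segment in bytes */
    };

    static struct seg_entry seg_table[8];        /* assumed: a small, fixed table */
    static uint32_t stlr = 8;                    /* number of segments in use (STLR) */

    /* Translate <segno, offset>; returns 0 on success, -1 on an illegal access. */
    int seg_translate(uint32_t segno, uint32_t offset, uint32_t *paddr)
    {
        if (segno >= stlr)                       /* segment number not legal */
            return -1;
        if (offset >= seg_table[segno].limit)    /* offset past the end of the segment */
            return -1;
        *paddr = seg_table[segno].base + offset;
        return 0;
    }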
Segmentation Hardware




                        56
Example of Segmentation




                          57
Segmentation Architecture
• Protection. With each entry in segment table
  associate:
  – validation bit = 0 ⇒ illegal segment
  – read/write/execute privileges
• Protection bits associated with segments; code
  sharing occurs at segment level.
• Since segments vary in length, memory
  allocation is a dynamic partition-allocation
  problem.
• A segmentation example is shown in the
  following diagram
                                                 58
Sharing of Segments




                      59
Segmentation Architecture
• Relocation.
  – dynamic
  ⇒ by segment table

• Sharing.
  – shared segments
  ⇒ same physical memory backing multiple segments
  ⇒ ideally, same segment number
• Allocation.
  – First/next/best fit
  ⇒ external fragmentation
                                              60
Comparison




Comparison of paging and segmentation   61
