15-213
“The course that gives CMU its Zip!”

Disk-based Storage
Oct. 23, 2008

Topics
   Storage technologies and trends
   Locality of reference
   Caching in the memory hierarchy

lecture-17.ppt
Announcements
Exam next Thursday
   style like exam #1: in class, open book/notes, no electronics
Disk-based storage in computers
Memory/storage hierarchy
   Combining many technologies to balance costs/benefits
   Recall the memory hierarchy and virtual memory lectures
Memory/storage hierarchies
Balancing performance with cost
   Small memories are fast but expensive
   Large memories are slow but cheap
Exploit locality to get the best of both worlds
   locality = re-use/nearness of accesses
   allows most accesses to use small, fast memory

[Figure: capacity vs. performance trade-off across the hierarchy]
An Example Memory Hierarchy

Smaller, faster, and costlier (per byte) storage devices toward the top; larger, slower, and cheaper (per byte) storage devices toward the bottom:

L0: registers                     CPU registers hold words retrieved from L1 cache.
L1: on-chip L1 cache (SRAM)       L1 cache holds cache lines retrieved from the L2 cache.
L2: off-chip L2 cache (SRAM)      L2 cache holds cache lines retrieved from main memory.
L3: main memory (DRAM)            Main memory holds disk blocks retrieved from local disks.
L4: local secondary storage (local disks)
                                  Local disks hold files retrieved from disks on remote network servers.
L5: remote secondary storage (tapes, distributed file systems, Web servers)

From lecture-9.ppt
Page Faults
A page fault is caused by a reference to a VM word that is not in physical (main) memory
   Example: An instruction references a word contained in VP 3, a miss that triggers a page fault exception

[Figure: memory-resident page table (DRAM) with PTE 0 through PTE 7 holding valid bits and physical page numbers or disk addresses; physical memory (DRAM) holding VP 1, VP 2, VP 7, VP 4 in PP 0 through PP 3; virtual memory (disk) holding VP 1, VP 2, VP 3, VP 4, VP 6, VP 7]

From lecture-14.ppt
Disk-based storage in computers
Memory/storage hierarchy
   Combining many technologies to balance costs/benefits
   Recall the memory hierarchy and virtual memory lectures

Persistence
   Storing data for lengthy periods of time
      DRAM/SRAM is “volatile”: contents lost if power lost
      Disks are “non-volatile”: contents survive power outages
   To be useful, it must also be possible to find it again later
      this brings in many interesting data organization, consistency, and management issues
         take 18-746/15-746 Storage Systems
         we’ll talk a bit about file systems next
What’s Inside A Disk Drive?

[Figure: labeled photo of an opened disk drive showing the spindle, arm, platters, actuator, electronics, and SCSI connector. Image courtesy of Seagate Technology]
Disk Electronics
Just like a small computer – processor, memory, network iface
   • Connect to disk
   • Control processor
   • Cache memory
   • Control ASIC
   • Connect to motor
Disk “Geometry”
Disks contain platters, each with two surfaces
Each surface organized in concentric rings called tracks
Each track consists of sectors separated by gaps

[Figure: one surface with the spindle at its center, concentric tracks, and track k divided into sectors separated by gaps]
Disk Geometry (Multiple-Platter View)
Aligned tracks form a cylinder

[Figure: platters 0-2 with surfaces 0-5 on a common spindle; cylinder k is the set of aligned tracks across all surfaces]
Disk Structure

[Figure: cutaway view labeling the read/write head, arm, actuator, a platter’s upper and lower surfaces, a cylinder, a track, and a sector]
Disk Operation (Single-Platter View)
The disk surface spins at a fixed rotational rate
The read/write head is attached to the end of the arm and flies over the disk surface on a thin cushion of air
By moving radially, the arm can position the read/write head over any track

[Figure: single platter rotating about its spindle with the arm positioned over one track]
Disk Operation (Multi-Platter View)
Read/write heads move in unison from cylinder to cylinder

[Figure: stacked platters on one spindle, with one arm and read/write head per surface]
Disk Structure - top view of single platter
Surface organized into tracks
Tracks divided into sectors
Disk Access
Head in position above a track
Disk Access
Rotation is counter-clockwise
Disk Access – Read
About to read blue sector
Disk Access – Read
[Figure panel: After BLUE read]
After reading blue sector
Disk Access – Read
[Figure panel: After BLUE read]
Red request scheduled next
Disk Access – Seek
[Figure panels: After BLUE read | Seek for RED]
Seek to red’s track
Disk Access – Rotational Latency
[Figure panels: After BLUE read | Seek for RED | Rotational latency]
Wait for red sector to rotate around
Disk Access – Read
[Figure panels: After BLUE read | Seek for RED | Rotational latency | After RED read]
Complete read of red
Disk Access – Service Time Components
[Figure panels: After BLUE read | Seek for RED | Rotational latency | After RED read]
Seek
Rotational Latency
Data Transfer
Disk Access Time
Average time to access a specific sector approximated by (see the sketch below):
   Taccess = Tavg seek + Tavg rotation + Tavg transfer
Seek time (Tavg seek)
   Time to position heads over cylinder containing target sector
   Typical Tavg seek = 3-5 ms
Rotational latency (Tavg rotation)
   Time waiting for first bit of target sector to pass under r/w head
   Tavg rotation = 1/2 x 1/RPM x 60 sec/1 min
      e.g., 3 ms for 10,000 RPM disk
Transfer time (Tavg transfer)
   Time to read the bits in the target sector
   Tavg transfer = 1/RPM x 1/(avg # sectors/track) x 60 sec/1 min
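The access-time model above can be packaged into a short C sketch (not part of the original slides). The 3 ms rotational figure for a 10,000 RPM disk comes from the slide; the 4 ms seek and 500 sectors/track used in main() are made-up illustration values.

    #include <stdio.h>

    /* Sketch of the slide's access-time model; all times in milliseconds. */
    static double disk_access_ms(double avg_seek_ms, double rpm,
                                 double avg_sectors_per_track)
    {
        double rev_ms      = 60.0 / rpm * 1000.0;            /* one full revolution           */
        double rotation_ms = 0.5 * rev_ms;                   /* Tavg rotation: half a turn    */
        double transfer_ms = rev_ms / avg_sectors_per_track; /* Tavg transfer: one sector     */
        return avg_seek_ms + rotation_ms + transfer_ms;      /* Taccess                       */
    }

    int main(void)
    {
        /* The 10,000 RPM disk from the slide waits 0.5 * (60/10,000) s = 3 ms of
         * rotational latency on average; the 4 ms seek and 500 sectors/track
         * below are made-up illustration values. */
        printf("Taccess = %.3f ms\n", disk_access_ms(4.0, 10000.0, 500.0));
        return 0;
    }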
Disk Access Time Example
Given:
   Rotational rate = 7,200 RPM
   Average seek time = 5 ms
   Avg # sectors/track = 1000
Derived average time to access random sector (checked in the sketch below):
   Tavg rotation = 1/2 x (60 secs/7200 RPM) x 1000 ms/sec = 4 ms
   Tavg transfer = (60 secs/7200 RPM) x (1 track/1000 sectors) x 1000 ms/sec = 0.008 ms
   Taccess = 5 ms + 4 ms + 0.008 ms = 9.008 ms
      Time to second sector: 0.008 ms
Important points:
   Access time dominated by seek time and rotational latency
   First bit in a sector is the most expensive, the rest are free
   SRAM access time is about 4 ns/doubleword, DRAM about 60 ns
      ~100,000 times longer to access a word on disk than in DRAM
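A quick C check of that arithmetic (again a sketch, not from the lecture). The exact sum comes out near 9.175 ms because the slide rounds the 4.17 ms rotational term down to 4 ms.

    #include <stdio.h>

    int main(void)
    {
        double rev_ms      = 60.0 / 7200.0 * 1000.0;  /* 8.333 ms per revolution             */
        double rotation_ms = 0.5 * rev_ms;            /* 4.167 ms (the slide rounds to 4 ms) */
        double transfer_ms = rev_ms / 1000.0;         /* 0.008 ms                            */
        printf("Taccess = %.3f ms\n", 5.0 + rotation_ms + transfer_ms);  /* about 9.175 ms */
        return 0;
    }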
Disk storage as array of blocks

[Figure: the OS’s view of the storage device (as exposed by SCSI or IDE/ATA protocols): a linear array of numbered logical blocks (... 5, 6, 7, ... 12, ... 23, ...)]

Common “logical block” size: 512 bytes
Number of blocks: device capacity / block size
Common OS-to-storage requests defined by few fields (see the sketch below)
   R/W, block #, # of blocks, memory source/dest
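As an illustration of those request fields, a request might be described by a struct like the one sketched below. The layout and names are hypothetical, not an actual SCSI or IDE/ATA command format.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical OS-to-storage request descriptor covering the fields listed
     * above: R/W, starting block #, # of blocks, and memory source/dest. */
    struct block_request {
        int      is_write;     /* 0 = read, 1 = write                   */
        uint64_t block_num;    /* starting logical block number         */
        uint32_t num_blocks;   /* number of logical blocks (512 B each) */
        void    *mem;          /* memory source (write) or dest (read)  */
    };

    int main(void)
    {
        char buf[4 * 512];
        struct block_request req = { 0, 23, 4, buf };  /* read blocks 23-26 into buf */
        printf("read %u blocks starting at block %llu\n",
               (unsigned)req.num_blocks, (unsigned long long)req.block_num);
        return 0;
    }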
Page Faults
A page fault is caused by a reference to a VM word that is not in physical (main) memory
   Example: An instruction references a word contained in VP 3, a miss that triggers a page fault exception
   The “logical block” number can be remembered in the page table to identify the disk location for pages not resident in main memory

[Figure: the same page table / physical memory (DRAM) / virtual memory (disk) diagram as before]

From lecture-14.ppt
In device, “blocks” mapped to physical store

[Figure: a logical block from the array mapped onto a disk sector (usually same size as block)]
Physical sectors of a single-surface disk

[Figure: a single disk surface with its physical sectors laid out along concentric tracks]
LBN-to-physical for a single-surface disk

[Figure: logical block numbers 0-47 assigned to the surface 12 per track: 0-11 on the outermost track, then 12-23, 24-35, and 36-47 moving inward, so consecutive LBNs are consecutive sectors on a track and then continue on the next track in]
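A minimal C sketch (not from the lecture) of the straightforward mapping the figure suggests, assuming 12 sectors per track and no zoning, skew, or spare sectors; real drive firmware is considerably more involved. Under this assumption, consecutive LBNs stay on one track and then move one track inward, which is what makes sequential access fast.

    #include <stdio.h>

    #define SECTORS_PER_TRACK 12   /* per the figure; an assumption, real disks vary */

    /* Map a logical block number to (track, sector) on a single surface. */
    static void lbn_to_physical(unsigned lbn, unsigned *track, unsigned *sector)
    {
        *track  = lbn / SECTORS_PER_TRACK;   /* tracks numbered from the outside in */
        *sector = lbn % SECTORS_PER_TRACK;   /* position within that track          */
    }

    int main(void)
    {
        unsigned track, sector;
        lbn_to_physical(29, &track, &sector);
        printf("LBN 29 -> track %u, sector %u\n", track, sector);  /* track 2, sector 5 */
        return 0;
    }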
Disk Capacity
Capacity: maximum number of bits that can be stored
   Vendors express capacity in units of gigabytes (GB), where 1 GB = 10^9 bytes (Lawsuit pending! Claims deceptive advertising)

Capacity is determined by these technology factors:
   Recording density (bits/in): number of bits that can be squeezed into a 1 inch linear segment of a track
   Track density (tracks/in): number of tracks that can be squeezed into a 1 inch radial segment
   Areal density (bits/in^2): product of recording and track density
Computing Disk Capacity
Capacity = (# bytes/sector) x (avg. # sectors/track) x (# tracks/surface) x (# surfaces/platter) x (# platters/disk)
Example:
   512 bytes/sector
   1000 sectors/track (on average)
   80,000 tracks/surface
   2 surfaces/platter
   5 platters/disk

Capacity = 512 x 1000 x 80,000 x 2 x 5
         = 409,600,000,000 bytes
         = 409.6 GB (checked in the sketch below)
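A quick C check of that multiplication (a sketch, not from the lecture), using 80,000 tracks/surface, the value consistent with the slide's 409.6 GB result.

    #include <stdio.h>

    int main(void)
    {
        long long bytes_per_sector     = 512;
        long long sectors_per_track    = 1000;     /* on average */
        long long tracks_per_surface   = 80000;
        long long surfaces_per_platter = 2;
        long long platters_per_disk    = 5;

        long long capacity = bytes_per_sector * sectors_per_track * tracks_per_surface
                             * surfaces_per_platter * platters_per_disk;
        printf("%lld bytes = %.1f GB\n", capacity, capacity / 1e9);  /* 409,600,000,000 bytes = 409.6 GB */
        return 0;
    }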
Looking back at the hardware

[Figure: CPU chip (register file, ALU) connected through its bus interface to main memory]
Connecting I/O devices: the I/O Bus

[Figure: the CPU chip (register file, ALU) connects over the system bus to an I/O bridge; the bridge connects over the memory bus to main memory and over the I/O bus to a USB controller (mouse, keyboard), a graphics adapter (monitor), a disk controller (disk), and expansion slots for other devices such as network adapters]
Reading from disk (1)
CPU initiates a disk read by writing a READ command, logical block number, number of blocks, and destination memory address to a port (address) associated with disk controller (see the sketch below)

[Figure: the command travels from the CPU chip over the I/O bus to the disk controller]
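A minimal C sketch (not from the lecture) of what issuing such a command might look like if the controller's port registers were memory-mapped; the register layout, field names, and command code are all invented for illustration. Writing the command register last is the usual convention so the other parameters are in place before the controller starts; a real driver would also need memory barriers to guarantee that ordering.

    #include <stdint.h>

    #define DISK_CMD_READ 0x01u   /* hypothetical command code */

    /* Hypothetical memory-mapped register block of a disk controller. */
    struct disk_port {
        volatile uint32_t command;      /* writing READ here last starts the operation */
        volatile uint64_t block_num;    /* starting logical block number               */
        volatile uint32_t num_blocks;   /* number of blocks to transfer                */
        volatile uint64_t dest_addr;    /* main-memory destination for the DMA         */
    };

    /* CPU side of step (1): program the controller, then continue with other
     * work; the controller carries out the transfer on its own (steps 2 and 3). */
    void start_disk_read(struct disk_port *port, uint64_t lbn,
                         uint32_t nblocks, uint64_t dest)
    {
        port->block_num  = lbn;
        port->num_blocks = nblocks;
        port->dest_addr  = dest;
        port->command    = DISK_CMD_READ;
    }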
Reading from disk (2)
Disk controller reads the sectors and performs a direct memory access (DMA) transfer into main memory

[Figure: data moves from the disk, through the disk controller and I/O bridge, into main memory]
Reading from disk (3)
When the DMA transfer completes, the disk controller notifies the CPU with an interrupt (i.e., asserts a special “interrupt” pin on the CPU)

[Figure: the interrupt signal goes from the disk controller back to the CPU chip]
