The Future of Memory
and Storage:
Closing the Gaps
Dean A. Klein
VP Market Development
Micron Technology, Inc.
Processor Trends
 Increasing core performance
 Increasing cores
 Increasing bus speed
 More memory
Memory Trends
 Increasing density
 Faster interfaces
 Increasing latency

[Chart: memory transfer rate, 1997–2007, scale 0–1200; NAND series highlighted]
Memory Transfer Rate chart: Micron research
Growing Gaps

[Chart: normalized latency per core (clocks of latency, 1 to 1B, log scale) vs. number of cores (1–32); series for L1, L2, DRAM, FSBus, and HDD show the latency gap widening as core count grows]
Source: Instat, Micron, Intel
Memory Hierarchy Expansion

         Level 1 Cache


         Level 2 Cache


         Main Memory


             Disk
The CPU-to-Memory Gap



           1
Processor Trends
 AMD “Barcelona”
   Quad core
   2M shared L3 cache
   Dedicated L2 cache
 Intel® “Penryn”
   Dual/Quad core
   6MB/12MB L2 cache
 Intel “Nehalem”
   Quad/8 core
Main Memory Data Rate Trends

[Chart: data rate (MT/s) vs. year, 1995–2010, scale 0–7000; SDRAM, DDR, DDR2, and DDR3 plotted, with next-generation memory projections (NGM SE and NGM Diff) extending the trend]
 DRAM bandwidth requirements typically double every 3 years
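The doubling rule above is easy to sanity-check as a compound-growth formula. A quick sketch (the SDRAM-133 starting point is an illustrative assumption, not from the deck):

```python
def projected_rate(base_rate_mtps, base_year, year, doubling_years=3):
    """Project a data rate that doubles every `doubling_years` years."""
    return base_rate_mtps * 2 ** ((year - base_year) / doubling_years)

# Starting from SDRAM-133 around 1999 and doubling every 3 years
# lands near DDR3-1333 a decade later:
rate_2009 = projected_rate(133, 1999, 2009)  # ≈ 1340 MT/s
```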
Memory Trends
 But latency is actually getting WORSE
 And power is a problem

 What drives memory evolution today?
Economics Drives Memory

[Chart: DRAM market price-per-bit decline (normalized millicents/bit, log scale, 1 to 100,000,000) vs. cumulative bit volume (10^12), with yearly data points from 1979 through 2008F]
Historical price-per-bit decline has averaged 35.5% (1978–2002)
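The 35.5% annual decline quoted above compounds quickly; a small sketch of the implied curve (illustrative only):

```python
def price_per_bit(p0, years, annual_decline=0.355):
    """Price per bit after `years` at a constant 35.5% annual decline."""
    return p0 * (1 - annual_decline) ** years

# A 35.5%/yr decline halves price-per-bit roughly every 1.6 years;
# after a decade, price is about 1.25% of the starting point:
ratio_after_10y = price_per_bit(1.0, 10)
```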
Physics Drives Memory
DRAM Cell Layout: 8F²

[Diagram: DRAM cell footprint, 4F wide × 2F tall = 8F²]

SRAM Cell Layout: 140F²
Cell Sizes Compared

                               Cell Size (µm²)   Tech Node (nm)   Cell Size (F²)
IBM/Infineon MRAM                   1.42              180               44
Motorola 6T-SRAM                    1.15               90              142
                                    0.69               65              163
Intel 65nm process 6T-SRAM          0.57               65              135
Motorola eDRAM                      0.12               65               28
Motorola TFS: Nanocrystalline       0.13               90               16
Micron 30-series DRAM               0.054              95                6
Samsung 512Mb PRAM Device           0.050              95              5.5
Micron 50-series NAND               0.013              53              4.5
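The F² column above is just the absolute cell area normalized by the square of the process feature size, which is how the table makes cells from different nodes comparable:

```python
def cell_size_f2(cell_um2, node_nm):
    """Normalized cell size in F², where F is the tech node's feature size."""
    f_um = node_nm / 1000.0          # feature size in microns
    return cell_um2 / (f_um * f_um)  # area divided by F^2

# Micron 30-series DRAM: 0.054 µm² at a 95 nm node
print(round(cell_size_f2(0.054, 95)))  # → 6
```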
Power Trends

[Chart: active current (left axis, 0–900 mA) and idle current (right axis, 0–30 mA) for 256Mbit SDRAM 167MHz, 512Mbit DDR 200MHz, 1Gbit DDR2 333MHz, and 1Gbit DDR3 1333MHz]

• x16 devices at nominal VDD, linear trend lines
Voltage Scaling Trends
  7


  6


  5


  4


  3


  2


  1


  0
  1990   1995   2000   2005   2010
Process Cost Increase

[Chart: cost ($) vs. years by quarter, rising]
Options for L3 Cache
 SRAM L3
DRAM Can Be Fast

[Chart: random 16-byte transfers, max envelope — bandwidth per device (MB/s/dev), 0–4,000; RLDRAM® leads, followed by GDDR3, DDR3, and DDR2]
Access pattern: 8 READS followed by 8 WRITES
Through-Wafer Interconnect
 Reduced parasitic loads
 Smaller ESD structures
 Greater numbers of interconnects
Redistribution Layers
 Layout flexibility
 Reduced parasitic loads
 Support for high number of interconnects
Stacked Silicon
 Goal of TWI and RDL
 Supports N≥2 layers of silicon
 Supports processes optimized for device
The Memory-to-Storage Gap



            2
Storage Demand
161 exabytes of digital data were generated in 2006
That’s about 168 million terabytes, or roughly the equivalent of:
 1 million copies of every book in the Library of Congress
 36 billion digital movies
 43 trillion digital songs
Sources: IDC, UC Berkeley, CIA World Fact Book, USA TODAY Research
DRAM-to-Disk Evolution
 “Flash is Disk, Disk is Tape”
 Performance, not capacity, is the issue
 Disk will continue as the $/bit leader
 NAND pricing is on a steep decline
SSDs vs. HDDs

[Table: SSD vs. HDD compared on capacity, performance, reliability, endurance, power, size, weight, shock, temperature, and cost per bit]

• Based on recent advances in NAND lithography, SSD densities have reached capacities for mass market appeal
• SSD offers many features that lead to improved user experiences
• Early shortcomings for reliability and endurance have been overcome

NAND solid state storage devices are ready for deployment in many applications
NAND Density Trends
Beating Moore’s Law

HDD, NAND Flash Pricing (Log Chart)

$/GB                                  2005     2006    2007    2008    2009    2010
NAND Flash                          $43.39   $15.66   $7.12   $4.68   $3.11   $1.96
HDD 0.85in, 1.0in, 1.8in Combined    $3.76    $2.05   $1.34   $1.08   $1.02   $0.89
Mobile HDD 2.5in (portable PCs)      $1.30    $1.02   $0.81   $0.58   $0.45   $0.35

Source: IDC 2007
NAND in Notebooks and Consumer

Average                  Hard Disk Drive   Solid State Drive   Hard Disk Drive   Hybrid Hard Drive
Specifications              1.8” HDD        SSD (1.8”/2.5”)       2.5” HDD          2.5” HHD
Capacity                    30-80GB             4-32GB            40-160GB        Up to 160GB
Read (max sustained)        25 MB/s            57 MB/s            44 MB/s
Write                       25 MB/s            32 MB/s            44 MB/s
Spindle Speed               4200 RPM            None              5400 RPM          5400 RPM
Seek                         15ms               None               12ms             12.5ms
Non-op shock                1500G               2000G              900G              900G

SSD and HHD both provide power savings in various applications, but the exact power savings fluctuate from application to application. In a test of a 32GB SSD drive, the power savings of the SSD was 1 watt better than the closest HDD tested.

Source: Web-Feet Research, Seagate, Tom’s Hardware
What to Look for in an SSD
 SSD-optimized controller
 Parallel NAND channels

[Diagram: SATA-II interface → controller → four parallel channels, each fanning out to four NAND devices]
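The point of parallel channels is that a large write can be striped page-by-page across channels so the NAND program operations overlap. A toy sketch of the idea (channel count and page size are assumptions for illustration, not any specific controller's design):

```python
NUM_CHANNELS = 4      # assumed channel count, per the diagram above
PAGE_SIZE = 2048      # bytes per NAND page (typical SLC page, assumed)

def stripe(data: bytes):
    """Split a write into page-sized chunks, round-robin across channels."""
    queues = [[] for _ in range(NUM_CHANNELS)]
    for i in range(0, len(data), PAGE_SIZE):
        chunk = data[i:i + PAGE_SIZE]
        queues[(i // PAGE_SIZE) % NUM_CHANNELS].append(chunk)
    return queues  # each channel can program its queue concurrently

# An 8-page write lands as 2 pages per channel:
queues = stripe(bytes(8 * PAGE_SIZE))
```

With ideal overlap, the 8-page program time approaches that of 2 serialized pages per channel rather than 8.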
SSDs in the Enterprise

             Relative Latency   Relative Cost/bit
L1 Cache                    1                 200
L2 Cache                  2.5                 140
L3 Cache                   35                 120
DRAM                      300                   8
SSD (NAND)            250,000                   3
HDD                25,000,000                 0.7

NAND Flash closes the latency gap
Cost/bit data as of Aug ‘06
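Using the relative latencies above, the size of the gap NAND fills is easy to quantify:

```python
latency = {"DRAM": 300, "SSD": 250_000, "HDD": 25_000_000}

dram_to_hdd = latency["HDD"] / latency["DRAM"]   # gap without SSD
dram_to_ssd = latency["SSD"] / latency["DRAM"]   # first step with SSD
ssd_to_hdd  = latency["HDD"] / latency["SSD"]    # second step with SSD

# Without SSD the hierarchy jumps ~83,000x from DRAM straight to HDD;
# with SSD in between, the largest single jump is ~833x.
```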
Data Center Issues
 Power
 Reliability
 Space
 Performance

Observed Failure Rates

System              Source    Part Type      Part Years   Fails   Fails/Year
TerraServer SAN     Barclay   SCSI 10 krpm        858       24       2.8%
                              controllers          72        2       2.8%
                              SAN switch            9        1      11.1%
TerraServer Brick   Barclay   SATA 7 krpm         138       10       7.2%
Web Property 1      anon      SCSI 10 krpm     15,805      972       6.0%
                              Controllers         900      139      15.4%
Web Property 2      anon      PATA 7 krpm      22,400      740       3.3%
                              motherboard       3,769       66       1.7%

Source: Microsoft Research
Reliability and Endurance

              Effect              Description                      Observed as…                Management
Reliability   Program Disturb     Cells not being programmed       Increased read errors       ECC and Block
                                  receive charge via elevated      immediately after           Management
                                  voltage stress                   programming
              Read Disturb        Cells not being read receive     Increased read errors at    ECC and Block
                                  charge via elevated voltage      high number of reads        Management
                                  stress
              Data Retention      Charge loss over time            Increased read errors       ECC and Block
                                                                   with time                   Management
Endurance     Endurance/Cycling   Cycles cause charge trapped      Failed program/erase        Retire Block
                                  in dielectric                    status

NAND failure mechanisms are well understood and managed
Management Stack

[Diagram: Application → Operating System → File System → HDD (Controller, ECC) or SSD (Flash Translation Layer, Controller, ECC, NAND)]

Flash Translation Layer
 Interfaces to traditional HDD File System
 Enables sector I/O to Flash
 Wear leveling
 Bad block management
 Automatic reclamation of erased blocks
 Power loss protection
 Manages multiple NAND devices

Controller
 Manages physical protocol
 NAND command encoding
 High speed data transport (DMA/FIFO)

Error Control
 Algorithm to control sector-level bit reliability
 Implemented in hardware with software control
 Algorithm depends upon Flash technology
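To make the FTL's role concrete, here is a toy logical-to-physical remapping table with bad-block skipping — a sketch for illustration only, not the design of any real FTL:

```python
class TinyFTL:
    """Toy flash translation layer: logical sectors -> physical blocks."""

    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))  # physical blocks available
        self.map = {}                        # logical sector -> physical block
        self.bad = set()                     # blocks retired by management

    def write(self, sector, _data=None):
        # Flash can't overwrite in place: pick a fresh block, remap the
        # sector to it, and queue the old block for erase/reclamation.
        while True:
            block = self.free.pop(0)
            if block not in self.bad:
                break                        # bad blocks are silently retired
        old = self.map.get(sector)
        if old is not None:
            self.free.append(old)            # reclaim after erase
        self.map[sector] = block
        return block

ftl = TinyFTL(num_blocks=8)
ftl.bad.add(0)                # block 0 marked bad: writes must skip it
first = ftl.write(sector=5)   # lands on block 1, not block 0
```

The host still sees plain sector I/O; remapping, reclamation, and bad-block handling all happen below the file system, exactly as the stack above describes.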
SSD and HDD Reliability

              Hard Disk Drive           Solid State Drive
Application   Application Data          Application Data
              Error Rate < 10^-15       Error Rate < 10^-15
Data Mgmt.    Bad Block Management      Bad Block Management
              Channel and Block         Block Coding
              Coding
Raw Media     Typical Raw Error         Typical Raw Error
              Rate > 10^-4              Rate < 10^-5
SSD Quality and Reliability

NAND
 Extended operation of NAND
 Ongoing production management to ensure reliability

Management
 NAND-validated error correction
 Static and dynamic wear leveling
 Garbage collection
 Bad block remapping
 Other proprietary schemes

Applications
 Optimizations based upon management and NAND

NAND Specs: 100K P/E cycles, 1-bit ECC, limited READ cycles, 10-year data retention
SSD Specs: 10-year operating life, 10^-15 bit error rate, 1E+6 hours MTBF
Endurance: Usage Example
Continuous write at MAX bus speed of 100 MB/s with a 5:1 R/W ratio

Capacity: 32GB
 30 minutes to fill up disk
 50 years before 1M I/O cycle limit is exceeded
 At 20% utilization: 250 years before 1M I/O cycle limit is exceeded
 Opportunities for improvement, i.e., new coding, will further extend time to cycle limit

Micron flash drives are ready for deployment for various applications
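The arithmetic behind the slide can be checked directly (assuming the 5:1 R/W ratio means writes get one-fifth of the 100 MB/s bus — an interpretation, since the slide doesn't spell it out):

```python
CAPACITY_GB = 32
BUS_MBPS = 100
WRITE_SHARE = 1 / 5          # assumed: writes get 1/5 of the bus at 5:1 R/W
CYCLE_LIMIT = 1_000_000

write_mbps = BUS_MBPS * WRITE_SHARE                # 20 MB/s of writes
fill_seconds = CAPACITY_GB * 1000 / write_mbps     # ~1600 s, i.e. ~27 min
years_to_limit = CYCLE_LIMIT * fill_seconds / (3600 * 24 * 365)
years_at_20pct = years_to_limit / 0.20             # ~250 years

print(round(fill_seconds / 60))   # → 27 (the slide rounds to "30 minutes")
print(round(years_to_limit))      # → 51 (the slide's "50 years")
```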
Wear Leveling
 Wear leveling is a plus on SLC devices, where blocks can support up to 100,000 PROGRAM/ERASE cycles
 Wear leveling is imperative on MLC devices, where blocks typically support less than 10,000 cycles
 If you erased and reprogrammed a block every minute, you would exceed the 10,000 cycle limit in just 7 days!
   60 x 24 x 7 = 10,080
 Rather than cycling the same block, wear leveling distributes the cycles across many blocks
Wear Leveling (continued)
  An 8Gb MLC device contains 4,096 independent blocks
  If we took the previous example and distributed the
  cycles over all 4,096 blocks, each block would have
  been programmed less than 3 times (versus the 10,800
  cycles when you cycle the same block)
  If you provided perfect wear leveling on a 4,096 block
  device, you could erase and program a block every
  minute, every day for 77 years!

10,000 X 4,096       40,960,00
                 =               = 28,444 days = 77.9 years
  60 X 24             1,440
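The same figures, computed (perfect leveling is assumed here, as on the slide; real wear leveling spreads cycles less evenly):

```python
BLOCK_CYCLE_LIMIT = 10_000   # MLC program/erase cycles per block
NUM_BLOCKS = 4_096           # blocks in an 8Gb MLC device
CYCLES_PER_DAY = 60 * 24     # one erase/program per minute

days = BLOCK_CYCLE_LIMIT * NUM_BLOCKS / CYCLES_PER_DAY
print(round(days))           # → 28444
print(round(days / 365, 1))  # → 77.9
```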
ECC Code Selection is Becoming More Important

[Chart: application bit error rate (1.0E-01 down to 1.0E-25, log scale) vs. raw NAND bit error rate (1.0E-01 down to 1.0E-15), with curves for correction thresholds t = 0 through t = 6]
 For SLC, a code with a correction threshold of 1 is sufficient
 t = 4 required (as a minimum) for MLC

As the raw NAND Flash BER increases, matching the ECC to the application’s target BER becomes more important
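The shape of those curves follows from the binomial tail: a code correcting up to t errors per sector fails only when more than t bits flip. A rough model (assuming a 512-byte sector of ~4096 data bits and independent bit errors — simplifications, not the deck's exact model):

```python
from math import comb

def sector_failure_prob(raw_ber, n_bits=4096, t=1):
    """P(more than t of n bits are in error) for independent bit flips."""
    p_ok = sum(comb(n_bits, k) * raw_ber**k * (1 - raw_ber)**(n_bits - k)
               for k in range(t + 1))
    return 1.0 - p_ok

# At a raw BER of 1e-6, stepping the correction threshold from t=1 to
# t=4 drops the sector failure probability by many orders of magnitude:
p_t1 = sector_failure_prob(1e-6, t=1)
p_t4 = sector_failure_prob(1e-6, t=4)
```

This is why MLC, with its higher raw BER, needs the stronger t = 4 code to hit the same application-level BER that SLC reaches with t = 1.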
Meaningful Cycling Metrics
 Practical, testable solutions are needed
 Simply stating “the drive must meet 1 million complete READ and WRITE cycles” is not realistic
[Chart: Cycles (1K–1M, log scale) vs. Capacity (%); 13 years for one complete pass at 100% of capacity.]
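Why a blanket “1 million complete cycles” requirement is unrealistic follows from simple arithmetic. The sketch below uses hypothetical parameters (the 32 GB capacity, 100 MB/s bus, and 5:1 read/write mix are assumptions for illustration, not figures from this slide): even writing continuously, one complete pass of the drive takes tens of minutes, so a million passes spans decades.

```python
def years_to_exceed_cycle_limit(capacity_gb, bus_mb_s, read_write_ratio, cycle_limit):
    """Years of continuous operation before every block has been written
    cycle_limit times, assuming writes sustain their share of the bus
    and wear leveling spreads them evenly across the whole drive."""
    write_mb_s = bus_mb_s / (read_write_ratio + 1)   # writes get 1 part in (R:W + 1)
    seconds_per_pass = capacity_gb * 1024 / write_mb_s
    return cycle_limit * seconds_per_pass / (365 * 24 * 3600)

# Hypothetical: 32 GB drive, 100 MB/s bus, 5:1 R/W mix, 1M-cycle requirement
years = years_to_exceed_cycle_limit(32, 100, 5, 1_000_000)
print(round(years))  # several decades of continuous writing
```

One pass works out to roughly half an hour under these assumptions, and a million such passes to well over half a century — which is why cycling metrics need to be tied to realistic workloads, as the application-specific charts that follow do.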
Cycling for CE Applications
[Chart: Cycles (1K–1M, log scale) vs. Capacity at 5%, 20%, and 75%; 0.93 years for one complete pass.]
Cycling for Servers
[Chart: Cycles (1K–1M, log scale) vs. Capacity at 20%, 30%, and 50%; 3.02 years for one complete pass.]
Cycling for Enterprise
[Chart: Cycles (1K–10M, log scale) vs. Capacity at 5%, 20%, and 75%; 6.78 years for one complete pass.]
Call to Action
 Close the gaps!
 Innovation opportunities exist close to the CPU with DRAM-based caches
 Innovation opportunities are being enabled by rapid NAND scaling for NAND-based storage
Additional Resources
 Web Resources
  Specs: http://www.micron.com/winhec07
  Whitepapers: http://www.micron.com/winhec07
 Related Sessions
  Main Memory Technology Direction
  Flash Memory Technology Direction
 E-Mail Address
  daklein@micron.com
© 2007 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries.
The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation.
MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
