virtual machine block storage with the ceph distributed storage system

sage weil
xensummit – august 28, 2012
outline
●   why you should care
●   what is it, what it does
●   how it works, how you can use it
    ●   architecture
    ●   objects, recovery
●   rados block device
    ●   integration
    ●   path forward
●   who we are, why we do this
why should you care about another storage system?

requirements, time, cost
requirements
●   diverse storage needs
    ●   object storage
    ●   block devices (for VMs) with snapshots, cloning
    ●   shared file system with POSIX, coherent caches
    ●   structured data... files, block devices, or objects?
●   scale
    ●   terabytes, petabytes, exabytes
    ●   heterogeneous hardware
    ●   reliability and fault tolerance
time
●   ease of administration
●   no manual data migration, load balancing
●   painless scaling
    ●   expansion and contraction
    ●   seamless migration
cost
●   low cost per gigabyte
●   no vendor lock-in

●   software solution
    ●   run on commodity hardware
●   open source
what is ceph?
[architecture diagram: APP | APP | HOST/VM | CLIENT sitting on top of the Ceph components]

LIBRADOS: a library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP
RADOSGW: a bucket-based REST gateway, compatible with S3 and Swift
RBD: a reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver
CEPH FS: a POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE

RADOS: a reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes
open source
●   LGPLv2
    ●   copyleft
    ●   free to link to proprietary code
●   no copyright assignment
    ●   no dual licensing
    ●   no “enterprise-only” feature set
●   active community
●   commercial support available
distributed storage system
●   data center (not geo) scale
    ●   10s to 10,000s of machines
    ●   terabytes to exabytes
●   fault tolerant
    ●   no SPoF
    ●   commodity hardware
        –   ethernet, SATA/SAS, HDD/SSD
        –   RAID, SAN probably a waste of time, power, and money
object storage model
●   pools
    ●   1s to 100s
    ●   independent namespaces or object collections
    ●   replication level, placement policy
●   objects
    ●   trillions
    ●   blob of data (bytes to gigabytes)
    ●   attributes (e.g., “version=12”; bytes to kilobytes)
    ●   key/value bundle (bytes to gigabytes)
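
This object model maps directly onto the librados API. A minimal sketch using the Python bindings (the pool name, object name, and attribute below are illustrative; it assumes a running cluster and a readable ceph.conf):

    import rados

    # connect using the default config and admin keyring
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # an ioctx is bound to one pool (an independent object namespace)
    ioctx = cluster.open_ioctx('rbd')               # pool name is illustrative

    # an object is a named blob of data plus small attributes
    ioctx.write_full('greeting', b'hello ceph')     # blob of data
    ioctx.set_xattr('greeting', 'version', b'12')   # attribute, e.g. "version=12"
    print(ioctx.read('greeting'))                   # b'hello ceph'

    ioctx.close()
    cluster.shutdown()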
object storage cluster
●   conventional client/server model doesn't scale
    ●   server(s) become bottlenecks; proxies are inefficient
    ●   if storage devices don't coordinate, clients must
●   ceph-osds are intelligent storage daemons
    ●   coordinate with peers
    ●   sensible, cluster-aware protocols
    ●   sit on local file system
         –   btrfs, xfs, ext4, etc.
         –   leveldb
[diagram: five OSDs, each running on a local file system (btrfs, xfs, or ext4) on its own disk, alongside three monitor (M) nodes]
Monitors:
•   maintain cluster state
•   provide consensus for distributed decision-making
•   small, odd number
•   do not serve stored objects to clients

OSDs:
•   one per disk or RAID group
•   at least three in a cluster
•   serve stored objects to clients
•   intelligently peer to perform replication tasks
[diagram: a human administrator/client interacting with the three monitor (M) nodes, the entry point into the cluster]
data distribution
●   all objects are replicated N times
●   objects are automatically placed, balanced, migrated
    in a dynamic cluster
●   must consider physical infrastructure
    ●   ceph-osds on hosts in racks in rows in data centers

●   three approaches
    ●   pick a spot; remember where you put it
    ●   pick a spot; write down where you put it
    ●   calculate where to put it, where to find it
CRUSH
•   Pseudo-random placement algorithm
•   Fast calculation, no lookup
•   Ensures even distribution
•   Repeatable, deterministic
•   Rule-based configuration
    •   specifiable replication
    •   infrastructure topology aware
    •   allows weighting
•   Stable mapping
    •   Limited data migration
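
CRUSH itself walks a weighted hierarchy (hosts, racks, rows) according to placement rules, but the core idea, that every client calculates placement instead of looking it up, can be sketched with rendezvous (highest-random-weight) hashing. This toy example is illustrative only and is not the CRUSH algorithm:

    import hashlib

    def place(obj_name, osds, replicas=3):
        """Deterministically pick `replicas` OSDs for an object.
        Every client computes the same answer; no lookup table needed."""
        def score(osd):
            return int(hashlib.sha1(f'{obj_name}:{osd}'.encode()).hexdigest(), 16)
        # highest-scoring OSDs win; adding or removing an OSD only moves
        # the objects that scored it highest (limited, stable migration)
        return sorted(osds, key=score, reverse=True)[:replicas]

    osds = ['osd.0', 'osd.1', 'osd.2', 'osd.3', 'osd.4']
    print(place('rbd_data.1234.0000000000000000', osds))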
distributed object storage
●   CRUSH tells us where data should go
    ●   small “osd map” records cluster state at a point in time
    ●   ceph-osd node status (up/down, weight, IP)
    ●   CRUSH function specifying desired data distribution
●   object storage daemons (RADOS)
    ●   store it there
    ●   migrate it as the cluster changes
●   decentralized, distributed approach allows
    ●   massive scales (10,000s of servers or more)
    ●   efficient data access
    ●   the illusion of a single copy with consistent behavior
large clusters aren't static
●   dynamic cluster
    ●   nodes are added, removed; nodes reboot, fail, recover
    ●   recovery is the norm
●   osd maps are versioned
    ●   shared via gossip
●   any map update potentially triggers data migration
    ●   ceph-osds monitor peers for failure
    ●   new nodes register with monitor
    ●   administrator adjusts weights, marks out old hardware, etc.
[diagram: a client wondering “??” when a node fails; because placement is calculated rather than looked up, failures are transparent to clients]
what does this mean for my cloud?
●   virtual disks
    ●   reliable
    ●   accessible from many hosts
●   appliances
    ●   great for small clouds
    ●   not viable for public or (large) private clouds
●   avoid single server bottlenecks
●   efficient management
[diagram: a VM whose disk lives in the cluster; its VIRTUALIZATION CONTAINER links LIBRBD and LIBRADOS and talks to the monitors and OSDs]
[diagram: two virtualization containers, each linking LIBRBD and LIBRADOS; because the image lives in the cluster, the VM can migrate between hosts]
[diagram: a bare-metal HOST using KRBD (kernel module) and LIBRADOS to map an image directly, talking to the monitors and OSDs]
RBD: RADOS Block Device
•   replicated, reliable, high-performance virtual disk
•   allows decoupling of VMs and containers
    •   live migration!
•   images are striped across the cluster
•   snapshots!
•   native support in the Linux kernel
    •   /dev/rbd1
•   librbd allows easy integration
HOW DO YOU SPIN UP THOUSANDS OF VMs INSTANTLY AND EFFICIENTLY?
instant copy
[diagram: an image of size 144 is copied four times; each copy consumes 0 additional space, so the total is still 144]
write
[diagram: a CLIENT writes to one of the copies; only the modified blocks (4) are stored, so the total becomes 144 + 4 = 148]
read
[diagram: the CLIENT reads its modified blocks (4) from the copy and everything else through to the original image; the total stays 144 + 4 = 148]
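
What the instant copy, write, and read slides describe is copy-on-write layering: a clone shares the parent's blocks and only diverges where the client writes. A hedged sketch with the rbd Python bindings (pool and image names are illustrative, and the layering API shown here was still settling around the time of this talk, so exact names may differ from the 2012 release):

    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')                  # pool name is illustrative

    rbd_inst = rbd.RBD()
    rbd_inst.create(ioctx, 'golden', 10 * 1024**3)     # 10 GiB base image
                                                       # (cloning needs a layering-capable image format)
    img = rbd.Image(ioctx, 'golden')
    img.create_snap('base')                            # point-in-time snapshot
    img.protect_snap('base')                           # required before cloning
    img.close()

    # each clone appears instantly and consumes no space until written
    for i in range(4):
        rbd_inst.clone(ioctx, 'golden', 'base', ioctx, f'vm-{i}')

    ioctx.close()
    cluster.shutdown()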
current RBD integration
●   native Linux kernel support
    ●   /dev/rbd0, /dev/rbd/<poolname>/<imagename>
●   librbd
    ●   user-level library
●   Qemu/KVM
    ●   links to librbd user-level library
●   libvirt
    ●   librbd-based storage pool
    ●   understands RBD images
    ●   can only start KVM VMs... :-(
●   CloudStack, OpenStack
what about Xen?
●   Linux kernel driver (e.g., /dev/rbd0)
    ●   easy fit into existing stacks
    ●   works today
    ●   need recent Linux kernel for dom0
●   blktap
    ●   generic kernel driver, userland process
    ●   easy integration with librbd
    ●   more featureful (cloning, caching), maybe faster
    ●   doesn't exist yet!
●   rbd-fuse
    ●   coming soon!
libvirt
●   CloudStack, OpenStack
●   libvirt understands rbd images, storage pools
    ●   xml specifies cluster, pool, image name, auth
●   currently only usable with KVM
●   could configure /dev/rbd devices for VMs
librbd
●   management
    ●   create, destroy, list, describe images
    ●   resize, snapshot, clone
●   I/O
    ●   open, read, write, discard, close
●   C, C++, Python bindings
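
A minimal I/O sketch against the Python binding (the image name is illustrative and assumed to already exist):

    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    print(rbd.RBD().list(ioctx))              # list images in the pool

    image = rbd.Image(ioctx, 'vm-0')          # open
    print(image.size())                       # describe
    image.write(b'\x00' * 4096, 0)            # write 4 KiB at offset 0
    data = image.read(0, 4096)                # read it back
    image.discard(0, 4096)                    # punch a hole (trim)
    image.resize(20 * 1024**3)                # grow to 20 GiB
    image.close()                             # close

    ioctx.close()
    cluster.shutdown()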
RBD roadmap
●   locking
    ●   fence failed VM hosts
●   clone performance
●   KSM (kernel same-page merging) hints
●   caching
    ●   improved librbd caching
    ●   kernel RBD + bcache to local SSD/disk
why
●   limited options for scalable open source storage
●   proprietary solutions
    ●   marry hardware and software
    ●   expensive
    ●   don't scale (out)
●   industry needs to change
who we are
●   Ceph created at UC Santa Cruz (2007)
●   supported by DreamHost (2008-2011)
●   Inktank (2012)
●   growing user and developer community
●   we are hiring
    ●   C/C++/Python developers
    ●   sysadmins, testing engineers
    ●   Los Angeles, San Francisco, Sunnyvale, remote


                            http://ceph.com/
[closing slide: the Ceph architecture overview (LIBRADOS, RADOSGW, RBD, and CEPH FS on top of RADOS), repeated from earlier]
Editor's Notes

  1. So what *is* Ceph? Ceph is a massively scalable and flexible object store with tightly-integrated applications that provide REST access to objects, a distributed virtual block device, and a parallel filesystem.
  2. Let’s start with RADOS, Reliable Autonomic Distributed Object Storage. In this example, you’ve got five disks in a computer. You have initialized each disk with a filesystem (btrfs is the right filesystem to use someday, but until it’s stable we recommend XFS). On each filesystem, you deploy a Ceph OSD (Object Storage Daemon). That computer, with its five disks and five object storage daemons, becomes a single node in a RADOS cluster. Alongside these nodes are monitor nodes, which keep track of the current state of the cluster and provide users with an entry point into the cluster (although they do not serve any data themselves).
  3. Applications wanting to store objects into RADOS interact with the cluster as a single entity.
  4. The way CRUSH is configured is somewhat unique. Instead of defining pools for different data types, workgroups, subnets, or applications, CRUSH is configured with the physical topology of your storage network. You tell it how many buildings, rooms, shelves, racks, and nodes you have, and you tell it how you want data placed. For example, you could tell CRUSH that it’s okay to have two replicas in the same building, but not on the same power circuit. You also tell it how many copies to keep.
  5. What happens, though, when a node goes down? The OSDs are always talking to each other (and the monitors), and they know when something is amiss. The third and fifth node on the top row have noticed that the second node on the bottom row is gone, and they are also aware that they have replicas of the missing data.
  6. The OSDs collectively use the CRUSH algorithm to determine how the cluster should look based on its new state, and move the data to where clients running CRUSH expect it to be.
  7. Because of the way placement is calculated instead of centrally controlled, node failures are transparent to clients.
  8. The RADOS Block Device (RBD) allows users to store virtual disks inside RADOS. For example, you can use a virtualization container like KVM or QEMU to boot virtual machines from images that have been stored in RADOS. Images are striped across the entire cluster, which allows for simultaneous read access from different cluster nodes.
  9. Separating a virtual computer from its storage also lets you do really neat things, like migrate a virtual machine from one server to another without rebooting it.
  10. As an alternative, machines (even those running on bare metal) can mount an RBD image using native Linux kernel drivers.
  11. With Ceph, copying an RBD image four times gives you five total copies…but only takes the space of one. It also happens instantly.
  12. When clients mount one of the copied images and begin writing, they write to their copy.
  13. When they read, though, they read through to the original copy if there’s no newer data.
  14. So what *is* Ceph? Ceph is a massively scalable and flexible object store with tightly-integrated applications that provide REST access to objects, a distributed virtual block device, and a parallel filesystem.