Ceph in the Cloud

Using RBD in the cloud

Again, use #cephday :-)

Implemented in CloudStack, OpenStack and Proxmox
Wido den Hollander (42on)
Ceph quick overview
RBD (RADOS Block Device)
●   4MB stripe over RADOS objects
●   Sparse allocation (TRIM/discard support)
    –   Qemu with the SCSI driver supports TRIM
    –   VirtIO lacks the necessary functions
●   Snapshotting
●   Layering
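The 4MB striping can be sketched with a bit of shell arithmetic: given a byte offset into an RBD image, find which backing RADOS object it lands in (4 MiB is the default object size; images can be created with a different order):

```shell
# Map a byte offset inside an RBD image to its backing RADOS object.
# 4 MiB (order 22) is the default object size; images may differ.
OBJ_SIZE=$((4 * 1024 * 1024))

offset=$((10 * 1024 * 1024))       # e.g. byte 10 MiB into the image
obj_index=$((offset / OBJ_SIZE))   # which object
obj_offset=$((offset % OBJ_SIZE))  # offset within that object

echo "object=$obj_index offset=$obj_offset"   # prints: object=2 offset=2097152
```

Because objects are only created when written to, a mostly-empty image consumes almost no space (the sparse allocation above).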
RBD
RBD with multiple VMs
TRIM/discard
●   Filesystems like ext4 or btrfs tell the block
    device which blocks can be discarded
●   Only works with Qemu and SCSI drives
●   Qemu will inform librbd about which blocks
    can be discarded
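From inside a guest this looks like any other discard-capable disk; a sketch (the device name `/dev/sda` is an example, and this only works if the disk was attached with discard enabled):

```shell
# Inside the guest: check that the block device advertises discard,
# then hand free blocks back to librbd.
cat /sys/block/sda/queue/discard_granularity   # non-zero => discard supported
sudo fstrim -v /                               # batch-discard free blocks
# Or mount with continuous discard (more overhead per delete):
#   mount -o discard /dev/sda1 /mnt
```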
Using TRIM/discard with Qemu
●   Add 'discard_granularity=N' option where N is
    usually 512 (sector size)
    –   This sets the QUEUE_FLAG_DISCARD flag inside
        the guest indicating that the device supports
        discard
●   Only supported by SCSI with Qemu
    –   The SCSI feature set is larger than VirtIO's
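A hedged sketch of such a Qemu invocation (the pool and image names are made up, and exact option spelling varies between Qemu versions):

```shell
# Attach an RBD image through a SCSI disk and advertise discard
# with a 512-byte granularity to the guest.
qemu-system-x86_64 \
  -drive file=rbd:rbd/vm-disk-1,format=raw,if=none,id=drive0 \
  -device virtio-scsi-pci,id=scsi0 \
  -device scsi-hd,bus=scsi0.0,drive=drive0,discard_granularity=512
```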
Snapshotting
●   Normal snapshotting like we are used to
    –   Copy-on-Write (CoW) snapshots
●   Snapshots can be created using either libvirt
    or the rbd tool
●   Only integrated into OpenStack, not in
    CloudStack or Proxmox
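With the rbd tool the snapshot lifecycle looks like this (pool and image names are examples; these commands need a running cluster):

```shell
rbd snap create rbd/vm-disk-1@before-upgrade    # take a CoW snapshot
rbd snap ls rbd/vm-disk-1                       # list snapshots
rbd snap rollback rbd/vm-disk-1@before-upgrade  # roll the image back
rbd snap rm rbd/vm-disk-1@before-upgrade        # remove it again
```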
Layering
●   One parent / golden image
●   Each child records its own changes; reads for
    unchanged data come from the parent image
●   Writes go into separate objects
●   Easily deploy hundreds of identical virtual
    machines in a short timeframe without using a
    lot of space
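A sketch of deploying from a golden image with the rbd tool (names are made up; layering requires format 2 images and a running cluster):

```shell
# Protect a snapshot of the golden image, then clone children from it.
rbd snap create rbd/golden-ubuntu@base
rbd snap protect rbd/golden-ubuntu@base      # parent snapshots must be protected
rbd clone rbd/golden-ubuntu@base rbd/vm-001  # near-instant, no data copied
rbd clone rbd/golden-ubuntu@base rbd/vm-002
rbd children rbd/golden-ubuntu@base          # list all clones of this snapshot
```

Each clone only stores the objects it has written to, which is why hundreds of identical VMs cost almost no extra space.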
Layering – Writes
Layering – Reads
RBD in the Cloud?
●   High parallel performance due to object
    striping
●   Discard to reclaim space freed inside virtual
    machines
●   Snapshotting for rollback points in case of
    problems inside a virtual machine
●   Layering for easy and quick deployment
    –   Also saves space!
RBD integrations
●   CloudStack
●   OpenStack
●   Proxmox
RBD in Proxmox
●   Does not use libvirt
●   RBD integrated in v2.2, not in the GUI yet
●   Snapshotting
●   No layering at this point
Proxmox demo
●   Show a small demo of Proxmox
●   Adding the pool
●   Creating a VM with an RBD disk
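Since there is no GUI support yet, the pool is added by hand in `/etc/pve/storage.cfg`; a sketch with made-up monitor addresses (the entry layout may differ between Proxmox versions):

```shell
# /etc/pve/storage.cfg
rbd: ceph-rbd
    monhost 10.0.0.1;10.0.0.2;10.0.0.3
    pool rbd
    username admin
    content images
```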
RBD in CloudStack
●   Has been integrated in version 4.0
●   Relies on libvirt
●   Basic RBD implementation
    –   No snapshotting
    –   No layering
    –   No TRIM/discard
●   Still need NFS for SystemVMs
CloudStack demo
●   Show a CloudStack demo
●   Show RBD pool in CloudStack
●   Create an instance with RBD storage
RBD in OpenStack
●   Can use RBD for disk images, both boot and
    data
●   Glance has RBD support for storing images
●   A lot of new RBD work went into Cinder
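A sketch of the relevant configuration (pool names are examples, and option names vary across OpenStack releases; check the documentation for yours):

```shell
# glance-api.conf: store images in RBD
#   default_store  = rbd
#   rbd_store_pool = images

# cinder.conf: serve volumes from RBD
#   volume_driver = cinder.volume.driver.RBDDriver
#   rbd_pool      = volumes
```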
Using RBD in the Cloud
●   Virtual Machines have a random I/O pattern
●   70% write, 30% read disk I/O
    –   Reads are cached by the OSD and the virtual
        machine itself, so the disks mostly handle writes
    –   2x replication means each write is stored twice,
        so divide your available write I/O by 2
●   Use Journaling! (Gregory will tell more later)
●   Enable the RBD cache (rbd_cache=1)
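Two ways to switch the librbd cache on (image name is an example; cache settings have defaults you may want to tune):

```shell
# 1) ceph.conf on the hypervisor, for all clients:
#    [client]
#        rbd cache = true

# 2) or per disk on the Qemu command line:
#    -drive file=rbd:rbd/vm-disk-1:rbd_cache=1,format=raw,if=none,id=drive0
```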
Is it production ready?
●   We think it is!
●   Large scale deployments out there
    –   Big OpenStack clusters backed by Ceph
    –   CloudStack deployments known to be running with
        Ceph
●   It is not “1” or “0”, you will have to evaluate for
    yourself
Commodity hardware #1
Commodity hardware #2
Thank you!
I hope to see a lot of RBD-powered clouds in the future!

Questions?
