Running backups with Ceph-to-Ceph
OSDC 2019, May 15th, 2019
Michel Raabe
Cloud Solution Architect
B1 Systems GmbH
raabe@b1-systems.de
Introducing B1 Systems
founded in 2004
operating both nationally and internationally
more than 100 employees
vendor-independent (hardware and software)
focus:
consulting
support
development
training
operations
solutions
branch offices in Rockolding, Berlin, Cologne & Dresden
Areas of Expertise
Requirements
“We want to back up our Ceph cluster”
independent of OpenStack
asynchronous
not always offsite
fuc**** file browser
Methods
native
rbd-mirror
s3 multisite
3rd party
“scripts”
backy2
rbd2qcow2
???
Challenges
crash consistency
disaster recovery
bandwidth
costs
Crash consistency
only access to the base layer
unknown workload
possibly corrupt filesystems
possibly lost in-flight transactions
Disaster recovery
how to access the remote cluster?
bandwidth?
route?
switch the storage backend?
supported by the application?
Bandwidth
bandwidth vs. backup volume
20 TB in 24 h over 800 Mbit/s? (quick check below)
network latency
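A rough sanity check (assuming the full 800 Mbit/s is usable for backup traffic):
800 Mbit/s ≈ 100 MB/s
100 MB/s × 86,400 s ≈ 8.6 TB per day
8.6 TB/day < 20 TB, so a full copy never fits the window; only incremental diffs do.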
Costs
a second cluster
different types of disks (HDD/SSD/NVMe)
similar amount of available disk space
uplink
1/10/100 Gbit
What can we do?
rbd-mirror – Overview
mirroring granularity:
single image
whole pool
available since Jewel (10.2.x)
asynchronous
daemon: rbd-mirror
no “file browser”
rbd-mirror – What’s needed?
rbd feature: journaling (+ exclusive lock):
rbd feature enable ... journaling
30s default trigger:
rbd_mirror_sync_point_update_age
cluster name
default is “ceph”
a non-default name is possible, but hard to track
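Putting this together, a minimal sketch of switching one image to mirroring (pool and image names are placeholders):
rbd feature enable mypool/vm1 exclusive-lock   # journaling requires exclusive-lock
rbd feature enable mypool/vm1 journaling
rbd mirror pool enable mypool image            # per-image mirroring mode for this pool
rbd mirror image enable mypool/vm1             # start mirroring this image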
rbd-mirror – Keyring sample
keyring naming scheme:
ceph.client.admin.keyring
<clustername>.<type>.<id>.keyring
example config files:
remote.client.mirror.keyring
remote.conf
local.client.mirror.keyring
local.conf
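With both pairs under /etc/ceph, --cluster selects which config/keyring set a command uses, e.g. (cluster names “local”/“remote” and client id “mirror” as above):
rbd --cluster local --id mirror mirror pool status mypool    # talks to local.conf
rbd --cluster remote --id mirror ls mypool                   # talks to remote.conf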
rbd-mirror – Problems
the rbd-mirror daemons must be able to connect to both clusters
no support for two public networks:
same network, or
routing kung fu
krbd module does not support journaling
s3 multisite – Overview
S3 (Simple Storage Service)
compatible with Amazon S3
Swift compatible
Keystone integration
encryption
no “file browser”
s3 multisite – What’s needed?
read-write or read-only
cloud sync module (e.g. to AWS)
NFS export possible
S3 “client”
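A rough sketch of attaching a secondary zone to an existing master zone (URLs and zonegroup/zone names are placeholders; the system user’s keys are elided):
radosgw-admin realm pull --url=http://master-rgw --access-key=... --secret=...
radosgw-admin zone create --rgw-zonegroup=de --rgw-zone=muc --endpoints=http://backup-rgw --access-key=... --secret=...
radosgw-admin period update --commit
# then (re)start the radosgw on the secondary side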
s3 multisite – Problems
master zone “default”
zonegroup(s) “default”
<zonegroup>-<zone1> <-> <zonegroup>-<zone2>
default-default <-> default-default
de-fra <-> de-muc
zonegroups are synced
one connection: radosgw to radosgw
3rd party
custom scripts
backy2
...
all based on snapshots and diff exports
3rd party – Scripts – Overview
should use snapshots and diff exports (concrete example below)
rbd snap create ..
rbd export-diff .. | ssh rbd import-diff ..
rbd export-diff --from-snap .. | ssh rbd import-diff ..
someone has to track the snapshots
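Filled in with placeholder names (pool mypool, image vm1, remote host backup); note the remote image must exist before import-diff:
# initial full copy
rbd snap create mypool/vm1@snap1
rbd export mypool/vm1@snap1 - | ssh backup rbd import - mypool/vm1
ssh backup rbd snap create mypool/vm1@snap1    # common base on both sides
# every following run ships only the delta
rbd snap create mypool/vm1@snap2
rbd export-diff --from-snap snap1 mypool/vm1@snap2 - | ssh backup rbd import-diff - mypool/vm1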
3rd party – backy2 – Overview
internal DB
snapshots
rbd and file backends
can store backups in S3
tricky with k8s
Python (deb packages only)
NBD mount
3rd party – Problems
stale/old snapshots
k8s with Ceph backend:
PV cannot be deleted (image still has snapshots)
tracking/database needed
still no “file browser”
3rd party – Example workflow
1 Create initial snapshot.
2 Copy first snapshot to remote cluster.
3 ... next hour/day/week/month ...
4 Create a snapshot.
5 Copy the diff between snap1 and snap2 to the remote cluster.
6 Delete old snapshots.
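The same cycle as a cron-able sketch (untested; pool, image, host and state file are placeholders):
#!/bin/bash
# incremental Ceph-to-Ceph copy of one image, keeping one common snapshot
set -euo pipefail
POOL=mypool; IMG=vm1; DEST=backup
NEW="backup-$(date +%F-%H%M)"
STATE="/var/lib/ceph-backup/${POOL}_${IMG}.last"
mkdir -p "$(dirname "$STATE")"
rbd snap create "$POOL/$IMG@$NEW"
if [ -s "$STATE" ]; then
    LAST=$(cat "$STATE")
    # steps 4+5: ship only the diff since the last common snapshot
    rbd export-diff --from-snap "$LAST" "$POOL/$IMG@$NEW" - \
        | ssh "$DEST" rbd import-diff - "$POOL/$IMG"
    # step 6: drop the old snapshot on both sides
    rbd snap rm "$POOL/$IMG@$LAST"
    ssh "$DEST" rbd snap rm "$POOL/$IMG@$LAST"
else
    # steps 1+2: initial snapshot + full copy
    rbd export "$POOL/$IMG@$NEW" - | ssh "$DEST" rbd import - "$POOL/$IMG"
    ssh "$DEST" rbd snap create "$POOL/$IMG@$NEW"
fi
echo "$NEW" > "$STATE"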
What can’t we do?
rbd export
plain rbd export
for one-time syncs
only disk images
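In its simplest form (names are placeholders):
rbd export mypool/vm1 - | ssh backup rbd import - mypool/vm1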
Figure: rbd export | rbd import - 21 min (20 GB)
rbd export-diff
plain rbd export-diff
depends on the (total) diff size
only disk images
fast(er)?
can be scheduled (e.g. via cron)
Figure: rbd export-diff - 8 min (20 GB)
rbd mirror
rbd-mirror
slow?
runs in the background
no scheduling possible
Figure: rbd mirror - 30 min (20 GB)
s3 multisite
S3-capable client
“only” http(s)
scalable (really, really well)
Wrap-up
What now?
1 rbd snapshots
simple, easy, fast
controllable
can be a backup
2 s3 multisite
simple, fast
built-in
NFS export (Ganesha)
no backup at all
3 rbd-mirror
disk-based
built-in
no backup at all
What about ...
CephFS
no built-in replication
hourly/daily rsync?
snapshots?
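One pragmatic pattern, assuming a mounted CephFS with snapshots allowed (paths and host are placeholders): snapshot via the .snap directory, rsync the frozen view, drop the snapshot.
SNAP="backup-$(date +%F)"
mkdir "/mnt/cephfs/.snap/$SNAP"          # instant, consistent snapshot
rsync -a --delete "/mnt/cephfs/.snap/$SNAP/" backup:/srv/cephfs-backup/
rmdir "/mnt/cephfs/.snap/$SNAP"          # remove the snapshot afterwards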
Thank You!
For more information, contact info@b1-systems.de
or +49 (0)8457 - 931096
