
OSDC 2019 | Running backups with Ceph-to-Ceph by Michael Raabe

This presentation highlights several different methods for backups inside of Ceph clusters. We often receive requests for both local and remote backups, so we would like to introduce backup methods using external tools as well as Ceph's own 'rbd export' and 'rbd-mirror' approaches. Learn about the pros and cons of each approach, and be warned of possible pitfalls of both native and OpenStack-based approaches.


  1. Running backups with Ceph-to-Ceph
     OSDC 2019, May 15th, 2019
     Michel Raabe, Cloud Solution Architect, B1 Systems GmbH, raabe@b1-systems.de
  2. Introducing B1 Systems
     - founded in 2004
     - operating both nationally and internationally
     - more than 100 employees
     - vendor-independent (hardware and software)
     - focus: consulting, support, development, training, operations, solutions
     - branch offices in Rockolding, Berlin, Cologne & Dresden
  3. Areas of Expertise
  4. Running backups with Ceph-to-Ceph
  5. Requirements
     "We want to backup our Ceph cluster"
     - independent from OpenStack
     - asynchronous
     - not always offsite
     - fuc**** file browser
  6. Methods
     - native: rbd-mirror, s3 multisite
     - 3rd party: "scripts", backy2, rbd2qcow, ???
  7. Challenges
  8. Challenges
     - crash consistency
     - disaster recovery
     - bandwidth
     - costs
  9. Crash consistency
     - only access to the base layer
     - unknown workload
     - corrupt filesystem
     - lost transactions
  10. Disaster recovery
      - how to access the remote cluster? bandwidth? route?
      - switch the storage backend? supported by the application?
  11. Bandwidth
      - bandwidth vs. backup volume
      - 20 TB in 24 h over 800 Mbit/s - ?
      - network latency
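
      A quick back-of-the-envelope check (assuming ideal, sustained throughput and no protocol overhead):
        800 Mbit/s = 100 MB/s, i.e. roughly 8.6 TB per 24 h
        moving 20 TB in 24 h needs roughly 230 MB/s, i.e. about 1.9 Gbit/s sustained
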
  12. Costs
      - a second cluster
      - different type of disks (hdd/ssd/nvme)
      - similar amount of available disk space
      - uplink 1/10/100 Gbit
  13. What can we do?
  14. rbd-mirror – Overview
      - mirroring supports: single image or whole pool
      - available since Jewel (10.2.x)
      - asynchronous
      - daemon: rbd-mirror
      - no "file browser"
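
      A minimal sketch of the two granularities (pool and image names below are examples, not from the talk):
        # mirror every journaling-enabled image in a pool
        rbd mirror pool enable rbd pool
        # ... or opt in per image
        rbd mirror pool enable rbd image
        rbd mirror image enable rbd/vm-disk-1
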
  15. rbd-mirror – What's needed?
      - rbd feature: journaling (+ exclusive-lock): rbd feature enable ... journaling
      - 30 s default trigger: rbd_mirror_sync_point_update_age
      - cluster name: default is "ceph"; a different name is possible but ... hard to track
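
      A hedged example of enabling the required features on an existing image (image name is a placeholder):
        # journaling requires exclusive-lock; both can be enabled in one call
        rbd feature enable rbd/vm-disk-1 exclusive-lock journaling
        # optional ceph.conf tuning of the 30 s default mentioned above
        # (value shown is arbitrary):
        # [client]
        # rbd_mirror_sync_point_update_age = 15
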
  16. rbd-mirror – Keyring
      - sample key layout: ceph.client.admin.keyring, i.e. <clustername>.<type>.<id>.keyring
      - example config files: remote.client.mirror.keyring, remote.conf, local.client.mirror.keyring, local.conf
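
      With that file layout in /etc/ceph/ (cluster names "local" and "remote" as in the example above), the remote cluster can be registered as a peer of a local pool; the pool name here is a placeholder:
        rbd --cluster local mirror pool peer add rbd client.mirror@remote
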
  17. rbd-mirror – Problems
      - rbd-mirror daemons must be able to connect to both clusters
      - no two public networks: same network or routing kung fu
      - krbd module
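
      Sketch of running and checking the daemon (the systemd instance name depends on the deployment and is an assumption here):
        systemctl enable --now ceph-rbd-mirror@mirror
        # verify replication health
        rbd mirror pool status rbd --verbose
        rbd mirror image status rbd/vm-disk-1
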
  18. s3 multisite – Overview
      - S3 - Simple Storage Service, compatible with Amazon S3
      - Swift compatible
      - Keystone integration
      - encryption
      - no "file browser"
  19. s3 multisite – What's needed?
      - read-write or read-only
      - cloudsync plugin (e.g. aws)
      - NFS export possible
      - S3 "client"
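
      Any S3-capable client works against the radosgw endpoint; for example with the AWS CLI (endpoint URL, bucket and object names are placeholders):
        aws --endpoint-url http://rgw.example.com:8080 s3 ls
        aws --endpoint-url http://rgw.example.com:8080 s3 cp backup.tar.gz s3://backups/
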
  20. s3 multisite – Problems
      - master zone "default", zonegroup(s) "default"
      - <zone>-<zonegroup1> <-> <zone>-<zonegroup2>
        default-default <-> default-default
        de-fra <-> de-muc
      - zonegroups are synced
      - one connection ... radosgw to radosgw
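
      For orientation, a condensed and heavily simplified sketch of the master-side multisite setup; realm, zonegroup, zone names and endpoints are examples, and the secondary site still has to pull the realm and create its own zone:
        radosgw-admin realm create --rgw-realm=backup --default
        radosgw-admin zonegroup create --rgw-zonegroup=de --endpoints=http://rgw-fra:8080 --master --default
        radosgw-admin zone create --rgw-zonegroup=de --rgw-zone=de-fra --endpoints=http://rgw-fra:8080 --master --default
        radosgw-admin period update --commit
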
  21. 3rd party
      - custom scripts
      - backy2
      - ...
      - all based on snapshots and diff exports
  22. 3rd party – Scripts – Overview
      - should use snapshot and 'diff':
        rbd snap create ..
        rbd export-diff .. | ssh rbd import-diff ..
        rbd export-diff --from-snap .. | ssh rbd import-diff ..
      - someone has to track the snapshots
  23. 3rd party – backy2 – Overview
      - internal db
      - snapshots
      - rbd and file
      - can do backups to s3
      - tricky with k8s
      - python (only deb)
      - nbd mount
  24. 3rd party – Problems
      - active/old snapshots
      - k8s with ceph backend: pv cannot be deleted
      - tracking/database
      - still no "file browser"
  25. 3rd party – Example workflow
      1. Create initial snapshot.
      2. Copy first snapshot to remote cluster.
      3. ... next hour/day/week/month ...
      4. Create a snapshot.
      5. Copy diff between snap1 and snap2 to remote cluster.
      6. Delete old snapshots.
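
      A minimal shell sketch of this workflow (pool, image, host and snapshot names are placeholders; a real script needs error handling and proper snapshot bookkeeping on both sides):
        #!/bin/bash
        POOL=rbd IMG=vm-disk-1 REMOTE=backup-mon1
        TODAY=$(date +%Y%m%d)
        # last snapshot that was already shipped (empty on the first run)
        LAST=$(rbd snap ls ${POOL}/${IMG} --format json | jq -r '.[-1].name // empty')

        rbd snap create ${POOL}/${IMG}@${TODAY}
        if [ -z "${LAST}" ]; then
            # first run: full copy, then create the matching snapshot remotely
            rbd export ${POOL}/${IMG}@${TODAY} - | ssh ${REMOTE} rbd import - ${POOL}/${IMG}
            ssh ${REMOTE} rbd snap create ${POOL}/${IMG}@${TODAY}
        else
            # later runs: ship only the delta since the previous snapshot
            rbd export-diff --from-snap ${LAST} ${POOL}/${IMG}@${TODAY} - \
                | ssh ${REMOTE} rbd import-diff - ${POOL}/${IMG}
            rbd snap rm ${POOL}/${IMG}@${LAST}
        fi
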
  26. What can't we do
  27. rbd export
      - plain rbd export
      - for one-time syncs
      - only disk images
      Figure: rbd export | rbd import - 21 min (20G)
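
      As a one-liner, a full one-time copy to the remote cluster looks like this (image and host names are placeholders):
        rbd export rbd/vm-disk-1 - | ssh backup-mon1 rbd import - rbd/vm-disk-1
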
  28. rbd export-diff
      - plain rbd export-diff
      - depends on the (total) diff size
      - only disk images
      - fast(er)? scheduled
      Figure: rbd export-diff - 8 min (20G)
  29. rbd mirror
      - rbd-mirror
      - slow? runs in the background
      - no schedule
      Figure: rbd mirror - 30 min (20G)
  30. s3 multisite
      - s3 capable client
      - "only" http(s)
      - scalable (really, really well)
  31. Wrap-up
  32. What now?
      1. rbd snapshots: simple, easy, fast; controllable; can be a backup
      2. s3 multisite: simple, fast; built-in; nfs export (ganesha); no backup at all
      3. rbd-mirror: disk-based; built-in; no backup at all
  33. 33. What’s with ... cephfs no built-in replication hourly/daily rsync? snapshots? B1 Systems GmbH Running backups with Ceph-to-Ceph 33 / 34
  34. Thank You!
      For more information, refer to info@b1-systems.de or +49 (0)8457 - 931096
