Moh Ahmed and Raymond Maika presented 'Using Rook to Manage Kubernetes Storage with Ceph' at Montreal's first Cloud Native Day, which took place on June 11, 2019.
1. USING ROOK TO MANAGE K8S STORAGE WITH CEPH
MOH AHMED, RAYMOND MAIKA
JUNE 11TH, 2019
2. Agenda
• What is Rook?
• Rook Timeline
• Operator Pattern
• Rook Design
• Rook Architecture with Ceph
• Container Storage Interface
• Monitoring Ceph on Kubernetes
• Demo of a Ceph Cluster Upgrade
• Upcoming Rook Features
3. What is Rook?
• Reliance on external storage
– Not portable
– Requires these services to be accessible
– Deployment burden
• Reliance on cloud provider managed services
– Vendor lock-in
• Day 2 operations - who is managing the storage?
8. Operator Pattern
• A method of packaging, deploying, and managing an application
• Extends Kubernetes API through Custom Resource Definitions (CRDs)
• Reconciliation loops enforce the desired state declared in the CRDs
• The Operator will:
– Observe the objects
– Analyze current vs. desired state
– Act on changes
(Diagram: the Observe → Analyze → Act loop)
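The observe/analyze/act loop above can be sketched in a few lines. This is a minimal illustration only — Rook's actual operator is written in Go and watches the Kubernetes API; the dict-based "state" and resource names here are hypothetical:

```python
# Minimal sketch of an operator-style reconciliation loop.
# Desired state comes from the CRD spec; actual state is what was observed.

def reconcile(desired, actual):
    """Analyze: compare desired vs. actual state and compute the actions
    needed to converge them."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))      # missing object
        elif actual[name] != spec:
            actions.append(("update", name, spec))      # drifted object
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))      # object should not exist
    return actions

def act(actual, actions):
    """Act: apply the computed actions to the observed state."""
    for verb, name, spec in actions:
        if verb == "delete":
            actual.pop(name)
        else:
            actual[name] = spec
    return actual

# Observe -> Analyze -> Act; an operator repeats this until states converge.
desired = {"mon-a": {"image": "ceph/ceph:v14"}}
actual = {"mon-a": {"image": "ceph/ceph:v13"}, "mon-b": {"image": "ceph/ceph:v13"}}
actual = act(actual, reconcile(desired, actual))
```

After one pass, the observed state matches the CRD's desired state and a subsequent reconcile computes no further actions.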
9. Rook Design
(Architecture diagram: kubectl and the Kubelet connect to the Kubernetes API, backed by etcd. The Rook Operator introduces new objects — Storage Clusters, Storage Pools, Object Store, File Store, and Volume Attachments — alongside the standard objects: Deployments, DaemonSets, Pods, Services, StorageClass/PV/PVC, ClusterRole, Namespace, Config Maps, Daemons. The Rook Agent, the Rook Flex Driver, and the CSI Driver integrate with the Kubelet, and the operator monitors the storage daemons through a Management & Health API.)
11. Container Storage Interface
• A specification establishing a standard for block and file storage systems
• Allows the freedom to develop volume plug-ins externally from the orchestrator
• Similar to how Container Network Interface (CNI) became a standard
• Orchestrator agnostic ensuring compatibility across different platforms
12. Ceph CSI Driver
• Implements an interface between a CSI orchestrator (e.g. Kubernetes) and the Ceph cluster
Ceph CSI Driver Version | CSI Spec Version
v0.3.0                  | v0.3
v1.0.0                  | v1.0.0

Kubernetes | CSI Spec Compatibility | Status
v1.9       | v0.1.0                 | Alpha
v1.10      | v0.2.0                 | Beta
v1.11      | v0.3.0                 | Beta
v1.13      | v0.3.0, v1.0.0         | GA
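To consume the driver from Kubernetes, a StorageClass points at the CSI provisioner. A minimal sketch for RBD block storage — the class name, clusterID, and pool are illustrative and depend on your deployment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block        # illustrative name
provisioner: rbd.csi.ceph.com  # Ceph CSI RBD driver
parameters:
  clusterID: rook-ceph         # ID of the Rook-managed cluster (assumption)
  pool: replicapool            # Ceph pool backing the volumes (assumption)
reclaimPolicy: Delete
```

PVCs referencing this class are then provisioned by the CSI driver rather than the Flex volume driver.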
14. Monitoring Ceph on K8s
• Ceph Manager Daemon (ceph-mgr)
– Required since the 12.x (Luminous) release
– Provides monitoring interfaces
– Has a Prometheus plugin
– Built-in dashboard exposed since the Rook 0.8 release
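Rook surfaces these mgr features through the CephCluster spec. A sketch of enabling the built-in dashboard — field names follow the Rook v1.0 docs, so verify against your version:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dashboard:
    enabled: true   # built-in ceph-mgr dashboard, exposed since Rook 0.8
```

With the mgr Prometheus plugin enabled, its metrics endpoint can then be scraped by a Prometheus instance running in the cluster.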
15. Upgrade Workflow
• Since Rook v0.9:
– The operator and its storage can be upgraded independently
– Two separate images for the Rook operator and the Ceph cluster
• Rook 1.0 supports:
– Ceph Luminous (v12)
– Ceph Mimic (v13)
– Ceph Nautilus (v14)
• Upgrading the Ceph cluster is as simple as editing the image in the CephCluster object
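Editing the image looks like the following sketch — the cluster name and namespace assume the default Rook example manifests:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # Changing this image (e.g. from a v13 Mimic tag to a v14 Nautilus tag)
    # prompts the operator to roll the Ceph daemons to the new version.
    image: ceph/ceph:v14
```

The operator detects the change through its reconciliation loop and upgrades the daemons without a separate upgrade tool.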
16. Demo
• Show the CRDs and the pod deployment
• Show monitoring tools
• Upgrade the Ceph cluster to a new version
17. Future Plans
• Rook v1.1
– Increased stability for other storage backends
– Stable release for Ceph-CSI plugin
– Improved upgrade workflows
18. Get Involved
• Contribute to Rook
– https://github.com/rook/rook
– https://rook.io/
• Slack - https://rook-io.slack.com/
• Twitter - @rook_io
• Forums - https://groups.google.com/forum/#!forum/rook-dev
• Community Meetings
19. References
• Thanks to Jared Watts, founder of Upbound and senior maintainer of the Rook project, for his help and for the use of some of his slide materials
• https://rook.io/docs/rook/v1.0/
• https://coreos.com/operators/
• https://www.slideshare.net/Jakobkaralus/the-kubernetes-operator-pattern-containerconf-nov-2017
• https://kubernetes-csi.github.io/docs/
• https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md
• https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md
• https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/
Editor's notes
Get in touch: moh.ahmed@cengn.ca
Slide referenced from Jared Watts, Rook Project Intro
Rook became a project hosted under the Cloud Native Computing Foundation (CNCF) early in 2018
Much like how Kubernetes is an orchestrator for containers, Rook is an orchestrator for storage
Automate deployment, bootstrapping, configuration, and upgrading; provision and mount storage with PVCs.
More than just an operator:
– Operator patterns/plumbing
– Storage resource normalization: enables a user to easily specify whether to converge both storage and compute, or to keep those resources separated
– Common policies, specs, and logic around backups and snapshots
– Quality of service guarantees
– Placement of system components across nodes in the cluster
– Memory and CPU resource utilization
– Networking configuration and topology
– And more!
Testing effort
Our adoption started early in 0.6/0.7 – the focus then was Ceph
As the releases continued into 0.8 and 0.9, more storage backends were added
Some fundamental changes were needed to accommodate the different backends – separate namespaces, new CRDs
In September of 2018, Rook reached incubation phase within CNCF further solidifying its role within the landscape
With 0.9, the independence between Rook and its backend was solidified: one can be updated without the need to update the other
1.0 introduced more maturity in the Ceph storage and an experimental implementation of the CSI-Ceph driver was released
What we don’t see here are the interim releases pushed throughout the release cycle, with a point release every 5 months or so
The Kubelet process connects to the Kubernetes API Server
The normal Kubernetes objects are leveraged: Deployments, Daemonsets, StorageClass/PV/PVC
Along with those objects, Custom Resource Definitions are also deployed: storage, clusters, storage pools
Once those objects are deployed, the Rook operator is deployed which begins running the reconciliation loops discussed previously
Various other daemons will run specific to the storage backend chosen. Rook begins querying the management and health APIs of those daemons to ensure a healthy cluster
Rook Agent - Daemonset running on all nodes to manage attachment of storage to the hosts
Prior to 1.0, the Rook Flex volume driver was the only way to manage the volume attachments
As mentioned before, the CSI driver is another way to manage the storage on the hosts, but it interacts with the Kubelet directly through CSI and removes the need for the Flex volume driver
Traditionally, volume plugins were in-tree (code existing in the core Kubernetes repo)
New plugins would require going through the code repo – tight coupling and dependency on Kubernetes releases
The Flex Volume plugin tried to address this by exposing APIs but didn’t solve all the problems (e.g. dependencies)
CSI is a standard for exposing block and file storage system
Third-party storage providers can write and deploy plugins without needing to touch the Kubernetes code