OpenEBS Hangout #4
22nd December 2017

Agenda
● Introducing MayaOnline (5-10 minutes)
● Recap OpenEBS (5 minutes)
● Release updates (5-10 minutes)
○ OpenEBS
○ Kubernetes contributions
○ What is coming in OpenEBS 0.6?
● cMotion overview & demo (20 minutes)
MayaOnline Introduction
Maya: Cross-cloud control plane
○ Visibility, automation, collaboration, and, over
time, learning via machine learning
○ OpenEBS users can subscribe to a free version
and later upgrade to a paid subscription that
includes OpenEBS enterprise support
OpenEBS: Containerized Storage for Containers
○ Open source software that allows each
workload - and each DevOps team - to have its
own storage controller
[Diagram: MAYAOnline.io reached via API, the Maya GUI, and ChatOps, providing ✓ Visibility ✓ ChatOps ✓ Optimization]
OpenEBS Quick Recap
OpenEBS recap - Why, What, How?
● Containerized storage for containers
● Storage solution for stateful applications running on k8s
● One storage controller per application/team vs. a monolithic
storage controller
● Integrates nicely into k8s (provisioning) - a quick check is sketched below
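As a quick, hedged illustration of that integration: once the OpenEBS operator (installed later in this deck) is running, the dynamically usable storage classes can be listed with plain kubectl. The class names and output below are illustrative assumptions only; the actual names depend on the OpenEBS release and on any classes you define yourself.

kubemaster~: kubectl get storageclass
NAME               PROVISIONER
openebs-standard   openebs.io/provisioner-iscsi
openebs-percona    openebs.io/provisioner-iscsi

Any PVC that names one of these classes is then provisioned dynamically by OpenEBS instead of requiring a pre-created PV.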
Architecture: Kubernetes

[Diagram: a K8s Master (etcd, APIs, controller manager, scheduler) managing several Minions; each Minion runs a Kubelet and hosts Pods, each Pod grouping multiple containers]

Minions run on physical nodes
Pods group containers that share an IP address; each node runs a Kubelet agent
K8s Master services include: etcd, APIs, the scheduler, the controller manager & others
Converged: Kubernetes + OpenEBS

[Diagram: the same K8s Master and Minions as above, with OpenEBS APIs and the OpenEBS storage scheduler added alongside the K8s Master services, and OpenEBS data containers running in Pods on the Minions]

Data Containers run in Pods on physical machines - an entire enterprise-class storage controller
Data Containers mean every workload - and every per-app team - has its own controller
OpenEBS runs on the Master; it delivers services such as: APIs, the storage scheduler, analytics & others
How to get started?
On your kubemaster:
kubemaster~: kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
kubemaster~: kubectl apply -f percona.yaml
In the application YAML, choose the OpenEBS storage class, set up the policies, and launch the application
kubemaster~: kubectl get pods | grep pvc
pvc-8a9fc4b1-d838-11e7-9caa-42010a8000a7-ctrl-696530238-ngrgj 2/2 Running 0 36s
pvc-8a9fc4b1-d838-11e7-9caa-42010a8000a7-rep-3408218758-2ldzv 1/1 Running 0 36s
pvc-8a9fc4b1-d838-11e7-9caa-42010a8000a7-rep-3408218758-6mwj5 1/1 Running 0 36s
OpenEBS provisions a storage controller with the requested number of replicas; the volume is bound and ready (a minimal PVC sketch follows)
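To make the "choose the OpenEBS storage class in the application YAML" step concrete, here is a minimal PersistentVolumeClaim sketch. It is illustrative only: the class name openebs-standard and the size are assumptions, and the percona.yaml used above additionally contains the Percona Deployment and Service that mount this claim.

kubemaster~: cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol1-claim              # referenced by the application's volume spec
spec:
  storageClassName: openebs-standard # assumed OpenEBS class created by the operator
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi                   # illustrative size
EOF

Once the claim is applied, the ctrl and rep pods shown above appear and the PVC moves to Bound.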
Stateful Apps using OpenEBS Volumes

[Diagram: a Kubernetes cluster running a stateless ingress service (app) and a stateful workload (DB, etc.). The application (Deployment, Service, PVC, PV - OpenEBS Volume) consumes an OpenEBS Volume (Deployment, Service, PV - Disk): a cStor target ov-vol1 fronting replicas ov-vol1-r1 on node1 and ov-vol1-r2 on node2, backed by a storage backend of disks]
OpenEBS release updates
Release updates:
OpenEBS 0.5:
● Prometheus
● Grafana
● Volume exporter side-car per storage controller
● New storage policies (an illustrative sketch follows this list)
○ number of replicas
○ monitoring = on/off
○ storage pool (AWS EBS, GPD, local LVM, etc.)
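As a rough illustration of how such per-volume policies can be consumed, they ride along on the StorageClass (or PVC) as annotations. This is a sketch, not the exact OpenEBS 0.5 syntax: the class name and annotation keys below are hypothetical placeholders, so check the release docs for the real names.

kubemaster~: cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-db                        # hypothetical class name
  annotations:
    openebs.io/replica-count: "3"         # hypothetical key: number of replicas
    openebs.io/volume-monitor: "true"     # hypothetical key: monitoring on/off
    openebs.io/storage-pool: "default"    # hypothetical key: backing storage pool
provisioner: openebs.io/provisioner-iscsi
EOF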
K8s contributions from OpenEBS:
In progress
What is coming in OpenEBS 0.6?
● OpenEBS provisioner will support more storage specs from k8s
○ PV resource policy (quota, number of PVCs, etc.)
○ Volume Resize
○ Volume Snapshots
○ Block Volume Claims
● Disk Monitoring and Alerts
● Refactor Storage Policy specification as CRDs
● Support OpenEBS Upgrades via kubectl
● Enhance debuggability
● Enhance CI with platform testing on OpenShift/CentOS, CoreOS, and Rancher
cStor & cMotion
(Tech preview)
cStor can store up to 2^128 bits and can achieve millions of IOPS with microsecond latency

What cStor is not
● A distributed file system, typically needed for capacity and
performance scaling; you can't have one without the other
○ hard to manage in production (you do not want a storage team)
● Volumes are typically small - GBs, certainly not PBs
○ no need to scale capacity using complex distributed algorithms
● What about performance?
○ NVMe devices are widely available in the cloud
○ a single NVMe device can do up to 400K IOPS; 3D XPoint is on its way
● Cloud native applications have built-in scalability
○ no need to scale a monolithic storage system by adding more
drives to "raid groups"
Reimagined how storage should work for cloud native apps, on-prem and in the cloud

What is cStor
● A new storage engine that brings enterprise-class features in containers for
containers
○ snapshots, clones, compression, replication, data integrity, …
● Key enabler for cMotion (demo)
○ the ability to move data efficiently and incrementally c2c (cloud to cloud)
● Always consistent on disk (transactions)
● Data integrity and encryption, crucial for cloud deployments
● Online expansion of existing volumes (resize)
● Cloud native design vs cloud washed
○ built from the ground up vs an existing solution with container lipstick
Under the hood of cStor

[Diagram: a controller exposing iSCSI, iSER, NVMe-oF, NFS(?) to the application and replicating to cStor containers on node1, node2, and node3 - the controller is also a container!]

● The controller serves out the blocks to the application
○ defined in YAML, deployed by maya and k8s
● Based on the replication level, the controller forwards the IO
to the replicas (cStor)
● cStor is transactional and is always consistent on disk
● Copy on Write (CoW): data never gets overwritten but is written
to unused blocks
Atomic updates, data always consistent on disk - cStor itself is stateless

Transactions
● Each write is assigned a transaction
● Transactions are batched into transaction groups (for optimal bandwidth)
● The latest transaction number points to the "live" data
● Transaction numbers are updated atomically, which means that either all
writes in the group have succeeded or all have failed
● A snapshot is a reference to an old transaction (and its data)
○ quick scan of newly written blocks since the last transaction
● cMotion: send the blocks that have changed between two transactions
(an analogy is sketched after this list)
● All of this from nice and comfortable user space
○ no kernel dependencies (needed for c2c)
○ no kernel taints
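Because cStor's snapshot-and-transaction model draws on ZFS concepts (the deck later notes OpenEBS "uses proven code - ZFS & Linux"), an incremental ZFS send between two snapshots is a reasonable mental model for what cMotion does. The commands and pool/volume names below are purely an analogy with hypothetical names - cMotion itself is driven through MayaOnline, not by running these by hand.

# mark two points in time (two transaction references)
zfs snapshot pool1/ov-vol1@txg-1000
zfs snapshot pool1/ov-vol1@txg-2000

# ship only the blocks that changed between the two snapshots
zfs send -i pool1/ov-vol1@txg-1000 pool1/ov-vol1@txg-2000 | \
  ssh target-node zfs receive pool1/ov-vol1

Only the delta travels over the wire, which is what makes repeated cloud-to-cloud moves cheap.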
Hardware trends force a change in the way we do things

Storage performance challenges
● How to achieve high performance numbers from within user space?
○ copyin/copyout of data between kernel and user space is expensive
○ so are context switches
● With current HW trends, the kernel becomes the bottleneck
○ 1U white-label boxes serving out 17 million IOPS (!!!)
● Low-latency SSDs and 100 Gb network cards are becoming the norm
● 10 Gb NIC
○ ~14.88 million 64-byte packets per second; the CPU has only a few hundred
cycles per packet per NIC (worked out after this list)
● Clock frequency remains relatively the same, while core count goes up
○ we've got cores to spare
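A back-of-the-envelope version of that per-packet budget, assuming (purely for illustration) a 3 GHz core: at 64-byte frames, 10 GbE line rate is about 14.88 million packets per second, so one core has roughly 3.0x10^9 cycles/s ÷ 14.88x10^6 packets/s ≈ 200 cycles, or about 67 ns, per packet - on the order of a single memory access. That is why per-packet system calls and data copies are unaffordable at these rates.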
HW performance kernel vs user
It's raining IOPS
Source : https://software.intel.com/en-us/articles/accelerating-your-nvme-drives-with-spdk
Achieve higher performance in user space
● Solution: bypass the kernel subsystems altogether and run
everything in user space
● Instead of doing network or disk IO through the kernel, we submit the IO to
another container which has direct access to the HW resources (IOC)
○ map NIC rings into user space
○ map PCI BARs from NVMe devices
○ lockless, message passing between cores
● Poll Mode Drivers (PMD)
○ busy polling - dedicated cores run at 100% CPU
● Borrow from VM technology to construct interfaces between
containers
○ VHOST and VIRTIO-SCSI
Summary
● cStor provides enterprise-class features, like your friendly
neighbourhood <insert vendor> storage system
● Provides data integrity features missing natively in the Linux kernel
● Provides the ability to work with data efficiently through CoW
● Bypasses the kernel for IO to achieve higher performance than the in-kernel path
● Cloud native design: uses cloud native paradigms to develop and
deploy
● Removes friction between developers and storage admins
QUESTIONS?
cMotion Demo setup overview

MayaOnline cMotion Demo setup overview

[Diagram: a Jenkins Pod serving the user's CI/CD, initially on GCP Zone US-Central, K8s cluster austin-cicd; a second GCP cluster, Denmark-cicd, in GCP Zone Europe East; and an AWS US East cluster, mule-master. The Jenkins Pod moves between the three clusters over the course of the demo.]

Part 1: Show the CI/CD setup working with Jenkins and GitHub
Part 2: Move the Jenkins pod to the GCP Denmark k8s cluster and show the GitHub CI/CD working
Part 3: Move the Jenkins pod to the AWS k8s cluster and show the GitHub CI/CD working
Part 4: Move the Jenkins pod back to the GCP austin cluster and show the GitHub CI/CD working
AMA:
Ask me anything
Q & A
Container Attached Storage = DAS++

DAS
Benefits:
● Simple
● Ties application to storage
● Predictable for capacity planning
● App deals with resiliency
● Can be faster
Concerns:
● Under-utilized hardware
○ 10% or less utilization
● Wastes data center resources
● Difficult to manage
● Lacks storage features
● Cannot be repurposed - made for one workload
● Does not support mobility of workloads via containers
● Cross cloud impossible

OpenEBS = "CAS"
✓ Simple
✓ No new skills required
✓ Per-microservice storage policy
✓ Data protection & snapshots
✓ Reduces cloud vendor lock-in
✓ Eliminates storage vendor lock-in
✓ Highest possible efficiency
✓ Large & growing OSS community
✓ Natively cross cloud
✓ Uses proven code - ZFS & Linux
✓ Maya -> ML-based analytics & tuning

"YASS": Distributed
Benefits:
● Centralized management
● Greater density and efficiency
● Storage features such as:
○ Data protection
○ Snapshots for versioning
Concerns:
● Additional complexity
● Enormous blast radius
● Expensive
● Requires storage engineering
● Challenged by container dynamism
● No per-microservice storage policy
● I/O blender impairs performance
● Locks customers into a vendor
● Cross cloud impossible
Speaker notes

  1. Ask questions - what good is a plan to you? What are you hoping to get out of this session?
  2. Hyperconverged
  3. Hyperconverged with the CO (container orchestrator):
     ○ smaller blast radius with a micro-services-like storage controller architecture
     ○ seamless management interface, similar to Kubernetes (Kubernetes itself)
     ○ extends the capabilities of the CO with storage management
     ○ benefits of locally attached storage, with high availability provided via synchronous replication
     ○ easy to migrate across nodes, clusters, and infrastructure (no cloud vendor lock-in)
  4. So first, this may seem odd, but I want to explain what cStor is not, but more importantly why not.
  5. Storage veterans: these are features that we believe are needed; not everything on the list is done yet, but it is certainly in the pipes.
  6. Summarize into three for us