KEEPING OPENSTACK STORAGE TRENDY
WITH CEPH AND CONTAINERS
SAGE WEIL, HAOMAI WANG
OPENSTACK SUMMIT - 2015.05.20
AGENDA
● Motivation
● Block
● File
● Container orchestration
● Summary
MOTIVATION
(Diagram: a web application spread across multiple app servers.)
A CLOUD SMORGASBORD
● Compelling clouds offer options
● Compute
– VM (KVM, Xen, …)
– Containers (lxc, Docker, OpenVZ, ...)
● Storage
– Block (virtual disk)
– File (shared)
– Object (RESTful, …)
– Key/value
– NoSQL
– SQL
WHY CONTAINERS?
Technology
● Performance
– Shared kernel
– Faster boot
– Lower baseline overhead
– Better resource sharing
● Storage
– Shared kernel → efficient IO
– Small image → efficient deployment
Ecosystem
● Emerging container host OSs
– Atomic – http://projectatomic.io
● os-tree (s/rpm/git/)
– CoreOS
● systemd + etcd + fleet
– Snappy Ubuntu
● New app provisioning model
– Small, single-service containers
– Standalone execution environment
● New open container spec: Nulecule
– https://github.com/projectatomic/nulecule
WHY NOT CONTAINERS?
Technology
● Security
– Shared kernel
– Limited isolation
● OS flexibility
– Shared kernel limits OS choices
● Inertia
Ecosystem
● New models don't capture many
legacy services
WHY CEPH?
● All components scale horizontally
● No single point of failure
● Hardware agnostic, commodity hardware
● Self-managing whenever possible
● Open source (LGPL)
● Move beyond legacy approaches
– client/cluster instead of client/server
– avoid ad hoc HA
CEPH COMPONENTS
RGW
A web services gateway
for object storage,
compatible with S3 and
Swift
LIBRADOS
A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP)
RADOS
A software-based, reliable, autonomous, distributed object store comprised of
self-healing, self-managing, intelligent storage nodes and lightweight monitors
RBD
A reliable, fully-distributed
block device with cloud
platform integration
CEPHFS
A distributed file system
with POSIX semantics and
scale-out metadata
management
(Diagram: apps access RADOS via RGW or LIBRADOS; hosts/VMs use RBD; clients use CephFS.)
BLOCK STORAGE
EXISTING BLOCK STORAGE MODEL
(Diagram: a VM with its attached block devices.)
● VMs are the unit of cloud compute
● Block devices are the unit of VM storage
– ephemeral: not redundant, discarded when VM dies
– persistent volumes: durable, (re)attached to any VM
● Block devices are single-user
● For shared storage,
– use objects (e.g., Swift or S3)
– use a database (e.g., Trove)
– ...
KVM + LIBRBD.SO
● Model
– Nova → libvirt → KVM → librbd.so
– Cinder → rbd.py → librbd.so
– Glance → rbd.py → librbd.so
● Pros
– proven
– decent performance
– good security
● Cons
– performance could be better
● Status
– most common deployment model
today (~44% in latest survey)
(Diagram: Nova and Cinder drive QEMU/KVM with librbd against the RADOS cluster.)
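As a concrete sketch of the Cinder side of this model: the RBD backend is a few lines of cinder.conf (the pool, user, and secret UUID below are illustrative placeholders, not from this deck):

    # Illustrative cinder.conf backend section for the RBD driver
    sudo tee -a /etc/cinder/cinder.conf <<'EOF'
    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
    EOF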
MULTIPLE CEPH DRIVERS
● librbd.so
– qemu-kvm
– rbd-fuse (experimental)
● rbd.ko (Linux kernel)
– /dev/rbd*
– stable and well-supported on modern kernels and distros
– some feature gap
● no client-side caching
● no “fancy striping”
– performance delta
● more efficient → more IOPS
● no client-side cache → higher latency for some workloads
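For illustration, the kernel-client workflow is just a few commands (pool and image names are placeholders; image features the kernel client lacks must be disabled or the map step fails):

    rbd create --size 10240 rbd/test     # 10 GiB image in pool 'rbd'
    sudo rbd map rbd/test                # device appears as /dev/rbd0
    sudo mkfs.ext4 /dev/rbd0
    sudo mount /dev/rbd0 /mnt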
LXC + CEPH.KO
● The model
– libvirt-based lxc containers
– map kernel RBD on host
– pass host device to libvirt, container
● Pros
– fast and efficient
– implement existing Nova API
● Cons
– weaker security than VM
● Status
– lxc is maintained
– lxc is less widely used
– no prototype
(Diagram: Nova drives a container on a Linux host; the host maps rbd.ko against the RADOS cluster.)
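A sketch of the pass-through step, assuming the image is already mapped on the host as /dev/rbd0; libvirt's LXC driver accepts a storage hostdev, though hot-attach support varies by libvirt version:

    # Describe the mapped RBD device as a storage hostdev (placeholder names)
    cat > rbd-dev.xml <<'EOF'
    <hostdev mode='capabilities' type='storage'>
      <source>
        <block>/dev/rbd0</block>
      </source>
    </hostdev>
    EOF
    virsh -c lxc:/// attach-device mycontainer rbd-dev.xml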
NOVA-DOCKER + CEPH.KO
● The model
– docker container as mini-host
– map kernel RBD on host
– pass RBD device to container, or
– mount RBD, bind dir to container
● Pros
– buzzword-compliant
– fast and efficient
● Cons
– different image format
– different app model
– only a subset of docker feature set
● Status
– no prototype
– nova-docker is out of tree
https://wiki.openstack.org/wiki/Docker
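Either variant is a one-liner once the host has mapped the image (image and path names are placeholders):

    sudo rbd map volumes/myvol                    # e.g. /dev/rbd0
    # option 1: pass the block device through
    docker run --device /dev/rbd0:/dev/rbd0 myimage
    # option 2: mount on the host, bind the directory in
    sudo mount /dev/rbd0 /var/lib/vols/myvol
    docker run -v /var/lib/vols/myvol:/data myimage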
IRONIC + CEPH.KO
● The model
– bare metal provisioning
– map kernel RBD directly from guest image
● Pros
– fast and efficient
– traditional app deployment model
● Cons
– guest OS must support rbd.ko
– requires agent
– boot-from-volume tricky
● Status
– Cinder and Ironic integration is a hot topic at
summit
● 5:20p Wednesday (cinder)
– no prototype
● References
– https://wiki.openstack.org/wiki/Ironic/blueprints/cinder-integration
(Diagram: the bare-metal Linux host maps rbd.ko directly against the RADOS cluster.)
BLOCK - SUMMARY
● But
– block storage is the same old, boring model
– volumes are only semi-elastic (grow, not shrink; tedious to resize)
– storage is not shared between guests
                      performance  efficiency  VM  client cache  striping  same images?  exists
kvm + librbd.so       best         good        X   X             X         yes           X
lxc + rbd.ko          good         best                                    close
nova-docker + rbd.ko  good         best                                    no
ironic + rbd.ko       good         best                                    close?        planned!
FILE STORAGE
MANILA FILE STORAGE
● Manila manages file volumes
– create/delete, share/unshare
– tenant network connectivity
– snapshot management
● Why file storage?
– familiar POSIX semantics
– fully shared volume – many clients can mount and share data
– elastic storage – amount of data can grow/shrink without explicit
provisioning
MANILA CAVEATS
● Last mile problem
– must connect storage to guest network
– somewhat limited options (focus on Neutron)
● Mount problem
– Manila makes it possible for guest to mount
– guest is responsible for actual mount
– ongoing discussion around a guest agent …
● Current baked-in assumptions about both of these
APPLIANCE DRIVERS
● Appliance drivers
– tell an appliance to export NFS to guests
– map appliance IP into tenant network
(Neutron)
– boring (closed, proprietary, expensive, etc.)
● Status
– several drivers from usual suspects
– security punted to vendor
GANESHA DRIVER
● Model
– service VM running nfs-ganesha server
– mount file system on storage network
– export NFS to tenant network
– map IP into tenant network
● Status
– in-tree, well-supported
(Diagram: Manila drives a Ganesha service VM that exports NFS to tenant KVM guests; the backing filesystem is driver-specific.)
KVM + GANESHA + LIBCEPHFS
● Model
– existing Ganesha driver, backed by
Ganesha's libcephfs FSAL
● Pros
– simple, existing model
– security
● Cons
– extra hop → higher latency
– service VM is a single point of failure
– service VM consumes resources
● Status
– Manila Ganesha driver exists
– untested with CephFS
(Diagram: Manila drives a Ganesha service VM speaking libcephfs to the RADOS cluster; the tenant KVM guest mounts it over NFS via nfs.ko.)
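A sketch of the Ganesha export this implies, using the libcephfs FSAL (export path and ID are placeholders; syntax follows nfs-ganesha's config format):

    sudo tee /etc/ganesha/ganesha.conf <<'EOF'
    EXPORT {
      Export_Id = 1;
      Path = "/";          # path within CephFS
      Pseudo = "/cephfs";  # NFSv4 pseudo-fs path
      Access_Type = RW;
      FSAL {
        Name = CEPH;       # back the export with libcephfs
      }
    }
    EOF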
KVM + CEPH.KO (CEPH-NATIVE)
● Model
– allow tenant access to storage network
– mount CephFS directly from tenant VM
● Pros
– best performance
– access to full CephFS feature set
– simple
● Cons
– guest must have modern distro/kernel
– exposes tenant to Ceph cluster
– must deliver mount secret to client
● Status
– no prototype
– CephFS isolation/security is work-in-progress
(Diagram: the tenant KVM guest mounts CephFS natively via ceph.ko against the RADOS cluster; Manila manages the share.)
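For reference, the direct mount from the tenant VM is a one-liner, assuming the tenant holds a cephx key (monitor address, user, and secret path are placeholders):

    sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=tenant1,secretfile=/etc/ceph/tenant1.secret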
NETWORK-ONLY MODEL IS LIMITING
● Current assumption of NFS or
CIFS sucks
● Always relying on guest mount
support sucks
– mount -t ceph -o what?
● Even assuming storage
connectivity is via the network
sucks
● There are other options!
– KVM virtfs/9p
● fs pass-through to host
● 9p protocol
● virtio for fast data transfer
● upstream; not widely used
– NFS re-export from host
● mount and export fs on host
● private host/guest net
● avoid network hop from NFS
service VM
– containers and 'mount --bind'
NOVA “ATTACH FS” API
● Mount problem is ongoing discussion by Manila team
– discussed this morning
– simple prototype using cloud-init
– Manila agent? leverage Zaqar tenant messaging service?
● A different proposal
– expand Nova to include “attach/detach file system” API
– analogous to current attach/detach volume for block
– each Nova driver may implement function differently
– “plumb” storage to tenant VM or container
● Open question
– Would API do the final “mount” step as well? (I say yes!)
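To make the proposal concrete, a hypothetical CLI mirroring today's volume-attach might look like this (this API does not exist; the names are invented for illustration):

    # hypothetical, by analogy with 'nova volume-attach <server> <volume>'
    nova fs-attach  <server> <manila-share> /mnt/share1
    nova fs-detach  <server> <manila-share>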
KVM + VIRTFS/9P + CEPHFS.KO
● Model
– mount kernel CephFS on host
– pass-through to guest via virtfs/9p
● Pros
– security: tenant remains isolated from
storage net + locked inside a directory
● Cons
– require modern Linux guests
– 9p not supported on some distros
– “virtfs is ~50% slower than a native
mount?”
● Status
– Prototype from Haomai Wang
(Diagram: the host mounts CephFS via ceph.ko and KVM exposes it to the VM over virtfs/9p; Manila and Nova orchestrate.)
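A sketch of the plumbing, assuming the host has CephFS mounted at /mnt/cephfs (the qemu -fsdev/-device flags and 9p mount options are standard; paths and mount tags are placeholders):

    # host: expose a CephFS subtree to the guest over virtio-9p
    qemu-system-x86_64 ... \
        -fsdev local,id=fs0,path=/mnt/cephfs/shares/share1,security_model=mapped \
        -device virtio-9p-pci,fsdev=fs0,mount_tag=share1
    # guest: mount the 9p export
    sudo mount -t 9p -o trans=virtio,version=9p2000.L share1 /mnt/share1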
KVM + NFS + CEPHFS.KO
● Model
– mount kernel CephFS on host
– pass-through to guest via NFS
● Pros
– security: tenant remains isolated
from storage net + locked inside a
directory
– NFS is more standard
● Cons
– NFS has weak caching consistency
– NFS is slower
● Status
– no prototype
(Diagram: the host mounts CephFS via ceph.ko and re-exports it to the VM over NFS; Manila and Nova orchestrate.)
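A sketch of the host-side re-export (addresses and paths are placeholders; re-exporting a network filesystem through knfsd generally needs an explicit fsid):

    # host: mount CephFS, then export the share over NFS
    sudo mount -t ceph 10.0.0.1:6789:/shares/share1 /export/share1
    echo '/export/share1 192.168.122.0/24(rw,fsid=101,no_subtree_check)' | \
        sudo tee -a /etc/exports
    sudo exportfs -ra
    # guest: plain NFS mount over the private host/guest network
    sudo mount -t nfs 192.168.122.1:/export/share1 /mnt/share1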
(LXC, NOVA-DOCKER) + CEPHFS.KO
● Model
– host mounts CephFS directly
– mount --bind share into
container namespace
● Pros
– best performance
– full CephFS semantics
● Cons
– rely on container for security
● Status
– no prototype
(Diagram: the host mounts CephFS via ceph.ko and bind-mounts the share into the container; Manila and Nova orchestrate.)
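The bind step itself is trivial; a sketch for a libvirt lxc rootfs (paths are placeholders):

    sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs
    sudo mount --bind /mnt/cephfs/shares/share1 \
        /var/lib/lxc/guest1/rootfs/mnt/share1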
IRONIC + CEPHFS.KO
● Model
– mount CephFS directly from bare
metal “guest”
● Pros
– best performance
– full feature set
● Cons
– rely on CephFS security
– networking?
– agent to do the mount?
● Status
– no prototype
– no suitable (ironic) agent (yet)
(Diagram: the bare-metal host mounts CephFS directly via ceph.ko; Manila and Nova orchestrate.)
THE MOUNT PROBLEM
● Containers may break the current 'network fs' assumption
– mounting becomes driver-dependent; harder for tenant to do the right thing
● Nova “attach fs” API could provide the needed entry point
– KVM: qemu-guest-agent
– Ironic: no guest agent yet...
– containers (lxc, nova-docker): use mount --bind from host
● Or, make tenant do the final mount?
– Manila API to provide command (template) to perform the mount
● e.g., “mount -t ceph $cephmonip:/manila/$uuid $PATH -o ...”
– Nova lxc and docker
● bind share to a “dummy” device /dev/manila/$uuid
● API mount command is 'mount --bind /dev/manila/$uuid $PATH'
SECURITY: NO FREE LUNCH
● (KVM, Ironic) + ceph.ko
– access to storage network relies on Ceph security
● KVM + (virtfs/9p, NFS) + ceph.ko
– better security, but
– pass-through/proxy limits performance
● (by how much?)
● Containers
– security (vs a VM) is weak at baseline, but
– host performs the mount; tenant locked into their share directory
PERFORMANCE
● 2 nodes
– Intel E5-2660
– 96GB RAM
– 10GbE NIC
● Server
– 3 OSD (Intel S3500)
– 1 MON
– 1 MDS
● Client VMs
– 4 cores
– 2GB RAM
● iozone, 2x available RAM
● CephFS native
– VM ceph.ko → server
● CephFS 9p/virtfs
– VM 9p → host ceph.ko → server
● CephFS NFS
– VM NFS → server ceph.ko → server
SEQUENTIAL
(Chart: sequential iozone results for the three CephFS access paths.)
RANDOM
(Chart: random iozone results for the three CephFS access paths.)
SUMMARY MATRIX
                           performance  consistency  VM  gateway  net hops  security  agent  mount agent  prototype
kvm + ganesha + libcephfs  slower (?)   weak (nfs)   X   X        2         host              X            X
kvm + virtfs + ceph.ko     good         good         X   X        1         host              X            X
kvm + nfs + ceph.ko        good         weak (nfs)   X   X        1         host              X
kvm + ceph.ko              better       best         X            1         ceph              X
lxc + ceph.ko              best         best                      1         ceph
nova-docker + ceph.ko      best         best                      1         ceph                           IBM talk, Thurs 9am
ironic + ceph.ko           best         best                      1         ceph      X       X
CONTAINER ORCHESTRATION
CONTAINERS ARE DIFFERENT
● nova-docker implements a Nova view of a (Docker) container
– treats container like a standalone system
– does not leverage most of what Docker has to offer
– Nova == IaaS abstraction
● Kubernetes is the new hotness
– higher-level orchestration for containers
– draws on years of Google experience running containers at scale
– vibrant open source community
KUBERNETES SHARED STORAGE
● Pure Kubernetes – no OpenStack
● Volume drivers
– Local
● hostPath, emptyDir
– Unshared
● iSCSI, GCEPersistentDisk, Amazon EBS, Ceph RBD – local fs on top of existing device
– Shared
● NFS, GlusterFS, Amazon EFS, CephFS
● Status
– Ceph drivers under review
● Finalizing model for secret storage, cluster parameters (e.g., mon IPs)
– Drivers expect pre-existing volumes
● volumes are recycled after use; a REST API to create/destroy them is still missing
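For flavor, a pod using the RBD volume type might look like the sketch below; the field names follow the driver as proposed upstream (and roughly as it later landed), so treat the exact schema as an assumption:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: rbd-test
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        rbd:
          monitors: ["10.0.0.1:6789"]
          pool: rbd
          image: myvol
          user: admin
          secretRef:
            name: ceph-secret
          fsType: ext4
    EOF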
KUBERNETES ON OPENSTACK
● Provision Nova VMs
– KVM or ironic
– Atomic or CoreOS
● Kubernetes per tenant
● Provision storage devices
– Cinder for volumes
– Manila for shares
● Kubernetes binds into pod/container
● Status
– Prototype Cinder plugin for Kubernetes
https://github.com/spothanis/kubernetes/tree/cinder-vol-plugin
(Diagram: Nova-provisioned KVM instances run a Kube master and Kube nodes hosting nginx/mysql pods; a volume controller binds Cinder volumes and Manila shares into pods.)
WHAT NEXT?
● Ironic agent
– enable Cinder (and Manila?) on bare metal
– Cinder + Ironic
● 5:20p Wednesday (Cinder)
● Expand breadth of Manila drivers
– virtfs/9p, ceph-native, NFS proxy via host, etc.
– the last mile is not always the tenant network!
● Nova “attach fs” API (or equivalent)
– simplify tenant experience
– paper over VM vs container vs bare metal differences
THANK YOU!
Sage Weil
CEPH PRINCIPAL ARCHITECT
Haomai Wang
FREE AGENT
sage@redhat.com
haomaiwang@gmail.com
@liewegas
FOR MORE INFORMATION
● http://ceph.com
● http://github.com/ceph
● http://tracker.ceph.com
● Mailing lists
– ceph-users@ceph.com
– ceph-devel@vger.kernel.org
● irc.oftc.net
– #ceph
– #ceph-devel
● Twitter
– @ceph