Ceph Tech Talks:
CephFS Update
John Spray
john.spray@redhat.com
Feb 2016
2 Ceph Tech Talks: CephFS
Agenda
● Recap: CephFS architecture
● What's new for Jewel?
● Scrub/repair
● Fine-grained authorization
● RADOS namespace support in layouts
● OpenStack Manila
● Experimental multi-filesystem functionality
3 Ceph Tech Talks: CephFS
Ceph architecture
● RADOS: a software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors
● LIBRADOS: a library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP)
● RGW: a web services gateway for object storage, compatible with S3 and Swift (used by apps)
● RBD: a reliable, fully-distributed block device with cloud platform integration (used by hosts/VMs)
● CEPHFS: a distributed file system with POSIX semantics and scale-out metadata management (used by clients)
4 Ceph Tech Talks: CephFS
CephFS
● POSIX interface: drop-in replacement for any local or network filesystem
● Scalable data: files stored directly in RADOS
● Scalable metadata: cluster of metadata servers
● Extra functionality: snapshots, recursive statistics
● Same storage backend as object (RGW) + block
(RBD): no separate silo needed for file
5 Ceph Tech Talks: CephFS
Components
[Diagram: a Linux host runs the CephFS client, which exchanges metadata with the MDS daemons and data with the OSDs; the Ceph server daemons are the Monitors (M), MDSs and OSDs.]
6 Ceph Tech Talks: CephFS
Why build a distributed filesystem?
● Existing filesystem-using workloads aren't going away
● POSIX filesystems are a lingua franca, for administrators as well as applications
● Interoperability with other storage systems in data
lifecycle (e.g. backup, archival)
● On new platforms, container “volumes” are filesystems
● Permissions, directories are actually useful concepts!
7 Ceph Tech Talks: CephFS
Why not build a distributed filesystem?
● Harder to scale than object stores, because entities
(inodes, dentries, dirs) are related to one another, good
locality needed for performance.
● Some filesystem-using applications are gratuitously
inefficient (e.g. redundant “ls -l” calls, using files for
IPC) due to local filesystem latency expectations
● Complexity resulting from stateful clients: taking locks and opening files requires coordination, and clients can interfere with one another's responsiveness.
8 Ceph Tech Talks: CephFS
CephFS in practice
ceph-deploy mds create myserver
ceph osd pool create fs_data <pg_num>
ceph osd pool create fs_metadata <pg_num>
ceph fs new myfs fs_metadata fs_data
mount -t ceph x.x.x.x:6789:/ /mnt/ceph
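● With cephx authentication enabled (the default), the kernel mount also needs credentials; a minimal sketch, where the monitor address and the secret file path are placeholders:
mount -t ceph x.x.x.x:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
ceph-fuse /mnt/ceph    # or use the userspace client, which reads ceph.conf and the keyring itself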
9 Ceph Tech Talks: CephFS
Scrub and repair
10 Ceph Tech Talks: CephFS
Scrub/repair status
● In general, resilience and self-repair are RADOS's job:
all CephFS data & metadata lives in RADOS objects
● CephFS scrub/repair is for handling disasters: serious
software bugs, or permanently lost data in RADOS
● In Jewel, can now handle and recover from many
forms of metadata damage (corruptions, deletions)
● Repair tools require expertise: primarily for use during
(rare) support incidents, not everyday user activity
11 Ceph Tech Talks: CephFS
Scrub/repair: handling damage
● Fine-grained damage status (“damage ls”) instead of
taking whole rank offline
● Detect damage during normal loading of metadata, or during scrub
ceph tell mds.<id> damage ls
ceph tell mds.<id> damage rm <damage_id>
ceph mds repaired 0
● Can repair damaged statistics online: other repairs happen offline (i.e. stop the MDS and write directly to the metadata pool)
12 Ceph Tech Talks: CephFS
Scrub/repair: online scrub commands
● Forward scrub: traversing metadata from root
downwards
ceph daemon mds.<id> scrub_path <path>
ceph daemon mds.<id> scrub_path <path> recursive
ceph daemon mds.<id> scrub_path <path> repair
ceph daemon mds.<id> tag path <path> <tag>
● These commands will give you success or failure info
on completion, and emit cluster log messages about
issues.
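● For example, a hedged recursive check-and-repair of the whole tree via the admin socket (the daemon name, path and tag are placeholders):
ceph daemon mds.a scrub_path / recursive repair    # walk everything under / and repair what it can
ceph daemon mds.a tag path /some/dir mytag         # tag the data objects of files under /some/dir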
13 Ceph Tech Talks: CephFS
Scrub/repair: offline repair commands
● Backward scrub: iterating over all data objects and
trying to relate them back to the metadata
● Potentially long running, but can run workers in parallel
● Find all the objects in files:
● cephfs-data-scan scan_extents
● Find (or insert) all the files into the metadata:
● cephfs-data-scan scan_inodes
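● A hedged sketch of the full offline sequence (the pool name is a placeholder; in Jewel these tools take the data pool as an argument):
cephfs-data-scan scan_extents <data pool>    # pass 1: recover file sizes and layouts from the data objects
cephfs-data-scan scan_inodes <data pool>     # pass 2: recover inodes from backtraces and insert them into the metadata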
14 Ceph Tech Talks: CephFS
Scrub/repair: parallel execution
● New functionality in RADOS to enable iterating over subsets of the overall set of objects in a pool.
● Currently one must coordinate the collection of workers by hand (or with a short shell script, as sketched below)
● Example: invoke worker 3 of 10 like this:
● cephfs-data-scan scan_inodes --worker_n 3 --worker_m 10
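● For example, a minimal shell loop launching all 10 workers in parallel (the data pool name is a placeholder):
for n in $(seq 0 9); do
  cephfs-data-scan scan_inodes --worker_n $n --worker_m 10 <data pool> &
done
wait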
15 Ceph Tech Talks: CephFS
Scrub/repair: caveats
● This is still disaster recovery functionality: don't run
“repair” commands for fun.
● Not multi-MDS aware: commands operate directly on a
single MDS's share of the metadata.
● Not yet auto-run in background like RADOS scrub
16 Ceph Tech Talks: CephFS
Fine-grained authorisation
17 Ceph Tech Talks: CephFS
CephFS authorization
● Clients need to talk to MDS daemons, mons and
OSDs.
● OSD auth caps already let us limit clients to particular data pools, but couldn't control which parts of the filesystem metadata they saw
● New MDS auth caps enable limiting access by path
and uid.
18 Ceph Tech Talks: CephFS
MDS auth caps
● Example: we have a dir `foodir` whose layout is set to pool `foopool`, and we create a client key 'foo' that can only see metadata within that dir and data within that pool.
ceph auth get-or-create client.foo \
  mds 'allow rw path=/foodir' \
  osd 'allow rw pool=foopool' \
  mon 'allow r'
● Client must mount with “-r /foodir” to treat that as its root (it doesn't have the capability to see the real root), e.g. with the sketch below
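● A hedged mount sketch as the restricted client (monitor address, mount points and secret-file path are placeholders):
ceph-fuse -n client.foo -r /foodir /mnt/foo
mount -t ceph x.x.x.x:6789:/foodir /mnt/foo -o name=foo,secretfile=/etc/ceph/client.foo.secret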
19 Ceph Tech Talks: CephFS
RADOS namespaces in file layouts
20 Ceph Tech Talks: CephFS
RADOS namespaces
● Namespaces offer a cheaper way to divide up objects
than pools.
● Pools consume physical resources (i.e. they create
PGs), whereas namespaces are effectively just a prefix
to object names.
● OSD auth caps can be limited by namespaces: when
we need to isolate two clients (e.g. two cephfs clients)
we can give them auth caps that allow access to
different namespaces.
21 Ceph Tech Talks: CephFS
Namespaces in layouts
● Existing fields: pool, stripe_unit, stripe_count,
object_size
● New field: pool_namespace
● setfattr -n ceph.file.layout.pool_namespace -v <ns> <file>
● setfattr -n ceph.dir.layout.pool_namespace -v <ns> <dir>
● As with setting layout.pool, the data gets written there, but the backtrace continues to be written to the default pool (and default namespace). The backtrace is not accessed by the client, so this doesn't affect client-side auth configuration.
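● A hedged sketch of isolating a client with a namespace instead of a dedicated pool (directory, namespace, pool and key names are placeholders; assumes OSD caps accept a namespace= restriction as described above):
setfattr -n ceph.dir.layout.pool_namespace -v bar_ns /mnt/ceph/bardir
ceph auth get-or-create client.bar \
  mds 'allow rw path=/bardir' \
  osd 'allow rw pool=foopool namespace=bar_ns' \
  mon 'allow r'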
22 Ceph Tech Talks: CephFS
OpenStack Manila
23 Ceph Tech Talks: CephFS
Manila
● The OpenStack shared filesystem service
● Manila users request filesystem storage as shares
which are provisioned by drivers
● CephFS driver implements shares as directories:
● Manila expects shares to be size-constrained, so we use CephFS quotas
● Client mount commands include the -r flag to treat the share dir as the root
● Capacity stats reported for that directory using rstats
● Clients restricted to their directory and
pool/namespace using new auth caps
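● Putting those pieces together, a hedged sketch of what the driver effectively does for one share (paths, size and client name are illustrative):
mkdir /mnt/cephfs/volumes/share1                                              # a share is just a directory
setfattr -n ceph.quota.max_bytes -v 10737418240 /mnt/cephfs/volumes/share1    # 10 GiB quota on the share
ceph-fuse -n client.share1 -r /volumes/share1 /mnt/share1                     # tenant mounts the share dir as its root (with a key restricted as on slide 18)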
24 Ceph Tech Talks: CephFS
CephFSVolumeClient
● A new python interface in the Ceph tree, designed for
Manila and similar frameworks.
● Wraps up the directory+auth caps mechanism as a
“volume” concept.
[Diagram: Manila's CephFS driver calls CephFSVolumeClient, which talks to the Ceph cluster over the network via libcephfs and librados. Code lives at github.com/openstack/manila (driver) and github.com/ceph/ceph (CephFSVolumeClient).]
25 Ceph Tech Talks: CephFS
Experimental multi-filesystem functionality
26 Ceph Tech Talks: CephFS
Multiple filesystems
● Historical 1:1 mapping between Ceph cluster (RADOS)
and Ceph filesystem (cluster of MDSs)
● Artificial limitation: no reason we can't have multiple
CephFS filesystems, with multiple MDS clusters, all
backed onto one RADOS cluster.
● Use case ideas:
● Physically isolate workloads on separate MDS clusters
(vs. using dirs within one cluster)
● Disaster recovery: recover into a new filesystem on the same cluster, instead of trying to repair in place
● Resilience: multiple filesystems become separate failure
domains in case of issues.
27 Ceph Tech Talks: CephFS
Multiple filesystems initial implementation
● You can now run “fs new” more than once (with
different pools)
● Old clients get the default filesystem (you can
configure which one that is)
● New userspace client config option to select which filesystem should be mounted
● MDS daemons are all equal: any one may get used for
any filesystem
● Switched off by default: must set a special flag to use
this (like snapshots, inline data)
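● A hedged sketch of trying it out (pool and filesystem names are placeholders; assumes the enable_multiple flag and the client_mds_namespace option, which at this stage takes a filesystem ID rather than a name):
ceph fs flag set enable_multiple true --yes-i-really-mean-it
ceph osd pool create fs2_metadata <pg_num>
ceph osd pool create fs2_data <pg_num>
ceph fs new myfs2 fs2_metadata fs2_data
ceph-fuse --client_mds_namespace=<fs id> /mnt/myfs2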
28 Ceph Tech Talks: CephFS
Multiple filesystems future work
● Enable use of RADOS namespaces (not just separate
pools) for different filesystems to avoid needlessly
creating more pools
● Authorization capabilities to limit MDS and clients to
particular filesystems
● Enable selecting FS in kernel client
● Enable manually configured affinity of MDS daemons
to filesystem(s)
● More user friendly FS selection in userspace client
(filesystem name instead of ID)
29 Ceph Tech Talks: CephFS
Wrap up
30 Ceph Tech Talks: CephFS
Tips for early adopters
http://ceph.com/resources/mailing-list-irc/
http://tracker.ceph.com/projects/ceph/issues
http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/
● Does the most recent development release or kernel
fix your issue?
● What is your configuration? MDS config, Ceph
version, client version, kclient or fuse
● What is your workload?
● Can you reproduce with debug logging enabled?
31 Ceph Tech Talks: CephFS
Questions?