1
WHAT’S NEW IN CEPH
PACIFIC
2021.02.25
2
The buzzwords
● “Software defined storage”
● “Unified storage system”
● “Scalable distributed storage”
● “The future of storage”
● “The Linux of storage”
WHAT IS CEPH?
The substance
● Ceph is open source software
● Runs on commodity hardware
○ Commodity servers
○ IP networks
○ HDDs, SSDs, NVMe, NV-DIMMs, ...
● A single cluster can serve object,
block, and file workloads
3
● Freedom to use (free as in beer)
● Freedom to introspect, modify,
and share (free as in speech)
● Freedom from vendor lock-in
● Freedom to innovate
CEPH IS FREE AND OPEN SOURCE
4
● Reliable storage service out of unreliable components
○ No single point of failure
○ Data durability via replication or erasure coding
○ No interruption of service from rolling upgrades, online expansion, etc.
● Favor consistency and correctness over performance
CEPH IS RELIABLE
5
● Ceph is elastic storage infrastructure
○ Storage cluster may grow or shrink
○ Add or remove hardware while system is
online and under load
● Scale up with bigger, faster hardware
● Scale out within a single cluster for
capacity and performance
● Federate multiple clusters across
sites with asynchronous replication
and disaster recovery capabilities
CEPH IS SCALABLE
6
CEPH IS A UNIFIED STORAGE SYSTEM
RGW
S3 and Swift
object storage
LIBRADOS
Low-level storage API
RADOS
Reliable, elastic, distributed storage layer with
replication and erasure coding
RBD
Virtual block device
CEPHFS
Distributed network
file system
OBJECT BLOCK FILE
7
RELEASE SCHEDULE
Octopus
Mar 2020
14.2.z
Nautilus
Mar 2019
WE ARE
HERE
15.2.z
16.2.z
Pacific
Mar 2021
17.2.z
Quincy
Mar 2022
● Stable, named release every 12 months
● Backports for 2 releases
○ Nautilus reaches EOL shortly after Pacific is released
● Upgrade up to 2 releases at a time
○ Nautilus → Pacific, Octopus → Quincy
8
Usability
Performance
Ecosystem
Multi-site
Quality
FIVE THEMES
9
New Features
● Automated upgrade from Octopus
○ (for clusters deployed with cephadm)
● Automated log-in to private registries
● iSCSI and NFS are now stable
● Automated HA for RGW
○ haproxy and keepalived
● Host maintenance mode
● cephadm exporter/agent for increased
performance/scalability
Robustness
● Lots of small usability improvements
● Lots of bug fixes
○ Backported into Octopus already
● Ongoing cleanup of docs.ceph.com
○ Removed ceph-deploy
CEPHADM
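For example, a minimal sketch of how the automated upgrade and host maintenance mode are driven from the cephadm CLI (the target version and hostname below are placeholders):

# Start a rolling upgrade of a cephadm-managed cluster to Pacific
$ ceph orch upgrade start --ceph-version 16.2.0
$ ceph orch upgrade status

# Put a host into maintenance mode, then bring it back
$ ceph orch host maintenance enter host1
$ ceph orch host maintenance exit host1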
10
● Robust and responsive management GUI for cluster operations
○ All core Ceph services (object, block, file) and extensions (iSCSI, NFS Ganesha)
○ Monitoring, metrics, management
● Full OSD management
○ Bulk creation with DriveGroups (filter by host, device properties: size/type/model)
○ Disk replacement and SMART diagnostics
● Multisite capabilities
○ RBD mirroring
○ RGW multisite sync monitoring
● Orchestrator/cephadm integration
● Official Management REST API for Ceph
○ Stable, versioned, and fully documented
● Production-ready security
○ RBAC, account policies (including account lock-out), secure cookies, sanitized logs, …
DASHBOARD
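A minimal sketch of using the documented REST API: the endpoint paths and versioned Accept header follow the published Dashboard API documentation, while the host, port, credentials, and token value are placeholders.

# Obtain a token from the dashboard REST API
$ curl -k -X POST https://ceph-mgr:8443/api/auth \
    -H 'Content-Type: application/json' \
    -d '{"username": "admin", "password": "secret"}'

# Use the returned token as a Bearer token on subsequent, versioned requests
$ curl -k -H "Authorization: Bearer <token>" \
    -H 'Accept: application/vnd.ceph.api.v1.0+json' \
    https://ceph-mgr:8443/api/health/minimal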
11
● Improved hands-off defaults
○ Upmap balancer on by default
○ PG autoscaler has improved out-of-the-box experience
● Automatically detect and report daemon version mismatches
○ Associated health alert
○ Can be muted during upgrades and on demand
● Ability to cancel ongoing scrubs
● ceph -s simplified
○ Recovery progress shown as one progress bar - use ‘ceph progress’ to see more
● Framework for distributed tracing in the OSD (work in progress)
○ OpenTracing tracepoints in the OSD I/O path
○ Can be collected and viewed via Jaeger's web UI
○ To help with end-to-end performance analysis
RADOS USABILITY
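A short sketch of the operator-facing commands mentioned above; the DAEMON_OLD_VERSION health code name is an assumption based on the Pacific health checks.

# Mute the daemon version mismatch alert for the duration of an upgrade
$ ceph health mute DAEMON_OLD_VERSION 1w

# Show detailed background task progress (ceph -s now shows a single bar)
$ ceph progress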
12
● MultiFS is marked stable!
○ Automated file system creation: use ‘ceph fs volume create NAME’
○ MDS automatically deployed with cephadm
● MDS autoscaler (start/stop MDS based on file system max_mds, standby count)
● cephfs-top (preview)
○ See client sessions and performance of the file system
● Continued improvements to cephfs-shell
● Scheduled snapshots via new snap_schedule mgr module.
● First class NFS gateway support
○ active/active configurations
○ automatically deployed via the Ceph orchestrator (Rook and cephadm)
● MDS-side encrypted file support (kernel-side development on-going)
CEPHFS USABILITY
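A hedged sketch of the volume and snapshot-scheduling commands; the schedule syntax follows the snap_schedule module documentation, and file system names and paths are placeholders.

# Create an additional file system; MDS daemons are deployed automatically
$ ceph fs volume create myfs2

# Enable the snap_schedule module and take hourly snapshots of the root
$ ceph mgr module enable snap_schedule
$ ceph fs snap-schedule add / 1h
$ ceph fs snap-schedule status /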
13
RBD
● “Instant” clone/recover from external
(file/HTTP/S3) data source
● Built-in support for LUKS1/LUKS2
encryption
● Native Windows driver
○ Signed, prebuilt driver available soon
● Restartable rbd-nbd daemon support
OTHER FEATURES AND USABILITY
RGW
● S3Select MVP (CSV-only)
● Lua scripting, RGW request path
● D3N (*)
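A hedged sketch of the new built-in LUKS encryption for RBD: the pool, image, and passphrase file are placeholders, and the exact map options follow the RBD encryption documentation (treat them as assumptions).

# Format an existing image with LUKS2 encryption using a passphrase file
$ rbd encryption format mypool/myimage luks2 passphrase.bin

# Map the encrypted image through rbd-nbd, supplying the passphrase
$ rbd device map -t nbd \
    -o encryption-format=luks2,encryption-passphrase-file=passphrase.bin \
    mypool/myimage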
14
Usability
Performance
Ecosystem
Multi-site
Quality
FIVE THEMES
15
● Improved PG deletion performance
● More controlled osdmap trimming in the monitor
● Msgr2.1
○ New wire format for msgr2 (both crc and secure modes)
● More efficient manager modules
○ Ability to turn off progress module
○ Efficient use of large C++ structures in the codebase
● Monitor/display SSD wear levels
○ ‘ceph device ls’ output
RADOS ROBUSTNESS
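Device health and wear information can be inspected with the device commands referenced above; the device ID below is a placeholder.

# List known devices, including SSD wear level where reported
$ ceph device ls

# Dump raw SMART/health metrics for a specific device
$ ceph device get-health-metrics <devid>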
16
● Feature bit support for turning on/off required file system features
○ Clients not supporting features will be rejected
● Multiple MDS FS scrub (online integrity check)
● Kernel client (and mount.ceph) support for msgr2[.1]
○ Kernel mount option -o ms_mode=crc|secure|prefer-crc|prefer-secure
● Support for recovering mounts from blocklisting
○ Kernel reconnects with -o recover_session=clean
○ ceph-fuse reconnects with --client_reconnect_stale=1; page cache should be disabled
● Improved test coverage (doubled test matrix, 2500 -> 5000)
CEPHFS ROBUSTNESS
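A sketch of the kernel mount options listed above; the monitor address, client name, and mount point are placeholders, and credentials are assumed to come from the local keyring.

# Mount over msgr2 (secure mode preferred) and recover cleanly after blocklisting
$ mount -t ceph 192.168.1.10:3300:/ /mnt/cephfs \
    -o name=myuser,ms_mode=prefer-secure,recover_session=clean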
17
TELEMETRY AND CRASH REPORTS
● Public dashboards!
○ https://telemetry-public.ceph.com/
○ Clusters, devices
● Opt-in
○ Will require re-opt-in if telemetry content
is expanded in the future
○ Explicitly acknowledge data sharing
license
● Telemetry channels
○ basic - cluster size, version, etc.
○ crash - anonymized crash metadata
○ device - device health (SMART) data
○ ident - contact info (off by default!)
● Initial focus on crash reports
○ Integration with bug tracker
○ Daily reports on top crashes in wild
○ Fancy (internal) dashboard
● Extensive device dashboard
○ See which HDD and SSD models Ceph
users are deploying
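Opting in is a single command; a sketch (the license identifier and channel config key follow the telemetry module documentation):

# Preview exactly what would be reported, then opt in
$ ceph telemetry show
$ ceph telemetry on --license sharing-1-0

# Enable or disable individual channels, e.g. device health data
$ ceph config set mgr mgr/telemetry/channel_device true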
18
Usability
Performance
Ecosystem
Multi-site
Quality
FIVE THEMES
19
RADOS: BLUESTORE
● RocksDB sharding
○ Reduced disk space requirements
● Hybrid allocator
○ Lower memory use and disk fragmentation
● Better space utilization for small objects
○ 4K min_alloc_size for SSDs and HDDs
● More efficient caching
○ Better use of available memory
● Finer-grained memory tracking
○ Improved accounting of current usage
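The smaller allocation unit is governed by the bluestore_min_alloc_size options and only applies to newly created OSDs; a hedged sketch (Pacific already defaults to 4K, shown here only to illustrate the knob):

# New OSDs will use a 4K allocation unit; existing OSDs keep their value
$ ceph config set osd bluestore_min_alloc_size_hdd 4096
$ ceph config set osd bluestore_min_alloc_size_ssd 4096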
20
● Phase 1: QoS between recovery and client I/O using mclock scheduler
○ Different profiles to prioritize client I/O, recovery and background tasks
■ Config sets to hide complexity of tuning dmclock and recovery parameters
○ Better default values for Ceph parameters to get improved performance out of the system
based on extensive testing on SSDs
○ Pacific!
● Phase 2: Quincy
○ Optimize performance for HDDs
○ Account for background activities like scrubbing, PG deletion etc.
○ Further testing across different types of workloads
● Phase 3: client vs client QoS
RADOS: QoS
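A hedged sketch of enabling the mclock scheduler and selecting a QoS profile; the profile names follow the mclock documentation.

# Switch the OSD op queue to the mclock scheduler and pick a profile
$ ceph config set osd osd_op_queue mclock_scheduler
$ ceph config set osd osd_mclock_profile high_client_ops
# other profiles: balanced, high_recovery_ops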
21
● High-performance rewrite of the OSD
● Recovery/backfill implemented
● Scrub state machine added to lay ground for scrub implementation in the
crimson osd
● Initial prototype of SeaStore in place
○ Targets both ZNS (zone-based) and traditional SSDs
○ Onode Tree implementation
○ Omap
○ LBA mappings
● Ability to run simple RBD workloads today
● Compatibility layer to run legacy BlueStore code
CRIMSON PROJECT
22
● Ephemeral pinning (policy based subtree pinning)
○ Distributed pins automatically shard sub-directories (think: /home)
○ Random pins shard descendant directories probabilistically
● Improved capability/cache management by MDS for large clusters
○ Cap recall defaults improved based on larger production clusters (CERN)
○ Capability acquisition throttling for some client workloads
● Asynchronous unlink/create (partial MDS support since Octopus)
○ Miscellaneous fixes and added testing
○ Kernel v5.7 and downstream in RHEL 8 / CentOS 8 (Stream)
○ libcephfs/ceph-fuse support in-progress
CEPHFS PERFORMANCE
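Ephemeral pins are set through extended attributes on directories; a sketch, with the mount paths as placeholders.

# Distribute /home's immediate children across the active MDS ranks
$ setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home

# Probabilistically pin ~10% of descendant directories
$ setfattr -n ceph.dir.pin.random -v 0.1 /mnt/cephfs/tmp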
23
RGW
● Avoid omap where unnecessary
○ FIFO queues for garbage collection
○ FIFO queues for data sync log
● Negative caching for bucket metadata
○ significant reduction in request latency for
many workloads
● Sync process performance improvements
○ Better state tracking and performance
when bucket sharding is enabled
● nginx authenticated HTTP front-cache
○ dramatically accelerate read-mostly
workloads
MISC PERFORMANCE
RBD
● librbd migration to boost::asio reactor
○ Event driven; uses neorados
○ May eventually allow tighter integration
with SPDK
24
Usability
Performance
Ecosystem
Multi-site
Quality
FIVE THEMES
25
● Replication targets (remote clusters) configured on any directory
● New cephfs-mirror daemon to mirror data
○ Managed by Rook or cephadm
● Snapshot-based
○ When snapshot is created on source cluster, it is replicated to remote cluster
● Initial implementation in Pacific
○ Single daemon
○ Inefficient incremental updates
○ Improvements will be backported
CEPHFS: SNAPSHOT-BASED MIRRORING
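A hedged sketch of configuring snapshot-based mirroring from the CLI; the file system, peer, and path names are placeholders, and the command names follow the mirroring module documentation.

# On the source cluster: enable the mirroring module and mirror a directory
$ ceph mgr module enable mirroring
$ ceph fs snapshot mirror enable myfs
$ ceph fs snapshot mirror peer_add myfs client.mirror_remote@remote-cluster myfs
$ ceph fs snapshot mirror add myfs /projects/critical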
26
● Current multi-site supports
○ Federate multiple sites
○ Global bucket/user namespace
○ Async data replication at site/zone granularity
○ Bucket granularity replication
● Pacific adds:
○ Testing and QA to move bucket granularity replication out of experimental status
○ Foundation to support bucket resharding in multi-site environment
● Large-scale refactoring
○ Extensive multisite roadmap… lots of goodness should land in Quincy
RGW: PER-BUCKET REPLICATION
27
Usability
Performance
Ecosystem
Multi-site
Quality
FIVE THEMES
28
ROOK
● Stretch clusters
○ Configure storage in two datacenters, with a mon in a third location with higher latency
○ 5 mons
○ Pools with replication 4 (2 replicas in each datacenter)
● CephFS mirroring
○ Manage CephFS mirroring via CRDs
○ New, simpler snapshot-based mirroring
29
CSI / OPENSTACK MANILA
● RWX/ROX -- CephFS via mgr/volumes
○ New PV snapshot stabilization
■ Limits in place for snapshots on subvolumes.
○ New authorization API support (Manila)
○ New ephemeral pinning for volumes
● RWO/RWX/ROX -- RBD
○ dm-crypt encryption with Vault key management
○ PV snapshots and clones
○ Topology-aware provisioning
○ Integration with snapshot-based mirroring in-progress
30
● Removing instances of racially charged terms
○ blacklist/whitelist
○ master/slave
○ Some librados APIs/CLI affected: deprecated old calls, will remove in future
● Ongoing documentation improvements
● https://ceph.io website redesign
○ Static site generator, built from git (no more WordPress)
○ Should launch this spring
MISC
31
● (Consistent) CI and release builds
○ Thank you to Ampere for donated build hardware
● Container builds
● Limited QA/regression testing coverage
ARM: AARCH64
32
NEXT UP IS
QUINCY
33
● Ceph Developer Summit - March or April
○ Quincy planning
○ Traditional format (scheduled topical sessions, video + chat, recorded)
● Ceph Month - April or May
○ Topic per week (Core, RGW, RBD, CephFS)
○ 2-3 scheduled talks spread over the week
■ Including developers presenting what is new, what’s coming
○ Each talk followed by semi-/un-structured discussion meetup
■ Etherpad agenda
■ Open discussion
■ Opportunity for new/existing users/operators to compare notes
○ Lightning talks
○ Fully virtual (video + chat, recorded)
VIRTUAL EVENTS THIS SPRING
34
● https://ceph.io/
● Twitter: @ceph
● Docs: http://docs.ceph.com/
● Mailing lists: http://lists.ceph.io/
○ ceph-announce@ceph.io → announcements
○ ceph-users@ceph.io → user discussion
○ dev@ceph.io → developer discussion
● IRC: irc.oftc.net
○ #ceph, #ceph-devel
● GitHub: https://github.com/ceph/
● YouTube ‘Ceph’ channel
FOR MORE INFORMATION
35
36
● RBD NVMeoF target support
● Cephadm resource-aware scheduling, CIFS gateway
QUINCY SNEAK PEEK
37
Usability
Performance
Ecosystem
Multi-site
Quality
FIVE THEMES
38
● cephadm improvements
○ CIFS/SMB support
○ Resource-aware service placement (memory, CPU)
○ Moving services away from failed hosts
○ Improved scalability/responsiveness for large clusters
● Rook improvements
○ Better integration with orchestrator API
○ Parity with cephadm
ORCHESTRATION
39
RGW
● Deduplicated storage
CephFS
● ‘fs top’
● NFS and SMB support via orchestrator
MISC USABILITY AND FEATURES
RBD
● Expose snapshots via RGW (object)
● Expose RBD via NVMeOF target gateway
● Improved rbd-nbd support
○ Expose kernel block device with full librbd
feature set
○ Improved integration with ceph-csi for
Kubernetes environments
40
● Multi-site monitoring & management (RBD, RGW, CephFS) and multi-cluster
support (single dashboard managing multiple Ceph clusters).
● RGW advanced features management (bucket policies, lifecycle, encryption,
notifications…)
● High-level user workflows:
○ Cluster installation wizard
○ Cluster upgrades
● Improved observability
○ Log aggregation
DASHBOARD
41
Usability
Performance
Ecosystem
Multi-site
Quality
FIVE THEMES
42
RADOS
● Enable ‘upmap’ balancer by default
○ More precise than ‘crush-compat’ mode
○ Hands-off by default
○ Improve balancing of ‘primary’ role
● Dynamically adjust recovery priority
based on load
● Automatic periodic security key rotation
● Distributed tracing framework
○ For end-to-end performance analysis
STABILITY AND ROBUSTNESS
CephFS
● MultiMDS metadata balancing
improvements
● Minor version upgrade improvements
43
● Work continues on backend analysis of telemetry data
○ Tools for developers to use crash reports to identify and prioritize bug fixes
● Adjustments in collected data
○ Adjust what data is collected for Pacific
○ Periodic backport to Octopus (requires re-opt-in)
○ e.g., which orchestrator module is in use (if any)
● Drive failure prediction
○ Building improved models for predictive drive failures
○ Expanding data set via Ceph collector, standalone collector, and other data sources
TELEMETRY
44
Usability
Performance
Ecosystem
Multi-site
Quality
FIVE THEMES
45
CephFS
● Async metadata operations
○ Support in both libcephfs and the kernel client
○ Async rmdir/mkdir
● Ceph-fuse performance
○ Take advantage of recent libfuse changes
MISC PERFORMANCE
RGW
● Data sync optimizations, sync fairness
● Sync metadata improvements
○ omap -> cls_fifo
○ Bucket index, metadata+data logs
● Ongoing async refactoring of RGW
○ Based on boost::asio
46
● Sharded RocksDB
○ Improve compaction performance
○ Reduce disk space requirements
● In-memory cache improvements
● SMR
○ Support for host-managed SMR HDDs
○ Targeting cold-storage workloads (e.g., RGW) only
RADOS: BLUESTORE
47
PROJECT CRIMSON
What
● Rewrite the IO path using Seastar
○ Preallocate cores
○ One thread per core
○ Explicitly shard all data structures
and work over cores
○ No locks and no blocking
○ Message passing between cores
○ Polling for IO
● DPDK, SPDK
○ Kernel bypass for network and
storage IO
● Goal: Working prototype for Pacific
Why
● Not just about how many IOPS we do…
● More about IOPS per CPU core
● Current Ceph is based on traditional
multi-threaded programming model
● Context switching is too expensive when
storage is almost as fast as memory
● New hardware devices coming
○ DIMM form-factor persistent memory
○ ZNS - zone-based SSDs
48
Usability
Performance
Ecosystem
Multi-site
Quality
FIVE THEMES
49
CEPHFS MULTI-SITE REPLICATION
● Automate periodic snapshot + sync to remote cluster
○ Arbitrary source tree, destination in remote cluster
○ Sync snapshots via rsync
○ May support non-CephFS targets
● Discussing more sophisticated models
○ Bidirectional, loosely/eventually consistent sync
○ Simple conflict resolution behavior?
50
● Nodes scale up (faster, bigger)
● Clusters scale out
○ Bigger clusters within a site
● Organizations scale globally
○ Multiple sites, data centers
○ Multiple public and private clouds
○ Multiple units within an organization
MOTIVATION, OBJECT
● Universal, global connectivity
○ Access your data from anywhere
● API consistency
○ Write apps to a single object API (e.g., S3)
regardless of which site or cloud they are
deployed on
● Disaster recovery
○ Replicate object data across sites
○ Synchronously or asynchronously
○ Failover application and reattach
○ Active/passive and active/active
● Migration
○ Migrate data set between sites, tiers
○ While it is being used
● Edge scenarios (caching and buffering)
○ Cache remote bucket locally
○ Buffer new data locally
51
● Project Zipper
○ Internal abstractions to allow alternate
storage backends (e.g., store data in an
external object store)
○ Policy layer based on Lua
○ Initial targets: database and file-based
stores, tiering to cloud (e.g., S3)
● Dynamic reshard vs multisite support
● Sync from external sources
○ AWS
● Lifecycle transition to cloud
RGW MULTISITE FOR QUINCY
52
RBD
● Consistency group support
MISC MULTI-SITE
53
Usability
Performance
Ecosystem
Multi-site
Quality
FIVE THEMES
54
Windows
● Windows port for RBD is underway
● Lightweight kernel pass-through to librbd
● CephFS to follow (based on Dokan)
Performance testing hardware
● Intel test cluster: officianalis
● AMD / Samsung / Mellanox cluster
● High-end ARM-based system?
OTHER ECOSYSTEM EFFORTS
ARM (aarch64)
● Loads of new build and test hardware
arriving in the lab
● CI and release builds for aarch64
IBM Z
● Collaboration with IBM Z team
● Build and test