Ceph Day Beijing - SPDK for Ceph
Ziye Yang, Senior Software Engineer
Notices and Disclaimers
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or
service activation. Learn more at intel.com, or from the OEM or retailer.
No computer system can be absolutely secure.
Software and workloads used in performance tests may have been optimized for performance only on Intel
microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems,
components, software, operations and functions. Any change to any of those factors may cause the results to vary. You
should consult other information and performance tests to assist you in fully evaluating your contemplated purchases,
including the performance of that product when combined with other products. For more complete information visit
http://www.intel.com/performance.
Intel, the Intel logo, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names
and brands may be claimed as the property of others.
© 2017 Intel Corporation.
• SPDK introduction and status update
• Current SPDK support in BlueStore
• Case study: Accelerate iSCSI service exported by Ceph
• SPDK support for Ceph in 2017
• Summary
The Problem: Software is becoming the bottleneck
The Opportunity: Use Intel software ingredients to unlock the potential of new media

[Chart] I/O performance and latency across media: from HDD and SATA NAND SSD to NVMe* NAND SSD and Intel® Optane™ SSD, I/O performance climbs from <500 IO/s through >25,000 IO/s to >400,000 IO/s, while latency drops from >2 ms to <100 µs.
Storage Performance Development Kit
Scalable and Efficient Software Ingredients
• User space, lockless, polled-mode components (see the sketch after this list)
• Up to millions of IOPS per core
• Designed for Intel Optane™ technology latencies
Intel® Platform Storage Reference Architecture
• Optimized for Intel platform characteristics
• Open source building blocks (BSD licensed)
• Available via spdk.io
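To make the "user space, lockless, polled-mode" point concrete, the sketch below issues a single 4 KB read through SPDK's user-space NVMe driver and reaps the completion by polling from the application thread. It is a minimal sketch written against the 17.03-era public NVMe API (spdk_nvme_probe, spdk_nvme_ctrlr_alloc_io_qpair, spdk_nvme_ns_cmd_read, spdk_nvme_qpair_process_completions); the function names are real SPDK APIs, but exact signatures vary slightly between releases and error handling is omitted, so treat it as illustrative rather than as BlueStore's actual NVMEDEVICE code.

    /* Minimal sketch, assuming the 17.03-era SPDK NVMe API. */
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *g_ctrlr;
    static struct spdk_nvme_ns *g_ns;
    static volatile bool g_done;

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        return true;                           /* attach to every local NVMe device found */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        g_ctrlr = ctrlr;
        g_ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);   /* use namespace 1 */
    }

    static void read_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        g_done = true;
    }

    int main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "spdk_poll_sketch";
        spdk_env_init(&opts);                  /* hugepages + user-space PCI, no kernel I/O path */

        spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);

        struct spdk_nvme_qpair *qpair =
            spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
        void *buf = spdk_dma_zmalloc(4096, 4096, NULL);

        /* Submit one 4 KB read of LBA 0: no syscall, no interrupt, no lock. */
        spdk_nvme_ns_cmd_read(g_ns, qpair, buf, 0, 1, read_done, NULL, 0);

        /* Polled mode: the same application thread reaps its own completions. */
        while (!g_done)
            spdk_nvme_qpair_process_completions(qpair, 0);

        printf("read completed\n");
        spdk_dma_free(buf);
        return 0;
    }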
Architecture
  Storage Protocols: iSCSI Target, NVMe-oF* Target, vhost-scsi Target (SCSI), vhost-blk Target, Object
  Storage Services: Block Device Abstraction (BDEV), Blobstore, BlobFS; bdev modules for Ceph RBD, Linux Async IO, Blob, and 3rd-party NVMe devices
  Drivers: NVMe driver for NVMe devices (NVMe* PCIe driver, NVMe-oF* Initiator), Intel® QuickData Technology driver
  Integration: RocksDB, Ceph
  Core: Application Framework
(The original diagram marks each component as Released, Q2'17, or Pathfinding.)
Benefits of using SPDK
SPDK provides more performance from Intel CPUs, non-volatile media, and networking:
• Up to 10X more IOPS/core for NVMe-oF* vs. the Linux kernel
• Up to 8X more IOPS/core for NVMe vs. the Linux kernel
• Up to 350% better tail latency for RocksDB workloads
• Future proofing as NVM technologies increase in performance
• Faster TTM / fewer resources than developing components from scratch
(Performance disclaimer: see the Notices and Disclaimers slide; for more information go to http://www.intel.com/performance)
SPDK Updates: 17.03 Release (Mar 2017)
Blobstore
• Block allocator for applications
• Variable granularity, defaults to 4KB
BlobFS
• Lightweight, non-POSIX filesystem
• Page caching & prefetch
• Initially limited to DB file semantic
requirements (e.g. file name and size)
RocksDB SPDK Environment
• Implement RocksDB using BlobFS
QEMU vhost-scsi Target
• Simplified I/O path to local QEMU
guest VMs with unmodified apps
NVMe over Fabrics Improvements
• Read latency improvement
• NVMe-oF Host (Initiator) zero-copy
• Discovery code simplification
• Quality, performance & hardening fixes
New components: broader set of use cases for SPDK libraries & ingredients
Existing components: feature and hardening improvements
Current status
Fully realizing new media performance requires software optimizations
SPDK positioned to enable developers to realize this performance
SPDK available today via http://spdk.io
Help us build SPDK as an open source community!
Current SPDK support in BlueStore
New features
 Support multiple threads doing I/O on NVMe SSDs via the SPDK user space NVMe driver
 Support running SPDK I/O threads on designated CPU cores via the configuration file (see the sketch after this slide)
Upgrades in Ceph (now 17.03)
 Upgraded SPDK to 16.11 in Dec 2016
 Upgraded SPDK to 17.03 in April 2017
Stability
 Fixed several compilation issues and runtime bugs encountered while using SPDK
In total, 16 SPDK-related patches have been merged into BlueStore (mainly in the NVMEDEVICE module)
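For illustration, a BlueStore OSD is pointed at the SPDK NVMe driver through ceph.conf rather than a kernel block device. The snippet below is a hedged sketch assuming the option names used around that time frame (bluestore_block_path with an spdk: prefix selecting the device by serial number, plus bluestore_spdk_mem and bluestore_spdk_coremask for hugepage memory and the designated I/O cores); the serial number is a placeholder and this is not a configuration verified against this talk.

    [osd]
        osd objectstore = bluestore
        # Select the NVMe SSD by serial number via the SPDK user-space driver (placeholder serial)
        bluestore_block_path = spdk:55cd2e404bd73932
        # Hugepage memory (MB) reserved for the SPDK environment
        bluestore_spdk_mem = 1024
        # Run SPDK I/O threads on designated CPU cores (bitmask)
        bluestore_spdk_coremask = 0x3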
(From iStaury’s talk in SPDK PRC meetup 2016)
Block service exported by Ceph via iSCSI protocol
 Cloud service providers that provision VM services can use iSCSI.
 If Ceph could export block service with good performance, it would be easy to glue those providers to a Ceph cluster solution.

[Diagram] On the client, an application sits on top of multipath (dm-1 over sdx and sdy) and an iSCSI initiator. Two iSCSI gateway nodes each run an iSCSI target backed by RBD. Behind the gateways, the Ceph cluster hosts the OSDs.
iSCSI + RBD Gateway
Ceph server
 CPU: Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz
 Four Intel P3700 SSDs
 One OSD on each SSD, 4 OSDs in total
 4 pools with PG number 512, one 10G image in each pool
iSCSI target server (librbd+SPDK / librbd+tgt)
 CPU: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
 Only one core enabled
iSCSI initiator
 CPU: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz

[Diagram] iSCSI initiator -> iSCSI target server (iSCSI target + librbd) -> Ceph server (OSD0, OSD1, OSD2, OSD3)
iSCSI + RBD Gateway
One CPU Core (IOPS):

  iSCSI type + op                 1 FIO + 1 img   2 FIO + 2 img   3 FIO + 3 img
  TGT + 4k_randread               10K             20K             20K
  SPDK iSCSI tgt + 4k_randread    20K             24K             28K
  TGT + 4k_randwrite              6.5K            9.5K            18K
  SPDK iSCSI tgt + 4k_randwrite   14K             19K             24K

SPDK iSCSI tgt / TGT ratio: 140% for 4k_randread, 133% for 4k_randwrite.
iSCSI + RBD Gateway
Two CPU Cores (IOPS):

  iSCSI type + op                 1 FIO + 1 img   2 FIO + 2 img   3 FIO + 3 img   4 FIO + 4 img
  TGT + 4k_randread               12K             24K             26K             26K
  SPDK iSCSI tgt + 4k_randread    37K             47K             47K             47K
  TGT + 4k_randwrite              9.5K            13.5K           19K             22K
  SPDK iSCSI tgt + 4k_randwrite   16K             24K             25K             27K

SPDK iSCSI tgt / TGT ratio: 181% for 4k_randread, 123% for 4k_randwrite.
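The exact fio job parameters behind these tables are not given in the talk; as a purely illustrative invocation for the 4k_randread case against one iSCSI-attached image (assuming the LUN appears as /dev/sdx on the initiator and assuming libaio with queue depth 32), one might run:

    fio --name=4k_randread --filename=/dev/sdx --rw=randread --bs=4k \
        --ioengine=libaio --direct=1 --iodepth=32 --numjobs=1 \
        --time_based --runtime=60 --group_reporting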
Reading Comparison
4K_randread (IOPS, K):

              One core: TGT   One core: SPDK-iSCSI   Two cores: TGT   Two cores: SPDK-iSCSI
  1 stream    10              20                     12               37
  2 streams   20              24                     24               47
  3 streams   20              28                     26               47
Writing Comparison
4K_randwrite (IOPS, K):

              One core: TGT   One core: SPDK-iSCSI   Two cores: TGT   Two cores: SPDK-iSCSI
  1 stream    6.5             14                     9.5              16
  2 streams   9.5             19                     13.5             24
  3 streams   18              24                     19               25
  4 streams   -               -                      22               27
SPDK support for Ceph in 2017
To make SPDK really useful in Ceph, we will continue the following work with partners:
 Continue stability maintenance
– Version upgrades, bug fixing at compile time and runtime.
 Performance enhancement
– Continue optimizing the NVMEDEVICE module according to customer and partner feedback.
 New feature development:
– Occasionally pick up common requirements/feedback from the community and upstream those features into the NVMEDEVICE module.
Proposals/opportunities for better leveraging SPDK
Multiple OSD support on the same NVMe device by using SPDK
 Leverage SPDK's multiple-process feature in the user space NVMe driver (see the sketch after this list).
 Risk: the same as with the kernel driver, i.e., all OSDs on the device fail if the device fails.
Enhance cache support in NVMEDEVICE by using SPDK
 Need a better cache/buffer strategy for read/write performance improvement.
Optimize RocksDB usage in BlueStore with SPDK's BlobFS/Blobstore
 Make RocksDB use SPDK's BlobFS/Blobstore instead of a kernel file system for metadata management.
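The multiple-OSD idea relies on SPDK's shared-memory multi-process mode: each OSD process initializes the SPDK environment with the same shm_id so all of them can drive the same NVMe controller with their own queue pairs. A minimal sketch, assuming the same 17.03-era env API as earlier; the process name is hypothetical and the actual OSD integration would be more involved.

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        /* Each OSD-like process runs this with an identical shm_id. */
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name   = "osd_shared_nvme";   /* hypothetical process name */
        opts.shm_id = 1;                   /* same ID in every process sharing the SSD */
        spdk_env_init(&opts);
        /* From here, spdk_nvme_probe() attaches this process to the controller that the
         * primary process already claimed; each process then allocates its own I/O
         * queue pairs, as in the earlier polled-mode sketch. */
        return 0;
    }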
Leverage SPDK to accelerate the block service
exported by Ceph
Optimization in front of Ceph
 Use optimized Block service daemon, e.g., SPDK iSCSI target or NVMe-oF target
 Introduce Cache policy in Block service daemon.
Store Optimization inside Ceph
 Use SPDK’s user space NVMe driver instead of Kernel NVMe driver (Already have)
 May replace “BlueRocksEnv + Bluefs” with “BlobfsENV + Blobfs/Blobstore”.
Accelerate block service exported by Ceph via SPDK

[Diagram] In front of the Ceph RBD service, the block service is exported through an SPDK-optimized iSCSI target or NVMe-oF target, an SPDK Ceph RBD bdev module (leveraging librbd/librados), and an SPDK cache module; the existing Ceph side exports the block service over FileStore, BlueStore, or KVStore. Inside BlueStore, the current metadata path (RocksDB -> BlueRocksEnv -> BlueFS -> kernel or SPDK driver -> NVMe device) could be replaced by RocksDB -> SPDK BlobfsEnv -> SPDK BlobFS/Blobstore -> SPDK NVMe driver -> NVMe device. The legend distinguishes existing SPDK apps/modules, existing Ceph services/components, and optimized modules to be developed (TBD in the SPDK roadmap). Even replace RocksDB?

A sketch of the librbd/librados calls that the SPDK Ceph RBD bdev module would wrap follows below.
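As referenced above, an SPDK Ceph RBD bdev module is essentially a thin wrapper over librbd/librados that submits asynchronous RBD I/O and lets an SPDK reactor poll for completions. The sketch below shows only the underlying librados/librbd C calls (connect, open an image, issue one async read, poll until complete); it is an illustration of the pattern, not the actual bdev_rbd module code, and the pool and image names are placeholders.

    #include <stdio.h>
    #include <rados/librados.h>
    #include <rbd/librbd.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t ioctx;
        rbd_image_t image;
        rbd_completion_t comp;
        char buf[4096];

        /* Connect to the cluster using the default ceph.conf and client id. */
        rados_create(&cluster, NULL);
        rados_conf_read_file(cluster, NULL);
        rados_connect(cluster);

        /* Open a placeholder pool and image. */
        rados_ioctx_create(cluster, "rbd", &ioctx);
        rbd_open(ioctx, "test_image", &image, NULL);

        /* Asynchronous 4 KB read at offset 0; a bdev module would route the
         * completion back to the SPDK I/O channel instead of busy-polling here. */
        rbd_aio_create_completion(NULL, NULL, &comp);
        rbd_aio_read(image, 0, sizeof(buf), buf, comp);
        while (!rbd_aio_is_complete(comp))
            ;                                   /* poll, as an SPDK reactor would */
        printf("read returned %ld bytes\n", (long)rbd_aio_get_return_value(comp));
        rbd_aio_release(comp);

        rbd_close(image);
        rados_ioctx_destroy(ioctx);
        rados_shutdown(cluster);
        return 0;
    }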
Summary
SPDK proves to be useful for exploring the capability of fast storage devices (e.g., NVMe SSDs).
But it still needs lots of development work to make SPDK useful for BlueStore at a production quality level.
Call for actions:
 Call for code contributions in the SPDK community
 Call for leveraging SPDK for Ceph optimization; welcome to contact the SPDK dev team for help and collaboration.
Vhost-scsi Performance
SPDK provides 1 million IOPS with 1 core and 8x VM performance vs. kernel!

Features and realized benefits:
  High performance storage virtualization -> Increased VM density
  Reduced VM exit -> Reduced tail latencies
System Configuration: Target system: 2x Intel® Xeon® E5-2695v4 (HT off), Intel® Speed Step enabled, Intel® Turbo Boost Technology enabled, 8x 8GB DDR4 2133 MT/s, 1 DIMM per channel, 8x Intel® P3700 NVMe SSD (800GB),
4x per CPU socket, FW 8DV10102, Network: Mellanox* ConnectX-4 100Gb RDMA, direct connection between initiator and target; Initiator OS: CentOS* Linux* 7.2, Linux kernel 4.7.0-rc2, Target OS (SPDK): CentOS Linux 7.2, Linux
kernel 3.10.0-327.el7.x86_64, Target OS (Linux kernel): CentOS Linux 7.2, Linux kernel 4.7.0-rc2 Performance as measured by: fio, 4KB Random Read I/O, 2 RDMA QP per remote SSD, Numjobs=4 per SSD, Queue Depth: 32/job
[Charts] Left: cores used (VM cores vs. I/O processing cores) for QEMU virtio-scsi, kernel vhost-scsi, and SPDK vhost-scsi. Right: I/Os handled per I/O processing core for QEMU virtio-scsi, kernel vhost-scsi, and SPDK vhost-scsi.
Alibaba* Cloud ECS Case Study: Write Performance
Source: http://mt.sohu.com/20170228/n481925423.shtml
* Other names and brands may be claimed as the property of others

Ali Cloud sees 300% improvement in IOPS and latency using SPDK

[Charts] Random write latency (usec) and random write 4K IOPS versus queue depth (1-32), comparing General Virtualization Infrastructure against Ali Cloud High-Performance Storage Infrastructure with SPDK.
Alibaba* Cloud ECS Case Study: MySQL Sysbench
Source: http://mt.sohu.com/20170228/n481925423.shtml
* Other names and brands may be claimed as the property of others

Sysbench Update sees 4.6X QPS at 10% of the latency!

[Charts] MySQL Sysbench latency (ms) and TPS/QPS for Select and Update, comparing General Virtualization Infrastructure against High Performance Virtualization with SPDK.
SPDK Blobstore Vs. Kernel: Key Tail Latency
db_bench 99.99th percentile latency (µs), lower is better:

                                              Insert   Randread   Overwrite   Readwrite
  Kernel (256KB sync)                         366      6444       1675        122500
  SPDK Blobstore (20GB cache + readahead)     444      3607       1200        33052

SPDK Blobstore reduces tail latency by 3.7X (372% on readwrite); chart annotations: 21% (insert), 44% (randread), 28% (overwrite).
SPDK Blobstore Vs. Kernel: Key Transactions per sec
db_bench key transactions (keys per second), higher is better:

                                              Insert    Randread   Overwrite   Readwrite
  Kernel (256KB Sync)                         547046    92582      51421       30273
  SPDK Blobstore (20GB cache + readahead)     1011245   99918      53495       29804

SPDK Blobstore improves insert throughput by 85% (randread +8%, overwrite +4%, readwrite ~0%).