Ceph Benchmarking Tool (CBT)
Kyle Bader
Ceph Tech Talk, May 26, 2016
INTRO TO CBT
• Benchmarking framework written in Python
• Began as an engineering benchmark tool for upstream development
• Adopted for downstream performance and sizing work
• Used by many people in the Ceph community
• Red Hat
• Intel / Samsung / SanDisk
• Quanta QCT / Supermicro / Dell
WHAT IS IT?
CBT PERSONALITIES
HEAD
• CBT checkout
• Key based authentication to all other hosts
• Including itself
• PDSH packages
• Space to store results archives
• YAML testplans
CBT PERSONALITIES
CLIENT
• Generates load against the SUT (system under test)
• Ceph admin keyring readable by cbt user
• Needs loadgen tools installed
• FIO
• COSbench
• Should be a VM for kvmrbdfio
• Can be containerized (good for rbdfio)
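A quick pre-flight check along these lines can confirm the prerequisites above on each client (a sketch; the cbt user name and keyring path are assumptions):

# Verify a client can act as a CBT load generator.
sudo -u cbt test -r /etc/ceph/ceph.client.admin.keyring && echo "keyring OK"
fio --version          # load generator installed?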
CBT PERSONALITIES
MON
• Nodes to set up monitors on
OSD
• Nodes to set up OSDs
• RADOS Bench
• FIO with RBD engine
• FIO on KRBD on EXT4
• FIO on KVM (vdb) on EXT4
• COSBench for S3/Swift against RGW
CBT BENCHMARKS
• Cluster creation ( optional, use_existing: true )
• Cache tier configuration
• Replicated and Erasure coded pools
• Collects monitoring information from every node
• Collectl: CPU/disk/net, etc.
CBT EXTRAS
• SSH Key on head
• Public key in every host's authorized_keys (including head)
• Ceph packages on all hosts
• PDSH packages on all hosts (for pdcp)
• Collectl installed on all hosts
BASIC SETUP
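A minimal sketch of that setup, run from the head node (host names are illustrative; substitute your own):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa           # SSH key on head
for host in head client1 mon1 osd1; do             # including head itself
  ssh-copy-id ceph@${host}                         # pub key -> authorized_keys
done
pdsh -R ssh -w head,client1,mon1,osd1 'ceph -v; which collectl'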
• Test the network beforehand; a bad network easily impairs performance
• All-to-All iperf
• Check network routes, interfaces
• Bonding
• Switches should use 5-tuple-hashing for LACP
• Nodes should use LACP xmit_hash_policy=layer3+4
TEST METHODOLOGY
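A rough shape for the all-to-all iperf pass mentioned above (a sketch; the host list and iperf options are assumptions):

hosts="node1 node2 node3 node4"
for src in $hosts; do
  for dst in $hosts; do
    [ "$src" = "$dst" ] && continue
    ssh "$dst" 'iperf -s -D'                   # server daemon on the receiver
    ssh "$src" "iperf -c $dst -t 10 -P 4"      # 4 parallel streams, 10 s
    ssh "$dst" 'pkill -f "iperf -s"'
  done
done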
• Use multiple iterations for micro benchmarks
• Use client sweeps to establish point of contention / max throughput
• Client sweeps should always start with a single client
• Should have 4-6 different client-count increments
• E.g. client1, client[1-2], client[1-3], client[1-4]
TEST METHODOLOGY
Testplan Examples
CBT CLUSTER CONFIGURATION
cluster:
  head: "ceph@head"
  clients: ["ceph@client"]
  osds: ["ceph@osd"]
  mons: ["ceph@mon"]
  osds_per_node: 1
  fs: xfs
  mkfs_opts: -f -i size=2048
  mount_opts: -o inode64,noatime,logbsize=256k
  conf_file: /etc/ceph.conf
  ceph.conf: /etc/ceph/ceph.conf
  iterations: 3
  rebuild_every_test: False
  tmp_dir: "/tmp/cbt"
  pool_profiles:
    replicated:
      pg_size: 4096
      pgp_size: 4096
      replication: 'replicated'
CLIENT SWEEPS
cluster:
  head: "ceph@head"
  clients: ["ceph@client1"]
  osds: ["ceph@osd"]
  mons: ["ceph@mon"]

cluster:
  head: "ceph@head"
  clients: ["ceph@client1", "ceph@client2"]
  osds: ["ceph@osd"]
  mons: ["ceph@mon"]

cluster:
  head: "ceph@head"
  clients: ["ceph@client1", "ceph@client2",
            "ceph@client3"]
  osds: ["ceph@osd"]
  mons: ["ceph@mon"]

cluster:
  head: "ceph@head"
  clients: ["ceph@client1", "ceph@client2",
            "ceph@client3", "ceph@client4"]
  osds: ["ceph@osd"]
  mons: ["ceph@mon"]
• Spawns RADOS bench processes on each client
• Establishes raw RADOS throughput
• Works against replicated or EC pools
RADOS BENCH
benchmarks:
  radosbench:
    op_size: [ 4194304, 524288, 4096 ]
    write_only: False
    time: 300
    concurrent_ops: [ 128 ]
    concurrent_procs: 1
    use_existing: True
    pool_profile: replicated
    osd_ra: [256]
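For reference, each client in this test ends up running something close to the stock rados bench CLI (a sketch; the pool name is illustrative):

# 4 MiB writes, 128 concurrent ops, 300 s -- mirrors the YAML above
rados bench -p cbt-pool 300 write -b 4194304 -t 128 --no-cleanup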
• Spawns FIO processes on each client
• Uses the RBD ioengine
• Establishes raw librbd performance
• No VM / container setup required
FIO WITH RBD IO ENGINE
benchmarks:
  librbdfio:
    time: 900
    vol_size: 65536
    mode: [ randwrite, randread, randrw ]
    rwmixread: 70
    op_size: [ 4096, 16384 ]
    procs_per_volume: [ 1 ]
    volumes_per_client: [ 1 ]
    iodepth: [ 16 ]
    osd_ra: [ 128 ]
    cmd_path: '/home/ceph-admin/fio/fio'
    pool_profile: 'rbd'
    log_avg_msec: 100
    use_existing_volumes: true
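The generated load is essentially fio with the rbd ioengine; a hand-rolled equivalent might look like this (the volume and client names are assumptions, and the volume must already exist, matching use_existing_volumes above):

fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=cbt-librbdfio-0 \
    --rw=randwrite --bs=4096 --iodepth=16 --runtime=900 --time_based \
    --log_avg_msec=100 --name=cbt-librbdfio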
• Maps KRBD volume to each client
• Creates EXT4 filesystem on KRBD
• Mounts filesystem
• Spawns FIO process per client
• Uses AIO IO Engine on filesystem
• Client can be container or bare metal
• Establishes KRBD performance potential
FIO WITH KRBD ON EXT4
benchmarks:
  rbdfio:
    time: 900
    vol_size: 65536
    mode: [ randwrite, randread, randrw ]
    rwmixread: 70
    op_size: [ 4096, 16384 ]
    concurrent_procs: [ 1 ]
    iodepth: [ 16 ]
    osd_ra: [ 128 ]
    cmd_path: '/home/ceph-admin/fio/fio'
    pool_profile: 'rbd'
    log_avg_msec: 100
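Under the hood this is roughly the following per-client sequence (a sketch; volume and mount names are illustrative):

rbd create rbd/cbt-rbdfio-$(hostname) --size 65536    # 64 GiB image
dev=$(sudo rbd map rbd/cbt-rbdfio-$(hostname))        # KRBD device, e.g. /dev/rbd0
sudo mkfs.ext4 "$dev"                                 # EXT4 on KRBD
sudo mount "$dev" /mnt/cbt
fio --ioengine=libaio --direct=1 --rw=randwrite --bs=4096 --iodepth=16 \
    --runtime=900 --time_based --size=60g --directory=/mnt/cbt --name=cbt-rbdfio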
• Create KVM instances outside CBT
• KVM instances listed as clients
• Creates EXT4 filesystem on /dev/vdb
• Mounts filesystem
• Spawns FIO process per client
• Uses AIO IO Engine
• Establishes RBD performance through the QEMU I/O subsystem
FIO WITH KVM (VDB) ON EXT4
benchmarks:
  kvmrbdfio:
    time: 900
    vol_size: 65536
    mode: [ randwrite, randread, randrw ]
    rwmixread: 70
    op_size: [ 4096, 16384 ]
    concurrent_procs: [ 1 ]
    iodepth: [ 16 ]
    osd_ra: [ 128 ]
    cmd_path: '/home/ceph-admin/fio/fio'
    pool_profile: 'rbd'
    log_avg_msec: 100
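Inside each guest the flow is the same, just against the paravirtual disk (a sketch; the mount point is illustrative):

sudo mkfs.ext4 /dev/vdb                               # RBD-backed virtio disk
sudo mount /dev/vdb /mnt/cbt
fio --ioengine=libaio --direct=1 --rw=randwrite --bs=4096 --iodepth=16 \
    --runtime=900 --time_based --size=60g --directory=/mnt/cbt --name=cbt-kvmrbdfio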
• Install COSBench on head/clients outside CBT
• Install / configure RGW outside CBT
• Translates CBT YAML to COSBench XML
• Runs COSBench
COSBENCH
benchmarks:
  cosbench:
    cosbench_dir: /root/0.4.1.0
    cosbench_xml_dir: /home/ceph-admin/plugin/cbt/conf/cosbench/
    controller: client01
    auth:
      config: username=cosbench:operator;password=intel2012;url=…
    obj_size: [128KB]
    template: [default]
    mode: [write]
    ratio: [100]
    ….
Example at cbt/docs/cosbench.README
Running CBT
# Loop through each test plan
for clients in $(seq 1 6); do
  cbt/cbt --archive=/tmp/${clients}-clients-results path/to/test.yaml
done
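Note that the loop above points every iteration at a single test plan path; in practice each client count would use its own per-sweep YAML (e.g. a hypothetical path/to/test-${clients}.yaml built from the sweep stanzas shown earlier).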
ANALYZING DATA
• No robust tools for analysis
• Nested archive directory based on YAML options
• Archive/000000/Librbdfio/osd_ra-00000128…
• Usually awk/grep/cut-fu to CSV
• Plot charts with gnuplot, Excel, R
ANALYZING DATA
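As one example of the awk/grep/cut approach, something like this can reduce a radosbench sweep to a CSV (all paths and file name patterns here are assumptions about the archive layout):

echo "clients,avg_mb_per_sec" > sweep.csv
for d in /tmp/*-clients-results; do
  n=$(basename "$d" | cut -d- -f1)                  # client count from dir name
  bw=$(find "$d" -name 'output*' -exec grep -h 'Bandwidth (MB/sec):' {} + |
       awk '{ sum += $3; runs++ } END { if (runs) print sum / runs }')
  echo "${n},${bw}" >> sweep.csv
done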
THANK YOU!