3. • Benchmarking framework written in Python
• Began as an engineering benchmark tool for upstream development
• Adopted for downstream performance and sizing
• Used widely across the Ceph community
• Red Hat
• Intel / Samsung / SanDisk
• Quanta QCT / Supermicro / Dell
WHAT IS IT?
4. CBT PERSONALITIES
HEAD
• CBT checkout
• Key based authentication to all other hosts
• Including itself
• PDSH packages
• Space to store results archives
• YAML testplans
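The key-based-auth bullets above can be sketched as a small script run on the head node. Hostnames and the `cbt` user are hypothetical, and the `ssh-copy-id` lines are printed rather than executed so the sketch stays side-effect free:

```shell
#!/bin/sh
# Generate a key and show the ssh-copy-id commands that would push it
# to every host -- including the head itself, since CBT also SSHes
# to localhost.
push_keys() {
  key=$1; shift
  [ -f "$key" ] || ssh-keygen -q -t ed25519 -N "" -f "$key"
  for host in "$@"; do
    echo "ssh-copy-id -i $key.pub cbt@$host"
  done
}

# In real use the key would live under ~/.ssh; a temp dir keeps the
# sketch harmless. head1/client1/osd* are placeholder hostnames.
push_keys "$(mktemp -d)/id_ed25519" head1 client1 osd1 osd2
```

In practice you would drop the `echo` and run the `ssh-copy-id` commands directly as the `cbt` user.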
5. CBT PERSONALITIES
CLIENT
• Generates load against the SUT (system under test)
• Ceph admin keyring readable by the cbt user
• Needs loadgen tools installed
• FIO
• COSbench
• Should be a VM for kvmrbdfio
• Can be containerized (good for rbdfio)
7. • RADOS Bench
• FIO with RBD engine
• FIO on KRBD on EXT4
• FIO on KVM (vdb) on EXT4
• COSBench for S3/Swift against RGW
CBT BENCHMARKS
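For reference, the raw invocations behind two of these benchmarks look roughly like the following sketch. The pool `cbt-test` and image `img0` are placeholders; CBT's actual option handling lives in its YAML testplans:

```shell
#!/bin/sh
# Build the command lines CBT roughly drives for two benchmark modes.

# RADOS Bench: N seconds of 4M writes, 128 in-flight ops
radosbench_cmd() {
  echo "rados -p $1 bench $2 write -b 4194304 -t 128 --no-cleanup"
}

# FIO with the librbd engine: 4K random writes against an RBD image
librbdfio_cmd() {
  echo "fio --ioengine=rbd --pool=$1 --rbdname=$2 --rw=randwrite" \
       "--bs=4k --iodepth=32 --runtime=$3 --time_based --name=librbdfio"
}

radosbench_cmd cbt-test 60
librbdfio_cmd cbt-test img0 60
```

Running the printed commands by hand on a client is a quick sanity check before handing the same parameters to a CBT testplan.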
8. • Cluster creation (optional; skip with use_existing: true)
• Cache tier configuration
• Replicated and Erasure coded pools
• Collects monitoring information from every node
• Collectl – cpu/disk/net/etc.
CBT EXTRAS
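A minimal testplan tying these options together might look like the sketch below. Hostnames and values are placeholders, and the field names follow the example YAMLs shipped in the CBT repository, which should be treated as the authority:

```yaml
cluster:
  user: 'cbt'
  head: 'head1'
  clients: ['client1']
  osds: ['osd1', 'osd2']
  use_existing: true        # skip cluster creation
  iterations: 3
  tmp_dir: '/tmp/cbt'
  pool_profiles:
    rbd:
      pg_size: 128
      pgp_size: 128
      replication: 3
benchmarks:
  radosbench:
    op_size: [4194304]
    write_only: true
    time: 60
    concurrent_ops: [128]
    pool_profile: 'rbd'
```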
9. • SSH key on head
• Public key in every host's authorized_keys (including head)
• Ceph packages on all hosts
• PDSH packages on all hosts (for pdcp)
• Collectl installed on all hosts
BASIC SETUP
10. • Test the network beforehand; a bad network easily impairs performance
• All-to-All iperf
• Check network routes, interfaces
• Bonding
• Switches should use 5-tuple-hashing for LACP
• Nodes should use LACP xmit_hash_policy=layer3+4
TEST METHODOLOGY
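The all-to-all iperf bullet can be sketched as generating every ordered host pair and running a short transfer for each. Hostnames are hypothetical, and the iperf invocations are printed rather than executed:

```shell
#!/bin/sh
# Emit every ordered (src, dst) pair of distinct hosts.
pairs() {
  for src in $1; do
    for dst in $1; do
      [ "$src" = "$dst" ] || echo "$src $dst"
    done
  done
}

HOSTS="client1 osd1 osd2"   # placeholder hostnames
pairs "$HOSTS" | while read -r src dst; do
  # Real run: start an iperf server on dst, then drive it from src.
  echo "ssh $dst iperf -s -D && ssh $src iperf -c $dst -t 10"
done
```

Checking every ordered pair (not just one direction) catches asymmetric routes and misconfigured bonds.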
11. • Use multiple iterations for micro benchmarks
• Use client sweeps to establish point of contention / max throughput
• Client sweeps should always start with a single client
• Use 4-6 increments of client count
• E.g. client1, client[1-2], client[1-3], client[1-4]
TEST METHODOLOGY
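The sweep increments from the example above can be generated mechanically, which is handy when feeding host lists to pdsh; a minimal sketch:

```shell
#!/bin/sh
# Print the client sets for a sweep: client1, client[1-2], ...
sweep() {
  n=1
  while [ "$n" -le "$1" ]; do
    if [ "$n" -eq 1 ]; then
      echo "client1"
    else
      echo "client[1-$n]"
    fi
    n=$((n + 1))
  done
}

sweep 4
```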
22. • No robust tools for analysis
• Nested archive directory based on YAML options
• Archive/000000/Librbdfio/osd_ra-00000128…
• Usually awk/grep/cut-fu to CSV
• Plot charts with gnuplot, Excel, or R
ANALYZING DATA
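Since there are no robust analysis tools, extracting CSV by hand looks roughly like this. The sample summary below is invented for illustration; real output sits in files under the nested archive directories:

```shell
#!/bin/sh
# Hypothetical rados bench style summary, standing in for an archive file.
sample=$(mktemp)
cat > "$sample" <<'EOF'
Total time run:         60.1
Bandwidth (MB/sec):     998.5
Average Latency(s):     0.128
EOF

# awk-fu: pull bandwidth and latency into one CSV row
awk -F: '
  /Bandwidth \(MB\/sec\)/ {bw = $2}
  /Average Latency/       {lat = $2}
  END { gsub(/ /, "", bw); gsub(/ /, "", lat); print bw "," lat }
' "$sample"
```

One such row per run, concatenated across the archive tree, gives a CSV ready for gnuplot, Excel, or R.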