2. Sheepdog – Distributed Object Storage
● Replicated shared storage for VMs
● Most intelligent storage in OSS
– Self-healing
– Self-managing
– No configuration file
– One-liner setup
● Scale-out (1000+ nodes)
● Integrates well with QEMU/Libvirt/OpenStack
4. Background Knowledge
● VM Storage stack
● QEMU/KVM stack
● Virtual Disk
● IO Request Types
● Write Cache
● QEMU Snapshot
5. VM Storage Stack
Guest File System
Guest Block Driver
QEMU Disk Emulation
QEMU Image Format
QEMU Protocol
POSIX file, Raw device, Sheepdog, Ceph
● The Sheepdog block driver in QEMU is implemented at the protocol layer
● Supports all QEMU image formats
● Raw format as default
– Best performance
● Snapshots are supported by the Sheepdog protocol
7. Virtual Disk
● Transports
– ATA, SCSI, Virtio
– Virtio – Designed for VM
● Simpler interface, better performance
● Virtio-scsi
– Enhancement of virtio-blk
– Advanced DISCARD operation support
● Write-cache
– Essential for distributed backend storage to boost
performance
8. IO Request Types of VD
● Read/Write
● Discard
– The VM's FS (ext4, XFS) transparently informs the
underlying storage backend to release blocks
● FLUSH
– Ensures dirty data reaches the underlying backend storage
● Write Cache Enable (WCE)
– VM uses it to change the VD cache mode on the fly
9. Write Cache
● Not a memory cache like the page cache
– Direct IO (O_DIRECT) bypasses the page cache but does not
bypass the write cache
– O_SYNC or fsync(2) flushes the write cache
● All modern disks have one, and it is well supported
by the OS
● Most virtual devices emulate write cache
– As safe as a well-behaved hard-disk cache
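The write-cache/page-cache distinction above can be sketched with a toy model (`CachedDisk` is purely illustrative, not Sheepdog or kernel code):

```python
# Toy model of a disk write cache: writes are acknowledged from a volatile
# cache; only an explicit flush (what O_SYNC / fsync(2) / a FLUSH request
# triggers) makes them durable.
class CachedDisk:
    def __init__(self):
        self.cache = {}   # volatile write cache: lost on power failure
        self.media = {}   # durable media

    def write(self, block, data):
        # Fast path: acknowledged as soon as it hits the cache.
        self.cache[block] = data

    def flush(self):
        # Push everything in the cache down to durable media.
        self.media.update(self.cache)
        self.cache.clear()

disk = CachedDisk()
disk.write(0, b"dirty")
assert 0 not in disk.media        # a crash here would lose the write
disk.flush()
assert disk.media[0] == b"dirty"  # durable only after the flush
```

This is why a VM's FLUSH requests matter: without them, acknowledged writes may still sit only in the (volatile) cache.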
10. QEMU Snapshot
● Two types of state
– Memory state (VM state) and disk state
● Users can optionally save
– VM state only
– VM state + disk state
– Disk state only
● Internal snapshot & external snapshot
– Sheepdog chooses external snapshots
14. Dual NIC
● One for control messages (heartbeat), the
other for data transfer
– If the data NIC is down, data transfer falls back to the
control NIC
– But if the control NIC is down, the node is considered
dead
● Single NIC
– Control and data will share it
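The failover policy above can be sketched as a tiny decision function (`route` is a hypothetical helper for illustration, not sheep code):

```python
def route(control_nic_up, data_nic_up):
    """Sketch of the dual-NIC policy: heartbeat loss means death,
    data-NIC loss only means falling back to the control NIC."""
    if not control_nic_up:
        return "node considered dead"   # heartbeat lost
    if data_nic_up:
        return "data NIC"               # normal case
    return "control NIC"                # data traffic falls back

assert route(True, True) == "data NIC"
assert route(True, False) == "control NIC"            # fallback
assert route(False, True) == "node considered dead"   # even with data NIC up
```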
15. Data Management
● Object Management
● VM Request Management
● Auto-weighting
● Multi-disk
● Object Cache
● Journaling
16. Object Management
● Data are stored as replicated objects
– An object is a plain fixed-size POSIX file
● Objects are auto-rebalanced on node
add/delete/crash events
● Replicas are auto-recovered
● Each VDI can have a different copy count
● Supports SAN-like, SAN-less, or even mixed
architectures
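Because objects are plain fixed-size files, routing a virtual-disk request is simple arithmetic. A minimal sketch, assuming 4 MB objects (the constant and the `locate` helper are illustrative, not Sheepdog's actual code):

```python
OBJECT_SIZE = 4 << 20   # assumed fixed object size (4 MB) for illustration

def locate(offset, length):
    """Split a virtual-disk byte range into
    (object_index, offset_in_object, length) pieces."""
    pieces = []
    while length > 0:
        idx, off = divmod(offset, OBJECT_SIZE)
        n = min(length, OBJECT_SIZE - off)   # stay inside one object
        pieces.append((idx, off, n))
        offset += n
        length -= n
    return pieces

# A 6 MB write starting at the 3 MB mark spans objects 0, 1 and 2:
assert locate(3 << 20, 6 << 20) == [
    (0, 3 << 20, 1 << 20),
    (1, 0, 4 << 20),
    (2, 0, 1 << 20),
]
```

Each piece can then be hashed independently to find the nodes holding its replicas.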
18. VM Request Management
● Parallel request handling
– Every node can handle requests concurrently
● Serves requests even during node change
events
– VM requests are prioritized against replica recovery
requests
– VM requests are retried until they succeed during node
change events
19. Auto-weighting
● Node storage is auto-weighted
– Differently sized nodes store only their proportional
share
● Uses consistent hashing + virtual nodes
● Users can specify the exported space
– All the free space is used by default
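The weighting scheme can be sketched with a tiny consistent-hash ring in which a node's virtual-node count is proportional to its capacity (`build_ring`/`lookup` and the per-TB factor are hypothetical, not sheep's real hashing):

```python
import bisect
import hashlib

def _h(s):
    # Stable 64-bit hash position on the ring.
    return int.from_bytes(hashlib.sha1(s.encode()).digest()[:8], "big")

def build_ring(nodes, vnodes_per_tb=64):
    """nodes: {name: capacity_in_tb}. Bigger nodes get more virtual
    nodes, so they receive a proportional share of objects."""
    return sorted((_h("%s:%d" % (name, i)), name)
                  for name, cap in nodes.items()
                  for i in range(int(cap * vnodes_per_tb)))

def lookup(ring, obj_id):
    """First virtual node clockwise from the object's hash."""
    keys = [k for k, _ in ring]
    pos = bisect.bisect(keys, _h(obj_id)) % len(ring)
    return ring[pos][1]

ring = build_ring({"node-a": 1, "node-b": 2})   # node-b has 2x capacity
counts = {"node-a": 0, "node-b": 0}
for i in range(10000):
    counts[lookup(ring, "obj-%d" % i)] += 1
# node-b ends up holding roughly twice as many objects as node-a
assert counts["node-b"] > counts["node-a"]
```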
20. Multi-disk
● A single daemon manages multiple disks
– $ sheep /disk1,/disk2{,disk3...}
– Auto-weighting
– Auto-rebalance
– Recover objects from other Sheep
● Simply put, MD = raid0 + auto-recovery
● Eliminates the need for hardware RAID
– Support hot-plug/unplug
21. Object cache
● Sheepdog's write cache for virtual disks
– $ sheep -w size=100G /store
● $ qemu -drive cache={writeback|writethrough|off}
– Supports writeback, writethrough, and directio
– LRU algorithm for reclaiming
– Objects are shared between VMs from the same base
● Use SSD for object cache to get a boost
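A minimal sketch of LRU reclaiming for cached objects (illustrative only; the real object cache also tracks dirty state and writes victims back to the cluster before dropping them):

```python
from collections import OrderedDict

class ObjectCache:
    """Toy LRU cache of fixed-size objects."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.objs = OrderedDict()   # insertion order == recency order

    def access(self, obj_id, data):
        self.objs[obj_id] = data
        self.objs.move_to_end(obj_id)   # mark as most recently used
        while len(self.objs) > self.capacity:
            # Reclaim the least recently used object; a real cache
            # would flush it to the cluster if dirty.
            self.objs.popitem(last=False)

cache = ObjectCache(2)
cache.access("a", b"...")
cache.access("b", b"...")
cache.access("a", b"...")     # "a" is now more recent than "b"
cache.access("c", b"...")     # evicts "b", the least recently used
assert list(cache.objs) == ["a", "c"]
```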
23. Journaling
● $ sheep -j dir=/path/to/journal /store
● Sheepdog uses O_SYNC writes by default
● Object writes are fairly random
● Logs all write operations as appends to a
rotated log file
– Transforms random writes into sequential writes
– Object writes can then drop O_SYNC
● Boosts performance + avoids partial writes
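The idea can be sketched as an append-only log with a single sync point (a toy `Journal`, not Sheepdog's actual journal format):

```python
import os
import tempfile

class Journal:
    """Toy journal: random object writes become sequential appends."""
    def __init__(self, path):
        self.f = open(path, "ab")

    def log_write(self, obj_id, offset, data):
        header = ("%d %d %d\n" % (obj_id, offset, len(data))).encode()
        self.f.write(header + data)   # sequential append: cheap on disks
        self.f.flush()
        os.fsync(self.f.fileno())     # one sync point makes it durable
        # The object file itself can now be updated without O_SYNC and,
        # after a crash, replayed from this journal.

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "journal")
    j = Journal(path)
    j.log_write(7, 4096, b"hello")    # random offset inside the object...
    j.log_write(3, 0, b"world")       # ...but strictly sequential on disk
    size = os.path.getsize(path)
    j.f.close()
assert size > 0
```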
25. Sparse Volume
● Only one inode object is allocated for a new VDI
by default
– Instant creation of a new VDI
● Create data objects on demand
● Users can preallocate data objects
– Not recommended; the performance gain is very
limited
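A toy model of the sparse-volume idea (the 4 MB object size and the `SparseVDI` class are illustrative assumptions, not Sheepdog's inode layout):

```python
class SparseVDI:
    """Toy sparse volume: only the inode exists until data is written."""
    OBJECT_SIZE = 4 << 20   # assumed 4 MB data objects

    def __init__(self, size):
        # Creating the VDI allocates only this inode -- instant.
        self.inode = {"size": size, "objects": {}}

    def write(self, offset, data):
        idx = offset // self.OBJECT_SIZE
        # First write to this region allocates the data object on demand.
        obj = self.inode["objects"].setdefault(idx,
                                               bytearray(self.OBJECT_SIZE))
        off = offset % self.OBJECT_SIZE
        obj[off:off + len(data)] = data

vdi = SparseVDI(100 << 30)         # a "100 GB" disk, no data objects yet
assert len(vdi.inode["objects"]) == 0
vdi.write(8 << 20, b"x")           # first write allocates exactly one object
assert list(vdi.inode["objects"]) == [2]
```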
26. Discard operation
● Releases objects when users delete files inside
the VM
● Only IDE and virtio-scsi devices are supported
– CentOS 6.3+
– OS running vanilla kernel 3.4+
– We need QEMU 1.5+
27. Snapshot
● Live snapshot (VM state + vdisk)
– Save the snapshot in Sheepdog
● QEMU monitor > savevm tag
– Restore the snapshot on the fly
● QEMU monitor > loadvm tag
– Restore the snapshot at boot
● $ qemu -hda sheepdog -loadvm tag
● Live or off-line snapshot (vdisk only)
– $ qemu-img snapshot sheepdog:disk
28. Snapshot cont.
● Tree-structured snapshots, rooted at the base image
● Roll back to any snapshot and create a new branch
29. Snapshot cont.
● All snapshots are COW based
– Only an inode object is created for the snapshot
– Taken instantly
● Supports incremental snapshot backup
● Snapshots can be read from outside the cluster
– $ collie vdi read -s tag disk
● Snapshots are stored in Sheepdog storage,
so they are shared by all nodes
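The COW behavior can be sketched with a toy inode that maps object indexes to object IDs (hypothetical classes, not Sheepdog's real inode layout):

```python
class VDI:
    """Toy inode: object index -> object id, plus a parent snapshot."""
    def __init__(self, objects=None, parent=None):
        self.objects = dict(objects or {})
        self.parent = parent

def snapshot(vdi):
    # Instant: only the inode (object map) is cloned, no data is copied.
    snap = VDI(vdi.objects, parent=vdi.parent)
    vdi.parent = snap    # the writable VDI now branches off the snapshot
    return snap

base = VDI({0: "obj-A", 1: "obj-B"})
snap = snapshot(base)
# A later write copies object 0 into a new object for the writable branch:
base.objects[0] = "obj-A'"
assert snap.objects[0] == "obj-A"            # snapshot keeps the old data
assert base.objects[1] == snap.objects[1]    # unwritten objects stay shared
```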
30. Sheepfs
● FUSE-based pseudo file system to export
Sheepdog's virtual disks
– $ sheepfs /mountpoint
● Mounts a vdisk into the local file system hierarchy
as a block file
– $ echo vdisk > /mountpoint/vdi/mount
– Then /mountpoint/volume/vdisk will show up
31. Features from the future
● Cluster-wide snapshot
– Useful for backup and inter-cluster VDI
migration/sharing
– Dedup, compression, incremental snapshot
● QEMU-SD connection auto-restart
– Useful for upgrading sheep without stopping the VM
● QEMU-SD multi-connection
– Higher-availability VMs