Ceph Day LA – July 16, 2015
Deploying Flash Storage For Ceph
Leading Supplier of End-to-End Interconnect Solutions
[Diagram: end-to-end portfolio (ICs, adapter cards, switches/gateways, cables/modules, host/fabric software) spanning server/compute, switch/gateway, storage front/back-end, and metro/WAN; Virtual Protocol Interconnect supports 56G InfiniBand/FCoIB and 10/40/56GbE/FCoE.]
Comprehensive End-to-End InfiniBand and Ethernet Portfolio
Scale-Out Architecture Requires A Fast Network
§ Scale-out grows capacity and performance in parallel
§ Requires fast network for replication, sharing, and metadata (file)
•  Throughput requires bandwidth
•  IOPS requires low latency
§ Proven in HPC, storage appliances, cloud, and now… Ceph
Interconnect Capabilities Determine Scale Out Performance
Solid State Storage Technology Evolution – Lower Latency
Advanced Networking and Protocol Offloads Required to Match Storage Media Performance
[Chart: networked storage access time (0.1 to 1000 microseconds) across storage media technologies (hard drives, NAND flash, next-gen NVM), split into the storage media component vs. the network and storage protocol (SW) component; as media access time drops, the network HW & SW share of total latency grows toward the majority.]
Ceph and Networks
§ High performance networks enable maximum cluster availability
•  Clients, OSDs, monitors, and metadata servers communicate over multiple network layers
•  Real-time requirements for heartbeat, replication, recovery and re-balancing
§ Cluster (“backend”) network performance dictates the cluster’s performance and scalability
•  “Network load between Ceph OSD Daemons easily dwarfs the network load between Ceph Clients
and the Ceph Storage Cluster” (Ceph Documentation)
Ceph Deployment Using 10GbE and 40GbE
§ Cluster (Private) Network @ 40/56GbE
•  Smooth HA, unblocked heartbeats, efficient data balancing
§ Throughput Clients @ 40/56GbE
•  Guarantees line rate for high ingress/egress clients
§ IOPS Clients @ 10GbE or 40/56GbE
•  100K+ IOPS/client @ 4K blocks
2.5x Higher Throughput, 15% Higher IOPS with 40Gb Ethernet vs. 10GbE!
(http://www.mellanox.com/related-docs/whitepapers/WP_Deploying_Ceph_over_High_Performance_Networks.pdf)
Throughput testing results based on fio benchmark, 8MB block, 20GB file, 128 parallel jobs, RBD Kernel Driver with Linux Kernel 3.13.3, RHEL 6.3, Ceph 0.72.2
IOPS testing results based on fio benchmark, 4KB block, 20GB file, 128 parallel jobs, RBD Kernel Driver with Linux Kernel 3.13.3, RHEL 6.3, Ceph 0.72.2
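For reference, a sketch of fio invocations matching the parameters above; the published material does not include the job files, so the device path, read/write pattern, and ioengine flags are assumptions:

    # Throughput test sketch: 8MB blocks, 20GB per job, 128 parallel jobs
    # against a kernel-mapped RBD device (path assumed)
    fio --name=ceph-throughput --filename=/dev/rbd0 --rw=read --bs=8M --size=20G \
        --numjobs=128 --ioengine=libaio --direct=1 --group_reporting

    # IOPS test sketch: same layout with 4KB blocks and random reads
    fio --name=ceph-iops --filename=/dev/rbd0 --rw=randread --bs=4k --size=20G \
        --numjobs=128 --ioengine=libaio --direct=1 --group_reporting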
[Diagram: client nodes reach the Ceph nodes (monitors, OSDs, MDS) and the admin node over a 10GbE/40GbE public network; the Ceph nodes interconnect over a 40GbE cluster network.]
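A minimal ceph.conf sketch of the split public/cluster network layout shown above; the subnets are placeholders:

    [global]
    # client-facing ("public") traffic: 10GbE or 40GbE in this design
    public network = 192.168.10.0/24
    # replication, recovery, and heartbeat ("cluster") traffic: 40/56GbE in this design
    cluster network = 192.168.20.0/24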
Ceph Is Accelerated by A Faster Network – Optimized at 56GbE
[Chart: Ceph fio_rbd random read throughput, 40Gb/s vs. 56Gb/s network: 64K random read 4,300 vs. 5,475 MB/s; 256K random read 4,350 vs. 5,495 MB/s.]
27% More Throughput On Random Reads
Ceph Reference Architectures Using Disk
Optimizing Ceph For Throughput and Price/Throughput
§ Red Hat, Supermicro, Seagate, Mellanox, Intel
§ Extensive Performance Testing: Disk, Flash, Network, CPU, OS, Ceph
§ Reference Architecture Published Soon
[Diagrams: 10GbE network setup and 40GbE network setup]
Testing 12 to 72 Disks Per Node, 2x10GbE vs. 1x40GbE
§ Key Test Results
•  More disks = more MB/s per server, less per OSD
•  More flash is faster (usually)
•  All-flash: 2 SSDs as fast as many disks
§ 40GbE Advantages
•  Up to 2x read throughput per server
•  Up to 50% decrease in latency
•  Easier than bonding multiple 10GbE links
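As a comparison for the bonding point above: two bonded 10GbE links need per-host bonding configuration plus matching LACP setup on the switch, while a single 40GbE port needs neither. A rough sketch of what the 2x10GbE case adds on a Linux host (Debian/Ubuntu ifupdown style; interface names and addresses are placeholders):

    auto bond0
    iface bond0 inet static
        address 192.168.20.11
        netmask 255.255.255.0
        bond-slaves eth2 eth3
        bond-mode 802.3ad               # requires matching LACP configuration on the switch
        bond-miimon 100
        bond-xmit-hash-policy layer3+4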
Cisco UCS Reference Architecture for Ceph
§ Cisco Test Setup
•  UCS C3160 servers, Nexus 9396PX switch
•  28 or 56 6TB SAS disks; replication or erasure coding (EC)
•  4x10GbE per server, bonded
§ Results
•  One node read: 3700 MB/s rep, 1100 MB/s EC
•  One node write: 860 MB/s rep, 1050 MB/s EC
•  3 nodes read: 9,700 MB/s rep, 7,700 MB/s EC
•  8 nodes read: 20,000 MB/s rep, 10,000 MB/s EC
Optimizing Ceph for Flash
Ceph Flash Optimization
Highlights Compared to Stock Ceph
•  Read performance up to 8x better
•  Write performance up to 2x better with tuning
Optimizations
•  All-flash storage for OSDs
•  Enhanced parallelism and lock optimization
•  Optimization for reads from flash
•  Improvements to Ceph messenger
Test Configuration
•  InfiniFlash Storage with IFOS 1.0 EAP3
•  Up to 4 RBDs
•  2 Ceph OSD nodes, connected to InfiniFlash
•  40GbE NICs from Mellanox
SanDisk InfiniFlash
SanDisk InfiniFlash, Maximizing Ceph Random Read IOPS
[Charts: 8KB random read, QD=16, Stock Ceph vs. IF-500 at 25/50/75/100% read mixes. Left: random read IOPS (axis 0-200K); right: random read latency in ms (axis 0-14).]
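A sketch of an fio job matching the 8KB, QD=16 sweep above; the rbd ioengine and the pool/image/client names are assumptions, since the published slides do not include the job files:

    fio --name=rand8k --ioengine=rbd --clientname=admin --pool=rbd --rbdname=test1 \
        --rw=randrw --rwmixread=75 --bs=8k --iodepth=16 \
        --direct=1 --runtime=300 --time_based --group_reporting
    # --rwmixread sets the read percentage (25/50/75); use --rw=randread for the 100% read case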
SanDisk Ceph Optimizations for Flash
Setup                   SanDisk InfiniFlash              Scalable Informatics     Supermicro               Mellanox
OSD Servers             Dell R720                        SI Unison                Supermicro               Supermicro
OSD Nodes               2                                2                        3                        2
Flash                   1 InfiniFlash (64x8TB = 512TB)   24 SATA SSDs per node    2x PCIe SSDs per node    12x SAS SSDs per node
Cluster Network         40GbE                            100GbE                   40GbE                    56GbE
Total Read Throughput   71.6 Gb/s                        70 Gb/s                  43 Gb/s                  44 Gb/s
XioMessenger
Adding RDMA To Ceph
RDMA Enables Efficient Data Movement
§ Hardware Network Acceleration → Higher Bandwidth, Lower Latency
§ Highest CPU Efficiency → More CPU Power To Run Applications
[Diagram: efficient data movement with RDMA → higher bandwidth, lower latency, more CPU power for applications.]
RDMA Enables Efficient Data Movement At 100Gb/s
§ Without RDMA
•  5.7 GB/s throughput
•  20-26% CPU utilization
•  4 cores 100% consumed by moving data
§ With Hardware RDMA
•  11.1 GB/s throughput at half the latency
•  13-14% CPU utilization
•  More CPU power for applications, better ROI
[Diagram: 100GbE with CPU onload vs. 100GbE with network offload. CPU onload penalties: half the throughput, twice the latency, higher CPU consumption. Network offload: 2x better bandwidth, half the latency, 33% lower CPU.]
See the demo: https://www.youtube.com/watch?v=u8ZYhUjSUoI
Adding RDMA to Ceph
§ RDMA Beta Included in Hammer
•  Mellanox, Red Hat, CohortFS, and Community collaboration
•  Full RDMA expected in Infernalis
§ Refactoring of Ceph Messaging Layer
•  New RDMA messenger layer called XioMessenger
•  New class hierarchy allowing multiple transports (simple one is TCP)
•  Async design that leverages Accelio
•  Reduced locks; Reduced number of threads
§ XioMessenger built on top of Accelio (RDMA abstraction layer)
•  Integrated into all CEPH user space components: daemons and clients
•  Both the “public network” and the “cluster network”
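In the Hammer-era beta builds, the Accelio transport was selected through an experimental messenger-type option in ceph.conf; a hedged sketch (option names in this experimental code changed between builds, so treat it as illustrative rather than definitive):

    [global]
    ms_type = xio    # select the Accelio/RDMA XioMessenger instead of the default TCP messenger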
§ Open source!
•  https://github.com/accelio/accelio/ && www.accelio.org
§ Faster RDMA integration to application
§ Asynchronous
§ Maximize msg and CPU parallelism
•  Enable >10GB/s from single node
•  Enable <10usec latency under load
§ In Giant and Hammer
•  http://wiki.ceph.com/Planning/Blueprints/Giant/Accelio_RDMA_Messenger
Accelio, High-Performance Reliable Messaging and RPC Library
Ceph 4KB Read IOPS: 40Gb TCP vs. 40Gb RDMA
[Chart: thousands of 4KB read IOPS, 40Gb TCP vs. 40Gb RDMA, for 2 OSDs/4 clients and 4 OSDs/4 clients (axis 0-450K). Annotations show 34-38 cores busy in the OSD nodes and 24-30 cores in the clients for each run.]
Ceph-Powered Solutions
Deployment Examples
Ceph For Large Scale Storage – Fujitsu Eternus CD10000
§ Hyperscale Storage
•  4 to 224 nodes
•  Up to 56 PB raw capacity
§ Runs Ceph with Enhancements
•  3 different storage nodes
•  Object, block, and file storage
§ Mellanox InfiniBand Cluster Network
•  40Gb InfiniBand cluster network
•  10Gb Ethernet front end network
Media & Entertainment Storage – StorageFoundry Nautilus
§ Turnkey Object Storage
•  Built on Ceph
•  Pre-configured for rapid deployment
•  Mellanox 10/40GbE networking
§ High-Capacity Configuration
•  6-8TB Helium-filled drives
•  Up to 2PB in 18U
§ High-Performance Configuration
•  Single client read 2.2 GB/s
•  SSD caching + Hard Drives
•  Supports Ethernet, IB, FC, FCoE front-end ports
§ More information: www.storagefoundry.net
SanDisk InfiniFlash
§ Flash Storage System
•  Announced March 2015
•  512 TB (raw) in one 3U enclosure
•  Tested with 40GbE networking
§ High Throughput
•  8 SAS ports, up to 7GB/s
•  Connect to 2 or 4 OSD nodes
•  Up to 1M IOPS with two nodes
§ More information:
•  http://bigdataflash.sandisk.com/infiniflash
More Ceph Solutions
§ Cloud – OnyxCCS ElectraStack
•  Turnkey IaaS
•  Multi-tenant computing system
•  5x faster Node/Data restoration
•  https://www.onyxccs.com/products/8-series
§ Flextronics CloudLabs
•  OpenStack on CloudX design
•  2SSD + 20HDD per node
•  Mix of 1Gb/40GbE network
•  http://www.flextronics.com/
§ ISS Storage Supercore
•  Healthcare solution
•  82,000 IOPS on 512B reads
•  74,000 IOPS on 4KB reads
•  1.1GB/s on 256KB reads
•  http://www.iss-integration.com/supercore.html
§ Scalable Informatics Unison
•  High availability cluster
•  60 HDD in 4U
•  Tier 1 performance at archive cost
•  https://scalableinformatics.com/unison.html
Even More Ceph Solutions
§ Keeper Technology – keeperSAFE
•  Ceph appliance
•  For US Government
•  File Gateway for NFS, SMB, & StorNext
•  Mellanox Switches
§ Monash University -- Melbourne, Australia
•  3 Ceph Clusters, >6PB total storage
•  8, 17 (27), and 37 nodes
•  OpenStack Cinder and S3/Swift Object Storage
•  Mellanox networking, 10GbE nodes, 56GbE ISLs
Summary
§ Ceph scalability and performance benefit from high performance networks
•  Especially with lots of disks
§ Ceph being optimized for flash storage
§ End-to-end 40/56 Gb/s transport accelerates Ceph today
•  100Gb/s testing has begun!
•  Available in various Ceph solutions and appliances
§ RDMA is next to optimize flash performance—beta in Hammer
Thank You
SanDisk IF-500 topology on a single 512 TB IF-100
Flash Memory Summit 2015, Santa Clara, CA
-  IF-100 BW is ~8.5GB/s (with 6Gb SAS; 12Gb SAS is coming EOY) and ~1.5M 4K IOPS
-  We saw that Ceph is very resource hungry, so we need at least 2 physical nodes on top of IF-100
-  We need to connect all 8 ports of an HBA to saturate IF-100 for bigger block sizes
SanDisk Ceph-InfiniFlash Setup Details
Flash Memory Summit 2015, Santa Clara, CA
Performance Config - IF-500: 2-node cluster (32 drives shared to each OSD node)

Node                        2 Servers (Dell R720): 2x E5-2680 12C 2.8GHz; 4x 16GB RDIMM, dual rank x4 (64GB);
                            1x Mellanox X3 Dual 40GbE; 1x LSI 9207 HBA card
RBD Client                  4 Servers (Dell R620): 1x E5-2680 10C 2.8GHz; 2x 16GB RDIMM, dual rank x4 (32GB);
                            1x Mellanox X3 Dual 40GbE

Storage - IF-100 with 64 Icechips in A2 Config
IF-100                      Connected to 64 x 1YX2 Icechips in A2 topology; total storage = 64 x 8TB = 512TB

Network Details
40G Switch                  NA

OS Details
OS                          Ubuntu 14.04 LTS 64-bit, kernel 3.13.0-32
LSI card / driver           SAS2308 (9207), mpt2sas
Mellanox 40Gb/s NIC         MT27500 [ConnectX-3], mlx4_en 2.2-1 (Feb 2014)

Cluster Configuration
Ceph Version                sndk-ifos-1.0.0.04 (0.86.rc.eap2)
Replication (Default)       2 [Host]  (Note: host-level replication)
Pools, PGs & RBDs           4 pools; 2048 PGs per pool; 2 RBDs from each pool
RBD size                    2TB
Number of Monitors          1
Number of OSD Nodes         2
Number of OSDs per Node     32 (total OSDs = 32 x 2 = 64)
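The pool and RBD layout in the table above corresponds roughly to commands like the following; pool and image names are placeholders, and the 2TB size is expressed in MB as this Ceph release expects:

    # 4 pools with 2048 PGs each, two 2TB RBD images per pool
    for p in pool1 pool2 pool3 pool4; do
        ceph osd pool create $p 2048 2048              # pg_num and pgp_num
        rbd create --pool $p --size 2097152 rbd1       # 2TB image (size given in MB)
        rbd create --pool $p --size 2097152 rbd2
    done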
SanDisk: 8K Random - 2 RBD/Client with File System
[Charts: IOPS (left) and latency in ms (right) for 2 LUNs per client, 4 clients total, Stock Ceph vs. IFOS 1.0, sweeping queue depth 1-32 at read percentages of 0, 25, 50, 75, and 100. IOPS axis 0-300K; latency axis 0-120 ms.]
SanDisk: 64K Random - 2 RBD/Client with File System
[Charts: IOPS (left) and latency in ms (right) for 2 LUNs per client, 4 clients total, Stock Ceph vs. IFOS 1.0, sweeping queue depth 1-32 at read percentages of 0, 25, 50, 75, and 100. IOPS axis 0-160K; latency axis 0-180 ms.]
