© Supermicro 2015
Supermicro Ceph Solutions
Open Solutions Defined by Workload
Ahmad Ali
Director, Storage Solutions
AhmadAli@supermicro.com
Chicago, 8/18/2015
Objectives
• Generate well-balanced solutions without compromising on availability, performance, capacity, and cost
• Derive configurations from empirical data gathered during testing
• Provide reference architectures to prime opportunities and accelerate the sales cycle
Proof of Concept - Objective
The Ceph Performance Bingo Card (workload rows vs. scale columns)

Usable Capacity:                                      XS - 100TB | S - 500TB | M - 1PB | L - 2PB
IOPS-Optimized (3x replicated data, tiered over EC):       X     |     X     |    X    |    X
Throughput-Optimized* (3x replicated data):                X     |     X     |    X    |    X
Capacity-Optimized* (6+2 EC protected data):               X     |     X     |    X    |    X
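The bingo card names two data-protection schemes: 6+2 erasure coding for capacity, and 3x replicated data tiered over erasure-coded storage for IOPS. As an illustrative sketch only (not from the deck; pool names, PG counts, and the failure domain are placeholders), both would be set up roughly like this with the ceph CLI of that era:

    # 6+2 erasure-code profile and capacity pool
    ceph osd erasure-code-profile set ec-6-2 k=6 m=2 ruleset-failure-domain=host
    ceph osd pool create ecpool 2048 2048 erasure ec-6-2
    # 3x replicated pool used as a writeback cache tier in front of the EC pool
    ceph osd pool create cachepool 1024 1024 replicated
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool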
36-Bay Server Flexibility
A framework to support multiple configurations in the same box:
• Dual-socket motherboard, enabling single- and dual-CPU configurations
• Dual backplanes with x8-lane SAS connectivity and dual IT-mode controllers
• 2x available PCIe slots for NVMe, improving the drive-to-SSD ratio
• 2x dual-port 10G cards for separate cluster and client networks (sample ceph.conf sketch below)
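A minimal ceph.conf sketch of the split client/cluster networks mentioned above (the subnets are placeholders, not taken from the deck):

    [global]
    # client (public) traffic on one 10G pair, replication/recovery on the other
    public network  = 192.168.10.0/24
    cluster network = 192.168.20.0/24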
POC Hardware Description (hardware diagram slides)
X9 Ceph Drive Configuration (X9 generation)
Write journals for OSDs can be stored on SSD media for optimal OSD write performance. The ratio of SSD to HDD allows Ceph to be tuned for use in a broad spectrum of applications.
OSD node diagram: two SSDs, each journaling 5x OSDs (5:1 HDD:SSD ratio).
X10 Ceph Drive Configuration (X10 generation)
Write journals for OSDs can be stored on SSD media for optimal OSD write performance. The ratio of SSD to HDD allows Ceph to be tuned for use in a broad spectrum of applications.
For 12-36 bay OSD nodes, the use of PCI-E flash / NVMe instead of SAS-based SSD increases available I/O bandwidth to the media, enabling greater HDD:SSD ratios to be used.
Drive layout diagram: 12-bay node (12+1) with one NVMe device at a 12:1 ratio; 36-bay node (36+2) with two NVMe devices at an 18:1 ratio.
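In FileStore-era Ceph, journal placement like this was typically provisioned with ceph-deploy; a minimal sketch (hostnames, data disks, and NVMe partitions are placeholders, not from the deck):

    # each OSD gets a data HDD plus a journal partition on the shared NVMe device
    ceph-deploy osd create osd-node1:/dev/sdb:/dev/nvme0n1p1
    ceph-deploy osd create osd-node1:/dev/sdc:/dev/nvme0n1p2
    # ...repeat for the remaining data drives to reach the 12:1 or 18:1 ratio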
Chart: Sequential Read Throughput per Server (3x replication, librados). Y-axis: MB/sec, 0-2500; X-axis: object size in KB (4, 64, 4096). Configurations compared: 12+1, 18+1, 18+0, 36+2, 36+0 on 10G+10G networks, and 36+2, 60+12, 72+0 on a shared 40G network.
Chart: Sequential Write Throughput per Server (3x replication, librados). Y-axis: MB/sec, 0-700; X-axis: object size in KB (4, 64, 4096). Same configurations as the read chart above.
Chart: Sequential Read Throughput per Server (3+2 erasure coding, librados). Y-axis: MB/sec, 0-1800; X-axis: object size in KB (4, 64, 4096). Configurations compared: 12+1, 12+0, 18+1, 18+0, 36+2, 36+0 on 10G+10G networks, and 36+2, 60+12, 72+0 on a shared 40G network.
Chart: Sequential Write Throughput per Server (3+2 erasure coding, librados). Y-axis: MB/sec, 0-1200; X-axis: object size in KB (4, 64, 4096). Same configurations as the 3+2 EC read chart.
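Librados sequential throughput of this kind is commonly measured with rados bench; a minimal sketch (pool name, runtime, object size, and thread count are placeholders, and not necessarily the exact methodology used for these charts):

    # 4 MB sequential writes, keeping the objects for the read pass
    rados bench -p cephbench 60 write -b 4194304 -t 16 --no-cleanup
    # sequential reads of the objects written above
    rados bench -p cephbench 60 seq -t 16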
Red Hat Ceph Reference Architecture
http://www.redhat.com/en/resources/red-hat-ceph-storage-clusters-supermicro-storage-servers
Proof of Concept - Objective
The Ceph Performance Bingo Card (workload rows vs. scale columns)

Usable Capacity:                                      XS - 100TB | S - 500TB | M - 1PB | L - 2PB
IOPS-Optimized (3x replicated data, tiered over EC):       X     |     X     |    X    |    X
Throughput-Optimized* (3x replicated data):                X     |     X     |    X    |    X
Capacity-Optimized* (6+2 EC protected data):               X     |     X     |    X    |    X
Ceph Workload Positioning

IOPS-Optimized (3x replicated data, tiered over EC): planned for Winter 2015.

Throughput-Optimized* (3x replicated data; 12x 4TB + 1x NVMe per node):
  XS - 100TB: 6 nodes; Read 5,300 MB/s, Write 1,400 MB/s
  S - 500TB: 32 nodes; Read 28,000 MB/s, Write 9,500 MB/s
  M - 1PB: 63 nodes; Read 55,000 MB/s, Write 19,000 MB/s
  L - 2PB: 125 nodes; Read 110,000 MB/s, Write 37,000 MB/s

Capacity-Optimized* (6+2 EC protected data):
  S - 500TB: 10 nodes (12x 6TB); Read 8,000 MB/s, Write 2,000 MB/s
  M - 1PB: 8 nodes (36x 6TB); Read 7,000 MB/s, Write 3,400 MB/s
  L - 2PB: 13 nodes (72x 6TB); Read 11,000 MB/s, Write 5,000 MB/s

* Projected performance based on node performance in a 10 node cluster
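As a back-of-envelope check on the throughput-optimized scale points (an arithmetic sketch, not from the slide): each node carries 12 x 4 TB = 48 TB raw, so at 3x replication 6, 32, 63, and 125 nodes yield roughly 96, 512, 1,008, and 2,000 TB usable, lining up with the XS/S/M/L targets of 100 TB, 500 TB, 1 PB, and 2 PB.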
Price/Performance Comparisons
© Supermicro 2015
X10 SuperStorage: X10 Optimized Configurations for Ceph
X10 and X9 Platforms for Ceph

Monitor nodes:
  X10: SSG-6018R-MON2 | Dual Intel Xeon E5-2630 v3, 64GB | 1x 800GB PCI-flash / NVMe
  X9: SSG-6017R-MON1 | Dual Intel Xeon E5-2630 v2, 64GB | NA

OSD nodes (X10 model | CPU/Mem | drive config | X9 counterpart | CPU/Mem | drive config):
  SSG-F618H-OSD288P | Single Intel Xeon E5-2620 v3, 64GB | (12+1) 12x 6TB HDD + 1x NVMe | NA | NA | NA
  SSG-6028R-OSD072 | Single Intel Xeon E5-2620 v3, 64GB | (12+0) 12x 6TB HDD | NA | NA | NA
  SSG-6028R-OSD072P | Single Intel Xeon E5-2620 v3, 64GB | (12+1) 12x 6TB HDD + 1x NVMe | SSG-6027R-OSD040H | Single Intel Xeon E5-2630 v2, 64GB | (10+2) 10x 4TB HDD + 2x SSD
  SSG-6048R-OSD216 | Dual Intel Xeon E5-2630 v3, 128GB | (36+0) 36x 6TB HDD | NA | NA | NA
  SSG-6048R-OSD216P | Dual Intel Xeon E5-2630 v3, 128GB | (36+2) 36x 6TB HDD + 2x NVMe | SSG-6047R-OSD120H | Dual Intel Xeon E5-2630 v2, 128GB | (30+6) 30x 4TB HDD + 6x SSD
  SSG-6048R-OSD432 | Dual Intel Xeon E5-2690 v3, 256GB | (72+0) 72x 6TB HDD | NA | NA | NA
  SSG-6048R-OSD360P | Dual Intel Xeon E5-2690 v3, 256GB | (60+12) 60x 6TB HDD + 12x SSD | SSG-6047R-OSD320H | Dual Intel Xeon E5-2670 v2 (E5-2697 recommended), 128GB | (60+12) 60x 4TB HDD + 12x SSD
Conclusion
• The single-CPU 12+1 config appears to be optimal for throughput-optimized clusters.
• SSD write journaling helps greatly. PCIe flash allows greater HDD ratios for both throughput-optimized (replication) and capacity-optimized (EC) configs.
• A lot more work is still needed:
• Identify an I/O-optimized reference architecture
• Study all-flash and tiered performance
• Understand OSD performance and the implications of non-uniform memory access (NUMA) systems
• Evaluate processor-core-to-OSD ratios with and without hyper-threading
© Supermicro 2015
Thank You
AhmadAli@supermicro.com
X10 Ceph Node Power Consumption*

Model (drive config): GB per Watt | AC input Watts @240V | Total BTU/hour | VA
12-Bay:
  SSG-6028R-OSD072 (12+0): 189 | 374 | 1279 | 390
  SSG-6028R-OSD072P (12+1): 187 | 379 | 1294 | 395
36-Bay:
  SSG-6048R-OSD216 (36+0): 261 | 812 | 2773 | 846
  SSG-6048R-OSD216P (36+2): 250 | 850 | 2903 | 886
72-Bay:
  SSG-6048R-OSD432 (72+0): 262 | 1621 | 5533 | 1688
  SSG-6048R-OSD360P (60+12): 228 | 1555 | 5309 | 1620

* Paper estimate based on maximum loading of all components; real world is typically lower
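The GB per Watt figures appear to be roughly raw capacity divided by AC input power (an arithmetic sketch, not from the slide): the 72+0 node holds 72 x 6 TB = 432 TB, about 432,000 GB, and 432,000 GB / 1,621 W ≈ 266 GB/W, close to the 262 shown; the small gap likely reflects formatted drive capacity.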
Rack Power / Density Guidance
42U rack-level sizing projected from OSD node paper power budgets.

Nodes / rack units / HDDs per rack at each power budget:

SSG-F618H-OSD288P (1U/12-bay nodes): 5 kVA: 12 / 13 / 144; 8.5 kVA: 20 / 22 / 240; 10 kVA: 24 / 26 / 288; 16 kVA: 36 / 40 / 432
SSG-6028R-OSD072 and SSG-6028R-OSD072P (2U/12-bay nodes): 5 kVA: 13 / 26 / 156; 8.5 kVA: 20 / 40 / 240; 10 kVA and 16 kVA: TBD
SSG-6048R-OSD216 and SSG-6048R-OSD216P (4U/36-bay nodes): 5 kVA: 5 / 20 / 180; 8.5 kVA: 9 / 36 / 324; 10 kVA and 16 kVA: TBD
SSG-6048R-OSD432 (4U/72-bay nodes): 5 kVA: 3 / 12 / 216; 8.5 kVA: 5 / 20 / 360; 10 kVA: 6 / 24 / 432; 16 kVA: 9 / 36 / 648
SSG-6048R-OSD360P (4U/72-bay nodes): 5 kVA: 3 / 12 / 180; 8.5 kVA: 5 / 20 / 300; 10 kVA: 6 / 24 / 360; 16 kVA: 9 / 36 / 540

Corresponding rack SKUs: SRS-42E112-CEPH-02, SRS-42E112-CEPH-03, SRS-42E136-CEPH-02, SRS-42E136-CEPH-03, SRS-42E172-CEPH-02, SRS-42E172-CEPH-03, SRS-14E412-CEPH-01, SRS-42E412-CEPH-01 (remaining configurations TBD)
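As an arithmetic sketch of how these cells are likely derived (not stated on the slide): SSG-6048R-OSD432 draws roughly 1,688 VA per node, so a 5 kVA budget fits about 3 nodes (12 RU, 3 x 72 = 216 HDDs) and a 16 kVA budget fits about 9 nodes (36 RU, 648 HDDs), matching the table above.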
Editor's notes
1. All networks are 10G. Clients and monitors only access the public network. OSD nodes have an 18:1 ratio: 36x HDD + 2x PCI-E SSDs.
2. Clients and monitors only access the public network. All networks are 10G. OSD nodes have a 5:1 ratio: 60x HDD + 12x SSDs (72x 3.5" bays per node).