Dell and CEPH 
Steve Smith: 
Steve_l_smith@dell.com 
@SteveSAtDell 
Paul Brook 
Paul_brook@dell.com 
Twitter @PaulBrookAtDell 
Ceph Day London 
October 22nd 2014
Agenda
• Why we are here – we sell Ceph support
• You need hardware to sit this on – here are some ideas
• Some best practices, shared with Ceph colleagues this year
• A concept (Research Data) – we would like your input
Dell Corporation
Dell is a certified reseller of Red Hat-Inktank 
Services, Support and Training. 
• Need to Access and buy Red Hat Services & Support? 
15+ Years of Red Hat and Dell 
• Red Hat 1-year /3-year subscription packages 
– Inktank Pre-Production subscription 
– Gold (24*7) Subscription 
• Red Hat Professional Services 
– Ceph Pro services Starter Pack 
– Additional days services options 
– Ceph Training from Red Hat 
Or… you can download Ceph for free
Dell Corporation 
Components Involved 
http://docs.openstack.org/training-guides/content/module001-ch004-openstack-architecture.html 
Dell Corporation
Dell OpenStack Cloud Solution 
(slide diagram; labels: "You Get", "Stuff")
Dell Corporation
Best Practices 
(well…….some) 
With acknowledgement and thanks to Kyle and Mark at Inktank
Dell Corporation
Planning your Ceph Implementation 
• Business Requirements 
– Budget considerations, organisational commitment 
– Replacing Enterprise SAN/NAS for cost saving 
– xaaS use cases for massive-scale, cost-effective storage 
– Avoid lock-in – use open source and industry standards 
– Steady-state vs. Spike data usage 
• Sizing requirements 
– What is the initial storage capacity? 
– What is the expected growth rate? 
• Workload requirements 
– Does the workload need high performance, or is it more capacity-focused?
– What are the IOPS/throughput requirements?
– What applications will be running on the Ceph cluster?
– What type of data will be stored? 
Dell Corporation
Architectural considerations – redundancy and replication
• Trade-off between cost and reliability (use-case dependent)
• How many node failures can be tolerated?
• In a multi-rack scenario, should a whole rack failure be tolerated? (see the configuration sketch below)
• Is there a need for multi-site data replication?
• Erasure coding (more capacity from the same raw disk, but more CPU load)
• Plan for redundancy of the monitor nodes – distribute them across fault zones
• 3 copies = 8 nines availability, i.e. less than 1 second of downtime per year
• Many, many things affect performance – in Ceph, above Ceph and below Ceph.
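As a hedged illustration of these replication choices (the pool name, rule name and ruleset id below are assumptions, not from the slides), a three-copy pool with rack-level failure domains could be set up roughly like this:

    # ceph.conf – cluster-wide replication defaults (illustrative values)
    [global]
        # keep three copies of every object; still serve I/O with one copy missing
        osd pool default size = 3
        osd pool default min size = 2

    # Create a CRUSH rule that places each replica in a different rack, then point a
    # pool at it (look up the assigned id with `ceph osd crush rule dump`)
    ceph osd crush rule create-simple replicated_rack default rack
    ceph osd pool set rbd crush_ruleset 1

This assumes racks are already declared as buckets in the CRUSH map; the rack-level rule is what lets the cluster survive the whole-rack failure discussed above.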
Dell Corporation
Understanding Your Workload 
Dell Corporation
CEPH Architecture Refresh 
Dell Corporation
Understanding Ceph (1) 
Dell Corporation
Understanding Ceph (2) 
Dell Corporation
Understanding The Storage Server 
Dell Corporation
Multi-Site Issues 
• Within a Ceph cluster, RADOS enforces strong consistency
• The writer process waits for the ACK, which comes only after the primary copy, the replicated copies and the journals have all been written
• On a WAN this can extend latencies unacceptably
• Alternatives:
• For S3/Swift systems, use federated gateways between Ceph clusters; replication between the gateways is eventually consistent
• For remote backup, use RBD with sync agents and incremental snapshots (see the sketch below)
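A minimal sketch of the incremental-snapshot approach, assuming the image rbd/vm-disk1 already exists on both the primary and the backup cluster (image, host and snapshot names are illustrative):

    # Seed the backup with everything up to the first snapshot
    rbd snap create rbd/vm-disk1@base
    rbd export-diff rbd/vm-disk1@base - | ssh backup-site rbd import-diff - rbd/vm-disk1

    # Later runs ship only the delta between two snapshots
    rbd snap create rbd/vm-disk1@daily-01
    rbd export-diff --from-snap base rbd/vm-disk1@daily-01 - | ssh backup-site rbd import-diff - rbd/vm-disk1

Because import-diff recreates the end snapshot on the backup side, each run leaves a consistent point-in-time copy without WAN latency sitting in the client write path.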
Dell Corporation
Recommended Storage Server Configurations 
Ceph and Inktank recommendations are a bit out of date.
• CPU – 1 core-GHz per OSD
– so a 2 x 8-core Intel Haswell 2.0GHz server could support 32 OSDs
– less for AMD
• Memory – 2GB per OSD
– Must be ECC
• Disk Controller – SAS or SATA without an expander for data and journal; RAID 1 for the operating system disks
• Data Disks – size doesn't matter! Rebuilds happen across hundreds of placement groups.
– 12 disks seems a good number
• Journal Disks – write-optimised SSDs (see the provisioning sketch below)
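As a hedged example of that layout (host, disk and journal-partition names are assumptions), each data disk could be turned into an OSD with its journal on a shared SSD, using the ceph-deploy syntax of the time:

    # One OSD per SATA data disk, journal on a dedicated partition of the write-optimised SSD
    ceph-deploy osd create storage-node1:sdb:/dev/sdm1
    ceph-deploy osd create storage-node1:sdc:/dev/sdm2

With twelve data disks per node, the SSD (or a pair of them) carries twelve small journal partitions; size them for a few seconds of write burst rather than for capacity.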
Dell Corporation
Intel Processors 
Dell Corporation
Memory Considerations 
(diagram: two CPU sockets, each with DIMM channels C0–C7)
• Always populate all channels – in groups of 8
• Anything less loses significant memory bandwidth
• Speed drops at 3 DPC (sometimes 2 DPC)
• Use dual-rank RDIMMs for maximum performance and expandability
• It is important to pin a process and its data to the same NUMA node (see the sketch below)
• But let OS processes float
• Or try Hyper-Threading
• A sensible memory size is now 64GB (8 x 8GB RDIMMs)
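One hedged way to do the NUMA pinning above is numactl; the node number and OSD id below are placeholders:

    # Inspect the node layout first to decide which OSDs go where
    numactl --hardware

    # Bind an OSD daemon's CPUs and memory allocations to NUMA node 0
    numactl --cpunodebind=0 --membind=0 /usr/bin/ceph-osd -i 12 --cluster ceph

In practice you would split the OSDs on a two-socket server between node 0 and node 1, matching each OSD to the socket that owns its HBA and NIC where possible.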
Dell Corporation
DreamObjects Hardware Specs (slide diagram, reconstructed)
• Storage node (x90): Dell PowerEdge R515 – 6-core AMD CPU, 32GB RAM, 2x 300GB SAS drives (OS), 12x 3TB SATA drives, 2x 10GbE, 1x 1GbE, IPMI
• Management node (x3): Dell PowerEdge R415 – 2x 1TB SATA, 1x 10GbE
• Load balancer (x2), RADOS gateway (x4) and monitor (M) roles also appear in the diagram
Dell Corporation
Ceph Gateway Server 
• Gateway does CRC32 and MD5 checksumming 
– Now included in Intel AVX2 on Haswell 
• 64GB memory (minimum sensible) 
• 2 separate 10GbE NICs, 1 for client comms, 1 for store/retrieve 
• Make sure you have enough file handles – the default is 100; you should start at 4096 (see the sketch below)
• Load balancing with multiple gateways
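A hedged sketch of raising the file-handle limit for the gateway process owner (the user name and values are assumptions; adjust for how radosgw is launched on your distribution):

    # /etc/security/limits.conf – persistent open-file limits for the radosgw user
    apache  soft  nofile  4096
    apache  hard  nofile  65536

    # or raise it in the shell/init script that starts radosgw
    ulimit -n 4096

Each client connection and each backing RADOS socket consumes a descriptor, so a busy gateway exhausts a low default almost immediately.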
Dell Corporation
Ceph Cluster Monitors 
• Best practice to deploy monitor role on dedicated hardware 
– Not resource intensive but critical – Stewards of the cluster 
– Using separate hardware ensures no contention for resources 
• Make sure monitor processes are never starved for resources 
– If running monitor process on shared hardware, fence off resources 
• Deploy an odd number of monitors (3 or 5) – see the example configuration below
– Need to have an odd number of monitors for quorum voting 
– Clusters < 200 nodes work well with 3 monitors 
– Larger clusters may benefit from 5 
– Main reason to go to 7 is to have redundancy in fault zones 
• Add redundancy to monitor nodes as appropriate 
– Make sure the monitor nodes are distributed across fault zones 
– Consider refactoring fault zones if needing more than 7 monitors 
– Build in redundant power, cooling, disk 
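For illustration, a three-monitor quorum spread across fault zones might be declared like this in ceph.conf (host names and addresses are assumptions):

    [global]
        # one monitor per rack / fault zone
        mon initial members = mon-rack1, mon-rack2, mon-rack3
        mon host = 10.0.1.10, 10.0.2.10, 10.0.3.10

Placing one monitor per rack (or per row or room at larger scale) means losing any single fault zone still leaves two of the three monitors, which is enough for quorum.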
Dell Corporation
Networking Overview 
• Plan for low latency and high bandwidth 
• Use 10GbE switches within the rack 
• Use 40GbE uplinks between racks in the datacentre 
• Use more bandwidth at the backend compared to the front end 
• Enable Jumbo frames 
• Replication is done by the storage nodes, not the client
• The client writes to the primary and its journal
• The primary writes to the replicas over the back-end network
• The back end also handles recovery and rebalancing (see the sketch below)
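A hedged example of splitting the two networks and enabling jumbo frames (subnets and interface names are placeholders; the switch ports must allow MTU 9000 too):

    # ceph.conf – client traffic on the public network, replication and recovery on the cluster network
    [global]
        public network = 192.168.10.0/24
        cluster network = 192.168.20.0/24

    # Jumbo frames on the back-end interface of each storage node
    ip link set dev eth1 mtu 9000

Giving the cluster network more bandwidth than the public one matches the fact that every client write is multiplied by the replica count on the back end.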
Dell Corporation
Potential Dell Server Hardware Choices 
• Rackable Storage Node 
– Dell PowerEdge R720XD or the new 13G R730/R730xd
• Bladed Storage Node
– Dell PowerEdge C8000XD (disk) and PowerEdge C8220 (CPU)
– 2x Xeon E5-2687 CPU, 128GB RAM
– 2x 400GB SSD drives (OS and optionally journals)
– 12x 3TB NL-SAS drives
– 2x 10GbE, 1x 1GbE, IPMI
• Monitor Node 
– Dell PowerEdge R415 
– 2x 1TB SATA 
– 1x 10GbE 
Dell Corporation 
Mixed Use Deployments 
• For simplicity, dedicate hardware to specific role 
– That may not always be practical (e.g., small clusters) 
– If needed, can combine multiple functions on same hardware 
• Multiple Ceph Roles (e.g., OSD+RGW, OSD+MDS, Mon+RGW) 
– Balance IO-intensive with CPU/memory intensive roles 
– If both roles are relatively light (e.g., Mon and RGW), they can be combined
• Multiple Applications (e.g., OSD+Compute, Mon+Horizon) 
– In OpenStack environment, may need to mix components 
– Follow same logic of balancing IO-intensive with CPU intensive 
Dell Corporation
Super-size CEPH 
• Lots of Disk space 
• CEPH Rules apply 
• Great for cold dark storage 
• Surprisingly popular with 
Customers 
• 3PB raw in a rack! 
R730/R730XD or R720/R720XD 
PowerVault JBOD 
Dell Corporation
Other Design Guidelines 
• Use simple components; don't buy more than you need
– Save money on RAID, redundant NICs and power supplies, and buy more disks
• Keep networks as flat as possible (east-west)
– VLANs don't scale
– Use software-defined networking for multi-tenancy in the cloud
• Design the fault zones carefully for NoSPoF 
–Rack 
–Row 
–Datacentre 
Dell Corporation
Research Data: 
Beta Slides 
Dell Corporation
Concept: Get started? 
Keep, Search, Collaborate, Publish
• Research Data & Publications
• Digital – Pre-Publication (Any Format?)
• Digital – Other (Any Format?)
Dell Corporation
Concept: Get started? 
Keep, Search, Collaborate, Publish
• Research Data & Publications
• Digital – Pre-Publication (Any Format?)
• Digital – Other (Any Format?)
Open questions: How to tag metadata? How to search? Data security? Which file types to store? How long to store? How to collaborate?
Dell Corporation
Holding a tin cup below a Niagara Falls of data!
Data keeps on 
coming &……. 
..coming……& 
coming……….. 
Has anyone else had this problem and already solved it?
Open source is the best protection for longevity. “Web 2.0/social has already solved the scale-storage problem.”
Dell Corporation
Solve problems one at a time 
(slide diagram – labels: OpenStack Layer (Access); CEPH Storage; Identity Management; Governance; Policy & Control; PUBLISH: existing publishing routes)
Dell Corporation
Solve problems one at a time 
(slide diagram – same labels as the previous slide, plus a “Start Here” marker: OpenStack Layer (Access); CEPH Storage; Identity Management; Governance; Policy & Control; PUBLISH: existing publishing routes)
Dell Corporation


Editor's Notes

  1. Welcome to a short overview of Ceph storage in Dell OpenStack-Powered Cloud Solutions. Ceph is a transformational storage technology available as free open source software. It is a universal storage solution that provides block, file and object storage from a scalable cluster built on top of standard utility server hardware. Dell has partnered with Inktank, the Ceph experts, to bring a validated Ceph storage solution to Dell cloud customers.
  2. Suggested notes (Paul): We sell Red Hat/Inktank support and training. If you want it or need it, we can help you get it.
  3. Not even the least bit complicated. But if we are positioning this outside the Ceph community, what is the best way? Cloud-scale, low-cost, flexible storage.
  4. “Executive Pitch”