CEPH AT WORK
IN BLOOMBERG
Object Store, RBD and OpenStack
January 19, 2016
By: Chris Jones & Chris Morgan
BLOOMBERG
2
30 Years in under 30 Seconds
● Subscriber-based financial provider (Bloomberg Terminal)
● Online, TV, print, real-time streaming information
● Offices and customers in every major financial market and institution worldwide
BLOOMBERG
3
Primary product - Information
● Bloomberg Terminal
− Approximately 60,000 features/functions; for example, the ability to track oil tankers in real time via satellite feeds
− Note: Exact numbers are not specified. Contact media relations for specifics and other important information.
CLOUD INFRASTRUCTURE
4
CLOUD INFRASTRUCTURE GROUP
5
Primary customers
– Developers
– Product Groups
● Many different development groups throughout our organization
● Currently about 3,000 R&D developers
● Every one of them wants and needs resources
CLOUD INFRASTRUCTURE GROUP
6
Resource Challenges
● Developers
− Development
− Testing
− Automation (Cattle vs. Pets)
● Organizations
− POC
− Products in production
− Automation
● Security/Networking
− Compliance
USER BASE (EXAMPLES)
7
Resources and Use cases
● Multiple Data Centers
− Each DC contains *many* Network Tiers, which include a DMZ for public-facing Bloomberg assets
− There is at least one Ceph/OpenStack cluster per Network Tier
● Developer Community Supported
− Public-facing Bloomberg products
− Machine learning backend for smart apps
− Compliance-based resources
− Use cases continue to climb as Devs need more storage and compute capacity
INFRASTRUCTURE
8
USED IN BLOOMBERG
9
● Ceph – RGW (Object Store)
● Ceph – Block/Volume
● OpenStack
− Different flavors of compute
− Ephemeral storage
● Object Store is becoming one of the most popular items
● OpenStack compute with Ceph-backed block store volumes is very popular
● We introduced ephemeral compute storage
SUPER HYPER-CONVERGED STACK
10
On EVERY Network Tier
SUPER HYPER-CONVERGED STACK
11
(Original) Converged Architecture Rack Layout
● 3 Head Nodes (Controller Nodes)
− Ceph Monitor
− Ceph OSD
− OpenStack Controllers (All of them!)
− HAProxy
● 1 Bootstrap Node
− Cobbler (PXE Boot)
− Repos
− Chef Server
− Rally/Tempest
● Remaining Nodes
− Nova Compute
− Ceph OSDs
− RGW – Apache
● Ubuntu
● Shared spine with Hadoop resources
[Rack diagram (sliced view of stack): bootstrap node at the top, with the remaining stack of Compute/Ceph OSD/RGW/Apache nodes below]
NEW POD ARCHITECTURE
12
[Diagram: one OpenStack POD behind its TOR (HAProxy, OS-Nova, OS-Rabbit, OS-DB) alongside a Ceph POD behind its TOR (3x Ceph Mon, 3x Ceph OSD – RBD only), plus Bootstrap, Monitoring and Ephemeral compute nodes]
● Ephemeral – Fast/Dangerous: host aggregates & flavors, not Ceph backed
● A number of large providers have taken similar approaches
Note: Illustrative only – Not representative
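The "host aggregates & flavors" note above is how ephemeral (non-Ceph-backed) compute gets targeted. A minimal, hypothetical sketch using python-novaclient (credentials, host names, and the 'localstorage' key are made up; it assumes the AggregateInstanceExtraSpecsFilter scheduler filter is enabled):

```python
# Hypothetical sketch: steer "ephemeral" flavors onto non-Ceph-backed hypervisors
# using a host aggregate + flavor extra specs (python-novaclient; all names,
# credentials, and hosts are illustrative).
from novaclient import client

nova = client.Client('2', 'admin', 'ADMIN_PASSWORD', 'admin',
                     'http://keystone.example.com:5000/v2.0')

# Group the hypervisors that carry local (RAID 0 / LVM) storage into an aggregate.
agg = nova.aggregates.create('ephemeral-local', None)
nova.aggregates.add_host(agg, 'compute-ephemeral-01')
nova.aggregates.set_metadata(agg, {'localstorage': 'true'})

# Flavor whose extra specs match the aggregate metadata; with the
# AggregateInstanceExtraSpecsFilter enabled, instances of this flavor
# land only on the ephemeral (not Ceph backed) hosts.
flavor = nova.flavors.create('m1.ephemeral', ram=8192, vcpus=4, disk=200)
flavor.set_keys({'aggregate_instance_extra_specs:localstorage': 'true'})
```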
POD ARCHITECTURE (OPENSTACK/CEPH)
13
[Diagram: one OpenStack POD behind its TOR (Ceph Block, OS-Nova, OS-Rabbit, OS-DB) alongside two Ceph PODs, each behind its own TOR with 3x Ceph Mon and 3x Ceph OSD]
● Scale and re-provision PODs as needed
● 3 PODs per rack
● A number of large providers have taken similar approaches
Note: Illustrative only – Not representative
EPHEMERAL VS. CEPH BLOCK STORAGE
14
Numbers will vary in different environments. Illustrations are simplified.
[Diagram: Ceph-backed vs. ephemeral (local) storage data paths]
Ephemeral is a new feature option added to address high-IOPS applications
EPHEMERAL VS. CEPH BLOCK STORAGE
15
Numbers will vary in different environments. Illustrations are simplified.
Ceph – Advantages
● All data is replicated at least 3 ways across the cluster
● Ceph RBD volumes can be created, attached and detached from any hypervisor
● Very fast provisioning using COW (copy-on-write) images
● Allows easy instance re-launch in the event of hypervisor failure
● High read performance
Ephemeral – Advantages
● Offers read/write speeds that can be 3–4 times faster than Ceph, with lower latency
● Can provide fairly large volumes cheaply
Ceph – Disadvantages
● All writes must be acknowledged by multiple nodes before being considered committed (a tradeoff for reliability)
● Higher latency because Ceph is network-based rather than local
Ephemeral – Disadvantages
● Trades data integrity for speed: if one drive in a RAID 0 set fails, all data on that node is lost
● May be difficult to add more capacity (depends on the type of RAID)
● Running in JBOD/LVM mode without RAID, performance was not as good as Ceph's
● Less important: with RAID, your drives need to be the same size or you lose capacity
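One of the Ceph advantages above, fast provisioning via copy-on-write images, can be exercised directly through the RBD Python bindings. A minimal sketch, assuming python-rados/python-rbd are installed and a 'volumes' pool exists (pool and image names are hypothetical):

```python
# Minimal sketch of COW cloning with the RBD Python bindings.
# Assumes /etc/ceph/ceph.conf, a keyring with access, and a 'volumes' pool.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')

rbd_inst = rbd.RBD()
rbd_inst.create(ioctx, 'base-image', 10 * 1024**3,          # 10 GiB parent image
                old_format=False, features=rbd.RBD_FEATURE_LAYERING)

# Snapshot the parent and protect it so it can be cloned.
img = rbd.Image(ioctx, 'base-image')
img.create_snap('golden')
img.protect_snap('golden')
img.close()

# The clone shares unmodified blocks with the parent (copy-on-write),
# so provisioning a new volume is nearly instantaneous.
rbd_inst.clone(ioctx, 'base-image', 'golden', ioctx, 'instance-disk-0001',
               features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()
```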
EPHEMERAL VS. CEPH BLOCK STORAGE
16
Numbers will vary in different environments. Illustrations are simplified.
Metric                             EPHEMERAL    CEPH
Block write bandwidth (MB/s)        1,094.02    642.15
Block read bandwidth (MB/s)         1,826.43    639.47
Character read bandwidth (MB/s)         4.93      4.31
Character write bandwidth (MB/s)        0.83      0.75
Block write latency (ms)               9.502    37.096
Block read latency (ms)                8.121     4.941
Character read latency (ms)            2.395     3.322
Character write latency (ms)          11.052    13.587
Note: Ephemeral in JBOD/LVM mode is not as fast as Ceph
Numbers can also increase with additional tuning and different devices
CHALLENGES – LESSONS LEARNED
17
Network
● It’s all about the network.
− Changed MTU from 1500 to 9000 on certain interfaces (Float interface – Storage interface)
− Hardware Load Balancers – keep an eye on performance
● Hardware
− Moving to more commodity-driven hardware
− All flash storage in compute cluster (high cost, good for block and ephemeral)
Costs
● Storage costs are very high in a converged compute cluster for Object Store
Analytics
● Need to know how the cluster is being used
● Need to know if the TPS meets the SLA
● Test directly against nodes, then layer in network components until you can verify all choke points in the data flow path
● Monitor and test always
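The "test directly against nodes, then layer in network components" approach can be as simple as timing the same request at each hop. A rough sketch (endpoints are hypothetical placeholders; real load tests use the tools listed in the Testing slide):

```python
# Rough latency comparison: hit an RGW node directly, then go through the
# hardware load balancer, to spot which layer adds latency.
import time
import requests

endpoints = {
    'direct-rgw': 'http://rgw-node-01:8080/',          # hypothetical node address
    'via-lb':     'https://objectstore.example.com/',  # hypothetical LB VIP
}

for name, url in endpoints.items():
    samples = []
    for _ in range(20):
        start = time.time()
        requests.get(url, timeout=5)
        samples.append((time.time() - start) * 1000.0)
    samples.sort()
    p50 = samples[len(samples) // 2]
    p95 = samples[int(len(samples) * 0.95) - 1]
    print('%-12s p50=%.1f ms  p95=%.1f ms' % (name, p50, p95))
```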
NEW CEPH OBJECT STORE
18
OBJECT STORE STACK (RACK CONFIG)
19
Red Hat 7.1
● 1 TOR and 1 Rack Mgt Node
● 3 1U nodes (Mon, RGW, Util)
● 17 2U Ceph OSD nodes
● 2x or 3x Replication depending on need (3x default)
● Secondary RGW (may coexist with OSD Node)
● 10g Cluster interface
● 10g Public interface
● 1 IPMI interface
● OSD Nodes (high-density server nodes)
− 6TB HDD x 12 – journal partitions on SSD
− No RAID1 OS drives – instead we partitioned off a small amount of SSD1 for OS and swap, with the remainder of SSD1 used for some journals and SSD2 used for the remaining journals
− Failure domain is a node
[Rack diagram: TOR/IPMI switch at the top, 3x 1U nodes, converged storage nodes at 2U each]
OBJECT STORE STACK (ARCHITECTURE)
20
1 Mon/RGW node per rack
[Architecture diagram: leaf–spine network – storage nodes connect to a TOR leaf switch in each rack; leaves uplink to redundant spines and hardware load balancers (LB)]
OBJECT STORE STACK
21
Standard configuration (Archive Cluster)
● Min of 3 Racks = Cluster
● OS – Red Hat 7.1
● Cluster Network: Bonded 10g or higher depending on size of cluster
● Public Network: Bonded 10g for RGW interfaces
● 1 Ceph Mon node per rack, except when there are more than 3 racks; we need to keep an odd number of Mons, so some racks may not have one. On larger clusters we try to keep racks & Mons in different power zones
● We have developed a healthy “Pain” tolerance. We mainly see drive failures and some node failures.
● Min 1 RGW (dedicated node) per rack (may want more)
● Hardware load balancers to RGWs, with redundancy
● Erasure-coded pools (no cache tiers at present – testing). We also use a host profile with 8/3 (k/m)
● Near-full and full ratios are .75/.85 respectively
● Index sharding
● Federated (regions/zones)
● All server nodes, no JBOD expansions
● S3 only at present but we do have a few requests for Swift
● Fully AUTOMATED – Chef cookbooks to configure and manage cluster (some Ansible)
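Since the object store is S3-only today, clients typically reach the RGWs through the hardware load balancers with standard S3 tooling. A minimal sketch using boto3 (the endpoint and credentials are hypothetical):

```python
# Minimal S3 round trip against a Ceph RGW endpoint (hypothetical endpoint/keys).
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://objectstore.example.com',   # VIP on the hardware LBs
    aws_access_key_id='RGW_ACCESS_KEY',
    aws_secret_access_key='RGW_SECRET_KEY',
)

s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'hello from RGW')
print(s3.get_object(Bucket='demo-bucket', Key='hello.txt')['Body'].read())
```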
AUTOMATION
22
All of what we do only happens because of automation
● Company policy – Chef
● Cloud Infrastructure Group uses Chef and Ansible. We use Ansible for
orchestration and maintenance
● Bloomberg Github: https://github.com/bloomberg/bcpc
● Ceph specific options
− Ceph Chef: https://github.com/ceph/ceph-chef
− Bloomberg Object Store: https://github.com/bloomberg/chef-bcs
− Ceph Deploy: https://github.com/ceph/ceph-deploy
− Ceph Ansible: https://github.com/ceph/ceph-ansible
● Our bootstrap server is our Chef server per cluster
TESTING
23
Testing is critical. We use different strategies for the different parts of
OpenStack and Ceph we test
● OpenStack
− Tempest – We currently only use this for patches we make. We plan to use this more in our
DevOps pipeline
− Rally – Can’t do distributed testing but we use it to test bottlenecks in OpenStack itself
● Ceph
− RADOS Bench
− COSBench – Going to try this with CBT
− CBT – Ceph Benchmark Testing
− Bonnie++
− FIO
● Ceph – RGW
− JMeter – Need to test load at scale. It takes a cloud to test a cloud :)
● A lot of the time you find it’s your network, load balancers, etc.
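For quick sanity checks below the RGW layer (the real numbers come from RADOS Bench, CBT, Bonnie++ and FIO), the RADOS Python bindings are handy. A small sketch, assuming a disposable 'bench' pool exists:

```python
# Tiny write smoke test with python-rados; not a substitute for RADOS Bench/CBT.
# Assumes /etc/ceph/ceph.conf and a throwaway 'bench' pool.
import time
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('bench')

payload = b'x' * (4 * 1024 * 1024)        # 4 MiB objects
count = 64

start = time.time()
for i in range(count):
    ioctx.write_full('bench-obj-%d' % i, payload)
elapsed = time.time() - start
print('wrote %d MiB in %.2fs (%.1f MB/s)' % (count * 4, elapsed, count * 4 / elapsed))

# Clean up the test objects.
for i in range(count):
    ioctx.remove_object('bench-obj-%d' % i)
ioctx.close()
cluster.shutdown()
```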
CEPH USE CASE DEMAND – GROWING!
24
[Diagram: Ceph at the center of growing use-case demand – Object, Immutable, OpenStack, *Real-time, *Big Data?]
*Possible use cases if performance is enhanced
WHAT’S NEXT?
25
Continue to evolve our POD architecture
● OpenStack
− Work on performance improvements and track stats on usage for departments
− Better monitoring
− LBaaS, Neutron
● Containers and PaaS
− We’re currently evaluating PaaS software and container strategies
● Better DevOps Pipelining
− GO CD and/or Jenkins improved strategies
− Continue to enhance automation and re-provisioning
− Add testing to automation
● Ceph
− New Block Storage Cluster
− Super Cluster design
− Performance improvements – testing Jewel
− RGW Multi-Master (multi-sync) between datacenters
− Enhanced security – encryption at rest (can already do) but with better key management
− NVMe for Journals and maybe for high IOP block devices
− Cache Tier (need validation tests)
THANK YOU
ADDITIONAL RESOURCES
27
● Chris Jones: cjones303@bloomberg.net
− Github: cloudm2
● Chris Morgan: cmorgan2@bloomberg.net
− Github: mihalis68
Cookbooks:
● BCC: https://github.com/bloomberg/bcpc
− Current repo for Bloomberg’s Converged OpenStack and Ceph cluster
● BCS: https://github.com/bloomberg/chef-bcs
● Ceph-Chef: https://github.com/ceph/ceph-chef
The last two repos make up the Ceph Object Store and full Ceph Chef
Cookbooks.