ARM - Ceph on ARM Update

Speakers:
Jeff Chu (Director of Enterprise Solutions, ARM)
Kan Yan Rong (Technical Expert in Storage and Application Technology, WDC/SanDisk)

Overview:
Jeff from ARM will provide a brief update on the activities furthering Ceph on ARM, including some recent progress from ARM as well as some increased community activity. After that, Chris and Yan from Western Digital/SanDisk will present Ceph Block Performance on Cavium ARM and SATA SSDs.

ARM - Ceph on ARM Update

  1. CEPH on ARM Update. Jeff Chu, Director Enterprise Solutions, ARM. Ceph Days Taipei, July 19, 2017
  2. Scalability Across the Network from Edge to Cloud
     • Not all workloads are the same
     • Differentiated solutions – right-size compute with scalable architecture
     • Choice in the marketplace using open standards for interoperability
     [Diagram: compute, acceleration, storage and packet flows placed across IoT devices, edge, access and the "data center"]
  3. Scalability – many meanings
     • Integration for cost reduction
       - Integrated 10/40 GigE
       - Integrated SATA
       - Storage acceleration
       - Security acceleration
     • Efficiency
       - Power
       - Density
       - System cost
     • Choices in systems on chip (SoCs)
  4. Storage is key for high-growth server workloads (Source: IDC workload study 2015)
     • ARM is initially focusing ecosystem enablement on high-volume/high-growth segments
     • ARM target segments: 72% of the market in 2020
     • Storage – 13.4% of workloads; Networking – 8.1% of workloads; Web serving and content – ~20% of workloads
     • High Performance Computing - $11B market
     [Chart: 2020 server workload forecast, % of total servers shipped by workload]
  5. ARM-based storage solutions
     • Storage appliances
     • Rack-level solutions
     • Cloud storage
     • 30-50% of the power
     • 2x-7x lower cost
     • Greater density
     • Commercial support
  6. www.linaro.org
  7. Linaro segment groups
     • Digital Home - LHG: OSS for the digital home; W3C EME secure media playback for RDK and Android; middleware and user-space stack; DLNA, CVP-2, HTML5; LSK kernel version for STB/IPTV; common media frameworks
     • Mobile - LMG: OSS for mobile devices; Android 64-bit; big.LITTLE power management; performance and power optimizations (ART); memory optimization; Project Ara; supports members, ARM and Google Android development
     • Networking - LNG: OSS for networking; real-time support; virtualization; core isolation; OpenDataPlane (ODP); big-endian legacy support; ODP cross-platform support for SoC accelerators
     • Enterprise - LEG: OSS for ARM servers; ATF/UEFI/ACPI; KVM/Xen; ARMv8 optimization; OpenJDK, Hadoop, OpenStack, Docker, DPDK; reduces fragmentation and cost, accelerates time to market
  8. About the Developer Cloud
     The Developer Cloud is the combination of ARM SoC vendors' server hardware platforms, emerging cloud technologies, and many Linaro member-driven projects, including server-class boot architecture, kernel and virtualization. These projects have been under development for several years, and Linaro has been delivering them in a limited colocation data-center facility that has provided bare-metal access to ARM servers for key developers over the last year. The Developer Cloud is based on OpenStack, leveraging both Debian and CentOS as the underlying cloud OS infrastructure. It will use ARM server platforms from Linaro members AMD, Cavium, HPE, Huawei and Qualcomm, and will expand with demand and as new server platforms come to market. www.linaro.cloud
  9. Developer Cloud Capacity (16.06 / 16.12 / % growth / 17.07)
     • # of participating members: 5 / 5 / 40% / 7
     • # of member engineers on project: 6 / 10 / 20% / 12
     • # of locations: 2 / 3 / 100% / 6 (Q3)
     • # of nodes: Austin 26, Cambridge 23 / Austin 38, Cambridge 20, China pending 15 / 80% / Austin 45, Cambridge 45, China 15
  10. Developer Cloud Demand Highlights
     • 457 requests for access in 12 months (11 this month, within 3 days)
     • All major Linux vendors have applied for access
     • 50+ requests from Mainland China
     • 100+ requests from higher education: Chinese Academy of Sciences (中国科学院), Georgia Institute of Technology, University of Cambridge, ...
     • Notable current activities: Linux Foundation - Hadoop; Red Hat - Ceph; MongoDB Inc - MongoDB; Debian - OpenStack for ARM64; CentOS - OpenStack for ARM64; FreeBSD - FreeBSD porting; CERN - HPC; OpenHPC; Facebook - HHVM
  11. Ceph on OpenStack
     Linaro provides three developer clouds, located in China, Europe and North America. In all regions, Ceph is used for all storage requirements for OpenStack. The Linaro Developer Clouds have been in production since Q2 2016, supporting multiple organizations and open source projects.
     • Linaro is actively developing and porting Ceph on ARM
     • Linaro is actively contributing upstream to Ceph for the ARM64 architecture
     • Linaro has full QA/CI test suites for Ceph on ARM
     • Linaro will be the reference CI for Ceph on ARM through the hosting of ARM64 systems within the Linaro Lab
     • Linaro leverages Ceph as the sole storage for 3 OpenStack deployments to support the ARM ecosystem
     • Linaro releases full builds of Ceph through the Linaro reference platform
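     As a concrete illustration of what "Ceph as the sole storage for OpenStack" looks like from an operator's side, the minimal sketch below uses the python-rados bindings that ship with Ceph to connect to such a cluster and report capacity and the usual OpenStack pools. The config path and the pool names ('images', 'volumes', 'vms') are assumptions for illustration, not details taken from the slides.

```python
# Minimal sketch: inspect the Ceph cluster backing an OpenStack deployment.
# Assumptions: python-rados installed, admin keyring available, default conf path.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed config location
cluster.connect()

# Overall capacity, as an operator might check before/after adding OSD nodes.
stats = cluster.get_cluster_stats()
print("used: %d kB / total: %d kB" % (stats['kb_used'], stats['kb']))

# Typical OpenStack pools (names are deployment-specific assumptions).
for pool in ('images', 'volumes', 'vms'):
    print(pool, "exists:", cluster.pool_exists(pool))

cluster.shutdown()
```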
  12. https://www.linaro.org/projects/reference-platforms/
  13. Linaro Reference Architecture on AArch64 (Danube Reference Stack)
     • Jump server
     • Control node: Nova, Neutron, Glance, Cinder, Heat, Horizon, Ceph-Monitor
     • Network node: Openvswitch-agent, DHCP-agent, Metadata-agent
     • Compute nodes (x2): Nova-compute, libvirt, QEMU, Openvswitch-agent, Ceph-OSD
     • Networks: tenant, control and public; all roles run on AArch64 nodes
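     The reference architecture co-locates Ceph-OSD with Nova-compute on each compute node. Purely as an illustrative sketch (not part of the Danube reference stack itself), the snippet below checks via systemd that those co-located services are running on a node; unit names differ between distributions (e.g. "openstack-nova-compute" on CentOS), so the ones used here are assumptions for a Debian-style install.

```python
# Sketch: verify the co-located services on one AArch64 compute node via systemd.
# Unit names below are assumptions; adjust for your distribution and OSD ids.
import subprocess

SERVICES = ['nova-compute', 'libvirtd', 'neutron-openvswitch-agent', 'ceph-osd@0']

def is_active(unit):
    # `systemctl is-active` exits 0 only when the unit is currently active.
    return subprocess.run(['systemctl', 'is-active', '--quiet', unit]).returncode == 0

for unit in SERVICES:
    print(f"{unit}: {'active' if is_active(unit) else 'not active'}")
```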
  14. ARM Ceph Patches and Optimizations
     • Recent ARM patches relevant to Ceph: kernel inline copy_user patches, kernel exception handler patches
     • Ceph configuration tuning
     • Ceph CRC32C optimizations
     • Early evaluations
       - The performance effects are not consistent across different test scenarios: positive benefit for 4KB sequential writes (+6.3%), negative benefit for 128KB/4MB sequential writes (up to -17%)
       - With a large block size of 4MB, and for reads, the CRC32C optimization and Ceph configuration tuning show the most benefit when more disks per node are used: up to +44% with 4 disks; no apparent benefit, to slight performance degradation (up to -10%), with only 1 or 2 disks
       - With more disks the benefits of the CRC32C optimization become more pronounced; our system configurations (specifically, the number of rotational drives per node) may not be able to fully demonstrate the effect of these optimizations
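     To make the block-size dependence above concrete, here is a rough sketch of a sequential-write sweep over 4KB, 128KB and 4MB blocks against an RBD image, using the python-rbd bindings. It only illustrates the shape of such a measurement; the pool and image names are assumptions, and this is not the actual test harness behind the numbers quoted above.

```python
# Rough sketch: sequential-write throughput at several block sizes against RBD.
# Assumptions: pool "rbd" exists, python-rbd/python-rados installed, throwaway image.
import time
import rados
import rbd

POOL, IMAGE, IMAGE_SIZE = 'rbd', 'crc32c-test', 1 << 30   # assumed names, 1 GiB image

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx(POOL)
rbd.RBD().create(ioctx, IMAGE, IMAGE_SIZE)
img = rbd.Image(ioctx, IMAGE)

for bs in (4 * 1024, 128 * 1024, 4 * 1024 * 1024):        # 4 KB, 128 KB, 4 MB
    data, offset, total = b'\0' * bs, 0, 64 * 1024 * 1024  # write 64 MB per block size
    start = time.monotonic()
    while offset < total:
        img.write(data, offset)                             # sequential writes
        offset += bs
    mb_s = (total / (1 << 20)) / (time.monotonic() - start)
    print(f"bs={bs:>8} B  ~{mb_s:.1f} MB/s")

img.close()
rbd.RBD().remove(ioctx, IMAGE)
ioctx.close()
cluster.shutdown()
```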
  15. Ceph on ARM with SanDisk SSDs and ThunderX
     • Initial testing of SanDisk SSDs with ThunderX for 4k and 8k block sizes across different levels of replication and queue depth
     • Other reports cover MySQL and memory caching:
       - http://www.tiriasresearch.com/downloads/high-performance-mysql-database-using-thunderx/
       - http://www.tiriasresearch.com/downloads/high-performance-memory-caching-using-thunderx/
     • http://ceph.com/arm/
  16. Ceph on ARM with SanDisk SSDs and ThunderX
     • Initial testing of SanDisk SSDs with Cavium ThunderX servers based on ARMv8
     • 4k and 8k block sizes
     • Different levels of replication and queue depth
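     A common way to run this kind of sweep (4k/8k block sizes at several queue depths against an RBD image) is fio's rbd engine; the sketch below simply drives it from Python. It assumes fio was built with RBD support and that a test image named "fio-test" already exists in pool "rbd"; the replication level is a property of the pool and is set outside fio. This illustrates the test parameters named on the slide, not the actual SanDisk/Cavium test plan.

```python
# Sketch: sweep block size and queue depth with fio's rbd ioengine.
# Assumptions: fio built with rbd support, pool "rbd" and image "fio-test" exist.
import subprocess

def run_fio(bs, iodepth):
    cmd = [
        'fio', '--name=rbd-sweep', '--ioengine=rbd',
        '--clientname=admin', '--pool=rbd', '--rbdname=fio-test',  # assumed names
        '--rw=randwrite', f'--bs={bs}', f'--iodepth={iodepth}',
        '--time_based', '--runtime=60', '--output-format=json',
    ]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

for bs in ('4k', '8k'):
    for qd in (1, 8, 32, 128):
        print(f"--- bs={bs} iodepth={qd} ---")
        print(run_fio(bs, qd)[:200])  # show the start of fio's JSON summary
```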
  17. The trademarks featured in this presentation are registered and/or unregistered trademarks of ARM Limited (or its subsidiaries) in the EU and/or elsewhere. All rights reserved. All other marks featured may be trademarks of their respective owners. Copyright © 2017 ARM Limited
