
Mi-ROSS Reliable Object Storage System For Software Defined Storage and Cloud

1,029 views

Published on

OpenNebula TechDay Kuala Lumpur, MY, 17 Feb 2016

Published in: Software


  1. (2/23/16) Reliable Object Storage System For Software Defined Storage and Cloud
     Mi-ROSS
     Luke Jing Yuan, Mohd Bazli Ab Karim, Wong Ming Tat
     Storage Systems, Advanced Computing Lab
     Agenda
     • Motivations
     • Ceph?
     • Why Ceph as Backend? Some Use Cases
     • Some Little Annoyances
     • What's Mi-ROSS
     • Features
     • Demo???
  2. Motivations
     The Hype Cycle for Storage Technologies 2014 [figure]
  3. Motivations (cont'd)
     Consider the normal "branded" way:
     • Average cost per 100 TB (raw)? ~RM10M++
     • More features required? RMxk < +y < RMxxxk
     • More space required? Most likely only from the same vendor
     • Storage Network (SAN)? RMxxk < +z < RMxxxk
     Going commodity and open platform:
     • 2U x86-based, 12 x 4 TB (raw), ~RM50k/unit
     • 1/10 Gbps Ethernet switch, < RM100k/unit
     • + open source storage platform/software
     • = 144 TB (raw) @ < RM350k
     • More space required? Just get any x86 box and disks
     • Features? Mostly already available in the open source storage software
     Can we go commodity and open platform? After some studies, Ceph was chosen.
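The slide's rough figures reduce to a quick per-TB comparison. A minimal sketch, using only the presentation's ballpark RM numbers (not quotes):

```python
# Rough cost-per-raw-TB comparison from the slide's figures (RM = Malaysian Ringgit).

def cost_per_tb(total_cost_rm, raw_tb):
    """Return cost in RM per raw terabyte."""
    return total_cost_rm / raw_tb

# "Branded" SAN: ~RM10M++ per 100 TB raw
branded = cost_per_tb(10_000_000, 100)

# Commodity build: 3 x (2U x86, 12 x 4 TB = 48 TB raw, ~RM50k/unit)
# plus networking => 144 TB raw @ < RM350k, per the slide
commodity = cost_per_tb(350_000, 3 * 12 * 4)

print(branded)                     # 100000.0 RM/TB
print(round(commodity, 2))         # 2430.56 RM/TB
print(round(branded / commodity))  # roughly 41x cheaper per raw TB
```

Even treating every number as an upper bound on the commodity side, the gap per raw terabyte is more than an order of magnitude, which is the whole motivation for the "commodity and open platform" question.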
  4. Ceph?
     • It's an open source, scalable, and reliable distributed object store
     • Stores data by striping/chunking it into smaller objects and spreading those objects across different storage elements (disks)
     • Objects can be replicated multiple times for redundancy, or stored with Erasure Code techniques if storage capacity is desired
     • Clients access the distributed storage via the RADOS Block Device (RBD), CephFS, the RADOS Gateway (RGW), or the Ceph/RADOS libraries
     • KVM (QEMU-KVM), libvirt, and OpenNebula have Ceph support
     Ceph? (cont'd) [architecture diagram: APP / HOST/VM / CLIENT access paths; source: Patrick McGarry, Inktank]
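The replication-versus-erasure-code trade-off the slide mentions is easy to quantify. A minimal sketch of the raw-capacity overhead; the 3-way and 4+2 parameters are illustrative common choices, not the deck's configuration:

```python
# Raw bytes stored per logical byte under the two redundancy schemes
# the slide contrasts: N-way replication vs k+m erasure coding.

def replication_overhead(copies):
    """Raw storage used per logical byte with N-way replication."""
    return float(copies)

def ec_overhead(k, m):
    """Raw storage used per logical byte with a k-data + m-coding erasure code."""
    return (k + m) / k

# 3-way replication: 3.0x raw usage, survives the loss of 2 copies.
print(replication_overhead(3))  # 3.0
# 4+2 erasure coding: 1.5x raw usage, also survives the loss of 2 chunks.
print(ec_overhead(4, 2))        # 1.5
```

Same failure tolerance, half the raw capacity, which is exactly why the slide frames erasure coding as the option "if storage capacity is desired" (at the cost of extra CPU and reconstruction work).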
  5. Why Ceph as Backend? A Use Case
     • Let's consider a typical DR deployment scenario:
       [diagram: Data Center and Disaster Recovery Site(s), each on its own SAN/NAS ($$$), with R/W replication between them]
  6. Use Case (cont'd)
     • What if?
       [diagram: Data Center 1 and Data Center 2 on local/DAS storage ($), joined by Mi-ROSS into one/multiple virtual volume(s), with replication, data striping, and parallel R/W over software-defined networking]
     • Programmable • Redundancy • Availability/Reliability • Performance
     Use Case #2
     • Initial POC simulates both KHTP and TPM with 3 zones using existing but slightly different hardware configurations (different disk specs)
     • For actual implementation, replicate the POC setup but with similar hardware configurations
       – KHTP zone setup is all in a single data center
       – TPM zone setup uses both DC1 and DC2
       [diagram: MIMOS KHTP zones 1-3 and MIMOS TPM zones 1-3 (HPCC1, HPCC2)]
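Spreading replicas across sites like this is typically expressed in Ceph as a CRUSH rule whose failure domain is the datacenter bucket. A hedged CLI sketch for a Hammer-era cluster (rule, pool, and bucket names here are invented, and PG counts depend on cluster size; `crush_ruleset` was later renamed `crush_rule`):

```
# Assumes the CRUSH map already contains 'datacenter' buckets under the
# root 'default'; all names below are placeholders.
ceph osd crush rule create-simple dc-spread default datacenter
ceph osd pool create miross-dr 128 128 replicated dc-spread
ceph osd pool set miross-dr size 2    # keep one replica in each data center
```

With such a rule in place, every write is acknowledged only after copies land in both sites, which is what replaces the dedicated SAN-to-SAN replication of the previous slide.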
  7. Use Case #3: VDI
     • Due to project requirements, we needed a controlled environment where users remotely access a Windows desktop/client for development
     • Solution
       – OpenNebula + Ceph (Emperor)
       – 60+ Windows 7 VMs with RDP
       – 30+ development VMs
       – Additional attached storage
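The OpenNebula-plus-Ceph pairing on this slide is normally wired up through a Ceph image datastore. A hedged template sketch (pool name, bridge host, monitor hosts, and secret UUID are all placeholders, not the deck's values):

```
# ceph-ds.conf -- register with: onedatastore create ceph-ds.conf
NAME        = ceph_ds
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one
BRIDGE_LIST = "ceph-frontend"
CEPH_HOST   = "mon1 mon2 mon3"
CEPH_USER   = libvirt
CEPH_SECRET = "uuid-of-the-libvirt-secret"
```

Once registered, images in that datastore are RBD volumes, so cloning a golden Windows image for each of the 60+ VMs becomes a copy-on-write operation on the Ceph side rather than a full copy.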
  8. Some Little Annoyances
     • Command-line-driven management
     • How to ease management of pools and other capabilities?
     • What if I need to access the storage differently?
       – NFS
       – SAMBA
       – Etc.
     • Is there a way to orchestrate, or to provide a management interface to, other cloud management platforms, e.g. OpenNebula, etc.?
       – Register pools
       – Configure libvirt
       – Etc.
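"Configure libvirt" on this slide usually means registering a Ceph auth secret and pointing each guest disk at an RBD image, which is fiddly enough by hand to motivate automating it. A hedged sketch of the two standard libvirt fragments (user, pool/image, and monitor names are placeholders):

```xml
<!-- secret.xml: register with `virsh secret-define secret.xml`, then
     `virsh secret-set-value <uuid> <base64 cephx key>` -->
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>

<!-- guest disk element referencing an RBD image (pool 'one', image 'vm-disk-0') -->
<disk type='network' device='disk'>
  <source protocol='rbd' name='one/vm-disk-0'>
    <host name='mon1' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='UUID-FROM-SECRET-DEFINE'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```

Every hypervisor host needs the same secret defined with the same UUID, which is precisely the kind of repetitive per-host setup a management layer can take over.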
  9. What's Mi-ROSS
     • Provides simple access to, and management of, a distributed storage system that can be deployed on a LAN as well as across a WAN/campus network
     • Leverages the availability and redundancy provided by its chosen backend, i.e. Ceph
     • Mi-ROSS (MIMOS Reliable Object Storage System) is an initiative in Software-Defined Storage
  10. Mi-ROSS Dashboard / Simple Monitoring
      Mi-ROSS Pools & Block Devices Management
      [screenshots]
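A rough idea of what the pools and block devices screens wrap: the plain Ceph CLI that would otherwise be typed by hand (pool and image names below are invented, and the placement-group count depends on cluster size):

```
ceph osd pool create mi-ross-pool 128      # create a pool with 128 PGs
rbd create mi-ross-pool/vol0 --size 10240  # 10 GiB block device image
rbd ls mi-ross-pool                        # list images in the pool
rbd info mi-ross-pool/vol0                 # show an image's details
ceph df                                    # per-pool usage, as on the dashboard
```

Putting these behind forms is the direct answer to the "command-line-driven management" annoyance from the earlier slide.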
  11. Mi-ROSS NFS Management
      Mi-ROSS Samba Management
      [screenshots]
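One common way such NFS/Samba exports are built on top of Ceph is to map an RBD image on a gateway host, put a filesystem on it, and share the mount point. A hedged sketch (image, mount point, share name, and network are placeholders):

```
# On the gateway host:
rbd map mi-ross-pool/vol0       # device appears as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /export/vol0

# /etc/exports (NFS)
/export/vol0  192.168.0.0/24(rw,sync,no_subtree_check)

# smb.conf (Samba)
[vol0]
   path = /export/vol0
   read only = no
```

The gateway host becomes a single point of access for that share, so in practice the mapping step is something a management layer wants to track and, ideally, fail over.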
  12. DEMO
      Disclaimer: we are running from a production environment.
  13. What's Next?
      • Additional management features
      • Hierarchical storage and data management
      • New export option(s) (e.g. iSCSI, etc.)
      • Web services
      • Better integration with OpenNebula/Mi-Cloud
