Cloudian HyperStore offers 100% S3 compatibility for low-cost, scalable smart object storage.
With HyperStore 6.0, we are focused on bringing down operational costs so that you can more effectively track, manage, and optimize your data storage as you scale.
2. History of Innovation
Timeline, 2011–2016:

HyperStore 1.5 (Software)
• Scale-out
• Peer-to-peer
• Multi-tenancy
• Quality of Service (QoS)

HyperStore 2.3 (Software)
• S3 API
• Multi-region storage

HyperStore 3.0 (Software)
• Compression
• Virtual appliance

HyperStore 2.4 (Software)
• Server-side encryption
• Billing & chargeback
• Citrix Cloud Platform

HyperStore 4.0 (Software)
• Erasure coding
• OpenStack
• NFS
• Cloud Archive

HyperStore 5.0 (Software & appliances)
• Server-side encryption
• Billing & chargeback
• Citrix Cloud Platform

HyperStore 5.1
• Smart Data
• Hadoop
• Smart Support

HyperStore 5.2
• Smart Data Policies
• Faster repair

HyperStore ‘forever live’ 3000
• Hot-plug everything
• Seamless scale
• Petabyte scale

HyperStore 6.0
• Operations @scale
• Durable @scale
• Tuning @scale
3. New Economics of Storage
Traditional storage: $2009/TB/yr TCO
• Hardware/software: $1245 (62%)
• Operations FTE: $511 (26%)
• Other: $253 (12%)

Software-defined storage: $886/TB/yr TCO
• Hardware/software: $122 (14%)
• Operations FTE: $511 (58%)
• Other: $253 (28%)

• In 2016 it is estimated that 1 FTE can manage 344 TB
• Software Defined Storage has dramatically lowered acquisition costs, to $122/TB/year
• Operational costs now significantly outweigh acquisition-related costs

Source: Gartner 2016 IT Key Metrics Data
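The totals above can be checked with simple arithmetic (the component figures are taken from the Gartner data cited on the slide):

```python
# Cost components in $/TB/yr, as shown on the slide (Gartner 2016 IT Key Metrics Data)
traditional = {"hw_sw": 1245, "ops_fte": 511, "other": 253}
sds = {"hw_sw": 122, "ops_fte": 511, "other": 253}

tco_traditional = sum(traditional.values())   # 1245 + 511 + 253 = 2009
tco_sds = sum(sds.values())                   # 122 + 511 + 253 = 886
savings_pct = round(100 * (1 - tco_sds / tco_traditional))

print(f"${tco_traditional}/TB/yr vs ${tco_sds}/TB/yr ({savings_pct}% lower)")
# → $2009/TB/yr vs $886/TB/yr (56% lower)
```

Note how the operations FTE cost stays flat at $511 in both columns: that is exactly why the remaining operational share (58%) dominates the software-defined TCO, and why 6.0 targets operations.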
Cloudian HyperStore 6.0 focus: 1¢ per GB/month
4. Introducing @scale Storage
Operations @scale (NEW)
• New Operations Console for one-click management
• Fully automated add/remove of nodes
• Non-disruptive rolling upgrades

Durable @scale (NEW)
• Always repaired, always verified
• Dynamic Object Routing for automated failure avoidance
• Simple disaster recovery with Cross-Region Replication

Tuning @scale (NEW)
• Visual Storage Analytics reports to automatically identify hot spots
• Object ‘GPS’ to locate objects

Key announcement messages (announced April 12th):
1. Double the management capacity of your storage administrators
2. Continuous, automated failure resolution for data durability
3. Proactive, low-cost management for petabyte-scale storage
7. Operations @scale : Datacenters
• One screen for hundreds of nodes
• Instantly view health of nodes
• Add nodes with one click
• Cluster dynamically rebalances
12. Tuning @scale : “Object GPS”
• Locate all parts of any object
• Validate object digests and timestamps
13. Operations @scale : Rolling Upgrade
• Automates upgrade/patch installation: touchless
• Uses the Puppet framework for distribution and management
• No downtime: node-by-node rolling upgrade
✓ Peer-to-peer system = no SPOF
✓ Distributed everything: data, metadata, configuration

[Diagram: six nodes (Node 1–Node 6) upgraded from the old version to the new version one at a time, over steps t0 through t3]
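The node-by-node rollout above can be sketched as an orchestration loop. This is a minimal illustration, not HyperStore's implementation (which drives the rollout through Puppet); the function names are hypothetical:

```python
def rolling_upgrade(nodes, upgrade_one, health_ok):
    """Upgrade a peer-to-peer cluster one node at a time.

    `upgrade_one(node)` installs the new version on a single node;
    `health_ok(node)` reports whether the node rejoined the cluster.
    Because every node is a peer (no SPOF), the remaining nodes keep
    serving IO while one node is briefly down for its upgrade.
    """
    upgraded = []
    for node in nodes:
        upgrade_one(node)          # take node out, patch, restart
        if not health_ok(node):    # verify before touching the next node
            raise RuntimeError(f"{node} failed post-upgrade check; halting rollout")
        upgraded.append(node)
    return upgraded

# Illustrative run over six nodes, mirroring the t0..t3 timeline on the slide
nodes = [f"node{i}" for i in range(1, 7)]
done = rolling_upgrade(nodes, upgrade_one=lambda n: None, health_ok=lambda n: True)
print(done)   # ['node1', 'node2', 'node3', 'node4', 'node5', 'node6']
```

Halting on the first failed health check is what keeps the rollout non-disruptive: at most one node is ever out of the quorum.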
14. Durable @scale : Proactive Repair
Problem
If a node, disk, or network fails today, the incomplete IOs have to be identified by scanning the whole system, generating a lot of disk IO. This is a VERY slow process, and it exposes the customer to data loss for days in the event of a double failure.

Solution
When an IO fails due to a network, node, or disk failure, we keep hints about what failed. When repair runs, we know exactly what needs repairing. This is a very FAST process, and it drastically reduces the exposure window from days to hours.
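The hint mechanism described above can be sketched as follows. This is a minimal illustration of the idea, with hypothetical names, not HyperStore internals:

```python
class HintLog:
    """Record which writes failed so repair can target exactly those objects,
    instead of scanning the whole system (the slow path described above)."""

    def __init__(self):
        self.hints = []   # (object_id, failed_node) pairs

    def record_failure(self, object_id, node):
        """Called when an IO to `node` fails: remember what was missed."""
        self.hints.append((object_id, node))

    def repair(self, replay_write):
        """Replay only the writes we know failed: fast, targeted repair."""
        pending, self.hints = self.hints, []
        for object_id, node in pending:
            replay_write(object_id, node)
        return len(pending)

log = HintLog()
log.record_failure("obj-42", "node3")   # IO to node3 failed; keep a hint
log.record_failure("obj-99", "node3")
repaired = log.repair(lambda oid, node: None)
print(repaired)   # 2 objects repaired without a full scan
```

The contrast with the "Problem" column is the amount of IO: repair touches only the hinted objects, never the whole keyspace.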
15. Durable @scale : Rebuild Analytics
• View Data Rebuild info
• View Cluster Rebalance info
16. Durable @scale : Smart Redirect
To protect against unplanned outages, Smart Redirect introduces a local cache copy for replication:
1. Request comes in
2. Node 1 is down
3. Node 2 makes an extra copy, stored in its local cache
4. Node 2 keeps a hint in Cassandra
5. Request succeeds

[Diagram: client writing to a four-node cluster; copy 1 lands on a healthy node, copy 2 goes to Node 2's local cache while Node 1 is down]
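The five-step redirect flow above can be sketched as follows. This is an illustrative model only; in the real system the hint is stored in Cassandra and repair is handled by proactive repair:

```python
def write_with_redirect(obj, primaries, alive, local_cache, hints):
    """Write `obj` to each primary node; if one is down, keep the copy in a
    local cache on a healthy node and record a hint so repair can move it
    to the right place later (steps 1-5 on the slide)."""
    stored = []
    for node in primaries:
        if alive(node):
            stored.append(node)                           # normal replica write
        else:
            local_cache.setdefault(obj, []).append(node)  # extra local copy
            hints.append((obj, node))                     # hint: node missed this write
    return "success"                                      # client still sees success

cache, hints = {}, []
status = write_with_redirect(
    "obj-7",
    primaries=["node1", "node2"],
    alive=lambda n: n != "node1",   # node1 is down
    local_cache=cache,
    hints=hints,
)
print(status, hints)   # the write succeeds, with a hint that node1 missed obj-7
```

The key property is that the client request never fails just because one replica target is down; durability is restored asynchronously from the cached copy.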
17. Durable @scale : Smart Redirect with Hot Spares
To protect against unplanned outages with erasure coding, we introduce hot spares (k+m+L):
• No local cache, only hints
• Repair is handled by proactive repair
• If a customer has 20 nodes, only 2 can fail with plain k+m; (k+m+L) fixes this

[Diagram: client writing erasure-coded fragments across eight nodes]
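The (k+m) idea behind this slide can be illustrated with the simplest possible erasure code: k data fragments plus a single XOR parity fragment (m = 1). This is a toy sketch; production systems such as HyperStore use Reed-Solomon-style codes that tolerate m simultaneous failures:

```python
from functools import reduce

def xor_all(fragments):
    """Byte-wise XOR of equal-length byte strings."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), fragments)

def encode(data_fragments):
    """k data fragments + 1 XOR parity fragment (a toy m=1 erasure code)."""
    return data_fragments + [xor_all(data_fragments)]

def recover(fragments, lost_index):
    """Rebuild any single lost fragment by XOR-ing the survivors."""
    survivors = [f for i, f in enumerate(fragments) if i != lost_index]
    return xor_all(survivors)

k = 4
data = [bytes([i] * 8) for i in range(k)]   # 4 equal-size data fragments
stored = encode(data)                       # 5 fragments, spread over 5 nodes
# The node holding fragment 2 fails; the survivors rebuild it exactly:
assert recover(stored, lost_index=2) == data[2]
```

Adding L hot spares on top of k+m gives repair somewhere to rebuild lost fragments immediately, which is why a 20-node cluster is no longer limited to m node failures.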
18. Durable @scale : Smart Disk Balancing
1. Scenario 1: disk imbalance. If we notice an imbalance, tokens pointing at a highly used disk are remapped to a lightly used disk.
2. Scenario 2: disk failure. New data automatically routes to the newly assigned resources.

[Diagram: static mapping vs. auto dynamic mapping, before (t0) and after (t1)]
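The token remapping in Scenario 1 can be sketched as a mutable token-to-disk map. This is an illustrative model with hypothetical names, not HyperStore's actual placement logic:

```python
def rebalance(token_map, usage, threshold=0.85):
    """Remap tokens from disks above `threshold` utilization to the
    least-used disk, so new writes route away from hot disks (Scenario 1).
    A failed disk can be modeled as usage = 1.0, so Scenario 2 falls out
    of the same rule."""
    moved = {}
    for token, disk in list(token_map.items()):
        if usage[disk] >= threshold:
            target = min(usage, key=usage.get)   # least-used disk
            token_map[token] = target
            moved[token] = (disk, target)
    return moved

token_map = {"t1": "disk1", "t2": "disk1", "t3": "disk2"}
usage = {"disk1": 0.92, "disk2": 0.30, "disk3": 0.10}
moves = rebalance(token_map, usage)
print(moves)   # t1 and t2 move off the overloaded disk1
```

Because only the token map changes, existing data stays where it is; the remap affects where *new* data lands, matching the "new data automatically routes" behavior on the slide.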
19. Durable @scale : Repair-On-Read
• All replicas of the object are checked on a read request
• Missing or out-of-date replicas are automatically replaced or updated

[Diagram: a read against Node 1, Node 2, and Node N+1 compares replicas; when the object has changed, out-of-date replicas are replaced or updated on the read path]
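Repair-on-read can be sketched as a digest-and-timestamp comparison across replicas, in the spirit of the digest validation from slide 12. This is an illustrative model, not HyperStore's implementation:

```python
import hashlib

def read_with_repair(replicas):
    """Read an object from all replicas; the newest copy wins, and any
    stale or missing replica is rewritten in place (repair-on-read).
    Each replica is modeled as {"data": bytes or None, "ts": int}."""
    live = [r for r in replicas if r["data"] is not None]
    newest = max(live, key=lambda r: r["ts"])
    newest_digest = hashlib.md5(newest["data"]).digest()
    repaired = 0
    for r in replicas:
        same = (r["data"] is not None and
                hashlib.md5(r["data"]).digest() == newest_digest)
        if not same:
            r["data"], r["ts"] = newest["data"], newest["ts"]  # replace/update
            repaired += 1
    return newest["data"], repaired

replicas = [
    {"data": b"v2", "ts": 2},
    {"data": b"v1", "ts": 1},   # out of date
    {"data": None,  "ts": 0},   # missing
]
data, fixed = read_with_repair(replicas)
print(data, fixed)   # b'v2' 2  (two replicas repaired during this read)
```

Piggybacking repair on reads means frequently accessed objects are also the most frequently verified ones, complementing the background proactive repair.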
20. Durable @scale
S3 Cross Region Replication
• A restore script restores objects into a new bucket
• The restore script generates the source list as a CSV file
• No CMC support in 6.0
21. S3 Compatibility
Ceph S3 test suite results:
• Total tests: 416
• PASS: 386
• PASS*: 26 (illegal-character cases handled at the Jetty layer)
• OPEN: 4 (3 bugs open: POST filename substitution, minor ACL issues)
22. Introducing @scale Storage

Operations @scale (NEW)
• New Operations Console for one-click management
• Fully automated add/remove of nodes
• Non-disruptive rolling upgrades

Durable @scale (NEW)
• Always repaired, always verified
• Dynamic Object Routing for automated failure avoidance
• Simple disaster recovery with Cross-Region Replication

Tuning @scale (NEW)
• Visual Storage Analytics reports to automatically identify hot spots
• Object ‘GPS’ to locate objects

Key announcement messages (announced April 12th):
1. Double the management capacity of your storage administrators
2. Continuous, automated failure resolution for data durability
3. Proactive, low-cost management for petabyte-scale storage