1. Scality, Cloud Storage for Zimbra
2. Data and Storage Challenges
100s Millions of Users, 10s-100s PB of Data
and Billions of files to store and serve
What do all these companies have in common?
An Internet/cloud business that is impossible to sustain and develop
with traditional IT approaches
3. Scality – Quick Facts
Founded 2009
Experienced management team
HQ in San Francisco, global reach
~50 employees, 20 engineers in Paris
24 x 7 support team
US patents
$13M invested in Scality to date
120% annual growth
Industry associations
“Aggressive use of a scale-out architecture like that enabled by
Scality's RING architecture will become more prevalent, as IT
organizations develop best practices that boost storage asset use,
reduce operational overhead, and meet high data availability
expectations.”
4. Customers in US, Europe and Japan
Email Service Providers
Cloud Providers (S3-compatible, file sync, cloud backup…)
Consumer Internet Big Data
Hardware Alliances
5. Scality RING 4
Use cases: Email, Enterprise File Storage, STaaS, Digital Media, Big Data
Access layers: Scale-Out File System, S3 & CDMI APIs, Cloud Email, Origin Server, Big Data processing with Hadoop
Scality RING Organic Storage 4 core: Ring Topology, P2P, End-to-End Parallelism, Object Storage, MESA NewSQL DB (metadata), Replication, ARC Erasure Coding, Geo Redundancy, Tiering, Standard x86 Hardware, Management
6. The Challenges of Mail Storage
The explosion of data and the explosion of expectations
Managing volumes, growth, and distribution
Complexity
Performance
Protection:
Never losing data
Keeping data available
7. Volume Management
Each ZCS server manages a fixed list of users, so we must:
Distribute the CPU, database, and IOPS load
Manage the volumes
Volumes are limited in the number of files they can hold
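The fixed user-to-server mapping above can be sketched as hash-based sharding. This is a hypothetical illustration of the load-distribution idea, not Zimbra's actual provisioning mechanism:

```python
import hashlib

def assign_server(user_email: str, servers: list[str]) -> str:
    """Spread users across a fixed list of ZCS servers by hashing the
    address, so CPU, database, and IOPS load is distributed evenly.
    (Illustrative only; Zimbra's own provisioning logic differs.)"""
    digest = hashlib.sha1(user_email.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["zcs1", "zcs2", "zcs3"]
assigned = assign_server("alice@example.com", servers)
```

The same address always hashes to the same server, which is what makes the user list per server "fixed" until servers are added.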
12. Scality's Zimbra Architecture
Zimbra 7/8 servers connected to Scality's RING over a standard GE network
Spare Zimbra server for failover
Legend: stateless Zimbra node, data, MySQL backups
13. Performance
The combination of:
Large volumes
I/O density
Billions of files
Growth must not degrade performance
Webmail and IMAP require response times < 100 ms at all times
14. Distributed P2P Architecture
Scality RING
Servers (6), storage nodes (e.g. 6 per server, 36 in total)
Storage nodes projected onto a ring
From servers to storage nodes
RING topology, P2P architecture
Limitless scale-out storage based on a shared-nothing model
Fully distributed storage (data and metadata)
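The projection of storage nodes onto a ring can be sketched with a toy consistent-hash ring: each server contributes several virtual node positions, and a key is served by the first node clockwise from its hash. This is a minimal sketch of the general technique, not Scality's actual keyspace algorithm:

```python
import bisect
import hashlib

def _h(value: str) -> int:
    """Map a string to a point on the hash ring."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

class Ring:
    """Toy consistent-hash ring: each server contributes several
    storage-node positions; a key belongs to the first node clockwise
    from its hash (wrapping around at the end)."""

    def __init__(self, servers, nodes_per_server=6):
        self._points = sorted(
            (_h(f"{server}#{i}"), server)
            for server in servers
            for i in range(nodes_per_server)
        )
        self._keys = [point for point, _ in self._points]

    def lookup(self, key: str) -> str:
        idx = bisect.bisect(self._keys, _h(key)) % len(self._points)
        return self._points[idx][1]

# 6 servers x 6 storage nodes = 36 positions, as on the slide
ring = Ring([f"server{i}" for i in range(1, 7)])
owner = ring.lookup("mailbox/alice/msg-0001")
```

Because data and metadata placement both derive from the key's hash, any node can locate an object without a central directory, which is what the shared-nothing claim rests on.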
15. End-to-End Parallelism
Parallel connectors from applications to the storage nodes
Performance aggregation
Redundant data path
Multiple storage nodes per server:
Minimum 6, to increase parallelism and data independence
Fast and easy rebuild
Multiple I/O daemons per server, on tiered storage (SSD, SATA):
Control physical layout and boost I/O
Independent performance and capacity scalability
Scality parallelism factor: #storage nodes x #I/O daemons,
vs a simple server node with only 1 I/O engine
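The parallelism factor above is a simple product. A minimal sketch, using the slide's node counts and a hypothetical daemon count for illustration:

```python
# Scality parallelism factor: #storage nodes x #I/O daemons,
# versus a single-engine server node. The daemon count here is a
# hypothetical figure for illustration, not a Scality default.
storage_nodes = 36        # 6 servers x 6 nodes each, as on slide 14
io_daemons = 6            # hypothetical: multiple I/O daemons per server
parallelism_factor = storage_nodes * io_daemons
single_engine_factor = 1  # simple server node with only 1 I/O engine
```

Each added server multiplies the factor again, which is why capacity growth and performance growth move together in this architecture.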
16. “Exceptional performance”
“ESG Lab verified exceptional performance for an object-based storage
solution, which rivals block-based solutions. Aggregate throughput
scaled linearly as nodes were added to a RING. Response times
improved as the RING grew in size, allowing for predictability when
deploying a RING.”
Table 3. Content Delivery (5 servers with Intel SSD)
Object Type             Objects Delivered Simultaneously Sustained
Internet Audio (MP3)    211,424
Internet Image (JPG)    135,311
Internet Video (MPEG)    90,208
CD Audio (ISO)           18,877
Broadcast TV (HD)         2,298
17. Protection
Losing messages is not acceptable
Mail service availability is essential
Cost often forces compromises
18. Data Replication
No data transformation: clear/native data format
Very fast access
Simple projection onto the ring
Class of storage: up to 5 replicas (6 copies)
Rack-aware placement
Guarantee of fully independent object locations
Self-healing:
Balances misplaced objects
Transparently proxies misplaced objects
Rebuilds missing replicas
Permanent CRC of all contents (no silent data corruption)
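The permanent-CRC idea can be sketched as storing a checksum with each object and verifying it on every read. A minimal stand-in for the RING's per-object checksums, not its actual on-disk format:

```python
import zlib

def store(blob: bytes) -> tuple[bytes, int]:
    """Keep a CRC32 alongside every object, as a stand-in for the
    RING's permanent per-object checksums."""
    return blob, zlib.crc32(blob)

def read(blob: bytes, expected_crc: int) -> bytes:
    """Verify the checksum on every read so silent bit-rot is caught;
    a real system would rebuild from another replica instead of
    raising to the caller."""
    if zlib.crc32(blob) != expected_crc:
        raise IOError("silent corruption detected: rebuild from replica")
    return blob

data, crc = store(b"message body")
assert read(data, crc) == b"message body"
```

Pairing this check with self-healing is what turns a detected corruption into an automatic rebuild rather than a lost email.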
19. Scality ARC
Data fragments = native data (no transformation)
Direct and fast read access
Calculation only required when data is missing
Highly configurable, e.g. ARC(14,4): 14 data inputs produce 4
checksums, and both are stored on the RING
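The ARC properties above can be illustrated with the simplest possible erasure code: one XOR parity over k data fragments, a toy ARC(k,1). Real ARC(14,4) uses a more powerful code that computes 4 checksums and survives up to 4 losses, but the key property is the same: data fragments stay in native form, and computation is only needed when a fragment is missing.

```python
def xor_parity(fragments: list[bytes]) -> bytes:
    """Single XOR parity over equal-size data fragments: a toy
    ARC(k,1) for illustration only."""
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, byte in enumerate(frag):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(survivors: list[bytes], parity: bytes) -> bytes:
    """Recover one missing fragment from the survivors plus parity.
    Note that intact data fragments are readable directly, with no
    decoding step at all."""
    return xor_parity(survivors + [parity])

frags = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(frags)
# Lose frags[1]; XOR of the survivors and the parity restores it.
assert rebuild([frags[0], frags[2]], parity) == b"BBBB"
```

This is why erasure coding gives replication-like durability at a fraction of the capacity overhead: here 1 extra fragment protects 3, versus 3 extra copies for triple replication.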
20. Geo Redundancy
Business continuity with “true 99.999%” availability
Multi-site topology with Scality RING (up to 6 sites)
Replication or geo erasure coding implementation – synchronous
Or multi-RINGs on multi-site (independent topologies) – asynchronous
Synchronous: a stretched RING across 2 sites
Asynchronous: multiple independent RINGs
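The synchronous/asynchronous distinction can be sketched with two write paths over a hypothetical two-site setup (the site names and in-memory stores below are illustrative, not Scality APIs):

```python
import queue
import threading

sites = {"site_a": {}, "site_b": {}}  # hypothetical two-site setup

def write_sync(key: str, value: bytes) -> None:
    """Stretched-RING style: the client is acknowledged only after
    every site holds the object (synchronous replication)."""
    for store in sites.values():
        store[key] = value

_pending: "queue.Queue[tuple[str, bytes]]" = queue.Queue()

def write_async(key: str, value: bytes) -> None:
    """Multi-RING style: acknowledge after the local write; a
    background worker ships the object to the remote RING later
    (asynchronous replication)."""
    sites["site_a"][key] = value
    _pending.put((key, value))

def _replicator() -> None:
    while True:
        key, value = _pending.get()
        sites["site_b"][key] = value
        _pending.task_done()

threading.Thread(target=_replicator, daemon=True).start()

write_sync("msg-1", b"hello")
write_async("msg-2", b"world")
_pending.join()  # in real life the client does not wait for this
```

The trade-off is the usual one: synchronous writes pay inter-site latency on every request but never lose acknowledged data; asynchronous writes stay fast locally at the cost of a small replication lag.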
21. Telenet – Email Service Provider Case Study: 2 Million Users, 1 Billion Objects
Problem: outdated infrastructure, fast-growing customer base,
budgetary constraints, exponential growth of storage projected.
Solution: Scality RING
Unlimited number of files: Millions of files per bucket. Up
to 1B files per node.
Pay as you grow model
Fully integrated product includes Zimbra Connectors
“Scality’s storage tiering capability allows us to use
lower cost disk systems at geographically dispersed
sites to provide Tier 2 storage with true redundancy
and disaster recovery, without incurring the extra
hardware expense and back-up overhead
associated with using NAS and SAN storage
systems.
In addition, the cost risk associated with email
storage has been significantly reduced with Scality
as we can add lower cost storage as and when we
need it, without large incremental spending chunks”
Nick De Jonghe, Manager Network Strategy &
Architecture, at Telenet
22. In Summary
Zimbra challenge: the storage explosion is expensive on NAS/SAN.
Scality solution: distributed storage on standard servers lowers TCO.
Zimbra challenge: the limit on objects per volume forces mounting
several independent volumes that do not share capacity.
Scality solution: no limitation (no inodes); a single volume
aggregates the storage capacity.
Zimbra challenge: performance.
Scality solution: adding storage capacity increases performance
(aggregating CPU and I/O operations).
Zimbra challenge: complex backups.
Scality solution: integrated, geographically distributed backups
(geo-redundant).
Zimbra challenge: adding capacity without interruption.
Scality solution: distributed system – zero interruption.