DAX: A Widely Distributed Multi-tenant Storage Service for DBMS Hosting
1. DAX: A Widely Distributed Multi-tenant
Storage Service for DBMS Hosting
Rui Liu, Ashraf Aboulnaga, Kenneth Salem
University of Waterloo
2. RDBMS in the Cloud
● Many cloud and non-cloud applications use a relational DBMS
○ easy to use
○ robust applications
○ good performance
○ possibility of scale-out (e.g., by sharding)
3. DBMS Setting in the Cloud
[Diagram: a DBMS with its database and log held in a storage service]
● Data and log are stored in network-accessible storage
● On failure, a new DBMS is launched
6. Data Center Disaster Tolerance
● Current solutions: log shipping, storage mirroring, or backup
○ complex
○ slow (synchronous)
○ or lose data (asynchronous)
[Diagram: two data centers, each with a DBMS and storage service, connected by log shipping or storage mirror/backup]
7. Our Solution
Disaster-tolerant storage for DBMS hosting
[Diagram: applications and DBMS instances in two data centers, all backed by a shared storage service]
8. DAX
Distributed Application-controlled Consistent Store
● Block I/O interface to the DBMS
● Quorum-based, multi-master replication
● Distributed, replicated "key-value store"
○ block ID ⇒ key
○ block contents ⇒ value
[Diagram: DBMS on top of the DAX client library, which talks to DAX]
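The block-to-key mapping above can be sketched as follows. This is an illustrative toy, not DAX's actual client API; the class and method names are assumptions, and a plain dict stands in for the distributed, replicated store.

```python
# Sketch: a block I/O interface layered on a key-value store,
# mapping block ID => key and block contents => value.
class BlockStore:
    def __init__(self):
        self.kv = {}  # stands in for the distributed, replicated store

    def write_block(self, block_id: int, data: bytes) -> None:
        self.kv[block_id] = data      # block ID is the key

    def read_block(self, block_id: int) -> bytes:
        return self.kv[block_id]      # block contents are the value

store = BlockStore()
store.write_block(7, b"page contents")
assert store.read_block(7) == b"page contents"
```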
9. Baseline: Quorum Reads and Writes
● Block replication: N copies across DAX servers
○ servers and copies span all data centers
● Reads and writes
○ Read: request all N copies, return after 1 ≤ R ≤ N respond
○ Write: request all N copies, return after 1 ≤ W ≤ N acknowledge
● R = W = ⌊N/2⌋ + 1
○ replicas distributed across data centers
○ the write quorum includes at least one remote replica
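The majority-quorum scheme above can be sketched as a toy (names and structure are illustrative, not DAX's implementation). Because R + W > N, every read quorum overlaps every write quorum, so some replica in the read quorum holds the latest completed write; versions pick the winner.

```python
# Majority quorums: with N replicas, R = W = N // 2 + 1 guarantees
# read/write quorum overlap, so a read sees the latest completed write.
N = 3
R = W = N // 2 + 1

replicas = [{} for _ in range(N)]   # each dict stands in for one DAX server

def quorum_write(block_id, data, version):
    acks = 0
    for rep in replicas:            # send to all N, return after W acks
        rep[block_id] = (version, data)
        acks += 1
        if acks >= W:
            return True
    return False

def quorum_read(block_id):
    answers = []
    for rep in replicas:            # ask all N, return after R answers
        if block_id in rep:
            answers.append(rep[block_id])
        if len(answers) >= R:
            break
    return max(answers)[1] if answers else None   # highest version wins

quorum_write(1, b"v1", version=1)
quorum_write(1, b"v2", version=2)
assert quorum_read(1) == b"v2"
```

Note that the writer may return before every replica is updated; the quorum overlap is what keeps reads consistent despite the laggard copy.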
10. Local Read and Write Latencies
● N = 3, in a single data center
● Consistency penalty: waiting for more replicas is slower
11. Wide-Area Read and Write Latencies
● N = 3, copies in Oregon and Virginia
● Much larger consistency penalty!
12. Goal of DAX
● Achieve the performance of R = 1, W = 1 while maintaining consistency, durability, and availability
Contributions
● Performance optimizations
○ Optimistic I/O: reduce read latency
○ Client-managed durability: hide write latency
● Novel consistency guarantee
13. Optimistic I/O
● Key observation: with R = W = 1, most reads do not return stale data
○ single writer
○ DBMS buffer pool
○ parallel writes
○ network topology
● Solution: use R = W = 1, detect stale data, and recover from it
○ record data block version numbers; check the recorded version when reading
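The detect-and-recover step can be sketched as below. This is a minimal toy under assumed names, not DAX's protocol: the client remembers the latest version it wrote per block, reads a single replica on the fast path, and falls back to other replicas only when the version check fails.

```python
# Optimistic I/O sketch: read one replica (R = 1) and use a locally
# recorded version number to detect staleness; recover on mismatch.
expected_version = {}       # block ID -> latest version this client wrote
replicas = [{}, {}, {}]     # each dict stands in for one DAX server

def optimistic_read(block_id):
    ver, data = replicas[0].get(block_id, (0, None))  # fast path: one replica
    if ver >= expected_version.get(block_id, 0):
        return data                                   # common case: fresh
    for rep in replicas[1:]:                          # recovery: try others
        v, d = rep.get(block_id, (0, None))
        if v >= expected_version.get(block_id, 0):
            return d
    raise RuntimeError("no fresh replica reachable")

# A write reached replica 1 only (W = 1); replica 0 still holds v1.
replicas[1][7] = (2, b"new")
replicas[0][7] = (1, b"old")
expected_version[7] = 2
assert optimistic_read(7) == b"new"
```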
14. Client-Managed Durability
● Key observations
○ Transaction semantics dictate when writes must be safe (durable) and when the DBMS can tolerate unsafe writes
○ Database systems explicitly enforce data durability at the necessary points
■ here, durable means "replicated W times"
15. Client-Managed Durability
● DAX has two kinds of writes
○ data writes, with no durability guarantee
○ synchronized writes, which ensure the durability of all previous writes
● The DBMS issues a synchronized write only when its transaction protocol demands it
● The time between a data write and the synchronized write lets DAX hide write latency
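The two write kinds can be sketched as below. The class and method names are assumptions, and replication is elided: the point is only the contract, in which data writes return immediately, while a synchronized write blocks until every earlier write is durable, much like a buffered WAL write versus an fsync.

```python
# Sketch of client-managed durability: data_write is fire-and-forget;
# sync_write blocks until all earlier writes reach W replicas
# (the replication itself is elided in this toy).
import threading

class DaxClient:
    def __init__(self):
        self.pending = []          # writes issued but not yet W-replicated
        self.lock = threading.Lock()

    def data_write(self, block_id, data):
        with self.lock:
            self.pending.append((block_id, data))   # returns immediately

    def sync_write(self, block_id, data):
        self.data_write(block_id, data)
        with self.lock:            # block until all pending writes
            flushed = len(self.pending)   # are acknowledged as durable
            self.pending.clear()
        return flushed

c = DaxClient()
c.data_write(1, b"page")        # e.g., a dirty page flush: may be lost
c.data_write(2, b"page")
n = c.sync_write(3, b"commit")  # e.g., at commit: forces durability
assert n == 3
```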
16. Consistency in DAX
● Key idea: make DAX look like an unreplicated storage system with explicit synchronization points for updates
○ DAX's clients (DBMSs) are designed to work with this model
● Two guarantees:
○ a DBMS sees the latest version as long as it stays up
○ exactly one version survives a database failure; it is the latest synchronized version, or a later one
17. Consistency Example
● Which write should each read see?
● R1 must see W3
● R2 can see W1, W2, or W3
● R3 must see the version seen by R2
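The failure guarantee can be stated as a small check. This is an illustrative encoding, not the paper's formalism; versions 1..3 stand for W1..W3, and supposing that only W1 was synchronized matches the rule that R2 may see any of W1, W2, or W3 after a failure.

```python
# Illustrative check of DAX's failure guarantee: the single surviving
# version must be the latest synchronized version or a later one.
def surviving_version_ok(versions_written, last_synced, survivor):
    return survivor in versions_written and survivor >= last_synced

writes = [1, 2, 3]                 # W1, W2, W3; suppose W1 was synchronized
assert surviving_version_ok(writes, last_synced=1, survivor=2)      # W2 may survive
assert not surviving_version_ok(writes, last_synced=1, survivor=0)  # pre-W1 may not
```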
18. Experiment Configurations
● 9 DAX servers
● 6 replicas distributed in either 1 EC2 region (1 zone) or 3 EC2 regions (Virginia, California, Oregon)
[Diagram: replicas r1–r6 placed in a single zone versus spread across the three regions]
20. Summary
● DAX: disaster-tolerant storage for DBMS hosting
○ scalable in storage capacity, bandwidth, and number of database tenants
● Data replication across data centers
● Optimistic I/O and client-managed durability reduce request latencies
● DBMS-appropriate consistency guarantee, even across failures