3. Companies keep growing
more and more data and traffic
more and more computing resources needed
SOLUTION: SCALING
4. vertical scalability = scale up
• single server
• more performance → more resources (CPUs, storage, memory)
• volumes increase → more difficult and expensive to scale
• not reliable: individual machine failures are common
horizontal scalability = scale out
• cluster of servers
• more performance → more servers
• cheaper hardware (more likely to fail)
• volumes increase → complexity ~ constant, costs ~ linear
• reliability: CAN operate despite failures
• complex: use only if benefits are compelling
6. All data on a single node
Use cases
• data usage = mostly processing aggregates
• many graph databases
Pros/Cons
• RDBMSs or NoSQL databases
• simplest and most often recommended option
• only vertical scalability
8. Shared everything
• every node has access to all data
• all nodes share memory and disk storage
• used by some RDBMSs
9. Shared disk
• every node has access to all data
• all nodes share disk storage
• used by some RDBMSs
10. Shared nothing
• nodes are independent and self-sufficient
• no shared memory or disk storage
• used by some RDBMSs and by all NoSQL databases
11. Sharding
different data put on different nodes
Replication
same data copied over multiple nodes
Sharding + replication
the two orthogonal techniques combined
12. Different parts of the data are put onto different nodes
• data accessed together (aggregates) is kept on the same node
• clumps are arranged by physical location, to keep the load even,
or according to any domain-specific access rule
[Diagram: data items A–I distributed across three shards; each shard serves reads (R) and writes (W) for its own items]
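A minimal sketch of the idea in Python (toy code with hypothetical names, not any particular product's API): each aggregate is routed to a shard by hashing its key, so everything stored under one key lands on the same node.

```python
import hashlib

class ShardedStore:
    """Toy shard router: each aggregate key maps to exactly one shard."""

    def __init__(self, num_shards):
        # one dict per shard stands in for one node's local storage
        self.shards = [{} for _ in range(num_shards)]

    def _shard_for(self, key):
        # stable hash so the same key always routes to the same shard
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.shards[int(digest, 16) % len(self.shards)]

    def write(self, key, aggregate):
        self._shard_for(key)[key] = aggregate

    def read(self, key):
        return self._shard_for(key).get(key)

store = ShardedStore(num_shards=3)
store.write("customer:42", {"name": "Ann", "orders": [1, 2]})  # whole aggregate on one shard
print(store.read("customer:42"))
```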
13. Use cases
• different people access different parts of the dataset
• to horizontally scale writes
Pros/Cons
• "manual" sharding is possible with every RDBMS or NoSQL store
• better read performance
• better write performance
• low resilience: when a node fails its data becomes unavailable
(only the other nodes' data remains available)
• high licensing costs for RDBMSs
• difficult or impossible cluster-level operations
(querying, transactions, consistency controls)
14. Data replicated across multiple nodes
• one designated master (primary) node
  ◦ contains the original
  ◦ processes writes and passes them on to the slaves
• all other nodes are slaves (secondary)
  ◦ contain the copies
  ◦ synchronized with the master during a replication process
15. [Diagram of master-slave replication: the master node holds the original copies of A, B, C and accepts both reads (R) and writes (W); two slave nodes hold replicas of A, B, C and serve reads (R) only]
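A minimal sketch of the read/write routing this implies (toy code; replication is done synchronously here only to keep the example short): writes always go through the master, reads are spread over the slaves.

```python
import random

class MasterSlaveCluster:
    """Toy master-slave replication: one writable master, read-only slaves."""

    def __init__(self, num_slaves):
        self.master = {}
        self.slaves = [{} for _ in range(num_slaves)]

    def write(self, key, value):
        # all writes are processed by the master ...
        self.master[key] = value
        # ... and passed on to the slaves (synchronously, for simplicity)
        for slave in self.slaves:
            slave[key] = value

    def read(self, key):
        # reads are load-balanced across the slaves
        return random.choice(self.slaves).get(key)

cluster = MasterSlaveCluster(num_slaves=2)
cluster.write("A", 100)
print(cluster.read("A"))  # -> 100
```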
16. Use cases
• load-balancing cluster: data usage is mostly read-intensive
• failover cluster: single server with a hot backup
Pros/Cons
• better read performance
• worse write performance (write management overhead)
• high read (slave) resilience:
master failure → slaves can still handle read requests
• low write (master) resilience:
master failure → no writes until the old master is back or a new one is appointed
• read inconsistencies: an update may not yet be propagated to all slaves
• master = bottleneck and single point of failure
• high licensing costs for RDBMSs
17. Data replicated across multiple nodes
• all nodes are peers (equal weight): no master, no slaves
• all nodes can accept both reads and writes
18. [Diagram of peer-to-peer replication: three peer nodes each hold a replica of A, B, C and each accepts both reads (R) and writes (W)]
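The same toy setup without a master: any peer may accept a write and then propagates it to the others, which is exactly what opens the door to the write-write conflicts discussed later.

```python
import random

class PeerToPeerCluster:
    """Toy peer-to-peer replication: every node accepts reads and writes."""

    def __init__(self, num_peers):
        self.peers = [{} for _ in range(num_peers)]

    def write(self, key, value):
        # any peer may accept the write ...
        target = random.choice(self.peers)
        target[key] = value
        # ... and then propagates it to all other peers
        for peer in self.peers:
            if peer is not target:
                peer[key] = value

    def read(self, key):
        # reads are served by any peer
        return random.choice(self.peers).get(key)

cluster = PeerToPeerCluster(num_peers=3)
cluster.write("B", 7)
print(cluster.read("B"))  # -> 7
```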
19. Use cases
• load-balancing cluster: data usage is read- and write-intensive
• need to scale out more easily
Pros/Cons
• better read performance
• better write performance
• high resilience:
node failure → reads and writes are handled by the other nodes
• read inconsistencies: an update may not yet be propagated to all nodes
• write inconsistencies: two nodes may update the same record at the same time
• high licensing costs for RDBMSs
20. Sharding + master-slave replication
• multiple masters
• each data item has a single master
• node configurations:
  ◦ master
  ◦ slave
  ◦ master for some data / slave for other data
Sharding + peer-to-peer replication
21. [Diagram of sharding + master-slave replication: items A–I are split into three shards and each shard is replicated on two nodes; for every shard one node acts as master (accepting writes, W) and the other as slave (serving reads, R), so a single node can be master for some shards and slave for others]
22. [Diagram of sharding + peer-to-peer replication: items A–I are split into shards, each shard is replicated on two of the peers, and every peer accepts both reads (R) and writes (W) for the shards it holds]
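A minimal sketch of how the two techniques compose (toy code, hypothetical names): route by key to a shard first, then replicate inside that shard, with the first replica playing the master role.

```python
import hashlib, random

class ShardedReplicatedStore:
    """Toy sharding + master-slave replication:
    route by key to a shard, then replicate inside that shard."""

    def __init__(self, num_shards, replicas_per_shard):
        # a shard is a list of replica dicts; replica 0 plays the master role
        self.shards = [[{} for _ in range(replicas_per_shard)]
                       for _ in range(num_shards)]

    def _shard_for(self, key):
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.shards[int(digest, 16) % len(self.shards)]

    def write(self, key, value):
        shard = self._shard_for(key)
        for replica in shard:            # master first, then propagation (synchronous here)
            replica[key] = value

    def read(self, key):
        shard = self._shard_for(key)
        return random.choice(shard[1:]).get(key)   # reads go to the slave replicas

store = ShardedReplicatedStore(num_shards=3, replicas_per_shard=3)
store.write("order:7", {"total": 99})
print(store.read("order:7"))
```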
23. Oracle Database
• Oracle RAC: shared everything
Microsoft SQL Server
• all editions: shared nothing; master-slave replication
IBM DB2
• DB2 pureScale: shared disk
• DB2 HADR: shared nothing; master-slave replication (failover cluster)
24. Oracle MySQL
• MySQL Cluster: shared nothing; sharding, replication, sharding + replication
PostgreSQL (The PostgreSQL Global Development Group)
• PGCluster-II: shared disk
• Postgres-XC: shared nothing; sharding, replication, sharding + replication
26. Inconsistent write = write-write conflict
multiple writes of the same data at the same time
(highly likely with peer-to-peer replication)
Inconsistent read = read-write conflict
read in the middle of someone else's write
27. • Pessimistic approach
prevent conflicts from occurring
• Optimistic approach
detect conflicts and fix them
28. Implementation
• write locks → acquire a lock before updating a value
(only one lock at a time can be taken)
Pros/Cons
• often severely degrade system responsiveness
• often lead to deadlocks (hard to prevent/debug)
• rely on a consistent serialization of the updates*
* sequential consistency:
ensuring that all nodes apply operations in the same order
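A minimal single-process sketch of the pessimistic approach, using Python's threading lock as a stand-in for the (much harder) distributed lock manager: the lock is taken before the read-modify-write, so conflicting updates are prevented rather than detected.

```python
import threading

class LockedCounter:
    """Pessimistic concurrency control: take a write lock before every update."""

    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()   # stand-in for a distributed lock manager

    def increment(self):
        with self._lock:                # only one writer at a time can hold the lock
            current = self.value        # read ...
            self.value = current + 1    # ... modify ... write, with no interleaving

counter = LockedCounter()
threads = [threading.Thread(target=counter.increment) for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)  # -> 100, no lost updates
```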
29. Implementation
• conditional updates → test a value before updating it
(to see if it has changed since the last read)
• merged updates → merge conflicting updates somehow
(save the updates, record the conflict, and merge them somehow)
Pros/Cons
• conditional updates
rely on a consistent serialization of the updates*
* sequential consistency:
ensuring that all nodes apply operations in the same order
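A minimal sketch of a conditional update, i.e. a compare-and-set on a version number (toy code, hypothetical API): the write succeeds only if the value has not changed since it was read; otherwise the conflict is detected and the caller retries.

```python
class VersionedStore:
    """Optimistic concurrency control via conditional updates (compare-and-set)."""

    def __init__(self):
        self.data = {}          # key -> (version, value)

    def read(self, key):
        return self.data.get(key, (0, None))

    def conditional_write(self, key, expected_version, new_value):
        version, _ = self.data.get(key, (0, None))
        if version != expected_version:
            return False                      # conflict detected: somebody wrote in between
        self.data[key] = (version + 1, new_value)
        return True

store = VersionedStore()

def add_50(key):
    while True:                               # retry loop on conflict
        version, value = store.read(key)
        if store.conditional_write(key, version, (value or 0) + 50):
            return

add_50("account:A")
add_50("account:A")
print(store.read("account:A"))  # -> (2, 100)
```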
30. • Logical consistency
different data make sense together
• Replication consistency
same data → same value on different replicas
• Read-your-writes consistency
users continue seeing their updates
31. ACID transactions → aggregate-ignorant DBs
Partially atomic updates → aggregate-oriented DBs
• atomic updates within an aggregate
• no atomic updates between aggregates
• updates of multiple aggregates: inconsistency window
• replication can lengthen inconsistency windows
32. Eventual consistency
• nodes may have replication inconsistencies:
stale (out of date) data
• eventually all nodes will be synchronized
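A minimal sketch of where stale reads come from (toy code in which replication is applied only when sync() is called, standing in for asynchronous propagation): a read routed to the slave right after a write still returns the old value, until the nodes are eventually synchronized.

```python
class EventuallyConsistentPair:
    """One master, one slave; replication is applied only when sync() is called."""

    def __init__(self):
        self.master = {}
        self.slave = {}
        self.pending = []          # replication log not yet applied on the slave

    def write(self, key, value):
        self.master[key] = value   # acknowledged as soon as the master has it
        self.pending.append((key, value))

    def read_from_slave(self, key):
        return self.slave.get(key)

    def sync(self):
        for key, value in self.pending:
            self.slave[key] = value
        self.pending.clear()

db = EventuallyConsistentPair()
db.write("x", "new")
print(db.read_from_slave("x"))  # -> None: stale, update not yet propagated
db.sync()                       # eventually the nodes are synchronized
print(db.read_from_slave("x"))  # -> 'new'
```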
33. Session consistency
• within a user's session there is read-your-writes consistency
(no stale data is read from one node after an update on another)
• consistency is lost if
  ◦ the session ends
  ◦ the system is accessed simultaneously from different PCs
• implementations
  ◦ sticky session / session affinity = sessions tied to one node
    ▪ affects load balancing
    ▪ quite intricate with master-slave replication
  ◦ version stamps (sketched below)
    ▪ track the latest version stamp seen by a session
    ▪ ensure that all interactions with the data store include it
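A minimal sketch of the version-stamp technique (toy code, hypothetical names): every write returns a stamp, the session remembers the highest stamp it has seen, and every read presents that stamp so a replica that is still behind can be detected.

```python
class Replica:
    """A replica that knows the version stamp of its latest applied update."""

    def __init__(self):
        self.data = {}
        self.applied_stamp = 0

    def apply(self, stamp, key, value):
        self.data[key] = value
        self.applied_stamp = stamp

class Session:
    """Tracks the highest version stamp this session has seen (read-your-writes)."""

    def __init__(self):
        self.last_seen_stamp = 0

    def write(self, master, key, value):
        stamp = master.applied_stamp + 1
        master.apply(stamp, key, value)       # slaves would catch up asynchronously
        self.last_seen_stamp = stamp
        return stamp

    def read(self, replica, key):
        if replica.applied_stamp < self.last_seen_stamp:
            raise RuntimeError("replica is stale for this session, try another node")
        return replica.data.get(key)

master, slave = Replica(), Replica()
session = Session()
session.write(master, "profile", "new value")
print(session.read(master, "profile"))   # fine: the master has the update
# session.read(slave, "profile")         # would raise: the slave has not caught up yet
```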
35. Consistency
all nodes see the same data at the same time
Latency
the response time in interactions between nodes
Availability
• every nonfailing node must reply to requests
• the limit of latency that we are prepared to tolerate:
once latency gets too high, we give up and treat the data as unavailable
Partition tolerance
the cluster can survive communication breakages
(separating it into partitions unable to communicate with each other)
36. Transaction to transfer $50 from account A to account B:
1) read(A)
2) A = A - 50
3) write(A)
4) read(B)
5) B = B + 50
6) write(B)
• Atomicity
  ◦ transaction fails after 3 and before 6 → the system should
ensure that its updates are not reflected in the database
• Consistency
  ◦ A + B is unchanged by the execution of the transaction
37. Transaction to transfer $50 from account A to account B:
1) read(A)
2) A = A - 50
3) write(A)
4) read(B)
5) B = B + 50
6) write(B)
• Isolation
  ◦ another transaction will see inconsistent data between 3 and 6
(A + B will be less than it should be)
  ◦ isolation can be ensured trivially by running transactions
serially → performance issue
• Durability
  ◦ user notified that the transaction completed ($50 transferred)
→ the transaction's updates must persist despite failures
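The same transfer written as an actual transaction; a small sketch using Python's built-in sqlite3 module (a single-server database, so no distribution issues): either both updates commit, or a failure rolls both back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 100), ("B", 0)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'A'")
        conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'B'")
        # a failure here (e.g. an exception between the two updates) would undo BOTH (atomicity)
except Exception:
    pass

print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# -> [('A', 50), ('B', 50)]; A + B is still 100 (consistency)
```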
38. BASE = Basically Available, Soft state, Eventually consistent
Soft state and eventual consistency are techniques that work
well in the presence of partitions and thus promote availability
39. Given the three properties of
Consistency, Availability and
Partition tolerance,
you can only get two
40. C
a single node that is up can reasonably keep consistency
A
one node: if it's up, it's available
P
a single machine can't partition
41. AP (giving up C)
partition → an update on one node = inconsistency
42. CP (giving up A)
partition → consistency is kept only if one nonfailing
node stops replying to requests
43. CA (giving up P)
nodes can communicate → C and A can be preserved
partition → all the nodes in one partition must be
turned off (failed nodes do not violate A)
difficult and expensive
44. ACID databases
focus on consistency first and availability second
BASE databases
focus on availability first and consistency second
45. Single server
• no partitions
• consistency versus performance: relaxed isolation
levels or no transactions
Cluster
• consistency versus latency/availability
• durability versus performance (e.g. in-memory DBs)
• durability versus latency (e.g. the master
acknowledges the update to the client only after
it has been acknowledged by some slaves)
46. strong write consistency → write to the master
strong read consistency → read from the master
47. N = replication factor
(the nodes involved in replicating an item, NOT all the nodes in the cluster)
W = nodes that must confirm a write
R = nodes that must be contacted for a consistent read
write quorum: W > N/2
read quorum: R + W > N
Consistency is decided on a per-operation basis:
choose the most appropriate combination of advantages and problems (see the sketch below)
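A minimal sketch of quorum reads and writes under these definitions (toy code; real stores add version vectors, hinted handoff and so on): with N = 3, W = 2, R = 2 both conditions hold, so every read quorum overlaps every write quorum and sees the latest confirmed write.

```python
class QuorumStore:
    """Toy quorum replication: N replicas, writes wait for W acks, reads ask R replicas."""

    def __init__(self, n, w, r):
        assert w > n / 2, "write quorum: W > N/2"
        assert r + w > n, "read quorum: R + W > N"
        self.n, self.w, self.r = n, w, r
        self.replicas = [{} for _ in range(n)]   # key -> (version, value)

    def write(self, key, value):
        version = max(rep.get(key, (0, None))[0] for rep in self.replicas) + 1
        # here the first W replicas acknowledge synchronously; the rest would catch up later
        for rep in self.replicas[:self.w]:
            rep[key] = (version, value)

    def read(self, key):
        # ask R replicas (here: the last R, which only partly overlap the W written above)
        # and keep the value with the highest version number
        answers = [rep.get(key, (0, None)) for rep in self.replicas[-self.r:]]
        return max(answers)[1]

store = QuorumStore(n=3, w=2, r=2)
store.write("x", "v1")
print(store.read("x"))  # -> 'v1': any R replicas overlap the W that confirmed the write
```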