Self-healing data

  1. Self-healing data An introduction to consistency models, Quorums and CRDTs Uwe Friedrichsen, codecentric AG, 2014-2015
  2. @ufried Uwe Friedrichsen | uwe.friedrichsen@codecentric.de | http://slideshare.net/ufried | http://ufried.tumblr.com
  3. Why NoSQL? •  Scalability •  Easier schema evolution •  Availability on unreliable OTS hardware •  It's more fun ...
  5. Challenges •  Giving up ACID transactions •  (Temporal) anomalies and inconsistencies → short-term due to replication or long-term due to partitioning. It might happen. Thus, it will happen!
  6. [CAP triangle: C Consistency, A Availability, P Partition Tolerance] Strict Consistency: ACID / 2PC · Strong Consistency: Quorum R&W / Paxos · Eventual Consistency: CRDT / Gossip / Hinted Handoff
  7. Strict Consistency (CA) •  Great programming model → no anomalies or inconsistencies need to be considered •  Does not scale well → best for single-node databases. "We know ACID – it works!" "We know 2PC – it sucks!" Use for moderate data amounts.
  8. And what if I need more data? •  Distributed datastore •  Partition tolerance is a must •  Need to give up strict consistency (CP or AP)
  9. Strong Consistency (CP) •  Majority-based consistency model → can tolerate up to N nodes failing out of 2N+1 nodes •  Good programming model → single-copy consistency •  Trades availability for consistency → in case of partitioning. Paxos (for sequential consistency), Quorum-based reads & writes
  10. Quorum-based reads & writes •  N – number of nodes (replicas) •  R – number of reads delivering the same result required •  W – number of confirmed writes required. Conditions: W > N/2 and R + W > N
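A minimal sketch of these two conditions in Python (the helper name check_quorum is made up for illustration); it also derives how many node failures each operation can survive:

```python
def check_quorum(n: int, r: int, w: int) -> dict:
    """Quorum conditions for N replicas, R matching reads, W confirmed writes.

    W > N / 2   -- the write quorum is a majority of the replicas
    R + W > N   -- every read quorum overlaps every write quorum
    """
    return {
        "write_majority": w > n / 2,
        "quorums_overlap": r + w > n,
        "write_failures_tolerated": n - w,  # writes still succeed with N - W nodes down
        "read_failures_tolerated": n - r,   # reads still succeed with N - R nodes down
    }
```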
  11. Example 1 •  3 replica nodes → this is: N = 3 •  W > N/2 → W > 3/2 → W > 1.5 → let's pick: W = 2 (can tolerate failure of 1 node) •  R + W > N → R + 2 > 3 → R > 1 → let's pick: R = 2 (can tolerate failure of 1 node)
  12. [Diagram: client and three replica nodes]
  13. [Diagram: client sends the write W to all three nodes]
  14. [Diagram: two nodes confirm the write, one fails → 2 confirmed writes, W = 2 satisfied ✔]
  15. [Diagram: client and three replica nodes]
  16. [Diagram: client reads from all three nodes, all respond]
  17. [Diagram: two nodes return the same value, one returns a stale value → 2 matching reads, R = 2 satisfied ✔]
  18. [Diagram: client and three replica nodes]
  19. [Diagram: client reads from all three nodes, one node fails]
  20. [Diagram: the two remaining responses differ (1× each) → only 1 matching read, R = 2 not met ✖]
  21. Example 2 •  3 replica nodes → this is: N = 3 •  W > N/2 → W > 3/2 → W > 1.5 → let's pick: W = 3 (strict consistency – does not tolerate any failure) •  R + W > N → R + 3 > 3 → R > 0 → let's pick: R = 1 (single read from any node sufficient)
  22. Example 3 •  5 replica nodes → this is: N = 5 •  W > N/2 → W > 5/2 → W > 2.5 → let's pick: W = 3 (can tolerate failure of 2 nodes) •  R + W > N → R + 3 > 5 → R > 2 → let's pick: R = 3 (can tolerate failure of 2 nodes)
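Running the check_quorum sketch from above against all three examples reproduces these numbers:

```python
for n, r, w in [(3, 2, 2), (3, 1, 3), (5, 3, 3)]:   # Examples 1, 2, 3
    print((n, r, w), check_quorum(n, r, w))
# (3, 2, 2) -> both conditions hold; reads and writes each tolerate 1 failed node
# (3, 1, 3) -> both conditions hold, but W = N: a single failed node blocks writes
# (5, 3, 3) -> both conditions hold; reads and writes each tolerate 2 failed nodes
```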
  23. Limitations of QB R&W •  Requires fixed set of replica nodes → dynamic adding and removal of nodes not possible •  Client-centric consistency → consistency only perceived on the client, not on the nodes •  Requires additional measures on the nodes → nodes need to implement at least eventual consistency. Use if client-perceived strong consistency is sufficient and simplicity trumps formal precision.
  24. And what if I need more availability? •  Need to give up strong consistency (CP) •  Relax required consistency properties even more •  Leads to eventual consistency (AP)
  25. Eventual Consistency (AP) •  Gives up some consistency guarantees → no sequential consistency, anomalies become visible •  Maximum availability possible → can tolerate up to N-1 nodes failing out of N nodes •  Challenging programming model → anomalies usually need to be resolved explicitly. Gossip / Hinted Handoffs, CRDT
  26. Conflict-free Replicated Data Types •  Eventually consistent, self-stabilizing data structures •  Designed for maximum availability •  Tolerates up to N-1 out of N nodes failing State-based CRDT: Convergent Replicated Data Type (CvRDT) Operation-based CRDT: Commutative Replicated Data Type (CmRDT)
  27. A bit of theory first ...
  28. Convergent Replicated Data Type State-based CRDT – CvRDT •  All replicas (usually) connected •  Exchange state between replicas, calculate new state on target replica •  State transfer at least once over eventually-reliable channels •  The set of possible states forms a semilattice •  Partially ordered set of elements where all subsets have a Least Upper Bound (LUB) •  All state changes advance upwards with respect to the partial order
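As a minimal sketch of that contract (the Python interface below is made up for illustration): a CvRDT only has to provide a merge that computes the least upper bound of two states, which makes merging commutative, associative, and idempotent:

```python
from abc import ABC, abstractmethod

class CvRDT(ABC):
    """State-based CRDT: replica states form a join semilattice.

    merge() must compute the least upper bound (LUB) of self and other.
    That makes it commutative, associative, and idempotent, so replicas
    can exchange full states in any order, any number of times.
    """

    @abstractmethod
    def merge(self, other: "CvRDT") -> None:
        ...
```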
  29. Commutative Replicated Data Type Operation-based CRDT – CmRDT •  All replicas (usually) connected •  Exchange update operations between replicas, apply on target replica •  Reliable broadcast with ordering guarantee for non-concurrent updates •  Concurrent updates must be commutative
  30. That's been enough theory ...
  31. Counter
  32. Op-based Counter Data Integer i Init i ≔ 0 Query return i Operations increment(): i ≔ i + 1 decrement(): i ≔ i - 1
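The slide's pseudocode rendered as a minimal Python sketch (class name made up); increment and decrement commute, so replicas may apply the broadcast operations in any order:

```python
class OpBasedCounter:
    """Op-based (CmRDT) counter: operations, not state, are replicated.

    Assumes the reliable broadcast from slide 29; the broadcast itself
    is omitted here.
    """

    def __init__(self):
        self.i = 0             # Init: i := 0

    def value(self) -> int:    # Query: return i
        return self.i

    def increment(self):       # Operation: i := i + 1 (broadcast to all replicas)
        self.i += 1

    def decrement(self):       # Operation: i := i - 1 (broadcast to all replicas)
        self.i -= 1
```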
  33. State-based G-Counter (grow only) (Naïve approach) Data Integer i Init i ≔ 0 Query return i Update increment(): i ≔ i + 1 Merge(j): i ≔ max(i, j)
  34. [Diagram: with the naïve approach, R1 and R3 each increment independently (i = 1 on both); merging with max() yields i = 1 everywhere instead of the correct total of 2 – one increment is lost]
  35. State-based G-Counter (grow only) (Vector-based approach) Data Integer V[] / one element per replica set Init V ≔ [0, 0, ... , 0] Query return ∑i V[i] Update increment(): V[i] ≔ V[i] + 1 / i is replica set number Merge(V') ∀i ∈ [0, n-1] : V[i] ≔ max(V[i], V'[i])
  36. [Diagram: with the vector-based approach, R1's increment yields V = [1, 0, 0] and R3's yields V = [0, 0, 1]; element-wise max merges them to V = [1, 0, 1], so the query correctly returns 2]
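A minimal Python sketch of the vector-based G-Counter (class name made up); replica ids index the vector, as on the slide:

```python
class GCounter:
    """State-based grow-only counter (CvRDT), one vector slot per replica."""

    def __init__(self, n_replicas: int, replica_id: int):
        self.v = [0] * n_replicas            # Init: V := [0, 0, ..., 0]
        self.replica_id = replica_id

    def value(self) -> int:                  # Query: sum over all slots
        return sum(self.v)

    def increment(self):                     # Update: only touch our own slot
        self.v[self.replica_id] += 1

    def merge(self, other: "GCounter"):      # Merge: element-wise max (the LUB)
        self.v = [max(a, b) for a, b in zip(self.v, other.v)]

# Replaying the diagram: R1 and R3 each increment once, R2 merges both.
r1, r2, r3 = GCounter(3, 0), GCounter(3, 1), GCounter(3, 2)
r1.increment(); r3.increment()
r2.merge(r1); r2.merge(r3)
assert r2.value() == 2                       # no increment is lost
```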
  37. State-based PN-Counter (pos./neg.) •  Simple vector approach as with G-Counter does not work •  Violates monotonicity requirement of semilattice •  Need to use two vectors •  Vector P to track increments •  Vector N to track decrements •  Query result is ∑i P[i] – N[i]
  38. State-based PN-Counter (pos./neg.) Data Integer P[], N[] / one element per replica set Init P ≔ [0, 0, ... , 0], N ≔ [0, 0, ... , 0] Query return ∑i P[i] – N[i] Update increment(): P[i] ≔ P[i] + 1 / i is replica set number decrement(): N[i] ≔ N[i] + 1 / i is replica set number Merge(P', N') ∀i ∈ [0, n-1] : P[i] ≔ max(P[i], P'[i]) ∀i ∈ [0, n-1] : N[i] ≔ max(N[i], N'[i])
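The same slide as a Python sketch (again, names made up): two grow-only vectors, merged independently:

```python
class PNCounter:
    """State-based counter supporting decrements: two G-Counter vectors."""

    def __init__(self, n_replicas: int, replica_id: int):
        self.p = [0] * n_replicas            # P: one increment slot per replica
        self.n = [0] * n_replicas            # N: one decrement slot per replica
        self.replica_id = replica_id

    def value(self) -> int:                  # Query: sum(P) - sum(N)
        return sum(self.p) - sum(self.n)

    def increment(self):
        self.p[self.replica_id] += 1

    def decrement(self):
        self.n[self.replica_id] += 1

    def merge(self, other: "PNCounter"):     # element-wise max of both vectors
        self.p = [max(a, b) for a, b in zip(self.p, other.p)]
        self.n = [max(a, b) for a, b in zip(self.n, other.n)]
```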
  39. Non-negative Counter Problem: How to check a global invariant with local information only? •  Approach 1: Only dec when local state is > 0 •  Concurrent decs could still lead to negative value •  Approach 2: Externalize negative values as 0 •  inc(negative value) == noop(), violates counter semantics •  Approach 3: Local invariant – only allow dec if P[i] - N[i] > 0 •  Works, but may be too strong a limitation (see the sketch below) •  Approach 4: Synchronize •  Works, but violates assumptions and prerequisites of CRDTs
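A sketch of approach 3, extending the hypothetical PNCounter above: each replica may only decrement what it has itself incremented, which keeps the global value non-negative without any coordination:

```python
class BoundedPNCounter(PNCounter):
    def decrement(self):
        # local invariant: this replica may only take back its own increments,
        # so the global sum over all replicas can never drop below zero
        if self.p[self.replica_id] - self.n[self.replica_id] <= 0:
            raise ValueError("local quota exhausted - cannot decrement")
        super().decrement()
```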
  40. Sets
  41. Op-based Set (Naïve approach) Data Set S Init S ≔ {} Query(e) return e ∈ S Operations add(e): S ≔ S ∪ {e} remove(e): S ≔ S \ {e}
  42. [Diagram: with the naïve approach, concurrent add(e) and rmv(e) operations arrive at the replicas in different orders; since add and remove do not commute, the replicas end up in divergent states (S = {e} on some, S = {} on others)]
  43. State-based G-Set (grow only) Data Set S Init S ≔ {} Query(e) return e ∈ S Update add(e): S ≔ S ∪ {e} Merge(S') S ≔ S ∪ S'
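As a Python sketch (hypothetical class name): set union is the least upper bound, so merging is trivially commutative, associative, and idempotent:

```python
class GSet:
    """State-based grow-only set (CvRDT); elements can never be removed."""

    def __init__(self):
        self.s = set()

    def query(self, e) -> bool:       # Query(e): return e in S
        return e in self.s

    def add(self, e):                 # Update: S := S union {e}
        self.s.add(e)

    def merge(self, other: "GSet"):   # Merge: set union is the LUB
        self.s |= other.s
```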
  44. State-based 2P-Set (two-phase) Data Set A, R / A: added, R: removed Init A ≔ {}, R ≔ {} Query(e) return e ∈ A ∧ e ∉ R Update add(e): A ≔ A ∪ {e} remove(e): (pre query(e)) R ≔ R ∪ {e} Merge(A', R') A ≔ A ∪ A', R ≔ R ∪ R'
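A sketch of the 2P-Set (names made up); note the consequence of the removed set acting as tombstones: once removed, an element can never be re-added:

```python
class TwoPSet:
    """State-based two-phase set: a grow-only added set plus tombstones."""

    def __init__(self):
        self.a = set()                     # A: added elements
        self.r = set()                     # R: removed elements (tombstones)

    def query(self, e) -> bool:            # present iff added and not removed
        return e in self.a and e not in self.r

    def add(self, e):
        self.a.add(e)

    def remove(self, e):
        if self.query(e):                  # pre-condition from the slide
            self.r.add(e)

    def merge(self, other: "TwoPSet"):     # union both sets; remove wins forever
        self.a |= other.a
        self.r |= other.r
```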
  45. Op-based OR-Set (observed-remove) Data Set S / set of pairs { (element e, unique tag u), ... } Init S ≔ {} Query(e) return ∃u : (e, u) ∈ S Operations add(e): S ≔ S ∪ { (e, u) } / u is generated unique tag remove(e): pre query(e) R ≔ { (e, u) | ∃u : (e, u) ∈ S } / at source ("prepare") S ≔ S \ R / downstream ("execute")
  46. [Diagram: with the OR-Set, each add(e) creates a fresh unique tag (ea, eb); rmv(e) removes only the tags observed at its source (ea), so the concurrent add(eb) survives and all replicas converge to S = {eb}]
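A minimal Python sketch of the OR-Set (class name made up; the operation broadcast from slide 29 is omitted): unique tags make a concurrent add win over a remove:

```python
import uuid

class ORSet:
    """Op-based observed-remove set: stores (element, unique tag) pairs."""

    def __init__(self):
        self.s = set()

    def query(self, e) -> bool:               # Query(e): any tag for e present?
        return any(elem == e for elem, _ in self.s)

    def add(self, e):                          # fresh tag per add -> re-adding works
        self.s.add((e, uuid.uuid4()))

    def remove(self, e):
        # "prepare" at source: collect only the pairs observed right now;
        # in a real CmRDT this set R is broadcast and applied downstream
        observed = {(elem, u) for (elem, u) in self.s if elem == e}
        # "execute": drop exactly those pairs -- concurrently added,
        # not-yet-observed tags survive, so add wins over remove
        self.s -= observed
```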
  47. More datatypes •  Register •  Dictionary (Map) •  Tree •  Graph •  Array •  List plus more representations for each datatype
  48. Garbage collection •  Sets could grow infinitely in worst case •  Garbage collection possible, but a bit tricky •  Only remove updates that can be deleted safely and were received by all replicas •  Usually implemented using vector clocks •  Can tolerate up to n-1 crashes •  Live only while no replicas are crashed •  Can induce surprising behavior sometimes •  Sometimes stronger consensus is needed (Paxos, ...)
  49. Limitations of CRDTs •  Very weak consistency guarantees → strives for "quiescent consistency" •  Eventually consistent → not suitable for a high-volume ID generator or the like •  Not easy to understand and model •  Not all data structures representable. Use if availability is extremely important.
  50. Further reading
      1. Shapiro et al., Conflict-free Replicated Data Types, Inria research report, 2011
      2. Shapiro et al., A comprehensive study of Convergent and Commutative Replicated Data Types, Inria research report, 2011
      3. Basho Tag Archives: CRDT, https://basho.com/tag/crdt/
      4. Leslie Lamport, Time, clocks, and the ordering of events in a distributed system, Communications of the ACM (1978)
  51. Wrap-up •  CAP requires rethinking consistency •  Strict Consistency → ACID / 2PC •  Strong Consistency → Quorum-based R&W, Paxos •  Eventual Consistency → CRDT, Gossip, Hinted Handoffs. Pick your consistency model based on your consistency and availability requirements.
  52. The real world is not ACID. Thus, it is perfectly fine to go for a relaxed consistency model.
  53. @ufried Uwe Friedrichsen | uwe.friedrichsen@codecentric.de | http://slideshare.net/ufried | http://ufried.tumblr.com