Modern Algorithms and Data Structures - 1. Bloom Filters, Merkle Trees

Description

The first part of a series of talks about modern algorithms and data structures used by NoSQL databases like HBase and Cassandra: an explanation of Bloom Filters and several derivatives, and of Merkle Trees.

Transcript

  1. 1. Lorenzo Alberton @lorenzoalberton “Modern” Algorithms and Data Structures Part 1 Bloom Filters, Merkle Trees Cassandra-London, Monday 18th April 2011 1
  2. 2. Bloom Filters Burton Howard Bloom, 1970 http://portal.acm.org/citation.cfm?doid=362686.362692 2
  3. 3. Bloom Filter Space-efficient probabilistic data structure used to test set membership http://en.wikipedia.org/wiki/Bloom_filter 3
  4. 4. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set 4
  5. 5. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set. Hash Table ⇒ chance of collision: hash(x) and hash(y) may map to the same bucket. 4
  6. 6. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set. Hash Table ⇒ chance of collision: hash(x) and hash(y) may map to the same bucket. False positives are possible, false negatives are not. It might be beneficial to build an exception list of known false positives. 4
  7. 7. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set 5
  8. 8. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set Not a Key-Value store 5
  9. 9. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set Not a Key-Value store Array of bits indicating the presence of a key in the filter 5
  10. 10. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set Not a Key-Value store Array of bits indicating the presence of a key in the filter (*) Removing an element from the filter is not possible 5
  11.–20. Bloom Filter: Add & Query. m bits (initially set to 0), k hash functions; if f(x) = A, set S[A] = 1. Add: x is hashed by f, g and h, and the bits S[f(x)], S[g(x)], S[h(x)] are set to 1; y is added the same way via f(y), g(y), h(y). Query: z is hashed by f, g and h; one bit set to 0 ⇒ z ∉ S. [Slides 11 to 20 animate this on the bit array S, positions 1, 2, …, m-1, m] 6
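The add/query mechanics animated on slides 11 to 20 fit in a few lines of code. The sketch below is purely illustrative (it is not Cassandra's org.apache.cassandra.utils.BloomFilter); the values of M and K and the salted-SHA-1 trick for deriving the k hash functions are assumptions made for this example.

    import hashlib

    M = 16          # m bits, all initially 0 (tiny, to mirror the slides)
    K = 3           # k hash functions (f, g, h on the slides)
    S = [0] * M

    def positions(key):
        # Derive k array positions from one digest with k different salts
        # (an assumed construction, matching the "same hash function with
        # different salts" option mentioned on slide 21)
        return [int(hashlib.sha1(f"{i}:{key}".encode()).hexdigest(), 16) % M
                for i in range(K)]

    def add(key):
        for p in positions(key):   # if f(x) = A, set S[A] = 1
            S[p] = 1

    def query(key):
        # One bit set to 0  =>  the key is definitely not in the set
        return all(S[p] == 1 for p in positions(key))

    add("x"); add("y")
    print(query("x"))   # True
    print(query("z"))   # usually False; True only on a (possible) false positive

As the editor's notes point out, the union or intersection of two filters built with the same m and the same hash functions is simply a bitwise OR or AND of their bit arrays.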
  21. 21. Bloom Filter: Hash Functions. k hash functions, uniform random distribution in [1...m): either k different hash functions, the same hash function with different salts, or double/triple hashing gᵢ(x) = h₁(x) + i·h₂(x) mod m [1] (2 hash functions can mimic k hashing functions). Dillinger, Peter C.; Manolios, Panagiotis (2004b), "Bloom Filters in Probabilistic Verification", [1] http://www.ccs.neu.edu/home/pete/pub/bloom-filters-verification.pdf http://www.strchr.com/hash_functions 7
  22. 22. Bloom Filter: Hash Functions. k hash functions, uniform random distribution in [1...m): either k different hash functions, the same hash function with different salts, or double/triple hashing gᵢ(x) = h₁(x) + i·h₂(x) mod m [1] (2 hash functions can mimic k hashing functions). ‣ Cryptographic Hash Functions (MD5, SHA-1, SHA-256, Tiger, Whirlpool ...) ‣ Murmur Hashes http://code.google.com/p/smhasher/ Dillinger, Peter C.; Manolios, Panagiotis (2004b), "Bloom Filters in Probabilistic Verification", [1] http://www.ccs.neu.edu/home/pete/pub/bloom-filters-verification.pdf http://www.strchr.com/hash_functions 7
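Slide 21 cites Dillinger and Manolios for the observation that two hash functions can mimic k of them via gᵢ(x) = h₁(x) + i·h₂(x) mod m. A small sketch of that idea, assuming h₁ and h₂ are taken from the two halves of a single MD5 digest (an implementation choice made here, not something from the talk):

    import hashlib

    def k_positions(key, k, m):
        """Simulate k hash functions with two: g_i(x) = (h1(x) + i*h2(x)) mod m."""
        digest = hashlib.md5(key.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:], "big") | 1   # odd step so every slot is reachable when m is a power of two
        return [(h1 + i * h2) % m for i in range(k)]

    print(k_positions("example-key", k=7, m=1024))   # 7 positions from just 2 hashes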
  23. 23. Bloom Filter: Usage. Guard against expensive operations (like disk access); first line of defence in high-performance (distributed) caches; Peer to Peer communication; Routing; Resource Location ... Squid Proxy Cache, Google BigTable, various RDBMS', Google Chrome, Cisco Routers, Cassandra, HBase 8
  24. 24. Bloom Filter: Usage in Cassandra Used to save I/O during key look-ups (check for non-existent keys) One bloom filter per SSTable. 9
  25. 25. Bloom Filter: Usage in Cassandra Used to save I/O during key look-ups (check for non-existent keys) One bloom filter per SSTable. org.apache.cassandra.utils.BloomFilter 9
  26. 26. Bloom Filter: False Positive Rate m = number of bits in the filter n = number of elements k = number of hashing functions http://pages.cs.wisc.edu/~cao/papers/summary-cache/node8.html 10
  27. 27. Bloom Filter: False Positive Rate m = number of bits in the filter n = number of elements k = number of hashing functions http://pages.cs.wisc.edu/~cao/papers/summary-cache/node8.html 10
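The formula itself appears to have been an image on slides 26 and 27; for reference, the standard result from the cited summary-cache page, with m bits, n elements and k hash functions, is:

    p = (1 - (1 - 1/m)^{kn})^{k} \approx (1 - e^{-kn/m})^{k},
    \qquad
    k_{\mathrm{opt}} = \frac{m}{n}\ln 2 \approx 0.7\,\frac{m}{n}

With the optimal k this gives p ≈ 0.6185^(m/n), so a 1% error rate needs about 9.6 bits per key; for the 10,000-word example on the next slide that is roughly 96,000 bits ≈ 12 KB, and 0.1% needs 9.6 + 4.8 = 14.4 bits per key ≈ 18 KB.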
  28. 28. Bloom Filter: False Positive Rate. A Bloom filter with an optimal value for k and a 1% error rate only needs 9.6 bits per key. Add 4.8 bits/key and the error rate decreases tenfold. 10,000 words, 1% error rate: 7 hash functions, ~12 KB of memory. 10,000 words, 0.1% error rate: 11 hash functions, ~18 KB of memory. http://www.igvita.com/2008/12/27/scalable-datasets-bloom-filters-in-ruby/ 11
  29. 29. Bloom Filter: False Positive Rate. [plot: false positive probability vs. number of elements n, for various filter sizes] http://en.wikipedia.org/wiki/Bloom_filter 12
  30. 30. Counting Bloom Filter. Can handle deletions. Use counters instead of 0/1s: when adding an element, increment the counters; when deleting an element, decrement the counters. Counters must be large enough to avoid overflow (4 bits). [diagram: x and y hashed by f, g, h into the counter array S = 1 0 0 0 1 0 0 0 2 0 0 0 1 0 1] 13
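A counting filter only changes the cell type and the update paths described on slide 30. A minimal sketch, with 4-bit counters modelled as plain integers and the same assumed salted-hash helper as in the earlier example:

    import hashlib

    M, K = 1024, 3
    counters = [0] * M              # in practice ~4-bit counters; plain ints here

    def positions(key):
        return [int(hashlib.sha1(f"{i}:{key}".encode()).hexdigest(), 16) % M
                for i in range(K)]

    def add(key):
        for p in positions(key):
            counters[p] += 1        # increment instead of setting a bit

    def remove(key):
        # Only safe for keys that were actually added; otherwise other keys break
        for p in positions(key):
            counters[p] -= 1

    def query(key):
        return all(counters[p] > 0 for p in positions(key))

    add("x"); add("y")
    remove("x")
    print(query("x"), query("y"))   # typically: False True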
  31. 31. Stable (Time-Based) Bloom Filter. [diagram: input stream → duplicate filter (cells 1 0 0 0 1 0 0 0 1 0) → output stream] 14
  32. 32. Stable (Time-Based) Bloom Filter. Before each insertion, P random cells are decremented by one. The k cells for the new value xᵢ are set to Max (usually < 7). http://webdocs.cs.ualberta.ca/~drafiei/papers/DupDet06Sigmod.pdf [diagram: input stream → duplicate filter → output stream] 14
  33. 33. Stable (Time-Based) Bloom Filter. Before each insertion, P random cells are decremented by one. The k cells for the new value xᵢ are set to Max (usually < 7). Alternatively, set an expiry time for each cell, with a TTL dependent on the volume of data. http://webdocs.cs.ualberta.ca/~drafiei/papers/DupDet06Sigmod.pdf http://www.igvita.com/2010/01/06/flow-analysis-time-based-bloom-filters/ [diagram: input stream → duplicate filter → output stream] 14
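A sketch of the update rule on slides 32 and 33: before each insertion, P random cells are decremented, then the new value's k cells are set to Max, so stale entries fade out over time. The concrete values of M, K, P and Max below are illustrative assumptions, not values from the paper.

    import hashlib
    import random

    M, K = 64, 3
    P = 4                 # cells aged (decremented) before each insertion
    MAX = 3               # counter ceiling; the slides suggest Max is usually < 7
    cells = [0] * M

    def positions(key):
        return [int(hashlib.sha1(f"{i}:{key}".encode()).hexdigest(), 16) % M
                for i in range(K)]

    def is_duplicate(key):
        return all(cells[p] > 0 for p in positions(key))

    def insert(key):
        # Age the filter so old elements eventually expire
        for p in random.sample(range(M), P):
            if cells[p] > 0:
                cells[p] -= 1
        # Record the new element at full strength
        for p in positions(key):
            cells[p] = MAX

    for item in ["a", "b", "a", "c"]:
        print(item, "duplicate" if is_duplicate(item) else "new")
        insert(item)
    # "a" is reported as a duplicate on its second appearance (with high probability)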
  34. 34. Bloom Filters: Further reading Compressed Bloom Filters Improve performance when the Bloom filter is passed as a message, and its transmission size is a limiting factor. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.3346 Retouched Bloom Filters Allow networked applications to trade off selected false positives against false negatives http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.172.8453 Bloomier Filters Extended to handle approximate functions (each element of the set has an associated function value) http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.4154 http://arxiv.org/abs/0807.0928 Attenuated B.F., Spectral B.F., Distance-Sensitive B.F. ... 15
  35. 35. Merkle Trees Ralph C. Merkle, 1979 http://www.springerlink.com/content/q865hwxq73ex1am9/ 16
  36. 36. Merkle Trees (Hash Trees) Data Structure containing a tree of summary information about a larger piece of data to verify its contents http://en.wikipedia.org/wiki/Hash_Tree 17
  37. 37. Merkle Trees (Hash Trees). Leaves: hashes of data blocks. Nodes: hashes of their children. Used to detect inconsistencies between replicas (anti-entropy) and to minimise the amount of transferred data. [diagram: ROOT = hash(A, B); A = hash(C, D), B = hash(E, F); C, D, E, F = hash(001) … hash(004) of Data Blocks 001–004] 18
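The structure on slide 37 (and the editor's note that each inner node is the hash of the concatenation of its two children) can be built bottom-up. A minimal sketch using SHA-1, one of the hash functions named on slide 40; duplicating the last hash on odd-sized levels is an assumption of this sketch, not something specified in the talk.

    import hashlib

    def sha1(data: bytes) -> bytes:
        return hashlib.sha1(data).digest()

    def merkle_root(blocks):
        """Root hash of a Merkle tree over a list of data blocks."""
        level = [sha1(b) for b in blocks]          # leaves: hashes of the data blocks
        while len(level) > 1:
            if len(level) % 2:                     # pad odd levels (assumed convention)
                level.append(level[-1])
            # inner nodes: hash of the concatenation of the two children
            level = [sha1(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    blocks = [b"Data Block 001", b"Data Block 002", b"Data Block 003", b"Data Block 004"]
    print(merkle_root(blocks).hex())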
  38. 38. Merkle Trees Node A Node B gossip exchange 19
  39. 39. Merkle Trees Node A Node B gossip exchange Minimal data transfer Differences are easy to locate 19
  40. 40. Merkle Trees Node A Node B gossip exchange Minimal data transfer Differences are easy to locate SHA-1, Whirlpool or Tiger (TTH) hash functions 19
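Slides 38 to 40 show why the gossip exchange is cheap: two replicas compare hashes top-down and only descend into subtrees whose hashes differ, so a single differing block is located with a logarithmic number of comparisons. A hedged sketch of that search (it recomputes subtree hashes locally instead of exchanging precomputed trees, and it is not Cassandra's org.apache.cassandra.utils.MerkleTree):

    import hashlib

    def sha1(data: bytes) -> bytes:
        return hashlib.sha1(data).digest()

    def subtree_hash(blocks, lo, hi):
        """Merkle hash of blocks[lo:hi]: leaf hash for one block, hash of the two child hashes otherwise."""
        if hi - lo == 1:
            return sha1(blocks[lo])
        mid = (lo + hi) // 2
        return sha1(subtree_hash(blocks, lo, mid) + subtree_hash(blocks, mid, hi))

    def diff(a, b, lo=0, hi=None):
        """Indexes of blocks where two equally-sized replicas disagree."""
        if hi is None:
            hi = len(a)
        if subtree_hash(a, lo, hi) == subtree_hash(b, lo, hi):
            return []                   # subtrees agree: nothing to transfer
        if hi - lo == 1:
            return [lo]                 # located one differing block
        mid = (lo + hi) // 2
        return diff(a, b, lo, mid) + diff(a, b, mid, hi)

    node_a = [b"001", b"002", b"003", b"004"]
    node_b = [b"001", b"XXX", b"003", b"004"]
    print(diff(node_a, node_b))         # [1] -> only the second block must be re-sent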
  41. 41. Merkle Trees: Usage Peer to Peer communication 20
  42. 42. Merkle Trees: Usage DC++ Peer to Peer communication 20
  43. 43. Merkle Trees: Usage. DC++, Peer to Peer communication ... Amazon Dynamo, Google BigTable, Google Wave, Cassandra, HBase, ZFS 20
  44. 44. Merkle Trees: Usage in Cassandra Ensure the P2P network of nodes receives data blocks unaltered and unharmed. Anti-entropy during major compactions (via Scuttlebutt reconciliation). http://wiki.apache.org/cassandra/ArchitectureAntiEntropy 21
  45. 45. Merkle Trees: Usage in Cassandra Ensure the P2P network of nodes receives data blocks unaltered and unharmed. Anti-entropy during major compactions (via Scuttlebutt reconciliation). One Merkle Tree per Column Family (in Dynamo, one per node / key range) http://wiki.apache.org/cassandra/ArchitectureAntiEntropy 21
  46. 46. Merkle Trees: Usage in Cassandra Ensure the P2P network of nodes receives data blocks unaltered and unharmed. Anti-entropy during major compactions (via Scuttlebutt reconciliation). One Merkle Tree per Column Family (in Dynamo, one per node / key range) org.apache.cassandra.utils.MerkleTree http://wiki.apache.org/cassandra/ArchitectureAntiEntropy 21
  47. 47. References Bloom Filters http://bit.ly/bundles/quipo/1 Merkle Trees http://bit.ly/bundles/quipo/2 22
  48. 48. We’re Hiring! http://mediasift.com/careers 23
  49. 49. Lorenzo Alberton @lorenzoalberton Thank you! lorenzo@alberton.info http://www.alberton.info/talks 24

Editor's notes

  • Two keys might map into the same bucket.
  • An empty Bloom Filter is an array of m bits, all set to 0. There must be k hash functions defined, each of which maps an element to one of the m array positions with a uniform random distribution. To add an element, feed it to each of the k hash functions to get k array positions, and set those bits to 1. To test for an element, feed it to each of the k hash functions to get k array positions: if any of the bits at these positions is 0, the element is not in the set. Union and intersection of Bloom filters: simple bitwise OR and AND operations.
  • Tiger is a cryptographic hash function optimised for 64-bit platforms (1995). Size: 192 bits (truncated versions: 128 and 160 bits). Murmur hash is very fast with a low collision rate (2008). Another good non-cryptographic hash function is the Jenkins hash function (Bob Jenkins, 1997). Hashing with checksum functions is possible and may produce a sufficiently uniform distribution of hash values, as long as the hash range size n is small compared to the range of the checksum or fingerprint function. The CRC32 checksum provides only 16 bits (the higher half of the result) that are usable for hashing.
  • Popular in distributed web caches (small cost, big potential gain). The Google Chrome web browser uses Bloom filters to speed up its Safe Browsing service. In relational databases, Bloom filters are often used for JOINs.
  • All the bits for an element not yet inserted might already be set. There is a clear tradeoff between m and the probability of a false positive. The value of k that minimises the probability of false positives is 0.7·m/n.
  • An optimal number of hash functions k has been assumed.
  • Standard Bloom filters can't handle deletions: if deleting x means resetting 1s to 0s, then deleting an entry might delete several others.
  • 2006. Precisely eliminating duplicates in an unbounded data stream (i.e. when you don't know the size of the data set up front) is not feasible in many streaming scenarios. A common characteristic of these algorithms is the underlying assumption that the whole data set is stored and can be accessed if needed. Use cases: URL crawlers, network monitoring (number of accesses by IP in the past hour), trending topics. In many data stream applications, the allocated space is rather small compared to the size of the stream. When more and more elements arrive, the fraction of zeros in the Bloom Filter will decrease continuously, and the false positive rate will increase accordingly, finally reaching the limit, 1, where every distinct element will be reported as a duplicate, indicating that the Bloom Filter is useless. For a regular Bloom Filter, there is no way to distinguish recent elements from past ones.
  • RBF: permits the removal of selected false positives at the expense of generating random false negatives.
  • They are used to protect any kind of data stored, handled and transferred in and between computers.
  • Each inner node is the hash value of the concatenation of its two children. The principal advantage of a Merkle tree is that each branch of the tree can be checked independently, without requiring nodes to download the entire tree or the entire data set.
  • For each key range of data, each member in the replica group computes a Merkle tree (a hash tree in which differences can be located quickly) and sends it to its neighbours. By comparing the received Merkle tree with its own tree, each member can quickly determine which data portion is out of sync and send the diff to the left-behind members. Tiger is a cryptographic hash function optimised for 64-bit platforms (1995). Size: 192 bits (truncated versions: 128 and 160 bits).
  • Hash trees can be used to protect any kind of data stored, handled and transferred in and between computers. Before downloading a file on a P2P network, the top hash is acquired from a trusted source. Once the top hash (root hash) is available, the hash tree can be received from any non-trusted source. Currently the main use of hash trees is to make sure that data blocks received from other peers in a peer-to-peer network are received undamaged and unaltered, and even to check that the other peers do not lie and send fake blocks.
  • Merkle trees are exchanged; if they disagree, Cassandra does a range-repair via compaction (using Scuttlebutt reconciliation). To ensure the data is still in sync even when no READs or WRITEs touch it, replica nodes periodically gossip with each other to figure out whether anyone is out of sync. For each key range of data, each member in the replica group computes a Merkle tree (a hash tree in which differences can be located quickly) and sends it to its neighbours. By comparing the received Merkle tree with its own tree, each member can quickly determine which data portion is out of sync and send the diff to the left-behind members. Anti-entropy is the "catch-all" way to guarantee eventual consistency, but it is also pretty expensive and therefore not done frequently. By combining the data sync with read repair and hinted handoff, we can keep the replicas pretty up-to-date. The key difference in Cassandra's implementation of anti-entropy is that the Merkle trees are built per column family, and they are not maintained for longer than it takes to send them to neighbouring nodes. Instead, the trees are generated as snapshots of the dataset during major compactions: this means that excess data might be sent across the network, but it saves local disk I/O, which is preferable for very large datasets.
  • Merkle trees are exchanged, if they disagree, Cassandra does a range-repair via compaction (using the Scuttlebutt reconciliation)\nTo ensure the data is still in sync even there is no READ and WRITE occurs to the data, replica nodes periodically gossip with each other to figure out if anyone out of sync. For each key range of data, each member in the replica group compute a Merkel tree (a hash encoding tree where the difference can be located quickly) and send it to other neighbors. By comparing the received Merkel tree with its own tree, each member can quickly determine which data portion is out of sync. If so, it will send the diff to the left-behind members.\n\nAnti-entropy is the &quot;catch-all&quot; way to guarantee eventual consistency, but is also pretty expensive and therefore is not done frequently. By combining the data sync with read repair and hinted handoff, we can keep the replicas pretty up-to-date.\n\nThe key difference in Cassandra&apos;s implementation of anti-entropy is that the Merkle trees are built per column family, and they are not maintained for longer than it takes to send them to neighboring nodes. Instead, the trees are generated as snapshots of the dataset during major compactions: this means that excess data might be sent across the network, but it saves local disk IO, and is preferable for very large datasets.\n
  • \n
  • \n
  • \n
  • Beschreibung

    The first part of a series of talks about modern algorithms and data structures, used by nosql databases like HBase and Cassandra. An explanation of Bloom Filters and several derivates, and Merkle Trees.

    Transkript

    1. 1. Lorenzo Alberton @lorenzoalberton “Modern” Algorithms and Data Structures Part 1 Bloom Filters, Merkle Trees Cassandra-London, Monday 18th April 2011 1
    2. 2. Bloom Filters Burton Howard Bloom, 1970 http://portal.acm.org/citation.cfm?doid=362686.362692 2
    3. 3. Bloom Filter Space-efficient probabilistic data structure used to test set membership http://en.wikipedia.org/wiki/Bloom_filter 3
    4. 4. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set 4
    5. 5. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set Hash Table ⇒ chance of collision hash(x) hash(y) 4
    6. 6. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set Hash Table ⇒ chance of collision hash(x) hash(y) False positives are possible, false negatives are not. It might be beneficial to build an exception list of known false positives. 4
    7. 7. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set 5
    8. 8. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set Not a Key-Value store 5
    9. 9. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set Not a Key-Value store Array of bits indicating the presence of a key in the filter 5
    10. 10. Bloom Filter Space-efficient probabilistic data structure that is used to test whether an element is a member of a set Not a Key-Value store Array of bits indicating the presence of a key in the filter (*) Removing an element from the filter is not possible 5
    11. 11. Bloom Filter: Add & Query m bits (initially set to 0) k hash functions S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 m-1 m 6
    12. 12. Bloom Filter: Add & Query m bits (initially set to 0) k hash functions Add S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 m-1 m 6
    13. 13. Bloom Filter: Add & Query m bits (initially set to 0) if f(x) = A, k hash functions set S[A] = 1 x Add S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 m-1 m 6
    14. 14. Bloom Filter: Add & Query m bits (initially set to 0) if f(x) = A, k hash functions set S[A] = 1 x Add f(x) S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 2 m-1 m 6
    15. 15. Bloom Filter: Add & Query m bits (initially set to 0) if f(x) = A, k hash functions set S[A] = 1 x Add g(x) f(x) S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 2 m-1 m 6
    16. 16. Bloom Filter: Add & Query m bits (initially set to 0) if f(x) = A, k hash functions set S[A] = 1 x Add g(x) f(x) h(x) S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 1 2 m-1 m 6
    17. 17. Bloom Filter: Add & Query m bits (initially set to 0) if f(x) = A, k hash functions set S[A] = 1 x y g(y) Add f(y) g(x) f(x) h(x) h(y) S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 1 2 m-1 m 6
    18. 18. Bloom Filter: Add & Query m bits (initially set to 0) if f(x) = A, k hash functions set S[A] = 1 x y g(y) Add f(y) g(x) f(x) h(x) h(y) S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 1 2 m-1 m Query 6
    19. 19. Bloom Filter: Add & Query m bits (initially set to 0) if f(x) = A, k hash functions set S[A] = 1 x y g(y) Add f(y) g(x) f(x) h(x) h(y) S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 1 2 m-1 m f(z) h(z) g(z) Query z 6
    20. 20. Bloom Filter: Add & Query m bits (initially set to 0) if f(x) = A, k hash functions set S[A] = 1 x y g(y) Add f(y) g(x) f(x) h(x) h(y) S 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 1 2 m-1 m f(z) h(z) g(z) Query one bit set to 0 z ⇒z∉S 6
    21. 21. Bloom Filter: Hash Functions k Hash functions: uniform random distribution in [1...m) k different hash functions The same hash functions with different salts Double or triple hashing : g (x) = h (x) + ih (x) mod m [1] i 1 2 2 hash functions can mimic k hashing functions Dillinger, Peter C.; Manolios, Panagiotis (2004b), "Bloom Filters in Probabilistic Verification", [1] http://www.ccs.neu.edu/home/pete/pub/bloom-filters-verification.pdf http://www.strchr.com/hash_functions 7
    22. 22. Bloom Filter: Hash Functions k Hash functions: uniform random distribution in [1...m) k different hash functions ‣ Cryptographic Hash different salts The same hash functions withFunctions (MD5, SHA-1, SHA-256, Tiger, Whirlpool ...) Double or triple hashing : g (x) = h (x) + ih (x) mod m [1] i 1 2 2 hash functions can mimic k hashing functions ‣ Murmur Hashes http://code.google.com/p/smhasher/ Dillinger, Peter C.; Manolios, Panagiotis (2004b), "Bloom Filters in Probabilistic Verification", [1] http://www.ccs.neu.edu/home/pete/pub/bloom-filters-verification.pdf http://www.strchr.com/hash_functions 7
    23. 23. Bloom Filter: Usage Guard against First line of defence Peer to Peer Routing - expensive operations in high performance communication Resource Location (like disk access) (distributed) caches ... Squid Google Various Google Cisco Cassandra HBase Proxy Cache BigTable RDBMS’ Chrome Routers 8
    24. 24. Bloom Filter: Usage in Cassandra Used to save I/O during key look-ups (check for non-existent keys) One bloom filter per SSTable. 9
    25. 25. Bloom Filter: Usage in Cassandra Used to save I/O during key look-ups (check for non-existent keys) One bloom filter per SSTable. org.apache.cassandra.utils.BloomFilter 9
    26. 26. Bloom Filter: False Positive Rate m = number of bits in the filter n = number of elements k = number of hashing functions http://pages.cs.wisc.edu/~cao/papers/summary-cache/node8.html 10
    27. 27. Bloom Filter: False Positive Rate m = number of bits in the filter n = number of elements k = number of hashing functions http://pages.cs.wisc.edu/~cao/papers/summary-cache/node8.html 10
    28. 28. Bloom Filter: False Positive Rate A bloom filter with an optimal value for k and 1% error rate only needs 9.6 bits per key. Add 4.8 bits/key and the error rate decreases by 10 times. 10.000 words, 1% error rate 10.000 words, 0.1% error rate 7 hash functions 11 hash functions ~12 KB of memory ~18 KB of memory http://www.igvita.com/2008/12/27/scalable-datasets-bloom-filters-in-ruby/ 11
    29. 29. Bloom Filter: False Positive Rate false positive probability bloom filter size (n) http://en.wikipedia.org/wiki/Bloom_filter 12
    30. 30. Counting Bloom Filter Can handle deletions Use counters instead of 0/1s When adding an element, increment the counters When deleting an element, decrement the counters Counters must be large enough to avoid overflow (4 bits) x y g(y) f(y) g(x) f(x) h(x) h(y) S 1 0 0 0 1 0 0 0 2 0 0 0 1 0 1 13
    31. 31. Stable (Time-Based) Bloom Filter Input Stream Duplicate 1 0 0 0 1 0 0 0 1 0 Filter Output Stream 14
    32. 32. Stable (Time-Based) Bloom Filter Input Before each insertion, P random Stream cells are decremented by one. The k cells for the new value xi are set to Max (usually < 7) http://webdocs.cs.ualberta.ca/~drafiei/papers/DupDet06Sigmod.pdf Duplicate 1 0 0 0 1 0 0 0 1 0 Filter Output Stream 14
    33. 33. Stable (Time-Based) Bloom Filter Input Before each insertion, P random Stream cells are decremented by one. The k cells for the new value xi are set to Max (usually < 7) http://webdocs.cs.ualberta.ca/~drafiei/papers/DupDet06Sigmod.pdf Duplicate 1 0 0 0 1 0 0 0 1 0 Filter Alternatively, set an expiry time Output for each cell, with a TTL dependent on the volume of data Stream http://www.igvita.com/2010/01/06/flow-analysis-time-based-bloom-filters/ 14
    34. 34. Bloom Filters: Further reading Compressed Bloom Filters Improve performance when the Bloom filter is passed as a message, and its transmission size is a limiting factor. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.3346 Retouched Bloom Filters Allow networked applications to trade off selected false positives against false negatives http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.172.8453 Bloomier Filters Extended to handle approximate functions (each element of the set has an associated function value) http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.4154 http://arxiv.org/abs/0807.0928 Attenuated B.F., Spectral B.F., Distance-Sensitive B.F. ... 15
    35. 35. Merkle Trees Ralph C. Merkle, 1979 http://www.springerlink.com/content/q865hwxq73ex1am9/ 16
    36. 36. Merkle Trees (Hash Trees) Data Structure containing a tree of summary information about a larger piece of data to verify its contents http://en.wikipedia.org/wiki/Hash_Tree 17
    37. 37. Merkle Trees (Hash Trees) Leaves: hashes of ROOT hash(A, B) data blocks. Nodes: hashes of their children. A B hash(C, D) hash(E, F) Used to detect inconsistencies C D E F between replicas hash(001) hash(002) hash(003) hash(004) (anti-entropy) and to minimise the Data Data Data Data Block Block Block Block amount of 001 002 003 004 transferred data 18
    38. 38. Merkle Trees Node A Node B gossip exchange 19
    39. 39. Merkle Trees Node A Node B gossip exchange Minimal data transfer Differences are easy to locate 19
    40. 40. Merkle Trees Node A Node B gossip exchange Minimal data transfer Differences are easy to locate SHA-1, Whirlpool or Tiger (TTH) hash functions 19
    41. 41. Merkle Trees: Usage Peer to Peer communication 20
    42. 42. Merkle Trees: Usage DC++ Peer to Peer communication 20
    43. 43. Merkle Trees: Usage DC++ Peer to Peer communication ... Amazon Google Google Cassandra HBase ZFS Dynamo BigTable Wave 20
    44. 44. Merkle Trees: Usage in Cassandra Ensure the P2P network of nodes receives data blocks unaltered and unharmed. Anti-entropy during major compactions (via Scuttlebutt reconciliation). http://wiki.apache.org/cassandra/ArchitectureAntiEntropy 21
    45. 45. Merkle Trees: Usage in Cassandra Ensure the P2P network of nodes receives data blocks unaltered and unharmed. Anti-entropy during major compactions (via Scuttlebutt reconciliation). One Merkle Tree per Column Family (in Dynamo, one per node / key range) http://wiki.apache.org/cassandra/ArchitectureAntiEntropy 21
    46. 46. Merkle Trees: Usage in Cassandra Ensure the P2P network of nodes receives data blocks unaltered and unharmed. Anti-entropy during major compactions (via Scuttlebutt reconciliation). One Merkle Tree per Column Family (in Dynamo, one per node / key range) org.apache.cassandra.utils.MerkleTree http://wiki.apache.org/cassandra/ArchitectureAntiEntropy 21
    47. 47. References Bloom Filters http://bit.ly/bundles/quipo/1 Merkle Trees http://bit.ly/bundles/quipo/2 22
    48. 48. We’re Hiring! http://mediasift.com/careers 23
    49. 49. Lorenzo Alberton @lorenzoalberton Thank you! lorenzo@alberton.info http://www.alberton.info/talks 24

    Hinweis der Redaktion

  • \n
  • \n
  • \n
  • Two keys might map into the same bucket\n
  • Two keys might map into the same bucket\n
  • Two keys might map into the same bucket\n
  • Two keys might map into the same bucket\n
  • Two keys might map into the same bucket\n
  • Two keys might map into the same bucket\n
  • Two keys might map into the same bucket\n
  • Two keys might map into the same bucket\n
  • \n
  • \n
  • \n
  • \n
  • An empty Bloom Filter is an array of m bits, all set to 0. There must be K hash functions defined, each of which maps some element to one of the m array positions with an uniform random distribution.\nTo add an element, feed it to each of the k hash functions to get k array positions, and set the bits to 1.\nTo test for an element, feed it to each of the k hash functions to get k array positions: if any of the bits at these positions are 0, the element is not in the set.\nUnion and intersection of Bloom filters: A simple bitwise OR and AND operations\n
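A minimal sketch of the add/query/union logic described above. The bit array is a plain Python bytearray and the k hash functions are simulated with double hashing over a SHA-256 digest; both are illustrative choices for the example, not the scheme used by any particular database.

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m = m                  # number of bits
        self.k = k                  # number of hash functions
        self.bits = bytearray(m)    # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive k positions from two base hashes (double hashing);
        # any k independent, uniformly distributed hash functions would do.
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], 'big')
        h2 = int.from_bytes(digest[8:16], 'big')
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def query(self, item):
        # True  -> "possibly in the set" (false positives are possible)
        # False -> "definitely not in the set"
        return all(self.bits[pos] for pos in self._positions(item))

    def union(self, other):
        # Bitwise OR of two filters built with identical m and k
        out = BloomFilter(self.m, self.k)
        out.bits = bytearray(a | b for a, b in zip(self.bits, other.bits))
        return out

    def intersection(self, other):
        # Bitwise AND of two filters built with identical m and k
        out = BloomFilter(self.m, self.k)
        out.bits = bytearray(a & b for a, b in zip(self.bits, other.bits))
        return out
```

Usage: `bf = BloomFilter(1024, 7); bf.add("key")` makes `bf.query("key")` return True, while `bf.query("other")` returns False unless a collision produces a false positive.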
  • Tiger is a cryptographic hash function optimised for 64-bit platforms (1995).
    Size: 192 bits (truncated versions: 128 and 160 bits).
    MurmurHash (2008) is very fast and has a low collision rate.
    Another good non-cryptographic hash function is the Jenkins hash function (Bob Jenkins, 1997).
    Hashing with checksum functions is possible and may produce a sufficiently uniform distribution of hash values, as long as the hash range size n is small compared to the range of the checksum or fingerprint function. CRC32, for instance, provides only 16 bits (the higher half of the result) that are usable for hashing.
  • Popular in distributed web caches (small cost, big potential gain).
    The Google Chrome web browser uses Bloom filters to speed up its Safe Browsing service.
    In relational databases, Bloom filters are often used to speed up JOINs.
  • All the bits for an element not yet inserted might already be set.
    There is a clear trade-off between m and the probability of a false positive.
    The value of k that minimizes the probability of false positives is (m/n) ln 2 ≈ 0.7 m/n.
  • An optimal number of hash functions k has been assumed.
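Turning those formulas into concrete parameters: for n expected elements and a target false-positive rate p, the usual sizing is m = -n ln p / (ln 2)^2 bits and k = (m/n) ln 2 hash functions. The helper below is a sketch of that arithmetic.

```python
import math

def bloom_parameters(n, p):
    """Return (m, k) for n expected elements and target false-positive rate p."""
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))  # bits
    k = max(1, round((m / n) * math.log(2)))              # ~0.7 * m / n
    return m, k

# e.g. 1,000,000 keys with a 1% false-positive target needs
# roughly 9.6 bits per key and 7 hash functions
print(bloom_parameters(1_000_000, 0.01))   # -> (9585059, 7)
```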
  • Standard Bloom filters can't handle deletions: if deleting x means resetting 1s to 0s, then deleting one entry might delete several others.
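One common workaround (an addition here, since the note above only states the problem) is the counting Bloom filter: each bit becomes a small counter that is incremented on add and decremented on remove, so clearing one entry no longer erases others. A minimal sketch:

```python
import hashlib

class CountingBloomFilter:
    """Bloom filter variant where each bit is a counter, so removals are possible."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.counters = [0] * m

    def _positions(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], 'big')
        h2 = int.from_bytes(digest[8:16], 'big')
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.counters[pos] += 1

    def remove(self, item):
        # Only safe for items that were actually added; removing an item
        # that was never inserted can corrupt other entries.
        if self.query(item):
            for pos in self._positions(item):
                self.counters[pos] -= 1

    def query(self, item):
        return all(self.counters[pos] > 0 for pos in self._positions(item))
```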
  • 2006. Precisely eliminating duplicates in an unbounded data stream (i.e. when you don't know the size of the data set up front) is not feasible in many streaming scenarios. A common characteristic of these algorithms is the underlying assumption that the whole data set is stored and can be accessed if needed.
    Use cases: URL crawlers, network monitoring (number of accesses per IP in the past hour), trending topics.
    In many data stream applications the allocated space is rather small compared to the size of the stream. As more and more elements arrive, the fraction of zeros in the Bloom filter decreases continuously and the false-positive rate increases accordingly, finally reaching the limit, 1, where every distinct element is reported as a duplicate and the filter becomes useless.
    With a regular Bloom filter there is no way to distinguish recent elements from past ones.
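The 2006 work referenced here appears to be the Stable Bloom Filter (Deng & Rafiei), which addresses exactly this: cells hold small counters, and every insertion first decays a few randomly chosen cells so stale entries fade out and the filter never saturates. The sketch below is an approximation of that idea; the counter width, decrement count and hashing scheme are arbitrary illustrative choices.

```python
import hashlib
import random

class StableBloomFilter:
    """Approximate duplicate detection over an unbounded stream."""
    def __init__(self, m, k, max_value=3, decrements_per_insert=10):
        self.m, self.k = m, k
        self.max_value = max_value          # value a freshly seen item's cells are set to
        self.p = decrements_per_insert      # cells decayed on every insertion
        self.cells = [0] * m

    def _positions(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], 'big')
        h2 = int.from_bytes(digest[8:16], 'big')
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def seen(self, item):
        """Return True if item looks like a duplicate, then record it."""
        positions = self._positions(item)
        duplicate = all(self.cells[pos] > 0 for pos in positions)
        # Decay: decrement P randomly chosen cells so old entries expire
        for _ in range(self.p):
            idx = random.randrange(self.m)
            if self.cells[idx] > 0:
                self.cells[idx] -= 1
        # Record: set the item's own cells to the maximum value
        for pos in positions:
            self.cells[pos] = self.max_value
        return duplicate
```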
  • RBF: permits the removal of selected false positives at the expense of generating random false negatives.
  • They are used to protect any kind of data stored, handled and transferred in and between computers.
  • Each inner node is the hash of the concatenation of its two children.
    The principal advantage of a Merkle tree is that each branch can be checked independently, without requiring nodes to download the entire tree or the entire data set.
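A minimal sketch of that construction: leaves are hashes of the data blocks, and each inner node hashes the concatenation of its two children. SHA-256 and the duplicate-last-node handling of odd levels are illustrative choices.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_tree(blocks):
    """Return the tree as a list of levels: leaf hashes first, root level last."""
    level = [_h(block) for block in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if the level is odd
            level = level + [level[-1]]
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def root_hash(blocks):
    return merkle_tree(blocks)[-1][0]
```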
  • For each key range of data, each member in the replica group computes a Merkle tree (a hash tree in which differences can be located quickly) and sends it to the other replicas. By comparing the received Merkle tree with its own, each member can quickly determine which data portion is out of sync and, if so, send the diff to the members that have fallen behind.
    Tiger is a cryptographic hash function optimised for 64-bit platforms (1995).
    Size: 192 bits (truncated versions: 128 and 160 bits).
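To make the "quickly determine which data portion is out of sync" step concrete, here is a sketch of comparing two such trees top-down: if the roots match, nothing needs repair; otherwise only subtrees whose hashes differ are visited, and the leaf key ranges that need to be exchanged are returned. The node representation (a tuple of hash, children and key range, with both trees covering the same ranges in the same shape) is an assumption made for the example, not any database's actual structure.

```python
def diff_ranges(a, b):
    """
    a, b: nodes of the form (hash, children, key_range), where children is a
    list of child nodes (empty for leaves) and key_range identifies the data
    covered by the node. Returns the key ranges that are out of sync.
    """
    a_hash, a_children, a_range = a
    b_hash, b_children, b_range = b
    if a_hash == b_hash:
        return []                          # whole subtree is in sync, skip it
    if not a_children or not b_children:
        return [a_range]                   # leaf level: this range needs repair
    out_of_sync = []
    for child_a, child_b in zip(a_children, b_children):
        out_of_sync.extend(diff_ranges(child_a, child_b))
    return out_of_sync
```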
  • Hash trees can be used to protect any kind of data stored, handled and transferred in and between computers.
    Before downloading a file on a p2p network, the top hash (root hash) is acquired from a trusted source. Once the root hash is available, the hash tree itself can be received from any non-trusted source.
    Currently the main use of hash trees is to make sure that data blocks received from other peers in a peer-to-peer network arrive undamaged and unaltered, and even to check that the other peers do not lie and send fake blocks.
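A sketch of that verification step: given a trusted root hash, a peer can prove a single block is genuine by sending the block plus the sibling hashes along its path to the root (an audit path); the receiver rehashes upward and compares against the root. The path encoding used here, a list of (sibling_hash, sibling_is_left) pairs, is just an illustrative convention.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_block(block, audit_path, trusted_root):
    """
    block:        the raw data block received from an untrusted peer
    audit_path:   [(sibling_hash, sibling_is_left), ...] from leaf to root
    trusted_root: root hash obtained from a trusted source
    """
    node = _h(block)
    for sibling, sibling_is_left in audit_path:
        node = _h(sibling + node) if sibling_is_left else _h(node + sibling)
    return node == trusted_root
```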
  • Merkle trees are exchanged; if they disagree, Cassandra does a range repair via compaction (using Scuttlebutt reconciliation).
    To ensure the data stays in sync even when no READ or WRITE touches it, replica nodes periodically gossip with each other to figure out whether anyone is out of sync. For each key range of data, each member in the replica group computes a Merkle tree and sends it to the other replicas. By comparing the received tree with its own, each member can quickly determine which data portion is out of sync and send the diff to the members that have fallen behind.
    Anti-entropy is the "catch-all" way to guarantee eventual consistency, but it is also pretty expensive and therefore is not done frequently. By combining this data sync with read repair and hinted handoff, the replicas can be kept pretty much up to date.
    The key difference in Cassandra's implementation of anti-entropy is that the Merkle trees are built per column family, and they are not maintained for longer than it takes to send them to neighbouring nodes. Instead, the trees are generated as snapshots of the dataset during major compactions: this means that excess data might be sent across the network, but it saves local disk IO, and is preferable for very large datasets.