Distributed fun with etcd
1. Distributed Fun with etcd
And the consensus problem
DistSys Riyadh Meetup
Abdulaziz AlMalki @almalki_am
2. Agenda
● The consensus problem
● Paxos and raft
● What is etcd?
● etcd use cases
● etcd as a kv store
● etcd consistency guarantees
● etcd failure modes
● Leader election
● Distributed locks
3. Agenda
● Distributed cluster configuration
● Service discovery
● How kubernetes uses etcd
● Demo:
○ PostgreSQL leader election with patroni and etcd
○ Using etcd and confd for dynamic pull based cluster reconfiguration
4. The consensus problem
What is consensus?
Getting a group of processes to agree on a value
Properties:
● Termination: eventually, every non-faulty process decides some value
● Agreement: all processes select the same value
● Integrity: a process decides only once
● Validity: the value decided must have been proposed by some process
5. The consensus problem
Reaching an agreement (consensus) is an important step in many distributed
computing problems:
● synchronizing replicated state machines and making sure all replicas have the
same (consistent) view of system state.
● electing a leader
● mutual exclusion (distributed locks)
● managing group membership/failure detection
● deciding to commit or abort for distributed transactions
6. But...
There's always a but.
Is it possible to achieve consensus in distributed systems?
It depends..
7. Distributed System Models
Synchronous model
● messages are delivered within a known, bounded time
● the drift of each process's local clock has a known bound
● each step a process takes completes within a known bound
● e.g. a supercomputer
Asynchronous model
● no bounds on message transmission delays
● arbitrary drift rate of local clocks
● no bounds on process execution time
● e.g. the Internet
8. Back to consensus
Is it possible to achieve consensus in distributed systems?
Yes & No
Yes in the synchronous model
No in the asynchronous model
Why?
9. FLP Proof
Impossibility of Distributed Consensus with One Faulty Process (1985)
Fischer, Lynch, and Paterson
https://groups.csail.mit.edu/tds/papers/Lynch/jacm85.pdf
Result:
“We show that every protocol for this problem has the possibility of nontermination,
even with only one faulty process. By way of contrast, solutions are known for the
synchronous case, the "Byzantine Generals" problem.”
10. Paxos
Leslie Lamport discovered the algorithm in the late 1980s
Used by Google Chubby
Guarantees safety, but not liveness
● Safety: agreement property, guaranteed
● Liveness: termination property, not guaranteed
Eventual liveness
Hard to understand and implement!
11. Raft
Reliable, Replicated, Redundant, And Fault-Tolerant
(was supposed to be named Redundo)
https://groups.google.com/forum/#!topic/raft-dev/95rZqptGpmU
Developed by Diego Ongaro and John Ousterhout from Stanford University
Designed to be easy to understand
Published in 2014: https://raft.github.io/raft.pdf
More Info and related research can be found here: https://raft.github.io/
12. Demo
The Secret Lives of Data (An interactive demo that explains how raft works)
http://thesecretlivesofdata.com/raft/
RaftScope: a raft cluster running in your browser that you can interact with to see
Raft in action
https://raft.github.io/raftscope/
etcd playground
http://play.etcd.io/play
13. etcd
etcd is a distributed key-value store that provides a reliable way to store data
across a cluster of machines.
Kubernetes uses etcd as the backend for service discovery and for storing cluster
state and configuration.
Cloud Foundry uses etcd to store cluster state and configuration and as a global
lock service
14. etcd
etcd is written in Go and uses the Raft consensus algorithm to manage a
highly-available replicated log.
https://github.com/etcd-io/etcd
Production-grade
The name comes from the Unix "/etc" directory plus "d" for distributed systems.
Originally developed at CoreOS to enable automatic, zero-downtime Linux kernel
updates: Locksmith implements a distributed semaphore over etcd to ensure only a
subset of a cluster is rebooting at any given time.
15. etcd use cases
etcd should be used to store metadata and configuration, and to coordinate
processes.
It can handle a few GB of data with consistent ordering.
etcd replicates all data within a single consistent replication group; there is no sharding.
etcd provides distributed coordination primitives such as event watches, leases,
elections, and distributed shared locks out of the box.
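As a rough illustration of those primitives, here is a minimal Go sketch that takes a distributed lock and then campaigns in a leader election using the clientv3/concurrency package; the endpoint, key prefixes, and candidate name are made-up placeholders, and the import paths vary between etcd versions.

package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/clientv3/concurrency"
)

func main() {
	// Placeholder endpoint; point this at a real etcd member.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// A session is backed by a lease that is kept alive automatically;
	// if this process dies, its locks and leadership are released.
	sess, err := concurrency.NewSession(cli)
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Distributed lock under a made-up key prefix.
	mu := concurrency.NewMutex(sess, "/locks/demo")
	if err := mu.Lock(context.Background()); err != nil {
		log.Fatal(err)
	}
	log.Println("holding the lock")
	if err := mu.Unlock(context.Background()); err != nil {
		log.Fatal(err)
	}

	// Leader election: Campaign blocks until this candidate becomes leader.
	el := concurrency.NewElection(sess, "/election/demo")
	if err := el.Campaign(context.Background(), "candidate-1"); err != nil {
		log.Fatal(err)
	}
	log.Println("elected leader")
	_ = el.Resign(context.Background())
}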
16. etcd as a kv store
The API is exposed as gRPC remote procedure calls:
● KV - Creates, updates, fetches, and deletes key-value pairs.
● Watch - Monitors changes to keys.
● Lease - Primitives for consuming client keep-alive messages.
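A minimal sketch of the KV and Lease calls through the official Go client (clientv3, which speaks this gRPC API); the endpoint, keys, and TTL are illustrative placeholders. A Watch example appears with the distributed cluster configuration slide further down.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()

	// KV: create/update and fetch a key-value pair.
	if _, err := cli.Put(ctx, "/demo/greeting", "hello"); err != nil {
		log.Fatal(err)
	}
	resp, err := cli.Get(ctx, "/demo/greeting")
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}

	// Lease: a key attached to a 10-second lease disappears unless the
	// client keeps the lease alive with keep-alive messages.
	lease, err := cli.Grant(ctx, 10)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := cli.Put(ctx, "/demo/heartbeat", "alive", clientv3.WithLease(lease.ID)); err != nil {
		log.Fatal(err)
	}
}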
18. etcd consistency guarantees
● Atomicity
○ All API requests are atomic; an operation either completes entirely or not at all.
○ For watch requests, all events generated by one operation will be in one watch response.
● Consistency
○ watches are sequentially consistent: clients observe the same events in the same order
○ etcd does not ensure linearizability for watch operations
○ etcd ensures linearizability for all other operations by default
○ for lower latency and higher throughput, serializable reads are available; they may return stale
data with respect to the quorum
● Isolation
○ etcd ensures serializable isolation
● Durability
○ Any completed operations are durable
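To make the linearizable-versus-serializable distinction concrete, here is a small Go sketch; WithSerializable is the clientv3 option for serializable reads, while the endpoint and key are placeholders.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
	ctx := context.Background()

	// Default read: linearizable, served through quorum.
	lin, err := cli.Get(ctx, "/demo/greeting")
	if err != nil {
		log.Fatal(err)
	}

	// Serializable read: answered from the local member's data,
	// lower latency but possibly stale with respect to the quorum.
	ser, err := cli.Get(ctx, "/demo/greeting", clientv3.WithSerializable())
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("linearizable kvs:", len(lin.Kvs), "serializable kvs:", len(ser.Kvs))
}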
19. etcd failure modes
Minor followers failure
● with fewer than half of the members failing, etcd continues running
● clients should automatically reconnect to other operating members
Leader failure
● etcd cluster automatically elects a new leader
● takes about an election timeout to elect a new leader
● requests sent during the election are queued
● writes already sent to the old leader but not yet committed may be lost
20. etcd failure modes
Majority failure
● etcd cluster fails and cannot accept more writes
● recover from a majority failure once the majority of members become available
Network partition
● behaves as either a minor followers failure or a leader failure, depending on which side of the partition the leader ends up on
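For intuition on "majority": an etcd cluster of N members needs a quorum of floor(N/2) + 1 members to commit writes, so it tolerates the loss of N - (floor(N/2) + 1) members. For example, a 3-member cluster tolerates 1 failure and a 5-member cluster tolerates 2; lose more than that and the cluster cannot accept writes until a majority is available again.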
23. Distributed cluster configuration
Use etcd as a central configuration store
● all consumers have immediate access to configuration data
● etcd makes it easy for applications to watch for changes
● reduces the time between a configuration change and propagation of that
change throughout the infrastructure
● failed nodes get the latest config immediately after recovery
(Pushing config files to servers lacks all of the above)
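A minimal sketch of the "watch for changes" pattern with the Go client; the endpoint and config prefix are made-up placeholders, and a real consumer would re-render its config files or signal the application on each event.

package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Watch every key under a made-up configuration prefix.
	for wresp := range cli.Watch(context.Background(), "/config/myservice/", clientv3.WithPrefix()) {
		for _, ev := range wresp.Events {
			log.Printf("%s %s = %s", ev.Type, ev.Kv.Key, ev.Kv.Value)
			// Here a real consumer would rewrite its local config
			// and reload the application.
		}
	}
}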
25. How kubernetes uses etcd
● Kubernetes stores data, state, and metadata in etcd
● All access to etcd goes through the apiserver
● Kubernetes stores both the desired state and the actual state of the cluster.
● The Kubernetes control loop (kube-controller-manager) watches these states through
the apiserver; if the two states have diverged, it makes changes to reconcile them.
● Clusters using etcd3 preserve changes in the last 5 minutes by default.
GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245
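As a purely conceptual sketch (not actual kube-controller-manager code), the control loop boils down to comparing desired and observed state and acting on the difference; the types and values below are made up for illustration.

package main

import (
	"fmt"
	"time"
)

// Hypothetical stand-in for one resource's desired spec and observed status.
type state struct{ replicas int }

func desired() state { return state{replicas: 3} } // what the object's spec asks for
func actual() state  { return state{replicas: 2} } // what is actually running

func reconcile(d, a state) {
	fmt.Printf("reconciling: %d -> %d replicas\n", a.replicas, d.replicas)
}

func main() {
	// Real controllers react to watch events from the apiserver rather than
	// polling; this loop only illustrates the compare-and-reconcile idea.
	for i := 0; i < 3; i++ {
		if d, a := desired(), actual(); d != a {
			reconcile(d, a)
		}
		time.Sleep(time.Second)
	}
}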
27. Patroni
Patroni: A Template for PostgreSQL HA with ZooKeeper, etcd or Consul
https://github.com/zalando/patroni
https://github.com/zalando/patroni/blob/master/patroni/dcs/etcd.py
Patroni originated as a fork of Governor, the project from Compose
https://github.com/helm/charts/tree/master/incubator/patroni
HA PostgreSQL Clusters with Docker
https://github.com/zalando/spilo
28. Confd
Manage local application configuration files using templates and data from etcd
http://www.confd.io/
● Sync configuration files by polling etcd and processing template resources.
● Reload applications to pick up new config file changes
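As a rough, hedged sketch of what this looks like in practice (the file paths, key names, and reload command below are made-up placeholders): a confd template resource declares which etcd keys to pull and which file to render, and a template turns those keys into application config.

# /etc/confd/conf.d/myapp.toml (hypothetical template resource)
[template]
src = "myapp.conf.tmpl"
dest = "/etc/myapp/myapp.conf"
keys = [
  "/myapp/db/host",
  "/myapp/db/port",
]
reload_cmd = "systemctl reload myapp"

# /etc/confd/templates/myapp.conf.tmpl (hypothetical template)
db_host = {{getv "/myapp/db/host"}}
db_port = {{getv "/myapp/db/port"}}

Running confd against etcd, e.g. confd -interval 30 -backend etcd -node http://127.0.0.1:2379, then polls etcd every 30 seconds, re-renders the file when a watched key changes, and runs the reload command.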
29. References and further reading
A Brief Tour of FLP Impossibility
https://www.the-paper-trail.org/post/2008-08-13-a-brief-tour-of-flp-impossibility/
Distributed Systems, Failures, and Consensus
https://www2.cs.duke.edu/courses/fall07/cps212/consensus.pdf
Consensus
https://www.cs.rutgers.edu/~pxk/417/notes/content/consensus.html
30. References and further reading
etcd github
https://github.com/etcd-io/etcd
etcd Concurrency primitives
https://github.com/etcd-io/etcd/tree/master/clientv3/concurrency
Consistency Models
https://jepsen.io/consistency
https://aphyr.com/posts/313-strong-consistency-models
31. References and further reading
Cloud Computing Concepts, Part 1 & 2
https://www.coursera.org/learn/cloud-computing/
https://www.coursera.org/learn/cloud-computing-2
Distributed Consensus
https://homepage.cs.uiowa.edu/~ghosh/16612.week11.pdf
How to Build a Highly Available System Using Consensus
https://www.microsoft.com/en-us/research/publication/how-to-build-a-highly-available-system-using-consensus/
32. References and further reading
In Search of an Understandable Consensus Algorithm
https://www.usenix.org/conference/atc14/technical-sessions/presentation/ongaro
Tech Talk - Raft, In Search of an Understandable Consensus Algorithm by Diego
Ongaro
https://www.youtube.com/watch?v=LAqyTyNUYSY&feature=youtu.be
The Raft Consensus Algorithm
https://raft.github.io/
33. References and further reading
State machine replication
https://en.wikipedia.org/wiki/State_machine_replication
Kube-controller-manager
https://kubernetes.io/docs/concepts/overview/components/
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
go-config: a dynamic config framework
https://github.com/micro/go-config