Ever wondered what Kubernetes was all about? Ever felt intimidated trying to understand the difference between Daemon Sets and Replica Sets? Well, this presentation is for you.
5. Presentation Schedule
● Part 0: What Does This Have to Do With Batman?
● Part 1: Kubernetes is very opinionated, but I agree with
most of its opinions.
● Part 2: All about drawings.
● Part 3: Demo Time!
7. I like to think that I am Batman
✓ Does not have superpowers
✓ Relies on his intuition and
mental skills
✓ Has lots of cool gadgets
✓ Likes to surprise people
17. From Chapter 4 of Getting Real by 37signals
The best software has a vision.
The best software takes sides.
When someone uses software,
they're not just looking for
features, they're looking for an
approach. They're looking for a
vision. Decide what your vision
is and run with it.
19. Google has an opinion called Kubernetes.
● Pronounced /koo-ber-nay'-tace/. It’s actually the Greek
word for “helmsman”, the pilot of a ship.
● Developed at Google. The third iteration of container
management.
○ Daddy was Omega.
○ Grandaddy was Borg.
● Kubernetes is not a PaaS, but you can build one with it.
● Google says that Kubernetes is planet scale.
20. k8s
BTW, Google wants you to stop writing Kubernetes and use
this clever numeronym instead. Although it technically should
be pronounced “Kates”.
24. ● For the most part …
● Pods can contain one or more containers.
● The containers in a pod are scheduled on the same node.
● Everything in Kubernetes is some flavor of pod or an
extension of the pod spec.
● Remember this for now, we’ll get back to it in a second.
A pod is a collection of containers.
25. Pods are flat files. No, really. Like YAML or JSON (boo*).
apiVersion: v1
kind: Pod
metadata:
  name: ""
  labels:
    name: ""
  namespace: ""
  annotations: []
  generateName: ""
spec:
  ? "// See 'The spec schema' for details."
  : ~

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "",
    "labels": {
      "name": ""
    },
    "generateName": "",
    "namespace": "",
    "annotations": []
  },
  "spec": {
    // See 'The spec schema' for details.
  }
}
*Font size 14 vs. font size 10: YAML is the clear winner, especially in the context of Shannon’s information theory. The same density of information can be transmitted in fewer lines with YAML.
27. The Pod Lifecycle in a Cluster
Let’s say you want to fire up a pod. With kubectl you would:
1. Make a Pod request to the API server using a local pod
definition file.
2. The API server saves the info for the pod in ETCD.
3. The scheduler finds the unscheduled pod and schedules it
to a node.
4. The kubelet on that node sees the scheduled pod and fires
up Docker.
5. Docker runs the container.
The entire lifecycle state of the pod is stored in ETCD.
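A minimal pod definition that would kick off this lifecycle might look like the following sketch (the file name, pod name, and image are illustrative, not from the deck):

```yaml
# redis-pod.yaml -- a hypothetical minimal pod definition.
# Step 1: submit it with `kubectl create -f redis-pod.yaml`.
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    name: redis
spec:
  containers:
    - name: redis
      image: redis   # step 5: Docker pulls and runs this image
```

Watching it with `kubectl get pod redis --watch` shows the phase change (Pending → Running) as the scheduler and kubelet work through steps 3–5.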
30. Labels and selectors are the fairy dust in k8s.
● A label is a key-value pair that is assigned to objects
in k8s.
○ Pods, services, lots of things can have labels.
● A selector is a way to filter for objects whose labels
match certain criteria or logic.
○ There are two types of selectors:
■ Equality based
■ Set based
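As a sketch, labels are just key-value metadata on an object, and selectors are query expressions over them (the label keys and values below are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-django-1
  labels:
    app: redis-django   # illustrative labels
    env: production
spec:
  containers:
    - name: redis
      image: redis
```

An equality-based selector looks like `kubectl get pods -l env=production`; a set-based one looks like `kubectl get pods -l 'env in (production, staging)'`.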
34. A basic cluster.
[Diagram basic-cluster-01: K8S Node 1 runs redis-django pod 1 (redis container + django container) and some-other pod; K8S Node 2 runs redis-django pod 2 and redis-django pod 3 (each with redis and django containers); the K8S Master runs the SkyDns, ETCD, Kibana, Grafana, Elasticsearch, and Heapster pods.]
35. Bonus stuff
● When you launch a cluster, you get some built-in services
(such as Heapster).
● Each one of these has its own endpoints and/or UIs.
● They run on the master directly, though you could schedule
them across the cluster or on other masters.
● To find the endpoints, type: kubectl cluster-info
37. A Virtual Cluster in Your Cluster
● A namespace is an isolated section of a cluster.
● It’s a virtual cluster in your cluster.
● Each cluster can have multiple namespaces.
● The root services have their own.
● Namespaces are network-isolated from each other and are
(normally) used to house different environments on the
same cluster.
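A namespace itself is just another flat file; a sketch (the name `staging` is a made-up example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

Most kubectl commands then take a `--namespace=staging` flag to scope the query to that virtual cluster.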
41. The master
● Everything is done via
kubectl, which then
makes calls against the
kube-apiserver.
● The Controller Manager,
Scheduler Service, and
ETCD can be spread
across nodes based on
cluster size.
● All state about
everything is stored in
ETCD.
● Also, kubelet is
running here too (more
on that next slide).
42. The Node
● The node agent process is
called kubelet. Think “cubed
omelette”.
● The kubelet process
manages the Pods,
including containers &
volumes.
● The kube-proxy service
handles network routing
and service exposure.
45. My mental model of k8s
● I find it easiest to think of everything as a variation
of a Pod or another object.
● Google has done a very good job at extending base objects
to add flexibility or support new features.
● This also means that the Pod spec is relatively stable
given the massive list of features that lands every
release.
48. The Base Things in Containers are called Specs
(Not like Dust, like Specification)
● The only required field is
containers.
○ And it requires two entries
■ name
■ image
● restartPolicy is for all
containers in a pod.
● volumes are volumes (duh) that
any container in a pod can
mount.
● The spec is very extensible by
design.
[Diagram: Spec → Container]
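Put together, a sketch of a minimal spec with the optional fields mentioned above (container name, image, and volume name are illustrative):

```yaml
spec:
  restartPolicy: Always     # applies to all containers in the pod
  volumes:
    - name: shared-data     # any container in the pod can mount this
      emptyDir: {}
  containers:               # the only required field
    - name: web             # required
      image: nginx          # required
      volumeMounts:
        - name: shared-data
          mountPath: /data
```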
49. Then there is the pod
● Specs don’t do anything by
themselves; for that you need a
pod.
● Pods are just collections of
containers that share a few
things:
○ Access to volumes.
○ Networking.
○ Co-location on the same node.
● Pods can be run by themselves, but
have no guarantee to restart, stay
running, scale, or do anything
useful, really.
[Diagram: Pod → Spec → Container]
50. Services.
● Services point to a Pod …
● … or to an external source.
● For Pods, a virtual endpoint is
created and then routed to via
kube-proxy.
● For non-pod services, a virtual IP
in the cluster is used to route
externally.
[Diagram: Service → Pod]
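A sketch of a Service that points at pods via a label selector (name, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-django
spec:
  selector:
    name: redis-django   # route to pods carrying this label
  ports:
    - port: 80           # the virtual endpoint exposed via kube-proxy
      targetPort: 8000   # the port the container actually listens on
```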
51. Ingress Service = AWS API
Gateway.
● An Ingress Controller sits at the
boundary of the cluster and routes
requests to Services.
● One Ingress Controller can handle
multiple domains.
● Each route can point to a
different Service.
● Relies on the creation of an
Ingress Controller in the cluster
(another service that is not
enabled by default).
[Diagram: Ingress → Service → Pod]
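An Ingress resource from this era might be sketched like so (`extensions/v1beta1` is the 1.x-era API group; the host, path, and service name are made up):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com     # one controller can hold rules for many hosts
      http:
        paths:
          - path: /api
            backend:
              serviceName: api-service   # each path can target a different Service
              servicePort: 80
```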
52. Daemon Sets. Scary.
● A Daemon Set is an object that ensures a copy of
a Pod runs on each node.
● This is commonly used to make sure side-car
containers are running across the cluster.
● If new nodes come up, they’ll automatically get
a copy of the daemon set’s pod.
● Daemon sets don’t have scaling rules.
[Diagram: Daemon Set → Pod → Spec → Container]
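A Daemon Set manifest is essentially a pod template with no replica count; a sketch (API group per the 1.3 era, name and image hypothetical):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  template:                 # one copy of this pod per node; no replicas field
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: fluentd
          image: fluentd    # a typical side-car/agent workload
```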
53. Pet Sets. Not so scary.
● New in 1.3, Pet Sets allow you to create
complex microservices across the cluster.
● They have the ability to set dependency on
other containers.
● They require:
○ A stable hostname, available in DNS
○ An ordinal index
○ Stable storage: linked to the ordinal &
hostname
● It’s for launching a cluster in your cluster.
[Diagram: Pet Set → Pod → Spec → Container]
54. Replication Controller (deprecated)
● A Replication Controller used to be the best way
to run Pods.
● You set a number of pods to run and the
Replication Controller made sure that the
number was running across the cluster.
● Rolling updates could be performed by starting
a new Replication Controller and scaling up.
[Diagram: Replication Controller → Pod → Spec → Container]
55. Replica Set. The new hotness.
● A Replica Set differs from the Replication
Controller mainly in that it supports the newer
set-based selectors.
● If you update the Replica Set template, you can
fire an update and automatically roll out
changes.
● Roll backs are also built in.
● These are not designed to be used directly. For
that you need ...
[Diagram: Replica Set → Pod → Spec → Container]
56. Deployments. The king of the hill.
● A Deployment controls the running state of
Pods and Replica Sets.
● In k8s 1.3 it is the primary object you should
be manipulating.
● Deployments have:
○ History.
○ Rolling updates.
○ Pausing updates.
○ Roll-backs.
[Diagram: Deployment → Replica Set → Pod → Spec → Container]
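A sketch of a Deployment (1.3-era API group; names and image are illustrative). The Deployment owns a Replica Set, which in turn owns the pods:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-django
spec:
  replicas: 3              # the generated Replica Set keeps 3 pods running
  template:
    metadata:
      labels:
        app: redis-django
    spec:
      containers:
        - name: django
          image: django    # illustrative image
```

Rolling updates, history, and roll-backs are then driven with the `kubectl rollout` subcommands (e.g. `kubectl rollout undo deployment/redis-django`).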
58. Other stuff.
● Secrets:
○ K8s comes with a built-in secret store that is namespaced and uses
labels to control pod read access.
● Network Policies:
○ You can use labels to define whitelist rules between pods.
● Persistent Volumes:
○ These live outside of normal pod volumes and can be used for shared
storage for things like databases. Yes, databases in containers.
● Ubernetes:
○ A way to cluster your clusters.
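For example, a Secret is also just a flat file; values go in base64-encoded (the name and value here are made up):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: cGFzc3dvcmQ=   # base64 of "password"
```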
60. You Only Need a Computer, BTW
● Minikube
○ https://github.com/kubernetes/minikube
○ Runs a Kubernetes node on top of your favorite VM
(probably VirtualBox).
○ Lots of involvement from the K8s community.
● Kube-solo
○ https://github.com/TheNewNormal/kube-solo-osx
○ Uses the Corectl app to run a Kube VM.
○ Also has a multi-node version.