DEVOPS WORKSHOP
Academic Year 2020-2021
Mohamed HAMMOUDA
A Successful Path To
Continuous Integration
And Continuous Delivery
DevOps
WORKSHOP PLAN
1. INTRODUCTION TO DEVOPS
2. VERSION CONTROL: GIT & GITLAB
3. SOFTWARE QUALITY CONTROL
4. APPLICATION CONTAINERS: DOCKER
5. CONTINUOUS INTEGRATION AND CONTINUOUS DEPLOYMENT
 KUBERNETES
 KUBERNETES IS A CONTAINER ORCHESTRATION SYSTEM
 The name is often abbreviated K8s: "Kubernetes" is 10 letters long, and the "8" stands for the 8 letters between the "K" and the "s".
 Kubernetes was originally developed by Google.
 Google has practically always run applications in containers.
 As early as 2014, it was reported that Google started two billion containers every week. That's over 3,000 containers per second.
 They run these containers on thousands of computers distributed across dozens of data centers around the world.
 Now imagine doing all this manually: it's clear that you need automation, and at this massive scale, it had better be near-perfect.
 Kubernetes is not an open-sourced version of Google's internal Borg or Omega systems; rather, it shares its DNA and family history with them.
 The word Kubernetes is Greek for "pilot" or "helmsman".
 Kubernetes is a software system for automating the deployment and management of complex, large-scale application systems composed of computer processes running in containers.
Kubernetes can:
 Deploy your application
 Scale it up and down dynamically based on demand
 Self-heal it when things break
 Perform zero-downtime rolling updates and rollbacks
Cloud-native app
 A cloud-native app is one that's designed to meet cloud-like demands of auto-scaling, self-healing, rolling updates, rollbacks and more.
 It's important to be clear that cloud-native apps are not applications that will only run in the public cloud; they can run anywhere you have Kubernetes, even your on-premises datacenter.
 Cloud-native is about the way applications behave and react to events.
Microservices app
 A microservices app is built from lots of independent, small, specialised parts that work together to form a meaningful application.
 For example, an e-commerce app might comprise all of the following small specialised components:
 Web front-end
 Catalog service
 Shopping cart
 Authentication service
 Logging service
 Persistent store
Kubernetes and Docker
 Docker is the low-level technology that starts and stops the containerised applications.
 Kubernetes is the higher-level technology that looks after the bigger picture, such as deciding which nodes to run containers on, deciding when to scale up or down, and executing updates.
 Docker isn't the only container runtime Kubernetes supports. In fact, Kubernetes has a couple of features that abstract the container runtime and make it interchangeable:
1. The Container Runtime Interface (CRI) is an abstraction layer that standardizes the way third-party container runtimes work with Kubernetes.
2. Runtime Classes allow you to create different classes of runtimes.
What about Kubernetes vs Docker Swarm?
 In 2016 and 2017 we had the orchestrator wars, in which Docker Swarm, Mesosphere DC/OS, and Kubernetes competed to become the de-facto container orchestrator.
 To cut a long story short, Kubernetes won.
 However, Docker Swarm is still under active development and is popular with small companies that need a simple alternative to Kubernetes.
As you can see in the following figure,
the underlying infrastructure, meaning
the computers, the network and other
components, is hidden from the
applications, making it easier to
develop and configure them.
ABSTRACTING AWAY THE INFRASTRUCTURE
 Kubernetes provides an abstraction layer over the underlying hardware to both
users and applications.
STANDARDIZING HOW WE DEPLOY APPLICATIONS
 A single manifest that describes the application can be used for local deployment
and for deploying on any cloud provider. All differences in the underlying
infrastructure are handled by Kubernetes, so you can focus on the application and
the business logic it contains.
DEPLOYING APPLICATIONS DECLARATIVELY
 Kubernetes uses a declarative model to define an application (kubectl also supports imperative commands). You describe the components that make up your application and Kubernetes turns this description into a running application.
 It then keeps the application healthy by restarting or recreating parts of it as needed.
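As a sketch of such a declarative description, a minimal Deployment manifest might look like the following (the name, labels and replica count are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2                 # desired state: two instances
  selector:
    matchLabels:
      app: nginx
  template:                   # Pod template Kubernetes instantiates
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Submitting this file with kubectl apply -f deployment.yaml declares the desired state; the control plane then converges the cluster towards it.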
TAKING ON THE DAILY MANAGEMENT OF APPLICATIONS
 As soon as you deploy an application to Kubernetes, it takes over the daily management of the application.
 If the application fails, Kubernetes will automatically restart it.
 If the hardware fails or the infrastructure topology changes so that the application needs to be moved to other machines, Kubernetes does this all by itself.
HOW KUBERNETES FITS INTO A COMPUTER CLUSTER
You start with a fleet of machines that you divide into two groups:
 the master nodes: run the Kubernetes Control Plane, which represents the brain of your system and controls the cluster;
 the worker nodes: run your applications (your workloads) and therefore represent the Workload Plane.
Non-production clusters can use a single master node, but highly available clusters use at least three physical master nodes to host the Control Plane. The number of worker nodes depends on the number of applications you'll deploy.
 Regardless of the number of worker nodes in your cluster, they all become a single space where you deploy your applications. You do this using the Kubernetes API, which is provided by the Kubernetes Control Plane.
THE ARCHITECTURE OF A KUBERNETES CLUSTER
As you've already learned, a Kubernetes cluster consists of nodes divided into two groups:
 A set of master nodes that host the Control Plane components
 A set of worker nodes that form the Workload Plane
The two types of nodes run different Kubernetes components.
CONTROL PLANE COMPONENTS
 A Kubernetes control plane node is a server running a collection of system services that make up the control plane of the cluster.
 The simplest setups run a single control plane node. However, this is only suitable for labs and test environments.
 For production environments, multiple control plane nodes configured for high availability (HA) are vital.
 It's also considered good practice not to run user applications on control plane nodes. This frees them up to concentrate entirely on managing the cluster.
THE API SERVER
 The API server is the Grand Central of Kubernetes. All communication, between all components, must go through the API server.
 It exposes a RESTful API that you POST YAML configuration files to over HTTPS.
 These YAML files, which we sometimes call manifests, describe the desired state of an application (which container image to use, which ports to expose, and how many Pod replicas to run).
THE CLUSTER STORE
 The cluster store is the only stateful part of
the control plane and persistently stores the
entire configuration and state of the cluster.
 As such, it’s a vital component of every
Kubernetes cluster – no cluster store, no
cluster.
 The cluster store is currently based on etcd, a popular distributed database. As it's the single source of truth for the cluster, you should run between three and five etcd replicas for high availability, and you should provide adequate ways to recover when things go wrong.
 A default installation of Kubernetes installs a
replica of the cluster store on every control
plane node and automatically configures HA.
THE CONTROLLER MANAGER AND CONTROLLERS
 The controller manager implements all the
background controllers that monitor cluster
components and respond to events.
 Architecturally, it's a controller of controllers, meaning it spawns all the independent controllers and monitors them. Some of the controllers include the Deployment controller, the StatefulSet controller, and the ReplicaSet controller. Each one runs as a background watch-loop, constantly watching the API Server for changes.
 The aim of the game is to ensure the
observed state of the cluster matches the
desired state. The logic implemented by each
controller is as follows, and is at the heart of
Kubernetes and declarative design patterns.
1. Obtain desired state
2. Observe current state
3. Determine differences
4. Reconcile differences
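The four-step loop above can be sketched in a few lines of Python. This is a toy model of the idea with hypothetical names, not actual Kubernetes code, shown here for a replica-count controller:

```python
def reconcile(desired_replicas, observed_replicas):
    """One pass of the watch-loop: compare desired vs observed state
    and return the actions needed to reconcile the difference."""
    if observed_replicas < desired_replicas:      # too few: create more
        return ["create-pod"] * (desired_replicas - observed_replicas)
    if observed_replicas > desired_replicas:      # too many: delete extras
        return ["delete-pod"] * (observed_replicas - desired_replicas)
    return []                                     # states match: nothing to do

# A controller runs this loop continuously against the API server's state:
print(reconcile(3, 1))   # ['create-pod', 'create-pod']
```

Each real controller applies the same pattern, only with richer state than a bare replica count.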
THE SCHEDULER
 At a high level, the scheduler watches
the API server for new work tasks and
assigns them to appropriate healthy
worker nodes.
 Behind the scenes, it implements
complex logic that filters out nodes
incapable of running tasks, and then
ranks the nodes that are capable. The
ranking system is complex, but the node
with the highest ranking score is selected
to run the task.
 The scheduler isn’t responsible for
running tasks, just picking the nodes to
run them. A task is normally a
Pod/container. You’ll learn about Pods
and containers in later chapters.
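The filter-then-rank idea can be illustrated with a deliberately simplified Python sketch; the real scheduler considers many more criteria than the free-CPU heuristic assumed here:

```python
def schedule(pod_cpu, nodes):
    """Pick a node for a task: filter out nodes that can't run it,
    then rank the remaining ones and return the highest-scoring node."""
    feasible = [n for n in nodes if n["free_cpu"] >= pod_cpu]   # filter step
    if not feasible:
        return None                       # no capable node: task stays pending
    best = max(feasible, key=lambda n: n["free_cpu"])           # rank step
    return best["name"]

nodes = [{"name": "node-a", "free_cpu": 1.0},
         {"name": "node-b", "free_cpu": 4.0}]
print(schedule(2.0, nodes))   # node-b
```

A task that fits no node returns None, which mirrors a Pod staying in the Pending state.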
WORKER NODE COMPONENTS
 The worker nodes are the
computers on which your
applications run.
 They form the cluster’s
Workload Plane. In addition
to applications, several
Kubernetes components also
run on these nodes.
 They perform the task of
running, monitoring and
providing connectivity
between your applications.
KUBELET
 The kubelet is the main Kubernetes agent and runs on every cluster node. In fact, it's common to use the terms node and kubelet interchangeably.
 When you join a node to a cluster, the process installs the kubelet, which is then responsible for registering the node with the cluster. This process registers the node's CPU, memory, and storage into the wider cluster pool.
 One of the main jobs of the kubelet is to watch the API server for new work tasks. Any time it sees one, it executes the task and maintains a reporting channel back to the control plane.
CONTAINER RUNTIME
 The kubelet needs a container runtime to perform container-related tasks: things like pulling images and starting and stopping containers.
 Kubernetes is dropping support for Docker as a container runtime. This is because Docker is bloated and doesn't support the CRI (it requires a shim).
 containerd is replacing it as the most common container runtime on Kubernetes.
KUBE-PROXY
 The last piece of the node puzzle is the kube-proxy.
 It runs on every node and is responsible for local cluster networking.
 It ensures each node gets its own unique IP address, and it implements local iptables or IPVS rules to handle routing and load-balancing of traffic on the Pod network.
DEFINING YOUR APPLICATION
 Everything in Kubernetes is represented by an object.
 You create and retrieve these objects via the Kubernetes API.
 Your application consists of several types of these objects. One type represents the application deployment as a whole, another represents the service provided by a set of these instances and allows reaching them at a single IP address, and there are many others.
INTRODUCING PODS
 In Kubernetes, instead of deploying individual containers, you deploy groups of co-located containers: pods.
 A pod is a group of one or more closely
related containers that run together on
the same worker node and need to
share certain Linux namespaces.
 The simplest model is to run a single
container in every Pod. This is why we
often use the terms “Pod” and
“container” interchangeably.
 However, there are advanced use-cases
that run multiple containers in a single
Pod.
POD ANATOMY
 At the highest-level, a Pod is a ring-fenced
environment to run containers. Pods themselves
don’t actually run applications – applications
always run in containers, the Pod is just a sandbox
to run one or more containers.
 If you’re running multiple containers in a Pod,
they all share the same Pod environment. This
includes the network stack, volumes, IPC
namespace, shared memory, and more. As an
example, this means all containers in the same
Pod will share the same IP address (the Pod’s IP).
 If two containers in the same Pod need to talk to each other (container-to-container within the Pod), they can use the Pod's localhost interface.
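A sketch of such a two-container Pod sharing one network namespace (the names, images and the sidecar's command are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web                  # main application container
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar              # helper container in the same sandbox
    image: busybox
    # reaches the web container over the shared localhost interface
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]
```

Because both containers share the Pod's IP and network stack, the sidecar addresses the web server simply as localhost.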
SERVICE OBJECTS AND STABLE NETWORKING
 You’ve just learned that Pods are mortal and can die.
However, if they’re managed via higher level
controllers, they get replaced when they fail. But
replacements come with totally different IP addresses.
This also happens with rollouts and scaling operations.
Rollouts replace old Pods with new ones with new IPs.
Scaling up adds new Pods with new IP addresses,
whereas scaling down takes existing Pods away. Events
like these cause a lot of IP churn.
 Assume you’ve got a microservices app with a bunch of
Pods performing video rendering. How will this work if
other parts of the app that use the rendering service
can’t rely on rendering Pods being there when needed?
This is where Services come in to play. They provide
reliable networking for a set of Pods.
 Consider an uploader microservice talking to the renderer microservice via a Kubernetes Service object. The Service (capital "S" because it's a Kubernetes API object) provides a reliable name and IP. It also load-balances requests to the renderer Pods behind it.
How Kubernetes runs an application
These actions take place when you deploy the application:
1. You submit the application manifest to the Kubernetes API.
2. The API Server writes the objects defined in the manifest to
etcd.
3. A controller notices the newly created objects and creates
several new objects - one for each application instance.
4. The Scheduler assigns a node to each instance.
5. The Kubelet notices that an instance is assigned to the
Kubelet’s node. It runs the application instance via the
Container Runtime.
6. The Kube Proxy notices that the application instances are
ready to accept connections from clients and configures a
load balancer for them.
7. The Kubelets and the Controllers monitor the system and keep
the applications running.
After you’ve created your YAML or JSON file(s), you submit the
file to the API, usually via the Kubernetes command-line tool
called kubectl.
NOTE Kubectl is pronounced kube-control, but the softer souls
in the community prefer to call it kubecuddle. Some refer to it
as kube-C-T-L.
Kubectl splits the file into individual objects and creates each
of them by sending an HTTP PUT or POST request to the API,
as is usually the case with RESTful APIs. The API Server
validates the objects and stores them in the etcd datastore. In
addition, it notifies all interested components that these
objects have been created. Controllers, which are explained
next, are one of these components.
• Download the Minikube binary
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube
$ sudo mv minikube /usr/local/bin
$ minikube start
 KUBERNETES
 Lab on Minikube
MINIKUBE START
minikube is local Kubernetes, focusing on making it easy to learn and develop for Kubernetes.
😄 minikube v1.27.1 on Ubuntu 20.04
👎 Unable to pick a default driver. Here is what was considered, in preference order:
▪ docker: Not healthy: "docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1:
Got permission denied while trying to connect to the Docker daemon socket at
unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/version": dial unix
/var/run/docker.sock: connect: permission denied
▪ docker: Suggestion: Add your user to the 'docker' group: 'sudo usermod -aG docker $USER &&
newgrp docker' <https://docs.docker.com/engine/install/linux-postinstall/>
💡 Alternatively you could install one of these drivers:
▪ kvm2: Not installed: exec: "virsh": executable file not found in $PATH
▪ vmware: Not installed: exec: "docker-machine-driver-vmware": executable file not found in
$PATH
▪ podman: Not installed: exec: "podman": executable file not found in $PATH
▪ virtualbox: Not installed: unable to find VBoxManage in $PATH
▪ qemu2: Not installed: exec: "qemu-system-x86_64": executable file not found in $PATH
❌ Exiting due to DRV_NOT_HEALTHY: Found driver(s) but none were healthy. See above for
suggestions how to fix installed drivers.
• Connect to Minikube and list information about the Kubernetes control plane components
$ minikube ssh
$ docker ps --filter "name=kube-apiserver" --filter "name=etcd" --filter "name=kube-scheduler" --filter "name=kube-controller-manager" | grep -v "pause"
• Add your user to the docker group
$ sudo usermod -aG docker $USER && newgrp docker
$ minikube start --driver=docker
• Download the kubectl binary
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
INTERACTING WITH KUBERNETES
To interact with Kubernetes, you use a command-line tool called kubectl.
The tool communicates with the Kubernetes API
server, which is part of the Kubernetes Control
Plane. To download the latest version, first go to
https://storage.googleapis.com/kubernetes-
release/release/stable.txt to see what the
latest stable version is and then replace the
version number in the first URL with this
version. To check if you’ve installed it correctly,
run kubectl --help.
SETTING UP A SHORT ALIAS FOR KUBECTL
You can speed up use of kubectl commands by setting up an alias and tab
completion for it.
• Define alias for kubectl command
$ alias k=kubectl
INTERACTING WITH KUBERNETES THROUGH WEB DASHBOARDS
If you prefer using graphical web user interfaces, you’ll be happy to hear that Kubernetes also comes with a
nice web dashboard.
• Display the minikube dashboard
$ minikube dashboard
DEPLOYING YOUR APPLICATION
The imperative way to deploy an application is to use the kubectl create deployment
command.
By using the imperative command, you avoid the need to know the structure of Deployment objects, as you would when writing YAML or JSON manifests.
CREATING A DEPLOYMENT
• Deploy an NGINX server to your Kubernetes cluster
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
This command specifies three things:
 You want to create a deployment object.
 You want the object to be called nginx.
 You want the deployment to use the container image nginx
The Deployment object is now stored in the Kubernetes API. The existence of this object tells
Kubernetes that the nginx container must run in your cluster.
You’ve stated your desired state. Kubernetes must now ensure that the actual state reflects
your wishes.
LISTING DEPLOYMENTS
The interaction with Kubernetes consists mainly of
the creation and manipulation of objects via its API.
Kubernetes stores these objects and then performs
operations to bring them to life. For example, when
you create a Deployment object, Kubernetes runs an
application.
Kubernetes then keeps you informed about the
current state of the application by writing the status
to the same Deployment object.
• List all Deployment objects
$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   0/1     1            0           6s
The easiest way to create the service is to use the following imperative command:
• Create a LoadBalancer Service
$ kubectl expose deployment nginx --type=LoadBalancer --port 8080
service/nginx exposed
EXPOSING YOUR APPLICATION TO THE WORLD
The next question to answer is how to access it. I mentioned that each pod gets its own IP address, but this
address is internal to the cluster and not accessible from the outside. To make the pod accessible externally,
you’ll expose it by creating a Service object.
Several types of Service objects exist:
• ClusterIP services expose pods only within the cluster,
• while NodePort and LoadBalancer services also expose them externally.
A service with the type LoadBalancer provisions an external load balancer, which makes the service accessible via a public IP. This is the type of service you'll create now.
This is what running the expose command tells Kubernetes:
• You want to expose all pods that belong to the nginx Deployment as a new service.
• You want the pods to be accessible from outside the cluster via a load balancer.
• You want the service to listen on port 8080, so you'll access the application via that port.
You didn't specify a name for the Service object, so it inherits the name of the Deployment.
LISTING SERVICES
Services are API objects, just like Pods, Deployments, Nodes and virtually everything else in Kubernetes, so you can list them by executing kubectl get services. The list shows the services with their types, IPs and the ports they expose. The nginx service doesn't yet have an external IP address. Whether it gets one depends on how you've deployed the cluster.
mohamed@Sw2:~$ kubectl delete -n default deployment kubia2
deployment.apps "kubia2" deleted
mohamed@Sw2:~$ kubectl delete -n default deployment kubia
deployment.apps "kubia" deleted
mohamed@Sw2:~$ kubectl delete -n default deployment kubia4
deployment.apps "kubia4" deleted
mohamed@Sw2:~$ kubectl delete -n default deployment kubia3
deployment.apps "kubia3" deleted
mohamed@Sw2:~$ k get pod
NAME READY STATUS RESTARTS AGE
kubia-59fcf787df-9l4fn 0/1 Terminating 0 11m
kubia2-76f984565-9jtm6 0/1 Terminating 0 12m
kubia3-84dd9687f7-wdrd4 0/1 Terminating 0 12m
kubia4-5d56d66678-8hmq7 0/1 Terminating 0 12m
nginx-76d6c9b8c-cpwnx 1/1 Running 0 115m
mohamed@Sw2:~$ k get pod
NAME READY STATUS RESTARTS AGE
kubia4-5d56d66678-8hmq7 0/1 Terminating 0 12m
nginx-76d6c9b8c-cpwnx 1/1 Running 0 115m
Test the following scenario:
• Delete a pod
$ kubectl delete pod <name>
• Run several successive gets; what do you notice?
$ kubectl get pod
• Delete a deployment
$ kubectl delete -n default deployment <name>
• Run several successive gets; what do you notice?
$ kubectl get pod
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providermohitmore19
 
Right Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsRight Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsJhone kinadey
 
Hand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxHand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxbodapatigopi8531
 
Professional Resume Template for Software Developers
Professional Resume Template for Software DevelopersProfessional Resume Template for Software Developers
Professional Resume Template for Software DevelopersVinodh Ram
 
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerHow To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerThousandEyes
 
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AISyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AIABDERRAOUF MEHENNI
 
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Steffen Staab
 

Recently uploaded (20)

Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...
Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...
Tech Tuesday-Harness the Power of Effective Resource Planning with OnePlan’s ...
 
why an Opensea Clone Script might be your perfect match.pdf
why an Opensea Clone Script might be your perfect match.pdfwhy an Opensea Clone Script might be your perfect match.pdf
why an Opensea Clone Script might be your perfect match.pdf
 
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
 
DNT_Corporate presentation know about us
DNT_Corporate presentation know about usDNT_Corporate presentation know about us
DNT_Corporate presentation know about us
 
HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.com
 
Software Quality Assurance Interview Questions
Software Quality Assurance Interview QuestionsSoftware Quality Assurance Interview Questions
Software Quality Assurance Interview Questions
 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
 
Unveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time ApplicationsUnveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
 
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
 
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
 
Microsoft AI Transformation Partner Playbook.pdf
Microsoft AI Transformation Partner Playbook.pdfMicrosoft AI Transformation Partner Playbook.pdf
Microsoft AI Transformation Partner Playbook.pdf
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service provider
 
Right Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsRight Money Management App For Your Financial Goals
Right Money Management App For Your Financial Goals
 
Hand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxHand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptx
 
Professional Resume Template for Software Developers
Professional Resume Template for Software DevelopersProfessional Resume Template for Software Developers
Professional Resume Template for Software Developers
 
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerHow To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
 
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AISyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
 
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS LiveVip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
 
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
 

Jenkins_K8s (2).pptx

  • 1. WORKSHOP: DEVOPS Academic Year 2020-2021 Mohamed HAMMOUDA DevOps: A Successful Path To Continuous Integration And Continuous Delivery
  • 2. WORKSHOP PLAN
    1. Introduction to DevOps
    2. Version control: Git & GitLab
    3. Software quality control
    4. Application containers: Docker
    5. Continuous Integration and Continuous Deployment
  • 3. WORKSHOP PLAN
    1. Introduction to DevOps
    2. Version control: Git & GitLab
    3. Software quality control
    4. Application containers: Docker
    5. Continuous Integration and Continuous Deployment
  • 4. KUBERNETES: A CONTAINER ORCHESTRATION SYSTEM
    KUBERNETES is often abbreviated K8S: the word has 10 letters, and the 8 stands in for the 8 letters between the K and the S.
  • 5. KUBERNETES
     Kubernetes was originally developed by Google.
     Google has practically always run applications in containers.
     As early as 2014, it was reported that they start two billion containers every week. That's over 3,000 containers per second.
     They run these containers on thousands of computers distributed across dozens of data centers around the world.
     Now imagine doing all this manually: it's clear that you need automation, and at this massive scale, it had better be perfect.
     Kubernetes is not an open-sourced version of Borg or Omega. It's more that Kubernetes shares its DNA and family history with them.
     The word Kubernetes is Greek for pilot or helmsman.
  • 6. KUBERNETES
     Kubernetes is a software system for automating the deployment and management of complex, large-scale application systems composed of computer processes running in containers.
    Kubernetes can:
     Deploy your application
     Scale it up and down dynamically based on demand
     Self-heal it when things break
     Perform zero-downtime rolling updates and rollbacks
  • 7. CLOUD-NATIVE AND MICROSERVICES APPS
    Cloud-native app
     A cloud-native app is one that's designed to meet cloud-like demands of auto-scaling, self-healing, rolling updates, rollbacks and more.
     It's important to be clear that cloud-native apps are not applications that will only run in the public cloud; they can also run anywhere you have Kubernetes, even your on-premises datacenter.
     Cloud-native is about the way applications behave and react to events.
    Microservices app
     A microservices app is built from lots of independent, small, specialised parts that work together to form a meaningful application.
     For example, you might have an e-commerce app that comprises all of the following small specialised components: web front-end, catalog service, shopping cart, authentication service, logging service, persistent store.
  • 8. KUBERNETES AND DOCKER
     Docker is the low-level technology that starts and stops the containerised applications.
     Kubernetes is the higher-level technology that looks after the bigger picture, such as deciding which nodes to run containers on, deciding when to scale up or down, and executing updates.
     Docker isn't the only container runtime Kubernetes supports. In fact, Kubernetes has a couple of features that abstract the container runtime and make it interchangeable:
    1. The Container Runtime Interface (CRI) is an abstraction layer that standardizes the way third-party container runtimes work with Kubernetes.
    2. Runtime Classes allow you to create different classes of runtimes.
  • 9. WHAT ABOUT KUBERNETES VS DOCKER SWARM?
     In 2016 and 2017 we had the orchestrator wars, where Docker Swarm, Mesosphere DCOS, and Kubernetes competed to become the de-facto container orchestrator.
     To cut a long story short, Kubernetes won.
     However, Docker Swarm is still under active development and is popular with small companies that need a simple alternative to Kubernetes.
  • 10. ABSTRACTING AWAY THE INFRASTRUCTURE
     Kubernetes provides an abstraction layer over the underlying hardware to both users and applications.
     As you can see in the following figure, the underlying infrastructure (the computers, the network and other components) is hidden from the applications, making it easier to develop and configure them.
    STANDARDIZING HOW WE DEPLOY APPLICATIONS
     A single manifest that describes the application can be used for local deployment and for deploying on any cloud provider. All differences in the underlying infrastructure are handled by Kubernetes, so you can focus on the application and the business logic it contains.
  • 11. DEPLOYING APPLICATIONS DECLARATIVELY
     Kubernetes uses a declarative model to define an application, in contrast to imperative kubectl commands. You describe the components that make up your application, and Kubernetes turns this description into a running application.
     It then keeps the application healthy by restarting or recreating parts of it as needed.
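As a sketch of what such a declarative description looks like, here is a minimal Deployment manifest (the names, labels and image tag are illustrative, not from the workshop):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # desired state: three running instances
  selector:
    matchLabels:
      app: web             # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # the container image to run
```

You would submit it with `kubectl apply -f deployment.yaml`; from then on, Kubernetes works continuously to keep three replicas running.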
  • 12. TAKING ON THE DAILY MANAGEMENT OF APPLICATIONS
     As soon as you deploy an application to Kubernetes, it takes over the daily management of the application.
     If the application fails, Kubernetes will automatically restart it.
     If the hardware fails, or the infrastructure topology changes so that the application needs to be moved to other machines, Kubernetes handles this all by itself.
  • 13. HOW KUBERNETES FITS INTO A COMPUTER CLUSTER
    You start with a fleet of machines that you divide into two groups:
     the master nodes, which run the Kubernetes Control Plane, the brain of your system that controls the cluster;
     the worker nodes, which run your applications (your workloads) and therefore represent the Workload Plane.
     Non-production clusters can use a single master node, but highly available clusters use at least three physical master nodes to host the Control Plane. The number of worker nodes depends on the number of applications you'll deploy.
     Regardless of the number of worker nodes in your cluster, they all become a single space where you deploy your applications. You do this using the Kubernetes API, which is provided by the Kubernetes Control Plane.
  • 14. THE ARCHITECTURE OF A KUBERNETES CLUSTER
    As you've already learned, a Kubernetes cluster consists of nodes divided into two groups:
     a set of master nodes that host the Control Plane components;
     a set of worker nodes that form the Workload Plane.
    The two types of nodes run different Kubernetes components.
  • 15. CONTROL PLANE COMPONENTS
     A Kubernetes control plane node is a server running the collection of system services that make up the control plane of the cluster.
     The simplest setups run a single control plane node. However, this is only suitable for labs and test environments.
     For production environments, multiple control plane nodes configured for high availability (HA) are vital.
     It's also considered good practice not to run user applications on control plane nodes. This frees them up to concentrate entirely on managing the cluster.
  • 16. THE API SERVER
     The API server is the Grand Central of Kubernetes. All communication, between all components, must go through the API server.
     It exposes a RESTful API that you POST YAML configuration files to over HTTPS.
     These YAML files, which we sometimes call manifests, describe the desired state of an application: which container image to use, which ports to expose, and how many Pod replicas to run.
  • 17. THE CLUSTER STORE
     The cluster store is the only stateful part of the control plane and persistently stores the entire configuration and state of the cluster.
     As such, it's a vital component of every Kubernetes cluster: no cluster store, no cluster.
     The cluster store is currently based on etcd, a popular distributed database. As it's the single source of truth for a cluster, you should run three to five etcd replicas for high availability, and you should provide adequate ways to recover when things go wrong.
     A default installation of Kubernetes installs a replica of the cluster store on every control plane node and automatically configures HA.
  • 18. THE CONTROLLER MANAGER AND CONTROLLERS
     The controller manager implements all the background controllers that monitor cluster components and respond to events.
     Architecturally, it's a controller of controllers, meaning it spawns all the independent controllers and monitors them. Some of the controllers include the Deployment controller, the StatefulSet controller, and the ReplicaSet controller. Each one runs as a background watch-loop, constantly watching the API server for changes.
     The aim of the game is to ensure the observed state of the cluster matches the desired state. The logic implemented by each controller is as follows, and is at the heart of Kubernetes and declarative design patterns:
    1. Obtain desired state
    2. Observe current state
    3. Determine differences
    4. Reconcile differences
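The four steps above can be sketched as a toy reconcile loop in Python. This is only a model of the idea, not real controller code: real controllers watch the API server and act on live cluster objects, and all names here are illustrative.

```python
# Toy model of a controller reconcile loop: compare desired vs observed
# replica counts per application and compute the corrective actions.
# Illustrative only; real Kubernetes controllers operate on API objects.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to make the observed state match the desired state."""
    actions = []
    # Step 1+2: desired and observed state are passed in.
    # Step 3+4: determine and reconcile the differences.
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(f"create {want - have} replica(s) of {name}")
        elif have > want:
            actions.append(f"delete {have - want} replica(s) of {name}")
    for name in observed:
        if name not in desired:
            actions.append(f"delete all replicas of {name}")
    return actions

print(reconcile({"web": 3}, {"web": 1}))  # prints ['create 2 replica(s) of web']
```

A real controller runs this comparison in an endless watch-loop, so any drift between the two states is corrected shortly after it appears.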
  • 19. THE SCHEDULER
     At a high level, the scheduler watches the API server for new work tasks and assigns them to appropriate healthy worker nodes.
     Behind the scenes, it implements complex logic that filters out nodes incapable of running tasks, and then ranks the nodes that are capable. The ranking system is complex, but the node with the highest ranking score is selected to run the task.
     The scheduler isn't responsible for running tasks, just for picking the nodes to run them. A task is normally a Pod/container. You'll learn about Pods and containers in later chapters.
  • 20. WORKER NODE COMPONENTS
     The worker nodes are the computers on which your applications run. They form the cluster's Workload Plane.
     In addition to applications, several Kubernetes components also run on these nodes. They perform the task of running, monitoring and providing connectivity between your applications.
  • 21. KUBELET
     The kubelet is the main Kubernetes agent and runs on every cluster node. In fact, it's common to use the terms node and kubelet interchangeably.
     When you join a node to a cluster, the process installs the kubelet, which is then responsible for registering it with the cluster. This process registers the node's CPU, memory, and storage into the wider cluster pool.
     One of the main jobs of the kubelet is to watch the API server for new work tasks. Any time it sees one, it executes the task and maintains a reporting channel back to the control plane.
  • 22. CONTAINER RUNTIME
     The kubelet needs a container runtime to perform container-related tasks: things like pulling images and starting and stopping containers.
     Kubernetes is dropping support for Docker as a container runtime, because Docker is bloated and doesn't support the CRI (it requires a shim).
     containerd is replacing it as the most common container runtime on Kubernetes.
  • 23. KUBE-PROXY
     The last piece of the node puzzle is the kube-proxy.
     It runs on every node and is responsible for local cluster networking.
     It ensures each node gets its own unique IP address, and it implements local iptables or IPVS rules to handle routing and load-balancing of traffic on the Pod network.
  • 24. DEFINING YOUR APPLICATION
     Everything in Kubernetes is represented by an object. You create and retrieve these objects via the Kubernetes API.
     Your application consists of several types of these objects: one type represents the application deployment as a whole, another represents the service provided by a set of these instances and allows reaching them at a single IP address, and there are many others.
  • 25. INTRODUCING PODS
     In Kubernetes, instead of deploying individual containers, you deploy groups of co-located containers: pods.
     A pod is a group of one or more closely related containers that run together on the same worker node and need to share certain Linux namespaces.
     The simplest model is to run a single container in every Pod. This is why we often use the terms "Pod" and "container" interchangeably.
     However, there are advanced use-cases that run multiple containers in a single Pod.
  • 26. POD ANATOMY
     At the highest level, a Pod is a ring-fenced environment to run containers. Pods themselves don't actually run applications; applications always run in containers, and the Pod is just a sandbox to run one or more containers.
     If you're running multiple containers in a Pod, they all share the same Pod environment. This includes the network stack, volumes, IPC namespace, shared memory, and more. As an example, this means all containers in the same Pod share the same IP address (the Pod's IP).
     If two containers in the same Pod need to talk to each other (container-to-container within the Pod), they can use the Pod's localhost interface.
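A two-container Pod sharing the same network namespace might look like the following sketch (names, images and the command are illustrative, not from the workshop):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper
spec:
  containers:
  - name: web
    image: nginx:1.25        # serves HTTP on port 80
  - name: helper
    image: busybox:1.36
    # The helper reaches the web container over the Pod's shared
    # localhost interface; no Service or Pod IP is needed for this.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80/; sleep 5; done"]
```

Both containers share one IP (the Pod's IP), so they must also avoid port clashes with each other.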
  • 27. SERVICE OBJECTS AND STABLE NETWORKING
     You've just learned that Pods are mortal and can die. However, if they're managed via higher-level controllers, they get replaced when they fail. But replacements come with totally different IP addresses. This also happens with rollouts and scaling operations: rollouts replace old Pods with new ones with new IPs; scaling up adds new Pods with new IP addresses, whereas scaling down takes existing Pods away. Events like these cause a lot of IP churn.
     Assume you've got a microservices app with a bunch of Pods performing video rendering. How will this work if the other parts of the app that use the rendering service can't rely on the rendering Pods being there when needed? This is where Services come into play: they provide reliable networking for a set of Pods.
     Picture the uploader microservice talking to the renderer microservice via a Kubernetes Service object. The Service (capital "S", because it's a Kubernetes API object) provides a reliable name and IP, and it load-balances requests to the two renderer Pods behind it.
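A Service selects Pods by label and gives them one stable name and virtual IP. A minimal sketch for the renderer example (the `app: renderer` label and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: renderer         # clients use this stable DNS name
spec:
  selector:
    app: renderer        # traffic is load-balanced across Pods carrying this label
  ports:
  - port: 80             # the Service's stable port
    targetPort: 8080     # the port the renderer containers listen on
```

Because the selector matches labels rather than IPs, Pods can come and go freely; the Service's name and IP never change.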
  • 28. HOW KUBERNETES RUNS AN APPLICATION
    These actions take place when you deploy the application:
    1. You submit the application manifest to the Kubernetes API.
    2. The API server writes the objects defined in the manifest to etcd.
    3. A controller notices the newly created objects and creates several new objects, one for each application instance.
    4. The scheduler assigns a node to each instance.
    5. The kubelet notices that an instance is assigned to the kubelet's node. It runs the application instance via the container runtime.
    6. The kube-proxy notices that the application instances are ready to accept connections from clients and configures a load balancer for them.
    7. The kubelets and the controllers monitor the system and keep the applications running.
     After you've created your YAML or JSON file(s), you submit the file to the API, usually via the Kubernetes command-line tool called kubectl.
    NOTE: Kubectl is pronounced kube-control, but the softer souls in the community prefer to call it kube-cuddle. Some refer to it as kube-C-T-L.
     Kubectl splits the file into individual objects and creates each of them by sending an HTTP PUT or POST request to the API, as is usually the case with RESTful APIs. The API server validates the objects and stores them in the etcd datastore. In addition, it notifies all interested components that these objects have been created. Controllers, which are explained next, are one of these components.
  • 29. LAB: MINIKUBE
    minikube is a local Kubernetes, focusing on making it easy to learn and develop for Kubernetes.
    • Download the minikube binary and start a cluster:
    $ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
    $ chmod +x minikube
    $ sudo mv minikube /usr/local/bin
    $ minikube start
    A first start may fail if no driver is healthy:
    😄 minikube v1.27.1 on Ubuntu 20.04
    👎 Unable to pick a default driver. Here is what was considered, in preference order:
    ▪ docker: Not healthy: "docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/version": dial unix /var/run/docker.sock: connect: permission denied
    ▪ docker: Suggestion: Add your user to the 'docker' group: 'sudo usermod -aG docker $USER && newgrp docker' <https://docs.docker.com/engine/install/linux-postinstall/>
    💡 Alternatively you could install one of these drivers:
    ▪ kvm2: Not installed: exec: "virsh": executable file not found in $PATH
    ▪ vmware: Not installed: exec: "docker-machine-driver-vmware": executable file not found in $PATH
    ▪ podman: Not installed: exec: "podman": executable file not found in $PATH
    ▪ virtualbox: Not installed: unable to find VBoxManage in $PATH
    ▪ qemu2: Not installed: exec: "qemu-system-x86_64": executable file not found in $PATH
    ❌ Exiting due to DRV_NOT_HEALTHY: Found driver(s) but none were healthy. See above for suggestions how to fix installed drivers.
    • Add your user to the docker group, then start minikube with the docker driver:
    $ sudo usermod -aG docker $USER && newgrp docker
    $ minikube start --driver=docker
    • Connect to minikube and list information about the k8s components:
    $ minikube ssh
    $ docker ps --filter "name=kube-apiserver" --filter "name=etcd" --filter "name=kube-scheduler" --filter "name=kube-controller-manager" | grep -v "pause"
  • 30. LAB: INTERACTING WITH KUBERNETES
    To interact with Kubernetes, you use a command-line tool called kubectl. The tool communicates with the Kubernetes API server, which is part of the Kubernetes Control Plane.
    • Download the kubectl binary:
    $ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
    To download the latest version, first go to https://storage.googleapis.com/kubernetes-release/release/stable.txt to see what the latest stable version is, then replace the version number in the first URL with this version. To check that you've installed it correctly, run kubectl --help.
    SETTING UP A SHORT ALIAS FOR KUBECTL
    You can speed up the use of kubectl commands by setting up an alias and tab completion for it.
    • Define an alias for the kubectl command:
    $ alias k=kubectl
    INTERACTING WITH KUBERNETES THROUGH WEB DASHBOARDS
    If you prefer graphical web user interfaces, you'll be happy to hear that Kubernetes also comes with a nice web dashboard.
    • Display the minikube dashboard:
    $ minikube dashboard
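To make tab completion work for the short alias as well, the kubectl completion docs describe adding a few lines to your shell startup file. A sketch of a `~/.bashrc` config fragment (bash shown; adjust for your shell):

```bash
# Load kubectl's bash completion, define the short alias,
# and attach kubectl's completion function to the alias too.
source <(kubectl completion bash)
alias k=kubectl
complete -o default -F __start_kubectl k
```

After reopening the shell, `k get <Tab>` completes resource types the same way `kubectl get <Tab>` does.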
  • 31. DEPLOYING YOUR APPLICATION: CREATING A DEPLOYMENT
    The imperative way to deploy an application is to use the kubectl create deployment command. By using the imperative command, you avoid the need to know the structure of Deployment objects, as you would when writing YAML or JSON manifests.
    • Deploy an Nginx server to your Kubernetes cluster:
    $ kubectl create deployment nginx --image=nginx
    deployment.apps/nginx created
    This command specifies three things:
     You want to create a Deployment object.
     You want the object to be called nginx.
     You want the Deployment to use the container image nginx.
    The Deployment object is now stored in the Kubernetes API. The existence of this object tells Kubernetes that the nginx container must run in your cluster. You've stated your desired state; Kubernetes must now ensure that the actual state reflects your wishes.
  • 32. LISTING DEPLOYMENTS
    The interaction with Kubernetes consists mainly of the creation and manipulation of objects via its API. Kubernetes stores these objects and then performs operations to bring them to life. For example, when you create a Deployment object, Kubernetes runs an application. Kubernetes then keeps you informed about the current state of the application by writing the status to the same Deployment object.
    • List all Deployment objects:
    $ kubectl get deployments
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   0/1     1            0           6s
  • 33. EXPOSING YOUR APPLICATION TO THE WORLD
    The next question is how to access the application. Each pod gets its own IP address, but this address is internal to the cluster and not accessible from the outside. To make the pod accessible externally, you'll expose it by creating a Service object.
    Several types of Service objects exist: some expose pods only within the cluster (ClusterIP), while others expose them externally (NodePort, LoadBalancer).
    A service of type LoadBalancer provisions an external load balancer, which makes the service accessible via a public IP. This is the type of service you'll create now. The easiest way to create it is with the following imperative command:
    • Create a LoadBalancer Service:
    $ kubectl expose deployment nginx --type=LoadBalancer --port 80
    service/nginx exposed
    Running the above command tells Kubernetes:
    • You want to expose all pods that belong to the nginx Deployment as a new service.
    • You want the pods to be accessible from outside the cluster via a load balancer.
    • The application listens on port 80, so you want to access it via that port.
    You didn't specify a name for the Service object, so it inherits the name of the Deployment.
  • 34. LISTING SERVICES
    Services are API objects, just like Pods, Deployments, Nodes and virtually everything else in Kubernetes, so you can list them by executing kubectl get services, as in the next listing. The list shows the services with their types, IPs and the ports they expose. The nginx service doesn't yet have an external IP address; whether it gets one depends on how you've deployed the cluster.
    Try the following scenario:
    • Delete a pod:
    $ kubectl delete pod <name>
    • Run kubectl get repeatedly; what do you notice?
    $ kubectl get pod
    • Delete a deployment:
    $ kubectl delete -n default deployment <name>
    • Run kubectl get repeatedly; what do you notice?
    $ kubectl get pod
    A sample session:
    mohamed@Sw2:~$ kubectl delete -n default deployment kubia2
    deployment.apps "kubia2" deleted
    mohamed@Sw2:~$ kubectl delete -n default deployment kubia
    deployment.apps "kubia" deleted
    mohamed@Sw2:~$ kubectl delete -n default deployment kubia4
    deployment.apps "kubia4" deleted
    mohamed@Sw2:~$ kubectl delete -n default deployment kubia3
    deployment.apps "kubia3" deleted
    mohamed@Sw2:~$ k get pod
    NAME                      READY   STATUS        RESTARTS   AGE
    kubia-59fcf787df-9l4fn    0/1     Terminating   0          11m
    kubia2-76f984565-9jtm6    0/1     Terminating   0          12m
    kubia3-84dd9687f7-wdrd4   0/1     Terminating   0          12m
    kubia4-5d56d66678-8hmq7   0/1     Terminating   0          12m
    nginx-76d6c9b8c-cpwnx     1/1     Running       0          115m
    mohamed@Sw2:~$ k get pod
    NAME                      READY   STATUS        RESTARTS   AGE
    kubia4-5d56d66678-8hmq7   0/1     Terminating   0          12m
    nginx-76d6c9b8c-cpwnx     1/1     Running       0          115m

Editor's Notes

  1. And the best part about Kubernetes… it does all of this without you having to supervise or get involved. Obviously, you have to set things up in the first place, but once you’ve done that, you sit back and let Kubernetes work its magic.
  2. Each of these individual services is called a microservice. Typically, each is coded and owned by a different team. Each can have its own release cycle and can be scaled independently. For example, you can patch and scale the logging microservice without affecting any of the others. Building applications this way is vital for cloud-native features. For the most part, each microservice runs as a container. Assuming this e-commerce app with the 6 microservices, there’d be one or more web front-end containers, one or more catalog containers, one or more shopping cart containers etc.
  5. Because the details of the underlying infrastructure no longer affect the deployment of applications, you deploy applications to your corporate data center in the same way as you do in the cloud. A single manifest that describes the application can be used for local deployment and for deploying on any cloud provider. All differences in the underlying infrastructure are handled by Kubernetes, so you can focus on the application and the business logic it contains.
  6. When software developers or operators decide to deploy an application, they do this through Kubernetes instead of deploying the application to individual computers. Kubernetes provides an abstraction layer over the underlying hardware to both users and applications.
  7. The engineers responsible for operating the system can focus on the big picture instead of wasting time on the details. To circle back to the sailing analogy: the development and operations engineers are the ship’s officers who make high-level decisions while sitting comfortably in their armchairs, and Kubernetes is the helmsman who takes care of the low-level tasks of steering the system through the rough waters your applications and infrastructure sail through.
  8. To get a concrete example of how Kubernetes is deployed onto a cluster of computers, look at the following figure After Kubernetes is installed on the computers, you no longer need to think about individual computers when deploying applications. Regardless of the number of worker nodes in your cluster, they all become a single space where you deploy your applications. You do this using the Kubernetes API, which is provided by the Kubernetes Control Plane.
  10. A Kubernetes control plane node is a server running collection of system services that make up the control plane of the cluster. Sometimes we call them Masters, Heads or Head nodes. The simplest setups run a single control plane node. However, this is only suitable for labs and test environments. For production environments, multiple control plane nodes configured for high availability (HA) is vital. Generally speaking, 3 or 5 is recommended for HA. It’s also considered a good practice not to run user applications on control plane nodes. This frees them up to concentrate entirely on managing the cluster. Let’s take a quick look at the different services making up the control plane.
  11. The API server is the Grand Central of Kubernetes. All communication, between all components, must go through the API server. We’ll get into the detail later, but it’s important to understand that internal system components, as well as external user components, all communicate via the API server – all roads lead to the API Server. It exposes a RESTful API that you POST YAML configuration files to over HTTPS. These YAML files, which we sometimes call manifests, describe the desired state of an application. This desired state includes things like which container image to use, which ports to expose, and how many Pod replicas to run. All requests to the API server are subject to authentication and authorization checks. Once these are done, the config in the YAML file is validated, persisted to the cluster store, and work is scheduled to the cluster.
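A minimal example of such a manifest, describing the desired state posted to the API server (a sketch; the name, image, and replica count are illustrative, reusing the luksa/kubia:1.0 image from the labs):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3                      # how many Pod replicas to run
  selector:
    matchLabels:
      app: kubia
  template:                        # Pod template: the desired state of each replica
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia:1.0     # which container image to use
        ports:
        - containerPort: 8080      # which port the container exposes
```

POSTing this manifest (e.g. with `kubectl apply -f deployment.yaml`) is how the desired state reaches the cluster store; the controllers then work to make the observed state match it.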
  12. The cluster store The cluster store is the only stateful part of the control plane and persistently stores the entire configuration and state of the cluster. As such, it’s a vital component of every Kubernetes cluster – no cluster store, no cluster. The cluster store is currently based on etcd, a popular distributed database. As it’s the single source of truth for a cluster, you should run between 3-5 etcd replicas for high-availability, and you should provide adequate ways to recover when things go wrong. A default installation of Kubernetes installs a replica of the cluster store on every control plane node and automatically configures HA. On the topic of availability, etcd prefers consistency over availability. This means it doesn’t tolerate split-brains and will halt updates to the cluster in order to maintain consistency. However, if this happens, user applications should continue to work, you just won’t be able to update the cluster config. As with all distributed databases, consistency of writes to the database is vital. For example, multiple writes to the same value originating from different places need to be handled. etcd uses the popular RAFT consensus algorithm to accomplish this.
  13. The controller manager and controllers The controller manager implements all the background controllers that monitor cluster components and respond to events. Architecturally, it’s a controller of controllers, meaning it spawns all the independent controllers and monitors them. Some of the controllers include the Deployment controller, the StatefulSet controller, and the ReplicaSet controller. Each one is responsible for a small subset of cluster intelligence and runs as a background watch-loop constantly watching the API Server for changes. The aim of the game is to ensure the observed state of the cluster matches the desired state (more on this shortly). The logic implemented by each controller is as follows, and is at the heart of Kubernetes and declarative design patterns. 1. Obtain desired state 2. Observe current state 3. Determine differences 4. Reconcile differences Each controller is also extremely specialized and only interested in its own little corner of the Kubernetes cluster. No attempt is made to over-complicate design by implementing awareness of other parts of the system – each controller takes care of its own business and leaves everything else alone. This is key to the distributed design of Kubernetes and adheres to the Unix philosophy of building complex systems from small specialized parts. Terminology: Throughout the book we’ll use terms like controller, control loop, watch loop, and reconciliation loop to mean the same thing.
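The watch-loop logic above can be sketched in a few lines of Python. This is a hypothetical illustration of the reconciliation pattern, not code from any real Kubernetes controller:

```python
# Hypothetical sketch of a controller's reconciliation loop;
# real controllers watch the API server rather than in-memory fields.

class ReplicaController:
    """Drives observed state toward desired state, one step at a time."""

    def __init__(self, desired_replicas: int):
        self.desired = desired_replicas   # 1. desired state (from the API server)
        self.observed = 0                 # 2. current state (from watching the cluster)

    def step(self) -> int:
        # 3. Determine differences between desired and observed state
        diff = self.desired - self.observed
        # 4. Reconcile: create (or remove) replicas to close the gap
        self.observed += diff
        return diff

controller = ReplicaController(desired_replicas=3)
print(controller.step())  # 3 -> three replicas to create
print(controller.step())  # 0 -> observed matches desired, nothing to do
```

Each controller runs this loop forever over its own little corner of the cluster, which is why the observed state converges on the desired state without any central coordinator.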
  14. At a high level, the scheduler watches the API server for new work tasks and assigns them to appropriate healthy worker nodes. Behind the scenes, it implements complex logic that filters out nodes incapable of running tasks, and then ranks the nodes that are capable. The ranking system is complex, but the node with the highest ranking score is selected to run the task. When identifying nodes capable of running a task, the scheduler performs various predicate checks. These include is the node tainted, are there any affinity or anti-affinity rules, is the required network port available on the node, does it have sufficient available resources etc. Any node incapable of running the task is ignored, and those remaining are ranked according to things such as does it already have the required image, how much free resource does it have, how many tasks is it currently running. Each is worth points, and the node with the most points is selected to run the task. If the scheduler doesn’t find a suitable node, the task isn’t scheduled and gets marked as pending. The scheduler isn’t responsible for running tasks, just picking the nodes to run them. A task is normally a Pod/container. You’ll learn about Pods and containers in later chapters.
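This filter-then-score flow can be illustrated with a toy sketch. The node fields and point values below are invented for illustration; the real scheduler's predicate and priority functions are far richer:

```python
# Toy sketch of two-phase scheduling: filter out incapable nodes, then
# score the remainder and pick the highest. Fields are hypothetical.

def filter_nodes(nodes, required_cpu):
    """Predicate phase: drop nodes without enough free CPU."""
    return [n for n in nodes if n["free_cpu"] >= required_cpu]

def score(node):
    """Priority phase: favour free resources and an already-pulled image."""
    return node["free_cpu"] + (10 if node["has_image"] else 0)

def schedule(nodes, required_cpu):
    candidates = filter_nodes(nodes, required_cpu)
    if not candidates:
        return None  # no suitable node: the task stays Pending
    return max(candidates, key=score)["name"]

nodes = [
    {"name": "node1", "free_cpu": 2, "has_image": False},
    {"name": "node2", "free_cpu": 1, "has_image": True},
    {"name": "node3", "free_cpu": 4, "has_image": True},
]
print(schedule(nodes, required_cpu=2))  # node3: most free CPU and image cached
```

Note the separation of concerns: `schedule` only picks a node; actually running the task is the kubelet's job, exactly as the note above describes.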
  16. The kubelet is the main Kubernetes agent and runs on every cluster node. In fact, it’s common to use the terms node and kubelet interchangeably. When you join a node to a cluster, the process installs the kubelet, which is then responsible for registering it with the cluster. This process registers the node’s CPU, memory, and storage into the wider cluster pool. One of the main jobs of the kubelet is to watch the API server for new work tasks. Any time it sees one, it executes the task and maintains a reporting channel back to the control plane. If a kubelet can’t run a task, it reports back to the control plane and lets the control plane decide what actions to take. For example, if a kubelet cannot execute a task, it is not responsible for finding another node to run it on. It simply reports back to the control plane and the control plane decides what to do.
  17. DEFINING YOUR APPLICATION Everything in Kubernetes is represented by an object. You create and retrieve these objects via the Kubernetes API. Your application consists of several types of these objects - one type represents the application deployment as a whole, another represents a running instance of your application, another represents the service provided by a set of these instances and allows reaching them at a single IP address, and there are many others. All these types are explained in detail in the second part of the book. At the moment, it’s enough to know that you define your application through several types of objects. These objects are usually defined in one or more manifest files in either YAML or JSON format.
  18. As illustrated in figure 3.8, you can think of each pod as a separate logical computer that contains one application. The application can consist of a single process running in a container, or a main application process and additional supporting processes, each running in a separate container. Pods are distributed across all the worker nodes of the cluster. Each pod has its own IP, hostname, processes, network interfaces and other resources. Containers that are part of the same pod think that they’re the only ones running on the computer. They don’t see the processes of any other pod, even if located on the same node.
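A sketch of a Pod manifest matching this description, with a main application container plus a supporting container in the same pod (the names, images, and the log-forwarder role are hypothetical illustrations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar      # hypothetical name for illustration
spec:
  containers:
  - name: main-app            # the main application process
    image: luksa/kubia:1.0
    ports:
    - containerPort: 8080
  - name: log-forwarder       # supporting process in its own container
    image: busybox            # placeholder image standing in for a real log agent
    command: ["sh", "-c", "tail -f /dev/null"]
```

Both containers share the pod's IP, hostname, and network interfaces, so they behave like two processes on the same logical computer, exactly as figure 3.8 depicts.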
  20. Service objects and stable networking You’ve just learned that Pods are mortal and can die. However, if they’re managed via higher level controllers, they get replaced when they fail. But replacements come with totally different IP addresses. This also happens with rollouts and scaling operations. Rollouts replace old Pods with new ones with new IPs. Scaling up adds new Pods with new IP addresses, whereas scaling down takes existing Pods away. Events like these cause a lot of IP churn. The point we’re making is that Pods are unreliable, and this poses challenges… Assume you’ve got a microservices app with a bunch of Pods performing video rendering. How will this work if other parts of the app that use the rendering service can’t rely on rendering Pods being there when needed? This is where Services come into play. They provide reliable networking for a set of Pods. Figure 2.11 shows the uploader microservice talking to the renderer microservice via a Kubernetes Service object. The Service (capital “S” because it’s a Kubernetes API object) is providing a reliable name and IP. It’s also load-balancing requests to the two renderer Pods behind it. Digging into a bit more detail. Services are fully-fledged objects in the Kubernetes API – just like Pods and Deployments. They have a front-end consisting of a stable DNS name, IP address, and port. On the back-end, they load-balance traffic across a dynamic set of Pods. As Pods come and go, the Service observes this, automatically updates itself, and continues to provide that stable networking endpoint. The same applies if you scale the number of Pods up or down. New Pods are seamlessly added to the Service and will receive traffic. Terminated Pods are seamlessly removed from the Service and will not receive traffic. That’s the job of a Service – it’s a stable network abstraction point that provides TCP and UDP load-balancing across a dynamic set of Pods.
As they operate at the TCP and UDP layer, they don’t possess application intelligence. This means they cannot provide application-layer host and path routing. For that, you need an Ingress, which understands HTTP and provides host and path-based routing. That’s the basics. Services bring stable IP addresses and DNS names to the unstable world of Pods.
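For host- and path-based HTTP routing, an Ingress manifest might look like the following sketch. The host, Ingress name, and Service port are hypothetical; the `renderer` Service name echoes the rendering example above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: renderer-ingress        # hypothetical name
spec:
  rules:
  - host: render.example.com    # host-based routing: HTTP Host header
    http:
      paths:
      - path: /                 # path-based routing below the host
        pathType: Prefix
        backend:
          service:
            name: renderer      # the Service fronting the renderer Pods
            port:
              number: 8080
```

The Ingress routes HTTP traffic to the Service, and the Service in turn load-balances across whichever Pods currently back it.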
  22. Interacting with Kubernetes You’ve now learned about several possible methods to deploy a Kubernetes cluster. Now’s the time to learn how to use the cluster. To interact with Kubernetes, you use a command-line tool called kubectl, pronounced kube-control, kube-C-T-L or kube-cuddle. As the next figure shows, the tool communicates with the Kubernetes API server, which is part of the Kubernetes Control Plane. The control plane then triggers the other components to do whatever needs to be done based on the changes you made via the API. curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/ To download the latest version, first go to https://storage.googleapis.com/kubernetes-release/release/stable.txt to see what the latest stable version is and then replace the version number in the first URL with this version. To check if you’ve installed it correctly, run kubectl --help. SETTING UP A SHORT ALIAS FOR KUBECTL You’ll use kubectl often. Having to type the full command every time is needlessly time-consuming, but you can speed things up by setting up an alias and tab completion for it. Most users of Kubernetes use k as the alias for kubectl. If you haven’t used aliases yet, here’s how to define it in Linux and macOS. Add the following line to your ~/.bashrc or equivalent file: alias k=kubectl CONFIGURING TAB COMPLETION FOR KUBECTL Even with a short alias like k, you’ll still have to type a lot. Fortunately, the kubectl command can also output shell completion code for both the bash and the zsh shell. It enables tab completion of not only command names but also the object names. 
For example, later you’ll learn how to view details of a particular cluster node by executing the following command: $ kubectl describe node gke-kubia-default-pool-9bba9b18-4glf That’s a lot of typing that you’ll repeat all the time. With tab completion, things are much easier. You just press TAB after typing the first few characters of each token: $ kubectl desc<TAB> no<TAB> gke-ku<TAB> To enable tab completion in bash, you must first install a package called bash-completion and then run the following command (you can also add it to ~/.bashrc or equivalent): $ source <(kubectl completion bash) But there’s one caveat. This will only complete your commands when you use the full kubectl command name. It won’t work when you use the k alias. To make it work with the alias, you must transform the output of the kubectl completion command using the sed tool: $ source <(kubectl completion bash | sed s/kubectl/k/g)
  23. Deploying your application The imperative way to deploy an application is to use the kubectl create deployment command. As the command itself suggests, it creates a Deployment object, which represents an application deployed in the cluster. By using the imperative command, you avoid needing to know the structure of Deployment objects, as you would when writing YAML or JSON manifests. CREATING A DEPLOYMENT In the previous chapter, you created a Node.js application that you packaged into a container image and pushed to Docker Hub to make it easily distributable to any computer. Let’s deploy that application to your Kubernetes cluster. Here’s the command you need to execute: $ kubectl create deployment kubia --image=luksa/kubia:1.0 deployment.apps/kubia created You’ve specified three things here: • You want to create a deployment object. • You want the object to be called kubia. • You want the deployment to use the container image luksa/kubia:1.0. By default, the image is pulled from Docker Hub, but you can also specify the image registry in the image name. The Deployment object is now stored in the Kubernetes API. The existence of this object tells Kubernetes that the luksa/kubia:1.0 container must run in your cluster. You’ve stated your desired state. Kubernetes must now ensure that the actual state reflects your wishes. LISTING DEPLOYMENTS The interaction with Kubernetes consists mainly of the creation and manipulation of objects via its API. Kubernetes stores these objects and then performs operations to bring them to life. For example, when you create a Deployment object, Kubernetes runs an application. Kubernetes then keeps you informed about the current state of the application by writing the status to the same Deployment object. You can view the status by reading back the object. 
One way to do this is to list all Deployment objects as follows: $ kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE kubia 0/1 1 0 6s The kubectl get deployments command lists all Deployment objects that currently exist in the cluster. You have only one Deployment in your cluster.
  25. Exposing your application to the world Your application is now running, so the next question to answer is how to access it. I mentioned that each pod gets its own IP address, but this address is internal to the cluster and not accessible from the outside. To make the pod accessible externally, you’ll expose it by creating a Service object. Several types of Service objects exist. You decide what type you need. Some expose pods only within the cluster, while others expose them externally. A service with the type LoadBalancer provisions an external load balancer, which makes the service accessible via a public IP. This is the type of service you’ll create now. CREATING A SERVICE The easiest way to create the service is to use the following imperative command: $ kubectl expose deployment kubia --type=LoadBalancer --port 8080 service/kubia exposed The create deployment command that you ran previously created a Deployment object, whereas the expose deployment command creates a Service object. This is what running the above command tells Kubernetes: • You want to expose all pods that belong to the kubia Deployment as a new service. • You want the pods to be accessible from outside the cluster via a load balancer. • The application listens on port 8080, so you want to access it via that port. You didn’t specify a name for the Service object, so it inherits the name of the Deployment. LISTING SERVICES Services are API objects, just like Pods, Deployments, Nodes and virtually everything else in Kubernetes, so you can list them by executing kubectl get services, as in the next listing. The list shows two services with their types, IPs and the ports they expose. Ignore the kubernetes service for now and take a close look at the kubia service. It doesn’t yet have an external IP address. Whether it gets one depends on how you’ve deployed the cluster.