Professor | QA Architect | Technology Evangelist | Technical Manager | Agile Practitioner | Blogger | Kathakali Dancer | Classical Music Composer at Amrita Vishwa Vidyapeetham
Containerization Principles Overview for app development and deployment
This is the slide deck from a recent workshop conducted as part of IEEE INDICON 2018 on containerization principles for next-generation application development and deployment.
1. WS01: Containerization For Next Generation
Application Development And Deployment
INDICON 2018 Workshop
Dr Ganesh Neelakanta Iyer
Amrita Vishwa Vidyapeetham, Coimbatore
Associate Professor, Dept of Computer Science and Engg
https://amrita.edu/faculty/ni-ganesh
http://ganeshniyer.com
2. About Me • Associate Professor, Amrita Vishwa Vidyapeetham
• Masters & PhD from National University of Singapore (NUS)
• Several years in Industry/Academia
• Sasken Communications, NXP Semiconductors, Progress
Software, IIIT-HYD, NUS (Singapore)
• Architect, Manager, Technology Evangelist, Visiting Faculty
• Talks/workshops in USA, Europe, Australia, Asia
• Cloud/Edge Computing, IoT, Game Theory, Software QA
• Kathakali Artist, Composer, Speaker, Traveler, Photographer
GANESHNIYER http://ganeshniyer.com
5. A Lot of Servers/Machines...
• Web server
• Mail server
• Database server
• File server
• Proxy server
• Application server
• …and many others
6. A Lot of Servers/Machines...
• The data-centre is FULL
– Full of under-utilized servers
– Complicated to manage
• Power consumption
– Greater wattage per unit area than ever
– Electricity overloaded
– Cooling at capacity
• Environmental problem
– Green IT
7. Recent Advances
• Multi-core: how to fully harness the power of multi-core?
Intel has been trying really hard to make us all program
for multi-core!!!
• Large scale data: How to manage large scale data?
Google, Yahoo, NSF and CRA have been promoting their
file systems and MapReduce!!
• Parallel processing of data: How to configure clusters to
process data in parallel?
• Answer: Virtualization?
8. Virtualization
• Virtualization -- the abstraction of computer resources.
• Virtualization hides the physical characteristics of computing resources from
their users, be they applications or end users.
• This includes making a single physical resource (such as a server, an
operating system, an application, or storage device) appear to function as
multiple virtual resources; it can also include making multiple physical
resources (such as storage devices or servers) appear as a single virtual
resource.
9. The Use of Computers
[Diagram: applications run on an operating system, which runs on the hardware]
11. Traditional vs Virtual Architecture
Access to the virtual machine and the host machine or server is facilitated by software known as a hypervisor. The hypervisor acts as a link between the hardware and the virtual environment and distributes hardware resources, such as CPU usage and memory allotment, between the different virtual environments.
12. Virtualization -- a Server for Multiple Applications/OS
[Diagram: on the left, a traditional stack of hardware, one operating system, and its applications; on the right, the same hardware running a hypervisor that hosts several operating systems, each with its own applications]
Hypervisor is a software program that manages multiple operating systems (or multiple instances of the same
operating system) on a single computer system.
The hypervisor manages the system's processor, memory, and other resources to allocate what each operating
system requires.
Hypervisors are designed for a particular processor architecture and may also be called virtualization managers.
13. Server without virtualization
[Diagram: one operating system and multiple software applications running directly on the hardware (CPU, memory, NIC, disk)]
• Only one OS can run at a time within a server.
• Under-utilization of resources.
• Inflexible and costly infrastructure.
• Hardware changes require manual effort and access to the physical server.
14. Server with virtualization
[Diagram: a hypervisor running on the hardware (CPU, memory, NIC, disk) hosting two virtual servers, each with its own operating system and multiple software applications]
• Can run multiple OSes simultaneously.
• Each OS can have a different hardware configuration.
• Efficient utilization of hardware resources.
• Each virtual machine is independent.
• Saves electricity, initial cost to buy servers, space, etc.
• Easy to manage and monitor virtual machines centrally.
22. and spawned an Intermodal Shipping Container Ecosystem
• 90% of all cargo now shipped in a standard container
• Order of magnitude reduction in cost and time to load and unload ships
• Massive reduction in losses due to theft or damage
• Huge reduction in freight cost as a percent of final goods (from >25% to <3%), enabling massive globalization
• 5000 ships deliver 200M containers per year
23. What is Docker?
Docker containers wrap a
piece of software in a
complete filesystem that
contains everything needed to
run: code, runtime, system
tools, system libraries –
anything that can be installed
on a server. This guarantees
that the software will always
run the same, regardless of
its environment.
[www.docker.com]
24. What is Docker?
• Developers use Docker to eliminate “works on my machine” problems
when collaborating on code with co-workers.
• Operators use Docker to run and manage apps side-by-side in
isolated containers to get better compute density.
• Enterprises use Docker to build agile software delivery pipelines to
ship new features faster, more securely and with confidence for both
Linux and Windows Server apps.
[www.docker.com]
25. The Challenge
Multiplicity of stacks: static website (nginx 1.5 + modsecurity + openssl + bootstrap 2), web frontend (Ruby + Rails + sass + Unicorn), user DB (postgresql + pgv8 + v8), queue (Redis + redis-sentinel), analytics DB (hadoop + hive + thrift + OpenJDK), background workers (Python 3.0 + celery + pyredis + libcurl + ffmpeg + libopencv + nodejs + phantomjs), API endpoint (Python 2.7 + Flask + pyredis + celery + psycopg + postgresql-client).
Multiplicity of hardware environments: development VM, QA server, public cloud, disaster recovery, contributor's laptop, production servers, production cluster, customer data center.
Do services and apps interact appropriately? Can I migrate smoothly and quickly?
26. Results in an M x N compatibility nightmare
Every stack component (static website, web frontend, background workers, user DB, analytics DB, queue) has to be made to work on every target environment (development VM, QA server, single prod server, onsite cluster, public cloud, contributor's laptop, customer servers): an M x N matrix of question marks.
27. Docker is a shipping container system for code
An engine that enables any payload to be encapsulated as a lightweight, portable, self-sufficient container... that can be manipulated using standard operations and run consistently on virtually any hardware platform.
The same multiplicity of stacks (static website, web frontend, user DB, queue, analytics DB) meets the same multiplicity of hardware environments (development VM, QA server, public cloud, contributor's laptop, production cluster, customer data center): do services and apps interact appropriately? Can I migrate smoothly and quickly?
28. Or... put more simply
Developer: Build Once, Run Anywhere (Finally)
Operator: Configure Once, Run Anything
29. Docker solves the M x N problem
Each stack component (static website, web frontend, background workers, user DB, analytics DB, queue) is packaged once as a container and runs unchanged on every environment (development VM, QA server, single prod server, onsite cluster, public cloud, contributor's laptop, customer servers).
30. Docker containers
• Wrap up a piece of software in a
complete file system that contains
everything it needs to run:
– Code, runtime, system tools, system
libraries
– Anything you can install on a server
• This guarantees that it will always
run the same, regardless of the
environment it is running in
31. Why containers matter
Content agnostic. Physical containers: the same container can hold almost any type of cargo. Docker: can encapsulate any payload and its dependencies.
Hardware agnostic. Physical containers: a standard shape and interface allow the same container to move from ship to train to semi-truck to warehouse to crane without being modified or opened. Docker: using operating system primitives (e.g. LXC), containers can run consistently on virtually any hardware (VMs, bare metal, OpenStack, public IaaS, etc.) without modification.
Content isolation and interaction. Physical containers: no worry about anvils crushing bananas; containers can be stacked and shipped together. Docker: resource, network, and content isolation; avoids dependency hell.
Automation. Physical containers: standard interfaces make it easy to automate loading, unloading, moving, etc. Docker: standard operations to run, start, stop, commit, search, etc.; perfect for devops: CI, CD, autoscaling, hybrid clouds.
Highly efficient. Physical containers: no opening or modification, quick to move between waypoints. Docker: lightweight, virtually no performance or start-up penalty, quick to move and manipulate.
Separation of duties. Physical containers: the shipper worries about the inside of the box, the carrier worries about the outside. Docker: the developer worries about code, ops worries about infrastructure.
32. Docker containers
Lightweight: Containers running on one machine all share the same OS kernel. They start instantly and make more efficient use of RAM. Images are constructed from layered file systems, so they can share common files, making disk usage and image downloads much more efficient.
Open: Based on open standards, allowing containers to run on all major Linux distributions and Microsoft operating systems, with support for every infrastructure.
Secure: Containers isolate applications from each other and from the underlying infrastructure, while providing an added layer of protection for the application.
33. Docker / Containers vs. Virtual Machine
https://www.docker.com/whatisdocker/
Containers have similar resource
isolation and allocation benefits as
VMs but a different architectural
approach allows them to be much
more portable and efficient
34. Docker / Containers vs. Virtual Machine
https://www.docker.com/whatisdocker/
• Each virtual machine includes the application, the necessary binaries and libraries, and an entire guest operating system, all of which may be tens of GBs in size
• Each container includes the application and all of its dependencies, but shares the kernel with other containers
• Containers run as isolated processes in userspace on the host operating system
• Docker containers run on any computer, on any infrastructure and in any cloud
35. Containers vs Virtual Machines
Virtual machines: VMs run guest operating systems (note the OS layer in each box). This is resource intensive, and the resulting disk image and application state is an entanglement of OS settings, system-installed dependencies, OS security patches, and other easy-to-lose, hard-to-replicate ephemera.
Containers: Containers can share a single kernel, and the only information that needs to be in a container image is the executable and its package dependencies, which never need to be installed on the host system. These processes run like native processes, and you can manage them individually.
36. Why are Docker containers lightweight?
VMs: Every app, every copy of an app, and every slight modification of the app requires a new virtual server (App + Bins/Libs + Guest OS).
Containers: The original app has no OS to take up space or resources, or to require a restart. A copy of the app needs no OS and can share bins/libs. For a modified app, the union file system allows us to save only the diffs between container A and container A'.
37. What are the basics of the Docker system?
[Diagram: a Dockerfile for app A, kept in a source code repository, is built by the Docker Engine on Host 1 (Linux) and run as Container A; the resulting container image is pushed to an image registry. The Docker Engine on Host 2 (Windows / Linux) can search and pull that image from the registry and run it alongside Containers B and C.]
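As a rough command-line illustration of the build, push, search, pull and run arrows in that diagram (the image name username/app-a is a placeholder, not from the slides):

# On Host 1: build an image from the Dockerfile and run it locally
docker build -t username/app-a .
docker run -d username/app-a
# Publish the image to a registry (Docker Hub by default)
docker push username/app-a
# On Host 2: find the image, fetch it, and run it as a container
docker search app-a
docker pull username/app-a
docker run -d username/app-a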
38. Changes and Updates
[Diagram: a host running container A (App A + Bins/Libs) wants to upgrade to A''. It requests the update and receives only the diffs (the App Δ / Bins Δ modification layers on top of the base container image) through the registry, rather than a whole new image; the host is then running A''.]
39. Easily Share and Collaborate on Applications
• Distribute and share content
– Store, distribute and manage your Docker images in your Docker
Hub with your team
– Image updates, changes and history are automatically shared
across your organization.
• Simply share your application with others
– Ship your containers to others without worrying about different
environment dependencies creating issues with your application.
– Other teams can easily link to or test against your app without
having to learn or worry about how it works.
Docker creates a common framework for developers and sysadmins to work together on distributed
applications
40. Get Started with Docker
• Install Docker
• Run a software image in a container
• Browse for an image on Docker Hub
• Create your own image and run it in a
container
• Create a Docker Hub account and an
image repository
• Create an image of your own
• Push your image to Docker Hub for
others to use
https://www.docker.com/products/docker
https://www.docker.com/products/docker-toolbox
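A minimal command-line sketch of that getting-started flow (the image and repository names are illustrative placeholders, not from the slides):

# Verify the installation
docker --version
# Run an existing software image from Docker Hub in a container
docker run hello-world
# Build your own image from a Dockerfile in the current directory and run it
docker build -t yourdockerid/myfirstapp .
docker run -d -p 4000:80 yourdockerid/myfirstapp
# Push the image to your Docker Hub repository for others to use
docker login
docker push yourdockerid/myfirstapp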
41. Docker Container as a Service (CaaS)
Deliver an IT-secured and managed application environment for developers to build and deploy applications in a self-service manner
44. Continuous Integration and Deployment (CI /
CD)
• The modern development pipeline is fast, continuous and automated
with the goal of more reliable software
• CI/CD allows teams to integrate new code as often as every time
code is checked in by developers and passes testing
• A cornerstone of devops methodology, CI/CD creates a real time
feedback loop with a constant stream of small iterative changes that
accelerates change and improves quality
• CI environments are often fully automated to trigger a test at git push
and to automatically build a new image if the test is successful and
push to a Docker Registry
• Further automation and scripting can deploy a container from the
new image to staging for further testing.
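As a hedged sketch of what such an automated pipeline might run on every git push (the registry address, image name, and test command are assumptions for illustration, not from the slides), the CI job essentially scripts the Docker CLI:

# Hypothetical CI job triggered by a git push
docker build -t registry.example.com/team/app:${GIT_COMMIT} .
# Run the test suite inside the freshly built image (test command is an assumption)
docker run --rm registry.example.com/team/app:${GIT_COMMIT} python -m pytest
# On success, publish the image so staging can deploy a container from it
docker push registry.example.com/team/app:${GIT_COMMIT}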
45. Microservices
• App architecture is changing from monolithic code bases with waterfall development
methodologies to loosely coupled services that are developed and deployed
independently
• Tens to thousands of these services can be connected to form an app
• Docker allows developers to choose the best tool or stack for each service and isolates them to eliminate any potential conflicts, avoiding the "matrix from hell"
• These containers can be easily shared, deployed, updated and scaled instantly and independently of the other services that make up the app
• Docker's end-to-end security features allow teams to build and operate a least-privilege microservices model where services only get access to the resources (other apps, secrets, compute) they need, at just the right time
46. IT Infrastructure optimization
• Docker and containers help optimize the utilization and cost
of your IT infrastructure
• Optimization is not just cost reduction; it is ensuring the right amount of resources are available at the right time and used efficiently
• Because containers are lightweight ways of packaging and
isolating app workloads, Docker allows multiple workloads
to run on the same physical or virtual server without conflict
• Businesses can consolidate datacenters, integrate IT from
mergers and acquisitions and enable portability to cloud
while reducing the footprint of operating systems and
servers to maintain
47. Hybrid Cloud
• Docker guarantees apps are cloud enabled - ready
to move across private and public clouds with a
higher level of control and guarantee apps will
operate as designed
• The Docker platform is infrastructure independent
and ensures everything the app needs to run is
packaged and transported together from one site to
another
• Docker uniquely provides flexibility and choice for
businesses to adopt a single, multi or hybrid cloud
environment without conflict
48. How does this help you build better software?
Accelerate Developer Onboarding
• Stop wasting hours trying to set up developer environments
• Spin up new instances and make copies of production code to run locally
• With Docker, you can easily take copies of your live environment and run them on any new endpoint running Docker
Empower Developer Creativity
• The isolation capabilities of Docker containers free developers from the worries of using "approved" language stacks and tooling
• Developers can use the best language and tools for their application service without worrying about causing conflict issues
Eliminate Environment Inconsistencies
• By packaging up the application with its configs and dependencies together and shipping it as a container, the application will always work as designed locally, on another machine, in test or in production
• No more worries about having to install the same configs into a different environment
51. Setting up
• Before we get started,
make sure your system
has the latest version of
Docker installed.
• Docker is available in
two editions
– Docker Enterprise and
– Docker Desktop
56. If your Windows is not on the latest version…
https://docs.docker.com/docker-for-windows/release-notes/#docker-community-edition-17062-ce-win27-2017-09-06-stable
57. Docker for Windows
When the whale in the status
bar stays steady, Docker is
up-and-running, and
accessible from any terminal
window.
59. Hello-world
• Open a command prompt / Windows PowerShell and run
docker run hello-world
▪ Now would also be a good time to make sure you are using
version 1.13 or higher. Run docker --version to check it out.
60. Building an app the Docker way
• In the past, if you were to start writing a Python app, your first order
of business was to install a Python runtime onto your machine
• But, that creates a situation where the environment on your machine
has to be just so in order for your app to run as expected; ditto for
the server that runs your app
• With Docker, you can just grab a portable Python runtime as an
image, no installation necessary
• Then, your build can include the base Python image right alongside
your app code, ensuring that your app, its dependencies, and the
runtime, all travel together
• These portable images are defined by something called a Dockerfile
61. Define a container with a Dockerfile
• The Dockerfile defines what goes on in the environment inside your container
• Access to resources like networking interfaces and disk
drives is virtualized inside this environment, which is
isolated from the rest of your system, so you have to map
ports to the outside world, and be specific about what files
you want to “copy in” to that environment
• However, after doing that, you can expect that the build of
your app defined in this Dockerfile will behave exactly
the same wherever it runs
(Hierarchy: Stack > Services > Container)
62. Dockerfile
• Create an empty directory
• Change directories (cd) into the new directory, create a
file called Dockerfile
63. Dockerfile
• In Windows, open Notepad, copy the content below, click Save As, and save the file as "Dockerfile"
This Dockerfile refers to a couple of files we
haven’t created yet, namely app.py and
requirements.txt. Let’s create those next.
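The Dockerfile content appears only as an image on the original slide; below is a sketch consistent with the app.py and requirements.txt files, the ADD and EXPOSE instructions mentioned on the next slide, and the official Docker get-started example this walkthrough follows (the exact base image tag is an assumption):

# Use an official Python runtime as a parent image (tag is an assumption)
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents (app.py, requirements.txt) into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Run app.py when the container launches
CMD ["python", "app.py"]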
64. The app itself
• Create two more files,
requirements.txt and app.py, and
put them in the same folder with the
Dockerfile
• This completes our app, which as you
can see is quite simple
• When the above Dockerfile is built
into an image, app.py and
requirements.txt will be present
because of that Dockerfile’s ADD
command, and the output from app.py
will be accessible over HTTP thanks to
the EXPOSE command.
65. The App itself
requirements.txt
app.py
That’s it! You don’t need Python
or anything in
requirements.txt on your
system, nor will building or
running this image install them
on your system. It doesn’t seem
like you’ve really set up an
environment with Python and
Flask, but you have.
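Both files are shown as images on the original slides; a sketch consistent with the behaviour described later (the "Hello World" text, the container ID used as the hostname, and a Redis error message when no Redis is running), modelled on the official get-started app:

requirements.txt:

Flask
Redis

app.py:

from flask import Flask
from redis import Redis, RedisError
import socket

# Connect to Redis (none is running yet, so the visit counter will show an error)
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello World!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)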
66. Building the app
• We are ready to build the app. Make sure you are still at the
top level of your new directory. Here’s what ls should show
• Now run the build command. This creates a Docker image,
which we’re going to tag using -t so it has a friendly name.
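The terminal screenshot on the slide is not reproduced in this export; in essence it shows the three files and the build command, with friendlyhello as the tag used on the following slides:

ls
# Dockerfile  app.py  requirements.txt
docker build -t friendlyhello .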
69. Run the app
• Run the app, mapping your machine's port 4000 to the container's published port 80 using -p
• docker run -p 4000:80 friendlyhello
• You should see a notice that Python is serving your app at http://0.0.0.0:80.
But that message is coming from inside the container, which doesn’t know you
mapped port 80 of that container to 4000, making the correct URL
http://localhost:4000
• Go to that URL in a web browser to see the display content served up on a web
page, including “Hello World” text, the container ID, and the Redis error message
71. End the process
• Hit CTRL+C in your terminal to quit
• Now use docker stop to end the process, using the
CONTAINER ID, like so
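The screenshot of this step is not included in the export; the commands amount to the following (the container ID is a placeholder; look yours up in the docker container ls output):

# List running containers and note the CONTAINER ID of friendlyhello
docker container ls
# Stop the container by its ID
docker stop <container-id>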
72. • Now let’s run the app in the background, in detached mode:
• docker run -d -p 4000:80 friendlyhello
• You get the long container ID for your app and then are kicked back
to your terminal. Your container is running in the background. You
can also see the abbreviated container ID with docker container ls
(and both work interchangeably when running commands):
• docker container ls
73. Share image
• To demonstrate the portability of what we just created, let’s
upload our built image and run it somewhere else
• After all, you’ll need to learn how to push to registries when you
want to deploy containers to production
• A registry is a collection of repositories, and a repository is a
collection of images—sort of like a GitHub repository, except the
code is already built. An account on a registry can create many
repositories. The docker CLI uses Docker’s public registry by
default
• If you don’t have a Docker account, sign up for one at
cloud.docker.com. Make note of your username.
77. Login with your docker id
• Log in to the Docker public registry on your local machine.
• docker login
78. Tag the image
• The notation for associating a local image with a repository on a
registry is username/repository:tag. The tag is optional, but
recommended, since it is the mechanism that registries use to give
Docker images a version. Give the repository and tag meaningful
names for the context, such as get-started:part1. This will put
the image in the get-started repository and tag it as part1.
• Now, put it all together to tag the image. Run docker tag image
with your username, repository, and tag names so that the image will
upload to your desired destination. The syntax of the command is:
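The command line itself is shown as an image on the slide; following the syntax described above, it would look like this (yourdockerid stands in for your own Docker ID):

docker tag image username/repository:tag
# for example, using the names suggested above:
docker tag friendlyhello yourdockerid/get-started:part1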
80. Publish the image
• Upload your tagged image to the repository
• docker push username/repository:tag
• Once complete, the results of this upload are publicly available. If
you log in to Docker Hub, you will see the new image there, with its
pull command
83. Pull and run the image from the remote
repository
• From now on, you can use docker run and run your app on any
machine with this command:
• docker run -p 4000:80 username/repository:tag
• If the image isn’t available locally on the machine, Docker will pull it
from the repository.
• If you don’t specify the :tag portion of these commands, the tag of
:latest will be assumed, both when you build and when you run
images. Docker will use the last version of the image that ran without
a tag specified (not necessarily the most recent image).
No matter where docker run executes, it pulls your image, along with Python and all the dependencies from requirements.txt, and runs your code. It all travels together in a neat little package, and the host machine doesn't have to install anything but Docker to run it.
84. What have you seen so far?
• Basics of Docker
• How to create your first app in the Docker way
• Building the app
• Run the app
• Sharing and Publishing images
• Pull and run images
86. Services
• We can scale our application and enable load-balancing
• To do this, we must go one level up in the hierarchy of a
distributed application: the service.
• In a distributed application, different pieces of the app are
called “services.”
• For example, if you imagine a video sharing site, it probably
includes a service for storing application data in a database, a
service for video transcoding in the background after a user
uploads something, a service for the front-end, and so on
(Hierarchy: Stack > Services > Container)
87. Services
• It’s very easy to define, run, and scale services with the
Docker platform
• just write a docker-compose.yml file
• This helps you define how your app should run in production
by turning it into a service, scaling it up in the process
• You can deploy this application onto a cluster, running it on
multiple machines
• Multi-container, multi-machine applications are made possible
by joining multiple machines into a “Dockerized” cluster called
a swarm.
(Hierarchy: Stack > Services > Container)
88. docker-compose.yml
• Pull the image we uploaded before from the registry.
• Run 5 instances of that image as a service called
web, limiting each one to use, at most, 10% of the
CPU (across all cores), and 50MB of RAM.
• Immediately restart containers if one fails.
• Map port 80 on the host to web’s port 80.
• Instruct web’s containers to share port 80 via a load-
balanced network called webnet. (Internally, the
containers themselves will publish to web’s port 80 at
an ephemeral port.)
• Define the webnet network with the default settings
(which is a load-balanced overlay network).
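The YAML file itself appears as an image on the slide; a docker-compose.yml sketch that matches those bullets, in the Compose version 3 format (username/repository:tag is a placeholder for the image pushed earlier), would be:

version: "3"
services:
  web:
    # replace username/repository:tag with the image you pushed earlier
    image: username/repository:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet: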
89. Stacks
• A stack is a group of interrelated services that share
dependencies, and can be orchestrated and scaled
together
• A single stack is capable of defining and coordinating the
functionality of an entire application
• Though very complex applications may want to use
multiple stacks
(Hierarchy: Stack > Services > Container)
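To actually deploy and scale such a stack on a swarm, the standard commands look like this (getstartedlab is just an example stack name, not from the slides):

# Turn this machine into a single-node swarm manager
docker swarm init
# Deploy the stack defined in docker-compose.yml
docker stack deploy -c docker-compose.yml getstartedlab
# List the services (and their replica counts) running in the stack
docker service ls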
90. Summary
• We have seen very basic hands on experience of
– How to build an app in the Docker way
– Push your app to a registry
– Pull existing apps from a registry
• Overview of scaling apps
• Useful reference: https://docs.docker.com/get-started/
92. Run a static website in a container
• First, we'll use Docker to run a static website in a
container
• The website is based on an existing image
• We'll pull a Docker image from Docker Store, run the
container, and see how easy it is to set up a web server
docker run -d dockersamples/static-site
93. Run a static website in a container
• What happens when you run this command?
• Since the image doesn't exist on your Docker host, the
Docker daemon first fetches it from the registry and then runs
it as a container.
• Now that the server is running, do you see the website? What
port is it running on? And more importantly, how do you
access the container directly from our host machine?
• Actually, you probably won't be able to answer any of these
questions yet! ☺ In this case, the client didn't tell the Docker
Engine to publish any of the ports, so you need to re-run the
docker run command to add this instruction.
94. • Let's re-run the command with some new flags to publish ports and pass
your name to the container to customize the message displayed.
• First, stop the container that you have just launched. In order to do this,
we need the container ID
• Run docker ps to view the running containers
• Check out the CONTAINER ID column. You will need to use this
CONTAINER ID value, a long sequence of characters, to identify the
container you want to stop, and then to remove it.
docker stop e666eace16b1
docker rm e666eace16b1
95. • docker run --name static-site -e AUTHOR="Your
Name" -d -P dockersamples/static-site
• -d will create a container with the process detached from our
terminal
• -P will publish all the exposed container ports to random ports on
the Docker host
• -e is how you pass environment variables to the container
• --name allows you to specify a container name
• AUTHOR is the environment variable name and Your Name is the
value that you can pass
96. • Now you can see the ports by running the command
– docker port static-site
• You can now open
http://localhost:[YOUR_PORT_FOR 80/tcp] to
see your site live!
97. • Let's stop and remove the containers since you won't be
using them anymore
• docker rm -f static-site is a shortcut that stops and removes the container in one step
99. The Need for Orchestration Systems
• While Docker provided an open standard for packaging
and distributing containerized applications, there arose a
new problem
• How would all of these containers be coordinated and
scheduled?
• How do all the different containers in your application
communicate with each other?
• How can container instances be scaled?
100. The Need for Orchestration Systems
• Solutions for orchestrating
containers soon emerged.
• Kubernetes, Mesos, and
Docker Swarm are some of
the more popular options for
providing an abstraction to
make a cluster of machines
behave like one big
machine, which is vital in a
large-scale environment
101. What is Kubernetes
• Kubernetes is the container orchestrator that was developed at Google and is now open source
• It has the advantage of leveraging Google’s years of
expertise in container management
• It is a comprehensive system for automating deployment,
scheduling and scaling of containerized applications, and
supports many containerization tools such as Docker
102. How it helps?
• Running containers across many different machines
• Scaling up or down by adding or removing containers
when demand changes
• Keeping storage consistent with multiple instances of an
application
• Distributing load between the containers
• Launching new containers on different machines if
something fails
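As a hedged illustration of what these operations look like with one such orchestrator, Kubernetes' kubectl CLI (the deployment name, image, and replica count are placeholders, not taken from the workshop):

# Create a deployment that runs your container image on the cluster
kubectl create deployment friendlyhello --image=username/repository:tag
# Scale up or down as demand changes
kubectl scale deployment friendlyhello --replicas=5
# Expose the deployment behind a load-balanced service on port 80
kubectl expose deployment friendlyhello --type=LoadBalancer --port=80
# See which machines the containers (pods) landed on
kubectl get pods -o wide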