1. © 2016 Newt Global | www.NewtGlobal.com | Confidential
Newt Global Services and Offerings
2. Docker 101 Tri Series Webinar
By:
Venkatnadhan Thirunalai
Jai Karthik
3. About Newt Global
Founded in 2004; HQ in Dallas, TX, with a presence in multiple locations across the USA and India
Leader in DevOps transformation, cloud enablement, and test automation
Named one of the 100 fastest-growing companies in Dallas twice in a row
Clientele includes Fortune 50 companies
4. Speakers
• Venkat is the DevOps Practice Leader at Newt Global. His areas of expertise include DevOps practice and consulting with Fortune 100 customers on DevOps IT strategy. He is responsible for building the global pre-sales, consulting, and delivery teams for Newt Global
• He has 16+ years of IT industry experience and has delivered multiple enterprise-scale projects for a Fortune 500 customer base
Venkatnadhan Thirunalai
DevOps Practice Leader
Newt Global
• AWS solution specialist and DevOps strategist. His areas of expertise include AWS infrastructure management and architectural design, Docker container management solutions, DevOps automation strategy, Ansible scripting for automation, and Jenkins design practices. Responsible for AWS management, Docker management, and DevOps automation work with Jenkins and Ansible
• 6+ years of IT industry experience; has worked on 24 projects with smooth deliverables for international and national enterprise clients
Jayakarthi Dhanabalan
AWS Solution Specialist
Newt Global
5. Housekeeping Instructions
• All phones are set to mute. If you have any questions, please type them in the Chat window located beside the presentation panel
• We have already received several questions from registrants, which the speakers will answer during the Q&A session
• We will continue to collect questions during the session and will try to answer them today
• If you do not receive an answer to your question today, you will receive one via email shortly
• Thanks for your participation, and enjoy the session!
6. Contents
Introduction to Docker, Containers, and the Matrix from Hell
Why people care: Separation of Concerns
Technical Discussion
Ecosystem
Docker Basics
Dockerfile
Docker Compose
Docker Swarm
7. The Challenge: Multiplicity of Stacks × Multiplicity of Hardware Environments
(Diagram reconstructed as text.) On one axis, the application stacks:
• Static website: nginx 1.5 + modsecurity + openssl + bootstrap 2
• Web frontend: Ruby + Rails + sass + Unicorn
• User DB: postgresql + pgv8 + v8
• Queue: Redis + redis-sentinel
• Analytics DB: hadoop + hive + thrift + OpenJDK
• Background workers: Python 3.0 + celery + pyredis + libcurl + ffmpeg + libopencv + nodejs + phantomjs
• API endpoint: Python 2.7 + Flask + pyredis + celery + psycopg + postgresql-client
On the other axis, the hardware environments: Development VM, QA server, Public Cloud, Disaster recovery, Contributor's laptop, Production Servers, Production Cluster, Customer Data Center.
The questions: Do services and apps interact appropriately? Can I migrate smoothly and quickly?
8. The Matrix From Hell
Rows (stacks): Static website, Web frontend, Background workers, User DB, Analytics DB, Queue.
Columns (environments): Development VM, QA Server, Single Prod Server, Onsite Cluster, Public Cloud, Contributor's laptop, Customer Servers.
Every cell of this 6 × 7 matrix is a question mark: will this stack run correctly, and the same way, in this environment?
12. Docker is a shipping container system for code
(Same stacks-versus-environments diagram as before, with Docker sitting between the multiplicity of stacks (static website, web frontend, user DB, queue, analytics DB) and the multiplicity of hardware environments (development VM, QA server, public cloud, contributor's laptop, production cluster, customer data center). The same questions apply: do services and apps interact appropriately? Can I migrate smoothly and quickly?)
An engine that enables any payload to be encapsulated as a lightweight, portable, self-sufficient container… that can be manipulated using standard operations and run consistently on virtually any hardware platform.
13. Docker eliminates the Matrix from Hell
The same stacks (static website, web frontend, background workers, user DB, analytics DB, queue) now run unchanged across all of the same environments: development VM, QA server, single prod server, onsite cluster, public cloud, contributor's laptop, customer servers.
14. Why Developers Care
Build once… (finally) run anywhere
• A clean, safe, hygienic, and portable runtime environment for your app
• No worries about missing dependencies, packages, and other pain points during subsequent deployments
• Run each app in its own isolated container, so you can run different versions of libraries and other dependencies for each app without worrying
• Automate testing, integration, packaging… anything you can script
• Reduce or eliminate concerns about compatibility on different platforms, whether your own or your customers'
• Cheap, zero-penalty containers to deploy services? A VM without the overhead of a VM? Instant replay and reset of image snapshots? That's the power of Docker
15. Why DevOps Cares
Configure once… run anything
• Makes the entire lifecycle more efficient, consistent, and repeatable
• Increases the quality of code produced by developers
• Eliminates inconsistencies between development, test, production, and customer environments
• Supports segregation of duties
• Significantly improves the speed and reliability of continuous deployment and continuous integration systems
• Because containers are so lightweight, addresses significant performance, cost, deployment, and portability issues normally associated with VMs
16. Why it works: separation of concerns
• Dan the Developer
• Worries about what's "inside" the container
• His code
• His libraries
• His package manager
• His apps
• His data
• All Linux servers look the same
• Oscar the Ops Guy
• Worries about what's "outside" the container
• Logging
• Remote access
• Monitoring
• Network config
• All containers start, stop, copy, attach, migrate, etc. the same way
17. More technical explanation
High level: it's a lightweight VM
• Own process space
• Own network interface
• Can run stuff as root
• Can have its own /sbin/init (different from the host)
• <<machine container>>
Low level: it's chroot on steroids
• Can also not have its own /sbin/init
• Container = isolated processes
• Shares the kernel with the host
• No device emulation (neither HVM nor PV) from the host
• <<application container>>
Run everywhere
• Regardless of kernel version
• Regardless of host distro
• Physical or virtual, cloud or not
• Container and host architecture must match*
Run anything
• If it can run on the host, it can run in the container
• i.e., if it can run on a Linux kernel, it can run
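A quick way to see the shared-kernel point in practice: a minimal sketch, assuming a Linux host with Docker installed and access to the public alpine image:

$ uname -r                          # kernel version on the host
$ docker run --rm alpine uname -r   # prints the same version: the container shares the host kernel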
18. Containers vs. VMs
(Diagram reconstructed as text.)
VMs (Type 2 hypervisor): Server → Host OS → Hypervisor → one Guest OS plus Bins/Libs per VM, each carrying a single app (App A, App A′, App B).
Containers: Server → Host OS → Docker → App A, App A′, App B, App B′…, sharing Bins/Libs where appropriate.
Containers are isolated, but share the OS and, where appropriate, bins/libraries…
…the result is significantly faster deployment, much less overhead, easier migration, and faster restart.
19. Why are Docker containers lightweight?
(Diagram reconstructed as text.)
VMs: every app, every copy of an app, and every slight modification of the app requires a new virtual server, each with its own Guest OS and Bins/Libs.
Containers:
• Original app: no OS to take up space or resources, or to require a restart
• Copy of app: no OS; can share bins/libs with the original
• Modified app (App Δ): copy-on-write capabilities allow us to save only the diffs between container A and container A′
20. What are the basics of the Docker system?
(Workflow diagram reconstructed as text.)
• A Dockerfile plus a source code repository feed a docker build on Host 1 OS (Linux), producing a Docker container image
• The image is pushed to an image registry
• Other hosts search the registry and pull images from it
• Host 2 OS (Linux) runs the image as containers (Container A, B, C…) via the Docker Engine
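The workflow above maps onto a handful of engine commands. A minimal sketch, with the image name myorg/myapp assumed purely for illustration:

$ docker build -t myorg/myapp .   # Host 1: build an image from the Dockerfile + source
$ docker push myorg/myapp         # push the image to a registry
$ docker pull myorg/myapp         # Host 2: pull the image down
$ docker run -d myorg/myapp       # run the image as a container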
21. Changes and Updates
(Diagram reconstructed as text.)
• App A is modified; the updated container image consists of the base image (bins/libs + App A) plus an App Δ layer, giving App A″
• The update is pushed through the Docker Engine to the image registry
• A host running A wants to upgrade to A″: it requests the update and gets only the diffs (container mod A′, container mod A″)
• The host is now running A″
22. Some Docker vocabulary
Docker Image
The basis of a Docker container; represents a full application
Docker Container
The standard unit in which the application service resides and executes
Docker Engine
Creates, ships, and runs Docker containers; deployable on a physical or virtual host, locally, in a datacenter, or at a cloud service provider
Registry Service (Docker Hub or Docker Trusted Registry)
Cloud- or server-based storage and distribution service for your images
24. Docker File System
• A logical file system built by grouping different file system primitives into branches (directories, file systems, subvolumes, snapshots)
• Each branch represents a layer in a Docker image
• Allows images to be constructed and deconstructed as needed, rather than shipped as one huge monolithic image (à la traditional virtual machines)
• When a container is started, a writable layer is added to the "top" of the file system
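One way to see these layers from the CLI, assuming the public nginx image as an example:

$ docker pull nginx
$ docker history nginx   # one row per layer, with the instruction that created it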
25. Copy on Write
• Super efficient:
• Sub-second instantiation times for containers
• A new container can take <1 MB of space
• A container appears to be a copy of the original image
• But it is really just a link to the original shared image
• If someone writes a change to the file system, a copy of the affected file/directory is "copied up"
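This is easy to observe from the CLI. A sketch, again assuming the public nginx image:

$ docker run -d --name web nginx
$ docker ps -s   # the SIZE column shows the small writable layer alongside the much larger shared ("virtual") image size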
26. What about data persistence?
• Volumes allow you to specify a directory in the container that exists outside of the Docker file system structure
• Can be used to share (and persist) data between containers
• The directory persists after the container is deleted
• Unless you explicitly delete it
• Can be created in a Dockerfile or via the CLI (see the sketch below)
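A minimal sketch of both creation paths, reusing the /var/logs example from the speaker notes (the volume name app-logs and the image name myorg/myapp are illustrative):

$ docker volume create app-logs                     # CLI: create a named volume
$ docker run -d -v app-logs:/var/logs myorg/myapp   # mount it at /var/logs in the container

Or, in a Dockerfile:

VOLUME /var/logs   # declare the mount point at build time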
28. Dockerfile – Linux Example
• Instructions on how to build a Docker image
• Looks very similar to "native" commands
• Important to optimize your Dockerfile (a sketch follows below)
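The slide's screenshot is not reproduced here; below is a sketch of the kind of Dockerfile the speaker notes walk through later (a Flask app on Alpine; the exact package and file names are assumptions). The line numbers in the comments match the line-by-line walkthrough in the notes:

FROM alpine:3.4                       # line 1: base the image on official Alpine Linux

RUN apk add --update py-pip           # line 5: install Python and pip
RUN pip install --upgrade pip         # line 7: make sure pip itself is current

COPY requirements.txt /usr/src/app/                               # line 11: copy the list of required libraries
RUN pip install --no-cache-dir -r /usr/src/app/requirements.txt   # line 12: install them

COPY app.py /usr/src/app/                           # line 15: copy the application code
COPY templates/index.html /usr/src/app/templates/   # line 16: copy the HTML template

EXPOSE 5000                             # line 19: the app communicates on port 5000

CMD ["python", "/usr/src/app/app.py"]   # line 22: start the app when the container starts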
29. Dockerfiles
• Dockerfiles = image representations
• Simple syntax for building images
• Automate and script image creation
30. FROM
• Sets the base image for subsequent instructions
• Usage: FROM <image>
• Example: FROM ubuntu
• Needs to be the first instruction of every Dockerfile
• TIP: find images with the command: docker search
31. RUN
• Executes any command on the current image and commits the result
• Usage: RUN <command>
• Example: RUN apt-get install -y memcached
FROM ubuntu
RUN apt-get install -y memcached
• Is equivalent to:
docker run ubuntu apt-get install -y memcached
docker commit XXX
32. docker build
• Creates an image from a Dockerfile
• From the current directory: docker build .
• From stdin: docker build - < Dockerfile
• From GitHub: docker build github.com/creack/docker-firefox
• TIP: use -t to tag your image
33. Compose
• One binary to start and manage multiple containers and volumes on a single Docker host
• Originated from Fig
• Moves your lengthy docker run commands into a YAML file (sketched below)
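A sketch of the YAML this implies, modeled on the WordPress + MySQL containers shown in the Use example below (the service layout and root password are assumptions; this uses the v1 file format current at Compose 1.1.0):

mysql:
  image: mysql:5.5
  environment:
    MYSQL_ROOT_PASSWORD: example
wordpress:
  image: wordpress
  links:
    - mysql
  ports:
    - "80:80"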
34. Installation
• Via pip
• Or just get the binary:
$ sudo curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose   # make the downloaded binary executable
35. Use
$ docker-compose up -d
Creating vagrant_mysql_1...
Creating vagrant_wordpress_1...
$ docker-compose ps
Name                 Command                        State  Ports
---------------------------------------------------------------------------
vagrant_mysql_1      /entrypoint.sh mysqld          Up     3306/tcp
vagrant_wordpress_1  /entrypoint.sh apache2-for...  Up     0.0.0.0:80->80/tcp
37. Machine
• One binary to create a remote Docker host and set up the TLS communication with your local Docker client
• Automates the TLS setup and the configuration of the local environment
• Can manage multiple machines in different clouds at the same time
39. Local Use
$ docker-machine ls
NAME  ACTIVE  DRIVER      STATE    URL                         SWARM
dev   *       virtualbox  Running  tcp://192.168.99.100:2376
$ docker-machine env dev
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/Users/sebastiengoasguen/.docker/machine/machines/dev
export DOCKER_HOST=tcp://192.168.99.100:2376
$ docker images
REPOSITORY  TAG     …  CREATED      VIRTUAL SIZE
wordpress   latest  …  2 weeks ago  451.4 MB
mysql       latest  …  2 weeks ago  282.8 MB
mysql       5.5     …  2 weeks ago  214.5 MB
40. Cloud Use
• Many drivers
• Many more waiting for merge (e.g., CloudStack)
$ ./docker-machine create -d digitalocean foobar
INFO[0000] Creating SSH key...
INFO[0001] Creating Digital Ocean droplet...
INFO[0005] Waiting for SSH...
INFO[0072] Configuring Machine...
41. Swarm
• A Docker client endpoint that proxies requests to Docker daemons running in a cluster
• A cluster manager that keeps the state of the cluster nodes
• Easily run as a container itself
• Multiple service-discovery backends for cluster nodes (Docker-hosted, etcd, consul, zookeeper, file-based)
43. Use
• No install; run the container
• docker pull swarm
$ docker run -v /vagrant:/tmp/vagrant -p 1234:1234 -d swarm manage file://tmp/vagrant/swarm-cluster.cfg -H=0.0.0.0:1234
72acd5bc00de0b411f025ef6f297353a1869a3cc8c36d687e1f28a2d8f422a06
$ docker -H 0.0.0.0:1234 info
Containers: 0
Nodes: 3
swarm-2: 192.168.33.12:2375
└ Containers: 0
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 490 MiB
…
$ docker -H 0.0.0.0:1234 run -d -p 80:80 nginx
44. Use Machine to create a Swarm
• Get a token for discovery
• Start nodes with Machine using the --swarm option
$ docker run swarm create
31e61710169a7d3568502b0e9fb09d66
$ docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://31e61710169a7d3568502b0e9fb09d66 head
...
$ docker-machine create -d digitalocean --swarm --swarm-discovery token://31e61710169a7d3568502b0e9fb09d66 worker-00
...
$ docker-machine create -d azure --swarm --swarm-discovery token://31e61710169a7d3568502b0e9fb09d66 swarm-worker-01
45. Put it all together: Build, Ship, Run Workflow
• BUILD (Developers): Development Environments
• SHIP: Create & Store Images
• RUN (IT Operations): Deploy, Manage, Scale
46. Contact Us
For any questions/clarifications please contact:
Satheesh Reddy, Sales Manager
Newt Global Consulting LLC.
satheeshr@newtglobalcorp.com
http://newtglobal.com/
Let's take a look at what build, ship, run means in a little more detail, but before that we need to level-set some Docker vocabulary and commands:
Image
The static component that represents a non-running application
Containers are derived from images
Images contain EVERYTHING an application needs to run
Should always be built via a Dockerfile (which we'll talk about in a bit)
Container
The standard unit in which the application service resides
Packages the app and its dependencies together
Isolated from other containers
One container per app/service
Docker Engine
The program that creates, ships, and runs containers
Deployable on any physical or VM host, locally, in datacenters, or in the cloud
Communicates with Docker Hub
Registry
The service that stores, distributes, and manages container images
Receives commands from the Docker client via the Engine
Access control with public and private repos
Each instruction in a Dockerfile creates a new layer in the image.
If we visualize our earlier Dockerfile example, you can see the changes in the image that each step created (we only show the first five commands). Images are built from the bottom up, so any change made by a subsequent step is layered on top of the previous changes already made.
Image layers can be shared between different images. This means that the layers are not duplicated on your Docker host (or on the registry when they're pushed). Depending on the underlying filesystem, each of these layers is represented by a directory on the Docker host.
You'll notice, if you ever look at a complicated Dockerfile, that authors work to put as many commands as possible into a single line by concatenating them. This reduces the number of layers in an image, as the sketch below shows.
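For example, a sketch using apt-get:

# Two instructions create two layers:
RUN apt-get update
RUN apt-get install -y memcached

# Concatenated into one instruction, they create a single layer:
RUN apt-get update && apt-get install -y memcached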
When you do a docker run command, an additional read/write layer is added to the image. An important point: even if you started 100 containers, all that is created is 100 read/write layers, and they all point back to the read-only image on the host.
As we mentioned on the previous slide, the layers are represented on disk as individual file primitives. In the case of AUFS, each layer is a subdirectory holding the file system changes that layer created.
The layers are "stacked" on top of each other, and if you shell into a running container it will look like one cohesive file system.
In some cases there might be a file or directory that exists on multiple layers. In such a case, the 'top most' object is what's represented in the container file system, because that object represents the last change made to the image (remember, images are built from the bottom up).
This layering construct lets you start with the bare minimum and add exactly what you want. For instance, Alpine Linux is a very stripped-down operating system; it's about 2.6 MB. When an image is built on it, you need to explicitly add almost anything you'd want in your final image.
Copy on write is the technology that manages runtime changes to the container.
When you create a new container, you're not booting a full operating system. You're just creating a subdirectory on the Docker host to house any changes that are made to the running container. This is why new containers take <1 MB of space when they are started (since the new RW layer is initially empty) and why containers start so quickly.
If you were to shell into the container, it would look like a full copy of the original image from which it was instantiated, but in reality you'd be looking at a read-only copy of that image.
At least until a change is made to the running container (file deletion, creation, update, etc.).
When a change is detected in the container, the affected object is copied up the layers to the topmost RW layer.
One of the tricky things about containers is that when a container is destroyed, that RW layer is removed. Any changes that were made to the container are destroyed in the process.
Clearly this is suboptimal in many cases. For instance, you might want to save off some logs from your application, or your container was running a database and you want the data to last after the container is destroyed.
The solution to this problem is something called a volume. A volume is simply a subdirectory in your container that is mapped to a subdirectory that lives outside of the directory structure where your images and containers are stored on your Docker host.
For instance, let's say your application writes to /var/logs, and you want to save the logs after the container is destroyed.
You would create a new volume and tell Docker to send any data destined for /var/logs to the directory that is being managed by the volume.
From a Docker perspective we don't really care where your volumes live. They simply need to be on storage that is accessible by the Docker host operating system.
You can create volumes at build time through the Dockerfile or at run time via a command-line switch.
docker pull pulls an image from the registry to the local host. This example shows us pulling the 1.0 version of the Catweb image from the mikegcoleman repo on Docker Hub.
docker images lists all the images on your Docker host.
docker run starts a new container. In this case we are instructing the Docker engine to run the mikegcoleman/catweb image we pulled earlier. -d tells Docker it should start the application in detached mode (running in the background), and -p 5000:5000 tells the Docker engine that any requests coming into port 5000 on the host should be directed to port 5000 on this container. --name catweb specifies a name for our running container; if you do not specify a name, Docker will generate one (and they can be pretty funny).
docker ps shows running containers. ps -a will show all containers, including ones that have been stopped.
docker stop stops a running container (but does not delete it). You can specify the container name (catweb in our example) or the container ID (every image and container is assigned a unique ID).
docker rm removes the stopped container. If you specify rm -f, you will force Docker to remove the container even if it's running. Again, you can specify the name or the ID.
docker rmi removes the specified image.
docker build creates a new image from a Dockerfile. In this example we are creating an image called mikegcoleman/catweb and tagging it with a 2.0 version number. The period says to build the image from the Dockerfile in the current directory; you can explicitly specify a path to your Dockerfile if you so choose.
docker push is the opposite of docker pull: it pushes an image up to a registry. In this case we're pushing our newly created Docker image up onto Docker Hub.
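Collected together, the command sequence the notes describe looks like this (a sketch; the catweb image names come from the notes themselves):

$ docker pull mikegcoleman/catweb:1.0        # pull the 1.0 image from Docker Hub
$ docker images                              # list all images on the host
$ docker run -d -p 5000:5000 --name catweb mikegcoleman/catweb:1.0   # start a container, detached, port 5000 mapped
$ docker ps                                  # show running containers (-a includes stopped ones)
$ docker stop catweb                         # stop the container (does not delete it)
$ docker rm catweb                           # remove the stopped container
$ docker rmi mikegcoleman/catweb:1.0         # remove the image
$ docker build -t mikegcoleman/catweb:2.0 .  # build a new 2.0 image from the Dockerfile in the current directory
$ docker push mikegcoleman/catweb:2.0        # push the new image up to Docker Hub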
A Dockerfile describes how to build a Docker image (using the docker build command). The commands in the file are a mix of commands you'd actually run to install an application locally (in this case we're building a Python app, and if you're familiar with Python you'll instantly recognize much of what is up on the screen) and specific keywords that tell Docker what to do (RUN a command, COPY a file, etc.).
An important point about Dockerfiles is that they can live with your source code and be versioned by your version control system. This means Docker images are 100% reproducible. This is another area where VMs and containers can differ: many times VMs are hand-built, and if you lose your golden image you're out of luck.
This Dockerfile builds a simple Flask-based Python webapp.
Let's step through the Dockerfile line by line. Note that when the Dockerfile is processed, all these commands are run against the Docker image, not your local machine.
Line 1: Build this new Docker image based on the official Alpine Linux base image
Line 5: Install Python and pip (the Python package manager)
Line 7: Use pip to ensure that pip itself is the latest version
Line 11: requirements.txt holds a list of libraries the app will need and is used by pip to actually install them. This line copies the file from your local machine into your Docker image
Line 12: Uses that file and pip to install the requirements into the container
Line 15: Copy the application code (app.py) into the /usr/src/app directory of the image
Line 16: Copy our index.html file into the /usr/src/app/templates directory of the image
Line 19: The application communicates on port 5000, so we tell Docker to listen on that port
Line 22: When the container starts up, we fire up Python and pass it our application code to start the app
Again, these steps are pretty much EXACTLY what you would do on a traditional machine to run this application; only here they are being used to create a Docker image.
So, let's put all this together and look at how it might work in a real environment.
Starting in the middle, your IT organization might provide your developers with a set of blessed base images. These could be operating systems, languages, or components (redis, postgres, rabbitmq). The keys indicate that those images are digitally signed, so your developers know that they actually come from your IT team (vs. being placed there by some nefarious actor in the hopes of introducing a security vulnerability into your organization).
Your developers download those signed images, add their code and libraries, and then build the application Docker images. These are then pushed back up to Docker Hub. The important thing here is that developers still use all the tools they are familiar with; they use their preferred integrated development environment (IDE), for instance. The only major change is that at the end of the process they create a Dockerfile and build an image.
We don't show it here, but at some point you (hopefully) QA your app, and then we move over to the ops side of the house.
Here your ops team takes that application image and puts it into production. They can use Docker Datacenter or Docker Cloud to deploy the application to whatever infrastructure makes sense: cloud, physical, virtual, whatever. The important point here is that the application will run wherever they need it to, and the targeted server only needs to have the Docker engine; there's no need to control library versions or ensure that all the right languages are installed. It just works.