9.0.1 FP10 brings support for Domino on a Docker platform. You may know that Docker is a container solution, but what does that mean and how could it affect your Domino infrastructure? In this session we'll review how to install and run Domino in a Docker container, whether it can support external clustering, and the decisions to consider when designing a container architecture.
1. Docker for Domino
• Gabriella Davis
• IBM Lifetime Champion for Social Business
• gabriella@turtlepartnership.com
2. Gab Davis
• Admin of all things and especially quite
complicated things where the fun is
• Working with the design, deployment and
security of IBM technologies within global
infrastructures
• Working with the real world security and privacy
aspects of expanding data ecosystems
• Stubborn and relentless problem solver
• http://turtleblog.info
• https://www.turtlepartnership.com
• IBM Lifetime Champion
4. DevOps
• DevOps (Development and Operations) refers to the collaboration of software development
and developers with IT operations
• it refers to practices, processes and communication not specific technologies
• good DevOps practices are designed around rapid, consistent and reliable systems
• The goal of DevOps is to ensure the seamless delivery and maintenance of
applications
5. Microservices
• Applications were traditionally developed in entirety with every function of the application grouped together
and operating in concert
• for that reason applications can often be large, overdeveloped and hard to update
• a change to a single function has to be incorporated into the entire application without breaking anything else
• Microservices architecture refers to applications that are developed as separate functional or core services
each operating in their own isolated container but able to talk to each other
• updates are simpler and minimising the overall application size is easier by deploying just those micro
services that are needed
• Domino Tasks = Microservices ?
6. Virtualisation
• Isolating applications running on a single physical server
• Virtualisation allows us to use software to mimic physical hardware
• Using virtual machines we can more easily create new server instances and scale them
• This saves both time and cost
• The use of virtual machines and virtual environments has grown exponentially in the
past decade
7. Virtual Machine vs Container
With little OS of their own, containers are more lightweight and allow the host OS and hardware to be utilised more efficiently
(diagram: a virtual machine stack compared with a container stack)
9. Virtual Machine or Container?
It’s not an either / or - both architectures have their benefits and drawbacks
Virtual Machine
• More isolated and more secure
• Can run different operating systems in each virtual machine and not be tied to the host OS
• Able to granularly scale use of resources
• More work to set up and manage
• Each VM must have enough resources assigned to also run the VM’s OS
Container
• Portable, simple to move between hosts or deploy from development directly to production
• Fast to start up with no OS overhead
• Able to make more efficient use of host resources
• Collectively dependent upon and all using the same host OS
• Potential for security vulnerability via a “bleed” from the container to the OS and the process that started it
10. Containers Offer..
• Self-contained sandbox environments that host applications including micro services
• Containers do not have an entire OS installed inside them the way virtual servers do but instead
share the OS of the host machine
• Multiple containers can share the OS of a host machine with their own isolated application and
file system
• Container architecture is designed to be portable and simple to update / maintain
• A container would usually contain a single service so that maximum benefit can be leveraged from
the portability
• one service or application to one container
• each application environment is not dependent on the other
11. Docker
• Docker is an open source container based virtualisation solution
• There is both a “Docker” client and a “Docker” server
• Docker is not the only container environment, there are others such as rkt (Rocket)
but IBM are using Docker for Connections Pink and it’s supported in a wide variety of
hosting environments including AWS, Azure and Rackspace
• Docker can be quickly installed on Linux, Windows and Mac
12. Kubernetes and Docker Swarm
• Containers must be deployed and managed
• management tools aren’t easy
• Containers can also be clustered and load managed by a cluster manager
• Docker Swarm is a native cluster manager using the Docker API so it requires Docker containers
• Kubernetes evolved out of Google’s container expertise and was far ahead of Docker Swarm for many
years, though that gap has since narrowed
• There are many tools out there to help cluster and manage Docker containers
• If you are going to have exclusively Docker containers then Docker Swarm may be a better approach
than Kubernetes
• Clustering at Container level has to be very carefully considered with Domino containers
14. DevOps and Containers
• Developers love containers
• They make it easy to isolate microservices and swap out updated code
• However that ease comes with risk
• each container is drawing resources from the same host
• each container has separately mounted storage and often nested dependencies
• spawning a new container from an image will not deploy changes made inside an existing
spawned container
• Process is everything
• Process is Operations and Development working together
15. Mac docker install supports Linux containers
Windows docker install supports Windows 2016 server core containers and Linux (kind of) using Linuxkit (don’t do it!)
16. Images and Containers
• You don’t run the image itself but use the Docker server to
spawn a container based upon that image
• You can spawn as many containers as you want using the
same image on the same host
• Each time a new container starts it is given a name, an ID
and a tag
• Changes made inside the container are not saved when
you quit it unless you commit those changes back to a new
image
• Starting a container from an image also includes mounting
storage
• So is having a re-usable Domino image useful?
(diagram: docker run spawns a container instance based on an image; each running container gets a name, an ID and a tag; docker commit creates a new image based on a container)
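The run/commit cycle described above can be sketched on the command line; the container and image names here are placeholders:

```shell
# Spawn a container instance based on an image; --name sets the name,
# and Docker assigns the container an ID (the image tag defaults to :latest)
docker run -t -i --name dominotest registry.access.redhat.com/rhel7

# Changes made inside the container are lost when it is removed unless
# you commit them back to a new image
docker commit dominotest dominobase:v1
```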
17. Commands For Reviewing
Containers
• Once docker CE or EE is installed and active the command “docker” will work in a command
window
• docker images - shows all images held locally on the host
• docker ps - shows running containers; add --all to include stopped ones
18. Docker Pull Images
• the Docker Store (store.docker.com) has hundreds of images that you can use
• docker pull microsoft/windowsservercore (the windows 2016 server core container)
• docker pull registry.access.redhat.com/rhel7 (rhel 7)
• docker pull mongo
• docker pull store/ibmcorp/db2_developer_c:11.1.3.3-x86_64
• For Domino I use RHEL
• There are many CentOS and Debian containers available but those are not supported
Domino platforms
19. Images & Containers
• docker image ls - to show all available images
• docker ps --all - to show all running and non running containers
• docker-machine <command> <machinename> e.g. docker-machine inspect turtle test
• docker exec <container name> <command> - run a new process in the named container e.g. bash
20. Commands For Containers
• docker run - lets you start a new container from an image
• -d starts in detached (background) mode, -i starts in interactive mode
• https://docs.docker.com/engine/reference/commandline/run/
• docker attach - lets you connect to a running container
• CTRL P, CTRL Q exits a running container without closing it
• CTRL D - exits and closes a container; this isn’t the same as removing it, but any
uncommitted changes are lost once the container is removed
• docker logs <containername>
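Put together, a typical session with these commands might look like this (the container name is a placeholder):

```shell
# Start a container from an image in detached, interactive mode
docker run -d -t -i --name dominotest registry.access.redhat.com/rhel7

# Attach to the running container (CTRL P, CTRL Q detaches again
# without closing it)
docker attach dominotest

# Review the container's console output
docker logs dominotest
```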
22. Using Docker For Domino?
• Using the image registry.access.redhat.com/rhel7
docker run -t -i -v /Users/GabAir/Downloads/Notesdata:/home -p 1352:1352 -p 80:80
registry.access.redhat.com/rhel7
docker exec -i -t <containername> /bin/bash
• docker commit <containername> <newimagename>
• Development
• Testing
• Low priority cluster mate
• DR hot swap
• Consider disk and storage
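The three commands on this slide form a repeatable recipe; as a sketch, with placeholder container and image names (the host paths are examples):

```shell
# Spawn a RHEL 7 container with Notes data mounted from the host and the
# NRPC (1352) and HTTP (80) ports forwarded
docker run -t -i --name dominotest \
  -v /Users/GabAir/Downloads/Notesdata:/home \
  -p 1352:1352 -p 80:80 \
  registry.access.redhat.com/rhel7

# In another terminal, open a shell inside the running container
# (e.g. to run the Domino installer)
docker exec -i -t dominotest /bin/bash

# Once Domino is installed inside the container, save the result as a
# reusable image
docker commit dominotest dominobase
```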
24. Choosing an OS For Your Container
• Domino runs on both Linux and Windows so how do we choose?
• the OS kernel is on the host machine so that OS has to be a supported Domino OS
• both the image and therefore the container access the host kernel for their core functionality
• Windows 2016 and Windows 10 can run Docker CE and EE
• Linuxkit can be used to run Linux containers on Windows (dodgy at best)
• Windows containers are Windows 2016 core server
• Domino on Docker will primarily run on a Linux host with a Linux container
• Adding an extra layer of running a Linux container on a Windows docker install impacts performance
25. Resources
• When creating a docker container from an image you do have some control over the
resources on the host that it can consume. This includes
• Maximum allowed memory
• Allocated CPU % as a total of the host and/or relative to other containers running
• This will prevent a container from consuming too much resource
• However Domino cares about
• disk performance
• cpu
• memory
• designing the correct storage is the most critical aspect of a production container
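As a sketch, the memory and CPU controls above look like this at run time (the image and container names are placeholders, and --cpus needs a reasonably recent Docker release):

```shell
# Cap the container at 4 GB of RAM and two CPUs, and give it double the
# default relative CPU share (1024) when the host is under contention
docker run -d --name dominotest \
  --memory=4g --cpus=2 --cpu-shares=2048 \
  dominobase
```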
26. Container Clustering With Domino
• Clustering at Container level has to be very carefully designed if you’re deploying Domino containers
• Domino won’t like container clustering across identical live Domino servers
• Data storage should be outside the container as containers are designed to be temporary and self
destruct on quitting (more on that in a bit)
• Having a failover cluster as a Domino container
• Having multiple active / passive containers representing the same Domino server
(diagram: an active Domino container and a passive Domino container for the same server MailA, both pointing at file based storage for MailA)
28. Docker Data Volumes
• Shared storage areas that can be used by the containers to access data on the host or
within another container
• Volumes are not created inside a container’s own file system; instead you create volumes that
link to data stored either in another container or on the host
• Volumes defined in an image and created as the container spawns apply only to
that container and are not removed when it is removed
• Volumes defined within a container can be accessed by other containers using the volumes-from option
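A minimal sketch of a Docker-managed volume; the names and the notesdata path are assumptions:

```shell
# Create a named volume managed inside the docker file structure on the host
docker volume create dominodata

# Mount it into a container; deleting the container will not remove the volume
docker run -d --name dominotest -v dominodata:/local/notesdata dominobase

# See where Docker keeps the volume on the host
docker volume inspect dominodata
```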
29. Data Volume Containers
• You are essentially creating containers to be NFS
stores
• Since they are containers they can be moved to new
locations and the references to them will still work
• However if the data container isn’t running the data
can’t be reached
• Backing up the data means backing up the
container
• All containers that mount that volume are reading
and writing to the same space
• Be careful not to destroy the data Container
• Docker has limited data integrity protection
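A data volume container can be sketched like this (all names are placeholders):

```shell
# Create a data-only container that defines a volume; it never has to run
# a real workload
docker create -v /local/notesdata --name dominodata-store \
  registry.access.redhat.com/rhel7 /bin/true

# Mount that container's volume into the Domino container; every container
# using --volumes-from reads and writes the same space
docker run -d --name dominotest --volumes-from dominodata-store dominobase
```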
30. Directory Mounts
• A location on the host machine that is “mapped” to a
mount point in one or more containers
• It is accessible and exists regardless of whether any
containers are running or using it
• It can be backed up as standard data storage
• Access is controlled by host file permissions
• It can’t be as easily moved to a new location
• Be careful of tying yourself in knots with relative
references to data volumes
• Be VERY careful of launching a container if you
don’t know the mount points that are defined inside it
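A directory mount is just a host path passed with -v at run time; the paths and names here are examples:

```shell
# Map a host directory to a mount point inside the container; the host
# path exists, and can be backed up, regardless of any container
docker run -d --name dominotest \
  -v /opt/dominodata:/local/notesdata \
  dominobase
```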
31. Directory Mounts vs Docker
Data Volumes
• A directory mount can be assigned to multiple
containers even after they are created
• A directory mount can point to any part of the
host file system that the account running the
docker container has access to
• Directory Mounts have security and data loss
risks that need to be carefully managed
• Data volumes are created when the
container is created and cannot be re-used
directly by other containers
• Docker data volumes are created within
the docker file structure on the host and
are managed (or not managed)
separately from the container
• Deleting a container will not remove the
data volume
32. Risks
• Storage containers can easily be deleted
• especially if it’s not clear that another container is using that storage
• Directory mounts can be easily overwritten if another container runs with the same mount
points
• Deploying new code via a container that retains the storage references from a previous
version will overwrite production storage
• Ease of use and flexibility must be tempered with DevOps process and planning
34. Docker Networking
• Docker can create a private network for each container it starts
• Containers can be linked together to share the same private network and isolate themselves
from other containers started by the same docker machine
• by linking containers you can ensure that if they are killed and then recreated with the same
name, the network link is maintained
• We can also tell the docker server to expose specific ports inside the containers to external
ports that can be reached outside the containers. For instance a port 25 SMTP listener or
443 web server (old school method)
35. Bridged Driver Networks
• Each container is created as part of a
defined bridged network
• The bridge networks are private and on
their own subnet
• Containers on the same bridge network
can be seen and addressed within their
own private network without routing traffic
through the host
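A sketch of a user-defined bridge network; the subnet and names are examples:

```shell
# Create a private bridge network on its own subnet
docker network create -d bridge --subnet 172.25.0.0/16 dominonet

# Containers on the same bridge network can address each other by name
# without routing traffic through the host
docker run -d --name dominoa --network dominonet dominobase
docker run -d --name dominob --network dominonet dominobase
```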
36. Overlay Driver Networks
• Each container is created as part of a
defined overlay network
• Overlays are similar to bridge networks
but are designed to work with multi host
networks so containers do not have to be
on the same host to see each other
• Docker swarm is used to manage and
route traffic between containers using the
overlay driver
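Overlay networks need swarm mode to be active first; a sketch with placeholder names:

```shell
# Initialise swarm mode on the first host (other hosts join with the
# token this prints)
docker swarm init

# Create an overlay network that spans all hosts in the swarm;
# --attachable lets ordinary docker run containers join it
docker network create -d overlay --attachable dominonet

# A container on any swarm node can now reach containers on the same
# overlay network on other nodes
docker run -d --name dominotest --network dominonet dominobase
```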
37. Port Forwarding
• When running the containers we specify
both a port to open and how it is reached
from the host machine
• This port forwarding can then be used by
other containers to talk to each other via the
host
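Port forwarding is set with -p host:container at run time, for instance exposing Domino's NRPC and HTTP listeners (the ports and names here are examples):

```shell
# Map host port 1352 to container port 1352 (NRPC) and host port 8080
# to container port 80 (HTTP)
docker run -d --name dominotest -p 1352:1352 -p 8080:80 dominobase
```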
38. Macvlan Drivers
• Each container is created as part of the host
network
• The routing and accessibility is controlled as if the
container were simply another machine on the host
network
• This makes macvlan the most lightweight of drivers
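A macvlan sketch; the parent interface, subnet and addresses must match your own LAN (the values here are examples):

```shell
# Create a macvlan network bound to the host's physical interface
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 dominolan

# The container appears as just another machine on the host network
docker run -d --name dominotest --network dominolan --ip 192.168.1.50 dominobase
```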
39. Summary
• Docker is a great architecture for temporary or changing environments such as development
• It really does require understanding and a level of comfort with the Linux OS
• Containers are designed to destroy themselves and their content unless you specify otherwise so in most cases we want
to keep content outside the container
• Domino runs well on Docker for Linux but it really isn’t a collection of microservices (yet**) so isn’t taking advantage of
the core architectural benefits of containers
• ** hello HCL - one for the wishlist
• Domino is very dependent on disk performance and I/O so choosing and optimising the right storage is critical
• Deploying docker as production architecture for Domino is not something I’d recommend but for test, development or
small scale failover it is a good solution