This is a presentation from the OpenStack Austin Summit. It talks about managing containers in an OpenStack native way where containers are treated as first class citizens.
2. OTSUKA, Motohiro/Yuanying
NEC Solution Innovators
OpenStack Magnum Core Reviewer
Haiwei Xu
NEC Solution Innovators
OpenStack Senlin Core Reviewer
Qiming Teng
IBM, Research Scientist
OpenStack Senlin PTL, OpenStack Heat Core Reviewer
3. Agenda
• Why containers if you already have OpenStack
• What are the use cases?
• The many roads leading to Roma
• Container as first-class citizens on OpenStack
• Deployment and management
• Technology Gaps
• Experience Sharing and Outlook
• What we can do today
• Things to expect in Newton cycle
5. Photographer: Captain Albert E. Theberge, NOAA Corps (ret.)
from http://www.photolib.noaa.gov/coastline/line3174.htm
6. X-ray: NASA/CXC/RIKEN/D.Takei et al; Optical: NASA/STScI; Radio: NRAO/VLA
from http://www.nasa.gov/sites/default/files/thumbnails/image/gkper.jpg
7. Advantages of container technology
[Diagram: two stacks side by side. Virtual machine model: Server → Host OS → Hypervisor → Guest OS → libs/bins → Application, with a separate guest OS per VM. Container model: Server → Host OS → libs/bins → Application, with containers sharing the host kernel.]
8. Container Image
Advantages of container technology
[Diagram: a container image (libs/bins + application) is built from a Dockerfile, pushed to a Docker Registry, then pulled onto Server A (development) and Server B (production); the registry also provides version management.]
10. Major Use Cases
• For application/service users
• IF self-serviced THEN
deploy/launch; simple configuration
ENDIF
• go...
• For application developers
• Develop, Commit, Test
• Build, Deploy, Push
• Pull, Patch, Push
• For cloud deployers/operators
• Build Infrastructure
• Install, Configure, Upgrade
• Monitor, Fix, Bill, ...
11. All Roads Lead to Roma
• How many roads do we have?
• nova lxc
• nova docker
• heat docker
• heat deployment
• magnum bay
• docker swarm
• kubernetes
• mesos
• marathon, ...
• openstack ansible
• kolla
• kolla-mesos
• .....
12. Nova: Docker / LXC
[Diagram: the Nova driver layer. Virtualization drivers (libvirt, VMware, Xen) manage VMs; Ironic manages bare metal; the nova-docker virt driver and the LXC (libvirt) driver manage containers.]
18. Balancing across the abstraction layer
• Container as another compute API?
• maybe pm, vm, lwVM
• so many backends
• An abstraction over all existing container management software?
• it is possible, but many questions to be answered, e.g. why?
• do you really need to switch between these software frequently?
• are you willing to develop a client software to interact with all of them?
• So ... container clustering
• better integration with OpenStack
• ease of use
21. Senlin Features
• Profiles: A specification for the objects to be managed
• Policies: Rules to be checked/enforced before/after actions are performed
[Diagram: Senlin manages clusters/nodes through pluggable profiles — Nova (VMs), Heat (stacks), Docker (containers), Ironic (bare metal), and others — and pluggable policies: placement, deletion, scaling, health, load-balance, affinity.]
22. Senlin Server Architecture
[Diagram: senlin-api (WSGI + middleware + apiv1) talks over RPC/MsgQueue to the engine (lock, scheduler, actions, cluster/node, service registry, parser, receiver) and the DB API. Extension points: profiles (os.nova.server, os.heat.stack, others) to talk to different endpoints for object CRUD operations; policies (placement, deletion, scaling, health, load-balance, affinity) facilitating smarter cluster management; receivers (webhook, message queue) for external monitoring services; drivers (openstack via openstacksdk — identity, compute, orchestration, network — plus dummy and others) for interfacing with different services or clouds.]
23. Senlin Server Architecture (for containers)
[Diagram: the same architecture as the previous slide, with container-specific plugins — profiles container.docker and container.lxc, and drivers docker-py, lxc, and dummy.]
29. Container node and container cluster
[Diagram: a container profile type references a template for the container plus a template for its host — a Nova server or a Heat stack — so each node of a container cluster runs on a Senlin-managed Nova server or Heat stack.]
30. How to create a container cluster?
[Diagram: cluster1 is a cluster of VM servers; containers created from a container profile are placed on those VMs and grouped into a separate logical cluster, cluster2. One VM can host multiple containers.]
31. The scalability of vm cluster and container cluster
[Diagram: the VM cluster (cluster1) and the container cluster (cluster2) each have their own placement, deletion, and scaling policies attached; the user interacts with the container cluster while Senlin scales both layers.]
The section I will talk about is "Why Containers, If You Already Have OpenStack."
A container is a type of virtualization technology, and we can use it as a computing resource.
But just another computing resource?
OpenStack already has Nova, which is an abstraction layer over computing resources.
Nova primarily handles virtual machines, and it provides the abstraction layer for managing them.
So if a container is just a type of virtual machine, "Why Containers, If You Already Have OpenStack?"
This diagram shows the difference between the virtual machine model and the container model.
The left side is the traditional virtual machine model, and the right side is the container model.
A virtual machine requires a hypervisor, which emulates and translates the hardware, and each VM runs its own guest OS.
Containers provide isolation for processes sharing compute resources.
They are similar to virtual machines but share the host kernel and avoid hardware emulation,
so you can use host resources more effectively than with virtual machines.
In this sense you can use a container like a virtual machine, which means Nova can manage containers.
In addition, Docker provides simple tools and an ecosystem around containers, which has made container technology very popular.
You can create a container image easily using a Dockerfile,
and you can share it using a Docker registry.
A container image carries all the additional dependencies an application needs beyond what the host provides,
so you can move an application from host to host easily.
Furthermore, container scalability and elasticity are much better than those of virtual machines,
and thanks to management tools like Kubernetes and Docker Swarm, managing containers across different hosts has become much easier.
So OpenStack needs this technology to make cloud management easier.
This slide shows the major use cases of container technology.
The first one is application users, who only want the application to start quickly;
they don't care how the application is started.
Application developers care about application lifecycles, version management, and portability.
And cloud operators care about how to manage infrastructure effectively, how to upgrade the system, and so on.
Let's see what container technology already exists in OpenStack.
We have nova-lxc, nova-docker, Magnum, and many other projects supporting container technology.
For example, Nova has an LXC driver and a Docker driver that expose the same interface as virtual machines,
so users can start a container just like a virtual machine.
This model doesn't expose all the advantages of container technology,
but it can meet the needs of application users who just want to deploy an application quickly.
Next, Heat.
Heat has two ways to manage containers:
one is the Docker::Container resource, and the other is the SoftwareConfig or StructuredConfig resource.
This can also meet application users' needs, but it is limited in managing containers after they are created.
And next is Magnum.
Magnum is a container-orchestration-engine-as-a-service, which deploys and manages COEs.
Once Magnum deploys a COE, users get all the advantages of container technology through the COE-specific tools such as kubectl or the Docker CLI.
This meets the developer and operator use cases,
but you must manage the containers themselves in a non-OpenStack-native way.
The next one is Kolla.
Kolla is "OpenStack as a service":
it uses container technology to make managing OpenStack itself easier.
This is one of the operator use cases,
but it only uses containers; it is not a way to manage containers themselves.
So in order to manage containers well in OpenStack, we need to find a new solution,
but we have some problems to solve first.
The community has discussed these issues a lot.
Should we create a unified API that supports VMs, bare metal, and containers?
The use cases for VMs and containers are different, so we cannot provide a single unified API.
The next question is how to create a unified abstraction API over the container orchestration engines.
This has the same problem: the differences between COEs such as Kubernetes, Swarm, and Mesos are large,
and we could not reach an agreement on it.
As introduced previously, Senlin is a project that provides a clustering service; currently it only supports VM clusters. When it supports container clustering, Senlin will do it in a way similar to VM clustering.
So first we need a new profile type: a container profile. In this profile we define the properties used to create containers; all necessary properties can be defined there, and the format is similar to the Nova server profile format.
With the profile we can create container nodes and container clusters. To create containers we need host VMs, and the host VMs are also managed by Senlin, so Senlin can manage both the VM layer and the container layer.
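To make the idea concrete, here is a minimal sketch of what such a container profile might look like, expressed as a Python dict in the shape of Senlin's type/version/properties profile format. The type name "container.docker" and the property names below are illustrative assumptions, not the exact schema Senlin adopted.

```python
# Hypothetical container profile, modeled after Senlin's
# os.nova.server profile layout (type / version / properties).
REQUIRED_KEYS = {"image"}
OPTIONAL_KEYS = {"name", "command", "host_node", "host_cluster", "port"}

container_profile = {
    "type": "container.docker",   # assumed profile type name
    "version": "1.0",
    "properties": {
        "image": "nginx:latest",   # container image to run
        "name": "web",             # container name
        "host_cluster": "cluster1",  # Senlin cluster of host VMs
        "port": 80,                # port to expose
    },
}

def validate(profile: dict) -> bool:
    """Check that a profile has the minimal structure sketched above."""
    props = profile.get("properties", {})
    if not REQUIRED_KEYS <= props.keys():
        return False
    return props.keys() <= REQUIRED_KEYS | OPTIONAL_KEYS

print(validate(container_profile))  # → True
```

In the real system the profile would be written in YAML and registered with `senlin profile-create`; the point here is only that a container profile carries everything needed to create containers, just as a Nova server profile carries everything needed to create a VM.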
This is the workflow of creating a container cluster. We can create multiple containers in one VM, depending on the VM's resources.
Physically the containers run on VMs, but logically the container cluster (cluster2) and the VM cluster (cluster1) are separate clusters, and Senlin can manage them separately.
That means end users who just want containers may see only the container cluster.
Let's see how to manage these resources.
Scalability control is an advantage of Senlin. When we have a VM cluster and a container cluster, sometimes the resources are not enough and the clusters need to scale out.
Senlin provides policies that tell a cluster how to scale out and scale in. As we saw, the policies are attached to the cluster. When resources run short we get an alarm from Ceilometer and the policy is triggered: it tells the VM cluster to create a VM, after which the scaling policy attached to the container cluster is triggered and a new container is created.
This is the scale-out model; of course, when resources are idle, some VMs and containers are deleted. This is how Senlin controls cluster scalability.
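The cascading scale-out described above can be sketched in a few lines. This is not Senlin's implementation; `Cluster` and `on_alarm` are illustrative names, and the real flow goes through policies, actions, and the message queue.

```python
# Toy model of the two-layer scale-out: an alarm first grows the VM
# cluster, then the container cluster places a new container on the VM.
class Cluster:
    def __init__(self, name):
        self.name = name
        self.nodes = []

    def scale_out(self, node):
        """Add one node to the cluster and return it."""
        self.nodes.append(node)
        return node

vm_cluster = Cluster("cluster1")         # host VMs
container_cluster = Cluster("cluster2")  # containers

def on_alarm(vm_cluster, container_cluster):
    """Handle a 'resources exhausted' alarm (e.g. from Ceilometer)."""
    # The scaling policy on the VM cluster fires first ...
    new_vm = vm_cluster.scale_out(f"vm-{len(vm_cluster.nodes) + 1}")
    # ... then the container cluster's policy places a container on it.
    container_cluster.scale_out(
        {"host": new_vm, "name": f"c-{len(container_cluster.nodes) + 1}"}
    )

on_alarm(vm_cluster, container_cluster)
print(vm_cluster.nodes)         # → ['vm-1']
print(container_cluster.nodes)  # → [{'host': 'vm-1', 'name': 'c-1'}]
```

Scale-in would run the same cascade in reverse, with the deletion policy deciding which container and, eventually, which idle VM to remove.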
About the design of container cluster support, we still have some issues to think about. When starting a container, we need to determine which cluster and which node to start it on, so we may need a scheduler for this job. In Senlin we have a placement policy in which we can define where to start nodes; it is a kind of scheduler, but a very simple one, not yet smart enough, and we still need to improve it to meet our needs. In any case, it is one solution to this issue.
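A toy version of that placement idea is easy to write down: pick the host VM currently carrying the fewest containers. Senlin's real placement policy is more general; the function below is only an illustrative assumption.

```python
# Least-loaded placement: choose the host VM with the fewest containers.
def place_container(hosts):
    """Return the name of the host VM carrying the fewest containers.

    hosts: mapping of host name -> list of containers on that host.
    """
    return min(hosts, key=lambda h: len(hosts[h]))

hosts = {
    "vm-1": ["c-1", "c-2"],
    "vm-2": ["c-3"],
    "vm-3": ["c-4", "c-5", "c-6"],
}
print(place_container(hosts))  # → 'vm-2'
```

A smarter policy would weigh CPU and memory headroom, affinity and anti-affinity rules, and availability zones rather than just container counts, which is exactly the improvement the talk says is still needed.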
We also hope we can use Kuryr to create container networks automatically, just like we create VM networks.
That is all about container cluster support in Senlin. We have had some discussions and reached agreement on some issues, but we still want to hear more voices from the community. We need your ideas, your suggestions, and also new hands, so please join us if you are interested in this work. You can find us in the #senlin IRC channel and at our weekly meeting; any ideas are appreciated.