Scaling Your App With Docker Swarm Using Terraform and Packer on OpenStack
1. Scaling With Docker Swarm Using Packer, Terraform & OpenStack
Bobby DeVeaux - March 9th 2017
https://joind.in/talk/55008
2. About Me ☁️
• Created my first website at 9 years old in 1995 😮
• Started coding PHP in 2001 - 16 years ago
• Developer, Team Leader, CTO, Director & Consultant
• Been using AWS for over 5 years
• Web Development, Message Queues, Automation, CI & CD
• Previously worked at SkyBet & DVSA
• Now a DevOps Consultant with UKCloud, evangelising OpenStack
• Contributor to Terraform
• I ♥️ Docker, Terraform & Golang (or anything HashiCorp)
• Twitter: @bobbyjason
3. Why Am I Here?
• I’m here to spread awareness of UKCloud & OpenStack
• I want you to use Docker Swarm
• I want you to love Terraform
• I want to show you how to scale an app using all of the above
6. Hands Up
• Who’s using Docker?
• Who’s using Docker Swarm?
• Who’s using Terraform?
• Who’s using Packer?
• Who hasn’t played with any of them, and would love to?
7. What Is Docker?
“Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.”
- docker.com
8. Docker Compose
• Define a multi-container setup in docker-compose.yml
• Beats running multiple docker run commands
• Can specify multiple Dockerfiles
• docker-compose up is similar to vagrant up
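As a sketch, a minimal docker-compose.yml for an nginx + php-fpm pair might look like this (service names and the nginx image choice are illustrative, not from the talk):

```yaml
version: '3'

services:
  web:
    image: nginx:alpine            # illustrative nginx image
    ports:
      - "80:80"
    depends_on:
      - php
  php:
    image: php:7.0.8-fpm-alpine    # matches the PHP base image used later in the talk
```

With this in place, docker-compose up starts both containers with a single command.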
14. What Is Docker Swarm?
“Docker Swarm provides native clustering capabilities to turn a group of Docker engines into a single, virtual Docker Engine. With these pooled resources, you can scale out your application as if it were running on a single, huge computer.”
- docker.com
23. What Is Terraform?
• Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
• Infrastructure as Code: infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.
• Execution Plans: Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.
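As a minimal sketch of what that configuration syntax looks like against OpenStack (the image, flavor and key pair values here are placeholders, not from the talk):

```hcl
# Hypothetical instance definition using the Terraform OpenStack provider;
# image, flavor and key pair names are placeholders.
resource "openstack_compute_instance_v2" "manager" {
  name        = "swarm-manager"
  image_name  = "ubuntu-16.04"
  flavor_name = "m1.small"
  key_pair    = "my-keypair"
}
```

Running terraform plan against this shows exactly what would be created before anything is touched.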
25. Terraform Apply
• terraform apply
• Applies the changes as shown in terraform plan using its resource graph. It knows which resources have dependent resources and which ones don’t.
• Parallelised building - using the resource graph, it will make all the changes as quickly and efficiently as possible.
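The day-to-day workflow described on these slides boils down to three commands:

```shell
terraform plan      # generate and show the execution plan
terraform apply     # apply the changes, parallelised via the resource graph
terraform destroy   # remove everything Terraform built (no ctrl+z!)
```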
26. Terraform Destroy
• Using the .tfstate file, Terraform is fully aware of the resources it manages.
• terraform destroy will remove all the infrastructure it built for you. WARNING: there is no ctrl+z!
28. Who Likes Quick Deployments?
• How long do your builds & deployments in Travis / Jenkins take?
• What’s acceptable?
• ‘Quick’ is relative, and depends on your requirements.
• When I say quick deployments, I’m referring to efficient deployments using Foundation Images.
29. Foundation Images
• Ansible / Puppet / Chef means that lots of projects now build from the base box image, i.e. CentOS 6 or Ubuntu 14.04 etc.
• Do you want to be building this on every build? Some of you are clever, and don’t. Some of you are clever, but didn’t consider an alternative, or didn’t know how. Maybe some of you don’t even use automated builds…
• Using Packer and your provisioner of choice, you can export the artefact and store it as a Docker container or image in your cloud provider (Amazon AMI, OpenStack Glance, etc.).
30. What Is Packer?
• Tool for creating identical machine images
• Supports multiple platforms
• Supports many provisioners (Ansible, Chef, Puppet, Bash, etc.)
• Can export images in multiple formats: AMIs for EC2, VMDK/VMX files for VMware, OVF exports for VirtualBox, etc.
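As a sketch, a minimal Packer template targeting OpenStack might look like this (the image and flavor names, and the provisioner commands, are placeholders, not from the talk):

```json
{
  "builders": [{
    "type": "openstack",
    "image_name": "foundation-docker",
    "source_image_name": "ubuntu-16.04",
    "flavor": "m1.small",
    "ssh_username": "ubuntu"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "curl -fsSL https://get.docker.com | sh"
    ]
  }]
}
```

packer build stores the result as an image in Glance, ready to boot as a foundation image.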
39. Build Basic Docker Images - php-fpm
FROM php:7.0.8-fpm-alpine

RUN apk update && apk upgrade && \
    apk add --update curl wget bash tree autoconf gcc g++ make libffi-dev openssl-dev
RUN apk add supervisor
RUN pecl install xdebug && \
    docker-php-ext-enable xdebug && \
    docker-php-ext-install opcache
RUN docker-php-ext-install pdo_mysql
RUN mkdir -p /var/log/supervisor

COPY ./conf/php-dev.ini /usr/local/etc/php/
COPY ./conf/php-prod.ini /usr/local/etc/php/
COPY ./conf/php-prod.ini /usr/local/etc/php/php.ini
COPY ./conf/envars-development.conf /usr/local/etc/php-fpm.d
COPY ./conf/supervisord.conf /etc

WORKDIR /srv
COPY ./index.php /srv/web/

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin --filename=composer

CMD ["/usr/bin/supervisord","-n","-c","/etc/supervisord.conf"]
40. Build Basic Docker Images - php-fpm
<?php
echo 'Welcome to your php-fpm Docker container.
You should copy your application into the /srv folder and overwrite this file.';
var_dump($_SERVER);
phpinfo();

[program:php-fpm]
command=php-fpm --nodaemonize -c /usr/local/etc/php/php-%(ENV_APPLICATION_ENV)s.ini
Key takeaway: UKPS are serious about transforming government IT. UKCloud is uniquely focussed on providing enabling technologies and services, which enabled us to become one of the fastest growing tech companies in Europe. Today, we remain 100% focussed on UKPS and are the market-leading cloud provider. We support almost 200 workloads across over 30 direct customers and over 200 partners.
This slide provides an at-a-glance view of UKCloud.
Along the bottom are key government policies and initiatives that have enabled a fundamental transformation of how IT is delivered across UK public sector. Digital by default is a core component of Civil Service Reform and seeks to enable a digital government, where interactions with businesses and citizens happen online rather than via call centres, drop-in centres or postal services. These new digital transactions require new applications and new architectures, and hence the government’s Technology Code of Practice advocates a Cloud First policy, favouring open-source and open standards over proprietary solutions, procured via the G-Cloud framework and appropriately assured through evaluation against the Cloud Security Principles. Importantly, Social Justice features prominently under the Theresa May government and UKCloud, as a British company, employing British people, creating British innovation and paying tax in Britain, is ideally aligned with the Social Value Act. In addition, the Greening ICT initiative incentivises the use of shared and efficient services such as cloud. And the dis-aggregation policy ensures that the large, legacy IT contracts are broken down and awarded to multiple suppliers rather than a single supplier.
It’s this context that drives demand for what we do and gives us a clear purpose.
Along the top are key characteristics of UKCloud. We were founded in 2011, as Skyscape Cloud Services, and born to deliver genuine cloud services exclusively to UK public sector and to therefore disrupt the inefficient way government IT was being delivered. In the past 5 years, we’ve grown rapidly including a 96% year-on-year growth in our last financial year. Indeed, we’re recognised as one of the fastest growing technology companies in the whole of Europe. This growth has enabled us to rapidly expand our company and we now have over 180 employees – all focused on delivering the best cloud for UK public sector. And our focus is paying dividends as we’re the market leading IaaS provider in G-Cloud with a 34% market share, bigger than the next three providers combined. Indeed, we’ve extended our market share every month despite increasing competition. And unlike other providers in G-Cloud who have but a few UK public sector customers, we have scores of customers and almost 200 UK public sector workloads, applications or projects.
The centre of the slide shows that those 200 workloads consist of over 30 direct customer contracts with the likes of DVLA, HMRC, MOJ and others, as well as solutions delivered via a growing ecosystem of over 200 partners which includes the likes of SopraSteria and Capgemini delivering Systems Integration to the likes of Kainos, Equal Experts and CACI which deliver more specialised managed services and professional services. Over time, we believe the majority of our workloads will be delivered via our partner ecosystem.
We already have Docker for AWS
We already have Docker for Azure
UKCloud has an OpenStack offering
not that scary
Decentralized design: Instead of handling differentiation between node roles at deployment time, the Docker Engine handles any specialization at runtime. You can deploy both kinds of nodes, managers and workers, using the Docker Engine. This means you can build an entire swarm from a single disk image.
Declarative service model: Docker Engine uses a declarative approach to let you define the desired state of the various services in your application stack. For example, you might describe an application comprised of a web front end service with message queueing services and a database backend.
Desired state reconciliation: The swarm manager node constantly monitors the cluster state and reconciles any differences between the actual state and your expressed desired state. For example, if you set up a service to run 10 replicas of a container, and a worker machine hosting two of those replicas crashes, the manager will create two new replicas to replace the replicas that crashed. The swarm manager assigns the new replicas to workers that are running and available.
Multi-host networking: You can specify an overlay network for your services. The swarm manager automatically assigns addresses to the containers on the overlay network when it initializes or updates the application.
Service discovery: Swarm manager nodes assign each service in the swarm a unique DNS name and load balances running containers. You can query every container running in the swarm through a DNS server embedded in the swarm.
Load balancing: You can expose the ports for services to an external load balancer. Internally, the swarm lets you specify how to distribute service containers between nodes.
Secure by default: Each node in the swarm enforces TLS mutual authentication and encryption to secure communications between itself and all other nodes. You have the option to use self-signed root certificates or certificates from a custom root CA.
Rolling updates: At rollout time you can apply service updates to nodes incrementally. The swarm manager lets you control the delay between service deployment to different sets of nodes. If anything goes wrong, you can roll back a task to a previous version of the service.
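The demo that follows exercises these features through a handful of swarm commands; as a sketch (addresses and tokens are placeholders):

```shell
# On the first manager: initialise the swarm
docker swarm init --advertise-addr <manager-ip>

# On each additional node: join using the token printed by 'swarm init'
docker swarm join --token <token> <manager-ip>:2377

# Create a replicated service, then scale it
docker service create --name web --replicas 3 -p 80:80 nginx
docker service scale web=10
docker service ls
```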
- Create services
docker service ls
- Create services
- Scale services
docker service ls
docker kill container
docker service ls
docker stack deploy --compose-file docker-compose.yml mystic
docker stack ls
docker service ls
docker ps
We’ve covered docker compose & swarm basics, creating services and deploying stacks..
We have a great development environment
Any questions so far?
Hopefully most of you know this already… :)
terraform apply
3 things:
> Updating the image to use the alpha build, so we can have Docker 1.13
> Installing Docker-Compose
> Copying ssh key
Basic Nginx Docker Container
Nginx config to process PHP via PHP-FPM
Grabbing the PHP 7 docker container
Installing supervisor + debug + opcache
Copy 2 php.ini files. 1 for dev, 1 for prod
Supervisor accepts an ENV var to determine which php.ini file to load
This will build our docker containers locally, but there’s a better way..
Bash Script to wrap it up and build the containers on Jenkins
Show Jenkins
Show docker hub
Had an email asking if I knew I’d posted my password in my blog post…
Pop this file on your Jenkins server and all will be good
commented it out for nostalgic purposes
Parameterised build report
Docker-hub
Doesn’t do much other than copy the latest version of the code onto our latest foundation image
No need to rebuild the PHP box, no reinstalling OPcache or Xdebug
When we commit/merge into our master branch, we want to build our new PHP dummy app container
show Docker Hub
Red - fails to deploy as we have not created the infrastructure yet - let’s do that next
Here we are building the first manager and passing in a user_data cloudinit file
the cloud init sets up the swarm and saves the join tokens
it also uses our docker-compose.yml file to deploy the stack
For each of the other ‘secondary masters’ we use a different init file
Copies the join tokens from the Primary master, and also copies the docker compose file
Then joins the swarm using the token
We now have 3 masters, all capable of being the leader.
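A rough sketch of what that primary-manager cloud-init could do (the paths and token storage are hypothetical simplifications; the stack name mystic is the one used in the demo):

```yaml
#cloud-config
runcmd:
  # Initialise the swarm on the first manager
  - docker swarm init --advertise-addr $(hostname -i)
  # Save the join tokens so the secondary masters and workers can fetch them
  - docker swarm join-token -q manager > /opt/manager-token
  - docker swarm join-token -q worker  > /opt/worker-token
  # Deploy the stack from the compose file copied onto the box
  - docker stack deploy --compose-file /opt/docker-compose.yml mystic
```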
terraform apply
ssh
docker node ls
visit IP and show load balancing
Copying the worker tokens
scaling the nodes to the number of workers
forcing redistribution if scaling more than 1 at a time
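Sketching the worker definition (resource names and values are hypothetical, not the talk's actual config):

```hcl
# Hypothetical worker definition; change "count" and re-run
# terraform apply to scale the swarm up or down.
resource "openstack_compute_instance_v2" "worker" {
  count       = 5
  name        = "swarm-worker-${count.index}"
  image_name  = "foundation-docker"
  flavor_name = "m1.small"
  user_data   = "${file("worker-init.yml")}"
}
```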
terraform apply
Committing code to our PHP App and seeing it deployed
Load Balancing
Killing a container
Points about being Cloud Native - Database as a Service,