How do you break a monolithic application into microservices? Learn how AWS delivers the integrated building blocks to support the move to containerized microservices for any application architecture, regardless of scale, load, or complexity. Learn more about the newly released AWS App Mesh and how it makes it easy to monitor and control containerized microservices. We will explore different options for running containers on AWS, such as AWS Fargate (serverless containers), EKS, and ECS.
A stateful container stores its state on local disk or in local memory.
The workload ends up tied to the specific host that holds that state.
[Slide diagram: Container 1 and its disk are pinned to a single Availability Zone (eu-west-1a); eu-west-1b and eu-west-1c cannot serve the workload]
Let's set out what a 12-factor app is and why it's a good set of principles.
In the past we would often treat a server as a machine which has a variety of roles. A single server may be responsible for serving web content, email, processing background jobs, and even hosting a database system. Your application is really only one of the many things that runs on that machine.
Scaling with a traditional server model is hard. Adding a new machine takes time to provision. Those machines may store application data directly on their storage devices, including log files. It becomes more difficult to manage as your deployment infrastructure grows larger.
A more modern approach is to use many virtual machines, containers, or even physical machines that each serve a single purpose. This architecture will allow an application to scale horizontally without requiring lots of extra effort. If your application follows the 12-factor principles then scaling will be much easier and your app will be ready to run on modern infrastructure.
First let's talk about ECS and Fargate,
Then we'll look at how a 12-factor app fits into this.
Getting started with containers is rather easy; you could even spin one up locally without much effort.
Or if you want to run containers in the cloud, you spin up some EC2 instances, launch containers on them, and get going in minutes. This would work even if you are using dozens of containers. But think about scaling this: managing hundreds of such instances, monitoring their health, scaling them, launching your containers on them, and handling the whole lifecycle around them. How do you scale for that?
So let's say you plan to run several highly available applications across three different Availability Zones.
[CLICK]
ECS lets you operationalize your containerized workloads at very high scale. There is no management software to install or to keep highly available yourself.
The cluster management piece lets you monitor the cluster, scale it using Auto Scaling groups, and manage the state of the instances in the cluster.
The placement engine lets you set rules that land your containers on the right instances based on your preferences.
And then finally, the advanced scheduling features help maintain the desired state of the application, spawn new containers to respond automatically to scaling needs, and maintain resiliency by deploying across multiple Availability Zones while staying resource-aware of the underlying compute.
If you double-click on the instances, it reveals additional supporting software that you need to run, maintain, and patch on ALL your virtual machines to support your containers, such as the Docker daemon and the ECS agent.
So the real picture looked something like this. There are these additional layers of management you need to be aware of when all you wanted to do was run containers!
[CLICK]
Fargate support for ECS enables you to do just that: a fully managed orchestration and data plane experience, bringing your focus to only the containers.
With Fargate you don't even need to think about the infrastructure.
First you have your code,
But you need somewhere to store it
So version control is key; this example uses CodeCommit, but you can use other tools such as GitHub or GitLab.
This lets you ensure code in all environments is aligned.
Code is one part, but it's not the end of the story.
You often have libraries and dependencies you need to include.
Docker lends itself to this very well.
You can pull code, binaries, and dependencies into one package.
Then the same code, dependencies, and libraries can run anywhere.
This helps get past the famous "it works on my machine" problem.
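As a minimal sketch (the app, base image, and file names here are hypothetical, assuming a Node.js service), a Dockerfile that bakes code, binaries, and dependencies into one portable package:

```dockerfile
# Pin the runtime so every environment gets the same version
FROM node:18-alpine
WORKDIR /app

# Declare and install dependencies explicitly (factor II)
COPY package*.json ./
RUN npm ci --omit=dev

# Ship the application code in the same artifact
COPY . .
CMD ["node", "server.js"]
```

The same image built from this file can then run unchanged on a laptop, in dev, and in prod.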
Config is just as important
It's very tempting to do this.
However, this leads to the potential of having different config in dev/staging and prod. This may introduce errors into your deployment pipeline and affect production.
So avoid this pattern at all costs.
Use the same docker container in all environments,
Use environment variables, or even Secrets Manager, to pull configuration into that environment.
If you've ever run Docker locally, the -e "XYZ=foobar" flag can be used to configure the container.
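As a sketch (the image and variable names are hypothetical), the container's entrypoint reads everything it needs from the environment; locally you supply the values with docker run -e, while on ECS/Fargate the same variables come from the task definition:

```shell
#!/bin/sh
# Locally:  docker run -e "DB_HOST=localhost" -e "DB_PORT=5432" myapp
# On ECS/Fargate the same variables are injected via the task definition.
DB_HOST="${DB_HOST:-localhost}"   # fall back to a sane default if unset
DB_PORT="${DB_PORT:-5432}"
echo "connecting to ${DB_HOST}:${DB_PORT}"
```

Because the image never changes, only the injected environment does, the exact same container runs in dev, staging, and prod.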
ECS and Fargate also allow you to do this!
You can also handle this in the application code:
Consume everything as if it's an API or an external service.
Write your code to discover those services,
Or use Cloud Map with ECS.
AWS Cloud Map
Use cases:
Service Discovery
Continuous integration and delivery
Automated health monitoring
Increased availability
Cloud Map constantly monitors the health of every IP-based component of your application and dynamically updates the location of each microservice as instances are added or removed. This ensures that your applications only discover the most up-to-date locations of their resources, increasing the availability of the application.
Increased developer productivity
Cloud Map provides a single registry for all your application services which you can define with custom names. This ensures that your development teams don’t have to constantly store, track, and update resource name and location information or make changes directly within the application code.
Use CodeBuild to watch for changes committed to CodeCommit and, on a change, start a build.
Don't forget you can also add the configuration to the task definition.
So whilst our container has a config file, that file should rely on configuration stored in the task definition in ECS/Fargate.
Explain a Task definition
(For the k8s people in the room: a task is roughly a pod.)
The image and configuration of the application, defined as code. A task definition can have up to ten containers defined.
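A trimmed task definition sketch (the family name, image URI, account ID, and values are hypothetical) showing how the image and its configuration, including the environment variables discussed earlier, are defined as code:

```json
{
  "family": "myapp",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest",
      "portMappings": [{ "containerPort": 8080 }],
      "environment": [
        { "name": "DB_HOST", "value": "db.internal.example" }
      ]
    }
  ]
}
```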
ALBs and target groups are your friends.
ALBs allow you to route requests at layer 7 to different microservice backends, supporting factor VI, Processes (execute the app as one or more stateless processes).
When you use Fargate/ECS, a service is responsible for tracking ports and registering them with target groups; this takes away the heavy lifting.
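For illustration, a path-based ALB listener rule (the path and target group ARN are hypothetical) of the kind used to route layer-7 traffic to one microservice's target group:

```json
{
  "Conditions": [
    { "Field": "path-pattern", "Values": ["/orders/*"] }
  ],
  "Actions": [
    {
      "Type": "forward",
      "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/orders/abc123"
    }
  ]
}
```

Each microservice gets its own target group, and the ECS service keeps that group's registered targets up to date as tasks come and go.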
So ECS allows you to easily spread your workload around your cluster and scale the application inside the cluster.
You can also use placement constraints to stop two versions of the same app running on the same host.
You can scale the hosts and the application inside.
You scale hosts when you run out of resources to schedule more tasks, or when a placement constraint denies the task placement due to your rules.
You can also scale the number of tasks to cope with demand.
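As a sketch, a target-tracking scaling policy configuration for Application Auto Scaling (the 60% target and cooldowns are arbitrary example values) that adjusts the number of tasks based on average CPU utilisation:

```json
{
  "TargetValue": 60.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleOutCooldown": 60,
  "ScaleInCooldown": 120
}
```

The service adds tasks when average CPU rises above the target and removes them when it falls below, within the min/max bounds you register for the service.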
The AWS CDK is an infrastructure modeling framework that allows you to define your cloud resources using an imperative programming interface. The CDK is currently in developer preview. We look forward to community feedback and collaboration.
CDK is most useful for creating high-level constructs, for example a VPC including your standard configuration for subnets, gateways, NAT, routing, and security groups.
OPTIONAL: a live demo with the CDK with this sample code
Centralised logging is key!
The awslogs driver sends container logs to CloudWatch Logs and is great for capturing logs without running any infrastructure.
If you want to use another log driver you can, but you may need to run log forwarders in your cluster, and this just increases your maintenance workload.
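A container definition fragment (the log group name and stream prefix are hypothetical) using the awslogs driver to ship logs straight to CloudWatch Logs:

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/myapp",
      "awslogs-region": "eu-west-1",
      "awslogs-stream-prefix": "web"
    }
  }
}
```

With this in the task definition, anything the container writes to stdout/stderr lands in CloudWatch Logs with no log agents to run yourself.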
Put admin tasks in containers.
Have the cluster pull and execute them.
Terminate the process properly.