5. Amazon ECS
● It offers basic container orchestration capabilities
● Ideal for small clusters
● It integrates very well with AWS services
● It's very cheap to run
● It's a lock-in solution, but so is the whole AWS ecosystem
● We lose the secrets and configuration management we'd get from K8S
7. *LB is just not Nginx
● Too limiting in terms of routing and URL rewriting
● Configuration is cumbersome via API calls
● Nothing can beat the simplicity of Nginx's text-based configuration
8. Can we have it all?*
*Spoiler alert: yes, we can
9. The problem
A load balancing solution that integrates natively with ECS but is as easy to configure as Nginx?
10. The solution
ECS Ingress
● A small Go executable that spawns a vanilla Nginx instance
● Loosely modelled after ingress-nginx, but 10x simpler :)
● Leverages continuously updated upstreams to integrate with ECS services
● Reads the Nginx config dynamically from S3
github.com/fratuz610/ecs-ingress
11. Visually
[Diagram: an AWS VPC containing an ECS cluster of three EC2 instances (EC2 #1-#3). Each instance runs an ECS Ingress container alongside the application containers (SERVICE 1, SERVICE 2). Incoming HTTP/TCP traffic reaches the instances directly via DNS A records; the Nginx config is pulled from S3, and cluster changes (e.g. deployments from a CD tool) trigger upstream updates.]
app.example.com. 59 IN A <EC2-1-public-ip>
app.example.com. 59 IN A <EC2-2-public-ip>
app.example.com. 59 IN A <EC2-3-public-ip>
12. Basic nginx config
http {
    ...
    # all upstreams
    # this is the dynamic reference that always needs to be there
    include /app/nginx/upstreams.conf;

    server {
        server_name app.example.com;

        location / {
            # app-ui-prod should be the name of the ECS service
            proxy_pass http://app-ui-prod;
        }

        location /v2/api {
            # app-api-prod should be the name of the ECS service
            proxy_pass http://app-api-prod;
        }
    }
}
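For reference, the included upstreams.conf is the file ECS Ingress regenerates from the running tasks. A generated file might look like the fragment below; the service names match the ECS services, while the addresses and ports are illustrative (the ports being the dynamic host ports ECS assigned):

```nginx
# generated by ecs-ingress - do not edit by hand
upstream app-ui-prod {
    server 10.0.1.12:32768;
    server 10.0.2.7:32771;
}
upstream app-api-prod {
    server 10.0.1.12:32769;
}
```

This is why the `proxy_pass http://app-ui-prod;` lines above need no host or port: the upstream name resolves to whatever tasks are currently running.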
13. Nginx config with HTTPS
http {
    ...
    # all upstreams
    # this is the dynamic reference that always needs to be there
    include /app/nginx/upstreams.conf;

    server {
        listen 443 ssl;
        listen [::]:443 ssl;

        ssl_certificate /app/nginx/fullchain.pem;
        ssl_certificate_key /app/nginx/privkey.pem;
        ...

        location /v2/api {
            # app-api-prod should be the name of the ECS service
            proxy_pass http://app-api-prod;
        }
    }
}
14. Nginx with TCP tunnelling
stream {
    # all upstreams
    # this needs to be repeated here as it's context sensitive - http and stream
    include /app/nginx/upstreams.conf;

    server {
        listen 1883 so_keepalive=on;
        proxy_pass mqtt-server:1883;
        proxy_connect_timeout 1s;
    }
}
15. Nginx with TCP tunnelling #2
# PGSQL connector to the postgres-prod upstream
stream {
    # all upstreams
    include /app/nginx/upstreams.conf;

    server {
        listen 5432 so_keepalive=on;
        proxy_pass postgres-prod;

        # allow access only from containers on the local Docker bridge network
        allow 172.17.0.0/16;
        deny all;
    }
}
You can connect to Pgsql on 172.17.0.1:5432 from each container in the cluster.
16. Gotchas
● A valid Nginx config is required to start the container
● Only ECS tasks in the RUNNING state are considered
● ECS Ingress combines the Nginx logs and the Go ones*
● It uses polling (every 10 seconds):
ECS API calls are free, S3 calls are metered
*for easy ingestion into CloudWatch
18. Does anyone have any questions?
Thanks!
stefanofratini610
bitsandpieces.it
@fratuz610
github.com/fratuz610/ecs-ingress
Editor's notes
As all companies do, we started small,
trying to find our product-market fit.
At the beginning we had one server with everything on it;
it worked fine, but we had no CI/CD of any sort.
We looked into containers to:
- simplify management / high availability
- provide seamless CD capabilities
- provide a cost-effective solution -> margins
I had managed teams that got onto the K8S journey early on, and:
- it comes with complexities and overhead
- we don't have a dedicated devops resource
- it's expensive to run on AWS
- It offers basic container orchestration capabilities
-- Amazon Elastic Container Service (Amazon ECS) is a container orchestration service that runs and manages Docker containers
- Fits our requirements for small clusters
- it integrates very well with AWS services (even too well) - for example cloudwatch, VPC, EFS, code build and code deploy
- it's very cheap to run - free - spot instances
- It's a lock in solution but so is the whole AWS ecosystem
- we lose secrets, and configuration management from K8S
- documentation is lacking
- learning curve is not as steep as K8s but still
- incoming networking is lacking
-- Specifically the ELB/ALB/NLB trio are just not good enough for anything above basic
-- ELB/ALB/NLB are black boxes and expensive to run
- "it's too limiting when it comes to routing" compared to NGINX
- We run everything behind the same domain for SSL cert management simplicity but also to get rid of CORS
- load balancers -> listeners (ports) -> rules that link to target groups
- BG: I wrote 6 or 7 blog posts a few years ago on NGINX conf and they are still the highest hits
- Nginx is fast, actively developed and has an expressive configuration - that simply cannot be matched by any other way
We want to use ECS because we are on Amazon + the alternative is too expensive/complicated
But we want to still use Nginx for routing
- ECS-Ingress
- https://github.com/fratuz610/ecs-ingress
- a small golang executable that spawns a vanilla nginx instance
- loosely modelled after ingress-nginx but 10x simpler :)
- leverages continuously updated upstreams to integrate with ECS services
- reads the Nginx conf dynamically from S3
- it's deployed as a daemon with HOST networking
- all services are deployed with Bridge networking and a mapped port of 0
- Change on the S3 bundle OR the ECS cluster => reload
- We use any DNS service to add multiple A records pointing to all the members of the cluster.
- Modern DNS services have a built in health check
- Each instance needs to have a public IP
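The two networking choices above can be sketched as an ECS task-definition fragment for one of the application services; every name, image, and port here is illustrative, not taken from the real setup:

```json
{
  "family": "app-api-prod",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "app-api-prod",
      "image": "example/app-api:latest",
      "memory": 256,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  ]
}
```

With hostPort set to 0, ECS assigns each task an ephemeral host port, and those dynamic host:port pairs are what end up in the generated upstreams. The ECS Ingress daemon itself would instead use "networkMode": "host" in its own task definition.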
- source control the configuration
- A valid config is required
- Only running tasks are considered
- ECS ingress combines NGINX logs and the golang ones in one stdout/stderr stream for easy ingestion into CloudWatch Logs
- Uses polling (every 10 seconds). API calls are free, S3 calls are metered.
- Slack Hooks support for automatic update notifications
- Automatic support for Route53 updates to reflect changes in the instances attached to an ECS cluster
- Letsencrypt support to automatically generate new HTTPS certificates (Gossip protocol coordination across running containers in a cluster to coordinate Letsencrypt requests)
- Move to openresty to avoid potentially costly config reloads from NGINX