On-demand webinar recording: http://bit.ly/2mRjk2g
Docker and other container technologies continue to gain in popularity. We recently surveyed the broad community of NGINX and NGINX Plus users and found that two-thirds of organizations are either investigating containers, using them in development, or using them in production. Why? Because abstracting your applications from the underlying infrastructure makes developing, distributing, and running software simpler, faster, and more robust than ever before.
But when you move from running your app in a development environment to deploying containers in production, you face new challenges – such as how to effectively run and scale an application across multiple hosts with the performance and uptime that your customers demand.
The latest Docker release, 1.12, supports multihost container orchestration, which simplifies deployment and management of containers across a cluster of Docker hosts. In a complex environment like this, load balancing plays an essential part in delivering your container-based application with reliability and high performance.
Join us in this webinar to learn:
* The basic built-in load balancing options available in Docker Swarm Mode
* The pros and cons of moving to an advanced load balancer like NGINX
* How to integrate NGINX and NGINX Plus with Swarm Mode to provide an advanced load-balancing solution for a cluster with orchestration
* How to scale your Docker-based application with Swarm Mode and NGINX Plus
2. Rick Nelson
Head of Pre-sales at NGINX, Inc.
Formerly:
- Riverbed, Zeus, VMware, BEA and more
Michael Pleshakov
Platform Integration Engineer at NGINX, Inc.
23. NGINX Plus – Active Health Checks
Sophisticated, app-specific health checks
Detect application failures, orchestrate upgrades
[Diagram: requests from the Internet are routed away from failing Server 1; Servers 2 and 3 are active]
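As a hedged sketch, an active health check of this kind might be configured as follows (the /healthz URI and server addresses are illustrative, not taken from the webinar):

```nginx
upstream backends {
    zone backends 64k;
    server 192.168.100.10;
    server 192.168.100.11;
}

server {
    location / {
        proxy_pass http://backends;
        # NGINX Plus probes each server every 5 seconds; a server is marked
        # unhealthy after 3 failed probes and healthy again after 2 passes.
        health_check interval=5 fails=3 passes=2 uri=/healthz;
    }
}
```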
25. NGINX Plus – Session Persistence – Draining
[Diagram: Server 1 is draining – existing sessions against Server 1 are allowed to complete, while new sessions are directed to the active Servers 2 and 3]
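In NGINX Plus configuration, session persistence and draining might look like the following sketch (the server addresses and cookie name are illustrative):

```nginx
upstream backends {
    zone backends 64k;
    # 'drain' lets existing sessions against this server complete while
    # new sessions go to the remaining servers (an NGINX Plus feature).
    server 192.168.100.10 drain;
    server 192.168.100.11;
    server 192.168.100.12;
    # Sticky-cookie session persistence: a client's requests keep going
    # to the same backend for the lifetime of the cookie.
    sticky cookie srv_id expires=1h path=/;
}
```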
27. NGINX Plus – Dynamic Reconfiguration
http {
    resolver 192.168.0.2;
    upstream backends1 {
        zone backends1 64k;
        server api.u.com resolve;
    }
    upstream backends2 {
        zone backends2 64k;
        server 192.168.100.10;
        server 192.168.100.11;
    }
    server {
        . . .
        location /upstream_conf {
            upstream_conf;
        }
        . . .
    }
}
DNS: changes in DNS can dynamically update NGINX Plus's load-balancing configuration.
API: the upstream_conf API is a simple HTTP API to control configuration.
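For illustration, the upstream_conf API can be driven with plain HTTP requests; a hedged sketch (the host and server addresses are hypothetical):

```shell
# List the servers currently in the backends2 upstream group
curl 'http://localhost/upstream_conf?upstream=backends2'

# Add a server to the group without reloading NGINX Plus
curl 'http://localhost/upstream_conf?add=&upstream=backends2&server=192.168.100.12:80'

# Remove a server by the id reported in the list output
curl 'http://localhost/upstream_conf?remove=&upstream=backends2&id=2'
```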
36. Proxy Model
• Inbound traffic is managed through a reverse proxy/load balancer
• Services are left to themselves to connect to each other, often through round-robin DNS
39. Router Mesh Model
• Inbound routing through a reverse proxy
• Centralized load balancing through a separate load-balancing service
• Marathon LB and Deis Router work like this
40. Fabric Model
• Routing is done at the container level
• Services connect to each other as needed
• NGINX Plus acts as the forward and reverse proxy for all requests
• NGINX Plus caches SSL sessions
45. 1. Docker Swarm
2. NGINX Plus and Docker
3. DNS Service Discovery with NGINX
4. The Fabric Model
Resources
Editor's notes
Thanks, everyone, for joining today. This presentation is a preview of the talk I will be giving at the upcoming NGINX conference next month. I'll be talking about using NGINX and NGINX Plus with some of the new features in Docker.
My name is Rick Nelson, and I head up the pre-sales team at NGINX. Prior to NGINX I spent time at Riverbed, Zeus, VMware, and BEA, to name a few companies. Later on, during question-and-answer time, Michael Pleshakov will join me to help answer questions. He helped a great deal in getting the demos up and running.
At the recent Docker conference, Docker announced v1.12, integrating Docker Engine and Swarm and adding new orchestration features. This now provides a platform similar to others such as Kubernetes. It was just released in the last few days. We have been having fun working with the release candidate, testing out these new features and seeing how NGINX and NGINX Plus can provide an advanced load-balancing solution on Swarm. That's what I'll be talking about today. This is a preview of the talk I will be giving at the upcoming NGINX conference next month.
I'll start by discussing Docker Swarm and the new orchestration features and show a demo. Then I'll discuss and demonstrate how you can use open source NGINX with the new orchestration features. I'll finish by discussing NGINX Plus, the commercial version of NGINX, and the advantages it can bring; that will also include a demo. Michael Pleshakov helped me set up these demonstrations, and he will join us at the end to help answer questions.
Now let’s talk about Docker. Docker Swarm provides the ability to build a cluster of Docker hosts and schedule containers across those hosts.
In v1.12, Swarm Mode allows you to combine a set of Docker hosts into a swarm, providing a fault tolerant, self healing, decentralized architecture.
It is also easier to set up a Swarm cluster.
Also, all nodes are secured with a key, and all communication between nodes is done over TLS.
In addition, the Docker API has been expanded to be aware of services. Services are sets of containers using the same image, similar to services in Docker Compose but with more features. You can create and scale services, do rolling updates, and create health checks. I will be using this API a lot in my demos, along with scaling.
You can also set up cluster-wide overlay networks, and DNS service discovery and load balancing are built in. We will see all of these used in the demos.
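The Swarm features described above map to a handful of CLI commands; a hedged sketch (the network, service, and image names are hypothetical):

```shell
# Initialize Swarm mode on the manager node
docker swarm init --advertise-addr 10.0.0.1

# Create a cluster-wide overlay network for service-to-service traffic
docker network create --driver overlay my-net

# Create a service of 3 containers from the same image, published on port 8080
docker service create --name servicea --replicas 3 --network my-net -p 8080:80 myorg/servicea

# Scale the service up; Swarm schedules the extra containers across the cluster
docker service scale servicea=5
```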
Now let’s take a high level look at the Swarm architecture and how I have it setup for my demos. I have three Swarm nodes, one of which is master. The master node is where I run the Swarm commands. Swarm handles the scheduling, DNS service discovery, scaling and load balancing.
To provide network communications between containers inside the cluster, Swarm allows containers to be connected to multiple internal overlay networks that span all the nodes of the cluster. Swarm also allows you to expose containers to the outside of the cluster through the Swarm load balancer.
As I mentioned, Swarm now comes with built-in load balancing. This can handle all inbound client requests as well as internal service-to-service requests. The load balancer runs on each host and can load balance requests across any of the containers on any of the hosts in the cluster.
Now let's see a demo of this. A disclaimer before I get started: my demo was set up with the release candidate, and I haven't upgraded to the GA release, so I sometimes see networking issues, and these may show up in my demo.
You may be wondering: if Swarm now includes load balancing, why would I want to use another load balancer? The Swarm load balancer is a basic Layer 4 (TCP) load balancer. Your application may require additional features, to name just a few:
SSL Termination
Content-based routing, for example routing based on URL or a header
Access control and authorization
Rewrites and redirects
In addition, you can use the same load balancer that you have deployed in your other environments, so you can take advantage of the tools and knowledge you already have.
One of the other load balancers you can use with Swarm is NGINX open source. Adding open source NGINX can let you support the features I mentioned in the last slide and more, for example:
A choice of load balancing algorithms
More protocols, for example HTTP/2 and WebSocket
Configurable logging
Traffic limits, including request rate, bandwidth, and connections
Scripting for advanced use cases, with Lua, Perl, and JavaScript
Additional security features, such as whitelists and blacklists
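As one example of the traffic limits mentioned above, a hedged rate-limiting sketch in open source NGINX (the zone name, rate, and addresses are illustrative):

```nginx
http {
    # Allow each client IP at most 10 requests/second, tracked in a 10 MB zone
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    upstream backends {
        server 192.168.100.10;
        server 192.168.100.11;
    }

    server {
        location / {
            # Queue up to 20 bursting requests before rejecting the rest
            limit_req zone=perip burst=20;
            proxy_pass http://backends;
        }
    }
}
```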
Let’s bring NGINX into the picture. We deploy NGINX as a service. We expose the NGINX service in the cluster at some port, so that if we make a request to any node on that port, the request will be distributed to the NGINX container by the Swarm load balancer. For simplicity we have only one NGINX container, but in a real-world scenario, for high availability you’d have multiple NGINX containers running.
We deploy the backend service A in the cluster and scale it to have three containers. Inside the cluster the service gets a virtual IP address. If we make a request to that VIP, the Swarm load balancer distributes the request to one of the containers of the service. We use the VIP of service A in the NGINX configuration rather than adding the IP address of each container. This allows us to scale service A without changing the NGINX configuration, because Swarm handles the load balancing.
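A minimal sketch of the NGINX configuration described here, assuming the backend service is named servicea in Swarm (the name and port are hypothetical):

```nginx
http {
    upstream backend {
        # 'servicea' is resolved once at startup, via Docker's DNS, to the
        # Swarm virtual IP; Swarm then load balances across the service's
        # containers, so scaling the service needs no NGINX reload.
        server servicea:80;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
```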
So, the client makes a request for service A to the first node. The Swarm load balancer distributes it to NGINX. NGINX processes the request and distributes it to the VIP of service A. The VIP is handled by the Swarm load balancer, which distributes the request to one of the containers of service A.
It is possible to get rid of the Swarm load balancer between NGINX and the backend containers, but it requires a more complex solution: the NGINX configuration would have to be changed, and NGINX reloaded, every time service A is scaled.
Now let’s see a demo of open source NGINX with Swarm, taking advantage of using NGINX for SSL termination.
I assume that most of you are aware of the open source NGINX software, but you may not know about NGINX Plus, the commercial version of NGINX.
NGINX Plus is based on the open source NGINX, but with additional features.
Mainly focused on load balancing
To make NGINX Plus an enterprise-ready ADC
NGINX Plus extends NGINX with a number of advanced features, and this diagram gives a high-level view of the features of NGINX versus NGINX Plus, to give you an idea of where the dividing line sits. The main features are:
Active health checks, where NGINX Plus continuously checks the backend nodes to make sure they are healthy and removes unhealthy servers from the load-balancing rotation.
Session persistence, or sticky sessions, is supported for applications that require a client's requests to keep going to the same backend. This includes session draining, for when you need to take backend servers offline without impacting clients who have open sessions.
And the ability to scale your backends up and down without requiring you to modify the NGINX configuration and do a configuration reload. This is especially useful when doing service discovery with a microservices platform such as Swarm, and it is the most important feature when it comes to allowing NGINX Plus to be fully integrated with these platforms.
There are two methods for this: an API that allows you to push changes to NGINX Plus, and DNS, where NGINX Plus continually checks DNS for changes in the number of nodes attached to a domain name. DNS is the method I use in my NGINX Plus demo to integrate with the built-in service discovery in Swarm. Here we see a configuration snippet that shows how to configure both methods.
And live activity monitoring gives you an API to get extensive statistics from NGINX Plus, along with a web dashboard, built on this API, that I will show in my demo.
To remind you, when we showed NGINX open source with Swarm, the requests from NGINX to the backends went through the Swarm load balancer so that scaling could be handled without having to reconfigure NGINX.
With NGINX Plus, we deploy it as a service and expose it in the cluster. The client makes a request to one of the nodes, and the request gets load balanced to the NGINX Plus container by the Swarm load balancer. NGINX Plus processes the request and distributes it to one of the backend containers directly, bypassing the Swarm load balancer, unlike the open source NGINX setup, where those requests went through the Swarm load balancer.
Two things make it possible to bypass the swarm load balancer:
1. First, we can get the IP addresses of the backend containers via DNS on Swarm.
2. Second, NGINX Plus, as mentioned previously, supports DNS-based dynamic reconfiguration. If we scale service A, Swarm updates the DNS, and NGINX Plus updates itself via DNS.
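A hedged sketch of such a DNS-based setup, assuming Docker's embedded DNS at 127.0.0.11 and a Swarm service named servicea (both are assumptions, not details from the webinar):

```nginx
resolver 127.0.0.11 valid=5s;    # Docker's embedded DNS; re-query every 5s

upstream backend {
    zone backend 64k;            # shared memory zone, required for 'resolve'
    # tasks.<service> returns one A record per container, so NGINX Plus
    # load balances the containers directly, bypassing the Swarm VIP.
    server tasks.servicea resolve;
}
```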
Let’s see NGINX Plus working with Swarm
Now let's make things more interesting. To the demo I just showed you, I'm going to add a Python program I've written that gets data from NGINX Plus using the status API and calculates the current requests per second. Based on this data it makes a scaling decision and calls the service API to add or remove backend containers.
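The scaling logic of such a program can be sketched as a pair of pure functions; the thresholds and replica bounds below are hypothetical, and the real program would poll the NGINX Plus status API and call the Docker service API with the result:

```python
# Hypothetical thresholds and bounds for the scaling decision.
UP_THRESHOLD = 100.0    # scale up above this many requests/sec
DOWN_THRESHOLD = 20.0   # scale down below this many requests/sec
MIN_REPLICAS = 2
MAX_REPLICAS = 10

def requests_per_second(prev_total, cur_total, interval_secs):
    """Derive the request rate from two samples of the cumulative
    request counter reported by the NGINX Plus status API."""
    return (cur_total - prev_total) / interval_secs

def decide_replicas(rps, current):
    """Return the desired replica count for the observed request rate,
    moving one replica at a time and staying within the bounds."""
    if rps > UP_THRESHOLD and current < MAX_REPLICAS:
        return current + 1
    if rps < DOWN_THRESHOLD and current > MIN_REPLICAS:
        return current - 1
    return current

# In the real program: every interval, fetch the status API, compute the
# rate, then scale the Swarm service (e.g. `docker service scale ...`)
# to the value returned by decide_replicas().
```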
Let’s see the auto scaling in action.
You may have noticed that in the three different environments I demonstrated, the load balancers were used a bit differently. In my use of NGINX and NGINX Plus in my demos, I was only showing how to handle external inbound traffic, and I haven't said much about internal service-to-service communications. I think it is worth spending a bit of time discussing the various load-balancing configurations you can use in a microservices, containerized environment. We see three models for how NGINX and NGINX Plus can be used, two of which I demonstrated variations of.
The Proxy model addresses the inbound client traffic and doesn't concern itself with the internal service-to-service communication. In this model, NGINX or NGINX Plus handles inbound client requests and load balances them across the backend service instances. If a service makes a call to another service, that request is handled by something internal to the platform. This allows you to have NGINX or NGINX Plus on the edge handling things such as:
High concurrency
SSL termination
Traffic shaping and security
Caching
If you add NGINX Plus, you can take advantage of the advanced features I have discussed, especially the better support for service discovery that I demonstrated.
In the configuration of my open source NGINX demo, from an NGINX perspective, NGINX is handling inbound requests while the Swarm load balancer is handling all service-to-service requests, so this is an example of the Proxy model.
The Router Mesh model is like the Proxy model, but instead of having NGINX or NGINX Plus handle just the external inbound requests, it also handles internal service-to-service communications. Now you get the advantages I previously talked about of using NGINX and NGINX Plus for the internal communications as well. If you use the Docker Swarm load balancer for all requests, as I did in the first demo I showed you, that is also a Router Mesh. Today I didn't demonstrate any service-to-service communications, but NGINX Plus, with its ability to integrate with service discovery, can work in the Router Mesh model to provide internal load balancing.
The previous two models are straightforward and can work well for a number of applications. But we are seeing many enterprises moving to an SSL-everywhere architecture, where even the internal service-to-service communications are secured, and then you can run into performance problems with all the SSL handshakes that need to occur. This is because most application SSL libraries don't reuse SSL sessions, so each request is a new session, which is very costly in CPU resources. The Fabric model addresses this by having NGINX Plus at the edge, and for the internal communications, NGINX Plus in every container acting as both a forward and reverse proxy. All the SSL handling is done by NGINX Plus, which does support SSL session reuse, and this can dramatically reduce the resources required for SSL handshakes. In this model you need NGINX Plus rather than open source NGINX, because of the dynamic service discovery capabilities. I will not be demonstrating this model today, but we have write-ups on our website about it, and we will be giving a day of training on it at our NGINX conference.
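A minimal sketch of the per-container NGINX Plus proxy in the Fabric model, with SSL session reuse toward upstreams (the service name, ports, and certificate paths are hypothetical):

```nginx
upstream backend {
    zone backend 64k;
    server servicea:443;
    keepalive 32;                     # keep upstream connections open
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/key.pem;

    location / {
        proxy_pass https://backend;
        proxy_ssl_session_reuse on;   # reuse SSL sessions to upstreams,
                                      # avoiding a full handshake per request
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```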
I've shown you some of the power of the new orchestration features in Docker v1.12
And how NGINX open source can provide a more advanced load balancing solution
And how NGINX Plus brings even more advanced features to provide enterprise load balancing
I hope you have found today’s webinar informative and will join us for one of our future webinars. Now we open it up to questions.