This document discusses Kubernetes networking on OpenStack. It begins by explaining why Kubernetes is used to organize Docker containers into pods and provide services. It then describes the default Docker networking model of host-private networking and issues it can cause. Next, it outlines Kubernetes' networking model of assigning each pod its own routable IP without NAT. It details two options for routing pod subnets: 1) creating routable pod networks through IP forwarding and static routes or 2) building an overlay network using technologies like Flannel. The document concludes by recommending reviewing Kubernetes documentation for deployment examples.
2. Why Kubernetes?
Provides a higher-level abstraction over the lower-level Docker interface
Organizes applications running in Docker containers into PODs
PODs form the basic unit of operation
POD == set{ one or more containers }
Users declare the desired end state using a POD manifest
Scheduling mechanism for PODs
Containers in a POD are tightly coupled, i.e. co-located on a host; they share the network namespace, volumes and hostname
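The POD manifest idea above can be sketched as a minimal two-container manifest; the names, labels and images below are illustrative, not from the deck:

```shell
# Write a minimal two-container POD manifest (names and images are examples)
cat > /tmp/cms-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cms-pod          # also the hostname seen inside every container
  labels:
    app: cms             # label used later to group PODs behind a service
spec:
  containers:
  - name: web            # presentation layer
    image: nginx
    ports:
    - containerPort: 80
  - name: cache          # shares the POD's network namespace and volumes
    image: redis
EOF
```

Submitting such a manifest declares the end state; Kubernetes restarts the containers if they fail, until the POD is deleted.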
3. Why Kubernetes?
Ability to group PODs using labels
Enables access to a POD group through a service abstraction (provides a stable service VIP)
The service keeps track of its PODs, which form the endpoints of the service
When traffic hits the service virtual IP, it is proxied to one of the backend PODs
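A service that groups PODs by label could be declared like this sketch (hypothetical names; the `app: cms` label is an assumption, not from the deck):

```shell
# Minimal service manifest: the selector groups PODs by label, and the
# stable service VIP proxies traffic to one of the matching backend PODs.
cat > /tmp/cms-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: cms
spec:
  selector:
    app: cms             # every POD carrying this label becomes an endpoint
  ports:
  - port: 80             # port on the stable service VIP
    targetPort: 80       # container port on the backend PODs
EOF
```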
POD Management
Restarts a failed container in a POD automatically
Self-healing: replaces PODs when a machine fails
Horizontal scaling
6. Default Networking Model in Docker
• Host-Private Networking
• Creates a virtual bridge named docker0 on each host
• Allocates a private subnet (e.g. 172.17.0.0/16) from RFC 1918 for that bridge
• Attaches each container to docker0 using a virtual ethernet device
• Assigns the container an IP from the private subnet and sets the bridge IP address as the container's gateway
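Assuming a host with a running Docker daemon, the default model can be inspected roughly like this (the addresses in the comments are examples):

```shell
# The bridge itself carries the gateway address for the private subnet
ip addr show docker0                  # e.g. inet 172.17.0.1/16 on the bridge
# Start a container and read the private IP Docker assigned to it
docker run -d --name demo nginx
docker inspect -f '{{.NetworkSettings.IPAddress}}' demo   # e.g. 172.17.0.2
# One virtual ethernet device per attached container
ip link | grep veth
```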
8. Container reachability across hosts
Docker may allocate the same IP addresses to containers across hosts
Containers can talk to each other on the same machine
Containers cannot route traffic directly across hosts using their private IP addresses
Containers communicate across hosts by using DNAT from Host IP:Port to Container IP:Port
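A minimal sketch of the DNAT mapping, assuming a Docker host (HOST_IP is a placeholder for the host's real address):

```shell
# Publish container port 80 on host port 8080; Docker programs a DNAT
# rule mapping Host IP:8080 -> Container IP:80
docker run -d --name web -p 8080:80 nginx
# The container only has a host-private address, e.g. 172.17.0.2
docker inspect -f '{{.NetworkSettings.IPAddress}}' web
# Other hosts cannot reach 172.17.0.2; they must use the host mapping:
curl http://HOST_IP:8080/
```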
9. Default Networking model in docker can pose issues to Apps
Coordinating static port allocations to containers across multiple developers and groups that share hosts is very difficult in practice
Even with dynamic port allocation there are still complications: service discovery, application configuration, etc.
NAT is hard to troubleshoot
An application running in a container does not know its actual IP address, so some apps will break:
apps that need to register their actual IP address
apps that perform IP-based access control/authentication
10. Networking in Kubernetes
Containers communicate directly over a routed IP network without using NAT
A container sees the real IP of another container
The host sees the real IP of the container
Docker's default networking model must be modified for Kubernetes to work
11. Networking in Kubernetes
• A routable IP address is assigned per POD
• All containers within a POD share the network namespace, including the IP address and port space
• Implemented by creating a docker container for the POD
• This "pod-container" is wired to the POD IP
• All other containers are configured to share the network stack of the POD container using Docker's --net=container:<name | id> option
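Roughly, this wiring can be reproduced by hand with Docker alone. This is a simplified sketch (kubelet automates it, and the infra image name is illustrative):

```shell
# Run the "pod-container" that owns the POD's network stack and IP
docker run -d --name pod-infra kubernetes/pause
# Join each application container to that network namespace
docker run -d --net=container:pod-infra nginx
docker run -d --net=container:pod-infra redis
# All three containers now share one IP address and one port space,
# so the app containers can reach each other over localhost.
```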
12. POD networking
• Each VM is assigned a subnet for POD networking (note: this is in addition to the main neutron subnet used by the VM)
• The default docker bridge docker0 is replaced with a linux bridge, say "cbr0"
• cbr0 is configured on the POD subnet
• The Docker daemon is started with this bridge using --bridge=cbr0 in its options
• Docker allocates IPs to the containers from the POD subnet block
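A sketch of the bridge replacement on one node, assuming 10.5.3.0/24 is that node's POD subnet (the addresses are examples; run as root):

```shell
# Create the replacement bridge on this node's POD subnet
ip link add name cbr0 type bridge
ip addr add 10.5.3.1/24 dev cbr0   # bridge IP becomes the containers' gateway
ip link set dev cbr0 up
# Then start the Docker daemon with --bridge=cbr0 in its options so
# container IPs come from 10.5.3.0/24 instead of docker0's default subnet.
```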
13. Routing POD Subnets
Option 1:
Create routable POD networks
1. Configure instances to forward IP packets to the bridged POD network by enabling IP forwarding in the kernel
• sudo sysctl -w net.ipv4.ip_forward=1
2. Add static routes on the L3 neutron gateway to route traffic for each POD subnet to its instance (ROUTER is the neutron router's name or ID)
• neutron router-update ROUTER --routes type=dict list=true destination=NODE_X_POD_CIDR,nexthop=NODE_X_INTERFACE_IP_ADDR
14. Routing POD Subnets
Option 1:
3. When neutron security groups are enabled, neutron restricts traffic to/from the instance IP address
• Add iptables FORWARD chain rules on the host to allow incoming and outgoing traffic to/from the POD CIDR
POD_CIDR=10.5.0.0/16
sudo iptables -I FORWARD 1 -p all -s $POD_CIDR -d $POD_CIDR -j ACCEPT
16. Routing POD Subnets
Option 2:
Build an overlay network to route POD networks
• Proceed with caution for production deployments: these technologies are still at an experimental stage
• Creates a layered virtual network architecture
• Creates a POD virtual network overlay using the neutron virtual networks as the underlay
• Open source options: Flannel, Weave, Calico
17. Flannel
• Designed for Kubernetes
• Creates a POD subnet on each instance
• Uses etcd to maintain the subnet-to-real-host-IP mapping
• Builds an overlay mesh network between instances, using UDP tunneling to connect the subnets
• Requires UDP port 8285 to be opened in the instance security groups
• Adjust the MTU size for performance, since the UDP encapsulation adds header overhead
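A hypothetical flannel configuration matching the POD_CIDR used earlier (10.5.0.0/16); flanneld reads it from its default etcd key, /coreos.com/network/config:

```shell
# Flannel carves a per-node POD subnet out of "Network" and records the
# subnet-to-host mapping in etcd; 8285 is the default UDP tunnel port.
cat > /tmp/flannel-config.json <<'EOF'
{ "Network": "10.5.0.0/16", "Backend": { "Type": "udp", "Port": 8285 } }
EOF
# Load it under flannel's default etcd key before starting flanneld:
#   etcdctl set /coreos.com/network/config "$(cat /tmp/flannel-config.json)"
```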
19. Conclusion
Check out the Kubernetes GitHub repo
Latest docs
Contains several deployment examples
SaltStack scripts to automate a cluster deployment across multiple providers
Editor's notes
End state: Describe the containers and the state you want them running in. If the containers stop for some reason (say, the program fails), Kubernetes re-creates them to attain the desired state. This process continues until the POD is deleted.
Example of a POD: a set of containers supporting a content management system, e.g. containers that run the web server (presentation layer), file loading, data loading and cache management.
In Kubernetes the basic unit of operation is the POD: you deploy, replicate, scale and delete sets of containers as PODs.
Hostname for apps running in a POD = name of the POD
Kubernetes has a control layer that monitors the state and makes sure that the current state == end state.
The service keeps track of the PODs
End result: high-friction porting of apps from VMs to containers.