The Kuryr project offers an interesting approach to networking cloud native workloads by enabling container orchestration engines to consume network services from OpenStack Neutron. With pod-in-VM support, Kuryr-Kubernetes enables a whole slew of new hybrid workloads: bare metal or in-VM pods accessing services that run on VMs, multiple COEs (e.g. Docker Swarm alongside Kubernetes), and more. Unified networking simplifies deployment and configuration, and provides a single pane of glass for management and troubleshooting.
Let’s dive into Kuryr-Kubernetes and learn how different open source technologies can complement each other to enable a number of complicated deployment scenarios.
2. Cloud Native Workloads and Networking
12 Factor Application
Containers are the primary workload encapsulation mechanism
Microservices
Automation and self-healing is a key principle of a cloud native application
3. Kubernetes Overview
Cluster is a group of nodes running Kubernetes
Node is a physical server or virtual machine managed by K8s
Master Node runs the Control Plane
4. K8s Data Model
Pod - basic scheduled unit
Service - abstraction of a logical set of pods and the policy to access them
Namespace - virtual cluster
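The data model above can be sketched with a few Python dataclasses; this is an illustrative model only (the class and field names are hypothetical, not the Kubernetes API types), showing how a Service selects its logical set of Pods by label:

```python
from dataclasses import dataclass

@dataclass
class Pod:
    """Basic scheduled unit: containers sharing a network namespace."""
    name: str
    namespace: str
    labels: dict

@dataclass
class Service:
    """Abstraction over a logical set of Pods, selected by label."""
    name: str
    namespace: str
    selector: dict

    def matches(self, pod: Pod) -> bool:
        # A Pod backs this Service if it is in the same namespace
        # and carries every label in the selector.
        return pod.namespace == self.namespace and all(
            pod.labels.get(k) == v for k, v in self.selector.items()
        )

# Example: a Service selecting web Pods in the "demo" namespace.
web = Service("web", "demo", {"app": "web"})
pod_a = Pod("web-1", "demo", {"app": "web", "tier": "frontend"})
pod_b = Pod("db-1", "demo", {"app": "db"})
print(web.matches(pod_a))  # True
print(web.matches(pod_b))  # False
```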
5. Kuryr-Kubernetes Project Motivation
Hard to connect VMs, bare metal and nested containers
No unified networking infrastructure
Double overlay (overlay²) for Pods running in VMs
Performance, latency, SLA and management penalties
Need for a smooth transition to Cloud Native Applications
Ability to transition workloads to microservices at your own pace
6. Kuryr-Kubernetes Project Mission
Neutron: unified, community-sourced networking for Pods & VMs
Bring OpenStack vendor support experience into the container space
Get Neutron users into container workloads faster
VMs and Pods on the same Neutron network
Enable both L2 and L3 connectivity between OpenStack VMs and K8s Pods
7. Bare Metal Use Case
Centralized Kuryr Controller
Kuryr Controller maps K8s Pods to Neutron ports and K8s Services to Neutron Load Balancers
Kuryr CNI on each Worker node performs Pod binding
8. Pod-in-VM Use Case
Security
Easier node allocation
Single overlay
VMs and Pods as targetable network resources
Can use either Neutron trunk ports or macvlan-based VM port allocation
9. Mixed Use Case
Connect to existing services in VMs
Legacy applications alongside microservices
VM NFVs
Use the existing cloud for Kubernetes workloads
10. Supported Functionality
Pod networking
Kubernetes native networking
Pods as Neutron ports on the cluster Neutron network
Single tenant
Full connectivity enabled by default
Kubernetes ClusterIP Services
13. Kuryr Controller
Secure connection to the Neutron API Server
Keystone as the Authorization service
Watches Kubernetes API resources with a service account
Stevedore plugin-based network resource translation
Handlers: receive Kubernetes resource events and patch them
Drivers: used by handlers to allocate Neutron resources, allowing multiple implementations and vendors
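The handler/driver split can be sketched as a small plugin registry. In the real project Stevedore loads drivers from setuptools entry points; the registry decorator and class names below are illustrative stand-ins:

```python
# Drivers allocate Neutron resources; multiple vendor
# implementations can register for the same resource kind.
DRIVERS = {}

def register_driver(resource):
    def wrap(cls):
        DRIVERS[resource] = cls
        return cls
    return wrap

@register_driver("pod")
class NeutronPortDriver:
    """One possible driver: allocate a Neutron port per Pod."""
    def allocate(self, pod_name):
        return f"neutron-port-for-{pod_name}"

class PodHandler:
    """Receives Kubernetes Pod events and delegates to a driver."""
    def __init__(self):
        self.driver = DRIVERS["pod"]()

    def on_added(self, pod_name):
        return self.driver.allocate(pod_name)

handler = PodHandler()
print(handler.on_added("web-1"))  # neutron-port-for-web-1
```

Swapping vendors then means registering a different driver class, while the handler that watches the Kubernetes API stays unchanged.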
17. Controller - CNI: pod-in-VM creation
● Uses trunk ports to provide Neutron ports to containers
● Uses VLAN segmentation so Pod communication still goes to the vSwitch
● Plugging is just creating a VLAN device
● Polls for the Neutron trunk agent to build the infra
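In the pod-in-VM case, plugging a Pod amounts to choosing a VLAN segmentation ID on the VM's trunk port and creating a matching VLAN subinterface. A toy allocator under those assumptions (class name and VLAN range are hypothetical):

```python
class TrunkVlanAllocator:
    """Hands out VLAN segmentation IDs for Pods sharing one VM trunk port."""
    def __init__(self, first=100, last=199):
        self.free = list(range(first, last + 1))
        self.assigned = {}

    def plug(self, pod_name):
        vlan = self.free.pop(0)
        self.assigned[pod_name] = vlan
        # The real CNI would now create the VLAN device in the Pod's
        # netns, roughly: ip link add link eth0 name eth0.<vlan> \
        #   type vlan id <vlan>
        return f"eth0.{vlan}"

    def unplug(self, pod_name):
        self.free.append(self.assigned.pop(pod_name))

alloc = TrunkVlanAllocator()
print(alloc.plug("web-1"))  # eth0.100
print(alloc.plug("web-2"))  # eth0.101
```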
22. Features
LoadBalancer Kubernetes Service Type
Resource Management
Ingress support
Policy support
Multi-Tenancy and Multiple Networks support
Management CLI
What’s Next
Editor’s notes
Mention cloud-provider in contrast with this approach, both storage and networking
OpenStack services running inside containers
VMs and containers sharing Neutron virtual topology
Keystone as a façade to Orgs’ identity and role management
Ability to transition workloads to containers/micro-services at your own pace
Fuxi adds support for BM and Manila support versus Cloud Provider
Examples: To enable better performance, resource allocation (Containers), but nested is not required.
Easier cluster creation like with Magnum
The deep dive comes later
Example: Gradual moving from Legacy to Microservices App implementation
This is for baremetal case. In pod-in-VM the vif plug is a no-op
Os-vif is also used by nova presenting a common binding layer for OpenStack compute backends
Binding only in baremetal, otherwise noop
No kube-proxy
Mention Octavia future integration
Load balancer
The load balancer occupies a neutron network port and has an IP address assigned from a subnet.
Listener
Load balancers can listen for requests on multiple ports. Each one of those ports is specified by a listener.
Pool
A pool holds a list of members that serve content through the load balancer.
Member
Members are servers that serve traffic behind a load balancer. Each member is specified by the IP address and port that it uses to serve traffic.
https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
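The LBaaS object model described in these notes (load balancer → listeners → pools → members) nests as follows; a minimal sketch using dataclasses, with example addresses made up for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Member:
    """A backend server: the IP address and port it serves traffic on."""
    address: str
    port: int

@dataclass
class Pool:
    """Holds the members that serve content through the load balancer."""
    members: List[Member] = field(default_factory=list)

@dataclass
class Listener:
    """One of possibly several ports the load balancer listens on."""
    protocol_port: int
    pool: Pool = field(default_factory=Pool)

@dataclass
class LoadBalancer:
    """Occupies a Neutron port; its VIP comes from a subnet."""
    vip_address: str
    listeners: List[Listener] = field(default_factory=list)

lb = LoadBalancer("10.0.0.10")
lb.listeners.append(
    Listener(80, Pool([Member("10.0.0.21", 8080), Member("10.0.0.22", 8080)]))
)
print(len(lb.listeners[0].pool.members))  # 2
```

Kuryr's services handler builds exactly this shape from a Kubernetes Service and its endpoints: the ClusterIP becomes the VIP, each Service port a listener, and each backing Pod a member.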
Pod name so we see it is load balanced
Service that accesses the other service
Explain how the loadbalancer service type will just be a small addition to the services handler and a FIP driver