Secure Your Containers: What Network Admins Should Know When Moving Into Production

This session offers techniques for securing Docker containers and hosts using open source network virtualization technologies to implement microsegmentation. Come learn real tips and tricks that you can apply to keep your production environment secure.

  1. Secure Your Containers! What Network Admins Should Know When Moving Into Production
   Cynthia Thomas, Systems Engineer, @_techcet_
  2. Containers, Containers, Containers!
   Why is networking an afterthought?
  3. Why Containers?
   • Much lighter weight and less overhead than virtual machines
   • Don’t need to copy an entire OS or libraries – keep track of deltas
   • More efficient unit of work for cloud-native apps
   • Crucial tools for rapid-scale application development
   • Increase density on a physical host
   • Portable container image for moving/migrating resources
  4. Containers: Old and New
   • LXC: operating system-level virtualization through a virtual environment that has its own process and network space
   • 8-year-old technology
   • Leverages Linux kernel cgroups
   • Also other namespaces for isolation
   • Focus on System Containers
   • Security:
     • Previously possible for code running as root in a guest to run on the host system
     • LXC 1.0 brought “unprivileged containers” for HW accessibility restrictions
   • Ecosystem:
     • Vendor neutral; evolving LXD, CGManager, LXCFS
  5. Containers: Old and New
   • Explosive growth: Docker created a de-facto standard image format and API for defining and interacting with containers
   • Docker: also operating system-level virtualization through a virtual environment
   • 3-year-old technology
   • Application-centric API
   • Also leverages Linux kernel cgroups and kernel namespaces
   • Moved from LXC to the libcontainer implementation
   • Portable deployment across machines
   • Brings image management and more seamless updates through versioning
   • Security:
     • Networking: linuxbridge, IPtables
   • Ecosystem:
     • CoreOS, Rancher, Kubernetes
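   The default Docker networking noted above (a Linux bridge plus IPtables) is easy to inspect on any Docker host. A minimal sketch, assuming a stock single-host install; the container name web1 is a placeholder:

     # Run a container on the default bridge network
     $ docker run -d --name web1 nginx

     # The built-in bridge driver attaches it to the docker0 Linux bridge
     $ docker network inspect bridge

     # Docker also programs IPtables NAT; the POSTROUTING chain shows a
     # MASQUERADE rule for the bridge subnet (typically 172.17.0.0/16)
     $ sudo iptables -t nat -L POSTROUTING -n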
  6. Container Orchestration Engines
   • Step forth the management of containers for application deployment!
   • Scale applications with clusters where the underlying deployment unit is a container
   • Examples include Docker Swarm, Kubernetes, Apache Mesos
  7. Today’s COEs have vulnerabilities
  8. What’s the problem? Why are containers insecure?
   • They weren’t designed with full isolation like VMs
   • Not everything in Linux is namespaced
   • What do they do to the network?
  9. COEs help container orchestration! …but what about networking?
   • Scaling issues for ad-hoc security implementation with security/policy complexity
   • Which networking model to choose? CNM? CNI?
   • Why is network security always seemingly considered last?
  10. Who’s going to care?
   Your Network Security team! And you should too.
  11. Containers add network complexity!
   • More components = more endpoints
   • Network scaling issues
   • Security/policy complexity
  12. A perimeter security approach is not enough
   • Legacy architectures tended to put higher-layer services like security and FWs at the core
   • Perimeter protection is useful for north-south flows, but what about east-west?
   • More = better? How to manage more pinch points?
  13. #ThrowbackThursday: What did OpenStack do?
   • Started in 2010 as an open source community for cloud compute
   • Gained a huge following and became production ready
   • Enabled collaboration amongst engineers for technology advancement
  14. #ThrowbackThursday: Neutron came late in the game!
   • Took 3 years before a dedicated project formed
   • Neutron enabled third-party plugin solutions
   • Formed an advanced networking framework via the community
  15. What is Neutron?
   • Production-grade open framework for Networking:
      Multi-tenancy
      Scalable, fault-tolerant devices (or device-agnostic network services)
      L2 isolation
      L3 routing isolation
       • VPC
       • Like VRF (virtual routing and forwarding)
      Scalable Gateways
      Scalable control plane
       • ARP, DHCP, ICMP
      Floating/Elastic IPs
      Decoupled from the Physical Network
      Stateful NAT
       • Port masquerading
       • DNAT
      ACLs
      Stateful (L4) Firewalls
       • Security Groups
      Load Balancing with health checks
      Single Pane of Glass (API, CLI, GUI)
      Integration with COEs & management platforms
       • Docker Swarm, K8S
       • OpenStack, CloudStack
       • vSphere, RHEV, System Center
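   Security Groups are the Neutron feature most relevant to container microsegmentation later in the talk. A minimal sketch with the legacy neutron CLI (the group name and CIDR are placeholders; the unified openstack client has equivalent commands):

     # Create a security group that only allows HTTP in from one subnet
     $ neutron security-group-create web-sg
     $ neutron security-group-rule-create --direction ingress --protocol tcp \
         --port-range-min 80 --port-range-max 80 \
         --remote-ip-prefix 10.10.0.0/24 web-sg

     # The group can then be applied to the Neutron ports that back
     # container interfaces (e.g. ports created by Kuryr, shown later)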
  16. Hardened Neutron Plugins
  17. Kuryr Can Deliver Networking to Containers
   Leverage Neutron.
  18. The Kuryr Mission
   Bridging the container networking framework with OpenStack network abstractions
  19. What is Kuryr?
   Kuryr has become a collection of projects and repositories:
   - kuryr-lib: common libraries (neutron-client, keystone-client)
   - kuryr-libnetwork: Docker networking plugin
   - kuryr-kubernetes: K8S API watcher and CNI driver
   - fuxi: Docker Cinder driver
  20. Project Kuryr Contributions
   As of Oct. 18th, 2016: http://stackalytics.com/?release=all&module=kuryr-group&metric=commits
  21. Some previous* networking options with Docker
   (STOP… IPtables maybe? Done with Neutron? Tell me more, please!)
   • libnetwork:
     • Null (with nothing in its networking namespace)
     • Bridge
     • Overlay
     • Remote
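   These built-in libnetwork drivers are selected with docker network create. A minimal sketch on a stock Docker 1.9+ host (network names are placeholders):

     # Built-in bridge driver: single-host, NAT via IPtables
     $ docker network create --driver bridge demo-bridge

     # Built-in overlay driver: multi-host VXLAN; on pre-Swarm-mode Docker it
     # needs an external key-value store (e.g. Consul or etcd) configured
     $ docker network create --driver overlay demo-overlay

     # "Remote" is the driver type that out-of-tree plugins such as Kuryr use
     $ docker network ls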
  22. Kuryr: Docker (1.9+)’s remote driver for Neutron networking
   Kuryr implements a libnetwork remote network driver and maps its calls to OpenStack Neutron. It translates between libnetwork's Container Network Model (CNM) and Neutron's networking model. Kuryr also acts as a libnetwork IPAM driver.
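   With kuryr-libnetwork installed and pointed at Neutron, creating a Docker network through the remote driver looks roughly like this. A sketch, assuming the plugin registers under the driver name kuryr; subnet values and the network name are placeholders:

     # The Docker network is backed by a Neutron network/subnet, and Kuryr
     # also handles IP allocation as the IPAM driver
     $ docker network create --driver kuryr --ipam-driver kuryr \
         --subnet 10.10.0.0/24 --gateway 10.10.0.1 kuryr-net

     # Containers attached to it get Neutron ports behind the scenes
     $ docker run -it --net kuryr-net busybox sh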
  23. Libnetwork implements CNM
   • CNM has 3 main networking components: sandbox, endpoint, and network
  24. Kuryr translation please!
   • Docker uses a PUSH model to call a service for libnetwork
   • Kuryr maps the 3 main CNM components to Neutron networking constructs
   • Ability to attach to existing Neutron networks with host isolation (container cannot see host network)

   libnetwork | Neutron
   Network    | Network
   Sandbox    | Subnet, Ports, netns
   Endpoint   | Port
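   The "attach to existing Neutron networks" point above can be sketched as follows; the neutron.net.uuid option key is an assumption based on kuryr-libnetwork's driver options, so treat it as illustrative only:

     # Reuse a network that already exists in Neutron instead of creating one
     # (option key assumed; the UUID is a placeholder)
     $ docker network create --driver kuryr --ipam-driver kuryr \
         -o neutron.net.uuid=<neutron-network-uuid> existing-neutron-net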
  25. Networking services from Neutron, for containers!
   • Distributed Layer 2 Switching
   • Distributed Layer 3 Gateways
   • Floating IPs
   • Service Insertion
   • Layer 4 Distributed Stateful NAT
   • Distributed Firewall
   • VTEP Gateways
   • Distributed DHCP
   • Layer 4 Load Balancer-as-a-Service (with Health Checks)
   • Policy without the need for IPtables
   • Distributed Metadata
   • TAP-as-a-Service
  26. Launching a Container in Docker with Kuryr/MidoNet
  27. Kuryr delivers for CNM, but what about CNI?
   It’s an enabler for existing, well-defined networking plugins for containers.
  28. Kubernetes Presence in Container Orchestration
   • Open sourced from the production-grade, scalable technology used by Borg & Omega at Google for over 10 years
   • Explosive use over the last 12 months, including users like eBay and Lithium Technologies
   • Portable, extensible, self-healing
   • Impressive automated rollouts & rollbacks with one command
   • Growing ecosystem supporting Kubernetes:
     • CoreOS, RH OpenShift, Platform9, Weaveworks, Midokura!
  29. Kubernetes Architecture
   • Uses a PULL model architecture for config changes
   • Meaning K8S emits events on its API server, which components watch and pull
  30. Kubernetes Control Plane
   • etcd
     • All persistent master state is stored in an instance of etcd
     • To date, runs as a single instance; HA clusters in the future
     • Provides a “great” way to store configuration data reliably
     • With watch support, coordinating components can be notified very quickly of changes
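   The watch behaviour called out above is easy to observe with the etcd v2 CLI of that era; /registry is where Kubernetes keeps its state by default, but treat the paths as illustrative:

     # List the Kubernetes state tree, then watch a prefix for changes
     $ etcdctl ls /registry
     $ etcdctl watch --recursive --forever /registry/pods
     # Any pod create/update/delete made through the API server shows up here immediately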
  31. Kubernetes Control Plane Continued
   • K8S API Server
     • Serves up the Kubernetes API
     • Intended to be a CRUD-y server, with logic implemented in separate components or in plug-ins
     • Processes REST operations, validates them, and updates the corresponding objects in etcd
   • Scheduler
     • Binds unscheduled pods to nodes
     • Pluggable, to allow multiple cluster schedulers and even user-provided schedulers in the future
   • K8S Controller Manager Server
     • All other cluster-level functions are currently performed by the Controller Manager
     • E.g. Endpoints objects are created and updated by the endpoints controller, and nodes are discovered, managed, and monitored by the node controller
     • The replication controller is a mechanism layered on top of the simple pod API
     • Planned to be a pluggable mechanism
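   The CRUD-style REST surface can be poked at directly. A minimal sketch using kubectl's local proxy (port and namespace are just the usual defaults):

     # Open an authenticated local proxy to the API server
     $ kubectl proxy --port=8001 &

     # Plain REST reads against the same objects kubectl shows
     $ curl http://127.0.0.1:8001/api/v1/namespaces/default/pods
     $ curl http://127.0.0.1:8001/api/v1/nodes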
  32. Kubernetes Worker Node
   • kubelet
     • Manages pods and their containers, their images, their volumes, etc.
   • kube-proxy
     • Runs on each node to provide a simple network proxy and load balancer
     • Reflects services as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends
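   The round-robin forwarding kube-proxy provides is what sits behind a Kubernetes Service. A minimal sketch, assuming a Kubernetes 1.2-era kubectl where run creates a Deployment (names are placeholders):

     # Run three replicas and put a Service (virtual IP) in front of them
     $ kubectl run web --image=nginx --replicas=3 --port=80
     $ kubectl expose deployment web --port=80

     # kube-proxy on every node now forwards the Service's cluster IP to the pods
     $ kubectl get svc web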
  33. Kubernetes Networking Model
   There are 4 distinct networking problems to solve:
   1. Highly-coupled container-to-container communications
   2. Pod-to-Pod communications
   3. Pod-to-Service communications
   4. External-to-internal communications
  34. Kubernetes Networking Options
   Flannel provides an overlay to enable cross-host communication:
   - IP per pod
   - VXLAN tunneling between hosts
   - IPtables for NAT
   - Multi-tenancy?
     - Host per tenant?
     - Cluster per tenant?
     - How to share VMs and containers on the same network for the same tenant?
   - Security risk on the Docker bridge? Shared networking stack
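   On a flannel node the pieces listed above are visible from the shell. A minimal sketch, assuming flannel's VXLAN backend with its default names (not guaranteed on every setup):

     # flannel's VXLAN device carries the cross-host pod traffic
     $ ip -d link show flannel.1

     # Per-node pod subnet handed out by flannel
     $ cat /run/flannel/subnet.env

     # NAT (MASQUERADE) rules used for traffic leaving the overlay
     $ sudo iptables -t nat -L POSTROUTING -n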
  35. MidoNet Integration with Kubernetes using Kuryr
  36. MidoNet: 6+ years of steady growth
  37. Security at the edge
   1. vPort1 initiates a packet flow through the virtual network
   2. MN Agent fetches the virtual topology/state
   3. MN simulates the packet through the virtual network
   4. MN installs a flow in the kernel at the ingress host
   5. Packet is sent in a tunnel to the egress host
  38. Kubernetes Integration: How with Kuryr?
   Kubernetes 1.2+. Two integration components:
   CNI driver
   • Standard container networking: the preferred K8S network extension point
   • Can serve rkt, appc, Docker
   • Uses the Kuryr port binding library to bind the local pod using metadata
   Raven (part of the Kuryr project)
   • Python 3
   • AsyncIO
   • Extensible API watcher
   • Drives the K8S API to Neutron API translation
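   A CNI driver is selected per node through a network configuration file that kubelet reads when started with --network-plugin=cni. A minimal sketch of such a file; the "type" value kuryr and the file name are assumptions for illustration, not verified Kuryr configuration:

     # Write an illustrative CNI config for the Kuryr driver (values assumed)
     $ cat <<'EOF' | sudo tee /etc/cni/net.d/10-kuryr.conf
     {
         "cniVersion": "0.2.0",
         "name": "kuryr",
         "type": "kuryr"
     }
     EOF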
  39. Kubernetes Integration: How with Kuryr+MidoNet?
   Defaults:
   • kube-proxy: generates iptables rules which map portal_ips such that the traffic gets to the local kube-proxy daemon; does the equivalent of a NAT to the actual pod address
   • flannel: default networking integration in CoreOS
   Enhanced by:
   • Kuryr CNI driver: enables the host binding
   • Raven: process used to proxy the K8S API to the Neutron API
   • MidoNet agent: provides higher-layer services to the pods
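   The default kube-proxy behaviour described above can be inspected on any node, since the rules live in the IPtables NAT table. A minimal sketch (the KUBE-* chain names are created by kube-proxy; a concrete Service IP would replace the placeholder):

     # kube-proxy programs NAT rules for every Service's portal (cluster) IP
     $ sudo iptables -t nat -L -n | grep -i KUBE | head

     # Follow one Service IP to the redirect/DNAT toward the backing pods
     $ kubectl get svc
     $ sudo iptables -t nat -L -n | grep <service-cluster-ip>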
  40. Kubernetes Integration: How with Kuryr?
   Raven: used to proxy K8S API to Neutron API + IPAM
   - focuses only on building the virtual network topology translated from the events of the internal state changes of K8S through its API server
   Kuryr CNI driver: takes care of binding virtual ports to physical interfaces on worker nodes for deployed pods

   Kubernetes API  | Neutron API
   Namespace       | Network
   Cluster Subnet  | Subnet
   Pod             | Port
   Service         | LBaaS Pool, LBaaS VIP (FIP)
   Endpoint        | LBaaS Pool Member
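   The translation table above can be watched in action by creating Kubernetes objects and checking what Raven produced in Neutron. A minimal sketch, assuming a working Raven + Kuryr CNI deployment (names are placeholders):

     # Create a namespace and a pod on the Kubernetes side
     $ kubectl create namespace demo
     $ kubectl run nginx --image=nginx --namespace=demo

     # On the OpenStack side, Raven should have created a corresponding Neutron
     # network/subnet for the namespace and a port for the pod
     $ neutron net-list
     $ neutron port-list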
  41. Kubernetes Integration: How with Kuryr+MidoNet?
   Raven: used to proxy K8S API to Neutron API
   Kuryr CNI driver: takes care of binding virtual ports to physical interfaces on worker nodes for deployed pods
  42. Kubernetes Integration: How with Kuryr+MidoNet?
   Raven: used to proxy K8S API to Neutron API
   Kuryr CNI driver: takes care of binding virtual ports to physical interfaces on worker nodes for deployed pods
  43. Kubernetes Integration: Where are we now with MidoNet?
   Completed integration components:
   - CNI driver
   - Raven
   - Namespace implementation (a mechanism to partition resources created by users into a logically named group):
     - each namespace gets its own router
     - all pods driven by the RC should be on the same logical network
   - CoreOS support
   - Containerized MidoNet services
  44. Where will Kuryr go next?
   • Bring container and VM networking under one API
   • Multi-tenancy
   • Advanced networking services / map Network Policies
   • QoS
   • Adapt the implementation to work with other COEs
     • kuryr-mesos
     • kuryr-cloudfoundry
     • kuryr-openshift
   • Magnum support (containers in VMs) in OpenStack
  45. Get Involved!
   Kuryr
    Project Launchpad: https://launchpad.net/kuryr
    Project Git Repository: https://github.com/openstack/kuryr
    Weekly IRC Meeting: http://eavesdrop.openstack.org/#Kuryr_Project_Meeting
    IRC: #openstack-neutron @ Freenode
   MidoNet
    Community Site: www.midonet.org
    Project Git Repository: https://github.com/midonet/midonet
    Try MidoNet with one command: $> curl -sL quickstart.midonet.org | sudo bash
    Join Slack: slack.midonet.org
  46. Thank you!
   Cynthia Thomas, Systems Engineer, @_techcet_
