Festive Tech Calendar: Festive time with AKS networking

  1. Festive time with AKS networking (Festive Tech Calendar 2022)
  2. Who we are • Nico Meisenzahl (Head of DevOps Consulting and Operations, Cloud Solution Architect, Azure & Developer Technologies MVP, GitLab Hero) • Email: nico.meisenzahl@whiteduck.de • Twitter: @nmeisenzahl • LinkedIn: https://www.linkedin.com/in/nicomeisenzahl/ • Philip Welz (Senior Kubernetes & DevOps Engineer, GitLab Hero, CKA, CKAD & CKS) • Twitter: @philip_welz • LinkedIn: https://www.linkedin.com/in/philip-welz
  3. Agenda • Intro to AKS networking • Control plane networking options • Cilium-powered data plane • Private API server options
  4. INTRO TO AKS NETWORKING
  5. The layers of AKS networking • pod-to-pod traffic • pod-to-service networking • in-cluster DNS • cluster ingress/egress traffic • traffic between the K8s control and data plane • API server access
  6. What we are focusing on today • pod-to-pod traffic • pod-to-service networking • in-cluster DNS • cluster ingress/egress traffic • traffic between the K8s control and data plane • API server access
  7. CONTROL PLANE NETWORKING OPTIONS
  8. Cluster-wide east-west traffic • mandatory for a functioning Kubernetes cluster • pod-to-pod communication • there are multiple implementations available on AKS • you have the choice
  9. Container Network Interface (CNI) • CNI is an abstraction layer and a vendor-neutral specification • used by Kubernetes and others (Mesos, Cloud Foundry) • vendor implementations are called plugins • https://github.com/containernetworking/cni
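
To make the plugin model concrete, here is a minimal sketch of a CNI network configuration as a container runtime would hand it to a plugin binary. The file path, network name, and bridge/IPAM values are illustrative placeholders, not taken from the deck:

    # Minimal sketch of a CNI network configuration (illustrative values).
    # The container runtime reads this JSON and invokes the named plugin
    # binary (here the reference "bridge" plugin) per the CNI specification.
    cat <<'EOF' | sudo tee /etc/cni/net.d/10-demo.conf
    {
      "cniVersion": "1.0.0",
      "name": "demo-net",
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24"
      }
    }
    EOF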
  10. Kubenet • a very basic and simple plugin implementation • typically used for single-node setups or with a cloud provider that sets up routing rules for node-to-node communication • does not itself implement advanced features such as cross-node networking or network policies • Linux only, no Windows nodes/pods
  11. Kubenet & Azure Kubernetes Service • requires outbound internet connectivity • one cluster per subnet • max of 400 nodes (due to the UDR limit) • an additional hop is required by the Kubenet design: no direct pod addressing/routing • no support for Azure Network Policies or the virtual node add-on (ACI); Calico Network Policies do support Kubenet • https://learn.microsoft.com/azure/aks/configure-kubenet
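
As a hedged sketch of what this looks like in practice (resource names, the subnet ID, and the CIDRs are placeholders, not from the deck), a Kubenet-based cluster on an existing subnet can be created roughly like this:

    # Sketch: AKS cluster using the Kubenet plugin on an existing subnet.
    # All names, the subnet ID, and the CIDRs below are placeholders.
    az aks create \
      --resource-group rg-demo \
      --name aks-kubenet-demo \
      --network-plugin kubenet \
      --vnet-subnet-id "<node-subnet-resource-id>" \
      --pod-cidr 10.244.0.0/16 \
      --service-cidr 10.0.0.0/16 \
      --dns-service-ip 10.0.0.10 \
      --generate-ssh-keys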
  12. Kubenet & Azure Kubernetes Service (architecture diagram)
  13. When to use Kubenet with AKS • you require dual-stack (IPv4/IPv6) • most pod communication stays within the cluster • you don't need advanced AKS features • (you have limited IP address space)
  14. CNI & Azure Kubernetes Service • you have further options … • Azure CNI, with dynamic allocation of IP addresses and advanced subnet support • Azure CNI Overlay (preview), the better Kubenet • bring-your-own CNI
  15. Azure CNI • flexible • supports all AKS features and use cases • became even more flexible with dynamic IP allocation and advanced subnet support • the latter can fix issues with IP address planning • https://learn.microsoft.com/azure/aks/configure-azure-cni
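
A minimal sketch of an Azure CNI cluster using dynamic IP allocation with separate node and pod subnets (all names and IDs are placeholders):

    # Sketch: AKS cluster with Azure CNI and dynamic pod IP allocation,
    # drawing pod IPs from a dedicated subnet. Names/IDs are placeholders.
    az aks create \
      --resource-group rg-demo \
      --name aks-azurecni-demo \
      --network-plugin azure \
      --vnet-subnet-id "<node-subnet-resource-id>" \
      --pod-subnet-id "<pod-subnet-resource-id>" \
      --generate-ssh-keys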
  16. Azure CNI (architecture diagram)
  17. When to use Azure CNI • most of the time: the preferred option • pods also communicate with resources outside of the cluster • you need advanced AKS features • (you have sufficient IP address space)
  18. Azure CNI Overlay • still in preview! • currently only available in North Central US and West Central US • the better Kubenet (once GA) • up to 1000 nodes • no performance degradation • full support for Network Policies • still Linux only • no support for some advanced features such as virtual nodes • https://learn.microsoft.com/azure/aks/azure-cni-overlay
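
A sketch of creating an overlay-mode cluster; since the feature was in preview at the time, this may also require the aks-preview CLI extension (names and the pod CIDR are placeholders):

    # Sketch: AKS cluster with Azure CNI in overlay mode (preview at the
    # time of writing; may require the aks-preview extension).
    az aks create \
      --resource-group rg-demo \
      --name aks-overlay-demo \
      --location westcentralus \
      --network-plugin azure \
      --network-plugin-mode overlay \
      --pod-cidr 192.168.0.0/16 \
      --generate-ssh-keys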
  19. Azure CNI Overlay (architecture diagram)
  20. When to use Azure CNI Overlay • you would like to scale but have limited IP address space • most pod communication stays within the cluster • you want to use Kubernetes Network Policies • you don't need advanced AKS features
  21. Bring-your-own CNI • full flexibility • deploy the CNI plugin of your choice • no support from Azure for CNI-related issues • limitations vary based on the chosen plugin • https://learn.microsoft.com/azure/aks/use-byo-cni
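
A sketch of the BYO CNI variant: the cluster is created without any CNI plugin, and nodes stay NotReady until you install one yourself (names are placeholders):

    # Sketch: AKS cluster without a preinstalled CNI plugin; you deploy
    # your plugin of choice (e.g. Cilium) afterwards.
    az aks create \
      --resource-group rg-demo \
      --name aks-byocni-demo \
      --network-plugin none \
      --generate-ssh-keys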
  22. CILIUM-POWERED DATA PLANE
  23. Azure CNI powered by Cilium • still in preview! • a managed Cilium offering • provides pod networking, basic Kubernetes Network Policies, and high-performance service load balancing • eBPF-based data plane • socket-based load balancing instead of iptables • relies on the Azure IPAM control plane (IP address management on Azure) • therefore works with both Azure CNI and Azure CNI Overlay
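
A sketch of enabling the Cilium data plane on top of Azure CNI Overlay. Because this was in preview, the exact flag varies by CLI version (the preview used --enable-cilium-dataplane; newer CLIs use --network-dataplane cilium); everything else is a placeholder:

    # Sketch: Azure CNI Overlay with the Cilium eBPF data plane.
    # Preview at the time of writing; flag names may differ by CLI version.
    az aks create \
      --resource-group rg-demo \
      --name aks-cilium-demo \
      --network-plugin azure \
      --network-plugin-mode overlay \
      --pod-cidr 192.168.0.0/16 \
      --network-dataplane cilium \
      --generate-ssh-keys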
  24. Azure CNI & Cilium (architecture diagram)
  25. eBPF big picture (diagram: https://ebpf.io/what-is-ebpf)
  26. Cilium benefits • faster service routing • more efficient network policy enforcement • better observability of cluster traffic • support for larger clusters • the Cilium ecosystem
  27. Current limitations • Linux only • CiliumNetworkPolicy is currently not supported • Cilium L7 policy enforcement is disabled • Hubble is disabled • advanced Cilium configurations require BYO CNI
  28. PRIVATE API SERVER OPTIONS
  29. API server private endpoint connection • based on Private Link endpoints • exposes the API server endpoint into a subnet • you can still expose services externally • things to think about: DNS resolution (a DNS resolver or a public DNS entry) and a private/self-hosted build agent (or GitOps) • "az aks command invoke" can be helpful as well • https://learn.microsoft.com/azure/aks/private-clusters
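
A sketch combining both points: creating a private cluster, then using "az aks command invoke" to run kubectl without network line-of-sight to the API server (names are placeholders):

    # Sketch: private AKS cluster; the API server is reachable only via
    # a Private Link endpoint inside the vNet.
    az aks create \
      --resource-group rg-demo \
      --name aks-private-demo \
      --enable-private-cluster \
      --generate-ssh-keys

    # Run kubectl inside the cluster from outside the vNet:
    az aks command invoke \
      --resource-group rg-demo \
      --name aks-private-demo \
      --command "kubectl get nodes -o wide"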
  30. API server vNet integration • still in preview! • the API server is exposed into a delegated subnet • enables direct network communication between the API server and the cluster nodes • without vNet integration this traffic goes through a private tunnel between the control plane and the nodes (Konnectivity) • supports private and public clusters • no Private Link endpoint required • https://learn.microsoft.com/azure/aks/api-server-vnet-integration
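
A sketch of a cluster with API server vNet integration; as a preview feature this likely requires the aks-preview extension, and the delegated subnet ID is a placeholder:

    # Sketch: API server projected into a delegated subnet (preview at
    # the time of writing; may require the aks-preview extension).
    az aks create \
      --resource-group rg-demo \
      --name aks-vnetint-demo \
      --enable-apiserver-vnet-integration \
      --apiserver-subnet-id "<delegated-apiserver-subnet-resource-id>" \
      --generate-ssh-keys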
  31. Questions? • Nico Meisenzahl (Head of DevOps Consulting and Operations, Cloud Solution Architect, Azure & Developer Technologies MVP, GitLab Hero) • Email: nico.meisenzahl@whiteduck.de • Twitter: @nmeisenzahl • LinkedIn: https://www.linkedin.com/in/nicomeisenzahl/ • Philip Welz (Senior Kubernetes & DevOps Engineer, GitLab Hero, CKA, CKAD & CKS) • Twitter: @philip_welz • LinkedIn: https://www.linkedin.com/in/philip-welz • Slides: https://www.slideshare.net/nmeisenzahl
