Cloud Native for NFV
Alon Bernstein
alonb@cisco.com
Distinguished Engineer
Cisco Systems Inc.
Kuralamudhan Ramakrishnan
Kuralamudhan.Ramakrishnan@intel.com
Senior Software Engineer - Intel
Representing on behalf of
Ivan Coughlan
IVAN.COUGHLAN@INTEL.COM
Senior Software Architect
Software Defined Datacenter Solutions Group - Intel
Cloud Native Network
Function Virtualization
Alon Bernstein
Distinguished Engineer
Cisco Systems Inc.
Cloud Native Network Functions
• The term “Cloud Native Networking” covers many topics
• This presentation focuses on micro-services dedicated to packet
forwarding
• The first part of the presentation is a high level outline of the
motivations and challenges in cloud native network functions (CNF)
• Second part will focus on tools used for CNF
• Cisco is building a CNF based CMTS (cable modem termination
system).
What Is Cloud Native Software ?
• Allow a software developer to focus on the core business logic
- Break functions into “micro-services”
- Scaling/Availability are provided by the infrastructure
- Open source software tools that reduce development time (databases/buses/productivity
tools etc).
- “12 factor app” and domain driven design
• More in “cloud native computing foundation”
• How does that translate to the network function virtualization ?
Cloud Native Software By Example
[Diagram: three PODs, each running Add(x, y), behind a load-sharing front end]
Assume a very simple stateless micro-service: add two numbers. Run the micro-service in a POD.
- If a POD crashes, just start another one. Almost no service impact.
- If we run out of processing capacity, add another POD.
- If we don’t need capacity, remove PODs.
- To upgrade the code, start adding PODs with the new SW version and remove the old ones.
- The Kubernetes declarative model helps deploy design patterns such as this example very easily (see the sketch below).
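To make the declarative pattern concrete, here is a minimal sketch (not from the deck) of such a stateless micro-service expressed as a Kubernetes Deployment; the image name, port and replica count are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: add-service            # hypothetical "add two numbers" micro-service
spec:
  replicas: 3                  # scale out/in by changing this one declared value
  selector:
    matchLabels:
      app: add-service
  strategy:
    type: RollingUpdate        # upgrades roll in new-version PODs and retire old ones
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: add-service
    spec:
      containers:
      - name: add
        image: example.com/add-service:v2   # hypothetical image
        ports:
        - containerPort: 8080

If a POD crashes, the Deployment's ReplicaSet starts a replacement automatically; changing the replica count or the image tag and re-applying the manifest drives the scaling and upgrade behaviours described above.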
Cloud Native NFV
What are the benefits of cloud native ?
• Service velocity
• Availability
• Scale
What Are The Risks Of Going Cloud Native ?
• Complex.
• Just the infra consists of 20+ software packages.
• Automation is a must.
• Managing a highly distributed system is not easy.
• Complete rearrangement of the organization (CI/CD, devops, merge of
IT/DC/access orgs)
What About Containers ?
• Containers are a packaging option.
• Cloud native uses containers, but it's possible to build a container system that is not cloud native.
• Containers are useful in the cloud native context because of the breakup into micro-services (lots of lightweight functions).
What About Performance ?
• Cloud Native NFV can be high-performance
Cloud Native For NFV
• Current reference for NFV is
ETSI-NFV which is a “lift-and-
shift” architecture; create an
equivalent network appliance in
a VM and place it in the data
center.
• Cloud native is focused on web
and e-commerce. Some
adjustments need to be made
for NFV.
NFV | Cloud Native
Goal: implement networking functions in a data center environment; networking focused | Goal: treat a data center as a single OS; application focused
Orchestration | Scheduling
Relies on active/standby and orchestration for scaling/availability/upgrade | Relies on load balancing for scaling/availability/upgrade
Network Management Framework | Application Deployment Framework
ETSI-NFV is a common framework | Depends on the tools used
Long-lived application, dynamic configuration | Short-lived application, configuration stored centrally, immutable
Availability
• Breakdown into micro-services is the first line of defense
• Cattle vs. Pets
Two key applications
• Control plane: for the most part “classic cloud native”
• Data plane: not supported out-of-the-box
Issues With Cloud Native Data Plane
• IP packet streams are not HTTP transactions –
how does load balancing work ?
• CPU and memory are counted as resources in the Kubernetes scheduler – but what about bandwidth ?
• Solutions have to be built; they don’t come as part of the basic package
Conclusion
Back to basics. Behind the term “cloud native” are:
• ABCs of software engineering (modularity)
• Parallel computing
• Distributed systems
All of the above can be, and should be applied to
NFV
Cloud Native For NFV
ONS LA
March 2018
Intel
Kuralamudhan Ramakrishnan
Kuralamudhan.Ramakrishnan@intel.com
Senior Software Engineer, Software Defined Datacenter Solutions Group - Intel
Representing on behalf of
Ivan Coughlan
IVAN.COUGHLAN@INTEL.COM
Senior Software Architect, Software Defined Datacenter Solutions Group - Intel
Contributors: Abdul Halim; Louise Daly; Swati Sehgal
Software Defined Datacenter Solutions Group
Notices and Disclaimers
© 2017 Intel Corporation. Intel, the Intel logo, Xeon and Xeon logos are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may
be claimed as the property of others.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or
from the OEM or retailer.
All products, computer systems, dates, and figures specified are preliminary based on current expectations, and are subject to change without notice.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-
infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
Intel processors of the same SKU may vary in frequency or power as a result of natural variability in the production process.
For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.
Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether
referenced data are accurate.
Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel
microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or
effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel
microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and
Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies
depending on system configuration. No computer system can be absolutely secure.
Intel® Advanced Vector Extensions (Intel® AVX)* provides higher throughput to certain processor operations. Due to varying processor power characteristics, utilizing AVX
instructions may cause a) some parts to operate at less than the rated frequency and b) some parts with Intel® Turbo Boost Technology 2.0 to not achieve any or
maximum turbo frequencies. Performance varies depending on hardware, software, and system configuration and you can learn more at http://www.intel.com/go/turbo.
Intel® Hyper-Threading Technology available on select Intel® processors. Requires an Intel® HT Technology-enabled system. Your performance varies depending on the
specific hardware and software you use. Learn more by visiting http://www.intel.com/info/hyperthreading.
All SKUs, frequencies, features and performance estimates are PRELIMINARY and can change without notice
Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Software and workloads used in performance tests may
have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer
systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and
performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For
more complete information visit http://www.intel.com/performance. Configurations: Based on Intel estimates.
Software Defined Datacenter Solutions Group
Cloud Native Software Evolution Journey
[Timeline: Monolith → Monolith in a VM → Containers → Micro-services (Hindsight → Today → Emerging)]
What Happened: Virtualize Physical Appliances
Automated; Secure; Flexible; Performant; Resilient
Software Defined Datacenter Solutions Group
Cloud Native Software Evolution Journey
[Timeline: Monolith → Monolith in a VM → Containers → Micro-services (Hindsight → Today → Emerging)]
What Happened: Containers Bare Metal Deployment Model
Automated; Secure; Flexible; Performant; Resilient
Software Defined Datacenter Solutions Group
Container Bare Metal Deployment Model
Collaborate with early movers, drive open source developments and enable the industry
[Diagram: NFV Orchestration (Cloud Native Computing Foundation) on top of containerized virtual network functions (vCMTS, vIMS, vEPC, vCPE, vSBC, vRNC, vFirewall, vRouter) over the NFVi network (SR-IOV), with the Docker Engine and Host OS running directly on bare-metal hardware]
Software Defined Datacenter Solutions Group
Addressing Key Challenges in Containers Bare Metal
Open Source: Available on Intel github https://github.com/Intel-Corp | NFD at https://github.com/kubernetes-incubator/node-feature-discovery
Ability to request/allocate platform capabilities
High performance Data Plane (N-S)
High performance Data Plane (E-W)
Multiple network interfaces for VNFs
CPU Core-Pinning and isolation for K8s pods
Dynamic Huge Page allocation
Platform telemetry information
Challenges being addressed
Discovery, Advertise, schedule and manage devices with K8s
Guarantee NUMA node resource alignment
Kubernetes Networking
Data Plane Acceleration
Telemetry
Enhance Platform Awareness (EPA)
Open Source: CNI plug-in - V2.0 June ‘17
Upstream K8s: TBD
Open Source: CNI plug-in - V1.0 Sep ‘17
Open Source: CNI plug-in - V2.0 April ‘17
Open Source: Nov. ‘16
Upstream K8: Incubation Graduation TBD
Open Source: V1.2 April ‘17
Upstream K8: Phase 1 - V1.8 Sept ‘17
Upstream K8: V1.8 Sept ‘17
SOFTWARE
AVAILABILITY*
Upstream collectd: V5.7.2 June ‘17; 5.8.0 (Q4 2017 date TBD)
Solution
Node Feature Discovery
CPU Manager for
Kubernetes
Native Huge page support
for Kubernetes
VHOST USER
SR-IOV
Device Plugin
NUMA Manager
New implementation WIP
Upstream K8: Phase 1 - V1.8 Alpha
New implementation – WIP PoC proposal
Upstream K8: TBD
Software Defined Datacenter Solutions Group
Multiple Network Interfaces for VNFs
PROBLEM
Kubernetes supports only one network interface per pod – “eth0”
In NFV use cases, it is required to provide multiple network interfaces to the virtualized operating environment of the VNF
[Diagram: a default pod whose containers share a single eth0 interface vs. a pod with eth0, eth1 and eth2 interfaces]
USE CASES
Functional separation of control and data network planes
link aggregation/bonding for redundancy of the network
Support for implementation of different network SLAs
Network segregation and Security
REFERENCE
Multus CNI – https://github.com/Intel-Corp/multus-cni
Network plumbing WG Kubernetes proposal- Multus PoC
https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7Ud
TPAG_RNydhVE1Kx54kFQ
[Diagrams: “Network Control Flow with Multus”, where the kubelet invokes Multus and Multus delegates to a Flannel/Linux bridge plugin for eth0 and to the SR-IOV plugin (VF0/VF1) for net0/net1; and “Pod Network Interfaces with Multus”, a multi-net example where a vFirewall pod has eth0 on Flannel/Linux bridge plus SR-IOV interfaces net0 (logging) and net1]
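As a hedged illustration of the pod-side view (not taken from the slide), the Network Plumbing WG proposal referenced above attaches extra interfaces through a pod annotation; the annotation key below follows that proposal and the network name sriov-net0 is a hypothetical network definition, so exact names may differ by Multus version:

apiVersion: v1
kind: Pod
metadata:
  name: vfirewall-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net0   # extra SR-IOV interface; the cluster default network still provides eth0
spec:
  containers:
  - name: vfirewall
    image: example.com/vfirewall:latest       # hypothetical VNF image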
Software Defined Datacenter Solutions Group
Vhost User CNI Plugin
PROBLEM
No Container Networking with software acceleration
for NFV, particularly for East – West Traffic
SOLUTION
Virtio_user/ vhost_user performance better than VETH pairs
Supports VPP as well as DPDK OVS
Vhost_user CNI plugin enables K8s
to leverage data plane acceleration
REFERENCE
https://github.com/intel/vhost-user-net-plugin (V1.0 Sep ’17)
[Diagram: a Kubernetes pod whose container runs a DPDK VNF application with a virtio_user interface, connected over a vhost-user port to OVS-DPDK / VPP, which in turn drives the NIC (eth0)]
Software Defined Datacenter Solutions Group
DPDK – SR-IOV CNI Plugin
[Diagram: a Kubernetes pod whose container runs a DPDK VNF application, consuming VFs of an SR-IOV enabled network interface either through the kernel or bound to userspace drivers (uio_pci_generic / igb_uio / vfio-pci)]
PROBLEM
Lack of support for physical platform resource isolation
No guaranteed network IO performance
No support for Data Plane Networking
SOLUTION
Allows SRIOV support in Kubernetes via a CNI plugin
Supports two modes of operation:
SR-IOV: SR-IOV VFs are allocated to pod network namespace
DPDK: SR-IOV VFs are bound to DPDK drivers in the userspace
REFERENCE
github.com/Intel-Corp/sriov-cni
Software Defined Datacenter Solutions Group
Node Feature Discovery
Node Feature Discovery Label | Details
SR-IOV | Network Features: Single Root I/O Virtualization
BootGuard | A hardware-based boot integrity protection mechanism (new feature on Purley)
UEFI Secure Boot | Boot firmware verification and authorization of OS Loader/Kernel components
AVX | CPUID Features: Intel® Advanced Vector Extensions 512 (Intel® AVX-512)
Turbo Boost | Intel® Turbo Boost processor accelerator
NODE FEATURE DISCOVERY IN K8s
PROBLEM
No way to identify hardware
capabilities or configuration
Inability for workload to request
certain hardware feature
SOLUTION
Node Feature Discovery brings Enhanced
Platform Awareness (EPA) in K8s
NFD detects resources on each node in a
Kubernetes cluster and advertises those features
Allows matching of workload to
platform capabilities
REFERENCE
github.com/kubernetes-incubator/node-feature-discovery
[Diagram: NFD discovery pods run on each node and detect features such as SR-IOV, BootGuard, Secure Boot, AVX and Turbo Boost; the features are advertised as node labels via the master (etcd), and application pods (Application A, Application B) are scheduled onto nodes whose labels match their POD label requirements (e.g. SR-IOV, Bootguard)]
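For illustration only, a minimal sketch of steering a workload onto NFD-labelled nodes with a nodeSelector; the exact label key depends on the NFD version, so the key below is an assumption rather than the published label name:

apiVersion: v1
kind: Pod
metadata:
  name: sriov-workload
spec:
  nodeSelector:
    node.alpha.kubernetes-incubator.io/nfd-network-sriov: "true"   # assumed NFD-generated label key
  containers:
  - name: app
    image: example.com/dataplane-app:latest                        # hypothetical image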
Software Defined Datacenter Solutions Group
CPU Manager for Kubernetes – CPU Pinning and Isolation
PROBLEM
Kubernetes has no mechanism to support
core pinning and isolation
Results in high priority workloads not achieving SLAs
* Noisy Neighbour Workload: an application whose behaviour causes other applications sharing the infrastructure to suffer from uneven performance
WITHOUT CMK: CPU Pinning and Isolation
[Diagram: the target workload and a noisy neighbour workload float freely across Core0 (CPU0/CPU1) and Core1 (CPU2/CPU3), with no pinning or isolation]
REFERENCE
https://github.com/Intel-Corp/CPU-Manager-for-Kubernetes
SOLUTION
CPU-Manager-For-Kubernetes introduces core pinning and isolation
to K8s without requiring changes to the code base
CMK guarantees high priority workloads are pinned to
exclusive cores
Gives a performance boost to high priority applications
Negates the noisy neighbour* scenario
WITH CMK: CPU Pinning and Isolation
[Diagram: the target workload is pinned to an exclusive core (Core0: CPU0/CPU1) while noisy neighbour workloads are confined to the remaining shared core (Core1: CPU2/CPU3)]
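As a rough usage sketch (the paths, pool name and wrapped command are assumptions based on the CMK project documentation, not content from the slide), a pod can wrap its entrypoint with cmk isolate so the workload lands on an exclusive core:

apiVersion: v1
kind: Pod
metadata:
  name: pinned-dataplane
spec:
  containers:
  - name: dataplane
    image: example.com/dpdk-l2fwd:latest                                                 # hypothetical high-priority workload image
    command: ["/opt/bin/cmk"]
    args: ["isolate", "--conf-dir=/etc/cmk", "--pool=exclusive", "/usr/local/bin/l2fwd"]  # assumed CMK invocation
    volumeMounts:
    - name: cmk-install
      mountPath: /opt/bin
    - name: cmk-conf
      mountPath: /etc/cmk
  volumes:
  - name: cmk-install
    hostPath:
      path: /opt/bin        # assumed host path where the cmk binary was installed
  - name: cmk-conf
    hostPath:
      path: /etc/cmk        # assumed host path holding the CMK pool configuration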
Software Defined Datacenter Solutions Group
Experience Kit Example: CPU Manager for Kubernetes Benchmark Test
Core isolation decreases latency of the target workload by more than 13x in the presence of a noisy workload
Up to 55x latency decrease
Tests are done with 16 target workloads (= 16 containers), with and without a noisy workload present
1 core with 2 threads is assigned to each container; the noisy workload uses any available (non-isolated) cores in the system
Platform: Intel® Xeon® Gold Processor 20C @ 2.00 GHz (6138T); DPDK L2 Forwarding using XXV710 NICs
Core isolation increases throughput of the target workload by more than 200% for small packets in the presence of a noisy workload
Up to 4x throughput increase
Core isolation leads to performance consistency, solving the noisy workload problem
Software Defined Datacenter Solutions Group
Device Plugins – Overview
WHY?
Device vendors have to write custom Kubernetes code in order to integrate
their device with the ecosystem
Results in multiple vendors maintaining custom code making it difficult for a
customer to consume
REFERENCE
https://kubernetes.io/docs/concepts/cluster-administration/device-plugins/
HOW?
Provide a device plugin framework which enables vendors to advertise, schedule and
setup devices with native Kubernetes integration
Device Plugins are easily deployed via K8s Daemonsets and workload device requests
made via extended resource requests in the Pod Specification
Provides predictability to workloads as Devices are allocated in a controlled manner
with device health being monitored - *Predictability to K8s*
Workload Resource Requests: Device
[Diagram: on a Kubernetes node, the kubelet device manager talks to a vendor Device Plugin over gRPC; the plugin advertises devices (device a, b, c) and the workload's device request is satisfied by allocating one of them to the workload]
LIMITATIONS?
Device Plugin API has no de-allocation resulting in no hook to free devices on
Pod completion
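A brief sketch of the consumption side (names below are placeholders, not from the slide): once a device plugin advertises an extended resource, a workload requests it in its resource limits just like CPU or memory:

apiVersion: v1
kind: Pod
metadata:
  name: device-workload
spec:
  containers:
  - name: app
    image: example.com/accelerated-app:latest    # hypothetical image
    resources:
      limits:
        vendor.example.com/device: 1             # placeholder extended-resource name advertised by a device plugin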
Software Defined Datacenter Solutions Group
NUMA Manager for Kubernetes – NUMA Alignment of Resources
PROBLEM
Kubernetes has multiple independent components that handle resource
allocation resulting in no alignment on Multi NUMA Node systems
Results in high priority workloads not achieving SLAs or increased
resource utilization
REFERENCE
https://github.com/kubernetes/community/pull/1680
SOLUTION
NUMA Manager provides a mechanism to guarantee NUMA Node Affinity of
resources requested by a workload
NUMA Manager interfaces with components (e.g. CPU Manager & Device Manager)
that have NUMA awareness to enable NUMA-aligned resource allocations
Gives a performance boost to high priority applications as resources are NUMA
Node aligned
Priority Workload Resource Requests: CPU, Device
[Diagram: WITHOUT NUMA Manager, the priority workload's CPU may be allocated on NUMA node 0 (socket 0) while its device sits on NUMA node 1 (socket 1), forcing traffic across the interconnect; WITH NUMA Manager, the CPU and the device are allocated from the same NUMA node]
Software Defined Datacenter Solutions Group
Cloud Native Software Evolution Journey
[Timeline: Monolith → Monolith in a VM → Containers → Micro-services (Hindsight → Today → Emerging)]
What Happened: Cloud-Native Network Functions
Automated; Secure; Flexible; Performant; Resilient
Software Defined Datacenter Solutions Group
Key Challenges to Address for Cloud Native Network Functions
Challenges to address:
• Placement considering the network resources
• QoS
• Service function chains
• High-speed wiring of NFVs
• Data plane management
• Support for cloud native & high performance
• Kubernetes networking
• NFV-specific policy APIs
• Management/control
Cloud native implementation, solutions to work together: Ligato
Cloud CMTS
• The Cloud CMTS is a cloud native
implementation of the Cable Modem Termination
System
• The Physical layer is handled by the “Remote
phy device (RPD)”, similar to the phy split in 5G
• The Cloud CMTS processes byte streams below the MAC layer because the RPD performs only mod/demod at the phy layer
Software Defined Datacenter Solutions Group
Call to Action
• We need feedback on the current ingredients e.g.
• Multus
• SR-IOV
• Vhost user
• Be active in K8s SIGs
• Network
• Resource Management
• Try out the Ligato Dev Framework
Software Defined Datacenter Solutions Group
Engage with Intel
OPEN SOURCE
POC
EXPERIENCE KITS
Best Practice Guidelines
Software Community
CONTAINER CAPABILITIES
CONTAINER NETWORKING
Intel is addressing key challenges to using containers for NFV use cases
Many of these have been open sourced already
Explore more information available on Intel's Network Builders site; additional material will be made available throughout January 2018: https://networkbuilders.intel.com/network-technologies/container-experience-kits
Cloud Native for NFV
Alon Bernstein
alonb@cisco.com
Distinguished Engineer
Cisco Systems Inc.
Kuralamudhan Ramakrishnan
Kuralamudhan.Ramakrishnan@intel.com
Senior Software Engineer - Intel
Representing on behalf of
Ivan Coughlan
IVAN.COUGHLAN@INTEL.COM
Senior Software Architect
Software Defined Datacenter Solutions Group - Intel
Backup Slides
Ligato Framework For Cloud Native VNFs
[Diagram: a Contiv-VPP switch (VPP alongside the kernel host stack) connects legacy apps via the host stack and high-performance apps / cloud-native VNF pods (application sockets, Istio sidecar, VPP agent, VPP TCP stack) via mem-if; Contiv-VPP, etcd, the kubelet and the Ligato framework define the topology and services (IPv4/IPv6); service mesh via Istio / Envoy]
Reference: Jan Medved’s Ligato: a platform for development of Cloud-Native VNFs
Software Defined Datacenter Solutions Group
Test configuration:
Master & Minion Nodes: {mother board: Intel Corporation; S2600WFQ; CPU: Intel® Xeon® Gold Processor 6138T; 2.0 GHz; 2
socket; 20 cores; 27.5 MB; 125 W; Memory: Micron MTA36ASF2G72PZ; 1 DIMM/Channel, 6 Channel/Socket; BIOS: Intel
Corporation SE5C620.86B.0X.01.0007.060920171037; NIC: Intel Corporation; Ethernet Controller XXV710 for 2x25GbE
Firmware version 5.50; SW: Ubuntu 16.04.2 64bit; Kernel 4.4.0-62-generic x86_64; DPDK 17.05}
IXIA* - IxNetwork 8.10.1046.6 EA; Protocols: 8.10.1105.9, IxOS 8.10.1250.8 EA-Patch1
Experience Kit Example – Test Setup for CPU Manager for Kubernetes Benchmark Test
Testing Results Summary: Managed noisy neighbours using CMK benchmark test
With core isolation, performance is consistent with or without a “noisy” application present
Without the core isolation EPA feature, in the presence of a “noisy application”:
• >70% throughput drop for small packet sizes
• >10% throughput drop for large packet sizes
• >10x packet latency increase
Software Defined Datacenter Solutions Group
Bonding CNI Plugin
PROBLEM
There is no redundancy against network link failure in a container environment. This results in high-priority workloads not achieving the expected high availability, e.g. due to a failed NIC, network switch or cable.
SOLUTION
Bonding CNI provides a mechanism to aggregate multiple
network interfaces into a single logical “bonded” interface in a
Container environment. Thus providing a fail-over, high-
availability network for containerized applications e.g., VNF.
REFERENCE
https://github.com/Intel-Corp/bond-cni
[Diagram: a Kubernetes pod whose VNF application sees Net 0 and Net 1 aggregated into a single bond interface, backed by VFs drawn from two SR-IOV PFs (PF 0 and PF 1) attached to the network]
Software Defined Datacenter Solutions Group
Huge Page Native Support in Kubernetes
PROBLEM
No resource management of huge pages in Kubernetes
It is the responsibility of the cluster operator to handle them manually
SOLUTION
Huge Pages introduced as first class resource in kubernetes
Support for Huge Pages via hugetlbfs enabled through a memory backed volume plugin
Inherent accounting of Huge Pages
Automatic relinquishing of Huge Pages in case of unexpected process termination
REFERENCE
Alpha support for pre-allocated hugepages
Hugetlbfs support based on empty dir volume plugin
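For reference, a minimal sketch of a pod consuming pre-allocated 2Mi huge pages through the first-class resource and a hugetlbfs-backed emptyDir volume as described above; the image name and sizes are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: hugepage-workload
spec:
  containers:
  - name: dpdk-app
    image: example.com/dpdk-app:latest   # hypothetical image
    resources:
      limits:
        hugepages-2Mi: 512Mi             # huge pages requested as a first-class resource
        memory: 1Gi
    volumeMounts:
    - name: hugepages
      mountPath: /dev/hugepages
  volumes:
  - name: hugepages
    emptyDir:
      medium: HugePages                  # hugetlbfs-backed volume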
Software Defined Datacenter Solutions Group
QAT Support in Kubernetes
PROBLEM
No way to identify QAT devices available in a kubernetes cluster
Inability for a workload to request
a QAT device along with other compute resources
SOLUTION
QAT support enabled through Device plugins introduced in
kubernetes 1.8
A vendor independent framework for users to consume hardware
devices
Enables device discovery, advertisement to kubelet, allocation to
pod and device health checks
REFERENCE
https://kubernetes.io/docs/concepts/cluster-administration/device-plugins
apiVersion: v1
kind: Pod
metadata:
  name: dpdkpod
spec:
  containers:
  - image: dpdkapp
    name: dpdkcontainer
    resources:
      requests:
        cpu: "10"
        memory: "4Gi"
        pod.alpha.kubernetes.io/opaque-int-resource-qat: '1'
      limits:
        cpu: "10"
        memory: "4Gi"
        pod.alpha.kubernetes.io/opaque-int-resource-qat: '1'
Software Defined Datacenter Solutions Group
NFD Secure Boot Use Case
PROBLEM
The kernel does not allow IGB_UIO based DPDK applications
on UEFI Secure Boot enabled systems
SOLUTION
Using the node anti-affinity feature in Kubernetes to prevent DPDK applications requiring IGB_UIO driver support from landing on nodes with the SecureBoot label created by Node Feature Discovery
apiVersion: v1
kind: Pod
metadata:
  name: dpdkpodRequiringUIOSupport
spec:
  affinity:
    # node anti-affinity expressed via nodeAffinity with a NotIn match on the NFD label
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "nfd-SecureBoot"
            operator: NotIn
            values:
            - "true"
  containers:
  - image: dpdkapp
    name: dpdkcontainer
Software Defined Datacenter Solutions Group
Platform Telemetry System Support in Kubernetes
[Diagram: container telemetry (cAdvisor) and base platform telemetry (compute, network, storage; Intel® Run Sure Technologies; Intel® Infrastructure Management Technologies) gathered via open collection and exposed through resource and telemetry interfaces to Prometheus; service mesh via Istio / Envoy]
See: Platform Service Assurance site, not including containers specific data: https://networkbuilders.intel.com/network-technologies/serviceassurance
Software Defined Datacenter Solutions Group
Containers Networking Deployment Considerations
Multiple deployment models:
[Diagram: NFV Orchestration (Cloud Native Computing Foundation) over VNFs (vCMTS, vIMS, vEPC, vCPE, vSBC) on the NFVi network (SR-IOV), with containers deployed on bare metal, containers in VMs, or a hybrid/unified mix of VMs and containers]
Software Defined Datacenter Solutions Group
Container Unified Infrastructure Deployment Model
[Diagram: NFV Orchestration (Cloud Native Computing Foundation) over VNFs (vCMTS, vIMS, vEPC, vCPE, vSBC) on the NFVi network (SR-IOV); apps run in containers managed by a Docker Engine inside each VM (guest OS), with the VMs hosted on a hypervisor over the hardware]
Software Defined Datacenter Solutions Group
Industry Challenges in Containers Unified Infrastructure
Challenges being addressed | Solution
Support for CPU core pinning for Kuryr-K8s pods | CPU Manager for Kubernetes
Removing network performance penalties for containers in VMs | Kuryr-Kubernetes
Support for high performance data plane (E-W); multiple network interfaces for VNFs | Same as in Container Bare Metal
[Diagram: master VM]
Weitere ähnliche Inhalte

Was ist angesagt?

“Khronos Group Standards: Powering the Future of Embedded Vision,” a Presenta...
“Khronos Group Standards: Powering the Future of Embedded Vision,” a Presenta...“Khronos Group Standards: Powering the Future of Embedded Vision,” a Presenta...
“Khronos Group Standards: Powering the Future of Embedded Vision,” a Presenta...
Edge AI and Vision Alliance
 
GTC15-Manoj-Roge-OpenPOWER
GTC15-Manoj-Roge-OpenPOWERGTC15-Manoj-Roge-OpenPOWER
GTC15-Manoj-Roge-OpenPOWER
Achronix
 
Intel Itanium Hotchips 2011 Overview
Intel Itanium Hotchips 2011 OverviewIntel Itanium Hotchips 2011 Overview
Intel Itanium Hotchips 2011 Overview
Pauline Nist
 

Was ist angesagt? (20)

“Khronos Group Standards: Powering the Future of Embedded Vision,” a Presenta...
“Khronos Group Standards: Powering the Future of Embedded Vision,” a Presenta...“Khronos Group Standards: Powering the Future of Embedded Vision,” a Presenta...
“Khronos Group Standards: Powering the Future of Embedded Vision,” a Presenta...
 
NFF-GO (YANFF) - Yet Another Network Function Framework
NFF-GO (YANFF) - Yet Another Network Function FrameworkNFF-GO (YANFF) - Yet Another Network Function Framework
NFF-GO (YANFF) - Yet Another Network Function Framework
 
Build a Deep Learning Video Analytics Framework | SIGGRAPH 2019 Technical Ses...
Build a Deep Learning Video Analytics Framework | SIGGRAPH 2019 Technical Ses...Build a Deep Learning Video Analytics Framework | SIGGRAPH 2019 Technical Ses...
Build a Deep Learning Video Analytics Framework | SIGGRAPH 2019 Technical Ses...
 
Closed Loop Network Automation for Optimal Resource Allocation via Reinforcem...
Closed Loop Network Automation for Optimal Resource Allocation via Reinforcem...Closed Loop Network Automation for Optimal Resource Allocation via Reinforcem...
Closed Loop Network Automation for Optimal Resource Allocation via Reinforcem...
 
N(ot)-o(nly)-(Ha)doop - the DAG showdown
N(ot)-o(nly)-(Ha)doop - the DAG showdownN(ot)-o(nly)-(Ha)doop - the DAG showdown
N(ot)-o(nly)-(Ha)doop - the DAG showdown
 
Redfish & python redfish
Redfish & python redfishRedfish & python redfish
Redfish & python redfish
 
AI & Computer Vision (OpenVINO) - CPBR12
AI & Computer Vision (OpenVINO) - CPBR12AI & Computer Vision (OpenVINO) - CPBR12
AI & Computer Vision (OpenVINO) - CPBR12
 
Reducing Deep Learning Integration Costs and Maximizing Compute Efficiency| S...
Reducing Deep Learning Integration Costs and Maximizing Compute Efficiency| S...Reducing Deep Learning Integration Costs and Maximizing Compute Efficiency| S...
Reducing Deep Learning Integration Costs and Maximizing Compute Efficiency| S...
 
Developing for Industrial IoT with Linux OS on DragonBoard™ 410c: Session 4
Developing for Industrial IoT with Linux OS on DragonBoard™ 410c: Session 4Developing for Industrial IoT with Linux OS on DragonBoard™ 410c: Session 4
Developing for Industrial IoT with Linux OS on DragonBoard™ 410c: Session 4
 
GTC15-Manoj-Roge-OpenPOWER
GTC15-Manoj-Roge-OpenPOWERGTC15-Manoj-Roge-OpenPOWER
GTC15-Manoj-Roge-OpenPOWER
 
Intel Itanium Hotchips 2011 Overview
Intel Itanium Hotchips 2011 OverviewIntel Itanium Hotchips 2011 Overview
Intel Itanium Hotchips 2011 Overview
 
Road to Cloud Native Orchestration
Road to Cloud Native Orchestration Road to Cloud Native Orchestration
Road to Cloud Native Orchestration
 
SDN and NFV: Friends or Enemies
SDN and NFV: Friends or EnemiesSDN and NFV: Friends or Enemies
SDN and NFV: Friends or Enemies
 
Tyrone-Intel oneAPI Webinar: Optimized Tools for Performance-Driven, Cross-Ar...
Tyrone-Intel oneAPI Webinar: Optimized Tools for Performance-Driven, Cross-Ar...Tyrone-Intel oneAPI Webinar: Optimized Tools for Performance-Driven, Cross-Ar...
Tyrone-Intel oneAPI Webinar: Optimized Tools for Performance-Driven, Cross-Ar...
 
How to Get the Best Deep Learning performance with OpenVINO Toolkit
How to Get the Best Deep Learning performance with OpenVINO ToolkitHow to Get the Best Deep Learning performance with OpenVINO Toolkit
How to Get the Best Deep Learning performance with OpenVINO Toolkit
 
AWS & Intel Webinar Series - Accelerating AI Research
AWS & Intel Webinar Series - Accelerating AI ResearchAWS & Intel Webinar Series - Accelerating AI Research
AWS & Intel Webinar Series - Accelerating AI Research
 
Bring Out the Best in Embedded Computing
Bring Out the Best in Embedded ComputingBring Out the Best in Embedded Computing
Bring Out the Best in Embedded Computing
 
"Current and Planned Standards for Computer Vision and Machine Learning," a P...
"Current and Planned Standards for Computer Vision and Machine Learning," a P..."Current and Planned Standards for Computer Vision and Machine Learning," a P...
"Current and Planned Standards for Computer Vision and Machine Learning," a P...
 
"How to Get the Best Deep Learning Performance with the OpenVINO Toolkit," a ...
"How to Get the Best Deep Learning Performance with the OpenVINO Toolkit," a ..."How to Get the Best Deep Learning Performance with the OpenVINO Toolkit," a ...
"How to Get the Best Deep Learning Performance with the OpenVINO Toolkit," a ...
 
OpenVINO introduction
OpenVINO introductionOpenVINO introduction
OpenVINO introduction
 

Ähnlich wie ONS 2018 LA - Intel Tutorial: Cloud Native to NFV - Alon Bernstein, Cisco & Kuralamudhan Ramakrishnan, Intel

Cloud Technology: Now Entering the Business Process Phase
Cloud Technology: Now Entering the Business Process PhaseCloud Technology: Now Entering the Business Process Phase
Cloud Technology: Now Entering the Business Process Phase
finteligent
 

Ähnlich wie ONS 2018 LA - Intel Tutorial: Cloud Native to NFV - Alon Bernstein, Cisco & Kuralamudhan Ramakrishnan, Intel (20)

IntelÂŽ Select Solutions for the Network
IntelÂŽ Select Solutions for the NetworkIntelÂŽ Select Solutions for the Network
IntelÂŽ Select Solutions for the Network
 
Cloud Technology: Now Entering the Business Process Phase
Cloud Technology: Now Entering the Business Process PhaseCloud Technology: Now Entering the Business Process Phase
Cloud Technology: Now Entering the Business Process Phase
 
E5 Intel Xeon Processor E5 Family Making the Business Case
E5 Intel Xeon Processor E5 Family Making the Business Case E5 Intel Xeon Processor E5 Family Making the Business Case
E5 Intel Xeon Processor E5 Family Making the Business Case
 
Intel xeon-scalable-processors-overview
Intel xeon-scalable-processors-overviewIntel xeon-scalable-processors-overview
Intel xeon-scalable-processors-overview
 
Platform Observability and Infrastructure Closed Loops
Platform Observability and Infrastructure Closed LoopsPlatform Observability and Infrastructure Closed Loops
Platform Observability and Infrastructure Closed Loops
 
Clear Linux OS - Introduction
Clear Linux OS - IntroductionClear Linux OS - Introduction
Clear Linux OS - Introduction
 
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit...
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit...HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit...
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit...
 
Extend HPC Workloads to Amazon EC2 Instances with Intel and Rescale (CMP373-S...
Extend HPC Workloads to Amazon EC2 Instances with Intel and Rescale (CMP373-S...Extend HPC Workloads to Amazon EC2 Instances with Intel and Rescale (CMP373-S...
Extend HPC Workloads to Amazon EC2 Instances with Intel and Rescale (CMP373-S...
 
Enabling NFV features in kubernetes
Enabling NFV features in kubernetesEnabling NFV features in kubernetes
Enabling NFV features in kubernetes
 
Spring Hill (NNP-I 1000): Intel's Data Center Inference Chip
Spring Hill (NNP-I 1000): Intel's Data Center Inference ChipSpring Hill (NNP-I 1000): Intel's Data Center Inference Chip
Spring Hill (NNP-I 1000): Intel's Data Center Inference Chip
 
AIDC Summit LA- Hands-on Training
AIDC Summit LA- Hands-on Training AIDC Summit LA- Hands-on Training
AIDC Summit LA- Hands-on Training
 
IntelÂŽ XeonÂŽ Scalable Processors Enabled Applications Marketing Guide
IntelÂŽ XeonÂŽ Scalable Processors Enabled Applications Marketing GuideIntelÂŽ XeonÂŽ Scalable Processors Enabled Applications Marketing Guide
IntelÂŽ XeonÂŽ Scalable Processors Enabled Applications Marketing Guide
 
Inside story on Intel Data Center @ IDF 2013
Inside story on Intel Data Center @ IDF 2013Inside story on Intel Data Center @ IDF 2013
Inside story on Intel Data Center @ IDF 2013
 
Intel’s Big Data and Hadoop Security Initiatives - StampedeCon 2014
Intel’s Big Data and Hadoop Security Initiatives - StampedeCon 2014Intel’s Big Data and Hadoop Security Initiatives - StampedeCon 2014
Intel’s Big Data and Hadoop Security Initiatives - StampedeCon 2014
 
Hetergeneous Compute with Standards Based OFI/MPI/OpenMP Programming
Hetergeneous Compute with Standards Based OFI/MPI/OpenMP ProgrammingHetergeneous Compute with Standards Based OFI/MPI/OpenMP Programming
Hetergeneous Compute with Standards Based OFI/MPI/OpenMP Programming
 
Building Efficient Edge Nodes for Content Delivery Networks
Building Efficient Edge Nodes for Content Delivery NetworksBuilding Efficient Edge Nodes for Content Delivery Networks
Building Efficient Edge Nodes for Content Delivery Networks
 
LF_OVS_17_IPSEC and OVS DPDK
LF_OVS_17_IPSEC and OVS DPDKLF_OVS_17_IPSEC and OVS DPDK
LF_OVS_17_IPSEC and OVS DPDK
 
Re-Imagining the Data Center with Intel
Re-Imagining the Data Center with IntelRe-Imagining the Data Center with Intel
Re-Imagining the Data Center with Intel
 
DUG'20: 01 - Welcome & DAOS Update
DUG'20: 01 - Welcome & DAOS UpdateDUG'20: 01 - Welcome & DAOS Update
DUG'20: 01 - Welcome & DAOS Update
 
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSci
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSciStreamline End-to-End AI Pipelines with Intel, Databricks, and OmniSci
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSci
 

KĂźrzlich hochgeladen

Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
WSO2
 

KĂźrzlich hochgeladen (20)

Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Ransomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfRansomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdf
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectors
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot ModelNavi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
 

ONS 2018 LA - Intel Tutorial: Cloud Native to NFV - Alon Bernstein, Cisco & Kuralamudhan Ramakrishnan, Intel

  • 2. Cloud Native Network Function Virtualization Alon Bernstein Distinguished Engineer Cisco Systems Inc.
  • 3. Cloud Native Network Functions • The term “Cloud Native Networking” covers many topics • This presentation focuses on micro-services dedicated to packet forwarding • The first part of the presentation is a high level outline of the motivations and challenges in cloud native network functions (CNF) • Second part will focus on tools used for CNF • Cisco is building a CNF based CMTS (cable modem termination system).
  • 4. What Is Cloud Native Software ? • Allow a software developer to focus on the core business logic - Break functions to “micro services” - Scaling/Availability are provided by the infrastructure - Open source software tools that reduce development time (databases/buses/productivity tools etc). - “12 factor app” and domain driven design • More in “cloud native computing foundation” • How does that translate to the network function virtualization ?
  • 5. POD (x+y) POD (x+y) POD (x+y) Load sharing Assume a very simple stateless micro- service: add two numbers. Run the micro- service in a POD. - If a POD crashes just start another one. Almost no service impact - If we run out of processing capacity add another POD - If we don’t need capacity remove PODs - To upgrade the code start adding PODs with the new SW version and remove the old ones - Kubernetes declarative model helps deploy design patterns such as this example very easily. Add (x,y) Cloud Native Software By Example
  • 6. • Cloud Native NFV What are the benefits of cloud native ? • Service velocity • Availability • Scale
  • 7. What Are The Risks Of Going Cloud Native ? • Complex. • Just the infra consists of 20+ software packages. • Automation is a must. • Managing a highly distributed system is not easy. • Complete rearrangement of the organization (CI/CD, devops, merge of IT/DC/access orgs)
  • 8. What About Containers ? • Containers are a packaging option. • Cloud native uses containers, but its possible to build a container system that is not cloud native. • Containers are useful in the cloud native context because of the breakup to micro-services (lots of lightweight functions).
  • 9. • Cloud Native NFV can be high-performance What About Performance ?
  • 10. Cloud Native For NFV • Current reference for NFV is ETSI-NFV which is a “lift-and- shift” architecture; create an equivalent network appliance in a VM and place it in the data center. • Cloud native is focused on web and e-commerce. Some adjustments need to be made for NFV.
  • 11. NFV Cloud Native Goal: implement networking functions in a data center environment. Networking focused Goal: treat a data center as a single OS. Application focus Orchestration Scheduling Relies on active/standby and orchestration for scaling/availability/upgrade Relies on load balancing for scaling/availability/upgrade Network Management Framework Application Deployment Framework ETSI-NFV is a common framework Depends on the tools used Long lived application, dynamic configuration Short lived application, configuration stored centrally, immutable
  • 12. Availability • Breakdown to micro- services is the first line of defense • Cattle vs. Pets
  • 13. Two key applications • Control plane: for the most part “classic cloud native” • Data plane: not supported out-of-the-box
  • 14. Issues With Cloud Native Data Plane • IP packet streams are not HTTP transactions – how does load balancing work ? • CPU and memory are counted as resource in the Kubernetes scheduler – but what about bandwidth ? • Solutions have to be built, they don’t come as part as the basic package
  • 15. Conclusion Back to basics. Behind the term “cloud native” are: • ABCs of software engineering (modularity) • Parallel computing • Distributed systems All of the above can be, and should be applied to NFV
  • 16.
  • 18. SoftwareDefinedDatacenterSolutionGroup 18 NoticesandDisclaimersŠ 2017 Intel Corporation. Intel, the Intel logo, Xeon and Xeon logos are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. All products, computer systems, dates, and figures specified are preliminary based on current expectations, and are subject to change without notice. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. ​Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non- infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade. Intel processors of the same SKU may vary in frequency or power as a result of natural variability in the production process. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate. Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804. Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. IntelÂŽ Advanced Vector Extensions (IntelÂŽ AVX)* provides higher throughput to certain processor operations. Due to varying processor power characteristics, utilizing AVX instructions may cause a) some parts to operate at less than the rated frequency and b) some parts with IntelÂŽ Turbo Boost Technology 2.0 to not achieve any or maximum turbo frequencies. Performance varies depending on hardware, software, and system configuration and you can learn more at http://www.intel.com/go/turbo. IntelÂŽ Hyper-Threading Technology available on select IntelÂŽ processors. Requires an IntelÂŽ HT Technology-enabled system. Your performance varies depending on the specific hardware and software you use. Learn more by visiting http://www.intel.com/info/hyperthreading. All SKUs, frequencies, features and performance estimates are PRELIMINARY and can change without notice Results have been estimated based on internal Intel analysis and are provided for informational purposes only. 
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance. Configurations: Based on Intel estimates.
  • 21. SoftwareDefinedDatacenterSolutionGroup 21 ContainerBareMetalDeploymentModel Collaboratewithearlymovers,driveOpenSourcedevelopmentsandenabletheindustry VNFs vCMTS vIMS vEPC vCPE vSBC NFVi- Network SR-IOV NFV Orchestration CLOUD NATIVE COMPUTING FOUNDATION Containers Bare Metal Hardware Containerized Virtual Network Functions vRNCvCPE vEPC vFirewall vSBCvCMTS vIMS vRouter Orchestration Host OS Docker Engine
  • 22. SoftwareDefinedDatacenterSolutionGroup 22 AddresskeyChallengesincontainersBareMetal Open Source: Available on Intel github https://github.com/Intel-Corp | NFD at https://github.com/kubernetes-incubator/node-feature-discovery Ability to request/allocate platform capabilities High performance Data Plane (N-S) High performance Data Plane (E-W) Multiple network interfaces for VNFs CPU Core-Pinning and isolation for K8s pods Dynamic Huge Page allocation Platform telemetry information Challengesbeingaddressed Discovery, Advertise, schedule and manage devices with K8s Guarantee NUMA node resource alignment Kubernetes Networking Data Plane Acceleration Telemetry Enhance Platform Awareness (EPA) Open Source: CNI plug-in - V2.0 June ‘17 Upstream K8s: TBD Open Source: CNI plug-in - V1.0 Sep ‘17 Open Source: CNI plug-in - V2.0 April ‘17 Open Source: Nov. ‘16 Upstream K8: Incubation Graduation TBD Open Source: V1.2 April ‘17 Upstream K8: Phase 1 - V1.8 Sept ‘17 Upstream K8: V1.8 Sept ‘17 SOFTWARE AVAILABILITY* Upstream collectd: V5.7.2 June ‘17 ; 5.8.0 ((Q4 2017 date TBD) Solution Node Feature Discovery CPU Manager for Kubernetes Native Huge page support for Kubernetes VHOST USER SR-IOV Device Plugin NUMA Manager New implementation WIP Upstream K8: Phase 1 - V1.8 Alpha New implementation – WIP PoC proposal Upstream K8: TBD
  • 23. SoftwareDefinedDatacenterSolutionGroup 23 MultipleNetworkInterfacesforVNFs PROBLEM Kubernetes support only one Network interface – “eth0” In NFV use cases, it is required to provide multiple network interfaces to the virtualized operating environment of the VNF eth0 Pod eth1 eth2 eth0 interface Pod Container Container Container Container Container Container USE CASES Functional separation of control and data network planes link aggregation/bonding for redundancy of the network Support for implementation of different network SLAs Network segregation and Security REFERENCE Multus CNI – https://github.com/Intel-Corp/multus-cni Network plumbing WG Kubernetes proposal- Multus PoC https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7Ud TPAG_RNydhVE1Kx54kFQ Network Control Flow with Multus Pod Network Interfaces with Multus SR-IOV net1 KUBELET SR-IOV net0eth0 LINUX BRIDGE VF0 VF1 SR-IOV Logging vFirewall net0 net1 eth0 FlannelLinuxBridge Kubernetes Pod Mutli net Example
  • 24. SoftwareDefinedDatacenterSolutionGroup 24 VhostUserCNIPlugin PROBLEM No Container Networking with software acceleration for NFV, particularly for East – West Traffic SOLUTION Virtio_user/ vhost_user performance better than VETH pairs Supports VPP as well as DPDK OVS Vhost_user CNI plugin enables K8s to leverage data plane acceleration REFERENCE https://github.com/intel/vhost-user-net-plugin (V1.0 Sep ’17) NIC eth0 OVS- DPDK / VPP vhostuser Kubernetes Pod Container VNF Application DPDK virtio_user
  • 25. SoftwareDefinedDatacenterSolutionGroup 25 DPDK–SRIOVCNIPlugin Kubernetes Pod Container VNF Application DPDK Kernel SR-IOV Enabled Network Interface VFVF VF uio_pci_generic/igb_uio/vfio-pci PROBLEM Lack of support for physical platform resource isolation No guaranteed network IO performance No support for Data Plane Networking SOLUTION Allows SRIOV support in Kubernetes via a CNI plugin Supports two modes of operation: SR-IOV: SR-IOV VFs are allocated to pod network namespace DPDK: SR-IOV VFs are bounded to DPDK drivers in the userspace REFERENCE github.com/Intel-Corp/sriov-cni
  • 26. SoftwareDefinedDatacenterSolutionGroup 26 NodeFeatureDiscovery SR-IOV Network Features Single Root I/O Virtualization BootGuard A hardware-based boot integrity protection mechanism (New feature on Purley). UEFI Secure Boot Boot Firmware verification and authorization of OS Loader/Kernel components AVX CPUID Features: IntelÂŽ Advances Vector Extensions 512 (IntelÂŽ AVX 512) Turbo Boost IntelÂŽ Turbo Boost Processor accelerator Node Feature Discovery Label Details NODE FEATUR DISCOVERY IN K8sPROBLEM No way to identify hardware capabilities or configuration Inability for workload to request certain hardware feature SOLUTION Node Feature Discovery brings Enhanced Platform Awareness (EPA) in K8s NFD detects resources on each node in a Kubernetes cluster and advertises those features Allows matching of workload to platform capabilities REFERENCE github.com/kubernetes-incubator/node-feature-discovery NODE 1 NFD DISCOVERY POD NODE 2 NFD DISCOVERY POD Application A Application B POD label: Application B Application A POD label: SR-IOV Bootguard MASTER ETCD NODE 1 NODE 2 AVX Turbo boost Bootguard Secureboot SR-IOV NFD DISCOVERY POD
  • 27. SoftwareDefinedDatacenterSolutionGroup 27 CPUManagerforKubernetes–CPUPinningandIsolation PROBLEM Kubernetes has no mechanism to support core pinning and isolation Results in high priority workloads not achieving SLAs * Noisy Neighbor Workload: An application that effect causes other virtual applications that share the infrastructure to suffer from uneven performance WITHOUT CMK: CPU Pinning and Isolation Core0 CPU0 CPU1 Target Workload Core1 CPU2 CPU3 Noisy Neighbour Workload REFERENCE https://github.com/Intel-Corp/CPU-Manager-for-Kubernetes SOLUTION CPU-Manager-For-Kubernetes introduces core pinning and isolation to K8s without requiring changes to the code base CMK guarantees high priority workloads are pinned to exclusive cores Gives a performance boost to high priority applications Negates the noisy neighbour* scenario Core0 CPU0 CPU1 Target Workload Core1 CPU2 CPU3 Noisy Neighbour Workload WITH CMK: CPU Pinning and Isolation Noisy Neighbour Workload
  • 28. SoftwareDefinedDatacenterSolutionGroup 28 ExperienceKitExample:CPUManagerforKubernetesBenchmarkTest Core Isolation decrease latency of target workload up >x13 in presence of Noisy Workload Uptox55latencydecrease Test are done with 16 Target Workloads” (=16 Containers) and with or without Noisy Workload present 1 Core with 2 threads are assigned to each container. Noisy Workload uses any available (non-isolated) cores in the system Platform: IntelÂŽ XeonÂŽ Gold Processor 20C@2.00 GHz (6138T); DPDK L2 Forwarding using XXV710 NICs Core Isolation increase throughput of target-workload >200% for small packets in presence of Noisy Workload CoreIsolationleadstoperformanceconsistencysolvingnoisyworkloadsproblem Uptox4throughput increase
  • 29. SoftwareDefinedDatacenterSolutionGroup 29 DevicePlugins-overview WHY? Device vendors have to write custom Kubernetes code in order to integrate their device with the ecosystem Results in multiple vendors maintaining custom code making it difficult for a customer to consume REFERENCE https://kubernetes.io/docs/concepts/cluster-administration/device-plugins/ HOW? Provide a device plugin framework which enables vendors to advertise, schedule and setup devices with native Kubernetes integration Device Plugins are easily deployed via K8s Daemonsets and workload device requests made via extended resource requests in the Pod Specification Provides predictability to workloads as Devices are allocated in a controlled manner with device health being monitored - *Predictability to K8s* WorkloadResourceRequests: Device KiubernetesNode Kubelet–devicemgr DevicePlugin Devicea Devicec gRPC Workload Deviceb DeviceC Devicea LIMITATIONS? Device Plugin API has no de-allocation resulting in no hook to free devices on Pod completion
  • 30. SoftwareDefinedDatacenterSolutionGroup 30 NUMAManagerforKubernetes–NUMAalignmentofResources PROBLEM Kubernetes has multiple independent components that handle resource allocation resulting in no alignment on Multi NUMA Node systems Results in high priority workloads not achieving SLAs or increased resource utilization REFERENCE https://github.com/kubernetes/community/pull/1680 SOLUTION NUMA Manager provides a mechanism to guarantee NUMA Node Affinity of resources requested by a workload NUMA Manager interfaces with components( eg. CPU Manager & Device Manager) that have NUMA awareness to enable NUMA aligned resource allocations Gives a performance boost to high priority applications as resources are NUMA Node aligned CPU1 Interconnect Device0 NUMANode1 Socket0 Socket1Device1 PriorityWorkload It Interconnect NUMANode0 WITH NUMA MANAGER Device0 Socket0 It Interconnect WITHOUT NUMA MANAGER PriorityWorkloadResourceRequests: CPU Device Socket1Device1 NUMANode0 NUMANode1 PriorityWorkload
  • 32. Key Challenges to Address for Cloud Native Network Functions
  - Placement that considers network resources and QoS
  - Service function chains
  - Data plane: high-speed wiring of NFVs
  - Management: support for cloud native and high-performance Kubernetes networking, NFV-specific policy APIs, management/control
  - Solutions to work together: Ligato addresses the cloud-native implementation challenges
  • 33. Cloud CMTS • The Cloud CMTS is a cloud native implementation of the Cable Modem Termination System • The physical layer is handled by the "Remote PHY device (RPD)", similar to the PHY split in 5G • The Cloud CMTS processes byte streams below the MAC layer, because the RPD performs only modulation/demodulation at the PHY layer
  • 34. Call to Action • We need feedback on the current ingredients, e.g. Multus, SR-IOV, vhost-user • Be active in the K8s SIGs: Network, Resource Management • Try out the Ligato dev framework
  • 35. Engage with Intel
  - Intel is addressing key challenges to using containers for NFV use cases; many of these have been open sourced already (container capabilities, container networking, open-source PoCs, experience kits, best-practice guidelines, software community).
  - Explore the information available on Intel's Network Builders site; additional material will be made available throughout January 2018: https://networkbuilders.intel.com/network-technologies/container-experience-kits
  • 39. Ligato Framework for Cloud-Native VNFs
  - Diagram: a Contiv-VPP switch (VPP) connects legacy apps on the kernel host stack, high-performance apps, and cloud-native VNF pods; VNF pods run a VPP agent and VPP TCP stack and are wired over memif (IPv4/IPv6); Contiv-VPP, etcd, Ligato and the kubelet define the services and topology; the service mesh is Istio/Envoy, with an Istio sidecar in application pods.
  - Reference: Jan Medved's "Ligato: a platform for development of Cloud-Native VNFs"
  • 40. Experience Kit Example – Test Setup for the CPU Manager for Kubernetes Benchmark Test
  - Test configuration, master & minion nodes: motherboard Intel Corporation S2600WFQ; CPU Intel® Xeon® Gold 6138T processor, 2.0 GHz, 2 sockets, 20 cores, 27.5 MB cache, 125 W; memory Micron MTA36ASF2G72PZ, 1 DIMM/channel, 6 channels/socket; BIOS Intel Corporation SE5C620.86B.0X.01.0007.060920171037; NIC Intel Corporation Ethernet Controller XXV710 for 2x25GbE, firmware version 5.50; SW Ubuntu 16.04.2 64-bit, kernel 4.4.0-62-generic x86_64, DPDK 17.05. Traffic generator: IXIA IxNetwork 8.10.1046.6 EA; protocols 8.10.1105.9; IxOS 8.10.1250.8 EA-Patch1.
  - Testing results summary (noisy neighbours managed using CMK): with core isolation, performance is consistent with or without a "noisy" application present. Without the core-isolation EPA feature, in the presence of a "noisy" application: >70% throughput drop for small packet sizes, >10% throughput drop for large packet sizes, >10x packet latency increase.
  • 41. Bonding CNI Plugin
  - PROBLEM: There is no redundancy against network link failure in a container environment (e.g. failure of a NIC, network switch or cable). This results in high-priority workloads not achieving the expected high availability.
  - SOLUTION: The Bonding CNI provides a mechanism to aggregate multiple network interfaces into a single logical "bonded" interface in a container environment, providing a fail-over, high-availability network for containerized applications such as VNFs; see the configuration sketch below.
  - Diagram: a Kubernetes pod/container (VNF application) bonds two interfaces (Net 0, Net 1), each backed by a VF taken from a different SR-IOV PF.
  - REFERENCE: https://github.com/Intel-Corp/bond-cni
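  A minimal CNI network configuration sketch for a bonded interface is shown below. The field names follow the bond-cni README and may differ between versions; net1/net2 are assumed to be interfaces (e.g. SR-IOV VFs) already added to the pod's network namespace by another plugin such as Multus with the SR-IOV CNI, and the IPAM settings are placeholders.

      {
        "type": "bond",
        "cniVersion": "0.3.1",
        "name": "bond-net",
        "mode": "active-backup",
        "miimon": "100",
        "links": [
          { "name": "net1" },
          { "name": "net2" }
        ],
        "ipam": {
          "type": "host-local",
          "subnet": "192.168.1.0/24"
        }
      }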
  • 42. Hugepage Native Support in Kubernetes
  - PROBLEM: There is no resource management of huge pages in Kubernetes; it is the cluster operator's responsibility to handle them manually.
  - SOLUTION: Huge pages are introduced as a first-class resource in Kubernetes. Support for huge pages via hugetlbfs is enabled through a memory-backed volume plugin. Huge pages are accounted for natively and are automatically relinquished if a process terminates unexpectedly. A pod spec sketch is shown below.
  - REFERENCE: Alpha support for pre-allocated huge pages; hugetlbfs support based on the emptyDir volume plugin.
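  A minimal sketch of the resulting pod specification, assuming the alpha HugePages feature gate is enabled and 2Mi pages are pre-allocated on the node (the image name is a placeholder):

      apiVersion: v1
      kind: Pod
      metadata:
        name: hugepage-app
      spec:
        containers:
        - name: dpdkcontainer
          image: dpdk-app-image                 # placeholder image
          resources:
            requests:
              memory: "1Gi"
              hugepages-2Mi: "512Mi"            # huge pages requested as a first-class resource
            limits:
              memory: "1Gi"
              hugepages-2Mi: "512Mi"
          volumeMounts:
          - name: hugepage
            mountPath: /dev/hugepages
        volumes:
        - name: hugepage
          emptyDir:
            medium: HugePages                   # hugetlbfs-backed emptyDir volume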
  • 43. QAT Support in Kubernetes
  - PROBLEM: There is no way to identify the QAT (Intel® QuickAssist Technology) devices available in a Kubernetes cluster, and no way for a workload to request a QAT device along with other compute resources.
  - SOLUTION: QAT support is enabled through device plugins, introduced in Kubernetes 1.8 – a vendor-independent framework for users to consume hardware devices. It enables device discovery, advertisement to the kubelet, allocation to pods and device health checks.
  - REFERENCE: https://kubernetes.io/docs/concepts/cluster-administration/device-plugins
  - Example pod spec from the slide (opaque integer resource request for a QAT device):

      apiVersion: v1
      kind: Pod
      metadata:
        name: dpdkpod
      spec:
        containers:
        - image: dpdkapp
          name: dpdkcontainer
          resources:
            requests:
              cpu: "10"
              memory: "4Gi"
              pod.alpha.kubernetes.io/opaque-int-resource-qat: '1'
            limits:
              cpu: "10"
              memory: "4Gi"
              pod.alpha.kubernetes.io/opaque-int-resource-qat: '1'
  • 44. NFD Secure Boot Use Case
  - PROBLEM: The kernel does not allow IGB_UIO-based DPDK applications on UEFI Secure Boot enabled systems.
  - SOLUTION: Use node anti-affinity (node affinity with a NotIn operator) in Kubernetes to prevent DPDK applications that require IGB_UIO driver support from landing on nodes carrying the Secure Boot label created by Node Feature Discovery.
  - Example pod spec (node affinity with a NotIn operator keeps the pod off Secure Boot nodes):

      apiVersion: v1
      kind: Pod
      metadata:
        name: dpdkpod-requiring-uio-support
      spec:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: "nfd-SecureBoot"
                  operator: NotIn
                  values:
                  - "true"
        containers:
        - image: dpdkapp
          name: dpdkcontainer
  • 45. Platform Telemetry Support in Kubernetes
  - Diagram: container and platform telemetry flow from the base platform (compute, network, storage; Intel® Run Sure Technologies; Intel® Infrastructure Management Technologies) through open collection and resource telemetry interfaces into Prometheus; container telemetry is collected via cAdvisor; the service mesh is Istio/Envoy. A minimal Prometheus scrape-config sketch is shown below.
  - See the Platform Service Assurance site (not yet including container-specific data): https://networkbuilders.intel.com/network-technologies/serviceassurance
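  A minimal Prometheus scrape-configuration sketch for collecting container telemetry from cAdvisor: it assumes a standalone cAdvisor exposing metrics on port 8080 of each node (the hostnames are placeholders); in many deployments cAdvisor metrics are instead scraped through the kubelet.

      scrape_configs:
      - job_name: 'cadvisor'
        static_configs:
        - targets: ['node0:8080', 'node1:8080']   # hypothetical node hostnames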
  • 46. Container Networking Deployment Considerations – Multiple Deployment Models
  - VNFs: vCMTS, vIMS, vEPC, vCPE, vSBC
  - NFVi network: SR-IOV; NFV orchestration; Cloud Native Computing Foundation tooling
  - Deployment models: containers on bare metal, containers in VMs, and hybrid/unified models mixing containers and VMs
  • 47. Container Unified Infrastructure Deployment Model
  - Diagram: VNFs (vCMTS, vIMS, vEPC, vCPE, vSBC) run as containerized apps; each VM runs a guest OS and a Docker engine on top of a hypervisor on shared hardware, managed by a hardware orchestration layer; NFV orchestration and Cloud Native Computing Foundation tooling sit above, with SR-IOV in the NFVi network.
  • 48. Industry Challenges in a Containers Unified Infrastructure – challenges being addressed and solutions:
  - CPU core pinning for Kuryr-K8s pods → CPU Manager for Kubernetes
  - Support for a high-performance data plane (E-W) and multiple network interfaces for VNFs → same as in the containers-on-bare-metal model
  - Removing network performance penalties for containers running in VMs → Kuryr-Kubernetes (master VM)