Synopsis
During this session Brian will provide an overview of how Intel drives and simplifies network transformation and the adoption of SDN and NFV in telco, cloud, and enterprise. The discussion will focus on Intel ONP, a software reference architecture platform that integrates open source software and hardware elements optimized for SDN and NFV. We will show how Intel ONP is addressing performance, manageability, and scalability gaps through contributions to Open vSwitch, DPDK, OpenStack, and OpenDaylight. Overall, Intel ONP is a "better together" integrated reference architecture that can be used to accelerate development efforts, evaluations, and trials of SDN and NFV solutions.
About Brian Skerry
Brian is an architect within Intel's Network Platforms Group working on a number of SDN and NFV initiatives. He has been the lead architect on an open source reference platform for NFV within Intel, targeted at telco and data center use cases. His focus for the past few years has been on using server platform technologies to achieve the NFV vision, and he has worked with various industry partners and service providers on developing proof-of-concept systems. Previously, Brian was the lead software architect for a number of platforms targeted at the communications industry.
Intel Open Network Platform
1. Intel Open Network Platform
Brian Skerry, Sr. SW Architect
Network Platforms Group, Intel
2. SDN and NFV are Driving Network Transformation
[Diagram: single applications (firewall, VPN, intrusion detection system) on dedicated TEM/OEM hardware with proprietary OSes and ASIC/DSP/FPGA/ASSP silicon transform into open source SDN/NFV VMs (firewall, VPN, NAT, DPI, LB) running on Intel Xeon processors, chipset, acceleration, switch silicon, and NIC silicon, under NFV management and orchestration; SDN/NFV infrastructure innovation]
Enabling the server to become the new networking platform
3. Enable the Transformation
Advance Open Source and Standards
Deliver Open Reference Architecture
Enable Open Ecosystem on IA
Collaborate on Trials and Deployments
4. Intel® Open Network Platform (Intel® ONP)
Intel® ONP software ingredients based on open source and open standards; industry standard server based on Intel architecture.
What is the Intel® ONP Reference Architecture?*
• A reference architecture that brings together hardware and open source software ingredients
• An optimized server architecture for SDN/NFV in telco, enterprise, and cloud
• A vehicle to drive development and to showcase solutions for SDN/NFV based on IA
*Not a commercial product
[Diagram: VM over a virtual switch with DPDK and hardware offload, running on Linux/KVM]
5. ETSI NFV Goals
• Improved CAPEX via COTS (instead of dedicated hardware)
• Flexibility in assigning VNFs to hardware
• Rapid service innovation
• Improved OPEX from automation
• Reduced power usage by migrating workloads (so unused hardware can be powered down)
• Standardized and open interfaces between VNF and NFVI (to enable multi-vendor solutions)
Adapted from: http://www.etsi.org/deliver/etsi_gs/nfv/001_099/002/01.01.01_60/gs_nfv002v010101p.pdf
[Diagram: VNF 1, VNF 2, and VNF 3 running on the NFVI (NFVI software over NFVI hardware)]
6. Key Requirements for the Network Function Virtualization Data Plane
• Programmability: software programmable and flexible; control plane and policy controls
• Scalability: scales across multi-core CPU options; generational scalability
• Efficiency: best-in-class perf/watt/$$; seamless integration of platform accelerators
• High performance: 40G/100G/n×100G line rate over time; real-time latency and jitter characteristics
• Security: data security encompassing platform security, network security, storage security, trust and attestation
7. VNF Virtual Network Interface Options
[Diagram: six attachment options arranged along two axes, performance vs. flexibility/VNF-NFVI independence:
• VNF A: network app on the kernel stack with virtio, through a stock vSwitch, to any NIC
• VNF B: network app on DPDK with virtio, through a stock vSwitch, to any NIC
• VNF C: network app on DPDK with virtio, through a DPDK vSwitch, to the NIC
• VNF D: network app on DPDK with IVSHMEM, through a DPDK vSwitch, to the NIC
• VNF E: network app on a DPDK NIC VF PMD, directly to an SR-IOV NIC
• VNF F: network app on the kernel stack with a NIC VF driver, directly to an SR-IOV NIC]
vSwitch acceleration is the optimal solution for a scalable NFVI
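For orientation, a VNF C-style attachment (a DPDK virtio guest behind a DPDK vSwitch) is typically wired up with a vhost-user socket on the QEMU command line. The sketch below is a minimal, illustrative invocation that assumes OVS with DPDK has already created a dpdkvhostuser port named vhost-user0; the socket path, memory size, and image name are assumptions, not ONP-mandated values:

# Guest RAM must be hugepage-backed and shared for vhost-user to work
qemu-system-x86_64 -enable-kvm -cpu host -smp 3 -m 2048 \
  -object memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user0 \
  -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
  -device virtio-net-pci,netdev=net0 \
  -drive file=fedora21.img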
8. Open Network Platform
[Diagram: ONP nodes in an NFV deployment. OSS/BSS, a (service) orchestrator, a VNF manager, and EMSs sit above OpenStack (with plugin enhancements) and control OCP nodes over OpenFlow, OVSDB, and other interfaces. Each ONP node runs Linux/KVM with Open vSwitch with DPDK on the NIC; one cluster hosts telco VNFs (vCPE, vBRAS, vEPC) and another hosts enterprise workloads (server app, vFW, vADC)]
9. Open vSwitch
[Diagram: standard Open vSwitch architecture. In user space, ovs-vswitchd (with its DPIF) and ovsdb-server, managed by an external OpenDaylight controller over OVSDB and OpenFlow. Kernel packet processing goes through the OVS kernel module to the NIC, with user-space forwarding as fallback; the VM attaches through QEMU and virtio over TAP/socket netdevs]
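As a reference point, the OVSDB and OpenFlow hookup to an external OpenDaylight controller shown above is usually established with two ovs-vsctl commands; the controller address is a placeholder and the bridge name br0 is illustrative:

# ovs-vsctl set-manager tcp:<odl-ip>:6640        # OVSDB management connection
# ovs-vsctl set-controller br0 tcp:<odl-ip>:6633 # OpenFlow connection for this bridge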
10. Open vSwitch with DPDK
[Diagram: as in the previous slide, but ovs-vswitchd links the DPDK libraries and poll-mode driver (PMD) through a DPDK netdev, bypassing the OVS kernel module on the fast path; VMs attach via vhost-user and virtio. Tunnels are supported]
Available on openvswitch.org
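As a rough sketch of how this datapath is assembled with Open vSwitch 2.4 (following the INSTALL.DPDK guide of that release), the daemon is started with DPDK EAL arguments and ports are created on the user-space (netdev) datapath. The core mask, database socket path, and port names below are illustrative assumptions; huge pages must be mounted and the physical NIC bound to a DPDK-compatible driver beforehand:

# ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
# ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev                        # user-space datapath
# ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk                      # physical DPDK port
# ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser # VM-facing port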
11. Open vSwitch 2.4: Platform Performance Configuration
Server platform: Intel® Server Board S2600WT2 DP (formerly Wildcat Pass); 2 x 1GbE integrated LAN ports; two processors per platform
Chipset: Intel® C610 series chipset (formerly Wellsburg)
Processor: Intel® Xeon® Processor E5-2697 v3 (formerly Haswell); speed and power: 2.60 GHz, 145 W; cache: 35 MB per processor; cores: 14 cores (28 hyper-threaded) per processor, 56 hyper-threaded cores total; QPI: 9.6 GT/s; memory types: DDR4-1600/1866/2133; reference: http://ark.intel.com/products/81059/Intel-Xeon-Processor-E5-2697-v3-35M-Cache-2_60-GHz
Memory: Micron 16 GB 1Rx4 PC4-2133 MHz; 16 GB per channel, 8 channels, 128 GB total
Local storage: 500 GB Seagate Barracuda 7200.12 SATA HDD (SN: 9VMKQZMT)
PCIe: Port 3a and Port 3c, x8
NICs: 2 x Intel® Ethernet CNA X710-DA2 adapters (formerly Fortville); 4 x 10GbE ports total
BIOS: version SE5C610.86B.01.01.0008.021120151325, dated 02/11/2015
12. Open vSwitch 2.4: Phy-OVS-Phy Performance
Disclaimer: For more complete information about performance and benchmark results, visit www.intel.com/benchmarks and https://download.01.org/packet-processing/ONPS1.5/Intel_ONP_Server_Release_1.5_Performance_Test_Report_Rev1.2.pdf
13. Open vSwitch 2.4: Phy-VM-Phy Performance (Aggregate Switching Rate)
Disclaimer: For more complete information about performance and benchmark results, visit www.intel.com/benchmarks and https://download.01.org/packet-processing/ONPS1.5/Intel_ONP_Server_Release_1.5_Performance_Test_Report_Rev1.2.pdf
14. Need for an Efficient Data Plane for NFV
[Diagram: two service chains spanning servers A, B, and C, each server running VNF1..VNFn over a vSwitch and NIC. The vSwitches handle VXLAN-GPE encapsulation, NSH forwarding, and L2/L3 forwarding and routing. NSH: Network Services Header; VNF: Virtual Network Function]
A programmable, scalable, efficient, and high-performance data plane is a key requirement for NFV deployments
15. Open vSwitch 2.4: Phy-OVS Tunnel-Phy Performance (Aggregate Switching Rate)
Disclaimer: For more complete information about performance and benchmark results, visit www.intel.com/benchmarks and https://download.01.org/packet-processing/ONPS1.5/Intel_ONP_Server_Release_1.5_Performance_Test_Report_Rev1.2.pdf
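For context, the tunnel path measured here corresponds to an OVS tunnel port. A VXLAN port on the user-space datapath can be created roughly as follows; the bridge name and remote IP are illustrative placeholders:

# ovs-vsctl add-port br0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=<remote-host-ip>
# With the netdev (DPDK) datapath, encapsulation and decapsulation run in user space on the PMD cores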
16. System Settings
System capability: version
Host operating system: Fedora 21 x86_64 (Server), kernel 3.17.4-301.fc21.x86_64
VM operating system: Fedora 21 (Server), kernel 3.17.4-301.fc21.x86_64
libvirt: libvirt-1.2.9.3-2.fc21.x86_64
QEMU: QEMU-KVM 2.2.1 (http://wiki.qemu-project.org/download/qemu-2.2.1.tar.bz2)
DPDK: DPDK 2.0.0 (http://www.dpdk.org/browse/dpdk/snapshot/dpdk-2.0.0.tar.gz)
OVS with DPDK netdev: Open vSwitch 2.4.0 (http://openvswitch.org/releases/openvswitch-2.4.0.tar.gz)
System capability: description
Host boot settings: hugepage size = 1 GB with 16 hugepages, or hugepage size = 2 MB with 2048 hugepages; intel_iommu=off; hyper-threading disabled: isolcpus=1-13,15-27; hyper-threading enabled: isolcpus=1-13,15-27,29-41,43-55
VM kernel boot parameters: GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora-server/root rd.lvm.lv=fedora-server/swap default_hugepagesz=1G hugepagesz=1G hugepages=1 hugepagesz=2M hugepages=1024 isolcpus=1,2 rhgb quiet"
System capability: configuration
DPDK compilation: CONFIG_RTE_BUILD_COMBINE_LIBS=y; CONFIG_RTE_LIBRTE_VHOST=y; CONFIG_RTE_LIBRTE_VHOST_USER=y; DPDK compiled with "-Ofast -g"
OVS compilation: OVS configured and compiled as follows:
# ./configure --with-dpdk=<DPDK SDK PATH>/x86_64-native-linuxapp CFLAGS="-Ofast -g"
# make CFLAGS="-Ofast -g -march=native"
DPDK forwarding applications:
Build l3fwd (in l3fwd/main.c):
#define RTE_TEST_RX_DESC_DEFAULT 2048
#define RTE_TEST_TX_DESC_DEFAULT 2048
Build l2fwd (in l2fwd/main.c):
#define NB_MBUF 16384
#define RTE_TEST_RX_DESC_DEFAULT 2048
#define RTE_TEST_TX_DESC_DEFAULT 2048
Build testpmd (in test-pmd/testpmd.c):
#define RTE_TEST_RX_DESC_DEFAULT 2048
#define RTE_TEST_TX_DESC_DEFAULT 2048
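Once built with the descriptor settings above, the forwarding applications are launched with standard DPDK EAL options. The core and port masks below are illustrative, not the report's exact test parameters:

# ./l3fwd -c 0x6 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)"  # ports 0-1, one RX queue each, on cores 1-2
# ./testpmd -c 0x7 -n 4 -- -i                                # interactive forwarding between DPDK ports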
17. System Settings (continued)
Linux OS services settings:
# systemctl disable NetworkManager.service
# chkconfig network on
# systemctl restart network.service
# systemctl stop NetworkManager.service
# systemctl stop firewalld.service
# systemctl disable firewalld.service
# systemctl stop irqbalance.service
# killall irqbalance
# systemctl disable irqbalance.service
# service iptables stop
# echo 0 > /proc/sys/kernel/randomize_va_space
# SELinux disabled
# net.ipv4.ip_forward=0
Uncore frequency settings: set the uncore frequency to the maximum ratio.
PCI settings:
# setpci -s 00:03.0 184.l
0000000
# setpci -s 00:03.2 184.l
0000000
# setpci -s 00:03.0 184.l=0x1408
# setpci -s 00:03.2 184.l=0x1408
Linux module settings:
# rmmod ipmi_msghandler
# rmmod ipmi_si
# rmmod ipmi_devintf
18. DPDK Acceleration Enhancements
[Diagram: DPDK architecture. The focus to date has been the Data Plane Development Kit (DPDK) API: core libraries (EAL, MALLOC, MBUF, MEMPOOL, RING, TIMER), platform components (KNI, POWER, IVSHMEM), classification (ACL, LPM, HASH, and third-party classify), QoS (METER, SCHED), packet access via PMDs and ETHDEV (e1000, ixgbe, i40e, fm10k, bonding, af_pkt, cxgbe, enic, mlx4, memnic, ring, virtio, vmxnet3, xenvirt, third-party, and others), and utilities (IP Frag, CMDLINE, JOBSTAT, KVARGS, REORDER, TABLE, PIPELINE). DPDK-AE (Acceleration Enhancements) adds crypto devices (AES-NI, QAT), DPI devices (Hyperscan), classification devices (RRC), future devices, SoC PMDs for SoCs under a simple SoC model, an external memory manager, an event system, lightweight threads, network stacks, storage and file systems, the Pktgen traffic generator, example applications, and future features]
19. Key Intel Development Areas
User needs and key focus areas:
• Cost reduction & efficiency: application metadata catalog for intelligent scheduling; storage policies & erasure codes; VXLAN support for the vSwitch; capacity and bandwidth monitoring
• High availability: probable root cause analysis and continuous analytics; platform status monitoring; live migration and host evacuation
• Trust & compliance: Trusted Compute Pools, including bare metal; boundary control or geotagging; role-based access control; enabling Firewall as a Service
• Performance: accelerated packet processing (Open vSwitch with DPDK); intelligent scheduling through enhanced platform awareness (CPU features, PCI Express* accelerators, SR-IOV, etc.)
• Deployability & stability: improved installation & upgradability; disaster recovery capabilities; user experience and scalability
Technology alignment: Intel® VT, ASA; Intel® TXT; AES-NI, AVX, DPDK, Intel® QuickAssist; Node Manager, cache/memory QoS; Intel® RSA, Intel® AMT, Intel® vPro
20. The Need for Enhanced Platform Awareness
Port degradation: up to 50%. System degradation: up to 2.5x.
Source: Telefonica and Intel testing
21. Example: Intel Contributions to OpenStack Kilo
[Diagram: a dual-socket Haswell (Grantley) platform under a VIM, with application processes, cores, and memory on each socket and NICs attached per socket]
Enhanced Platform Awareness (EPA) leading to improved SLAs:
• Non-uniform memory access (NUMA) topology filter for memory proximity
• NUMA I/O awareness
* Other names and brands may be claimed as the property of others.
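For reference, the NUMA topology filter is activated in Kilo by adding it to the Nova scheduler's filter list. This nova.conf fragment is a minimal sketch, with the surrounding filter list abbreviated to common defaults:

# In /etc/nova/nova.conf, [DEFAULT] section, on the controller node:
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter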
22. MANO EPA Features
Non-Uniform Memory Architecture (NUMA) CPU & memory configuration (co-located memory and socket)
NUMA I/O device locality configuration (co-located PCI device and socket)
CPU pinning
Huge page support (2 MB/1 GB)
QAT
TXT, Trusted Compute Pools
AES-NI, AVX, SSE4.2, RDRAND (instruction set extensions)
CPU model (explicit model match)
CPU LLC (cache size)
vSwitches (type, capability): OVS specified, with or without DPDK
LLC utilization
CPU DDIO (direct I/O)
CAT (cache allocation)
A sketch of how some of these features are requested through Nova flavors follows below.
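A minimal sketch of requesting CPU pinning, huge pages, and NUMA placement in Kilo via Nova flavor extra specs; the flavor name and sizes are assumptions:

# nova flavor-create onp.demo auto 4096 20 4           # 4 GB RAM, 20 GB disk, 4 vCPUs
# nova flavor-key onp.demo set hw:cpu_policy=dedicated # CPU pinning
# nova flavor-key onp.demo set hw:mem_page_size=1048576 # back guest RAM with 1 GB huge pages
# nova flavor-key onp.demo set hw:numa_nodes=1         # confine the guest to one NUMA node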
24. Service Function Chaining Use Case
[Diagram: OpenStack (Neutron, Nova) and policy (Tacker? GBP?) feeding OpenDaylight (SFC, GBP), which programs Open vSwitch over OVSDB/OpenFlow as the classifier and service function forwarder (SFF) for Service Functions 1, 2, and 3]
25. ONP 1.5 Deliverables and Ingredients
Documents:
• Intel® ONP Server 1.5 Release Notes: system description and solution ingredients list, new features description, system limitations, and installation instructions
• Intel® ONP Server 1.5 Reference Architecture Guide: scripts for the integration of the Intel® ONP Server Release 1.5 Reference Architecture; content includes high-level architecture, setup and configuration procedures, and integration learnings
• Intel® ONP Server 1.5 Benchmark Performance Test Report: performance characterization and baseline performance data based on ONP release 1.5 software
• Intel® ONP Server 1.5 Application Note: integration activities were done on the configuration in the Reference Architecture Guide; benchmarking activities were done on the configuration in the Benchmark Performance Test Report; the Application Note describes the differences between these two configurations
Reference architecture: Intel® Xeon® E5-2600 v3, Intel® Ethernet Controller XL710, Fedora, DPDK (industry standard high-volume server, SHVS, with integrated software)
Intel ONP Server 1.5 ingredient versions: Kilo 2015.1.1 (OpenStack), Lithium SR1 (OpenDaylight), v2.4.90, v2.0, 2.3.0.5, Fedora v21
26. ONP Key Take-aways
• Fully open source, aligned with major upstream projects
• Quarterly cadence allows rapid turnaround on complex use cases
• Optimized for the latest Intel platforms
• Benchmarks showing platform and ingredient improvements
• Ingredients aligned with a subset of OPNFV projects
• Available now at: https://01.org/packet-processing/intel%C2%AE-onp-servers
Key takeaway:
Data traffic growth and the need for better business operations are forcing a transformation in how data centers and networks are designed and operated.
Intel is committed to and actively supporting the SDN/NFV market transformation.
Intel is enabling the server to become the new networking platform.
The Intel Open Network Platform (ONP) Server project is an enabler supporting this transformation.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Speaking Notes:
The growth in data traffic is a fact that we are all aware of. Cisco estimates that by 2016 there will be over 1.6B mobile devices in the marketplace and that data traffic will continue to increase 3x by 2018.
Businesses are looking to support the increasing data traffic while seeking better business operations, network agility, more efficiency, and cost cutting.
All these facts are forcing a transformation in how data centers and networks are designed and operated.
Software Defined Networking and Network Function Virtualization (SDN and NFV) are forces of change that are driving and enabling the network transformation.
SDN architectures decouple network control and forwarding functions, enabling the underlying infrastructure to be abstracted from the applications (and network services).
NFV uses virtualization to migrate from proprietary fixed-function boxes to software applications on standard high-volume servers.
As you can see on the slide, the left-hand side shows monolithic, vertically integrated devices such as routers, VPNs, and firewalls. SDN and NFV enable the transformation into systems based on standard high-volume servers, running these network functions in software. In practice, this enables the server to become the new networking platform.
The question is: what is needed to ease the migration from traditional network topologies to next-generation networks where functions are virtualized?
Intel is committed to and actively supporting the SDN/NFV market transformation.
The Intel Open Network Platform (ONP) project is an enabler supporting this transformation.
End speaking
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
From this (traditional networking topology pain points):
Monolithic, vertically integrated boxes
Proprietary solutions
Manual provisioning (not agile or efficient)
To this, the vision (next-generation SDN/NFV networks):
Networking within VMs on standard IA server hardware
Open source/standard solutions (OpenStack, OpenDaylight, etc.)
Automated provisioning (increased service agility and efficient network operation)
Intel influences the transformation through a four-part strategy (this is what SDND is all about: SDND enables the transformation).
The four elements of the strategy feed each other, creating a strong foundation for the industry to leverage.
1. Advance Open Source and Open Standards
Promote and contribute to industry standards and open source solutions for interoperability
Committed to "open" standards for a competitive market
2. Deliver Open Reference Designs
Leading performance, security, open source software, and reference designs
Enable industry-leading manageability by exposing health, state, and resource availability for optimal workload placement and configuration
3. Enable Open Ecosystem on IA
Enable TEMs/OEMs to deliver solutions optimized for industry-leading performance, power, cost, and security
4. Collaborate on Trials and Deployments
Building solution experience with leading enterprise, telco, and cloud service providers and vendors
vSwitches are becoming overloaded:
Difficult to handle high data bandwidth with predictable latency
Complex features such as NSH tunneling and forwarding consume CPU cores
Now, via OpenStack, Enhanced Platform Awareness has progressively added features since the Havana release in 2013. The Kilo release is now public.
The list of EPA features keeps growing. Intel is working on implementing them at both the VIM and NFVO layers.