OVS connection tracking for Mobile use cases
Anita Tragler, Product Manager - Networking/NFV Platform
Franck Baudin, Principal Product Manager - OpenStack NFV
November 2017 - OVS Conference
2
Mobile networks deployment today/yesterday
(Picture credits: Wikipedia)
1 ATCA blade == 1 VM == 1 VNFc
1 VNF == N x VNFc
3
vEPC Mobile traffic profile
● The majority (~75%) are short-duration flows at < 100 Kbps
● Large number of simultaneous calls - 1 million flows
● High incoming call rate of 100K - 200K connections per second (cps)
● Need for a distributed firewall at the vSwitch
● Per-call statistics for billing - call duration, bandwidth, source & destination IP
4
10 Gbps "real Mobile traffic" profile injection
[Chart: established user flows [1] (== conntracks) over time. Flows are created at 200k flows/s up to 1 M concurrent flows; the first flow expires at 5 s and the last flow is created at 305 s, so in steady state 200k flows/s are created and 200k flows/s are destroyed. Each bidirectional flow ⇔ 2 x OpenFlow microflows. Load: 10 Gbps, 4 Mpps, average frame size ~600 B.]
[1] one 5-tuple (L2/L3/L4). Ex: one DNS request: MAC/IP/UDP; one HTTP session: MAC/IP/TCP
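A quick back-of-the-envelope check of these figures (a hypothetical sketch; the constants come from the chart, and the bidirectional reading of the 4 Mpps figure is an assumption):

# Hypothetical sanity check of the traffic profile above; not test tooling.
FLOW_RATE = 200_000        # flows/s created (and destroyed, in steady state)
FLOW_LIFETIME_S = 5        # each call lasts ~5 s
THROUGHPUT_GBPS = 10
AVG_FRAME_BYTES = 600

# Little's law: concurrent flows = arrival rate x lifetime
steady_state_flows = FLOW_RATE * FLOW_LIFETIME_S
print(f"steady-state flows:  {steady_state_flows:,}")        # 1,000,000
# each bidirectional conntrack appears as 2 OpenFlow microflows
print(f"datapath microflows: {2 * steady_state_flows:,}")    # 2,000,000

# Packet rate implied by 10 Gbps at ~600 B frames (+20 B preamble/IFG per
# frame on the wire): ~2 Mpps per direction, so ~4 Mpps bidirectional
# (assumption), consistent with the 4 Mpps on the chart.
pps = THROUGHPUT_GBPS * 1e9 / ((AVG_FRAME_BYTES + 20) * 8)
print(f"packet rate: {pps / 1e6:.1f} Mpps per direction")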
5
Traffic profile - Key Parameters
Packet size
Typically only the packet header is accessed (one cache line)…
● except for virtualization with vhost-user/vhost-net (hypervisor on host), since the guest requires a payload memcpy
● except for IPsec: segmentation/reassembly or packet ordering; not a priority for a vSwitch
● except when we terminate a connection (SSL, TCP, UDP); not as relevant to NFV
Flows or connections: creation/destruction of flows per second (a toy sketch follows below)
Flow creation: not in the flow table or cache, upcall to add the flow => bucket allocation
Flow destruction: TCP FIN + timer, UDP timer, LRU recycling (flow hash table entry recycling)…
Performance depends on
● the number of flows in the flow table, and
● the rate of incoming flows
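To make the creation/destruction cycle concrete, here is a toy Python model of a flow table with idle-timeout eviction. It is purely illustrative: the names are invented, and real OVS uses EMC/megaflow caches plus revalidator threads rather than anything this simple.

import time

FLOW_TIMEOUT_S = 5.0                   # UDP-style idle timeout

flow_table: dict[tuple, float] = {}    # 5-tuple -> last-seen timestamp

def upcall_install(five_tuple: tuple) -> None:
    # Stand-in for the slow path that allocates a bucket and installs
    # a datapath flow on a table/cache miss.
    pass

def handle_packet(five_tuple: tuple) -> None:
    if five_tuple not in flow_table:
        upcall_install(five_tuple)               # flow creation: miss -> upcall
    flow_table[five_tuple] = time.monotonic()    # refresh the idle timer

def expire_flows() -> None:
    # Flow destruction: periodic sweep of idle entries (timer-based);
    # TCP FIN handling and LRU recycling are omitted here.
    now = time.monotonic()
    for ft in [k for k, seen in flow_table.items()
               if now - seen > FLOW_TIMEOUT_S]:
        del flow_table[ft]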
6
What metrics to measure?
In particular for NEPs (VNF vendors)
1. Performance versus the number of cores, with minimal OF rules and varying packet sizes
a. Mpps (cycles/packet)
b. Latency
c. Jitter
2. Performance evolution versus the number of conntrack entries, IP routes, …
a. For various core counts
b. Mpps, latency, jitter
7
Datapath performance: measurement units
RFC 2544 permits finding the maximum packet throughput before packets are dropped, i.e. when the
target is loaded at 100%:
● X Mpps ⇔ 100% system load ⇔ 0% idle for N cores running at F GHz
○ cycles/packet = (F x 10^3) / (X / N) = (F x 10^3 x N) / X
○ 200 cycles/packet for 10 Mpps per core at 2 GHz
○ This measure is an average (bulk)
● Gbps = ("inter-frame gap and preamble equivalent bits" + "frame bits") x Mpps / 1000
○ For 64-byte frames (CRC included): Gbps = ((20 + 64) x 8) x Mpps / 1000
Telco VMs (VNFs) typically use cycles/packet internally and Gbps/Mpps externally (marketing)
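Both conversions are easy to script. A small helper (the function and variable names are ours, not VSPerf's) that reproduces the two examples above:

def cycles_per_packet(freq_ghz: float, mpps_total: float, cores: int) -> float:
    """Average cycles per packet when `cores` cores at `freq_ghz` GHz are
    100% busy forwarding `mpps_total` Mpps in aggregate."""
    return cores * freq_ghz * 1e3 / mpps_total

def gbps(frame_bytes: int, mpps: float) -> float:
    """Wire-level throughput, counting 20 B of preamble + inter-frame gap."""
    return (20 + frame_bytes) * 8 * mpps / 1e3

print(cycles_per_packet(freq_ghz=2.0, mpps_total=10.0, cores=1))  # 200.0
print(gbps(frame_bytes=64, mpps=14.88))                           # ~10.0, i.e. 10 GbE line rate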
8
Measurement methodology overview
All tests developed within the OPNFV VSPerf project
All measurements (next slides) done with:
● OVS 2.7
● IPv4 traffic
● straight NUMA (everything NUMA-local)
● RFC 2544, 0% acceptable loss rate, 2-minute iterations
● UDP flows with a changing 5-tuple, referred to as "flows" in the next slides
● DPDK testpmd in the VM, so the VM is never the bottleneck (verified)
● a Telco-grade traffic generator (TRex; an appliance would do as well), not iperf!!
[Diagram: traffic generator → bond → OVS-DPDK → vhost-user/virtio → VM running DPDK testpmd]
Credit: Maryam Tahhan - Al Morton, Benchmarking Virtual Switches in OPNFV, draft-vsperf-bmwg-vswitch-opnfv-01
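For reference, a minimal sketch of what such a UDP profile with a rotating 5-tuple can look like with TRex's stateless Python API. The IP range, port, rate and duration are placeholders, not the profile actually used for these tests; check the API details against your TRex release.

# Hypothetical TRex stateless profile: continuous UDP with the source IP
# rotated over ~1k values, so each packet maps to a different "flow".
from trex_stl_lib.api import *   # TRex API; also pulls in Scapy (Ether/IP/UDP)

base_pkt = Ether() / IP(src="10.0.0.1", dst="20.0.0.1") / UDP(sport=1234, dport=1234)

vm = STLScVmRaw([
    STLVmFlowVar(name="ip_src", min_value="10.0.0.1",
                 max_value="10.0.3.232", size=4, op="inc"),  # ~1k flows
    STLVmWrFlowVar(fv_name="ip_src", pkt_offset="IP.src"),
    STLVmFixIpv4(offset="IP"),   # recompute the IPv4 checksum after the rewrite
])

stream = STLStream(packet=STLPktBuilder(pkt=base_pkt, vm=vm),
                   mode=STLTXCont(pps=1_000_000))   # placeholder rate

c = STLClient()
c.connect()
try:
    c.reset(ports=[0])
    c.add_streams(stream, ports=[0])
    c.start(ports=[0], duration=120)   # one 2-minute iteration, as above
    c.wait_on_traffic(ports=[0])
finally:
    c.disconnect()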
9
Conntrack test results
Thanks to our QE team
● Christian Trautman
● Qi Jun Ding
and our dev team
● Flavio Leitner
● Aaron Conole
10
TCP Stateful conntrack - test profile
Use TRex packet replay
Use 600 B IPv4 data packets
Short calls with timeout = 5 s
Scale the number of connections
[Diagram: TRex client (port1) and server (port2) → bond → OVS + conntrack → vhost-net/virtio → VM running DPDK testpmd (macswap). Client side: TCP SYN, ACK, Data, FIN; server side: TCP SYN_ACK, Data, FIN_ACK.]
11
Conntrack test configuration
Open vSwitch 2.7 and DPDK 16.11
Conntrack rule, 4-tuple: match source IP, destination IP, source port and destination port. Untracked packets (ct_state=-trk) hit table 0 and are sent through conntrack into table 1, which commits new connections and forwards tracked packets between the client and server port pairs:
ovs-ofctl add-flow ovsbr0 "table=0,priority=100,ip,nw_src=10.0.0.1/12,nw_dst=20.0.0.1/12,udp,tp_src=1234,tp_dst=1234,ct_state=-trk,action=ct(table=1)"
ovs-ofctl add-flow ovsbr0 "table=1,in_port=10,ip,ct_state=+trk,action=ct(commit),20"
ovs-ofctl add-flow ovsbr0 "table=1,in_port=10,ip,ct_state=+trk,action=output:20"
ovs-ofctl add-flow ovsbr0 "table=1,in_port=20,ip,ct_state=+trk,action=output:10"
ovs-ofctl add-flow ovsbr0 "table=1,in_port=11,ip,ct_state=+trk,action=ct(commit),21"
ovs-ofctl add-flow ovsbr0 "table=1,in_port=11,ip,ct_state=+trk,action=output:21"
ovs-ofctl add-flow ovsbr0 "table=1,in_port=21,ip,ct_state=+trk,action=output:11"
ovs-ofctl add-flow ovsbr0 "table=0,priority=1,action=drop"
12
OVS Conntrack - VSPerf Throughput (pps)
OVS conntrack (pps)       | baseline | Src ip  | Src & dst ip | 4-Tuple | 5-Tuple
1k Flows (with EMC)       | 431,064  | 237,490 | 238,244      | 228,452 | 256,320
1k Flows (EMC disabled)   | 321,580  | 256,320 | 230,712      | 232,218 | 269,878
100k Flows (with EMC)     | 216,402  | 151,626 | 152,380      | 174,222 | 180,248
100K Flows (EMC disabled) | 303,359  | 172,176 | 151,626      | 230,424 | 199,830
13
OVS-DPDK Conntrack - VSperf Throughput
IPv4 (pps) | baseline  | src ip    | Src & dst ip | 4-Tuple   | 5-Tuple
1k Flows   | 7,064,494 | 4,657,578 | 4,574,882    | 3,366,854 | 3,417,136
10k Flows  | 6,815,158 |           |              |           | 3,151,180
100K Flows | 3,913,314 | 1,928,606 | 1,820,606    | 1,630,236 | 1,597,822
14
OVS-DPDK Conntrack - VSperf Throughput
Conntrack pps             | baseline  | Match src ip | Match 4-Tuple
100K Flows (with EMC)     | 3,913,314 | 1,763,214    | 1,597,822
100K Flows (EMC disabled) | 4,053,314 | 1,928,606    | 1,630,236
Userspace conntrack: no significant performance improvement with EMC disabled
15
OVS Kernel: Conntrack Connection Setup Rate
Connection duration 5 s, test duration 300 s
TCP Connection rate (cps) | Steady connections after 5 s
5K CPS                    | 25K
10K CPS                   | 50K
20K CPS                   | 100K
50K CPS                   | 250K
[Chart: at 10K connections per second (cps), ~50K open connections held steady from 5 s to 305 s]
Track open connections (number of table entries): conntrack -C (entries) & conntrack -S (stats)
Conntrack timeout settings in the kernel:
nf_conntrack_tcp_timeout_close_wait=5
nf_conntrack_tcp_timeout_established=5
nf_conntrack_tcp_timeout_fin_wait=5
nf_conntrack_tcp_timeout_last_ack=5
nf_conntrack_tcp_timeout_max_retrans=5
nf_conntrack_tcp_timeout_syn_recv=5
nf_conntrack_tcp_timeout_syn_sent=5
nf_conntrack_tcp_timeout_time_wait=5
nf_conntrack_tcp_timeout_unacknowledged=5
nf_conntrack_udp_timeout=5
nf_conntrack_udp_timeout_stream=5
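The "steady connections" column is simply rate x timeout (Little's law again). A quick check using the 5 s timeouts configured above; the same arithmetic with OVS-DPDK's fixed 30 s default reproduces the figures on the next slide (50K cps -> 1.5M, 100K -> 3M, 200K -> 6M).

def steady_state_connections(cps: int, timeout_s: int) -> int:
    # open connections = setup rate x connection lifetime (here, the timeout)
    return cps * timeout_s

for cps in (5_000, 10_000, 20_000, 50_000):
    print(f"{cps:>6,} cps -> {steady_state_connections(cps, timeout_s=5):>8,}")
# prints 25,000 / 50,000 / 100,000 / 250,000, matching the table above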
16
OVS-DPDK: Conntrack Connection Setup Rate
● Cannot set the connection timeout; default timeout = 30s, and connections time out at ~32s
● Cannot query conntrack table entries (# of entries) or stats (no equivalent of conntrack -C / -S)
● Only supports dumping the conntrack table: ovs-appctl dpctl/dump-conntrack
● Max conntrack table size is restricted to 3M entries; the table size cannot be changed
Connection duration 5s, test duration 300s
TCP Connection rate (cps) | Steady connections after 30s
50K CPS                   | 1.5M connections
100K CPS                  | 3M connections (max table size)
200K CPS (goal)           | 6M connections
[Chart: at 50K connections per second (cps), ~1.5M open connections held steady from 30 s to 330 s]
17
Measure Connection Rate (CPS)
Conntrack (cps)      | TCP w/o data             | HTTP 600B data | HTTP 800B data | UDP 800B data
OVS (kernel)         | 45K CPS                  | 16K CPS        | 15K CPS        | 106K CPS
OVS-DPDK (userspace) | No configurable timeout* | 55K CPS        | 55K CPS        | 84K* CPS
Userspace conntrack cps performance is lower than expected
In Conclusion
19
NFV Networking Insights
Performance Benchmarking Plan (OPNFV VSPerf)
Done - OVS-DPDK NFV performance ready (scale with cores, multi-queue):
● SR-IOV, base OVS and OVS-DPDK, TestPMD-as-a-switch performance
● 64B and 9KB Jumbo PVP performance
● Metrics - throughput, latency
● Single NUMA node, basic multi-queue
● vlan, flat, VXLAN networks, bonding
We are here! - Real-world Mobile traffic flows:
● Real traffic profile with TRex
● Mobile traffic flows
● Conntrack - scale flows
● Multi-queue w/ RX queue mgmt.
Next - vRouter and vFirewall features:
● Live Migration, cross-NUMA perf
● More overlays (NSH, MPLS...?)
● Firewall testing (dynamic rules)
● Conntrack - connection rate
● SNAT & DNAT rule scale
● OVS Hardware Offload
● BFD, ECMP, L3 VPN and eVPN
Thank-you
fbaudin@redhat.com
atragler@redhat.com