In the race among the many technologies and standards for building Data Center and Data Center Interconnect networks, EVPN appears to have taken the lead. In this session we will look at where EVPN currently stands relative to other technologies, where to use it, and what to pay attention to during deployment.
2. Page 2
Data Center Development Trend
[Figure: traditional data center vs. next-generation data center, showing where the Layer 2 and Layer 3 boundaries sit in each design]
To better utilize existing data center resources, IDC operators require the ability to migrate VMs within a data center. Technologies such as TRILL or EVPN can be used to build a large Layer 2 network.
Because data centers carry huge volumes of east-west traffic, non-blocking forwarding of data frames is required to fully utilize the network link and bandwidth resources.
Traditional data center structure
On traditional data center networks, Layer 2 extends only to the access or aggregation switches, so virtual machines (VMs) can only be migrated within a single Layer 2 domain. To migrate a VM to another Layer 2 domain, its IP address must be changed, and unless technologies such as load balancing are used, services are interrupted during the migration.
Next-generation data center structure
5. Page 5
Summary of the DCN Fabric
Small- and medium-sized POD-level fabric (intra-POD/DC): uses N:1 device virtualization technology to solve the loop problem; suitable for single-POD networking.
Large POD-level fabric (intra-DC, L2): uses an IS-IS routing control plane to prevent loops; applicable within a single POD or the entire data center.
DC-level fabric (intra-DC, L3): the L2 network is overlaid on the L3 network, which prevents loops; enables large L2 networking in the entire data center or even across DCs.
DCI fabric: VXLAN, MPLS or PBB encapsulation, loop-free; used for DCI (cross-DC interconnection).
6. Page 6
An STP-based Layer 2 fabric applies to small-scale data center networks and supports VM migration. However, when large Layer 2 networks are built with it, problems such as broadcast storms and long convergence times appear. The most severe problem is that the physical links cannot be fully utilized, so non-blocking networks cannot be constructed.
Radia Perlman (Sun Fellow) invented the STP and IS-IS routing protocols in 1983.
STP: turning an arbitrary graph into a loop-free tree
STP-based Layer 2 Fabric networking
STP is a basic, mature LAN protocol.
Basic problem of LAN communication: flooding-based forwarding generates excessive broadcast traffic, and loops are easily formed, causing broadcast storms.
Basic principle of STP: protocol computation blocks some links so that the topology logically becomes a loop-free tree.
Advantages: simple device configuration and low cost.
Disadvantages: small network scale; loops are avoided by blocking some physical links, which makes it hard to construct non-blocking networks.
[Figure: STP-based Layer 2 fabric network, with Layer 2 below the aggregation layer and Layer 3 above]
7. Page 7
STP – are you ready for a loop?
• Wasted cost of unused (STP-blocked) links
› Backups
› Daily data replication tasks
• What if DCs are connected in simple L2 extension mode?
› Do you really think you have redundancy?
• Do you have a plan to bring the network back to a loop-free form?
› In case of a loop, how do you quickly disconnect the redundant paths?
› What if the other DC is in another city or country?
• What about the DR plan?
› All DCs affected by the broadcast storm will go down…
8. Page 8
A CSS/SVF-based Layer 2 fabric applies to small- and medium-sized data centers, bundles links to implement non-blocking switching, and supports VM migration. However, at most two physical devices can be deployed on a core node, so it cannot be expanded into a large-scale Layer 2 network, and the single control plane brings the risk of a whole-network fault.
CSS simplifies the physical topology to a tree topology
CSS/SVF-based Layer 2 Fabric networking
CSS/SVF technology virtualizes multiple physical nodes into one logical node, converting a complex topology with Layer 2 loops into a simple tree topology. The clustered physical devices are uniformly managed by a single device control plane.
Advantages:
1. Mature technology, simple device configuration, and low cost.
2. Non-blocking forwarding implemented by bundling the interconnection links.
Disadvantages:
1. The control-plane management model of clustered core nodes limits the number of cluster nodes, and therefore the network scale.
2. A single control plane brings the risk of a whole-network fault.
[Figure: CSS/SVF-based Layer 2 fabric network, with a CSS cluster at the core, SVF at the access layer, Layer 2 below and Layer 3 above]
9. Page 9
What about TRILL?
Loop prevention: builds a loop-free distribution tree and uses a TTL (hop count) to avoid loops.
Efficient forwarding: forwards data quickly based on SPF and ECMP.
Fast convergence: listens for network topology changes and converges within sub-second time.
Easy deployment: easy configuration; a unified control protocol for unicast and multicast.
10. Page 10
Concepts
[Figure: a Layer 2-only TRILL campus with core RBridges and edge RBridges]
TRILL runs at Layer 2 and calculates routes based on link state. It is implemented on top of the IS-IS protocol. A device running the TRILL protocol is a routing bridge (RB), and the network in which RBs run is called the TRILL campus.
RBs can be connected directly or through Layer 2 switches.
RB connection modes
11. Page 11
Nickname Concepts
Each RB on a TRILL network is identified by a nickname, which is a 2-byte (16-bit) number.
An RB can have multiple nicknames, generated automatically or configured manually. Each nickname must be unique on the entire network.
A nickname has two properties, priority and root priority, which are used for nickname-collision negotiation and distribution-tree root election respectively.
When nicknames are generated automatically, two RBs may pick the same nickname; the priority field is used to resolve such collisions.
When an RB joins the network, the LSDB on the network is updated. The RB is advertised only if its nickname does not conflict with any nickname already present on the network; if it does conflict, another nickname must be selected for the RB. Nickname collisions affect running services.
[Figure: RB1–RB5, each advertising its nickname; a nickname must be unique on the network]
Nickname collision negotiation
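The following is a minimal Python sketch of the collision negotiation described above, assuming the RFC 6325 tie-breaking order (the RB with the higher nickname priority keeps the nickname, ties broken by the higher IS-IS System ID); the dictionaries and names are illustrative only.

```python
# Hedged sketch: when two RBs claim the same auto-generated nickname,
# the holder with the higher nickname priority keeps it; on a tie the
# higher IS-IS System ID wins (assumed per RFC 6325). The loser must
# then choose a new, unused nickname.
def resolve_nickname_collision(rb_a, rb_b):
    """Return (winner, loser) for two RBs claiming the same nickname.
    Each RB is a dict with 'priority' (0-255) and 'system_id' (int)."""
    key = lambda rb: (rb["priority"], rb["system_id"])
    return (rb_a, rb_b) if key(rb_a) >= key(rb_b) else (rb_b, rb_a)

rb1 = {"name": "RB1", "priority": 192, "system_id": 0x0000000000A1}
rb2 = {"name": "RB2", "priority": 128, "system_id": 0x0000000000B2}
winner, loser = resolve_nickname_collision(rb1, rb2)
print(winner["name"], "keeps the nickname;", loser["name"], "must pick a new one")
```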
12. Page 12
TRILL data packet
DA: outer destination MAC address. In unicast forwarding it is the MAC address of the next-hop RB; in multicast forwarding it is a reserved multicast MAC address.
SA: outer source MAC address, the local MAC address of the transmitting RB.
VLAN: outer VLAN ID of the TRILL data packet, as specified by the TRILL protocol.
V: TRILL version, currently fixed at 0. Packets with any other version are discarded.
R: reserved field.
M: multicast flag. 0 indicates unicast, 1 indicates multicast.
Op-Length: length of the options in the TRILL header.
Hop: hop count, decremented at each RB.
E-RB-Nickname: in unicast forwarding, the egress RB nickname; in multicast forwarding, the nickname of the distribution-tree root.
I-RB-Nickname: ingress RB nickname.
Original Frame: the original Layer 2 frame sent by the server.
Encapsulation: MAC-in-TRILL-in-MAC
TRILL header fields
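As an illustration of the header layout above, here is a minimal Python sketch that packs the 6-byte TRILL header, assuming the standard RFC 6325 field widths (2-bit version, 2-bit reserved, 1-bit M flag, 5-bit Op-Length, 6-bit hop count, two 16-bit nicknames); the function name and values are illustrative only.

```python
import struct

TRILL_ETHERTYPE = 0x22F3  # Ethertype used for TRILL frames

def build_trill_header(egress_nick, ingress_nick, hop_count,
                       multicast=False, version=0, op_length=0):
    """Pack the 6-byte TRILL header described above.

    First 16 bits: V (2) | R (2) | M (1) | Op-Length (5) | Hop (6),
    followed by the egress and ingress RBridge nicknames (16 bits each).
    """
    first16 = (version << 14) | (int(multicast) << 11) \
              | ((op_length & 0x1F) << 6) | (hop_count & 0x3F)
    return struct.pack("!HHH", first16, egress_nick, ingress_nick)

# Unicast frame towards egress RB nickname 0x0003, entering at RB 0x0001:
hdr = build_trill_header(egress_nick=0x0003, ingress_nick=0x0001, hop_count=20)
print(hdr.hex())  # -> 001400030001
```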
13. Page 13
TRILL Overview
[Figure: edge RBs attached to STP access networks and a transit RB in the TRILL core; the original frame (MA to MB) is carried inside a TRILL encapsulation with ingress/egress nicknames, the outer MAC changes hop by hop, and AF marks the appointed forwarder on the access side]
Forwarding: per-flow ECMP load balancing; RPF check for loop prevention in the TRILL network.
Encapsulation: data is forwarded according to the outer nickname; the outer MAC changes hop by hop.
MAC learning: learning on the data plane; MAC addresses are learned only on edge RBs.
Edge access: active-standby access based on the AF.
Failover: failover driven by the control plane; no MAC flooding.
Control: IS-IS as the control plane; automatic path discovery via SPF.
TRILL characteristics:
Loop-free, with high link-bandwidth efficiency
Support for equal-cost multipath (ECMP)
Real-time awareness of the entire network topology, sub-second failover
Automatic path discovery, simple deployment
Transparent Interconnection of Lots of Links (TRILL) is a link-state routing protocol running on Layer 2 networks, implemented as an IS-IS extension. Devices running the TRILL protocol are called routing bridges (RBs); together the RBs form a TRILL network.
14. Page 14
TRILL Fabric Design Points
[Figure: TRILL backbone and leaf nodes, egress routers/gateways with VRRP, servers, storage (NAS/SAN), and an interconnection network to other sites]
Data center interconnection design: EVPN/VPLS
TRILL fabric gateway design
TRILL fabric multi-active gateway design: VRRP
TRILL fabric access design:
Traditional Layer 2 network converged access (xSTP/SVF)
Active-active server access
Storage network convergence design: FCoE, IP SAN
Security design
Network reliability design
15. Page 15
Service and Storage Network Convergence Design — FCoE
[Figure: servers with CNAs connected via FSB access switches and a TRILL fabric to FCF switches and FC storage; Ethernet, FCoE and converged Ethernet/FCoE links are shown]
Networking
Servers and storage devices are equipped with dual converged network adapters (CNAs), which connect to two switches to provide two planes. Access switches connected to the servers act as FIP snooping bridges (FSBs), and FC SAN switches act as Fibre Channel forwarders (FCFs). Data center bridging (DCB) is configured on the server -> FSB -> FCF path to ensure lossless forwarding of FC traffic over the Ethernet network.
If the access switches form a stack, you can configure fcoe dual-fabric enable on the stack to keep the two SAN planes separate.
This networking converges the FC SAN and Ethernet networks. If an FC SAN has already been deployed, customers do not need to install CNAs in the storage devices and only need to connect the FCF switches to the existing FC switches.
The converged network reduces cabling, but also makes it harder to localize service and storage faults.
Remarks
16. Page 16
Service and Storage Network Convergence Design — IP SAN
[Figure: servers with TOE/NIC/HBA adapters connected through Ethernet switches in a TRILL fabric to iSCSI storage]
IP SAN provides complete convergence of the storage and service networks. Only the servers need host bus adapters (HBAs); no additional devices are required on the converged network.
Networking
Servers are equipped with dedicated storage network adapters and connected to the access switches through bundled uplinks.
Service and storage traffic, and traffic of different storage devices, are isolated by VLANs.
The switches have DCB enabled and use Priority Flow Control (PFC) and Enhanced Transmission Selection (ETS) to guarantee bandwidth for storage traffic.
Server NICs can be dedicated to storage or shared by storage and service traffic.
Remarks
17. Page 17
Comparison of TRILL and Other Layer 2 Technologies
(columns: Traditional Layer 2 | CSS+iStack | TRILL | SPB)
Encapsulation type: traditional ETH header (no TTL) | traditional ETH header (no TTL) | TRILL header (with TTL) | MAC-in-MAC (with TTL)
Loop protection: MSTP | management method | TRILL | SPB
ECMP: not supported | ECMP using LAG | hop-by-hop ECMP, similar to an IP network | flow-based ECMP on the ingress node only, no hop-by-hop ECMP
Number of multicast trees: N/A | N/A | few (Layer 2 shared multicast trees) | many (Layer 2 source multicast trees)
Shortest-path forwarding: not supported | supported | supported | supported
Convergence time: long, unstable | short | medium (hundreds of milliseconds) | medium (hundreds of milliseconds)
Multitenant support: 4K (VLAN-based isolation) | 4K (VLAN-based isolation) | 4K (VLAN-based isolation); in the future up to 16M tenants using FineLabel | 16M (I-SID-based isolation)
Networking cost: low | high (inter-chassis links consume much bandwidth; non-blocking forwarding is difficult) | low | low
Network scale: small | medium (limited number of stacked devices; non-blocking forwarding not supported) | large | large
Applicable network: hierarchical networks where each layer aggregates to the upper layer, not flat tree networks | flat tree networks | flat tree networks | flat tree networks and point-to-multipoint IPTV networks
18. Page 18
Official standards describing TRILL
• RFCs:
› rfc6325 - Routing Bridges (RBridges): Base Protocol Specification
› rfc7176 - Transparent Interconnection of Lots of Links (TRILL) Use of IS-IS
› rfc7781 - Transparent Interconnection of Lots of Links (TRILL): Pseudo-Nickname for
Active-Active Access
• Drafts:
› draft-ietf-trill-dci-01 - Data Center Interconnect using TRILL
› draft-ietf-trill-yang-oam-04 - YANG Data Model for TRILL Operations, Administration,
and Maintenance (OAM)
› draft-ietf-trill-over-ip-06 - TRILL (Transparent Interconnection of Lots of Links) over IP
19. Page 19
Let’s look at Ethernet VPN (EVPN)
EVPN benefits fall into four themes: improved network efficiency, optimized BUM flooding, easy migration of VMs, and flexible design and service provision.
MAC-IP binding allows a PE to respond to ARP requests on behalf of clients.
The flood-and-learn model is replaced by pre-signaled learning: MACs are learned over the control plane, where more policy control can be applied.
Multi-homing with all-active forwarding.
Flow-based load balancing.
MAC mass withdraw provides better network resiliency.
L2 and L3 services are delivered with a unified control plane.
Multiple data-plane models are supported.
The RR mechanism avoids full-mesh configuration.
Access auto-sensing and automatic discovery of redundancy groups.
MAC mobility makes VM migration much easier; the migration is transparent to the administrator.
20. Page 20
Official standards describing EVPN Data Plane
• MPLS as Data Plane
› rfc7432 - BGP MPLS-Based Ethernet VPN
• VxLAN as Data Plane
› draft-ietf-bess-evpn-overlay-04
› draft-ietf-bess-dci-evpn-overlay-04
• PBB as Data Plane
› rfc7623 - Provider Backbone Bridging Combined with Ethernet VPN (PBB-EVPN)
21. Page 21
Overview of EVPN Solution
EVPN solution = MP-BGP control plane (NLRI with AFI 25 (L2VPN), SAFI 70 (EVPN)) + a choice of data-plane forwarding:
NVO3 tunnels (VXLAN) as the data plane: L2 and L3 DCI solution
MPLS as the data plane: all-active multi-homing for VPWS; RSVP and LDP protocols
PBB as the data plane: EVPN with PBB PE functionality for scaling very large networks; all-active multi-homing for PBB-VPLS
22. Page 22
Overview of EVPN concepts
Ethernet Segment Identifier (ESI): identifies a site connected to one or more PEs. For a multi-homed site, the same ESI must be assigned to all links connecting that site to the PEs.
EVI: Ethernet VPN instance identifier.
[Figure: CE1 single-homed to PE1 (ESI = 1) and CE2 multi-homed to PE1 and PE2 (ESI = 2) over an MPLS network; the multi-homed device exchanges LACPDUs with both PEs]
• Two ways of generating an ESI:
• 1) Manual configuration: Type = 0x00, followed by a 9-byte ESI value.
• 2) Auto-generation via LACP: Type = 0x01, followed by the LACP System MAC (6 bytes), the Port Key (2 bytes), and 0x00.
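A small Python sketch of the two ESI formats listed above (Type 0x00 manual, Type 0x01 derived from the LACP System MAC and Port Key); the helper names are illustrative only.

```python
# Both ESI formats are 10 bytes in total:
# Type 0x00 = manually configured 9-byte value;
# Type 0x01 = CE's LACP System MAC (6 bytes) + Port Key (2 bytes) + 0x00 padding.
def esi_type0(value9: bytes) -> bytes:
    assert len(value9) == 9
    return bytes([0x00]) + value9

def esi_type1(lacp_system_mac: bytes, lacp_port_key: int) -> bytes:
    assert len(lacp_system_mac) == 6
    return bytes([0x01]) + lacp_system_mac + lacp_port_key.to_bytes(2, "big") + b"\x00"

print(esi_type1(bytes.fromhex("00aabbccddee"), 0x0001).hex())
# -> 0100aabbccddee000100
```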
23. Page 23
Key Features of EVPN Control Plane
• All-Active Multi-homing:
› Split Horizon
› Designated Forwarder Election
• ARP Proxy and Unknown Unicast Flooding Suppression
• Load balancing:
› Flow-based load-balancing
› Aliasing
• MAC Mobility
• MAC Mass-Withdraw
24. Page 24
All-Active Multi-homing: Designated Forwarder Election
[Figure: without a DF, both PEs flood BUM traffic and the multi-homed CE receives duplicates; with a DF, only one PE forwards BUM traffic towards the CE]
Challenge: duplicated traffic is flooded to the multi-homed CE.
Election of the DF:
PEs connected to a multi-homed CE discover each other via Ethernet Segment routes
Only the DF PE is responsible for flooding BUM traffic to the ES
Non-DF PEs block BUM flooding towards the CE
DF election granularity can be:
Per VNI (VXLAN as the data plane)
Per VLAN (MPLS as the data plane)
Per I-SID on the Ethernet Segment (PBB-EVPN)
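A minimal sketch of one possible DF election, assuming the default modulo-based service-carving algorithm of RFC 7432 (the PE originator addresses attached to the ES are sorted and the DF for a given VNI is the PE at index VNI mod N); the names and addresses below are illustrative.

```python
import ipaddress

def elect_df(pe_addresses, vni):
    """Default DF election sketch: sort the PE originator addresses attached
    to the Ethernet Segment and pick the one at index (VNI mod N) as the
    Designated Forwarder for that VNI."""
    ordered = sorted(pe_addresses, key=lambda a: int(ipaddress.ip_address(a)))
    return ordered[vni % len(ordered)]

pes = ["192.0.2.2", "192.0.2.1"]      # PEs multi-homed to the same ES
print(elect_df(pes, vni=5010))         # DF for VNI 5010 -> 192.0.2.1
print(elect_df(pes, vni=5011))         # a different VNI may elect the other PE
```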
25. Page 25
All-Active CE Multi-homing: Split Horizon
[Figure: BUM traffic flooded by one PE is echoed back to the same multi-homed CE through the other PE]
• Challenge: in a CE multi-homing scenario, traffic must not be forwarded back to the same ES through a different PE.
• Solution to the echo problem in EVPN:
• Each PE attached to a multi-homed Ethernet Segment allocates an ESI label and distributes it with its A-D routes for that ES.
• When an ingress PE floods BUM traffic received from the ES, it encodes the ESI label for that segment in the flooded copies, together with the label distributed in the Inclusive Multicast Ethernet Tag route for that flood domain.
• When an egress PE receives the traffic, it checks the ESI label; if it matches the label associated with one of its local Ethernet Segments, the traffic is not forwarded back onto that segment.
[Figure: ① the PE allocates an ESI label; ② BUM traffic is forwarded with the ESI label to the PE sharing the same ES; ③ that PE blocks the traffic towards the shared ES]
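A hedged Python sketch of the egress-side check described above: a PE does not forward BUM traffic back onto an Ethernet Segment whose ESI label matches the label carried in the received packet. The data structures are illustrative only.

```python
# Hedged sketch: 'local_es_labels' maps an ESI to the ESI label this PE
# associates with that Ethernet Segment. Flooded BUM traffic carrying a
# matching ESI label is not sent back onto the same multi-homed segment.
def should_forward_bum(local_es_labels, received_esi_label, out_interface_esi):
    if received_esi_label is not None and \
       local_es_labels.get(out_interface_esi) == received_esi_label:
        return False   # frame originated from the same multi-homed ES: block it
    return True        # single-homed or different ES: flood normally

labels = {2: 3001}     # ESI 2 on this PE uses ESI label 3001
print(should_forward_bum(labels, 3001, out_interface_esi=2))   # -> False (blocked)
print(should_forward_bum(labels, 3001, out_interface_esi=7))   # -> True
```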
26. Page 26
Flow-Based Load Balancing
• CE-to-PE direction:
• Flows are balanced over the MC-LAG
• Flows can be identified by L2/L3/L4 fields
• PE-to-PE direction:
• Multiple RIB entries are associated with a given MAC
• Traffic is balanced across multiple destination PEs
Flow-based load balancing makes the network more efficient.
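A small sketch of flow-based hashing, assuming a simple hash over L2/L3/L4 fields that maps each flow onto one LAG member or remote PE; the field tuple and member names are illustrative only.

```python
import hashlib

def pick_member(flow, members):
    """Flow-based load balancing sketch: hash the L2/L3/L4 fields of a flow
    and map the result onto one of the available links or remote PEs,
    so all packets of the same flow stay on the same path."""
    key = "|".join(str(f) for f in flow).encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return members[digest % len(members)]

# src MAC, src IP, dst IP, protocol, dst port, src port
flow = ("00:11:22:33:44:55", "10.0.0.1", "10.0.0.2", 6, 443, 51512)
print(pick_member(flow, ["PE1", "PE2"]))
```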
27. Page 27
Aliasing
[Figure: without aliasing, all traffic for the MAC goes to the single PE that advertised it; with aliasing, traffic is load-balanced across all PEs attached to the same ES]
• Challenge: how to load-balance traffic towards a multi-homed device across multiple PEs when the MAC addresses are learned by only a single PE?
• Solution: in all-active mode, a remote PE that receives a MAC/IP Advertisement route with a non-reserved ESI SHOULD consider the advertised MAC address reachable via all PEs that have advertised reachability to that MAC address's EVI/ES, i.e. the combination of an Ethernet A-D per-EVI route for that EVI/ES (and Ethernet tag, if applicable) and Ethernet A-D per-ES routes for that ES with the "Single-Active" bit in the flags of the ESI Label extended community set to 0.
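A hedged sketch of how a remote PE could build the aliased next-hop set described above from a MAC/IP route and the Ethernet A-D per-EVI routes; the dictionary layout is illustrative only.

```python
# Hedged sketch of aliasing on a remote PE: even if the MAC/IP route was
# advertised by only one PE, the MAC is treated as reachable via every PE
# that advertised an Ethernet A-D per-EVI route for the same (EVI, ESI)
# with the Single-Active bit cleared, giving an ECMP set of next hops.
def aliased_next_hops(mac_route, ad_per_evi_routes):
    """mac_route: {'esi', 'evi', 'nexthop'};
    ad_per_evi_routes: list of {'esi', 'evi', 'nexthop', 'single_active'}."""
    if mac_route["esi"] == 0:                      # reserved ESI 0: single-homed
        return {mac_route["nexthop"]}
    return {r["nexthop"] for r in ad_per_evi_routes
            if r["esi"] == mac_route["esi"]
            and r["evi"] == mac_route["evi"]
            and not r["single_active"]}

mac = {"esi": 1, "evi": 100, "nexthop": "PE1"}
ads = [{"esi": 1, "evi": 100, "nexthop": "PE1", "single_active": False},
       {"esi": 1, "evi": 100, "nexthop": "PE2", "single_active": False}]
print(aliased_next_hops(mac, ads))   # -> {'PE1', 'PE2'}
```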
28. Page 28
MAC Mobility
• Challenge: with data-plane learning, a PE would not detect that a MAC has moved and would not withdraw it, so two entries for the same MAC would exist at the same time: an old, incorrect one and a new, correct one.
• Solution:
• Each MAC is advertised with a MAC Mobility sequence number carried in an extended community attached to the MAC route.
• PEs select the MAC route with the highest sequence number.
• The PE advertising the MAC route with the lower sequence number is triggered to withdraw it.
[Figure: the MAC moves from one PE to another; the old route carries SEQ = 0, the new route carries SEQ = 1]
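A minimal sketch of the sequence-number comparison: the MAC route with the highest MAC Mobility sequence number wins and the stale route is withdrawn; the data below is illustrative only.

```python
# Hedged sketch of MAC mobility resolution on a receiving PE.
def best_mac_route(routes):
    """routes: list of {'mac', 'nexthop', 'seq'} MAC/IP advertisements
    for the same MAC; the highest sequence number wins."""
    return max(routes, key=lambda r: r["seq"])

routes = [
    {"mac": "00:11:22:33:44:55", "nexthop": "PE1", "seq": 0},  # before the VM move
    {"mac": "00:11:22:33:44:55", "nexthop": "PE3", "seq": 1},  # after the VM move
]
print(best_mac_route(routes)["nexthop"])   # -> PE3; PE1's stale route is withdrawn
```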
29. Page 29
MAC Mass Withdraw
• Challenge: how to inform remote PEs of a failure. If MACs are withdrawn one by one, convergence time depends on the number of MACs and can be long.
• Solution:
• Each PE advertises two pieces of information: the MAC addresses together with the ESI behind which they were learned, and its connectivity to that ESI.
• If a PE detects a failure affecting an ES, it withdraws the route for the associated ESI, which invalidates all MACs learned behind that ES in a single update.
[Figure: without mass withdraw, the failing PE must signal one loss per MAC (MAC1 … MACn on ESI1); with mass withdraw, a single ESI1 withdrawal removes all MACs learned from that PE on ESI1]
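A hedged sketch of what mass withdrawal means for a remote PE's MAC table: one ESI withdrawal invalidates every MAC learned behind that (PE, ESI) pair at once; the table contents are illustrative only.

```python
# Hedged sketch: on receiving the withdrawal of the per-ES route for
# (failed_pe, failed_esi), drop all MAC entries learned behind that pair
# instead of waiting for one BGP withdrawal per MAC.
def mass_withdraw(mac_table, failed_pe, failed_esi):
    """mac_table maps MAC -> {'pe': ..., 'esi': ...}."""
    return {mac: entry for mac, entry in mac_table.items()
            if not (entry["pe"] == failed_pe and entry["esi"] == failed_esi)}

table = {"00:00:00:00:00:01": {"pe": "PE1", "esi": 1},
         "00:00:00:00:00:02": {"pe": "PE1", "esi": 1},
         "00:00:00:00:00:03": {"pe": "PE2", "esi": 2}}
print(list(mass_withdraw(table, "PE1", 1)))   # only the ESI-2 MAC remains
```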
30. Page 30
Traffic Encapsulation Types on PEs
• dot1q
If a Dot1q sub-interface receives a single-tagged VLAN packet, the sub-interface forwards only packets with the specified VLAN ID. If a Dot1q sub-interface receives a double-tagged VLAN packet, the sub-interface forwards only packets with the specified outer VLAN ID.
› When performing VXLAN encapsulation on packets, a Dot1q Layer 2 sub-interface removes the outer tags of the packets.
› When performing VXLAN decapsulation on packets, a Dot1q Layer 2 sub-interface replaces the VLAN tags with specified VLAN tags if the inner packets carry VLAN tags, or adds specified VLAN tags to the packets if the inner packets do not carry VLAN tags.
31. Page 31
Traffic Encapsulation Types on PEs
• untag
An untagged Layer 2 sub-interface receives only packets that do not carry VLAN tags.
› When performing VXLAN encapsulation on packets, an untagged Layer 2 sub-interface does not add any VLAN tag to the packets.
› When performing VXLAN decapsulation on packets, an untagged Layer 2 sub-interface removes the VLAN tag of single-tagged inner packets or the outer VLAN tag of double-tagged inner packets.
32. Page 32
Traffic Encapsulation Types on PEs
• QinQ
A QinQ sub-interface receives only tagged packets with the specified inner and outer VLAN tags.
› When performing VXLAN encapsulation on packets, a QinQ sub-interface removes the two VLAN tags from the packets if the action of the Layer 2 sub-interface is set to removing two VLAN tags, and keeps the VLAN tags of the packets otherwise.
› When performing VXLAN decapsulation on packets, a QinQ sub-interface adds the two specified VLAN tags to the packets if the action of the Layer 2 sub-interface is set to removing two VLAN tags, and keeps the VLAN tags of the packets otherwise.
33. Page 33
Traffic Encapsulation Types on PEs
• default
› A default Layer 2 sub-interface receives all packets, irrespective of whether the
packets carry VLAN tags.
› When performing VXLAN encapsulation and decapsulation on packets, a default Layer
2 sub-interface does not process VLAN tags of the packets.
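A compact Python sketch summarizing the VLAN-tag handling of the four sub-interface types above during VXLAN encapsulation (decapsulation behaves symmetrically); it is a simplification of the vendor behaviour, and the function name is illustrative only.

```python
# Hedged sketch: 'tags' is the list of VLAN IDs on the incoming frame
# (outer tag first); the return value is the tag list carried inside the
# VXLAN payload after encapsulation.
def tags_after_vxlan_encap(if_type, tags, qinq_pops_two=True):
    if if_type == "dot1q":
        return tags[1:]                               # strip the outer tag
    if if_type == "untag":
        return tags                                   # only untagged frames arrive
    if if_type == "qinq":
        return tags[2:] if qinq_pops_two else tags    # pop both tags only if configured
    if if_type == "default":
        return tags                                   # tags carried transparently
    raise ValueError(if_type)

print(tags_after_vxlan_encap("dot1q", [100]))         # -> []
print(tags_after_vxlan_encap("qinq", [100, 200]))     # -> []
```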
34. Page 34
Fast Deployment of VPN Access to the Data Center
[Figure: a customer site (CE) connected through PEs over the service provider network to a DC gateway/PE at the data center edge, with EVPN between the PEs]
• EVPN services can be deployed over different transport networks:
‾ MPLS network: EVPN + MPLS
‾ IP network: EVPN + VXLAN
• Interoperable with both traditional data centers and newly designed data centers (VXLAN, or SDN + VXLAN solutions)
• EVPN allows L2 and L3 VPN services to be provided together:
‾ A single interface and a single VLAN towards the customer
‾ One topology for both services
• Both all-active and single-active modes are supported for PE-CE and PE-DC connections.
• Flexible access methods for the customer site and the DC:
‾ VLAN (L2 and L3)
‾ VXLAN (L2 and L3)
35. Page 35
VPN Solution Satisfying All Kinds of Customers
[Figure: gold, silver, bronze and confidential customer sites connected through PEs and EVPN over the service provider network to a DC gateway/PE; gold = high priority, 100M; silver = middle priority, 70M; bronze = low priority, 50M; confidential = IPsec tunnel, 100M; TWAMP is used for SLA monitoring]
• Differentiated quality of service for gold/silver/bronze customers
• Customers can be distinguished by VLAN, VXLAN, IP, and similar information
• IPsec tunnels can provide secure VPNs for confidential services
• Real-time service SLA monitoring and measurement:
‾ Two-Way Active Measurement Protocol (TWAMP) is used to measure end-to-end SLAs
‾ uTraffic provides a visualized real-time SLA monitoring platform
36. Page 36
Automatic Service Delivery Using an SDN-Based Solution (Site to DC)
[Figure: tenant and maintenance portals drive NetMatrix over RESTful APIs; a WAN controller (SNC) programs the WAN and a DC controller (SNC) programs the data center via OpenFlow; uTraffic collects NetStream/SNMP data; EVPN connects the customer site to the DC]
• Services are delivered automatically using SDN
• Bandwidth-on-demand (BoD) services are delivered automatically via the tenant portal
• uTraffic provides real-time SLA monitoring
• All benefits of EVPN apply to the site-to-DC connection:
• No BUM flooding for MAC and ARP learning
• Better resiliency
• All-active multi-homing forwarding
37. Page 37
Data Center Interconnection
[Figure: two data centers running VXLAN, each with a combined DC gateway + PE, interconnected by EVPN across the service provider core network]
• Flattened network: the DC gateway and the DCI PE are integrated in one device.
• A flexible data plane suits different kinds of core networks:
• MPLS core: EVPN over MPLS
• IP core: EVPN over VXLAN
• MAC mobility for VMs that move between data centers: faster moves between DCs while keeping the forwarding database correct on all nodes, with no BUM flooding.
• All benefits of EVPN apply to the DCI network:
• No BUM flooding for MAC and ARP learning
• Better resiliency
• All-active multi-homing forwarding
38. Page 38
Comparison of Layer 2 Technologies
(columns: STP/MSTP | VPLS | TRILL | EVPN+VXLAN)
Network scale: small | large | medium | large
Flow-based ECMP: no | yes (using entropy labels, a newer technique) | yes (based on inner MAC or IP) | yes (using the UDP source port)
Active-active access: no | no | yes | yes
Unknown-unicast elimination: no (data-plane MAC learning) | no (data-plane MAC learning) | no (data-plane MAC learning) | yes (control-plane MAC learning)
Network convergence: worst (seconds) | medium | medium | best
ARP mitigation: SDN | SDN | SDN | yes (EVPN can provide ARP synchronization and proxy)
Network management: complex | complex | easy | easy
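A small sketch of the "UDP source port" entry in the table: the outer VXLAN UDP source port can be derived from a hash of the inner flow so that transit routers achieve per-flow ECMP without inspecting the payload; the hash function and port range below are illustrative only.

```python
import hashlib

def vxlan_udp_source_port(inner_flow):
    """Derive the outer VXLAN UDP source port from the inner frame's flow
    fields, kept in the dynamic range 49152-65535, so transit routers can
    load-balance per flow without parsing the VXLAN payload."""
    key = "|".join(str(f) for f in inner_flow).encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:2], "big")
    return 49152 + (h % 16384)

# inner src/dst MAC, src/dst IP, protocol, dst port
print(vxlan_udp_source_port(("00:11:22:33:44:55", "aa:bb:cc:dd:ee:ff",
                             "10.0.0.1", "10.0.0.2", 6, 80)))
```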
39. Page 39
EVPN interoperability tests
European Advanced Networking Test Center
MPLS + SDN + NFV World Congress Public Multi-Vendor Interoperability Test 2016
http://www.eantc.de/showcases/mpls_sdn_2016/intro.html
Scope of topics:
Segment Routing
Ethernet VPNs
SDN Controllers
YANG Models for Services
LTE Clock Synchronization Readiness
40. Page 40
EVPN interoperability tests
Interop Tokyo 2016 ShowNet
http://www.interop.jp/2016/en/shownet/
Scope of topics:
Ethernet VPN multi-vendor interoperability test
Multiple encapsulations for the data plane (VXLAN-MPLS data-plane stitching)