Nimish Desai
VMware
Reference Design for VMware NSX
NET4282
Agenda
CONFIDENTIAL 2
1 Software Defined Data Center
2 Network Virtualization - NSX
3 NSX for vSphere Design and Deployment Considerations
4 Reference Designs
5 Summary and Q&A
Data Center Virtualization Layer
Software: Intelligence in Software – pooled, vendor-independent, best price/performance infrastructure; simplified, automated configuration & management
Hardware: compute, network and storage capacity – dedicated, vendor- and model-specific infrastructure; manual configuration & management
What Is a Software Defined Data Center (SDDC)?
Abstract…pool…automate…across compute, networking and storage.
CONFIDENTIAL 3
VMware NSX Momentum: Over 400 Customers
Top investment banks, enterprises & service providers
CONFIDENTIAL 4
NSX Introduction
Traditional Networking Configuration Tasks
Initial configuration
 Multi-chassis LAG
 Routing configuration
 SVIs/RVIs
 VRRP/HSRP
 STP
• Instances/mappings
• Priorities
• Safeguards
 LACP
 VLANs
• Infra networks on
uplinks and downlinks
• STP
Recurring configuration
 SVIs/RVIs
 VRRP/HSRP
 Advertise new subnets
 Access lists (ACLs)
 VLANs
 Adjust VLANs on trunks
 VLANs STP/MST mapping
 Add VLANs on uplinks
 Add VLANs to server ports
Configuration consistency!
CONFIDENTIAL 6
L3
L2
How Does NSX Solve Next Generation DC Challenges?
 Distributed FW
 Micro Segmentation
 Multifunctional Edge
• Stateful FW
• NAT
• Load Balancer
• IPSEC/SSL
 Third Party Integration
Security & Services
 Time to Deploy
 Mobility
 Topology Independent
• L2 vs L3
• Services
 Distributed Forwarding
 Highly Available
Flexibility & Availability
 IP Fabric
 Configure Once
 Horizontal Scale
 Any Vendor
Simplicity & Device Agnostic
 API Driven
 Automation
 CMP Integrated
 Self Services
Cloud Centric Services
NSX Platform
IP Fabric – Topology Independent (L2 or L3)
CONFIDENTIAL 7
Provides
A Faithful Reproduction of Network & Security Services in Software
Management
APIs, UI
Switching, Routing
Firewalling
Load Balancing
VPN
Connectivity to Physical Networks
Policies, Groups, Tags
Data Security, Activity Monitoring
CONFIDENTIAL 8
NSX Architecture and Components
Cloud Consumption • Self Service Portal
• vCloud Automation Center, OpenStack, Custom
Data Plane
NSX Edge
ESXi Hypervisor Kernel Modules
Distributed Services
• High-Performance Data Plane
• Scale-out Distributed Forwarding Model
Management Plane
NSX Manager
• Single configuration portal
• REST API entry-point
Logical Network
Physical
Network
…
…
NSX Logical Router Control
VM
Control Plane
NSX Controller
• Manages Logical networks
• Control-Plane Protocol
• Separation of Control and Data Plane
Logical Switch, Logical Router, Distributed Firewall
vCenter Server
9
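Since the NSX Manager is the single configuration portal and REST API entry point, everything shown later can also be driven programmatically. As a minimal sketch (hostname and credentials are placeholders), listing the configured transport zones looks like this:

  curl -k -u admin:<password> https://nsxmgr.corp.local/api/2.0/vdn/scopes

This is the same API surface that vCloud Automation Center, OpenStack or a custom cloud portal consumes.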
NSX for vSphere Design and
Deployment Considerations
Agenda
• NSX for vSphere Design and Deployment Considerations
– Physical & Logical Infrastructure Requirements
– NSX Edge Design
– Logical Routing Topologies
– NSX Topologies for Enterprise and Multi-tenant Networks
– Micro-segmentation with Distributed FW Design
CONFIDENTIAL 12
NSX is AGNOSTIC to Underlay Network Topology
L2 or L3 or Any Combination
Only TWO Requirements
IP Connectivity and an MTU of 1600
CONFIDENTIAL 13
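A quick way to validate both requirements end to end is a don't-fragment ping between two VTEPs, sized to the VXLAN MTU. A sketch (the destination IP is an example; the vxlan netstack option assumes vSphere 5.5 or later):

  vmkping ++netstack=vxlan -d -s 1572 10.88.2.25

Here -d sets the don't-fragment bit, and 1572 bytes of ICMP payload plus 28 bytes of IP/ICMP headers exercises the full 1600-byte path.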
Classical Access/Aggregation/Core Network
• L2 application scope is limited to a single POD, which is also the failure domain
• Multiple aggregation modules, to limit the
Layer 2 domain size
• VLANs carried throughout the PoD
• Unique VLAN to a subnet mapping
• Default gateway – HSRP at aggregation
layer
[Diagram: WAN/Internet at the core; PODs A and B each with an L3/L2 boundary at the aggregation layer; VLAN X and VLAN Y stretched within their PODs.]
CONFIDENTIAL 14
L3 Topologies & Design Considerations
• L3 ToR designs have a dynamic routing protocol between leaf and spine
• BGP, OSPF or ISIS can be used
• Each rack advertises a small set of prefixes (unique VLAN/subnet per rack)
• Equal-cost paths to the other racks' prefixes
• The switch provides default gateway service for each VLAN subnet
• 802.1Q trunks with a small set of VLANs for VMkernel traffic
• Rest of the session assumes L3 topology
[Diagram: WAN/Internet over an L3 fabric; L3 uplinks from the ToR, with the VLAN boundary at the ToR and 802.1Q trunks down to hypervisors 1..n.]
CONFIDENTIAL
15
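To illustrate how little the fabric needs to carry, a hypothetical leaf/ToR BGP stanza advertising its rack-local prefixes might look as follows (AS numbers and addresses are invented for the example; OSPF or ISIS would serve equally well):

  router bgp 65101
    neighbor 10.0.0.1 remote-as 65000
    network 10.66.1.0/26
    network 10.88.1.0/26

Each rack advertises only its own small set of subnets and receives equal-cost paths to every other rack.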
MTU Considerations
• Arista
– L2 interfaces → by default, IP packets as large as 9214 bytes can be sent and received; no configuration is required
– L3 interfaces → by default, IP packets as large as 1500 bytes can be sent and received
• Configuration step for L3 interfaces: change the MTU to 9214 ("mtu 9214" command) → IP packets as large as 9214 bytes can be sent and received
• Cisco Nexus 9000
– L2 and L3 interfaces → by default, IP packets as large as 1500 bytes can be sent and received
– Configuration steps for L2 interfaces
• Change the system jumbo MTU to 9214 ("system jumbomtu 9214" global command) → needed because an interface MTU can only be set to the default value (1500 bytes) or the system-wide configured value
• Change the MTU to 9214 on each L2 interface ("mtu 9214" interface command)
– Configuration steps for L3 interfaces
• Change the MTU to 9214 on each L3 interface ("mtu 9214" interface command)
• Cisco Nexus 3000 and 5000/6000
– The MTU for L2 interfaces can ONLY be changed through the "system QoS" policy
– Configuration step for L3 interfaces: change the MTU to 9214 ("mtu 9214" command) → IP packets as large as 9214 bytes can be sent and received
CONFIDENTIAL 16
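Pulling the Nexus 9000 steps together, a minimal configuration sketch might look like this (interface numbers are illustrative):

  system jumbomtu 9214
  interface Ethernet1/1
    switchport
    mtu 9214
  interface Ethernet1/49
    no switchport
    mtu 9214

Ethernet1/1 stands in for an L2 trunk toward the hosts, Ethernet1/49 for an L3 uplink toward the spine.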
Cluster Design Considerations
Organizing Compute, Management & Edge
[Diagram: leaf/spine fabric with compute clusters and infrastructure clusters (Edge, storage, vCenter and cloud management system); the edge leaf provides L3 to the DC fabric and L2 to external networks (WAN/Internet), with L2 VLANs for bridging.]
Separation of compute, management and Edge functions provides the following design advantages:
• Managing life-cycle of resources for compute and
Edge functions
• Ability to isolate and develop span of control
• Capacity planning – CPU, Memory & NIC
• Upgrades & migration flexibility
• High availability based on functional need
• Workload specific SLA (DRS & FT)
• Network centric connectivity – P/V, ECMP
• vMotion boundary
• Automation control over area or function that
requires frequent changes
• app-tier, micro-segmentation & load-balancer
Three areas of technology require consideration
• Interaction with physical network
• Overlay (VXLAN) impact
• Integration with vSphere clustering
vSphere Cluster Design – Collapsed Edge/Infra Racks
[Diagram: leaf/spine fabric with compute racks and collapsed Edge/infrastructure racks (Edge clusters, storage, management cluster with vCenter and cloud management system); vCenter 1 and vCenter 2 each manage up to the max supported number of VMs; the edge leaf provides L3 to the DC fabric and L2 to external networks (WAN/Internet), with L2 VLANs for bridging; cluster location is determined by connectivity requirements.]
19
vSphere Cluster Design – Separated Edge/Infra
[Diagram: leaf/spine fabric with compute racks, dedicated infrastructure racks (storage, vCenter and cloud management) and dedicated edge racks (logical router control VMs and NSX Edges); vCenter 1 and vCenter 2 each manage up to the max supported number of VMs; the edge leaf provides L3 to the DC fabric and L2 to external networks; cluster location is determined by connectivity requirements.]
20
21
[Diagram: compute clusters A and B with web/app VMs, an edge cluster with the NSX Edges and control VMs, and a management cluster with the vCenter Server, NSX Manager (registered/mapped to vCenter) and NSX Controllers.]
 Single vCenter Server to manage all Management, Edge and Compute Clusters
• NSX Manager deployed in the Mgmt Cluster and paired to the vCenter Server
• NSX Controllers can also be deployed into the Management Cluster
• Reduces vCenter Server licensing requirements
• Most common in POCs or small environments
Single vCenter Design
22
[Diagram: a management vCenter (Management VC) hosting NSX Manager VM-A and NSX Manager VM-B; separate vCenter Servers for NSX domain A and NSX domain B, each domain with its own NSX Controller cluster, compute clusters (web/app VMs) and edge cluster (Edges and control VMs).]
• Option 2, following VMware best practices: the Management Cluster is managed by a dedicated vCenter Server (Mgmt VC)
• A separate vCenter Server in the Management Cluster manages the Edge and Compute Clusters
NSX Manager is also deployed into the Management Cluster and paired with this second vCenter Server
Multiple NSX Manager/vCenter Server pairs (separate NSX domains) can be deployed
• NSX Controllers must be deployed into the vCenter Server their NSX Manager is attached to; therefore the Controllers are usually deployed into the Edge Cluster
Management Cluster
Multiple vCenters Design - Multiple NSX Domains
[Diagram: single-rack management cluster connectivity – redundant 802.1Q trunk uplinks from each host to a leaf/ToR pair, carrying the VMkernel VLANs and the VLANs for management VMs into the routed DC fabric.]
Deployment Considerations
 The Mgmt Cluster is typically provisioned on a single rack
 The single-rack design still requires redundant uplinks from host to ToR, carrying the VLANs for management
 A dual-rack design increases resiliency (handling single-rack failure scenarios), which may be a requirement for a highly available design
• Each ToR can be deployed in a separate rack
• Host uplinks extend across the racks
 Typically, in a small design, the management and Edge clusters are collapsed
• Exclude the management cluster from VXLAN preparation
• NSX Manager and NSX Controllers are automatically excluded from DFW functions
• Put the vCenter Server in the DFW exclusion list!
[Diagram: dual-rack management cluster connectivity – host uplinks extended across two racks, 802.1Q trunks carrying the VMkernel VLANs into the routed DC fabric.]
23
Management Cluster
 Edge cluster availability and capacity planning require:
• Minimum three hosts per cluster
• More if ECMP-based north-south traffic BW requires it
 The Edge cluster can also contain the NSX Controllers and the control VMs for Distributed Logical Routing (DLR)
[Diagram: single-rack and dual-rack edge cluster connectivity – VMkernel VLANs plus VLANs for L2 and L3 NSX services carried from the edge racks into the routed DC fabric and out to WAN/Internet.]
Deployment Considerations
 Benefits of a dedicated edge rack:
 Reduced need for stretching VLANs
 L2 is required only for the external 802.1Q VLANs & the Edge default GW
 L2 connectivity is required between active and standby in a stateful Edge design
 GARP is used to announce the new MAC in the event of a failover
 Localized routing configuration for N-S traffic reduces the need to configure and manage the rest of the spine
 Span of control for network-centric operational management, BW monitoring & features
24
Edge Cluster
NSX Manager
Deployment Considerations
 NSX Manager is deployed as a virtual appliance
• 4 vCPU, 12 GB of RAM per node
• Consider reserving memory for VC to ensure good Web Client performance
• Modifying the appliance configuration is not supported
 Resiliency of NSX Manager is provided by vSphere HA
 Catastrophic failure of NSX Manager is rare; however, periodic backup is recommended to restore to the last known configuration
• During a failure, all existing data plane connectivity continues to work, since the data and management planes are separated
[Diagram: management rack with vCenter Server, NSX Manager and Controllers 1-3 dual-homed to ToR #1 and ToR #2.]
NSX Manager
NSX Controllers
Deployment Considerations
 Provide control plane to distribute network information to ESXi hosts
 NSX Controllers are clustered for scale out and high availability
 Network information is distributed across nodes in a Controller Cluster (slicing)
 Remove the VXLAN dependency on multicast routing/PIM in the physical network
 Provide suppression of ARP broadcast traffic in VXLAN networks
[Diagram: controller slicing – Logical Routers 1-3 and VXLAN 5000-5002 distributed across the controller nodes; the controller VXLAN directory service maintains the MAC, ARP and VTEP tables.]
NSX Controllers Functions
 Controller nodes are deployed as virtual appliances
• 4 vCPU, 4 GB of RAM per node
• CPU reservation of 2048 MHz
• No memory reservation required
• Modifying settings is not supported
 Can be deployed in the Mgmt or Edge clusters
 A cluster size of 3 controller nodes is the only supported configuration
 Controller majority is required for a functional controller cluster
• Data plane activity is maintained even under complete controller cluster failure
 By default, DRS and anti-affinity rules are not enforced for controller deployment
• The recommendation is to manually configure DRS anti-affinity rules
• A minimum of 3 hosts is required to enforce the anti-affinity rule
NSX Controllers
VDS, Transport Zone, VTEPs,
VXLAN Switching
Transport Zone, VTEP, Logical Networks and VDS
 Transport Zone: collection of VXLAN prepared ESXi
clusters
 Normally a TZ defines the span of Logical Switches
(Layer 2 communication domains)
 A VTEP (VXLAN Tunnel EndPoint) is a logical VMkernel interface that connects to the TZ to encapsulate/decapsulate VXLAN traffic
 VTEP VMkernel interface belongs to a specific VLAN
backed port-group dynamically created during the
cluster VXLAN preparation
 One or more VDS can be part of the same TZ
 A given Logical Switch can span multiple VDS
33
[Diagram: two vSphere Distributed Switches; hosts with VTEP1-VTEP4 (10.20.10.10-10.20.10.13) on the VXLAN transport network; VMs MAC1-MAC4 attached to VXLAN 5002 span both VDS.]
vSphere Host (ESXi)
VMkernel Networking
[Diagram: L3 ToR switch with routed uplinks (ECMP) and an 802.1Q VLAN trunk down to the host; the VLANs/SVIs span only the rack.]
Management: VLAN 66, host 10.66.1.25/26, DGW 10.66.1.1 (SVI 66)
vMotion: VLAN 77, host 10.77.1.25/26, GW 10.77.1.1 (SVI 77)
VXLAN: VLAN 88, host 10.88.1.25/26, DGW 10.88.1.1 (SVI 88)
Storage: VLAN 99, host 10.99.1.25/26, GW 10.99.1.1 (SVI 99)
34
VMkernel Networking
 Multi-instance TCP/IP Stack
• Introduced with vSphere 5.5 and leveraged by:
VXLAN
NSX vSwitch transport network
 Separate routing table, ARP table and default
gateway per stack instance
 Provides increased isolation and reservation
of networking resources
 Enables VXLAN VTEPs to use a gateway
independent from the default TCP/IP stack
 Management, vMotion, FT, NFS, iSCSI
leverage the default TCP/IP stack in 5.5
 VMkernel VLANs do not extend beyond the
rack in an L3 fabric design or beyond the
cluster with an L2 fabric, therefore static
routes are required for Management, Storage
and vMotion Traffic
 Host Profiles reduce the overhead of
managing static routes and ensure
persistence
35
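For example, with the addressing of the previous slide, each host would carry static routes such as the following (subnets and gateways are per-rack examples; Host Profiles can push them at scale):

  esxcli network ip route ipv4 add -n 10.77.0.0/16 -g 10.77.1.1
  esxcli network ip route ipv4 add -n 10.99.0.0/16 -g 10.99.1.1

VXLAN traffic needs no such routes, since its TCP/IP stack instance carries its own default gateway.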
L2 – Fabric Network Addressing and VLANs Definition Considerations
L2 Fabric
For L2 Fabric – Y denotes the same subnet used on entire cluster
• When VXLAN is deployed, it creates an automatic port-group whose VLAN ID must be the same per VDS
• Because the fabric is L2, this usually means that the same IP subnets are also used across racks for a given type of traffic
• For a given host, only one VDS is responsible for VXLAN traffic; a single VDS can span multiple clusters
VXLAN Transport Zone Scope (extends across ALL PODs/clusters)
[Diagram: compute clusters A and B (32 hosts each) in POD A and POD B; the VMkernel VLAN/subnet scope spans each POD, with the L3/L2 boundary above the PODs.]
37
Compute Rack - IP Address Allocations
and VLANs
Function VLAN ID IP Subnet
Management 66 10.66.Y.x/24
vMotion 77 10.77.Y.x/24
VXLAN 88 10.88.Y.x/24
Storage 99 10.99.Y.x/24
L3 - Network Addressing and VLANs Definition Considerations
VXLAN Transport Zone Scope (extends across ALL racks/clusters)
For L3 Fabric - Values for VLANs, IP addresses and masks are provided as an example. R_id is the rack number
38
• When VXLAN is deployed, it creates an automatic port-group whose VLAN ID must be the same per VDS
• Because the fabric is L3, this implies that separate IP subnets are associated with the same VLAN IDs defined across racks
• In an L3 fabric, VTEP IP addressing requires consideration: traditional "IP pools" may not work well across racks, so the recommendation is to use DHCP
[Diagram: compute clusters A and B (32 hosts each); each rack uses the same VMkernel VLANs but a unique subnet scope; L3 fabric above, L2 only within the rack.]
Compute Rack - IP Address Allocations
and VLANs
Function VLAN ID IP Subnet
Management 66 10.66.R_id.x/26
vMotion 77 10.77.R_id.x/26
VXLAN 88 10.88.R_id.x/26
Storage 99 10.99.R_id.x/26
VDS Uplink Design
• The VDS uses special port-groups (dvUplinks) for uplink connectivity
• The choice of configuration can be simplified based on the following requirements:
– Simplicity of teaming configuration
– BW required for each type of traffic
– Convergence requirements
– Cluster usage – compute, Edge and management
– The uplink utilization factor – flow-based vs. per-VM
• LACP teaming forces all traffic types to use the same teaming mode
• For VXLAN traffic, the choice of teaming mode depends on:
• Simplicity
• Bandwidth requirements
• LBT mode is not supported
• Having separate VDS for compute and Edge allows flexibility of teaming mode per uplink configuration
39
Teaming and Failover Mode | NSX Support | Multi-VTEP Support | Uplink Behavior (2 x 10G)
Route based on Originating Port | ✓ | ✓ | Both active
Route based on Source MAC Hash | ✓ | ✓ | Both active
LACP | ✓ | × | Flow based – both active
Route based on IP Hash (Static EtherChannel) | ✓ | × | Flow based – both active
Explicit Failover Order | ✓ | × | Only one link active
Route based on Physical NIC Load (LBT) | × | × | Not supported
[Diagram: two vSphere Distributed Switches; hosts with VTEP1-VTEP4 (10.20.10.10-10.20.10.13) on the VXLAN transport network; VMs MAC1-MAC4 on VXLAN 5002.]
VTEP Design
 The number of VTEPs deployed depends on the teaming mode
• Single VTEP for LACP and Explicit Failover
• Multiple VTEPs (based on the number of host uplinks) for the Src-ID teaming option
 A single VTEP is sufficient when
• Workloads do not drive more than 10G of throughput
• A simple operational model is desired – all VXLAN traffic is associated with the same VTEP address
• Deterministic traffic mapping to an uplink is desired (Explicit Failover only)
 Multiple VTEPs (typically two) are required when
 Workloads require > 10G of throughput
• Also allows flexibility in choosing the teaming mode for other traffic types
• IP addressing for VTEPs
• Common VTEP subnet for an L2 fabric
• Multiple VTEP subnets (one per rack) for L3 fabrics
 IP pools or DHCP can be used for IP address assignment
40
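To verify the outcome of the teaming choice, listing the host's VMkernel interfaces from the ESXi shell shows how many VTEP vmknics the cluster preparation created (one per uplink with the Src-ID option):

  esxcfg-vmknic -l

With two VTEPs, two vmk interfaces should appear in the VXLAN VLAN/subnet, each with its own address from the IP pool or DHCP.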
Design Considerations – VDS and Transport Zone
[Diagram: one VXLAN transport zone spanning three clusters – compute clusters 1..N on a Compute VDS (VTEPs 192.168.230.100/.101 and 192.168.240.100/.101) and the edge cluster on an Edge VDS (VTEPs 192.168.220.100/.101); the management cluster hosts the vCenter Server, NSX Manager and controller cluster; the edge cluster hosts the NSX Edges.]
Recap: vCenter – Scale Boundaries
[Diagram: vCenter Server with a DC object containing clusters and VDS 1/VDS 2.]
• Cluster: max. 32 hosts; DRS-based vMotion within the cluster
• VDS: max. 500 hosts per VDS; 128 VDS per vCenter
• vCenter: 1,000 ESXi hosts; 10,000 powered-on VMs
• Manual vMotion across clusters within the DC object
42
NSX for vSphere – Scale & Mobility Boundaries
[Diagram: a Cloud Management System spanning two NSX domains; each domain is a 1:1 pairing of vCenter Server and NSX Manager (NSX API) with its own controller cluster; the logical network span is the transport zone; DRS-based vMotion within a cluster (max. 32 hosts), manual vMotion across clusters (max. 500 hosts per VDS).]
43
NSX for vSphere VXLAN Replication Modes
NSX for vSphere provides flexibility for VXLAN transport – it does not require complex multicast configuration on the physical network
• Unicast Mode
– All replication occurs using unicast; applicable to small deployments
• Multicast Mode
– All replication is offloaded to the physical network
– Requires IGMP snooping/querier and multicast routing (PIM) for L3 *
• Hybrid Mode
– Local replication is offloaded to the physical network, while remote replication occurs via unicast
– The most practical mode, without the complexity of multicast mode
– Only requires IGMP snooping/querier; does not require L3 PIM *
• All modes require an MTU of 1600 bytes.
* The host can provide the necessary querier function; however, an external querier is recommended for manageability/admin scope
CONFIDENTIAL 44
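For hybrid mode, the only physical-network prerequisite is therefore IGMP snooping with a querier on the VXLAN transport VLAN. A hypothetical NX-OS-style snippet (the VLAN and querier address follow the earlier addressing example; syntax varies by vendor):

  ip igmp snooping
  vlan configuration 88
    ip igmp snooping querier 10.88.1.2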
Agenda
• NSX for vSphere Design and Deployment Considerations
– Physical & Logical Infrastructure Requirements
– NSX Edge Design
– Logical Routing Topologies
– NSX Topologies for Enterprise and Multi-tenant Networks
– Micro-segmentation with Distributed FW Design
CONFIDENTIAL 47
NSX Edge Gateway: Integrated network services
Routing/NAT
Firewall
Load Balancing
L2/L3 VPN
DHCP/DNS relay (DDI)
VM VM VM VM VM
• Multi-functional & multi-use VM model; deployment varies based on its use, place in the topology, performance, etc.
• Functional use – P/V routing only, LB only, perimeter FW, etc.
• Form factor – X-Large to Compact (one license)
• Stateful switchover of services(FW/NAT, LB, DHCP & IPSEC/SSL)
• Multi-interface routing Support – OSPF & BGP
• Can be deployed in high availability or stand alone mode
• Per tenant Edge services – scaling by interface and instance
• Scaling of north-south bandwidth with ECMP support in 6.1
• Requires design consideration for following
• Edge placement for north-south traffic
• Edge cluster design consideration
• Bandwidth scaling – 10G to 80G
• Edge services with multi-tenancy
NSX Edge Services Gateway Sizing
49
• The Edge services gateway can be deployed in many sizes depending on the services used
• Multiple Edge nodes can be deployed at once, e.g. ECMP, LB and active-standby for NAT
• When needed, the Edge size can be increased or decreased
• In most deployments, the Quad-Large is sufficient for services such as ECMP & LB
• X-Large is required for high-performance L7 load balancer configurations
Edge Services Gateway
Form | vCPU | Memory (MB) | Specific Usage
X-Large | 6 | 8192 | Suitable for L7 high-performance LB
Quad-Large | 4 | 1024 | Suitable for most deployments
Large | 2 | 1024 | Small DC
Compact | 1 | 512 | PoC
50
Active-Standby Edge Design
[Diagram: an active-standby stateful Edge pair (FW/NAT/LB) on two vSphere hosts, with a routing adjacency to the L3 ToR over the VXLAN 5020 transit link.]
• An Active-Standby Edge Services Gateway enables stateful services
• Perimeter FW, NAT, LB, SSL-VPN, north-south routing
• Deployed as a pair with heartbeat and synchronization of services state
• Heartbeat and sync both use the same internal vNIC
• L2 connectivity is required between active and standby
• Form factor – X-Large to Compact (one license)
• Multi-interface routing support – OSPF & BGP
• Protocol timers must be tuned to 40/120 (hello/hold timers)
• An anti-affinity rule is automatically created
• Active and standby Edges are placed on different hosts
• A minimum of three hosts is recommended
• Multiple Edge instances can be deployed
• An LB Edge can be deployed near the application tier
• Multiple tenants can have separate Edge services
51
ECMP-Based Edge Design
[Diagram: ECMP active NSX Edges E1-E8 (non-stateful) between a transit VXLAN and VLANs 10/20, peering with customer routers R1/R2 toward the external network.]
• ECMP Edge enables scalable north-south traffic forwarding services
• 8 Edge instances – up to 80G of BW
• Stateful services are not supported, due to asymmetric traffic behavior
• No heartbeat and sync between Edge nodes
• L2 connectivity is required for peering
• Form factor – X-Large to Compact (one license)
• Multi-interface routing support – OSPF & BGP
• Aggressive timer tuning is supported – 1/3 (hello/hold timers)
• Anti-affinity configuration is required
• A minimum of three hosts is recommended
• Multiple tenants can have separate Edge services
Edge Interaction with Physical Topology
• The Edge forms peering adjacencies with physical devices
• The uplink teaming configuration impacts routing peering:
– Failover or Src-ID – a single uplink is used to establish routing adjacencies
– LACP – both uplinks can be used; however, there are dependencies on the physical switch vendor
• In addition, the design choice differs depending on whether the Edge peers with a ToR configured as L3 or L2
• The uplink configuration on the VDS, together with the ToR connectivity, creates design choices with vendor-specific technology dependencies (vPC or MLAG)
• The recommendation for a typical design is to use explicit failover mode for the teaming
– Explicit failover does not depend on vendor-specific configuration and provides simple route peering
[Diagram: Edge routing adjacency to the L3 ToR over the VXLAN 5020 transit link, shown both with a non-LACP uplink teaming mode (single adjacency) and with LACP.]
CONFIDENTIAL 52
Agenda
• NSX for vSphere Design and Deployment Considerations
– Physical & Logical Infrastructure Requirements
– NSX Edge Design
– Logical Routing Topologies
– NSX Topologies for Enterprise and Multi-tenant Networks
– Micro-segmentation with Distributed FW Design
CONFIDENTIAL 53
Distributed Logical Routing Components – Control Plane
 The Distributed Logical Router Control Plane is provided by a per
instance DLR Control VM and the NSX Controller
 Dynamic Routing Protocols supported with DLR
• OSPF
• BGP
• The Control VM forms the adjacencies with the Edge nodes
 Communicates with NSX Manager and Controller Cluster
• NSX Manager sends LIF information to the Control VM and Controller Cluster
• Control VM sends Routing updates to the Controller Cluster
 DLR Control VM and NSX Controller are not in the data path
 High availability supported through Active-Standby configuration
 Can exist in edge cluster or in compute cluster
Logical Router Control VM
Distributed Logical Routing Components – Data Plane
 Logical Interfaces (LIFs) on a Distributed Logical Router Instance
• There are internal LIFs and uplink LIFs
• VM Default Gateway traffic is handled by LIFs on the appropriate network
• LIFs are distributed across every hypervisor prepared for NSX
• Up to 1000 LIFs can be configured per Distributed Logical Router Instance
8 Uplink
992 Internal
• An ARP table is maintained per LIF
 vMAC is the MAC address of an internal LIF
• The vMAC is the same across all hypervisors and is never seen by the physical network (only by VMs)
• The routing table on each ESXi host is programmed via the controller
DLR Kernel Module
vSphere
Host
LIF1 LIF2
Transit VXLAN
Uplink
ECMP with DLR and Edge
56
[Diagram: DLR with web/app/DB logical switches behind ECMP Edges E1-E8, which peer with the physical routers toward the core; VXLAN below, VLAN above.]
 ECMP support on the DLR and on the NSX Edge
Both have the capability of installing in their forwarding tables up to 8
equal cost routes toward a given destination
 8 NSX Edges can be simultaneously deployed for a
given tenant
Increases the available bandwidth for north-south communication (up to 80 Gbps*)
Reduces the traffic outage in an ESG failure scenario (only 1/Xth of
the flows are affected)
 Load-balancing algorithm on the NSX Edge:
Based on the Linux kernel flow-based random round-robin algorithm for next-hop selection, where a flow is a pair of source IP and destination IP
 Load-balancing algorithm on the DLR:
A hash of source IP and destination IP selects the next hop (conceptually, next-hop index = hash(src IP, dst IP) mod number of equal-cost paths)
Active Standby
Distributed Router & ECMP Edge Routing
 2 VLANs are used for peering with the customer routers
 Map each of these VLANs (port-groups) to a different dvUplink on the Edge VDS to ensure distribution of N/S traffic across dvUplinks
 Uplink = VLAN = Adjacency
 Avoid using LACP to the ToR for route peering, due to vendor dependencies
 Min 3 hosts per rack
 With two hosts, place the two active Edges on different hosts with an anti-affinity rule to avoid a dual failure
 Use the third host for the active control VM, with the standby on any remaining host, enforced with an anti-affinity rule
[Diagram: a distributed router (active/standby DLR control VM) with web/app/DB logical switches, peering over a transit VXLAN with four ECMP active NSX Edges E1-E4, which peer with customer routers R1/R2 over VLAN 10 and VLAN 20 toward the external network.]
Edge HA Models Comparison – BW, Services & Convergence
[Diagrams: left – active/standby Edge pair (E1 active, E2 standby) with a single routing adjacency to the physical router, DLR and control VM below; right – ECMP Edges E1-E8 with multiple routing adjacencies to the physical router.]
Active/Standby HA Model
• Bandwidth: single path (~10 Gbps/tenant)
• Stateful services: supported – NAT, SLB, FW
• Availability: slower convergence with stateful services enabled
ECMP Model
• Bandwidth: up to 8 paths (~80 Gbps/tenant)
• Stateful services: not supported
• Availability: high – ~3-4 sec with (1,3 sec) timer tuning
3-Tier App Logical to Physical Mapping
[Diagram: web/app/DB VMs distributed across compute cluster hosts 1-5; Edge VMs and Logical Router Control VMs on edge cluster hosts 6-7; NSX Manager, NSX Controller Cluster, vCAC and vCenter in the management cluster.]
CONFIDENTIAL 59
 Edge cluster availability and capacity planning require:
• Minimum three hosts per cluster
• More if ECMP-based north-south traffic BW requires it
 The Edge cluster can also contain the NSX Controllers and the control VMs for Distributed Logical Routing (DLR)
[Diagram: single-rack and dual-rack edge cluster connectivity – VMkernel VLANs plus VLANs for L2 and L3 NSX services carried from the edge racks into the routed DC fabric and out to WAN/Internet.]
Deployment Considerations
 Benefits of a dedicated edge rack:
 Reduced need for stretching VLANs
 L2 is required only for the external 802.1Q VLANs & the Edge default GW
 L2 connectivity is required between active and standby in a stateful Edge design
 GARP is used to announce the new MAC in the event of a failover
 Localized routing configuration for N-S traffic reduces the need to configure and manage the remaining ToRs in the spine
 Span of control for network-centric operational management, BW monitoring & features
60
Edge Cluster
Agenda
• NSX for vSphere Design and Deployment Considerations
– Physical & Logical Infrastructure Requirements
– NSX Edge Design
– Logical Routing Topologies
– NSX Topologies for Enterprise and Multi-tenant Networks
– Micro-segmentation with Distributed FW Design
CONFIDENTIAL 61
Enterprise Topology – Two Tier Design – with/without 6.1 Onward
 A typical Enterprise topology consists of app-tier logical segments
 Routing and distributed forwarding are enabled for each logical segment, available on every host via the distributed logical router (DLR)
• Workloads can move without VLAN dependencies, as local forwarding exists on each host via the DLR LIFs
• North-south traffic is handled via the next-hop Edge, which provides virtual-to-physical (VXLAN-to-VLAN) forwarding
 The DLR-to-Edge routing is provisioned once; the topology can then be reused for additional logical segments (additional LIFs) for multiple app-tier deployments
 Scaling
• Edge scaling – two ways
• Per-tenant scaling – i.e., each workload/tenant gets its own Edge and DLR
• ECMP-based scaling for incremental BW – 10G of BW per additional Edge, up to a maximum of 80G (8 Edges); available from the NSX 6.1 release onward
• DLR scaling
• Up to 1000 LIFs – i.e., 992 internal logical networks per DLR instance
[Diagram: DLR distributed routing for web/app/DB segments (Web1/App1/DB1 … Webn/Appn/DBn), peering over the VXLAN 5020 transit link with the NSX Edges (E1-E8, ECMP, non-stateful), which peer with the physical router over VLAN 20 toward the external network/core; route updates flow from the Edges to the DLR.]
Multi Tenant (DLRs) Routing Topology
[Diagram: tenants 1-9, each with its own DLR instance (web/app/DB logical switches), peering with a shared NSX Edge over per-tenant transit links (VXLAN 5020 … VXLAN 5029) toward the external network.]
63
 Can be deployed by Enterprises, SPs and hosting companies
 No support for overlapping IP addresses between tenants connected to the same NSX Edge
 If true isolation of tenant routing and overlapping IP addressing are required, a dedicated Edge per tenant (in HA mode) is the right approach
VLAN
VXLAN
Multi Tenant Routing Topology (Post-6.1 NSX Release)
External Network
NSX Edge
VXLAN Trunk
Interface
64
 From NSX SW release 6.1, a new type of interface is supported on the NSX Edge (in addition to Internal and Uplink): the "Trunk" interface
 This allows creating many sub-interfaces on a single NSX Edge vNIC and establishing peering with a separate DLR instance on each sub-interface
 Scales up the number of tenants supported by a single ESG (assuming no overlapping IP addresses across tenants)
 An aggregate of 200 sub-interfaces per NSX Edge is supported in 6.1
 Only static routing & BGP are supported on sub-interfaces in 6.1
 OSPF support will be introduced in the 6.1.3 maintenance release
 Scale numbers for dynamic routing (max peers/adjacencies) are under review
[Diagram: tenants 1..n, each a DLR with web/app/DB logical switches, routing-peered to a single NSX Edge vNIC via sub-interfaces on the VXLAN trunk; VLAN above, VXLAN below.]
High Scale Multi Tenant Topology
65
• High-scale multi-tenancy is enabled with multiple tiers of Edges interconnected via a VXLAN transit uplink
• Two tiers of Edges allow scaling with administrative control
– The top-tier Edge acts as a provider Edge, managed by the cloud (central) admin
– The second-tier Edges are provisioned and managed by the tenant
• The provider Edge can scale up to 8 ECMP Edges for scalable routing
• Based on tenant requirements, the tenant Edge can be ECMP or stateful
• Used to scale up the number of tenants (the only option before the VXLAN trunk introduction)
• Support for overlapping IP addresses between tenants connected to different first-tier NSX Edges
[Diagram: ECMP-based provider NSX Edges (X-Large, route aggregation layer, E1-E8) facing the external network; below, per-tenant Edges – either ECMP tenant Edges or HA Edges with NAT/LB features and a single adjacency to the ECMP layer – each fronting web/app/DB logical switches, interconnected over the VXLAN 5100 transit and VXLAN uplinks or a VXLAN trunk (*supported from NSX release 6.1 onward).]
Multi Tenant Topology - NSX (Today)
[Diagram: tenant NSX ESGs (T1, T2) with web/app/DB logical switches, connecting over VXLAN uplinks (or a VXLAN trunk, *supported from NSX release 6.1 onward) and VLANs 10/20 to per-tenant VRFs (Tenant 1 VRF, Tenant 2 VRF) on the physical router (PE or multi-VRF CE) toward the MPLS network.]
66
 The NSX Edge is currently not VRF-aware
 A single routing table does not allow keeping tenants logically isolated
 Each dedicated tenant Edge can connect to a separate VRF in the upstream physical router
 This is the current deployment option to integrate with an MPLS network
Agenda
• NSX for vSphere Design and Deployment Considerations
– Physical & Logical Infrastructure Requirements
– NSX Edge Design
– Logical Routing Topologies
– NSX Topologies for Enterprise and Multi-tenant Networks
– Micro-segmentation with Distributed FW Design
CONFIDENTIAL 67
[Diagram: Internet and Intranet/Extranet enter through a physical perimeter firewall and the NSX Edge service gateway, which provide stateful perimeter protection for the SDDC (Software Defined DC); the distributed firewall (DFW) on the virtual compute clusters provides inter/intra-VM protection.]
NSX Security Architecture Overview
• Stateful Edge security
• DFW per-vNIC characteristics
– Distributed & fully programmable (REST API)
– vMotion with rules and connection state intact
– Flexible rules and topology independence
– Third-party ecosystem integration – e.g. PAN (Palo Alto Networks)
– Foundation for the micro-segmentation design
• Tools and methods to protect virtual resources
– Traffic redirection rules with Service Composer or the partner security services UI
– Filtering module within the security policy definition
– Diverse policy objects & Policy Enforcement Points (PEPs)
• Identity – AD groups
• VC container objects – DC, cluster, port-groups, logical switches
• VM characteristics – VM names, security tags, attributes, OS names
• Protocols, ports, services
• Security groups leverage these objects and PEPs to achieve micro-segmentation
CONFIDENTIAL 68
Micro-segmentation Design
• Collapsing application tiers into like services, with each app tier on its own logical switch
– Better for managing domain-specific (web, DB) security requirements
– Easier to develop segmented isolation between app-tier domains – e.g. Web-to-DB Deny_All vs. Web-to-App granularity
– May require complex security between app tiers, as specific web-to-app or app-to-DB isolation is needed within a logical switch as well as between segments
• Collapsing an application's entire set of tiers into a single logical switch
– Better for managing group/application-owner-specific expertise
– App-container model; may suit an app-as-tenant model well
– Simpler security group construct per app tier
– Isolation between different app containers is required
• DMZ model
– Zero-trust security
– Multiple DMZ logical networks, default Deny_All within DMZ segments
– External-to-internal protection by multiple groups
[Diagram 1: per-tier logical switches behind a Logical Distributed Router – Web-Tier-01 1.1.1.0/24 (web-01/web-02 at .11/.12), App-Tier-01 2.2.2.0/24 (app-01/app-02 at .11/.12), DB-Tier-01 3.3.3.0/24 (db-01/db-02 at .11/.12), each with a .1 gateway LIF.]
[Diagram 2: collapsed design – a single All-Tier-01 1.1.1.0/24 logical switch behind the Logical Distributed Router (.1), hosting web-01/web-02 (.11/.12), app-01/app-02 (.21/.22) and db-01/db-02 (.31/.32), segmented by security groups SG-WEB, SG-APP and SG-DB.]
[Diagram 3: allowed flows – client-to-web HTTPS traffic from the external network to the web tier, and web-to-app on TCP/8443; all other inter-group traffic is blocked.]
CONFIDENTIAL 69
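Mapped to DFW rules, the flows shown above reduce to a very small zero-trust rule set. A sketch using the security groups from the slide (the final deny-all reflects the default-deny model described):

  Rule 1: any -> SG-WEB, service HTTPS (TCP/443), Allow
  Rule 2: SG-WEB -> SG-APP, service TCP/8443, Allow
  Default: any -> any, Deny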
Feature Overview - vCloud Automation Center & NSX
• Connectivity
– vCAC Network Profiles for On-Demand Network Creation
• Define routed, NAT, private, external profiles for variety of app topologies
• Option to connect app to pre-created networks (logical or physical)
– NSX Logical Distributed Router (DLR)
• Optimize for east-west traffic & resources by connecting to pre-created LDR
• Security
– On-Demand Micro-segmentation
• Automatic creation of security group per app w/ default deny firewall rules
– Apply Firewall and Advanced Security Policies w/ Ease
• Select pre-defined NSX security policies to apply to app/tier
• Antivirus, DLP, Intrusion Prevention, Vulnerability Mgmt…more to come
– Connect Business Logic to Security Policy w/ Ease
• Select pre-defined NSX security tag (e.g. ‘Finance’) which is applied to workload
and interpreted by NSX to place in pre-defined security group
• Availability
– On-demand Load Balancer in ‘One-Armed’ Mode
• Plus option for using pre-created, in-line load balancer (logical or phys)
CONFIDENTIAL
Range of features from pre-created to on-demand network and security services.
Web
App
Database
VM
70
Reference Designs
73
NSX
Reference
Designs
NSX
Platform
Hardening
NSX
Getting
Started
Guides
SDDC
Validated
Solutions
NSX
Partner
White
papers
Reference Designs & Technical Papers on VMware Communities:
https://communities.vmware.com/docs
Reference Designs and Technical Papers on the NSX Portal:
http://www.vmware.com/products/nsx/resources.html
NSX and
Fabric
Vendors
VMware NSX Collateral Landscape
VMware NSX Network Virtualization Design Guides:
https://communities.vmware.com/docs/DOC-27683
NSX Reference Design Guides – The Architecture
[Diagram: the reference architecture – ESXi compute clusters plus infrastructure/edge clusters (Edge clusters, storage cluster, management and cloud management cluster with vCenter) connected to WAN/Internet.]
CONFIDENTIAL 74
What’s Next…
VMware NSX
Hands-on Labs
labs.hol.vmware.com
VMware Booth #1229
3 NSX Demo Stations
Explore, Engage, Evolve
virtualizeyournetwork.com
Network Virtualization Blog
blogs.vmware.com/networkvirtualization
NSX Product Page
vmware.com/go/nsx
NSX Training & Certification
www.vmware.com/go/NVtraining
NSX Technical Resources
Reference Designs
vmware.com/products/nsx/resources
VMware NSX YouTube Channel
youtube.com/user/vmwarensx
Play Learn Deploy
CONFIDENTIAL 75
76
Please submit your feedback
via our mobile app.
Thank you!
Reference design for v mware nsx

More Related Content

What's hot

Hyper-converged infrastructure
Hyper-converged infrastructureHyper-converged infrastructure
Hyper-converged infrastructure
Igor Malts
 

What's hot (20)

Hyper-Converged Infrastructure: Concepts
Hyper-Converged Infrastructure: ConceptsHyper-Converged Infrastructure: Concepts
Hyper-Converged Infrastructure: Concepts
 
VMware NSX 101: What, Why & How
VMware NSX 101: What, Why & HowVMware NSX 101: What, Why & How
VMware NSX 101: What, Why & How
 
VMware Advance Troubleshooting Workshop - Day 5
VMware Advance Troubleshooting Workshop - Day 5VMware Advance Troubleshooting Workshop - Day 5
VMware Advance Troubleshooting Workshop - Day 5
 
SDN입문 (Overlay and Underlay)
SDN입문 (Overlay and Underlay)SDN입문 (Overlay and Underlay)
SDN입문 (Overlay and Underlay)
 
(2014년) Active Active 데이터센터
(2014년) Active Active 데이터센터(2014년) Active Active 데이터센터
(2014년) Active Active 데이터센터
 
Nsx t reference design guide 3-0
Nsx t reference design guide 3-0Nsx t reference design guide 3-0
Nsx t reference design guide 3-0
 
Microsoft Windows Server 2022 Overview
Microsoft Windows Server 2022 OverviewMicrosoft Windows Server 2022 Overview
Microsoft Windows Server 2022 Overview
 
VMware Virtual SAN Presentation
VMware Virtual SAN PresentationVMware Virtual SAN Presentation
VMware Virtual SAN Presentation
 
Introduction to ibm cloud paks concept license and minimum config public
Introduction to ibm cloud paks concept license and minimum config publicIntroduction to ibm cloud paks concept license and minimum config public
Introduction to ibm cloud paks concept license and minimum config public
 
VMware Advance Troubleshooting Workshop - Day 3
VMware Advance Troubleshooting Workshop - Day 3VMware Advance Troubleshooting Workshop - Day 3
VMware Advance Troubleshooting Workshop - Day 3
 
Brkarc 3454 - in-depth and personal with the cisco nexus 2000 fabric extender...
Brkarc 3454 - in-depth and personal with the cisco nexus 2000 fabric extender...Brkarc 3454 - in-depth and personal with the cisco nexus 2000 fabric extender...
Brkarc 3454 - in-depth and personal with the cisco nexus 2000 fabric extender...
 
VMware Cloud Foundation - PnP presentation 8_6_18 EN.pptx
VMware Cloud Foundation - PnP presentation 8_6_18 EN.pptxVMware Cloud Foundation - PnP presentation 8_6_18 EN.pptx
VMware Cloud Foundation - PnP presentation 8_6_18 EN.pptx
 
Aci presentation
Aci presentationAci presentation
Aci presentation
 
TechWiseTV Workshop: 5th Generation UCS
TechWiseTV Workshop: 5th Generation UCSTechWiseTV Workshop: 5th Generation UCS
TechWiseTV Workshop: 5th Generation UCS
 
Virtual Infrastructure Overview
Virtual Infrastructure OverviewVirtual Infrastructure Overview
Virtual Infrastructure Overview
 
Networking deep dive
Networking deep diveNetworking deep dive
Networking deep dive
 
RUCKUS Unleashed & SmartZone
RUCKUS Unleashed & SmartZoneRUCKUS Unleashed & SmartZone
RUCKUS Unleashed & SmartZone
 
Hyper-converged infrastructure
Hyper-converged infrastructureHyper-converged infrastructure
Hyper-converged infrastructure
 
VMware vSAN - Novosco, June 2017
VMware vSAN - Novosco, June 2017VMware vSAN - Novosco, June 2017
VMware vSAN - Novosco, June 2017
 
VMware virtual SAN 6 overview
VMware virtual SAN 6 overviewVMware virtual SAN 6 overview
VMware virtual SAN 6 overview
 

Viewers also liked

Viewers also liked (20)

VMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
VMworld 2016: How to Deploy VMware NSX with Cisco InfrastructureVMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
VMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
 
VMware NSX - Lessons Learned from real project
VMware NSX - Lessons Learned from real projectVMware NSX - Lessons Learned from real project
VMware NSX - Lessons Learned from real project
 
VMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep DiveVMworld 2015: VMware NSX Deep Dive
VMworld 2015: VMware NSX Deep Dive
 
Network Virtualization with VMware NSX
Network Virtualization with VMware NSXNetwork Virtualization with VMware NSX
Network Virtualization with VMware NSX
 
VMworld 2016: Advanced Network Services with NSX
VMworld 2016: Advanced Network Services with NSXVMworld 2016: Advanced Network Services with NSX
VMworld 2016: Advanced Network Services with NSX
 
VMUG - NSX Architettura e Design
VMUG - NSX Architettura e DesignVMUG - NSX Architettura e Design
VMUG - NSX Architettura e Design
 
VMware NSX for vSphere - Intro and use cases
VMware NSX for vSphere - Intro and use casesVMware NSX for vSphere - Intro and use cases
VMware NSX for vSphere - Intro and use cases
 
VMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld 2016: vSphere 6.x Host Resource Deep DiveVMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld 2016: vSphere 6.x Host Resource Deep Dive
 
NSX for vSphere Logical Routing Deep Dive
NSX for vSphere Logical Routing Deep DiveNSX for vSphere Logical Routing Deep Dive
NSX for vSphere Logical Routing Deep Dive
 
VMworld 2015: The Future of Network Virtualization with VMware NSX
VMworld 2015: The Future of Network Virtualization with VMware NSXVMworld 2015: The Future of Network Virtualization with VMware NSX
VMworld 2015: The Future of Network Virtualization with VMware NSX
 
VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...
VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...
VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...
 
Software Defined Networking (SDN) with VMware NSX
Software Defined Networking (SDN) with VMware NSXSoftware Defined Networking (SDN) with VMware NSX
Software Defined Networking (SDN) with VMware NSX
 
BETTER TOGETHER 〜VMware NSXとJuniperデバイスを繋いでみよう!〜
BETTER TOGETHER 〜VMware NSXとJuniperデバイスを繋いでみよう!〜BETTER TOGETHER 〜VMware NSXとJuniperデバイスを繋いでみよう!〜
BETTER TOGETHER 〜VMware NSXとJuniperデバイスを繋いでみよう!〜
 
VMworld 2015: vSphere Distributed Switch 6 –Technical Deep Dive
VMworld 2015: vSphere Distributed Switch 6 –Technical Deep DiveVMworld 2015: vSphere Distributed Switch 6 –Technical Deep Dive
VMworld 2015: vSphere Distributed Switch 6 –Technical Deep Dive
 
VMworld 2015: Virtual Volumes Technical Deep Dive
VMworld 2015: Virtual Volumes Technical Deep DiveVMworld 2015: Virtual Volumes Technical Deep Dive
VMworld 2015: Virtual Volumes Technical Deep Dive
 
Understanding and deploying Network Virtualization
Understanding and deploying Network VirtualizationUnderstanding and deploying Network Virtualization
Understanding and deploying Network Virtualization
 
#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015
#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015
#NET5488 - Troubleshooting Methodology for VMware NSX - VMworld 2015
 
VMworld 2015: Troubleshooting for vSphere 6
VMworld 2015: Troubleshooting for vSphere 6VMworld 2015: Troubleshooting for vSphere 6
VMworld 2015: Troubleshooting for vSphere 6
 
VMware NSX + Cumulus Networks: Software Defined Networking
VMware NSX + Cumulus Networks: Software Defined NetworkingVMware NSX + Cumulus Networks: Software Defined Networking
VMware NSX + Cumulus Networks: Software Defined Networking
 
Network virtualization
Network virtualizationNetwork virtualization
Network virtualization
 

Similar to Reference design for v mware nsx

Net1674 final emea
Net1674 final emeaNet1674 final emea
Net1674 final emea
VMworld
 
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...
nvirters
 
Banv meetup-contrail
Banv meetup-contrailBanv meetup-contrail
Banv meetup-contrail
nvirters
 

Similar to Reference design for v mware nsx (20)

VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield D...
VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield D...VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield D...
VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield D...
 
VMworld 2013: Advanced VMware NSX Architecture
VMworld 2013: Advanced VMware NSX Architecture VMworld 2013: Advanced VMware NSX Architecture
VMworld 2013: Advanced VMware NSX Architecture
 
VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX
VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX
VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX
 
VMware nsx network virtualization tool
VMware nsx network virtualization toolVMware nsx network virtualization tool
VMware nsx network virtualization tool
 
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSXOVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX
 
NSX: La Virtualizzazione di Rete e il Futuro della Sicurezza
NSX: La Virtualizzazione di Rete e il Futuro della SicurezzaNSX: La Virtualizzazione di Rete e il Futuro della Sicurezza
NSX: La Virtualizzazione di Rete e il Futuro della Sicurezza
 
VMworld 2013: Virtualized Network Services Model with VMware NSX
VMworld 2013: Virtualized Network Services Model with VMware NSX VMworld 2013: Virtualized Network Services Model with VMware NSX
VMworld 2013: Virtualized Network Services Model with VMware NSX
 
VMworld 2014: Advanced Topics & Future Directions in Network Virtualization w...
VMworld 2014: Advanced Topics & Future Directions in Network Virtualization w...VMworld 2014: Advanced Topics & Future Directions in Network Virtualization w...
VMworld 2014: Advanced Topics & Future Directions in Network Virtualization w...
 
Net1674 final emea
Net1674 final emeaNet1674 final emea
Net1674 final emea
 
VMware NSX and Arista L2 Hardware VTEP Gateway Integration
VMware NSX and Arista L2 Hardware VTEP Gateway IntegrationVMware NSX and Arista L2 Hardware VTEP Gateway Integration
VMware NSX and Arista L2 Hardware VTEP Gateway Integration
 
Tungsten Fabric Overview
Tungsten Fabric OverviewTungsten Fabric Overview
Tungsten Fabric Overview
 
VMworld 2013: An Introduction to Network Virtualization
VMworld 2013: An Introduction to Network Virtualization VMworld 2013: An Introduction to Network Virtualization
VMworld 2013: An Introduction to Network Virtualization
 
Understanding network and service virtualization
Understanding network and service virtualizationUnderstanding network and service virtualization
Understanding network and service virtualization
 
Support of containerized workloads in ONAP
Support of containerized workloads in ONAPSupport of containerized workloads in ONAP
Support of containerized workloads in ONAP
 
Scaling Your SDDC Network: Building a Highly Scalable SDDC Infrastructure wit...
Scaling Your SDDC Network: Building a Highly Scalable SDDC Infrastructure wit...Scaling Your SDDC Network: Building a Highly Scalable SDDC Infrastructure wit...
Scaling Your SDDC Network: Building a Highly Scalable SDDC Infrastructure wit...
 
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...
 
Building the SD-Branch using uCPE
Building the SD-Branch using uCPEBuilding the SD-Branch using uCPE
Building the SD-Branch using uCPE
 
From virtual to high end HW routing for the adult
From virtual to high end HW routing for the adultFrom virtual to high end HW routing for the adult
From virtual to high end HW routing for the adult
 
A consolidated virtualization approach to deploying distributed cloud networks
A consolidated virtualization approach to deploying distributed cloud networksA consolidated virtualization approach to deploying distributed cloud networks
A consolidated virtualization approach to deploying distributed cloud networks
 
Banv meetup-contrail
Banv meetup-contrailBanv meetup-contrail
Banv meetup-contrail
 

More from solarisyougood

More from solarisyougood (20)

Emc vipr srm workshop
Emc vipr srm workshopEmc vipr srm workshop
Emc vipr srm workshop
 
Emc recoverpoint technical
Emc recoverpoint technicalEmc recoverpoint technical
Emc recoverpoint technical
 
Emc vmax3 technical deep workshop
Emc vmax3 technical deep workshopEmc vmax3 technical deep workshop
Emc vmax3 technical deep workshop
 
EMC Atmos for service providers
EMC Atmos for service providersEMC Atmos for service providers
EMC Atmos for service providers
 
Cisco prime network 4.1 technical overview
Cisco prime network 4.1 technical overviewCisco prime network 4.1 technical overview
Cisco prime network 4.1 technical overview
 
Designing your xen desktop 7.5 environment with training guide
Designing your xen desktop 7.5 environment with training guideDesigning your xen desktop 7.5 environment with training guide
Designing your xen desktop 7.5 environment with training guide
 
Ibm aix technical deep dive workshop advanced administration and problem dete...
Ibm aix technical deep dive workshop advanced administration and problem dete...Ibm aix technical deep dive workshop advanced administration and problem dete...
Ibm aix technical deep dive workshop advanced administration and problem dete...
 
Ibm power ha v7 technical deep dive workshop
Ibm power ha v7 technical deep dive workshopIbm power ha v7 technical deep dive workshop
Ibm power ha v7 technical deep dive workshop
 
Power8 hardware technical deep dive workshop
Power8 hardware technical deep dive workshopPower8 hardware technical deep dive workshop
Power8 hardware technical deep dive workshop
 
Power systems virtualization with power kvm
Power systems virtualization with power kvmPower systems virtualization with power kvm
Power systems virtualization with power kvm
 
Power vc for powervm deep dive tips & tricks
Power vc for powervm deep dive tips & tricksPower vc for powervm deep dive tips & tricks
Power vc for powervm deep dive tips & tricks
 
Emc data domain technical deep dive workshop
Emc data domain  technical deep dive workshopEmc data domain  technical deep dive workshop
Emc data domain technical deep dive workshop
 
Ibm flash system v9000 technical deep dive workshop
Ibm flash system v9000 technical deep dive workshopIbm flash system v9000 technical deep dive workshop
Ibm flash system v9000 technical deep dive workshop
 
Emc vnx2 technical deep dive workshop
Emc vnx2 technical deep dive workshopEmc vnx2 technical deep dive workshop
Emc vnx2 technical deep dive workshop
 
Emc isilon technical deep dive workshop
Emc isilon technical deep dive workshopEmc isilon technical deep dive workshop
Emc isilon technical deep dive workshop
 
Emc ecs 2 technical deep dive workshop
Emc ecs 2 technical deep dive workshopEmc ecs 2 technical deep dive workshop
Emc ecs 2 technical deep dive workshop
 
Emc vplex deep dive
Emc vplex deep diveEmc vplex deep dive
Emc vplex deep dive
 
Cisco mds 9148 s training workshop
Cisco mds 9148 s training workshopCisco mds 9148 s training workshop
Cisco mds 9148 s training workshop
 
Cisco cloud computing deploying openstack
Cisco cloud computing deploying openstackCisco cloud computing deploying openstack
Cisco cloud computing deploying openstack
 
Se training storage grid webscale technical overview
Se training   storage grid webscale technical overviewSe training   storage grid webscale technical overview
Se training storage grid webscale technical overview
 


Reference Design for VMware NSX

  • 10. NSX Architecture and Components Cloud Consumption • Self Service Portal • vCloud Automation Center, OpenStack, Custom Data Plane NSX Edge ESXi Hypervisor Kernel Modules Distributed Services • High-Performance Data Plane • Scale-out Distributed Forwarding Model Management Plane NSX Manager • Single configuration portal • REST API entry-point Logical Network Physical Network NSX Logical Router Control VM Control Plane NSX Controller • Manages Logical networks • Control-Plane Protocol • Separation of Control and Data Plane Logical Switch, Distributed Logical Router, Distributed Firewall vCenter Server 9
  • 11. NSX for vSphere Design and Deployment Considerations
  • 12. Agenda • NSX for vSphere Design and Deployment Considerations – Physical & Logical Infrastructure Requirements – NSX Edge Design – Logical Routing Topologies – NSX Topologies for Enterprise and Multi-tenant Networks – Micro-segmentation with Distributed FW Design CONFIDENTIAL 12
  • 13. NSX is AGNOSTIC to Underlay Network Topology L2 or L3 or Any Combination Only TWO Requirements IP Connectivity MTU of 1600 CONFIDENTIAL 13
  • 14. Classical Access/Aggregation/Core Network • L2 application scope is limited to a single POD and is the failure domain • Multiple aggregation modules, to limit the Layer 2 domain size • VLANs carried throughout the PoD • Unique VLAN to a subnet mapping • Default gateway – HSRP at aggregation layer WAN/Internet L3 L2 POD A L3 L2 POD B VLAN X Stretch VLAN Y Stretch CONFIDENTIAL 14
  • 15. L3 Topologies & Design Considerations • L3 ToR designs run a dynamic routing protocol between leaf and spine • BGP, OSPF or IS-IS can be used • Each rack advertises a small set of prefixes (unique VLAN/subnet per rack) • Equal-cost paths to the other racks' prefixes • The ToR switch provides the default gateway service for each VLAN subnet • 802.1Q trunks with a small set of VLANs for VMkernel traffic • Rest of the session assumes an L3 topology WAN/Internet L3 L2 L3 Uplinks VLAN Boundary 802.1Q Hypervisor 1 802.1Q ... Hypervisor n CONFIDENTIAL L3 L2 15
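As a rough illustration of this leaf/spine routing model, the sketch below shows how a ToR might advertise only its rack-local prefixes over BGP. This is a hedged, NX-OS-style example: the AS numbers, neighbor address and subnets are assumptions for illustration, not values from this deck.

   ! Hypothetical L3 leaf (rack 11): advertise only the rack's own VMkernel subnets
   feature bgp
   router bgp 65011
     router-id 10.0.0.11
     address-family ipv4 unicast
       network 10.66.11.0/26        ! rack-local management subnet (assumed)
       network 10.88.11.0/26        ! rack-local VXLAN (VTEP) subnet (assumed)
     neighbor 10.0.1.0
       remote-as 65000              ! spine AS (assumed)
       address-family ipv4 unicast

Each rack repeats this pattern with its own subnets, which keeps the prefix count per leaf small and gives the spine equal-cost paths to every rack.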
  • 16. MTU Considerations • Arista – L2 interfaces  by default, IP packets as large as 9214 bytes can be sent and received  no configuration is required – L3 interfaces  by default, IP packets as large as 1500 bytes can be sent and received • Configuration step for L3 interfaces: change the MTU to 9214 (“mtu 9214” command)  IP packets as large as 9214 bytes can then be sent and received • Cisco Nexus 9000 – L2 and L3 interfaces  by default, IP packets as large as 1500 bytes can be sent and received – Configuration steps for L2 interfaces • Change the system jumbo MTU to 9214 (“system jumbomtu 9214” global command)  this is because the interface MTU can only be set to the default value (1500 bytes) or the system-wide configured value • Change the MTU to 9214 on each L2 interface (“mtu 9214” interface command) – Configuration steps for L3 interfaces • Change the MTU to 9214 on each L3 interface (“mtu 9214” interface command) • Cisco Nexus 3000 and 5000/6000 – The MTU for L2 interfaces can ONLY be changed with a “system QoS” policy – Configuration step for L3 interfaces: change the MTU to 9214 (“mtu 9214” command)  IP packets as large as 9214 bytes can then be sent and received CONFIDENTIAL 16
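The Nexus 9000 steps above can be summarized in a short sketch. The commands ("system jumbomtu 9214", "mtu 9214") are the ones quoted in the slide; the interface numbers and the L3 address are assumptions for illustration. VXLAN itself only requires 1600 bytes, but configuring 9214 end to end is a common simplification.

   ! Nexus 9000 (illustrative)
   system jumbomtu 9214             ! global: per-port L2 MTU may be 1500 or this value

   interface Ethernet1/1            ! L2 trunk toward the hypervisor uplinks (assumed port)
     switchport mode trunk
     mtu 9214

   interface Ethernet1/49           ! L3 uplink toward the spine (assumed port)
     no switchport
     mtu 9214
     ip address 192.168.1.1/31      ! assumed addressing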
  • 18. Organizing Compute, Management & Edge Edge Leaf L3 to DC Fabric L2 to External Networks Compute Clusters Infrastructure Clusters (Edge, Storage, vCenter and Cloud Management System) WAN Internet L3 L2 L3 L2 Leaf Spine L2 VLANs for bridging Separation of compute, management and Edge functions offers the following design advantages • Managing the life-cycle of resources for compute and Edge functions • Ability to isolate and develop span of control • Capacity planning – CPU, Memory & NIC • Upgrades & migration flexibility • High availability based on functional need • Workload-specific SLA (DRS & FT) • Network-centric connectivity – P/V, ECMP • vMotion boundary • Automation control over areas or functions that require frequent changes • app-tier, micro-segmentation & load-balancer Three areas of technology require consideration • Interaction with the physical network • Overlay (VXLAN) impact • Integration with vSphere clustering
  • 19. vSphere Cluster Design – Collapsed Edge/Infra Racks Compute Racks Infrastructure Racks (Edge, Storage, vCenter and Cloud Management System) Edge Clusters vCenter 1 Max supported number of VMs vCenter 2 Max supported number of VMs WAN Internet Cluster location determined by connectivity requirements Storage Management Cluster L3 L2 L3 L2 L3 L2 Leaf Spine Edge Leaf (L3 to DC Fabric, L2 to External Networks) L2 VLANs for bridging 19
  • 20. vSphere Cluster Design – Separated Edge/Infra WAN Internet Leaf vCenter 1 Max supported number of VMs vCenter 2 Max supported number of VMs Compute Racks Cluster location determined by connectivity requirements Infrastructure Racks (Storage, vCenter and Cloud Management) Edge Racks (Logical Router Control VMs and NSX Edges) Spine L3 L2 L3 L2 L3 L2 Edge Leaf (L3 to DC Fabric, L2 to External Networks) 20
  • 21. 21 Registration or Mapping WebVM WebVM VM VM WebVM Compute Cluster WebVM VM VM Compute A vCenter Server NSX Manager NSX Controller Compute B Edge and Control VM Edge Cluster Management Cluster  Single vCenter Server to manage all Management, Edge and Compute Clusters • NSX Manager deployed in the Mgmt Cluster and paired to the vCenter Server • NSX Controllers can also be deployed into the Management Cluster • Reduces vCenter Server licensing requirements • Most common in POCs or small environments Single vCenter Design
  • 22. 22 Management VC Web VM Web VM VM VM Compute A Compute B VC for NSX Domain - A NSX Controller Edge and Control VM Web VM Web VM VM VM VC for NSX Domain - B NSX Manager VM - B NSX Controller Edge and Control VM NSX Manager VM - A Edge Cluster Compute Cluster Edge Cluster • Option 2 follows VMware best practices by having the Management Cluster managed by a dedicated vCenter Server (Mgmt VC) • A separate vCenter Server in the Management Cluster manages the Edge and Compute Clusters NSX Manager is also deployed into the Management Cluster and paired with this second vCenter Server Can deploy multiple NSX Manager/vCenter Server pairs (separate NSX domains) • NSX Controllers must be deployed into the same vCenter Server the NSX Manager is attached to, therefore the Controllers are usually also deployed into the Edge Cluster Management Cluster Multiple vCenters Design - Multiple NSX Domains
  • 23. Leaf L2 L3 L3 L2 VMkernel VLANs VLANs for Management VMs L2 L2 VMkernel VLANs Routed DC Fabric 802.1Q Trunk VMkernel VLANs VLANs for Management VMs Single Rack Connectivity Deployment Considerations  The Mgmt Cluster is typically provisioned on a single rack  The single rack design still requires redundant uplinks from host to ToR carrying the management VLANs  Dual rack design for increased resiliency (handling single rack failure scenarios), which may be a requirement for a highly available design • Each ToR can be deployed in a separate rack • Host uplinks extend across the racks  Typically in a small design the management and Edge clusters are collapsed • Exclude the management cluster from VXLAN preparation • NSX Manager and NSX Controllers are automatically excluded from DFW functions • Put the vCenter Server in the DFW exclusion list! Leaf L3 L2 VMkernel VLANs Routed DC Fabric 802.1Q Trunk Dual Rack Connectivity L2 23 Management Cluster
  • 24.  Edge cluster availability and capacity planning requires • A minimum of three hosts per cluster • More if ECMP-based North-South traffic BW demands it  The Edge cluster can also contain the NSX Controllers and the DLR Control VMs for Distributed Logical Routing (DLR) L3 L2 VMkernel VLANs VLANs for L2 and L3 NSX Services Routed DC Fabric L2 L3 WAN Internet L2 L3 L2 L3 VMkernel VLANs VLANs for L2 and L3 NSX Services Routed DC Fabric WAN Internet L2 L3 Single Rack Connectivity Deployment Considerations  Benefits of a Dedicated Edge Rack  Reduced need for stretching VLANs  L2 required for external 802.1Q VLANs & the Edge default GW  L2 connectivity between active and standby in a stateful Edge design  Uses GARP to announce the new MAC in the event of a failover  Localized routing configuration for N-S traffic, reducing the need to configure and manage the rest of the spine  Span of control for network-centric operational management, BW monitoring & features Dual Rack Connectivity 24 Edge Cluster
  • 26.  NSX Manager is deployed as a virtual appliance • 4 vCPU, 12 GB of RAM • Consider reserving memory for VC to ensure good Web Client performance • Modifying the appliance configuration is not supported  Resiliency of NSX Manager is provided by vSphere HA  Catastrophic failure of NSX Manager is rare; however, periodic backup is recommended to restore to the last known configuration • During the failure, all existing data plane connectivity continues to work since the data and management planes are separated ToR # 1 ToR #2 Controller 2 Controller 3 NSX Mgr Controller 1 vCenter Server NSX Manager
  • 28.  Provides the control plane to distribute network information to ESXi hosts  NSX Controllers are clustered for scale-out and high availability  Network information is distributed across nodes in a Controller Cluster (slicing)  Removes the VXLAN dependency on multicast routing/PIM in the physical network  Provides suppression of ARP broadcast traffic in VXLAN networks Logical Router 1 VXLAN 5000 Logical Router 2 VXLAN 5001 Logical Router 3 VXLAN 5002 Controller VXLAN Directory Service MAC table ARP table VTEP table NSX Controllers Functions
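The per-VNI directory tables listed above can be inspected from the controller CLI. The commands below are recalled from VMware's NSX-v troubleshooting material; treat the exact syntax as an assumption and verify it against your NSX version (VNI 5000 is taken from the slide's example).

   # On an NSX Controller node, for the logical switch with VNI 5000:
   show control-cluster logical-switches vtep-table 5000   # VTEPs that joined the VNI
   show control-cluster logical-switches mac-table 5000    # VM MAC -> VTEP mappings
   show control-cluster logical-switches arp-table 5000    # IP -> MAC (ARP suppression)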
  • 29.  Controller nodes are deployed as virtual appliances • 4 vCPU, 4 GB of RAM per node • CPU reservation of 2048 MHz • No memory reservation required • Modifying settings is not supported  Can be deployed in the Mgmt or Edge clusters  A cluster size of 3 Controller nodes is the only supported configuration  Controller majority is required for a functional controller cluster • Data plane activity is maintained even under complete controller cluster failure  By default, DRS and anti-affinity rules are not enforced for controller deployment • The recommendation is to manually enable DRS and anti-affinity rules • A minimum of 3 hosts is required to enforce the anti-affinity rule ToR # 1 ToR #2 Controller 2 Controller 3 NSX Mgr Controller 1 vCenter Server NSX Controllers
  • 30. VDS, Transport Zone, VTEPs, VXLAN Switching
  • 31. Transport Zone, VTEP, Logical Networks and VDS  Transport Zone: a collection of VXLAN-prepared ESXi clusters  Normally a TZ defines the span of Logical Switches (Layer 2 communication domains)  A VTEP (VXLAN Tunnel EndPoint) is a logical interface (VMkernel) that connects to the TZ to encapsulate/decapsulate VXLAN traffic  The VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during the cluster VXLAN preparation  One or more VDS can be part of the same TZ  A given Logical Switch can span multiple VDS 33 vSphere Host VXLAN Transport Network VTEP1 10.20.10.10 Host 1 VTEP2 10.20.10.11 VM VXLAN 5002 MAC2 vSphere Host VTEP3 10.20.10.12 Host 2 10.20.10.13 VM MAC4 VM MAC1 VM MAC3 VTEP4 vSphere Distributed Switch vSphere Distributed Switch
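On a prepared host, the VTEP vmknics and the dynamically created VXLAN networks can be checked with esxcli. The "esxcli network vswitch dvs vmware vxlan" namespace appears only on VXLAN-prepared hosts; the VDS name below is a placeholder, not a value from this deck.

   # List VTEP vmknics and VXLAN networks on a prepared host (VDS name assumed)
   esxcli network vswitch dvs vmware vxlan vmknic list --vds-name Compute-VDS
   esxcli network vswitch dvs vmware vxlan network list --vds-name Compute-VDS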
  • 32. vSphere Host (ESXi) VMkernel Networking L3 ToR Switch Routed uplinks (ECMP) VLAN Trunk (802.1Q) VLAN 66 Mgmt 10.66.1.25/26 DGW: 10.66.1.1 VLAN 77 vMotion 10.77.1.25/26 GW: 10.77.1.1 VLAN 88 VXLAN 10.88.1.25/26 DGW: 10.88.1.1 VLAN 99 Storage 10.99.1.25/26 GW: 10.99.1.1 SVI 66: 10.66.1.1/26 SVI 77: 10.77.1.1/26 SVI 88: 10.88.1.1/26 SVI 99: 10.99.1.1/26 SpanofVLANs SpanofVLANs 34
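A minimal NX-OS-style sketch of the ToR side of this layout is shown below, reusing the VLAN IDs and SVI addresses from the slide; the trunk port number is an assumption, and "feature interface-vlan" is required before SVIs can be created.

   feature interface-vlan
   vlan 66,77,88,99

   interface Vlan66
     ip address 10.66.1.1/26        ! management SVI / default gateway
   interface Vlan88
     ip address 10.88.1.1/26        ! VXLAN (VTEP) SVI
     mtu 9214                       ! must allow at least 1600 bytes for VXLAN

   interface Ethernet1/1            ! 802.1Q trunk to the host (assumed port)
     switchport mode trunk
     switchport trunk allowed vlan 66,77,88,99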
  • 33. VMkernel Networking  Multi-instance TCP/IP stack • Introduced with vSphere 5.5 and leveraged by the VXLAN transport network (NSX vSwitch)  Separate routing table, ARP table and default gateway per stack instance  Provides increased isolation and reservation of networking resources  Enables VXLAN VTEPs to use a gateway independent from the default TCP/IP stack  Management, vMotion, FT, NFS and iSCSI leverage the default TCP/IP stack in 5.5  VMkernel VLANs do not extend beyond the rack in an L3 fabric design, or beyond the cluster with an L2 fabric; therefore static routes are required for Management, Storage and vMotion traffic  Host Profiles reduce the overhead of managing static routes and ensure persistence 35
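For example, a static route on the default TCP/IP stack lets storage traffic sourced from the local 10.99.1.x VMkernel port reach the storage subnets of other racks. The summary prefix and gateway below are assumptions based on the addressing used earlier in this deck; capture the result in a Host Profile as recommended above.

   # Add a static route on the default TCP/IP stack (illustrative values)
   esxcli network ip route ipv4 add --network 10.99.0.0/16 --gateway 10.99.1.1
   esxcli network ip route ipv4 list    # verify the host routing table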
  • 34. L2 Fabric – Network Addressing and VLANs Definition Considerations L2 Fabric For an L2 fabric, Y denotes the same subnet used across the entire cluster • VXLAN, when deployed, creates an automatic port-group whose VLAN ID must be the same per VDS • Because the fabric is L2, this usually means that the same IP subnets are also used across racks for a given type of traffic • For a given host, only one VDS is responsible for VXLAN traffic; a single VDS can span multiple clusters VXLAN Transport Zone Scope (extends across ALL PODs/clusters) Compute Cluster A 32 Hosts Compute Cluster B 32 Hosts VMkernel VLAN/Subnet Scope VMkernel VLAN/Subnet Scope POD A POD B L3 L2 37 Compute Rack - IP Address Allocations and VLANs Function VLAN ID IP Subnet Management 66 10.66.Y.x/24 vMotion 77 10.77.Y.x/24 VXLAN 88 10.88.Y.x/24 Storage 99 10.99.Y.x/24
  • 35. L3 Fabric – Network Addressing and VLANs Definition Considerations VXLAN Transport Zone Scope (extends across ALL racks/clusters) For an L3 fabric, the values for VLANs, IP addresses and masks are provided as an example; R_id is the rack number 38 • VXLAN, when deployed, creates an automatic port-group whose VLAN ID must be the same per VDS • Because the fabric is L3, this implies that separate IP subnets are associated with the same VLAN IDs defined across racks • In an L3 fabric the IP addressing for the VTEPs requires consideration: traditional “IP Pools” may not work well, so the recommendation is to use DHCP L2 Compute Cluster A 32 Hosts Compute Cluster B 32 Hosts VMkernel same VLAN unique Subnet Scope VMkernel same VLAN unique Subnet Scope L3 Compute Rack - IP Address Allocations and VLANs Function VLAN ID IP Subnet Management 66 10.66.R_id.x/26 vMotion 77 10.77.R_id.x/26 VXLAN 88 10.88.R_id.x/26 Storage 99 10.99.R_id.x/26
  • 36. VDS Uplink Design • VDS uses special port-groups (called dvUplinks) for uplink connectivity • The choice of configuration may be simplified based on the following requirements – Simplicity of teaming configuration – BW required for each type of traffic – Convergence requirements – Cluster usage – compute, Edge and management – The uplink utilization factors – flow-based vs. per-VM • LACP teaming forces all traffic types to use the same teaming mode • For VXLAN traffic the choice of teaming mode depends on • Simplicity • Bandwidth requirements • LBT mode is not supported • Having separate VDS for compute and Edge allows flexibility of teaming mode per uplink configuration 39
   Teaming and Failover Mode                       NSX Support   Multi-VTEP Support   Uplink Behavior (2 x 10G)
   Route based on Originating Port                 ✓             ✓                    Both active
   Route based on Source MAC hash                  ✓             ✓                    Both active
   LACP                                            ✓             ×                    Flow-based, both active
   Route based on IP Hash (Static EtherChannel)    ✓             ×                    Flow-based, both active
   Explicit Failover Order                         ✓             ×                    Only one link active
   Route based on Physical NIC Load (LBT)          ×             ×                    Not supported
  • 37. VTEP Design  The number of VTEPs deployed depends on the teaming mode • Single VTEP for LACP and Explicit Failover • Multiple VTEPs (based on the number of host uplinks) for the Src-ID teaming options  A single VTEP is sufficient when • Workloads do not drive more than 10G of throughput • A simple operational model is desired – all VXLAN traffic is associated with the same VTEP address • Deterministic traffic mapping to an uplink is desired (Explicit Failover only)  Multiple VTEPs (typically two) are required when workloads require > 10G of throughput • Allows flexibility of choosing the teaming mode for other traffic types • IP addressing for VTEPs • Common VTEP subnet for an L2 fabric • Multiple VTEP subnets (one per rack) for L3 fabrics  IP Pools or DHCP can be used for IP address assignment 40 (Diagram: Host 1 with VTEP1/VTEP2 at 10.20.10.10/.11, Host 2 with VTEP3/VTEP4 at 10.20.10.12/.13 on the VXLAN transport network; VMs MAC1–MAC4 on VXLAN 5002)
  • 38. Design Considerations – VDS and Transport Zone Management Cluster Edge Cluster WebVM WebVM VM VM WebVM WebVM VM VM Compute A Compute N vCenter Server NSX Manager Controller Cluster NSX Edges VXLAN Transport Zone Spanning Three Clusters Compute VDS Edge VDS VTEP vSphere Host vSphere Host 192.168.230.100 192.168.240.100 192.168.230.101 Compute Cluster 1 vSphere Host Compute Cluster N vSphere Host 192.168.240.101 vSphere Host vSphere Host 192.168.220.100 192.168.220.101 VTEP VTEP
  • 39. Recap: vCenter – Scale Boundaries vCenter Server ESXi ESXi ESXi ESXi ESXi ESXi VDS 1 Cluster DC Object Max. 32 hosts Max. 500 hosts 10,000 powered on VMs 1,000 ESXi hosts 128 VDS Manual vMotion DRS-based vMotion 42 ESXi ESXi VDS 2
  • 40. NSX for vSphere – Scale & Mobility Boundaries Cloud Management System DRS-based vMotion Manual vMotion Logical Network Span Transport Zone 43 vCenter Server NSX API (Manager) vCenter Server NSX API (Manager) Controller Cluster Controller Cluster 1:1 mapping of vCenter to NSX Cluster Cluster DC Object Max. 32 hosts Max. 500 hosts ESXi ESXi ESXi ESXi VDS ESXi ESXi VDS ESXi ESXi VDS
  • 41. NSX for vSphere VXLAN Replication Modes NSX for vSphere provides flexibility for VXLAN transport – it does not require complex multicast configurations on the physical network • Unicast Mode – All replication occurs using unicast; applicable to small deployments • Multicast Mode – The entire replication is off-loaded to the physical network – Requires IGMP snooping/querier and multicast routing (PIM) for L3 * • Hybrid Mode – Local replication is offloaded to the physical network, while remote replication occurs via unicast – Most practical without the complexity of multicast mode – Only requires IGMP snooping/querier; does not require L3 PIM * • All modes require an MTU of 1600 bytes. * The host provides the necessary querier function; however, an external querier is recommended for manageability/admin scope CONFIDENTIAL 44
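For hybrid mode, the physical network therefore only needs IGMP snooping plus a querier on the VTEP VLAN. A hedged NX-OS-style sketch is shown below; the VLAN ID follows the earlier addressing examples and the querier address is an assumption.

   ! IGMP snooping is on by default on NX-OS; configure a snooping querier
   ! on the VTEP VLAN so membership reports keep flowing without PIM.
   vlan configuration 88
     ip igmp snooping querier 10.88.1.2    ! assumed querier address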
  • 42. Agenda • NSX for vSphere Design and Deployment Considerations – Physical & Logical Infrastructure Requirements – NSX Edge Design – Logical Routing Topologies – NSX Topologies for Enterprise and Multi-tenant Networks – Micro-segmentation with Distributed FW Design CONFIDENTIAL 47
  • 43. NSX Edge Gateway: Integrated network services Routing/NAT Firewall Load Balancing L2/L3 VPN DHCP/DNS relay (DDI) VM VM VM VM VM • Multi-functional & multi-use VM model; deployment varies based on its use, place in the topology, performance, etc. • Functional use – P/V routing only, LB only, perimeter FW, etc. • Form factor – X-Large to Compact (one license) • Stateful switchover of services (FW/NAT, LB, DHCP & IPSEC/SSL) • Multi-interface routing support – OSPF & BGP • Can be deployed in high availability or standalone mode • Per-tenant Edge services – scaling by interface and instance • Scaling of north-south bandwidth with ECMP support in 6.1 • Requires design consideration for the following • Edge placement for north-south traffic • Edge cluster design considerations • Bandwidth scaling – 10G to 80G • Edge services with multi-tenancy
  • 44. NSX Edge Services Gateway Sizing 49 • The Edge services gateway can be deployed in several sizes depending on the services used • Multiple Edge nodes can be deployed at once, e.g. ECMP, LB and Active-Standby for NAT • When needed, the Edge size can be increased or decreased • In most deployments the Quad-Large is sufficient for services such as ECMP & LB • X-Large is required for high-performance L7 load balancer configurations
   Edge Services Gateway Form   vCPU   Memory (MB)   Specific Usage
   X-Large                      6      8192          Suitable for high-performance L7 LB
   Quad-Large                   4      2048          Suitable for most deployments
   Large                        2      1024          Small DC
   Compact                      1      512           PoC
  • 45. 50 Active-Standby Edge Design L3 - ToR Routing Adjacency vSphere Host vSphere Host VXLAN 5020 Transit Link Active-Standby Stateful FW/NAT/LB • An Active-Standby Edge Services Gateway enables stateful services • Perimeter FW, NAT, LB, SSL-VPN, North-South routing • Deployed as a pair with heartbeat and synchronization of services state • Heartbeat and sync both use the same internal vNic • L2 connectivity is required between active and standby • Form factor – X-Large to Compact (one license) • Multi-interface routing support – OSPF & BGP • Must tune protocol timers to 40/120 (hello/hold timer) • Anti-affinity rules are automatically created • Active and Standby Edges are placed on different hosts • A minimum of three hosts is recommended • Multiple Edge instances can be deployed • An LB Edge can be deployed near the application tier • Multiple tenants can have separate Edge services
  • 46. 51 ECMP-Based Edge Design ECMP Edges Non-Stateful VXLAN VLAN Transit VXLAN E1 E2 … E7 E8 R1 R2 External Network VLAN 10 VLAN 20 ECMP Active NSX Edges Customer Routers • ECMP Edge enables scalable north-south traffic forwarding services • 8 instances of Edge – up to 80G of BW • Stateful services are not supported due to asymmetric traffic behavior • No heartbeat or sync between Edge nodes • L2 connectivity is required for peering • Form factor – X-Large to Compact (one license) • Multi-interface routing support – OSPF & BGP • Aggressive timer tuning supported – 3/4 (hello/hold timer) • Anti-affinity configuration is required • A minimum of three hosts is recommended • Multiple tenants can have separate Edge services
  • 47. Edge Interaction with the Physical Topology • The Edge forms a peering adjacency with physical devices • Impact of the uplink teaming configuration on routing peering – Failover or Src-ID – a single uplink is used to establish routing adjacencies – LACP – both uplinks can be used, with dependencies on the physical switch vendor • In addition, the design choices differ depending on whether the Edge peers with a ToR configured as L3 or L2 • The uplink configuration on the VDS, along with the ToR connectivity, creates design choices that have vendor-specific technology dependencies (vPC or MLAG) • The recommendation for a typical design is to use explicit failover mode for the teaming – Explicit failover does not depend on vendor-specific configuration and provides simple route peering. L3 - ToR Routing Adjacency vSphere Host vSphere Host Uplink Teaming Mode – Non-LACP L3 - ToR Routing Adjacency vSphere Host vSphere Host Uplink Teaming Mode – LACP VXLAN 5020 Transit Link VXLAN 5020 Transit Link CONFIDENTIAL 52
  • 48. Agenda • NSX for vSphere Design and Deployment Considerations – Physical & Logical Infrastructure Requirements – NSX Edge Design – Logical Routing Topologies – NSX Topologies for Enterprise and Multi-tenant Networks – Micro-segmentation with Distributed FW Design CONFIDENTIAL 53
  • 49. Distributed Logical Routing Components – Control Plane  The Distributed Logical Router control plane is provided by a per-instance DLR Control VM and the NSX Controller  Dynamic routing protocols supported with the DLR • OSPF • BGP • The Control VM forms the routing adjacencies with the Edge node  Communicates with the NSX Manager and the Controller Cluster • NSX Manager sends LIF information to the Control VM and the Controller Cluster • The Control VM sends routing updates to the Controller Cluster  The DLR Control VM and NSX Controller are not in the data path  High availability is supported through an Active-Standby configuration  Can exist in the Edge cluster or in the compute cluster Logical Router Control VM
  • 50. Distributed Logical Routing Components – Data Plane  Logical Interfaces (LIFs) on a Distributed Logical Router instance • There are internal LIFs and uplink LIFs • VM default gateway traffic is handled by the LIF on the appropriate network • LIFs are distributed across every hypervisor prepared for NSX • Up to 1000 LIFs can be configured per Distributed Logical Router instance (8 uplink, 992 internal) • An ARP table is maintained per LIF  vMAC is the MAC address of an internal LIF • The vMAC is the same across all hypervisors and is never seen by the physical network (only by VMs) • The routing table on each ESXi host is programmed via the controller DLR Kernel Module vSphere Host LIF1 LIF2 Transit VXLAN Uplink
  • 51. ECMP with DLR and Edge 56 DLR E1 E2 E3 … E8 Physical Routers Core VXLAN VLAN Web DB App  ECMP is supported on the DLR and on the NSX Edge Both can install up to 8 equal-cost routes toward a given destination in their forwarding tables  8 NSX Edges can be simultaneously deployed for a given tenant Increases the available bandwidth for North-South communication (up to 80 Gbps*) Reduces the traffic outage in an ESG failure scenario (only 1/Xth of the flows are affected)  Load-balancing algorithm on the NSX Edge: based on the Linux kernel flow-based random round-robin algorithm for next-hop selection  a flow is a pair of source IP and destination IP  Load-balancing algorithm on the DLR: hashing of source IP and destination IP selects the next hop Active Standby
  • 52. Distributed Router & ECMP Edge Routing  2 VLANs used for peering with the customer routers  Map each of these VLANs (port-groups) to a different dvUplink on the Edge VDS to ensure distribution of N/S traffic across dvUplinks  Uplink = VLAN = Adjacency  Avoid using LACP to the ToR for route peering due to vendor dependencies  Minimum 3 hosts per rack  With two hosts, run the two active Edges with anti-affinity so they do not share a host, avoiding a dual failure  Use the third host for the active Control VM, with the standby on any remaining host under an anti-affinity rule VXLAN VLAN Web DB App Transit VXLAN E1 E2 E3 E4 R1 R2 External Network VLAN 10 VLAN 20 ECMP Active NSX Edges Customer Routers Distributed Router Active Standby DLR Control VM
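To make the peering side concrete, here is a hedged NX-OS-style sketch of one customer router (R1) peering on VLAN 10. The VLAN follows the slide; the OSPF process, area, addressing and the aggressive 1 s hello / 3 s dead timers (consistent with the (1,3 sec) tuning cited on the next slide) are assumptions.

   feature ospf
   feature interface-vlan
   router ospf 1
   interface Vlan10
     ip address 10.10.10.1/24            ! assumed peering subnet toward E1-E8
     ip router ospf 1 area 0.0.0.0
     ip ospf hello-interval 1            ! aggressive timers for fast ECMP failover
     ip ospf dead-interval 3

R2 would mirror this on VLAN 20, so each ECMP Edge holds one adjacency per peering VLAN, spread across the two dvUplinks.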
  • 53. Edge HA Models Comparison – BW, Services & Convergence E1 Active Physical Router E2 Standby Routing Adjacency Web DB DLR Control VM DLR App Active Standby … E8 E3 E1 Physical Router Routing E2 Adjacencies Web DB DLR App Active Standby DLR Control VM Active/Standby HA Model: Bandwidth – single path (~10 Gbps/tenant); Stateful services supported – NAT, SLB, FW; Availability – slower convergence with stateful services enabled. ECMP Model: Bandwidth – up to 8 paths (~80 Gbps/tenant); Stateful services not supported; Availability – high, ~3-4 sec with (1,3 sec) timer tuning
  • 54. 3-Tier App Logical to Physical Mapping vSphere Host Host 3 vSphere Host Host 4 vSphere Host Host 5 WebWeb WebAppApp DB Edge VMs Logical Router Control VMs WebWeb WebAppApp DB WebWeb WebAppApp DB vSphere Host Host 1 vSphere Host Host 2 vSphere Host Host 6 vSphere Host Host 7 Compute Cluster NSX Manager NSX Controller Cluster vCAC vCenter Edge Cluster Management Cluster CONFIDENTIAL 59
  • 55.  Edge cluster availability and capacity planning requires • A minimum of three hosts per cluster • More if ECMP-based North-South traffic BW demands it  The Edge cluster can also contain the NSX Controllers and the DLR Control VMs for Distributed Logical Routing (DLR) L3 L2 VMkernel VLANs VLANs for L2 and L3 NSX Services Routed DC Fabric L2 L3 WAN Internet L2 L3 L2 L3 VMkernel VLANs VLANs for L2 and L3 NSX Services Routed DC Fabric WAN Internet L2 L3 Single Rack Connectivity Deployment Considerations  Benefits of a Dedicated Edge Rack  Reduced need for stretching VLANs  L2 required for external 802.1Q VLANs & the Edge default GW  L2 connectivity between active and standby in a stateful Edge design  Uses GARP to announce the new MAC in the event of a failover  Localized routing configuration for N-S traffic, reducing the need to configure and manage the rest of the ToRs in the spine  Span of control for network-centric operational management, BW monitoring & features Dual Rack Connectivity 60 Edge Cluster
  • 56. Agenda • NSX for vSphere Design and Deployment Considerations – Physical & Logical Infrastructure Requirements – NSX Edge Design – Logical Routing Topologies – NSX Topologies for Enterprise and Multi-tenant Networks – Micro-segmentation with Distributed FW Design CONFIDENTIAL 61
  • 57. Enterprise Topology – Two-Tier Design – with/without 6.1 Onward  A typical enterprise topology consists of app-tier logical segments  Routing and distributed forwarding is enabled for each logical segment, available on all hosts via the distributed logical router (DLR) • Allowing workloads to move without VLAN dependencies, since local forwarding exists on each host via the DLR LIF • North-south traffic is handled via the next-hop Edge, which provides virtual-to-physical (VXLAN to VLAN) forwarding  The DLR-to-Edge routing is provisioned once; the topology can then be used for additional logical segments (additional LIFs) for multiple app-tier deployments  Scaling • Edge scaling – two ways • Per-tenant scaling – i.e., each workload/tenant gets its own Edge and DLR • ECMP-based scaling for incremental BW gain – 10G of BW per additional Edge, up to a maximum of 80G (8 Edges); available from the NSX 6.1 release onward • DLR scaling • Up to 1000 LIFs – i.e., 992 internal logical networks per DLR instance External Network Physical Router VLAN 20 Routing Edge Uplink Peering NSX Edge Routing Peering VXLAN 5020 Transit Link Distributed Routing Web1 App1 DB1 Webn Appn DBn Web DB DLR E8 E1 Physical Router E2 … App Core Routing Peering Route Update ECMP Non-Stateful E3
  • 58. Multi-Tenant (DLRs) Routing Topology External Network Tenant 9 DLR Instance 9 DLR Instance 1 Web Logical Switch App Logical Switch DB Logical Switch Web Logical Switch App Logical Switch DB Logical Switch Tenant 1 NSX Edge VXLAN 5020 Transit Link VXLAN 5029 Transit Link … 63  Can be deployed by enterprises, SPs and hosting companies  No support for overlapping IP addresses between tenants connected to the same NSX Edge  If true isolation of tenant routing and overlapping IP addressing is required, a dedicated per-tenant Edge in HA mode is the right approach VLAN VXLAN
  • 59. Multi-Tenant Routing Topology (Post-6.1 NSX Release) External Network NSX Edge VXLAN Trunk Interface 64  From NSX SW release 6.1, a new type of interface is supported on the NSX Edge (in addition to Internal and Uplink): the “Trunk” interface  This makes it possible to create many sub-interfaces on a single NSX Edge vNic and establish peering with a separate DLR instance on each sub-interface  Scales up the number of tenants supported with a single ESG (assuming no overlapping IP addresses across tenants)  An aggregate of 200 sub-interfaces per NSX Edge is supported in 6.1  Only static routing & BGP are supported on sub-interfaces in 6.1  OSPF support will be introduced in the 6.1.3 maintenance release  Scale numbers for dynamic routing (max peers/adjacencies) are under review Routing Peering Tenant 1 Tenant 2 Tenant n Single vNIC Web Logical Switch App Logical Switch DB Logical Switch VLAN VXLAN
  • 60. High-Scale Multi-Tenant Topology 65 • High-scale multi-tenancy is enabled with multiple tiers of Edges interconnected via a VXLAN transit uplink • Two tiers of Edges allow scaling with administrative control – The top-tier Edge acts as a provider Edge managed by the cloud (central) admin – Second-tier Edges are provisioned and managed by the tenant • The provider Edge can scale up to 8 ECMP Edges for scalable routing • Based on tenant requirements, the tenant Edge can be ECMP or stateful • Used to scale up the number of tenants (the only option before the VXLAN trunk introduction) • Support for overlapping IP addresses between tenants connected to different first-tier NSX Edges External Network Tenant 1 Web Logical Switch App LS DB LS … Web Logical Switch Edge with HA NAT/LB features Single Adjacency to ECMP Edge ECMP-Based NSX Edge X-Large (Route Aggregation Layer) ECMP Tenant NSX Edge VXLAN Uplinks or VXLAN Trunk* VXLAN Uplinks or VXLAN Trunk* VXLAN 5100 Transit App LS DB LS *Supported from NSX Release 6.1 onward … E8 E1
  • 61. Multi-Tenant Topology - NSX (Today) MPLS Network Tenant 1 Web Logical Switch App Logical Switch DB Logical Switch … Web Logical Switch App Logical Switch DB Logical Switch Tenant NSX ESG Physical Router (PE or Multi-VRF CE) VXLAN Uplinks (or VXLAN Trunk*) VLAN 10 66 *Supported from NSX Release 6.1 onward Tenant 1 VRF Tenant 2 VRF T1 T2 Tenant NSX ESG T1 T2 VXLAN Uplinks (or VXLAN Trunk*) VLAN 20 VLAN VXLAN  The NSX Edge is currently not VRF-aware  A single routing table does not keep tenants logically isolated  Each dedicated tenant Edge can connect to a separate VRF in the upstream physical router  This is the current deployment option for integrating with an MPLS network
  • 62. Agenda • NSX for vSphere Design and Deployment Considerations – Physical & Logical Infrastructure Requirements – NSX Edge Design – Logical Routing Topologies – NSX Topologies for Enterprise and Multi-tenant Networks – Micro-segmentation with Distributed FW Design CONFIDENTIAL 67
  • 63. Internet Intranet/Extranet Perimeter Firewall (Physical) NSX Edge Service Gateway SDDC (Software Defined DC) DFW DFW DFW Distributed FW - DFW Virtual Compute Clusters Stateful Perimeter Protection Inter/Intra-VM Protection NSX Security Architecture Overview • Stateful Edge security • DFW per vNIC Characteristics – Distributed & fully programmable (REST API) – vMotion with rules and connection state intact – Flexible rules and topology independence – Third-party ecosystem integration – PAN – Foundation for the micro-segmentation design • Tools and methods to protect virtual resources – Traffic redirection rules with Service Composer or the partner security services UI – Filtering module within the security policy definition – Diverse policy objects & Policy Enforcement Points (PEP) • Identity – AD groups • VC container objects – DC, Cluster, Port-Groups, Logical SW • VM characteristics – VM names, security tags, attributes, OS names • Protocols, ports, services • Security Groups leverage objects and PEPs to achieve micro-segmentation CONFIDENTIAL 68
  • 64. Micro-segmentation Design • Collapsing application tiers to like services, with each app tier on its own logical switch – Better for managing domain-specific (Web, DB) security requirements – Easier to develop segmented isolation between app-tier domains – Web-to-DB Deny_All vs. Web-to-App granularity – May require complex security between app tiers, as specific web-to-app or app-to-db isolation is required within a logical switch as well as between segments • Collapsing the entire app tiers into a single logical switch – Better for managing group/application-owner specific expertise – Apps container model; may suit well for an app-as-tenant model – Simpler security group construct per app tier – Isolation between different app containers is required • DMZ Model – Zero-trust security – Multiple DMZ logical networks, default Deny_All within DMZ segments – External-to-internal protection by multiple groups (Diagrams: Web-Tier-01 1.1.1.0/24, App-Tier-01 2.2.2.0/24 and DB-Tier-01 3.3.3.0/24 behind a Logical Distributed Router, plus a collapsed All-Tier-01 1.1.1.0/24 variant; security groups SG-WEB, SG-APP, SG-DB; client-to-web HTTPS, web-to-app TCP/8443) CONFIDENTIAL 69
  • 65. Feature Overview - vCloud Automation Center & NSX • Connectivity – vCAC Network Profiles for on-demand network creation • Define routed, NAT, private and external profiles for a variety of app topologies • Option to connect the app to pre-created networks (logical or physical) – NSX Logical Distributed Router (DLR) • Optimize for east-west traffic & resources by connecting to a pre-created DLR • Security – On-demand micro-segmentation • Automatic creation of a security group per app with default-deny firewall rules – Apply firewall and advanced security policies with ease • Select pre-defined NSX security policies to apply to an app/tier • Antivirus, DLP, intrusion prevention, vulnerability mgmt… more to come – Connect business logic to security policy with ease • Select a pre-defined NSX security tag (e.g. ‘Finance’) which is applied to the workload and interpreted by NSX to place it in a pre-defined security group • Availability – On-demand load balancer in ‘one-armed’ mode • Plus the option of using a pre-created, in-line load balancer (logical or physical) CONFIDENTIAL Range of features from pre-created to on-demand network and security services. Web App Database VM 70
  • 67. 73 NSX Reference Designs NSX Platform Hardening NSX Getting Started Guides SDDC Validated Solutions NSX Partner White papers Reference Designs & Technical Papers on VMware Communities: https://communities.vmware.com/docs Reference Designs and Technical Papers on the NSX Portal: http://www.vmware.com/products/nsx/resources.html NSX and Fabric Vendors VMware NSX Collateral Landscape
  • 68. VMware NSX Network Virtualization Design Guides: https://communities.vmware.com/docs/DOC-27683 NSX Reference Design Guides – The Architecture ESXi Compute Clusters Compute Clusters Infrastructure/Edge Clusters (Edge, Storage, vCenter and Cloud Management System) Edge Clusters WAN Internet Storage Cluster Mgmt and Cloud Mgmt Cluster CONFIDENTIAL 74
  • 69. What’s Next… VMware NSX Hands-on Labs labs.hol.vmware.com VMware Booth #1229 3 NSX Demo Stations Explore, Engage, Evolve virtualizeyournetwork.com Network Virtualization Blog blogs.vmware.com/networkvirtualization NSX Product Page vmware.com/go/nsx NSX Training & Certification www.vmware.com/go/NVtraining NSX Technical Resources Reference Designs vmware.com/products/nsx/resources VMware NSX YouTube Channel youtube.com/user/vmwarensx Play Learn Deploy CONFIDENTIAL 75
  • 70. 76 Please submit your feedback via our mobile app.