The document discusses vCloud Networking concepts including external networks, organization networks, vApp networks, and network pools. External networks connect the organization to the physical network, organization networks belong to a tenant organization, and vApp networks are available to a single application. Network pools give users control over layer 2 networks and include port-group, VLAN, and vCD-NI (VMware's proprietary encapsulation protocol) types. The document also covers considerations for the physical network design and configuration of external and organization networks.
4. Organization Networks
• Organizations describe a tenant
• Networks that belong to an Organization
• 3 Types of Org Network Connections:
• Direct External, NAT-Routed External, and Internal
• Created by the cloud admin
5. vApp Networks
• Networks available to a single vApp
• Ability to Fence
• vApp Networks connect to an Organization’s internal or external networks
6. Network Pools
• Giving the users control of L2
• Creating the multi-tenant infrastructure
• Declaring what L2 networks are available for consumption
• 3 Different kinds of Network pools
7. Port-Group Backed
• Pre-provisioned port-groups
• Non-automated
• vSphere Standard Switches
• Currently Nexus 1000v, until 1.5 GA release
17. External Network Creation
• Create portgroup before vCD external creation
• Use Ephemeral binding
• Assign a VLAN
• Layer 2 or Layer 3
• IP Address Range with Gateway
• DNS
• Load Balancing & Failover
This is what most vSphere admins can relate to. An external network is anything that an organization or vApp network needs to connect to. Think of it like a VLAN in your environment and how that relates to a portgroup connection for a VM. It could be something as simple as a VLAN with access to the internet, or networks that communicate with a dev/test environment or a certain department. Consider it external connectivity to your vCloud. Every external network must be created in vSphere and presented to vCD.
Organization networks start peeling off another layer of the onion, so to speak. An organization is an object within vCD used to define a tenant. If you’re an SP, that could be a business name such as Pepsi or Coke; in the enterprise world, it could be HR, Finance, IT, and so on. Organization networks are L2 segments that belong to a particular organization. There are three types of organization network connections, which we will discuss later in the design section: direct-connect external, NAT-routed external, and internal.
We’ve looked at external networks, which give you external connectivity to your cloud, and organization networks, which create networks available to a particular organization; now we are going to examine networks that are only available to vApps. Fencing a vApp allows a dev/test environment to deploy a vApp with the exact same MAC and IP addresses multiple times without having to worry about conflicts, because a vShield Edge device takes care of NAT and firewalling onto an organization network.
A network pool is basically a small database of layer 2 network segments available to vCloud administrators and end users. A network pool can be consumed by organization networks and vApp networks. We are going to quickly explore the three different kinds of network pools.
This is about as easy as it comes, and it is almost the same thing as an external network. A port-group-backed pool must be pre-provisioned in vSphere and then added to vCD for consumption.
This is probably the easiest to understand. Give vCloud Director a range of VLANs, say 400-500. Whenever a new layer 2 segment is needed, vCD creates a new portgroup on the fly and assigns it an available VLAN from the pool. If a layer 2 segment is ever destroyed, its VLAN is added back into the network pool for later consumption.
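The allocate-and-release behavior of a VLAN-backed pool can be sketched roughly like this. This is just an illustration of the bookkeeping vCD does internally; the class and method names are hypothetical, not a vCD API:

```python
class VlanBackedPool:
    """Minimal sketch of how a VLAN-backed network pool hands out L2 segments."""

    def __init__(self, first_vlan, last_vlan):
        # vCD is given a VLAN range, e.g. 400-500.
        self.available = set(range(first_vlan, last_vlan + 1))
        self.in_use = {}  # portgroup name -> VLAN ID

    def create_segment(self, portgroup_name):
        # A new L2 segment grabs any free VLAN; a portgroup is
        # created on the fly with that VLAN assigned.
        if not self.available:
            raise RuntimeError("network pool exhausted")
        vlan = self.available.pop()
        self.in_use[portgroup_name] = vlan
        return vlan

    def destroy_segment(self, portgroup_name):
        # Destroying the segment returns its VLAN to the pool.
        vlan = self.in_use.pop(portgroup_name)
        self.available.add(vlan)


pool = VlanBackedPool(400, 500)
vlan = pool.create_segment("org-net-01")  # some VLAN in 400..500
pool.destroy_segment("org-net-01")        # that VLAN is reusable again
```

The point is simply that the pool is a finite, recyclable resource: destroy a segment and its VLAN goes straight back into circulation.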
This is VMware’s proprietary protocol, which uses mac-in-mac encapsulation to create a layer 2 network without VLANs. It is extremely useful in environments where VLAN management is hectic because you are approaching the 4,000+ VLAN mark. When you create a vCD-NI-based network pool, you specify how many networks you want the pool to provide, and vCD will deploy portgroups backed not by VLANs but by distinct layer 2 segments.
The great thing about vCloud Director is that you can use physical properties to differentiate between service offerings. I would say 90% of the time we use storage as the differentiating factor: SSD = Gold, SAS/FC = Silver, SATA = Bronze. But have you ever thought about the correlation between networking and a service tier? There are a lot of different physical networking designs out there: 4 NICs, 6 NICs, 10 NICs, 12 NICs, Fibre Channel, NFS, iSCSI, 10Gb, 1Gb. It’s up to you as the cloud admin to think of your service tiering approach in a POD-like fashion. Perhaps you have an outdated infrastructure that utilizes 6 NICs, iSCSI/NFS, and SATA storage. You can offer this up as a Bronze offering with a minimal SLA and continue to use that infrastructure to service resident VMs. Then you can purchase something like a Vblock that utilizes 10Gb and Fibre Channel with SATA drives and offer that up as Silver, or perhaps Bronze+. It’s the same type of drive on the storage side, but the POD is more failure resistant and has far more available throughput. There are more ways to think about your service offering than storage alone: think about the resiliency of the POD and what type of SLA you can offer against it.
External networks are going to differ in every company; it all depends on what you are trying to accomplish. Remember that external networks simply define the outside connectivity your vCloud needs to reach.
Each external network must have an existing port group defined in vCenter. A best practice is to create these on distributed vSwitches, of course. This may have changed with 1.5, but in 1.0 it was a best practice to create these portgroups with ephemeral binding. Without ephemeral binding, the number of ports in the port group is fixed, which can cause problems because the number of connected virtual machines is not predictable, and every NAT-routed organization network needs an available port. Assign a VLAN, because it isolates this traffic and lets you create ACLs on network equipment based on that VLAN for security and provisioning purposes. These external networks can be layer 2 or layer 3 depending on where they need to travel; more often than not, they will be layer 3. External networks also need IP addresses associated with them: during creation, vCloud will prompt you for IP address and DNS information. These IP addresses are assigned to VMs on a direct-connect external connection, or to vShield Edge devices as NAT-routed IPs for externally NATed organization networks. In addition, I would set this portgroup to Active/Active, use load balancing based on physical NIC load, and set the failback option to No. Of course, the load balancing mechanism can be changed to fit your environment’s needs.
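Why ephemeral binding matters can be reduced to a simple capacity check. The numbers below are hypothetical and the function is purely illustrative, but it captures the problem with a static port count:

```python
def static_binding_ok(ports_in_portgroup, nat_routed_org_networks, direct_vms):
    """With static binding, every consumer needs a pre-allocated dvPort.

    Each NAT-routed organization network takes a port for its vShield Edge
    uplink, and each direct-connected VM takes a port of its own.
    """
    needed = nat_routed_org_networks + direct_vms
    return needed <= ports_in_portgroup


# A statically bound portgroup with 128 ports, but the number of
# consumers is unpredictable and grows over time:
print(static_binding_ok(128, nat_routed_org_networks=30, direct_vms=90))   # True
print(static_binding_ok(128, nat_routed_org_networks=30, direct_vms=110))  # False
# Ephemeral binding sidesteps the whole check: ports are created on demand.
```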
Another design criterion is the type of network pool. The only reasons to choose port-group-backed pools are using a vSphere Standard Switch (and I’m sure no one deploying vCloud Director wants to rely on the vSS) or immediate Nexus 1000v integration. Port-group-backed pools are the only type supported with the Nexus 1000v until version 1.5 is released, which should happen at some point next month. If anyone is planning a deployment with the Nexus 1000v, I would encourage you to read the networking section of the vCloud on Vblock paper written by myself, Chris Colotti, and Jeramiah Dooley. We explain how it’s possible to utilize 4 NICs for vCloud by using the 1000v for external portgroups and the vDS for vCloud networking; the vShield Edge appliance creates the bridge between the two, allowing full network capability. The traditional way most SPs separated tenants was through the use of VLANs. vCD is a good step forward for SPs wanting to take that traditional approach, because external networks can be assigned per tenant and segmented L2 networks can also be isolated through VLANs. Granted, this isn’t the best approach moving forward, because depending on how big you want to become, the 4,000+ VLAN limit may creep up very soon. That leads us to vCD-NI-based networking.
As we said before, this is VMware’s proprietary protocol. It uses mac-in-mac encapsulation to build an L2 network between hosts. The benefit of this method is that you can overcome the 4,000+ VLAN barrier, and it’s essentially a VMware best practice to use this type of network pool. You now have to think about how many vCD-NI networks are needed. With a port-group- or VLAN-based approach, you have to think about how many different organization networks will be created. The same goes for vCD-NI, but it’s a bit simplified: instead of giving each organization network its own VLAN, you can put many different organizations on a single vCD-NI network pool, and they are kept isolated by the mac-in-mac encapsulation. During the creation of a vCD-NI-backed network pool, you enter how many isolated networks you want the pool to provide. The maximum per pool is 1,000 logical networks, where a logical network is an organization or vApp network; the vCD-wide maximum is 7,000 logical networks. If you don’t plan on having that many tenants, you can keep a single 1,000-network pool. It might also be beneficial to create multiple pools and assign each vCD-NI network pool to a particular provider vDC: say HR, Marketing, and Finance are coupled together on vCD-NI network pool A, while IT and Engineering are assigned to vCD-NI network pool B. Every vCD-NI network pool must be given a unique VLAN; if my external network is on VLAN 6, I can’t use VLAN 6 for a vCD-NI network pool. Don’t worry about creating port-groups on this VLAN, because vCloud Director automates the provisioning of port-groups for a vCD-NI pool. Another thing to keep in mind is whether you want the pool to be L2 or L3. All vCD-NI traffic going host to host is L2; you only need L3 connectivity if vApps must route out to an external network connection.
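The pool-sizing arithmetic above is simple enough to write down. The 1,000-per-pool and 7,000-total figures come straight from the text; the function itself is just a back-of-the-envelope check, not a vCD API:

```python
MAX_NETWORKS_PER_VCDNI_POOL = 1000  # logical networks per vCD-NI pool
VCD_MAX_LOGICAL_NETWORKS = 7000     # vCD-wide maximum


def pools_needed(logical_networks):
    """vCD-NI pools (each consuming its own unique VLAN) to cover demand."""
    if logical_networks > VCD_MAX_LOGICAL_NETWORKS:
        raise ValueError("exceeds the vCD maximum of 7000 logical networks")
    # Ceiling division: 1,001 networks already require a second pool.
    return -(-logical_networks // MAX_NETWORKS_PER_VCDNI_POOL)


print(pools_needed(800))   # 1 pool, 1 VLAN consumed
print(pools_needed(2500))  # 3 pools, so 3 unique VLANs consumed
```

Notice the trade-off this makes visible: every additional pool costs one VLAN, but each VLAN you spend buys up to 1,000 isolated logical networks instead of one.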
In my own testing, I found that VMs can talk to each other on the vApp or organization network, but if they need to connect to the internet, the vCD-NI network pool has to be backed by a VLAN that can route.
Since mac-in-mac encapsulation adds an additional header to each packet, you are now forced to use jumbo frames across your network. As of vCloud 1.5, it’s suggested to use an MTU of 1600 or greater when configuring vCD-NI port groups: every device between the hosts of a cluster in a provider vDC must be set to 1600 MTU or greater. In my opinion, it’s much easier to set all networking equipment to MTU 9000 during initial configuration, because changing the MTU globally on a switch usually calls for a switch reboot; doing it up front lets you set it and forget it. We also have to go a bit deeper and set the vNetwork Distributed Switch to an MTU greater than 1600 so packets can traverse it. This is as simple as going to the dvSwitch, right-clicking Properties, and changing the maximum MTU to 9000.
After the creation of a vCD-NI-backed network pool, you still aren’t done. To complete the final configuration step, you must right-click your newly created vCD-NI pool and change the MTU setting from 1500 to 1600. In vCD 1.0.x, this was set to 1524 to account for the 24 bytes of extra space in the header; in 1.5, it’s supposed to be set to 1600. I’m not sure if that is for future improvements or VXLAN additions. The penalty associated with utilizing vCD-NI is that your hosts have to do some extra CPU processing, but it’s probably around 2-5%, not much when you think about it, and for most organizations CPU is hardly ever a taxed resource except in VDI deployments. If you don’t set the MTU to 1600 or greater, you won’t see a failure in communication or a failure of the network pool, but you will be subject to packet fragmentation, which leads to higher utilization of your switches and all the networking components in between because you are basically doubling the number of packets on the pipes.
Now we can create organization vDCs and assign the newly created network pool to them. This is where, as a cloud admin, you must make sure you ration out vCD-NI networks effectively. You have 1,000 networks available in a single vCD-NI-backed pool; you can give every Org vDC the ability to create up to 1,000 networks, but that limit may eventually be hit. Think of it like oversubscription of resources within vSphere: you know what the thresholds are and how much you can oversubscribe, just be conscious of your decisions.
One concept to keep in mind is that you have to create an overarching organization, something like Pepsi. Within a single organization like Pepsi, you can create multiple organization vDCs. These might be named something like Pepsi-Gold and Pepsi-Silver, because a single Org vDC can only be paired with a single provider vDC. You don’t have to create Org vDCs based on tier, but it’s probably the most common approach. Organization networks play a design role as well: for every external network an organization needs access to, an organization network must be created. When creating a vApp from an organization, you can select where the VMs will rest, and that can be any organization network.
This is the default behavior when creating an organization network. I can create an internal network with a routed external network. The routed external network will create a vShield Edge device that takes care of NATing to the outside world. This allows VMs in vApps to be placed either A) on the internal network, where they can talk only to other VMs on that same internal network, or B) on the external network, where they can talk to the outside world via an IP NATed by vShield Edge (the VMs are also assigned unique IPs via DHCP from vShield Edge). There are use cases for every piece, and we’re going to try to look at them all.
This would be a simple example of a web-based application where you need your DB and App servers completely isolated from the outside world. Your Web, DB, and App servers can all communicate locally on a secured internal network, but you can also use a secondary NIC on your Web server and assign it to the external network, which has the ability to talk to the outside world and take requests. In the image above, we are using the default behavior and placing it behind a vShield Edge device to take care of NATing.
I can create a similar type of network but remove the vShield Edge appliance by selecting a direct connection to the external network. This removes vShield Edge and its NATing, but also removes scale at the same time. You can still create an internal network with a unique IP scheme, but an IP address from the external network’s scope is used for outside access; in my case, an IP address from 192.168.60.50-200 will be used. For any VM placed on the external network, an IP will be consumed from that range as well.
Again, like our original example, this creates a secured network for the VMs to communicate, but it removes the vShield Edge device and gives the Web VM an IP from the pool on the external network. This is probably the most common use case for an actual public-facing web app, because mapping to an IP behind a NAT-routed device can be a bit tricky. I can also create internal-only and external-only org networks by unchecking the boxes during creation; I didn’t show those because they are too simple. So you might be wondering what the scalability concerns on external networks would be.
When you don’t use a vShield Edge device, you start burning up more and more IPs in the range given to the external network: every VM that gets connected to that external org network uses an IP.
When a vShield Edge device is used, we can NAT the entire external org network and burn only a single IP address from the external network, letting the vApps and the network scale much further.
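The IP-consumption difference between direct-connect and NAT-routed organization networks works out like this. It's a toy calculation using the 192.168.60.50-200 range from the earlier example, not vCD behavior verbatim:

```python
def external_ips_used(vm_count, nat_routed):
    """External-network IPs consumed by one organization network."""
    # Direct-connect: every VM on the external network takes an IP
    # straight from the external range.
    # NAT-routed: the vShield Edge device consumes a single external IP
    # and NATs the whole organization network behind it.
    return 1 if nat_routed else vm_count


RANGE_SIZE = 200 - 50 + 1  # 192.168.60.50-200 = 151 addresses

# Direct-connect: 40 VMs burn 40 of the 151 available addresses.
print(external_ips_used(40, nat_routed=False))  # 40
# NAT-routed: the same 40 VMs cost one address, so the 151-address
# range can back 151 whole organization networks instead of 151 VMs.
print(external_ips_used(40, nat_routed=True))   # 1
```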
We can take the scalability a step further and talk about vApp fencing. When you fence a vApp, you gain the ability to create VMs with the exact same IP and MAC addresses and have them talk on the network without issue. As you can see on the right-hand side, both of my vApps have identical IP addresses. A vShield Edge device sits in front of each of these vApps and receives an IP via DHCP from the vShield Edge device sitting between the external org network and the external network. This allows both vApps to communicate with the outside world without interference. It’s a handy utility for developers who constantly need to test applications with hardcoded IPs.
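The fencing trick can be pictured as a small mapping table. The addresses below are made up for illustration; the point is only that each fence's edge device presents a distinct outer IP, so identical inner addressing never collides:

```python
# Two fenced vApps with identical internal addressing. Each fence's
# vShield Edge gets its own DHCP-assigned outer IP on the org network.
fenced_vapps = {
    "vApp-A": {"edge_outer_ip": "10.1.1.50", "inner_vm_ip": "192.168.1.10"},
    "vApp-B": {"edge_outer_ip": "10.1.1.51", "inner_vm_ip": "192.168.1.10"},
}


def outbound_src(vapp_name):
    # Traffic leaving a fenced vApp is NATed to its edge's outer IP,
    # so the duplicated inner IPs are never seen on the shared network.
    return fenced_vapps[vapp_name]["edge_outer_ip"]


print(outbound_src("vApp-A"))  # 10.1.1.50
print(outbound_src("vApp-B"))  # 10.1.1.51
```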