Uncovering Virtual Networking Part-2: Virtual Switches

As discussed in the previous post, Part-1: Basics, vSphere offers two types of switches: the Standard switch and the Distributed switch. In this post we will explore both switch types, but before we do that, let's first cover the basics of switches in a virtual architecture.

What is a virtual switch?

A virtual switch is a software program that emulates a layer-2 network switch.

In physical networking, a switch is an intelligent layer-2 device that delivers frames point to point by maintaining a MAC table (also known as the CAM table among network engineers). The question is: does a virtual switch behave any differently?

Virtual switches behave just like physical switches as far as switching functionality goes. However, being virtualized, they have some differences as well, which we will discuss in this post.

Virtual switches do not provide all the advanced features that a physical switch may provide. For example, a virtual switch has no console port for advanced configuration.

A virtual switch detects which virtual machines are connected to its virtual ports and uses that information (MAC addresses) to forward traffic to the correct destination, point to point. Virtual switches are primarily used to establish a connection between the virtual and the physical network. A virtual switch is connected to a physical switch through the Ethernet adapters of the ESXi host, as shown in the image below.
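The MAC-learning behaviour described above can be sketched in a few lines of Python. This is purely a conceptual model of layer-2 learning and forwarding, not how the VMkernel actually implements it:

```python
# Conceptual sketch of layer-2 MAC learning and forwarding.
# A frame arriving on a port teaches the switch where its source MAC
# lives; known destinations are forwarded point to point, while
# unknown destinations are flooded to all other ports.

class VirtualSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}          # MAC address -> port (the CAM table)

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn the source MAC
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # point-to-point delivery
        return sorted(self.ports - {in_port})    # flood unknown destination

vswitch = VirtualSwitch(ports=[1, 2, 3])
print(vswitch.receive(1, "aa:aa", "bb:bb"))   # unknown dst, flood: [2, 3]
print(vswitch.receive(2, "bb:bb", "aa:aa"))   # learned dst, deliver: [1]
```

Once both endpoints have been learned, every subsequent frame between them is delivered to exactly one port, which is the point-to-point behaviour the post describes.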

As discussed in the previous post, after ESXi is installed, the default configuration gives us one standard switch with a port group named VM Network and a VMkernel port named Management Network (vmk0), which provides initial networking on ESXi. Do this port group and VMkernel port on the default standard switch of each ESXi host satisfy the networking needs of an organization, or do we need to create additional port groups and switches?

Let us understand a few basic limits of the switches first; we will talk more about this later.

Maximums in vSphere 7 for networking

I have listed only the maximums required for this discussion. For detailed maximums, visit the Config Max portal.

  • Standard Switch
    • Ports per standard switch: 4096
    • Port groups per standard switch: 512
    • VSS port groups per host: 1000
  • Distributed Switch
    • Ports per distributed switch: 60,000
    • Static/dynamic port groups per distributed switch: 10,000
    • Ephemeral port groups per distributed switch: 1016
  • ESXi host
    • Total virtual network switch ports per host (VDS and VSS): 4096
    • Maximum active ports per host (VDS and VSS): 1016
    • Ports reserved for internal operations: 8

As you can see in the list above, an ESXi host supports a maximum of 4096 ports, of which 8 are reserved for internal operations such as vMotion and replication, so the maximum usable ports are 4088. Similarly, an ESXi host supports a maximum of 1024 active ports, of which 8 are reserved, leaving 1016 usable active ports.
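The arithmetic above is simple but worth double-checking. A quick sketch using the vSphere 7 per-host maximums quoted earlier:

```python
# vSphere 7 per-host networking maximums (from the Config Max portal).
TOTAL_PORTS_PER_HOST = 4096       # VDS + VSS ports combined
MAX_ACTIVE_PORTS_PER_HOST = 1024  # active ports, VDS + VSS
RESERVED_PORTS = 8                # kept aside for internal operations

usable_ports = TOTAL_PORTS_PER_HOST - RESERVED_PORTS
usable_active_ports = MAX_ACTIVE_PORTS_PER_HOST - RESERVED_PORTS

print(usable_ports)         # 4088
print(usable_active_ports)  # 1016
```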

The takeaway is that, regardless of switch capacity, we always have to respect the ESXi host maximums. Whether you go with a single switch or multiple switches, the ESXi maximums remain unchanged. So a single switch seems enough, as long as we create the additional required port groups and VMkernel ports. What should we do, then? Let's work through it.

In the image above, what we achieve with the first design (with VLANs) is similar to what we achieve with the second design (without VLANs).

  • A virtual switch emulates the behaviour of a physical switch by means of a logical switching fabric (a software program).
  • The more switches you create on a host, the more ESXi host resources are needed to provide and run switch functions, which reduces the resources available for virtualization.
  • A single switch is usually more than enough to handle networking requirements, considering the maximum ports available per host. In the image above, the first half shows networking with a single switch.
  • If you want traffic segregation on a single switch, use VLANs to achieve it.
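Segregation with VLANs on a single switch can be modelled very simply: a port group tags traffic with a VLAN ID, and only ports carrying the same VLAN ID share a broadcast domain. A toy illustration (not actual 802.1Q frame processing; port names are hypothetical):

```python
# Toy model of VLAN-based segregation on a single virtual switch.
# Each port belongs to one VLAN; traffic from a port is visible only
# to the other ports that carry the same VLAN ID.

port_vlans = {                 # port -> VLAN ID (hypothetical layout)
    "vm-prod-1": 10, "vm-prod-2": 10,
    "vm-dev-1": 20, "vm-dev-2": 20,
}

def broadcast_domain(port):
    vlan = port_vlans[port]
    return sorted(p for p, v in port_vlans.items() if v == vlan and p != port)

print(broadcast_domain("vm-prod-1"))  # ['vm-prod-2'], PROD stays in VLAN 10
print(broadcast_domain("vm-dev-2"))   # ['vm-dev-1'], DEV stays in VLAN 20
```

Even though all four ports sit on the same switch, PROD and DEV traffic never mix, which is exactly what the first design in the image achieves.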

However, we may have to create multiple switches in certain cases, for example when you don't want to use VLANs in the virtual network and want physical segregation of traffic instead, as in the second half of the image above.

This might be the case in organizations where VLANs are not configured in the physical network. As we all know, VLANs require end-to-end configuration on networking devices, so we cannot start using VLANs in the virtual network if they are not already present in the physical network; that would mean reconfiguring the datacenter networking devices as well.

Another challenge with a multi-switch design: do you have enough physical Ethernet adapters on the ESXi host to support it? In my example above there are four switches, so I should have at least 8 Ethernet adapters on my ESXi host, considering redundancy. You also have to understand server form factors and the slots available for network adapters. For example, what is the maximum number of network cards you can fit in a blade chassis form factor? Use PCI slots sparingly, as we need them for HBAs as well.
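Under the common assumption of two uplinks per switch for redundancy, the adapter requirement grows linearly with the switch count. This quick check mirrors the four-switch example above:

```python
# Physical NICs needed for a multi-switch design, assuming
# 2 uplinks per switch for redundancy (one active, one standby).
UPLINKS_PER_SWITCH = 2   # assumption; designs may use more uplinks

def nics_required(num_switches, uplinks_per_switch=UPLINKS_PER_SWITCH):
    return num_switches * uplinks_per_switch

print(nics_required(1))   # 2  - single-switch design
print(nics_required(4))   # 8  - the four-switch example above
```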

So the recommendation is to keep the number of switches on the lower side, even when you do create multiple switches. Don't create multiple switches just because you can.

That being said, let's talk about which type of switch, standard or distributed, we should use. To decide, we need to understand the characteristics of each type.

Standard Switch aka vSwitch aka vSS

Standard switches are created and managed per ESXi host. What does that mean?

Take a scenario of 100 ESXi hosts. We need to create two port groups, PROD and DEV, and a vMotion VMkernel port on one additional switch. Let's keep it this simple, as we only need to understand the concept; in real-world scenarios you will certainly need more networks.

To implement this scenario, we would access each ESXi host in the vSphere Client, create the port groups and the vMotion network on the first server, and then repeat those steps 99 more times for the remaining hosts. Does that fascinate you? If not, imagine that after 6 months you need to rename or add a port group: again, 100 repetitions. Does that excite you, considering all the possible day-2 activities?

Standard switches do not offer centralized management. You can bring in some automation by means of host profiles, but a few manual tasks are still required. In short, standard switches demand too much administrative overhead.

Another limitation of standard switches is that they provide only basic layer-2 switch functionality. The basic features available on standard switches are as follows.

  • Layer 2 switch functionality
  • IPv6 Support
  • VLANs (802.1Q tagging)
  • NIC teaming
  • Egress (Outbound) Traffic shaping
  • Security Policy
    • Promiscuous mode
    • MAC address change
    • Forged Transmit
  • NIC Teaming and Failover policies
    • Route based on virtual port ID
    • Route based on source MAC Hash
    • Route Based on IP Hash
    • Specified failover order

To create a standard switch:

  • Navigate to an ESXi host, click the Configure tab, and click Virtual switches under Networking.
  • Click Add Networking.
  • Select the port group or VMkernel port creation option and click Next. I have used a port group for demo purposes.
  • Select New standard switch and click Next.
  • Click the + sign to add a network adapter for this switch, select the network card to be used, and click Next.
  • Provide a label for the port group and a VLAN ID if required, then click Next.
  • Review the settings and click Finish.
  • Verify your switch against the network diagram.

We will talk about policies like security, traffic shaping and others in separate posts.

Conclusion on vSS:

Standard switches provide initial networking and can be used in scenarios such as budget-constrained environments or small setups. We should certainly avoid relying only on standard switches in medium to large setups, though we can keep them around for limited functionality just in case.

Our discussion of the standard switch seems to point toward the other type of switch, the Distributed switch. So let's go ahead and discuss it now.

Distributed Virtual Switch aka DVS aka VDS

The DVS includes all standard vSwitch features plus its own advanced feature set, while offering a centralized administration interface. The DVS is available in the Enterprise Plus edition of vSphere.

Below are some of the features of the DVS. This is not an exhaustive list.

  • Layer 2 switch functionality
  • IPv6 Support
  • VLANs (802.1Q tagging)
  • NIC teaming
  • Egress Traffic shaping
  • Ingress traffic shaping
  • Security Policy
    • Promiscuous mode
    • MAC address change
    • Forged Transmit
  • NIC Teaming and Failover policies
    • Route based on virtual port ID
    • Route based on source MAC Hash
    • Route Based on IP Hash
    • Load based teaming
    • Specified failover order
  • Port blocking
  • Private VLANs
  • Centralized management
  • Per port policy settings
  • Port mirroring
  • NetFlow
  • Port-state monitoring

Thanks to its centralized management ability, the DVS is referred to as a logically single switch spanning multiple ESXi hosts, as depicted in the image below.

Before we dig deeper into the DVS architecture, let's first see how it is created.

  • First of all, a DVS is not created on an ESXi host. It is created in vCenter by right-clicking your datacenter, or from the Actions menu of the datacenter, as shown below.
  • Assign a name to the DVS and click Next.
  • Select the version of the DVS and click Next.
  • Edit the number of uplinks as required, enable or disable NIOC as required, and either create a default port group along with the switch or uncheck the option to do it later. Click Next.
  • Review your settings and click Finish.
  • That's it; creating a DVS is fairly simple. It should be listed under the network view, as in the image below.

Now, with the DVS created, you can configure further advanced settings on it.

An important point here: even after creating the DVS, no changes are made on the ESXi hosts, as this switch is not yet connected to any host, as shown below.

The question here: why did we create it on the vCenter Server (datacenter)?

Well, we use vCenter Server for centralized management, so we created the DVS on it to achieve exactly that. But remember one thing: the DVS we created on vCenter Server is just a template, or framework, for centralized management. It does not mean that the actual switch runs on vCenter Server, or that our traffic is redirected through vCenter Server.

Let us have a look at DVS architecture.

If you look at what is on vCenter Server, it is only the DVS layout: how the switch is designed, its port groups, and its number of uplinks. For an ESXi host to use the DVS, we need to add it as a member of the DVS. Once we add our ESXi hosts to the DVS, the same layout is used to create the DVS on each ESXi host automatically. This DVS instance on the ESXi host is called the host proxy switch. So the actual switch functionality, i.e. the traffic, stays on the ESXi host only; it is simply linked to the template we created as the DVS in vCenter Server for centralized management. In the future, whenever you have to add new components or change existing ones in the DVS, simply change the DVS template in vCenter Server and the change is propagated to all ESXi hosts connected to the DVS. That is how DVS centralized management works.

The DVS instance on vCenter Server is referred to as the control plane, where DVS management is performed, whereas the host proxy switch on ESXi is referred to as the data (or I/O) plane, where the actual data transfer takes place.
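The control-plane/data-plane split can be illustrated with a small model: the vCenter-side template holds the configuration, every member host holds a proxy copy, and a single change to the template is pushed to all members while traffic handling stays host-local. Class and host names here are illustrative only, not a real API:

```python
# Illustrative model of DVS centralized management: the template
# (control plane) lives in vCenter; each member host holds a proxy
# copy (data plane) that is refreshed whenever the template changes.
import copy

class DvsTemplate:                       # control plane, in vCenter
    def __init__(self, name):
        self.name = name
        self.port_groups = []
        self.members = []                # host proxy switches

    def add_host(self, hostname):
        # Joining the DVS stamps the current layout onto the host.
        proxy = {"host": hostname, "port_groups": copy.copy(self.port_groups)}
        self.members.append(proxy)
        return proxy

    def add_port_group(self, pg_name):
        # One change to the template propagates to every member host.
        self.port_groups.append(pg_name)
        for proxy in self.members:
            proxy["port_groups"] = copy.copy(self.port_groups)

dvs = DvsTemplate("Demo-DVS")
for host in ["esxi-01", "esxi-02", "esxi-03"]:
    dvs.add_host(host)

dvs.add_port_group("PROD")               # one change in vCenter...
print(all(p["port_groups"] == ["PROD"] for p in dvs.members))  # True
```

The point of the sketch is the direction of flow: configuration moves from template to proxies, never the reverse, and nothing in the model forwards traffic through the template, just as no VM traffic flows through vCenter Server.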

This approach greatly reduces management overhead for vSphere administrators by eliminating repetitive processes. Centralized management also avoids unnecessary errors in networking configuration caused by manual repetition. We can also back up a DVS by exporting its configuration to a file, which is not possible with standard switches.

So now the question is how to add ESXi hosts to the DVS. Let's see how it is done.

  • Navigate to the networking view and select the DVS instance.
  • From the Actions menu of the DVS, click Add and Manage Hosts.
  • Click Add hosts.
  • Click the + sign to add a new host to the DVS, select the host, and click Next. For this demo I am adding just one ESXi host.
  • Assign the network adapter by selecting it and clicking Assign uplink.
  • For this demo I am not changing any VMkernel adapters, so click Next. If required, we can do this later.
  • For now, we don't want to migrate any VMs to the DVS, so click Next. We can do this later as required.
  • Review the settings and click Finish. Verify that the server is listed as a connected host.
  • If we check the ESXi host that we added to the DVS, we should see that the DVS instance, or host proxy switch, was created automatically.
  • Let's check another server which has not yet been added to the DVS. As you can see, the Demo DVS is not present on that server.

That's it for this one, guys. I hope it was informative.

We will pause here. We talked about the DVS, how to create one, and how to add hosts to it. There is a lot more that is possible with the DVS; some of it will be discussed in upcoming posts.

Do share, comment, and like if you found it helpful. Check out the next post in this series here: Part-3: Switch Properties Walkthrough and Policy Inheritance.

!!!Cheers!!!
