Uncovering virtual networking Part-8: Load balancing Algorithms

In the previous post on network failure detection, we covered the Teaming and Failover policies. In this post, we are going to explore the load balancing algorithms in virtual networking. This is probably the longest post I have written so far 🙂

We have two types of virtual switches, the standard switch (vSS) and the distributed switch (vDS), and they share these load balancing algorithms, with the exception of a policy called Load Based Teaming (LBT), which is available only on the vDS. These load balancing algorithms determine how a virtual switch distributes network traffic across the physical NICs (uplinks) configured in a team.

Before we get into specific algorithms, let me make one thing clear: don't take the term "load balancing" literally. The algorithms we are going to explore distribute the load across the available uplinks rather than truly balancing it, so it would be more accurate to call them load distribution algorithms. This will become much clearer once we are done discussing the algorithms in this post.

I guess that’s enough ground discussion to get started with these algorithms. Let’s dive into each of them.

Load Balancing Algorithms

  • Route Based on Originating Virtual Port ID
    • Available on vSS and vDS
  • Route Based on Source MAC Hash
    • Available on vSS and vDS
  • Route Based on IP Hash
    • Available on vSS and vDS
  • Route Based on Physical NIC Load (aka LBT or Load Based Teaming)
    • Available only on vDS
  • Use Explicit Failover Order
    • Available on vSS and vDS

Let's see each of these in detail.

Route Based on Originating Virtual Port ID

Route Based on Originating Virtual Port ID is the default load balancing method on the vSphere vSS and vDS. This method has the lowest overhead of all algorithms from a virtual switch processing perspective, and works with any network configuration. It does not require any additional physical switch configuration. 

The virtual switch selects the uplink for a virtual machine's traffic based on the VM's virtual port ID on the vSS or vDS.

Each virtual machine running on an ESXi host has an associated virtual port ID on the virtual switch. A virtual machine's port ID stays the same as long as the VM keeps running on the same ESXi host. If you migrate, power off, or delete the virtual machine, its virtual port ID on the virtual switch is released for others to use. If the virtual machine is powered on again or migrated, it may get a different port and therefore use the uplink associated with the new virtual port.

As you can see in the image below, I have a PG-PROD portgroup with 4 VMs connected to it, and each VM is associated with its virtual port: VM1 with vPort 0, VM2 with vPort 1, VM3 with vPort 2, and VM4 with vPort 3. I also have 2 uplinks connected to this switch (vmnic0 and vmnic1). Now the question is, which uplink will each of these VMs use?

To calculate the uplink for a virtual machine, the virtual switch uses the VM's virtual port ID and the number of uplinks in the NIC team. With this logic, the VM connected to vPort 0 will use the first uplink (vmnic0), and the VM connected to vPort 1 will use the next available uplink (vmnic1).

Now, I have only 2 uplinks here, so what about the 3rd and 4th VMs? If I had a 3rd uplink (vmnic2), VM3 would have used it, but since I don't, the cycle starts again with the first uplink. So VM3 on vPort 2 will use the first uplink (vmnic0) and VM4 on vPort 3 will use the next uplink (vmnic1). The image below illustrates the process we just discussed.
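The round-robin cycling described above boils down to a simple modulo on the port ID. Here is a minimal sketch of that logic (a hypothetical helper for illustration, not VMware code):

```python
# Illustrative sketch of virtual port ID based uplink selection:
# the uplink is effectively (virtual port ID) mod (number of uplinks).

def uplink_for_vport(vport_id: int, uplinks: list) -> str:
    """Pick an uplink by cycling through the team in port-ID order."""
    return uplinks[vport_id % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
for vport in range(4):  # VM1..VM4 on vPort 0..3
    print(f"vPort {vport} -> {uplink_for_vport(vport, uplinks)}")
# vPort 0 and 2 land on vmnic0, vPort 1 and 3 on vmnic1
```

With two uplinks and four VMs the cycle wraps exactly as in the example: vPorts 0 and 2 on vmnic0, vPorts 1 and 3 on vmnic1.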

If you repeat this exercise with an odd number of VMs, say 3 or 5 of them, do you think it is still load balancing? I'll leave that question for you to think about.

Drawbacks of this algorithm:

  • The virtual switch is not aware of the traffic load on the uplinks, and it does not rebalance traffic towards less-used uplinks.
  • The bandwidth available to a VM is limited to the speed of the uplink associated with its port ID, unless the VM has more than one virtual NIC.

Cool. That is one down. Let’s move onto the next algorithm.

Route Based on Source MAC Hash

With this algorithm, the virtual switch selects an uplink based on the virtual machine's MAC address. To calculate the uplink for a VM, the virtual switch takes the VM's MAC address and the number of uplinks in the NIC team and performs a modulo operation:

Uplink = HEX(VM MAC Address) mod (Number of uplinks)

Well, what does it mean?

Let’s take an example here as in image below to understand this.

Now let’s assume that each of these VMs have below listed MAC addresses, associated with them.

  • VM1: 00:50:56:00:00:0a
  • VM2: 00:50:56:00:00:0b
  • VM3: 00:50:56:00:00:0c

So how are the calculations done?

Let's apply the modulo operator to these MACs with the number of uplinks, as in the formula above.

  • VM1: Hex(00:50:56:00:00:0a) mod 2 = 0
  • VM2: Hex(00:50:56:00:00:0b) mod 2 = 1
  • VM3: Hex(00:50:56:00:00:0c) mod 2 = 0

That's it. The remainders represent the uplinks that will be used by each of these VMs: VM1 will use the first uplink (vmnic0), VM2 will use vmnic1, and VM3 will use vmnic0. The image below depicts the uplink usage as per our calculations.
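The calculation above can be sketched in a few lines: treat the MAC address as one big hex number and take it modulo the uplink count (an illustrative helper, not VMware's actual implementation):

```python
# Illustrative sketch of the source MAC hash formula:
# Uplink = HEX(VM MAC address) mod (number of uplinks)

def uplink_for_mac(mac: str, uplinks: list) -> str:
    """Hash a VM's MAC address to an uplink in the team."""
    value = int(mac.replace(":", ""), 16)  # MAC as a hex integer
    return uplinks[value % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
for mac in ["00:50:56:00:00:0a", "00:50:56:00:00:0b", "00:50:56:00:00:0c"]:
    print(mac, "->", uplink_for_mac(mac, uplinks))
# :0a -> vmnic0, :0b -> vmnic1, :0c -> vmnic0
```

With two uplinks, the result depends only on whether the MAC's value is even or odd, which is why the three VMs land on vmnic0, vmnic1, vmnic0 as calculated above.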

In fact, in the above example, even if I take only the least significant byte instead of the full MAC address, I get the same results; give it a try. This is because all three MAC addresses share the same first five octets (00:50:56:00:00) and differ only in the last one.

If you look at the image above, this is not really load balancing either, and this algorithm may not be an ideal choice for VMs with a single vNIC, as they will keep using the same uplink every time. The uplink is recalculated only if an uplink becomes unavailable. The load on an uplink is not taken into consideration, so multiple VMs may end up on a heavily loaded uplink simply because their modulo results were the same.

Advantages of Source MAC Hash:

  • More even distribution of the traffic than Route Based on Originating Virtual Port ID, because the uplink is calculated per source MAC address. The virtual machines that benefit most from this algorithm are those with multiple virtual NICs, and therefore multiple MAC addresses.
  • A virtual machine keeps using the same uplink because its MAC address is static. Powering a virtual machine on or off does not change the uplink it uses.
  • No changes on the physical switch are required.

Disadvantages of Source MAC Hash:

  • The bandwidth available to a virtual machine is limited to the speed of the uplink associated with its MAC address, unless the virtual machine uses multiple source MAC addresses.
  • Higher resource consumption than Route Based on Originating Virtual Port, because the virtual switch calculates an uplink for every packet. 
  • The virtual switch is not aware of the load of the uplinks, so uplinks might become overloaded.

Hmmm, that's two down now. Still 3 more to go. Let's move on to the next one.

Route Based on IP Hash

Since we already talked about the Source MAC hash, this algorithm will be relatively easy to understand. The challenge with Source MAC Hash is single-vNIC VMs: each VM is pinned to one uplink, and because the algorithm does not consider the load on the uplinks, multiple VMs can keep using the same heavily loaded uplink simply because their modulo remainders are the same. Recalculation does not happen until an uplink fails.

The IP Hash algorithm overcomes this challenge, as it uses not only the source IP but the destination IP as well. The virtual switch selects uplinks for virtual machines based on the source and destination IP addresses of each packet.

To calculate the uplink for a virtual machine, the virtual switch takes the last octets of the source and destination IP addresses in the packet, puts them through a XOR operation, and then runs the result through a modulo calculation based on the number of uplinks in the NIC team. Let's understand this with an example.

As in the image above, let's say we have two VMs with the mentioned configurations: VM1 has two vNICs and VM2 has a single vNIC. The environment also has 3 servers: a file server, a print server, and an NFS server. Now let's assume VM2 needs to communicate with all three servers. Which uplink will be used?

To calculate the uplink, the last octets of the VM's source IP and the destination IP are used. So we have 3 test cases, as there are three destination servers.

  1. VM2 -> File Server : 10.0.0.30 -> 10.0.0.40
  2. VM2 -> NFS Server : 10.0.0.30 -> 10.0.0.50
  3. VM2 -> Print Server : 10.0.0.30 -> 10.0.0.60

Let's convert these IP addresses to hex values first.

  • 10.0.0.30 -> 0x0a00001e
  • 10.0.0.40 -> 0x0a000028
  • 10.0.0.50 -> 0x0a000032
  • 10.0.0.60 -> 0x0a00003c

Now let's take the last octet of each address and perform a XOR (exclusive OR) operation.

  1. 30 XOR 40 : 1e XOR 28 = 36
  2. 30 XOR 50 : 1e XOR 32 = 2c
  3. 30 XOR 60 : 1e XOR 3c = 22

Now that we have the XORed values, let's apply a modulo with the number of uplinks on the switch.

  1. 0x36 (54) mod 3 = 0
  2. 0x2c (44) mod 3 = 2
  3. 0x22 (34) mod 3 = 1

So we have the remainders, which represent the uplinks to be used.

  1. VM2 -> File Server : vmnic0
  2. VM2 -> NFS Server : vmnic2
  3. VM2 -> Print Server : vmnic1
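The whole XOR-and-modulo calculation above fits in a few lines of code. A minimal sketch (illustrative only; the `uplink_for_flow` helper is hypothetical, not VMware's implementation):

```python
# Illustrative sketch of the IP hash math from the example above:
# XOR the last octets of source and destination IPs, then take the
# result modulo the number of uplinks in the team.

def uplink_for_flow(src_ip: str, dst_ip: str, uplinks: list) -> str:
    """Hash a (source IP, destination IP) pair to an uplink."""
    src_last = int(src_ip.split(".")[-1])  # e.g. 30 -> 0x1e
    dst_last = int(dst_ip.split(".")[-1])  # e.g. 40 -> 0x28
    return uplinks[(src_last ^ dst_last) % len(uplinks)]

uplinks = ["vmnic0", "vmnic1", "vmnic2"]
for dst in ["10.0.0.40", "10.0.0.50", "10.0.0.60"]:
    print("10.0.0.30 ->", dst, ":", uplink_for_flow("10.0.0.30", dst, uplinks))
# -> vmnic0, vmnic2, vmnic1 — matching the manual calculation
```

Running it reproduces the three results above: the same source VM lands on a different uplink for each destination, which is exactly what makes IP hash more of a per-flow distribution than the per-VM algorithms.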

As you can see, even though VM2's IP address is static, the destination IP changes each time, so the VM may use a different uplink. In the example above, VM2 uses a different uplink for each of the three servers. I hope that explains it. I'll leave the calculations for VM1 (which has two source IP addresses) to you. As I said, I am kinda lazy 🙂 So have fun.

Anyway, below is the graphical representation of above example (Only VM2 πŸ˜› )

The IP hash algorithm is the only one that comes close to the term load balancing. I can understand if you look forward to using this algorithm in your environment, but there is a prerequisite: to ensure that IP hash load balancing works correctly, you must have an EtherChannel or LACP configured on the physical switch. An EtherChannel bonds multiple physical links into a single logical link.

Limitations and Configuration Requirements:

  • ESXi hosts support IP hash teaming on a single physical switch or stacked switches.
  • ESXi hosts support only 802.3ad link aggregation in static mode. On a vSphere Standard Switch you can only use a static EtherChannel; LACP is not supported there (LACP is available on the vDS).
  • If you enable IP hash load balancing without 802.3ad link aggregation, or the reverse, you might experience networking disruptions.
  • You must use Link Status Only as the network failure detection method with IP hash load balancing.
  • You must set all uplinks from the team in the Active failover list. The Standby and Unused lists must be empty.
  • The number of ports in the EtherChannel must be the same as the number of uplinks in the team.

Advantages of IP Hash algorithm:

  • More even distribution of the load compared to Route Based on Originating Virtual Port ID and Route Based on Source MAC Hash, as the virtual switch calculates the uplink per packet.
  • Potentially higher throughput for VMs that communicate with multiple IP addresses.

Disadvantages of IP Hash algorithm:

  • Highest resource consumption compared to the other load balancing algorithms. 
  • The virtual switch is not aware of the actual load of the uplinks. 
  • Requires changes on the physical network. 
  • Complex to troubleshoot.

Now just two more algorithms remain, so let's jump into the next one.

Route Based on Physical NIC Load

Route Based on Physical NIC Load is also commonly referred to as LBT or Load Based Teaming.

This algorithm is based on Route Based on Originating Virtual Port ID, but the virtual switch also checks the actual load of the uplinks and takes steps to reduce it on overloaded ones. This algorithm is available only on the vDS.

The vDS calculates the uplink for a VM by taking its virtual port ID and the number of uplinks in the NIC team. It then tests the uplinks every 30 seconds, and if an uplink's utilization exceeds 75% over a 30-second period, the virtual machine with the highest I/O is moved to a different uplink.
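The rebalancing idea can be sketched as a periodic check: sample each uplink's utilization, and if one crosses the 75% threshold, move its busiest VM to a less loaded uplink. This is an illustrative toy model of the concept, not VMware's actual implementation (the `rebalance` helper and its data shapes are assumptions for the example):

```python
# Toy sketch of the LBT rebalancing check, run every 30 seconds in concept.
THRESHOLD = 0.75  # 75% utilization over the sampling period

def rebalance(uplink_load, vms_on_uplink):
    """Return a list of (vm, src_uplink, dst_uplink) moves, one per hot uplink.

    uplink_load:   {uplink: utilization 0.0-1.0}
    vms_on_uplink: {uplink: [(vm_name, io_share), ...]}
    """
    moves = []
    for uplink, load in uplink_load.items():
        if load > THRESHOLD:
            # Pick the VM with the highest I/O on the overloaded uplink...
            vm, _ = max(vms_on_uplink[uplink], key=lambda v: v[1])
            # ...and move it to the least loaded uplink in the team.
            dst = min(uplink_load, key=uplink_load.get)
            moves.append((vm, uplink, dst))
    return moves

load = {"vmnic0": 0.82, "vmnic1": 0.30}
vms = {"vmnic0": [("VM1", 0.50), ("VM2", 0.32)], "vmnic1": [("VM3", 0.30)]}
print(rebalance(load, vms))  # [('VM1', 'vmnic0', 'vmnic1')]
```

Here vmnic0 is over the threshold, so VM1 (the highest I/O consumer on it) is moved to vmnic1; if no uplink exceeds 75%, nothing moves and the original port-ID placement stays in effect.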

Pros of Route Based on Physical NIC Load:

  • Low resource consumption, as the vDS calculates uplinks for VMs only once, and checking the load of the uplinks has minimal impact.
  • The vDS is aware of the load of uplinks and takes care to reduce it if needed. 
  • No changes on the physical switch are required.

Cons of Route Based on Physical NIC Load:

  • The bandwidth that is available to virtual machines is limited to the uplinks that are connected to the distributed switch.
  • Requires an Enterprise Plus license (for the vDS).

That's it for this one, since we already covered Originating Virtual Port ID. This algorithm is based on it and simply adds more even load distribution by checking the load on the uplinks every 30 seconds.

Now just one more to go πŸ™‚ πŸ™‚ πŸ™‚

Use Explicit Failover Order

No actual load balancing happens with this policy. You manually define which NICs to use as Active, and which to use as Standby in the event of a failure. This is done by creating a list of Active and Standby NICs when you configure the NIC teaming failover order.

  • Active adapters: Continue to use the uplink if the network adapter connectivity is up and active.
  • Standby adapters: Use this uplink if one of the active physical adapters is down.
  • Unused adapters: Do not use this uplink.

Below is an example of Failover order configuration.

The ESXi host will use the first NIC in the Active list. The other NICs will not be used unless that NIC becomes unavailable. There is no intelligence behind this method and it does not provide any load balancing; it is focused on providing resiliency.

Whoaaaa!!! That was long and took a really good amount of time to put everything in place 🙂 However, it's finally complete.

Which one to use, you can decide based on your environment (I mean, the availability of a vDS) and your networking use cases.

That is all about these load balancing algorithms. I hope it is informative. Cheers.

By the way, I am planning to end this series here. Parts 1 through 8 cover the majority of concepts related to virtual networking. There are still a few concepts I have not touched upon, such as advanced vDS policies.

If you guys have any suggestions, please feel free to share them in the comments or over email. I am just trying to make better use of time by sharing during this period of lockdown.

Stay fit and safe everyone !!!!!
