Uncovering vSphere virtual networking Part-1: Basics

Welcome everyone to this series on vSphere virtual networking. This is going to be a long series due to the vast scope of configuration possibilities in vSphere virtual networking. The reason for doing this series is to help everyone (especially beginners) understand how things work under the hood. To keep it on topic and simple, we will stick to vSphere virtual networking only and will not discuss concepts related to NSX and its components.

To begin with, in this intro post, we will first understand the different layers in a virtualized network architecture and the possible configurations associated with them.

On the Virtual Machine

As you can see in the above image, a virtual machine has a virtual network card, generally referred to as a vNIC or virtual NIC. A virtual machine can have multiple virtual network cards attached to it based on requirements. Check the table below to see the maximum number of network cards that can be connected to a VM across vSphere versions.

| | ESX 7.0 | ESX 6.7 | ESX 6.5 | ESX 6.0 | ESX 5.5 | ESX 5.0 |
|---|---|---|---|---|---|---|
| Maximum vNICs per VM | 10 | 10 | 10 | 10 | 10 | 10 |

There are multiple types of virtual network cards that we can attach to a VM. Below are examples of network adapters we can attach to a VM.

  • Vlance
  • E1000/E1000E
  • VMXNET/VMXNET 2/VMXNET 3
  • DirectPath I/O
  • SR-IOV
  • PVRDMA (paravirtual RDMA)

For more details on these network adapter types, check the updated VMware Docs for vSphere 7.
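As a point of reference, the adapter type of a vNIC shows up in the VM's .vmx configuration file. The sketch below assumes a VM whose first vNIC is a VMXNET 3 adapter connected to the default VM Network port group; the key names are the standard .vmx ones, while the values are purely illustrative:

```
ethernet0.present = "TRUE"            # first vNIC of the VM
ethernet0.virtualDev = "vmxnet3"      # adapter type (e1000, e1000e, vlance, ...)
ethernet0.networkName = "VM Network"  # port group this vNIC connects to
ethernet0.addressType = "generated"   # MAC address handling
```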

The connection from these vNICs of a VM goes to a port group on a virtual switch. Hold on. Before going further, let's discuss the components of virtual networking in an ESXi host. We will continue this discussion again at the end of this post.


There are two types of virtual switches, listed below, that can be used in an ESXi host.

  • vSphere Standard Switch, aka vSS
  • vSphere Distributed Switch, aka vDS or DVS

We will talk about these types of switches in great detail in another post, as this is just an introductory post.

Regardless of what type of switch we are using, there are only two types of connectors on a switch, i.e. a port group and a VMkernel port.
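If you want to see these objects on a real host, the standard switch configuration can be inspected from the ESXi Shell with esxcli. This is a sketch of read-only commands; it assumes shell access to an ESXi host, and the output naturally depends on your environment:

```shell
# List all standard vSwitches with their uplinks, port groups, and policies.
esxcli network vswitch standard list

# List only the port groups, with their vSwitch and VLAN ID assignments.
esxcli network vswitch standard portgroup list
```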


Portgroup:

A port group is defined as a logical template that is used by a VM to connect to a virtual switch. VMs always use port groups to connect to a virtual switch unless you have configured a feature such as DirectPath I/O.

We generally define a VM like this: a virtual machine is a software representation of a physical computer that behaves just like a physical computer.

So how does your physical computer (laptop/PC/server) connect to a switch? Using an Ethernet cable from the Ethernet port on your system to a physical switch port, right? A virtual machine also expects a similar type of connectivity. But since this is a virtual architecture, we cannot use an Ethernet cable. This is where a port group is used for connectivity.

I often hear another definition of port groups in my sessions, as below.

A port group is a group of virtual ports on the virtual switch. And most of the time, the justification used for this definition is that multiple VMs are connected to the same port group.

That definition is actually not correct. Multiple VMs are connected to the same port group because we want to apply the same set of networking policies to those VMs. That is why we connect multiple VMs to the same port group, not for grouping the ports together.

Anyway, I will discuss networking policies separately, since there is huge scope for discussion there given the number of policies available.

While creating a port group, we just need to assign a name and, optionally, a VLAN ID, as shown in the image below.
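For example, those same two settings (a name and an optional VLAN ID) are all that esxcli asks for when you create a port group from the ESXi Shell. The port group name and VLAN ID below are made up for illustration:

```shell
# Create a port group on vSwitch0.
esxcli network vswitch standard portgroup add \
    --portgroup-name=Prod-PG --vswitch-name=vSwitch0

# Optionally tag it with a VLAN ID.
esxcli network vswitch standard portgroup set \
    --portgroup-name=Prod-PG --vlan-id=10
```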

VMkernel port:

VMkernel ports are not used by VMs. They are reserved for ESXi host system traffic such as vSAN, vMotion, IP storage, replication traffic, provisioning traffic, Fault Tolerance logging, and ESXi management traffic.

While it is possible to group different traffic types on a single VMkernel port, it is recommended that you create separate VMkernel ports for each type of traffic, and possibly on different subnets, as a design best practice.

  • Management Traffic 
    • Carries the configuration and management communication for ESXi hosts with vCenter Server, host-to-host HA traffic, and other endpoints.
  • vMotion Traffic 
    • Carries vMotion related traffic.
  • Provisioning Traffic 
    • Handles the data that is transferred for virtual machine operations such as cold migration, cloning, and snapshot migration. 
  • IP storage Traffic
    • Handles the connection for storage types that use standard TCP/IP networks and depend on the VMkernel networking.
    • Examples of such storage types are software iSCSI, dependent hardware iSCSI, and NFS.
  • Fault Tolerance Traffic 
    • Handles the data that the primary fault tolerant virtual machine sends to the secondary fault tolerant virtual machine over the VMkernel networking layer.
    • A separate VMkernel adapter for Fault Tolerance logging is required on every host that is part of a vSphere HA cluster. 
  • vSphere Replication Traffic 
    • Handles the outgoing replication data that the source ESXi host transfers to the vSphere Replication server.
  • vSphere Replication NFC traffic 
    • Handles the incoming replication data on the target replication site. 
  • vSAN traffic 
    • Every host that participates in a vSAN cluster must have a VMkernel adapter to handle the vSAN traffic.

While creating VMkernel ports, we need to define IP address details along with a label and one of the traffic types mentioned earlier. This is not required when you create a port group.
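To illustrate the difference, creating a VMkernel port from the ESXi Shell takes an IP configuration and a traffic tag on top of the port group itself. Below is a sketch for a vMotion interface; the port group name, interface name, and addresses are example values:

```shell
# Port group that will back the VMkernel interface.
esxcli network vswitch standard portgroup add \
    --portgroup-name=vMotion-PG --vswitch-name=vSwitch0

# The VMkernel interface itself, with a static IPv4 address.
esxcli network ip interface add \
    --interface-name=vmk1 --portgroup-name=vMotion-PG
esxcli network ip interface ipv4 set \
    --interface-name=vmk1 --ipv4=192.168.10.11 \
    --netmask=255.255.255.0 --type=static

# Mark the interface for vMotion traffic.
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
```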

By default, when we install an ESXi server, we get a default virtual switch (standard type) on which one default port group called VM Network and a VMkernel port called Management Network (vmk0) are created. We can go ahead and modify these, and also add additional port groups, VMkernel ports, or switches as required.
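You can confirm this default layout from the ESXi Shell. A sketch of read-only commands, assuming a freshly installed host:

```shell
# Shows vmk0 (Management Network) and any other VMkernel interfaces.
esxcli network ip interface list

# Shows the management IP address, which lives on vmk0, not on a physical NIC.
esxcli network ip interface ipv4 get
```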

Now that we have discussed port groups, VMkernel ports, and switch types, let us continue the network-layer discussion from the first image, where I asked you to hold on when we reached port groups.

On the switch, one port is used to connect to the physical Ethernet card of the ESXi host. This port is called an uplink port. The Ethernet card of the ESXi host is referred to as a vmnic, pNIC, or uplink. It acts as a bridge between virtual and physical networking. Due to this, the Ethernet cards of ESXi hosts generally do not have any IP address associated with them. What do I mean by this? When you assign an IP address to ESXi, it is not associated with the network adapter of the ESXi host; instead, it is associated with the management VMkernel port.
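This is easy to verify from the ESXi Shell: physical NICs show link state and speed but no IP address, and they are attached to a vSwitch as uplinks. The vmnic and vSwitch names below are illustrative:

```shell
# List the host's physical NICs (vmnic0, vmnic1, ...): driver, link state,
# speed, and MAC address, but no IP configuration.
esxcli network nic list

# Attach vmnic1 as an additional uplink of vSwitch0.
esxcli network vswitch standard uplink add \
    --uplink-name=vmnic1 --vswitch-name=vSwitch0
```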

Beyond the ESXi Ethernet card, it is the typical physical networking we have been using traditionally. I hope this clarifies the layers of virtual networking. We will revisit these layers in other posts for more advanced topics later.

I hope this was helpful. See you in the next post: Part-2: Virtual Switches.

Do share, comment, and like if you found it informative.
