Wednesday, 23 August 2017

Remote Direct Memory Access for Virtual Machines in vSphere 6.5

As we all know, the basic definition of direct memory access (DMA) is the ability of a device to access the host's memory directly, without the intervention of the CPU or operating system.


RDMA allows direct memory access from the memory of one computer to the memory of another computer without involving the operating system or CPU.

Protocol Support for RDMA:

Several network protocols support RDMA nowadays, such as:

  • InfiniBand (IB) 
  • RDMA over Converged Ethernet (RoCE) 
  • Internet Wide Area RDMA Protocol (iWARP)


Internet Wide Area RDMA Protocol (iWARP) is a network protocol that allows performing RDMA over TCP, which makes it possible to use RDMA over standard Ethernet infrastructure. The NICs must be compatible and support iWARP if CPU offloads are to be used; otherwise, the iWARP stack can be implemented entirely in software, losing most of the RDMA performance advantages.


InfiniBand (IB) is a new-generation network protocol that supports RDMA natively. Because it is a new network technology, it requires NICs and switches that support it.


RoCE is a network protocol that allows performing RDMA over an Ethernet network. Its lower-layer headers are Ethernet headers and its upper-layer headers (including the data) are InfiniBand headers. This allows using RDMA over standard Ethernet infrastructure.

vSphere with RDMA:

With vSphere 6.5, VMware introduced support for RDMA over Converged Ethernet (RoCE), which allows remote direct memory access over an Ethernet network. RoCE dramatically accelerates communication between two network endpoints by leveraging DMA over converged Ethernet infrastructure.

RoCE is supported in two modes, RoCE v1 and RoCE v2.

The RoCE v1 protocol is an Ethernet link-layer protocol with Ethertype 0x8915, which means that the frame length limits of the Ethernet protocol apply: 1500 bytes for a regular Ethernet frame and 9000 bytes for a jumbo frame.

The RoCE v2 protocol exists on top of either the UDP/IPv4 or the UDP/IPv6 protocol. The UDP destination port number 4791 has been reserved for RoCE v2. Since RoCE v2 packets are routable, the RoCE v2 protocol is sometimes called Routable RoCE or RRoCE.

The following diagram shows the packet formats for the RoCE v1 and RoCE v2 protocols.
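The header fields above are already enough to tell the two RoCE versions apart on the wire. Here is a minimal Python sketch of that classification; the function name and return strings are illustrative, and the IPv4/IPv6 Ethertypes are the standard values, not something RoCE-specific:

```python
def classify_roce(ethertype, udp_dst_port=None):
    """Classify a frame as RoCE v1 or RoCE v2 from its headers (illustrative)."""
    ROCE_V1_ETHERTYPE = 0x8915  # RoCE v1 is an Ethernet link-layer protocol
    ROCE_V2_UDP_PORT = 4791     # UDP destination port reserved for RoCE v2
    ETHERTYPE_IPV4 = 0x0800     # standard IPv4 Ethertype
    ETHERTYPE_IPV6 = 0x86DD     # standard IPv6 Ethertype

    if ethertype == ROCE_V1_ETHERTYPE:
        return "RoCE v1"
    if ethertype in (ETHERTYPE_IPV4, ETHERTYPE_IPV6) and udp_dst_port == ROCE_V2_UDP_PORT:
        # Routable, hence sometimes called Routable RoCE (RRoCE)
        return "RoCE v2"
    return "not RoCE"

print(classify_roce(0x8915))        # prints "RoCE v1"
print(classify_roce(0x0800, 4791))  # prints "RoCE v2"
```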

Using RDMA in vSphere:

vSphere 6.5 and later releases support remote direct memory access (RDMA) communication between virtual machines that have paravirtualized RDMA (PVRDMA) network adapters. 
The virtual machines must be connected to the same vSphere Distributed Switch.

A PVRDMA network adapter provides remote direct memory access in a virtual environment. A virtual machine uses the PVRDMA network adapter to communicate with other virtual machines that have PVRDMA devices. The memory transfer is offloaded to the RDMA-capable Host Channel Adapters (HCAs), as shown in the images below.

Image 1: VMware

Image 2: VMware 

The PVRDMA device automatically selects the method of communication between the virtual machines. For virtual machines that run on the same ESXi host with or without a physical RDMA device, the data transfer is a memcpy between the two virtual machines. The physical RDMA hardware is not used in this case. 

For virtual machines that reside on different ESXi hosts and that have a physical RDMA connection, the physical RDMA devices must be uplinks on the distributed switch. In this case, the communication between the virtual machines occurs through the PVRDMA adapter, which uses the underlying physical RDMA devices.

For two virtual machines that run on different ESXi hosts, when at least one of the hosts does not have a physical RDMA device, the communication falls back to a TCP-based channel and the performance is reduced.
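The selection logic described in the last three paragraphs can be summarized in a small Python sketch. The function and its return strings are illustrative only, a model of the behavior rather than any VMware API:

```python
def pvrdma_transport(same_host, src_has_hca, dst_has_hca):
    """Model of how the PVRDMA device picks a communication method (illustrative)."""
    if same_host:
        # Same ESXi host, with or without a physical RDMA device:
        # the data transfer is a memcpy between the two VMs and the
        # physical RDMA hardware is not used.
        return "memcpy"
    if src_has_hca and dst_has_hca:
        # Different hosts, both with physical RDMA uplinks on the
        # distributed switch: use the underlying physical RDMA devices.
        return "RDMA"
    # At least one host lacks a physical RDMA device: fall back to a
    # TCP-based channel with reduced performance.
    return "TCP fallback"

print(pvrdma_transport(same_host=False, src_has_hca=True, dst_has_hca=True))  # prints "RDMA"
```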

PVRDMA Requirements in vSphere 6.5:

VMware vSphere 6.5 and later supports PVRDMA only with a specific configuration. Below are the requirements for the components involved.

  • ESXi host 6.5 or later. 
  • vCenter Server or vCenter Server Appliance 6.5 or later. 
  • vSphere Distributed Switch.
Host Channel Adapter (HCA): 
  • Virtual machines that reside on different ESXi hosts require HCA to use RDMA. 
  • HCA must be assigned as an uplink for the vSphere Distributed Switch. 
  • PVRDMA does not support NIC teaming. 
  • The HCA must be the only uplink on the vSphere Distributed Switch. 
  • For virtual machines on the same ESXi hosts or virtual machines using the TCP-based fallback, the HCA is not required (This point will be discussed later in this article). 
Virtual machine:
  • Virtual hardware version 13 or later. 
Guest OS:
  • Linux (64-bit) 
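The checklist above is mechanical enough to express as code. A Python sketch, with versions simplified to (major, minor) tuples; the function and parameter names are invented for illustration and are not part of any VMware tooling:

```python
def check_pvrdma_requirements(esxi_version, vcenter_version, uses_vds,
                              uplinks, hw_version, guest_os):
    """Return a list of unmet PVRDMA requirements (illustrative helper)."""
    problems = []
    if esxi_version < (6, 5):
        problems.append("ESXi host must be 6.5 or later")
    if vcenter_version < (6, 5):
        problems.append("vCenter Server must be 6.5 or later")
    if not uses_vds:
        problems.append("a vSphere Distributed Switch is required")
    if len(uplinks) != 1:
        # PVRDMA does not support NIC teaming: the HCA must be the
        # only uplink on the vSphere Distributed Switch.
        problems.append("the HCA must be the only uplink (no NIC teaming)")
    if hw_version < 13:
        problems.append("virtual hardware version must be 13 or later")
    if guest_os != "linux-64":
        problems.append("guest OS must be 64-bit Linux")
    return problems

print(check_pvrdma_requirements((6, 5), (6, 5), True, ["hca0"], 13, "linux-64"))  # prints "[]"
```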
Assign a PVRDMA Adapter to a Virtual Machine:

To enable a virtual machine to exchange data using RDMA, you must associate the virtual machine with a PVRDMA network adapter.

  • Locate the virtual machine in the vSphere Web Client inventory and Power off the virtual machine. 
  • On the Configure tab of the virtual machine, expand Settings and select VM Hardware. 
  • Click Edit and select the Virtual Hardware tab in the dialog box displaying the settings. 
  • From the New device drop-down menu, select Network and click Add. 
  • Expand the New Network section and connect the virtual machine to a distributed port group. 
  • From the Adapter type drop-down menu, select PVRDMA. 
  • Expand the Memory section, select Reserve all guest memory (All locked), and click OK. 
  • Power on the virtual machine. 
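The wizard steps above ultimately land as entries in the virtual machine's .vmx file. As a rough sketch of what to expect there, assuming the key names follow the same pattern as other adapter types (these exact keys and values are an assumption; verify against your own VM's configuration file before relying on them):

```
ethernet0.virtualDev = "vrdma"    # assumed device string for the PVRDMA adapter type
ethernet0.dvs.switchId = "..."    # connection to the distributed switch/port group
sched.mem.pin = "TRUE"            # assumed effect of "Reserve all guest memory (All locked)"
```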
Hope this helps you understand PVRDMA in VMware vSphere 6.5.

Tuesday, 22 August 2017

Client integration plugin missing in vSphere 6.5

Recently I came across the information that the Client Integration Plug-in is missing in the vSphere 6.5 release. We used to install the Client Integration Plug-in in previous releases to perform activities from the Web Client such as:
  • Datastore File Upload/Download
  • Launching VM console
  • Using Windows Integrated authentication checkmark on login page
  • OVF templates Export/Deploy
  • Content Library Import/Export

So I got curious about the tasks that we used to perform with the help of this plug-in, and I searched through the documentation for more information.
The Client Integration Plug-in is no longer required. With vSphere 6.5 release, the VMware Enhanced Authentication Plug-in replaces the Client Integration Plug-in. The Enhanced Authentication Plug-in provides Integrated Windows Authentication and Windows-based smart card functionality. These are the only two features that are carried over from the previous Client Integration Plug-in. Install the plug-in only once to enable all the functionality this plug-in delivers. 
If you install the plug-in from an Internet Explorer browser, Internet Explorer identifies the plug-in as being on the Internet instead of on the local intranet. In such cases, the plug-in is not installed correctly because Protected Mode is enabled for the Internet. We must first disable Protected Mode and enable pop-up windows on Web browser. 

The Enhanced Authentication Plug-in can function seamlessly even if the Client Integration Plug-in is installed on the system already from vSphere 6.0 or earlier. There are no conflicts if both plug-ins are installed.

Procedure to install Enhanced Authentication Plugin:
  1. Open a Web browser and type the URL for the vSphere Web Client. 
  2. At the bottom of the vSphere Web Client login page, click "Download Enhanced Authentication Plug-in" link. 
  3. If the browser blocks the installation either by issuing certificate errors or by running a pop-up blocker, follow the Help instructions for your browser to resolve the problem. 
  4. Save the plug-in to your computer, and run the executable. 
  5. Step through the installation wizard for both the VMware Enhanced Authentication Plug-in and the VMware Plug-in Service which are run in succession. 
  6. When the installations are complete, refresh your browser. 
  7. On the External Protocol Request dialog box, click "Launch Application" to run the Enhanced Authentication Plug-in. 
After the plug-in is set up, the download link disappears from the bottom of the login page.
