RDMA allows direct memory access from the memory of one computer to the memory of another computer without involving the operating system or CPU.
Protocol Support for RDMA:
Several network protocols support RDMA today, including:
- InfiniBand (IB)
- RDMA over Converged Ethernet (RoCE)
- Internet Wide Area RDMA Protocol (iWARP)
The Internet Wide Area RDMA Protocol (iWARP) is a network protocol that allows RDMA to be performed over TCP, which makes it possible to use RDMA over standard Ethernet infrastructure. Only the NICs need to be iWARP-compatible (when CPU offloads are used); otherwise, the entire iWARP stack can be implemented in software, at the cost of losing most of RDMA's performance advantages.
RoCE is supported in two versions: RoCE v1 and RoCE v2.
The RoCE v1 protocol is an Ethernet link-layer protocol with Ethertype 0x8915, which means the frame length limits of the Ethernet protocol apply: 1500 bytes for a regular Ethernet frame and 9000 bytes for a jumbo frame.
The RoCE v2 protocol runs on top of either UDP/IPv4 or UDP/IPv6. UDP destination port 4791 has been reserved for RoCE v2. Since RoCE v2 packets are routable, the protocol is sometimes called Routable RoCE (RRoCE).
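The identifiers above (Ethertype 0x8915 for RoCE v1, UDP destination port 4791 for RoCE v2) are enough to tell the two versions apart on the wire. As a small illustration, the Python sketch below uses those constants in a hypothetical `classify_frame` helper; the helper is not part of any RDMA stack, just a way to make the dispatch rule concrete:

```python
import struct

# Identifiers from the RoCE specifications, as described above:
ROCE_V1_ETHERTYPE = 0x8915   # RoCE v1 lives at the Ethernet link layer
ROCE_V2_UDP_DPORT = 4791     # RoCE v2 is identified by UDP destination port

def classify_frame(ethertype, udp_dport=None):
    """Hypothetical classifier: decide how a frame would be dispatched."""
    if ethertype == ROCE_V1_ETHERTYPE:
        return "RoCE v1"
    # RoCE v2 rides inside UDP over IPv4 (Ethertype 0x0800) or IPv6 (0x86DD)
    if ethertype in (0x0800, 0x86DD) and udp_dport == ROCE_V2_UDP_DPORT:
        return "RoCE v2"
    return "not RoCE"

# A minimal UDP header (source port, dest port, length, checksum), big-endian:
udp_header = struct.pack("!HHHH", 49152, ROCE_V2_UDP_DPORT, 8, 0)
_, dport, _, _ = struct.unpack("!HHHH", udp_header)

print(classify_frame(0x8915))          # -> RoCE v1
print(classify_frame(0x0800, dport))   # -> RoCE v2
print(classify_frame(0x0800, 443))     # -> not RoCE
```

Note that a real receiver never inspects the UDP port for RoCE v1: the Ethertype alone settles it, since RoCE v1 frames carry no IP or UDP headers at all.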
The following diagram shows the packet formats for the RoCE v1 and RoCE v2 protocols.
vSphere 6.5 and later releases support remote direct memory access (RDMA) communication between virtual machines that have para-virtualized RDMA (PVRDMA) network adapters. The virtual machines must be connected to the same vSphere Distributed Switch.
Image 2: VMware

PVRDMA Requirements:
- ESXi host 6.5 or later.
- vCenter Server or vCenter Server Appliance 6.5 or later.
- vSphere Distributed Switch.
Host Channel Adapter (HCA):
- Virtual machines that reside on different ESXi hosts require an HCA to use RDMA.
- HCA must be assigned as an uplink for the vSphere Distributed Switch.
- PVRDMA does not support NIC teaming.
- The HCA must be the only uplink on the vSphere Distributed Switch.
- For virtual machines on the same ESXi host, or virtual machines using the TCP-based fallback, an HCA is not required (this point will be discussed later in this article).
- Virtual hardware version 13 or later.
- A 64-bit Linux guest operating system.
Assign a PVRDMA Adapter to a Virtual Machine:
To enable a virtual machine to exchange data using RDMA, you must associate the virtual machine with a PVRDMA network adapter.
- Locate the virtual machine in the vSphere Web Client inventory and power off the virtual machine.
- On the Configure tab of the virtual machine, expand Settings and select VM Hardware.
- Click Edit and select the Virtual Hardware tab in the dialog box displaying the settings.
- From the New device drop-down menu, select Network and click Add.
- Expand the New Network section and connect the virtual machine to a distributed port group.
- From the Adapter type drop-down menu, select PVRDMA.
- Expand the Memory section, select Reserve all guest memory (All locked), and click OK.
- Power on the virtual machine.
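For reference, the adapter type and memory reservation chosen in the steps above end up in the virtual machine's .vmx file roughly as in the sketch below. This is an illustrative fragment, not a supported configuration method; use the Web Client workflow described above rather than hand-editing the .vmx, as key names can vary between releases:

```
virtualHW.version = "13"
ethernet0.virtualDev = "pvrdma"
sched.mem.pin = "TRUE"
```

Here `ethernet0.virtualDev` selects the PVRDMA adapter type and `sched.mem.pin` corresponds to the "Reserve all guest memory (All locked)" setting.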
I hope this helps you understand PVRDMA in VMware vSphere 6.5.