Saturday, 6 February 2016

Virtual machine network adapter types.

When you create or configure a virtual machine, you can specify the network adapter (NIC) type.

The types of network adapters that are available depend on the following factors:


  • The virtual machine version.
  • Whether the virtual machine has been updated to the latest version for the current host.
  • The guest operating system.

The following NIC types are supported:

Vlance:

Emulated version of the AMD 79C970 PCnet32 LANCE NIC, an older 10 Mbps NIC with drivers available in most 32-bit guest operating systems except Windows Vista and later. 

A virtual machine configured with this network adapter can use its network immediately.

Flexible:

Identifies itself as a Vlance adapter when a virtual machine boots, but initializes itself and functions as either a Vlance or a VMXNET adapter, depending on which driver initializes it. 

Without VMware Tools installed, it runs in Vlance mode. However, when you install VMware Tools, it changes to the high-performance VMXNET adapter.

E1000:

Emulated version of the Intel 82545EM Gigabit Ethernet NIC, with drivers available in most newer guest operating systems, including Windows XP and later and Linux versions 2.4.19 and later.

VMXNET:

Optimized for performance in a virtual machine and has no physical counterpart. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available. 

Enhanced VMXNET (VMXNET 2):

Based on the VMXNET adapter but provides high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. VMXNET 2 (Enhanced) is available only for some guest operating systems on ESX/ESXi 3.5 and later.

VMXNET 3:

Next generation of a paravirtualized NIC designed for performance. VMXNET 3 offers all the features available in VMXNET 2 and adds several new features, such as multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. VMXNET 3 is not related to VMXNET or VMXNET 2.

SR-IOV (Single Root I/O Virtualization):

vSphere 5.1 and later supports Single Root I/O Virtualization (SR-IOV). SR-IOV is a specification that allows a single Peripheral Component Interconnect Express (PCIe) physical device under a single root port to appear to be multiple separate physical devices to the hypervisor or the guest operating system. 

SR-IOV uses physical functions (PFs) and virtual functions (VFs) to manage global functions for the SR-IOV devices. 

PFs are full PCIe functions that include the SR-IOV Extended Capability, which is used to configure and manage the SR-IOV functionality. It is possible to configure or control PCIe devices using PFs, and the PF has full ability to move data in and out of the device. 

VFs are lightweight PCIe functions that contain all the resources necessary for data movement but have a carefully minimized set of configuration resources. 

SR-IOV-enabled PCIe devices present multiple instances of themselves to the guest OS instance and hypervisor. The number of virtual functions presented depends on the device. For SR-IOV-enabled PCIe devices to function, you must have the appropriate BIOS and hardware support, as well as SR-IOV support in the guest driver or hypervisor instance.


SR-IOV Architecture (Image: VMware)

You should choose this adapter type if you are running a latency-sensitive workload and need a high-performance adapter. However, check whether your guest operating system supports this adapter type, as only a limited set of operating systems support it.

Also, the set of available virtualization features is reduced, because features such as vMotion, DRS, and FT are not compatible with SR-IOV. Please check the VMware KB article on SR-IOV for more information.

You can also check the VMware KB article on which adapter to choose for a virtual machine.
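
For illustration, here is a minimal pyVmomi (Python) sketch of how the adapter type is selected programmatically when adding a NIC to an existing virtual machine. The vm and network objects, the "PROD" port group name, and the surrounding session setup are assumptions for the example; substituting vim.vm.device.VirtualE1000, VirtualVmxnet2, VirtualPCNet32, or VirtualSriovEthernetCard in place of VirtualVmxnet3 selects the other adapter types.

    from pyVmomi import vim

    # Assumes 'vm' is a vim.VirtualMachine and 'network' is the vim.Network
    # object for the target port group, both looked up beforehand.
    nic_spec = vim.vm.device.VirtualDeviceSpec()
    nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add

    nic = vim.vm.device.VirtualVmxnet3()            # the device class picks the adapter type
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
    nic.backing.network = network
    nic.backing.deviceName = "PROD"                 # hypothetical port group name
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(startConnected=True)
    nic_spec.device = nic

    config_spec = vim.vm.ConfigSpec(deviceChange=[nic_spec])
    task = vm.ReconfigVM_Task(spec=config_spec)     # apply the change to the VM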












Which SCSI Controller to use for a Virtual Machine?

When you create a virtual machine, you can view the storage controller setting as shown below.




 

You may go ahead and change the controller type. However, let's see what each type means before we talk about changing the controller type for a VM.

BusLogic Parallel

This was the first emulated vSCSI controller available in the VMware platform. In Windows Server 2000, this driver is available by default. This adapter has a queue depth of 1 and therefore lower performance.

LSI Logic Parallel

This is another emulated vSCSI controller available in the VMware platform. Most operating systems had a driver that supported a queue depth of 32, and it became a very common choice, if not the default.

LSI Logic SAS

This is an evolution of the parallel driver to support a new, future-facing standard. It began to grow in popularity when Microsoft required its use for MSCS (Microsoft Cluster Service) within Windows 2008 or newer.

VMware Paravirtual (aka PVSCSI)

This vSCSI controller is virtualization-aware and was designed to support very high throughput with minimal processing cost, making it the most efficient driver.

Now that we have talked about these controllers, let's talk about changing them.

When we create a VM, the creation wizard automatically selects the SCSI controller that is best suited for the selected guest operating system. Before you decide to change to any other type of controller, make sure the guest operating system supports that controller type, or unexpected issues may occur.
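
As a rough illustration of how a controller type maps to the API, here is a minimal pyVmomi (Python) sketch that adds a VMware Paravirtual (PVSCSI) controller to an existing, powered-off VM. The vm object and the bus number are assumptions for the example; the other types correspond to vim.vm.device.VirtualBusLogicController, VirtualLsiLogicController, and VirtualLsiLogicSASController.

    from pyVmomi import vim

    # Assumes 'vm' is a vim.VirtualMachine obtained from a pyVmomi session.
    ctl_spec = vim.vm.device.VirtualDeviceSpec()
    ctl_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add

    controller = vim.vm.device.ParaVirtualSCSIController()    # PVSCSI controller
    controller.busNumber = 1                                   # add a second controller; leave bus 0 untouched
    controller.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

    ctl_spec.device = controller
    spec = vim.vm.ConfigSpec(deviceChange=[ctl_spec])
    task = vm.ReconfigVM_Task(spec=spec)                       # reconfigure the VM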

Reference: VMware blog

Refer to the VMware KB article on how to change the SCSI controller.


What is new with Virtual Hardware Version 11 in vSphere 6?

Virtual Hardware Version 11 includes the features listed below.


xHCI controller updated to version 1.0:

USB 3 support for Mac OS X 10.8, Windows Server 2012, and Windows 8 operating systems. This controller is also backward compatible with USB 2.0 and USB 1.0.

 
Windows VMXNET3 driver support:

Supports large receive offload (LRO), which reduces the CPU cost associated with processing network packets.

Enhanced NUMA feature:

Hot-added memory is now distributed across all NUMA nodes. In earlier versions, memory that you hot-added was allocated to only a single NUMA node.

Guest authentication:

Support for Windows 2000 and later, Linux kernels 2.4 and later, and Solaris operating systems.

Host Guest File System (HGFS) shared folder driver:

Allows sharing of a folder between the virtual machine and the host system. Use this driver if you plan to use the virtual machine with VMware Workstation™, VMware Player™, or VMware Fusion®.

Increased vCPU capacity:

Hardware version 11 virtual machines can support up to 128 virtual CPUs. However, make sure the licensing requirements for this are fulfilled, and that the guest operating system supports that many CPUs.

Increased RAM capacity: 

Hardware version 11 virtual machines support up to 4 TB of RAM. Check your guest operating system's maximum limits as well before you allocate that much memory.

Increased serial port configuration:

Hardware version 11 virtual machines can be configured with up to 32 serial ports.
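
To check the current hardware version and schedule the upgrade from a script, a minimal pyVmomi (Python) sketch might look like the one below; the vm object is an assumption, the VM must be powered off, and because a hardware version upgrade cannot be rolled back through the UI, take a snapshot or backup first.

    # Assumes 'vm' is a vim.VirtualMachine from a pyVmomi session.
    print(vm.config.version)                    # current hardware version, e.g. "vmx-10"

    # Upgrade to virtual hardware version 11 (VM must be powered off).
    task = vm.UpgradeVM_Task(version="vmx-11")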
















Friday, 5 February 2016

VMware vSphere: Networking - Distributed Virtual Switch

Will be adding more details soon.



ESXi Logs in DCUI Console



In the DCUI console, under System Logs, you will find six logs, as listed below.

  • Syslog.log
  • VMkernel.log
  • Config (sysboot.log)
  • Management Agent (hostd.log)
  • VirtualCenter agent (vpxa.log)
  • VMware ESXi Observation log (vobd.log)
 



Syslog.log

Syslog.log holds logs for Management service initialization, watchdogs, scheduled tasks and DCUI use.


We can view this log at https://<ESXi FQDN or IP>/host/syslog.log 




VMkernel.log

This log contains core VMkernel messages, including device discovery, storage and networking device and driver events, and virtual machine startup.

The VMkernel provides basic operating system services needed to support virtualization: hardware abstraction, hardware drivers, scheduler, memory allocator, filesystem (vmfs), and virtual machine monitor (vmm).

We can view this log at
https://<ESXi FQDN or IP>/host/vmkernel.log


Config (sysboot.log)

This log includes VMkernel startup and module loading.

We can view this log at
https://<ESXi FQDN or IP>/host/sysboot.log



Management Agent (hostd.log) 


This log contains host management service messages, including virtual machine and host tasks and events, communication with the vSphere Client and the vCenter Server vpxa agent, and SDK connections.

The hostd service knows about all the VMs registered on that host, the LUNs/VMFS volumes visible to the host, what the VMs are doing, and so on.


We can view this log at
https://<ESXi FQDN or IP>/host/hostd.log

VirtualCenter agent (vpxa.log)

This log contains vCenter Server vpxa agent messages, including communication with vCenter Server and the host management hostd agent.

vpxa runs only if the host is managed by a vCenter Server; a standalone ESXi host does not use it.

It acts as an intermediary between vpxd on the vCenter Server and hostd on ESXi.

We can view this log at
https://<ESXi FQDN or IP>/host/vpxa.log 





VMware ESXi Observation log (vobd.log)

This log contains VMkernel observation events, in the form vob.component.event. It also contains entries for failed login attempts.

We can view this log at
https://<ESXi FQDN or IP>/host/vobd.log
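
Because all of these logs sit behind the same https://<ESXi FQDN or IP>/host/ path, they can also be pulled with a short Python script for quick inspection. A minimal sketch is below; the host name and credentials are placeholders, the endpoint normally asks for the host credentials via basic authentication, and verify=False should only be used against a lab host with a self-signed certificate.

    import requests

    ESXI_HOST = "esxi01.example.com"            # placeholder FQDN
    LOGS = ["syslog.log", "vmkernel.log", "sysboot.log",
            "hostd.log", "vpxa.log", "vobd.log"]

    for log in LOGS:
        url = "https://{0}/host/{1}".format(ESXI_HOST, log)
        resp = requests.get(url, auth=("root", "password"), verify=False)
        print("----", log, "HTTP", resp.status_code)
        print(resp.text[-500:])                 # print the tail of each log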



   
Steps to view ESXi Logs from DCUI:

  • Go to the DCUI, which is the console, and log on. You will have to press F2 to get to this screen.
  • Enter the credentials and press Enter.


  • Go down to View System Logs.
  • On the right side you will see the different logs you can browse.


Press the digit in front of a log type; for example, press 1 to view the syslog. It will bring up the syslog.

For more help on navigation options in the DCUI screen, refer to the VMware blog.

VMware's Product Licensing Models

 


What is Promiscuous mode in VMware virtual networking?

Promiscuous mode:

This is one of the security policies that you can set in the properties of a virtual switch (standard or distributed) or in the properties of a port group.

           

As we all know, a switch is a point-to-point device: it maintains a MAC table to record information about connected nodes. Because of this, we get higher performance compared to older devices like hubs, which broadcast traffic to every port to deliver it to the destination.

Let's take a scenario to understand this policy. Let's say we have three VMs, of which two are connected to a PROD port group and one is connected to a QA port group, as shown in the diagram below.



Now the requirement is that VM3 should be able to capture all the packets being delivered to any VM in the PROD port group, because you have installed Wireshark in VM3 for packet capturing.

But as we all know, the switch delivers traffic only to the valid destination, as it performs point-to-point delivery. Hence, to fulfill this requirement, we enable promiscuous mode in the properties of the QA port group so that VM3 can capture the traffic being delivered to the VMs connected to the PROD port group, since it now gets visibility of that traffic. All VMs connected to port groups other than QA continue with regular switch behavior and cannot see traffic that is not destined for them.

Be careful when you enable this policy. Promiscuous mode can be enabled either on the whole switch or just on a port group, as we discussed in this example. Configure the policy according to your requirement.

NOTE: By default, this policy is set to Reject on virtual switches (standard or distributed) in vSphere 6.0. All port groups are also set to Reject by default, as they inherit the setting from the switch level.

Refer to the VMware KB article for more information on promiscuous mode.
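
For reference, here is a minimal pyVmomi (Python) sketch that flips the promiscuous mode policy to Accept on a single standard vSwitch port group (the QA port group from the example above). The host object and the port group name are assumptions; a distributed switch exposes the equivalent setting through its own port group configuration.

    from pyVmomi import vim

    # Assumes 'host' is a vim.HostSystem from a pyVmomi session.
    net_sys = host.configManager.networkSystem

    for pg in net_sys.networkInfo.portgroup:
        if pg.spec.name == "QA":                              # hypothetical port group name
            spec = pg.spec
            if spec.policy.security is None:
                spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy()
            spec.policy.security.allowPromiscuous = True      # Accept promiscuous mode on QA only
            net_sys.UpdatePortGroup(pgName="QA", portgrp=spec)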

I hope this will be helpful to you all. Please feel free to comment or share.
