Saturday, 6 February 2016

Which SCSI Controller to use for Virtual machine?

When you create a virtual machine, you can view the storage controller setting as shown below.


You might be tempted to go ahead and change the controller type. However, let's see what each type means before we talk about changing the controller type for a VM.

BusLogic Parallel

This was the first emulated vSCSI controller available on the VMware platform. In Windows 2000, this driver is available by default. The adapter has a queue depth of 1 and therefore offers lower performance.

LSI Logic Parallel

This is another emulated vSCSI controller available on the VMware platform. Most operating systems had a driver that supported a queue depth of 32, and it became a very common choice, if not the default.


LSI Logic SAS

This is an evolution of the parallel driver to support the newer serial (SAS) standard. It grew in popularity when Microsoft required its use for MSCS within Windows 2008 or newer.

VMware Paravirtual (aka PVSCSI)

This vSCSI controller is virtualization aware and was designed to support very high throughput with minimal processing cost, making it the most efficient driver.

Now that we have covered these controllers, let's talk about changing them.

When we create a VM, the creation wizard automatically selects the SCSI controller best suited to the selected operating system. Before you decide to change to any other controller type, make sure the guest operating system supports that type of controller, or unexpected issues may occur.
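Under the hood, the chosen controller type is recorded in the VM's .vmx configuration file. A minimal fragment, assuming a VM with a single SCSI controller (pick the value matching the controller you want):

```
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"
```

Valid values for scsi0.virtualDev are "buslogic", "lsilogic", "lsisas1068", and "pvscsi". Change the controller type through the vSphere Client rather than by hand-editing the file while the VM is running.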

Reference: VMware blog

Do refer to the VMware KB article on how to change the SCSI controller.

What is new with Virtual Hardware Version 11 in vSphere 6?

Virtual Hardware version 11 includes the features listed below.

xHCI controller updated to version 1.0:

USB 3 support for Mac OS X 10.8, Windows Server 2012, and Windows 8 operating systems. This controller is also backward compatible with USB 2.0 and USB 1.0.

Windows VMXNET3 driver support:

Supports large receive offload (LRO), which reduces the CPU cost associated with network packet processing.

Enhanced NUMA feature:

Hot-added memory is now distributed across all NUMA nodes. In earlier versions, memory added to a running VM was allocated to a single NUMA node only.
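The behaviour described above can be sketched as a toy model: instead of placing all hot-added memory on one node, it is spread evenly across the nodes. This is illustrative only, not actual ESXi allocation code, and the function name is made up:

```python
# Toy model of hardware v11 NUMA hot-add: spread the added memory
# evenly across all nodes, handing out any remainder round-robin.
def distribute_hot_add(mb_to_add: int, num_nodes: int) -> list[int]:
    """Return the MB added to each NUMA node."""
    base, rem = divmod(mb_to_add, num_nodes)
    return [base + (1 if i < rem else 0) for i in range(num_nodes)]

print(distribute_hot_add(4096, 2))  # [2048, 2048]
print(distribute_hot_add(1025, 4))  # [257, 256, 256, 256]
```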

Guest authentication:

Support for Windows 2000 and later, Linux kernels 2.4 and later, and Solaris operating systems.

Host Guest File System (HGFS) shared folder driver:

Allows sharing of a folder between the virtual machine and the host system. Use this driver if you plan to use the virtual machine with VMware WorkStation™, VMware Player™, or VMware Fusion®.

Increased vCPU capacity:

Hardware version 11 virtual machines can support up to 128 virtual CPUs. However, make sure your licensing covers this, and that the guest operating system supports that many CPUs.

Increased RAM capacity: 

Hardware version 11 virtual machines support up to 4 TB of RAM. Check your guest operating system's maximum supported memory before allocating this much.

Increased serial port configuration:

Hardware version 11 virtual machines can be configured with up to 32 serial ports.

VMware vSphere: Networking - Distributed Virtual Switch

More details will be added soon.

ESXi Logs in DCUI Console

In the DCUI console, under system logs, you will find the 6 logs listed below.

  • Syslog (syslog.log)
  • VMkernel (vmkernel.log)
  • Config (sysboot.log)
  • Management Agent (hostd.log)
  • VirtualCenter Agent (vpxa.log)
  • VMware ESXi Observation log (vobd)


Syslog (syslog.log)

syslog.log holds logs for management service initialization, watchdogs, scheduled tasks, and DCUI use.

We can view this log at https://<ESXi FQDN or IP>/host/syslog.log 


VMkernel (vmkernel.log)

This log includes core VMkernel logs, including device discovery, storage and networking device and driver events, and virtual machine startup.

The VMkernel provides the basic operating system services needed to support virtualization: hardware abstraction, hardware drivers, scheduler, memory allocator, file system (VMFS), and the virtual machine monitor (VMM).

We can view this log at
https://<ESXi FQDN or IP>/host/vmkernel.log

Config (sysboot.log)

This log includes VMkernel startup and module loading.

We can view this log at
https://<ESXi FQDN or IP>/host/sysboot.log

Management Agent (hostd.log) 

Host management service logs, including virtual machine and host tasks and events, communication with the vSphere Client and the vCenter Server vpxa agent, and SDK connections.

It knows about all the VMs registered on that host, the LUNs/VMFS volumes visible to the host, what the VMs are doing, and so on.

We can view this log at
https://<ESXi FQDN or IP>/host/hostd.log

VirtualCenter agent (vpxa.log)

vCenter Server vpxa agent logs, including communication with vCenter Server and the host management (hostd) agent.

vpxa runs only when the host is connected to a vCenter Server; a standalone ESXi host will not use it.

It acts as an intermediary between vpxd on the vCenter Server and hostd on ESXi.

We can view this log at
https://<ESXi FQDN or IP>/host/vpxa.log 

VMware ESXi Observation log (vobd)

VMkernel observation events, similar to vob.component.event. This log also contains entries for failed login attempts.

We can view this log at
https://<ESXi FQDN or IP>/host/vobd.log
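All six logs above are reachable through the same URL pattern, so a tiny helper can build the links. A sketch (the host name in the example is hypothetical):

```python
# Build the browser URL for an ESXi log file, following the
# https://<ESXi FQDN or IP>/host/<log file> pattern used above.
def esxi_log_url(host: str, log_file: str) -> str:
    return f"https://{host}/host/{log_file}"

for log in ("syslog.log", "vmkernel.log", "sysboot.log",
            "hostd.log", "vpxa.log", "vobd.log"):
    print(esxi_log_url("esx01.lab.local", log))
```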

Steps to view ESXi Logs from DCUI:

  • Go to your DCUI, which is the host console, and log on; press F2 to get the login prompt.
  • Enter the credentials and press Enter.

  • Go down to View System Logs.
  • On the right side you will see the different logs you can browse.

Press the digit shown next to a log type; for example, press 1 to view syslog.log.

For more help on navigation options in the DCUI screen, refer to the VMware blog.

VMware's Product Licensing Models


Friday, 5 February 2016

What is Promiscuous mode in VMware virtual networking?

Promiscuous mode:

This is one of the security policies that you can set in the properties of a virtual switch (standard or distributed) or in the properties of a port group.


As we all know, a switch performs point-to-point delivery: it maintains a MAC address table to record information about the connected nodes. This gives higher performance compared to older devices such as hubs, which broadcast traffic to every port to reach the destination.

Let's take a scenario to understand this policy. Say we have 3 VMs, of which 2 are connected to a PROD port group and one is connected to a QA port group, as shown in the diagram below.

Now the requirement is that VM3 should be able to capture all packets delivered to any VM in the PROD port group, because you have installed Wireshark in VM3 for packet capturing.

But as we know, a switch delivers traffic only to the valid destination, since it performs point-to-point delivery. To fulfill this requirement, we enable promiscuous mode in the properties of the QA port group, so that VM3 gains visibility of the traffic delivered to the VMs in the PROD port group. All VMs connected to port groups other than QA continue to behave as in regular switch communication, with no visibility of traffic not destined for them.

Be careful when you enable this policy. Promiscuous mode can be enabled either on the whole switch or on just a port group, as we did in this example. Configure the policy according to your requirement.
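The delivery behaviour discussed above can be illustrated with a toy model of a vSwitch: a unicast frame reaches only the port whose MAC address matches, plus any port whose port group is in promiscuous mode. This is a sketch for illustration only; the port names and MAC addresses are made up:

```python
# Toy model of unicast delivery on a vSwitch: a frame is seen by the
# port owning the destination MAC, and by any promiscuous-mode port.
def deliver(frame_dst_mac, ports):
    """ports: list of (name, mac, promiscuous) tuples.
    Returns the names of ports that see the frame."""
    return [name for name, mac, promisc in ports
            if mac == frame_dst_mac or promisc]

ports = [
    ("VM1-PROD", "00:50:56:aa:aa:01", False),
    ("VM2-PROD", "00:50:56:aa:aa:02", False),
    ("VM3-QA",   "00:50:56:aa:aa:03", True),   # QA port group: promiscuous on
]

# A frame destined for VM1 is seen by VM1 and by the promiscuous VM3:
print(deliver("00:50:56:aa:aa:01", ports))  # ['VM1-PROD', 'VM3-QA']
```

With promiscuous mode off on VM3's port group (the default Reject), only the true destination would see the frame.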

NOTE: By default, this policy is set to Reject on virtual switches (standard and distributed) in vSphere 6.0. All port groups are also set to Reject by default, as they inherit the setting from the switch level.

Do refer to the VMware KB article for more information on promiscuous mode.

I hope this will be helpful to you all. Please feel free to comment or share.

Thursday, 4 February 2016

Upgrading vCenter Server 5.x to 6.0

vCenter Server 6.0 Deployment Changes:


Upgrading vCenter Server 5.0 to 6.0:

 Upgrading vCenter Server 5.1 or 5.5 to 6.0:

vCenter Server 5.5 to 6.0 Mixed Version Transitional Upgrades:

  NOTE: Also refer to the official upgrade guide from VMware.
