Saturday, 29 December 2018

vSAN 6.x Cluster designs and considerations

vSAN is available to customers in two flavours as listed below.

  • Hybrid: Hybrid vSAN is implemented using a combination of SSDs (as the cache tier) and HDDs (as capacity devices).
Image: VMware

  • All Flash: All-Flash vSAN uses only SSDs, for both the cache tier and the capacity devices.
Image: VMware


Hybrid vSAN was introduced in vSphere 5.5, whereas All-Flash vSAN was introduced in vSphere 6.0.

vSAN requirements:
  • Min 3 ESXi Hosts (max 64 hosts)
  • Min 1 Gbps network for Hybrid and min 10 Gbps network for All-Flash
  • RAID controller: passthrough mode is recommended; RAID 0 mode is also supported.
  • vSAN License
  • Storage device requirements for vSAN:
    • Disks to be used for vSAN must be raw (no existing partitions or file systems).
    • When enabling a vSAN cluster, the first 3 ESXi hosts must contribute local storage; diskless hosts can be added later.
    • Min 1 disk group on each of the first 3 ESXi hosts
      • Exactly 1 SSD (min and max) for caching per disk group
      • Min 1 HDD/SSD as capacity device and max 7 per disk group
    • Max 5 disk groups per ESXi host (see the PowerCLI sketch below)
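
To sanity-check an existing cluster against these limits, PowerCLI can be used. The sketch below is only a minimal example, assuming the VMware PowerCLI vSAN cmdlets (VMware.VimAutomation.Storage module) are available; the vCenter and cluster names are placeholders.

  # Connect to vCenter (placeholder name)
  Connect-VIServer -Server "vcenter.lab.local"

  # Count disk groups per host (max 5 per host) and disks per group (1 cache + up to 7 capacity)
  foreach ($vmhost in Get-Cluster -Name "vSAN-Cluster" | Get-VMHost) {
      $diskGroups = @(Get-VsanDiskGroup -VMHost $vmhost)
      "{0}: {1} disk group(s)" -f $vmhost.Name, $diskGroups.Count
      foreach ($dg in $diskGroups) {
          $disks = @(Get-VsanDisk -VsanDiskGroup $dg)
          "  Disks in this group: {0}" -f $disks.Count
      }
  }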

There are four different ways in which a vSAN cluster can be designed. Below are the possible vSAN cluster types.
  • Standard Cluster
  • Two Node cluster
  • Stretched Cluster
  • Two Node Stretched Cluster
Standard Cluster:


A standard vSAN cluster consists of a minimum of three physical nodes and can be scaled up to 64 nodes. All the hosts in a standard cluster are typically located at a single site and are well connected on the same Layer 2 network. Although a vSAN cluster can contain up to 64 hosts, a maximum of 32 fault domains can exist in a standard vSAN cluster.
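
If you group hosts into explicit fault domains (for example, one per rack), this can also be done with PowerCLI. The snippet below is only a sketch, assuming the New-VsanFaultDomain and Get-VsanFaultDomain cmdlets from recent PowerCLI releases; the cluster, rack and host names are placeholders.

  # Create rack-based fault domains (placeholder names)
  New-VsanFaultDomain -Name "Rack-A" -VMHost (Get-VMHost -Name "esx01.lab.local","esx02.lab.local")
  New-VsanFaultDomain -Name "Rack-B" -VMHost (Get-VMHost -Name "esx03.lab.local","esx04.lab.local")

  # List the resulting fault domains (up to 32 in a standard cluster)
  Get-VsanFaultDomain -Cluster (Get-Cluster -Name "vSAN-Cluster")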




Two Node Cluster:


Though we refer to it as a two-node cluster, there are actually three hosts in this design to form a valid cluster.


Two physical ESXi hosts are placed in the same location and are used for hosting workloads. These hosts are usually connected to the same network switch or are directly connected to each other. 10 Gbps connections may be directly connected, while 1 Gbps direct connections require a crossover cable.


A third ESXi host is required in a 2-node configuration to avoid “split-brain” issues when network connectivity is lost between the two physical nodes; it is used as the witness host. The witness host is generally placed in a different location from the other two nodes. I will explain the requirements for the witness host later in this article.



Stretched Cluster:

A vSAN Stretched Cluster provides resiliency against the loss of an entire site. This design uses three sites to achieve site resiliency. Two of the sites are designated as data sites (one configured as the preferred site and the other as the secondary site), and hosts are distributed evenly across them. The third site (also called the witness site) is used for placement of the witness host and hosts only witness components. A stretched cluster design has exactly three fault domains, since it is a three-site deployment.

The two data sites are well connected from a network perspective (a stretched Layer 2 network) with a round-trip time (RTT) latency of no more than 5 ms. Connectivity between the data sites and the witness site can be Layer 3, with the latency requirements listed below.
  • If there are more than 10 hosts per data site, RTT latency to the witness site must be 100 ms or less.
  • If there are 10 or fewer hosts per data site, RTT latency to the witness site must be 200 ms or less.
A vSAN Witness Host is placed at a third site (Witness site) to avoid “split-brain” issues if connectivity is lost between the two data sites.

A vSAN Stretched Cluster may have a maximum of 31 hosts: at most 15 hosts per data site (30 hosts across the two data sites) plus 1 witness host at the witness site. Where more hosts are needed across sites, additional vSAN Stretched Clusters may be used.



Two node stretched cluster:

A two-node stretched cluster is effectively the same as a two-node cluster but the two vSAN nodes are geographically disparate. 


The configuration steps for a two-node stretched cluster are the same as those for a stretched cluster: you must designate one site as the preferred site, and the other site becomes the secondary (non-preferred) site. The only key difference is that each site contains only one host.


Witness Host considerations:

The witness host stores metadata, commonly called “witness components”, for vSAN objects. Virtual machine data such as virtual disks and virtual machine configuration files are not stored on the vSAN Witness Host. The purpose of the vSAN Witness Host is to serve as a “tie-breaker” when the data sites become network isolated or disconnected.

A vSAN Witness Host may be a physical vSphere host or a VMware-provided virtual appliance, which can easily be deployed from an OVA. When using a physical host as a vSAN Witness Host, additional licensing is required and the host must meet some general configuration requirements.


When using a vSAN Witness Appliance as the vSAN Witness Host, it can easily reside on existing vSphere infrastructure, with no additional need for licensing.

When using 2-node clusters for deployments such as remote office/branch office (ROBO) locations, it is common practice for the vSAN Witness Appliances to reside at a primary datacenter. It is possible to run the witness host at the same ROBO site, but doing so would require additional infrastructure at that site.

vSAN Witness Hosts providing quorum for Stretched Clusters may only be located in a tertiary site that is independent of the Preferred and Secondary Stretched Cluster sites.

One vSAN Witness Host is required for each 2 Node or Stretched Cluster vSAN deployment.

Using the VMware-provided vSAN Witness Appliance is generally recommended over using a physical vSphere host as the vSAN Witness Host. Utilization of a vSAN Witness Appliance is relatively low during normal operations; it is only during a failover that the vSAN Witness Host sees any significant utilization.

When using a vSAN Witness Appliance, it is patched in the same fashion as any other ESXi host. It is the last host updated when performing 2 Node and Stretched Cluster upgrades and should not be backed up. Should it become corrupted or deleted, it should be redeployed. vSAN 6.6 introduced a quick and easy wizard to change the associated vSAN Witness Host.

Friday, 28 December 2018

vSphere 6.7 and vSAN 6.7 native monitoring using vROPS within vCenter

vSphere 6.7 and vSAN 6.7 now include vRealize Operations within vCenter, providing monitoring capabilities natively in the HTML5 “Clarity”-based vSphere Client. This new feature allows vSphere customers to see a subset of the intelligence offered by vRealize Operations (vROPS) through a single vCenter user interface. Lightweight, purpose-built dashboards are included for both vSphere and vSAN. It is easy to deploy and provides multi-cluster visibility.

The integrated nature of “vRealize Operations within vCenter” also allows the user to easily launch the full vROPS user interface to see the full collection of vRealize Operations dashboards.

Setting up vROPS monitoring within vCenter:
  • Log in to the vSphere Client and click vRealize Operations in the Menu drop-down.
  • On the vRealize Operations screen, a default message is displayed when no vROPS instance is configured yet.



  • Scroll down to the bottom if required. You are presented with two options: Install or Configure Existing Instance.
  • Click Configure Existing Instance, assuming you already have vROPS installed.
  • Provide the vROPS instance details: FQDN and credentials.
  • Click Test Connection to verify that the details are correct, then click Next once the connection is validated successfully.
  • Provide the vCenter Server details and click Test Connection as shown below. Click Next once the connection is validated successfully.
  • On the summary page, click Configure to proceed with the vROPS configuration.
  • Wait for the vROPS configuration to complete.
  • Once configured, you can browse the different dashboards to view monitoring details for vSphere and vSAN, as shown in the example screenshots of a few dashboards below.





Tuesday, 25 December 2018

What is TRIM/UNMAP in vSAN 6.7?

Before we talk about vSAN TRIM/UNMAP, let's do a quick recap of thin and thick provisioning.
In the example below, VM1 is thick provisioned and VM2 is thin provisioned. VM1 was allocated 40 GB of disk space at creation time. Since it is thick provisioned, the entire 40 GB is consumed on the underlying datastore even if the current requirement is smaller. This approach can lead to underutilisation of datastore capacity due to overallocation.
The thin-provisioned VM2 disk, by contrast, occupies only 20 GB of storage, which is its (assumed) current data requirement, even though VM2 was also allocated 40 GB at creation time. As the disk requires more space, it can grow into its full 40 GB of provisioned space on demand.
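
To illustrate, the difference is simply the storage format chosen when the disk is created. Below is a small PowerCLI sketch; the VM names VM1 and VM2 are placeholders for existing virtual machines.

  # Thick (eager-zeroed) 40 GB disk: all 40 GB is claimed on the datastore up front
  New-HardDisk -VM (Get-VM -Name "VM1") -CapacityGB 40 -StorageFormat EagerZeroedThick

  # Thin 40 GB disk: space is consumed only as the guest writes data
  New-HardDisk -VM (Get-VM -Name "VM2") -CapacityGB 40 -StorageFormat Thin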


By default, vSAN thin provisions the VMs on a vSAN datastore, because the SPBM policy rule Object Space Reservation is set to its default of 0%.

One challenge with thin provisioning is that VMDKs, once grown, do not shrink when files inside the guest OS are deleted. This problem is amplified by the fact that many file systems always direct new writes into free space; a steady stream of writes to even a single small file can therefore end up consuming significantly more space at the VMDK level.

Previous solutions to this problem required manual intervention, such as a Storage vMotion to external storage or powering off the virtual machine.

To solve this problem, automated TRIM/UNMAP space reclamation was introduced in vSAN 6.7 U1.

What is TRIM/UNMAP?
  • For efficient usage of storage space, modern guest OS file systems have long been able to reclaim no-longer-used space using the TRIM and UNMAP commands of the ATA and SCSI protocols respectively.
  • vSAN 6.7 U1 now has full awareness of TRIM/UNMAP commands sent from the guest OS and can reclaim the previously allocated storage as free space.
  • This is an opportunistic space efficiency feature that can deliver much better storage capacity utilization in vSAN environments.
  • TRIM/UNMAP support is disabled by default in vSAN and is enabled using the Ruby vSphere Console (RVC).
How to Enable TRIM/UNMAP:

  • Log in to RVC as shown in the image below.

  • Navigate to the computers directory under the datacenter where the vSAN cluster is located.
  • Run the command below to enable TRIM/UNMAP.
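
To the best of my knowledge, the relevant RVC command in vSAN 6.7 U1 is vsan.unmap_support; verify the exact options with vsan.unmap_support --help in your RVC build. The cluster name below is a placeholder.

  # Enable TRIM/UNMAP on the vSAN cluster
  vsan.unmap_support <vSAN-Cluster> -e

  # Disable it again if required
  vsan.unmap_support <vSAN-Cluster> -d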

Benefits of TRIM/UNMAP

  • This process helps free up storage space.
  • Blocks that have been reclaimed do not need to be rebalanced or re-mirrored in the event of a device failure.
  • Read cache can be freed up in the DRAM client cache, as well as in the hybrid vSAN SSD read cache, for use by other blocks. If reclaimed blocks are removed from the write buffer, this reduces the number of blocks that must be destaged to the capacity tier.
VM requirements:
  • A minimum of virtual machine hardware version 11 for Windows.
  • A minimum of virtual machine hardware version 13 for Linux.
  • The disk.scsiUnmapAllowed flag must not be set to false; the default is an implied true. This setting can be used as a "kill switch" at the virtual machine level if you want to disable this behaviour for a specific VM without using in-guest configuration. VMX changes require a reboot to take effect (see the PowerCLI sketch after this list).
  • The guest operating system must be able to identify the virtual disk as thin.
  • After enabling TRIM/UNMAP at the cluster level, virtual machines must be power cycled.
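
The prerequisites above can be checked from PowerCLI, as in the sketch below. This is only an example: the VM name is a placeholder, and Get-AdvancedSetting/New-AdvancedSetting are used to read and override the disk.scsiUnmapAllowed VMX option.

  $vm = Get-VM -Name "VM1"

  # Check the virtual hardware version (v11+ for Windows, v13+ for Linux)
  $vm | Select-Object Name, Version

  # Check whether disk.scsiUnmapAllowed has been overridden (no result means the implied default of true)
  Get-AdvancedSetting -Entity $vm -Name "disk.scsiUnmapAllowed"

  # Optional per-VM "kill switch": explicitly disable UNMAP for this VM (takes effect after a reboot)
  New-AdvancedSetting -Entity $vm -Name "disk.scsiUnmapAllowed" -Value "FALSE" -Confirm:$false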
Microsoft Specific Guidance:

Windows Server 2012 and newer support automated space reclamation. This behavior is enabled by default.

  • To check this behavior, the following PowerShell cmdlet can be used (an alternative in-guest check using fsutil is shown after this list):
    • Get-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\FileSystem" -Name DisableDeleteNotification
  • To enable automatic space reclamation (a value of 0 means delete notifications, i.e. TRIM/UNMAP, are enabled):
    • Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\FileSystem" -Name DisableDeleteNotification -Value 0
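
Alternatively, the same behavior can be verified from inside the guest with the built-in fsutil utility (DisableDeleteNotify = 0 means TRIM/UNMAP is enabled):

  # Query the current state
  fsutil behavior query DisableDeleteNotify

  # Re-enable delete notifications if they were turned off
  fsutil behavior set DisableDeleteNotify 0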
