Saturday, 29 December 2018

vSAN 6.x Cluster designs and considerations

vSAN is available to customers in two flavours as listed below.

  • Hybrid: Hybrid vSAN is implemented using a combination of SSDs (as cache) and HDDs (as capacity devices)
Image: VMware

  • All Flash: All-Flash vSAN uses only SSDs (for both the cache and capacity devices).
Image: VMware


Hybrid vSAN was introduced in vSphere 5.5, whereas All-Flash vSAN was introduced in vSphere 6.0.

vSAN requirements:
  • Minimum 3 ESXi hosts (maximum 64 hosts)
  • Minimum 1Gbps network for Hybrid and minimum 10Gbps network for All-Flash
  • RAID controller: Passthrough mode is recommended; RAID 0 mode is also supported.
  • vSAN license
  • Storage device requirements for vSAN:
    • Disks to be used for vSAN must be raw (no existing partitions or formatting).
    • While enabling the vSAN cluster, the first 3 ESXi hosts must have local storage; diskless servers can be added later.
    • Minimum 1 disk group on each of the first 3 ESXi hosts
      • Exactly 1 SSD for caching per disk group (minimum and maximum)
      • Minimum 1 HDD/SSD as capacity device and maximum 7 per disk group
    • Maximum 5 disk groups per ESXi host
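
If PowerCLI is available, some of these prerequisites can be sanity-checked with a short script. The sketch below is purely illustrative and assumes an existing Connect-VIServer session and a hypothetical cluster name; it counts the hosts in the cluster and shows which VMkernel adapters have vSAN traffic enabled.

    # Minimal PowerCLI sketch - cluster name is hypothetical
    $cluster = Get-Cluster -Name "vSAN-Cluster"
    $esxHosts = $cluster | Get-VMHost

    # vSAN needs a minimum of 3 hosts (maximum 64)
    "Host count: $($esxHosts.Count)"

    # Each host should have at least one VMkernel adapter with vSAN traffic enabled
    foreach ($esx in $esxHosts) {
        $vsanVmk = $esx | Get-VMHostNetworkAdapter -VMKernel | Where-Object { $_.VsanTrafficEnabled }
        "{0}: vSAN vmkernel adapters = {1}" -f $esx.Name, ($vsanVmk.Name -join ", ")
    }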

A vSAN cluster can be designed in four different ways. Below are the possible types of vSAN clusters.
  • Standard Cluster
  • Two Node cluster
  • Stretched Cluster
  • Two Node Stretched Cluster
Standard Cluster:


A standard vSAN cluster consists of a minimum of three physical nodes and can be scaled up to 64 nodes. All the hosts in a standard cluster are commonly located at a single site and are well-connected on the same Layer-2 network. Though a vSAN cluster can contain up to 64 hosts, a maximum of 32 fault domains can exist in a standard vSAN cluster.
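
As a hedged illustration only (assuming PowerCLI's vSAN cmdlets are available; the cluster name below is hypothetical), the fault domains configured in a cluster can be listed like this:

    # Sketch only: list the fault domains defined in a vSAN cluster
    Get-VsanFaultDomain -Cluster (Get-Cluster -Name "vSAN-Cluster") | Select-Object Name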




Two Node Cluster:


Though we refer to it as a two-node cluster, this design actually requires 3 hosts to form a valid cluster. 


Two physical ESXi hosts are placed in the same location and are used for hosting workloads. These hosts are usually connected to the same network switch or are directly connected. While 10Gbps connections may be directly connected, 1Gbps connections will require a crossover cable.


A third ESXi host is required in a 2-node configuration to avoid “split-brain” issues when network connectivity is lost between the two physical nodes; it is used as the witness host. The witness host is generally placed in a different location than the other two nodes. I will explain the requirements for the witness host later in this article.



Stretched Cluster:

A vSAN Stretched Cluster provides resiliency against the loss of an entire site. This design uses 3 sites to achieve site resiliency. Out of the 3 sites, two are designated as data sites (one configured as the preferred site and the other as the secondary site), and hosts are distributed evenly across these two sites. The third site (also called the witness site) is used for placement of the witness host and only hosts witness components. In a stretched cluster design there are only 3 fault domains, since it is a three-site deployment.

The two data sites are well-connected (Layer-2 stretched network) from a network perspective, with a round trip time (RTT) latency of no more than 5 ms. Connectivity between the data sites and the witness site can be Layer 3, with the latency requirements mentioned below.
  • If there are more than 10 hosts per data site, latency must be 100 ms or less.
  • If there are 10 or fewer hosts per data site, latency must be 200 ms or less.
A vSAN Witness Host is placed at a third site (Witness site) to avoid “split-brain” issues if connectivity is lost between the two data sites.

A vSAN Stretched Cluster may have a maximum of 31 hosts: up to 15 hosts per data site (30 hosts across the two data sites) plus 1 witness host at the witness site. In cases where more hosts are needed across sites, additional vSAN Stretched Clusters may be used.



Two Node Stretched Cluster:

A two-node stretched cluster is effectively the same as a two-node cluster, but the two vSAN data nodes are in geographically separate locations. 


The configuration steps for a two-node stretched cluster are the same as those for a stretched cluster. You must designate one site as the preferred site, and the other site becomes the secondary (non-preferred) site. The only key difference is that each site contains only one host.


Witness Host considerations:

The witness host stores metadata, commonly called “witness components”, for vSAN objects. Virtual machine data such as virtual disks and virtual machine configuration files are not stored on the vSAN Witness Host. The purpose of the vSAN Witness Host is to serve as a “tie-breaker” in cases where the data sites are network isolated or disconnected.

A vSAN Witness Host may be a physical vSphere host, or a VMware provided virtual appliance, which can be easily deployed from an OVA. When using a physical host as a vSAN Witness Host, additional licensing is required, and the host must meet some general configuration requirements. 


When using a vSAN Witness Appliance as the vSAN Witness Host, it can easily reside on existing vSphere infrastructure, with no additional need for licensing.

When using 2 Node clusters for deployments such as remote office/branch office (ROBO) locations, it is common practice for the vSAN Witness Appliances to reside at a primary datacenter. It is possible to run the witness host at the ROBO site itself, but that would require additional infrastructure at the ROBO site.

vSAN Witness Hosts providing quorum for Stretched Clusters may only be located in a tertiary site that is independent of the Preferred and Secondary Stretched Cluster sites.

One vSAN Witness Host is required for each 2 Node or Stretched Cluster vSAN deployment.

Using the VMware provided vSAN Witness Appliance is generally recommended as a better option for the vSAN Witness Host than using a physical vSphere host. The utilization of a vSAN Witness Appliance is relatively low during normal operations. It is not until a failover process occurs that a vSAN Witness Host will have any significant utilization. 

When using a vSAN Witness Appliance, it is patched in the same fashion as any other ESXi host. It is the last host updated when performing 2 Node and Stretched Cluster upgrades and should not be backed up. Should it become corrupted or deleted, it should be redeployed. vSAN 6.6 introduced a quick and easy wizard to change the associated vSAN Witness Host.

Friday, 28 December 2018

vSphere 6.7 and vSAN 6.7 native monitoring using vROPS within vCenter

vSphere 6.7 and vSAN 6.7 now include vRealize Operations within vCenter, which provides monitoring capabilities natively in the HTML5 “Clarity” based vSphere Client. This new feature allows vSphere customers to see a subset of the intelligence offered by vRealize Operations (vROPS) through a single vCenter user interface. Lightweight, purpose-built dashboards are included for both vSphere and vSAN. It is easy to deploy and provides multi-cluster visibility.

The integrated nature of “vRealize Operations within vCenter” also allows the user to easily launch the full vROPS user interface to see the full collection of vRealize Operations dashboards.

Setting up vROPS monitoring within vCenter:
  • Login to vSphere client and click on vRealize Operations from Menu drop down.
  • On the vROPS screen, a default message is displayed since vROPS is not yet configured.



  • Scroll down to the bottom if required. You are presented with two options: Install, or Configure Existing Instance. 
  • Click Configure Existing Instance, assuming that you already have vROPS installed.
  • Provide the vROPS instance details (FQDN and credentials).
  • Click Test Connection to verify that the details are correct. Click Next to continue once the test connection is validated successfully.
  • Provide the vCenter Server details and click Test Connection as shown below. Click Next once the test connection is validated successfully.
  • On the Summary page, click Configure to proceed with the vROPS configuration.
  • Wait for the vROPS configuration to complete.
  • Once configured, you can browse the different dashboards to view monitoring details for vSphere and vSAN, as shown in the example screenshots of a few dashboards below.





Tuesday, 25 December 2018

What is TRIM/UNMAP in vSAN 6.7?

Before we talk about vSAN TRIM/UNMAP, let's do a quick recap of thin and thick provisioning.
In the example below, VM1 is thick provisioned and VM2 is thin provisioned. VM1 was allocated 40 GB of disk space at the time of creation. Since it is thick provisioned, the entire 40 GB is occupied by VM1 on the underlying datastore even if its current requirement is less. This approach can lead to underutilisation of datastore capacity due to overallocation.
The thin-provisioned VM2 disk, on the other hand, occupies only 20 GB of storage (its assumed current requirement) even though VM2 was also allocated 40 GB at the time of creation. As the disk requires more space, it can grow into its full 40 GB provisioned space on demand.
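
To see this difference from the vSphere side, PowerCLI exposes both the allocated and the actually consumed space per VM; a minimal sketch (the VM names are hypothetical):

    # Compare allocated (provisioned) space with space actually used on the datastore
    Get-VM -Name "VM1", "VM2" |
        Select-Object Name, ProvisionedSpaceGB, UsedSpaceGB

For a thick-provisioned VM the two values stay close together, whereas a thin-provisioned VM typically shows UsedSpaceGB well below ProvisionedSpaceGB.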


vSAN uses thin provisioning by default for VMs on the vSAN datastore, as the SPBM policy rule Object Space Reservation is set to its default of 0%.

One challenge with thin provisioning is that VMDKs, once grown, will not shrink when files within the guest OS are deleted. This problem is amplified by the fact that many file systems will always direct new writes into free space, so even a steady set of writes to the same small file will eventually consume significantly more space at the VMDK level.

Previous solutions to this problem required manual intervention, such as a Storage vMotion to external storage or powering off the virtual machine. 

To solve this problem, automated TRIM/UNMAP space reclamation was introduced in vSAN 6.7 U1.

What is TRIM/UNMAP?
  • For efficient usage of storage space, modern guest OS filesystems have long had the ability to reclaim no-longer-used space using what are known as TRIM and UNMAP commands for the ATA and SCSI protocols respectively. 
  • vSAN 6.7 U1 now has full awareness of TRIM/UNMAP commands sent from the guest OS and can reclaim the previously allocated storage as free space. 
  • This is an opportunistic space efficiency feature that can deliver much better storage capacity utilization in vSAN environments.
  • TRIM/UNMAP support is disabled by default in vSAN and is enabled using the RVC console.
How to Enable TRIM/UNMAP:

  • Log in to RVC as shown in the image below.

  • Navigate to the computers directory under the datacenter where the vSAN cluster is located.
  • Run the below command to enable TRIM/UNMAP.
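
The exact command was shown only in the original screenshot; as a hedged reconstruction based on vSAN 6.7 U1's RVC tooling, enabling it typically looks like the following (the datacenter and cluster names are placeholders):

    > cd /localhost/<Datacenter>/computers
    > vsan.unmap_support <vSAN-Cluster> -e

Remember that, as covered in the VM requirements below, virtual machines must be power cycled after the cluster-level setting is enabled.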

Benefits of TRIM/UNMAP

  • This process helps to free up storage space 
  • Blocks that have been reclaimed do not need to be rebalanced, or re-mirrored in the event of a device failure. 
  • Read cache can be freed up in the DRAM client cache, as well as in the hybrid vSAN SSD read cache, for use by other blocks. If blocks are removed from the write buffer, this reduces the number of blocks that will be copied to the capacity tier.
VM requirements:
  • A minimum of virtual machine hardware version 11 for Windows
  • A minimum of virtual machine hardware version 13 for Linux.
  • The disk.scsiUnmapAllowed flag must not be set to false; the default is an implied true. This setting can be used as a "kill switch" at the virtual machine level if you wish to disable this behavior for a specific VM without using in-guest configuration. VMX changes require a reboot to take effect (see the PowerCLI sketch after this list). 
  • The guest operating system must be able to identify the virtual disk as thin.
  • After enabling at a cluster level, virtual machines must be power cycled.
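
As a hedged PowerCLI sketch (the VM name is hypothetical, and the advanced setting only shows up if it has been explicitly defined), the disk.scsiUnmapAllowed flag can be inspected and adjusted like this:

    # Check whether disk.scsiUnmapAllowed has been explicitly set on a VM
    $vm = Get-VM -Name "App-VM01"        # hypothetical VM name
    Get-AdvancedSetting -Entity $vm -Name "disk.scsiUnmapAllowed"

    # If it was set to false and you want the default behavior back, flip it to true
    # (a VM power cycle is required for VMX changes to take effect)
    Get-AdvancedSetting -Entity $vm -Name "disk.scsiUnmapAllowed" |
        Set-AdvancedSetting -Value "true" -Confirm:$false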
Microsoft Specific Guidance:

Windows Server 2012 and newer support automated space reclamation. This behavior is enabled by default.

  • To check this behavior, the following PowerShell cmdlet can be used.
    • Get-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\FileSystem" -Name DisableDeleteNotification
  • To enable automatic space reclamation (a value of 0 means delete notifications/TRIM are enabled):
    • Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\FileSystem" -Name DisableDeleteNotification -Value 0
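
Separately from the automatic behavior above, Windows can also be asked to send TRIM/UNMAP for already-freed blocks on demand, which is handy right after enabling the feature; for example:

    # Manually trigger TRIM/UNMAP for already-freed blocks on drive C: from inside the guest
    Optimize-Volume -DriveLetter C -ReTrim -Verbose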

Saturday, 22 December 2018

Cache concepts and cache techniques

The concept of cache memory has been around for a long time. Cache memory is generally used to fill the performance gap between the computing architecture and permanent storage, since permanent bulk storage cannot keep up with the performance requirements of compute/application processing.

A cache can be a hardware or software component that stores data so that future requests for that data can be served from the cache instead of re-computing it or fetching it from permanent storage.

During read/write operations, if the requested data is found in the cache, it is called a cache hit. If the requested data is not found in the cache and has to be recomputed or fetched from permanent storage, it is called a cache miss.

As mentioned earlier, a cache can be a hardware or software component and can be used at various levels in the computing architecture. Below are some examples of hardware and software caches based on their usage at various levels.

Hardware Cache:

  • CPU Cache
  • GPU Cache
  • TLB (Translation Lookaside Buffer)
Software Cache:
  • Disk Cache
  • Web Cache
Caching Benefits:
  • Reduced latency of operations
  • Higher performance
  • Reduced IOPS to storage which results in lower SAN traffic and contention
  • Cost-effective use of high $/GB storage
Cache Techniques:

There are three main caching techniques that can be deployed. Each method comes with its pros and cons.

  • Write-through
  • Write-around
  • Write-back
Write Through:
  • Write-through cache directs write I/O to cache and through to underlying permanent storage before confirming I/O completion to the host. This ensures data updates are safely stored.
  • Disadvantage with this technique is that the I/O experiences latency based on writing to permanent storage. 
  • Write-through cache is good for applications that write and then re-read data frequently, as the data is stored in cache and read latency stays low.
Image Credit: CodeAhoy

Write Around:
  • Write-around cache is a similar technique to write-through cache, but write I/O is written directly to permanent storage, bypassing the cache. 
  • This can reduce the cache being flooded with write I/O that will not subsequently be re-read. 
  • Disadvantage is that a read request for recently written data will create a “cache miss” and have to be read from slower bulk storage and experience higher latency.
Write Back:
  • Write-back cache is where write I/O is directed to cache and completion is immediately confirmed to the host. 
  • This results in low latency and high throughput for write-intensive applications, but there is a data availability risk because the only copy of the written data is in cache. 
  • Write-back cache is the best performer for mixed workloads as both read and write I/O have similar response time levels.
Image Credit: CodeAhoy
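
To make the write-through versus write-back distinction concrete, here is a minimal, purely illustrative PowerShell sketch in which hashtables stand in for the cache and the permanent store (this is a toy model, not how vSAN, vFRC, or any real controller is implemented):

    # Toy model: hashtables stand in for the cache and permanent storage
    $cache = @{}
    $permanent = @{}
    $dirty = @{}

    function Write-Through($block, $data) {
        # Write lands in cache AND permanent storage before completion is acknowledged
        $cache[$block] = $data
        $permanent[$block] = $data      # slow step - storage latency is paid here
    }

    function Write-Back($block, $data) {
        # Write is acknowledged as soon as it lands in cache; destaging happens later
        $cache[$block] = $data
        $dirty[$block] = $true          # permanent storage is now stale for this block
    }

    function Invoke-Destage {
        # Flush dirty blocks from cache to permanent storage in the background
        foreach ($block in @($dirty.Keys)) { $permanent[$block] = $cache[$block] }
        $dirty.Clear()
    }

Write-through pays the storage latency on every write but never risks losing acknowledged data; write-back acknowledges immediately and relies on a later destage, which is why the cached copy must be protected (for example by mirroring the cache).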

Relating the above to VMware virtualization, vSphere has features such as vFRC and vSAN that use caching mechanisms. vFRC supports write-through (read) caching, whereas VMware vSAN uses a write-back caching mechanism.

Friday, 7 December 2018

Enabling ThinPrint logging in Horizon View 7.x

Enabling ThinPrint logging helps you to troubleshoot issues with ThinPrint. ThinPrint logging can be enabled on the VDI desktop as well as on the Horizon View client system. Implement this procedure only if ThinPrint is being used in the environment.

Enabling TPAutoConnect and ThinPrint logging on the VMware View Desktop machine:
  • Start regedit.exe from command prompt or Windows Run.
  • Navigate to the HKLM\SOFTWARE\ThinPrint\TPAutoConnect key.

  • Create a new String value and name it DebugFile.


  • Modify DebugFile and set its Value data to C:\\tpautoconnect.log
    • Note: Ensure that you include the two backslashes so that the log file is created at the root of the C:\ drive.


  • Create a new DWORD value and name it DebugLevel.
  • Modify DebugLevel and set its Value data to 000000ff.
  • Navigate to the HKLM\SOFTWARE\ThinPrint\TPVMMon key.
  • Create a new String value and name it DebugFile. Modify DebugFile and set its Value data to C:\\thinprint.log.  
  • Create a new DWORD value and name it DebugLevel. Modify DebugLevel and set its Value data to 000000ff.
  • Create a new DWORD value and name it DebugMode. Modify DebugMode and set its Value data to 00000003.

  • Restart the TP AutoConnection Service for the changes to take effect.

  • Verify that log is being created.
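
For convenience, the same desktop-side registry changes can be scripted. A hedged PowerShell sketch is below (run it elevated, and only if the ThinPrint agent is installed so the registry keys exist; the service is restarted by the display name mentioned above):

    # Sketch: create the ThinPrint debug logging values on the View desktop
    $tpAutoConnect = "HKLM:\SOFTWARE\ThinPrint\TPAutoConnect"
    $tpVMMon = "HKLM:\SOFTWARE\ThinPrint\TPVMMon"

    # Note the two backslashes in the values, as required for the log file path
    New-ItemProperty -Path $tpAutoConnect -Name DebugFile -PropertyType String -Value 'C:\\tpautoconnect.log' -Force
    New-ItemProperty -Path $tpAutoConnect -Name DebugLevel -PropertyType DWord -Value 0xff -Force

    New-ItemProperty -Path $tpVMMon -Name DebugFile -PropertyType String -Value 'C:\\thinprint.log' -Force
    New-ItemProperty -Path $tpVMMon -Name DebugLevel -PropertyType DWord -Value 0xff -Force
    New-ItemProperty -Path $tpVMMon -Name DebugMode -PropertyType DWord -Value 3 -Force

    # Restart the print service so the new settings take effect
    Restart-Service -DisplayName "TP AutoConnection Service"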


Similar to the above steps, ThinPrint logging can be enabled on the Horizon View client machine as well. Follow the instructions below to do the same.
  • Navigate to the HKLM\SOFTWARE\ThinPrint\Client key in Windows registry editor.
  • Create a new String value and name it DebugFile.
  • Modify DebugFile and set its Value data to C:\\thinprintclient.log.
  • Create a new DWORD value and name it DebugLevel.
  • Modify DebugLevel and set its Value data to 000000ff.
  • Create a new DWORD value and name it DebugMode.
  • Modify DebugMode and set its Value data to 00000003.
  • Navigate to the HKLM\SOFTWARE\ThinPrint\TPViewture key.
  • Create a new String value and name it DebugFile.
  • Modify DebugFile and set its Value data to C:\\tpviewture.log.
  • Create a new DWORD value and name it DebugLevel.
  • Modify DebugLevel and set its Value data to 000000ff.
  • Reboot the View Client system for the changes to take effect.

Wednesday, 5 December 2018

How to connect to the AD LDS database instance on a View Connection Server

The below steps can be used to connect to the AD LDS database instance on a Horizon View Connection Server.

  • Launch ADSIEdit tool.



  • In the ADSIEdit console, right-click ADSIEdit and click Connect to...



  • Enter the details as shown below without any changes and click OK. Note: Do not change any values (e.g. the domain name); you do not need to type your Active Directory domain details here.



  • Now you should have your AD LDS database details as shown below.



Data recovery password in Horizon View 7.x

The data recovery password is configured during the initial setup of the Horizon View Connection Server, as shown in the image below. The data recovery password is required during restore operations of the AD LDS instance using the vdmimport command. 

It is a good idea to set the data recovery password along with a password reminder, as the reminder can help in case you lose/forget the password. 


The data recovery password can also be changed later as required, from the View global settings page or the Backup tab in the Connection Server properties, as shown below.





By default, the AD LDS backup runs automatically every day at midnight. We can change the periodicity of the backup, the retention, the offset, and the default path. 


One thing to notice here: there is no option to set a clock time for the backup schedule. We can only specify a periodicity such as every hour, every 6 hours, and so on, as shown below.


A manual backup option is also available in the View administration portal and can be executed as required from the admin portal > View Configuration > Servers > Backup Now button, as shown below.



When the backup is executed, it backs up the AD LDS instance as well as the Composer database to the default path on the Connection Server.



The AD LDS backup file (*.LDF) is encrypted by default and hence cannot be used directly for restore operations. You need to decrypt the backup file first; the decrypted file is then used to perform the restore operation. 


In order to generate a decrypted LDF file, the vdmimport command is used with the below syntax.

vdmimport -p "Your data recovery password" -f "Backup file path" > "new file name".ldf

In case you do not remember the data recovery password and try a wrong password, an error will be generated upon executing the command, as shown below.


Assuming that we do not remember the data recovery password, simply run the vdmimport command without the -p parameter. Once the command is executed, it will prompt you for the data recovery password; if you had configured the reminder option, it will show the reminder string above the password prompt, as shown in the image.


Enter the password to proceed with the decrypt operation. Once you have the decrypted LDF file, use it to perform the restore of the AD LDS instance.
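
As a worked example (the file names are hypothetical; the decrypt call follows the syntax shown above, and the final import assumes the standard vdmimport restore usage), run from a command prompt on the Connection Server:

    rem Decrypt the encrypted backup using the data recovery password
    vdmimport -p "MyRecoveryPassword" -f "C:\Backups\Backup-2018-12-05.LDF" > C:\Backups\decrypted.ldf

    rem Import (restore) the decrypted file into the AD LDS instance
    vdmimport -f C:\Backups\decrypted.ldf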



Cheers!!!! Hope this helps.
