
Thursday, 7 December 2017

vCenter Server 6.5 High Availability (vCHA)

VMware vCenter Server sits at the heart of vSphere and provides the services to centrally manage virtual infrastructure components such as ESXi hosts, virtual machines, storage, and networking resources. vCenter Server is therefore an important element in ensuring the business continuity of a virtual infrastructure: it must be protected from a range of hardware and software failures in the environment and must recover transparently from such failures. With vSphere 6.5, VMware introduced a high availability solution for vCenter Server, known as vCenter Server High Availability (vCHA). vCHA is exclusively available for the vCenter Server Appliance (VCSA); it is not available for Windows-based vCenter Server deployments.

From an architecture perspective, vCenter HA supports both embedded and external Platform Services Controllers. 
  • An embedded Platform Services Controller instance can be used when there are no other vCenter Server or Platform Services Controller instances within the single sign-on domain. 
  • An external Platform Services Controller instance is required when there are multiple vCenter Server instances in an Enhanced Linked Mode configuration. 
When using vCenter HA with an external Platform Services Controller deployment, an external load balancer is required to provide high availability to the Platform Services Controller instances. Supported load balancers for Platform Services Controller instances in vSphere 6.5 include VMware NSX, F5 BIG-IP LTM, and Citrix NetScaler.

The vCenter High Availability architecture uses a three-node cluster to provide availability against multiple types of hardware and software failures. 
Image: VMware
  • A vCenter HA cluster consists of one Active vCenter Server node that serves client requests.
  • One Passive node that takes over the role of the Active node in the event of an Active node failure. 
  • One quorum node, called the Witness node, to resolve the classic split-brain problem that network failures can cause in distributed systems maintaining replicated data. 
Traditional architectures use some form of shared storage to solve the split-brain problem. However, in order to support a vCenter HA cluster spanning multiple datacenters, the vCHA design does not assume a shared storage-based deployment. As a result, one node in the vCenter HA cluster is permanently designated as the quorum node, or Witness node. The other two nodes in the cluster dynamically assume the roles of Active and Passive nodes. 

vCenter Server availability is assured as long as at least two nodes in the cluster are up and running. However, a cluster with only two functioning nodes is considered to be running in a degraded state: a subsequent failure in a degraded cluster means vCenter services are no longer available. 

A vCenter Server appliance is stateful and requires a strongly consistent state to work correctly. The appliance state (configuration state and runtime state) is mainly composed of:
  • Database data (stored in the embedded PostgreSQL database) 
  • Flat files (for example, configuration files). 
The appliance state must be replicated in order for vCHA failover to work properly. For the state stored inside the embedded PostgreSQL database, vCHA uses PostgreSQL's native replication mechanism to keep the database data of the Active and Passive nodes in sync. For flat files, a native Linux tool, rsync, is used for replication. Because the vCenter Server appliance requires strong consistency, synchronous replication is required to replicate the appliance state from the Active node to the Passive node.
Image: VMware
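For illustration only, the sketch below shows what file-level replication with rsync and a check of PostgreSQL's native replication status look like in general. The directory, host name, and connection parameters are assumptions for the example; vCHA manages the actual replication internally, and these commands are not part of its configuration.

    # Illustrative only: push a configuration directory from the Active node
    # to the Passive node (hypothetical host name "passive-node")
    rsync -az --delete /etc/vmware-vpx/ root@passive-node:/etc/vmware-vpx/

    # Illustrative only: PostgreSQL exposes replication state on the primary
    # via the pg_stat_replication view (connection details assumed)
    psql -U postgres -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"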

A design assumption of low-latency, high-bandwidth network connectivity between the Active and Passive nodes is made to guarantee a zero recovery point objective (RPO).

A vCHA cluster requires a vCHA network that is separate from the management network of the vCenter Server appliance. Clients access the Active vCenter Server appliance via the management network interface, which is public.
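On the appliance itself, the vCenter HA network shows up as a second network interface next to the management interface. As a rough illustration (the interface names are an assumption and can differ per deployment):

    # Management (public) interface of the vCenter Server appliance
    ip addr show eth0
    # Private vCenter HA interface added when vCHA is configured
    ip addr show eth1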

Roles for each type of node in a vCenter HA cluster, as shown in the figure above, are:

Active Node: 
  • Runs the active instance of vCenter Server.
  • Enables and uses the public IP address of the cluster. 
Passive Node: 
  • Runs as the passive instance of vCenter Server. 
  • Constantly receives state updates from the Active node in synchronous mode. 
  • Equivalent to the Active node in terms of resources. 
  • Takes over the role of the Active node in the event of a failover. 
Witness Node: 
  • Serves as a quorum node.
  • Used to break a tie in the event of a network partition causing a situation where the Active and Passive nodes cannot communicate with each other. 
  • A lightweight VM utilizing minimal hardware resources. 
  • Does not take over the role of the Active or Passive nodes. 
In the event of the Active vCenter Server appliance failing due to a hardware, software, or network failure, the Passive node takes over the role of the Active node, assumes the public IP address of the cluster, and starts serving client requests. Clients are expected to log in to the appliance again for continued access. Because the HA solution uses synchronous database replication, there is no data loss during failover (RPO = 0).

Availability of the vCenter Server appliance under various failure conditions:

Active node failure:
  • As long as the Passive node and the Witness node can communicate with each other, the Passive node will promote itself to Active and start serving client requests. 
Passive node failure:
  • As long as the Active node and the Witness node can communicate with each other, the Active node will continue to operate as Active and continue to serve client requests. 
Witness node failure:
  • As long as the Active node and the Passive node can communicate with each other, the Active node will continue to operate as Active and continue to serve client requests. 
  • The Passive node will continue to watch the Active node for failover. 
More than one node fails or is isolated:
  • This means all three nodes (Active, Passive, and Witness) cannot communicate with each other.
  • This is more than a single point of failure, and when it happens the cluster is assumed to be non-functional and availability is impacted, because vCHA is not designed for multiple simultaneous failures. 
Isolated node behaviour:
  • When a single node gets isolated from the cluster, it is automatically taken out of the cluster and all services are stopped. For example, if an Active node is isolated, all services are stopped to ensure that the Passive node can take over as long as it is connected to the Witness node.
  • Isolated node detection takes into consideration intermittent network glitches and resolves to an isolated state only after all retry attempts have been exhausted. 
A client connecting to the vCenter Server appliance uses the public IP address. In the event of a failover, the Passive node takes over the exact personality of the failed Active node, including the public IP address. The target recovery time objective (RTO) is about 5 minutes, during which clients should be prepared to receive errors.
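To picture the client-side experience during a failover, the sketch below simply polls the public address until it answers again. The URL is a placeholder, and real clients such as the vSphere Web Client or API scripts implement their own retry and re-login logic.

    # Illustrative only: wait for the public vCenter address to respond again
    # after a failover (errors are expected for up to the ~5 minute RTO)
    until curl -k -s -o /dev/null https://vcenter.example.com/ui/; do
      echo "vCenter not reachable yet, retrying in 15 seconds..."
      sleep 15
    done
    echo "vCenter is responding again; clients must log in again."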

There are two modes in which vCHA can be deployed:
  • Basic mode.
  • Advanced mode.
Basic Mode:
  • The basic workflow can be used in most scenarios in which all vCHA nodes run within the same cluster. 
  • This workflow automatically creates the passive and witness nodes. 
  • It also creates vSphere DRS anti-affinity rules if vSphere DRS is enabled on the destination cluster and uses VMware vSphere Storage DRS for initial placement if enabled. 
  • Some flexibility is provided in this workflow, so you can choose specific destination hosts, datastores, and networks for each node. This is a simple way to get a vCHA cluster up and running.
  • A step-by-step walkthrough of the Basic workflow is available at the VMware Walkthrough portal.

Advanced Mode:
  • The Advanced workflow is an alternative that can be used when the Active, Passive, and Witness nodes are to be deployed to different clusters, vCenter Server instances, or even different datacenters. 
  • This process requires the customer to manually clone the source vCenter Server instance for the Passive and Witness nodes and then place those nodes in the chosen locations with the appropriate IP address settings. 
  • This process involves more manual work but provides greater flexibility for customers who need it.
  • A step-by-step walkthrough of the Advanced workflow is available at the VMware Walkthrough portal.

Wednesday, 6 December 2017

Performance Best Practices for VMware vSphere 6.5 technical white paper by VMware


This technical white paper by VMware provides tips that help administrators maximize the performance of VMware vSphere 6.5. Sections cover hardware selection, ESXi and virtual machines, guest operating systems, and virtual infrastructure management.

This technical white paper provides performance tips that cover the most performance-critical areas of VMware vSphere 6.5. It is not intended as a comprehensive guide for planning and configuring your deployments.

Click the link below for the technical white paper.

Thursday, 19 October 2017

Shockwave Flash plugin crash in Chrome while using the vSphere Web Client/vCD/vRA

Recently I came across a strange issue. While trying to log in to the vSphere Web Client, I suddenly started getting the error below after clicking Login, and the same error appeared in the vCloud Director interface. I tried several times, but the result was the same: the login page loads, but as soon as you click Login you get the Shockwave Flash crash.


Fortunately, while searching I came across Adobe's announcement of the Flash Player 27 beta release. You can download the plugin at the link below.


After updating the plugin on my Mac, the issue was fixed, as shown in the screenshot below.



This Flash Player update is currently in beta, so check your policies before using it in production. It is available for the major platforms: Windows, Linux, and Mac.

VMware updated certification roadmap

Image: VMware
Note: VCA exams are optional and are not mandatory for pursuing any further certification. However, the vSphere Foundations exam is mandatory for all VCP certifications.

Wednesday, 18 October 2017

vSphere 6.5 and vNVMe controller

NVMe:

NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface (NVMHCI) is a logical device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus in real and virtual hardware.

The acronym NVM stands for non-volatile memory, commonly flash memory that comes in the form of solid-state drives (SSDs). NVM Express, as a logical device interface, has been designed from the ground up to capitalise on the low latency and internal parallelism of flash-based storage devices, mirroring the parallelism of contemporary CPUs, platforms and applications.

NVM Express allows host hardware and software to fully exploit the parallelism possible in modern SSDs. As a result, NVM Express reduces I/O overhead and brings various performance improvements compared to previous logical device interfaces, including support for multiple long command queues and reduced latency. 

vSphere and NVMe:

A virtual NVM Express (NVMe) controller is available with ESXi 6.5 and virtual hardware version 13. With hardware version 13, you can use NVMe, SATA, SCSI, and IDE controllers in a virtual machine.



Virtual NVMe (vNVMe):


A virtual NVMe device has lower I/O overhead and provides scalable I/O for all-flash SAN/vSAN storage. Hardware NVMe SSDs have a significant advantage over older SATA/SAS-based flash devices.
The main benefit of the NVMe interface over SCSI is that it reduces overhead and therefore consumes fewer CPU cycles. It also reduces I/O latency for your VMs. 

Each virtual machine can support up to 4 NVMe controllers and up to 15 devices per controller. 
Driver Architecture:




  • Native Model: Registers/unregisters the driver and brings up the device.
  • VMKLinux Model: Deprecated.
  • OS Libs: Provide OS-related resources, such as heap, locks, and interrupts.
  • Mgmt: Management interface to pass through admin commands.
  • SCSI Emulation Layer: The ESXi storage stack is SCSI based; this layer is responsible for translating SCSI commands to NVMe commands.
  • NVMe Core: Core NVMe functionality, such as queue construction, command issue, and register read/write.
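To see where the NVMe driver sits on an ESXi 6.5 host, the storage adapters can be listed from the ESXi Shell. The exact adapter names depend on the hardware; NVMe controllers are claimed by the nvme driver.

    # List storage adapters; NVMe controllers appear with the nvme driver
    esxcli storage core adapter list
    # Show storage paths, filtered for NVMe entries
    esxcli storage core path list | grep -i nvme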


Supported guest operating systems:

Not all operating systems are supported with vNVMe. Make sure that your OS is supported and verify that the guest OS has a driver installed to use the NVMe controller (a quick guest-side check is shown after the list of supported systems below).

  • Windows 7 and 2008 R2 (hotfix required: https://support.microsoft.com/en-us/kb/2990941) 
  • Windows 8.1, 2012 R2, 10, 2016 
  • RHEL, CentOS, NeoKylin 6.5 and later 
  • Oracle Linux 6.5 and later 
  • Ubuntu 13.10 and later 
  • SLE 11 SP4 and later 
  • Solaris 11.3 and later 
  • FreeBSD 10.1 and later 
  • Mac OS X 10.10.3 and later 
  • Debian 8.0 and later
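Inside a Linux guest, a quick way to confirm that the virtual NVMe controller is visible and that the in-box driver has claimed it (device names assume a single controller and namespace):

    # The vNVMe controller shows up as a PCIe NVMe device
    lspci | grep -i "non-volatile memory"
    # The nvme driver exposes the controller and its namespaces as block devices
    ls /dev/nvme*
    # Optional, if the nvme-cli package is installed in the guest
    nvme list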

Tuesday, 17 October 2017

GA of the vRealize Operations Federation Management Pack

With the vRealize Operations Federation Management Pack you can unify your multi-site vRealize Operations Manager deployment into a single pane of glass. You can instantiate a deployment of vRealize Operations Manager that receives key metrics for specified objects from the other vRealize Operations Manager deployments in your environment.

Image: VMware
Description
The management pack allows you to unlock the following use cases:

  • Provide a summary of performance, capacity, and configuration to Senior Executives and Virtual Infrastructure Administrators across all your vSphere environments.
  • Provide a unified view of events triggered across the virtual environments in a single pane, making it easier for the NOC or help desk to initiate action.
  • Ability to create a data warehouse where a user-selected set of metrics can be stored for data archiving and reporting use cases.
  • Ability to provide summarized views of the health and configuration of your software-defined data center stack. This includes core applications such as VMware vCenter Server, VMware NSX, and VMware vSAN, as well as management applications such as vRealize Operations Manager, vRealize Log Insight, vRealize Automation, vRealize Business, and VMware Site Recovery Manager.

Technical Specifications

The vRealize Operations Federation Management Pack requires a dedicated instance of vRealize Operations Manager on which the solution is installed. That dedicated instance uses the management pack to communicate with the other vRealize Operations Manager instances in your environment and collect key metrics.

The vRealize Operations Federation Management Pack is compatible with:
  • vRealize Operations Manager 6.6 and 6.6.1.

Documents

Monday, 9 October 2017

Migration Paths from vCenter Server for Windows to vCenter Server Appliance 6.5

In the new vCenter Server 6.5 installer, you get an option to migrate a vCenter Server for Windows instance to a vCenter Server Appliance instance, as shown in the image below.

You can migrate a vCenter Server 5.5 or 6.0 instance on Windows to a vCenter Server Appliance 6.5 deployment running on the Linux-based Photon OS 1.0. 
The vCenter Server migration paths below show the supported migration outcomes. 
vCenter Server 5.5.x with Embedded vCenter Single Sign-On Installation Before and After Migration

Image: VMware
You can migrate a vCenter Server instance with an embedded vCenter Single Sign-On (version 5.5) to a vCenter Server Appliance 6.5 instance with an embedded Platform Services Controller appliance. In this case the software migrates the vCenter Server instance and the embedded vCenter Single Sign-On instance at the same time. 

vCenter Server 6.0.x with Embedded Platform Services Controller Installation Before and After Migration

Image: VMware
You can migrate a vCenter Server instance with an embedded Platform Services Controller (version 6.0) to a vCenter Server Appliance 6.5 instance with an embedded Platform Services Controller appliance. In this case the software migrates the vCenter Server instance and the embedded Platform Services Controller instance at the same time. 
vCenter Server 5.5.x with External vCenter Single Sign-On Installation Before and After Migration

Image: VMware
You can migrate a vCenter Server instance with an external vCenter Single Sign-On (version 5.5) to a vCenter Server Appliance 6.5 instance with an external Platform Services Controller appliance. In this case you must first migrate the external vCenter Single Sign-On instance and then the vCenter Server instance.

vCenter Server 6.0.x with External Platform Services Controller Installation Before and After Migration

Image: VMware
You can migrate a vCenter Server instance with an external Platform Services Controller (version 6.0) to a vCenter Server Appliance 6.5 instance with an external Platform Services Controller appliance. In this case you must first migrate the external Platform Services Controller instance and then the vCenter Server instance.

Saturday, 7 October 2017

Overview of the upgrade process to ESXi 6.5

You can upgrade an ESXi 5.5.x or 6.0.x host, including hosts with asynchronously released drivers or other third-party customizations, using an interactive upgrade from CD or DVD, a scripted upgrade, or vSphere Update Manager. When you upgrade an ESXi 5.5.x or 6.0.x host that has custom VIBs to version 6.5, the custom VIBs are migrated.

High level steps for upgrading ESXi:


Image: VMware

Methods supported for direct upgrade to ESXi 6.5 are: 
  • Use the interactive graphical user interface (GUI) installer from CD, DVD, or USB drive. 
  • Scripted upgrade. 
  • Use the esxcli command-line interface (CLI); see the example after this list. 
  • vSphere Auto Deploy. If the ESXi 5.5.x host was deployed by using vSphere Auto Deploy, you can use vSphere Auto Deploy to reprovision the host with a 6.5 image. 
  • vSphere Update Manager.
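For the esxcli method, the general pattern is to copy the offline bundle to a datastore, place the host in maintenance mode, and update to an image profile from that depot. The datastore path and profile name below are placeholders; list the profiles in your depot first, and reboot the host after the update.

    # Enter maintenance mode
    esxcli system maintenanceMode set --enable true
    # List the image profiles contained in the offline bundle (placeholder path)
    esxcli software sources profile list --depot=/vmfs/volumes/datastore1/ESXi-6.5.0-offline-bundle.zip
    # Upgrade to the chosen profile (placeholder profile name), then reboot
    esxcli software profile update --depot=/vmfs/volumes/datastore1/ESXi-6.5.0-offline-bundle.zip --profile=ESXi-6.5.0-XXXXXXX-standard
    reboot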

Monday, 2 October 2017

How to import/export a vRealize Automation blueprint using CloudClient


In vRealize Automation, exporting and importing blueprints is an easy way to share blueprints across the multiple locations of an organisation.

Use the steps below to export a blueprint from vRealize Automation using CloudClient.
  • Log in to the vRA appliance using "vra login userpass --tenant <name>".  


  • List the current content on the vRA appliance to get the required fields, such as Content ID and Content Type, for exporting the blueprint.


  •  Run the export command to export the required blueprint to the specified path (see the example after this list).


  •  Exit the CloudClient interface.


  •  Navigate to the path where the blueprint was exported and extract the .zip file. 

  • Once extracted, the .yaml files can be opened with WordPad to view information about the blueprint.
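As a reference for the export steps above, a typical CloudClient session looks roughly like the following. The tenant, content ID, and export path are placeholders, and the flag names are given from memory and may differ between CloudClient releases, so treat this as a sketch and confirm the syntax with CloudClient's built-in help.

    # Log in to the vRA appliance (tenant is a placeholder; CloudClient prompts
    # for any connection details not supplied)
    vra login userpass --tenant vsphere.local
    # List content to find the blueprint's content ID and content type
    vra content list
    # Export the blueprint as a zip package (flags assumed; verify with help)
    vra content export --path /tmp/export --id <content-id> --type composite-blueprint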



Follow the steps below to import a blueprint into vRA.


  • Log in to vRA and run the import command, as in the screenshot below (see the example after this list).


  •  Verify that the imported blueprint is visible in the vRA portal.


  • To test it further, you can request the blueprint service from the catalog and verify successful delivery of the service.  
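For the import side, a hedged sketch of the CloudClient commands is shown below. The package name, tenant, and flags are assumptions based on typical CloudClient usage and should be verified against the built-in help for your CloudClient version.

    # Log in to the target vRA instance (placeholder tenant)
    vra login userpass --tenant vsphere.local
    # Import the previously exported package (flags assumed; verify with help)
    vra content import --path /tmp/export/<blueprint-package>.zip --resolution OVERWRITE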

Tuesday, 26 September 2017

Log locations for VMware vRealize Automation 7.x

A full log bundle containing all vRealize Automation 7.x logs (with the exception of Guest Agent and Application Services logs) can be obtained from the vRealize Automation Appliance Management website under the Cluster tab.

Server | Log location | Log purpose
vRealize Automation Appliance | /storage/log/vmware/vcac/vcac-config.log | vRealize Automation Appliance configuration logs
vRealize Automation Appliance | /storage/log/vmware/vcac/telemetry.log | vRealize Automation telemetry log
vRealize Automation Appliance | /storage/log/vmware/vco/configuration/catalina.out | vRealize Orchestrator Configuration log
vRealize Automation Appliance | /storage/log/vmware/vco/configuration/controlcenter.log | vRealize Orchestrator Control Center log
vRealize Automation Appliance | /storage/log/vmware/vco/app-server/server.log | vRealize Orchestrator Server log
vRealize Automation Appliance | /storage/log/vmware/horizon/connector.log | vIDM connector request log
vRealize Automation Appliance | /storage/log/vmware/horizon/catalina.log | vIDM Server log
vRealize Automation Appliance | /storage/log/vmware/horizon/horizon.log | vIDM Server log
vRealize Automation Appliance | /storage/log/vmware/vcac/catalina.out | vRealize Automation Server log
vRealize Automation Appliance | /var/log/vmware/horizon/horizon.log | vRA 7.2 horizon log
vRealize Automation Appliance | /var/log/vmware/horizon/connector.log | vRA 7.2 connector request log
vRealize Automation Appliance | /storage/db/pgdata/pg_log/postgresql.csv | vPostgres log
vRealize Automation Appliance | /opt/vmware/var/log/vami/vami.log | Appliance Management and upgrade logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Agents\agent_name\logs\file | Plug-in logs (for example: CPI61, nsx, VC50, VC51Agent, VC51TPM, vc51withTPM, VC55Agent, vc55u, VDIAgent)
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Distributed Execution Manager\DEMOR\Logs\DEMOR_All | Distributed Execution Manager logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Distributed Execution Manager\DEMWR\Logs\DEMWR_All | Distributed Execution Worker logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Server\Logs | Manager Service logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Server\ConfigTool\Log\vCACConfiguration-date | Repository Configuration logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Server\Model Manager Data\Logs\nothing_today | IIS Access logs (usually empty, but can be expected)
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Server\Model Manager Web\Logs\Repository | Repository logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Server\Website\Logs\Web_Admin_All | Web Admin logs
IaaS Windows Server | C:\inetpub\logs | IIS logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Web API\Logs\Elmah | WAPI website logs (6.2.x)
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Management Agent\Logs | Management Agent logs
Windows Guest Agent Connection Log | C:\VRMGuestAgent\axis2\gugent-axis.log | Connection log showing status of the connection to the vRA Management Service server
Windows Guest Agent Log | C:\VRMGuestAgent\GuestAgent.log | All Guest Agent actions (after successful connection)
Linux Guest Agent Connection Log | /usr/share/gugent/axis2/logs/gugent-axis.log | Connection log showing status of the connection to the vRA Management Service server
Linux Guest Agent Log | /usr/share/gugent/GuestAgent.log | All Guest Agent actions (after successful connection)
Windows Application Services Agent logs | c:\opt\vmware-appdirector\agent\logs\agent_bootstrap.log | Windows logs for the Software Provisioning agent
Linux Application Services Agent logs | /opt/vmware-appdirector/agent/logs/agent_bootstrap.log | Linux logs for the Software Provisioning agent
Linux Software Agent Logs | /opt/vmware-appdirector/agent/logs/darwin-agent<date>.log | Linux logs for the Software agent
Windows Software Agent Logs | c:\opt\vmware-appdirector\agent\logs\darwin-agent<date>.log | Windows logs for the Software agent


Source: VMware
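When troubleshooting on the appliance over SSH, the paths from the table above can also be followed live, for example:

    # Follow the main vRealize Automation server log on the appliance
    tail -f /storage/log/vmware/vcac/catalina.out
    # Search the appliance configuration log for recent errors
    grep -i error /storage/log/vmware/vcac/vcac-config.log | tail -20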
