Tuesday, 17 October 2017

GA for vRealize Operations Federation Management Pack

With the vRealize Operations Federation Management Pack, you can unify your multi-site vRealize Operations Manager deployment into a single pane of glass. You can instantiate a deployment of vRealize Operations Manager that receives key metrics for specified objects from the other vRealize Operations Manager deployments in your environment.

Image: VMware
Description
The management pack allows you to unlock the following use cases:

  • Provide a summary of performance, capacity, and configuration to Senior Executives and Virtual Infrastructure Administrators across all your vSphere environments.
  • Provide a unified view of events triggered across the virtual environments in a single pane, making it easier for NOC or Helpdesk teams to initiate action.
  • Ability to create a data warehouse where a user-selected set of metrics can be stored for data archiving and reporting use cases.
  • Ability to provide summarized views of the health and configuration of your Software-Defined Data Center stack. This includes core applications such as VMware vCenter Server, VMware NSX, and VMware vSAN, as well as management applications such as vRealize Operations Manager, vRealize Log Insight, vRealize Automation, vRealize Business, and VMware Site Recovery Manager.

Technical Specifications

The vRealize Operations Federation Management Pack requires a dedicated instance of vRealize Operations Manager on which the solution is installed. The dedicated instance uses the management pack to communicate with the other vRealize Operations Manager instances in your environment and collect key metrics.

The vRealize Operations Federation Management Pack is compatible with:
  • vRealize Operations Manager 6.6 and 6.6.1.


Monday, 9 October 2017

Migration Paths from vCenter Server for Windows to vCenter Server Appliance 6.5

In the new installer for vCenter Server 6.5, you get an option for migrating a vCenter Server for Windows instance to a vCenter Server Appliance instance, as shown in the image below.

You can migrate a vCenter Server version 5.5 or version 6.0 instance on Windows to a vCenter Server Appliance 6.5 deployment running on Photon OS 1.0, a Linux-based operating system.
The following migration paths show the supported migration outcomes.
vCenter Server 5.5.x with Embedded vCenter Single Sign-On Installation Before and After Migration

Image: VMware
You can migrate a vCenter Server instance with an embedded vCenter Single Sign-On (version 5.5) to a vCenter Server Appliance 6.5 instance with an embedded Platform Services Controller appliance. In this case the software migrates the vCenter Server instance and the embedded vCenter Single Sign-On instance at the same time. 

vCenter Server 6.0.x with Embedded Platform Services Controller Installation Before and After Migration

Image: VMware
You can migrate a vCenter Server instance with an embedded Platform Services Controller (version 6.0) to a vCenter Server Appliance 6.5 instance with an embedded Platform Services Controller appliance. In this case, the software migrates the vCenter Server instance and the embedded Platform Services Controller instance at the same time.
vCenter Server 5.5.x with External vCenter Single Sign-On Installation Before and After Migration

Image: VMware
You can migrate a vCenter Server instance with an external vCenter Single Sign-On (version 5.5) to a vCenter Server Appliance 6.5 instance with an external Platform Services Controller appliance. In this case you must first migrate the external vCenter Single Sign-On instance and then the vCenter Server instance.

vCenter Server 6.0.x with External Platform Services Controller Installation Before and After Migration

Image: VMware
You can migrate a vCenter Server instance with an external Platform Services Controller (version 6.0) to a vCenter Server Appliance 6.5 instance with an external Platform Services Controller appliance. In this case, you must first migrate the external Platform Services Controller instance and then the vCenter Server instance.

Saturday, 7 October 2017

Overview of the Upgrade Process for ESXi Hosts to 6.5

You can upgrade an ESXi 5.5.x or 6.0.x host to ESXi 6.5, including hosts with asynchronously released drivers or other third-party customizations, by using an interactive upgrade from CD or DVD, a scripted upgrade, or an upgrade with vSphere Update Manager. When you upgrade an ESXi 5.5.x or 6.0.x host that has custom VIBs to version 6.5, the custom VIBs are migrated.

High-level steps for upgrading ESXi:


Image: VMware

Methods supported for direct upgrade to ESXi 6.5 are: 
  • Use the interactive graphical user interface (GUI) installer from CD, DVD, or USB drive. 
  • Scripted upgrade. 
  • Use the esxcli command line interface (CLI); see the example after this list. 
  • vSphere Auto Deploy. If the ESXi 5.5.x host was deployed by using vSphere Auto Deploy, you can use vSphere Auto Deploy to reprovision the host with a 6.5 image. 
  • vSphere Update Manager.
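
As an illustration of the esxcli method, here is a minimal sketch of an offline-bundle upgrade run from the ESXi shell. It assumes the offline bundle has already been uploaded to a datastore; the depot path and image profile name are placeholders, so substitute your own values.

# Place the host in maintenance mode before upgrading
esxcli system maintenanceMode set --enable true

# List the image profiles contained in the offline bundle (path is an example)
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-6.5.0-offline-bundle.zip

# Upgrade the host to the chosen profile; existing VIBs are updated where possible
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-6.5.0-offline-bundle.zip -p <image-profile-name>

# Reboot the host to complete the upgrade
reboot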

Monday, 2 October 2017

How to import/export a vRealize Automation blueprint using CloudClient


In vRealize Automation, exporting and importing blueprints is an easy way to share blueprints across multiple locations of an organisation.

Use the steps below to export a blueprint from vRealize Automation using CloudClient.
  • Log in to the vRA appliance server using "vra login userpass --tenant <name>".
  • List the current content on the vRA appliance to get the fields required for exporting the blueprint, such as the Content ID and Content Type.
  • Run the content export command to export the required blueprint to a specified path (see the sketch after these steps).
  • Exit the CloudClient interface.
  • Navigate to the path where the blueprint was exported and extract the .zip file.
  • Once extracted, you can open the .yaml files with a text editor such as WordPad to see the information about the blueprint.
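
The exact output and flag names vary by CloudClient version, but the export flow looks roughly like the following sketch. The user, tenant, server, content ID, and paths are placeholders, and the composite-blueprint content type and export flags should be verified against your CloudClient version.

# Log in to the vRA appliance from the CloudClient shell (values are placeholders)
vra login userpass --user configadmin --tenant vsphere.local --server vra.example.com

# List available content to find the blueprint's content ID and content type
vra content list

# Export the blueprint package to a local path
vra content export --path /tmp/my-blueprint.zip --id <content-id> --type composite-blueprint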



Follow the steps below to import a blueprint into vRA.
  • Log in to vRA with CloudClient and run the content import command (see the sketch after these steps).
  • Verify that the imported blueprint is visible in the vRA portal.
  • To test it further, you can request the blueprint service from the catalog and verify successful delivery of the service.
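
For reference, a hedged sketch of the import step follows; flag names such as --resolution and --precheck should be checked against your CloudClient version.

# Log in again if a new CloudClient session was started (values are placeholders)
vra login userpass --user configadmin --tenant vsphere.local --server vra.example.com

# Import the previously exported package, overwriting an existing blueprint with the same ID
vra content import --path /tmp/my-blueprint.zip --resolution OVERWRITE --precheck WARN

# Confirm the imported blueprint now appears in the content list
vra content list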

Tuesday, 26 September 2017

Log locations for VMware vRealize Automation 7.x

A full log bundle containing all vRealize Automation 7.x logs (with the exception of Guest Agent and Application Services logs) can be obtained from the vRealize Automation Appliance Management website under the Cluster tab.

Server / Component | Log location | Log purpose
vRealize Automation Appliance | /storage/log/vmware/vcac/vcac-config.log | vRealize Automation Appliance configuration logs
vRealize Automation Appliance | /storage/log/vmware/vcac/telemetry.log | vRealize Automation telemetry log
vRealize Automation Appliance | /storage/log/vmware/vco/configuration/catalina.out | vRealize Orchestrator Configuration log
vRealize Automation Appliance | /storage/log/vmware/vco/configuration/controlcenter.log | vRealize Orchestrator Control Center log
vRealize Automation Appliance | /storage/log/vmware/vco/app-server/server.log | vRealize Orchestrator Server log
vRealize Automation Appliance | /storage/log/vmware/horizon/connector.log | vIDM connector request log
vRealize Automation Appliance | /storage/log/vmware/horizon/catalina.log | vIDM Server log
vRealize Automation Appliance | /storage/log/vmware/horizon/horizon.log | vIDM Server log
vRealize Automation Appliance | /storage/log/vmware/vcac/catalina.out | vRealize Automation Server log
vRealize Automation Appliance | /var/log/vmware/horizon/horizon.log | vRA 7.2 horizon log
vRealize Automation Appliance | /var/log/vmware/horizon/connector.log | vRA 7.2 connector request log
vRealize Automation Appliance | /storage/db/pgdata/pg_log/postgresql.csv | vPostgres log
vRealize Automation Appliance | /opt/vmware/var/log/vami/vami.log | Appliance Management and upgrade logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Agents\agent_name\logs\file | Plug-in logs (examples: CPI61, nsx, VC50, VC51Agent, VC51TPM, vc51withTPM, VC55Agent, vc55u, VDIAgent)
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Distributed Execution Manager\DEMOR\Logs\DEMOR_All | Distributed Execution Manager logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Distributed Execution Manager\DEMWR\Logs\DEMWR_All | Distributed Execution Worker logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Server\Logs | Manager Service logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Server\ConfigTool\Log\vCACConfiguration-date | Repository Configuration logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Server\Model Manager Data\Logs\nothing_today | IIS Access logs (usually empty, but can be expected)
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Server\Model Manager Web\Logs\Repository | Repository logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Server\Website\Logs\Web_Admin_All | Web Admin logs
IaaS Windows Server | C:\inetpub\logs | IIS logs
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Web API\Logs\Elmah | WAPI website logs (6.2.x)
IaaS Windows Server | C:\Program Files (x86)\VMware\vCAC\Management Agent\Logs | Management Agent logs
Windows Guest Agent Connection Log | C:\VRMGuestAgent\axis2\gugent-axis.log | Connection log showing status of the connection to the vRA Management Service server
Windows Guest Agent Log | C:\VRMGuestAgent\GuestAgent.log | All Guest Agent actions (post successful connection)
Linux Guest Agent Connection Log | /usr/share/gugent/axis2/logs/gugent-axis.log | Connection log showing status of the connection to the vRA Management Service server
Linux Guest Agent Log | /usr/share/gugent/GuestAgent.log | All Guest Agent actions (post successful connection)
Windows Application Services Agent logs | c:\opt\vmware-appdirector\agent\logs\agent_bootstrap.log | Windows logs for the Software Provisioning agent
Linux Application Services Agent logs | /opt/vmware-appdirector/agent/logs/agent_bootstrap.log | Linux logs for the Software Provisioning agent
Linux Software Agent Logs | /opt/vmware-appdirector/agent/logs/darwin-agent<date>.log | Linux logs for the Software agent
Windows Software Agent Logs | c:\opt\vmware-appdirector\agent\logs\darwin-agent<date>.log | Windows logs for the Software agent


Source: VMware
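
If you only need a specific log rather than the full bundle, you can also SSH to the vRA appliance and read the files listed above directly, for example:

# Follow the vRealize Automation server log on the appliance
tail -f /storage/log/vmware/vcac/catalina.out

# Show the most recent error entries from the vIDM server log
grep -i error /storage/log/vmware/horizon/horizon.log | tail -n 50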

Tuesday, 12 September 2017

UEFI Secure Boot with ESXi 6.5

UEFI Secure Boot:

UEFI, or Unified Extensible Firmware Interface, is a replacement for the traditional BIOS firmware. In UEFI, Secure Boot is a “protocol” of the UEFI firmware. UEFI Secure boot ensures that the boot loaders are not compromised by validating their digital signature against a digital certificate in the firmware.

UEFI can store whitelisted digital certificates in a signature database (DB). There is also a blacklist of forbidden certificates (DBX), a Key Exchange Keys (KEK) database and a platform key. These digital certificates are used by the UEFI firmware to validate the boot loader. 

Boot loaders are typically cryptographically signed, and their digital signature chains to a certificate in the firmware. The default digital certificate in almost every implementation of UEFI firmware is an x509 Microsoft UEFI Public CA certificate.

Most UEFI implementations also allow the installation of additional certificates in the UEFI firmware, and UEFI will then validate the boot loader against those certificates as well.

UEFI Secure Boot in ESXi 6.5:
With the release of vSphere 6.5, ESXi 6.5 adds support for UEFI Secure Boot. UEFI Secure Boot ensures that the ESXi server boots with a signed boot loader that is validated by the UEFI firmware, and also ensures that unsigned code does not run on the hypervisor.

ESXi comprises components such as the boot loader, the VMkernel, the Secure Boot Verifier, and VIBs. Each of these components is cryptographically signed.

Image: VMware
The boot process of ESXi 6.5 with UEFI Secure Boot:
  • Host is Powered On.
  • UEFI Firmware validates the ESXi Boot Loader against the Microsoft digital certificate in the UEFI firmware.
  • ESXi Boot Loader validates the kernel against the VMware digital certificate in the Boot Loader.
  • Kernel runs the Secure Boot Verifier.
  • Secure Boot Verifier validates each VIB against the VMware digital certificate in the Secure Boot Verifier.
  • Management applications (DCUI, hostd, etc.) now run on the ESXi host.

The ESXi boot loader is signed with the Microsoft UEFI Public CA cert. This ensures that standard UEFI Secure Boot firmware can validate the VMware boot loader. 

The boot loader code contains a VMware public key. This VMware key is used to validate the VM Kernel and a small subset of the system that includes the Secure Boot Verifier, used to validate the VIBs.

The VMkernel itself is cryptographically signed by VMware, and the boot loader validates it using the VMware public key it carries. The first thing the VMkernel runs is the Secure Boot Verifier.

The Secure Boot Verifier validates every cryptographically signed VIB against the VMware public key. A VIB (a gzipped TAR archive) comprises an XML descriptor file and a digital signature file. When ESXi boots, it creates a file system in memory that maps to the contents of the VIBs. If a file never leaves the cryptographically signed package, you do not have to sign every file, just the package.

Prerequisites to enable UEFI Secure Boot:
  • Verify that the hardware supports UEFI secure boot by default or if any firmware upgrade is required.
  • Verify that all VIBs are signed with an acceptance level of at least PartnerSupported. If you include VIBs at CommunitySupported level, you cannot use secure boot.
Enabling UEFI Secure Boot after upgrading to ESXi 6.5:

You can run a validation script located on the ESXi host to confirm that Secure Boot can be enabled after upgrading to 6.5:

/usr/lib/vmware/secureboot/bin/secureBoot.py -c

The output either includes "Secure Boot can be enabled" or "Secure boot cannot be enabled".
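
To check the VIB prerequisite mentioned above before enabling Secure Boot, the host's acceptance level and the installed VIBs can also be inspected from the ESXi shell, for example:

# Show the host image profile acceptance level (must not be CommunitySupported)
esxcli software acceptance get

# List installed VIBs along with their individual acceptance levels
esxcli software vib list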




Saturday, 2 September 2017

Good Bye, vSphere Web Client

VMware has announced that it will deprecate the Flash-based vSphere Web Client with the next numbered release (not an update release) of vSphere. The next version of vSphere will be the terminal release for which the vSphere Web Client is available.

Since the vSphere Web Client is based on Adobe Flash technology, it delivers less than ideal performance compared with the HTML5-based vSphere Client and also has constant update requirements. Additionally, Adobe has recently announced plans to deprecate Flash.



Currently, vSphere 6.5 ships with two GUI variants for managing the virtual datacenter: the vSphere Web Client and the HTML5-based vSphere Client.

Alongside the decommissioning of the Windows-based vSphere Client, VMware introduced the HTML5-based vSphere Client with vSphere 6.5, which provides solid performance compared with the vSphere Web Client. The vSphere Client was first introduced as a Fling and then became supported with vSphere 6.5. Since its introduction, the vSphere Client has received positive responses from the vSphere community and customer base.

With the recently released vSphere 6.5 Update 1, the vSphere Client got even better and is now able to support most of the frequently performed operations. With each iteration of the vSphere Client, additional improvements and functionality are added.

By the time the vSphere Web Client is deprecated, the vSphere Client will be fully featured, with significantly better responsiveness and usability.

The HTML5-based vSphere Client will be the primary GUI administration tool for vSphere environments starting with the next release. Customers are advised to start transitioning to the HTML5-based vSphere Client, as the vSphere Web Client will no longer be available after the next vSphere release. This announcement from VMware gives customers ample time to prepare for the eventual vSphere Web Client deprecation.

Thursday, 24 August 2017

Remote Direct Memory Access for Virtual Machines in vSphere 6.5


As the basic definition goes, Direct Memory Access (DMA) is the ability of a device to access the host's memory directly, without the intervention of the CPU or operating system.

RDMA:

RDMA allows direct memory access from the memory of one computer to the memory of another computer without involving the operating system or CPU.

Protocol Support for RDMA:

Several network protocols support RDMA these days, such as:
  • InfiniBand (IB)
  • RDMA over Converged Ethernet (RoCE)
  • Internet Wide Area RDMA Protocol (iWARP)

iWARP:

Internet Wide Area RDMA Protocol (iWARP) is a network protocol that allows performing RDMA over TCP, which makes it possible to use RDMA over standard Ethernet infrastructure. Only the NICs need to be iWARP-compatible (if CPU offloads are used); otherwise, the iWARP stack can be implemented entirely in software, losing most of the RDMA performance advantages.

IB:


InfiniBand (IB) is a new-generation network protocol that supports RDMA natively. Because it is a new network technology, it requires NICs and switches that support it.

RoCE:

RoCE is a network protocol which allows performing RDMA over Ethernet network. Its lower network headers are Ethernet headers and its upper network headers (including the data) are InfiniBand headers. This allows using RDMA over standard Ethernet infrastructure. 

vSphere with RDMA:

With vSphere 6.5, VMware introduced support for RDMA over Converged Ethernet (RoCE), which allows remote direct memory access (RDMA) over an Ethernet network. RoCE dramatically accelerates communication between two network endpoints by leveraging DMA over a converged Ethernet infrastructure.

RoCE is supported in two modes: RoCE v1 and RoCE v2.

The RoCE v1 protocol is an Ethernet link layer protocol with Ethertype 0x8915, which means that the frame length limits of the Ethernet protocol apply: 1500 bytes for a regular Ethernet frame and 9000 bytes for a jumbo frame.

The RoCE v2 protocol runs on top of either UDP/IPv4 or UDP/IPv6. UDP destination port number 4791 has been reserved for RoCE v2. Since RoCE v2 packets are routable, the protocol is sometimes called Routable RoCE or RRoCE.

The following diagram shows the packet format for the RoCE v1 and v2 protocols.
Image: VMware

Using RDMA in vSphere:

vSphere 6.5 and later releases support remote direct memory access (RDMA) communication between virtual machines that have paravirtualized RDMA (PVRDMA) network adapters. 
The virtual machines must be connected to the same vSphere Distributed Switch.

A PVRDMA network adapter provides remote direct memory access to a virtual machine in a virtual environment. A virtual machine uses the PVRDMA network adapter to communicate with other virtual machines that have PVRDMA devices. The transfer of memory is offloaded to the RDMA-capable Host Channel Adapters (HCAs), as shown in the images below.

Image 1: VMware

Image 2: VMware 

The PVRDMA device automatically selects the method of communication between the virtual machines. For virtual machines that run on the same ESXi host with or without a physical RDMA device, the data transfer is a memcpy between the two virtual machines. The physical RDMA hardware is not used in this case. 
Image: VMware

For virtual machines that reside on different ESXi hosts and that have a physical RDMA connection, the physical RDMA devices must be uplinks on the distributed switch. In this case, the communication between the virtual machines occurs through the PVRDMA adapter, which uses the underlying physical RDMA devices. 

Image: VMware
For two virtual machines that run on different ESXi hosts, when at least one of the hosts does not have a physical RDMA device, the communication falls back to a TCP-based channel and the performance is reduced.

PVRDMA Requirements in vSphere 6.5:

VMware vSphere 6.5 and later supports PVRDMA only in specific configurations. Below are the requirements for the key components.

vSphere:
  • ESXi host 6.5 or later.
  • vCenter Server or vCenter Server Appliance 6.5 or later. 
  • vSphere Distributed Switch.
Host Channel Adapter (HCA):
  • Virtual machines that reside on different ESXi hosts require HCA to use RDMA.
  • HCA must be assigned as an uplink for the vSphere Distributed Switch. 
  • PVRDMA does not support NIC teaming. 
  • The HCA must be the only uplink on the vSphere Distributed Switch. 
  • For virtual machines on the same ESXi host or virtual machines using the TCP-based fallback, the HCA is not required (as discussed above).
Virtual machine:
  • Virtual hardware version 13 or later.
Guest OS:
  • Linux (64-bit)

Assign a PVRDMA Adapter to a Virtual Machine:

To enable a virtual machine to exchange data using RDMA, you must associate the virtual machine with a PVRDMA network adapter.

Procedure:
  1. Locate the virtual machine in the vSphere Web Client inventory and Power off the virtual machine. 
  2. On the Configure tab of the virtual machine, expand Settings and select VM Hardware. 
  3. Click Edit and select the Virtual Hardware tab in the dialog box displaying the settings. 
  4. From the New device drop-down menu, select Network and click Add. 
  5. Expand the New Network section and connect the virtual machine to a distributed port group. 
  6. From the Adapter type drop-down menu, select PVRDMA. 
  7. Expand the Memory section, select Reserve all guest memory (All locked), and click OK. 
  8. Power on the virtual machine.
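
In addition to the VM-level steps above, VMware's PVRDMA documentation for vSphere 6.5 also describes host-side preparation: tagging a VMkernel adapter for PVRDMA and enabling the pvrdma firewall rule on each ESXi host. A hedged sketch of the ESXi shell equivalent is below; the vmknic name is a placeholder, and the option and ruleset names should be verified against your build.

# Tag a VMkernel adapter (placeholder vmk0) for PVRDMA communication
esxcli system settings advanced set -o /Net/PVRDMAVmknic -s vmk0

# Allow PVRDMA traffic through the ESXi firewall
esxcli network firewall ruleset set -e true -r pvrdma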

Hope this helps in understanding PVRDMA in VMware vSphere 6.5.
