Friday, 29 April 2016

What is Software Defined Datacenter (SDDC)?

Software Defined is nowadays a common term in virtualization and cloud solutions. You may have also seen terms like the ones below.

  • Software Defined Datacenter (SDDC)
  • Software Defined Networking (SDN)
  • Software Defined Compute (SDC)
  • Software Defined Storage (SDS)  (one of the categories on this blog too :))
What exactly do we mean by the term Software Defined?

In any datacenter, we have four important resources that act as the pillars of the datacenter: CPU, memory, network, and storage. All of these resources are interdependent.

When we talk about the term Software Defined, it generally means that we are virtualizing these datacenter resources so that our datacenters become flexible and can scale out to cloud-level functionality.

Hence, the term Software Defined describes an approach, or vision, of virtualizing datacenter resources. So if I try to summarize the meaning of Software Defined, it would be as below.

  • Software Defined  =  Virtualized

Hence,

  • SDDC  =  Virtualized Datacenter
  • SDN  =  Virtualized Network
  • SDS  =  Virtualized Storage
  • SDC  =  Virtualized Compute
I hope this clears the clouds around the term Software Defined.

Now, as mentioned, to make datacenters more flexible we will be virtualizing these datacenter resources using virtualization products from vendors like VMware. The image below describes the VMware products associated with virtualizing each datacenter resource and scaling out to cloud services.

Image: VMware

Wednesday, 27 April 2016

vRealize Operations Manager 6.X node types

vROps 6.0 uses a common node architecture across all platforms, including Windows, Linux (RHEL), and the virtual appliance. There are four node roles, as listed below.
  • Master Node
  • Master Replica Node
  • Data Node
  • Remote collector Node

You must select a role when you add a node to the cluster.

Master / Master replica node:

The master or master replica node is required for the availability of the Operations Manager cluster. It includes all vROps 6.0 services, i.e. UI, Controller, Analytics, Collector, and Persistence.

This role also includes the services that are not replicated across the other nodes in the cluster:

• Global xDB
• NTP server
• GemFire locator

Data node:

It provides the core functionality of collecting and processing data and data queries. It also extends the vROps cluster as a member of the GemFire Federation. A data node is almost identical to a master/master replica node except that it does not contain Global xDB, NTP server, and GemFire locator.

Remote collector node:

The remote collector role is a standalone collector, typically deployed close to remote data sources. Remote collectors do not process data themselves; instead, they simply forward it to data nodes for analytics processing. Remote collector nodes do not run the core vROps components:

• The Product UI
• Controller
• Analytics
• Persistence

Remote collectors are not members of the GemFire Federation as they do not run any of vROps core components.

vRealize Operations Manager 6.X persistence layer and its databases

vROps 6.x Persistence Layer:
The Persistence layer is the layer where data is persisted to disk. It primarily consists of a series of databases that replace the vCOps 5.x filesystem database (FSDB) and PostgreSQL combination.

vROps 6.0 has four primary database services built on the EMC Documentum xDB (an XML database) and the original FSDB.

Global xDB:

Global xDB is solely located on the master node (and master replica if high availability is enabled).
Global xDB contains all of the data that cannot be sharded, i.e. user configuration data, which includes:

• User created dashboards and reports
• Policy settings and alert rules
• Super metric formulas (not super metric data, as it is sharded in the FSDB)
• Resource control objects

Alarms xDB:

The Alarms xDB is a sharded xDB database that contains information on Dynamic Threshold breaches. This information is then converted into vROps alarms based on the active policies.


HIS xDB:

The HIS xDB is a sharded xDB database that holds historical information on all resource properties and parent/child relationships. HIS feeds change data back to the analytics layer based on the incoming metric data, which is then used for Dynamic Threshold calculations and symptom/alarm generation.
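To give a feel for what a Dynamic Threshold breach means, here is a minimal sketch using a simple statistical band (mean ± k standard deviations) over recent samples. vROps' actual analytics engine is far more sophisticated and proprietary; this only illustrates the breach-detection idea.

```python
from statistics import mean, stdev

def dynamic_threshold(history, k=2.0):
    """Derive a simple 'normal' band from historical metric samples.

    This is a toy stand-in for vROps' Dynamic Thresholds: the band is
    just mean +/- k standard deviations of the history.
    """
    mu = mean(history)
    sigma = stdev(history)
    return (mu - k * sigma, mu + k * sigma)

def is_breach(value, band):
    """A value outside the band counts as a threshold breach."""
    lo, hi = band
    return value < lo or value > hi

# Example: CPU-usage samples hovering around 40%
history = [38, 41, 40, 39, 42, 40, 41, 39, 40, 38]
band = dynamic_threshold(history)
print(is_breach(40, band))  # a normal value -> False
print(is_breach(75, band))  # a spike -> True
```

In vROps, a breach like this becomes a symptom, and active policies decide whether it is raised as an alarm.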


FSDB:

The FSDB contains all raw time series metrics for the discovered resources.


VMware vRealize Operations Manager (vROps) 6.X Architecture

VMware vRealize Operations Manager (vROps) 6 helps IT admins monitor, troubleshoot, and manage the health and capacity of virtual environments. vRealize Operations is a suite that includes vCenter Infrastructure Navigator (VIN), vRealize Configuration Manager (vCM), vRealize Log Insight, and vRealize Hyperic.

vRealize Operations Manager 6 (vROps) is designed around a common platform that uses the same components in all types of deployment architectures. The following diagram shows the five major components of the Operations Manager architecture:

Product/Admin UI:

In vROps 6, the UI is divided into two components: the Product UI and the Admin UI.

The vROps 6.0 Product UI is present on all nodes with the exception of nodes that are deployed as remote collectors. 

The primary purpose of the Product UI is to make GemFire calls to the Controller API to access data and create views, such as dashboards and reports. The Product UI is accessed via HTTPS on TCP port 443.

The Product UI is the main Operations Manager graphical user interface. It is based on Pivotal tc Server and can make HTTP REST calls to CaSA (Cluster and Slice Administrator) for administrative tasks.

The Admin UI is a web application, also hosted by Pivotal tc Server, that is responsible for making HTTP REST calls to the Admin API for node administration tasks.

Collector:

The collector is responsible for processing data from solution adapter instances. The collector uses adapters to collect data from various sources and then contacts the GemFire locator for connection information of one or more Controller cache servers to send the collected data.

Controller:

The controller manages the storage and retrieval of the inventory of objects within the system. Queries are performed by leveraging the GemFire MapReduce function, which allows selective querying. This makes data queries efficient, as they are performed only on selected nodes rather than on all nodes.
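The idea behind selective querying can be sketched as follows: a query is routed only to the node whose shard can contain the requested resource, instead of being broadcast to every node. GemFire's real MapReduce/function-execution API is quite different; the node layout, resource names, and data here are made up for illustration.

```python
NUM_NODES = 3

def owner(resource_id):
    """Map a resource to the node that holds its shard (toy routing rule)."""
    return hash(resource_id) % NUM_NODES

# Each node holds a shard of the metric data (resource -> samples).
nodes = {n: {} for n in range(NUM_NODES)}
sample_data = {"vm-01": [10, 12], "vm-02": [55, 60], "vm-03": [5, 7]}
for res, samples in sample_data.items():
    nodes[owner(res)][res] = samples

def query_max(resource_id):
    """Run the 'map' step only on the owning node, not the whole cluster."""
    shard = nodes[owner(resource_id)]
    return max(shard[resource_id])

print(query_max("vm-02"))  # -> 60, touching exactly one node's shard
```

A broadcast query would have to ask all three nodes and discard two empty answers; routing by shard owner avoids that wasted work, which is the efficiency the controller's selective querying is after.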

Analytics:

Analytics is the runtime layer for data analysis. The role of the analytics process is to track the individual state of every metric and use various forms of correlation to determine whether there are problems.

The Analytics layer is responsible for the following tasks:

• Metric calculations
• Dynamic thresholds
• Alerts and alarms
• Metric storage and retrieval from the Persistence layer
• Root cause analysis
• Historic Inventory Server (HIS) version metadata calculations and relationship data

Persistence:

The Persistence layer is the layer where data is persisted to disk. It primarily consists of a series of databases that replace the vCOps 5.x filesystem database (FSDB) and PostgreSQL combination.

Sharding is the term GemFire uses to describe the process of distributing data across multiple systems to ensure that computational, storage, and network loads are evenly distributed across the cluster.
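A minimal sketch of how hash-based sharding spreads load evenly: each key is deterministically assigned to a node via a stable hash, and over many keys the shards end up roughly the same size. The hashing scheme below is generic and not what GemFire actually uses internally.

```python
import hashlib

def shard_for(key, num_nodes):
    """Deterministically assign a key to a node using a stable hash."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

# Distribute 10,000 hypothetical resource keys across a 4-node cluster.
counts = {n: 0 for n in range(4)}
for i in range(10_000):
    counts[shard_for(f"resource-{i}", 4)] += 1

print(counts)  # roughly 2,500 keys per node
```

Because assignment is deterministic, any node can recompute which shard owns a key without a central lookup, and because the hash is uniform, storage and query load stay balanced as the cluster grows.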
