Virtual Maestro

vRealize Operations Manager 8.x Requirements, Sizer, and Maximums


In this post, we will discuss the requirements for vROPS 8.0 and 8.1, the vROPS Sizer tool, and the sizing guidelines for vRealize Operations Manager that determine the configurations used during installation and after deployment.

vROPS Sizer

VMware provides the vROPS Sizer tool to help you size vROPS deployments. You can use it to generate recommended sizing for vRealize Operations Manager 6.5 and later.

The tool is available at https://vropssizer.vmware.com/sizing-wizard/choose-installation.

Once connected, provide the vROPS version you are using or planning to use, and select the sizing guide option, basic or advanced.

On the next screen, enter the expected inventory size for the required components in your environment. Once all details are entered, click View Recommendations.

The tool then provides recommendations based on the inventory details you entered, as in the image below.

vROPS 8.0 and 8.1 Requirements and Maximums

The vRealize Operations Manager virtual appliance must be deployed on hosts running ESXi 6.0 or later. ESXi 6.5 hosts must be running Update 1.

If you are running a previous version of vROPS on an older version of ESXi, such as ESXi 5.5, you must first upgrade vCenter Server to version 6.0 or 6.5, and then upgrade to vRealize Operations Manager 8.0 or later.

You may need to expand vROPS cluster capacity once the vRealize Operations Manager instance outgrows its existing size over time. In that event, you can expand the cluster by adding more nodes of a similar configuration; all nodes in a vROPS cluster should have the same resource configuration.

Below are the details of the default configurations and maximums in vROPS 8.0. During installation, vROPS offers Extra Small, Small, Medium, Large, and Extra Large configuration options.

Appliance Configurations

All appliance configurations share the following requirements:

vCPU:Physical core ratio for data nodes: 1:1 at scale maximums
Network latency for data nodes: < 5 ms
Network latency for agents (to vRealize Operations node or RC): < 20 ms
Datastore latency: Consistently < 10 ms, with possible occasional peaks up to 15 ms

  • Extra Small Configuration
No of vCPUs: 2
Default Memory (GB): 8
Max Memory Configuration (GB): 8
  • Small Configuration
No of vCPUs: 4
Default Memory (GB): 16
Max Memory Configuration (GB): 32
  • Medium Configuration
No of vCPUs: 8
Default Memory (GB): 32
Max Memory Configuration (GB): 64
  • Large Configuration
No of vCPUs: 16
Default Memory (GB): 48
Max Memory Configuration (GB): 96
  • Extra Large Configuration
No of vCPUs: 24
Default Memory (GB): 128
Max Memory Configuration (GB): 128
  • Remote Collector (Standard) Configuration
No of vCPUs: 2
Default Memory (GB): 4
Max Memory Configuration (GB): 8
  • Remote Collector (Large) Configuration
No of vCPUs: 4
Default Memory (GB): 16
Max Memory Configuration (GB): 32
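
The resource figures above can be kept in a small lookup table for scripting or validation. Below is a minimal Python sketch; the tier names (including the two remote collector sizes) are assumed from the order and values of the configuration list above and are not official identifiers:

```python
# Appliance resource table transcribed from the configurations above.
# Values are (vCPUs, default memory GB, max memory GB).
# Tier names are illustrative labels, not official VMware identifiers.
APPLIANCE_SIZES = {
    "extra_small": (2, 8, 8),
    "small": (4, 16, 32),
    "medium": (8, 32, 64),
    "large": (16, 48, 96),
    "extra_large": (24, 128, 128),
    "remote_collector_standard": (2, 4, 8),
    "remote_collector_large": (4, 16, 32),
}

def resources_for(tier: str) -> tuple:
    """Return (vcpus, default_mem_gb, max_mem_gb) for a deployment tier."""
    try:
        return APPLIANCE_SIZES[tier]
    except KeyError:
        raise ValueError(f"unknown tier {tier!r}; choose from {sorted(APPLIANCE_SIZES)}")

print(resources_for("medium"))  # (8, 32, 64)
```

A table like this makes it easy to pre-check that a target ESXi host has enough free capacity before deploying the appliance.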

Below are the Objects and Metrics maximums for each of the appliance configurations listed above.

Objects and Metrics maximums

  • Extra Small
Single-Node Maximum Objects: 300
Single-Node Maximum Collected Metrics: 70,000
Max number of nodes in a cluster: 1
Multi-Node Max Objects Per Node: NA
Multi-Node Max Collected Metrics Per Node: NA
Max Objects with max supported number of nodes: 350
Max Metrics with max supported number of nodes: 70,000
  • Small
Single-Node Maximum Objects: 5,000
Single-Node Maximum Collected Metrics: 800,000
Max number of nodes in a cluster: 2
Multi-Node Max Objects Per Node: 3,000
Multi-Node Max Collected Metrics Per Node: 700,000
Max Objects with max supported number of nodes: 6,000
Max Metrics with max supported number of nodes: 1,400,000
  • Medium
Single-Node Maximum Objects: 15,000
Single-Node Maximum Collected Metrics: 2,500,000
Max number of nodes in a cluster: 8
Multi-Node Max Objects Per Node: 8,500
Multi-Node Max Collected Metrics Per Node: 2,000,000
Max Objects with max supported number of nodes: 68,000
Max Metrics with max supported number of nodes: 16,000,000
  • Large
Single-Node Maximum Objects: 20,000
Single-Node Maximum Collected Metrics: 4,000,000
Max number of nodes in a cluster: 16
Multi-Node Max Objects Per Node: 16,500
Multi-Node Max Collected Metrics Per Node: 3,000,000
Max Objects with max supported number of nodes: 200,000
Max Metrics with max supported number of nodes: 37,500,000
  • Extra Large
Single-Node Maximum Objects: 45,000
Single-Node Maximum Collected Metrics: 10,000,000
Max number of nodes in a cluster: 6
Multi-Node Max Objects Per Node: 40,000
Multi-Node Max Collected Metrics Per Node: 7,500,000
Max Objects with max supported number of nodes: 240,000
Max Metrics with max supported number of nodes: 45,000,000
  • Remote Collector (Standard)
Single-Node Maximum Objects: 6,000
Single-Node Maximum Collected Metrics: 1,200,000
Max number of nodes in a cluster: 60
  • Remote Collector (Large)
Single-Node Maximum Objects: 32,000
Single-Node Maximum Collected Metrics: 6,500,000
Max number of nodes in a cluster: 60
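
The per-node figures above can be used to estimate how many data nodes an inventory needs. Below is a minimal Python sketch; the tier labels and limits are taken from the maximums listed above (labels assumed from the order of the source table), and the estimate is a rough planning aid rather than a substitute for the official vROPS Sizer:

```python
import math

# Multi-node per-node maximums from the table above:
# (max objects per node, max collected metrics per node, max nodes in cluster).
PER_NODE_LIMITS = {
    "small": (3_000, 700_000, 2),
    "medium": (8_500, 2_000_000, 8),
    "large": (16_500, 3_000_000, 16),
    "extra_large": (40_000, 7_500_000, 6),
}

def nodes_needed(tier: str, objects: int, metrics: int) -> int:
    """Estimate data nodes required for an inventory, raising if the
    cluster maximum for the tier would be exceeded."""
    obj_per_node, met_per_node, max_nodes = PER_NODE_LIMITS[tier]
    nodes = max(math.ceil(objects / obj_per_node),
                math.ceil(metrics / met_per_node), 1)
    if nodes > max_nodes:
        raise ValueError(f"{tier} cluster supports at most {max_nodes} nodes; "
                         f"this inventory needs {nodes}")
    return nodes

print(nodes_needed("medium", objects=20_000, metrics=5_000_000))  # 3
```

The sizing driver is whichever dimension (objects or metrics) runs out first, which is why the estimate takes the maximum of the two ceilings.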

End Point Operations agent Maximums

Maximum number of agents per node, by configuration:
  • Extra Small: 100
  • Small: 300
  • Medium: 1,200
  • Large: 2,500
  • Extra Large: 2,500
  • Remote Collector (Standard): 250
  • Remote Collector (Large): 2,000

vRealize Application Remote Collector Telegraf agent maximums

Network latency for the Application Remote Collector must be less than 10 ms for all configurations. Maximum number of agents per node, by configuration:
  • Extra Small: 100
  • Small: 500
  • Medium: 1,500
  • Large: 3,000
  • Extra Large: 4,000
  • Remote Collector (Standard): 250
  • Remote Collector (Large): 2,500

Other Maximums

Maximum number of remote collectors: 60
Maximum number of vCenter adapter instances: 120
Maximum number of vCenters on a single collector: 100
Maximum number of concurrent users per node: 10
Maximum number of certified concurrent users: 300
Maximum number of vRealize Application Remote Collector Telegraf agents: 10,000
Maximum number of End Point Operations agents: 10,000

Telegraf agents using vRealize Application Remote Collector

  • Small: 4 vCPUs, 8 GB default memory, up to 500 Telegraf agents
  • Medium: 8 vCPUs, 16 GB default memory, up to 3,000 Telegraf agents
  • Large: 16 vCPUs, 24 GB default memory, up to 6,000 Telegraf agents
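
Choosing the right Application Remote Collector size comes down to finding the smallest size whose agent limit covers your deployment. A minimal Python sketch of that selection, using the agent limits from the table above:

```python
# Maximum supported Telegraf agents per Application Remote Collector size,
# ordered smallest to largest (figures from the table above).
ARC_AGENT_LIMITS = [("small", 500), ("medium", 3_000), ("large", 6_000)]

def arc_size_for(agent_count: int) -> str:
    """Pick the smallest collector size that supports the given agent count."""
    for size, limit in ARC_AGENT_LIMITS:
        if agent_count <= limit:
            return size
    raise ValueError(f"{agent_count} agents exceeds the Large collector "
                     f"limit of 6,000; deploy additional collectors")

print(arc_size_for(1200))  # medium
```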

Sizing guidelines for vRealize Operations Continuous Availability

This is a new feature introduced with vROPS 8.0. Continuous Availability allows cluster nodes to be stretched across two fault domains placed in two different sites, similar to a vSAN stretched cluster. Continuous Availability requires an equal number of nodes in each fault domain, plus a witness node placed at a third site to monitor the cluster and avoid split-brain scenarios.

Maximum number of nodes in each Continuous Availability fault domain, by configuration:
  • Small: 1
  • Medium: 4
  • Large: 8
  • Extra Large: 5

Network requirements for Continuous Availability

Between fault domains:
Latency: < 10 ms, with peaks up to 20 ms during 20-second intervals
Packet Loss: peaks up to 2% during 20-second intervals
Bandwidth: 10 Gbit/s

Between the witness node and fault domains:
Latency: < 30 ms, with peaks up to 60 ms during 20-second intervals
Packet Loss: peaks up to 2% during 20-second intervals
Bandwidth: 10 Mbit/s
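
When validating a site pair before enabling Continuous Availability, the measured link characteristics can be checked against these thresholds. Below is a minimal Python sketch; the threshold values come from the table above, while the measurement dictionary shape is an illustrative assumption (you would feed it values from your own ping/iperf results):

```python
# Continuous Availability network thresholds from the table above.
FAULT_DOMAIN_LINK = {"latency_ms": 10, "peak_latency_ms": 20, "loss_pct": 2.0}
WITNESS_LINK = {"latency_ms": 30, "peak_latency_ms": 60, "loss_pct": 2.0}

def link_ok(measured: dict, limits: dict) -> bool:
    """Check measured latency and packet loss against the CA requirements.
    `measured` is expected to carry latency_ms, peak_latency_ms, loss_pct."""
    return (measured["latency_ms"] < limits["latency_ms"]
            and measured["peak_latency_ms"] <= limits["peak_latency_ms"]
            and measured["loss_pct"] <= limits["loss_pct"])

# Example: a healthy inter-fault-domain link vs. one with too much latency.
print(link_ok({"latency_ms": 8, "peak_latency_ms": 15, "loss_pct": 0.5},
              FAULT_DOMAIN_LINK))  # True
print(link_ok({"latency_ms": 12, "peak_latency_ms": 15, "loss_pct": 0.5},
              FAULT_DOMAIN_LINK))  # False
```

Bandwidth is deliberately left out of the check above, since it is measured differently (sustained throughput rather than per-interval samples); verify the 10 Gbit/s and 10 Mbit/s figures separately.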

Sizing Guidelines

For information about the sizing guidelines for vRealize Operations Manager 6.x, 7.x, and 8.x, see the KB articles listed in the following table:

Remember that the sizing guides are version specific; use the sizing guide that matches the vRealize Operations version you plan to deploy.

vRealize Operations version and its Knowledge Base article:
  • 8.1: KB 78495
  • 8.0.x: KB 75162
  • 7.5: KB 67752
  • 7.0: KB 57903
  • 6.7: KB 54370
  • 6.6, 6.6.1: KB 2150421
  • 6.5: KB 2148829
  • 6.4: KB 2147780
  • 6.3, 6.3.1: KB 2146615

That is all for this post. I hope it’s helpful.
