In this article, we will explore the NSX-T 3.0 architecture components and the key concepts behind each of them.
NSX-T provides a complete set of networking and security services such as switching, routing, firewalling, load balancing, QoS, and IDS, entirely in software. These services can be programmatically assembled in arbitrary combinations to produce unique, isolated virtual networks in a matter of seconds.
NSX-T operates through three separate but integrated planes: the Management Plane, the Control Plane, and the Data Plane. These three planes are implemented as sets of processes, modules, and agents residing on two types of nodes: NSX Manager appliances and transport nodes.

With the release of NSX-T 2.4 and later, the management plane and the control plane reside on the NSX management cluster, which comprises three NSX Manager appliances deployed as part of a highly available NSX-T deployment.

Management Plane:
The management plane provides the entry point to the system for both the API and the NSX-T graphical user interface. It is responsible for maintaining user configuration, handling user queries, and performing operational tasks on nodes across all three planes, i.e. management, control, and data. The management plane also handles querying recent status and statistics from the control plane, and sometimes directly from the data plane.
The NSX-T Manager implements the management plane for the NSX-T ecosystem. NSX-T Manager provides the following functionality:
- Serves as a unique entry point for user configuration via RESTful API (CMP, automation) or NSX-T user interface.
- Responsible for storing the desired configuration in its database.
- Pushes the final configuration requested by the user to the control plane, which in turn realizes it as the effective configuration in the data plane.
- Retrieves the desired configuration in addition to system information (e.g., statistics).
- Provides ubiquitous connectivity, consistent security enforcement, and operational visibility via object management and inventory collection across multiple compute domains – up to 16 vCenter Servers, container orchestrators (TAS/TKGI and OpenShift), and clouds (AWS and Azure).
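
As a concrete illustration of the manager acting as the single API entry point and keeper of the desired configuration, here is a minimal Python sketch that authenticates against an NSX Manager and reads the segment intent through the declarative Policy API. The manager address and credentials are placeholders for a lab environment (basic authentication and a self-signed certificate are assumed), not a definitive implementation.

```python
import requests

# Lab-only assumptions: placeholder manager address and credentials,
# certificate verification disabled for a self-signed certificate.
NSX_MANAGER = "https://nsx-mgr-01.lab.local"
AUTH = ("admin", "VMware1!VMware1!")

# Read the desired (intent) configuration of all segments via the Policy API.
resp = requests.get(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments",
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

for segment in resp.json().get("results", []):
    print(segment["id"], segment.get("display_name"))
```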

Control Plane:
The control plane computes the runtime state of the system based on configuration from the management plane. It is also responsible for disseminating topology information reported by the data plane elements and pushing stateless configuration to the forwarding engines. The control plane works with objects such as logical networks, logical ports, and logical routers.
The control plane is split into two layers:
Central Control Plane (aka CCP):
The CCP computes ephemeral runtime state based on configuration from the management plane and disseminates the topology information it receives from the data plane via the LCP.
Local Control Plane (aka LCP):
The LCP monitors local link status and computes ephemeral runtime state for local endpoints based on updates from the data plane and the CCP. It pushes stateless configuration to the forwarding engines in the data plane and reports information back to the CCP.
- The CCP resides on the NSX Manager nodes, whereas the LCP resides on host transport nodes and Edge transport nodes.
- The CCP and the LCP communicate using the NSX-RPC protocol.
- The CCP is logically separated from data plane traffic, which means that a failure in the control plane does not affect existing data plane operations.
The image below shows the placement of the CCP and the LCP.
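
To make the split between desired and realized state more tangible, the sketch below (same placeholder lab address and credentials as before) asks the Policy API for the entities realized from a given intent object, i.e. whether configuration stored by the management plane has been pushed through the CCP/LCP and rendered in the data plane. The intent path "/infra/segments/web-segment" is hypothetical, and the exact endpoint shape and response fields are assumptions based on the NSX-T 3.0 Policy API.

```python
import requests

NSX_MANAGER = "https://nsx-mgr-01.lab.local"   # placeholder lab address
AUTH = ("admin", "VMware1!VMware1!")           # placeholder credentials

# Ask for the realized entities that back a given intent object.
# "/infra/segments/web-segment" is a hypothetical intent path.
resp = requests.get(
    f"{NSX_MANAGER}/policy/api/v1/infra/realized-state/realized-entities",
    params={"intent_path": "/infra/segments/web-segment"},
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

for entity in resp.json().get("results", []):
    print(entity.get("entity_type"), entity.get("state"))
```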

NSX-T Manager:
The NSX Manager and NSX Controller are bundled into a single virtual machine called the NSX Manager appliance. In releases prior to 2.4, there were separate appliances based on role – one management appliance and three controller appliances – so a total of four appliances had to be deployed and managed for NSX. With the release of NSX-T 2.4, the NSX Manager, NSX Policy Manager, and NSX Controller elements co-exist within a common VM.

The benefits of this converged manager appliance include lower management overhead, since there are fewer appliances to manage, and a potential reduction in the total amount of resources consumed (CPU, memory, and disk). With the converged appliance, sizing also only needs to be considered once.
NSX Manager implements the management plane of the NSX ecosystem and provides an aggregated system view. As mentioned earlier, NSX Manager is integrated with NSX Controller in a fully active, three-node clustered configuration.
NSX Manager provides configuration and orchestration of:
- Logical networking components – logical switching and routing
- Networking and Edge services
- Security services and distributed firewall
The NSX Manager provides a web-based UI for managing your NSX-T environment and also hosts the API server that processes API calls.
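
As an example of this configuration and orchestration role, the sketch below creates (or updates) a logical segment through the Policy API's PATCH semantics. The segment name, the transport zone path, and the lab credentials are all assumptions shown purely for illustration; in a real environment the transport zone path would come from your own inventory.

```python
import requests

NSX_MANAGER = "https://nsx-mgr-01.lab.local"   # placeholder lab address
AUTH = ("admin", "VMware1!VMware1!")           # placeholder credentials

# Hypothetical segment definition; the transport zone path must exist
# in your environment and is shown here only as an illustration.
segment = {
    "display_name": "web-segment",
    "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                           "/transport-zones/<overlay-tz-uuid>",
}

# PATCH is idempotent in the Policy API: it creates the object if it does
# not exist and updates it otherwise.
resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-segment",
    json=segment,
    auth=AUTH,
    verify=False,
)
print(resp.status_code)  # 200 indicates the intent was accepted
```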
NSX Controller:
NSX Controller is an advanced distributed state management system that controls virtual networks and overlay transport tunnels. It maintains the effective state of the system and configures the data plane.
The main functions of NSX Controller include:
- Providing control plane functionality, such as logical switching, routing, and distributed firewall.
- Computing all ephemeral runtime states based on the configuration from the management plane.
- Disseminating topology information reported by the data plane elements.
- Pushing stateless configurations to forwarding engines.
Traffic does not pass through the controller; instead, the controller is responsible for providing configuration to other components such as the logical switches, logical routers, and Edge configuration.
To enhance high availability and scalability, the NSX Controller is deployed as a cluster of three instances in the management plane.
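
A quick way to observe this three-node cluster from the outside is the cluster status API. The sketch below reads GET /api/v1/cluster/status with the same placeholder address and credentials; the specific field names printed are assumptions based on the NSX-T 3.0 API response and may differ in your version.

```python
import requests

NSX_MANAGER = "https://nsx-mgr-01.lab.local"   # placeholder lab address
AUTH = ("admin", "VMware1!VMware1!")           # placeholder credentials

resp = requests.get(f"{NSX_MANAGER}/api/v1/cluster/status", auth=AUTH, verify=False)
resp.raise_for_status()
status = resp.json()

# Assumed field names: overall management and control cluster health.
print("Management cluster:", status.get("mgmt_cluster_status", {}).get("status"))
print("Control cluster:   ", status.get("control_cluster_status", {}).get("status"))
```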
Data Plane:
The data plane is responsible for forwarding packets based on the configuration populated by the control plane. It reports topology information to the control plane and maintains packet-level statistics. The data plane also maintains the status of links and handles failover between multiple links or tunnels.
Data plane components include hypervisor transport nodes (ESXi and KVM), NSX Edge transport nodes, and bare-metal transport nodes.
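
Because the data plane reports link and tunnel status upward, that status can be read back through the manager. The sketch below queries the overlay tunnels of a single transport node; the node UUID is a placeholder, and the /tunnels endpoint and its response fields are assumptions based on the NSX-T 3.0 manager API.

```python
import requests

NSX_MANAGER = "https://nsx-mgr-01.lab.local"       # placeholder lab address
AUTH = ("admin", "VMware1!VMware1!")               # placeholder credentials
NODE_ID = "11111111-2222-3333-4444-555555555555"   # placeholder transport node UUID

# Tunnel status as reported by the data plane and aggregated by the manager.
resp = requests.get(
    f"{NSX_MANAGER}/api/v1/transport-nodes/{NODE_ID}/tunnels",
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

for tunnel in resp.json().get("tunnels", []):
    print(tunnel.get("name"), tunnel.get("status"), tunnel.get("remote_ip"))
```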

Transport Node:
The hosts running the local control plane daemons and the forwarding engines that implement the NSX-T data plane are called transport nodes. Transport nodes run an instance of the NSX-T virtual switch, called the NSX Virtual Distributed Switch, or N-VDS.
Transport nodes are required to perform networking (overlay or VLAN) and security functions. On ESXi platforms, the N-VDS is built on top of the vSphere Distributed Switch (VDS). For all other types of transport nodes, the N-VDS is based on the platform-independent Open vSwitch (OVS) and serves as the foundation for the implementation of NSX-T.
A transport node is responsible for forwarding the data plane traffic that originates from VMs, containers, or applications running on bare-metal servers.
NSX-T Data Center supports the following types of transport nodes:
- Hypervisor (ESXi or KVM)
- Bare-metal server (RHEL, CentOS, Ubuntu, SLES, Windows)
- Edge nodes
ESXi and KVM transport nodes can work together: networks and topologies can extend across both ESXi and KVM environments, regardless of the hypervisor type. NSX-T Data Center 3.0 also introduces support for Windows bare-metal servers as transport nodes.
The NSX Manager management plane and the CCP communicate with the transport nodes through the Appliance Proxy Hub (APH) server, using NSX-RPC over TCP ports 1234 and 1235 respectively.
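
To tie the pieces together, the sketch below lists the transport nodes registered with the manager and the deployment state of each one, again using placeholder credentials. GET /api/v1/transport-nodes and the per-node /state sub-resource are part of the manager API; the specific fields printed are assumptions.

```python
import requests

NSX_MANAGER = "https://nsx-mgr-01.lab.local"   # placeholder lab address
AUTH = ("admin", "VMware1!VMware1!")           # placeholder credentials

# List every transport node (hypervisor, bare-metal, and Edge) known to the manager.
nodes = requests.get(
    f"{NSX_MANAGER}/api/v1/transport-nodes", auth=AUTH, verify=False
).json().get("results", [])

for node in nodes:
    # Per-node realization state of the NSX-T configuration.
    state = requests.get(
        f"{NSX_MANAGER}/api/v1/transport-nodes/{node['id']}/state",
        auth=AUTH,
        verify=False,
    ).json()
    print(node.get("display_name"), state.get("state"))
```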

That sums it up, folks. I hope you find it useful.
Cheers!!!