Prerequisites and Guidelines
Before you proceed with deploying the Nexus Dashboard cluster in VMware ESX, you must:
- Ensure that the ESX form factor supports your scale and services requirements.
  Scale and services support and co-hosting vary based on the cluster form factor and the specific services you plan to deploy. You can use the Nexus Dashboard Capacity Planning tool to verify that the virtual form factor satisfies your deployment requirements.
  Note: Some services (such as Nexus Dashboard Fabric Controller) may require only a single ESX virtual node for one or more specific use cases. In that case, the capacity planning tool will indicate the requirement and you can simply skip the additional node deployment step in the following section.
  However, if you have to deploy a mix of App and Data nodes, for example if you plan to deploy Nexus Dashboard Insights or co-host multiple services in the same cluster, you must ensure that the Data nodes are deployed first as the initial cluster's 3 master nodes. You can then add the App nodes as the worker nodes, as described in the Cisco Nexus Dashboard User Guide.
- Review and complete the general prerequisites described in Deployment Overview and Requirements.
  Note that this document describes how to initially deploy the base Nexus Dashboard cluster. If you want to expand an existing cluster with additional nodes (such as worker or standby nodes), see the "Infrastructure Management" chapter of the Cisco Nexus Dashboard User Guide instead, which is available from the Nexus Dashboard UI or online at Cisco Nexus Dashboard User Guide.
- Review and complete any additional prerequisites described in the Release Notes for the services you plan to deploy.
- When deploying in VMware ESX, you can deploy two types of nodes:
  - Data Node—node profile designed for data-intensive applications, such as Nexus Dashboard Insights
  - App Node—node profile designed for non-data-intensive applications, such as Nexus Dashboard Orchestrator
- Ensure you have enough system resources:
Table 1. Deployment Requirements (Release 2.2.x)

Data Node Requirements:
- VMware ESXi 6.7, 7.0, 7.0.1, 7.0.2, or 7.0.3
- VMware vCenter 6.x, 7.0.1, or 7.0.2, if deploying using vCenter
- Each VM requires the following:
  - 32 vCPUs with physical reservation of at least 2.2GHz
  - 128GB of RAM with physical reservation
  - 3TB SSD storage for the data volume and an additional 50GB for the system volume
    Data nodes must be deployed on storage with the following minimum performance requirements:
    - The SSD must be attached to the data store directly or in JBOD mode if using a RAID Host Bus Adapter (HBA)
    - The SSDs must be optimized for Mixed Use/Application (not Read-Optimized)
    - 4K Random Read IOPS: 93,000
    - 4K Random Write IOPS: 31,000
- We recommend that each Nexus Dashboard node is deployed in a different ESXi server.
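The 4K random IOPS minimums above can be spot-checked before deployment with the fio benchmarking tool, run from a Linux host with direct access to the disk backing the Data node datastore. This is a sketch only: the device path `/dev/sdb` and the job parameters are illustrative assumptions, and the write test destroys data on the target device, so run it only against an empty disk.

```shell
# Illustrative 4K random read IOPS test (device path is an assumption).
fio --name=rand-read --filename=/dev/sdb --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting

# Illustrative 4K random write IOPS test.
# WARNING: destroys data on the target device.
fio --name=rand-write --filename=/dev/sdb --rw=randwrite --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
```

Compare the IOPS values fio reports against the 93,000 read / 31,000 write minimums listed above.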
App Node Requirements:
- VMware ESXi 6.7, 7.0, 7.0.1, 7.0.2, or 7.0.3
- VMware vCenter 6.x, 7.0.1, or 7.0.2, if deploying using vCenter
- Each VM requires the following:
  - 16 vCPUs with physical reservation of at least 2.2GHz
  - 64GB of RAM with physical reservation
  - 500GB HDD or SSD storage for the data volume and an additional 50GB for the system volume
    Some services require App nodes to be deployed on faster SSD storage while other services support HDD. Check the Nexus Dashboard Capacity Planning tool to ensure that you use the correct type of storage.
- We recommend that each Nexus Dashboard node is deployed in a different ESXi server.
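The CPU and memory reservations above can also be applied from the command line with the govc CLI, as a sketch; the VM name `nd-node1` is a placeholder, and the same settings are available in the vSphere client under Edit Settings > Resources.

```shell
# Sketch: size a Data node VM and fully reserve its resources,
# per the requirements above. "nd-node1" is a hypothetical VM name.
govc vm.change -vm nd-node1 -c 32 -m 131072
govc vm.change -vm nd-node1 -cpu.reservation 70400   # 32 vCPUs x 2.2GHz, in MHz
govc vm.change -vm nd-node1 -mem.reservation 131072  # full 128GB reservation, in MB
```

For an App node, the corresponding values would be 16 vCPUs (35200 MHz reservation) and 65536 MB of memory.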
- If you plan to install Nexus Dashboard Insights with NDFC/DCNM fabrics and use the Persistent IPs functionality over Layer 2 (IPs configured as part of the management and data subnets), you must enable promiscuous mode for both the management and data network interface port groups, as described in https://kb.vmware.com/s/article/1004099.
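On a standard vSwitch, promiscuous mode can be enabled per port group from the ESXi shell, as sketched below; the port group names `nd-mgmt` and `nd-data` are assumptions and should be replaced with the port groups actually used for the Nexus Dashboard management and data networks. On a distributed switch, set the equivalent security policy in the vSphere client instead.

```shell
# Allow promiscuous mode on the (hypothetical) management and data
# port groups of a standard vSwitch.
esxcli network vswitch standard portgroup policy security set \
    -p "nd-mgmt" --allow-promiscuous true
esxcli network vswitch standard portgroup policy security set \
    -p "nd-data" --allow-promiscuous true
```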
- After each node's VM is deployed, ensure that VMware Tools periodic time synchronization is disabled, as described in the deployment procedure in the next section.
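Periodic time synchronization is controlled by the `tools.syncTime` VMX option, which can be cleared after deployment with govc, as a sketch; the VM name is a placeholder, and the same option can be unchecked in the vSphere client under VM Options > VMware Tools.

```shell
# Disable VMware Tools periodic time sync for a (hypothetical) node VM
# by setting the tools.syncTime VMX option to 0.
govc vm.change -vm nd-node1 -e tools.syncTime=0
```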
- VMware vMotion is not supported for Nexus Dashboard cluster nodes.
- VMware Distributed Resource Scheduler (DRS) is not supported for Nexus Dashboard cluster nodes.
- Because Nexus Dashboard is a platform infrastructure, it is not possible to bring down all services. In other words, if you want to take a snapshot of the virtual machine (such as for debugging purposes), the snapshot must be taken with all Nexus Dashboard services running.
- You can choose to deploy the nodes directly in ESXi or using vCenter.
  If you want to deploy using vCenter, follow the steps described in Deploying Nexus Dashboard Using VMware vCenter.
  If you want to deploy directly in ESXi, follow the steps described in Deploying Nexus Dashboard Directly in VMware ESXi.
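As a rough illustration of the direct-ESXi path, an OVA can be pushed to a host with VMware's ovftool; the file name, datastore, network names, and host address below are all placeholders, and the authoritative steps, including the OVA configuration properties, are in Deploying Nexus Dashboard Directly in VMware ESXi.

```shell
# Hypothetical example: deploy a node OVA straight to an ESXi host.
# All names and addresses here are placeholders.
ovftool --acceptAllEulas --datastore=datastore1 --name=nd-node1 \
    --net:mgmt0="nd-mgmt" --net:fabric0="nd-data" \
    nexus-dashboard.ova vi://root@esxi-host.example.com/
```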
ESX Host Network Connectivity
If you plan to install the Nexus Dashboard Insights or Fabric Controller service and use the Persistent IPs feature, you must ensure that the ESX host where the cluster nodes are deployed has a single logical uplink. In other words, the host is connected via a single link, port channel (PC), or virtual port channel (vPC), and not via dual Active/Active (A/A) or Active/Standby (A/S) links without a PC/vPC.
The following summarizes the supported and unsupported network connectivity configurations for the ESX host where the nodes are deployed:
- If the ESX host is connected directly, the following configurations are supported:
  - A/A uplinks of the Port-Group or virtual switch with PC or vPC
  - Single uplink of the Port-Group or virtual switch
  - Port-Channel used for the uplink of the Port-Group or virtual switch
  A/A or A/S uplinks of the Port-Group or virtual switch without PC or vPC are not supported.
- If the ESX host is connected via a UCS Fabric Interconnect (or equivalent), the following configurations are supported:
  - A/S uplinks of the Port-Group or virtual switch at the UCS Fabric Interconnect level without PC or vPC
    In this case, the Active/Standby links are based on the server technology, such as Fabric Failover for Cisco UCS, and not on the ESXi hypervisor level.
  - Single uplink of the Port-Group or virtual switch
  A/A or A/S uplinks of the Port-Group or virtual switch at the hypervisor level without PC or vPC are not supported.
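To review how a standard vSwitch on the ESX host is actually uplinked (a single uplink, or multiple uplinks that must then be bundled into a PC/vPC upstream), the ESXi shell can be queried, as sketched below; the port group name is a placeholder, and distributed switches must be inspected through the vSphere client instead.

```shell
# List standard vSwitches with their physical uplinks.
esxcli network vswitch standard list

# Show the NIC teaming (active/standby uplink) policy for a
# hypothetical port group name.
esxcli network vswitch standard portgroup policy failover get -p "nd-data"
```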