Prerequisites

This chapter provides release-specific prerequisite information for your deployment of Cisco Nexus Dashboard Fabric Controller.

Prerequisites

Before you install Cisco Nexus Dashboard Fabric Controller on Cisco Nexus Dashboard, you must meet the following prerequisites:

Nexus Dashboard

You must have a Cisco Nexus Dashboard cluster deployed and its fabric connectivity configured, as described in the Cisco Nexus Dashboard Deployment Guide, before proceeding with the additional requirements and the Nexus Dashboard Fabric Controller service installation described here.

Nexus Dashboard Fabric Controller Release    Minimum Nexus Dashboard Release
Release 12.0.1a                              Cisco Nexus Dashboard, Release 2.1.1e or later

Nexus Dashboard Networks

When first configuring Nexus Dashboard, you must provide two IP addresses on every node, one for each of the two Nexus Dashboard interfaces: one connected to the Data Network and the other to the Management Network. The data network is typically used for the nodes' clustering and for north-south connectivity to the physical network. The management network typically provides access to the Cisco Nexus Dashboard Web UI, CLI, and API.

To enable the Nexus Dashboard Fabric Controller, the Management and Data interfaces on a Nexus Dashboard node must be in different subnets. The corresponding interfaces across the different nodes that belong to the same Nexus Dashboard cluster must be within the same Layer 2 network and Layer 3 subnet.
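
To illustrate these addressing rules, the following Python sketch uses the standard library ipaddress module to validate a hypothetical three-node addressing plan. The node names and addresses are invented for the example; substitute your own plan.

  # Illustrative check of the Nexus Dashboard addressing rules above.
  # Node names and addresses are hypothetical examples, not recommendations.
  import ipaddress

  nodes = {
      "nd-node1": {"mgmt": "10.0.10.11/24", "data": "10.0.20.11/24"},
      "nd-node2": {"mgmt": "10.0.10.12/24", "data": "10.0.20.12/24"},
      "nd-node3": {"mgmt": "10.0.10.13/24", "data": "10.0.20.13/24"},
  }

  def subnet(cidr):
      # Return the Layer 3 subnet that an interface address belongs to.
      return ipaddress.ip_interface(cidr).network

  # Rule 1: Management and Data interfaces of a node are in different subnets.
  for name, ifaces in nodes.items():
      assert subnet(ifaces["mgmt"]) != subnet(ifaces["data"]), \
          f"{name}: management and data interfaces share a subnet"

  # Rule 2: each network's interfaces share one subnet across all nodes.
  for network in ("mgmt", "data"):
      subnets = {subnet(ifaces[network]) for ifaces in nodes.values()}
      assert len(subnets) == 1, f"{network} interfaces span multiple subnets"

  print("Addressing plan satisfies both subnet rules.")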

Connectivity between the Nexus Dashboard nodes is required on both networks, with a round-trip time (RTT) not exceeding 50 ms. Other applications running on the same Nexus Dashboard cluster may have lower RTT requirements, and you must always use the lowest RTT requirement when deploying multiple applications in the same Nexus Dashboard cluster. Refer to the Cisco Nexus Dashboard Deployment Guide for more information.
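
Because the lowest RTT requirement across cohosted services is the one that applies, a quick way to sanity-check a cluster is to compare each measured node-to-node RTT against that minimum. The Python sketch below assumes hypothetical service limits (apart from the 50 ms NDFC figure above) and hypothetical measurements.

  # A minimal sketch of the "use the lowest RTT requirement" rule. Apart from
  # NDFC's 50 ms ceiling from this guide, the service limits and measured
  # RTTs below are hypothetical; gather real measurements with your own probes.
  service_rtt_limits_ms = {
      "Nexus Dashboard Fabric Controller": 50,  # from this guide
      "another-cohosted-service": 150,          # hypothetical
  }

  measured_rtts_ms = {
      ("nd-node1", "nd-node2"): 12.4,   # hypothetical measurements
      ("nd-node1", "nd-node3"): 48.9,
      ("nd-node2", "nd-node3"): 61.0,
  }

  # The strictest (lowest) ceiling across all cohosted services applies.
  budget = min(service_rtt_limits_ms.values())

  for (a, b), rtt in measured_rtts_ms.items():
      status = "OK" if rtt <= budget else "EXCEEDS BUDGET"
      print(f"{a} <-> {b}: {rtt:.1f} ms (budget {budget} ms) {status}")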

Nexus Dashboard Fabric Controller traffic maps to the Nexus Dashboard networks as follows:

  • Any traffic to and from Cisco Nexus Dashboard Fabric Controller: Data network

  • Intra-cluster communication: Data network

  • Audit log streaming (Splunk/syslog): Management network

  • Remote backup: Management network
Table 1. Network Requirements for NDFC on Nexus Dashboard

Layer 2 adjacent Management and Data Interfaces

  • Support for Data and Management in the same subnet: Not supported

  • Persistent IPs for LAN, one of the following:

      • 2 IPs in the management network if using the default LAN Device Management Connectivity setting

      • 2 IPs in the data network if setting LAN Device Management Connectivity to Data

    Plus one IP per fabric for the Endpoint Locator (EPL) in the data network

    Plus one IP for the Telemetry receiver in the data or management network if IP Fabric for Media is enabled

    Plus one IP for SNMP and Syslog in the data or management network

  • Persistent IPs for SAN:

      • 2 IPs in the data network

      • Plus one IP per node in the data network for the SAN Insights receiver, if enabled

      • Plus one IP for SNMP and Syslog

Layer 3 adjacent Management and Data Interfaces

  • Support for Data and Management in the same subnet: Not supported

  • Persistent IPs for LAN:

      • 2 IPs in the data network if setting LAN Device Management Connectivity to Data

      • Plus one IP per fabric for the Endpoint Locator (EPL) in the data network

      • Plus one IP for the Telemetry receiver in the data or management network if IP Fabric for Media is enabled

  • Persistent IPs for SAN:

      • 2 IPs in the data network

      • Plus one IP per node in the data network for the SAN Insights receiver, if enabled

  • In Layer 3 mode, all persistent IPs must belong to a dedicated subnet that is separate from both the management and data subnets.
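
To make the arithmetic in Table 1 concrete, the following Python sketch counts the persistent IPs a LAN deployment would need. The function and parameter names are illustrative only and are not an NDFC API.

  # Worked example of the persistent IP arithmetic in Table 1 for a LAN
  # deployment. The function and parameter names are illustrative only.

  def lan_persistent_ips(mgmt_connectivity, epl_fabrics, ipfm_enabled):
      # Count persistent IPs per network for an NDFC LAN deployment.
      mgmt = data = 0
      if mgmt_connectivity == "Management":   # default setting
          mgmt += 2                           # 2 IPs in the management network
      else:                                   # LAN Device Management = Data
          data += 2                           # 2 IPs in the data network
      data += epl_fabrics                     # one IP per fabric for EPL
      if ipfm_enabled:
          data += 1   # Telemetry receiver (may also live in management)
      data += 1       # SNMP/Syslog receiver (may also live in management)
      return {"management": mgmt, "data": data}

  # Example: default connectivity, EPL on 3 fabrics, IP Fabric for Media on.
  print(lan_persistent_ips("Management", epl_fabrics=3, ipfm_enabled=True))
  # -> {'management': 2, 'data': 5}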

Virtual Nexus Dashboard (vND) Prerequisites

For virtual Nexus Dashboard deployments, each vND node has two interfaces, or vNICs. The Data vNIC maps to the bond0 (also known as bond0br) interface, and the Management vNIC maps to the bond1 (also known as bond1br) interface. You must enable (accept) promiscuous mode on the port groups associated with the Nexus Dashboard Management and/or Data vNICs where IP stickiness is required.

Persistent IP addresses are assigned to pods (for example, the SNMP Trap/Syslog receiver, an Endpoint Locator instance per fabric, or the SAN Insights receiver). Every pod in Kubernetes can have multiple virtual interfaces. Specifically for IP stickiness, an extra virtual interface is associated with the pod and is allocated a free IP address from the external service IP pool. This pod interface has its own unique MAC address, which differs from the MAC addresses associated with the vND vNICs. All north-south communication to and from these pods flows through the same bond interface.

By default, VMware ESXi checks whether the traffic flowing out of a particular VM vNIC uses the source MAC address associated with that vNIC. For NDFC pods with an external service IP, traffic is sourced from the persistent IP address of the pod, with the pod MAC address of its virtual interface. Therefore, you must enable the required settings on the VMware side to allow this traffic to flow seamlessly in and out of the vND node.
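As an illustration only, the following Python sketch shows how promiscuous mode could be enabled on a standard-vSwitch port group using pyVmomi, VMware's Python SDK for the vSphere API. The host name, credentials, and port group name are placeholders; distributed virtual switches use a different API, and the same change can be made interactively in the vSphere Client. Verify the procedure against your vSphere version.

  # Sketch: enable promiscuous mode on a standard-vSwitch port group with
  # pyVmomi. Host, credentials, and port-group name are placeholders.
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  ctx = ssl._create_unverified_context()   # lab use only; validate certs in production
  si = SmartConnect(host="esxi.example.com", user="root",
                    pwd="password", sslContext=ctx)
  try:
      content = si.RetrieveContent()
      view = content.viewManager.CreateContainerView(
          content.rootFolder, [vim.HostSystem], True)
      host = view.view[0]                  # first ESXi host found
      net_sys = host.configManager.networkSystem

      for pg in net_sys.networkInfo.portgroup:
          if pg.spec.name == "ND-Data-PG":  # placeholder port-group name
              spec = pg.spec
              spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy(
                  allowPromiscuous=True)
              net_sys.UpdatePortGroup(pgName=spec.name, portgrp=spec)
              print(f"Enabled promiscuous mode on {spec.name}")
  finally:
      Disconnect(si)
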

For more information, refer to Cisco Nexus Dashboard Deployment Guide.

Nexus Dashboard Cluster Sizing

Nexus Dashboard supports cohosting of services. Depending on the type and number of services you choose to run, you may need to deploy extra worker nodes in your cluster. For cluster sizing information and the recommended number of nodes for specific use cases, see the Cisco Nexus Dashboard Capacity Planning tool.

If you plan to host other applications in addition to the Nexus Dashboard Fabric Controller, ensure that you deploy and configure additional Nexus Dashboard nodes based on the cluster sizing tool recommendation, as described in the Cisco Nexus Dashboard User Guide, which is also available directly from the Nexus Dashboard Web UI.

Network Time Protocol (NTP)

Nexus Dashboard Fabric Controller uses NTP for clock synchronization, so you must have an NTP server configured in your environment.

Clocks on all nodes must be synchronized to within one second of each other. A delta of more than one second between any two nodes can affect the database consistency mechanism between the nodes.
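
As a quick way to check this requirement, the following Python sketch estimates pairwise clock skew by querying each node's NTP offset. It assumes the third-party ntplib package (pip install ntplib), placeholder node names, and that each node answers NTP queries.

  # Sketch: estimate clock skew across cluster nodes via NTP queries.
  # Requires the third-party ntplib package; node names are placeholders,
  # and each node must respond to NTP requests for this to work.
  import itertools
  import ntplib

  nodes = ["nd-node1.example.com", "nd-node2.example.com",
           "nd-node3.example.com"]
  client = ntplib.NTPClient()

  # Offset of each node's clock relative to this machine, in seconds.
  offsets = {n: client.request(n, version=3).offset for n in nodes}

  for a, b in itertools.combinations(nodes, 2):
      delta = abs(offsets[a] - offsets[b])
      status = "OK" if delta <= 1.0 else "OUT OF SYNC (> 1 s)"
      print(f"{a} vs {b}: delta {delta:.3f} s {status}")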