NDFC-12-1-3b-deployment

Updated: July 16, 2024

Bias-Free Language

The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.


Introduction

Cisco Nexus Dashboard Fabric Controller (NDFC), formerly known as Data Center Network Manager (DCNM), runs exclusively as an application service on top of the Cisco Nexus Dashboard (ND). Nexus Dashboard uses Kubernetes at its core with customized extensions, creating a secure, scaled-out platform for microservices-based application deployment. Nexus Dashboard provides Active/Active high availability (HA) for all applications running on top of the cluster.

The NDFC 12.1.3b release introduces several new features, notably pure IPv6 deployment and management capability. Prior ND releases supported pure IPv4 or dual-stack IPv4/IPv6 configurations for the cluster nodes. With release 3.0(1), ND now supports pure IPv4, pure IPv6, and dual-stack IPv4/IPv6 configurations for the cluster nodes and services. These new deployment models are the focus of this paper.


NDFC can be deployed to manage three fabric types: LAN, IPFM, and SAN. LAN stands for Local Area Network; NDFC supports two types of LAN deployments: Brownfield deployments apply to existing fabrics, while Greenfield deployments are for new fabrics. For more information on LAN deployments, please refer to the NDFC 12.1.3b Release Notes and Enhanced Classic LAN in Cisco NDFC. IPFM stands for IP Fabric for Media; the IPFM fabric feature is a specific type of LAN fabric, and it must be specifically enabled. For more information, please refer to the NDFC 12.1.3 Deployment Guide. SAN stands for Storage Area Networking; NDFC provides complete lifecycle management and automation for Cisco MDS and Nexus deployments spanning SAN. For more information on SAN deployments, please refer to Unlocking SAN Innovation with Cisco NDFC.

 

You can deploy NDFC on either a physical Nexus Dashboard cluster (pND) or a virtual Nexus Dashboard cluster (vND). In either case, as a native microservices-based application, NDFC supports true scale-out: simply adding extra nodes to the Nexus Dashboard cluster increases the system scale. The system requirements and qualified scale support depend on the Nexus Dashboard deployment model. Refer to the Networking Requirements section to validate NDFC verified scale information.

Networking with Nexus Dashboard

As an application that runs on top of the Cisco Nexus Dashboard, NDFC uses the networking interfaces of the Nexus Dashboard to manage, automate, configure, maintain, and monitor the Cisco Nexus and MDS family of switches. In this section, we will briefly review networking guidelines for the Nexus Dashboard cluster.

Each Nexus Dashboard node in a cluster has two interfaces, each in a different subnet:

     Management interface.

     Data (also known as fabric) interface.

 

Therefore, during deployment of the Nexus Dashboard cluster, you must provide two IP addresses/subnets for each node that will be part of the cluster. At deployment time, you may choose whether to deploy a single-node or 3-node Nexus Dashboard cluster. Single-node Nexus Dashboard cluster deployments support NDFC IP Fabric for Media and SAN Controller production deployments, as well as LAN Controller lab deployments (<=25 switches). A minimum of three Nexus Dashboard nodes is required for all production NDFC LAN Controller deployments.


Figure 1: Feature Management

As the name implies, the Nexus Dashboard management interface connects to the management network, and it typically provides web/API access to the Nexus Dashboard cluster. The Nexus Dashboard data interface typically provides IP reachability to the physical data center network infrastructure.

This section describes the purpose and functionality of the networks as they are used by the Nexus Dashboard services.

Management Network

The management network is used for these functions:

     Accessing the Nexus Dashboard GUI (graphical user interface).

     Accessing the Nexus Dashboard CLI (command-line interface) via SSH (Secure Shell).

     DNS (Domain Name System) and NTP (Network Time Protocol) communication.

     Nexus Dashboard firmware upload.

     Installing applications from the Cisco DC App Center (App Store).

     Intersight device connection.

Data Network

The data network is used for these functions:

     Nexus Dashboard clustering.

     Application-to-application communication (SMTP (Simple Mail Transfer Protocol) and SNMP (Simple Network Management Protocol) forwarding).

Networking Requirements

     Two logical interfaces are required per Nexus Dashboard node:

    bond1br (also known as the Nexus Dashboard management interface).

    bond0br (also known as the Nexus Dashboard data interface).

     To enable NDFC on a Nexus Dashboard cluster, the Nexus Dashboard management and data interfaces must be in different subnets. Therefore, a minimum of two IP subnets is required for deployment of such a cluster.
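As an illustrative pre-flight check (a sketch, not a Cisco tool), the different-subnets rule can be verified against a deployment plan with Python's standard ipaddress module; the CIDR values below are hypothetical examples:

```python
# Sketch: confirm that the planned management and data subnets for a Nexus
# Dashboard node do not overlap, as NDFC requires them to be different subnets.
import ipaddress

def subnets_are_distinct(mgmt_cidr: str, data_cidr: str) -> bool:
    """Return True if the management and data subnets do not overlap."""
    mgmt = ipaddress.ip_network(mgmt_cidr, strict=False)
    data = ipaddress.ip_network(data_cidr, strict=False)
    return not mgmt.overlaps(data)

# Example plan: management on VLAN 23's subnet, data on VLAN 10's subnet.
print(subnets_are_distinct("192.168.23.0/24", "192.168.10.0/24"))  # True
print(subnets_are_distinct("192.168.23.0/24", "192.168.23.0/24"))  # False
```

The same check applies per address family in a dual-stack plan (note that SAN deployments, covered later, relax this rule).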

     Note: the capability to configure nodes within the cluster with either Layer 2 or Layer 3 adjacency was enabled in release 12.1.1e (NDFC on Nexus Dashboard release 2.2.1h). For more information on Layer 3 reachability between cluster nodes, see Layer 3 Reachability between Cluster Nodes.

o    L2 vs. L3 cluster deployments are not discussed in detail in this paper.

     NDFC can manage switches in two ways: out-of-band (OOB) or in-band (IB) management.

o    In-band management (IB) means that you connect to an IP address of the switch via one or more front-panel ports, often through SSH or Telnet. The address you connect to is often a loopback.

o    Out-of-band management (OOB) means that you connect to the mgmt0 interface of the switch, which always has an assigned IP address.

     Switch OOB reachability from NDFC is, by default, via the Nexus Dashboard management interface, so you need to ensure that it is connected to an infrastructure that provides access to the mgmt0 interface(s) of the switches.

o    Note: if desired, you can specify, via configuration, that the data interface be used for OOB communication.

     Switch in-band reachability from NDFC must be via the Nexus Dashboard data interface. When switches are managed by NDFC via a switch front-panel port (SVI, loopback, or equivalent), this is referred to as in-band management.

     All NDFC application pods use the default route associated with the Nexus Dashboard data interface. If desired, you may add static routes on the Nexus Dashboard to force connectivity through the management interface. This is done via the Nexus Dashboard System Settings workflow, available in the Nexus Dashboard Admin Console.

     Connectivity between the Nexus Dashboard nodes is required on both networks, with the following round-trip time (RTT) requirements:

 

Application: Nexus Dashboard Fabric Controller

Connectivity         Maximum RTT

Between Nodes        50 ms

To Switches          200 ms

Table 1: NDFC RTT requirements
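As a minimal sketch (not an official tool), the thresholds from Table 1 can be applied to RTT values you measure yourself (for example, with ping); only the threshold logic is shown here:

```python
# Maximum RTT requirements for NDFC, taken from Table 1.
MAX_RTT_MS = {"between_nodes": 50.0, "to_switches": 200.0}

def rtt_ok(path: str, measured_ms: float) -> bool:
    """Return True if a measured RTT satisfies the NDFC requirement for that path."""
    return measured_ms <= MAX_RTT_MS[path]

print(rtt_ok("between_nodes", 12.5))   # True: well within the 50 ms node-to-node limit
print(rtt_ok("to_switches", 250.0))    # False: exceeds the 200 ms NDFC-to-switch limit
```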

Deployment Modes and Design for LAN Fabrics

The following sections provide information about deployment modes and design for LAN fabrics. The examples assume Layer 2 ND cluster adjacency, but the general guidelines also apply to Layer 3 ND adjacency.

ND node IP assignment


Figure 2: Nexus Dashboard Interface IP Addresses

Deploying NDFC on pND

The following figure shows the Nexus Dashboard physical node interfaces.

     eth1-1 and eth1-2 must be connected to the management network.

     eth2-1 and eth2-2 must be connected to the data network.

 


Figure 3: Physical Nexus Dashboard Interface Mapping

The interfaces are configured as Linux bonds, one for the data interfaces and one for the management interfaces, running in active-standby mode. All interfaces must be connected to individual host ports; port-channel or vPC links are not supported.
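On a generic Linux host, an active-standby bond reports itself as "fault-tolerance (active-backup)" in /proc/net/bonding/&lt;bond&gt;; the exact output on a Nexus Dashboard node may differ, so treat this parser and its sample text as an illustrative sketch only:

```python
# Sketch: extract the bonding mode from /proc/net/bonding-style status text.
def bonding_mode(proc_text: str) -> str:
    """Return the value of the 'Bonding Mode:' line from bonding status output."""
    for line in proc_text.splitlines():
        if line.startswith("Bonding Mode:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no Bonding Mode line found")

# Hypothetical status text resembling /proc/net/bonding/bond0br output.
sample = """Ethernet Channel Bonding Driver: v5.x
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth2-1
"""
print(bonding_mode(sample))  # fault-tolerance (active-backup)
```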

Deployment Model 1


Figure 4: Deploying NDFC on pND Deployment Model 1

In this model, the Nexus Dashboard management and data interfaces are connected to a network infrastructure that provides reachability to the switches' mgmt0 interfaces and front-panel ports. The ND interfaces are connected to a pair of upstream switches in this setup.

Sample Configurations

On both uplink switches (marked as yellow) for Nexus Dashboard management-

interface eth1/1, eth1/3, eth1/5

  switchport mode access

  switchport access vlan 23

On both uplink switches (marked as yellow) for Nexus Dashboard fabric-

interface eth1/2, eth1/4, eth1/6

  switchport mode access

  switchport access vlan 10

OR

interface eth1/2, eth1/4, eth1/6

  switchport mode trunk

  switchport trunk native vlan 10

  switchport trunk allowed vlan 10

OR

interface eth1/2, eth1/4, eth1/6

  switchport mode trunk

  switchport trunk allowed vlan 10

Note for option 3 under “Nexus Dashboard fabric”: if the trunk native VLAN is not specified on the switch, you must provide VLAN ID 10 as the VLAN tag during Nexus Dashboard installation and interface bootstrap.

Deployment Model 2


Figure 5: Deploying NDFC on pND Deployment Model 2

In this model, two separate network infrastructures provide access to the switch mgmt0 interfaces and front-panel ports. Consequently, the ND management and data interfaces are connected to those separate networks.

Sample Configurations

On both uplink switches (marked as blue) for Nexus Dashboard management-

interface eth1/1-3

  switchport mode access

  switchport access vlan 23

On both uplink switches (marked as green) for Nexus Dashboard fabric-

interface eth1/1-3

  switchport mode access

  switchport access vlan 10

OR

interface eth1/1-3

  switchport mode trunk

  switchport trunk native vlan 10

  switchport trunk allowed vlan 10

OR

interface eth1/1-3

  switchport mode trunk

  switchport trunk allowed vlan 10

Note:       For option 3 under “Nexus Dashboard fabric”: if the trunk native VLAN is not specified on the switch, you must provide VLAN ID 10 as the VLAN tag during Nexus Dashboard installation and interface bootstrap.

Deploying NDFC on vND

A vND node can be deployed as an OVA in ESXi with or without a vCenter.


Figure 6: vND VM Settings

 

Deployment Model 1


Figure 7: NDFC on vND Deployment Model 1

In this model, we are using a common set of switches that provide IP reachability to the fabric switches via the Nexus Dashboard management and data interfaces. This infrastructure also uses separate ESXi uplinks for management and data traffic.

Sample Configurations

On both uplink switches (marked as yellow) for Nexus Dashboard management-

interface port-channel1

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 23

  spanning-tree port type edge trunk

  mtu 9216

  vpc 1

interface Ethernet1/1

  description To-ESXi-vND1-mgmt

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 23

  mtu 9216

  channel-group 1 mode active

  no shutdown

You must repeat the configuration for the remaining interface(s) attached to the server(s) hosting vND.

On both uplink switches (marked as yellow) for Nexus Dashboard fabric-

interface port-channel2

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 10

  spanning-tree port type edge trunk

  mtu 9216

  vpc 2

interface Ethernet1/2

  description To-ESXi-vND1-fabric

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 10

  mtu 9216

  channel-group 2 mode active

  no shutdown

You must repeat the configuration for the remaining interface(s) attached to the server(s) hosting vND.

Deployment Model 2


Figure 8: NDFC on vND Deployment Model 2

In this model, we are using a common set of switches that provide IP reachability to the fabric switches via the Nexus Dashboard management and data interfaces. This infrastructure uses shared ESXi uplinks for both management and data traffic.

On both uplink switches (marked as yellow) for Nexus Dashboard management and fabric-

interface port-channel1

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 23, 10

  spanning-tree port type edge trunk

  mtu 9216

  vpc 1

interface Ethernet1/1

  description To-ESXi-vND1

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 23, 10

  mtu 9216

  channel-group 1 mode active

  no shutdown

You must repeat the configuration for the remaining interface(s) attached to the server(s) hosting vND.

Deployment Model 3


Figure 9: NDFC on vND Deployment Model 3

In this model, we are using a dedicated pair of switches that provides IP reachability to the fabric via the Nexus Dashboard management and data interfaces. This infrastructure also uses separate uplinks for management and data traffic.

Sample Configurations

On both uplink switches (marked as blue) for Nexus Dashboard management-

interface port-channel1

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 23

  spanning-tree port type edge trunk

  mtu 9216

  vpc 1

interface Ethernet1/1

  description To-ESXi-vND1-mgmt

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 23

  mtu 9216

  channel-group 1 mode active

  no shutdown

You must repeat the configuration for the remaining interface(s) attached to the server(s) hosting vND.

On both uplink switches (marked as green) for Nexus Dashboard fabric-

interface port-channel1

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 10

  spanning-tree port type edge trunk

  mtu 9216

  vpc 1

interface Ethernet1/1

  description To-ESXi-vND1-fabric

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 10

  mtu 9216

  channel-group 1 mode active

  no shutdown

You must repeat the configuration for the remaining interface(s) attached to the server(s) hosting vND.

Deployment Modes and Design for SAN Fabrics

When NDFC is enabled with the SAN Controller persona selected, the resulting application can be used for managing and monitoring SAN fabrics. This includes the ability to enable SAN Insights for deep analytics via streaming telemetry. SAN fabrics typically comprise the Cisco MDS family of switches, which support SAN traffic over Fibre Channel. Recall that for NDFC SAN Controller deployments, both single-node and 3-node vND/pND deployments are supported. Refer to the NDFC Verified Scalability Guide for more details on the supported scale, especially with SAN Insights.

An important distinction to note about SAN deployments is that, unlike LAN and IPFM deployments, the SAN management and data networks can be in the same subnet if desired.

Deploying SAN Controller on pND


Figure 10: Deploying SAN Controller on pND

In this option, we are using a common set of switches that provide IP reachability to the fabric switches via the Nexus Dashboard management or data interfaces.

Sample Configurations

On both uplink switches (marked as yellow) for Nexus Dashboard management-

interface eth1/1, eth1/3, eth1/5

  switchport mode access

  switchport access vlan 23

On both uplink switches (marked as yellow) for Nexus Dashboard fabric-

interface eth1/2, eth1/4, eth1/6

  switchport mode access

  switchport access vlan 10

OR

interface eth1/2, eth1/4, eth1/6

  switchport mode trunk

  switchport trunk native vlan 10

  switchport trunk allowed vlan 10

OR

interface eth1/2, eth1/4, eth1/6

  switchport mode trunk

  switchport trunk allowed vlan 10

For the last option, without the trunk native VLAN specified, you must provide VLAN ID 10 as the VLAN tag during Nexus Dashboard installation and interface bootstrap (as shown in Figure 3 in the Networking Requirements section).

Deploying SAN Controller on vND

Deployment Option 1


Figure 11: SAN Controller on vND Deployment Option 1

In this option, we are using a common set of switches that provide IP reachability to the fabric switches via the Nexus Dashboard management or data interfaces. It also uses separate uplinks for management and data traffic.

Sample Configurations

On both uplink switches (marked as yellow) for Nexus Dashboard management-

interface port-channel1

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 23

  spanning-tree port type edge trunk

  mtu 9216

  vpc 1

interface Ethernet1/1

  description To-ESXi-vND1-mgmt

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 23

  mtu 9216

  channel-group 1 mode on

  no shutdown

You must repeat the configuration for the remaining interfaces attached to the servers hosting vND.

On both uplink switches (marked as yellow) for Nexus Dashboard fabric-

interface port-channel2

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 10

  spanning-tree port type edge trunk

  mtu 9216

  vpc 2

interface Ethernet1/2

  description To-ESXi-vND1-fabric

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 10

  mtu 9216

  channel-group 2 mode on

  no shutdown

You must repeat the configuration for the remaining interfaces attached to the servers hosting vND.

Deployment Option 2


Figure 12: SAN Controller on vND Deployment Option 2

In this option, we are using a common set of switches that provide IP reachability to the fabric switches via the Nexus Dashboard management or data interfaces. It also uses shared uplinks for both management and data traffic.

On both uplink switches (marked as yellow) for Nexus Dashboard management and fabric-

interface port-channel1

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 23, 10

  spanning-tree port type edge trunk

  mtu 9216

  vpc 1

interface Ethernet1/1

  description To-ESXi-vND1

  switchport

  switchport mode trunk

  switchport trunk allowed vlan 23, 10

  mtu 9216

  channel-group 1 mode on

  no shutdown

You must repeat the configuration for the remaining interfaces attached to the servers hosting vND.

Deployment Mode Options

The NDFC 12.1.3b release introduces IPv6-only deployment and management capability for the cluster nodes and services. This release also continues to support dual-stack deployment and management.

When defining IP deployment guidelines, it is important to note that all nodes/networks in the cluster MUST have a uniform IP configuration—that is, pure IPv4, pure IPv6, or dual-stack IPv4/IPv6. Additionally, the deployment mode MUST be set at the time of initial Nexus Dashboard configuration. If you want to change the deployment mode at any point after the initial deployment, a clean install is required.
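The uniformity rule can be sketched as a hypothetical planning check (the node names and addresses below are invented examples, not product data):

```python
# Sketch: verify that every planned cluster node uses the same deployment
# mode -- pure IPv4, pure IPv6, or dual-stack IPv4/IPv6.
import ipaddress

def node_mode(addresses: list) -> str:
    """Classify a node's planned addresses as 'ipv4', 'ipv6', or 'dual-stack'."""
    versions = {ipaddress.ip_address(a).version for a in addresses}
    if versions == {4}:
        return "ipv4"
    if versions == {6}:
        return "ipv6"
    return "dual-stack"

def cluster_mode(nodes: dict) -> str:
    """Return the cluster's uniform mode, or raise if the nodes disagree."""
    modes = {name: node_mode(addrs) for name, addrs in nodes.items()}
    if len(set(modes.values())) != 1:
        raise ValueError(f"non-uniform deployment modes: {modes}")
    return next(iter(modes.values()))

# Hypothetical 3-node dual-stack plan.
plan = {
    "nd1": ["192.168.10.3", "2001:db8::3"],
    "nd2": ["192.168.10.4", "2001:db8::4"],
    "nd3": ["192.168.10.5", "2001:db8::5"],
}
print(cluster_mode(plan))  # dual-stack
```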

To access NDFC, first deploy Nexus Dashboard, either on pND or vND (as demonstrated above). Once the individual nodes have been configured, navigate to the node’s management IP address to access the cluster configuration user interface.

-      Example: if your management IP is 192.168.10.3/24 (with a default gateway of 192.168.10.1), use https://192.168.10.3.

-      If you are configuring a 3-node cluster, you can navigate to any of the three management IPs you have configured—you will import the others into the cluster during cluster configuration.
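Building that URL differs slightly for IPv6 deployments, since IPv6 literals must be enclosed in brackets in URLs (per RFC 3986). A small illustrative helper (a sketch, not part of Nexus Dashboard):

```python
# Sketch: build the URL used to reach a node's cluster configuration UI
# from its management IP, bracketing IPv6 literals as URLs require.
import ipaddress

def bootstrap_url(mgmt_ip: str) -> str:
    """Return the https URL for a node's management IP address."""
    addr = ipaddress.ip_address(mgmt_ip)
    host = f"[{addr}]" if addr.version == 6 else str(addr)
    return f"https://{host}"

print(bootstrap_url("192.168.10.3"))  # https://192.168.10.3
print(bootstrap_url("2001:db8::3"))   # https://[2001:db8::3]
```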

This section covers how to specify the deployment mode (IPv4, IPv6, or dual-stack) after you have deployed all nodes and loaded the cluster configuration user interface. For further information on general Nexus Dashboard installation, please refer to the Nexus Dashboard deployment guide.

For all deployment models, the following information is required on the “Cluster details” page:

-      NTP Host

-      DNS Provider IP Address

-      Proxy server

-      Note: the NTP host and DNS provider IP address must be in the same deployment mode as the management and data addresses—that is, IPv4 for pure IPv4 or IPv6 for pure IPv6. For dual-stack deployments, you can pick which mode you would like to use for NTP and DNS.


Figure 13: Nexus Dashboard Web Installer– Cluster Details UI

o    In the above environment, the initial management IPs were IPv6 addresses—therefore, you have the option to “Enable IPv4” (which would create a dual-stack environment).

§  Note: if your initial configuration was in IPv4, you would have the option to “Enable IPv6” for dual-stack.


o    To skip proxy server configuration, click the encircled “i” icon next to “Proxy server” and select “Skip.” A warning comes up that you can either confirm or cancel.


§  Note: it is best practice to configure a proxy if one is available.

Pure IPv4

To deploy a pure IPv4 NDFC configuration, use IPv4 management addresses in the initial Nexus Dashboard node creation process.


Figure 14: Nexus Dashboard vND IPv4 Deployment

Then, when you access the Cluster Bringup section, do not check “Enable IPv6.” Full Cluster Bringup steps for a 3-node cluster are below:

     Input the NTP, DNS, and proxy information as described in the previous section; the NTP and DNS addresses should be IPv4. Do not enable IPv6. Click “Next.”

     Configure the Nexus Dashboard data interface of your ND node by clicking the “Edit” (pen icon) button.

     Enter the Nexus Dashboard data network and default gateway for IP access to NDFC in-band management.

o    If connected via a trunk interface, also include the VLAN ID.


Figure 15: Nexus Dashboard Web Installer– Data Network UI in Cluster Details

     Input the other nodes in the fabric (if configuring a 3-node):

o    Select the “Add Node” option.

o    Under “Deployment Details,” input the management IP address and password that you configured when initially deploying node 2 of your 3-node cluster. Validate the information.


o    If the information is validated, a green checkmark appears in place of “Validate,” and the management network IP/mask and default gateway you configured will be imported directly.


o    Add the data network IP/mask and gateway, as with the previous node.

o    Repeat the above steps for node 3.

o    When all nodes have been added (as in the sample screenshot below), click “Next” to review the information and “Configure” to start the bootstrap.

 


Figure 16: Nexus Dashboard Web Installer– 3-Node Cluster

Dual-Stack

Dual-stack means that both IPv4-based and IPv6-based fabrics are supported in the network. This can be enabled on both pND and vND. All core services including authentication domains, DNS, NTP, and PTP are usable in dual-stack mode.

As mentioned above, please note that dual-stack cannot be implemented through an upgrade. If your environment has either a pure IPv4 or pure IPv6 configuration already deployed, you will have to do a clean install and enable both deployment models during the initial cluster configuration.

During initial node bring-up, you can configure either IPv4 or IPv6 addresses for the nodes’ management network, but you MUST provide both types of IPs during the cluster bootstrap workflow. Mixed configurations, such as an IPv4 data network and dual-stack management network, are not supported.

     Note: regardless of whether you initially provide an IPv4 or IPv6 management IP address, you will use this address to access the cluster bootstrap workflow. Once the system has bootstrapped, Nexus Dashboard will be accessible through both the IPv4 and IPv6 management IP addresses.

Full configuration steps are below, assuming an initial IPv4 setup and an “Enable IPv6” selection:

     Input the NTP, DNS, and proxy info as described in the previous section. NTP and DNS addresses can be either IPv4 or IPv6.

     Click “Enable IPv6” (or “Enable IPv4,” if your initial configuration was in IPv6) to deploy as dual-stack. The wording of this option depends on the kind of address you used for the initial management IP(s).


     Configure the Nexus Dashboard data interface by clicking the “Edit” (pen icon) button.

     Under “Management Network,” input the required IPv4 and IPv6 addresses/masks and default gateways.

     Under “Data Network,” input both an IPv4 and an IPv6 address/mask and default gateways.


o    If connected via a trunk interface, also include the VLAN ID.


Figure 17: Nexus Dashboard Web Installer– Dual-Stack 1-Node Cluster

     Input the other fabric nodes, using the same steps as above with the following additions:

o    Select the “Add Node” option.

o    Under “Deployment Details,” use the management IP address and password you configured when initially deploying node 2 of your 3-node cluster. Validate this information.

o    After the management IP has been auto-populated, input the IPv6 address/mask and default gateway.

o    Under “Data Network,” input both an IPv4 and IPv6 address/mask and default gateways.

o    Repeat the above steps for node 3.

o    When all nodes have been added, click “Next” to review the information and “Configure” to start the bootstrap.

o    Note: if you make a mistake during your initial configuration, you must re-validate the management IP and password. Click the “Edit” (pen icon) button on the node that you want to amend, input the management IP and password, and re-validate for full edit access.

     Note: you can only deploy Nexus Dashboard as a 1- or 3-node cluster. If you deploy two nodes, you cannot proceed with the install until you either add or delete one.

 

Pure IPv6

IPv6 deployments are supported on physical and virtual form-factors. When initially configuring the node(s), IPv6 management IP address(es) (and default gateway(s)) must be supplied. Once the nodes are up, these are the addresses that are used to log into the UI and continue the cluster bootstrap process. IPv6 addresses are also required for the data network and gateway, as well as NTP and DNS.


Figure 18: Nexus Dashboard vND IPv6 Deployment

Note that during the cluster bootstrap process, you will see an option to enable IPv4—if you select it, your configuration will be dual-stack. If you do not enable IPv4, the system works in pure IPv6 mode.

As mentioned above regarding dual-stack, once the ND cluster has been deployed, the operational mode cannot be changed. If you would like to enable dual-stack, a new cluster deployment is required.

Full configuration steps are below:

     Input the NTP, DNS, and proxy info as described in the previous section. NTP and DNS addresses should be IPv6. Do not enable IPv4.

     Configure the Nexus Dashboard data interface by clicking the “Edit” (pen icon) button.

     Under “Data Network,” input the IPv6 address/mask and default gateway.

o    If connected via a trunk interface, also include the VLAN ID.

     Input the other fabric nodes, using the same steps as above with the following changes:

o    Select the “Add Node” option.

o    Under “Deployment Details,” use the management IP address and password that you configured when initially deploying node 2 of your 3-node cluster. Validate this information.

o    If validated, the management network IP/mask and default gateway that you configured will be imported directly.

o    Under “Data Network,” input the IPv6 address/mask and default gateway, as with the previous node.

o    Repeat the above steps for node 3.


Figure 19: Nexus Dashboard Web Installer– IPv6 3-Node Cluster

o    When all nodes have been added (as in the sample screenshot above), click “Next” to review the information and “Configure” to start the bootstrap (as in the screenshot below).


Figure 20: Nexus Dashboard Web Installer– IPv6 3-Node Cluster

 

 

Installing NDFC on ND

When you load Nexus Dashboard for the first time after bootstrapping, you will see the “Journey: Getting Started” page. You will have the option to install NDFC during step 5, “Manage Services.” Alternatively, you can navigate directly to this option by going to “Operate > Sites > App Store.”

 


Figure 21: Nexus Dashboard Journey

The App Store gives you six service options to install on top of your Nexus Dashboard cluster. When you click “Install,” a terms and conditions pop-up window appears. Once you accept the terms and conditions, the download begins.

Figure 22: Nexus Dashboard Service Catalog

You can track the progress of the download under the “Installed Services” tab.


Figure 23: Nexus Dashboard Fabric Controller– Initial Installation in Progress

Once NDFC has been installed, you must enable it separately. If you have navigated away from the Service Catalog, you can return to it through “Operate > Services > Installed Services.”


Figure 24: Nexus Dashboard Fabric Controller– Ready for Enablement

You can track the progress of NDFC’s enablement by clicking on the pending task.


Figure 25: Nexus Dashboard Fabric Controller– Enablement Progress

Once NDFC is successfully enabled, your “Installed Services” page looks like the example below.


Figure 26: Nexus Dashboard Fabric Controller– Installed

When you click “Open,” you are greeted by a “What’s new in 12.1.3b” pop-up window, followed by a prerequisites guideline pop-up window.


Figure 27: Nexus Dashboard Fabric Controller Updates Guide


Figure 28: Nexus Dashboard Fabric Controller Prerequisites

At this stage, you select your NDFC instance’s feature management mode: Fabric Discovery, Fabric Controller, or SAN Controller. Fabric Discovery is a lightweight version of NDFC; when enabled, it supports inventory discovery and monitoring only (NOT configuration or provisioning). This option helps minimize resource utilization, but if you require configuration or provisioning capability, select Fabric Controller as your feature management mode. SAN Controller is for MDS and Nexus switch SAN use cases.


Figure 29: Nexus Dashboard Fabric Controller Feature Management Options

If you elect for the full Fabric Controller mode, you have the option to enable specific features from the start.


Figure 30: Nexus Dashboard Fabric Controller Customization Options

Once you have selected the appropriate feature management mode, click “Apply” to finish configuring your NDFC instance. For more information on NDFC modes and features, please refer to the NDFC 12 Data Sheet.

Persistent IP Requirements for NDFC

Persistent IP addresses, also known as external service IP addresses, are required for pods/services in NDFC that need sticky IP addresses. In other words, pods that are provisioned with a persistent IP retain their IP address even if they are re-provisioned (either on the same Nexus Dashboard node or a different Nexus Dashboard node within the same Nexus Dashboard cluster). Persistent IP addresses are required because switches may be configured with a certain NDFC service as a destination (e.g., an SNMP trap destination). For these use cases, a failure of the Nexus Dashboard node hosting the corresponding service/pod should not lead to a switch configuration change. For uninterrupted service, the associated service/pod must be respawned somewhere else in the Nexus Dashboard cluster (usually on another node) so that the pod/service IP remains the same.

Examples of services that require persistent IP addresses include the following:

-      SNMP Trap/Syslog Receiver.

-      POAP/SCP.

-      EPL (Endpoint Locator).

-      PMN (for IPFM deployments).

-      SAN.

Since the Nexus Dashboard nodes are typically Layer 2-adjacent, nothing else is required from a network reachability point of view for traffic to be redirected to the new location of that destination service/pod. Note that with the introduction of Layer 3 reachability for an ND cluster hosting NDFC, eBGP is employed to dynamically advertise the updated location of the service following a node failure. Consequently, from a network reachability point of view, as soon as the pod has been redeployed in its new location, service resumes without any user intervention.
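The sticky-IP behavior described above can be illustrated with a short sketch. This is purely illustrative logic, not NDFC’s actual implementation: an allocator hands out addresses from an external service pool, and re-provisioning a service (for example, after its pod is respawned on another node) always yields the same address, so switch-side destinations never need to change.

```python
import ipaddress

class StickyIPAllocator:
    """Illustrative only: a service keeps its IP across re-provisioning."""

    def __init__(self, pool_cidr):
        # Free addresses from a (hypothetical) external service pool.
        self.free = [str(ip) for ip in ipaddress.ip_network(pool_cidr).hosts()]
        self.assigned = {}  # service name -> persistent IP

    def provision(self, service):
        # Re-provisioning an existing service returns its original ("sticky") IP.
        if service not in self.assigned:
            if not self.free:
                raise RuntimeError("external service IP pool exhausted")
            self.assigned[service] = self.free.pop(0)
        return self.assigned[service]

alloc = StickyIPAllocator("192.0.2.0/29")      # example/documentation prefix
first = alloc.provision("snmp-trap")            # initial placement
respawn = alloc.provision("snmp-trap")          # pod respawned elsewhere
assert first == respawn                         # trap destination is unchanged
```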

External service IP addresses are configured under the Nexus Dashboard cluster configuration. The usage of persistent IP addresses depends on which features are enabled on NDFC, the deployment model, and the way NDFC connects to the switches. Based on your specific use case, you may need IP addresses in the Nexus Dashboard management pool, data pool, or both.

For virtual Nexus Dashboard deployments, enable (or accept) promiscuous mode on the port groups associated with the Nexus Dashboard management and/or data vNICs where IP stickiness is required. The persistent IP addresses are given to the pods (examples include an SNMP trap/syslog receiver, an Endpoint Locator instance per fabric, and a SAN Insights receiver). Every pod in Kubernetes can have multiple virtual interfaces. Specifically for IP stickiness, an additional virtual interface is associated with the pod, and that interface is allocated a free IP from the appropriate external service IP pool.

This vNIC has its own unique MAC address that is different from the MAC addresses associated with the vND’s vNICs. Moreover, all communication to and from the pods towards an external switch goes out of the same bond interface for north-to-south traffic flows. The data vNIC maps to the bond0 (also known as bond0br) interface, and the management vNIC maps to the bond1 (also known as bond1br) interface. By default, VMware checks whether the traffic flows leaving a particular vNIC use the source MAC associated with that vNIC. In the case of NDFC, the traffic flows are sourced with the persistent IP address and associated MAC of the given pods. Therefore, you must enable the required settings on the VMware side.


Figure 31: vSphere Network Setup

 


Figure 32: vSphere mgmt0 Network Settings

 


Figure 33: vSphere fabric0 Settings

Note:       You cannot activate an NDFC feature if the appropriate persistent IP addresses are not available. NDFC has a precheck that confirms that enough free external service IP addresses are configured in the corresponding Nexus Dashboard pool before a feature with such a requirement can be enabled.
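A rough model of what such a precheck has to verify is shown below. The function name and the example numbers are hypothetical, not NDFC’s actual logic: the idea is simply to compare the free addresses in the relevant pool against the feature’s requirement before allowing enablement.

```python
def precheck_feature(pool_size, allocated, required_free):
    """Return True if a pool has enough free external service IPs.

    pool_size: total IPs configured in the management or data pool
    allocated: IPs already consumed by enabled features
    required_free: IPs the new feature needs (e.g., 2 for a LAN deployment)
    """
    free = pool_size - allocated
    return free >= required_free

# Hypothetical example: a pool of 3 IPs with 2 already in use cannot
# satisfy a feature that needs 2 more persistent IPs.
assert precheck_feature(pool_size=3, allocated=2, required_free=2) is False
assert precheck_feature(pool_size=3, allocated=0, required_free=2) is True
```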

Depending on the specific use case and the interface selected for communicating with the switches’ mgmt0 interfaces, the persistent IP addresses must be associated with either the ND management interface or the ND data interface.


Cisco NDFC Release 12.1.2e introduced the capability for NDFC to run on top of a virtual Nexus Dashboard (vND) instance with promiscuous mode disabled on the port groups associated with Nexus Dashboard interfaces where external service IP addresses are specified. If you are upgrading from a previous version, it is recommended to disable promiscuous mode for the port groups after upgrading to ND 2.3.1/NDFC 12.1.2. Recall that a vND comprises a management interface and a data interface. By default, for LAN deployments, two external service IP addresses are required in the Nexus Dashboard management interface subnet. Similarly, by default, for SAN deployments, two external service IP addresses are required in the Nexus Dashboard data interface subnet.

Note:       Disabling promiscuous mode is supported from Cisco Nexus Dashboard Release 2.3.1c.

Note:       You can disable promiscuous mode when Nexus Dashboard nodes are Layer 3-adjacent to the data network, BGP is configured, and fabric switches are reachable through the data interface.

Note:       You can now disable promiscuous mode even when Nexus Dashboard interfaces are Layer 2-adjacent on the management and data networks.

Note:       The default option for promiscuous mode in VMware ESXi environments is “Reject,” meaning promiscuous mode is disabled.

Configuring Persistent IP Addresses

To configure persistent IP addresses (also known as external service IPs), perform the following steps:

Step 1. Navigate to the Nexus Dashboard Admin console.

Step 2. Click the System Settings tab.

Step 3. On the General tab, scroll down to External Service Pools.

Step 4. Based on the deployment model and use case, edit the External Service Pools and associate the persistent IP addresses with the management or data interfaces.


Figure 34: Nexus Dashboard Persistent IPs in Management Pool for LAN Deployments.

 


Figure 35: Nexus Dashboard Persistent IPs in Data Pool for LAN Deployments.

 


Figure 36: Nexus Dashboard Persistent IPs in Data Pool for SAN Deployments.

 

As with the deployment section above, your persistent IP addresses need to match your selected IP version: an IPv4 deployment requires IPv4 addresses, and an IPv6 deployment requires IPv6 addresses. If you have a dual-stack deployment, you must provide both IPv4 and IPv6 addresses as persistent IPs.
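A quick way to sanity-check a candidate list of persistent IPs against the cluster’s operational mode is Python’s standard ipaddress module. This is a hedged sketch for planning purposes; NDFC performs its own validation, and the function name and mode strings here are made up for illustration.

```python
import ipaddress

def validate_persistent_ips(ips, mode):
    """Check that persistent IPs match the cluster's IP mode.

    mode: "ipv4", "ipv6", or "dual-stack" (hypothetical labels)
    """
    versions = {ipaddress.ip_address(ip).version for ip in ips}
    if mode == "ipv4":
        return versions == {4}
    if mode == "ipv6":
        return versions == {6}
    if mode == "dual-stack":
        return versions == {4, 6}  # both address families must be provided
    raise ValueError(f"unknown mode: {mode}")

assert validate_persistent_ips(["192.0.2.10", "192.0.2.11"], "ipv4")
assert validate_persistent_ips(["2001:db8::10", "2001:db8::11"], "ipv6")
assert validate_persistent_ips(["192.0.2.10", "2001:db8::10"], "dual-stack")
assert not validate_persistent_ips(["192.0.2.10"], "ipv6")
```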

As a reminder, if you use the ND data interface to communicate with the switches’ mgmt0 interfaces, then before assigning any persistent IP addresses you must also override the default global server setting for LAN Device Management Connectivity. To do this, navigate to the NDFC server settings under the Admin tab and specify “Data” in the LAN Device Management Connectivity field.


Figure 37: Server Settings for LAN Device Management

For SAN deployments, recall that all NDFC SAN controller-to-device reachability is over the Nexus Dashboard data interface; therefore, the requirements are the same as above: two free IP addresses are required in the Nexus Dashboard external service data IP pool. Additionally, one IP address per cluster node is required to receive SAN Insights streaming data.
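The data-pool sizing for a SAN deployment therefore works out to two base addresses plus one per node. A throwaway helper makes the arithmetic explicit (the function name is mine, not an NDFC API):

```python
def san_data_pool_requirement(cluster_nodes):
    """External service data IPs needed for an NDFC SAN deployment:
    2 base IPs plus 1 per cluster node for SAN Insights streaming."""
    return 2 + cluster_nodes

assert san_data_pool_requirement(1) == 3   # single-node lab cluster
assert san_data_pool_requirement(3) == 5   # 3-node production cluster
```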

Conclusion

Cisco Nexus Dashboard Fabric Controller (NDFC) 12.1.3b introduces pure-IPv6 deployment and management capability, in addition to the preexisting pure-IPv4 and dual-stack options. The cluster’s operational mode must be specified during the initial Nexus Dashboard deployment, and the cluster must have a uniform IP configuration. If you want to change your cluster’s operational mode (for example, from pure IPv4 to dual-stack) after initial configuration, a clean install is required.

A single-node ND cluster supports an NDFC LAN Controller lab deployment (≤25 switches), while a minimum of three ND nodes is required for all NDFC LAN Controller production deployments. Once you have deployed your Nexus Dashboard nodes and bootstrapped your cluster configuration, you can configure your persistent IP addresses, download and enable NDFC on your ND instance, select its feature management mode, and begin taking advantage of its many capabilities.

Glossary

NDFC: Nexus Dashboard Fabric Controller.

HA: High Availability.

BGP: Border Gateway Protocol.

vND: Virtual Nexus Dashboard Cluster.

pND: Physical Nexus Dashboard Cluster.

GUI: Graphical User Interface.

CLI: Command Line Interface.

DNS: Domain Name System.

NTP: Network Time Protocol.

SMTP: Simple Mail Transfer Protocol.

SNMP: Simple Network Management Protocol.

SVI: Switched Virtual Interface.

VRF: Virtual Routing and Forwarding.

PMN/PTP telemetry: Private Mobile Networks/Precision Time Protocol.

OOB: Out-of-Band.

IB: In-Band.

SCP POAP: Secure Copy Protocol PowerOn Auto Provisioning.

SNMP Trap: Simple Network Management Protocol Trap.

DHCP: Dynamic Host Configuration Protocol.

vPC: virtual Port Channel.

SAN: Storage Area Networking.

EPL: Endpoint Locator.

IPFM: IP Fabric for Media.

Additional Information

Additional documentation about Cisco Nexus Dashboard, Cisco Nexus Dashboard Fabric Controller, and related topics can be found at the sites listed here.

Nexus Dashboard

ND 3.0.1 Deployment Guide: https://www.cisco.com/c/en/us/td/docs/dcn/nd/3x/deployment/cisco-nexus-dashboard-deployment-guide-301.html

ND 3.0.1 User Content: https://www.cisco.com/c/en/us/td/docs/dcn/nd/3x/collections/nd-user-content-301.html

Nexus Dashboard Fabric Controller

NDFC 12.1.3b Release Notes: https://www.cisco.com/c/en/us/td/docs/dcn/ndfc/1213/release-notes/cisco-ndfc-release-notes-1213.html

Compatibility Matrix: https://www.cisco.com/c/dam/en/us/td/docs/Website/datacenter/dcnm-compatibility/index.html

NDFC 12.1.3b Scalability Guide: https://www.cisco.com/c/en/us/td/docs/dcn/ndfc/1213/verified-scalability/cisco-ndfc-verified-scalability-1213.html

NDFC Configuration Guide Library: https://www.cisco.com/c/en/us/support/cloud-systems-management/prime-data-center-network-manager/products-installation-and-configuration-guides-list.html

Legal Information

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: http://www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1721R)

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

© 2022-2023 Cisco Systems, Inc. All rights reserved.

