Managing a Brownfield VXLAN BGP EVPN Fabric

Overview

This use case shows how to migrate an existing VXLAN BGP EVPN fabric to Cisco Nexus Dashboard Fabric Controller (NDFC) by importing the existing network configurations into NDFC.

Typically, your fabric would be created and managed through manual CLI configuration or custom automation scripts. Now, you can start managing the fabric through Nexus Dashboard Fabric Controller. After the migration, the fabric underlay and overlay networks will be managed by NDFC.

For information about MSD fabric migration, see Migrating an MSD Fabric with Border Gateway Switches.

Prerequisites

  • NDFC-supported NX-OS software versions. For details, refer to the Cisco Nexus Dashboard Fabric Controller Release Notes.

  • Underlay routing protocol is OSPF or IS-IS.

  • The following fabric-wide loopback interface IDs must not overlap:

    • Routing loopback interface ID for IGP/BGP

    • VTEP loopback interface ID

    • Underlay rendezvous point loopback ID, if ASM is used for multicast replication

  • BGP configuration uses the ‘router-id’, which is the IP address of the routing loopback interface.

  • If an iBGP peer template is configured, it must be configured on both the leaf switches and the route reflectors, and the template name used on the leaf switches and the route reflectors must be identical (see the sketch after this list).

  • The BGP route reflector and multicast rendezvous point (if applicable) functions are implemented on spine switches. These functions are not supported on leaf switches.

  • Familiarity with VXLAN BGP EVPN fabric concepts and functioning of the fabric from the Nexus Dashboard Fabric Controller perspective.

  • Fabric switch nodes are operationally stable and functional, and all fabric links are up.

  • vPC switches and the peer links are up before the migration. Ensure that no configuration updates are in progress and that no changes are pending.

  • Create an inventory list of the switches in the fabric with their IP addresses and credentials. Nexus Dashboard Fabric Controller uses this information to connect to the switches.

  • Shut down any other controller software you are using presently so that no further configuration changes are made to the VXLAN fabric. Alternatively, disconnect the network interfaces from the controller software (if any) so that no changes are allowed on the switches.

  • The switch overlay configurations must have the mandatory configurations defined in the shipping NDFC Universal Overlay profiles. Additional network or VRF overlay related configurations found on the switches are preserved in the freeform configuration associated with the network or VRF NDFC entries.

  • All the overlay network and VRF profile parameters such as VLAN name and route map name should be consistent across all devices in the fabric for the brownfield migration to be successful.
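
The following is a minimal sketch of what the router-id and iBGP peer-template prerequisites look like in an existing switch configuration. The template name vxlan-rr is hypothetical; the ASN and addresses match the example fabric used later in this document. On the route reflectors, the same template additionally carries the route-reflector-client CLI.

router bgp 65000
  router-id 192.168.0.4
  template peer vxlan-rr
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community
      send-community extended
  neighbor 192.168.0.0
    inherit peer vxlan-rr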

Guidelines and Limitations

  • Brownfield import must be completed for the entire fabric by adding all the switches to the NDFC fabric.

  • On the Create Fabric window, the Advanced > Overlay Mode fabric setting decides how the overlays will be migrated. If the default config-profile is set, then the VRF and Network overlay configuration profiles will be deployed to switches as part of the migration process. In addition, there will be diffs to remove some of the redundant overlay CLI configurations. These differences do not impact the network.

  • From the Overlay Mode drop-down list, if CLI is set, then the VRF and Network overlay configurations stay on the switch as-is, with little or no change to address any consistency differences.

  • The brownfield import in NDFC supports the simplified NX-OS VXLAN EVPN configuration CLIs. For more information, see Cisco Nexus 9000 Series NX-OS VXLAN Configuration Guide, Release 10.2(x).

  • The following features are unsupported.

    • Super Spine roles

    • ToR

    • eBGP underlay

    • Layer 3 port channel

    • vPC Fabric Peering

  • Take a backup of the switch configurations and save them before migration.

  • Do not make any configuration changes to the switches (unless instructed to do so in this document) until the migration is completed. Otherwise, significant network issues can occur.

  • Migration to Cisco Nexus Dashboard Fabric Controller is only supported for Cisco Nexus 9000 switches.

  • The Border Spine and Border Gateway Spine roles are supported for the brownfield migration.

  • First, note the guidelines for updating the settings. Then update each VXLAN fabric setting as explained below:

    • Some values (BGP AS number, OSPF, and so on) are considered reference points to your existing fabric, and the values you enter must match the existing fabric values.

    • For some fields (such as IP address range, VXLAN ID range), the values that are auto-populated or entered in the settings are only used for future allocation. The existing fabric values are honored during migration.

    • Some fields relate to new functions that may not exist in your existing fabric (such as advertise-pip). Enable or disable them as needed.

    • At a later point in time, after the fabric transition is complete, you can update settings if needed.

Fabric Topology Overview

This example use case uses the following hardware and software components:

  • Five Cisco Nexus 9000 Series Switches

  • One Fabric Extender or FEX

  • One host

For information about the supported software images, see Compatibility Matrix for Cisco NDFC.

Before we start the transition of the existing fabric, let us see its topology.

You can see that there is a border switch, two spine switches, two leaf switches, and a Fabric Extender or FEX.

A host is connected to the n9k12 leaf switch through the interface Ethernet 1/5.

Verifying the Existing VXLAN BGP EVPN Fabric

Let us check the network connectivity of the n9k12 switch from the console terminal.

Procedure


Step 1

Verify the Network Virtual Interface or NVE in the fabric.

n9k12# show nve vni summary
Codes: CP - Control Plane        DP - Data Plane
       UC - Unconfigured

Total CP VNIs: 84    [Up: 84, Down: 0]
Total DP VNIs: 0    [Up: 0, Down: 0]

There are 84 VNIs in the control plane and they are up. Before the Brownfield migration, make sure that all the VNIs are up.

Step 2

Check consistency and failures of vPC.

n9k12# show vpc
Legend:        
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 2   
Peer status                       : peer adjacency formed ok      
vPC keep-alive status             : peer is alive                 
Configuration consistency status  : success                       
Per-vlan consistency status       : success                       
Type-2 consistency status         : success                       
vPC role                          : secondary                     
Number of vPCs configured         : 40                            
Peer Gateway                      : Enabled                       
Dual-active excluded VLANs        : -                             
Graceful Consistency Check        : Enabled                       
Auto-recovery status              : Enabled, timer is off.(timeout = 300s)
Delay-restore status              : Timer is off.(timeout = 60s)          
Delay-restore SVI status          : Timer is off.(timeout = 10s)          
Operational Layer3 Peer-router    : Disabled                              
.
.
.
Step 3

Check the EVPN neighbors of the n9k12 switch.

n9k12# show bgp l2vpn evpn summary
BGP summary information for VRF default, address family L2VPN EVPN
BGP router identifier 192.168.0.4, local AS number 65000
BGP table version is 637, L2VPN EVPN config peers 2, capable peers 2
243 network entries and 318 paths using 57348 bytes of memory
BGP attribute entries [234/37440], BGP AS path entries [0/0]
BGP community entries [0/0], BGP clusterlist entries [2/8]

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
192.168.0.0     4 65000     250      91      637    0    0 01:26:59 75
192.168.0.1     4 65000     221      63      637    0    0 00:57:22 75

You can see that there are two neighbors corresponding to the spine switches.

Note that the ASN is 65000.

Step 4

Verify the VRF information.

n9k12# show run vrf internet

!Command: show running-config vrf Internet
!Running configuration last done at: Fri Aug  9 01:38:02 2019
!Time: Fri Aug  9 02:48:03 2019                              

version 7.0(3)I7(6) Bios:version 07.59 

interface Vlan347
  vrf member Internet

interface Vlan349
  vrf member Internet

interface Vlan3962
  vrf member Internet

interface Ethernet1/25
  vrf member Internet 

interface Ethernet1/26
  vrf member Internet

vrf context Internet
  description Internet
  vni 16777210        
  ip route 204.90.141.0/24 204.90.140.129 name LC-Networks
  rd auto                                                 
  address-family ipv4 unicast                             
    route-target both auto
    route-target both auto evpn
router ospf 300
  vrf Internet
    router-id 204.90.140.3
    redistribute direct route-map allow
    redistribute static route-map static-to-ospf
router bgp 65000
  vrf Internet
    address-family ipv4 unicast
      advertise l2vpn evpn

The VRF Internet is configured on this switch.

The host connected to the n9k12 switch is part of the VRF Internet.

You can see the VLANs associated with this VRF.

Specifically, the host is part of Vlan349.

Step 5

Verify the layer 3 interface information.

n9k12# show run interface vlan349

!Command: show running-config interface Vlan349
!Running configuration last done at: Fri Aug  9 01:38:02 2019
!Time: Fri Aug  9 02:49:27 2019

version 7.0(3)I7(6) Bios:version 07.59

interface Vlan349
  no shutdown
  vrf member Internet
  no ip redirects
  ip address 204.90.140.134/29
  no ipv6 redirects
  fabric forwarding mode anycast-gateway

Note that the IP address is 204.90.140.134. This IP address is configured as the anycast gateway IP.

Step 6

Verify the physical interface information. This switch is connected to the Host through the interface Ethernet 1/5.

n9k12# show run interface ethernet1/5

!Command: show running-config interface Ethernet1/5
!Running configuration last done at: Fri Aug  9 01:38:02 2019
!Time: Fri Aug  9 02:50:05 2019

version 7.0(3)I7(6) Bios:version 07.59

interface Ethernet1/5
  description to host
  switchport mode trunk
  switchport trunk native vlan 349
  switchport trunk allowed vlan 349,800,815
  spanning-tree bpduguard enable
  mtu 9050

You can see that this interface is connected to the host and is configured with VLAN 349.

Step 7

Verify the connectivity from the host to the anycast gateway IP address.

host# ping 204.90.140.134 count unlimited interval 1
PING 204.90.140.134 (204.90.140.134): 56 data bytes
64 bytes from 204.90.140.134: icmp_seq=0 ttl=254 time=1.078 ms
64 bytes from 204.90.140.134: icmp_seq=1 ttl=254 time=1.129 ms
64 bytes from 204.90.140.134: icmp_seq=2 ttl=254 time=1.151 ms
64 bytes from 204.90.140.134: icmp_seq=3 ttl=254 time=1.162 ms
64 bytes from 204.90.140.134: icmp_seq=4 ttl=254 time=1.84 ms
64 bytes from 204.90.140.134: icmp_seq=5 ttl=254 time=1.258 ms
64 bytes from 204.90.140.134: icmp_seq=6 ttl=254 time=1.273 ms
64 bytes from 204.90.140.134: icmp_seq=7 ttl=254 time=1.143 ms

We let the ping command run in the background while we transition the existing brownfield fabric into Nexus Dashboard Fabric Controller.


Creating a New VXLAN BGP EVPN Fabric

This procedure shows how to create a new VXLAN BGP EVPN fabric.

This procedure contains descriptions for the IPv4 underlay. For information about IPv6 underlay, see IPv6 Underlay Support for Easy Fabric.

  1. From the Actions drop-down list, choose Create Fabric.

    The Create Fabric window appears.

  2. Enter a unique name for the Fabric.

    Click Choose Template to pick a template for your fabric.

    A list of all available fabric templates is displayed.

  3. From the list of available fabric templates, choose the Easy_Fabric template.

    Click Select.

    Enter the necessary field values to create a Fabric.

    The tabs and their fields in the screen are explained in the subsequent points. The overlay and underlay network parameters are included in these tabs.


    Note

    If you’re creating a standalone fabric as a potential member fabric of an MSD fabric (used for provisioning overlay networks for fabrics that are connected through EVPN Multi-Site technology), then see Multi-Site Domain for VXLAN BGP EVPN Fabrics topic before member fabric creation.


  4. The General Parameters tab is displayed by default. The fields in this tab are:

    BGP ASN – Enter the BGP AS number the fabric is associated with. This must be the same as the AS number of the existing fabric.

    Enable IPv6 Underlay – Enable the IPv6 underlay feature. For information, see IPv6 Underlay Support for Easy Fabric.

    Enable IPv6 Link-Local Address – Enables the IPv6 Link-Local address.

    Fabric Interface Numbering – Specifies whether you want to use point-to-point (p2p) or unnumbered networks.

    Underlay Subnet IP Mask – Specifies the subnet mask for the fabric interface IP addresses.

    Underlay Subnet IPv6 Mask – Specifies the subnet mask for the fabric interface IPv6 addresses.

    Underlay Routing Protocol – The IGP used in the fabric: OSPF or IS-IS.

    Route-Reflectors (RRs) – The number of spine switches that are used as route reflectors for transporting BGP traffic. Choose 2 or 4 from the drop-down box. The default value is 2.

    To deploy spine devices as RRs, Nexus Dashboard Fabric Controller sorts the spine devices based on their serial numbers, and designates two or four spine devices as RRs. If you add more spine devices, existing RR configuration won’t change.

    Increasing the count – You can increase the number of route reflectors from two to four at any point in time. Configurations are automatically generated on the two additional spine devices designated as RRs.

    Decreasing the count – When you reduce four route reflectors to two, remove the unneeded route reflector devices from the fabric. Follow these steps to reduce the count from 4 to 2.

    1. Change the value in the drop-down box to 2.

    2. Identify the spine switches designated as route reflectors.

      An instance of the rr_state policy is applied on a spine switch if it is a route reflector. To find out whether the policy is applied on a switch, right-click the switch and choose View/edit policies. In the View/Edit Policies screen, search for rr_state in the Template field; the policy is displayed on the screen.

    3. Delete the unneeded spine devices from the fabric (right-click the spine switch icon and choose Discovery > Remove from fabric).

      If you delete existing RR devices, the next available spine switch is selected as the replacement RR.

    4. Click Deploy Config in the fabric topology window.

    You can preselect RRs and RPs before performing the first Save & Deploy operation. For more information, see Preselecting Switches as Route-Reflectors and Rendezvous-Points.

    Anycast Gateway MAC – Specifies the anycast gateway MAC address.

    Enable Performance Monitoring – Check the check box to enable performance monitoring.

  5. Click the Replication tab. Most of the fields are auto generated. You can update the fields if needed.

    Replication Mode – The mode of replication that is used in the fabric for BUM (Broadcast, Unknown Unicast, Multicast) traffic. The choices are Ingress Replication or Multicast. When you choose Ingress replication, the multicast related fields get disabled.

    You can change the fabric setting from one mode to the other, if no overlay profile exists for the fabric.

    Multicast Group Subnet – IP address prefix used for multicast communication. A unique IP address is allocated from this group for each overlay network.
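
    On the switches, each overlay network's VNI is mapped to a group allocated from this subnet under the NVE interface. A minimal sketch, with a hypothetical VNI and group:

      interface nve1
        member vni 30000
          mcast-group 239.1.1.0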

    The replication mode change isn’t allowed if a policy template instance is created for the current mode. For example, if a multicast related policy is created and deployed, you can’t change the mode to Ingress.

    Enable Tenant Routed Multicast (TRM) – Check the check box to enable Tenant Routed Multicast (TRM) that allows overlay multicast traffic to be supported over EVPN/MVPN in the VXLAN BGP EVPN fabric.

    Default MDT Address for TRM VRFs – The multicast address for Tenant Routed Multicast traffic is populated. By default, this address is from the IP prefix specified in the Multicast Group Subnet field. When you update either field, ensure that the TRM address is chosen from the IP prefix specified in Multicast Group Subnet.

    For more information, see Overview of Tenant Routed Multicast.

    Rendezvous-Points – Enter the number of spine switches acting as rendezvous points.

    RP mode – Choose from the two supported multicast modes of replication, ASM (for Any-Source Multicast [ASM]) or BiDir (for Bidirectional PIM [BIDIR-PIM]).

    When you choose ASM, the BiDir related fields aren’t enabled. When you choose BiDir, the BiDir related fields are enabled.


    Note

    BIDIR-PIM is supported on Cisco's Cloud Scale Family platforms 9300-EX and 9300-FX/FX2, and software release 9.2(1) onwards.


    When you create a new VRF for the fabric overlay, this address is populated in the Underlay Multicast Address field, in the Advanced tab.

    Underlay RP Loopback ID – The loopback ID used for the rendezvous point (RP), for multicast protocol peering purposes in the fabric underlay.

    The next two fields are enabled if you choose BIDIR-PIM as the multicast mode of replication.

    Underlay Primary RP Loopback ID – The primary loopback ID used for the phantom RP, for multicast protocol peering purposes in the fabric underlay.

    Underlay Backup RP Loopback ID – The secondary loopback ID used for the phantom RP, for multicast protocol peering purposes in the fabric underlay.

    Underlay Second Backup RP Loopback Id and Underlay Third Backup RP Loopback Id – Used for the second and third fallback Bidir-PIM Phantom RP.

  6. Click the VPC tab. Most of the fields are auto generated. You can update the fields if needed.

    vPC Peer Link VLAN – VLAN used for the vPC peer link SVI.

    Make vPC Peer Link VLAN as Native VLAN – Enables vPC peer link VLAN as Native VLAN.

    vPC Peer Keep Alive option – Choose the management or loopback option. If you want to use IP addresses assigned to the management port and the management VRF, choose management. If you use IP addresses assigned to loopback interfaces (and a non-management VRF), choose loopback.

    If you use IPv6 addresses, you must use loopback IDs.
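
    For reference, these options correspond to switch configuration along the following lines. This sketch shows the management option with hypothetical mgmt0 addresses; with the loopback option, the source and destination are loopback IP addresses and the vrf keyword points to the chosen non-management VRF.

      vpc domain 2
        peer-keepalive destination 10.1.1.12 source 10.1.1.11 vrf management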

    vPC Auto Recovery Time – Specifies the vPC auto recovery time-out period in seconds.

    vPC Delay Restore Time – Specifies the vPC delay restore period in seconds.

    vPC Peer Link Port Channel ID – Specifies the Port Channel ID for a vPC Peer Link. By default, the value in this field is 500.

    vPC IPv6 ND Synchronize – Enables IPv6 Neighbor Discovery synchronization between vPC switches. The check box is enabled by default. Clear the check box to disable the function.

    vPC advertise-pip – Select the check box to enable the Advertise PIP feature.

    You can enable the advertise PIP feature on a specific vPC as well.

    Enable the same vPC Domain Id for all vPC Pairs – Enable the same vPC Domain ID for all vPC pairs. When you select this field, the vPC Domain Id field is editable.

    vPC Domain Id – Specifies the vPC domain ID to be used on all vPC pairs.

    vPC Domain Id Range – Specifies the vPC Domain Id range to use for new pairings.

    Enable QoS for Fabric vPC-Peering – Enable QoS on spines for guaranteed delivery of vPC Fabric Peering communication.


    Note

    QoS for vPC fabric peering and queuing policies options in fabric settings are mutually exclusive.


    QoS Policy Name – Specifies the QoS policy name, which should be the same on all fabric vPC peering spines. The default name is spine_qos_for_fabric_vpc_peering.

  7. Click the Protocols tab. Most of the fields are auto generated. You can update the fields if needed.

    Underlay Routing Loopback Id – The loopback interface ID is populated as 0 since loopback0 is usually used for fabric underlay IGP peering purposes.

    Underlay VTEP Loopback Id – The loopback interface ID is populated as 1 since loopback1 is used for the VTEP peering purposes.

    Underlay Anycast Loopback Id – The loopback interface ID is greyed out and used for vPC Peering in VXLANv6 Fabrics only.

    Underlay Routing Protocol Tag – The tag defining the type of network.

    OSPF Area ID – The OSPF area ID, if OSPF is used as the IGP within the fabric.


    Note

    The OSPF or IS-IS authentication fields are enabled based on your selection in the Underlay Routing Protocol field in the General tab.


    Enable OSPF Authentication – Select the check box to enable OSPF authentication. Deselect the check box to disable it. If you enable this field, the OSPF Authentication Key ID and OSPF Authentication Key fields get enabled.

    OSPF Authentication Key ID – The Key ID is populated.

    OSPF Authentication Key – The OSPF authentication key must be the 3DES key from the switch.


    Note

    Plain text passwords are not supported. Log in to the switch, retrieve the encrypted key, and enter it in this field. Refer to the Retrieving the Authentication Key section for details.
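
    For example, one way to generate and then read back an encrypted OSPF key from a switch; the interface, key ID, and password shown are hypothetical:

      switch(config)# interface e1/32
      switch(config-if)# ip ospf message-digest-key 127 md5 ospfAuthPassword
      switch(config-if)# show run interface e1/32 | grep message-digest
        ip ospf message-digest-key 127 md5 3 a667d47acc18ea6b

    In this sketch, 127 is the key ID and a667d47acc18ea6b is the encrypted key to enter in the fabric settings.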

    IS-IS Level – Select the IS-IS level from this drop-down list.

    Enable IS-IS Network Point-to-Point – Enables network point-to-point on fabric interfaces which are numbered.

    Enable IS-IS Authentication – Select the check box to enable IS-IS authentication. Deselect the check box to disable it. If you enable this field, the IS-IS authentication fields are enabled.

    IS-IS Authentication Keychain Name – Enter the Keychain name, such as CiscoisisAuth.

    IS-IS Authentication Key ID – The Key ID is populated.

    IS-IS Authentication Key – Enter the Cisco Type 7 encrypted key.


    Note

    Plain text passwords are not supported. Log in to the switch, retrieve the encrypted key, and enter it in this field. Refer to the Retrieving the Authentication Key section for details.


    Set IS-IS Overload Bit – When enabled, sets the overload bit for an elapsed time after a reload.

    IS-IS Overload Bit Elapsed Time – Allows you to clear the overload bit after an elapsed time in seconds.

    Enable BGP Authentication - Select the check box to enable BGP authentication. Deselect the check box to disable it. If you enable this field, the BGP Authentication Key Encryption Type and BGP Authentication Key fields are enabled.


    Note

    If you enable BGP authentication using this field, leave the iBGP Peer-Template Config field blank to avoid duplicate configuration.

    BGP Authentication Key Encryption Type – Choose 3 for the 3DES encryption type, or 7 for the Cisco type 7 encryption type.

    BGP Authentication Key – Enter the encrypted key based on the encryption type.


    Note

    Plain text passwords are not supported. Log in to the switch, retrieve the encrypted key, and enter it in the BGP Authentication Key field. Refer to the Retrieving the Authentication Key section for details.

    Enable PIM Hello Authentication – Select this check box to enable PIM hello authentication on all the intra-fabric interfaces of the switches in a fabric. This check box is editable only for the Multicast replication mode. Note this check box is valid only for the IPv4 underlay.

    PIM Hello Authentication Key – Specifies the PIM hello authentication key. For more information, see Retrieving PIM Hello Authentication Key.

    To retrieve PIM Hello Authentication Key, perform the following steps:

    1. SSH into the switch.

    2. On an unused switch interface, enable the following:

      switch(config)# interface e1/32 
      switch(config-if)# ip pim hello-authentication ah-md5 pimHelloPassword

      In this example, pimHelloPassword is the cleartext password that has been used.

    3. Enter the show run interface command to retrieve the PIM hello authentication key.

      switch(config-if)# show run interface e1/32 | grep pim 
      ip pim sparse-mode 
      ip pim hello-authentication ah-md5 3 d34e6c5abc7fecf1caa3b588b09078e0 

      In this example, d34e6c5abc7fecf1caa3b588b09078e0 is the PIM hello authentication key that should be specified in the fabric settings.

    Enable BFD – Select the check box to enable feature bfd on all switches in the fabric. This feature is valid only on the IPv4 underlay, and the scope is within a fabric.

    BFD within a fabric is supported natively. The BFD feature is disabled by default in the Fabric Settings. If enabled, BFD is enabled for the underlay protocols with the default settings. Any custom required BFD configurations must be deployed via the per switch freeform or per interface freeform policies.

    The following config is pushed after you select the Enable BFD check box:

    feature bfd

    For information about BFD feature compatibility, refer to your respective platform documentation, and for information about the supported software images, see the Compatibility Matrix for Cisco Nexus Dashboard Fabric Controller.

    Enable BFD for iBGP – Select the check box to enable BFD for the iBGP neighbor. This option is disabled by default.

    Enable BFD for OSPF – Select the check box to enable BFD for the OSPF underlay instance. This option is disabled by default, and it is grayed out if the link state protocol is ISIS.

    Enable BFD for ISIS – Select the check box to enable BFD for the ISIS underlay instance. This option is disabled by default, and it is grayed out if the link state protocol is OSPF.

    Enable BFD for PIM – Select the check box to enable BFD for PIM. This option is disabled by default, and it is grayed out if the replication mode is Ingress.

    Here are the examples of the BFD global policies:

    router ospf <ospf tag>
      bfd

    router isis <isis tag>
      address-family ipv4 unicast
        bfd

    ip pim bfd

    router bgp <bgp asn>
      neighbor <neighbor ip>
        bfd

    Enable BFD Authentication – Select the check box to enable BFD authentication. If you enable this field, the BFD Authentication Key ID and BFD Authentication Key fields are editable.


    Note

    BFD Authentication is not supported when the Fabric Interface Numbering field under the General tab is set to unnumbered. The BFD authentication fields are grayed out automatically. BFD authentication is valid only for P2P interfaces.


    BFD Authentication Key ID – Specifies the BFD authentication key ID for the interface authentication. The default value is 100.

    BFD Authentication Key – Specifies the BFD authentication key.

    For information about how to retrieve the BFD authentication parameters, see the Retrieving the Authentication Key section.

    iBGP Peer-Template Config – Add iBGP peer template configurations on the leaf switches to establish an iBGP session between the leaf switch and route reflector.

    If you use BGP templates, add the authentication configuration within the template and clear the Enable BGP Authentication check box to avoid duplicate configuration.

    In the sample configuration, the 3DES password is displayed after password 3.

    router bgp 65000
        password 3 sd8478fswerdfw3434fsw4f4w34sdsd8478fswerdfw3434fsw4f4w

    The following fields can be used to specify different configurations:

    • iBGP Peer-Template Config – Specifies the config used for RR and spines with border role.

    • Leaf/Border/Border Gateway iBGP Peer-Template Config – Specifies the config used for leaf, border, or border gateway. If this field is empty, the peer template defined in iBGP Peer-Template Config is used on all BGP enabled devices (RRs, leafs, border, or border gateway roles).

    In a brownfield migration, if the spine and leaf switches use different peer template names, both the iBGP Peer-Template Config and Leaf/Border/Border Gateway iBGP Peer-Template Config fields need to be set according to the switch configuration. If the spine and leaf switches use the same peer template name and content (except for the “route-reflector-client” CLI), only the iBGP Peer-Template Config field in the fabric settings needs to be set. If the fabric settings for the iBGP peer templates do not match the existing switch configuration, an error message is generated and the migration does not proceed.

  8. Click the Advanced tab. Most of the fields are auto generated. You can update the fields if needed.

    VRF Template and VRF Extension Template: Specifies the VRF template for creating VRFs, and the VRF extension template for enabling VRF extension to other fabrics.

    Network Template and Network Extension Template: Specifies the network template for creating networks, and the network extension template for extending networks to other fabrics.

    Overlay Mode – VRF/Network configuration using config-profile or CLI; the default is config-profile. For more information, see Overlay Mode.

    Site ID – The ID for this fabric if you are moving this fabric within an MSD. The site ID is mandatory for a member fabric to be a part of an MSD. Each member fabric of an MSD has a unique site ID for identification.

    Intra Fabric Interface MTU – Specifies the MTU for the intra fabric interface. This value should be an even number.

    Layer 2 Host Interface MTU - Specifies the MTU for the layer 2 host interface. This value should be an even number.

    Unshut Host Interfaces by Default – Select this check box to unshut the host interfaces by default.

    Power Supply Mode – Choose the appropriate power supply mode.

    CoPP Profile – Choose the appropriate Control Plane Policing (CoPP) profile policy for the fabric. By default, the strict option is populated.

    VTEP HoldDown Time – Specifies the NVE source interface hold down time.

    Brownfield Overlay Network Name Format – Enter the format to be used to build the overlay network name during a brownfield import or migration. The network name should not contain any white spaces or special characters except underscore (_) and hyphen (-). The network name must not be changed once the brownfield migration has been initiated. See the Creating Networks for the Standalone Fabric section for the naming convention of the network name. The syntax is [<string> | $$VLAN_ID$$] $$VNI$$ [<string> | $$VLAN_ID$$] and the default value is Auto_Net_VNI$$VNI$$_VLAN$$VLAN_ID$$. When you create networks, the name is generated according to the syntax you specify. The following describes the variables in the syntax.

    $$VNI$$ – Specifies the network VNI ID found in the switch configuration. This is a mandatory keyword required to create unique network names.

    $$VLAN_ID$$ – Specifies the VLAN ID associated with the network. The VLAN ID is specific to switches, so Nexus Dashboard Fabric Controller picks the VLAN ID randomly from one of the switches where the network is found and uses it in the name. We recommend not using this variable unless the VLAN ID is consistent across the fabric for the VNI.

    <string> – This variable is optional and you can enter any number of alphanumeric characters that meet the network name guidelines.

    Example overlay network name: Site_VNI12345_VLAN1234


    Note

    Ignore this field for greenfield deployments. The Brownfield Overlay Network Name Format applies for the following brownfield imports:

    • CLI-based overlays

    • Configuration profile-based overlay


    Enable CDP for Bootstrapped Switch – Enables CDP on management (mgmt0) interface for bootstrapped switch. By default, for bootstrapped switches, CDP is disabled on the mgmt0 interface.

    Enable VXLAN OAM – Enables the VXLAN OAM functionality for devices in the fabric. This is enabled by default. Clear the check box to disable the VXLAN OAM function.

    If you want to enable the VXLAN OAM function on specific switches and disable on other switches in the fabric, you can use freeform configurations to enable OAM and disable OAM in the fabric settings.


    Note

    The VXLAN OAM feature in Cisco Nexus Dashboard Fabric Controller is only supported on a single fabric or site.
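
    On NX-OS, VXLAN OAM corresponds to the NGOAM feature. A minimal sketch of the switch-side configuration, as an illustration rather than the exact generated config:

      feature ngoam
      ngoam install acl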


    Enable Tenant DHCP – Select the check box to enable feature dhcp and associated configurations globally on all switches in the fabric. This is a prerequisite for supporting DHCP for overlay networks that are part of the tenant VRFs.


    Note

    Ensure that Enable Tenant DHCP is enabled before enabling DHCP-related parameters in the overlay profiles.
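
    As an illustration, the global switch configuration implied by this setting is along these lines (not necessarily exhaustive):

      feature dhcp
      service dhcp
      ip dhcp relay
      ipv6 dhcp relay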


    Enable NX-API - Specifies enabling of NX-API on HTTPS. This check box is checked by default.

    Enable NX-API on HTTP Port – Specifies enabling of NX-API on HTTP. Enable this check box and the Enable NX-API check box to use HTTP. This check box is checked by default. If you uncheck this check box, the applications that use NX-API and are supported by Cisco Nexus Dashboard Fabric Controller, such as Endpoint Locator (EPL), Layer 4-Layer 7 services (L4-L7 services), VXLAN OAM, and so on, start using HTTPS instead of HTTP.


    Note

    If you check the Enable NX-API check box and the Enable NX-API on HTTP check box, applications use HTTP.
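
    For reference, these check boxes map to NX-API configuration on the switches along the following lines; the ports shown are the NX-OS defaults:

      feature nxapi
      nxapi http port 80
      nxapi https port 443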


    Enable Policy-Based Routing (PBR) – Select this check box to enable routing of packets based on the specified policy. Starting with Cisco NX-OS Release 7.0(3)I7(1) and later releases, this feature works on Cisco Nexus 9000 Series switches with Nexus 9000 Cloud Scale (Tahoe) ASICs. This feature is used along with the Layer 4-Layer 7 service workflow. For information on Layer 4-Layer 7 service, refer to the Layer 4-Layer 7 Service chapter.

    Enable Strict Config Compliance – Enable the Strict Config Compliance feature by selecting this check box. It enables bi-directional compliance checks to flag additional configs in the running config that are not in the intent/expected config. By default, this feature is disabled.

    Enable AAA IP Authorization – Enables AAA IP authorization, when IP Authorization is enabled in the remote authentication server. This is required to support Nexus Dashboard Fabric Controller in scenarios where customers have strict control of which IP addresses can have access to the switches.

    Enable NDFC as Trap Host – Select this check box to enable Nexus Dashboard Fabric Controller as an SNMP trap destination. Typically, for a native HA Nexus Dashboard Fabric Controller deployment, the eth1 VIP IP address will be configured as SNMP trap destination on the switches. By default, this check box is enabled.

    Anycast Border Gateway advertise-pip – Enables advertising the Anycast Border Gateway PIP as the VTEP. This takes effect on the MSD fabric Recalculate Config operation.

    Greenfield Cleanup Option – Enable the switch cleanup option for switches imported into Nexus Dashboard Fabric Controller with Preserve-Config=No, without a switch reload. This option is typically recommended only for fabric environments with Cisco Nexus 9000v Switches, to reduce the switch cleanup time. The recommended option for greenfield deployment is to employ Bootstrap or switch cleanup with a reboot. In other words, this option should remain unchecked.

    Enable Precision Time Protocol (PTP) – Enables PTP across a fabric. When you select this check box, PTP is enabled globally and on core-facing interfaces. Additionally, the PTP Source Loopback Id and PTP Domain Id fields are editable. For more information, see PTP Information.

    PTP Source Loopback Id – Specifies the ID of the loopback interface that is used as the source IP address for all PTP packets. The valid values range from 0 to 1023. The PTP loopback ID cannot be the same as the RP, Phantom RP, NVE, or MPLS loopback ID. Otherwise, an error will be generated. The PTP loopback ID can be the same as the BGP loopback or a user-defined loopback which is created from Nexus Dashboard Fabric Controller.

    If the PTP loopback ID is not found during Deploy Config, the following error is generated:

    Loopback interface to use for PTP source IP is not found. Create PTP loopback interface on all the devices to enable PTP feature.

    PTP Domain Id – Specifies the PTP domain ID on a single network. The valid values range from 0 to 127.
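
    A minimal sketch of the PTP configuration implied by these settings, assuming loopback0's address as the source, domain 1, and a hypothetical core-facing interface:

      feature ptp
      ptp source 192.168.0.4
      ptp domain 1

      interface Ethernet1/53
        ptp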

    Enable MPLS Handoff – Select the check box to enable the MPLS Handoff feature. For more information, see the MPLS SR and LDP Handoff chapter in External/WAN Layer 3 Connectivity for VXLAN BGP EVPN Fabrics.

    Underlay MPLS Loopback Id – Specifies the underlay MPLS loopback ID. The default value is 101.

    Enable TCAM Allocation – TCAM commands are automatically generated for VXLAN and vPC Fabric Peering when enabled.
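
    As an illustration, the generated TCAM carving is of the following form; the region name and size vary by platform and feature, so treat this as a sketch rather than the exact commands:

      hardware access-list tcam region arp-ether 256 double-wide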

    Enable Default Queuing Policies – Check this check box to apply QoS policies on all the switches in this fabric. To remove the QoS policies that you applied on all the switches, uncheck this check box, update all the configurations to remove the references to the policies, and save and deploy. Pre-defined QoS configurations are included that can be used for various Cisco Nexus 9000 Series Switches. When you check this check box, the appropriate QoS configurations are pushed to the switches in the fabric. The system queuing is updated when configurations are deployed to the switches. You can perform the interface marking with defined queuing policies, if required, by adding the required configuration to the per interface freeform block.

    Review the actual queuing policies by opening the policy file in the template editor. From Cisco Nexus Dashboard Fabric Controller Web UI, choose Operations > Templates. Search for the queuing policies by the policy file name, for example, queuing_policy_default_8q_cloudscale. Choose the file. From the Actions drop-down list, select Edit template content to edit the policy.

    See the Cisco Nexus 9000 Series NX-OS Quality of Service Configuration Guide for platform specific details.

    N9K Cloud Scale Platform Queuing Policy - Choose the queuing policy from the drop-down list to be applied to all Cisco Nexus 9200 Series Switches and the Cisco Nexus 9000 Series Switches that end with EX, FX, and FX2 in the fabric. The valid values are queuing_policy_default_4q_cloudscale and queuing_policy_default_8q_cloudscale. Use the queuing_policy_default_4q_cloudscale policy for FEXes. You can change from the queuing_policy_default_4q_cloudscale policy to the queuing_policy_default_8q_cloudscale policy only when FEXes are offline.

    N9K R-Series Platform Queuing Policy – Choose the queuing policy from the drop-down list to be applied to all Cisco Nexus switches that end with R in the fabric. The valid value is queuing_policy_default_r_series.

    Other N9K Platform Queuing Policy – Choose the queuing policy from the drop-down list to be applied to all other switches in the fabric other than the switches mentioned in the above two options. The valid value is queuing_policy_default_other.

    Enable MACsec - Enables MACsec for the fabric. For more information, see Enabling MACsec.

    Freeform CLIs – Fabric-level freeform CLIs can be added while creating or editing a fabric. They are applicable to switches across the fabric. You must add the configurations as displayed in the running configuration, without indentation. Switch-level freeform configurations such as VLAN, SVI, and interface configurations should only be added on the switch. For more information, see Enabling Freeform Configurations on Fabric Switches.
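
    For example, a hypothetical fabric-level freeform entry that adds a syslog server on every switch, written exactly as it appears in the running configuration:

      logging server 172.16.1.100 6 use-vrf management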

    Leaf Freeform Config – Add CLIs that should be added to switches that have the Leaf, Border, and Border Gateway roles.

    Spine Freeform Config – Add CLIs that should be added to switches with a Spine, Border Spine, Border Gateway Spine, and Super Spine roles.

    Intra-fabric Links Additional Config – Add CLIs that should be added to the intra-fabric links.

  9. Click the Resources tab.

    Manual Underlay IP Address Allocation – Do not select this check box if you are transitioning your VXLAN fabric management to Nexus Dashboard Fabric Controller.

    • By default, Nexus Dashboard Fabric Controller allocates the underlay IP address resources (for loopbacks, fabric interfaces, etc) dynamically from the defined pools. If you select the check box, the allocation scheme switches to static, and some of the dynamic IP address range fields are disabled.

    • For static allocation, the underlay IP address resources must be populated into the Resource Manager (RM) using REST APIs.

    • The Underlay RP Loopback IP Range field stays enabled if BIDIR-PIM function is chosen for multicast replication.

    • Changing from static to dynamic allocation keeps the current IP resource usage intact. Only future IP address allocation requests are taken from dynamic pools.

    Underlay Routing Loopback IP Range – Specifies loopback IP addresses for the protocol peering.

    Underlay VTEP Loopback IP Range – Specifies loopback IP addresses for VTEPs.

    Underlay RP Loopback IP Range – Specifies the anycast or phantom RP IP address range.

    Underlay Subnet IP Range – IP addresses for underlay P2P routing traffic between interfaces.

    Underlay MPLS Loopback IP Range – Specifies the underlay MPLS loopback IP address range.

    For eBGP between the borders of Easy Fabric A and Easy Fabric B, the underlay routing loopback and underlay MPLS loopback IP ranges must be unique and must not overlap with the IP ranges of the other fabrics; otherwise, VPNv4 peering will not come up.

    Underlay Routing Loopback IPv6 Range – Specifies the Loopback0 IPv6 address range.

    Underlay VTEP Loopback IPv6 Range – Specifies the Loopback1 and Anycast Loopback IPv6 address range.

    Underlay Subnet IPv6 Range – Specifies the IPv6 address range to assign numbered and peer link SVI IPs.

    BGP Router ID Range for IPv6 Underlay – Specifies the BGP router ID range for the IPv6 underlay.

    Layer 2 VXLAN VNI Range and Layer 3 VXLAN VNI Range - Specifies the VXLAN VNI IDs for the fabric.

    Network VLAN Range and VRF VLAN Range – VLAN ranges for the Layer 3 VRF and overlay network.

    Subinterface Dot1q Range – Specifies the subinterface range when L3 sub interfaces are used.

    VRF Lite Deployment – Specify the VRF Lite method for extending inter fabric connections.

    The VRF Lite Subnet IP Range field specifies the IP addresses reserved for VRF Lite when VRF Lite IFCs are auto-created. If you select Back2BackOnly, ToExternalOnly, or Back2Back&ToExternal, then VRF Lite IFCs are auto-created.

    Auto Deploy Both – This check box is applicable to the symmetric VRF Lite deployment. When you select this check box, the auto deploy flag is set to true for auto-created IFCs to turn on the symmetric VRF Lite configuration.

    This check box can be selected or deselected when the VRF Lite Deployment field is not set to Manual. If a user explicitly unchecks the auto-deploy field for any auto-created IFC, the user input is given priority. This flag affects only new auto-created IFCs; it does not affect existing IFCs.

    VRF Lite Subnet IP Range and VRF Lite Subnet Mask – These fields are populated with the DCI subnet details. Update the fields as needed.

    The values shown in your screen are automatically generated. If you want to update the IP address ranges, VXLAN Layer 2/Layer 3 network ID ranges or the VRF/Network VLAN ranges, ensure the following:


    Note

    When you update a range of values, ensure that it does not overlap with other ranges. You should only update one range of values at a time. If you want to update more than one range of values, do it in separate instances. For example, if you want to update L2 and L3 ranges, you should do the following.

    1. Update the L2 range and click Save.

    2. Click the Edit Fabric option again, update the L3 range and click Save.


    Service Network VLAN Range – Specifies a VLAN range in the Service Network VLAN Range field. This is a per switch overlay service network VLAN range. The minimum allowed value is 2 and the maximum allowed value is 3967.

    Route Map Sequence Number Range – Specifies the route map sequence number range. The minimum allowed value is 1 and the maximum allowed value is 65534.

  10. Click the Manageability tab.

    The fields in this tab are:

    Inband Management – Enabling this allows the management of the switches over their front panel interfaces. The Underlay Routing Loopback interface is used for discovery. If enabled, switches cannot be added to the fabric over their out-of-band (OOB) mgmt0 interface. To manage easy fabrics through inband management, ensure that you have chosen Data in the NDFC Web UI (Settings > Server Settings > Admin). Both inband management and out-of-band connectivity (mgmt0) are supported for this setting. For more information, see Inband Management and Inband POAP in Easy Fabrics.

    DNS Server IPs – Specifies the comma separated list of IP addresses (v4/v6) of the DNS servers.

    DNS Server VRFs – Specifies one VRF for all DNS servers or a comma separated list of VRFs, one per DNS server.

    NTP Server IPs – Specifies comma separated list of IP addresses (v4/v6) of the NTP server.

    NTP Server VRFs – Specifies one VRF for all NTP servers or a comma separated list of VRFs, one per NTP server.

    Syslog Server IPs – Specifies the comma separated list of IP addresses (v4/v6) of the syslog servers, if used.

    Syslog Server Severity – Specifies the comma separated list of syslog severity values, one per syslog server. The minimum value is 0 and the maximum value is 7. To specify a higher severity, enter a higher number.

    Syslog Server VRFs – Specifies one VRF for all syslog servers or a comma separated list of VRFs, one per syslog server.

    AAA Freeform Config – Specifies the AAA freeform configurations.

    If AAA configurations are specified in the fabric settings, switch_freeform PTI with source as UNDERLAY_AAA and description as AAA Configurations will be created.

  11. Click the Bootstrap tab.

    Enable Bootstrap – Select this check box to enable the bootstrap feature. Bootstrap allows easy day-0 import and bring-up of new devices into an existing fabric. Bootstrap leverages the NX-OS POAP functionality.

    Starting from Cisco NDFC Release 12.1.1e, to add more switches and for POAP capability, check the Enable Bootstrap and Enable Local DHCP Server check boxes. For more information, see Inband Management and Inband POAP in Easy Fabrics.

    After you enable bootstrap, you can enable the DHCP server for automatic IP address assignment using one of the following methods:

    • External DHCP Server: Enter information about the external DHCP server in the Switch Mgmt Default Gateway and Switch Mgmt IP Subnet Prefix fields.

    • Local DHCP Server: Enable the Local DHCP Server check box and enter details for the remaining mandatory fields.

    Enable Local DHCP Server – Select this check box to initiate enabling of automatic IP address assignment through the local DHCP server. When you select this check box, the DHCP Scope Start Address and DHCP Scope End Address fields become editable.

    If you do not select this check box, Nexus Dashboard Fabric Controller uses the remote or external DHCP server for automatic IP address assignment.

    DHCP Version – Select DHCPv4 or DHCPv6 from this drop-down list. When you select DHCPv4, the Switch Mgmt IPv6 Subnet Prefix field is disabled. If you select DHCPv6, the Switch Mgmt IP Subnet Prefix is disabled.


    Note

    Cisco Nexus 9000 and 3000 Series Switches support IPv6 POAP only when switches are either Layer-2 adjacent (eth1 or out-of-band subnet must be a /64) or they are L3 adjacent residing in some IPv6 /64 subnet. Subnet prefixes other than /64 are not supported.


    DHCP Scope Start Address and DHCP Scope End Address - Specifies the first and last IP addresses of the IP address range to be used for the switch out of band POAP.

    Switch Mgmt Default Gateway – Specifies the default gateway for the management VRF on the switch.

    Switch Mgmt IP Subnet Prefix – Specifies the prefix for the Mgmt0 interface on the switch. The prefix should be between 8 and 30.

    DHCP scope and management default gateway IP address specification - If you specify the management default gateway IP address 10.0.1.1 and subnet mask 24, ensure that the DHCP scope is within the specified subnet, between 10.0.1.2 and 10.0.1.254.

    Switch Mgmt IPv6 Subnet Prefix – Specifies the IPv6 prefix for the Mgmt0 interface on the switch. The prefix should be between 112 and 126. This field is editable if you enable IPv6 for DHCP.

    Enable AAA Config – Select this check box to include AAA configurations from the Manageability tab as part of the device startup config post bootstrap.

    DHCPv4/DHCPv6 Multi Subnet Scope – Specifies the field to enter one subnet scope per line. This field is editable after you check the Enable Local DHCP Server check box.

    The format of the scope should be defined as:

    DHCP Scope Start Address, DHCP Scope End Address, Switch Management Default Gateway, Switch Management Subnet Prefix

    For example: 10.6.0.2, 10.6.0.9, 10.6.0.1, 24

    Bootstrap Freeform Config – (Optional) Enter additional commands as needed. For example, if you require some additional configurations to be pushed to the device and be available post device bootstrap, they can be captured in this field, to save the desired intent. After the devices boot up, they will contain the configuration defined in the Bootstrap Freeform Config field.

    Copy-paste the running-config to a freeform config field with correct indentation, as seen in the running configuration on the NX-OS switches. The freeform config must match the running config. For more information, see Enabling Freeform Configurations on Fabric Switches.
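
    For example, a hypothetical bootstrap freeform entry, copied with the same indentation as the running configuration:

      ntp server 10.10.10.1 use-vrf management
      interface mgmt0
        description managed-by-NDFC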

  12. Click the Configuration Backup tab. The fields on this tab are:

    Hourly Fabric Backup – Select the check box to enable an hourly backup of fabric configurations and the intent.

    The hourly backups are triggered during the first 10 minutes of the hour.

    Scheduled Fabric Backup – Check the check box to enable a daily backup. This backup tracks changes in running configurations on the fabric devices that are not tracked by configuration compliance.

    Scheduled Time: Specify the scheduled backup time in a 24-hour format. This field is enabled if you check the Scheduled Fabric Backup check box.

    Select both the check boxes to enable both back up processes.

    The backup process is initiated after you click Save.

    The scheduled backups are triggered exactly at the time you specify with a delay of up to two minutes. The scheduled backups are triggered regardless of the configuration deployment status.

    The number of fabric backups that will be retained on NDFC is decided by the Settings > Server Settings > LAN Fabric > Maximum Backups per Fabric.

    The number of archived files that can be retained is set in the # Number of archived files per device to be retained: field in the Server Properties window.


    Note

    To trigger an immediate backup, do the following:

    1. Choose LAN > Topology.

    2. Click within the specific fabric box. The fabric topology screen comes up.

    3. From the Actions pane at the left part of the screen, click Re-Sync Fabric.


    You can also initiate the fabric backup in the fabric topology window. Click Backup Now in the Actions pane.

  13. Click on the Flow Monitor tab. The fields on this tab are:

    Enable Netflow – Check this check box to enable NetFlow on VTEPs for this fabric. By default, NetFlow is disabled. When enabled, the NetFlow configuration is applied to all VTEPs that support NetFlow.

    Note: When Netflow is enabled on the fabric, you can choose not to have netflow on a particular switch by having a dummy no_netflow PTI.

    If NetFlow is not enabled at the fabric level, an error message is generated when you enable NetFlow at the interface, network, or VRF level. For information about NetFlow support for Cisco NDFC, refer to Netflow Support.

    In the Netflow Exporter area, click Actions > Add to add one or more Netflow exporters. This exporter is the receiver of the netflow data. The fields on this screen are:

    • Exporter Name – Specifies the name of the exporter.

    • IP – Specifies the IP address of the exporter.

    • VRF – Specifies the VRF over which the exporter is routed.

    • Source Interface – Enter the source interface name.

    • UDP Port – Specifies the UDP port over which the netflow data is exported.

    Click Save to configure the exporter. Click Cancel to discard. You can also choose an existing exporter and select Actions > Edit or Actions > Delete to perform relevant actions.

    In the Netflow Record area, click Actions > Add to add one or more Netflow records. The fields on this screen are:

    • Record Name – Specifies the name of the record.

    • Record Template – Specifies the template for the record. Enter one of the record template names. In Release 12.0.2, the following two record templates are available for use. You can create custom NetFlow record templates. Custom record templates saved in the template library are available for use here.

      • netflow_ipv4_record – to use the IPv4 record template.

      • netflow_l2_record – to use the Layer 2 record template.

    • Is Layer2 Record – Check this check box if the record is for Layer2 netflow.

    Click Save to configure the report. Click Cancel to discard. You can also choose an existing record and select Actions > Edit or Actions > Delete to perform relevant actions.

    In the Netflow Monitor area, click Actions > Add to add one or more Netflow monitors. The fields on this screen are:

    • Monitor Name – Specifies the name of the monitor.

    • Record Name – Specifies the name of the record for the monitor.

    • Exporter1 Name – Specifies the name of the exporter for the netflow monitor.

    • Exporter2 Name – (optional) Specifies the name of the secondary exporter for the netflow monitor.

    The record name and exporters referred to in each netflow monitor must be defined in "Netflow Record" and "Netflow Exporter".

    Click Save to configure the monitor. Click Cancel to discard. You can also choose an existing monitor and select Actions > Edit or Actions > Delete to perform relevant actions.
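
    Taken together, the exporter, record, and monitor entries translate to switch configuration along the following lines; the names, address, port, and match/collect fields are hypothetical:

      feature netflow
      flow exporter NDFC-EXP
        destination 192.168.10.5 use-vrf management
        source mgmt0
        transport udp 9995
      flow record ipv4-record
        match ipv4 source address
        match ipv4 destination address
        collect counter bytes
        collect counter packets
      flow monitor NDFC-MON
        record ipv4-record
        exporter NDFC-EXP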

  14. Click the fabric to view its summary in the slide-in pane. Click the Launch icon to view the Fabric Overview.

Adding Switches and Transitioning VXLAN Fabric Management to NDFC

Let us discover and add switches to the newly created fabric.

Procedure


Step 1

Double-click the newly created fabric to view the Fabric Overview screen.

Click the Switches tab.

Step 2

From the Actions drop-down list, select Add Switches.

The Add Switches window appears.

Alternatively, you can add switches from the Topology window: right-click a fabric and click Add Switches.

Step 3

On the Add Switches - Fabric screen, enter the Seed Switch Details.

Enter the IP address of the switch in the Seed IP field. Enter the username and password of the switches that you want to discover.

By default, the value in the Max Hops field is 2. The switch with the specified IP address and the switches that are 2 hops from it will be populated after the discovery is complete.

Make sure to check the Preserve Config check box. This ensures that the current configuration of the switches will be retained.

Step 4

Click Discover Switches.

The switch with the specified IP address and switches up to two hops away (depending on the setting of Max Hops) from it are populated in the Scan Details section.

Step 5

Check the check box next to the switches that have to be imported into the fabric and click Import into fabric.

It is a best practice to discover multiple switches at the same time, in a single attempt. The switches must be cabled and connected to the NDFC server, and the switch status must be manageable.

If switches are imported in multiple attempts, ensure that all the switches are added to the fabric before proceeding with the brownfield import process.

Step 6

Click Import into fabric.

The switch discovery process is initiated. The Progress column displays progress for all the selected switches. It displays done for each switch after completion.

Note 

You should not close the screen and try to import switches again until all selected switches are imported or an error message comes up.

If an error message comes up, close the screen. The fabric topology screen comes up. The error messages are displayed at the top-right part of the screen. Resolve the errors and initiate the import process again by clicking Add Switches in the Actions panel.

Step 7

After a successful import, the progress bar shows Done for all the switches. Click Close.

After closing the window, the fabric topology window comes up again. The switches are now in migration mode, and the Migration Mode label is displayed on the switch icons.

Do not add greenfield (new) switches at this point. Adding new switches during the migration process is not supported and might lead to undesirable consequences for your network. You can add new switches after the migration process is complete.

Step 8

After all the network elements are discovered, they are displayed in the Topology window in a connected topology. Each switch is assigned the Leaf role by default.

The switch discovery process might fail for a few switches, and the Discovery Error message is displayed. However, such switches are still displayed in the fabric topology. You should remove such switches from the fabric (Right-click the switch icon and click Discovery > Remove from fabric), and import them again.

You should not proceed to the next step until all switches in the existing fabric are discovered in NDFC.

If you choose the Hierarchical layout for display (in the Actions panel), the topology automatically gets aligned as per role assignment, with the leaf switches at the bottom, the spine switches connected on top of them, and the border switches at the top.

Note 

The supported roles for switches with the Cisco NX-OS Release 7.0(3)I4(8b) and 7.0(4)I4(x) images are Border Leaf, Border Spine, Leaf, and Spine.

Step 9

Select the switch, click Actions > Set Role. On the Select Role screen, select Border and click Select.

Similarly, set the Spine role for the n9k-14 and n9k-8 spine switches.

Note 

You need to manually create the vPC pairing when a Layer 3 peer keep alive is configured on the switches. Otherwise, the vPC configuration is automatically picked up from the switches.

vPC Pairing - The vPC pairing must be done manually for switches that use a Layer 3 vPC peer keep alive. When the vPC peer keep alive is established through the management option, the vPC configuration is automatically picked up from the switches. This pairing reflects in the GUI only after the migration is complete. A sketch contrasting the two keep-alive variants follows the pairing steps below.

  1. Right-click the switch icon and click vPC Pairing to set a vPC switch pair.

    The Select vPC peer screen comes up. It lists potential vPC peer switches.

  2. Select the appropriate switch and click OK. The fabric topology comes up again. The vPC pair is formed.
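The difference between the two keep-alive options shows up in the vpc domain configuration on the switches. The following sketch is illustrative only; the domain ID, addresses, and VRF name are hypothetical placeholders:

    vpc domain 100
      ! keep alive through the management interface - NDFC picks up the pairing automatically
      peer-keepalive destination 192.168.10.2 source 192.168.10.1 vrf management

    vpc domain 100
      ! Layer 3 keep alive over a routed interface in a dedicated VRF - create the vPC pairing manually
      peer-keepalive destination 10.10.10.2 source 10.10.10.1 vrf KEEPALIVE-VRF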

Note 

Check that you have added all the switches from the existing fabric. If any switches are missing, add them now. Once you are certain that all the existing switches have been imported, move to the next step, Recalculate and Deploy.

Step 10

From the Fabric Overview Actions drop-down list, choose Recalculate and Deploy.

When you click Recalculate and Deploy, NDFC obtains the switch configurations and builds the intent for every switch, comparing the current running configuration with the current expected configuration, that is, the intended state maintained in NDFC.

If there are configuration mismatches, the Pending Config column shows the number of lines that differ. Click the entry in the Pending Config column to view the pending configuration and a side-by-side comparison with the running configuration. Click Deploy to apply the configurations.

After the migration of the underlay and overlay networks, the Deploy Configuration screen comes up.

Note 
  • The brownfield migration requires best practices to be followed on the existing fabric, such as maintaining consistency of the overlay configurations.

  • The brownfield migration may take some time to complete because it involves collecting the running configuration from the switches, building the NDFC configuration intent based on it, performing consistency checks, and so on.

  • Any errors or inconsistencies found during the migration are reported in fabric errors. The switches remain in migration mode. Fix these errors and complete the migration again by clicking Deploy until no errors are reported.

Step 11

After the configurations are generated, review them by clicking the links in the Preview Config column.

We strongly recommend that you preview the configuration before proceeding to deploy it on the switches. Click the Preview Config column entry. The Preview Config screen comes up. It lists the pending configurations on the switch.

The Side-by-side Comparison tab displays the running configuration and expected configuration side-by-side.

The Pending Config tab displays the set of configurations that need to be deployed on a switch in order to go from the current running configuration to the current expected or intended configuration.

The Pending Config tab may show many configuration lines that will be deployed to the switches. Typically, on a successful brownfield import, these lines correspond to the configuration profiles pushed to the switches for the overlay network configuration. Note that the existing network and VRF-related overlay configurations are not removed from the switches.

Note 

The configuration profiles are constructs that NDFC requires for managing the VXLAN configurations on the switches. During the Brownfield import process, they capture the same information as the original VXLAN configurations already present on the switches. For example, a configuration profile that captures vlan 160 is applied.

As part of the import process, after the configuration profiles are applied, the original CLI-based configuration references are removed from the switches. These are the ‘no’ CLIs that appear towards the end of the diffs; the VXLAN configurations themselves persist in the configuration profiles. For example, no vlan 160 removes the original CLI definition, as in the sketch below.

The removal of CLI-based configuration occurs only if the Overlay Mode is set to config-profile, not CLI.

The Side-by-side Comparison tab displays the Running Config and Expected Config on the switch.
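Continuing the vlan 160 example, the pending configuration typically contains a configuration profile followed by the removal of the original CLI references. The following sketch is hypothetical; the actual profile name, VNI, and multicast group are generated by NDFC from the fabric and overlay settings:

    configure profile MyNetwork_30160
      vlan 160
        vn-segment 30160
      interface nve1
        member vni 30160
          mcast-group 239.1.1.160
    configure terminal
    apply profile MyNetwork_30160
    ! the original CLI references are then removed
    no vlan 160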

Step 12

Close the Config Preview Switch window after reviewing the configurations.

Step 13

Click Deploy Config to deploy the pending configuration onto the switches.

If the Status column displays FAILED, investigate the reason for failure to address the issue.

The progress bar shows 100% for each switch. After correct provisioning and successful configuration compliance, close the screen.

In the fabric topology screen that comes up, all the imported switch instances are displayed in green, indicating successful configuration. The Migration Mode label is no longer displayed on any switch icon.

NDFC has successfully imported a VXLAN-EVPN fabric.

Post-transitioning of VXLAN fabric management to NDFC - This completes the transition of VXLAN fabric management to NDFC. You can now add new switches and provision overlay networks for your fabric. For details, refer to the respective sections in the Fabrics topic in the configuration guide.

For more information, see Fabric Overview.


Configuration Profiles Support for Brownfield Migration

Cisco NDFC supports the Brownfield import of fabrics with VXLAN overlay provisioned with configuration profiles. This import process recreates the overlay configuration intent based on the configuration profiles. The underlay migration is performed with the usual Brownfield migration.

This feature can be used to recover your existing Easy fabric when an NDFC backup is not available to restore. In this case, you must install the latest NDFC release, create a fabric, and then import the switches into the fabric.

Note that this feature is not recommended for NDFC upgrades. For more information, see Cisco NDFC Installation and Upgrade Guide.

The following are the guidelines for the support of configuration profiles:

  • The Brownfield migration of configuration profiles is supported for the Easy_Fabric template.

  • The configuration profiles on the switches must be a subset of the default overlay Universal profiles. If extra configuration lines are present that are not part of the Universal profiles, unwanted profile refreshes will be seen. In this case, after you recalculate and deploy the configuration, review the diffs using the Side-by-side Comparison feature and deploy the changes.

  • Brownfield migration with switches having a combination of VXLAN overlay configuration profiles and regular CLIs is not supported. If this condition is detected, an error is generated and the migration is aborted. All the overlays must use either configuration profiles or regular CLIs, not both.

Manually Adding PIM-BIDIR Configuration for Leaf or Spine Post Brownfield Migration

After brownfield migration, if you add new spine or leaf switches, you should manually configure the PIM-BIDIR feature.

The following procedure shows how to manually configure the PIM-BIDIR feature for a new Leaf or Spine:

Procedure


Step 1

Check the base_pim_bidir_11_1 policies that are created for an RP added through the brownfield migration. Check the RP IP and Multicast Group used in each ip pim rp-address RP_IP group-list MULTICAST_GROUP bidir command.

Step 2

Add the respective base_pim_bidir_11_1 policies from the View/Edit Policies window for the new leaf or spine switch, and push the configuration for each base_pim_bidir_11_1 policy. The pushed configuration resembles the sketch below.
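For reference, the CLI pushed by each such policy is of the following form; the RP address and group range are hypothetical placeholders and must match the values that you noted in Step 1 (PIM itself is assumed to be enabled by the base configuration):

    ip pim rp-address 10.254.254.1 group-list 239.1.1.0/25 bidir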


Migrating an MSD Fabric with Border Gateway Switches

When you migrate an existing MSD fabric with a border gateway switch into NDFC, make sure to note the following guidelines:

  • Uncheck all Auto IFC creation related fabric settings. Review the settings and ensure they are unchecked as follows:

    • Easy_Fabric fabric

      Uncheck Auto Deploy Both check box under Resources tab.

    • MSD_Fabric fabric

      Uncheck Multi-Site Underlay IFC Auto Deployment Flag check box under DCI tab.

  • Underlay Multisite peering: The eBGP peering and corresponding routed interfaces for underlay extensions between sites are captured in switch_freeform and routed_interfaces, and optionally in the interface_freeform configs. This configuration includes all the global configs for multisite. Loopbacks for EVPN multisite are also captured via the appropriate interface templates. A representative sketch follows this list.

  • Overlay Multisite peering: The eBGP peering is captured as part of switch_freeform as the only relevant config is under router bgp.

  • Overlays containing Networks or VRFs: The corresponding intent is captured with the profiles on the Border Gateways with extension_type = MULTISITE.
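As an illustration of what the captured underlay peering freeform typically contains, the following sketch shows a routed DCI interface and its eBGP IPv4 session; the interface, addresses, and AS numbers are hypothetical placeholders, and the overlay eBGP EVPN peering is captured in switch_freeform in the same way under router bgp:

    interface Ethernet1/1
      no switchport
      ip address 10.33.10.1/30
      no shutdown
    router bgp 65001
      neighbor 10.33.10.2
        remote-as 65002
        address-family ipv4 unicast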

  1. Create all the required fabrics including the Easy_Fabric and External_Fabric fabrics with the required fabric settings. Disable the Auto VRF-Lite options as mentioned above. For more information, refer to Creating VXLAN EVPN Fabric and External Fabric sections.

  2. Import all the switches into all the required fabrics and set roles accordingly.

  3. Click Recalculate and Deploy in each of the fabrics and ensure that the Brownfield Migration process reaches the 'Deployment' phase. Now, do not click Deploy Configuration.

  4. Create an MSD_Fabric fabric with the required fabric settings and disable the Auto MultiSite IFC options as shown in Guidelines. For more information, see Creating an MSD Fabric in Cisco DCNM LAN Fabric Configuration Guide.

  5. Move all the member fabrics into the MSD. Do not proceed further till this step is completed successfully. For more information, see Moving the Member1 Fabric Under MSD-Parent-Fabric in Cisco DCNM LAN Fabric Configuration Guide.


    Note

    The Overlay Networks and VRFs definitions in each of the Easy Fabrics must be symmetric for them to be added successfully to the MSD. Errors are reported if any mismatches are found. Fix the mismatches by updating the overlay information in the fabrics, and then add them to the MSD.


  6. Create all the Multisite Underlay IFCs such that they match the IP address and settings of the deployed configuration.


    Note

    Additional interface configurations must be added to the Source/Destination interface freeform fields in the Advanced section as needed.


    For more information, see Configuring Multi-Site Overlay IFCs.

  7. Create all the Multisite Overlay IFCs such that they match the IP address and settings of the deployed configuration. You will need to add the IFC links. For more information, see Configuring Multi-Site Overlay IFCs.

  8. If there are also VRF-Lite IFCs, create them as well.


    Note

    If the Brownfield Migration is for the case where Configuration Profiles already exist on the switches, the VRF-Lite IFCs will be created automatically in Step #3.


  9. If Tenant Routed Multicast (TRM) is enabled in the MSD fabric, edit all the TRM related VRFs and Network entries in MSD and enable the TRM parameters.

    This step needs to be performed if TRM is enabled in the fabric. If TRM is not enabled, you still need to edit each Network entry and save it.

  10. Now click Recalculate and Deploy in the MSD fabric, but do not click Deploy Configuration.

  11. Navigate to each member fabric, click Recalculate and Deploy, and then click Deploy Configuration.

This completes the Brownfield Migration. You can now manage all the networks or VRFs for BGWs by using the regular NDFC Overlay workflows.