Introduction to Programmable Fabric Management, Operations, and Provisioning
This chapter briefly describes POAP and auto-configuration profiles in Programmable Fabric.
Power On Auto Provisioning in Programmable Fabric
An NX-OS device with the Power On Auto Provisioning (POAP) feature enabled needs to be able to obtain its IP address, download its image and configuration, and successfully complete the POAP process via the DHCP server and a repository server or Cisco DCNM that is reachable across the VXLAN/EVPN fabric.
In a VXLAN-EVPN based Programmable Fabric deployment, day-0 bring-up of switches is supported in an automated way via POAP. Along with the traditional POAP option via the mgmt0 (out-of-band) interfaces, day-0 bring-up of the devices can also be done over the front-panel Ethernet (inband) interfaces.
Note |
Inband POAP and management are available beginning with Cisco NX-OS Release 7.0(3)I5(2) for Cisco Nexus 9000 Series switches and Cisco DCNM Release 10.1(2) with 10.1(2)ST(1) Cisco DCNM templates. |
In an IP fabric, the use of IP unnumbered interfaces is preferred to simplify IP address management on the underlay, where only one unique IP address per device is required to bring up the routing table state between the various devices. All the core-facing interfaces on a device that participate in the IGP (for example, IS-IS or OSPF) share this per-device unique IP. While this greatly simplifies IP address management on the underlay, it complicates DHCP relay functionality on the inband interfaces, because there are no longer regular Layer 3 interfaces under which a relay can be readily configured. Without DHCP relay functionality, inband POAP cannot work.
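For reference, the following is a minimal sketch of such an IP unnumbered core-facing interface; the interface, loopback, and OSPF instance names as well as the addresses are placeholders, not values from the sample topology:
! Per-device unique underlay IP carried on loopback0
interface loopback0
ip address 10.10.10.11/32
! Core-facing fabric interface borrows the loopback0 address instead of using a per-link subnet
interface Ethernet1/1
no switchport
medium p2p
ip unnumbered loopback0
ip router ospf UNDERLAY area 0.0.0.0
no shutdown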
POAP over IP numbered interfaces works when the DHCP server is configured with unique subnet scopes for every Layer 3 core-facing interface pair and IP DHCP relay is enabled under every core-facing interface. The DHCP server can be connected to a leaf that is attached to the default VRF.
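As a hedged illustration of the numbered option (interface names and addresses are placeholders), each core-facing link carries its own small subnet and a relay statement pointing at the DHCP server:
feature dhcp
ip dhcp relay
! One /30 subnet per core-facing link, with DHCP relay toward the DHCP server/DCNM
interface Ethernet1/1
no switchport
ip address 192.0.1.1/30
ip dhcp relay address <dhcp_server_ip>
no shutdown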
Prerequisites for Inband POAP over IP Unnumbered Links
See the following illustration for inband POAP via IP unnumbered fabric interfaces:
See the following prerequisites for inband POAP over IP unnumbered links:
-
The seed/edge leaf (leaf-1 in the topology) is the leaf that is connected to the POAP network. In the topology, a router sits between the edge leaf and the DCNM.
-
On the seed/edge leaf node, the IP address of the port connecting to the DCNM network should preferably be a /30 address. This means that the next-hop IP on the other side of the link is the other usable address in the /30 subnet.
-
On the router connecting the seed leaf to the DHCP network, DHCP relay must be configured for reachability to the DHCP server IP. On a Cisco Nexus switch acting as this router, the relevant interface-level CLI is ip dhcp relay address <dhcp_server_ip> (a sample relay configuration is shown after this list).
-
The DHCP server IP is the IP address of the DCNM if DCNM is used for POAP. Otherwise it is the IP address of the standalone DHCP server used. If DCNM is used for fabric deployment, the DCNM POAP template has the option to configure two DHCP Servers.
-
For inband POAP, the DHCP server’s dhcpd.conf must be configured manually to include new scope(s) for the networks in the fabric.
-
On the DCNM, vCenter, or on any other Network Management Station in the internal network, the route configuration must be done manually for network reachability (192.178.16.0 and 10.10.10.0 networks in the above sample topology).
-
There needs to be at least one seed/edge leaf node in the fabric with connectivity to the DCNM, DHCP server, vCenter, Management Station, and so on, so that the other NX-OS nodes in the IP fabric can successfully go through inband POAP.
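The following is a minimal sketch of the relay configuration on the intermediate Cisco Nexus router referenced in the prerequisites above; the interface name and addresses are placeholders:
feature dhcp
ip dhcp relay
! Interface facing the seed/edge leaf (the other end of the /30 link)
interface Ethernet1/10
no switchport
ip address <router_side_ip>/30
ip dhcp relay address <dhcp_server_ip>
no shutdown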
With DCNM as the DHCP and Repository/Configuration Server:
-
When DCNM is used as the configuration server for the fabric, DCNM POAP templates are used for day-0 configuration on the devices. The DCNM POAP templates support the topology displayed in the illustration.
-
For the devices to be discovered in the inband IP fabric topology on the DCNM, the Management IP field in the POAP template's General tab should be set to the management IP of the device, which in this case is its loopback0 IP. It is not the mgmt0 interface IP, because the management VRF for inband operation is default rather than management.
-
The first time the devices are brought up, write erase and reload must be issued on the device itself and not via the DCNM GUI. This is because, for the inband case, the management IP known to DCNM is the loopback0 IP, and the fabric has not yet been discovered before POAP, so the device is not reachable from DCNM.
-
The DHCP scope for the underlay network must be added manually, either by configuring a DHCP scope via the DCNM GUI or by editing the dhcpd.conf file on the DHCP server (see the sample scope after this list). Routes for reachability must also be added manually on other devices such as the intermediate router, DCNM, and vCenter.
-
The seed leaf must be brought up first, followed by the route reflector (RR). After the seed leaf and RR spine are up, any of the other nodes (non-RR spines, other leaf nodes) can successfully complete inband POAP. The devices loop in the POAP phase until the seed leaf and RR are up and there is a successful DHCP handshake and a successful TFTP operation.
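As a hedged example, a dhcpd.conf scope for the underlay network of the unnumbered topology might look like the following; the subnet, range, and router values are placeholders, and the exact POAP options (boot file, TFTP server, and so on) depend on the DCNM or DHCP server setup:
# Scope for the underlay network used by inband POAP (placeholder values)
subnet 10.10.10.0 netmask 255.255.255.0 {
  range 10.10.10.101 10.10.10.200;
  option routers 10.10.10.1;
  default-lease-time 3600;
  max-lease-time 7200;
}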
Prerequisites for Inband POAP over IP Numbered Links
See the following illustration for inband POAP via IP numbered fabric interfaces:
See the following prerequisites for inband POAP over IP numbered links:
-
The seed/edge leaf (leaf-1 in the topology) is the leaf that is connected to the POAP network. In the topology, a router sits between the edge leaf and the DCNM.
-
On the seed/edge leaf node, the IP address of the port connecting to the DCNM network should preferably be a /30 address. This means that the next-hop IP on the other side of the link is the other usable address in the /30 subnet.
-
On the router connecting the seed leaf to the DHCP network, DHCP relay must be configured for reachability to the DHCP server IP. On a Cisco Nexus switch acting as this router, the relevant interface-level CLI is ip dhcp relay address <dhcp_server_ip>.
-
The DHCP server IP is the IP address of the DCNM if DCNM is used for POAP. Otherwise it is the IP address of the standalone DHCP server used. If DCNM is used for fabric deployment, the DCNM POAP template has the option to configure two DHCP Servers.
-
For inband POAP, the DHCP server’s dhcpd.conf must be configured manually to include new scope(s) for the networks in the fabric. In the IP numbered topology, a DHCP scope is required for every fabric link (a sample is shown after this list).
Alternatively, the following script, which is available as part of the DCNM OVA installation, can be used to generate the DHCP scopes automatically on DCNM: /root/utils/inband_p2p_dhcp_scope.py. The script can be modified or customized if needed; instructions are provided inside the script itself.
-
On the DCNM, vCenter, or on any other Network Management Station in the internal network, the route configuration must be done manually for network reachability (192.178.16.0 and 192.0.1.0 networks in the above sample topology).
-
There needs to be at least one seed/edge leaf node in the fabric with connectivity to the DCNM, DHCP server, vCenter, Management Station, and so on, so that the other NX-OS nodes in the IP fabric can successfully go through inband POAP.
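As a hedged example of the per-fabric-link scopes (placeholder /30 subnets; the inband_p2p_dhcp_scope.py script mentioned above can generate equivalent entries automatically):
# One scope per point-to-point fabric link (placeholder values)
subnet 192.0.1.0 netmask 255.255.255.252 {
  range 192.0.1.2 192.0.1.2;
  option routers 192.0.1.1;
}
subnet 192.0.1.4 netmask 255.255.255.252 {
  range 192.0.1.6 192.0.1.6;
  option routers 192.0.1.5;
}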
With DCNM as the DHCP and Repository/Configuration Server:
-
When DCNM is used as the configuration server for the fabric, DCNM POAP templates are used for day-0 configuration on the devices. The DCNM POAP templates support the topology displayed in the illustration.
-
For the devices to be discovered in the inband IP fabric topology on the DCNM, the Management IP field in the POAP template's General tab should be set to the management IP of the device, which in this case is its loopback0 IP. It is not the mgmt0 interface IP, because the management VRF for inband operation is default rather than management.
-
The first time the devices are brought up, write erase and reload must be issued on the device itself and not via the DCNM GUI. This is because, for the inband case, the management IP known to DCNM is the loopback0 IP, and the fabric has not yet been discovered before POAP, so the device is not reachable from DCNM.
-
The DHCP scope for the underlay network must be added manually, either by configuring a DHCP scope via the DCNM GUI or by editing the dhcpd.conf file on the DHCP server. Routes for reachability must also be added manually on other devices such as the intermediate router, DCNM, and vCenter.
-
The seed leaf must be brought up first, followed by the route reflector (RR). After the seed leaf and RR spine are up, any of the other nodes (non-RR spines, other leaf nodes) can successfully complete inband POAP. The devices loop in the POAP phase until the seed leaf and RR are up and there is a successful DHCP handshake and a successful TFTP operation.
Inband Management
Device management is done via vrf management in the out-of-band network or via vrf default in the inband network. This feature adds support for managing the network devices from DCNM via the front-panel inband ports under vrf default for Cisco Nexus 9000 Series switches.
Note |
Inband POAP and inband management are available beginning with Cisco NX-OS Release 7.0(3)I5(2) for Cisco Nexus 9000 Series switches and Cisco DCNM Release 10.1(2) with 10.1(2)ST(1) Cisco DCNM templates. |
Inband POAP with inband management (Inband POAP and vrf = default for management) and Out-of-band POAP with Out-of-band management (POAP over mgmt0 and vrf = management for management) are currently supported via the DCNM POAP templates.
DCNM POAP Templates
The following templates are provided to help generate the required configuration for the VXLAN EVPN feature on Cisco Nexus switches:
-
Leaf
-
Spine
-
Border Leaf
-
Border Spine
Note |
POAP is day-0 device automated provisioning. For information on the installation of DCNM POAP templates, see the Cisco DCNM Installation Guide. For POAP launchpad, see the Cisco DCNM Web Client Online Help. |
POAP template parameters are divided into various groups and these parameters are used by the template to generate a set of CLI commands to be run on a switch.
Note |
The parameters may vary based on your POAP template selection. The VXLAN EVPN leaf template supports Dynamic Virtual Ports (DVP) and multi-mobility domain with DCNM version 10.0(1) or later, but the DVP remains disabled by default. |
Fabric Settings
The following global settings can be configured from the Cisco DCNM application.
You can specify the general settings for the Fabric. From the menu bar, select Admin > Fabric > General Settings to specify the following settings:
-
Configure LDAP Server and Segment/Partition ID ranges
-
Segment ID Range
-
Partition ID Range
-
Configure dynamic VLAN ranges used by auto configuration
-
System Dynamic VLANs
-
Core Dynamic VLANs
-
Configure global anycast gateway MAC
You can set common parameters that are populated as default values in POAP templates. For a new POAP template, values defined in this global settings page are automatically pre-populated. From the menu bar, select Admin > Fabric > POAP Settings to specify the following settings:
-
Configure global setting for the fabric backbone
-
BGP AS
-
BGP RR IP/IP
-
Backbone Prefix Length
-
Enable conversational learning
You can configure the encapsulation settings from Admin > Fabric > Fabric Encapsulation Settings to specify the following settings:
-
Enable VXLAN encapsulation for leaf network
-
Specify multicast group subnet (default is 239.1.1.0/25)
-
Specify number of rendezvous points (RPs): 1, 2, or 4
-
One or multiple multicast groups based on the number of RPs specified are generated by Cisco DCNM automatically
-
Specify RP Subnet (default is 10.254.254.0/24)
-
Phantom RP addresses are generated from the RP subnet
-
RP Redundancy is same as number of RPs
-
Corresponding loopback interfaces with IP addresses in the same subnet as the phantom RP addresses but with different masks are added to each RP
Note |
The inband POAP and management support is available on Cisco DCNM POAP templates for Cisco Nexus 9000 Series switches since Cisco DCNM 10.1(x) release. |
You can configure the following fields in the templates for inband POAP template configuration on leaf and spine nodes:
Cisco Nexus 9000 Series switches Leaf POAP Template:
For Seed/Edge Leaf configuration:
-
General->Management VRF: Select default from the dropdown.
-
General->Seed device to enable inband POAP and management.
-
General->Inband Port
-
General->Inband Local Port IP
-
General->Inband Next-Hop Port IP
-
General->DHCP Server IP Address
-
General->Second DHCP Server IP Address
Spine POAP Template:
-
General->Management VRF: Select default from the dropdown.
-
Manageability->DHCP Server IP
-
Manageability->Second DHCP Server IP Address
Auto-Configuration in Programmable Fabric
Configuration Profile
A configuration profile in Cisco Programmable Fabric is a collection of commands used to instantiate a specific configuration. Based on appropriate end-host triggers (VDP or a data-plane trigger, that is, any data frame), the configuration profiles are grouped to allow flexible and extensible options to instantiate day-1 tenant-related configurations on a Cisco Programmable Fabric leaf on a need basis.
The commands are entered using variables for certain parameters instead of entering the actual value. The switch then populates the actual values to derive the completed command. When the required parameters for a particular configuration profile are available, the profile can be instantiated to create a configuration set. The switch then applies this configuration set to complete the command execution belonging to the configuration set.
The commands which are supported under a configuration profile are called config-profile-aware commands. Most of the commands in the switch can be entered into the configuration profile.
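The following is a minimal, hypothetical sketch of such a profile; the profile name and parameter names are illustrative only and do not correspond to one of the shipped DCNM profiles. Each $-prefixed variable is replaced with an actual value when the profile is instantiated:
configure profile exampleTenantNetworkProfile
vlan $vlanId
vn-segment $segmentId
interface vlan $vlanId
vrf member $vrfName
ip address $gatewayIpAddress/$netMaskLength tag 12345
fabric forwarding mode anycast-gateway
no shutdown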
Note |
Various sets of configuration profiles can be created and stored in the network database, and each network can use a different configuration profile. The configuration profiles can be used from the network to set up on the leaf whenever required. |
Profile Refresh
Profile refresh involves updating and/or removing profile parameters (arguments or variables) without disrupting the traffic while using universal profiles. After the changes are done, Cisco DCNM executes the fabric database refresh vni/vrf command.
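For example (a sketch; the VNI value and VRF name are placeholders), the refresh can take either form:
Switch# fabric database refresh vni 30000
Switch# fabric database refresh vrf <vrf-name>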
LDAP Configuration
There are five different tables that you can query:
-
Network Table
-
Partition Table
-
Profile Table
-
Host Table
-
BL-DCI Table
Network Table
Switch# configure terminal
Switch(config)# fabric database type network
Switch(config)# server protocol ldap ip 10.1.1.2 vrf <default/management>
Switch(config)# db-table ou=networks, dc=cisco, dc=com key-type 1
Partition Table
Switch# configure terminal
Switch(config)# fabric database type partition
Switch(config)# server protocol ldap ip 10.1.1.2 vrf <default/management>
Switch(config)# db-table ou=partitions, dc=cisco, dc=com
Profile Table
Multi-tenancy lite version
This table is used to store configuration profiles that are pre-packaged with DCNM as well as custom configuration profiles that are created. The Cisco Nexus 5000 Series Switches or Cisco Nexus 6000 Series Switches employ this tenancy model, where a maximum of 4000 VLANs are supported on the ToR. If a profile is not pre-configured on the system, the ToR queries the profile table to download the profile contents and caches them locally.
Switch# configure terminal
Switch(config)# fabric database type profile
Switch(config)# server protocol ldap ip 10.1.1.2 vrf <default/management>
Switch(config)# db-table ou=profiles, dc=cisco, dc=com
Multi-tenancy full version
The Cisco Nexus 7000 Series Switches employ this tenancy model, where up to 4000 VLANs or dot1q tags can be supported on a per-port basis.
Switch# configure terminal
Switch(config)# fabric database type profile
Switch(config)# server protocol ldap ip 10.1.1.2 vrf <default/management>
Switch(config)# db-table ou=profilesBridgeDomain, dc=cisco, dc=com
Host Table
Switch(config)# fabric database type host
Switch(config)# server protocol ldap host ip [ip address] vrf [vrf name]
Switch(config)# db-table ou=hosts,dc=cisco,dc=com
Note |
A host table is required for LLDP auto-config support. |
BL-DCI Table
switch(config)# fabric database type bl-dci
switch(config)# server protocol ldap ip [ip address] vrf [vrf name]
switch(config)# db-table ou=bl-dcis,dc=cisco,dc=com
Note |
A BL-DCI table is required for border leaf. DCI Auto-configuration is not applicable to Cisco Nexus 9000 Series switches. |
Specifying Profile Mapping for Network Instances
Switch(config)# fabric database profile-map global
Switch(config-profile-map-global)# dot1q default dynamic
Switch(config-profile-map-global)# vni default dynamic
Switch(config)# fabric database profile-map global
Switch(config-profile-map-global)# vrf default dynamic
LDAP Preemptive
Note |
LDAP Preemptive is currently not supported on Cisco Nexus 9000 Series switches. |
Customers can deploy multiple LDAP servers. One server can be specified as the primary server, and the other servers become secondary. When the primary LDAP server is up, the device always queries the primary server. When the primary server is down, the device queries the secondary servers instead. However, when the primary server comes back up, the device performs all new LDAP queries on the primary server again instead of staying on the secondary. One reason for this behavior is load balancing: for example, half of the devices can set LDAP-A as primary and half can set LDAP-B as primary, so the load is evenly distributed between the two servers. Only when one server is down do all queries go to the server that is still up.
The following configuration specifies the primary LDAP server:
Switch(config)#fabric database server primary {ip <ipaddr> | ipv6 <ipv6addr> | host <hostname>} [port <portnum>] [vrf {<vrf-name>}] [enable-ssl]
Note that the primary LDAP server specified in this CLI must have the same format as specified in the DB table configuration. For example, if the DB table configuration uses a host name for the server, this CLI must use the host name. Similarly, if the DB table configuration uses the port/vrf/enable-ssl options, this CLI must use the same options.
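A minimal sketch (server IP addresses and VRF are placeholders) in which two LDAP servers are configured for the network table and the first one is designated as primary, using the same ip and vrf format in both places:
Switch(config)# fabric database type network
Switch(config)# server protocol ldap ip 10.1.1.2 vrf management
Switch(config)# db-table ou=networks, dc=cisco, dc=com key-type 1
Switch(config)# server protocol ldap ip 10.1.1.3 vrf management
Switch(config)# db-table ou=networks, dc=cisco, dc=com key-type 1
Switch(config)# fabric database server primary ip 10.1.1.2 vrf management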
Routable Loopback Auto-Configuration
Note |
Routable loopback auto-configuration is not supported on Cisco Nexus 9000 Series switches. |
In multi-tenant environments, every leaf switch needs to have a unique routable IP address per VRF for reachability. The user can turn on a knob on demand on a per-VRF basis. By turning on this knob, every leaf switch on which the VRF exists is allocated a unique routable IP address for that VRF (typically via a loopback interface). The reachability of this loopback address is advertised via BGP-EVPN using route type 5, thereby preventing any additional sources of floods or ARPs.
In many scenarios, there is a requirement for a unique per-leaf-switch, per-VRF interface. Example scenarios that require this include DHCPv4 relay support when the DHCP server and client are in non-default VRFs behind a vPC pair of switches. In addition, this is a prerequisite for VXLAN OAM functionality. To automatically configure a loopback interface per leaf switch per VRF, the partition profile vrf-common-evpn-loopback can be used. During profile application, the leaf switches automatically pick a free loopback interface ID, configure it with the respective VRF/partition, and associate the loopback interface with the IP address used for the VTEP-associated loopback interface.
The following configuration updates the VRF profile on the leaf switch so that a routable loopback IP address is auto-configured under that VRF and advertised via EVPN to all leaf switches.
configure profile vrf-common-evpn-loopback
interface loopback $system_auto_loopbackId
vrf member $vrfName
ip address $system_auto_backboneIpAddress/32 tag 12345
vrf context $vrfName
vni $include_vrfSegmentId
rd auto
ip route 0.0.0.0/0 $include_serviceNodeIpAddress
address-family ipv4 unicast
route-target both auto
route-target both auto evpn
address-family ipv6 unicast
route-target both auto
route-target both auto evpn
router bgp $asn
vrf $vrfName
address-family ipv4 unicast
advertise l2vpn evpn
redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
maximum-paths ibgp 2
address-family ipv6 unicast
advertise l2vpn evpn
redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
maximum-paths ibgp 2
interface nve $nveId
member vni $include_vrfSegmentId associate-vrf
!
DCI Auto-Configuration
The following DCI auto-configuration options are supported for Cisco Programmable Fabric.
LISP
feature lisp
vrf context <COMMON-RLOC-VRF>
ip lisp itr-etr
ip lisp itr map-resolver <MR-address>
ip lisp etr map-server <MS-address> key <registration-key>
configure profile lispDCEdgeNodeProfile
vrf context $vrfName
ip lisp itr-etr
ip lisp locator-vrf $COMMON-RLOC-VRF-NAME
lisp instance-id $include_dciId
register-route-notifications tag $asn
lisp dynamic-eid $vrfName
database-mapping 0.0.0.0/0 $include_RLOC1-address, $include_RLOC1-priority, $include_RLOC1-weight
database-mapping 0.0.0.0/0 $include_RLOC2-address, $include_RLOC2-priority, $include_RLOC2-weight
Layer-3 DCI for VXLAN EVPN
Layer-3 DCI supports both single-box and two-box solutions with EVPN to VRF Lite handoff at the border leaf.
config profile vrf-common-universal-evpn-bl-dc-edge
vrf context $vrfName
vni $include_l3SegmentId
rd auto
address-family ipv4 unicast
route-target both $rsvdGlobalAsn:$dciId
route-target both auto evpn
address-family ipv6 unicast
route-target both $rsvdGlobalAsn:$dciId
route-target both auto evpn
router bgp $asn
vrf $vrfName
address-family ipv4 unicast
allow vpn default-originate
address-family ipv6 unicast
allow vpn default-originate
interface nve $nveId
member vni $include_l3SegmentId associate-vrf
end
configure profile defaultCombinedEvpnBorderLeafDcEdgeProfile
router bgp $asn
include profile any
config profile vrf-common-universal-evpn-bl
vrf context $vrfName
vni $include_l3SegmentId
rd auto
address-family ipv4 unicast
route-target both auto evpn
address-family ipv6 unicast
route-target both auto evpn
router bgp $asn
vrf $vrfName
address-family ipv4 unicast
maximum-paths 2
maximum-paths ibgp 2
allow vpn default-originate
address-family ipv6 unicast
maximum-paths 2
maximum-paths ibgp 2
allow vpn default-originate
neighbor $include_peerIpAddress1 remote-as $include_peerAsn
address-family ipv4 unicast
send-community both
neighbor $include_peerIpv6Address1 remote-as $include_peerAsn
address-family ipv6 unicast
send-community both
neighbor $include_peerIpAddress2 remote-as $include_peerAsn
address-family ipv4 unicast
send-community both
neighbor $include_peerIpv6Address2 remote-as $include_peerAsn
address-family ipv6 unicast
send-community both
end