This document describes the steps to set up and configure an Application Centric Infrastructure (ACI) Multi-Site fabric.
The ACI Multi-Site feature introduced in Release 3.0 allows you to interconnect separate Cisco ACI Application Policy Infrastructure Controller (APIC) cluster domains (fabrics), with each site representing a separate availability zone. This provides multi-tenant Layer 2 and Layer 3 network connectivity across sites and extends the policy domain end-to-end across fabrics. You can create policies in the Multi-Site GUI and push them to all integrated sites or to selected sites. Alternatively, you can import tenants and their policies from one site and deploy them on other sites.
Cisco recommends that you:
The information in this document is based on these software and hardware versions:
**Site A**

| Hardware Device | Logical Name |
|---|---|
| N9K-C9504 w/ N9K-X9732C-EX | spine109 |
| N9K-C93180YC-EX | leaf101 |
| N9K-C93180YC-EX | leaf102 |
| N9K-C9372PX-E | leaf103 |
| APIC-SERVER-M2 | apic1 |

**Site B**

| Hardware Device | Logical Name |
|---|---|
| N9K-C9504 w/ N9K-X9732C-EX | spine209 |
| N9K-C93180YC-EX | leaf201 |
| N9K-C93180YC-EX | leaf202 |
| N9K-C9372PX-E | leaf203 |
| APIC-SERVER-M2 | apic2 |

IP Network (IPN): N9K-C93180YC-EX
| Software | Version |
|---|---|
| APIC | 3.1(2m) |
| MSC | 1.2(2b) |
| IPN | NX-OS 7.0(3)I4(8a) |
The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, ensure that you understand the potential impact of any command.
Note: Cross-site namespace normalization is performed by the connecting spine switches. This requires second-generation or later Cisco Nexus 9000 Series switches with "EX" or "FX" at the end of the product name. Alternatively, the Nexus 9364C is supported in ACI Multi-Site Release 1.1(x) and later.
For more details on hardware requirements and compatibility information, see the ACI Multi-Site Hardware Requirements Guide.
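If you want to confirm the spine hardware generation from the switch CLI before you proceed, the standard inventory commands list the chassis and line-card models so you can check for an "EX" or "FX" suffix (the prompt here simply matches the lab device names used in this document):

```
spine109# show module
spine109# show inventory
```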
This document focuses mainly on the ACI and MSC configuration for a Multi-Site deployment. IPN switch configuration details are not fully covered; however, a few important configurations from the IPN switch are listed for reference purposes.
These configurations are used in the IPN device connected to the ACI spines.
```
vrf context intersite
  description VRF for Multi-Site lab

feature ospf
router ospf intersite
  vrf intersite
```
Towards Spine109 in Site-A:

```
interface Ethernet1/49
  speed 100000
  mtu 9216
  no negotiate auto
  no shutdown

interface Ethernet1/49.4
  mtu 9150
  encapsulation dot1q 4
  vrf member intersite
  ip address 172.16.1.34/27
  ip ospf network point-to-point
  ip router ospf intersite area 0.0.0.1
  no shutdown
```

Towards Spine209 in Site-B:

```
interface Ethernet1/50
  speed 100000
  mtu 9216
  no negotiate auto
  no shutdown

interface Ethernet1/50.4
  mtu 9150
  encapsulation dot1q 4
  vrf member intersite
  ip address 172.16.2.34/27
  ip ospf network point-to-point
  ip router ospf intersite area 0.0.0.1
  no shutdown
```
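As a quick sanity check on the IPN side (a sketch that assumes the VRF and interface names from the sample configuration above; the `IPN#` prompt is only illustrative), confirm that the sub-interfaces are up in the intersite VRF. Once the infra L3Out is pushed from MSC later in this document, the OSPF adjacencies toward the spines must also reach the FULL state:

```
IPN# show ip interface brief vrf intersite
IPN# show ip ospf neighbors vrf intersite
```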
Note: Consider the maximum transmission unit (MTU) of the Multiprotocol Border Gateway Protocol (MP-BGP) Ethernet Virtual Private Network (EVPN) control-plane communication between spine nodes in different sites. By default, the spine nodes generate 9000-byte packets in order to exchange endpoint routing information. If that default value is not modified, the Intersite Network (ISN) must support an MTU size of at least 9100 bytes. In order to tune the default value, modify the corresponding system settings in each APIC domain.
This example uses the default control plane MTU size (9000 bytes) on the spine nodes.
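To confirm that the ISN path actually honors the required MTU, you can check the sub-interface MTU on the IPN switch (again assuming the interface names from the sample configuration). With the default 9000-byte control-plane MTU on the spines, the value reported here must be at least 9100 bytes; the lab configuration above uses 9150:

```
IPN# show interface ethernet1/49.4 | include MTU
IPN# show interface ethernet1/50.4 | include MTU
```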
Configure the spine access policies for the spine uplinks to the IPN switch with an Access Entity Profile (AEP) and a Layer 3 routed domain (APIC GUI > Fabric > Access Policies).
Note: At this point, there is no need to configure the Open Shortest Path First (OSPF) L3Out under the infra tenant from the APIC GUI. This is configured via MSC, and the configuration is pushed to each site later.
In most cases, the attribute values have already been retrieved automatically from the APIC by MSC.
Configure these settings for each spine that connects to the IPN in the MSC site configuration (a verification sketch follows this list):

- Set the IP address and mask for the spine interface that connects to the IPN.
- BGP Peering: On
- Control Plane TEP: enter the router IP address.
- Spine is Route Reflector: On
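After the site infra configuration is deployed from MSC, one way to confirm that the intersite infra L3Out was actually pushed to each APIC is a class query from the APIC CLI. The L3Out name shown in your deployment can differ; nothing specific is assumed here beyond the `l3extOut` object class:

```
apic1# moquery -c l3extOut | egrep "^(name|dn) "
```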
The initial integration between APIC clusters and MSC is complete and ready to use.
You can now configure stretched policies for tenants on the MSC and deploy them to the different ACI sites.
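As a simple cross-check that a tenant created or imported on MSC exists in both APIC domains, you can list the tenants on each APIC. The tenant names are whatever you configured; this query makes no assumptions about them:

```
apic1# moquery -c fvTenant | grep "^name "
apic2# moquery -c fvTenant | grep "^name "
```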
Use this section in order to confirm that your configuration works properly.
Also, ensure that the L3Out logical node and interface profile configuration is correctly set up with VLAN 4.
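One way to confirm the VLAN 4 encapsulation from the spine itself is to look at the sub-interface that the infra L3Out created toward the IPN (Eth1/32.32 in this lab, as seen in the OSPF neighbor output later in this section). The encapsulation line of the output is expected to report VLAN ID 4:

```
spine109# show interface ethernet1/32.32
```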
Log in to the spine CLI and verify that the BGP L2VPN EVPN and OSPF sessions are up on each spine. Also, verify that the BGP node role is msite-speaker.
```
spine109# show ip ospf neighbors vrf overlay-1
 OSPF Process ID default VRF overlay-1
 Total number of neighbors: 1
 Neighbor ID     Pri State            Up Time  Address         Interface
 172.16.1.34       1 FULL/ -          04:13:07 172.16.1.34     Eth1/32.32
spine109#

spine109# show bgp l2vpn evpn summary vrf overlay-1
BGP summary information for VRF overlay-1, address family L2VPN EVPN
BGP router identifier 172.16.1.3, local AS number 100
BGP table version is 235, L2VPN EVPN config peers 1, capable peers 1
0 network entries and 0 paths using 0 bytes of memory
BGP attribute entries [0/0], BGP AS path entries [0/0]
BGP community entries [0/0], BGP clusterlist entries [0/0]

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
172.16.2.3      4   200     259     259      235    0    0 04:15:39 0
spine109#

spine109# vsh -c 'show bgp internal node-role'
Node role : : MSITE_SPEAKER
```
```
spine209# show ip ospf neighbors vrf overlay-1
 OSPF Process ID default VRF overlay-1
 Total number of neighbors: 1
 Neighbor ID     Pri State            Up Time  Address         Interface
 172.16.1.34       1 FULL/ -          04:20:36 172.16.2.34     Eth1/32.32
spine209#

spine209# show bgp l2vpn evpn summary vrf overlay-1
BGP summary information for VRF overlay-1, address family L2VPN EVPN
BGP router identifier 172.16.2.3, local AS number 200
BGP table version is 270, L2VPN EVPN config peers 1, capable peers 1
0 network entries and 0 paths using 0 bytes of memory
BGP attribute entries [0/0], BGP AS path entries [0/0]
BGP community entries [0/0], BGP clusterlist entries [0/0]

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
172.16.1.3      4   100     264     264      270    0    0 04:20:40 0
spine209#

spine209# vsh -c 'show bgp internal node-role'
Node role : : MSITE_SPEAKER
```
Log in to the spine CLI in order to verify the overlay-1 loopback interfaces.
- ETEP (Multi-Pod dataplane TEP): The dataplane tunnel endpoint address used to route traffic between multiple pods within a single ACI fabric.
- DCI-UCAST (intersite dataplane unicast ETEP, anycast per site): This anycast dataplane ETEP address is unique per site. It is assigned to all the spines connected to the IPN/ISN device and is used to receive Layer 2/Layer 3 unicast traffic.
- DCI-MCAST-HREP (intersite dataplane multicast TEP): This anycast ETEP address is assigned to all the spines connected to the IPN/ISN device and is used to receive Layer 2 BUM (broadcast, unknown unicast, and multicast) traffic.
- MSCP-ETEP (Multi-Site control-plane ETEP): This is the control-plane ETEP address, also known as the BGP router ID on each spine for MP-BGP EVPN.
```
spine109# show ip int vrf overlay-1
<snip>
lo17, Interface status: protocol-up/link-up/admin-up, iod: 83, mode: etep
  IP address: 172.16.1.4, IP subnet: 172.16.1.4/32
  IP broadcast address: 255.255.255.255
  IP primary address route-preference: 1, tag: 0
lo18, Interface status: protocol-up/link-up/admin-up, iod: 84, mode: dci-ucast
  IP address: 172.16.1.1, IP subnet: 172.16.1.1/32
  IP broadcast address: 255.255.255.255
  IP primary address route-preference: 1, tag: 0
lo19, Interface status: protocol-up/link-up/admin-up, iod: 85, mode: dci-mcast-hrep
  IP address: 172.16.1.2, IP subnet: 172.16.1.2/32
  IP broadcast address: 255.255.255.255
  IP primary address route-preference: 1, tag: 0
lo20, Interface status: protocol-up/link-up/admin-up, iod: 87, mode: mscp-etep
  IP address: 172.16.1.3, IP subnet: 172.16.1.3/32
  IP broadcast address: 255.255.255.255
  IP primary address route-preference: 1, tag: 0
```

```
spine209# show ip int vrf overlay-1
<snip>
lo13, Interface status: protocol-up/link-up/admin-up, iod: 83, mode: etep
  IP address: 172.16.2.4, IP subnet: 172.16.2.4/32
  IP broadcast address: 255.255.255.255
  IP primary address route-preference: 1, tag: 0
lo14, Interface status: protocol-up/link-up/admin-up, iod: 84, mode: dci-ucast
  IP address: 172.16.2.1, IP subnet: 172.16.2.1/32
  IP broadcast address: 255.255.255.255
  IP primary address route-preference: 1, tag: 0
lo15, Interface status: protocol-up/link-up/admin-up, iod: 85, mode: dci-mcast-hrep
  IP address: 172.16.2.2, IP subnet: 172.16.2.2/32
  IP broadcast address: 255.255.255.255
  IP primary address route-preference: 1, tag: 0
lo16, Interface status: protocol-up/link-up/admin-up, iod: 87, mode: mscp-etep
  IP address: 172.16.2.3, IP subnet: 172.16.2.3/32
  IP broadcast address: 255.255.255.255
  IP primary address route-preference: 1, tag: 0
```
Finally, ensure that no faults are seen on the MSC.
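In addition to the MSC dashboard, you can also spot-check for faults from each APIC CLI. This is only a generic fault query, not one specific to Multi-Site:

```
apic1# moquery -c faultInst | egrep "^(code|severity|descr) "
```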
There is currently no specific troubleshooting information available for this configuration.