Configuring the Cisco APIC Using the CLI

Configuring the Cisco APIC Cluster

Cluster Management Guidelines

The Cisco Application Policy Infrastructure Controller (APIC) cluster comprises multiple Cisco APICs that provide operators with unified, real-time monitoring, diagnostic, and configuration management capabilities for the ACI fabric. To ensure optimal system performance, follow the guidelines below when making changes to the Cisco APIC cluster.


Note

Before initiating any change to the cluster, always verify its health. When performing planned changes, all controllers in the cluster should be healthy. If the health status of one or more Cisco APICs in the cluster is not "fully fit," remedy that situation before proceeding. Also, ensure that any controllers added to the cluster are running the same firmware version as the other controllers in the Cisco APIC cluster.


Follow these general guidelines when managing clusters:

  • We recommend that you have at least 3 active Cisco APICs in a cluster, along with additional standby Cisco APICs. In most cases, we recommend a cluster size of 3, 5, or 7 Cisco APICs. We recommend 4 Cisco APICs for a two-site multi-pod fabric that has between 80 and 200 leaf switches.

  • Disregard cluster information from Cisco APICs that are not currently in the cluster; they do not provide accurate cluster information.

  • Cluster slots contain a Cisco APIC ChassisID. Once you configure a slot, it remains unavailable until you decommission the Cisco APIC with the assigned ChassisID.

  • If a Cisco APIC firmware upgrade is in progress, wait for it to complete and the cluster to be fully fit before proceeding with any other changes to the cluster.

  • When moving a Cisco APIC, first ensure that you have a healthy cluster. After verifying the health of the Cisco APIC cluster, choose the Cisco APIC that you intend to move and shut it down. After the Cisco APIC has shut down, move it, reconnect it, and then turn it back on. From the GUI, verify that all controllers in the cluster return to a fully fit state.


    Note

    Only move one Cisco APIC at a time.


  • When a Cisco APIC cluster is split into two or more groups, the ID of a node can change and the change is not synchronized across all Cisco APICs. This can cause node ID inconsistencies between Cisco APICs, and the affected leaf nodes may not appear in the inventory in the Cisco APIC GUI. If a Cisco APIC cluster is split, decommission the affected leaf nodes from one Cisco APIC and register them again, so that the node ID inconsistency is resolved and the health status of the APICs in the cluster returns to a fully fit state.

  • Before configuring the Cisco APIC cluster, ensure that all of the Cisco APICs are running the same firmware version. Initial clustering of Cisco APICs running differing versions is an unsupported operation and may cause problems within the cluster.

This section contains the following topics:

Replacing a Cisco APIC in a Cluster Using the CLI


Note

  • For more information about managing clusters, see Cluster Management Guidelines.

  • When you replace an APIC, the password will always be synced from the cluster. When replacing APIC 1, you will be asked for a password but it will be ignored in favor of the existing password in the cluster. When replacing APIC 2 or 3, you will not be asked for a password.


Before you begin

Before replacing an APIC, ensure that the replacement APIC is running the same firmware version as the APIC to be replaced. If the versions are not the same, you must update the firmware of the replacement APIC before you begin. Initial clustering of APICs running differing versions is an unsupported operation and may cause problems within the cluster.
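
As a quick version check, the show controller command lists the software release of each controller in its Version column; a full sample of this output appears in "Verifying Cold Standby Status Using the CLI" later in this section.

Example:

apic1# show controller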

Procedure


Step 1

Identify the APIC that you want to replace.

Step 2

Note the configuration details of the APIC to be replaced by using the acidiag avread command.
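
Run the command from any healthy APIC in the cluster and record details such as the controller ID, chassis ID, and serial number of the unit being replaced.

Example:

apic1# acidiag avread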

Step 3

Decommission the APIC using the controller controller-id decommission command.

Note 
Decommissioning the APIC removes the mapping between the APIC ID and Chassis ID. The new APIC typically has a different APIC ID, so you must remove this mapping in order to add a new APIC to the cluster.
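
The decommission command might look as follows, assuming the APIC being replaced is controller ID 3 and that the command is entered from configuration mode (a sketch; verify the exact mode and prompt for your release).

Example:

apic1# configure
apic1(config)# controller 3 decommission
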
Step 4

To commission the new APIC, follow these steps:

  1. Disconnect the old APIC from the fabric.

  2. Connect the replacement APIC to the fabric.

    The new APIC controller appears in the APIC GUI menu System > Controllers > apic_controller_name > Cluster as Seen by Node in the Unauthorized Controllers list.

  3. Commission the new APIC using the controller controller-id commission command (see the example after these steps).

  4. Boot the new APIC.

  5. Allow several minutes for the new APIC information to propagate to the rest of the cluster.

    The new APIC controller appears in the APIC GUI menu System > Controllers > apic_controller_name > Cluster as Seen by Node in the Active Controllers list.
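
The commission command in substep 3 might look as follows, again assuming controller ID 3 and configuration mode (a sketch; verify the exact prompt for your release).

Example:

apic1# configure
apic1(config)# controller 3 commission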


Switching Over Active APIC with Standby APIC Using CLI

Use this procedure to switch over an active APIC with a standby APIC.

Procedure


Step 1

replace-controller replace ID-number backup-serial-number

Replaces an active APIC with a standby APIC.

Example:

apic1# replace-controller replace 2 FCH1804V27L
Do you want to replace APIC 2 with a backup? (Y/n): Y
Step 2

replace-controller reset ID number

Resets the failover status of the active controller.

Example:

apic1# replace-controller reset 2
Do you want to reset failover status of APIC 2? (Y/n): Y

Verifying Cold Standby Status Using the CLI

Procedure


To verify the Cold Standby status of a Cisco APIC, log in to the APIC as admin and enter the show controller command.


apic1# show controller
Fabric Name          : vegas
Operational Size     : 3
Cluster Size         : 3
Time Difference      : 496
Fabric Security Mode : strict

 ID    Pod   Address          In-Band IPv4     In-Band IPv6               OOB IPv4         OOB IPv6                   Version             Flags  Serial Number     Health
 ----  ----  ---------------  ---------------  -------------------------  ---------------  -------------------------  ------------------  -----  ----------------  ------------------
 1*    1     10.0.0.1         0.0.0.0          fc00::1                    172.23.142.4     fe80::26e9:b3ff:fe91:c4e0  2.2(0.172)          crva-  FCH1748V0DF       fully-fit
 2     1     10.0.0.2         0.0.0.0          fc00::1                    172.23.142.6     fe80::26e9:bf8f:fe91:f37c  2.2(0.172)          crva-  FCH1747V0YF       fully-fit
 3     1     10.0.0.3         0.0.0.0          fc00::1                    172.23.142.8     fe80::4e00:82ff:fead:bc66  2.2(0.172)          crva-  FCH1725V2DK       fully-fit
 21~         10.0.0.21                                                                                                                    -----  FCH1734V2DG

Flags - c:Commissioned | r:Registered | v:Valid Certificate | a:Approved | f/s:Failover fail/success
(*)Current (~)Standby

Fabric Initialization and Switch Discovery

Switch Discovery

Registering an Unregistered Switch Using the CLI

Use this procedure to register a switch from the Nodes Pending Registration tab on the Fabric Membership work pane using the CLI.


Note

This procedure is identical to "Adding a Switch Before Discovery Using the CLI". When you execute the command, the system determines if the node exists and, if not, adds it. If the node exists, the system registers it.

Procedure

Command or Action: [no] system switch-id serial-number switch-id name pod id role leaf node-type tier-2-leaf

Purpose: Adds the switch to the pending registration list.
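
The command might look as follows, assuming configuration mode and hypothetical values: serial number FDO12345678, node ID 103, switch name tier2-leaf-103, and pod 1.

Example:

apic1# configure
apic1(config)# system switch-id FDO12345678 103 tier2-leaf-103 pod 1 role leaf node-type tier-2-leaf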

Adding a Switch Before Discovery Using the CLI

Use this procedure to add a switch to the Nodes Pending Registration tab on the Fabric Membership work pane using the CLI.


Note

This procedure is identical to "Registering an Unregistered Switch Using the CLI". When you execute the command, the system determines if the node exists and, if not, adds it. If the node does exist, the system registers it.

Procedure


[no] system switch-id serial-number switch-id name pod id role leaf node-type tier-2-leaf

Adds the switch to the pending registration list.
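
The no form of the command removes the corresponding pending entry (a hedged sketch; verify the behavior on your release), shown here with the same hypothetical values of serial number FDO12345678, node ID 103, name tier2-leaf-103, and pod 1.

Example:

apic1# configure
apic1(config)# no system switch-id FDO12345678 103 tier2-leaf-103 pod 1 role leaf node-type tier-2-leaf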


Graceful Insertion and Removal (GIR) Mode

Removing a Switch to Maintenance Mode Using the CLI

Use this procedure to remove a switch to maintenance mode using the CLI.


Note

While the switch is in maintenance mode, CLI 'show' commands on the switch report the front-panel ports as being in the up state and the BGP protocol as up and running. In reality, the interfaces are shut down and all BGP adjacencies are brought down, but the displayed active states allow for debugging.

Procedure


[no] debug-switch node_id or node_name

Removes the switch to maintenance mode.
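
The command might look as follows, assuming the switch has node ID 101 and that the command is entered from configuration mode (a sketch; verify the exact prompt for your release).

Example:

apic1# configure
apic1(config)# debug-switch 101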


Inserting a Switch to Operation Mode Using the CLI

Use this procedure to insert a switch to operational mode using the CLI.

Procedure


no debug-switch node_id or node_name

Inserts the switch to operational mode.
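
The command might look as follows, returning the same hypothetical node ID 101 to operational mode from configuration mode.

Example:

apic1# configure
apic1(config)# no debug-switch 101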