Cisco Application Policy Infrastructure Controller Release Notes, Release 3.2(9)
The Cisco Application Centric Infrastructure (ACI) is an architecture that allows the application to define the networking requirements in a programmatic way. This architecture simplifies, optimizes, and accelerates the entire application deployment lifecycle. Cisco Application Policy Infrastructure Controller (APIC) is the software, or operating system, that acts as the controller.
The Cisco Application Centric Infrastructure Fundamentals guide provides complete details about the Cisco ACI, including a glossary of terms that are used in the Cisco ACI.
This document describes the features, bugs, and limitations for the Cisco APIC.
Note: Use this document with the Cisco Nexus 9000 ACI-Mode Switches Release Notes, Release 13.2(9), which you can view at the following location:
Release notes are sometimes updated with new information about restrictions and bugs. See the following website for the most recent version of this document:
You can watch videos that demonstrate how to perform specific tasks in the Cisco APIC on the Cisco ACI YouTube channel:
https://www.youtube.com/c/CiscoACIchannel
For the verified scalability limits (except the CLI limits), see the Verified Scalability Guide for this release.
For the CLI verified scalability limits, see the Cisco NX-OS Style Command-Line Interface Configuration Guide for this release.
You can access these documents from the following website:
Table 1 shows the online change history for this document.
Table 1 Online Change History
Date |
Description |
December 9, 2022 |
In the Open Bugs section, added bug CSCvw33061. |
August 1, 2022 |
In the Miscellaneous Compatibility Information section, added: ■ 4.2(2a) CIMC HUU ISO (recommended) for UCS C220/C240 M5 (APIC-L3/M3) ■ 4.1(2k) CIMC HUU ISO (recommended) for UCS C220/C240 M4 (APIC-L2/M2) |
March 21, 2022 |
In the Miscellaneous Compatibility Information section, added: ■ 4.1(3f) CIMC HUU ISO (recommended) for UCS C220/C240 M5 (APIC-L3/M3) |
February 23, 2022 |
In the Miscellaneous Compatibility Information section, added: ■ 4.1(2g) CIMC HUU ISO (recommended) for UCS C220/C240 M4 (APIC-L2/M2) |
November 2, 2021 |
In the Miscellaneous Compatibility Information section, added: ■ 4.1(3d) CIMC HUU ISO (recommended) for UCS C220/C240 M5 (APIC-L3/M3) |
August 4, 2021 |
In the Open Issues section, added bug CSCvy30453. |
July 26, 2021 |
In the Miscellaneous Compatibility Information section, the CIMC 4.1(3c) release is now recommended for UCS C220/C240 M5 (APIC-L3/M3). |
March 11, 2021 |
In the Miscellaneous Compatibility Information section, for CIMC HUU ISO, added: ■ 4.1(3b) CIMC HUU ISO (recommended) for UCS C220/C240 M5 (APIC-L3/M3) Changed: ■ 4.1(2b) CIMC HUU ISO (recommended) for UCS C220/C240 M4 (APIC-L2/M2) and M5 (APIC-L3/M3) To: ■ 4.1(2b) CIMC HUU ISO (recommended) for UCS C220/C240 M4 (APIC-L2/M2) |
February 9, 2021 |
In the Open Bugs section, added bug CSCvt07565. |
February 5, 2021 |
Moved bug CSCvr51121 from the open bugs table to the resolved bugs table. Removed bug CSCvr92532 from the open bugs table. This bug was erroneously included. |
February 3, 2021 |
In the Miscellaneous Compatibility Information section, for CIMC HUU ISO, added: ■ 4.1(2b) CIMC HUU ISO (recommended) for UCS C220/C240 M4 (APIC-L2/M2) and M5 (APIC-L3/M3) |
December 16, 2020 |
In the Miscellaneous Compatibility Information section, CIMC release 4.1(1g) is now recommended for UCS C220/C240 M4 (APIC-L2/M2) and UCS C220/C240 M5 (APIC-L3/M3). |
April 24, 2020 |
Release 3.2(9h) became available. There are no changes to this document for this release. See the Cisco NX-OS Release Notes for Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.2(9) for the changes in this release. |
March 11, 2020 |
Release 3.2(9f) became available. Added the resolved bugs for this release. In the New Software Features section, added the link level policies feature. |
March 6, 2020 |
In the Miscellaneous Compatibility Information section, updated the CIMC HUU ISO information for the 4.0(2g) and 4.0(4e) CIMC releases. |
November 5, 2019 |
In the Miscellaneous Compatibility Information, added the supported CIMC versions for APIC-L3/M3. |
November 3, 2019 |
Release 3.2(9b) became available. |
This document includes the following sections:
■ Upgrade and Downgrade Information
■ Bugs
This section lists the new and changed features in this release and includes the following topics:
The following table lists the new software features in this release:
Table 2 New Software Features, Guidelines, and Restrictions
Feature |
Description |
Guidelines and Restrictions |
Cisco APIC support for the Cisco Network Insights Base Application |
Cisco APIC now supports the Cisco Network Insights Base Application release 1.0.1. See the Cisco Network Insights Base Application for the Cisco Application Policy Infrastructure Controller Release Notes, Release 1.0.1. |
N/A |
Link level policies |
You can now create link level policies under fabric policies. A link level policy applies to the fabric interfaces (leaf switch uplink ports and spine switch ports) at the link level and is used where there might be link noise or transient flaps. In the policy, you configure a link debounce interval. When there is a flap, the fabric interfaces will wait for the debounce interval to pass, then check the link status again to re-confirm the event. If the link is okay at this time, then the interfaces will remain up. The default debounce interval is 0 ms. We recommend a value of 100 ms, but you should choose a value that is appropriate to your fabric. This feature was added in release 3.2(9f). |
N/A |
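As an illustrative sketch of the link level policy described above (not an excerpt from Cisco documentation), the recommended 100 ms debounce interval could be configured by posting a policy object to the APIC REST API. The fabricHIfPol class, its linkDebounce attribute, and the parent container shown here are assumptions based on the access-policy link level object model; verify the exact class and DN for fabric-level policies in your release before using it:

```xml
<!-- Hypothetical payload: POST to https://<apic>/api/mo/uni.xml           -->
<!-- Creates a link level policy named "LinkDebounce100" that sets the     -->
<!-- recommended 100 ms link debounce interval. The class name and parent  -->
<!-- container are assumptions; confirm them against the APIC Management   -->
<!-- Information Model reference for your release.                         -->
<polUni>
  <infraInfra>
    <fabricHIfPol name="LinkDebounce100" linkDebounce="100"/>
  </infraInfra>
</polUni>
```

After the policy is created, it would be associated with the relevant interface policy group so that flapping fabric interfaces wait 100 ms before the link status is re-checked.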
■ This release adds support for the APIC-L3 and APIC-M3 servers.
For switch-related new hardware features, see the Cisco NX-OS Release Notes for Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.2(9) at the following location:
For the changes in behavior, see the Cisco ACI Releases Changes in Behavior document.
For upgrade and downgrade considerations for the Cisco APIC, see the Cisco APIC documentation site at the following URL:
See the "Upgrading and Downgrading the APIC Controller and Switch Software" section of the Cisco APIC Installation, Upgrade, and Downgrade Guide.
This section contains lists of open and resolved bugs and known behaviors.
This section lists the open bugs. Click the bug ID to access the Bug Search tool and see additional information about the bug. The "Exists In" column of the table specifies the 3.2(9) releases in which the bug exists.
Table 3 Open Bugs in This Release
Bug ID |
Description |
Exists in |
|
For a Cisco ACI fabric that is configured with fabricId=1, if APIC3 is replaced from scratch with an incorrect fabricId of "2," APIC3's DHCPd will set the nodeRole property to "0" (unsupported) for all dhcpClient managed objects. This will be propagated to the appliance director process for all of the Cisco APICs. The process then stops sending the AV/FNV update for any unknown switch types (switches that are not spine nor leaf switches). In this scenario, commissioning/decommissioning of the Cisco APICs will not be propagated to the switches, which causes new Cisco APICs to be blocked out of the fabric. |
3.2(9h) and later |
|
The Cisco APIC PSU voltage and amperage values are zero. |
3.2(9f) and later |
|
SSD lifetime can be exhausted prematurely if an unused standby slot exists. |
3.2(9f) and later |
|
The per-feature container for techsupport "objectstore_debug_info" fails to collect on spine switches due to an invalid filepath. Given filepath: more /debug/leaf/nginx/objstore*/mo | cat. Correct filepath: more /debug/spine/nginx/objstore*/mo | cat. TAC uses this file/data to collect information about excessive DME writes. |
3.2(9f) and later |
|
AVE is not getting the VTEP IP address from the Cisco APIC. The logs show a "pending pool" and "no free leases". |
3.2(9f) and later |
|
A tunnel endpoint doesn't receive a DHCP lease. This occurs with a newly deployed or upgraded Cisco ACI Virtual Edge. |
3.2(9f) and later |
|
CDP is not enabled on the management interfaces for the leaf switches and spine switches. |
3.2(9b) and later |
|
The stats for a given leaf switch rule cannot be viewed if a rule is double-clicked. |
3.2(9b) and later |
|
The Port ID LLDP Neighbors panel displays the port ID when the interface does not have a description (for example: Ethernet 1/5). If the interface has a description, the Port ID property shows the interface description instead of the port ID. |
3.2(9b) and later |
|
A service cannot be reached by using the APIC out-of-band management that exists within the 172.17.0.0/16 subnet. |
3.2(9b) and later |
|
This is an enhancement to change the name of the "Limit IP Learning To Subnet" option under the bridge domains to be more self-explanatory. Original: Limit IP Learning To Subnet: [check box]. Suggestion: Limit Local IP Learning To BD/EPG Subnet(s): [check box]. |
3.2(9b) and later |
|
A tenant's flows/packets information cannot be exported. |
3.2(9b) and later |
|
This is an enhancement request to allow exporting a contract by right-clicking the contract itself and choosing "Export Contract" from the context menu. The current implementation, which requires right-clicking the Contract folder in the hierarchy to export a contract, is not intuitive. |
3.2(9b) and later |
|
When configuring an L3Out under a user tenant that is associated with a VRF instance that is under the common tenant, a customized BGP timer policy that is attached to the VRF instance is not applied to the L3Out (BGP peer) in the user tenant. |
3.2(9b) and later |
|
For strict security requirements, customers require custom certificates that have RSA key lengths of 3072 and 4096. |
3.2(9b) and later |
|
This is an enhancement to allow for text-based banners for the Cisco APIC GUI login screen. |
3.2(9b) and later |
|
For a client (browser or SSH client) that is using IPv6, the Cisco APIC aaaSessionLR audit log shows "0.0.0.0" or another invalid value. |
3.2(9b) and later |
|
When a VRF table is configured to receive leaked external routes from multiple VRF tables, the Shared Route Control scope that specifies the external routes to leak is applied to all VRF tables. This results in unintended external route leaking. This is an enhancement to ensure that the Shared Route Control scope in each VRF table is used to leak external routes only from the given VRF table. |
3.2(9b) and later |
|
When authenticating with the Cisco APIC using ISE (TACACS), all logins over 31 characters fail. |
3.2(9b) and later |
|
On modifying a service parameter, the Cisco APIC sends two POSTs to the backend. The first POST deletes all of the folders and parameters. The second POST adds all of the remaining modified folders and parameters to the backend. These two POSTs disrupt the running traffic. |
3.2(9b) and later |
|
The actrlRule has the wrong destination. |
3.2(9b) and later |
|
The connectivity filter configuration of an access policy group is deprecated and should be removed from the GUI. |
3.2(9b) and later |
|
There is no record of who acknowledged a fault in the Cisco APIC, nor when the acknowledgement occurred. |
3.2(9b) and later |
|
The action named 'Launch SSH' is disabled when a user with read-only access logs into the Cisco APIC. |
3.2(9b) and later |
|
This is an enhancement to support maximum login tries and login delay configuration for local users (admin). |
3.2(9b) and later |
|
There is a VLAN overlapping scenario. After configuring new static ports and a physical domain under an existing EPG, there is a Layer 2 loop. This issue is due to an FD VLAN encapsulation mismatch on two leaf switches. |
3.2(9b) and later |
|
Permanent License Reservation (PLR) fails after upgrading to a Cisco APIC 4.0 release. |
3.2(9b) and later |
|
Fault delegates are raised on the Cisco APIC, but the original fault instance is already gone because the affected node has been removed from the fabric. |
3.2(9b) and later |
|
Access policy resolutions reveal unexpected results due to the existence of connectivity filters that are hidden from the UI. Depending on the VLANs tied to the AEPs/domains, this can result in unexpected outages as VLANs are pulled and 'invalid path' faults are flagged. |
3.2(9b) and later |
|
The Nginx process generates cores on the switches and the process will restart automatically. The core file from the switches gets collected and moved to the APIC, which drains the memory. You might need to remove the collected core files manually from the APIC to free up space. |
3.2(9b) and later |
|
A leaf switch gets upgraded when a previously-configured maintenance policy is triggered. |
3.2(9b) and later |
|
New port groups in VMware vCenter may be delayed when pushed from the Cisco APIC. |
3.2(9b) and later |
|
Specific operating system and browser version combinations cannot be used to log in to the APIC GUI. Some browsers that are known to have this issue include (but might not be limited to) Google Chrome version 75.0.3770.90 and Apple Safari version 12.0.3 (13606.4.5.3.1). |
3.2(9b) and later |
|
In a RedHat OpenStack platform deployment running the Cisco ACI Unified Neutron ML2 Plugin and with the CompHosts running OVS in VLAN mode, when toggling the resolution immediacy on the EPG<->VMM domain association (fvRsDomAtt.resImedcy) from Pre-Provision to On-Demand, the encap VLANs (vlanCktEp mo's) are NOT programmed on the leaf switches. This problem surfaces sporadically, meaning that it might take several resImedcy toggles between PreProv and OnDemand to reproduce the issue. |
3.2(9b) and later |
|
Disabling dataplane learning is only required to support a policy-based redirect (PBR) use case on pre-"EX" leaf switches; there are few other reasons to disable this feature. There currently is no confirmation or warning about the potential impact of disabling dataplane learning. |
3.2(9b) and later |
|
When making a configuration change to an L3Out (such as contract removal or addition), the BGP peer flaps or the bgpPeerP object is deleted from the leaf switch. In the leaf switch policy-element traces, 'isClassic = 0, wasClassic =1' is set post-update from the Cisco APIC. |
3.2(9b) and later |
|
Previously-working traffic is policy-dropped after the subject is modified to have the "no stats" directive. |
3.2(9b) and later |
|
This is an enhancement request for allowing DVS MTU to be configured from a VMM domain policy and be independent of fabricMTU. |
3.2(9b) and later |
|
When configuring local SPAN in access mode using the GUI or CLI and then running the "show running-config monitor access session <session>" command, the output does not include all source SPAN interfaces. |
3.2(9b) and later |
|
vmmPLInf objects are created with epgKeys and DNs that have truncated EPG names (truncated at "."). |
3.2(9b) and later |
|
The descending sort option does not work for the Static Ports table. Even when the user selects descending, the sort defaults to ascending. |
3.2(9b) and later |
|
When using AVE with Cisco APIC, fault F0214 gets raised, but there is no noticeable impact on AVE operation: descr: Fault delegate: Operational issues detected for OpFlex device: ..., error: [Inventory not available on the node at this time] |
3.2(9b) and later |
|
When trying to track an AVE endpoint IP address, running the "show endpoint ip x.x.x.x" command in the Cisco APIC CLI and checking the endpoint IP address in the GUI shows incorrect or multiple vPC names. |
3.2(9b) and later |
|
There is a minor memory leak in svc_ifc_policydist when performing various tenant configuration removals and additions. |
3.2(9b) and later |
|
Configuring a static endpoint through the Cisco APIC CLI fails with the following error: Error: Unable to process the query, result dataset is too big Command execution failed. |
3.2(9b) and later |
|
When migrating an AVS VMM domain to Cisco ACI Virtual Edge, the Cisco ACI Virtual Edge that gets deployed is configured in VLAN mode rather than VXLAN Mode. Because of this, you will see faults for the EPGs with the following error message: "No valid encapsulation identifier allocated for the epg" |
3.2(9b) and later |
|
While configuring a logical node profile in any L3Out, the static routes do not have a description. |
3.2(9b) and later |
|
An error is raised while building an ACI container image because of a conflict with the /opt/ciscoaci-tripleo-heat-templates/tools/build_openstack_aci_containers.py package. |
3.2(9b) and later |
|
An endpoint is unreachable from the leaf node because the static pervasive route (toward the remote bridge domain subnet) is missing. |
3.2(9b) and later |
|
Randomly, the Cisco APIC GUI alert list shows an incorrect license expiry time. Sometimes the time is correct, while at other times it is incorrect. |
3.2(9b) and later |
|
For a DVS with a controller, if another controller is created in that DVS using the same host name, the following fault gets generated: "hostname or IP address conflicts same controller creating controller with same name DVS". |
3.2(9b) and later |
|
When logging into the Cisco APIC using "apic#fallback\\user", the "Error: list index out of range" log message displays and the lastlogin command fails. There is no operational impact. |
3.2(9b) and later |
|
In Cisco ACI Virtual Edge, there are faults related to VMNICs. On the Cisco ACI Virtual Edge domain, there are faults related to the HpNic, such as "Fault F2843 reported for AVE | Uplink portgroup marked as invalid". |
3.2(9b) and later |
|
The plgnhandler process crashes on the Cisco APIC, which causes the cluster to enter a data layer partially diverged state. |
3.2(9b) and later |
|
When physical domains and external routed domains are attached to a security domain, these domains are mapped as associated tenants instead of associated objects under Admin > AAA > security management > Security domains. |
3.2(9b) and later |
|
A Cisco ACI leaf switch does not have MP-BGP route reflector peers in the output of "show bgp session vrf overlay-1". As a result, the switch is not able to install dynamic routes that are normally advertised by MP-BGP route reflectors. However, the spine switch route reflectors are configured in the affected leaf switch's pod, and pod policies have been correctly defined to deploy the route reflectors to the leaf switch. Additionally, the bgpPeer managed objects are missing from the leaf switch's local MIT. |
3.2(9b) and later |
|
In a GOLF configuration, when an L3Out is deleted, the bridge domains stop getting advertised to the GOLF router even though another L3Out is still active. |
3.2(9b) and later |
|
The CLI command "show interface x/x switchport" shows VLANs configured and allowed through a port. However, when going to the GUI under Fabric > Inventory > node_name > Interfaces > Physical Interfaces > Interface x/x > VLANs, the VLANs do not show. |
3.2(9b) and later |
|
The tmpfs file system that is mounted on /data/log becomes 100% utilized. |
3.2(9b) and later |
|
The policy manager (PM) process may crash when the testapi is used to delete a managed object from the policymgr database. |
3.2(9b) and later |
|
SNMP does not respond to GET requests or send traps on one or more Cisco APICs despite previously working properly. |
3.2(9b) and later |
|
The policymgr DME process can crash because of an OOM issue, and there are many pcons.DelRef managed objects in the DB. |
3.2(9b) and later |
|
The eventmgr database size may grow to be very large (up to 7GB). With that size, the Cisco APIC upgrade will take 1 hour for the Cisco APIC node that contains the eventmgr database. In rare cases, this could lead to a failed upgrade process, as it times out while working on the large database file of the specified controller. |
3.2(9b) and later |
|
vPC protection that was created prior to the 2.2(2e) release might not recover the original virtual IP address after a fabric ID recovery. Instead, some vPC groups get a new virtual IP address allocated, which does not get pushed to the leaf switch. There is no dataplane impact until the leaf switch has a clean reboot or upgrade, because the rebooted leaf switch gets a new virtual IP address that does not match its vPC peer. As a result, both sides bring down the virtual port channels, and the hosts behind the vPC become unreachable. |
3.2(9b) and later |
|
Updating the interface policy group breaks LACP if enhanced LACP (eLACP) is enabled on a VMM domain. If eLACP was enabled on the domain, creating, updating, or removing an interface policy group with the VMM AEP deletes the basic LACP that is used by the domain. |
3.2(9b) and later |
|
When an EPG is migrated from one VRF table to a new VRF table and the EPG keeps its contract relation with other EPGs in the original VRF table, some bridge domain subnets in the original VRF table get leaked to the new VRF table due to the contract relation, even though the contract does not have the global scope and the bridge domain subnet is not configured as shared between VRF tables. The leaked static route is not deleted even if the contract relation is removed. |
3.2(9b) and later |
|
The login history of local users is not updated in Admin > AAA > Users > (double click on local user) Operational > Session. |
3.2(9b) and later |
|
In the Cisco APIC GUI, after removing the Fabric Policy Group from "System > Controllers > Controller Policies > show usage", the option to select the policy disappears, and there is no way in the GUI to re-add the policy. |
3.2(9b) and later |
|
The MD5 checksum for a downloaded Cisco APIC image is not verified before the image is added to the image repository. |
3.2(9b) and later |
|
Protocol information is not shown in the GUI when a VRF table from the common tenant is being used in any user tenant. |
3.2(9b) and later |
|
The following error is encountered when accessing the Infrastructure page in the ACI vCenter plug-in after entering the vCenter credentials: "The Automation SDK is not authenticated". The VMware vCenter plug-in is installed using PowerCLI. The following entry is also seen in /var/log/vmware/vsphere-client/log/vsphere_client_virgo.log on the VMware vCenter: [ERROR] http-bio-9090-exec-3314 com.cisco.aciPluginServices.core.Operation sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: signature check failed |
3.2(9b) and later |
|
When trying to assign a description to a FEX downlink/host port using the Config tab in the Cisco APIC GUI, the description will get applied to the GUI, but it will not propagate to the actual interface when queried using the CLI or GUI. |
3.2(9b) and later |
|
For an EPG containing a static leaf node configuration, the Cisco APIC GUI returns the following error when clicking the health of Fabric Location: Invalid DN topology/pod-X/node-Y/local/svc-policyelem-id-0/ObservedEthIf, wrong rn prefix ObservedEthIf at position 63 |
3.2(9b) and later |
|
There is a BootMgr memory leak on a standby Cisco APIC. If the BootMgr process crashes due to being out of memory, it continues to crash, but the system does not reboot. After the standby Cisco APIC is rebooted by hand, such as by power cycling the host using the CIMC, the login prompt of the Cisco APIC changes to "localhost" and you cannot log in to the standby Cisco APIC. |
3.2(9b) and later |
|
After a delete/add of a Cisco ACI-managed DVS, dynamic paths are not programmed on the leaf switch and the compRsDlPol managed object has a missing target: as shown by the "moquery -c compRsDlPol" command, the tDn property references the old DVS OID instead of the latest value. |
3.2(9b) and later |
|
A leaf switch reloads due to an out-of-memory condition after changing the contract scope to global. |
3.2(9b) and later |
|
Traffic loss is observed from multiple endpoints deployed on two different vPC leaf switches. |
3.2(9b) and later |
|
Leaf switch downlinks all go down at one time due to FabricTrack. |
3.2(9b) |
|
When using QSFP-100G-PSM4-S transceivers in an N9K-X9736C-FX linecard that is running ACI code, link flaps may be seen intermittently. |
3.2(9b) |
|
A downgrade fails for a standby APIC when downgraded from release 3.2(9b). |
3.2(9b) |
This section lists the resolved bugs. Click the bug ID to access the Bug Search tool and see additional information about the bug. The "Fixed In" column of the table specifies whether the bug was resolved in the base release or a patch release.
Table 4 Resolved Bugs in This Release
Bug ID |
Description |
Fixed in |
Leaf switch downlinks all go down at one time due to FabricTrack. |
3.2(9f) |
|
When using QSFP-100G-PSM4-S transceivers in an N9K-X9736C-FX linecard that is running ACI code, link flaps may be seen intermittently. |
3.2(9f) |
|
The following fault is raised on the switch: "F3525: High SSD usage observed. Please check switch activity and contact Cisco Technical Support about high SSD usage." |
3.2(9f) |
|
This is an enhancement to include the managed object class name and the isPersisted attribute in DME log lines. |
3.2(9f) |
|
A downgrade fails for a standby APIC when downgraded from release 3.2(9b). |
3.2(9f) |
|
The Cisco APIC can be seen repeatedly logging in to the RHV controller at a rapid rate in the RHV Event tab. This can also lead to a memory usage increase on the controller, as each login is a new session. Specifically, memory usage of the Postgres process on the RHV controller increases. |
3.2(9b) |
|
The policymgr process on the APIC resets and produces a core file. Configurations can be missed if the core file is generated during a configuration import or during configuration steps. |
3.2(9b) |
|
The endpoints of the VMs behind an AVS tunnel are not learned in EPM on the leaf switch. |
3.2(9b) |
|
A vulnerability in the fabric infrastructure VLAN connection establishment of the Cisco Nexus 9000 Series Application Centric Infrastructure (ACI) Mode Switch Software could allow an unauthenticated, adjacent attacker to bypass security validations and connect an unauthorized server to the infrastructure VLAN. The vulnerability is due to insufficient security requirements during the Link Layer Discovery Protocol (LLDP) setup phase of the infrastructure VLAN. An attacker could exploit this vulnerability by sending a malicious LLDP packet on the adjacent subnet to the Cisco Nexus 9000 Series Switch in ACI mode. A successful exploit could allow the attacker to connect an unauthorized server to the infrastructure VLAN, which is highly privileged. With a connection to the infrastructure VLAN, the attacker can make unauthorized connections to Cisco Application Policy Infrastructure Controller (APIC) services or join other host endpoints. Cisco has released software updates that address this vulnerability. There are workarounds that address this vulnerability. This advisory is available at the following link: |
3.2(9b) |
|
A vulnerability in the REST API for software device management in Cisco Application Policy Infrastructure Controller (APIC) Software could allow an authenticated, remote attacker to escalate privileges to root on an affected device. The vulnerability is due to incomplete validation and error checking for the file path when specific software is uploaded. An attacker could exploit this vulnerability by uploading malicious software using the REST API. A successful exploit could allow an attacker to escalate their privilege level to root. The attacker would need to have the administrator role on the device. Cisco has released software updates that address this vulnerability. There are no workarounds that address this vulnerability. This advisory is available at the following link: |
3.2(9b) |
|
Fault F0467 is raised with the following description: Configuration failed for ... due to Encap Already Used in Another EPG |
3.2(9b) |
|
During an SSH connection to the Cisco APIC, the following message appears in CLI:
<html><head><title>502 Bad Gateway</title></head><body bgcolor="white"><center><h1>502 Bad Gateway</h1></center><hr><center>nginx/1.10.3</center></body></html> |
3.2(9b) |
|
Traffic to newly provisioned floating IP addresses is dropped for up to an hour. |
3.2(9b) |
|
When using a PBR service graph with an ASA in two-arm mode, after upgrading from release 12.3(1) to 13.1 to 13.2(6i), the service graph (in the common tenant) has some faults. The service graph works for previously-configured EPGs (verified by checking traffic that is redirected to the firewall), but new EPGs cannot be applied to the service graph, and the service graph state is "not applied". One of the faults states that the service graph contract cannot be used as both provider and consumer and that the contract is only supported in single-node PBR with one arm. The contract is not applied in any vzAny consumer/provider. The "Epp not found. Retry or abort task" error appears in the policy manager. Adding an ARP filter to the contract does not trigger anything, although the modification is seen in the policy distributor. |
3.2(9b) |
|
The Hyper-V agent is in the STOPPED state. Hyper-V agent logs indicate that the process is stopping at the "Set-ExecutionPolicy Unrestricted" command. |
3.2(9b) |
|
In addition to the "Enforce Password Change interval" setting, APIC has another setting, "Minimum period between password changes," which is by default set to 24 hours. When "Enforce Password Change interval" is disabled, the backend enforces the "Minimum period between password changes" setting. |
3.2(9b) |
|
Changing the user credentials for a Red Hat VMM domain may result in a previous username still being used to authenticate with the Red Hat controller. |
3.2(9b) |
|
NGINX spikes to 100% CPU usage. |
3.2(9b) |
|
The vmmmgr process on the APICs crashes due to an unhandled HTML tag "hr" in a response from a Red Hat controller that is tied to a Red Hat VMM domain. |
3.2(9b) |
|
If a switch upgrade is triggered after the series of steps outlined in this bug's conditions, the switch nodes may get stuck in a reboot loop in which the bootscript process pushes the bootscript file (using DHCP), which tells the switch to boot into the default version. However, the APIC policy discovers the switch and finds that it has not yet been upgraded to the upgrade target version. |
3.2(9b) |
|
On Cisco APIC release 4.1(2m) and VMware release 6.7.0.3, the VMM process generates a core while attempting to perform VMM integration. |
3.2(9b) |
|
The VMMMGR process continuously crashes and causes the cluster health to become partially diverged. |
3.2(9b) |
|
With Cisco ACI Virtual Edge (when not part of a Cisco ACI vPod), after using vMotion to migrate virtual machines, there is traffic loss and a live core gets generated on opflexelem on the TOR switch. |
3.2(9b) |
|
Policies may take a long time (over 10 minutes) to get programmed on the leaf switches. In addition, the APIC pulls inventory from the VMware vCenter repeatedly, instead of following the usual 24-hour interval. |
3.2(9b) |
|
If the current VMware vCenter crashes and is not recoverable, and a new VMware vCenter with an identical configuration is built, the Cisco APIC pushes the DVS and quarantine port groups. However, the APIC does not push the EPG port groups. |
3.2(9b) |
|
The last APIC in the cluster gets rebooted when APIC 1 is decommissioned due to an issue seen on APIC 1 while upgrading. In addition, after APIC 1 is decommissioned, the other APICs still wait for APIC 1 to get upgraded. |
3.2(9b) |
This section lists bugs that describe known behaviors. Click the Bug ID to access the Bug Search Tool and see additional information about the bug. The "Exists In" column of the table specifies the 3.2(9) releases in which the known behavior exists.
Table 5 Known Behaviors in This Release
Bug ID |
Description |
Exists in |
The Cisco APIC does not validate duplicate IP addresses that are assigned to two device clusters. The communication to devices or the configuration of service devices might be affected. |
3.2(9b) and later |
|
In some of the 5-minute statistics data, the count of ten-second samples is 29 instead of 30. |
3.2(9b) and later |
|
The node ID policy can be replicated from an old appliance that is decommissioned when it joins a cluster. |
3.2(9b) and later |
|
The DSCP value specified on an external endpoint group does not take effect on the filter rules on the leaf switch. |
3.2(9b) and later |
|
The hostname resolution of the syslog server fails on leaf and spine switches over in-band connectivity. |
3.2(9b) and later |
|
Following a FEX or switch reload, configured interface tags are no longer configured correctly. |
3.2(9b) and later |
|
Switches can be downgraded to a 1.0(1) version if the imported configuration consists of a firmware policy with a desired version set to 1.0(1). |
3.2(9b) and later |
|
If the Cisco APIC is rebooted using a CIMC power reboot, the system enters fsck due to a corrupted disk. |
3.2(9b) and later |
|
The Cisco APIC Service (ApicVMMService) shows as stopped in the Microsoft Service Manager (services.msc, under Control Panel > Administrative Tools > Services). This happens when a domain account does not have the correct privilege in the domain to restart the service automatically. |
3.2(9b) and later |
|
The traffic destined to a shared service provider endpoint group picks an incorrect class ID (PcTag) and gets dropped. |
3.2(9b) and later |
|
Traffic from an external Layer 3 network is allowed when configured as part of a vzAny (a collection of endpoint groups within a context) consumer. |
3.2(9b) and later |
|
Newly added microsegment EPG configurations must be removed before downgrading to a software release that does not support them. |
3.2(9b) and later |
|
Downgrading the fabric starting with the leaf switch will cause faults such as policy-deployment-failed with fault code F1371. |
3.2(9b) and later |
|
Creating or deleting a fabricSetupP policy results in an inconsistent state. |
3.2(9b) and later |
|
After a pod is created and nodes are added in the pod, deleting the pod results in stale entries from the pod that are active in the fabric. This occurs because the Cisco APIC uses open source DHCP, which creates some resources that the Cisco APIC cannot delete when a pod is deleted. |
3.2(9b) and later |
|
When a Cisco APIC cluster is upgrading, the Cisco APIC cluster might enter the minority status if there are any connectivity issues. In this case, user logins can fail until the majority of the Cisco APICs finish the upgrade and the cluster comes out of minority. |
3.2(9b) and later |
|
When downgrading to a 2.0(1) release, the spine switches and their interfaces must be moved from infra L3out2 to infra L3out1. After infra L3out1 comes up, delete L3out2 and its related configuration, and then downgrade to a 2.0(1) release. |
3.2(9b) and later |
|
No fault is raised when the same encapsulation VLAN is used in a copy device in tenant common, even though a fault should be raised. |
3.2(9b) and later |
|
In the leaf mode, the command "template route group <group-name> tenant <tenant-name>" fails, reporting that the specified tenant is invalid. |
3.2(9b) and later |
|
When first-hop security is enabled on a bridge domain, traffic is disrupted. |
3.2(9b) and later |
|
Cisco ACI Multi-Site Orchestrator BGP peers are down and a fault is raised for a conflicting rtrId on the fvRtdEpP managed object during L3extOut configuration. |
3.2(9b) and later |
|
The PSU SPROM details might not be shown in the CLI after the PSU is removed from and reinserted into the switch. |
3.2(9b) and later |
|
If two intra-EPG deny rules are programmed—one with the class-eq-deny priority and one with the class-eq-filter priority—changing the action of the second rule to "deny" causes the second rule to be redundant and have no effect. The traffic still gets denied, as expected. |
3.2(9b) and later |
|
With a uniform distribution of EPs and traffic flows, a fabric module in slot 25 sometimes reports far less than 50% of the traffic compared to the traffic on fabric modules in non-FM25 slots. |
3.2(9b) and later |
|
When you click Restart for the Microsoft System Center Virtual Machine Manager (SCVMM) agent on a scaled-out setup, the service may stop. You can restart the agent by clicking Start. |
3.2(9b) and later |
|
The CiscoAVS_4.10-5.2.1.SV3.4.10-pkg package has signature issues during installation. |
3.2(9b) and later |
■ In a multipod configuration, before you make any changes to a spine switch, ensure that there is at least one operationally "up" external link that is participating in the multipod topology. Failure to do so could bring down the multipod connectivity. For more information about multipod, see the Cisco Application Centric Infrastructure Fundamentals document and the Cisco APIC Getting Started Guide.
■ With a non-English SCVMM 2012 R2 or SCVMM 2016 setup in which virtual machine names are specified in non-English characters, if the host is removed and re-added to the host group, the GUIDs of all the virtual machines under that host change. As a result, any microsegmentation endpoint group created with the "VM name" attribute that specifies the GUID of a virtual machine stops working after the host is removed and re-added to the host group. This does not happen if the virtual machine names are specified entirely in English characters.
■ A query of a configurable policy that does not have a subscription goes to the policy distributor, while a query of a configurable policy that has a subscription goes to the policy manager. As a result, if policy propagation from the policy distributor to the policy manager takes a prolonged amount of time, a query with a subscription might not return the policy simply because it has not yet reached the policy manager.
■ Cisco ACI vCenter Plug-in: after you uninstall the Cisco ACI vCenter Plug-in, it remains visible in the VMware vCenter UI. Restart the VMware vCenter Server to update the UI.
■ When there are silent hosts across sites, ARP glean messages might not be forwarded to remote sites if a first-generation ToR switch (a switch model without -EX or -FX in the name) is in the transit path and the VRF is deployed on that switch; in that case, the switch does not forward the ARP glean packet back into the fabric to reach the remote site. This issue is specific to first-generation transit ToR switches and does not affect second-generation ToR switches (switch models with -EX or -FX in the name). This issue breaks the capability of discovering silent hosts.
The following sections list compatibility information for the Cisco APIC software.
This section lists virtualization compatibility information for the Cisco APIC software.
■ For a table that shows the supported virtualization products, see the ACI Virtualization Compatibility Matrix at the following URL:
■ This release supports VMM Integration and VMware Distributed Virtual Switch (DVS) 6.5 and 6.7. For more information about guidelines for upgrading VMware DVS from 5.x to 6.x and VMM integration, see the Cisco ACI Virtualization Guide, Release 3.2(9) at the following URL:
■ For information about Cisco APIC compatibility with Cisco UCS Director, see the appropriate Cisco UCS Director Compatibility Matrix document at the following URL:
■ If you use Microsoft vSwitch and want to downgrade to Cisco APIC Release 2.3(1) from a later release, you first must delete any microsegment EPGs configured with the Match All filter.
This release supports the following Cisco APIC servers:
Product ID |
Description |
APIC-L1 |
Cisco APIC with large CPU, hard drive, and memory configurations (more than 1000 edge ports) |
APIC-L2 |
Cisco APIC with large CPU, hard drive, and memory configurations (more than 1000 edge ports) |
APIC-L3 |
Cisco APIC with large CPU, hard drive, and memory configurations (more than 1200 edge ports) |
APIC-M1 |
Cisco APIC with medium-size CPU, hard drive, and memory configurations (up to 1000 edge ports) |
APIC-M2 |
Cisco APIC with medium-size CPU, hard drive, and memory configurations (up to 1000 edge ports) |
APIC-M3 |
Cisco APIC with medium-size CPU, hard drive, and memory configurations (up to 1200 edge ports) |
The following list includes additional hardware compatibility information:
■ To connect the N2348UPQ to Cisco ACI leaf switches, the following options are available:
— Directly connect the 40G FEX ports on the N2348UPQ to the 40G switch ports on the Cisco ACI leaf switches
— Break out the 40G FEX ports on the N2348UPQ to 4x10G ports and connect to the 10G ports on all other Cisco ACI leaf switches.
Note: A fabric uplink port cannot be used as a FEX fabric port.
■ Connecting the Cisco APIC (the controller cluster) to the Cisco ACI fabric requires a 10G interface on the Cisco ACI leaf switch. You cannot connect the Cisco APIC directly to the Cisco N9332PQ ACI leaf switch unless you use a 40G-to-10G converter (part number CVR-QSFP-SFP10G), in which case the port on the Cisco N9332PQ switch auto-negotiates to 10G without requiring any manual configuration.
■ The Cisco N9K-X9736C-FX (ports 29 to 36) and Cisco N9K-C9364C-FX (ports 49 to 64) switches do not support 1G SFPs with a QSA adapter.
■ Cisco N9K-C9508-FM-E2 fabric modules must be physically removed before downgrading to releases earlier than Cisco APIC 3.0(1).
■ The Cisco N9K-C9508-FM-E2 and N9K-X9736C-FX locator LED enable/disable feature is supported in the GUI and not supported in the Cisco ACI NX-OS Switch CLI.
■ Contracts using matchDscp filters are only supported on switches with "EX" on the end of the switch name. For example, N9K-93108TC-EX.
■ N9K-C9508-FM-E2 and N9K-C9508-FM-E fabric modules in the mixed mode configuration are not supported on the same spine switch.
■ The N9K-C9348GC-FXP switch does not read SPROM information if the PSU is in a shut state. You might see an empty string in the Cisco APIC output.
■ When the fabric node switch (spine or leaf) is out-of-fabric, the environmental sensor values, such as Current Temperature, Power Draw, and Power Consumption, might be reported as "N/A." A status might be reported as "Normal" even when the Current Temperature is "N/A."
This section lists ASA compatibility information for the Cisco APIC software.
■ This release supports Adaptive Security Appliance (ASA) device package version 1.2.5.5 or later.
■ If you are running a Cisco Adaptive Security Virtual Appliance (ASA) version that is prior to version 9.3(2), you must configure SSL encryption as follows:
(config)# ssl encryption aes128-sha1
This section lists miscellaneous compatibility information for the Cisco APIC software.
■ This release supports the following software:
— Cisco NX-OS Release 13.2(9)
— Cisco ACI Virtual Edge, Release 1.2(9a)
— Cisco AVS, Release 5.2(1)SV3(3.31)
For more information about the supported AVS releases, see the AVS software compatibility information in the Cisco Application Virtual Switch Release Notes at the following URL:
— Cisco UCS Manager software release 2.2(1c) or later is required for the Cisco UCS Fabric Interconnect and other components, including the BIOS, CIMC, and the adapter.
— Network Insights Base, Network Insights Advisor, and Network Insights for Resources
For the release information, documentation, and download links, see the Cisco Network Insights for Data Center page. For the supported releases, see the Cisco Day-2 Operations Apps Support Matrix.
■ The latest recommended CIMC releases are as follows:
— 4.2(3e) CIMC HUU ISO (recommended) for UCS C220/C240 M5 (APIC-L3/M3)
— 4.2(3b) CIMC HUU ISO for UCS C220/C240 M5 (APIC-L3/M3)
— 4.2(2a) CIMC HUU ISO for UCS C220/C240 M5 (APIC-L3/M3)
— 4.1(3m) CIMC HUU ISO for UCS C220/C240 M5 (APIC-L3/M3)
— 4.1(3f) CIMC HUU ISO for UCS C220/C240 M5 (APIC-L3/M3)
— 4.1(3d) CIMC HUU ISO for UCS C220/C240 M5 (APIC-L3/M3)
— 4.1(3c) CIMC HUU ISO for UCS C220/C240 M5 (APIC-L3/M3)
— 4.1(2m) CIMC HUU ISO (recommended) for UCS C220/C240 M4 (APIC-L2/M2)
— 4.1(2k) CIMC HUU ISO for UCS C220/C240 M4 (APIC-L2/M2)
— 4.1(2g) CIMC HUU ISO for UCS C220/C240 M4 (APIC-L2/M2)
— 4.1(2b) CIMC HUU ISO for UCS C220/C240 M4 (APIC-L2/M2)
— 4.1(1g) CIMC HUU ISO for UCS C220/C240 M4 (APIC-L2/M2) and M5 (APIC-L3/M3)
— 4.1(1f) CIMC HUU ISO for UCS C220 M4 (APIC-L2/M2) (deferred release)
— 4.1(1d) CIMC HUU ISO for UCS C220 M5 (APIC-L3/M3)
— 4.1(1c) CIMC HUU ISO for UCS C220 M4 (APIC-L2/M2)
— 4.0(4e) CIMC HUU ISO for UCS C220 M5 (APIC-L3/M3)
— 4.0(2g) CIMC HUU ISO for UCS C220/C240 M4 and M5 (APIC-L2/M2 and APIC-L3/M3)
— 4.0(1a) CIMC HUU ISO for UCS C220 M5 (APIC-L3/M3)
— 3.0(4l) CIMC HUU ISO (recommended) for UCS C220/C240 M3 (APIC-L1/M1)
— 3.0(4d) CIMC HUU ISO for UCS C220/C240 M3 and M4 (APIC-L1/M1 and APIC-L2/M2)
— 3.0(3f) CIMC HUU ISO for UCS C220/C240 M4 (APIC-L2/M2)
— 3.0(3e) CIMC HUU ISO for UCS C220/C240 M3 (APIC-L1/M1)
— 2.0(13i) CIMC HUU ISO
— 2.0(9c) CIMC HUU ISO
— 2.0(3i) CIMC HUU ISO
■ This release supports the partner packages specified in the L4-L7 Compatibility List Solution Overview document at the following URL:
■ A known issue exists with the Safari browser and unsigned certificates, which applies when connecting to the Cisco APIC GUI. For more information, see the Cisco APIC Getting Started Guide.
■ For compatibility with OpenStack and Kubernetes distributions, see the Cisco Application Policy Infrastructure Controller OpenStack and Container Plugins, Release 3.2(9), Release Notes.
■ For compatibility with Day-2 Operations apps, see the Cisco Day-2 Operations Apps Support Matrix.
The following sections list usage guidelines for the Cisco APIC software.
This section lists virtualization-related usage guidelines for the Cisco APIC software.
■ Do not separate virtual port channel (vPC) member nodes into different configuration zones. If the nodes are in different configuration zones, then the vPCs’ modes become mismatched if the interface policies are modified and deployed to only one of the vPC member nodes.
■ If you are upgrading VMware vCenter 6.0 to vCenter 6.7, you should first delete the following folder on the VMware vCenter: C:\ProgramData\cisco_aci_plugin.
If you do not delete the folder and you try to register a fabric again after the upgrade, you will see the following error message:
Error while saving setting in C:\ProgramData\cisco_aci_plugin\<user>_<domain>.properties.
The user is the user that is currently logged in to the vSphere Web Client, and domain is the domain to which the user belongs. Although you can still register a fabric, you do not have permissions to override settings that were created in the old VMware vCenter. Enter any changes in the Cisco APIC configuration again after restarting VMware vCenter.
■ If the communication between the Cisco APIC and VMware vCenter is impaired, some functionality is adversely affected. The Cisco APIC relies on pulling inventory information, updating the VDS configuration, and receiving event notifications from VMware vCenter to perform certain operations.
■ After you migrate VMs using a cross-data center VMware vMotion in the same VMware vCenter, you might find a stale VM entry under the source DVS. This stale entry can cause problems, such as host removal failure. The workaround for this problem is to enable "Start monitoring port state" on the vNetwork DVS. See the KB topic "Refreshing port state information for a vNetwork Distributed Virtual Switch" on the VMware Web site for instructions.
■ When creating a vPC domain between two leaf switches, both switches must be in the same switch generation. Switches not in the same generation are not compatible vPC peers. The generations are as follows:
— Generation 1—Cisco Nexus 9200 and 9300 platform switches without "EX" on the end of the switch name; for example, Cisco Nexus 93120TX.
— Generation 2—Cisco Nexus 9300-EX and 9300-FX platform switches; for example, Cisco Nexus 93108TC-EX.
■ The following Red Hat Virtualization (RHV) guidelines apply:
— We recommend that you use release 4.1.6 or later.
— Only one controller (compCtrlr) can be associated with a Red Hat Virtualization Manager (RHVM) data center.
— Deployment immediacy is supported only as pre-provision.
— IntraEPG isolation, micro EPGs, and IntraEPG contracts are not supported.
— Using service nodes inside an RHV domain has not been validated.
This section lists GUI-related usage guidelines for the Cisco APIC software.
■ The Cisco APIC GUI includes an online version of the Quick Start Guide that includes video demonstrations.
■ To reach the Cisco APIC CLI from the GUI: choose System > Controllers, highlight a controller, right-click, and choose "launch SSH". To get the list of commands, press the escape key twice.
■ The Basic GUI mode is deprecated. We do not recommend using Cisco APIC Basic mode for configuration. However, if you want to use Cisco APIC Basic mode, use the following URL:
APIC_URL/indexSimple.html
This section lists CLI-related usage guidelines for the Cisco APIC software.
■ The output from show commands issued in the NX-OS-style CLI is subject to change in future software releases. We do not recommend using the output from the show commands for automation.
■ The CLI is supported only for users with administrative login privileges.
■ If FIPS is enabled in a Cisco ACI setup, SHA256 support is mandatory on the SSH client. In addition, to have SHA256 support, the openssh-client must be running version 6.6.1 or later.
This section lists Layer 2 and Layer 3-related usage guidelines for the Cisco APIC software.
■ For Layer 3 external networks created through the API or GUI and updated through the CLI, protocols need to be enabled globally on the external network through the API or GUI, and the node profile for all the participating nodes needs to be added through the API or GUI before doing any further updates through the CLI.
■ When configuring two Layer 3 external networks on the same node, the loopbacks need to be configured separately for both Layer 3 networks.
■ All endpoint groups (EPGs), including application EPGs and Layer 3 external EPGs, require a domain. Interface policy groups must also be associated with an Attach Entity Profile (AEP), and the AEP must be associated with domains. Based on the association of EPGs to domains and of interface policy groups to domains, the ports and VLANs that the EPG uses are validated. This applies to all EPGs, including bridged Layer 2 outside and routed Layer 3 outside EPGs. For more information, see the Cisco APIC Layer 2 Networking Configuration Guide.
Note: When creating static paths for application EPGs or Layer 2/Layer 3 outside EPGs, the physical domain is not required. Upgrading without the physical domain raises a fault on the EPG stating "invalid path configuration."
■ In a multipod fabric, if a spine switch in POD1 uses the infra tenant L3extOut-1, the TORs of the other pods (POD2, POD3) cannot use the same infra L3extOut (L3extOut-1) for Layer 3 EVPN control plane connectivity. Each POD must use its own spine switch and infra L3extOut.
■ You do not need to create a customized monitoring policy for each tenant. By default, a tenant shares the common policy under tenant common. The Cisco APIC automatically creates a default monitoring policy and enables common observable. You can modify the default policy under tenant common based on the requirements of your fabric.
■ The Cisco APIC does not provide IPAM services for tenant workloads.
■ Do not misconfigure Control Plane Policing (CoPP) pre-filter entries. CoPP pre-filter entries might impact connectivity to multipod configurations, remote leaf switches, and Cisco ACI Multi-Site deployments.
■ You cannot use remote leaf switches with Cisco ACI Multi-Site.
This section lists IP address-related usage guidelines for the Cisco APIC software.
■ For the following services, a DNS-based hostname is supported only with out-of-band management connectivity; IP addresses can be used with both in-band and out-of-band management connectivity.
— Syslog server
— Call Home SMTP server
— Tech support export server
— Configuration export server
— Statistics export server
■ The infrastructure IP address range must not overlap with other IP addresses used in the fabric for in-band and out-of-band networks.
■ If an IP address is learned on one of two endpoints for which you are configuring an atomic counter policy, you should use an IP-based policy and not a client endpoint-based policy.
■ A multipod deployment requires the 239.255.255.240 system Global IP Outside (GIPo) to be configured on the inter-pod network (IPN) as a PIM BIDIR range. This 239.255.255.240 PIM BIDIR range configuration on the IPN devices can be avoided by using the Infra GIPo as System GIPo feature. The Infra GIPo as System GIPo feature must be enabled only after upgrading all of the switches in the Cisco ACI fabric, including the leaf switches and spine switches, to the latest Cisco APIC release.
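For reference, the IPN-side requirement above can be illustrated with a brief NX-OS-style sketch. This is illustrative only: the phantom RP address (192.168.100.1) is a placeholder, and your IPN addressing and PIM design may differ.

```
feature pim
ip pim rp-address 192.168.100.1 group-list 239.255.255.240/28 bidir
```

The /28 group list covers the 239.255.255.240 system GIPo range; if the Infra GIPo as System GIPo feature is enabled instead, this IPN-side configuration can be avoided.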
■ Cisco ACI does not support a class E address as a VTEP address.
This section lists miscellaneous usage guidelines for the Cisco APIC software.
■ User passwords must meet the following criteria:
— Minimum length is 8 characters
— Maximum length is 64 characters
— Fewer than three consecutive repeated characters
— At least three of the following character types: lowercase, uppercase, digit, symbol
— Cannot be easily guessed
— Cannot be the username or the reverse of the username
— Cannot be any variation of "cisco" or "isco", including any permutation of those characters or any variant obtained by changing capitalization
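Taken together, the criteria above amount to a simple validation routine. The following is a minimal Python sketch, not Cisco code: the function name is illustrative, and the "variation of cisco" check is a simplified stand-in for the broader rule (the subjective "cannot be easily guessed" criterion is not modeled).

```python
import string

def is_valid_apic_password(password: str, username: str = "") -> bool:
    """Check a candidate password against the listed APIC criteria (sketch)."""
    # Length must be 8 to 64 characters.
    if not 8 <= len(password) <= 64:
        return False
    # Fewer than three consecutive repeated characters.
    for i in range(len(password) - 2):
        if password[i] == password[i + 1] == password[i + 2]:
            return False
    # At least three of: lowercase, uppercase, digit, symbol.
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    if sum(classes) < 3:
        return False
    # Not the username or the reverse of the username.
    if username and password.lower() in (username.lower(), username.lower()[::-1]):
        return False
    # Simplified stand-in for the "variation of cisco/isco" rule.
    if password.lower() in ("cisco", "isco", "ocsic", "ocsi"):
        return False
    return True
```

For example, `is_valid_apic_password("Str0ng!Pass")` passes all checks, while a password containing "aaa" fails the consecutive-repeat rule.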
■ In some of the 5-minute statistics data, the count of ten-second samples is 29 instead of 30.
■ The power consumption statistics are not shown on leaf node slot 1.
■ If you defined multiple login domains, you can choose the login domain that you want to use when logging in to a Cisco APIC. By default, the domain drop-down list is empty, and if you do not choose a domain, the DefaultAuth domain is used for authentication. This can result in login failure if the username is not in the DefaultAuth login domain. As such, you must enter the credentials based on the chosen login domain.
■ A firmware maintenance group should contain a maximum of 80 nodes.
■ When contracts are not associated with an endpoint group, DSCP marking is not supported for a VRF with a vzAny contract. DSCP is sent to a leaf switch along with the actrl rule, but a vzAny contract does not have an actrl rule. Therefore, the DSCP value cannot be sent.
■ The Cisco APICs must have 1 SSD and 2 HDDs, and both RAID volumes must be healthy before upgrading to this release. The Cisco APIC will not boot if the SSD is not installed.
■ In a multipod fabric setup, if a new spine switch is added to a pod, it must first be connected to at least one leaf switch in the pod. Then the spine switch is able to discover and join the fabric.
Caution: If you install 1-Gigabit Ethernet (GE) or 10GE links between the leaf and spine switches in the fabric, there is risk of packets being dropped instead of forwarded, because of inadequate bandwidth. To avoid the risk, use 40GE or 100GE links between the leaf and spine switches.
■ For a Cisco APIC REST API query of event records, the Cisco APIC system limits the response to a maximum of 500,000 event records. If the response is more than 500,000 events, it returns an error. Use filters to refine your queries. For more information, see Cisco APIC REST API Configuration Guide.
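As an illustration of refining such a query, the sketch below builds a filtered, paged eventRecord request URL using standard Cisco APIC REST query parameters (query-target-filter, order-by, page, and page-size). The controller host name and timestamp are placeholders.

```python
from urllib.parse import urlencode

# Placeholder APIC address; substitute your controller's host name.
APIC = "https://apic.example.com"

def build_event_query(after_timestamp: str, page_size: int = 500) -> str:
    """Build a REST URL that pages through eventRecord objects created
    after a given timestamp, rather than requesting all records at once."""
    params = {
        "query-target-filter": f'gt(eventRecord.created,"{after_timestamp}")',
        "order-by": "eventRecord.created|asc",
        "page": "0",
        "page-size": str(page_size),
    }
    return f"{APIC}/api/class/eventRecord.json?{urlencode(params)}"

url = build_event_query("2022-08-01T00:00:00")
```

Incrementing the page parameter (or advancing the timestamp filter) retrieves subsequent batches, keeping each response well under the 500,000-record limit.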
■ A Subject Alternative Name (SAN) contains one or more alternate names, using any of a variety of name forms, for the entity that is bound by the Certificate Authority (CA) to the certified public key. Possible names include:
— DNS name
— IP address
■ If a node has port profiles deployed on it, some port configurations are not removed if you decommission the node. You must manually delete the configurations after decommissioning the node to return the ports to their default state. To do this, log in to the switch, run the setup-clean-config.sh script, wait for the script to complete, and then enter the reload command.
■ When using the SNMP trap aggregation feature, if you decommission Cisco APICs, the trap forward server will receive redundant traps.
■ If you upgraded from a release prior to the 3.2(1) release and you had any apps installed prior to the upgrade, the apps will no longer work. To use the apps again, you must uninstall and reinstall them.
■ Connectivity filters were deprecated in the 3.2(4) release. Feature deprecation implies no further testing has been performed and that Cisco recommends removing any and all configurations that use this feature. The usage of connectivity filters can result in unexpected access policy resolution, which in some cases will lead to VLANs being removed/reprogrammed on leaf interfaces. You can search for these VLANs using the moquery command on the APIC:
> moquery -c infraConnPortBlk
> moquery -c infraConnNodeBlk
> moquery -c infraConnNodeS
> moquery -c infraConnFexBlk
> moquery -c infraConnFexS
■ Fabric connectivity ports can operate at 10G or 25G speeds (depending on the model of the APIC server) when connected to leaf switch host interfaces. We recommend connecting two fabric uplinks, each to a separate leaf switch or vPC leaf switch pair.
For APIC-M3/L3, virtual interface card (VIC) 1445 has four ports (port-1, port-2, port-3, and port-4 from left to right). Port-1 and port-2 make a single pair corresponding to eth2-1 on the APIC server; port-3 and port-4 make another pair corresponding to eth2-2 on the APIC server. Only a single connection is allowed for each pair. For example, you can connect one cable to either port-1 or port-2 and another cable to either port-3 or port-4, but not two cables to both ports of the same pair. Connecting two cables to both ports of the same pair creates instability in the APIC server. All ports must be configured for the same speed: either 10G or 25G.
■ When you create an access port selector in a leaf interface profile, the fexId property is configured with a default value of 101 even though a FEX is not connected and the interface is not a FEX interface. The fexId property is only used when the port selector is associated with an infraFexBndlGrp managed object.
The Cisco Application Policy Infrastructure Controller (APIC) documentation can be accessed from the following website:
The documentation includes installation, upgrade, configuration, programming, and troubleshooting guides, technical references, release notes, and knowledge base (KB) articles, as well as other documentation. KB articles provide information about a specific use case or a specific topic.
By using the "Choose a topic" and "Choose a document type" fields of the APIC documentation website, you can narrow down the displayed documentation list to make it easier to find the desired document.
The following list provides links to the release notes and verified scalability documentation:
■ Cisco ACI Simulator Release Notes
■ Cisco NX-OS Release Notes for Cisco Nexus 9000 Series ACI-Mode Switches
■ Cisco Application Policy Infrastructure Controller OpenStack and Container Plugins Release Notes
■ Cisco Application Virtual Switch Release Notes
This section lists the new Cisco ACI product documents for this release.
■ Cisco ACI Virtual Edge Configuration Guide, Release 1.2(9)
■ Cisco ACI Virtual Edge Release Notes, Release 1.2(9)
■ Cisco Application Virtual Switch Release Notes, 5.2(1)SV3(3.31)
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
© 2019-2024 Cisco Systems, Inc. All rights reserved.