This document describes the features, caveats, and limitations for the Cisco Application Policy Infrastructure Controller (APIC) software.
Note: Use this document in combination with the Cisco NX-OS Release 11.2(3) Release Notes for Cisco Nexus 9000 Series ACI-Mode Switches, which you can view at the following location:
Additional product documentation is listed in the “Related Documentation” section.
Release notes are sometimes updated with new information about restrictions and caveats. See the following website for the most recent version of this document:
Table 1 shows the online change history for this document.
Table 1 Online Change History
Date | Description |
April 8, 2016 | Created the release notes for the 1.2(3c) release. |
May 15, 2016 | 1.2(3e): Release 1.2(3e) became available. Added the resolved caveats for this release. |
June 17, 2016 | 1.2(3h): Release 1.2(3h) became available. Added the resolved caveats for this release. |
June 22, 2016 | In the Changes in Behavior section, added a change in AVS certifications. |
August 11, 2016 | In the Upgrading the APIC Controller section, added information about upgrading from an unlisted release. |
November 12, 2016 | 1.2(3m): Release 1.2(3m) became available. Added the resolved caveats for this release. |
December 6, 2016 | In the Compatibility Information section, added information about a known issue when using the Safari browser to connect to the APIC. |
February 28, 2017 | In the Usage Guidelines section, added: If the communication between the APIC and vCenter is impaired, some functionality is adversely affected. The APIC relies on the pulling of inventory information, updating vDS configuration, and receiving event notifications from the vCenter for performing certain operations. |
November 20, 2017 | In the Usage Guidelines section, changed a mention of “Virtual Private Cloud (VPC)” to “virtual port channel (vPC).” |
This document includes the following sections:
■ Upgrading the APIC Controller
■ Downgrading the APIC Controller
■ Caveats
The Cisco Application Centric Infrastructure (ACI) is an architecture that allows the application to define the networking requirements in a programmatic way. This architecture simplifies, optimizes, and accelerates the entire application deployment life cycle.
The Cisco Application Centric Infrastructure Fundamentals guide provides complete details about the ACI, including a glossary of terms that are used in the ACI.
■ For installation instructions, see the Cisco ACI Fabric Hardware Installation Guide.
■ Back up your APIC configuration prior to installing or upgrading to this release. Single APIC clusters, which should not be run in production, can lose their configuration if database corruption occurs during the installation or upgrade.
■ For instructions on how to access the APIC for the first time, see the Cisco APIC Getting Started Guide.
■ For the Cisco APIC Python SDK documentation, including installation instructions, see the Cisco APIC Python SDK Documentation.
The SDK egg file that is needed for installation is included in the package:
— acicobra-1.2_3X-py2.7.egg
“X” is the patch letter of the release; for example, “1.2_3c” for the 1.2(3c) release.
Note: Installation of the SDK with SSL support on Unix/Linux and Mac OS X requires a compiler. For a Windows installation, you can install the compiled shared objects for the SDK dependencies using wheel packages.
Note: The model package depends on the SDK package; be sure to install the SDK package first.
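As a sketch, the egg filename for a given patch release can be assembled programmatically. The helper name and the hard-coded “1.2_3” base string are illustrative assumptions, not part of the SDK:

```python
def sdk_egg_name(patch_letter, base="1.2_3", py_tag="py2.7"):
    """Build the expected acicobra egg filename for a 1.2(3x) patch release.

    Illustrative helper only; the filename pattern follows the naming
    convention shown above (acicobra-1.2_3X-py2.7.egg).
    """
    return "acicobra-{0}{1}-{2}.egg".format(base, patch_letter, py_tag)

# For the 1.2(3c) release:
print(sdk_egg_name("c"))  # acicobra-1.2_3c-py2.7.egg
```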
■ Cisco ACI with Microsoft System Center Virtual Machine Manager (SCVMM) or Microsoft Windows Azure Pack only supports ASCII characters. Non-ASCII characters are not supported. Ensure that English is set in the System Locale settings for Windows; otherwise, ACI with SCVMM and Windows Azure Pack will not install. In addition, if the System Locale is modified to a non-English locale after the installation, the integration components might fail when communicating with the APIC and the ACI fabric.
You can find all of the indicated documentation at the following URL:
Table 2 lists the supported APIC upgrades. If you are upgrading from a release that is not listed in the table, you must first upgrade to one of the listed “From” releases, and then upgrade to a 1.2(3) release.
Note: Do not make any configuration changes until the APIC and switch upgrades are complete.
Table 2 Supported APIC Upgrades
From | To | Limitations | Recommended Procedure |
1.2(1) | 1.2(3) | None | 1. Upgrade APICs 2. After APICs are upgraded successfully, upgrade the switches using two or more maintenance groups |
1.1(4) | 1.2(3) | Due to bug CSCux40954, which was resolved in the 1.2(1) release, uploading Cisco APIC firmware by using the Upload button in the GUI does not work. The upload appears to complete successfully, but the firmware is not updated in the repository. You must instead download the image to the APIC from a server by using SCP or HTTP. | 1. Upgrade APICs 2. After APICs are upgraded successfully, upgrade the switches using two or more maintenance groups |
1.1(3f) | 1.2(3) | None | 1. Upgrade APICs 2. After APICs are upgraded successfully, upgrade the switches using two or more maintenance groups |
1.1(2h) | 1.2(3) | None | 1. Upgrade APICs 2. After APICs are upgraded successfully, upgrade the switches using two or more maintenance groups |
1.1(1) | 1.2(3) | None | 1. Upgrade APICs 2. After APICs are upgraded successfully, upgrade the switches using two or more maintenance groups |
1.0(4q) or later | 1.2(3) | None | 1. Upgrade APICs 2. After APICs are upgraded successfully, upgrade the switches using two or more maintenance groups |
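The supported-path rule above can be sketched as a small check. The release strings and helper below are illustrative assumptions for planning purposes only; the APIC does not expose such an API:

```python
# "From" releases that Table 2 lists as supporting a direct upgrade to 1.2(3).
# Note that "1.0(4q) or later" would need a real version comparison; only the
# exact strings from the table are checked here.
SUPPORTED_FROM = {"1.2(1)", "1.1(4)", "1.1(3f)", "1.1(2h)", "1.1(1)", "1.0(4q)"}

def direct_upgrade_supported(current_release):
    """Return True if the running release can be upgraded directly to 1.2(3)."""
    return current_release in SUPPORTED_FROM

# A release not listed in the table requires an intermediate upgrade first.
print(direct_upgrade_supported("1.0(3f)"))  # False
```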
Table 3 lists the supported APIC and switch downgrades.
Note: APIC image downgrades are blocked by default if the target image is not in a supported downgrade path.
Table 3 Supported APIC and Switch Downgrades
From | To | Limitations | Recommended Procedure |
1.2(3) | 1.1(1o) and later | None | 1. Downgrade APICs. 2. After APICs are downgraded successfully, downgrade the switches using two or more maintenance groups. |
1.2(2) | 1.1(1o) and later | None | 1. Downgrade APICs. 2. After APICs are downgraded successfully, downgrade the switches using two or more maintenance groups. |
1.2(1) | 1.1(1o) and later | None | 1. Downgrade APICs. 2. After APICs are downgraded successfully, downgrade the switches using two or more maintenance groups. |
1.2(1) | 1.0(4q) and earlier | None | You must perform a stateless downgrade. See the procedure below. |
The following procedure performs a stateless downgrade:
Note: You must plan for a Fabric outage, as this procedure rebuilds the Fabric.
1 Export the Fabric configuration.
2 Run the “acidiag touch setup” command on the APIC controllers. This will reboot the controllers. Ensure that the controllers have been rebooted before moving on to step 3.
3 Run the “setup-clean-config.sh” script on the switch nodes and reload all of the switches. Steps 2 and 3 clear the configuration on the Fabric, making this a stateless downgrade.
4 Rediscover the Fabric.
5 Downgrade the Fabric to the desired release.
6 Run the “acidiag touch setup” command on the APIC controllers. This step is required so that the script can run additional commands that might be required for the version that is being used. The “acidiag touch setup” command will reload the APICs.
7 Run the “setup-clean-config.sh” script on the switch nodes and reload them.
8 Complete the initial setup script on the APIC controllers.
9 Import the Fabric configuration using the import “merge” mode.
■ This release supports the hardware and software listed in the ACI Ecosystem Compatibility List document, as well as the following software:
— Cisco NX-OS Release 11.2(3)
— Cisco AVS, Release 5.2(1)SV3(1.15)
For more information about the supported AVS releases, see the AVS software compatibility information in the Cisco Application Virtual Switch Release Notes at the following URL:
— Cisco UCS Manager software release 2.2(1c) or later is required for the Cisco UCS Fabric Interconnect and other components, including the BIOS, CIMC, and the adapter
See the ACI Ecosystem Compatibility List document at the following URL:
■ The breakout of 40G ports to 4x10G on the N9332PQ switch is not supported in ACI mode.
■ To connect the N2348UPQ to ACI leaf switches, the following options are available:
— Directly connect the 40G FEX ports on the N2348UPQ to the 40G switch ports on the N9332PQ switch
— Break out the 40G FEX ports on the N2348UPQ to 4x10G ports and connect to the N9396PX or N9372PX switches
■ Connecting the APIC (the controller cluster) to the ACI fabric requires a 10G interface on the ACI leaf. You cannot connect the APIC directly to the N9332PQ ACI Leaf.
■ This release supports the following firmware:
— 1.5(4e) CIMC HUU iso
— 2.0(3i) CIMC HUU iso (recommended)
■ Beginning with Cisco Application Virtual Switch (AVS) release 5.2(1)SV3(1.10), Layer 4 to Layer 7 service graphs are supported for Cisco AVS. Layer 4 to Layer 7 service graphs for Cisco AVS can be configured for virtual machines only and in VLAN mode only.
■ This release supports VMM Integration and VMware Distributed Virtual Switch (DVS) 6.x. For more information about guidelines for upgrading VMware DVS from 5.x to 6.x and VMM integration, see the Cisco ACI Virtualization Guide, Release 1.2(3x) at the following URL:
■ The 1.2(3c) and 1.2(3e) releases support the Microsoft System Center Virtual Machine Manager (SCVMM) Update Rollup 9 release, and the Microsoft Windows Azure Pack (WAP) Update Rollup 9 release.
— The 1.2(3h) release also supports the Microsoft SCVMM Update Rollup 10 release and the Microsoft WAP Update Rollup 10 release.
■ This release supports the partner packages specified in the L4-L7 Compatibility List Solution Overview document at the following URL:
https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/solution-overview-listing.html
■ This release supports Adaptive Security Appliance (ASA) device package version 1.2.5.5 or later.
■ If you are running a Cisco Adaptive Security Virtual Appliance (ASAv) version that is prior to version 9.3(2), you must configure SSL encryption as follows:
(config)# ssl encryption aes128-sha1
■ A known issue exists with the Safari browser and unsigned certificates, which applies when connecting to the APIC GUI. For more information, see the Cisco APIC Getting Started Guide.
■ For information about APIC compatibility with UCS Director, see the appropriate Cisco UCS Director Compatibility Matrix document at the following URL:
This section lists usage guidelines for the APIC software.
■ The APIC GUI supports the following browsers:
— Chrome version 35 (at minimum) on Mac and Windows
— Firefox version 42 (at minimum) on Mac, Linux, and Windows
— Internet Explorer version 11 (at minimum)
— Safari 7.0.3 (at minimum)
Note: Restart your browser after upgrading to release 1.2(3).
Caution: A known issue exists with the Safari browser and unsigned certificates. Read the information presented here before accepting an unsigned certificate for use with WebSockets.
When you access the HTTPS site, the following message appears:
“Safari can’t verify the identity of the website APIC. The certificate for this website is invalid. You might be connecting to a website that is pretending to be an APIC, which could put your confidential information at risk. Would you like to connect to the website anyway?”
To ensure that WebSockets can connect, you must do the following:
1. Click Show Certificate.
2. Select Always Trust in the three drop-down lists that appear.
If you do not follow these steps, WebSockets will not be able to connect.
■ The APIC GUI includes an online version of the Quick Start guide that includes video demonstrations.
■ The infrastructure IP address range must not overlap with other IP addresses used in the fabric for in-band and out-of-band networks.
■ The APIC does not provide IPAM services for tenant workloads.
■ To reach the APIC CLI from the GUI: choose System > Controllers, highlight a controller, right-click, and choose Launch SSH. To get the list of commands, press the escape key twice.
■ In some of the 5-minute statistics data, the count of ten-second samples is 29 instead of 30.
■ For the following services, use a DNS-based host name with out-of-band management connectivity. IP addresses can be used with both in-band and out-of-band management connectivity.
— Syslog server
— Call Home SMTP server
— Tech support export server
— Configuration export server
— Statistics export server
■ Both leaf and spine switches can be managed from any host that has IP connectivity to the fabric.
■ When configuring an atomic counter policy between two endpoints, if an IP address is learned on one of the two endpoints, we recommend that you use an IP-based policy rather than a client endpoint-based policy.
■ When configuring two Layer 3 external networks on the same node, the loopbacks need to be configured separately for both Layer 3 networks.
■ All endpoint groups (EPGs), including application EPGs and Layer 3 external EPGs, require a domain. Interface policy groups must also be associated with an Attach Entity Profile (AEP), and the AEP must be associated with domains. Based on the association of EPGs to domains and of the interface policy groups to domains, the ports and VLANs that the EPG uses are validated. This applies to all EPGs including bridged Layer 2 outside and routed Layer 3 outside EPGs. For more information, see the Cisco Fundamentals Guide and the KB: Creating Domains, Attach Entity Profiles, and VLANs to Deploy an EPG on a Specific Port article.
Note: In the 1.0(4x) and earlier releases, when creating static paths for application EPGs or Layer 2/Layer 3 outside EPGs, the physical domain was not required. In this release, it is required. Upgrading without the physical domain will raise a fault on the EPG stating “invalid path configuration.”
■ An EPG can only associate with a contract interface in its own tenant.
■ User passwords must meet the following criteria:
— Minimum length is 8 characters
— Maximum length is 64 characters
— Fewer than three consecutive repeated characters
— At least three of the following character types: lowercase, uppercase, digit, symbol
— Cannot be easily guessed
— Cannot be the username or the reverse of the username
— Cannot be “cisco”, “isco”, any permutation of those characters, or any variant obtained by changing the capitalization of letters
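The criteria above can be approximated with a short validator. This is a sketch only: the “cannot be easily guessed” rule is not mechanically checkable, and the “cisco” variant rule is approximated here by a simplified substring test.

```python
import re

def password_ok(password, username):
    """Approximate check of the documented APIC password rules (sketch)."""
    if not 8 <= len(password) <= 64:
        return False
    if re.search(r"(.)\1\1", password):  # three or more consecutive repeats
        return False
    # At least three of: lowercase, uppercase, digit, symbol.
    classes = (
        any(c.islower() for c in password)
        + any(c.isupper() for c in password)
        + any(c.isdigit() for c in password)
        + any(not c.isalnum() for c in password)
    )
    if classes < 3:
        return False
    lowered = password.lower()
    if lowered in (username.lower(), username.lower()[::-1]):
        return False
    if "cisco" in lowered or "isco" in lowered:  # simplified variant check
        return False
    return True
```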
■ The power consumption statistics are not shown on leaf node slot 1.
■ For Layer 3 external networks created through the API or Advanced GUI and updated through the CLI, protocols need to be enabled globally on the external network through the API or Advanced GUI, and the node profile for all the participating nodes needs to be added through the API or Advanced GUI before doing any further updates through the CLI.
■ For Layer 3 external networks created through the CLI, you should not update them through the API. These external networks are identified by names starting with “__ui_”.
■ The output from "show" commands issued in the NX-OS-style CLI is subject to change in future software releases. Cisco does not recommend using the output of show commands for automation.
■ In this software version, the CLI is supported only for users with administrative login privileges.
■ Do not separate virtual port channel (vPC) member nodes into different configuration zones. If the nodes are in different configuration zones, then the vPCs’ modes become mismatched if the interface policies are modified and deployed to only one of the vPC member nodes.
■ If you defined multiple login domains, you can choose the login domain that you want to use when logging in to an APIC. By default, the domain drop-down list is empty, and if you do not choose a domain, the DefaultAuth domain is used for authentication. This can result in login failure if the username is not in the DefaultAuth login domain. As such, you must enter the credentials based on the chosen login domain.
■ A firmware maintenance group should contain a maximum of 80 nodes.
■ When contracts are not associated with an endpoint group, DSCP marking is not supported for a VRF with a vzAny contract. DSCP is sent to a leaf along with the actrl rule, but a vzAny contract does not have an actrl rule. Therefore, the DSCP value cannot be sent.
■ If the communication between the APIC and vCenter is impaired, some functionality is adversely affected. The APIC relies on the pulling of inventory information, updating vDS configuration, and receiving event notifications from the vCenter for performing certain operations.
Table 4 shows the CLI scalability limits.
Table 4 CLI Scalability Limits
Configurable Option | Scale |
Number of tenants | 500 |
Number of Layer 3 (L3) contexts | 300 |
Number of endpoint groups (EPGs) | 3,500 |
Number of endpoints (EPs) | 20,000 |
Number of bridge domains (BDs) | 3,500 |
Number of BGP + OSPF + EIGRP sessions (for external connections) | 300 |
Maximum number of vPCs | 48 |
Maximum number of PCs, access ports | 48 |
Maximum number of encaps per access port | 1,750 |
Number of multicast groups | 8,000 |
Maximum number of vzAny provided contracts | 16 |
Maximum number of vzAny consumed contracts | 16 |
Maximum number of encaps per endpoint group | 2 static, 1 dynamic |
Security TCAM size | 4,000 |
Number of VRFs | 500 |
Separate-Config-Set | |
Tenants | 100 |
Endpoint groups | 1,000 |
Bridge domains | 500 |
VRFs | 100 |
SPAN destinations | 3 |
NTP servers | 2 |
Contracts | 100 |
DNS servers | 2 |
Syslog servers | 1 |
For additional verified scalability limits, see the Verified Scalability Guide for this release:
https://www.cisco.com/c/en/us/support/cloud-systems-management/application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
This section lists the new and changed features in this release and includes the following topics:
This release supports no new software features.
This section lists changes in behavior in this release.
■ Starting with the 1.2(3) release, AVS uses site-specific certifications. Prior to the 1.2(3) release, AVS used image-based certifications. Because of the change in certifications, if you upgrade from a release prior to the 1.2(3) release, you must use an alternate upgrade procedure. For the upgrade procedure, see the Cisco AVS Installation Guide at the following website:
This section contains lists of open and resolved caveats and known behaviors.
This section lists the open caveats. Click the bug ID to access the Bug Search tool and see additional information about the bug. If a caveat is fixed in a patch of this release, the “Fixed In” column of the tables specifies the release.
Table 5 lists the open caveats in the 1.2(3c) release.
Table 5 Open Caveats in the 1.2(3c) Release
Bug ID | Description | Fixed In |
After upgrading TORs from the 1.1(4e) release to the 1.2(1k) release, when the maint-grp-1 set of the TORs is rebooted, there is traffic loss on the virtual machines. | |
The AVS host-to-leaf OpFlex handshake can be delayed after a VIB upgrade when a large number of vMotions happen in a short time. OpFlex auto-establishes for the newly upgraded host once the vMotion event processing load subsides. | |
Upgrading or downgrading the APIC intermittently fails. The upgrade logs (/root/insieme_installer.log) show that the TCSD service is unable to run because it is failing to communicate with the TPM hardware. The problem happens when the TCSD daemon is unable to communicate with the TPM hardware through the TIS module that is compiled into the kernel. | |
Downgrading an APIC that has an Intra-EPG deny configuration from the 1.2(2) release to an earlier release is not supported. You must manually clean up the Intra-EPG deny configuration before downgrading. | |
After deleting or re-creating encapsulation blocks, changing the association of accBaseGrp to an AEP or deleting an AEP results in an invalid VLAN fault. | |
A subnet does not get deleted from a VRF that is in policy unenforced mode after the subnet is moved to another VRF. | 1.2(3e) |
There are no new open caveats in the 1.2(3e) release.
There are no new open caveats in the 1.2(3h) release.
There are no new open caveats in the 1.2(3m) release.
This section lists the resolved caveats. Click the bug ID to access the Bug Search tool and see additional information about the bug.
Table 6 lists the resolved caveats in the 1.2(3c) release.
Table 6 Resolved Caveats in the 1.2(3c) Release
Bug ID | Description |
Multiple encapsulations are not supported for different physical paths when configuring an l3Out using SVIs. | |
A configuration failed fault gets raised on the in-band management EPG. This can happen if the EPG is not modified in the same transaction where the relation from mgmt:InBZone to the EPG becomes formed. This is a falsely raised fault and does not have any operational impact. |
Eraseconfig does not bring up IFC in the factory setting mode. | |
You are not given the option to configure a consumed contract interface on the in-band EPG using the Cisco APIC GUI. | |
Public subnets in a bridge domain can be advertised out through a routing protocol using a "match bridge-domain <bridge_domain_name>" in the route-map associated with the protocol. Route control properties such as "set tag" or "set metric" can be set for these public subnets through "inherit route-profile <profile name>" under the "match bridge-domain" command. If the route-profile name is not equal to "default-export", then the route control properties are not set correctly on the exported BD subnets. |
Match statements on a route-map, such as match bridge-domain, community, or prefix-list, that do not have specific route-profiles defined under the match statement use the default-export route-profile when the route-map is applied in the export direction and the default-import route-profile when the route-map is applied in the import direction. The route-profile set action that is associated with the "default-export" or "default-import" route-profiles does not take effect on the route-map under certain conditions. |
The "match community" statement under the route-map <name> does not take effect. | |
The Layer 3 sub-interface MTU value is reset to the inherited fabric policy value when l3extInstP is deleted. | |
After installing the 1.2(2) release and using PXE boot to bring up the APICs, the admin login does not work. | |
When configuring a consumed contract under an in-band EPG, having a “-” in the name results in a missing target error. |
After you create a new AEP with the Infra. VLAN check box checked, and after you create a physical domain from the AEP wizard, the Infra. VLAN check box becomes unchecked. | |
Service graphs tied to a particular device are stuck in the Applying state. At least one of the function connectors for the device shows a classID/pcTag of any. This issue occurs because the bridge domain placement of the connectors of the device have been changed while graphs were deployed on the device. | |
There is connectivity loss for traffic leaving or entering the ACI fabric through an L3 Out due to stale IP prefix entries in GST-L3-TCAM of the border leaf switches. |
Table 7 lists the resolved caveats in the 1.2(3e) release.
Table 7 Resolved Caveats in the 1.2(3e) Release
Bug ID | Description |
A subnet does not get deleted from a VRF that is in policy unenforced mode after the subnet is moved to another VRF. | |
A policymgr core gets dumped while commissioning or decommissioning a TOR. |
The GUI does not work for the BGP route reflector. | |
A large amount of opflexIDEp movement/updates in the ACI fabric due to a port flapping issue causes the GUI to react slowly when using the Operational Members tab under the endpoint group. |
Table 8 lists the resolved caveats in the 1.2(3h) release.
Table 8 Resolved Caveats in the 1.2(3h) Release
Bug ID | Description |
Routes leaked from one VRF Y to another VRF X are advertised from an L3out of VRF X, even though leaked routes are not allowed to be advertised. The leaked routes are bridge domain subnets in VRF Y. | |
After epg1 consumes a contract interface and epg2 then consumes a global contract provided by epg1, the route-map entries do not get removed in instances where they should get removed. |
The OpFlex state is shown as send functionality on two of the ESXi hosts after upgrading from 1.2(3h) to 1.3(2f), which results in some of the endpoints being down. | |
OpFlex flaps with timer issues when both OpFlex channels go down at the same time. |
Table 9 lists the resolved caveats in the 1.2(3m) release.
Table 9 Resolved Caveats in the 1.2(3m) Release
Bug ID | Description |
The GUI sends two POSTs to the backend. The first POST deletes all of the folders and parameters. The second POST adds all of the remaining and modified folders and parameters to the backend. These two POSTs disrupt the running traffic. |
There is a memory leak with some ESX/hosts that are in the disconnected state in vCenter. | |
An operational fault with the following in the description might be seen in the VM Networking tab under a VM controller: Last inventory pull did not complete for some Hosts or VMs, or no instances were received. Please verify the state of the Hosts and VMs in VMM controller and manually trigger inventory sync. | |
When a leaf switch joins a Cisco ACI fabric, such as when decommissioning or commissioning the leaf switch or a new leaf switch is connected to the fabric, the policymgr process on the APICs could crash with a core. | |
An ESXi host loses connectivity through the Cisco ACI to vCenter, and the host VMNICS are shown as "UP" in the VMM domain. | |
A leaf switch will dump a core due to transactions being suspended for a while, with the following error: "Suspending the doer queue." This issue occurs for transactions without a resume transaction. |
VMs lose network connectivity and the EPG VLAN is deleted from a switch when vCenter is powered off, restarted, or upgraded. | |
The VMMmgr process crashes while retrieving inventory information, after which the VMMmgr process restarts. | |
During an upgrade, traffic forwarding fails for one of the service graph Instances. | |
Extra messages get sent when a filter in tenant common is added or deleted, or any of its entries are added, deleted, or modified. |
This section lists caveats that describe known behaviors. Click the Bug ID to access the Bug Search Tool and see additional information about the bug.
Table 10 lists caveats that describe known behaviors in the 1.2(3c) release.
Table 10 Known Behaviors in the 1.2(3c) Release
Bug ID | Description |
The APIC does not validate duplicate IPs assigned to two device clusters. The communication to devices or the configuration of service devices might be affected. | |
In some of the 5-minute statistics data, the count of ten-second samples is 29 instead of 30. | |
The node ID policy can be replicated from an old appliance that is decommissioned when it joins a cluster. | |
The DSCP value specified on an external endpoint group does not take effect on filter rules on the leaf switch. | |
The hostname resolution of the syslog server fails on leaf and spine switches over in-band connectivity. | |
After importing an exported configuration, graph instances are not created and Layer 4 to Layer 7 packages are missing in the system. | |
Following a FEX or switch reload, configured interface tags are no longer configured correctly. | |
Switches could get downgraded to a 1.0(1x) version if the imported configuration consists of a firmware policy with a desired version set to 1.0(1x). | |
Some reported client endpoints are not present on the APIC during an upgrade. | |
The APIC is rebooted using the CIMC power reboot. On reboot, the system enters into fsck due to a corrupted disk. | |
The Cisco APIC Service (ApicVMMService) shows as stopped in the Microsoft Service Manager (services.msc in control panel > admin tools > services) after valid domain credentials are entered during installation or configuration of the service. | |
The traffic destined to a shared service provider endpoint group picks an incorrect class Id (PcTag) and gets dropped. | |
Traffic from an external Layer 3 network is allowed when configured as part of a vzAny (a collection of endpoint groups within a context) consumer. | |
The microsegment endpoint group is in the incorrect state after downgrading. | |
Downgrading the fabric starting with the leaf switches causes faults such as policy-deployment-failed with fault code F1371. |
For direct server return operations, if the client is behind the Layer 3 out, the server-to-client response will not be forwarded through the fabric. | |
The OpenStack metadata feature cannot be used with ACI integration with the Juno release (or earlier) of OpenStack due to limitations with both OpenStack and Cisco’s ML2 driver. |
There are no new known behaviors in the 1.2(3e) release.
There are no new known behaviors in the 1.2(3h) release.
There are no new known behaviors in the 1.2(3m) release.
The Cisco Application Policy Infrastructure Controller (APIC) documentation can be accessed from the following website:
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
© 2016-2017 Cisco Systems, Inc. All rights reserved.