This document describes the features, caveats, and limitations for the Cisco Application Policy Infrastructure Controller (APIC) software. For more information on specific hardware features, see the Cisco NX-OS Release 11.1(1j) Release Notes for Cisco Nexus 9000 Series ACI-Mode Switches. Additional product documentation is listed in the “Related Documentation” section.
Release notes are sometimes updated with new information about restrictions and caveats. See the following website for the most recent version of this document:
Table 1 shows the online change history for this document.
Table 1 Online Change History
Date | Description |
June 14, 2015 | Created the release notes for Release 1.1(1j) |
June 15, 2015 | Revised downgrading information |
June 18, 2015 | ■ Added CSCuu87980 to Open Caveats ■ Added new support information about the Cisco Nexus 2348UPQ in the Compatibility Information section ■ Updated New Documentation section |
June 25, 2015 | Added the following: ■ Upgrading from 1.0(3x) to 1.1(1j) is supported ■ Downgrading from 1.1(1j) to 1.0(4o) is supported |
July 1, 2015 | Fixed descriptions of breakout support and the options for connecting the N2348UPQ to switches in the Compatibility Information section |
July 16, 2015 | Added the Static VMM Encap Mode feature to the New Software Features section |
July 21, 2015 | Updated the downgrade information |
August 5, 2015 | Added CSCut76195 to Resolved Caveats |
August 6, 2015 | In the Upgrading the APIC Controller section, added the upgrade path from 1.0(1x) to 1.1(1j). |
August 18, 2015 | In the Upgrading the APIC Controller section, removed the note in the row for upgrading from 1.0(3x) to 1.0(4x). |
August 21, 2015 | In the Installation Notes section, added that acimodel-1.1_1j-py.egg is also required. |
August 28, 2015 | Rewrote the procedure in the Downgrading the APIC Controller section to provide more information about stateless downgrades. |
October 16, 2015 | In the Compatibility Information section, added the supported ASA device package version. Also added information about AVS and DVS support with Layer 4 to Layer 7 service insertion or service chaining. |
November 13, 2015 | In the Known Behaviors section, added bug CSCuw81638. |
December 3, 2015 | In the “Installation Notes” section, fixed the .egg file URLs. |
December 9, 2015 | Fixed incorrect URLs to the documentation on cisco.com. |
February 29, 2016 | In the Compatibility Information section, added a link to the AVS Release Notes. |
March 16, 2016 | In the Installation Notes section, added mention that ACI with SCVMM or Windows Azure Pack only supports ASCII characters. |
February 28, 2017 | In the Usage Guidelines section, added: If the communication between the APIC and vCenter is impaired, some functionality is adversely affected. The APIC relies on the pulling of inventory information, updating vDS configuration, and receiving event notifications from the vCenter for performing certain operations. |
April 17, 2017 | Removed deprecated Knowledge Base articles. |
May 24, 2018 | In the New Software Features section, for the IPv6 Forwarding and Routing feature, added the following limitation: You cannot configure both IPv4 and IPv6 addresses on singular physical interfaces. |
This document includes the following sections:
■ Upgrading the APIC Controller
■ Downgrading the APIC Controller
■ Caveats
The Cisco Application Centric Infrastructure (ACI) is an architecture that allows the application to define the networking requirements in a programmatic way. This architecture simplifies, optimizes, and accelerates the entire application deployment life cycle.
The Cisco Application Centric Infrastructure Fundamentals guide provides complete details about the ACI, including a glossary of terms that are used in the ACI.
■ For installation instructions, see the Cisco ACI Fabric Hardware Installation Guide.
■ For instructions on how to access the APIC for the first time, see the Cisco APIC Getting Started Guide.
■ For the Cisco APIC Python SDK documentation, including installation instructions, see the Cisco APIC Python SDK Documentation.
■ Two installation egg files are needed for installation. You can download these files from a running APIC from the URLs below.
The following file is the SDK:
o http[s]://<APIC address>/cobra/_downloads/acicobra-1.1_1j-py2.7.egg
The following file includes the Python packages that model the Cisco ACI Management Information Tree:
o http[s]://<APIC address>/cobra/_downloads/acimodel-1.1_1j-py.egg
Note: Installation of the SDK with SSL support on Unix/Linux and Mac OS X requires a compiler. For a Windows installation, you can install the compiled shared objects for the SDK dependencies using wheel packages.
Note: The model package depends on the SDK package; be sure to install the SDK package first.
■ Cisco ACI with Microsoft System Center Virtual Machine Manager (SCVMM) or Microsoft Windows Azure Pack supports only ASCII characters; non-ASCII characters are not supported. Ensure that English is set in the System Locale settings for Windows; otherwise, ACI with SCVMM or Windows Azure Pack will not install. In addition, if the System Locale is modified to a non-English locale after the installation, the integration components might fail when communicating with the APIC and the ACI fabric.
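The egg-file download described above can be sketched in a few lines. The helper below only derives the two egg URLs for a given APIC address and records the required install order, with the SDK (acicobra) installed before the model package (acimodel) that depends on it; the host name is a placeholder, not a real controller.

```python
# Sketch: derive the two .egg download URLs for a given APIC, following the
# "/cobra/_downloads/" path layout shown above.

RELEASE = "1.1_1j"

def egg_urls(apic_host, scheme="https"):
    """Return the SDK and model egg URLs in the required install order.

    The SDK package (acicobra) must be installed first; the model
    package (acimodel) depends on it.
    """
    base = "{}://{}/cobra/_downloads".format(scheme, apic_host)
    return [
        base + "/acicobra-" + RELEASE + "-py2.7.egg",  # 1. SDK: install first
        base + "/acimodel-" + RELEASE + "-py.egg",     # 2. model package
    ]

urls = egg_urls("apic.example.com")  # placeholder host
```

Once downloaded, the eggs are installed in that order with your preferred egg installer.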
Table 2 lists the supported APIC upgrades.
Table 2 Supported APIC Upgrades
From | To | Limitations | Recommended Procedure |
1.0(4x) | 1.1(1j) | None | 1. Upgrade the APICs. 2. After the APICs are upgraded successfully, upgrade the switches using two or more maintenance groups. |
1.0(3x) | 1.1(1j) | None | 1. Upgrade the APICs. 2. After the APICs are upgraded successfully, upgrade the switches using two or more maintenance groups. |
1.0(3x) | 1.0(4x) | None | 1. Upgrade the APICs. 2. After the APICs are upgraded successfully, upgrade the switches using two or more maintenance groups. |
1.0(2x) | 1.0(4x) | None | 1. Upgrade the APICs. 2. After the APICs are upgraded successfully, upgrade the switches using two or more maintenance groups. |
1.0(2x) | 1.0(3x) | None | 1. Upgrade the APICs. 2. After the APICs are upgraded successfully, upgrade the switches using two or more maintenance groups. |
1.0(1x) | 1.1(1j) | Upgrading directly to 1.1(1j) is not supported with a policy upgrade. You must upgrade first to a 1.0(4x) release, and then upgrade to the 1.1(1j) release. If an issue occurs, you might need to run the installer manually on the individual APICs. | 1. Upgrade the APICs to a 1.0(4x) release. 2. After the APICs are upgraded successfully, upgrade the switches to the equivalent 11.0(4x) release using two or more maintenance groups. 3. Upgrade the APICs to the 1.1(1j) release. 4. After the APICs are upgraded successfully, upgrade the switches to the 11.1(1j) release using two or more maintenance groups. |
Downgrading from this release to 1.0(4o) is supported. However, this release does not support a stateful downgrade to 1.0(4h) or earlier releases. To downgrade from this release to 1.0(4h) or earlier, you must perform a stateless downgrade, as shown in the following procedure.
Note: You must plan for a Fabric outage, as this procedure rebuilds the Fabric.
1 Export the Fabric configuration.
2 Run the “eraseconfig” command on the APIC controllers. This will reboot the controllers. Ensure that the controllers have been rebooted before moving on to step 3.
3 Run the “setup-clean-config.sh” script on the switch nodes and reload all of the switches. Steps 2 and 3 clear the configuration on the Fabric, making this a stateless downgrade.
4 Rediscover the Fabric.
5 Downgrade the Fabric to the desired release.
6 Run the “eraseconfig setup” command on the APIC controllers. This step is required so that the script can run additional commands that might be required for the version that is being used. The “eraseconfig setup” command will reload the APICs.
7 Run the “setup-clean-config.sh” script on the switch nodes and reload them.
8 Complete the initial setup script on the APIC controllers.
9 Import the Fabric configuration using the import “merge” mode.
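Step 9 (importing the saved configuration in merge mode) can also be driven through the REST API by posting an import policy. The sketch below only builds the XML payload; the class and attribute names (configImportP, importType, adminSt) and the policy name are assumptions about the configuration-import object model and should be verified against the APIC Management Information Model reference for your release.

```python
import xml.etree.ElementTree as ET

def build_import_policy(file_name):
    """Serialize a hypothetical configuration-import payload in "merge" mode.

    Class and attribute names are assumptions; confirm them against the
    APIC Management Information Model reference before use.
    """
    mo = ET.Element("configImportP", {
        "name": "downgrade-restore",   # hypothetical policy name
        "fileName": file_name,
        "importType": "merge",         # step 9: import using merge mode
        "adminSt": "triggered",        # assumed trigger attribute
    })
    return ET.tostring(mo, encoding="unicode")

payload = build_import_policy("ce2_backup.tar.gz")  # placeholder file name
```

The resulting XML would then be POSTed to the APIC REST endpoint that accepts import policies.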
■ This release supports the hardware and software listed on the ACI Ecosystem Compatibility List and the software listed as follows:
— Cisco NX-OS Release 11.1(1j)
— Cisco AVS, Release 5.2(1)SV3(1.5)
For more information about the supported AVS releases, see the AVS software compatibility information in the Cisco Application Virtual Switch Release Notes at the following URL:
— Cisco UCS Manager software Release 2.2(1c) or later is required for the Cisco UCS Fabric Interconnect and other components, including the BIOS, CIMC, and the adapter
■ The breakout of 40G ports to 4x10G on the N9332PQ switch is not supported in ACI-Mode.
■ To connect the N2348UPQ to ACI leaf switches, the following options are available:
— Directly connect the 40G FEX ports on the N2348UPQ to the 40G switch ports on the N9332PQ switch
— Break out the 40G FEX ports on the N2348UPQ to 4x10G ports and connect to the N9396PX or N9372PX switches
■ Connecting the APIC (the controller cluster) to the ACI fabric requires a 10G interface on the ACI leaf. You cannot connect the APIC directly to the N9332PQ ACI Leaf.
■ This release supports the following firmware:
— 1.5(4e) CIMC HUU iso
— 2.0(3i) CIMC HUU iso (recommended)
■ The Cisco Application Virtual Switch (AVS) in either VLAN or VXLAN mode is not supported with Layer 4 to Layer 7 service insertion or service chaining. VMware vSphere Distributed Switch (VDS) is the only supported configuration.
■ This release supports the partner packages specified here: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-734587.html
■ This release supports Adaptive Security Appliance (ASA) device package version 1.2.1.2.
■ For information about APIC compatibility with UCS Director, see the appropriate Cisco UCS Director Compatibility Matrix document at the following URL:
This section lists usage guidelines for the APIC software.
■ The APIC GUI supports the following browsers:
— Chrome version 35 (at minimum) on Mac and Windows
— Firefox version 26 (at minimum) on Mac, Linux, and Windows
— Internet Explorer version 11 (at minimum)
— Safari 7.0.3 (at minimum)
Note: Restart your browser after upgrading to release 1.1(1j).
Caution: A known issue exists with the Safari browser and unsigned certificates. Read the information presented here before accepting an unsigned certificate for use with WebSockets.
When you access the HTTPS site, the following message appears:
“Safari can’t verify the identity of the website APIC. The certificate for this website is invalid. You might be connecting to a website that is pretending to be an APIC, which could put your confidential information at risk. Would you like to connect to the website anyway?”
To ensure that WebSockets can connect, you must do the following:
1. Click Show Certificate.
2. Select Always Trust in the three drop-down lists that appear.
If you do not follow these steps, WebSockets will not be able to connect.
■ The APIC GUI includes an online version of the Quick Start guide that includes video demonstrations.
■ The infrastructure IP address range must not overlap with other IP addresses used in the fabric for in-band and out-of-band networks.
■ The APIC does not provide IPAM services for tenant workloads.
■ To reach the APIC CLI from the GUI, choose System > Controllers, highlight a controller, right-click it, and choose Launch SSH. To get the list of commands, press the Escape key twice.
■ In some of the 5-minute statistics data, the count of ten-second samples is 29 instead of 30.
■ For the following services, use a DNS-based host name with out-of-band management connectivity. IP addresses can be used with both in-band and out-of-band management connectivity.
— Syslog server
— Call Home SMTP server
— Tech support export server
— Configuration export server
— Statistics export server
■ In-band management connectivity to the spine switches is possible from any host that is connected to the leaf switches of the Fabric, and leaf switches can be managed from any host that has IP connectivity to the fabric.
■ When configuring an atomic counter policy between two endpoints, if an IP address is learned on either endpoint, use an IP-based policy rather than a client endpoint-based policy.
■ When configuring two Layer 3 external networks on the same node, the loopbacks need to be configured separately for both Layer 3 networks.
■ All endpoint groups (EPGs), including application EPGs and Layer 3 external EPGs, require a domain. Interface policy groups must also be associated with an Attach Entity Profile (AEP), and the AEP must be associated with domains. The ports and VLANs that an EPG uses are validated based on the association of the EPG to domains and of the interface policy groups to domains. This applies to all EPGs, including bridged Layer 2 outside and routed Layer 3 outside EPGs. For more information, see the Cisco Application Centric Infrastructure Fundamentals guide and the KB: Creating Domains, Attach Entity Profiles, and VLANs to Deploy an EPG on a Specific Port article.
Note: In the 1.0(4x) and earlier releases, when creating static paths for application EPGs or Layer 2/Layer 3 outside EPGs, the physical domain was not required. In this release, it is required. Upgrading without the physical domain will raise a fault on the EPG stating “invalid path configuration.”
■ An EPG can only associate with a contract interface in its own tenant.
■ User passwords must meet the following criteria:
— Minimum length is 8 characters
— Maximum length is 64 characters
— Fewer than three consecutive repeated characters
— At least three of the following character types: lowercase, uppercase, digit, symbol
— Cannot be easily guessed
— Cannot be the username or the reverse of the username
— Cannot be any variation of cisco or isco, any permutation of those characters, or any variant obtained by changing the capitalization of the letters
■ The power consumption statistics are not shown on leaf node slot 1.
■ If the communication between the APIC and vCenter is impaired, some functionality is adversely affected. The APIC relies on the pulling of inventory information, updating vDS configuration, and receiving event notifications from the vCenter for performing certain operations.
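The user-password criteria listed above lend themselves to a small pre-check script. The following is a minimal sketch of the documented rules, not the APIC's actual enforcement logic; the "cannot be easily guessed" rule and the full cisco-permutation rule are simplified here.

```python
import re

def check_password(pw, username):
    """Return a list of violated password rules (empty list = acceptable).

    A simplified sketch of the documented criteria; it does not model
    "easily guessed" passwords or every permutation of "cisco".
    """
    problems = []
    if not 8 <= len(pw) <= 64:
        problems.append("length must be 8-64 characters")
    if re.search(r"(.)\1\1", pw):  # three or more of the same char in a row
        problems.append("three or more consecutive repeated characters")
    classes = [any(c.islower() for c in pw),
               any(c.isupper() for c in pw),
               any(c.isdigit() for c in pw),
               any(not c.isalnum() for c in pw)]
    if sum(classes) < 3:
        problems.append("needs at least three character types")
    low = pw.lower()
    if low in (username.lower(), username.lower()[::-1]):
        problems.append("must not be the username or its reverse")
    if "cisco" in low or "isco" in low:  # simplified permutation check
        problems.append("must not contain a variation of cisco")
    return problems
```

For example, `check_password("Passw0rd!", "admin")` passes every documented rule, while a password containing "cisco" or a triple-repeated character is flagged.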
For the verified scalability limits, see the Verified Scalability Guide for this release:
This section lists the new and changed features in Release 1.1(1j) and includes the following topics:
Table 3 lists the new software features in this release:
Table 3 New Software Features, Guidelines, and Restrictions
Feature | Description | Guidelines and Restrictions |
ACI Integration with Microsoft Windows Azure Pack 2.0 | ACI integrates with Microsoft Windows Azure Pack to provide a self-service experience for tenants. The ACI resource provider in Windows Azure Pack drives the APIC for network management. Networks are created in System Center Virtual Machine Manager (SCVMM) and are made available in Windows Azure Pack to the respective tenants. | Windows Azure Pack version UR5 or later is supported. |
ACI Integration with Microsoft System Center VMM 2012 R2 | ACI fabric and SCVMM integration simplifies the networking aspects of the Virtual Machine management process. APIC and Microsoft SCVMM communicate with each other for network management. EPGs are created in APIC and are created as VM networks in SCVMM. Compute is provisioned in SCVMM and can consume these networks. | A single ACI Virtual Switch can be deployed on a Hyper-V Host. |
Atomic Counters Path Mode for Scale Topologies | In scale setups where the number of tunnel endpoints (leafs or vPC leaf pairs) is greater than 64, atomic counters support ongoing statistics between the TEPs instead of per-spine counters. | None |
eBGP | The ACI fabric supports the eBGP protocol in Layer 3 outside for both IPv4 and IPv6. | None |
EIGRP | The ACI fabric supports the EIGRP protocol in Layer 3 outside for IPv4 only. | None |
Gratuitous ARP (GARP) for Endpoint Learning | When a Virtual Machine (VM) moves from one MAC to another behind the same interface, the hardware does not generate a learn notification. This feature enables endpoint learning when a GARP packet is received as a result of a VM move and syncs the move info to the endpoint manager. | ARP forwarding must be in flood mode for this feature. ARP unicast mode will not work. The feature can be controlled in the bridge domain configuration. |
Host vPC FEX | The Cisco ACI fabric supports Cisco Fabric Extender (FEX) server side virtual port channels (VPC), also known as FEX straight-through VPC. | None |
IPv6 Forwarding and Routing | The Cisco ACI fabric supports IPv6 for tenant addressing, contracts, shared services, routing, Layer 4 to Layer 7 services, and troubleshooting. The following features are supported: ■ Atomic Counters - Provides support for IPv6 sources and destinations, but configuring source and destination IP addresses across IPv4 and IPv6 addresses is not allowed. ■ Contract filters based on the IP protocol type apply to both IPv4 and IPv6 endpoints. Filters can also be defined for specific ICMPv6 message types. ■ DHCPv6 - Provides support for DHCPv6 relay. ■ Faults and Events. ■ iPing6 - Provides support for ping to IPv6 addresses from the switch CLI. ■ IPv6 support for SVI for bridge domain subnets. Note: This feature does not provide IPv6 support on ACI fabric management interfaces (in-band and out-of-band) or IPv6 tunnel interfaces, such as ISATAP and 6to4. ■ L2/L3 outside (BGP, OSPFv3, static) - Provides IPv6 support for outside network external interfaces (routed, sub-interface, and SVI) and route (static route, OSPFv3, iBGP, and eBGP). ■ Neighbor Discovery policy – Provides the following: — NS/NA parameter configurable on a per-Bridge Domain (BD) basis. Configurable on SVI for BD subnet and external interface (routed, sub-interface, and SVI). — RS/RA parameter configurable on a per-subnet basis. Configurable on SVI for BD subnet. ■ Policy, Shared services - Provides support for contracts between endpoint groups including IPv6 enabled endpoints. ■ SPAN support for IPv6 traffic. Note: Although SPAN supports IPv6 traffic, the destination IP for the ERSPAN cannot be IPv6. ■ Traceroute - Provides support for traceroute to IPv6 addresses from the switch CLI. | ■ Neighbor Discovery (ND) unicast mode is not supported. ■ IPv6 MLD Snooping is not supported. ■ You cannot configure both IPv4 and IPv6 addresses on a single physical interface. |
Microsegmentation and Distributed Firewall with AVS | ■ Microsegmentation with the Cisco Application Virtual Switch (AVS) provides the ability to automatically assign endpoints to logical security zones called endpoint groups (EPGs) based on various attributes. Microsegmentation with Cisco AVS is a new feature in Cisco AVS Release 5.2(1)SV3(1.5). This feature is available in the Cisco Application Centric Infrastructure (ACI) for Cisco AVS only; it is not available with VMware DVS. Microsegmentation policies used by the Cisco AVS are centrally managed by the Cisco Application Policy Infrastructure Controller (APIC) and enforced by the fabric. ■ Distributed Firewall is a hardware-assisted firewall that supplements, but does not replace, other security features in the Cisco Application Centric Infrastructure (ACI) fabric, such as the Cisco Adaptive Security Virtual Appliance (ASAv) or secure zones created by Microsegmentation with the Cisco Application Virtual Switch (AVS). Part of Cisco AVS, the Distributed Firewall resides in the ESXi (hypervisor) kernel and is in learning mode by default. No additional software is required for the Distributed Firewall to work. However, you must configure policies in the Cisco Application Policy Infrastructure Controller (APIC) to work with the Distributed Firewall. The Distributed Firewall is supported on all Virtual Ethernet (vEth) ports but is disabled for all system ports (Virtual Extensible LAN [VXLAN] tunnel endpoint [VTEP] and vmkernel ports) and for all uplink ports. Distributed Firewall flows are limited to 10,000 per endpoint and 250,000 per Cisco AVS host. | Distributed Firewall – If you use the configuration wizard to create interface, switch, and vCenter domain profiles when you install Cisco AVS, Cisco APIC automatically configures a firewall policy. You should leave the Firewall checkbox in the wizard unchecked to enable Distributed Firewall. 
The default mode is Learning, which is used only when installing or upgrading to the new Cisco AVS release. If you did not use the configuration wizard when installing Cisco AVS, you need to create a Distributed Firewall policy.
|
Miscabling Protocol - MCP | The ACI fabric provides loop detection policies that can detect loops in Layer 2 network segments that are connected to ACI access ports. | None |
New EPG Resolution Mode: Pre-provisioned | The EPG resolution mode determines when an EPG is downloaded to the leaf switches in the ACI fabric. If the EPG resolution mode is set to "pre-provision", the EPG is downloaded to all of the leaf switches and interfaces associated with the VMM domain at configuration time. This resolution mode is recommended for critical EPGs that represent port-groups of vmkernel ports. | None |
OSPF | The ACI fabric supports NSSA, regular, and backbone for OSPF. | None |
Per BD Multicast/Broadcast Packet Behavior Knob | Currently, all broadcast and multicast packets in a BD are flooded within the fabric. Since there is no way to apply contracts to these packets, there is no way to restrict these packets within the same EPG or to drop the packets. To provide more granularity, a new knob has been added for these packet types where you can control the behavior of these packets per BD by configuring one of the three options: ■ bd-flood - Flood packet in the BD (default mode) ■ encap-flood - Flood packet in the encap ■ drop - Drop the packet | None |
Per-port VLAN Significance | This feature allows the configuration of the same VLAN ID across different EPGs (on different BDs) on different ports of the same leaf switch. In essence, a user can now configure the same VLAN ID on every port on the same switch. Before this feature, a particular VLAN could only be part of one EPG on a leaf. This feature is enabled by setting the VLAN scope to port local scope in the Layer 2 interface profile. | ■ ACI cannot flush out the MAC address if there is a topology change with MSTP. ■ MAC addresses cannot be synced unless the same VLAN encapsulation is used across a vPC pair. ■ Multiple EPGs belonging to the same BD cannot use the same VLAN encapsulation. |
Route Peering with Service Appliances | Route peering is a special case of the more generic Cisco ACI fabric as a transit use case, in which route peering enables the ACI fabric to serve as a transit domain for Open Shortest Path First (OSPF) or Border Gateway Protocol (BGP) protocols. A common use case for route peering is route health injection, in which the server load balancing virtual IP is advertised over OSPF or internal BGP (iBGP) to clients that are outside of the ACI fabric. You can use route peering to configure OSPF or BGP peering on a service device so that the device can peer and exchange routes with the ACI leaf node to which it is connected. The following protocols are supported for route peering: ■ iBGPv4 ■ iBGPv6 ■ OSPF ■ OSPFv3 | None |
Static VMM Encap Mode | Static VMM encap mode enables you to statically allocate the VLAN encapsulation for a VMM domain EPG. | Using a static VLAN pool or a static VLAN block within an already existing pool is required for the VLANs defined in this feature. |
Transit routing | The ACI fabric supports transit routing, including the necessary EIGRP, eBGP, and OSPF protocol support, which enables border routers to perform bi-directional redistribution with other routing domains. The following protocols are supported: ■ eBGP ■ EIGRP ■ iBGP ■ OSPF ■ Static-Route ■ OSPFv3 ■ iBGPv6 ■ eBGPv6 | None |
Troubleshooting Wizard | The Troubleshooting Wizard is a suite of tools added to the APIC to monitor, manage, and troubleshoot Layer 1 to Layer 7 issues in one pane of glass. The tools included are: ■ Atomic Counters ■ Audit Logs ■ iTraceroute ■ SPAN ■ Statistics | ■ When the topology has a firewall in the graph, the Troubleshooting Wizard assumes that the source endpoint or external IP is in the external/client network and that the destination is in the internal/server network. ■ Either the source endpoint or the destination endpoint must be natively learned within a tenant endpoint group. ■ The syslog level must be set to "debugging" for contract deny logs to be shown. ■ Irrespective of the time window, the topology, faults, traceroute, atomic counter, and SPAN reflect the current state of the fabric. ■ All session information, including reports and SPAN capture files, is deleted when the session is removed. ■ Some of the operations may take up to two minutes in large configurations. ■ Endpoints behind a Layer 2 external network are not supported. ■ External-to-EP traceroute is not supported. ■ IPv6 traceroute only generates internal path results, and its "protocol"/"destination port" configuration options are not effective. |
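As an illustration of the per-BD multicast/broadcast knob described in Table 3, the sketch below builds a bridge-domain XML payload selecting one of the three documented values (bd-flood, encap-flood, or drop). The attribute name multiDstPktAct is an assumption about the fvBD object; confirm it against the APIC object model for your release.

```python
import xml.etree.ElementTree as ET

VALID_MODES = ("bd-flood", "encap-flood", "drop")  # the three documented options

def bd_multicast_payload(bd_name, mode):
    """Build a bridge-domain payload selecting one of the three knob values.

    multiDstPktAct is an assumed attribute name on fvBD; verify it in the
    APIC Management Information Model reference before posting.
    """
    if mode not in VALID_MODES:
        raise ValueError("mode must be one of {}".format(VALID_MODES))
    bd = ET.Element("fvBD", {"name": bd_name, "multiDstPktAct": mode})
    return ET.tostring(bd, encoding="unicode")
```

For example, `bd_multicast_payload("bd1", "encap-flood")` yields an fvBD element that floods multi-destination packets only within the encap rather than the whole BD.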
This release supports the following new hardware:
■ APIC-L2 – APIC for large configurations with over 1000 leaf ports
■ APIC-M2 – APIC for medium configurations up to 1000 leaf ports
■ APIC-CLUSTER-L2 – APIC cluster for large configurations with over 1000 leaf ports
■ APIC-CLUSTER-M2 – APIC cluster for medium configurations up to 1000 leaf ports
■ Cisco Nexus 2348UPQ (N2K-C2348UPQ) FEX module
■ Cisco Nexus 9516 spine switch (N9K-C9516) with 16 linecard slots (slots 1-8 only)
This section contains lists of open and resolved caveats and known behaviors.
Table 4 lists the open caveats in the Cisco APIC Release 1.1(1j). Click the bug ID to access the Bug Search tool and see additional information about the bug.
Table 4 Open Caveats in Cisco APIC Release 1.1(1j)
Bug ID | Description |
The switch disappears for several minutes from topology, firmware, and maintenance policies while being upgraded. | |
The APIC appliance sees a crash in the DMEs while getting a replication transaction, or when a configuration is missing on the APIC that was introduced with a different version. | |
In Microsoft SCVMM, if a VM network is already attached and used by virtual machines, and if an admin changes the VLAN number of this VM network on SCVMM, the virtual machine VLAN information is not automatically updated on Hyper-V Host virtual machines. | |
VTEP tunnels for VXLAN load balancing might go missing and lead to traffic drop when OpFlex times out due to the stress load on rebooting a couple of hosts with a few hundred vEths. | |
If a user selects a custom time range while keeping the category field to be "system info" in the GUI for the techsupport policy, the techsupport files are not exported. | |
Expired user authentication certificates cannot be deleted. | |
Only admin users can access the "Visibility Tool" (Troubleshooting Wizard). Role-admin users cannot access the tool. |
Table 5 lists the resolved caveats in the Cisco APIC Release 1.1(1j). Click the bug ID to access the Bug Search tool and see additional information about the bug.
Table 5 Resolved Caveats in Cisco APIC Release 1.1(1j)
Bug ID | Description |
When attempting to log into an LDAP provider configured in Strict SSL mode, and the system is not configured with the CA certificate for that LDAP SSL server, the nginx daemon will gracefully restart itself to attempt to work around an openldap library SSL certificate caching bug. | |
During a policy upgrade of the APIC controller, some APICs fail to reboot after the upgrade process has completed. | |
Policy elements crash on the leaf after deleting an infrastructure configuration such as bundle interface groups, selectors, or VLAN/VXLAN namespaces. | |
On large scale setups, some login requests are taking more than 30 seconds. | |
When a Layer 2 or Layer 3 external instance profile is specified as a provider to a contract, and a collection of endpoint groups within a context is specified as the consumer, the provider will be skipped. This can result in the graph not getting deployed. | |
Fault F0467 is raised due to an invalid VLAN configuration (rule: fv-nw-issues-config-failed). | |
There is no infra VLAN connectivity to the leaf in the simulator from the ESX (host) added to the AVS. | |
The APIC controller fan stats collection does not display the speed/PWM data regardless of the interval chosen. | |
Interface policies are not applied to all ports if the interface policy group is associated with a policy before the policy is created. | |
The help screen for the endpoint retention policy has incorrect default values. This is cosmetic. The help screen should also note that setting the bounce entry aging interval to a value less than the remote endpoint aging interval will create a traffic loss situation when hosts migrate between leaf switches. | |
Under high scale circumstances, a new leaf may not get all contracts downloaded. | |
Traffic between application endpoint groups and external Layer 3 networks on different leafs is dropped if multiple external Layer 3 networks are configured in the same context. | |
The endpoint attachment notification: ■ Does not always result in an attachment notification ■ Is only sent to a subset of deployed service graph instances ■ Is sent to the incorrect service graph instance | |
The zoning-rule is missing after deleting or adding 0/0 and/or a particular subnet present in a Layer 3 external network. | |
When a Layer 3 external network is configured with DSCP marking, it is not copied to a filter rule on the node. | |
When DSCP marking is configured on an external endpoint group, it is not copied to the filter rule on the node. | |
Leaf switches drop Link Layer Discovery Protocol (LLDP) packets that have the unicast destination MAC address. | |
The web server uses ciphers that use SHA1. Web browsers, such as Chrome, report a warning indicating that SHA1 is obsolete and that a stronger hashing algorithm should be used (as shown in the attached enclosure). | 
The online help page is blank when the "i" symbol is selected for the subnet entry for the tenant created external instance profile. | |
After an endpoint is learned, and a graph is removed and reattached, the endpoint will not be added to the L4-L7 device configuration. | |
In the 1.0(4h) release, support for filtering techsupport information for a specified time-window and category was added to generate a techsupport file with a reduced size. If the time-window and/or category filters are set in the GUI for an on-demand techsupport policy named tsod-ts_exp_pol, and then if the techsupport collection is triggered through CLI, the filters still get applied although no filters are specified in the CLI command. | |
A fault is raised indicating that an image downloaded into the repository is bad, although the image is actually good. | |
Tech support filtering based on category can miss some files when triggered from the GUI. |
Table 6 lists caveats that describe known behaviors in the Cisco APIC Release 1.1(1j). Click the Bug ID to access the Bug Search Tool and see additional information about the bug.
Table 6 Known Behaviors in Cisco APIC Release 1.1(1j)
Bug ID | Description |
The APIC does not validate duplicate IPs assigned to two device clusters. The communication to devices or the configuration of service devices might be affected. | |
In some of the 5-minute statistics data, the count of ten-second samples is 29 instead of 30. | |
The node ID policy can be replicated from an old appliance that is decommissioned when it joins a cluster. | |
The DSCP value specified on an external endpoint group does not take effect on filter rules on the leaf switch. | |
The hostname resolution of the syslog server fails on leaf and spine switches over in-band connectivity. | |
After importing an exported configuration, graph instances are not created and L4-L7 packages are missing in the system. | |
Following a FEX or switch reload, configured interface tags are no longer configured correctly. | |
Switches could get downgraded to a 1.0(1x) version if the imported configuration consists of a firmware policy with a desired version set to 1.0(1x). | |
Some reported client endpoints are not present on the APIC during an upgrade. | |
The APIC is rebooted using the CIMC power reboot. On reboot, the system enters into fsck due to a corrupted disk. | |
The Cisco APIC Service (ApicVMMService) shows as stopped in the Microsoft Service Manager (services.msc in control panel > admin tools > services) after valid domain credentials are entered during installation or configuration of the service. | |
The traffic destined to a shared service provider endpoint group picks an incorrect class Id (PcTag) and gets dropped. | |
Traffic from an external layer 3 network is allowed when configured as part of a vzAny (a collection of endpoint groups within a context) consumer. | |
The microsegment endpoint group is in the incorrect state after downgrading. | |
Downgrading the fabric starting with the leaf will cause faults such as policy-deployment-failed with fault code F1371. | |
The OpenStack metadata feature cannot be used with ACI integration with the Juno release (or earlier) of OpenStack due to limitations with both OpenStack and Cisco’s ML2 driver. |
The Cisco Application Policy Infrastructure Controller (APIC) documentation can be accessed from this website:
This section lists the new Cisco APIC product documents for this release.
■ ACI Virtualization Guide
■ Developing L4-L7 Device Packages, Release 1.1(1j)
■ Deploying L4-L7 Services, Release 1.1(1j)
■ Nexus 9516 ACI-Mode Switch Hardware Installation Guide
■ Verified Scalability Guide for Cisco ACI, Release 1.1(1) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 11.1(1)
■ KB: Configuring IPv6 Neighbor Discovery
■ KB: Configuring BGP External Routed Network in APIC
■ KB: Creating Domains, Attach Entity Profiles, and VLANs to Deploy an EPG on a Specific Port
■ KB: Transit Routing
■ Video: Configuring EIGRP Using the GUI
■ Video: Configuring Microsegmentation with Cisco AVS Using VM-Based Attributes
■ Video: Creating the Tenant, Private Network, and Bridge Domain with IPv6 Neighbor Discovery Using the GUI
■ Video: Installing Cisco AVS with Cisco VSUM (Part 1: Configuring APIC Settings Before the Installation)
■ Video: Installing Cisco AVS with Cisco VSUM (Part 2: Installing Cisco AVS and Verifying the Installation)
■ Video: SCVMM Creating a Tenant
■ Video: SCVMM Creating an EPG
■ Video: SCVMM Associating the EPGs with Microsoft VMM Domain
■ Video: SCVMM Creating SCVMM Domain Profile
■ Workflow: Cisco ACI with Microsoft SCVMM
■ Workflow: Cisco ACI with Microsoft Windows Azure Pack
■ Open Source Used In APIC 1.1(1)
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
© 2015-2018 Cisco Systems, Inc. All rights reserved.