The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.
Cisco Multi-Site is an architecture that allows you to interconnect separate Cisco APIC, Cloud Network Controller (formerly known as Cloud APIC), and Cisco NDFC (formerly known as DCNM) domains (fabrics), each representing a different region. This helps ensure multitenant Layer 2 and Layer 3 network connectivity across sites and extends the policy domain end-to-end across the entire system.
Cisco Nexus Dashboard Orchestrator is the intersite policy manager. It provides single-pane management that enables you to monitor the health of all the interconnected sites. It also allows you to centrally define the intersite configurations and policies that can then be pushed to the different Cisco APIC, Cloud Network Controller, or NDFC (DCNM) fabrics, which in turn deploy them in those fabrics. This provides a high degree of control over when and where to deploy the configurations.
This document describes the features, issues, and limitations for this release of Nexus Dashboard Orchestrator. For more information, see the “Related Content” section at the end of this document.
Date | Description
May 10, 2024 | Additional resolved issue CSCwi55341.
April 08, 2024 | Additional open issue CSCwi76522.
March 21, 2024 | Additional resolved issue CSCwi65902.
March 13, 2024 | Additional known issue CSCwi35916.
March 07, 2024 | Release 4.3(1) became available.
This release adds the following new features:
Product Impact | Feature | Description
Base Functionality | 'Common' Tenant Support for Hybrid Cloud Environments | You can now stretch the 'common' tenant between an on-premises ACI site and a Cloud Network Controller site.
Base Functionality | Monitoring template "L3Out source filter" capability | You can now filter source packets based on the EPG or L3Out from which they are coming.
Base Functionality | EPG to physical domain association and path binding selection | You can now associate EPGs with physical domains and path bindings defined in the fabric policy and fabric resource templates.
Base Functionality | Description field for Interface Policy Group (IPG) | You can now provide a description for Interface Policy Group (IPG) objects.
Interoperability | vSphere 8.0 Support | Nexus Dashboard clusters can now be deployed in VMware vSphere 8.0.
Reliability | Prevent template editor from self-approval | Template editors can no longer approve their own changes.
Ease of Use | UI Navigation Improvements | This release adds product GUI improvements, including main navigation bar changes for consistency across the platform and services.
Ease of Use | PATCH API Support for L3Out Templates | You can now use the PATCH API with L3Out templates.
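The PATCH API support for L3Out templates allows partial updates without re-sending the full template payload. The following is a minimal sketch of such a call; the endpoint path, operation format, and property names shown here are assumptions for illustration only, so consult the Nexus Dashboard Orchestrator API documentation for the actual L3Out template PATCH contract.

```python
# Hypothetical sketch of a PATCH call against an NDO L3Out template.
# The endpoint path, payload shape, and property path are assumptions;
# see the NDO API documentation for the real contract.
import requests

NDO_HOST = "https://ndo.example.com"      # assumed Nexus Dashboard address
TOKEN = "<login-token>"                   # token obtained from the NDO login API
TEMPLATE_ID = "<l3out-template-id>"       # ID of an existing L3Out template

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

# JSON Patch-style operation that updates a single property instead of
# re-sending the entire template document.
patch_body = [
    {
        "op": "replace",
        "path": "/l3outTemplate/l3outs/0/description",   # assumed property path
        "value": "Updated via the PATCH API",
    }
]

resp = requests.patch(
    f"{NDO_HOST}/mso/api/v1/templates/{TEMPLATE_ID}",    # assumed endpoint
    json=patch_body,
    headers=HEADERS,
    verify=False,   # lab-only: skip TLS verification for self-signed certificates
)
resp.raise_for_status()
print(resp.status_code)
```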
There is no new hardware supported in this release.
● Beginning with Nexus Dashboard release 3.1(1), all services have been unified into a single deployment image.
You no longer need to download, install, and enable each service individually. Instead, you can simply choose which services to enable during the Nexus Dashboard platform deployment process. As a result, we recommend deploying Nexus Dashboard release 3.1(1) with unified install for all new installations.
Upgrading to this release will also automatically upgrade all services in your existing cluster.
● If you upgrade to this release from a release prior to 4.0(1) and have template versioning enabled, only the latest versions of the templates are preserved during the upgrade.
All other existing versions of templates, including older versions that are tagged Golden, will not be transferred during the upgrade.
● If you upgrade to this release from a release prior to 4.0(1), existing schemas’ IDs may change.
If you are using any API automation that relies on static schema IDs, we recommend dynamically obtaining the IDs before executing any action against the schemas (see the sketch after this list).
● Beginning with Nexus Dashboard release 3.1(1) and Orchestrator release 4.3(1), the PATCH API is supported for L3Out templates.
See the Nexus Dashboard Orchestrator API documentation for the full API changelog.
● Beginning with Release 4.0(1), the “Application Profiles per Schema” scale limit has been removed.
For the full list of maximum verified scale limits, see the Nexus Dashboard Orchestrator Verified Scalability Guide.
● Beginning with Release 4.0(1), if you have route leaking configured for a VRF, you must delete those configurations before you delete the VRF or undeploy the template containing that VRF.
● Beginning with Release 4.0(1), if you are configuring EPG Preferred Group (PG), you must explicitly enable PG on the VRF.
In prior releases, enabling PG on an EPG automatically enabled the configuration on the associated VRF. For detailed information on configuring PG in Nexus Dashboard Orchestrator, see the “EPG Preferred Group” chapter of the Cisco Nexus Dashboard Orchestrator Configuration Guide for ACI Fabrics.
● Note that CloudSec encryption for intersite traffic will be deprecated in a future release.
For your ACI Multi-Site deployments, we recommend that you do not enable this feature if it is currently disabled, and that you disable it if it is currently enabled.
● Downgrading from this release is not supported.
We recommend creating a full backup of the configuration before upgrading, so that if you ever want to downgrade, you can deploy a brand-new cluster using an earlier version and then restore your configuration in it.
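As noted above for API automation that relies on schema IDs, the sketch below shows one way to resolve a schema's current ID by display name at run time instead of hard-coding it. The listing endpoint and response field names ("schemas", "displayName", "id") are assumptions based on typical NDO API responses; verify them against the Nexus Dashboard Orchestrator API documentation for your release.

```python
# Sketch: resolve a schema ID dynamically instead of relying on a stored ID.
# Endpoint and field names are assumptions; check the NDO API documentation.
import requests

NDO_HOST = "https://ndo.example.com"   # assumed Nexus Dashboard address
TOKEN = "<login-token>"                # token obtained from the NDO login API
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def get_schema_id(name: str) -> str:
    """Return the current ID of the schema whose display name matches `name`."""
    resp = requests.get(f"{NDO_HOST}/mso/api/v1/schemas", headers=HEADERS, verify=False)
    resp.raise_for_status()
    for schema in resp.json().get("schemas", []):
        if schema.get("displayName") == name:
            return schema["id"]
    raise ValueError(f"Schema {name!r} not found")


# Resolve the ID immediately before acting on the schema rather than storing it.
schema_id = get_schema_id("MySchema")
print(schema_id)
```

Resolving the ID at run time keeps the automation working even if schema IDs change across upgrades.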
This section lists the open issues. Click the bug ID to access the Bug Search Tool and see additional information about the bug. The "Exists In" column of the table lists the specific releases in which the bug exists.
Bug ID |
Description |
Exists in |
When service graphs or devices are created on Cloud APIC by using the API and custom names are specified for AbsTermNodeProv and AbsTermNodeCons, a brownfield import to the Nexus Dashboard Orchestrator will fail. |
4.3(1) and later |
|
A contract is not created between the shadow EPG and the on-premises EPG when a shared service is configured between tenants. |
4.3(1) and later |
|
Inter-site shared service between VRF instances across different tenants will not work, unless the tenant is stretched explicitly to the cloud site with the correct provider credentials. That is, there will be no implicit tenant stretch by Nexus Dashboard Orchestrator. |
4.3(1) and later |
|
The deployment window may not show all the cloud-related configuration values that have been modified. |
4.3(1) and later |
|
After a brownfield import, the BD subnets are present in the site-local configuration and not in the common template configuration. |
4.3(1) and later |
|
In a shared services use case, if one VRF has preferred-group-enabled EPGs and another VRF has vzAny contracts, traffic drops are seen. |
4.3(1) and later |
|
The REST API call "/api/v1/execute/schema/5e43523f1100007b012b0fcd/template/Template_11?undeploy=all" can fail if the template being deployed has a large object count. |
4.3(1) and later |
|
Shared service traffic is dropped from an external EPG to an EPG when the EPG is the provider and the L3Out vzAny is the consumer. |
4.3(1) and later |
|
Two cloud sites (with private IPs for CSRs) that have the same InfraVNETPool can be added to NDO without any InfraVNETPool validation. |
4.3(1) and later |
|
Multiple peering connections are created for two sets of cloud sites. |
4.3(1) and later |
|
A route leak configuration for an invalid subnet may be accepted when an internal VRF is the hosted VRF. A fault is raised in cAPIC. |
4.3(1) and later |
|
The username and password are not set properly in the proxy configuration, so a component in the container cannot connect to any site. In addition, the external module pyaci does not handle the WebSocket configuration properly when a username and password are provided for the proxy configuration. |
4.3(1) and later |
|
Implicit filters and contracts are not updated when the original policies are modified. Small property changes in the policies do not update the implicit objects. |
4.3(1) and later |
|
For a BD in NDO schema, only the linked L3Out name is populated, and the BD's L3Out reference field is empty even though the L3Out is managed by NDO. This behavior can be observed in the Reconcile Drift UI where the BD's L3Out reference is missing in the NDO schema tab, and only the name is displayed. |
4.3(1) and later |
|
If a stretched external EPG is associated to a shadow L3Out during an upgrade, drift reconciliation does not detect the L3Out or the external EPG. |
4.3(1) and later |
|
The issue occurs in the following scenario: (1) deploy version 1 of a template, (2) modify the template to create version 2, (3) undeploy the template without first deploying version 2. The undeployment is performed on version 1, but the UI displays the data from version 2. |
4.3(1) and later |
|
Unable to deploy Fabric Resource Policy template with VPCI after modifying Node 1 and Node 2. |
4.3(1) and later |
|
When trying to do a preview deployment on a configuration with VRF->BD or BD->EPG references, the referenced object is not seen in the preview deploy screen. |
4.3(1) and later |
|
When a new EPG that uses a VRF from the "common" tenant is added for a shared service use case, traffic from this EPG does not reach the other EPG. |
4.3(1) and later |
|
Template deployment fails with the following error message: "...bulk write exception: write errors: [E11000 duplicate key error collection: ...." |
4.3(1) and later |
|
Extra contract relationships are seen in shadow objects when the parent EPG consumes or provides multiple contracts. |
4.3(1) and later |
|
After migrating an object (such as a BD) from a site-local template to a stretched template and then adding a new object to that site-local template, deploying the site-local template may result in the following error: "Template deployment failed: this is a stretch object migration case. Please deploy the target template ZZZZ in schema XXXXX first." |
4.3(1) and later |
This section lists the resolved issues. Click the bug ID to access the Bug Search tool and see additional information about the issue. The "Fixed In" column of the table specifies whether the bug was resolved in the base release or a patch release.
Bug ID |
Description |
Fixed in |
When configuring a site-specific subnet on a non-Layer 2 stretched BD, only one subnet can be set as primary. When you attempt to deploy the template, an error occurs stating that only one preferred subnet per address family is allowed under the BD. |
4.3(1) |
|
There is a false Config-Drift notification on NDO for all Fabric Resource Policy objects. Deploying the Fabric Resource Policy template removes the Config-Drift notification, but after two days the notification reappears for the Fabric Resource Policy template even though nothing was changed on the APIC. |
4.3(1) |
This section lists known behaviors. Click the Bug ID to access the Bug Search Tool and see additional information about the issue.
Bug ID |
Description |
NDO does not update or delete a VRF vzAny configuration that was created directly on APIC, even though the VRF is managed by NDO. |
|
Unable to download the Nexus Dashboard Orchestrator report and debug logs when database and server logs are selected. |
|
For hybrid cloud deployments, no validation is available for shared services scenarios. |
|
If an infra L3Out that is being managed by Cisco Multi-Site is modified locally in a Cisco APIC, Cisco Multi-Site might delete the objects not managed by Cisco Multi-Site in the Infra L3Out. |
|
"Phone Number" field is required in all releases prior to Release 2.2(1). Users with no phone number specified in Release 2.2(1) or later will not be able to log in to the GUI when Orchestrator is downgraded to an earlier release. |
|
Routes are not programmed on the CSR and the contract configuration is not pushed to the cloud site. |
|
Shadow of cloud VRF may be unexpectedly created or deleted on the on-premises site. |
|
Consider an APIC that has EPGs with contract relationships. If such an EPG and its relationships are imported into NDO, and the relationship is then removed in NDO and the change is deployed to APIC, NDO does not delete the contract relationship on the APIC. |
|
When creating VRFs in the infra tenant on a Google Cloud site, you may see them classified as internal VRFs in NDO. If you then import these VRFs into NDO, the allowed route-leak configuration is determined based on whether the VRF is used for external connectivity (external VRF) or not (internal VRF). This is because on cAPIC, VRFs in the infra tenant can fall into three categories: internal, external, and undecided. For simplicity, NDO treats infra tenant VRFs as two categories: internal and external. No use case is impacted by this behavior. |
|
Removing connectivity between two sites or changing the protocol used between them is not allowed. |
|
A template moves to the approved state even when the number of approvals is fewer than the required number of approvers. |
|
After a site is re-registered, NDO may have connectivity issues with APIC or cAPIC. |
|
If cloud sites have EVPN-based connectivity with another cloud or on-premises site, then contract-based routing must be enabled for intersite traffic to work. |
|
When APIC-owned L3Outs are deleted manually on APIC by the user, the stretched and shadow InstPs belonging to those L3Outs are deleted as expected. However, when deploying the template from NDO, only the stretched InstPs detected in the config drift are deployed. |
|
NSG rules on a cloud EPG are removed right after a service graph is applied between the cloud EPG and an on-premises EPG, which breaks communication between the cloud and on-premises sites. |
|
Existing IPsec tunnel state may be affected after the connectivity configuration with an external device is updated. |
|
You cannot withdraw the hub network from a region if intersite connectivity is deployed. |
|
BGP sessions from a Google Cloud site to an AWS or Azure site may be down because the CSRs are configured with an incorrect ASN. |
|
APIC has GOTO and GOTHROUGH options when configuring an L3 device, but NDO intentionally does not expose the GOTHROUGH option. Only the GOTO option is supported. |
|
You may be unable to deploy a template with VPCI after modifying Node 1 and Node 2. NDO does not delete a vPC peer group on APIC because the group may be shared by other vPCs that are not managed by NDO, and removing it could cause configuration issues. |
|
After an upgrade to NDO release 4.2(1) or later, the Orchestrator raises configuration drifts, associated with the Service Device and Service Graph configuration objects, that are not automatically reconciled. |
This release supports the hardware listed in the “Prerequisites” section of the Cisco Nexus Dashboard Orchestrator Deployment Guide.
This release supports Nexus Dashboard Orchestrator deployments in Cisco Nexus Dashboard only.
Cisco Nexus Dashboard Orchestrator can be cohosted with other services in the same cluster. For cluster sizing guidelines, see the Nexus Dashboard Cluster Sizing tool.
Cisco Nexus Dashboard Orchestrator can manage fabrics that run a variety of controller versions. For fabric compatibility information, see the Nexus Dashboard and Services Compatibility Matrix.
For Nexus Dashboard Orchestrator verified scalability limits, see the Cisco Nexus Dashboard Orchestrator Verified Scalability Guide.
For Cisco ACI fabrics verified scalability limits, see the Cisco ACI Verified Scalability Guides.
For verified scalability limits of Cisco Cloud ACI fabrics running releases 25.0(1) and later, see the Cisco Cloud Network Controller Verified Scalability Guides.
For Cisco NDFC (DCNM) fabrics verified scalability limits, see the Cisco NDFC (DCNM) Verified Scalability Guides.
For ACI fabrics, see the Cisco Application Policy Infrastructure Controller (APIC) documentation page. On that page, you can use the "Choose a topic" and "Choose a document type" fields to narrow down the displayed documentation list and find a specific document.
For Cloud Network Controller fabrics, see the Cisco Cloud Network Controller documentation page.
For NDFC (DCNM) fabrics, see the Cisco Nexus Dashboard Fabric Controller documentation page.
The following table describes the core Nexus Dashboard Orchestrator documentation.
Document | Description
Cisco Nexus Dashboard Orchestrator Release Notes | Provides release information for the Cisco Nexus Dashboard Orchestrator product.
Nexus Dashboard Cluster Sizing tool | Provides cluster sizing guidelines based on the type and number of services you plan to run in your Nexus Dashboard as well as the target fabrics' sizes.
Nexus Dashboard and Services Compatibility Matrix | Provides Cisco Nexus Dashboard and Services compatibility information for specific Cisco Nexus Dashboard, services, and fabric versions.
Cisco Nexus Dashboard Orchestrator Deployment Guide | Describes how to install Cisco Nexus Dashboard Orchestrator and perform day-0 operations.
Cisco Nexus Dashboard Orchestrator Configuration Guide for ACI Fabrics | Describes Cisco Nexus Dashboard Orchestrator configuration options and procedures for fabrics managed by Cisco APIC.
Cisco Nexus Dashboard Orchestrator Use Cases for Cloud Network Controller | A series of documents that describe Cisco Nexus Dashboard Orchestrator configuration options and procedures for fabrics managed by Cisco Cloud Network Controller.
Cisco Nexus Dashboard Orchestrator Configuration Guide for NDFC (DCNM) Fabrics | Describes Cisco Nexus Dashboard Orchestrator configuration options and procedures for fabrics managed by Cisco DCNM.
Cisco Nexus Dashboard Orchestrator Verified Scalability Guide | Contains the maximum verified scalability limits for this release of Cisco Nexus Dashboard Orchestrator.
Cisco ACI Verified Scalability Guides | Contains the maximum verified scalability limits for Cisco ACI fabrics.
Cisco Cloud Network Controller Verified Scalability Guides | Contains the maximum verified scalability limits for Cisco Cloud ACI fabrics.
Cisco NDFC (DCNM) Verified Scalability Guides | Contains the maximum verified scalability limits for Cisco NDFC (DCNM) fabrics.
Cisco Nexus Dashboard Orchestrator Videos | Contains videos that demonstrate how to perform specific tasks in the Cisco Nexus Dashboard Orchestrator.
To provide technical feedback on this document, or to report an error or omission, send your comments to apic-docfeedback@cisco.com. We appreciate your feedback.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: http://www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
© 2023 Cisco Systems, Inc. All rights reserved.