Integrating with DCNM

This section includes the following topics:

  • DCNM Integration Overview
  • Terminology
  • Networks
  • Network Roles
  • Roles and Privileges
  • Configuring Connectivity with DCNM
  • Troubleshooting Integration Issues

DCNM Integration Overview

Prime Network Services Controller supports integration with Cisco Data Center Network Manager (DCNM). As part of this integration, Prime Network Services Controller provides the automation of virtual network services in Cisco Dynamic Fabric Automation (DFA). In the Cisco DFA solution, services such as firewalls and load balancers are deployed at leaf nodes within the spine-leaf topology and at border leaf nodes, in contrast to more traditional data centers, where these services are deployed at the aggregation layer.

The following table describes the primary items in the integration with DCNM:

Item Description

Prime Network Services Controller

Provides central management of network services in a multi-tenant environment.

DCNM

  • Provides the setup, visualization, management, and monitoring of the data center infrastructure.

  • Provides configuration and image management for the fabric.

Dynamic Fabric Automation (DFA) cluster

Provides a simplified spine-leaf architecture, enhanced forwarding, and distributed control plane.

Adapter

  • Links Prime Network Services Controller with DCNM.

  • Enables DCNM to interoperate with one or more Prime Network Services Controller instances.

  • Maps tenants and virtual data centers to the Prime Network Services Controller instances responsible for their network services.

  • Listens for network database updates and communicates those updates to the appropriate Prime Network Services Controller instance.

  • Upon notification of a new network service in a tenant network, notifies DCNM of the change.

Prime Network Services Controller provides centralized management of network services by supporting the following actions:

  • The creation, reading, updating, and deletion of vPath-based service chains.

  • The creation, updating, and deletion of network services.

  • The communication of network service changes to the Adapter.

The Prime Network Services Controller GUI reflects this support by displaying information for the networks and subnetworks associated with a tenant, and for the network services deployed in a tenant's network.

Terminology

The following table identifies the corresponding terms in Prime Network Services Controller and DCNM:

Prime Network Services Controller Name / DCNM Name / Description

Tenant / Organization

A collection of VDCs for tenant-level separation of resources and data.

Virtual Data Center (VDC) / Partition

An independent routing domain that includes a collection of subnetworks. A VDC can belong to only one tenant.

Subnetwork / Network

A Layer 2 network with a unique identifier. A subnetwork can belong to only one VDC.

Networks

After an admin user provisions one or more tenant networks in DCNM, DCNM sends the information about those tenant networks to Prime Network Services Controller. A tenant-admin user in Prime Network Services Controller can then deploy network services such as firewalls, load balancers, and routers on those networks.

For each network, DCNM provides Prime Network Services Controller with a handle that uniquely identifies the network on a VM manager, along with the network's Layer 3 IP details, such as the subnet prefix, mask, and default gateway.
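For example, the details for a single tenant network might look like the following (the values are illustrative placeholders, not output from either product):

    Network handle:  <VM manager-specific identifier>
    Subnet prefix:   10.10.1.0
    Mask:            255.255.255.0
    Default gateway: 10.10.1.1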

To view these networks in Prime Network Services Controller, choose Resource Management > Managed Resources > root > tenant (or other subordinate organization), and then click the Subnetworks tab.

When a network service is deployed at a particular level (or node) of the tenant organizational hierarchy, you can place its interfaces on available networks at the following locations (see the example after this list):

  • The organization node on which the service is being deployed.

  • Organization nodes that are children of the organization node on which the service is being deployed.

  • Organization nodes that are ancestors of the organization node on which the service is being deployed.
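For example, in the hypothetical hierarchy below (the names are illustrative), a service deployed at VDC-1 could place its interfaces on networks available at VDC-1 itself, at its child node App-1, or at its ancestor Tenant-A, but not at the sibling node VDC-2:

    root
    └── Tenant-A
        ├── VDC-1   <- service deployed here
        │   └── App-1
        └── VDC-2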

Network Roles

Networks are qualified by a role property that identifies their intended usage. The following table describes the available network roles.

Network Role Description

Host

Tenant-specific network intended for tenant application VMs. Service nodes can also be connected to this network.

Service

Tenant network intended exclusively for service nodes.

External

Tenant network that provides external connectivity. Both tenant application VMs and service nodes can connect to this network.

Management

Shared infrastructure network used for communication between service nodes and Prime Network Services Controller. Service node management interfaces connect to this network.

HA

Shared infrastructure network intended for high availability communications between service nodes. Service node HA interfaces connect to this network.

In contrast with tenant networks, which are tenant-specific and provisioned on the data center fabric by DCNM, infrastructure networks are shared by all tenants and are provisioned on the data center fabric out of band.

Details about infrastructure networks must be added to Prime Network Services Controller by the admin user. Because these networks are shared, they can be added only to root (Tenant Management > root).

To add details about infrastructure networks, choose Resource Management > Managed Resources and then click the Subnetworks tab.
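For example, the admin user might register a shared management subnetwork at root with values like the following (illustrative placeholders; the exact GUI fields depend on your release):

    Role:            Management
    Subnet prefix:   192.0.2.0
    Mask:            255.255.255.0
    Default gateway: 192.0.2.1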

Roles and Privileges

The following roles support integration with DCNM:

Role Responsibility

admin

  • Deploy Prime Network Services Controller if it is not already deployed.

  • Configure the Prime Network Services Controller instance and credentials on DCNM.

  • Confirm communication between Prime Network Services Controller and DCNM.

  • As needed, create tenant-admin user accounts.

  • Provide the tenant-admin user with the Prime Network Services Controller management IP address.

tenant-admin

  • Add, modify, or delete network services in the scope of the tenant organizational hierarchy provided by DCNM.

  • As part of network service creation, connect the service's data interfaces to the subnetworks for that tenant.

Configuring Connectivity with DCNM

This procedure describes how to configure connectivity between Prime Network Services Controller and DCNM.

After you have successfully configured connectivity, the following aspects apply:
  • When operating with DCNM, you cannot create, modify, or delete tenants or virtual data centers from the Prime Network Services Controller GUI.
  • Prime Network Services Controller allows admin and tenant-admin users to create, modify, and delete Application and Tier organizational levels under a Virtual Data Center organization.
  • The Prime Network Services Controller GUI does not allow admin or tenant-admin users to modify any information related to tenant-scoped networks or subnetworks. This restriction does not apply to the management and HA networks and subnetworks that are managed by admin users.
  • If you create, update, or delete a network service in Prime Network Services Controller, the change is reflected in both DCNM and Prime Network Services Controller.
Before You Begin
Confirm the following:
  • The DCNM system is running.
  • Enhanced Fabric Network was enabled during DCNM deployment.
  • You have network access to the DCNM system.
  • You have the appropriate privileges for configuring DCNM.
  • You have deployed Prime Network Services Controller in Orchestrator mode.
  • You have created a Prime Network Services Controller user account with the admin role for use only by the Adapter in DCNM.

For more information about these prerequisites, see the related Prime Network Services Controller and DCNM documentation.
SUMMARY STEPS

    1.    Log in to the DCNM VM console as root.

    2.    Navigate to the /opt/nscadapter/bin directory.

    3.    Start the adapter by entering the following command: nsc-adapter-mgr adapter start

    4.    Using the nsc-adapter-mgr nsc add command, enter the following information to provide DCNM with access to Prime Network Services Controller:

    5.    Log in to the Cisco DCNM GUI and do the following:

    6.    To confirm that connectivity is established between DCNM and Prime Network Services Controller, log in to Prime Network Services Controller and confirm that the organization is displayed in the Tenant Management tab.


DETAILED STEPS
    Step 1   Log in to the DCNM VM console as root.
    Step 2   Navigate to the /opt/nscadapter/bin directory.
    Step 3   Start the adapter by entering the following command: nsc-adapter-mgr adapter start
    Step 4   Using the nsc-adapter-mgr nsc add command, enter the following information to provide DCNM with access to Prime Network Services Controller:
    • Prime Network Services Controller management IP address
    • Username for Prime Network Services Controller access
    • Password for Prime Network Services Controller access

    The command format is nsc-adapter-mgr nsc add ip-address username password.
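    For example, Steps 2 through 4 together look like the following at the DCNM VM console (the IP address and credentials are placeholders; substitute the values for your deployment):

        cd /opt/nscadapter/bin
        nsc-adapter-mgr adapter start
        nsc-adapter-mgr nsc add 192.0.2.25 admin MyNscPassword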

    Step 5   Log in to the Cisco DCNM GUI and do the following:
    1. Choose Admin > Dynamic Fabric Automation > Settings.
    2. Choose Config > Dynamic Fabric Automation (DFA) > Auto-Configuration.
    3. Click Add Organization and enter the information for the organization. An organization in DCNM corresponds to a tenant in Prime Network Services Controller.
    4. As needed, add partitions to the organization. A partition in DCNM corresponds to a virtual data center in Prime Network Services Controller.
    5. Add a network to the partition.
    Step 6   To confirm that connectivity is established between DCNM and Prime Network Services Controller, log in to Prime Network Services Controller and confirm that the organization is displayed in the Tenant Management tab.

Troubleshooting Integration Issues

If you encounter issues with the Prime Network Services Controller and DCNM integration, you can look for information in the following locations:

  • On the DCNM server, review the log files in /opt/nscadapter/var/log (see the example after this list).
  • In the Prime Network Services Controller GUI:
    • Review faults for services by choosing Resource Management > Managed Resources > root > tenant > Network Services > network-service > Edit > Faults tab.
    • Review audit logs and faults by choosing Resource Management > Diagnostics > Audit Logs or Faults.

    For either option, double-click a fault to view more information.
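For the log files in the first bullet, a quick way to review them from the DCNM server shell (the exact file names vary by release, so list the directory first; <adapter-log-file> is a placeholder):

    ls /opt/nscadapter/var/log
    tail -f /opt/nscadapter/var/log/<adapter-log-file>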

The following table describes specific issues that you might encounter and how to address them:

Symptom: Organizations and partitions are created in DCNM, but no tenants or virtual data centers (VDCs) are displayed in Prime Network Services Controller.

Cause: The configurations in DCNM and Prime Network Services Controller are incomplete.

Resolution:
1. Confirm that the Service Configuration parameters are complete for the networks created in DCNM.
2. Confirm that Prime Network Services Controller is registered with the VM Manager IP parameter.

Symptom: Networks are created in DCNM, but no tenants, VDCs, or subnetworks are displayed in Prime Network Services Controller.

Cause: The Network Services Controller (NSC) Adapter does not have an active connection to Prime Network Services Controller.

Resolution: Use the nsc-adapter-mgr adapter connections command to ensure that there is an active connection to Prime Network Services Controller.

Cause: The NSC Adapter is not active on DCNM.

Resolution: Use the nsc-adapter-mgr adapter connections command to ensure that there is an active connection to DCNM.

Cause: Prime Network Services Controller does not have the VM Manager IP address.

Resolution: Confirm that Prime Network Services Controller is registered with the correct VM manager, and provide the VM manager IP address in the VM Manager IP parameter.

Cause: Networks were added to DCNM while Prime Network Services Controller or the NSC Adapter was down.

Resolution:
1. Enter the nsc-adapter-mgr adapter connections command and verify that the connections are correct.
2. In the DCNM GUI, choose the auto-config interface, choose the network, click Edit, and then click OK without making changes.

Symptom: Service networks were deleted in DCNM, but the tenants, VDCs, and subnetworks are still shown in Prime Network Services Controller.

Cause: Networks were deleted from DCNM while Prime Network Services Controller or the NSC Adapter was down.

Resolution:
1. Enter the nsc-adapter-mgr adapter connections command and verify that the connections are correct.
2. In the DCNM GUI, choose the auto-config interface, choose the network, click Edit, and then click OK without making changes.

Symptom: An edge service was removed from Prime Network Services Controller, but the Service Node IP Address is still shown in DCNM.

Cause: The service was deleted from Prime Network Services Controller while DCNM or the NSC Adapter was down.

Resolution: Manually delete the Service Node IP Address in DCNM for the affected partition.

Symptom: An edge service was deployed in Prime Network Services Controller, but the Service Node IP Address is not shown in DCNM.

Cause: The service was deployed in Prime Network Services Controller while DCNM or the NSC Adapter was down.

Resolution: Manually update the Service Node IP Address in the DCNM auto-configuration for the affected partition.

Symptom: Host traffic does not reach the service node.

Causes:
  • The wrong profile is specified in DCNM for the host network.
  • The service is not attached to the leaf.

Resolutions:
  • Make sure that the correct profile is specified in DCNM for the host network.
  • Make sure that the auto-config profile and parameters are correct, paying particular attention to the Service Node IP address.
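Several of the resolutions above rely on the nsc-adapter-mgr adapter connections command. A minimal check from the DCNM server shell, assuming the adapter install directory used earlier in this section, looks like this (the output format is release-dependent; look for active connections to both DCNM and Prime Network Services Controller):

    cd /opt/nscadapter/bin
    nsc-adapter-mgr adapter connections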