Deployment Overview and Requirements

Deployment Overview

Cisco Application Services Engine provides a common platform for deploying Cisco Data Center applications. This release of Application Services Engine supports the Cisco Day-2 Operations apps, which provide real-time analytics, visibility, and assurance for policy and infrastructure, and the Cisco ACI Multi-Site Orchestrator app, which provides a single pane of glass for managing multiple Cisco ACI fabrics.

Cisco Data Center apps are resource-intensive applications that rely on modern technology stacks. Cisco Application Services Engine can host containerized applications on a common platform.

Cisco Application Services Engine is deployed as a cluster of three service nodes. This clustering provides a reliable, highly available software framework.

Hardware vs Software Stack

Originally, the Application Services Engine was offered as a cluster of specialized Cisco UCS servers with the software framework pre-installed, and the term referred to the combined hardware and software package. With the addition of other form factors, the Cisco ASE software stack has been decoupled from the hardware, and the term can now refer to the physical appliance nodes, the software stack alone when deployed in a virtual environment, or the combination of both.

This document describes how to deploy the Application Services Engine software stack.

Available Form Factors

Cisco Application Services Engine can be deployed using a number of different form factors. Keep in mind, however, that you must use the same form factor for all nodes; mixing different form factors within the same cluster is not supported.

  • Cisco Application Services Engine physical appliance (.iso)

    This form factor refers to the software stack that can be deployed on the original physical appliance hardware, which you purchased with the Application Services Engine software stack pre-installed.

    The later sections in this document describe how to re-deploy the software stack on the existing physical appliance hardware. Deploying the original Cisco Application Services Engine physical appliance, including physical network connectivity, is described in the Cisco Application Services Engine Hardware Installation Guide.

  • VMware ESX (.ova)

  • Amazon Web Services (.ami)

  • Linux KVM (.qcow2)


Note

If you have previously deployed an earlier release of Cisco Application Services Engine, stateful upgrade or migration of the cluster is not supported. You will need to deploy a brand new Release 1.1.3d cluster and reinstall all the applications.


Application Services Engine and Cisco DCNM

Application Services Engine may be used in the context of Cisco DCNM. In this case, DCNM is not an application running in the Application Services Engine software stack. Instead, the DCNM image (.iso) is installed directly on the Application Services Engine physical servers to provide additional compute resources to the applications installed and running in Cisco DCNM, thus enabling horizontal scaling of the DCNM platform. As this document deals with Application Services Engine software stack deployments, see the Cisco Application Services Engine Installation Guide For Cisco DCNM for any DCNM-related information.

Supported Applications

For the full list of supported applications and the associated compatibility information, see the Cisco Day-2 Operations Apps Support Matrix.


Note

At this time, the virtual Cisco Application Services Engine form factors support the Cisco ACI Multi-Site Orchestrator application only. For other applications, you must deploy the Application Services Engine as a physical appliance.


Prerequisites and Guidelines

Network Time Protocol (NTP)

The Application Services Engine nodes use NTP for clock synchronization, so you must have an NTP server configured in your environment.
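
If you want to confirm that the NTP server is reachable from a node before deployment, the following minimal sketch sends a basic SNTP query using only the Python standard library. The server name is a placeholder for the NTP server configured in your environment.

import socket
import struct
import time

NTP_SERVER = "ntp.example.com"   # placeholder; use your environment's NTP server
NTP_EPOCH_OFFSET = 2208988800    # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def query_ntp(server: str, timeout: float = 5.0) -> float:
    """Send a basic SNTP request and return the server's clock as Unix time."""
    # First byte 0x1B = leap indicator 0, version 3, mode 3 (client);
    # the rest of the 48-byte request is zero.
    packet = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    # The transmit timestamp's integer seconds are the 11th 32-bit word of the response.
    transmit_seconds = struct.unpack("!12I", data)[10]
    return transmit_seconds - NTP_EPOCH_OFFSET

server_time = query_ntp(NTP_SERVER)
print(f"NTP server reachable; local clock differs by {abs(server_time - time.time()):.1f}s")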

Application Services Engine External Networks

When first configuring Application Services Engine, you will need to provide two IP addresses for the two Application Services Engine interfaces—one connected to the Data Network and the other to the Management Network.

  • Data Network is used for:

    • Application Services Engine node clustering

    • Application to application communication

    • Application Services Engine nodes to Cisco APIC nodes communication

      For example, the network traffic for Day-2 Operations applications such as Network Assurance Engine (NAE).

    This network must have IP reachability to the ACI fabric's in-band IP addresses for Day-2 Operations apps and to the out-of-band IP addresses for Multi-Site Orchestrator. The data network uses an MTU of 1500.

    Cluster nodes must be able to communicate with each other on this network.

  • Management Network is used for:

    • Accessing the Application Services Engine GUI

    • Accessing the Application Services Engine CLI via SSH

    • DNS and NTP communication

    • Application Services Engine firmware upload

    • Intersight device connector

The two interfaces can be in the same or different subnets. In addition, each network's interfaces across different nodes in the cluster can also be in different subnets.

Connectivity between the nodes is required on both networks, with the round-trip time (RTT) not exceeding 150 ms.
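
As a rough pre-deployment check of this requirement, the sketch below estimates RTT by timing TCP connections to each peer node's SSH port (a TCP connect takes approximately one round trip). The node addresses are placeholders, and this is only an illustration.

import socket
import statistics
import time

PEER_NODES = ["198.51.100.11", "198.51.100.12"]  # placeholders for the other cluster nodes
SSH_PORT = 22
SAMPLES = 5

def tcp_rtt_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Return the time taken to complete a TCP connection, in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

for node in PEER_NODES:
    rtts = [tcp_rtt_ms(node, SSH_PORT) for _ in range(SAMPLES)]
    median = statistics.median(rtts)
    status = "OK" if median <= 150.0 else "exceeds the 150 ms limit"
    print(f"{node}: median RTT {median:.1f} ms ({status})")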

Application Services Engine Internal Networks

Two additional internal networks are required for communication between the containers used by the Application Services Engine:

  • Application overlay is used internally by the applications running on the Application Services Engine.

    Application overlay must be a /16 network.

  • Service overlay is used internally by the Application Services Engine.

    Service overlay must be a /16 network.


Note

Communication between containers deployed on different Application Services Engine nodes is VXLAN-encapsulated and uses the data interfaces' IP addresses as source and destination. This means that the Application Overlay and Service Overlay addresses are never exposed outside the data network, and any traffic on these subnets is routed internally and does not leave the cluster nodes. As such, when configuring these networks, ensure that they are unique and do not overlap with any existing networks or services you may need to access from the Application Services Engine cluster nodes.
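
The sketch below shows how these constraints might be validated with Python's standard ipaddress module before cluster bring-up; all subnet values are placeholders for your own addressing plan.

import ipaddress

app_overlay = ipaddress.ip_network("172.17.0.0/16")      # placeholder
service_overlay = ipaddress.ip_network("100.80.0.0/16")  # placeholder
external_networks = [
    ipaddress.ip_network("10.0.0.0/24"),    # placeholder data network
    ipaddress.ip_network("192.0.2.0/24"),   # placeholder management network
]

if app_overlay.overlaps(service_overlay):
    raise ValueError("Application and Service overlays must not overlap each other")

for name, overlay in (("Application overlay", app_overlay),
                      ("Service overlay", service_overlay)):
    if overlay.prefixlen != 16:
        raise ValueError(f"{name} must be a /16, got /{overlay.prefixlen}")
    for net in external_networks:
        if overlay.overlaps(net):
            raise ValueError(f"{name} {overlay} overlaps existing network {net}")

print("Overlay networks are valid /16s and do not overlap any listed network")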


Communication Ports

The following ports are required by the Application Services Engine cluster and its applications; a sketch for verifying port reachability follows the list:

  • For traffic to and from the Application Services Engine cluster:

    • Port 53 for DNS

    • Port 443 for HTTPS

    • Ports 22 and 1022 for SSH

  • For traffic between the Application Services Engine nodes:

    • Ports 3379, 3380, 9969, 9979, 15223 for KMS

    • Port 19999 for confd

    • Ports 30000-30100 for Application Services Engine cluster services

    • Ports 30500-30600 for Kubernetes

  • For traffic between the Application Services Engine cluster and Cisco ACI fabrics:

    • Ports 2022 and 884 for Network Insights Assurance

    • Ports 5640-5656 for Network Insights Resources
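
The following sketch checks TCP reachability of the inter-node ports listed above from one cluster node to another. The peer address is a placeholder, and a failed connection may simply mean the corresponding service is not yet listening rather than that a firewall is blocking the port.

import socket

PEER = "198.51.100.12"  # placeholder for another cluster node's data IP
PORTS = ([3379, 3380, 9969, 9979, 15223]     # KMS
         + [19999]                           # confd
         + list(range(30000, 30101))         # cluster services
         + list(range(30500, 30601)))        # Kubernetes

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

unreachable = [port for port in PORTS if not is_open(PEER, port)]
print(f"{len(PORTS) - len(unreachable)} of {len(PORTS)} ports reachable on {PEER}")
if unreachable:
    print("Not reachable:", unreachable)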

ACI Fabric Connectivity

Previous releases of the Application Services Engine allowed two modes of cluster connectivity to the ACI fabrics: cluster nodes connected directly to leaf switches and configured by an application running on the APIC (fabric internal), and cluster nodes separated from the fabric by a Layer 3 network (fabric external). The fabric internal mode is deprecated in Release 1.1.3d and later; the following sections illustrate how to connect the Application Services Engine cluster to an ACI fabric via an L3Out or an EPG.

Connecting via an L3Out

Connectivity depends on the type of applications deployed in the Application Services Engine:

  • If you are deploying Day-2 Operations applications, you will use the data interface IP to communicate with the in-band management network in the management tenant of each fabric. Connectivity to all fabrics is established via an L3Out to the in-band management network in the in-band VRF.

  • In addition, if you are deploying Multi-Site Orchestrator, you must also establish connectivity to the out-of-band (OOB) interface of each site's Cisco APIC cluster.

If you plan to connect the cluster across a Layer 3 network, keep the following in mind:

  • You must configure an L3Out and the external EPG for Cisco Application Services Engine data network connectivity.

    Configuring external connectivity in an ACI fabric is described in Cisco APIC Layer 3 Networking Configuration Guide.

  • If you specify a VLAN ID for your data interface during setup of the cluster, the host port must be configured as a trunk allowing that VLAN.

    However, in most common deployments, you can leave the VLAN ID empty and configure the host port in access mode.

The following two figures show two distinct network connectivity scenarios when connecting the Application Services Engine cluster to ACI fabrics via an L3Out. The primary purpose of each depends on the type of application you may be running in your Application Services Engine.

Figure 1. Connecting via L3Out, Day-2 Operations Applications
Figure 2. Connecting via L3Out, Multi-Site Orchestrator
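
If you are deploying Multi-Site Orchestrator, you can verify that each site's APIC out-of-band address is reachable with a sketch like the one below. It assumes the third-party requests package; the APIC addresses and credentials are placeholders, while /api/aaaLogin.json is the standard APIC REST login endpoint.

import requests
import urllib3

urllib3.disable_warnings()  # lab APICs often present self-signed certificates

APIC_OOB_ADDRESSES = ["203.0.113.10", "203.0.113.20"]  # placeholder per-site APIC OOB IPs
USERNAME, PASSWORD = "admin", "example-password"       # placeholders

for apic in APIC_OOB_ADDRESSES:
    payload = {"aaaUser": {"attributes": {"name": USERNAME, "pwd": PASSWORD}}}
    resp = requests.post(f"https://{apic}/api/aaaLogin.json",
                         json=payload, verify=False, timeout=10)
    print(f"{apic}: HTTP {resp.status_code}")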

Connecting via an EPG/BD

As in the previous example, connectivity depends on the type of applications deployed in the Application Services Engine:

  • If you are deploying Day-2 Operations applications, you will use the data interface IP to communicate with the in-band management network in the management tenant of each fabric. The data interface IP subnet connects to an EPG/BD in the fabric and must have a contract established with the local in-band EPG in the management tenant. We recommend deploying the Application Services Engine in the management tenant and in-band VRF. Connectivity to other fabrics is established via an L3Out.

  • In addition, if you are deploying Multi-Site Orchestrator, you must also establish connectivity to the out-of-band (OOB) interface of each site's Cisco APIC cluster.

If you plan to connect the cluster directly to the ACI leaf switches, keep the following in mind:

  • If deploying in VMware ESX or Linux KVM, the host must be connected to the fabric via a trunk port.

  • We recommend configuring the bridge domain (BD), subnet, and endpoint group (EPG) for Cisco Application Services Engine connectivity in the management tenant.

    Because the Application Services Engine requires connectivity to the in-band EPG in the in-band VRF, creating the EPG in the management tenant means no route leaking is required.

  • You must create a contract between the fabric's in-band management EPG and Cisco Application Services Engine EPG.

  • If you specify a VLAN ID for your data network during setup of the cluster, the Application Services Engine interface and the port on the connected network device must be configured as trunks.

    However, in most cases we recommend not assigning a VLAN to the data network, in which case you must configure the ports in access mode.

  • If apps on the Application Services Engine cluster monitor several ACI fabrics, an L3Out with a default route or a specific route to the other ACI fabrics' in-band EPGs must be provisioned, and a contract must be established between the cluster EPG and the L3Out's external EPG.

The following two figures show two distinct network connectivity scenarios when connecting the Application Services Engine cluster to ACI fabrics using an EPG. The primary purpose of each depends on the type of application you may be running in your Application Services Engine.

Figure 3. Connecting via an EPG/BD, Day-2 Operations Applications
Figure 4. Connecting via an EPG/BD, Multi-Site Orchestrator
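
As an illustration of this approach, the sketch below renders the bridge domain, subnet, and EPG relationship described above as an APIC REST payload and posts it to the management tenant. All object names, the subnet, and the APIC address are placeholders; the class names (fvBD, fvAEPg, fvRsBd, fvRsCons) follow the standard ACI object model, and the contract consumed here is assumed to already be provided by the fabric's in-band management EPG.

import requests
import urllib3

urllib3.disable_warnings()  # lab APICs often present self-signed certificates

APIC = "203.0.113.10"                                # placeholder APIC address
COOKIES = {"APIC-cookie": "<token from aaaLogin>"}   # placeholder; obtain via aaaLogin as shown earlier

# BD in the mgmt tenant's in-band VRF carrying the cluster data subnet, plus an
# EPG bound to that BD which consumes the in-band management contract.
PAYLOAD = """
<fvTenant name="mgmt">
  <fvBD name="ase-bd">
    <fvRsCtx tnFvCtxName="inb"/>
    <fvSubnet ip="10.1.1.1/24" scope="private"/>
  </fvBD>
  <fvAp name="ase-ap">
    <fvAEPg name="ase-epg">
      <fvRsBd tnFvBDName="ase-bd"/>
      <fvRsCons tnVzBrCPName="inband-mgmt-contract"/>
    </fvAEPg>
  </fvAp>
</fvTenant>
"""

resp = requests.post(f"https://{APIC}/api/mo/uni.xml",
                     data=PAYLOAD, cookies=COOKIES, verify=False, timeout=10)
print(f"HTTP {resp.status_code}: {resp.text[:200]}")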