New and Changed Information
The following table provides an overview of the significant changes up to this current release. The table does not provide an exhaustive list of all changes or of the new features up to this release.
Cisco APIC Release Version | Feature | Description
---|---|---
Cisco APIC 4.2(1) | Docker EE 2.1 support with Kubernetes | Beginning with this release, Docker EE 2.1 is supported with Kubernetes.
Cisco APIC 4.2(2) | Docker EE 3.0 support with Kubernetes | Beginning with this release, Docker EE 3.0 is supported with Kubernetes.
Cisco ACI and Docker EE Integration
Docker Enterprise Edition (EE) is a containers as a service (CaaS) platform that enables workload deployment for high availability using the Kubernetes orchestrator. Beginning with Cisco Application Policy Infrastructure Controller (APIC) Release 4.2(1), Cisco Application Centric Infrastructure (ACI) supports integration with Docker EE.
Docker EE includes the Docker Universal Control Plane (UCP), a cluster-management solution that supports the Cisco ACI Container Network Interface (CNI) plug-in. The Cisco ACI CNI plug-in is required for integration with Cisco ACI, and Docker EE provides technical support for it.
Note |
This document refers to Docker EE 3.0, Docker Enterprise Engine 19.03.x, and Docker UCP 3.2.x on Red Hat 7.6 and 7.7. See the section "Docker Enterprise Edition 3.0" in the Compatibility Matrix article on the Docker website. |
System Requirements for Cisco ACI Docker EE Integration
You need at least one Kubernetes manager node and one Kubernetes worker node. This section lists the requirements for the nodes.
- Manager node:
  - CPU: 16 core
  - RAM: 16 GB
  - OS: Red Hat Enterprise Linux Server release 7.6 (Maipo)
- Worker node:
  - CPU: 8 core
  - RAM: 8 GB
  - OS: Red Hat Enterprise Linux Server release 7.6 (Maipo)
Note |
The use of the symmetric policy-based routing (PBR) feature for load balancing external services requires the use of Cisco Nexus 9300-EX or -FX leaf switches. |
Hardware Requirements
This section provides the hardware requirements:
- Connecting the servers to Gen1 hardware or Cisco Fabric Extenders (FEXes) is not supported and results in a nonworking cluster.
- The use of the symmetric policy-based routing (PBR) feature for load balancing external services requires the use of Cisco Nexus 9300-EX or -FX leaf switches. For this reason, the Cisco ACI CNI plug-in is supported only for clusters that are connected to switches of those models.
Note |
UCS-B is supported as long as the UCS Fabric Interconnects are connected to Cisco Nexus 9300-EX or -FX leaf switches. |
Workflow for Cisco ACI Docker EE Integration
This section provides a high-level description of the tasks that you must perform to integrate Docker Enterprise Edition (EE) into the Cisco Application Centric Infrastructure (ACI) fabric.
- Configure the manager and worker nodes.
  See the section Configure the Manager and Worker Nodes.
- Install Docker EE.
  See the section Install Docker EE.
- Generate the deployment file for the Cisco ACI Container Network Interface (CNI) plug-in.
  See the section Generate the Deployment File for Cisco ACI CNI Plug-in.
- Install the Docker Universal Control Plane (UCP).
  See the section Install the Docker UCP.
- Install the Cisco ACI CNI plug-in.
  See the section Install the Cisco ACI CNI Plug-in.
- Configure the worker nodes to join the swarm cluster.
  See the section Configure Worker Nodes to Join the Swarm Cluster.
- Verify the Docker EE installation.
  See the section Verify the Docker EE Installation.
- Connect to the Docker UCP Dashboard.
  See the section Connect to the Docker UCP Dashboard.
Configure the Manager and Worker Nodes
You must configure the manager and worker nodes before you can integrate Docker Enterprise Edition (EE) with the Cisco Application Centric Infrastructure (ACI) fabric. Perform the following steps on all manager and worker nodes.
Procedure
Step 1 |
Configure the firewall to allow the ports mentioned in the section "Ports Used," in the article UCP System requirements, on the Docker website. |
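The following is a minimal sketch that uses firewall-cmd on Red Hat Enterprise Linux 7. The ports shown (the UCP web UI, TLS, kubelet, and miscellaneous UCP ports referenced later in this document) are representative only; open every port listed in the Docker "Ports Used" table.
# Representative ports only; consult the Docker "Ports Used" table for the full list.
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=12376/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=12378-12388/tcp
sudo firewall-cmd --reload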
Step 2 |
Install the following packages:
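The original package list is not reproduced in this copy of the document; as a placeholder, the following are typical Red Hat Enterprise Linux 7 prerequisites for a Docker EE installation (an assumption, not the original list). Example:
# Typical RHEL 7 prerequisites for Docker EE (assumed package list).
sudo yum install -y yum-utils device-mapper-persistent-data lvm2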
|
Install Docker EE
Procedure
Step 1 |
Get the Docker Enterprise Edition (EE) image through your Docker subscription. Example:
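A hedged sketch of the standard Docker EE repository setup on RHEL follows; the repository URL comes from your Docker subscription, and the placeholder must be replaced with that URL.
# Replace <docker-ee-repo-url> with the URL provided with your Docker EE subscription.
export DOCKERURL="<docker-ee-repo-url>"
sudo -E sh -c 'echo "$DOCKERURL/rhel" > /etc/yum/vars/dockerurl'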
By default, the Docker version is 19.03.4. It is backward compatible. |
Step 2 |
Disable the firewall. Example:
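One way to do this on RHEL 7, assuming firewalld is the active firewall:
sudo systemctl stop firewalld
sudo systemctl disable firewalld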
|
Step 3 |
Enable the repository. Example:
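A hedged sketch that reuses the DOCKERURL variable from the earlier step; the repository path follows the standard Docker EE layout and may differ for your subscription.
sudo yum install -y yum-utils
sudo -E yum-config-manager --add-repo "$DOCKERURL/rhel/docker-ee.repo"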
|
Step 4 |
Install Docker: Example:
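For example (package names follow the standard Docker EE repository; pin a specific version if required):
sudo yum install -y docker-ee docker-ee-cli containerd.io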
|
Step 5 |
Start Docker: Example:
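For example, using systemd:
sudo systemctl enable docker
sudo systemctl start docker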
|
Step 6 |
Verify the Docker version and installation on all nodes: Example:
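For example:
# The server version should report the expected Docker EE release (19.03.4 by default in this document).
docker version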
|
Generate the Deployment File for Cisco ACI CNI Plug-in
You use the acc_provision tool to generate the deployment file for the Cisco Application Centric Infrastructure (ACI) Container Network Interface (CNI) plug-in.
The tool supports the docker-ucp-3.x flavor, which generates the relevant deployment file for the Docker Enterprise Edition (EE) platform. This flavor results in the creation of Cisco ACI contracts that allow traffic on the following TCP ports, which are required for Docker Universal Control Plane (UCP):
- ucp-ui-port: 443
- ucp-tls-port: 12376
- ucp-kubelet-port: 10250
- ucp-miscellaneous-ports: 12378 to 12388
Before you begin
- Make sure that your manager node is able to reach the Cisco Application Policy Infrastructure Controller (APIC).
- Install the acc-provision tool on the manager node (either from the package or from PyPI).
Procedure
Step 1 |
Generate the input file that you need for acc-provision from the generated sample file: Example:
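A hedged sketch follows; the flavor string for your acc-provision version may differ (this document references docker-ucp-3.x support), and the input file name is illustrative.
# Generate a sample input file for the Docker UCP flavor (flavor string is an assumption).
acc-provision -f docker-ucp-3.0 --sample > acc_provision_input.yaml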
Update the acc_provision_input.yaml file as required. |
Step 2 |
Run acc-provision to configure Cisco APIC and generate the Kubernetes deployment file: Example:
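A hedged sketch follows; the flavor string, credentials, and file names are placeholders.
# -a pushes the configuration to the Cisco APIC; -o writes the Kubernetes deployment file.
acc-provision -f docker-ucp-3.0 -c acc_provision_input.yaml \
  -a -u <apic-admin-user> -p <apic-password> \
  -o aci_deployment.yaml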
|
Install the Docker UCP
This procedure installs the latest version of the Docker Universal Control Plane (UCP) on your machine.
Note |
The output of the command in this procedure displays the URL for the Docker UCP browser, administrator username, and password. Record this information for use in the procedure Verify the Docker EE Installation. |
Procedure
On the manager node, run the following command:
Example:
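The following is a hedged sketch of a UCP 3.2.x installation; the image tag and host address are placeholders, and the --unmanaged-cni option is used so that UCP does not deploy its default CNI (the Cisco ACI CNI plug-in is installed separately in a later procedure).
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.2.1 install \
  --host-address <manager-node-ip> \
  --unmanaged-cni \
  --interactive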
The output displays a command for each node to join the cluster, as shown in the following example. Note the information;
you will use it to verify the Docker installation after the Cisco Application Centric Infrastructure (ACI) Container Network Interface (CNI) plug-in is installed.
|
Install the Cisco ACI CNI Plug-in
After you install the Docker Universal Control Plane (UCP), install the Cisco Application Centric Infrastructure (ACI) Container Network Interface (CNI) plug-in.
Procedure
Step 1 |
Install the kubectl command-line interface on the manager node by using yum or by following Docker documentation. See the document Install the Kubernetes CLI on the Docker website.
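The following is a hedged sketch of a direct binary install; match the kubectl version to the Kubernetes version bundled with your UCP release (the version shown is an assumption).
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.8/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl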
|
Step 2 |
Run the following command on the manager node using the aci_deployment.yaml file that you generated earlier:
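Example (assuming the file is in the current working directory):
kubectl apply -f aci_deployment.yaml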
|
Step 3 |
Verify the status of pods aci-* and kube-dns-*: Example:
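For example (the pod namespaces can vary by release, so this filters across all namespaces):
kubectl get pods --all-namespaces -o wide | egrep 'aci-|kube-dns'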
Wait until all the pods are up before proceeding. |
Configure Worker Nodes to Join the Swarm Cluster
After you complete the installation, you configure the worker nodes to join the swarm cluster.
Before you begin
You must have the worker join command that was displayed during the Docker UCP installation. If necessary, you can display it again by running the following command on the manager node: docker swarm join-token worker
Procedure
Log in to your worker nodes and on each one run the following command:
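Use the exact join command, token, and manager address that were printed during the Docker UCP installation; the values below are placeholders.
docker swarm join --token <worker-join-token> <manager-node-ip>:2377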
|
Verify the Docker EE Installation
After installing Docker Enterprise Edition (EE), verify that Docker EE is functional and ready to use. Use the administrator username and password that you captured in the procedure Install the Docker UCP to log in to UCP. There you can see your swarm cluster and details of the Kubernetes pods, nodes, and Cisco Application Centric Infrastructure (ACI) Container Network Interface (CNI) plug-in.
Before you begin
You must have captured the URL for the Universal Control Plane (UCP) browser, the administrator username, and password from the procedure Install the Docker UCP.
Procedure
Step 1 |
Verify the swarm cluster, making sure that the state is Running and that the status is Active. Example:
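For example, on the manager node (each node should be listed as ready and active):
docker node ls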
|
Step 2 |
Use the kubectl command-line interface to get detailed information for all the pods running on your system. Example:
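For example:
kubectl get pods --all-namespaces -o wide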
|
Connect to the Docker UCP Dashboard
After you install Docker Enterprise Edition (EE), you connect to the Docker Universal Control Plane (UCP) dashboard.
Procedure
Step 1 |
Open a web browser and point to the host IP address that was provided during Docker UCP installation in the section Install the Docker UCP. Example:
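For example (the address is the one recorded during the UCP installation):
https://<ucp-manager-ip-address>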
|
Step 2 |
Accept the exception and continue. The Docker Enterprise login page appears.
|
Step 3 |
Log in with the credentials that you provided when you installed the Docker UCP. |
Step 4 |
Upload your license or skip uploading it for now. The Docker UCP dashboard appears. You can browse the namespaces, pods, and other Kubernetes resources from the left navigation
panel.
|
Deploying Applications
After you integrate Docker Enterprise Edition (EE) with Cisco Application Centric Infrastructure (ACI), you can deploy applications.
This document includes procedures for deploying two popular applications. You do not need to deploy the applications: They are examples and can help you validate your cluster.
Deploy NGINX
Use the kubectl CLI to apply the NGINX .yaml file, which contains the NGINX specification. Then verify the deployment using kubectl and the Docker Universal Control Plane (UCP) dashboard.
The following is an example of an NGINX deployment .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Procedure
Step 1 |
Apply the .yaml file to deploy NGINX. Example:
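For example (the file name is illustrative; use the name under which you saved the specification):
kubectl apply -f nginx-deployment.yaml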
|
Step 2 |
Verify the deployment using kubectl. Example:
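For example, using the deployment name and label from the specification above:
kubectl get deployment nginx-deployment
kubectl get pods -l app=nginx -o wide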
|
Step 3 |
Verify the deployment on the Docker UCP dashboard. |
Deploy Guestbook
Complete this procedure to deploy the guestbook application.
Before you begin
- Make sure that the final "frontend" service is up.
- Download the deployment files from the following location and apply them (a sketch of applying them follows this list):
  https://kubernetes.io/docs/tutorials/stateless-application/guestbook/
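If you have not already deployed the manifests, the following is a hedged sketch; the file names follow the upstream guestbook tutorial and may differ in your download.
# Apply each downloaded guestbook manifest (file names may differ in your copy).
kubectl apply -f redis-master-deployment.yaml
kubectl apply -f redis-master-service.yaml
kubectl apply -f redis-slave-deployment.yaml
kubectl apply -f redis-slave-service.yaml
kubectl apply -f frontend-deployment.yaml
kubectl apply -f frontend-service.yaml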
Procedure
Step 1 |
Run the kubectl get svc command and note the external IP address from the output. |
Step 2 |
Use a browser to reach the dashboard of the guestbook service. |
Step 3 |
Log in to the Docker Universal Control Plane (UCP) dashboard. |
Step 4 |
Go to , and in the central pane verify the deployed service. |
Step 5 |
Go to , and in the central pane, verify the pods. Look for pods with names that begin with frontend-. |