Cisco Container Platform

Cisco Container Platform is a turnkey, production-grade, extensible platform for deploying and managing multiple Kubernetes clusters. It runs on 100% upstream Kubernetes. Cisco Container Platform offers seamless container networking, enterprise-grade persistent storage, built-in production-grade security, and integrated logging, monitoring, and load balancing.

Cisco Container Platform Architecture Overview

The following figure shows the architecture of Cisco Container Platform deployment with HyperFlex and ACI integration.

Figure 1. Cisco Container Platform Architecture Overview

Note

Cisco Container Platform can run on top of an ACI networking fabric as well as on a non-ACI networking fabric that performs standard L3 switching.

At the bottom of the stack is the ACI fabric, which consists of Nexus switches, Application Policy Infrastructure Controllers (APICs), and Fabric Interconnects (FIs). The next layer up consists of UCS servers running the HyperFlex software. HyperFlex provides virtualized compute resources through VMware and distributed storage resources through the HyperFlex converged data platform.

The next layer up is the Cisco Container Platform Control Plane and Data Plane. In the preceding figure, the Cisco Container Platform Control Plane runs on the four VMs on the left.

Kubernetes tenant clusters are preconfigured to support Persistent Volumes using the vSphere Cloud Provider and FlexVolumes using the HyperFlex volume plugin. Both implementations use the underlying replicated, highly available HyperFlex data platform for storage.
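
For reference, the following is a minimal sketch of how such storage could be consumed from a tenant cluster, assuming the in-tree vSphere Cloud Provider; the StorageClass name, claim name, and sizes are illustrative, and ds1 is the shared persistent volume datastore described later in this chapter.

cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hx-vsphere            # illustrative name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: ds1              # shared HyperFlex datastore for persistent volumes
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hx-vsphere
  resources:
    requests:
      storage: 10Gi
EOF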

Components of Cisco Container Platform

The following table describes the components of Cisco Container Platform.

Function                           Component
---------------------------------  ----------------------------------------------
Container Runtime                  Docker CE
Operating System                   Ubuntu
Orchestration                      Kubernetes
IaaS                               vSphere
Infrastructure                     HyperFlex
Container Network Interface (CNI)  ACI CNI, Contiv VPP, Calico
SDN                                ACI
Container Storage                  HyperFlex Flex Driver
Load Balancing                     NGINX
Monitoring                         Prometheus, Grafana
Logging                            Elasticsearch, Fluentd, and Kibana (EFK) stack

Container Network Interface Plugins

Cisco Container Platform supports multiple Kubernetes CNI plugins such as:

  • ACI CNI is the recommended plugin for use with an ACI fabric. It is optimized for the fabric and is fully supported by Cisco.

  • Contiv VPP is recommended when an ACI fabric is not used. It is a user space switch optimized for high performance and scale. It is fully supported by Cisco.

  • Calico can be used for quick evaluation of Cisco Container Platform. Calico is an integrated CNI plugin but is not fully supported under the Cisco commercial support agreement.

Operationally, all three CNI plugins offer the same experience to the customer: container network connectivity is seamless, and network policies are applied using Kubernetes NetworkPolicies. Under the hood, both ACI CNI and Contiv VPP offer advanced feature support.

ACI CNI allows you to map Kubernetes NetworkPolicies to the ACI fabric and supports richer underlay policies, such as common policies for containers, virtual machines, and physical servers, as well as inter-Kubernetes-cluster policies. Additionally, ACI CNI supports the Kubernetes LoadBalancer service type using PBR policies in the ACI fabric. Contiv VPP is a pure user space switch that is optimized for high performance and scale. It is modular, flexible, and supports a rich protocol feature set. Contiv VPP interfaces provide high performance for legacy, sidecar (Istio), and cloud-native function containers. Additionally, because Contiv VPP runs in user space, it enables fast and easy upgrades and is shielded from kernel bugs and context switches.
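
Regardless of the plugin, policies are expressed as standard Kubernetes NetworkPolicy objects. The following is a minimal sketch that allows only pods labeled app=web to reach pods labeled app=db on TCP port 3306; the labels, namespace, and port are illustrative.

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db       # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db                 # pods that the policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web        # only traffic from these pods is allowed
      ports:
        - protocol: TCP
          port: 3306
EOF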

ACI CNI

ACI CNI is tightly integrated with the ACI fabric. It supports underlay integration with the ACI fabric and hardware accelerated load balancing.

The following figure shows the architecture of ACI CNI.

Figure 2. Architecture of ACI CNI

Fast Path Container Networking with Contiv VPP

Contiv VPP is a user space switch that is optimized for performance and scale. The benefits of a user space switch are the absence of kernel context-switch overhead, easy and rapid upgrades, and high availability. Contiv VPP is based on Vector Packet Processing (VPP) technology, which offers high performance, proven technology, modularity and flexibility, and a rich feature set.

Contiv VPP connects containers to the VPP switch using tapv2, memif, or VPP TCP stack fast-path interfaces. This connectivity is seamless to the customer. Downstream, the VPP switch connects to the NIC using the Data Plane Development Kit (DPDK) for fast packet processing. Contiv VPP exposes detailed application container statistics, such as inPackets, outPackets, outErrorPackets, dropPackets, inMissPackets, inNobufPackets, and puntPackets, which can be dumped on the Kubernetes nodes or viewed in Prometheus/Grafana.
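
As a sketch of how these counters might be inspected on a node, assuming the VPP vswitch runs as a contiv-vswitch pod in the kube-system namespace (the pod name and namespace are assumptions that may differ in your deployment):

# List the vswitch pods, then dump VPP interface counters from one of them.
kubectl -n kube-system get pods -o wide | grep contiv-vswitch
kubectl -n kube-system exec <contiv-vswitch-pod-name> -- vppctl show interface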

The following figure shows the architecture of Contiv VPP.

Figure 3. Architecture of Contiv VPP

System Requirements

This section describes the software, hardware, storage, and network requirements that are necessary to deploy Cisco Container Platform.

Supported Version Matrix

Cisco Container Platform uses various software and hardware components. The following table provides information on the validated versions of each component.

Component            Validated Version
-------------------  --------------------------------
Kubernetes           1.10
ACI                  3.1.2p
HyperFlex software   3.0.1b
HyperFlex hardware   All Flash 220M5, All Flash 240M5
vSphere              vSphere 6.0 (u2)+, vSphere 6.5

Software Requirements

Ensure that the following software applications are installed in the deployment environment:

  • VMware vCenter server 6.5

  • VMware client integration plugin

  • vSphere Flash client

  • HyperFlex 3.0.1b

    For more information on installing HyperFlex and accessing the HyperFlex Connect UI, refer to the latest HyperFlex documentation.

Storage Requirements

After HyperFlex is installed, you must configure two shared datastores that are accessible to all hosts in the cluster for the following purposes:

  • For persistent volume storage

  • For deploying the Cisco Container Platform tenant base VM

Configuring Shared Datastore

Procedure

Step 1

Log in to the HX Connect UI using the VMware vCenter SSO administrator credentials.

Step 2

In the left pane, click Manage > Datastores.

Step 3

Perform these steps to create a datastore for the Kubernetes persistent volume storage:

  1. In the right pane, click Create Datastore.

  2. In the Name field, enter ds1, and then enter a size and block size for the datastore.

    Note 
    We recommend a size of 1 TB and a block size of 8K.
  3. Click Create Datastore.

Step 4

Perform these steps to create a datastore for deploying the Cisco Container Platform tenant base VM:

  1. In the right pane, click Create Datastore.

  2. Specify a name, size, and block size for the datastore.

  3. Click Create Datastore.

The newly created datastore is available on vCenter.
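
If you prefer to verify from the command line, the following sketch uses the open-source govc CLI (not part of Cisco Container Platform, and an assumption on our part), with the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables pointing at your vCenter:

# Confirm that the datastores created above are visible to vCenter.
govc datastore.info ds1
govc datastore.info <tenant-base-vm-datastore-name>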

Configuring Link-local Network for HyperFlex iSCSI Communication

The FlexVolume plug-in requires a host-only link between each VM that runs Kubernetes and the Internet Small Computer System Interface (iSCSI) target on the ESXi host.
Procedure

Step 1

Open an SSH session to the HyperFlex 3.0 Platform Installer VM or one of the HyperFlex Controller VMs and log in as a root user.

Step 2

Perform these steps to get the vCenter details that you need to enter when you run the add_vswitch.py script:

  1. Run the following command to get the vCenter datacenter name and vCenter cluster name.

    stcli cluster info | grep -i vcenter
  2. Run the following command to get the vCenter IP address.

    ping <vcenter URL>
Step 3

Navigate to the following location:

/usr/share/springpath/storfs-misc/hx-scripts/
Step 4

Run the add_vswitch.py script.

python add_vswitch.py --vcenter-ip <vCenter IP address>
When prompted, specify the vCenter credentials, datacenter name, and cluster name that you got from the output of Step 2.
The HyperFlex infrastructure is configured and ready to use for Cisco Container Platform with Kubernetes persistent volume support.
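
To confirm the result, you can, as a sketch, list the standard vSwitches on each ESXi host over SSH and check that the vSwitch added by the script is present (the command below is standard esxcli; no specific vSwitch name is assumed here):

# Run on each ESXi host to list standard vSwitches and their port groups.
esxcli network vswitch standard list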

Network Requirements

Configuring DHCP Server

Cisco Container Platform requires a DHCP server to be present. The Cisco Container Platform installer VM, control plane VMs, and tenant cluster VMs obtain their primary interface IP addresses from this DHCP server. Ensure that a DHCP server is configured in your environment.

Reserving IP Addresses for Static Allocation

A static IP address is used during Cisco Container Platform installation for the CCP Control Plane master node virtual IP, to support Cisco Container Platform upgrades. Additionally, a virtual IP address (VIP) is used as an external IP address for each Kubernetes cluster. VIPs are configured using VIP pools. You can obtain these IP addresses from the same or a different subnet, and you must ensure that they are not part of a DHCP pool.
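
As an illustration only, the following dnsmasq-style DHCP scope leaves part of the subnet outside the pool for the master node virtual IP and the VIP pools; dnsmasq is not a Cisco Container Platform requirement, and the interface name and addresses are hypothetical.

# /etc/dnsmasq.conf (illustrative): lease 10.10.10.50-10.10.10.200 over DHCP and
# keep 10.10.10.10-10.10.10.49 free for the control plane virtual IP and VIP pools.
interface=ens160
dhcp-range=10.10.10.50,10.10.10.200,255.255.255.0,12h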

Provisioning a Port Group for Cisco Container Platform VM Deployment

Create a Port Group on a vSphere Standard Switch (attached to all the ESXi hosts in the vSphere cluster) or a vSphere Distributed Virtual Switch (DVS).

CCP VMs are deployed on this port group and must have access to the vCenter management network.
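
If you are using a DVS and prefer the command line, the following sketch creates the port group with the open-source govc CLI (an assumption, not a product requirement); the DVS name, VLAN ID, and port group name are placeholders.

# Create a port group on an existing DVS; replace the names and VLAN ID.
govc dvs.portgroup.add -dvs ccp-dvs -type earlyBinding -vlan 100 ccp-portgroup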

Enabling DRS and HA on Clusters

You must enable DRS and HA on vCenter for the following reasons:

  • DRS continuously monitors resource utilization across vSphere servers and intelligently balances VMs on the servers.

  • HA provides easy-to-use, cost-effective high availability for applications running on virtual machines.

Procedure

Step 1

Browse to the cluster on which you want to deploy Cisco Container Platform.

Step 2

Click the Configure tab.

Step 3

Under Services, click vSphere DRS, and then click Edit.

Step 4

In the right pane of the Edit Cluster Settings window, check the Turn ON vSphere DRS check box, and then click OK.

Step 5

Under Services, click vSphere Availability, and then click Edit.

Step 6

In the right pane of the Edit Cluster Settings window, check the Turn ON vSphere HA check box, and then click OK.
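
As an alternative to the vSphere UI steps above, the following sketch enables DRS and HA with the open-source govc CLI (an assumption, not part of the product); the cluster inventory path is a placeholder.

# Enable DRS and HA on the target cluster.
govc cluster.change -drs-enabled=true -ha-enabled=true /MyDatacenter/host/MyCluster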


ACI Integration Requirements

Cisco ACI enables you to group your applications into Endpoint Groups (EPGs), define policies for the EPGs, and then deploy network policies on the ACI fabric. Policy enforcement is implemented using the spine-and-leaf architecture of the ACI fabric.

The following figure shows the components of a Cisco Container Platform ACI integrated network topology.

The main components of the network topology are as follows:

  • ACI Fabric includes two spine nodes, two leaf nodes, and three APIC controllers. You can choose the number of spine nodes, leaf nodes, and APIC controllers based on your network requirements.

  • HyperFlex Fabric Interconnect (FI) includes two fabric interconnect switches connected between the ESXi hosts and the ACI leaf switches.

  • ESXi Hosts run on UCS servers such as the UCS C220 M4.

  • An ASR router is connected to an ACI border leaf to provide external access for the Availability Zone (AZ).

APIC Controller Requirements

If you are using ACI CNI, ensure that you have configured the following settings on the APIC controller:

  • Assign a VLAN ID other than 4094 for the Infra VLAN, as 4094 is reserved for provisioning the HyperFlex fabric interconnect

  • Create a common tenant

  • Create a Virtual Routing and Forwarding (VRF) instance in the common tenant

  • Create at least one L3OUT

  • Create an Access Entity Profile (AEP) for the ACI tenant physical domain

  • Create an AEP for L3OUT

  • Create a Virtual Machine Manager (VMM) domain which connects to vSphere

For more information on configuring an APIC controller, refer to the latest ACI documentation.

HyperFlex FI Requirements

Ensure that you have configured the following settings on HyperFlex FI:

  • Configure QoS

  • Ensure that the tenant VLAN is allowed

After the Cisco Container Platform Control Plane and management node networking are configured, you can access the HyperFlex cluster on vSphere and install Cisco Container Platform. Each time you create a tenant cluster, the ACI constructs such as the L3OUT, VRF, and AEP stored in the common tenant are reused.

Tenant Cluster with ACI CNI Deployment

In an ACI deployment, each tenant cluster must have its own routable subnet. The node VLAN, pod subnet, and multicast subnet range must not overlap between clusters. Cisco Container Platform ensures that the VLAN and subnet do not overlap.

Unlike other CNI plugins, an ACI tenant cluster requires a couple of sub-interfaces (VLAN interfaces) for each Kubernetes node. As shown in the following figure, Cisco Container Platform assigns unique node VLAN IDs. You need to assign a unique Infra VLAN ID for the clusters during cluster creation.

For more information on creating tenant clusters, refer to the Cisco Container Platform User Guide.