This section describes the requirements that are necessary to deploy Cisco Container Platform.
Note: Cisco Container Platform does not support installing virtual machines in nested datacenters, vCenter clusters, or virtual machine folders. It also does not support moving the virtual machines or changing their configuration directly from vCenter.
It contains the following topics:

Supported Version Matrix

Cisco Container Platform uses various software and hardware components. For more information on the validated versions of each component, refer to the latest Cisco Container Platform Release Notes.
Ensure that the following software applications are installed in your deployment environment:
VMware vCenter Server 6.7 Update 3 or later
VMware client integration plugin
vSphere Flash client
For the Cisco Container Platform Control Plane, each master and worker node VM requires 2 vCPUs, 8 GB of memory, and a 40 GB HDD.
For Cisco Container Platform tenant clusters, each master and worker node VM requires 2 vCPUs, 16 GB of memory, and a 40 GB HDD. You can modify the vCPU and memory configurations when you deploy a new tenant cluster.
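For rough capacity planning, the per-node figures above can be rolled up per cluster. The following sketch is illustrative only: the four-node control plane and the 1 master + 3 worker tenant layout are example values, not requirements.

```python
# Rough capacity estimate using the per-node defaults quoted above
# (tenant values are tunable at deployment time).

def cluster_footprint(nodes: int, vcpus: int, mem_gb: int, disk_gb: int) -> dict:
    """Total vCPU, memory, and disk for `nodes` identically sized VMs."""
    return {"vcpus": nodes * vcpus, "mem_gb": nodes * mem_gb, "disk_gb": nodes * disk_gb}

# Control plane: 4 nodes (example) at 2 vCPU / 8 GB / 40 GB each.
control_plane = cluster_footprint(nodes=4, vcpus=2, mem_gb=8, disk_gb=40)

# Tenant cluster: 1 master + 3 workers (example) at the 2 vCPU / 16 GB / 40 GB default.
tenant = cluster_footprint(nodes=1 + 3, vcpus=2, mem_gb=16, disk_gb=40)

print(control_plane)  # {'vcpus': 8, 'mem_gb': 32, 'disk_gb': 160}
print(tenant)         # {'vcpus': 8, 'mem_gb': 64, 'disk_gb': 160}
```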
The following topics provide information on the necessary resource management requirements:
Note: You must use the Enterprise Plus license to set up VMware clusters with HA and DRS enabled. For more information on the supported versions of VMware, see Supported Version Matrix.
You must enable DRS and HA on vCenter for the following reasons:
DRS continuously monitors resource utilization across vSphere servers and intelligently balances VMs across them.
HA provides easy-to-use, cost-effective high availability for applications running on virtual machines.
Step 1: In the vSphere Web Client, navigate to the host or cluster on which you want to deploy Cisco Container Platform.
Step 2: Click the Configure tab.
Step 3: Under Services, click vSphere DRS, and then click Edit.
Step 4: In the right pane of the Edit Cluster Settings window, check the Turn ON vSphere DRS check box, and then click OK.
Step 5: Under Services, click vSphere Availability, and then click Edit.
Step 6: In the right pane of the Edit Cluster Settings window, check the Turn ON vSphere HA check box, and then click OK.
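If you manage vCenter programmatically, the same two settings can be applied through the vSphere API. A minimal pyVmomi sketch follows; the vCenter address, credentials, and the cluster name ccp-cluster are placeholders.

```python
# Minimal pyVmomi sketch: enable DRS and HA on an existing cluster.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# Find the cluster by walking the inventory.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "ccp-cluster")

spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True),
    dasConfig=vim.cluster.DasConfigInfo(enabled=True),
)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
# Wait for the task with your preferred helper, then:
Disconnect(si)
```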
You must enable the Time Synchronization services on each host within your vSphere environment. If you do not, errors caused by timing differences between hosts may cause the Cisco Container Platform installation to fail.
Step 1: In the vSphere Web Client, navigate to the host or cluster on which you want to deploy Cisco Container Platform.
Step 2: Click the Configure tab.
Step 3: From the left pane, expand System, and then click Time Configuration.
Step 4: In the right pane, click Edit.
Step 5: In the Edit Time Configuration window, check the Use Network Time Protocol (Enable NTP client) check box, enter the NTP server details, and then click OK.
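The NTP settings can also be applied through the API. A minimal pyVmomi sketch, reusing the content object from the DRS/HA example above; ntp.example.com is a placeholder NTP server.

```python
# Minimal pyVmomi sketch: point every host at an NTP server and start ntpd.
from pyVmomi import vim

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    ntp = vim.host.DateTimeConfig(
        ntpConfig=vim.host.NtpConfig(server=["ntp.example.com"]))
    host.configManager.dateTimeSystem.UpdateDateTimeConfig(config=ntp)

    svc = host.configManager.serviceSystem
    svc.UpdateServicePolicy(id="ntpd", policy="on")  # start with the host
    svc.StartService(id="ntpd")                      # start it now
```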
The following topics provide information on the necessary network requirements:
If you have chosen Contiv as the CNI, the pod-to-pod traffic across nodes is tunneled by the VXLAN protocol.
Cisco Container Platform creates VMs that are attached to a Port Group on either a vSphere Standard Switch (VSS) or a Distributed Virtual Switch (DVS). The HyperFlex installer creates VSS switches in vSphere for the networks that are defined during installation. You need to create either VSS or DVS switches for managing the VM traffic.
The following topics provide information on configuring a VSS or a DVS.
Step 1: In the vSphere Web Client, navigate to the host or cluster on which you want to deploy Cisco Container Platform.
Step 2: Click the Configure tab.
Step 3: Expand Networking, and then select Virtual switches.
Step 4: Click Add host networking.
Step 5: Choose Virtual Machine Port Group for a Standard Switch as the connection type for which you want to use the new standard switch and click Next.
Step 6: Select New standard switch and click Next.
Step 7: Add physical network adapters to the new standard switch.
Step 8: Under Assigned adapters, click Add adapters.
Step 9: Select one or more physical network adapters from the list.
Step 10: From the Failover order group drop-down list, choose from the Active or Standby failover lists.
Step 11: For higher throughput and to provide redundancy, configure at least two physical network adapters in the Active list.
Step 12: Click OK.
Step 13: Enter connection settings for the adapter or the port group as follows:
Step 14: On the Ready to Complete screen, click OK.
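For scripted setups, the same switch and port group can be created through the vSphere API. A minimal pyVmomi sketch, assuming host is a vim.HostSystem and that vmnic1, vmnic2, ccp-vss, and ccp-vm-network are placeholder adapter, switch, and port group names.

```python
# Minimal pyVmomi sketch of the same VSS setup.
from pyVmomi import vim

net = host.configManager.networkSystem

# New standard switch backed by two active uplinks for redundancy.
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1", "vmnic2"]),
)
net.AddVirtualSwitch(vswitchName="ccp-vss", spec=vss_spec)

# Port group for the Cisco Container Platform VM traffic.
pg_spec = vim.host.PortGroup.Specification(
    name="ccp-vm-network",
    vlanId=0,
    vswitchName="ccp-vss",
    policy=vim.host.NetworkPolicy(),
)
net.AddPortGroup(portgrp=pg_spec)
```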
Step 1: In the Navigation pane, click the DVS switch.
Step 2: In the right pane, click the Hosts tab.
Step 3: Click the Actions icon, and then click the Add and Manage Hosts radio button.
Step 4: In the Select tasks screen, click the Add Hosts radio button, and then click Next.
Step 5: In the Select hosts screen, click the Add Hosts icon.
Step 6: In the Select new hosts screen, check the check box next to the hosts that you want to add, and then click OK.
Step 7: In the Select network adapter tasks screen, click Next.
Step 8: In the Manage physical network adapters screen, click the network switch that you want to configure, and then click Assign uplink.
Step 9: Repeat Step 8 for all the networks, and then click Next.
Step 10: In the Manage VMkernel network adapters screen, click Next.
Step 11: In the Analyze impact screen, click Next.
Step 12: In the Ready to complete screen, click Finish.
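Adding a host to an existing DVS can likewise be scripted. A minimal pyVmomi sketch, assuming the dvs and host objects have already been looked up and that vmnic2 is the uplink adapter.

```python
# Minimal pyVmomi sketch of adding a host uplink to an existing DVS.
from pyVmomi import vim

spec = vim.DistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion  # required for reconfigure

member = vim.dvs.HostMember.ConfigSpec()
member.operation = vim.ConfigSpecOperation.add
member.host = host
member.backing = vim.dvs.HostMember.PnicBacking(
    pnicSpec=[vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic2")])

spec.host = [member]
task = dvs.ReconfigureDvs_Task(spec)  # monitor the task to completion
```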
Cisco Container Platform requires a DHCP server to be present. The Cisco Container Platform installer VM and upgrade VM get their primary interface IP addresses from the DHCP server. You must ensure that you have configured a DHCP server.
If the DHCP server does not provide the location of the NTP service, enter the NTP address in the Installer UI.

Cisco Container Platform uses static IP addresses for all cluster nodes and the CCP Control Plane master node VIP, which provides worker nodes with a consistent IP address. Additionally, a load balancer VIP is used as an external IP address for NGINX Ingress in each Kubernetes cluster. These VIPs are configured using IP pools. The static IP addresses are assigned from the same subnet as the load balancer VIP addresses, and you must ensure that the static IP address pools for the subnet do not overlap with a DHCP pool.
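Before installation, it is worth checking the pools on paper. A small sketch of such a check using Python's ipaddress module; all address ranges are placeholders.

```python
# Paper check that a planned static IP pool stays clear of the DHCP scope
# on the shared subnet.
import ipaddress

dhcp_start = ipaddress.ip_address("10.10.1.100")
dhcp_end = ipaddress.ip_address("10.10.1.199")

# Planned static pool: 10.10.1.20 - 10.10.1.39 (nodes, master VIP, LB VIPs).
static_pool = [ipaddress.ip_address("10.10.1.20") + i for i in range(20)]

overlap = [str(ip) for ip in static_pool if dhcp_start <= ip <= dhcp_end]
assert not overlap, f"Static IPs collide with the DHCP scope: {overlap}"
```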
You must ensure that the following conditions are met:
The subnet is routable to and from the VMware vCenter server.
The client install machine is routable to the network during the Cisco Container Platform control plane install.
The network allows communication between Cisco Container Platform VM instances. You must not use a private LAN.
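A quick way to sanity-check the routability conditions above from the client install machine is a plain TCP probe; vcenter.example.com and port 443 are placeholders.

```python
# Quick TCP probe from the client install machine.
import socket

def reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(reachable("vcenter.example.com", 443))
```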
The following table summarizes the static and DHCP IP address requirements for the Cisco Container Platform components:

Component | Static IP for Calico | Static IP for ACI-CNI | DHCP IP
---|---|---|---
Installer VM | 0 | 0 | 1
Tenant clusters | 2 + number of masters + number of load balancer VIPs desired for applications + number of workers | See Cisco ACI and Kubernetes Integration | 0
Control Plane and Cisco Container Platform web interface | 6 | 6 | 0
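The tenant-cluster row in the table reduces to simple arithmetic. A small sketch; the node and VIP counts are examples.

```python
# Static IPs needed by one tenant cluster with Calico, per the table above.
def tenant_static_ips(masters: int, workers: int, app_lb_vips: int) -> int:
    return 2 + masters + app_lb_vips + workers

# Example: 1 master, 3 workers, 3 load balancer VIPs for applications.
print(tenant_static_ips(masters=1, workers=3, app_lb_vips=3))  # 9
```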
By default, the Cisco Container Platform Control Plane pod network uses the 192.168.0.0/16 subnet for Calico. If you have routed IP addresses in that space, you must assign another RFC 1918 range for your VXLAN network. It does not need to be a full /16 subnet; a /22 subnet is adequate for the Cisco Container Platform control plane.
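A quick way to test for such a collision is an overlap check with Python's ipaddress module; the routed prefixes below are placeholders.

```python
# Check whether the default Calico pod subnet collides with prefixes that
# are already routed in your environment.
import ipaddress

pod_net = ipaddress.ip_network("192.168.0.0/16")
routed = [ipaddress.ip_network("192.168.10.0/24"),
          ipaddress.ip_network("10.0.0.0/8")]

if any(pod_net.overlaps(prefix) for prefix in routed):
    # Any RFC 1918 range works; a /22 is enough for the control plane.
    print("Overlap found; assign another range, for example 172.20.0.0/22")
```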
Note: This section is applicable only if you want to use a HyperFlex environment. It is not required for running VMware on UCS.
Cisco Container Platform is supported on all hardware configurations that are supported by the required HyperFlex software versions. For more information on HyperFlex hardware configurations, refer to the UCS HyperFlex product documentation.
The following topics provide information on the necessary HyperFlex integration requirements:
The datastore is required for the following purposes:
Provisioning persistent volume storage
Deploying the Cisco Container Platform tenant base VM
Step 1: Log in to the HX Connect UI using the VMware vCenter SSO administrator credentials.
Step 2: In the left pane, click .
Step 3: Perform these steps to create a datastore for provisioning the Kubernetes persistent volume storage and deploying the Cisco Container Platform tenant base VM:
The FlexVolume plug-in requires a host-only link between each VM that runs Kubernetes and the Internet Small Computer System Interface (iSCSI) target on the ESX host.
Step 1: Log in to the HX Connect UI.
Step 2: Choose .
Step 3: Click Enable All Node and wait until the KUBERNETES STORAGE PROVISIONING option is enabled.
Step 1: Open an SSH session to the HyperFlex 3.0 Platform Installer VM or one of the HyperFlex Controller VMs and log in as the root user.
Step 2: Perform these steps to get the vCenter details that you need to enter when you run the add_vswitch.py script:
Step 3: Navigate to the following location:
Step 4: Run the add_vswitch.py script.
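The vCenter details that Step 2 refers to typically include the datacenter and cluster names. Assuming that is what the script asks for, the following sketch lists them; content comes from a pyVmomi connection as in the earlier examples.

```python
# List datacenter and cluster names from the vCenter inventory.
from pyVmomi import vim

for entity in content.rootFolder.childEntity:
    if isinstance(entity, vim.Datacenter):
        print("datacenter:", entity.name)
        for child in entity.hostFolder.childEntity:  # clusters and standalone hosts
            print("  cluster/host:", child.name)
```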
Cisco ACI enables you to group your application into End Point Groups (EPGs), define policies for the EPGs, and then deploy network policies on the ACI fabric. The policy enforcement is implemented using the spine and leaf architecture of the ACI fabric.
The following figure shows the components of a Cisco Container Platform ACI integrated network topology.
The main components of the network topology are as follows:
ACI Fabric includes two spine nodes, two leaf nodes, and three APIC controllers. You can choose the number of spine and leaf nodes and APIC controllers as per your network requirements.
HyperFlex Fabric Interconnect (FI) includes two fabric interconnect switches connected between the ESXi hosts and the ACI leaf switches.
ESXi hosts include a UCS server such as the UCS C220 M4.
The ASR router is connected to an ACI border leaf for external internet access.
If you are using ACI, ensure that you have configured the following settings on the APIC controller:
Assign a VLAN ID other than 4094 for the Infra VLAN, as 4094 is reserved for provisioning the HyperFlex fabric interconnect
Create a common tenant
Create a Virtual Route Forwarder (VRF) in the common tenant
Create at least one L3OUT
Create an Access Entity Profile (AEP) for the ACI tenant physical domain
Create an AEP for L3OUT
Create a Virtual Machine Manager (VMM) domain which connects to vSphere
For more information on configuring an APIC controller, refer to the latest ACI documentation.
Ensure that you have configured the following settings on HyperFlex FI:
Configure QoS:
From the left pane, click LAN.
From the right pane, click the QoS tab, and then configure QoS.
Ensure that the tenant VLAN is allowed
Once the Cisco Container Platform Control Plane and management node networking are configured, you can access the HyperFlex cluster on vSphere and install Cisco Container Platform. Each time you create a tenant cluster, the ACI constructs such as the L3OUT, VRF, and AEP stored in the common tenant are reused.
Cisco Container Platform supports GPU devices in passthrough mode to enable AI/ML workloads.
This section describes the requirements on the ESXi and vCenter hosts to integrate the GPU devices with Cisco Container Platform.
Step 1: Follow these steps to enable GPU passthrough for the devices that you want to use:
Step 2: Follow these steps to enable shared direct passthrough for the GPU device:
Step 3: Follow these steps to allow VM access to the GPU device:
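Step 1 can also be performed through the vSphere API. A minimal pyVmomi sketch that flags an NVIDIA PCI device for passthrough on the host; matching on the vendor name is a simplification, and host is assumed to be a vim.HostSystem as in the earlier sketches.

```python
# Minimal pyVmomi sketch: flag the GPU's PCI device for passthrough on the host.
from pyVmomi import vim

gpu = next(d for d in host.hardware.pciDevice if "NVIDIA" in d.vendorName)
cfg = vim.host.PciPassthruConfig(id=gpu.id, passthruEnabled=True)
host.configManager.pciPassthruSystem.UpdatePassthruConfig(config=[cfg])
# The host typically requires a reboot before the device becomes active.
```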