Installing Cisco Container Platform

Installing Cisco Container Platform is a three-step process:

  • Importing Cisco Container Platform Tenant Base VM

    The Cisco Container Platform tenant base VM contains the container image and the files that are necessary to create the tenant Kubernetes clusters and to configure monitoring, logging, container network interfaces (CNI), and persistent volumes.

  • Deploying Installer VM

    The Installer VM contains the VM image and the files for installing other components such as Kubernetes and the Cisco Container Platform application.

  • Deploying Cisco Container Platform

    The Cisco Container Platform Control Plane is set up using an installer UI. After the installer VM is switched on, the URL of the installer appears on the vCenter Web console.

Importing Cisco Container Platform Tenant Base VM

Before you begin

  • Ensure that you have configured the storage and networking requirements. For more information, see HyperFlex Integration Requirements and Network Requirements.

  • Ensure that vSphere has an Enterprise Plus license, which supports DRS and vSphere HA.

  • We recommend that you use the Flash version of the vSphere Web Client.

Procedure


Step 1

Log in to the VMware vSphere Web Client as an administrator.

Step 2

In the Navigation pane, right-click the cluster on which you want to deploy Cisco Container Platform, and then choose Deploy OVF Template.

The Deploy OVF Template wizard appears.
Step 3

In the Select template screen, perform these steps:

  1. Click the URL radio button, and enter the URL of the Cisco Container Platform Tenant OVA.

    Alternatively, click the Local file radio button, and browse to the location where the Cisco Container Platform tenant OVA is saved on your computer.
    Note 
    The format of the Tenant OVA filename is as follows:
    ccp-tenant-image-x.y.z-ubuntuXX-a.b.c.ova

    Where x.y.z corresponds to the version of Kubernetes and a.b.c corresponds to the version of Cisco Container Platform.

    The Version Mapping Table lists the Cisco Container Platform version, Kubernetes version, and image name mapping for each release. A sketch for reading these versions from the filename follows this list.

  2. Click Next.
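
If you automate selection or verification of the tenant OVA, the Kubernetes and Cisco Container Platform versions can be read from the filename itself. The following Python sketch illustrates this under the naming convention in the note above; the sample filename is hypothetical, so use the name listed in the Version Mapping Table for your release.

    import re

    # Tenant OVA names follow ccp-tenant-image-x.y.z-ubuntuXX-a.b.c.ova, where
    # x.y.z is the Kubernetes version and a.b.c is the Cisco Container Platform version.
    PATTERN = re.compile(
        r"^ccp-tenant-image-(?P<k8s>\d+\.\d+\.\d+)-ubuntu\d+-(?P<ccp>\d+\.\d+\.\d+)\.ova$"
    )

    def parse_tenant_ova(filename):
        match = PATTERN.match(filename)
        if not match:
            raise ValueError("Unexpected tenant OVA filename: " + filename)
        return match.group("k8s"), match.group("ccp")

    # Hypothetical filename used only for illustration.
    print(parse_tenant_ova("ccp-tenant-image-1.11.3-ubuntu18-3.1.0.ova"))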

Step 4

In the Select name and location screen, perform these steps:

  1. In the Name field, enter a name for the Cisco Container Platform tenant base VM.

    Note 
    Note down the Cisco Container Platform tenant base VM name; you will need to specify it while creating a cluster.
  2. In the Browse tab, choose the data center where you want to deploy Cisco Container Platform.

  3. Click Next.

Step 5

In the Select a resource screen, choose a cluster where you want to run the Cisco Container Platform tenant base VM, and then click Next.

Step 6

In the Review details screen, verify the Cisco Container Platform tenant base VM details, and then click Next.

The Select storage screen appears.
Figure 1. Select Storage Screen
Step 7

In the Select storage screen, perform these steps:

  1. From the Select virtual disk format drop-down list, choose Thin Provision to allocate storage on demand.

  2. In the Filters tab, choose a destination datastore for the Cisco Container Platform tenant base VM.

  3. Click Next.

The Select networks screen appears.
Figure 2. Select Networks Screen
Step 8

In the Select networks screen, perform these steps:

  1. From the Destination Network column, choose a network for each source network that is available in the Cisco Container Platform tenant base VM.

  2. Click Next.

Step 9

In the Customize template screen, click Next.

Step 10

In the Ready to complete screen, verify the Cisco Container Platform tenant base VM settings, and then click Finish.

The Cisco Container Platform tenant base VM import takes a few minutes to complete.
Note 
You can leave the tenant base VM powered off and continue to deploy the installer VM.

Deploying Installer VM

Before you begin


Note

This deployment is for new installations of Cisco Container Platform. For upgrades, see Upgrading Cisco Container Platform.

Ensure that you have imported the latest tenant image version during the OVA import for the Cisco Container Platform tenant base VM.

Procedure


Step 1

Log in to the VMware vSphere Web Client as an administrator.

Step 2

In the Navigation pane, right-click the cluster on which you want to deploy Cisco Container Platform, and then choose Deploy OVF Template.

The Deploy OVF Template wizard appears.
Step 3

In the Select template screen, perform these steps:

  1. Click the URL radio button, and enter the URL of the Installer OVA.

    Alternatively, click the Local file radio button, and browse to the location where the Installer OVA is saved on your computer.
    Note 
    The format of the Installer OVA filename is as follows:
    kcp-vm-x.y.z.ova

    Where x, y, and z correspond to the major, minor, and patch release of Cisco Container Platform.

  2. Click Next.

Step 4

In the Select name and location screen, perform these steps:

  1. In the Name field, enter a name for the installer VM.

  2. In the Browse tab, choose the data center where you want to deploy Cisco Container Platform.

  3. Click Next.

Step 5

In the Select a resource screen, choose the cluster where you want to run the installer VM, and then click Next.

Step 6

In the Review details screen, verify the template details, and then click Next.

Step 7

In the Select storage screen, perform these steps:

  1. From the Select virtual disk format drop-down list, choose Thin Provision to allocate storage on demand.

  2. In the Filters tab, choose a destination datastore to store the installer VM.

  3. Click Next.

Step 8

In the Select networks screen, perform these steps:

  1. From the Destination Network column, choose a network for each source network that is available in the installer VM.

    Note 
    The selected network must have access to vCenter and the tenant VM networks.
  2. Click Next.

The Customize template screen appears.

Figure 3. Customize Template Screen
Step 9

In the Customize template screen, enter the following optional parameters to customize the deployment properties:

  1. Expand CCP, and in the SSH public key for installer node access field, enter an SSH public key.

    You can use this key to ssh to the installer VM.
    Note 
    • Ensure that you enter the public key in a single line.

    • If you do not have an SSH key pair, you can generate it using the ssh-keygen command.

    • Ensure that you use the Ed25519 or ECDSA format for the public key.

      Note: As RSA and DSA are less secure formats, Cisco prevents the use of these formats.

  2. Expand Advanced and enter the optional fields as necessary.

    In the CIDR for Kubernetes pod network field, 192.168.0.0/24 is displayed as the default pod network CIDR of the Kubernetes cluster for the installer. If the CIDR IP addresses conflict with the tenant cluster VM network or the vCenter network, you need to set a different value for the CIDR (see the overlap check after this list).
    This CIDR is the single large CIDR from which smaller CIDRs are automatically allocated to each node for allocating IP addresses to the pods in the Kubernetes cluster. For more information, refer to https://kubernetes.io/docs/setup/scratch/#network-connectivity.
  3. Click Next.
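
Before accepting the default or entering a different pod network CIDR, you can check it against your vCenter and tenant cluster VM networks for overlap. The following is a minimal Python sketch using the standard ipaddress module; the network values are placeholders for your environment.

    import ipaddress

    def overlapping(pod_cidr, other_cidrs):
        # Return every CIDR in other_cidrs that overlaps the proposed pod network CIDR.
        pod_net = ipaddress.ip_network(pod_cidr)
        return [c for c in other_cidrs if ipaddress.ip_network(c).overlaps(pod_net)]

    # Placeholder values: replace with your vCenter and tenant cluster VM network CIDRs.
    conflicts = overlapping("192.168.0.0/24", ["10.10.0.0/16", "192.168.0.0/22"])
    if conflicts:
        print("Choose a different pod CIDR; it overlaps with:", conflicts)
    else:
        print("No overlap detected.")

If any overlap is reported, choose a pod CIDR from a private range that is unused in your environment.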

Step 10

In the Ready to complete screen, verify the installer VM deployment settings, and then click Finish.

Step 11

Click the Power on button to switch on the VM.

Figure 4. Switching on Installer VM
Once the installer VM is switched on, the installer UI takes a few minutes to become ready. You can view the status of the Installer UI using the Web console of vCenter. When the installer UI is ready, you can access it using the URL from the Web console.

You can use the ssh private key to access the Installer, control plane VMs, or the tenant cluster VMs. However, logging into these VMs using a username and password is not supported.

Caution 
After deploying Cisco Container Platform, do not change the location of the Control Plane VMs by modifying the datacenter or folder location in vSphere. Changing these settings will adversely impact the management of clusters.

Deploying Cisco Container Platform

The Cisco Container Platform Control Plane is set up using an installer UI. After the installer VM is switched on, the URL of the installer appears on the vCenter Web console.

Procedure


Step 1

Obtain the URL from the vCenter Web console and use a browser to open the installer UI.

The Welcome screen appears.
Figure 5. Welcome Screen
Step 2

Click Install.

The Connect your Cloud screen appears.

Figure 6. Connect your Cloud Screen
Step 3

In the Connect your Cloud screen, enter the following information:

  1. In the VCENTER HOSTNAME OR IP ADDRESS field, enter the IP address of the vCenter instance that you want to use.

  2. In the PORT field, enter the port number that your vCenter server uses.

    Note 
    The default port for vCenter is 443.
  3. In the VCENTER USERNAME field, enter the username of the user with administrator access to the vCenter instance.

  4. In the VCENTER PASSPHRASE field, enter the passphrase of the vCenter user.

  5. Click CONNECT.

    The Placement Properties screen appears.

    Figure 7. Placement Properties Screen
Step 4

In the Placement Properties screen, enter the following information:

  1. From the VSPHERE DATACENTER drop-down list, choose the datacenter.

  2. From the VSPHERE CLUSTER drop-down list, choose the cluster.

  3. From the VSPHERE DATASTORE drop-down list, choose the datastore.

    Caution 
    Do not use a datastore that is located in a nested folder or in a Storage DRS (SDRS) cluster.
  4. From the VSPHERE NETWORK drop-down list, choose the network.

  5. In the BASE VM IMAGE field, enter the Cisco Container Platform tenant base VM name from Step 4 of the Importing Cisco Container Platform Tenant Base VM task.

    Caution 
    Do not select a VM name that is located in a nested folder.
  6. Click NEXT.

The Cluster Configuration screen appears.

Figure 8. Cluster Configuration Screen
Step 5

In the Cluster Configuration screen, enter the following information:

  1. From the NETWORK PLUGIN FOR TENANT KUBERNETES CLUSTERS drop-down list, choose one of the following options for network connectivity:

    • ACI-CNI

    • Calico

    • Contiv (Tech Preview)

    Note 
    For more information on the network plugins, see Container Network Interface Plugins.
  2. In the CIDR FOR CONTROLLER KUBERNETES POD NETWORK field, 192.168.0.0/16 is displayed as the default pod network CIDR of the Kubernetes cluster for the installer. If the CIDR IP addresses conflict with the tenant cluster VM network or the vCenter network, you need to set a different value for the CIDR.

    Note 
    This CIDR is the single large CIDR from which smaller CIDRs are automatically allocated to each node for allocating IP addresses to the pods in the Kubernetes cluster. For more information, refer to https://kubernetes.io/docs/setup/scratch/#network-connectivity.
  3. In the USERNAME FOR NODE ACCESS field, enter the username of the user who can ssh into the Cisco Container Platform Control Plane nodes.

  4. In the SSH PUBLIC KEY FOR NODE ACCESS field, enter an ssh public key.

    You can use this key to SSH into the Control Plane nodes. A sketch for checking the key format follows this list.

    Note:

    • Ensure that you enter the public key in a single line.

    • If you do not have an SSH key pair, you can generate it using the ssh-keygen command.

    • Ensure that you use the Ed25519 or ECDSA format for the public key.

      Note: As RSA and DSA are less secure formats, Cisco prevents the use of these formats.

  5. Click NEXT.
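
Before you paste the public key, you can confirm that it is a single line in one of the accepted formats. The following Python sketch shows such a check; the key path is only an assumption for illustration, so adjust it to wherever your key pair was generated.

    from pathlib import Path

    # Accepted key types per the note above; RSA and DSA keys are not accepted.
    ACCEPTED_PREFIXES = ("ssh-ed25519", "ecdsa-sha2-")

    def check_public_key(path):
        text = Path(path).read_text().strip()
        if "\n" in text:
            raise ValueError("The public key must be entered as a single line.")
        if not text.startswith(ACCEPTED_PREFIXES):
            raise ValueError("Use an Ed25519 or ECDSA key (for example, ssh-keygen -t ed25519).")
        return text

    # Hypothetical key path used only for illustration.
    print(check_public_key(Path.home() / ".ssh" / "id_ed25519.pub"))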

The Network Settings screen appears.

Figure 9. Network Settings Screen
Step 6

In the Network Settings screen, enter the following information:

Note 
These network settings will be used to configure the Cisco Container Platform web interface.
  1. In the NETWORK NAME field, enter the name of the network that you want to use.

  2. In the SUBNET CIDR field, enter a CIDR for your subnet.

  3. In the GATEWAY IP field, enter the gateway IP address that you want to use.

  4. Under NAMESERVER, enter the IP address of the DNS nameserver that you want to use.

    You can click +NAMESERVER to enter IP addresses of additional nameservers.
  5. Under POOLS, enter a range for the VIP network pool by specifying the First IP and Last IP that are within the Subnet CIDR specified above. The VIP network pool range prevents tenant clusters from being provisioned with IP address ranges from overlapping subnets (see the range check after this list).

    The IP address for the Control Plane is also allocated from this network pool range.
    You can click +POOL to enter multiple pools in the subnet.
    Note 
    You must ensure that these IP addresses are not part of a DHCP pool.
  6. Click SAVE.
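
To confirm that a pool range lies within the subnet CIDR entered above, and to see how many VIP addresses it provides, you can run a small check such as the following Python sketch; the subnet and pool boundaries shown are placeholders for your environment.

    import ipaddress

    def validate_pool(subnet_cidr, first_ip, last_ip):
        # Check that first_ip..last_ip is an ordered range that lies inside subnet_cidr.
        subnet = ipaddress.ip_network(subnet_cidr)
        first = ipaddress.ip_address(first_ip)
        last = ipaddress.ip_address(last_ip)
        if first > last:
            raise ValueError("First IP must not be greater than Last IP.")
        if first not in subnet or last not in subnet:
            raise ValueError("The pool range must lie within the subnet CIDR.")
        return int(last) - int(first) + 1  # number of addresses in the pool

    # Placeholder values: replace with your subnet and pool boundaries,
    # and keep the range out of any DHCP pool.
    print(validate_pool("10.10.20.0/24", "10.10.20.50", "10.10.20.80"))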

The Authentication screen appears.

Figure 10. Authentication Screen
Step 7

In the Authentication screen, click the Enable button next to the type of authentication that you want to configure.

Caution 
Use of local authentication is not recommended and is considered less secure for production data.
  1. If you have enabled Active Directory, specify the following information in the Active Directory screen:

    1. Use the toggle button to enable or disable validation of Active Directory settings.

    2. In the SERVER IP ADDRESS field, enter the IP address of the AD server.

    3. In the PORT field, enter the port number for the AD server.

    4. To establish a secure connection using SSL/TLS, enable STARTTLS.

    5. To ensure security of your data, disable SKIP CERTIFICATE VERIFICATION.

      Caution 
      If you enable SKIP CERTIFICATE VERIFICATION, TLS will accept any certificate presented by the AD server. In this mode, TLS is susceptible to man-in-the-middle attacks.
    6. In the BASE DN field, specify the domain name of the AD server.

      Note 
      Base DN is the Distinguished Name for the base entity. All searches for users and groups will be scoped to this distinguished name.
    7. In the ADMIN GROUP QUERY field, enter the AD group that is associated with the Administrator role.

    8. In the SERVICE ACCOUNT DN field, enter the distinguished name (DN) of the service account that is used for accessing the LDAP server.

    9. In the SERVICE ACCOUNT PASSPHRASE field, enter the passphrase of the AD account.

    10. Click SAVE.

  2. If you have enabled Local (not recommended), specify the following information in the LOCAL AUTHENTICATION screen:

    1. In the LOCAL ADMIN USERNAME field, enter the admin username.

    2. In the LOCAL ADMIN PASSPHRASE field, enter a passphrase.

    3. In the CONFIRM LOCAL ADMIN PASSPHRASE field, re-enter the admin passphrase.

    4. Click SAVE.

The Control Plane Settings screen appears.

Figure 11. Control Plane Settings Screen
Step 8

In the Control Plane Settings screen, enter the following information:

  1. In the CONTROL PLANE NAME field, enter the name of the Cisco Container Platform cluster.

    Note 
    • The cluster name must start with an alphanumeric character (a-z, A-Z, 0-9). It can contain a combination of hyphen (-) symbols and alphanumeric characters (a-z, A-Z, 0-9). The maximum length of the cluster name is 46 characters (a name check sketch follows this list).

    • Deployment of the installer VM fails if another Control Plane cluster with the same name already exists on the same datastore. You must ensure that you specify a unique name for the Control Plane cluster.

  2. In the CCP VERSION field, enter the version of the Cisco Container Platform cluster.

  3. From the CCP LICENSE ENTITLEMENT drop-down list, choose an entitlement option that indicates the type of Smart Licensing that you want to use.

    Note 
    The Partner option will only be used in conjunction with a Not for Retail (NFR) or Trial license.
  4. Expand Advanced Settings, and in the NTP SERVERS field, enter a list of the NTP servers in your environment.

    This field is optional.
  5. Click DEPLOY and then monitor the installation progress through the vCenter Web console.
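
If the Control Plane name comes from a script or template, the naming rule in this step can be validated up front. The following is a minimal Python sketch of that check; the sample names are illustrative only.

    import re

    # Starts with an alphanumeric character, then alphanumerics or hyphens,
    # up to 46 characters in total.
    NAME_RULE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9-]{0,45}$")

    def is_valid_control_plane_name(name):
        return bool(NAME_RULE.match(name))

    print(is_valid_control_plane_name("ccp-control-plane-1"))  # True
    print(is_valid_control_plane_name("-starts-with-hyphen"))  # False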

Caution 
After deploying Cisco Container Platform, do not change the location of the Control Plane VMs by modifying the datacenter or folder location in vSphere. Changing these settings will adversely impact the management of clusters.