Administering vSphere On-prem Clusters

You can create, upgrade, modify, or delete vSphere on-prem Kubernetes clusters using the Cisco Container Platform web interface.

Cisco Container Platform supports v2 and v3 vSphere clusters. A v2 vSphere cluster uses a single master node for its Control Plane, whereas a v3 vSphere cluster can use one or three master nodes. v3 is the preferred cluster type because its multi-master approach ensures high availability for the Control Plane.


Note

The UI differences between v2 and v3 clusters are called out in the cluster creation task.

This chapter contains the following topics:

  • Creating Kubernetes Clusters on vSphere On-prem Clusters

  • Configuring Add-ons for v3 Clusters

  • Customizing Registry Size for Harbor Instance

  • Deleting Add-ons for v3 Clusters

  • Upgrading vSphere Clusters

  • Scaling vSphere Clusters

  • Configuring Node Pools

  • Deleting vSphere Clusters

Creating Kubernetes Clusters on vSphere On-prem Clusters

Procedure


Step 1

From the left pane, click Clusters, and then click the vSphere tab.

Step 2

Click NEW CLUSTER.

Step 3

In the Basic Information screen, specify the following information:

  1. From the INFRASTRUCTURE PROVIDER drop-down list, choose the infrastructure provider that you want to use for your Kubernetes cluster.

    See also Adding vSphere Provider Profile.
  2. In the KUBERNETES CLUSTER NAME field, enter a name for your Kubernetes tenant cluster.

  3. In the KUBERNETES VERSION drop-down list, choose the version of Kubernetes that you want to use for creating the cluster.

  4. If you are using ACI, specify the ACI profile. See also Adding ACI Profile.

  5. Click NEXT.

Step 4

In the Provider Settings screen, specify the following information:

  1. From the DATA CENTER drop-down list, choose the data center that you want to use.

  2. From the CLUSTERS drop-down list, choose a cluster.

    Note 

    Ensure that DRS and HA are enabled on the cluster that you choose. For more information on enabling DRS and HA on clusters, refer to the Cisco Container Platform Installation Guide.

  3. From the DATASTORE drop-down list, choose a datastore.

    Note 
    Ensure that the datastore is accessible to the hosts in the cluster.
  4. From the VM TEMPLATE drop-down list, choose a VM template.

  5. From the NETWORK drop-down list, choose a network.

    Note 
    • Ensure that you select a subnet with an adequate number of free IP addresses. For more information, see Managing Networks. The selected network must have access to vCenter.

    • For v2 clusters that use HyperFlex systems:

      • The selected network must have access to the HyperFlex Connect server to support HyperFlex Storage Provisioners.

      • For HyperFlex Local Network, select k8-priv-iscsivm-network to enable HyperFlex Storage Provisioners.

  6. From the RESOURCE POOL drop-down list, choose a resource pool.

  7. Click NEXT.

Step 5

In the Node Configuration screen, specify the following information:

  1. From the GPU TYPE drop-down list, choose a GPU type.

    Note 
    GPU Configuration is applicable only if you have GPUs in your HyperFlex cluster.
  2. For v3 clusters, under MASTER, choose the number of master nodes, and their VCPU and memory configurations.

    Note 
    You may skip this step for v2 clusters. You can configure the number of master nodes only for v3 clusters.
  3. Under WORKER, choose the number of worker nodes, and their VCPU and memory configurations.

  4. In the SSH USER field, enter the SSH username.

  5. In the SSH KEY field, enter the SSH public key that you want to use for creating the cluster.

    Note 
    Ensure that you use the Ed25519 or ECDSA format for the public key. Because RSA and DSA are less secure formats, Cisco prevents their use. For an example of generating a compatible key pair, see the sketch that follows this step list.
  6. In the NUMBER OF LOAD BALANCERS field, enter the number of load balancer IP addresses for this cluster.

    See also Load Balancer Services.

  7. From the SUBNET drop-down list, choose the subnet that you want to use for this cluster.

  8. In the POD CIDR field, enter the IP addresses for the pod subnet in CIDR notation, for example, 192.168.0.0/16.

  9. In the DOCKER HTTP PROXY field, enter an HTTP proxy for Docker.

  10. In the DOCKER HTTPS PROXY field, enter an HTTPS proxy for Docker.

  11. In the DOCKER BRIDGE IP field, enter a valid CIDR to override the default Docker bridge.

  12. In the VM USERNAME field, enter the VM username that you want to use as the login for the VM.

  13. Under NTP POOLS, click ADD POOL to add a pool.

  14. Under NTP SERVERS, click ADD SERVER to add an NTP server.

  15. Under ROOT CA REGISTRIES, click ADD REGISTRY to add a root CA certificate to allow tenant clusters to securely connect to additional services.

  16. Under INSECURE REGISTRIES, click ADD REGISTRY to add Docker registries that are created with unsigned certificates.

  17. For v2 clusters, under ISTIO, use the toggle button to enable or disable Istio.

  18. Click NEXT.
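The SSH public key entered in this step can come from any key pair in a supported format. The following is a minimal sketch using OpenSSH's ssh-keygen; the file path and comment are placeholders, not values required by Cisco Container Platform:

# Generate an Ed25519 key pair (the file path and comment are examples only)
ssh-keygen -t ed25519 -f ~/.ssh/ccp_tenant -C "ccp tenant cluster"

# Print the public key, and paste its contents into the SSH KEY field
cat ~/.ssh/ccp_tenant.pub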

Step 6

For v2 clusters, to integrate Harbor with Cisco Container Platform, follow these steps:

Note 
Harbor is currently not available for v3 clusters.
  1. In the Harbor Registry screen, click the toggle button to enable Harbor.

  2. In the PASSWORD field, enter a password for the Harbor server administrator.

  3. In the REGISTRY field, enter the size of the registry in gigabytes.

  4. Click NEXT.

Step 7

In the Summary screen, verify the configuration, and then click FINISH.

The cluster deployment takes a few minutes to complete. The newly created cluster is displayed on the Clusters screen.

For more information on deploying applications on clusters, see Deploying Applications on Kubernetes Clusters.
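After the cluster is created, you can optionally verify it from a workstation with kubectl. The following is a minimal sketch, assuming that you have obtained the kubeconfig file for the new tenant cluster (for example, by downloading it from the cluster details in the web interface); the file path is a placeholder:

# Point kubectl at the kubeconfig of the new tenant cluster (example path)
export KUBECONFIG=~/Downloads/tenant-cluster-kubeconfig.yaml

# All master and worker nodes should report a Ready status
kubectl get nodes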


Configuring Add-ons for v3 Clusters


Note

This section is applicable only for v3 clusters.

In v3 clusters, the monitoring, logging, Istio, Harbor, and Kubernetes dashboard functions are available as configurable add-ons.

In v2 clusters, these add-ons are installed by default.

Procedure


Step 1

From the left pane, click Clusters, and then click the vSphere tab.

Step 2

From the VERSIONS drop-down list, choose VERSION 3 to view the v3 clusters.

Step 3

Choose the cluster for which you want to configure add-ons.

Step 4

Click the ADD-ONS tab.

The Installed Add-ons page appears.
Step 5

Click INSTALL ADD-ON.

The Select an Add-on page appears.
Step 6

Select one of the following add-ons:

  • ccp-monitor: For monitoring clusters

  • ccp-efk: For logging

  • kubernetes-dashboard: For deploying and managing the applications that are deployed on the clusters

  • ccp-kubeflow: For deploying machine learning (ML) workloads

  • HyperFlex CSI: For deploying HyperFlex storage

  • Istio Operator: For deploying the Istio operator service, which is required for running Istio

  • Istio: For deploying the Istio services, which requires the Istio Operator to be running beforehand

  • Harbor Operator: For deploying the Harbor operator service, which is required for running Harbor

  • Harbor: For deploying the Harbor service, which requires the Harbor Operator to be running beforehand

    See also Customizing Registry Size for Harbor Instance.

Step 7

Click Close.


Customizing Registry Size for Harbor Instance

For v3 clusters, you cannot customize the registry size for a Harbor instance from the Cisco Container Platform web interface.

To deploy a Harbor instance with a registry size greater than the default 20Gi, follow these steps:

Procedure


Step 1

Install the Harbor operator add-on as described in Configuring Add-ons for v3 Clusters.

Step 2

SSH into the master node of the tenant cluster.

Step 3

Run the helm install command with the registrySize parameter set to the size that you want.

For example, run the following command to set the registry size to 40Gi:

helm install -n harbor-cr /opt/ccp/charts/ccp-harbor-cr.tgz --set registrySize=40Gi
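To confirm that the custom size took effect, you can inspect the release and the registry volume claim from the same master node. The following is a hedged sketch; the exact claim name and namespace depend on the chart version and are assumptions here:

# List the Harbor release that was installed above
helm ls harbor-cr

# Verify that the registry's PersistentVolumeClaim requests the expected 40Gi
kubectl get pvc --all-namespaces | grep -i harbor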

Deleting Add-ons for v3 Clusters


Note

This section is applicable only for v3 clusters.

In v3 clusters, the monitoring, logging, Istio, Harbor, and Kubernetes dashboard add-ons can be deleted through the Cisco Container Platform web interface.

In v2 clusters, you cannot delete these add-ons through the Cisco Container Platform web interface.

Procedure


Step 1

From the left pane, click Clusters, and then click the vSphere tab.

Step 2

From the VERSIONS drop-down list, choose VERSION 3 to view the v3 clusters.

Step 3

Choose the cluster for which you want to delete add-ons.

Step 4

Click the ADD-ONS tab.

The Installed Add-ons page appears.
Step 5

From the drop-down list displayed under the ACTIONS column, choose Delete for the add-on that you want to delete.

Step 6

Click Close.


Upgrading vSphere Clusters

Before you begin

Ensure that you have imported the latest tenant cluster OVA to the vSphere environment.

Ensure that an adequate number of free IP addresses are available. For more information, see Managing Networks.

For more information on importing the tenant cluster OVA, refer to the Cisco Container Platform Installation Guide.

Procedure


Step 1

From the left pane, click Clusters, and then click the vSphere tab.

Step 2

From the drop-down list displayed under the ACTIONS column, choose Upgrade for the cluster that you want to upgrade.

Step 3

In the Upgrade Cluster dialog box, choose a Kubernetes version and a new template for the VM, and then click Submit.

It may take a few minutes for the Kubernetes cluster upgrade to complete.

Scaling vSphere Clusters

You can scale clusters by adding or removing worker nodes based on the demands of the workloads that you want to run. You can add worker nodes to a default or a custom node pool.

For more information on adding worker node pools, see Configuring Node Pools.

Configuring Node Pools

Node pools allow the creation of worker nodes with varying configurations. Nodes belonging to a single node pool have identical characteristics.

In the Cisco Container Platform vSphere implementation, a node pool has properties such as vCPUs, memory, labels, and taints.

Labels and taints are optional parameters. All nodes that belong to a node pool are tagged with the pool's labels and tainted with the pool's taints. A taint is a key-value pair that is associated with an effect.

The following list describes the available effects:

  • NoSchedule: Ensures that pods that do not tolerate this taint are not scheduled on the node.

  • PreferNoSchedule: Ensures that Kubernetes avoids scheduling pods that do not tolerate this taint on the node.

  • NoExecute: Ensures that a pod that does not tolerate this taint is evicted from the node if it is already running on it, and is not scheduled on the node otherwise.
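Cisco Container Platform applies the taints that you define on a node pool to every node in that pool. To illustrate the semantics of the effects outside the web interface, the following kubectl sketch uses a hypothetical node name and key-value pair:

# Only pods that tolerate gpu=true are scheduled on this node
kubectl taint nodes worker-node-1 gpu=true:NoSchedule

# NoExecute additionally evicts already-running pods that do not tolerate the taint
kubectl taint nodes worker-node-1 gpu=true:NoExecute

# A trailing dash removes a taint
kubectl taint nodes worker-node-1 gpu=true:NoSchedule-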

During cluster creation, each cluster is assigned a default node pool. Cisco Container Platform supports different master and worker configurations. Upon cluster creation, the master node is created in the default-master-pool and the worker nodes are created in the default-pool.

Cisco Container Platform supports creating multiple node pools and customizing the characteristics of each pool, such as vCPUs, memory, labels, and taints.

Adding Node Pools

Cisco Container Platform allows you to add custom node pools to an existing cluster.

Procedure


Step 1

Click the cluster for which you want to add a node pool.

The Cluster Details page displays the node pools of the cluster that you have selected.
Step 2

From the right pane, click ADD NODE POOL.

The Add Node Pool page appears.
Step 3

Under POOL NAME, enter a name for the node pool.

Step 4

Ensure that an adequate number of free IP addresses is available in the subnet that you have selected during tenant cluster creation. For more information, see Managing Networks.

Step 5

Under Kubernetes Labels, enter the key-value pair of the label.

You can click the Delete icon to delete a label and the +LABEL icon to add a label.
Step 6

Under Kubernetes Taints, enter the key-value pair and the effect that you want to set for the taint.

You can click the Delete icon to delete a taint and the +TAINT icon to add a taint.
Step 7

Click ADD.

The Cluster Details page displays the node pools. You can point the mouse over the Labels and Taints to view a summary of the labels and taints that are assigned to a pool.
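To direct a workload at a specific pool, a pod can select the pool's labels and tolerate its taints. The following is a minimal sketch, assuming a hypothetical pool whose nodes carry the label pool=gpu-pool and the taint gpu=true:NoSchedule:

# Deploy a pod that is scheduled only on nodes of the hypothetical gpu-pool
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  nodeSelector:
    pool: gpu-pool          # matches the node pool's label (assumed)
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"    # tolerates the node pool's taint (assumed)
  containers:
  - name: app
    image: nginx
EOF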

Modifying Node Pools

Cisco Container Platform allows you to modify the worker node pools.

Procedure


Step 1

Click the cluster that contains the node pool that you want to modify.

The Cluster Details page appears, displaying the node pools of the cluster that you have chosen.
Step 2

From the drop-down list next to the name of the node pool, click Edit.

The Update Node Pool page appears.
Step 3

Ensure that an adequate number of free IP addresses is available in the subnet that you have selected during tenant cluster creation. For more information, see Managing Networks.

Step 4

Under Kubernetes Labels, modify the key-value pair of the label.

Step 5

Under Kubernetes Taints, modify the key-value pair and the effect that you want to set for the taint.

Step 6

Click UPDATE.


Deleting Node Pools

Cisco Container Platform allows you to delete the worker node pools. You cannot delete the default master pool.

Procedure


Step 1

Click the cluster that contains the node pool that you want to delete.

The Cluster Details page displays the node pools of the cluster that you have chosen.
Step 2

From the drop-down list next to the worker pool that you want to delete, choose Delete.

The worker pool is deleted from the Cluster Details page.

Deleting vSphere Clusters

Before you begin

Ensure that the cluster you want to delete is not currently in use, as deleting a cluster removes the containers and data associated with it.

Procedure


Step 1

From the left pane, click Clusters, and then click the vSphere tab.

Step 2

From the drop-down list displayed under the ACTIONS column, choose Delete for the cluster that you want to delete.

Step 3

Click DELETE in the confirmation dialog box.