New and Changed Information
The following table provides an overview of the significant changes up to the current release. It is not an exhaustive list of all changes or new features.
| Cisco ACI CNI plug-in Release Version | Feature |
| --- | --- |
| 5.1(1) | Support for Red Hat OpenShift 4.5 on VMware vSphere 7 User-Provisioned Infrastructure (UPI). |
OpenShift 4.5 on VMware vSphere
The Cisco Application Centric Infrastructure (ACI) supports Red Hat OpenShift 4.5 on VMware vSphere 7 User-Provisioned Infrastructure (UPI). This document provides the instructions on using Ansible playbooks to provision OpenShift 4.5 on VMware vSphere with the Cisco ACI Container Network Interface (CNI) plug-in.
The Ansible playbooks provision virtual machines (VMs) with the needed interface configuration and generate the ignition configuration files. You must deploy your own DHCP, DNS, and load-balancing infrastructure following high-availability best practices.
The Ansible playbooks are available on GitHub.
The following are the Ansible playbooks; a sample invocation sequence follows the list:

- asserts.yml: Performs basic validations of the variable declarations in the all.yml file.
- setup.yml: Performs the following tasks:
  - Configures the orchestrator node: Installs Terraform, the OpenShift client, and the OpenShift installer. It also creates the following: Terraform variables for the bootstrap, master, and worker nodes; the machine-config operators for the master and worker nodes; the OpenShift install config file.
  - Configures the load balancer node: Disables Security-Enhanced Linux (SELinux), configures HAProxy, and sets up DHCP and DNS if selected. This step is optional: it configures these three components only if you set the provision_dhcp, provision_dns, and provision_lb variables to true.
- oshift_prep.yml: Performs the following tasks:
  - Sets up the install and bootstrap directories.
  - Generates manifests using openshift-install.
  - Adds the additional machine-config operator manifests.
  - Adds the Cisco ACI-CNI manifests.
  - Creates a backup of the manifests.
  - Sets up the ignition files for the bootstrap, master, and worker nodes.
  - Copies the bootstrap ignition file to the load balancer node.
- create_nodes.yml: Performs the following tasks:
  - Provisions the bootstrap, master, and worker nodes using Terraform.
  - Sets up a cron job to approve certificate signing requests (CSRs), if selected.
- delete_nodes.yml: Deletes the nodes.
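The playbooks are typically run in the order listed. The following is a minimal sketch of the invocation sequence; it assumes the inventory file is the hosts.ini shown later in this document and that you run the commands from the playbook repository root on the orchestrator VM (exact file names can differ in the GitHub repository):

# Validate the variable declarations in group_vars/all.yml
ansible-playbook -i hosts.ini asserts.yml
# Configure the orchestrator and load balancer nodes
ansible-playbook -i hosts.ini setup.yml
# Generate manifests and ignition files
ansible-playbook -i hosts.ini oshift_prep.yml
# Provision the bootstrap, master, and worker VMs with Terraform
ansible-playbook -i hosts.ini create_nodes.yml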
Prerequisites for Installing OpenShift 4.5 on VMware vSphere
To install OpenShift Container Platform (OCP) 4.5 on VMware vSphere, fulfill the following prerequisites:
Cisco ACI
- Download the acc-provision tool, version 5.1(x) or later (a quick version check follows this list). Specify the --flavor option value as openshift-4.5-esx, and use the -z option. The tool creates a .tar archive file as specified by the -z option value; you need this archive file during the installation.
- Make sure that the Cisco Application Centric Infrastructure (ACI) container images that are specified as input to the acc-provision tool are version TBD or later.
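To confirm which version of the tool is installed before you start, you can query acc-provision directly (this relies on the tool's standard version flag; adjust if your release behaves differently):

acc-provision --version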
VMware vSphere
Obtain user credentials with privileges to create virtual machines (VMs).
OpenShift
Obtain the following from the Red Hat website:
- The OCP4 Open Virtual Appliance (OVA) image
- The OCP4 client tools
- The pull secret
Install OpenShift 4.5 on VMware vSphere
Before you begin
Complete the tasks in the section Prerequisites for Installing OpenShift 4.5 on VMware vSphere.
We recommend that you also see the Red Hat OpenShift documentation for prerequisites and other details about installing a cluster on vSphere.
Procedure
Step 1
Provision the Cisco Application Centric Infrastructure (ACI) fabric using the acc-provision utility, as shown in the example that follows.
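The exact command is not reproduced in this document; the following is a representative invocation, assuming the sample acc-provision-input.yaml shown later, APIC administrator credentials, and the archive name aci_manifests.tar.gz that the Ansible playbooks expect by default (the --flavor and -z options are the ones called out in the prerequisites):

acc-provision -a -c acc-provision-input.yaml -f openshift-4.5-esx \
    -u <apic-username> -p <apic-password> -z aci_manifests.tar.gz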
Step 2
After the Cisco ACI fabric is provisioned, verify that a port group with the name system_id_vlan_kubeapi_vlan is created under the distributed switch. This document refers to this port group as api-vlan-portgroup.
Step 3
In VMware vSphere, import the OpenShift Container Platform 4 (OCP4) Open Virtual Appliance (OVA) image. Specify api-vlan-portgroup as the network for the imported template.
Step 4
Provision a Red Hat Enterprise Linux load balancer virtual machine (VM) with its network interface connected to the api-vlan-portgroup. The Ansible playbooks optionally configure this VM as a load balancer, DNS server, and DHCP server for the OpenShift cluster.
Step 5
Provision a Red Hat Enterprise Linux orchestrator VM with its network interface connected to the api-vlan-portgroup. The Ansible playbooks are run from the orchestrator VM.
Step 6
Perform the installation setup tasks on the orchestrator VM, as sketched below.
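The detailed task list for this step is not included in this version of the document. Based on the variables described in the samples section, the work on the orchestrator VM typically amounts to the following sketch; the repository URL and directory layout are placeholders, not details taken from this document:

# Clone the repository that contains the Ansible playbooks (placeholder URL)
git clone <ansible-playbooks-repo-url>
cd <ansible-playbooks-repo>
# Place the archive generated by acc-provision where the playbooks expect it;
# default_aci_manifests_archive in group_vars/all.yml defaults to aci_manifests.tar.gz
cp /path/to/aci_manifests.tar.gz files/
# Edit group_vars/all.yml and hosts.ini for your site, then run the playbooks
# in the order shown earlier: asserts, setup, oshift_prep, create_nodes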
What to do next
You can use the openshift-install wait-for bootstrap-complete and openshift-install wait-for install-complete commands to check the progress of the installation. Execute the commands from the bootstrap directory, for example:
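This sketch assumes that base_dir is /root/ocpinstall, as in the sample all.yml file, and that oshift_prep.yml created the bootstrap directory there:

cd /root/ocpinstall/bootstrap
openshift-install wait-for bootstrap-complete --log-level debug
openshift-install wait-for install-complete --log-level debug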
Sample Files for Installing OpenShift 4.5 on VMware vSphere
This section contains sample files that you need for installing OpenShift 4.5 on VMware vSphere.
Sample acc-provision-input File
The following is a sample acc-provision-input.yaml file. Modify the site-specific values, such as IP addresses, names, and VLAN IDs, to meet the requirements of your site.
#
# Configuration for ACI Fabric
#
aci_config:
  system_id: ocp4aci
  #apic-refreshtime: 1200
  apic_hosts:
  - 1.1.1.1
  vmm_domain:
    encap_type: vxlan
    mcast_range:                # Every opflex VMM must use a distinct range
      start: 225.28.1.1
      end: 225.28.255.255
    nested_inside:              # Include if nested inside a VMM
      type: vmware              # Specify the VMM vendor (supported: vmware)
      name: hypflex-vswitch     # Specify the name of the VMM domain
      installer_provisioned_lb_ip: 10.213.0.201
  # The following resources must already exist on the APIC.
  # They are used, but not created, by the provisioning tool.
  aep: hypf-aep
  vrf:                          # This VRF is used to create all Kubernetes EPs
    name: k8s18_vrf
    tenant: common
  l3out:
    name: k8s18
    external_networks:
    - k8s18_net
#
# Networks used by ACI containers
#
net_config:
  node_subnet: 192.168.18.1/24
  pod_subnet: 10.128.0.1/16     # Subnet to use for Kubernetes Pods/CloudFoundry containers
  extern_dynamic: 10.3.0.1/24   # Subnet to use for dynamic external IPs
  extern_static: 10.4.0.1/24    # Subnet to use for static external IPs
  node_svc_subnet: 10.5.0.1/24  # Subnet to use for service graph
  kubeapi_vlan: 35
  service_vlan: 36
  infra_vlan: 3901
  #interface_mtu: 1600
  #service_monitor_interval: 5  # IPSLA interval probe time for PBR tracking;
                                # default is 0, set to > 0 to enable, max: 65535
  #pbr_tracking_non_snat: true  # Default is false, set to true for IPSLA to
                                # be effective with non-snat services
#
# Configuration for container registry
# Update if a custom container registry has been set up
#
kube_config:
  image_pull_policy: Always
  ovs_memory_limit: 1Gi
registry:
  image_prefix: docker.io/noirolabs
  aci_containers_controller_version: 5.1.1.0.1ae238a
  aci_containers_host_version: 5.1.1.0.1ae238a
  cnideploy_version: 5.1.1.0.1ae238a
  opflex_agent_version: 5.1.1.0.1ae238a
  openvswitch_version: 5.1.1.0.1ae238a
  aci_containers_operator_version: 5.1.1.0.1ae238a
logging:
  controller_log_level: debug
  hostagent_log_level: debug
  opflexagent_log_level: debug
Sample Ansible group_vars/all.yml File
The following is a sample group_vars/all.yml file. Modify the site-specific values, such as IP addresses, MAC addresses, and credentials, to meet the requirements of your site.
#provision_dhcp
# type: boolean, True or False
# required: yes
# notes: If set to true, load balancer is configured as dhcp server.
# If false, it is assumed that the dhcp server pre-exists
provision_dhcp: True
#domainname
# type: string, base dns domain name, cluster metadata name is added as subdomain to this
# required: yes
domainname: ocplab.local
#provision_dns
# type: boolean, True or False
# required: yes
# notes: If set to true, load balancer is configured as dns server.
# If false, it is assumed that the dns server pre-exists.
provision_dns: True
#dns_forwarder:
# type: ip address
# required: yes
# notes: This value is used when setting up a dhcp service and also for 'forwarders' value in dns configuration.
dns_forwarder: 172.28.184.18
#loadbalancer_ip:
# type: ip address or resolvable hostname
# required: yes
# notes: This host is configured as load balancer for cluster and also as dhcp and dns server if required.
#        This IP address is the same as the one that you configure in installer_provisioned_lb_ip in the
#        acc-provision config.
loadbalancer_ip: 192.168.18.201
#auto_approve_csr:
# type: boolean
# required: yes
# notes: when set to true, sets up a cron job to auto approve openshift csr
auto_approve_csr: True
#proxy_env
#
proxy_env:
  #do not remove the dummy field, irrespective of whether the setup needs a proxy or not.
  dummy: dummy
  #set the http/https proxy server; if the setup does not need a proxy, comment out the values below.
  #these values are used for ansible tasks and also passed on to the openshift installer
  http_proxy: http://1.1.1.1:80
  https_proxy: http://1.1.1.1:80
  no_proxy: 1.2.1.1,1.2.1.2
#packages
# defines the urls to download terraform, openshift client and openshift-install tools from.
packages:
  validate_certs: False
  terraform_url: https://releases.hashicorp.com/terraform/0.12.26/terraform_0.12.26_linux_amd64.zip
  openshift_client_linux_url: https://mirror.openshift.com/pub//openshift-v4/x86_64/clients/ocp/4.5.18/openshift-client-linux-4.5.18.tar.gz
  openshift_install_linux_url: https://mirror.openshift.com/pub//openshift-v4/x86_64/clients/ocp/4.5.18/openshift-install-linux-4.5.18.tar.gz
#default_aci_manifests_archive:
# default filename that is searched under files directory.
# this can be overridden by passing extra parameter aci_manifests_archive on ansible command line
default_aci_manifests_archive: aci_manifests.tar.gz
#vsphere
vsphere:
  server: hypf.local.lab
  user: administrator@vsphere.local
  passwd: xxxx
  allow_unverified_ssl: true
  datacenter_name: hypflex-dc
  cluster_name: hypflex-cluster
  datastore_name: noiro
  RHCOS_template_name: RHCOS443
#base_dir
# type: directory path
# required: yes
# notes: All install files and directories are created under this directory
base_dir: /root/ocpinstall
#bootstrap node variables
bootstrap_vars:
  node_mac: 00:50:56:b2:c7:a1   #required
  node_ip: 192.168.18.210       #required
  cpu_count: 8                  #optional: defaults to 4
  memory_KB: 16384              #optional: defaults to 8192
  disk_size_MB: 40              #optional: defaults to 40
masters_vars:
  cpu_count: 8                  #optional: defaults to 4
  memory_KB: 16384              #optional: defaults to 16384
  disk_size_MB: 40              #optional: defaults to 40
  nodes:
  #mac address and ip address for each node is required
  - master-1:
      api_mac: 00:50:56:b2:c7:b1
      ip: 192.168.18.211
  - master-2:
      api_mac: 00:50:56:b2:c7:b3
      ip: 192.168.18.212
  - master-3:
      api_mac: 00:50:56:b2:c7:b5
      ip: 192.168.18.213
workers_vars:
  cpu_count: 8                  #optional: defaults to 4
  memory_KB: 16384              #optional: defaults to 16384
  disk_size_MB: 40              #optional: defaults to 40
  nodes:
  #mac address and ip address for each node is required
  - worker-1:
      api_mac: 00:50:56:b2:c7:c1
      ip: 192.168.18.214
  - worker-2:
      api_mac: 00:50:56:b2:c7:c3
      ip: 192.168.18.215
#user_ssh_key:
# required: no
# notes: if specified, this key is set up on the nodes; otherwise, the ssh key of the
#        current user is used.
user_ssh_key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD...
#additional_trust_bundle:
# required: no
# notes: use this field to add a certificate for private repository
#
# example:
#additional_trust_bundle: |
# -----BEGIN CERTIFICATE-----
# MIIDDDCCAfQCCQDuOnV7XBjpODANBgkqhkiG9w0BAQsFADBIMQswCQYDVQQGEwJV
# UzELMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEOMAwGA1UECgwFQ2lzY28xDjAM
# -----END CERTIFICATE-----
#openshift_pullsecret:
# required: yes
# example:
# openshift_pullsecret: {"auths":{"cloud.openshift.com":{"auth":.........}
openshift_pullsecret: xxx
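As the comments above note, the manifest archive name can be overridden on the Ansible command line. For example, to point setup.yml at a differently named archive (the file name here is illustrative):

ansible-playbook -i hosts.ini setup.yml -e aci_manifests_archive=my_manifests.tar.gz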
Sample hosts.ini File
The following is a sample hosts.ini file. Modify the IP addresses to match the orchestrator and load balancer VMs at your site; a quick connectivity check follows the sample.
[orchestrator]
192.168.18.200
[lb]
192.168.18.201
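Before running the playbooks, you can verify that the orchestrator can reach both inventory hosts by using Ansible's ping module (standard Ansible; this assumes SSH access to the hosts is already configured):

ansible -i hosts.ini all -m ping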