Install the Crosswork Cluster

This chapter contains the following topics:

Available Installation Methods

The Cisco Crosswork cluster can be installed using the following methods:

  • Installation using the Cluster Installer tool: The cluster installer tool is a one-time, day 0 deployment tool that uses VMware or Cisco CSP APIs to deploy all of the virtual machines needed to form your cluster and bring the system to an initial operational state. This is the recommended installation method.


    Note

    The installer tool will deploy the software and power on the virtual machines. If you wish to power on the virtual machines yourself, use the manual installation.


  • Manual Installation: This option is available for deployments that cannot use the installer tool.

Installation Parameters

This section explains the important parameters that must be specified while installing the Crosswork cluster. Ensure that you have the relevant information for each of the parameters listed below, and that your environment meets all the requirements specified under Cisco Crosswork Infrastructure Requirements.


Note

Some of the parameters below are named differently depending on the installation method (cluster installer tool or manual) and the IP stack (IPv4 or IPv6) you choose. Where a parameter has such aliases, they are shown in parentheses after the parameter name.



ClusterName

Name of the cluster file

ClusterIPStack (also mentioned as: CWIPv4Address, CWIPv6Address)

The IP stack protocol: IPv4 or IPv6

ManagementIPAddress (also mentioned as: ManagementIPv4Address, ManagementIPv6Address)

The Management IP address of the VM (IPv4 or IPv6).

ManagementIPNetmask (also mentioned as: ManagementIPv4Netmask, ManagementIPv6Netmask)

The Management IP subnet in dotted decimal format (IPv4 or IPv6).

ManagementIPGateway (also mentioned as: ManagementIPv4Gateway, ManagementIPv6Gateway)

The Gateway IP on the Management Network (IPv4 or IPv6). The address must be reachable, otherwise the installation will fail.

ManagementVIP

The Management Virtual IP for the cluster.

ManagementVIPName

Name of the Management Virtual IP for the cluster. This is an optional parameter used to reach the Crosswork cluster Management VIP via a DNS name. If this parameter is used, the corresponding DNS record must exist in the DNS server and must match both the ManagementVIP and the ManagementVIPName.

DataIPAddress (also mentioned as: DataIPv4Address, DataIPv6Address)

The Data IP address of the VM (IPv4 or IPv6).

DataIPNetmask (also mentioned as: DataIPv4Netmask, DataIPv6Netmask)

The Data IP subnet in dotted decimal format (IPv4 or IPv6).

DataIPGateway (also mentioned as: DataIPv4Gateway, DataIPv6Gateway)

The Gateway IP on the Data Network (IPv4 or IPv6). The address must be reachable, otherwise the installation will fail.

DataVIP

The Data Virtual IP for the cluster.

DataVIPName

Name of the Data Virtual IP for the cluster. This is an optional parameter used to reach the Crosswork cluster Data VIP via a DNS name. If this parameter is used, the corresponding DNS record must exist in the DNS server and must match both the DataVIP and the DataVIPName.

DNS (also mentioned as: DNSv4, DNSv6)

The IP address of the DNS server (IPv4 or IPv6). The address must be reachable, otherwise the installation will fail.

NTP

NTP server address or name. The address must be reachable, otherwise the installation will fail.

DomainName (also mentioned as: Domain)

The domain name used for the cluster.

CWusername

Username to log into Cisco Crosswork.

CWPassword

Password to log into Cisco Crosswork.

VMSize

VM size for the cluster. Values are "Small" (for lab deployments only) or "Large".

VMName

Name of the VM

You will require at least 3 unique names (one for each VM).

NodeType (also mentioned as: VMType)

Indicates the type of VM. Choose either "Hybrid" or "Worker".

Note 

The Crosswork cluster for the 4.1 release requires at least three VMs operating in a hybrid configuration.

IsSeed

Choose "True" if this is the first VM being built in a new cluster.

Choose "False" for all other VMs, or when rebuilding a failed VM.

InitNodeCount

Total number of nodes in the cluster including hybrid and worker nodes. The default value is 3.

InitMasterCount

Total number of hybrid nodes in the cluster. The default value is 3.

BackupMinPercent

Minimum percentage of the data disk space to be used for the size of the backup partition. The default value is 50 (valid range is from 1 to 80).

Please use the default value unless recommended otherwise.

Note 

The final backup partition size will be calculated dynamically. This parameter defines the minimum.

ManagerDataFsSize

Refers to the data disk size for Hybrid nodes (in GB). This is an optional parameter; if not explicitly specified, the default value is 450 (valid range is 450 to 8000).

Please use the default value unless recommended otherwise.

WorkerDataFsSize

Refers to the data disk size for Worker nodes (in GB). This is an optional parameter; if not explicitly specified, the default value is 450 (valid range is 450 to 8000).

Please use the default value unless recommended otherwise.

ThinProvisioned

Thin or thick provisioning for all disks. Set as "false" for live production deployments, and "true" for lab deployments.

EnableHardReservations

Determines the enforcement of VM CPU and Memory profile reservations. This is an optional parameter and the default value is true, if not explicitly specified.

If set as true, the VM's resources are provided exclusively. In this state, the installation will fail if there are insufficient CPU cores, memory or CPU cycles.

If set as false (only set for lab installations), the VM's resources are provided on a best-effort basis. In this state, the installation will fail if there are insufficient CPU cores.

RamDiskSize (also mentioned as: ramdisk)

Size of the Ram disk.

This parameter is only used for lab installations (value must be at least 2). When a non-zero value is provided for RamDiskSize, the HSDatastore value is not used.

VMware resource data

vCenterAddress

The vCenter IP or host name.

vCenterUser

The username needed to log into vCenter.

vCenterPassword

The password needed to log into vCenter.

DCname

The name of the Data Center resource to use.

MgmtNetworkName

The name of the vCenter network to attach to the VM's Management interface.

DataNetworkName

The name of the vCenter network to attach to the VM's Data interface.

Host

The ESXi host or resource group name.

Datastore

The datastore name available to be used by this host or resource group.

HSDatastore

The high speed datastore available for this host or resource group.

DCfolder

The resource folder name on vCenter. Leave as empty if not used.

Cisco CSP resource data

name (also mentioned as: Host)

The host name.

protocol

Protocol used (e.g. "https")

server

Cisco CSP Server IP address

username

The username needed to log into Cisco CSP.

password

The password needed to log into Cisco CSP.

insecure

Default value is "true".

MgmtNetworkName

The name of the CSP network to attach to the VM's Management interface.

DataNetworkName

The name of the CSP network to attach to the VM's Data interface.
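
Several of the addresses above (the gateways, DNS server, and NTP server) must be reachable, or the installation will fail. Before you begin, it can help to confirm basic reachability from a machine on the same networks. The following is a minimal sketch using hypothetical addresses and names; substitute your own values:

ping -c 3 192.0.2.1                          # Management gateway
ping -c 3 192.0.2.53                         # DNS server
nslookup cw-cluster.example.com 192.0.2.53   # optional: check that the DNS record for ManagementVIPName resolves
ping -c 3 ntp.example.com                    # NTP server (by name or IP)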

Install Cisco Crosswork using the Cluster Installer tool

This section describes how Cisco Crosswork is installed in VMware and Cisco CSP using the Cluster Installer tool.

The cluster installer tool is the recommended method to install Cisco Crosswork. It is a day 0 installation tool used to deploy the Crosswork cluster with user-specified parameters supplied via a template file. The tool is run from a Docker container, which can be hosted on any Docker-capable platform, including a regular PC or laptop. The Docker container contains a set of template files that can be edited to provide the deployment-specific data. Separate templates must be used for vCenter and CSP deployments.


Note

Docker version 19 or higher is recommended while using the cluster installer option. For more information on docker, see https://docs.docker.com/get-docker/


A few pointers to keep in mind when using the cluster installer tool:

  • Make sure that your data center meets all the requirements specified under Cisco Crosswork Infrastructure Requirements.

  • The install script is safe to run multiple times. Upon error, the input parameters can be corrected and the script re-run. However, note that running the tool multiple times may result in the deletion and re-creation of VMs.

  • The edited template in the /data directory will contain sensitive information (VM passwords). The operator needs to manage access to this content. Erase it after use or when you quit the container.

  • The install.log, install_tf.log, and crosswork-cluster.tfstate files will be created during the install and stored in the /data directory. If you encounter any trouble with the installation, provide these files to the Cisco Customer Experience team when opening a case.

  • If you are using the same installer tool for multiple Crosswork cluster installations, it is important to run the tool from different local directories so that each deployment's state files remain independent. The simplest way to do this is to create a local directory for each deployment on the host machine and map each one to the container accordingly (see the sketch below).
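
The following is a minimal sketch, assuming two deployments named cluster1 and cluster2 (the directory names are arbitrary):

mkdir -p ~/cw-deployments/cluster1 ~/cw-deployments/cluster2

# First deployment: run the installer container with cluster1 mapped to /data
cd ~/cw-deployments/cluster1
docker run --rm -it -v `pwd`:/data <installer image ID>

# Second deployment: run the installer container from a different directory
cd ~/cw-deployments/cluster2
docker run --rm -it -v `pwd`:/data <installer image ID>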


Note

In order to change install parameters or to correct parameters following installation errors, it is important to distinguish whether the installation has deployed the VMs or not. Deployed VMs are evidenced by installer output similar to the following:

vsphere_virtual_machine.crosswork-IPv4-vm["1"]: Creation complete after 2m50s [id=4214a520-c53f-f29c-80b3-25916e6c297f]

In case of deployed VMs, changes to the CW VM settings or the Data Center host for a deployed VM are NOT supported. To change a setting using the installer when the deployed VMs are present, the clean operation needs to be run and the cluster redeployed.

A VM redeployment will delete the VM's data, hence caution is advised. We recommend that you perform VM parameter changes from the CW UI, or alternatively one VM at a time. Installation parameter changes that occur prior to any VM deployment (e.g. an incorrect vCenter parameter) can be made by applying the change and simply re-running the install operation.


Install Cisco Crosswork on VMware vCenter

This section explains the procedure to install Cisco Crosswork on VMware vCenter using the cluster installer tool.

Before you begin

  • Make sure that your environment meets all the vCenter requirements specified under Cisco Crosswork Infrastructure Requirements.

  • When run, the installer will upload the .ova file into vCenter if it is not already present, and convert it into a VM template. After the installation is completed successfully, you can delete the template file from the vCenter UI (located under VMs and Templates) if the image is no longer needed.

Procedure


Step 1

In your docker capable machine, create a directory where you will store everything you will use during the installation.

Step 2

Download the installer bundle (.tar.gz file) and the OVA file from cisco.com to the directory you created previously. For the purpose of these instructions, we will use the file names "cw-na-platform-4.1.0-38-installer-pkg.tar.gz" and "cw-na-platform-4.1.0-38-release-211108.ova", respectively.

Step 3

Use the following command to unzip the installer bundle:

tar -xvf cw-na-platform-4.1.0-38-installer-pkg.tar.gz

The contents of the installer bundle are extracted to a new directory (e.g. cw-na-platform-4.1.0-38-installer). This new directory contains the installer image (e.g. cw-na-platform-installer-4.1.0-38-release-211108.tar.gz) and the files necessary to validate the image.

Step 4

Navigate to the directory created in the previous step and use the following command to verify the signature of the installer image:

Note 

Use python --version to find out the version of python on your machine.

If you are using python 2.x, use the following command:

python cisco_x509_verify_release.py -e <.cer file> -i <.tar.gz file> -s <.tar.gz.signature file> -v dgst -sha512

If you are using python 3.x, use the following command:

python cisco_x509_verify_release.py3 -e <.cer file> -i <.tar.gz file> -s <.tar.gz.signature file> -v dgst -sha512
Note 

If you do not get a successful verification message, please contact the Cisco Customer Experience team.
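
For reference, a hypothetical invocation using the installer image extracted in Step 3 might look like the following (the certificate and signature file names are placeholders; use the files present in the installer directory):

python cisco_x509_verify_release.py -e <certificate file>.cer -i cw-na-platform-installer-4.1.0-38-release-211108.tar.gz -s cw-na-platform-installer-4.1.0-38-release-211108.tar.gz.signature -v dgst -sha512

A successful run ends with a message indicating that the signature was verified.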

Step 5

Use the following command to load the installer image file into your Docker environment.

docker load -i <.tar.gz file>

For example:

docker load -i cw-na-platform-installer-4.1.0-38-release-211108.tar.gz

The result will be a line similar to the following; note the image ID (the value after "sha256:"), as you will need it in the next step:

Loaded image ID: sha256:4a55858a7dd9a5fed7d0d46716e4c9525333525419e5517a4904093f01b3f165
Step 6

Launch the Docker container using the following command:

docker run --rm -it -v `pwd`:/data 4a55858a7dd9a5fed7d0d46716e4c9525333525419e5517a4904093f01b3f165
Note 

You do not have to enter that full value. In this case, "docker run --rm -it -v `pwd`:/data 4a5" was adequate. You only require enough of the image ID to uniquely identify the image you want to use for the installation.

Note 

In the above command, we are using the backtick (`). Do not use the single quote or apostrophe (') as the meaning to the shell is very different. By using the backtick (recommended), the template file and OVA will be stored in the directory where you are when you run the commands on your local disk, instead of inside the container.

If you do not remember the image ID, you can list the loaded images:

My Machine% docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED        SIZE
cw-na-platform-installer-4.1.0-38-release-211108    <none>    4a55858a7dd9   7 days ago     276MB
Step 7

Navigate to the directory with the VMware template.

cd /opt/installer/deployments/4.1.0/vcenter
Step 8

Copy the template file found under /opt/installer/deployments/4.1.0/vcenter/deployment_template_tfvars to the /data folder using a different name.

For example: cp deployment_template_tfvars /data/deployment.tfvars

For the rest of this procedure, we will use deployment.tfvars in all the examples.

Step 9

Edit the template file located in the /data directory, in a text editor, adding the necessary parameters:

  • Crosswork cluster information such as VM size: Use "Small" for lab deployments, otherwise enter "Large". For more information, see the storage profiles in VM Host Requirements.
  • Unique Crosswork VM entries, including names, their IP addresses and node type settings.
    Note 

    Use a strong VM Password (8 characters long, including upper and lower case letters, numbers, and one special character). The VM setup will fail if a weak password is used.

  • vCenter access details and credentials, along with the assignment of the named Crosswork VMs to the Data Center resources.
Note 

A sample of the template file is posted at the end of this section. The file itself has two parts, the template that you need to fill in with the values for your environment and a set of example data to demonstrate how the information is formatted.

Step 10

From the terminal window, determine the container id and copy the OVA file to the /data directory in your container.

docker ps
CONTAINER ID     IMAGE            COMMAND       CREATED         STATUS         PORTS NAMES
1bda806bbd82     4a55858a7dd9     "/bin/sh"     3 hours ago     Up 3 hours     <port-name>

Note the container ID.

docker cp {image file name} {container id}:/data

For example: docker cp cw-na-platform-4.1.0-38-release-211108.ova 1bda806bbd82:/data

Step 11

Run the installer.

./cw-installer.sh install -p -m /data/<template file name> -o /data/<.ova file>

For example:

./cw-installer.sh install -p -m /data/deployment.tfvars -o /data/cw-na-platform-4.1.0-38-release-211108.ova
Note 

If the installation fails, you should try rerunning the installation without the -p option. This will deploy the VMs serially rather than in parallel.
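
For example, the equivalent serial invocation of the command shown above is:

./cw-installer.sh install -m /data/deployment.tfvars -o /data/cw-na-platform-4.1.0-38-release-211108.ova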

Step 12

Enter "yes" when prompted to accept the End User License Agreement (EULA).

Step 13

Enter "yes" when prompted to confirm the operation.

Note 

It is not uncommon to see some warnings like the following during the install:

Warning: Line 119: No space left for device '8' on parent controller '3'.
Warning: Line 114: Unable to parse 'enableMPTSupport' for attribute 'key' on element 'Config'.

If the install process proceeds to a successful conclusion (see sample output below), these warnings can be ignored.

Sample output:

cw_cluster_vms = <sensitive>
INFO: Copying day 0 state inventory to CW
INFO: Waiting for deployment status server to startup on 10.90.147.66. Elapsed time 0s, retrying in 30s
Crosswork deployment status available at http://{VIP}:30602/grafana.monitoring 
Once deployment is complete login to Crosswork via: https://{VIP}:30603/#/logincontroller 
INFO: Cw Installer operation complete.

Example

See Sample manifest template for VMware vCenter

What to do next

The time taken to create the cluster can vary based on the size of your deployment profile and the performance characteristics of your hardware. See Monitor the Installation to know how you can check the status of the installation.

Install Cisco Crosswork on Cisco CSP

This section explains the procedure to install Cisco Crosswork on Cisco CSP using the cluster installer tool.

Before you begin

  • Make sure that your environment meets all the Cisco CSP requirements specified under Cisco Crosswork Infrastructure Requirements.

Procedure


Step 1

In your docker capable machine, create a directory where you will store everything you will use during the installation.

Step 2

Download the installer bundle (.tar.gz file) and the QCOW2 bundle (.tar.gz file) from cisco.com to the directory you created previously. For the purpose of these instructions, we will use the file names "cw-na-platform-4.1.0-38-installer-pkg.tar.gz" and "cw-na-platform-4.1.0-38-release-211108-qcow2-pkg.tar.gz", respectively.

Step 3

Use the following command to unzip the installer bundle:

tar -xvf cw-na-platform-4.1.0-38-installer-pkg.tar.gz

The contents of the installer bundle are extracted to a new directory (e.g. cw-na-platform-4.1.0-38-installer). This new directory contains the installer image (e.g. cw-na-platform-installer-4.1.0-38-release-211108.tar.gz) and the files necessary to validate the image.

Step 4

Navigate to the directory created in the previous step and use the following command to verify the signature of the installer image:

Note 

Use python --version to find out the version of python on your machine.

If you are using python 2.x, use the following command:

python cisco_x509_verify_release.py -e <.cer file> -i <.tar.gz file> -s <.tar.gz.signature file> -v dgst -sha512

If you are using python 3.x, use the following command:

python cisco_x509_verify_release.py3 -e <.cer file> -i <.tar.gz file> -s <.tar.gz.signature file> -v dgst -sha512
Note 

If you do not get a successful verification message, please contact the Cisco Customer Experience team.

Step 5

Use the following command to load the installer image file into your Docker environment.

docker load -i <.tar.gz file>

For example:

docker load -i cw-na-platform-installer-4.1.0-38-release-211108.tar.gz

The result will be a line similar to the following; note the image ID (the value after "sha256:"), as you will need it in the next step:

Loaded image ID: sha256:4a55858a7dd9a5fed7d0d46716e4c9525333525419e5517a4904093f01b3f165
Step 6

Launch the Docker container using the following command:

docker run --rm -it -v `pwd`:/data 4a55858a7dd9a5fed7d0d46716e4c9525333525419e5517a4904093f01b3f165
Note 

You do not have to enter that full value. In this case, "docker run --rm -it -v `pwd`:/data 4a5" was adequate. You only require enough of the image ID to uniquely identify the image you want to use for the installation.

Note 

In the above command, we are using the backtick (`). Do not use the single quote or apostrophe (') as the meaning to the shell is very different. By using the backtick (recommended), the template file and QCOW2 will be stored in the directory where you are when you run the commands on your local disk, instead of inside the container.

If you do not remember the image ID, you can list the loaded images:

My Machine% docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED        SIZE
cw-na-platform-installer-4.1.0-38-release-211108    <none>    4a55858a7dd9   7 days ago     276MB
Step 7

Navigate to the directory with the CSP template.

cd /opt/installer/deployments/4.1.0/csp
Step 8

Copy the template file found under /opt/installer/deployments/4.1.0/csp/deployment_template_tfvars to the /data folder using a different name.

For example: cp deployment_template_tfvars /data/deployment.tfvars

For the rest of this procedure, we will use deployment.tfvars in all the examples.

Step 9

Edit the template file located in the /data directory, in a text editor, adding the necessary parameters:

  • Crosswork cluster information such as VM size: Use "Small" for lab deployments, otherwise enter "Large".
  • Unique Crosswork VM entries, including names, their IP addresses and node type settings.
    Note 

    Use a strong VM Password (8 characters long, including upper and lower case letters, numbers, and one special character). The VM setup will fail if a weak password is used.

  • Cisco CSP access details and credentials, along with the assignment of the named Crosswork VMs to the Cisco CSP host resources.
Note 

A sample of the template file is posted at the end of this section. The file itself has two parts, the template that you need to fill in with the values for your environment and a set of example data to demonstrate how the information is formatted.

Step 10

From the terminal window, unzip the QCOW2 bundle (.tar.gz file):

tar -xvf cw-na-platform-4.1.0-38-release-211108-qcow2-pkg.tar.gz

The contents of the QCOW2 bundle are extracted to a new directory (e.g. cw-na-platform-4.1.0-38-release-211108-qcow2). This new directory contains the QCOW2 image (e.g. cw-na-platform-4.1.0-38-release-211108-qcow2.tar.gz) and the files necessary to validate the image.

Step 11

Navigate to the directory created in the previous step, and use the following command to verify the signature of the QCOW2 image:

python cisco_x509_verify_release.py -e <.cer file> -i <.tar.gz file> -s <.tar.gz.signature file> -v dgst -sha512
Note 

If you do not get a successful verification message, please contact the Cisco Customer Experience team.

Step 12

Run the installer.

./cw-installer.sh install -t csp -m /data/<template file name> -o /data/<qcow2.tar.gz file> -p

For example:

./cw-installer.sh install -t csp -m /data/deployment.tfvars -o /data/cw-na-platform-4.1.0-38-release-211108-qcow2.tar.gz -p
Note 

If the installation fails, you should try rerunning the installation without the -p option. This will deploy the VMs serially rather than in parallel.

Step 13

Enter "yes" when prompted to accept the End User License Agreement (EULA).

Step 14

Enter "yes" when prompted to confirm the operation.


Example

See Sample manifest template for Cisco CSP.

What to do next

The time taken to create the cluster can vary based on the size of your deployment profile and the performance characteristics of your hardware. See Monitor the Installation to know how you can check the status of the installation.

Install Cisco Crosswork Manually

This section describes how Cisco Crosswork can be manually installed in VMware and Cisco CSP.

Manual Installation of Cisco Crosswork using vSphere UI

This section explains the procedure to manually install Cisco Crosswork on VMware vCenter using the vSphere UI. The procedure needs to be repeated for each node in the cluster.

The manual installation workflow is broken into two parts. In the first part, you create a template. In the second part, you deploy the template as many times as needed to build the cluster of 3 hybrid nodes (typically) along with any worker nodes that your environment requires.

Before you begin

Procedure


Step 1

Download the latest available Cisco Crosswork image file (*.ova) to your system.

Step 2

With VMware ESXi running, log into the VMware vSphere Web Client. On the left navigation pane, choose the ESXi host on which you want to deploy the VM.

Step 3

Choose Actions > Deploy OVF Template.

Caution 

The default VMware vCenter deployment timeout is 15 minutes. The total time needed to deploy the OVA image file may take much longer than 15 minutes, depending on your network speed and other factors. If vCenter times out during deployment, the resulting VM will be unbootable. To prevent this, we recommend that you either set the vCenter deployment timeout to a much longer period (such as one hour), or unTAR the OVA file before continuing, and then deploy using the OVA's four separate Open Virtualization Format and Virtual Machine Disk component files: cw.ovf, cw_rootfs.vmdk, cw_dockerfs.vmdk, and cw_extrafs.vmdk.
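
If you choose to unTAR the OVA before deploying, note that an OVA file is itself a tar archive; a minimal sketch of extracting its components (using the image file name from the installer procedure as an example) is shown below. The exact set of extracted files can vary by build:

tar -xvf cw-na-platform-4.1.0-38-release-211108.ova
# Expect the OVF descriptor and the VMDK component files listed above, for example:
#   cw.ovf  cw_rootfs.vmdk  cw_dockerfs.vmdk  cw_extrafs.vmdk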

Step 4

The VMware Deploy OVF Template window appears, with the first step, 1 - Select an OVF template, highlighted. Click Choose Files to navigate to the location where you downloaded the OVA image file and select it. Once selected, the file name is displayed in the window.

Step 5

Click Next. The Deploy OVF Template window is refreshed, with 2 - Select a name and folder now highlighted. Enter a name and select the respective Datacenter for the Cisco Crosswork VM you are creating.

We recommend that you include the Cisco Crosswork version and build number in the name, for example: Cisco Crosswork 4.0 Build 152.

Step 6

Click Next. The Deploy OVF Template window is refreshed, with 3 - Select a compute resource highlighted. Select the host for your Cisco Crosswork VM.

Step 7

Click Next. The VMware vCenter Server validates the OVA. Network speed will determine how long validation takes. After the validation is complete, the Deploy OVF Template window is refreshed, with 4 - Review details highlighted.

Step 8

Review the OVF template that you are deploying. Note that this information is gathered from the OVF, and cannot be modified.

Step 9

Click Next. The Deploy OVF Template window is refreshed, with 5 - License agreements highlighted. Review the End User License Agreement and click the I accept all license agreements checkbox.

Step 10

Click Next. The Deploy OVF Template window is refreshed, with 6 - Configuration highlighted. Choose the desired deployment configuration.

Figure 1. Select a deployment configuration
Note 

If Cisco Crosswork is deployed using a single interface, then Cisco Crosswork Data Gateway must be deployed using a single interface as well (only required for lab deployments).

Step 11

Click Next. The Deploy OVF Template window is refreshed, with 7 - Select Storage highlighted. Choose the relevant option from the Select virtual disk format drop-down list. From the table, choose the datastore you want to use, and review its properties to ensure there is enough available storage.

Figure 2. Select Storage
Note 

For production deployment, choose the Thick provision eager zeroed option because this will preallocate disk space and provide the best performance. For lab purposes, we recommend the Thin provision option because it saves disk space.

Step 12

Click Next. The Deploy OVF Template window is refreshed, with 8 - Select networks highlighted. From the Data Network and Management Network drop-down lists, choose an appropriate destination network.

Step 13

Click Next. The Deploy OVF Template window is refreshed, with 9 - Customize template highlighted.

  1. Expand the Management Network settings. Provide information for the IPv4 or IPv6 deployment (as per your selection).

  2. Expand the Data Network settings. Provide information for the IPv4 or IPv6 deployment (as per your selection).

    Figure 3. Customize template settings
    Note 

    Data Network settings are not displayed if you have selected the IPv4 on a Single Interface or IPv6 on a Single Interface configuration.

  3. Expand the Deployment Credentials settings. Enter relevant values for the VM Username and Password.

  4. Expand the DNS and NTP Servers settings. According to your deployment configuration (IPv4 or IPv6), the fields that are displayed are different. Provide information in the following three fields:

    • DNS IP Address: The IP addresses of the DNS servers you want the Cisco Crosswork server to use. Separate multiple IP addresses with spaces.

    • DNS Search Domain: The name of the DNS search domain.

    • NTP Servers: The IP addresses or host names of the NTP servers you want to use. Separate multiple IPs or host names with spaces.

    Note 

    The DNS and NTP servers must be reachable using the network interfaces you have mapped on the host. Otherwise, the configuration of the VM will fail.

  5. The default Disk Configuration settings should work for most environments. Change the settings only if you are instructed to by the Cisco Customer Experience team.

  6. Expand Crosswork Configuration and enter your legal disclaimer text (users will see this text if they log into the CLI).

  7. Expand Crosswork Cluster Configuration. Provide relevant values for the following fields:

    • VM Type:

      • Choose Hybrid if this is one of the 3 hybrid nodes.

      • Choose Worker if this is a worker node.

    • Cluster Seed node:

      • Choose True if this is the first VM being built in a new cluster.

      • Choose False for all other VMs, or when rebuilding a failed VM.

    • Crosswork Management Cluster Virtual IP: Enter the Management Virtual IP address and Management Virtual IP DNS name.

    • Crosswork Data Cluster Virtual IP: Enter the Data Virtual IP address and the Data Virtual IP DNS name.

    • Initial node count: Default value is 3.

    • Initial leader node count: Default value is 3.

    • Location of VM: Enter the location of VM.

    • Installation type:

      • For new cluster installation: Do not select the checkbox.

      • Replacing a failed VM: Select the checkbox if this VM is being installed to replace a failed VM.

Step 14

Click Next. The Deploy OVF Template window is refreshed, with 10 - Ready to Complete highlighted.

Step 15

Review your settings and then click Finish if you are ready to begin deployment. Wait for the deployment to finish before continuing. To check the deployment status:

  1. Open a VMware vCenter client.

  2. In the Recent Tasks tab of the host VM, view the status of the Deploy OVF template and Import OVF package jobs.

Step 16

To finalize the template creation, select the host and right-click on the newly installed VM and select Template > Convert to Template. A prompt confirming the action is displayed. Click Yes to confirm. The template is created under the VMs and Templates tab in the vSphere Client UI.

This is the end of the first part of the manual installation workflow. In the second part, use the newly created template to build the cluster VMs.

Step 17

To build the VM, right-click on the newly created template and select New VM from This Template.

Step 18

The VMware Deploy From Template window appears, with the first step, 1 - Select a name and folder, highlighted. Enter a name and select the respective Datacenter for the VM.

Step 19

Click Next. The Deploy From Template window is refreshed, with 2 - Select a compute resource highlighted. Select the host for your Cisco Crosswork VM.

Step 20

Click Next. The Deploy From Template window is refreshed, with 3 - Select Storage highlighted. Choose the Same format as source option as the virtual disk format (recommended).

If you are using a single data store: Select the data store you wish to use, and click Next.

Figure 4. Select Storage - single data store

If you are using two data stores (regular and high speed):

  • Enable Configure per disk option.

  • Select regular data store as the Storage setting for all the disks except disk 6.

  • Select high speed (ssd) data store as the Storage setting for disk 6.

    Note 

    This disk must have 50 GB of free storage space.

    Figure 5. Select Storage - Configure per disk
  • Click Next.

Step 21

The Deploy From Template window is refreshed, with 4 - Select clone options highlighted. You can choose further clone options here.

(Optional) Perform the following steps to configure the disk, memory and Extensible Firmware Interface (EFI) boot settings:

  • Choose Customize this virtual machine's hardware and click Next. The Edit Settings dialog box is displayed.

  • Under Virtual Hardware tab, enter the relevant values (see VM Host Requirements) for CPU and Memory.

  • Under VM Options tab, expand Boot Options, select EFI as the Firmware, and check the Secure Boot checkbox.

Step 22

Click Next. The Deploy From Template window is refreshed, with 5 - Customize vApp properties highlighted. The vApp properties from the template are already populated in this window. You need to check the following fields:

  • Cluster Seed node:

    • Choose True if this is the first VM being built in a new cluster.

    • Choose False for all other VMs, or when rebuilding a failed VM.

  • Management Network settings: Enter correct IP values for each VM in the cluster.

  • Data Network settings: Enter correct IP values for each VM in the cluster.

  • Crosswork Management Cluster Virtual IP: The Virtual IP will remain the same for each cluster node.

  • Crosswork Data Cluster Virtual IP: The Virtual IP will remain the same for each cluster node.

  • Deployment Credentials: Enter the same deployment credentials for each VM in the cluster.

Note 

If this VM is being deployed to replace a failed VM, the IP and other settings must match the machine being replaced.

Step 23

Click Next. The Deploy From Template window is refreshed, with 6 - Ready to complete highlighted. Review your settings and then click Finish if you are ready to begin deployment.

Step 24

Repeat from Step 17 to Step 23 to deploy the remaining VMs in the cluster.

Step 25

You can now power on the Cisco Crosswork VMs to complete the deployment process. The VM selected as the cluster seed node must be powered on first, followed by the remaining VMs (after a delay of a few minutes). To power on, expand the host’s entry, click the Cisco Crosswork VM, and then choose Actions > Power > Power On.

The time taken to create the cluster can vary based on the size of your deployment profile and the performance characteristics of your hardware. See Monitor the Installation to know how you can check the status of the installation.

Note 

If you are running this procedure to replace a failed VM, then you can check the status from the Cisco Crosswork GUI (go to Administration > Crosswork Manager and click the cluster tile to check the Crosswork Cluster status).


Manual Installation of Cisco Crosswork on Cisco CSP

This section explains the procedure to manually install Crosswork cluster hybrid nodes and worker nodes on Cisco CSP.


Note

While deploying worker nodes, set the VMType value in the ovf-env.xml file as Worker.


Procedure


Step 1

Prepare the Cisco Crosswork service image for upload to Cisco CSP:

  1. Download and extract the Cisco Crosswork qcow2 build from cisco.com to your local machine or a location on your local network that is accessible to your Cisco CSP.

    The build is a tarball of the qcow2 file and the template file (.tpl).

    Note 

    The procedure requires the ovf-env.xml file. You must create it using the template file found in the build.

  2. Open the ovf-env.xml file and modify the parameters as per your installation requirements.

    Below is an example of what the ovf-env.xml file looks like:

    <?xml version="1.0" encoding="UTF-8"?>
     <Environment
         xmlns="http://schemas.dmtf.org/ovf/environment/1"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns:oe="http://schemas.dmtf.org/ovf/environment/1"
         xmlns:ve="http://www.cisco.com/schema/ovfenv"
         oe:id="">
       <PlatformSection>
          <Kind>Cisco CSP</Kind>
          <Version>2.8</Version>
          <Vendor>Cisco</Vendor>
          <Locale>en</Locale>
       </PlatformSection>
       <PropertySection>
             <Property oe:key="CWIPv4Address" oe:value="0.0.0.0"/>
             <Property oe:key="CWIPv6Address" oe:value="::0"/>
             <Property oe:key="CWPassword" oe:value="{{.CWPassword}}"/>
             <Property oe:key="CWUsername" oe:value="{{.CWUsername}}"/>
             <Property oe:key="ClusterName" oe:value="{{.ClusterName}}"/>
             <Property oe:key="CwInstaller" oe:value="True"/>
             <Property oe:key="DNSv4" oe:value="{{.DNSv4}}"/>
             <Property oe:key="DNSv6" oe:value="{{.DNSv6}}"/>
             <Property oe:key="DataIPv4Address" oe:value="{{.DataIPv4Address}}"/>
             <Property oe:key="DataIPv4Gateway" oe:value="{{.DataIPv4Gateway}}"/>
             <Property oe:key="DataIPv4Netmask" oe:value="{{.DataIPv4Netmask}}"/>
             <Property oe:key="DataIPv6Address" oe:value="{{.DataIPv6Address}}"/>
             <Property oe:key="DataIPv6Gateway" oe:value="{{.DataIPv6Gateway}}"/>
             <Property oe:key="DataIPv6Netmask" oe:value="{{.DataIPv6Netmask}}"/>
             <Property oe:key="DataVIP" oe:value="{{.DataVIP}}"/>
             <Property oe:key="Deployment" oe:value="{{.Deployment}}"/>
             <Property oe:key="Disclaimer" oe:value="{{.Disclaimer}}"/>
             <Property oe:key="Domain" oe:value="{{.Domain}}"/>
             <Property oe:key="InitMasterCount" oe:value="{{.InitMasterCount}}"/>
             <Property oe:key="InitNodeCount" oe:value="{{.InitNodeCount}}"/>
             <Property oe:key="IsSeed" oe:value="{{.IsSeed}}"/>
             <Property oe:key="K8Orch" oe:value=""/>
             <Property oe:key="ManagementIPv4Address" oe:value="{{.ManagementIPv4Address}}"/>
             <Property oe:key="ManagementIPv4Gateway" oe:value="{{.ManagementIPv4Gateway}}"/>
             <Property oe:key="ManagementIPv4Netmask" oe:value="{{.ManagementIPv4Netmask}}"/>
             <Property oe:key="ManagementIPv6Address" oe:value="{{.ManagementIPv6Address}}"/>
             <Property oe:key="ManagementIPv6Gateway" oe:value="{{.ManagementIPv6Gateway}}"/>
             <Property oe:key="ManagementIPv6Netmask" oe:value="{{.ManagementIPv6Netmask}}"/>
             <Property oe:key="ManagementVIP" oe:value="{{.ManagementVIP}}"/>
             <Property oe:key="NSOProvider" oe:value="False"/>
             <Property oe:key="NTP" oe:value="{{.NTP}}"/>
             <Property oe:key="VMType" oe:value="{{.VMType}}"/>
             <Property oe:key="corefs" oe:value="20"/>
             <Property oe:key="ddatafs" oe:value="200"/>
             <Property oe:key="logfs" oe:value="10"/>
             <Property oe:key="ramdisk" oe:value="{{.RamDiskSize}}"/>
       </PropertySection>
    </Environment>
    Note 

    Only one node in the cluster must have IsSeed set to True.

Step 2

Upload Cisco Crosswork service image to Cisco CSP:

  1. Log into the Cisco CSP.

  2. Go to Configuration > Repository.

  3. On the Repository Files page, click the Add icon.



  4. Select an Upload Destination.

  5. Click Browse, navigate to the qcow2 file, click Open and then Upload.

    Repeat this step to upload ovf-env.xml file.



    After the file is uploaded, the file name and other relevant information are displayed in the Repository Files table.

Step 3

Create Cisco Crosswork VM:

  1. Go to Configuration > Services.

  2. On the Service page, click the Add icon.

  3. Check the Create Service option.

    The Create Service Template page is displayed.

  4. Enter the values for the following fields:

    • Name: Name of the VM.

    • Target Host Name: Choose the target host on which you want to deploy the VM.

    • Image Name: Select the qcow2 image.

  5. Click Day Zero Config.



    In the Day Zero Config dialog box, do the following:

    1. From the Source File Name drop-down list, select a day 0 configuration file, i.e., the ovf-env.xml file that you modified and uploaded earlier.

    2. In the Destination File Name field, specify the name of the day0 destination text file. This must always be "ovf-env.xml".

    3. Click Submit.

  6. Enter the values for the following fields:

    • Number of CPU Cores: 8 (Small) or 12 (Large).

    • RAM (MB): 49152 (Small) or 98304 (Large).

  7. Click VNIC.



    In the VNIC Configuration dialog box, perform the following:

    Note 

    The VNIC Name is set by default.

    1. Select the Interface Type as Access.

    2. Select the Model as Virtio.

    3. Select the Network Type as External.

    4. Select Network Name:

      • vnic0: Eth0-1

      • vnic1: Eth1-1

    5. Select Admin Status as UP.

    6. Click Submit.

    7. Repeat Steps 1 to 6 for vNIC1 and vNIC2.

    After you have added all three vNICs, the VNIC table will look like this:



  8. Expand the Service Advance Configuration and for Firmware, select uefi from the drop-down.

    Check the Secure Boot checkbox.



  9. Click Storage. In the Storage Configuration dialog box, fill the following fields:

    • Name: Name of the storage. This is specified by default.

    • Device Type: Select Disk.

    • Location: Select local.

    • Disk Type: Select VIRTIO.

    • Format: Select QCOW2.

    • Mount image file as disk?: Leave this unchecked.

    • Size (GB): Enter the disk size (5 for Standard and 500 for Extended).



    Note 

    You have to configure 3 disks of different sizes:

    • Disk 0: 10 GB

    • Disk 1: 400 GB

    • Disk 2: 50 GB

    When you have completed the storage configuration, click Submit.

  10. Click Deploy.



    You will see a similar message once the service has successfully deployed. Click Close.

Step 4

Repeat Step 1 to Step 3 for each VM in the cluster.

Step 5

Deploy Cisco Crosswork VM:

  1. Go to Configuration > Services.

  2. In the Services table, click the console icon under the Console column for the Cisco Crosswork VM you created above.


What to do next

The time taken to create the cluster can vary based on the size of your deployment profile and the performance characteristics of your hardware. See Monitor the Installation to know how you can check the status of the installation.

Monitor the Installation

This section explains how to monitor and verify if the installation has completed successfully. As the installer builds and configures the cluster it will report progress. The installer will prompt you to accept the license agreement and then ask if you want to continue the install. After you confirm, the installation will progress and any errors will be logged in either installer.log or installer_tf.log.


Note

During installation, Cisco Crosswork will create a special administrative ID (virtual machine (VM) administrator, with the username cw-admin, and the default password cw-admin). The administrative username is reserved and cannot be changed. The first time you log in using this administrative ID, you will be prompted to change the password. Data center administrators use this ID to log into and troubleshoot the Crosswork application VM. You will use it to verify that the VM has been properly set up.
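
For example, a data center administrator can verify that a VM has been set up properly by logging in to it over SSH with this ID (a minimal sketch, assuming the VM's Management IP address is reachable):

ssh cw-admin@<Management IP address of the VM>
# Enter the default password cw-admin; you are prompted to set a new password on first login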


The following is a list of critical steps in the process that you can watch for to be certain that things are progressing as expected:

  1. The installer uploads the crosswork image file (OVA file in vCenter & QCOW2 file in CSP) to the data center.

  2. The installer creates the VMs, and displays a success message (e.g. "Creation Complete") after each VM is created.


    Note

    For VMware deployments, this activity can also be monitored from the vSphere UI.


  3. After the VMs are created successfully, the Crosswork cluster will be created.

  4. Once the cluster is created and becomes accessible, a success message (e.g. "CW Installer operation complete") will be displayed on the screen.

Once the VMs are built and powered on (either automatically when the installer completes, or after you power on the VMs during the manual installation) the Kubernetes cluster is built and the containers that make up Crosswork are started. You can monitor startup progress using the following methods:

  • Using browser accessible dashboard: While the cluster is being created, you can monitor the setup process from a browser accessible dashboard. The URL for this grafana dashboard (in the format http://{VIP}:30603/grafana.monitoring) is displayed once the installer completes. Please note that this URL is temporary and will be available only for a limited time (around 30 minutes). At the end of the deployment, the grafana dashboard will report a "Ready" status. If the URL is inaccessible, you can use the other methods described in this section to monitor the installation process.

    Figure 6. Crosswork Deployment Readiness
  • Using the console: You can also check the progress from the console of one of the hybrid VMs by using SSH to the Virtual IP address, switching to super user, and running the kubectl get nodes command (to see whether the nodes are ready) and the kubectl get pods command (to see the list of active running pods). Repeat the kubectl get pods command until you see robot-ui in the list of active pods. At this point, you can try to access the Cisco Crosswork UI. A sketch of this check is shown below.
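
A minimal sketch of this console check, assuming the Management Virtual IP is reachable and using the cw-admin account created during installation:

ssh cw-admin@<Management VIP>
sudo -i
kubectl get nodes    # all nodes should eventually report a Ready status
kubectl get pods     # repeat until robot-ui appears among the running pods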

After the Cisco Crosswork UI becomes accessible, you can also monitor the status from the UI. For more information, see Log into the Cisco Crosswork UI.

Failure Scenario

In the event of a failure scenario (listed below), contact the Cisco Customer Experience team and provide the installer.log and installer_tf.log files (there will be one per VM) for review:

  • Installation is incomplete

  • Installation is completed, but the VMs are not functional

  • Installation is completed, but you are directed to check firstboot.log file

Log into the Cisco Crosswork UI

Once the cluster activation and startup have been completed, you can check if all the nodes are up and running in the cluster from the Cisco Crosswork UI. Perform the following steps to log into the Cisco Crosswork UI and check the cluster health:


Note

If the Cisco Crosswork UI is not accessible during installation, access the host's console from the VMware or CSP UI to confirm whether there was any problem in setting up the VM. When logging in, if you are directed to review the firstboot.log file, check the file to determine the problem. If you are able to identify the error, rectify it and rerun the installer. If you require assistance, contact the Cisco Customer Experience team.



Note

You can log into the Crosswork UI using the DNS name as well.


Procedure


Step 1

Launch one of the supported browsers (see Supported Web Browsers).

Step 2

In the browser's address bar, enter:

https://<Crosswork Management Network Virtual IP (IPv4)>:30603/

or

https://[<Crosswork Management Network Virtual IP (IPv6)>]:30603/
Note 

Please note that the IPv6 address in the URL must be enclosed in brackets.

Note 

You can also log into the Crosswork UI using the DNS name.

The Log In window opens.

Note 

When you access Cisco Crosswork for the first time, some browsers display a warning that the site is untrusted. When this happens, follow the prompts to add a security exception and download the self-signed certificate from the Cisco Crosswork server. After you add a security exception, the browser accepts the server as a trusted site in all future login attempts. If you want to use a CA signed certificate, see the "Manage Certificates" section in the Cisco Crosswork Infrastructure 4.1 and Applications Administrator Guide.

Step 3

Log into the Cisco Crosswork as follows:

  1. Enter the Cisco Crosswork administrator username admin and the default password admin.

  2. Click Log In.

  3. When prompted to change the administrator's default password, enter the new password in the fields provided and then click OK.

    Note 

    Use a strong password (8 characters long, including upper and lower case letters, numbers, and one special character).

The Crosswork Manager window is displayed.

Step 4

(Optional) Click on the Crosswork Health tab, and click on the Crosswork Infrastructure tile to view the health status of the microservices running on Cisco Crosswork.


Known Limitations

The following scenarios are caveats for installing Cisco Crosswork using the cluster installer tool.

  • The vCenter host VMs defined must use the same network names (vSwitch) across all hosts in the DC.

  • The vCenter storage folders, i.e. datastores organized under a virtual folder structure, are not supported currently. Please ensure that the datastores referenced are not grouped under a folder.

  • When deploying an IPv6 cluster, the installer needs to run on an IPv6 enabled container/VM. This requires additionally configuring the docker daemon before running the installer, using the following method:

    • Linux hosts (ONLY): Run the docker container in host networking mode by adding the "--network host" flag to the docker run command line.

      docker run --network host <remainder of docker run options>
  • The cluster installer does not configure VMs with VLAN interfaces. As a result, CSP interfaces have to be untrunked with no tagged VLANs used for Management and Data networks. CSP allows non-VLAN tagged interfaces to be shared between multiple VMs, which allows for a more optimal interface assignment when deploying Crosswork and Crosswork Data Gateway VMs on the same CSP.

  • Any VMs that are not created by the day 0 installer (for example, manually brought up VMs), cannot be changed either by the day 0 installer or via the Crosswork UI later. Similarly, VMs created via the Crosswork UI cannot be modified using the day 0 installer.

  • Crosswork does not support dual stack configurations, and all addresses for the environment must be either IPv4 or IPv6. However, vCenter UI provides a service where a user accessing via IPv4 can upload images to the IPv6 ESXi host. Cluster installer cannot use this service. Follow either of the following workarounds for IPv6 ESXi hosts:

    1. Upload the OVA template image manually via the GUI, and convert it to a template.

    2. Run the cluster installer from an IPv6 enabled machine. To do this, configure the docker daemon to map an IPv6 address into the docker container (see the Docker daemon sketch after this list).

  • Centos/RHEL hosts, by default, enforce a strict SELinux policy which does not allow the installer container to read from or write to the mounted data volume. On such hosts, run the docker volume command with the Z option as shown below:

    docker run --rm -it -v `pwd`:/data:Z <remainder of docker options>
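
For the IPv6 workaround mentioned above (running the installer from an IPv6 enabled machine), one way to give the installer container IPv6 connectivity is to enable IPv6 in the Docker daemon configuration. This is a minimal sketch, assuming a Linux host and an example subnet; adjust the values for your environment:

# Creates /etc/docker/daemon.json; merge these keys with any existing settings instead of overwriting the file
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
EOF
# Restart the Docker daemon so the change takes effect
sudo systemctl restart docker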

Troubleshoot the Cluster

By default, the installer displays progress data on the command line. The install log is fundamental to identifying problems, and it is copied into the /data directory.

Scenario

Possible Resolution

Missing or invalid parameters

The installer provides a clue regarding the issue; however, errors in the manifest file's HCL syntax can be misleading. If you see "Type errors", check the formatting of the configuration manifest.

The manifest file can also be passed as a simple JSON file. Use the following converter to validate/convert: https://www.hcl2json.com/

Image upload takes a long time or upload is interrupted.

The image upload duration depends on the link and datastore performance, and can be expected to take around 10 minutes or more. It is best not to interrupt the process. If an upload is interrupted, it ceases automatically, and you need to manually remove the partially uploaded image file from vCenter via the vSphere UI.

vCenter authorization

The vCenter user needs to have authorization to perform the actions as described in Cisco Crosswork Installation Requirements.

Floating VIP address is not reachable

The VRRP protocol requires unique router_id advertisements to be present on the network segment. By default, Crosswork uses router ID 169 on the Management network segment and ID 170 on the Data network segment. A symptom of a conflict, if it arises, is that the VIP address is not reachable. Remove the conflicting VRRP router machines or use a different network.

Crosswork VM does not allow you to log in

The password specified is not strong enough. Change the configuration manifest and redeploy.

Error conditions such as:

Error: Error locking state: Error acquiring the state lock: resource temporarily unavailable

Error: error fetching virtual machine: vm not found

Error: Invalid index

These errors are common when re-running the installer after an initial run was interrupted (Ctrl+C, TCP timeout, etc.). Remediation steps are:

  1. Run the clean operation (./cw-installer.sh clean -m <your manifest here>) OR remove the VM files manually from the vCenter.

  2. Remove the state file (rm /data/crosswork-cluster.tfstate) and retry.

Deployment fails with: Failed to validate Crosswork cluster initialization.

The cluster's seed VM is either unreachable, or one or more of the cluster VMs have failed to get properly configured.

  1. Check whether the VM is reachable, and collect logs from /var/log/firstBoot.log and /var/log/vm_setup.log

  2. Check the status of the other cluster nodes.

The VMs are deployed but the Crosswork cluster is not being formed.

A successful deployment allows the operator, after logging in to the VIP or any cluster IP address, to run the following command to get the status of the cluster:
sudo kubectl get nodes
A healthy output for a 3-node cluster is:
NAME                  STATUS   ROLES    AGE   VERSION
172-25-87-2-hybrid.cisco.com   Ready    master   41d   v1.16.4
172-25-87-3-hybrid.cisco.com   Ready    master   41d   v1.16.4
172-25-87-4-hybrid.cisco.com   Ready    master   41d   v1.16.4

In case of a different output, collect the following logs: /var/log/firstBoot.log and /var/log/vm_setup.log

In addition, for any cluster nodes not displaying the Ready state, collect:
sudo kubectl describe node <name of node>

The following error is displayed while uploading the image:

govc: The provided network mapping between OVF networks and the system network is not supported by any host.

The DSwitch on the vCenter is misconfigured. Check whether it is operational and mapped to the ESXi hosts.

The VMs take a long time to deploy

The disk load on the vCenter plays a major role in VM cloning time. To ease the load on busy systems, you can run the VM install operations in a serialized manner (omit the -p flag). On higher-performance systems, run the deployment in parallel by passing the -p flag.

VMs deploy but install fails with Error: timeout waiting for an available IP address

The most likely cause is an issue in the VM parameters provided, or network reachability. Access the VM host through the vCenter console, and review and collect the following logs: /var/log/firstBoot.log and /var/log/vm_setup.log

On cluster node failure, the VIP is not transferred to the remaining nodes

Ensure that the switch or the vCenter DSwitch connecting the VMs allows IP address movement (Allow Forged Transmits in vCenter). For more information, see Data Center Requirements.

When deploying on a vCenter, the following error is displayed towards the end of the VM bringup:

Error processing disk changes post-clone: disk.0: ServerFaultCode: NoPermission: RESOURCE (vm-14501:2000), ACTION (queryAssociatedProfile): RESOURCE (vm-14501), ACTION (PolicyIDByVirtualDisk)

Enable Profile-driven storage. Query permissions for the vCenter user at the root level (i.e. for all resources) of the vCenter.

Installer reports a plan to add more resources than the current number of VMs

Other than the Crosswork cluster VMs, the installer tracks a couple of other meta-resources. Thus, when doing an installation of, say a 3-VM cluster, the installer may report a "plan" to add more resources than the number of VMs.

On running or cleaning, installer reports Error: cannot locate virtual machine with UUID "xxxxxxx": virtual machine with UUID "xxxxxxxx" not found

The installer uses the tfstate file stored as /data/crosswork-cluster.tfstate to maintain the state of the VMs it has operated upon. If a VM is removed outside of the installer (that is, through the vCenter UI), this state goes out of synchronization.

To resolve, remove the /data/crosswork-cluster.tfstate file.