Install Cisco iNode Manager with Autodeployer
Prerequisite: Install Intelligent Node Software on GS7000 iNode
As a prerequisite for iNode Manager 23.1 installation or migration, ensure that the GS7000 iNode runs OIB image 3.1.1 or above by performing the following steps.
Procedure
Step 1: Download the GS7000 iNode image from the Cisco Software Download page.
Step 2: Set the iNode version in the DHCP configuration file to 03.01.01 or above.
Step 3: Enable the force upgrade option in the DHCP configuration. See the GS7000 iNode Release Notes.
Step 4: Reboot the iNode.
Background
To install an SMI cluster, the following setup is necessary:
- Staging server: a physical or virtual machine to run the installation script.
- Hypervisor (VMware ESXi 7.0)
- vCenter (version 7.0 or above): manager for the vSphere infrastructure that hosts the VMs for the SMI clusters.
Note: We recommend that you use VMware vCenter Server 7.0 with the VMFS 6 datastore type.
The installation process creates the following:
- SMI Cluster Manager (or Deployer): a controller to configure and deploy the SMI cluster.
- SMI Cluster: the cluster on which the target product application runs.
A release image bundle is a compressed tarball file that contains all the scripts, helm charts, and docker images necessary to install the Deployer and the SMI cluster. It also contains copies of these instructions and configuration examples.
You can use the Autodeploy script that is in the bundle to set up the Deployer and the SMI clusters.
Cisco iNode Manager supports two cluster sizes:
- Single-node cluster (also called All-In-One cluster, or AIO): runs on a single VM.
- Multinode cluster: runs on three UCS servers, each hosting a Control-Plane, etcd, Infra, and App VM, for a total of 12 VMs. Multinode clusters support two sizes: small and normal.
The multinode cluster provides full high-availability support and is the only recommended cluster size for production environments. The default size is small when the size field is not provided.
The following table shows the minimum requirements for the AIO cluster:
Node Type | CPUs | RAM Size (GB) | Disk Size (GB) |
---|---|---|---|
Deployer | 8 | 16 | 320 |
AIO | 18 | 96 | 1529 |
The following tables show the minimum requirements for each of the VM types deployed in a multinode cluster, for the small and normal sizes respectively:
Small multinode cluster:
Node Type | CPUs | RAM Size (GB) | Disk Size (GB) |
---|---|---|---|
Deployer | 8 | 16 | 320 |
Control-plane | 2 | 16 | 125 |
etcd | 2 | 16 | 125 |
Infra | 8 | 64 | 1000 |
Ops | 8 | 64 | 320 |
Normal multinode cluster:
Node Type | CPUs | RAM Size (GB) | Disk Size (GB) |
---|---|---|---|
Deployer | 8 | 16 | 320 |
Control-plane | 2 | 16 | 125 |
etcd | 2 | 16 | 125 |
Infra | 14 | 98 | 1500 |
Ops | 16 | 180 | 320 |
High-Level Overview of Installation Workflow
You can prepare and deploy multiple clusters, if necessary.
Prepare Staging Server
The staging server can be any type of host: physical server, virtual machine, or even a laptop. However, the server must be able to connect to the target VMware vSphere Infrastructure, vCenter Server, and cluster nodes with correct credentials.
Prerequisites
The Staging Server must have the following software installed:
Note: Ensure that the staging server has internet connectivity to download the iNode Manager release bundle from the Cisco software downloads page.
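Before you begin, you can optionally verify that the staging server reaches the vCenter server and has outbound internet access. The commands below are an illustrative sketch only; the vCenter address is a placeholder and the checks are not part of the installer:
# Verify reachability of the vCenter server
ping -c 3 <vcenter-server>
# Verify that the vSphere UI endpoint responds over HTTPS (-k skips certificate validation)
curl -k -s -o /dev/null -w "%{http_code}\n" https://<vcenter-server>/ui/
# Verify outbound internet access for downloading the release bundle
curl -s -o /dev/null -w "%{http_code}\n" https://software.cisco.com/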
Unpack Cisco iNode Manager Release Bundle
The iNode Manager release bundle image is a compressed tarball file that is self-sufficient for the Deployer and the iNode Manager cluster installation. It contains the following files:
- Installation script
- All relevant product images
- Sample configuration files
- Copy of the README file
Procedure
Step 1: Download the signed iNode Manager release bundle image to the Staging Server and extract the content; this step untars the signed bundle.
Step 2: Run the extraction command to extract all the individual images of SMI, Cisco Operations Hub, and iNode Manager. We call this directory the staging directory in this document. (An illustrative sketch of this sequence follows the procedure.)
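The exact file names and extraction commands come from the release bundle that you download (the bundled copy of the README documents them). The following is only an illustrative sketch with a hypothetical bundle name:
# Illustrative only: untar the signed bundle downloaded in Step 1 (hypothetical file name)
tar -xvf <inode-manager-release-bundle>.tgz
# The Step 2 image-extraction command is provided with the bundle; run it from the
# extracted directory, which this guide refers to as the staging directory.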
Prepare a Cluster Configuration File
VMware vCenter Details
To contact the VMware vCenter server, the deploy script and the deployer require the following details:
- Server name or IP address
- Username and password
- Datacenter and cluster name
- Host server and datastore names
For the Deployer and single-node cluster, one host server is necessary. For multinode clusters, three host servers are necessary.
The Deployer and the SMI Clusters can run on different vCenters.
IP Addresses for Deployer and Cluster
Deploying the iNode Manager software offline requires the following IP addresses:
- One management IP address for the Deployer
- Management IP addresses for the cluster nodes (1 for a single-node cluster, 12 for a multinode cluster)
- Converged Interconnect Network (CIN) IP addresses for iNode Manager (1 per CIN interface per App node)
- For multinode clusters, 1 virtual IP address for management and 1 for each CIN network
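For example, assuming one CIN network and one CIN interface per App node, a multinode deployment needs 18 addresses in total: 1 management IP address for the Deployer, 12 management IP addresses for the cluster VMs, 1 management virtual IP, 3 CIN IP addresses (one per App node), and 1 CIN virtual IP.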
Cluster Configuration File
Place the configuration file under the staging directory. The configuration file is in standard YAML format and has the following three sections:
- Environments
- Deployers
- Clusters (iNode Manager multinode or single-node)
Each section can contain multiple items.
Note: Replace all the fields marked with <...> in the following sections with actual values.
VMware vCenter Environment Configuration
This section provides details of the VMware vCenter access and network access for creating and provisioning the deployers and cluster virtual machines.
environments:
<environment name>:
server: <vCenter name or IP address>
username: <vCenter user name>
datacenter: <vCenter datacenter name>
cluster: <vCenter cluster name>
nics: [ <LIST of vCenter management networks> ]
host: <UCS host>
datastore: <Datastore name>
nameservers: [ <LIST of DNS servers> ]
search-domains: [ <LIST of search domains> ]
ntp: <ntp server name or IP address>
Guidelines for configuring the VMware vCenter environment:
- The environment name can have only lowercase letters, digits, and hyphens (-).
- The nics list must have only one network, although the NIC configuration allows multiple networks. The deployer or cluster that refers to this environment uses this network as the management network.
- If your vCenter has more than one network that serves as a management network, configure an environment for each such network, and use the corresponding environment in the deployer or cluster based on the management network it uses (see the sketch after the following note).
- Configure the nics, nameservers, and search-domains fields as lists.
Note: If there are special characters in the username, update the configuration from the deployer CLI: add double quotes (") around the username value and rerun the sync command.
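For illustration only, the following sketch (hypothetical names, networks, and addresses) shows two environments that point at the same vCenter but differ only in the management network:
environments:
  lab-mgmt-a:
    server: vcenter.example.com
    username: "administrator@vsphere.local"
    datacenter: LabDC
    cluster: LabCluster
    nics: [ "Mgmt Network A" ]
    datastore: Datastore-1
    nameservers: [ "192.0.2.53" ]
    search-domains: [ "example.com" ]
    ntp: ntp.example.com
  lab-mgmt-b:
    server: vcenter.example.com
    username: "administrator@vsphere.local"
    datacenter: LabDC
    cluster: LabCluster
    nics: [ "Mgmt Network B" ]
    datastore: Datastore-1
    nameservers: [ "192.0.2.53" ]
    search-domains: [ "example.com" ]
    ntp: ntp.example.com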
Deployer Configuration
Before creating and deploying a deployer, define a minimum of one environment.
FQDN disabled:
deployers:
<deployer name>:
environment: <environment of vCenter hosting the deployer>
address: <deployer VM IP address in CIDR format>
gateway: <gateway IP address>
username: <user name for deployer>
# SSH private-key-file with path relative to the staging directory
# If the line is missing, ssh private key will be auto-generated and saved inside .sec/
private-key-file: <path and filename for ssh private key>
host: <ESXi host IP address>
datastore: <vCenter datastore name for host>
FQDN enabled:
deployers:
<deployer name>:
environment: <environment of vCenter hosting the deployer>
address: <deployer VM IP address in CIDR format>
gateway: <gateway IP address>
username: <user name for deployer>
# SSH private-key-file with path relative to the staging directory
# If the line is missing, ssh private key will be auto-generated and saved inside .sec/
private-key-file: <path and filename for ssh private key>
host: <ESXi host IP address>
datastore: <vCenter datastore name for host>
# ingress-hostname only supports valid FQDN
ingress-hostname: "deployer.example.com"
Guidelines for configuring the deployer:
- The name of the deployer can have only lowercase letters, digits, and hyphens (-).
- The private-key-file field, when present, must refer to the SSH private key file. This file must be in the staging directory and must not be accessible (read, write, or execute) to other users (see the example after this list). If the private-key-file line is missing, the deploy script generates an SSH private key for the deployer (or SMI cluster) and places it in the .sec subdirectory under the staging directory. The filename is <deployer-name>_auto.pem.
- To avoid resource contention, do not run the deployer on an ESXi server that serves any iNode Manager clusters.
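For example (hypothetical path and file name), you can restrict the key so that only its owner can read and write it before running the deploy script:
chmod 600 <staging-directory>/inodemgr.pem
ls -l <staging-directory>/inodemgr.pem   # expect -rw------- owned by the installing user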
Cluster Configuration
Before creating and deploying a cluster, configure a minimum of one environment and one deployer. A cluster has an environment field that references its corresponding environment.
clusters:
"multinode-blr":
type: "opshub"
size: "normal"
environment: "sj-mn-inf"
username: "cloud-user"
# "true" for dual-stack, otherwise "none"
ipv6-mode: "true"
# private-key-file must exist in the path of staging/install directory
# file path is relative to the staging/install directory
private-key-file: "mncmtsb.pem"
primary-vip: "10.64.98.219/25"
primary-vip-ipv6: "2001:420:54FF:24:0000:0000:655:0017/112"
gateway: "10.64.98.129"
# You can configure the optional parameter ingress-hostname to enable FQDN for ingress access.
# If you do not configure ingress-hostname, ingress can be accessed via <primary-vip>.nip.io
#ingress-hostname: "blrmn.opsdev.com"
ipv6-gateway: "2001:420:54FF:24:0000:0000:655:1"
# ingress-hostname only supports '.' and alphanumeric characters
nodes:
- host: "10.64.98.171"
datastore: "datastore1 (2)"
addresses: [ "10.64.98.220", "10.64.98.221", "10.64.98.222", "10.64.98.223"]
addresses-v6: [ "2001:420:54FF:24:0000:0000:655:b/112", "2001:420:54FF:24:0000:0000:655:c/112", "2001:420:54FF:24:0000:0000:655:d/112", "2001:420:54FF:24:0000:0000:655:e/112" ]
- host: "10.64.98.172"
datastore: "datastore1 (3)"
addresses: [ "10.64.98.224", "10.64.98.225", "10.64.98.226", "10.64.98.227"]
addresses-v6: [ "2001:420:54FF:24:0000:0000:655:f/112", "2001:420:54FF:24:0000:0000:655:0017/112", "2001:420:54FF:24:0000:0000:655:0011/112", "2001:420:54FF:24:0000:0000:655:0012/112" ]
- host: "10.64.98.173"
datastore: "datastore1 (4)"
addresses: [ "10.64.98.228", "10.64.98.229", "10.64.98.230", "10.64.98.231"]
addresses-v6: [ "2001:420:54FF:24:0000:0000:655:0013/112", "2001:420:54FF:24:0000:0000:655:0014/112", "2001:420:54FF:24:0000:0000:655:0015/112", "2001:420:54FF:24:0000:0000:655:0016/112" ]
apps:
- inode-manager:
nodes:
- host: 10.64.98.171
# nics and ops->interfaces are array objects; they are mapped by array index.
nics:
- 7.29.9.x Network
ops:
interfaces:
-
addresses:
- 7.29.9.20/16
# vip - Virtual IP address of the southbound interface
vip:
- 7.29.9.23/16
vrouter-id: 20
- host: 10.64.98.172
# nics and ops->interfaces are array objects; they are mapped by array index.
nics:
- 7.29.9.x Network
ops:
interfaces:
-
addresses:
- 7.29.9.21/16
vip:
- 7.29.9.23/16
vrouter-id: 20
- host: 10.64.98.173
# nics and ops->interfaces are array objects; they are mapped by array index.
nics:
- 7.29.9.x Network
ops:
interfaces:
-
addresses:
- 7.29.9.22/16
vip:
- 7.29.9.23/16
vrouter-id: 20
# For Single-Node cluster only
clusters:
"cicd-aio-nodes":
type: opshub
environment: "chn-smi-inodemgr-lab"
username: "inodemgruser"
gateway: "10.78.229.1"
private-key-file: "inodemgr.pem"
# pod-subnet is optional; if not given, "192.168.0.0/16" is assigned by default.
pod-subnet: "192.168.120.0/24"
# service-subnet is optional; if not given, "10.96.0.0/12" is assigned by default.
service-subnet: "10.96.120.0/24"
# docker-bridge-subnet is optional; if not given, "172.17.0.0/16" is assigned by default.
docker-bridge-subnet: ["172.17.0.0/16"]
nodes:
- host: 10.78.229.151
datastore: DatastoreSSD-229-151
datastore-folder: "ClusterDataStore"
addresses: ["10.78.229.229/24"]
apps:
- inode-manager:
nodes:
- host: 10.78.229.151
nics:
- "VLAN 175"
control-plane:
interfaces:
-
addresses:
- 175.175.255.229/16
- "2002::afaf:ffd6/112"
routes:
-
dest:
- 192.175.175.0/24
nhop: "175.175.254.254"
-
dest:
- "2002::C0af:af00/120"
nhop: "2002::afaf:fefe"
Guidelines for configuring a cluster:
- The name of the cluster can have only lowercase letters, digits, and hyphens (-).
- The private-key-file field, when present, must refer to the SSH private key file. This file must be in the staging directory and must not be accessible (read, write, or execute) to other users. If the private-key-file line is missing, the deploy script generates an SSH private key for the deployer (or SMI cluster) and places it in the .sec subdirectory under the staging directory. The filename is <deployer-name>_auto.pem.
- Configure the virtual IP address (master-vip) and VRRP ID (vrouter-id at the cluster level) for the management network for multinode clusters. The management network supports only IPv4. The vrouter-id parameter can take values 1–254.
- If multiple clusters share the same management subnet, the VRRP ID for each cluster must be unique in the management subnet.
- The ingress-hostname field, when present, supports only a valid DNS name, that is, a fully qualified domain name (FQDN). If ingress-hostname is specified, for example inodemgr.cisco.com, the following FQDNs are used:
  - inodemgr.cisco.com
  - restconf.cee-data-ops-center.inodemgr.cisco.com
  - cli.cee-data-ops-center.inodemgr.cisco.com
  - restconf.opshub-data-ops-center.inodemgr.cisco.com
  - cli.opshub-data-ops-center.inodemgr.cisco.com
  - restconf.inode-manager-data-ops-center.inodemgr.cisco.com
  - cli.inode-manager-data-ops-center.inodemgr.cisco.com
  - grafana.inodemgr.cisco.com
  - show-tac-manager.cee-data-smi-show-tac.inodemgr.cisco.com
Note: We recommend that you register a wildcard DNS record, such as *.inodemgr.cisco.com, so that all the subdomains resolve to the same IP address; otherwise, all of the preceding FQDNs must be configured in the DNS server. The IP address used for the DNS record/FQDN is the ingress-ip (for multinode clusters) or the AIO VM IP address (for AIO clusters).
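Assuming such a wildcard record exists, a quick spot check (an illustrative command, not part of the installer) is to resolve a couple of the generated names and confirm that they return the same ingress IP address:
dig +short grafana.inodemgr.cisco.com
dig +short restconf.cee-data-ops-center.inodemgr.cisco.com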
If ingress-hostname is not specified, the specified ingress-ip is used to create an FQDN. For example, if ingress-ip is 1.2.3.4, the following FQDNs are used. For AIO installations, the ingress-ip is the IP address assigned to the AIO/Ops node.
- 1.2.3.4.nip.io
- restconf.cee-data-ops-center.1.2.3.4.nip.io
- cli.cee-data-ops-center.1.2.3.4.nip.io
- restconf.opshub-data-ops-center.1.2.3.4.nip.io
- cli.opshub-data-ops-center.1.2.3.4.nip.io
- restconf.inode-manager-data-ops-center.1.2.3.4.nip.io
- cli.inode-manager-data-ops-center.1.2.3.4.nip.io
- grafana.1.2.3.4.nip.io
- show-tac-manager.cee-data-smi-show-tac.1.2.3.4.nip.io
Note: The DNS server must allow the resolution of nip.io domain names (corporate DNS resolution policies must not block the resolution of nip.io domain names) for this approach to work.
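Because nip.io maps any name that ends in <ip>.nip.io back to that IP address, you can check that your DNS path does not filter it with a lookup such as the following (an illustrative command using the 1.2.3.4 example above); an empty answer suggests that nip.io resolution is blocked:
dig +short grafana.1.2.3.4.nip.io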
iNode Manager CIN Configuration
Configure Converged Interconnect Network (CIN) for the iNode Manager cluster. One or more CIN networks can be present. Configure CIN under each node.
Guidelines for configuring CIN:
- CIN must contain the network names (nics) and the IP addresses (addresses).
- The routing table (routes) is optional.
- Use the virtual IP addresses (vip) and the VRRP ID (vrouter-id) fields only in multinode clusters. Configure them on the first node.
- The virtual IP addresses are mandatory. You can configure up to one IPv4 and one IPv6 address per CIN network.
- If multiple iNode Manager clusters share a CIN subnet, the VRRP ID must be unique for each cluster.
- For a multinode cluster, all nodes must have the same number of CIN interfaces. If the nics or routes fields are missing for the second or third node, the corresponding values from the first node are used.
- You can also set up an iNode Manager cluster as a backup cluster. For backup clusters, do not include any CIN configuration: the configuration must not have ops and interfaces under the nodes.
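For reference, a node entry in a backup cluster therefore stops at the management addressing; the standby sample files in the next section follow this pattern. A minimal sketch with hypothetical values:
nodes:
  - host: 192.0.2.10
    datastore: Datastore-1
    addresses: [ "192.0.2.20", "192.0.2.21", "192.0.2.22", "192.0.2.23" ]
    # no CIN configuration (no ops or interfaces) under the node for a backup cluster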
Sample Configuration Files
The examples directory contains sample configuration files for automatic deployment:
- deployer-sample-config-autodeploy.yaml: configuration file with only the deployer configuration.
- aio-inode-manager-config.yaml: configuration file with the deployer and the single-node iNode Manager cluster configuration.
- multinode-inode-manager-config.yaml: configuration file with the deployer and the multinode iNode Manager cluster configuration.
- aio-inode-manager-standby-config.yaml: configuration file with the standby deployer and the single-node iNode Manager cluster configuration (without CIN configuration).
- multinode-inode-manager-standby-config.yaml: configuration file with the standby deployer and the multinode iNode Manager cluster configuration (without CIN configuration).
Note: You can find these sample configuration files in the examples directory under the staging directory.
deployer-sample-config-autodeploy.yaml
deployers:
smi-deployer-147:
address: 10.78.229.147/24
datastore: DatastoreSSD-229-151
environment: chn-smi-inodemgr-lab
gateway: "10.78.229.1"
host: "10.78.229.151"
private-key-file: inodemgr.pem
username: cloud-user
#ingress-hostname only supports valid FQDN
ingress-hostname: deployer.example.com
#Optional configuration
docker-subnet-override:
- pool-name: pool1
base: 172.17.0.0/16
size: 16
environments:
chn-smi-inodemgr-lab:
cluster: smi
datacenter: CABU-VC65
datastore: "DatastoreSSD-229-150 (1)"
nameservers:
- "172.30.131.10"
- "172.16.128.140"
nics:
- "VM Network"
ntp:
- 8.ntp.esl.cisco.com
- 2.ntp.esl.cisco.com
search-domains:
- cisco.com
server: "10.78.229.250"
username: administrator@CABU.VCENTER60
apps:
- inode-manager
aio-inode-manager-config.yaml
deployers:
smi-deployer-147:
address: 10.78.229.147/24
datastore: DatastoreSSD-229-151
datastore-folder: "ClusterDataStore"
environment: chn-smi-inodemgr-lab
gateway: "10.78.229.1"
host: "10.78.229.151"
private-key-file: inodemgr.pem
username: cloud-user
#ingress-hostname only supports valid FQDN
ingress-hostname: deployer.example.com
#Optional configuration
docker-subnet-override:
- pool-name: pool1
base: 172.17.0.0/16
size: 16
environments:
chn-smi-inodemgr-lab:
cluster: smi
datacenter: CABU-VC65
datastore: DatastoreSSD-229-150
nameservers:
- "172.30.131.10"
- "172.16.128.140"
nics:
- "VM Network"
ntp:
- 8.ntp.esl.cisco.com
- 2.ntp.esl.cisco.com
search-domains:
- cisco.com
server: "10.78.229.250"
username: administrator@CABU.VCENTER60
clusters:
"cicd-aio-229":
type: opshub
environment: "chn-smi-inodemgr-lab"
username: "inodemgruser"
gateway: "10.78.229.1"
private-key-file: "inodemgr.pem"
ipv6-mode: "true"
ipv6-gateway: "2001:0000:0000:0000:0000:0000:655:1"
#pod-subnet is an optional field if not given by default "192.168.0.0/16" will be assigned.
pod-subnet: "192.168.121.0/24"
# service-subnet is an optional field if not given by default "10.96.0.0/12" will be assigned.
service-subnet: "10.96.130.0/24"
# docker-bridge-subnet is an optional field if not given by default 172.17.0.0/16" will be assigned.
docker-bridge-subnet: ["172.20.0.0/16"]
nodes:
- host: 10.78.229.151
datastore: DatastoreSSD-229-151
datastore-folder: "ClusterDataStore"
addresses: ["10.78.229.229/24"]
addresses-v6: [ "2001:0000:0000:0000:0000:0000:afaf:e5e5/112"]
apps:
- inode-manager:
nodes:
- host: 10.78.229.151
nics:
- "VLAN 175"
control-plane:
interfaces:
-
addresses:
- 172.17.255.229/16
- "2002::afaf:ffe5/112"
routes:
-
dest:
- 192.168.174.0/24
nhop: "172.17.254.254"
-
dest:
- "2002::C0af:af00/120"
nhop: "2002::afaf:fefe"
apps:
- inode-manager
multinode-inode-manager-config.yaml
deployers:
smi-deployer-147:
address: 10.78.229.147/24
datastore: DatastoreSSD-229-151
environment: chn-smi-inodemgr-lab
gateway: "10.78.229.1"
host: "10.78.229.151"
private-key-file: inodemgr.pem
username: cloud-user
#ingress-hostname only supports valid FQDN
ingress-hostname: deployer.example.com
#Optional configuration
docker-subnet-override:
- pool-name: pool1
base: 172.17.0.0/16
size: 16
environments:
chn-smi-inodemgr-lab:
cluster: smi
datacenter: CABU-VC65
datastore: "DatastoreSSD-229-150 (1)"
nameservers:
- "172.30.131.10"
- "172.16.128.140"
nics:
- "VM Network"
ntp:
- 8.ntp.esl.cisco.com
- 2.ntp.esl.cisco.com
search-domains:
- cisco.com
server: "10.78.229.250"
username: administrator@CABU.VCENTER60
clusters:
"cicd-multi-node-211":
type: opshub
environment: "chn-smi-inodemgr-lab"
username: "inodemgruser"
gateway: "10.78.229.1"
primary-vip: "10.78.229.211/23"
vrouter-id: 78
private-key-file: "inodemgr.pem"
ipv6-mode: "true"
primary-vip-ipv6: "2001:0000:0000:0000:0000:0000:655:9/112"
ipv6-gateway: "2001:0000:0000:0000:0000:0000:655:1"
ingress-hostname: "inodemgr-chn08-dev01.cisco.com"
enable-http-redirect: "true"
#pod-subnet is an optional field if not given by default "192.168.0.0/16" will be assigned.
pod-subnet: "192.168.121.0/24"
# service-subnet is an optional field if not given by default "10.96.0.0/12" will be assigned.
service-subnet: "10.96.130.0/24"
# docker-bridge-subnet is an optional field if not given by default 172.17.0.0/16" will be assigned.
docker-bridge-subnet: ["172.20.0.0/16"]
nodes:
- host: 10.78.229.150
datastore: "DatastoreSSD-229-150 (1)"
addresses: ["10.78.229.217", "10.78.229.214", "10.78.229.224", "10.78.229.221"]
addresses-v6: [ "2001:0000:0000:0000:0000:0000:655:5/112", "2001:0000:0000:0000:0000:0000:655:6/112", "2001:0000:0000:0000:0000:0000:655:7/112", "2001:0000:0000:0000:0000:0000:655:8/112" ]
- host: 10.78.229.151
datastore: DatastoreSSD-229-151
addresses: ["10.78.229.213", "10.78.229.216", "10.78.229.219", "10.78.229.223"]
addresses-v6: [ "2001:0000:0000:0000:0000:0000:655:4/112", "2001:0000:0000:0000:0000:0000:655:9/112", "2001:0000:0000:0000:0000:0000:655:a/112", "2001:0000:0000:0000:0000:0000:655:b/112" ]
- host: 10.78.229.196
datastore: DatastoreSSD-229-196
addresses: ["10.78.229.222", "10.78.229.218", "10.78.229.215", "10.78.229.212"]
addresses-v6: [ "2001:0000:0000:0000:0000:0000:655:c/112", "2001:0000:0000:0000:0000:0000:655:d/112", "2001:0000:0000:0000:0000:0000:655:e/112", "2001:0000:0000:0000:0000:0000:655:f/112" ]
apps:
- inode-manager:
nodes:
- host: 10.78.229.150
nics:
- "VLAN 175"
ops:
interfaces:
-
addresses:
- 192.168.255.214/16
- "2002::afaf:ffd6/112"
vip: [ 192.168.255.211/16, "2002::afaf:ffd3/112" ]
vrouter-id: 78
routes:
-
dest:
- 192.168.175.0/24
apps:
- inode-manager
aio-inode-manager-standby-config.yaml
deployers:
smi-deployer-147:
address: 10.78.229.147/24
datastore: DatastoreSSD-229-151
datastore-folder: "ClusterDataStore"
environment: chn-smi-inodemgr-lab
gateway: "10.78.229.1"
host: "10.78.229.151"
private-key-file: inodemgr.pem
username: cloud-user
#ingress-hostname only supports valid FQDN
ingress-hostname: deployer.example.com
#Optional configuration
docker-subnet-override:
- pool-name: pool1
base: 172.17.0.0/16
size: 16
environments:
chn-smi-inodemgr-lab:
cluster: smi
datacenter: CABU-VC65
datastore: DatastoreSSD-229-150
nameservers:
- "172.30.131.10"
- "172.16.128.140"
nics:
- "VM Network"
ntp:
- 8.ntp.esl.cisco.com
- 2.ntp.esl.cisco.com
search-domains:
- cisco.com
server: "10.78.229.250"
username: administrator@CABU.VCENTER60
clusters:
"cicd-aio-229":
type: opshub
environment: "chn-smi-inodemgr-lab"
username: "inodemgruser"
gateway: "10.78.229.1"
private-key-file: "inodemgr.pem"
nodes:
- host: 10.78.229.151
datastore: DatastoreSSD-229-151
datastore-folder: "ClusterDataStore"
addresses: ["10.78.229.229/24"]
apps:
- inode-manager
multinode-inode-manager-standby-config.yaml
deployers:
smi-deployer-147:
address: 10.78.229.147/24
datastore: DatastoreSSD-229-151
environment: chn-smi-inodemgr-lab
gateway: "10.78.229.1"
host: "10.78.229.151"
private-key-file: inodemgr.pem
username: cloud-user
#ingress-hostname only supports valid FQDN
ingress-hostname: deployer.example.com
#Optional configuration
docker-subnet-override:
- pool-name: pool1
base: 172.17.0.0/16
size: 16
environments:
chn-smi-inodemgr-lab:
cluster: smi
datacenter: CABU-VC65
datastore: "DatastoreSSD-229-150 (1)"
nameservers:
- "172.30.131.10"
- "172.16.128.140"
nics:
- "VM Network"
ntp:
- 8.ntp.esl.cisco.com
- 2.ntp.esl.cisco.com
search-domains:
- cisco.com
server: "10.78.229.250"
username: administrator@CABU.VCENTER60
clusters:
"cicd-multi-node-211":
type: opshub
environment: "chn-smi-inodemgr-lab"
username: "inodemgruser"
gateway: "10.78.229.1"
primary-vip: "10.78.229.211/23"
vrouter-id: 78
private-key-file: "inodemgr.pem"
ingress-hostname: "inodemgr-chn08-dev01.cisco.com"
enable-http-redirect: "true"
pod-subnet: "192.168.120.0/24"
service-subnet: "10.96.120.0/24"
docker-bridge-subnet: ["172.17.0.0/16"]
nodes:
- host: 10.78.229.150
datastore: "DatastoreSSD-229-150 (1)"
addresses: ["10.78.229.217", "10.78.229.214", "10.78.229.224", "10.78.229.221"]
- host: 10.78.229.151
datastore: DatastoreSSD-229-151
addresses: ["10.78.229.213", "10.78.229.216", "10.78.229.219", "10.78.229.223"]
- host: 10.78.229.196
datastore: DatastoreSSD-229-196
addresses: ["10.78.229.222", "10.78.229.218", "10.78.229.215", "10.78.229.212"]
apps:
- inode-manager
Deploy the Cluster
Use the deploy script to deploy both the deployer and the cluster. Run the deploy command without any parameters to get the available options:
./deploy -c <config_file> [-v]
-c <config_file> : Configuration File, <Mandatory Argument>
-v : Config Validation Flag, [Optional]
-f : Day0: Force VM Redeploy Flag [Optional]
: Day1: Force iNode Manager Update Flag [Optional]
-u : Cluster Upgrade Flag [Optional]
-s : Skip Compare Flag [Optional]
-i <install_opt> : Cluster installation options: deploy, redeploy, or upgrade [Optional]
The deploy script takes a configuration file with the '-c' option.
The deploy script uses the -u flag to update the deployer. When this flag is present, the script processes all the deployers in the deployers section of the configuration YAML file and ignores the clusters in the clusters section.
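For example, to update only the deployers defined in config.yaml:
./deploy -c config.yaml -u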
For cluster installations, use one of the three options for the -i flag:
- deploy: This option is active when the -i <install_option> parameter is absent. In this mode, the deploy script first pings the cluster. If the cluster is not pingable, the script deploys it; otherwise, the script performs no operations on the cluster.
- redeploy: In this mode, the deploy script first uninstalls the cluster, if it is already present, and then redeploys the new cluster.
- upgrade: In this mode, the deploy script upgrades the cluster with the software in the package.
Caution: With the redeploy option, you lose all data in the original cluster.
For example, the following command installs the cluster using the configuration file config.yaml, assuming the cluster does not already exist:
$ ./deploy -c config.yaml
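Similarly, you can select the other installation modes explicitly with the -i option. For example:
$ ./deploy -c config.yaml -i redeploy    # uninstalls the existing cluster first; all data is lost
$ ./deploy -c config.yaml -i upgrade     # upgrades the existing cluster with the software in the package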
The deploy script does the following operations:
If you are running the deploy script for the first time, it prompts you to enter all the passwords required for installation:
- For the VMware vCenter environment: the vCenter password for the user specified in the environment configuration.
- For the deployer: the SSH password of the deployer's ops-center, for the user cloud-user.
- For an iNode Manager cluster: the SSH password for all VMs in the cluster, for the user in the cluster's configuration (inodemgruser is the default user), and the SSH passwords for the three ops-centers (iNode Manager, Operations Hub, and CEE), for the user admin.
Note: The deploy script prompts you twice to enter each password. The deploy script saves the passwords in the staging directory in encrypted form for future use.
- Passwords for the deployer, the cluster, and the ops-centers must be eight characters long. The passwords must have a minimum of one lowercase letter, one uppercase letter, one numeric character, and one special character.
- The deploy script generates an SSH key pair when the private-key-file line is missing for the deployer or the cluster in the configuration file. The generated private key files are in the .sec subdirectory under the staging directory, with <cluster-name>_auto.pem as the filename.
- The root user owns the generated private keys. When logging in over SSH with these private key files, make sure that you run the command with sudo (see the example after this list).
- If the deployer is not running, the deploy script installs the deployer.
- The deploy script checks whether the deployer is missing any of the product packages in the offline-images directory and uploads any missing packages to the deployer.
- The script also generates the configuration for each cluster and pushes it to the deployer.
- The deploy script triggers the deployer to perform the sync operation for the cluster. The sync operation applies the configuration to the cluster: if you have not set up the cluster, it installs the cluster; otherwise, the sync operation updates the cluster with the configuration.
- If the sync operation times out, the deploy script triggers the sync operation again. The script waits for the sync operation to complete, and then continues to monitor the cluster to ensure the deployment of all helm charts and the creation of all pods.
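For example, to log in to a cluster VM with an auto-generated key (placeholder names; inodemgruser is the default cluster user):
sudo ssh -i .sec/<cluster-name>_auto.pem inodemgruser@<cluster-node-ip>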
You can repeat the deploy script to deploy more than one cluster by providing the corresponding configuration files. Alternatively, you can run the command with the -v flag appended. The -v flag forces the deploy script to skip the sync operation and the remaining operations. Use this option to push the configuration of a cluster to the deployer without deploying or updating the cluster.
Sample Logs
The following example shows logs for the autodeployer.
[host]$ ./deploy -c examples/deployer-sample-config.yaml -v
Running autodeployer...
Day0 Configuration Detected
Validating config [environments]
Validating config [deployers]
Config Validated...
[vCenter:cabu-sdn-vc.cisco.com]$ Enter Password for cvideo.gen@cisco.com :
Re-Enter Password :
Create credentials for the deployer...inode-manager-deployer-1
Enter password for cloud-user@192.0.2.28 :
Re-Enter Password :
Gathering Product Images Info !!!
--- : Product Info : ---
cee : http://charts.192.0.2.28.nip.io/cee-2020-01-1-11
inode : http://charts.192.0.2.28.nip.io/inode-manager-3.1.0-release-2007142325
opshub : http://charts.192.0.2.28.nip.io/opshub-release-2007150030
--- : cnBR Images : ---
cluster-manager-docker-deployer : cluster-manager-docker-deployer:1.0.3-0079-01a50dd
autodeploy : autodeploy:0.1.0-0407-2e073f8
--- : vCenter Info : ---
atl-smi-inodemgr-lab : Cloud Video Datacenter, iNodeManager
--- : Deployer Info : ---
inode-manager-deployer-1 : IP -> 192.0.2.28/24, host -> 192.0.2.7
PING 192.0.2.28 (192.0.2.28) 56(84) bytes of data.
--- 192.0.2.28 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
2020-08-03 12:08:51.753 INFO deploy: Parsing config file: .gen/tmp7n19eqji.json
2020-08-03 12:08:51.842 INFO deploy: Created ansible inventory yaml file
2020-08-03 12:08:51.842 INFO deploy: Config Directory is /opt/deployer/work and vmdk file is /opt/deployer/work/cluster-deployer-airgap.vmdk:
2020-08-03 12:08:51.842 INFO deploy: Ansible inventory file:
/tmp/tmpsetosj02/output_inventory.yaml
2020-08-03 12:08:51.842 INFO deploy: Running ansible to deploy and update VM. See vsphere for progress: .gen/tmp7n19eqji.json