Cisco Crosswork Hierarchical Controller 9.0 Installation Guide

Updated: August 5, 2024

Bias-Free Language

The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.


Introduction

This document is an installation guide for Cisco Crosswork Hierarchical Controller with or without High Availability. In the High Availability configuration, Cisco Crosswork Hierarchical Controller implements intra-node resiliency and a three-node cluster (which includes a witness node).

The document explains:

        Cisco Crosswork Hierarchical Controller Prerequisites

        Install Cisco Crosswork Hierarchical Controller Platform

        Upgrade Cisco Crosswork Hierarchical Controller Platform

        Install Cisco Network Services Orchestrator Crosswork Hierarchical Controller Function Pack

Cisco Crosswork Hierarchical Controller Prerequisites

Cisco Crosswork Hierarchical Controller is released as a single VMware OVA file distribution. An OVA is a disk image deployed using vCenter on any ESXi host. The OVA packages together several components, including a file descriptor (OVF) and virtual disk files that contain a basic operating system and the Cisco Crosswork Hierarchical Controller installation files.

The OVA can be deployed using vCenter on ESXi hosts and supports both the Standalone (SA) and High Availability (HA) deployment models.

The three VMs for HA can run on a single ESXi host or across multiple ESXi hosts. When multiple ESXi hosts are used, a 10 Gbps communication channel is required between the hosts; the control plane network is associated with this same 10 Gbps interface.

Requirements

        VMware vCenter Server 7.0 Update 3

        VMware ESXi 7.0 Update 3

        High Availability in version 9.0 requires a P95 latency of no more than 15 ms between nodes (95% of requests between nodes must be served faster than this).

Note: The system was tested with version 7.0 Update 3 and is expected to function correctly with other 7.0 sub-versions as well. If you are using a sub-version other than 7.0 Update 3 and encounter any issues, contact your Cisco support representative.

Hardware

Primary, Secondary, or Standalone Nodes

This spec is for primary, secondary, or standalone instances of Crosswork Hierarchical Controller.

CPU: 10 cores
Memory: 96 GB
Multiple ESXi hosts (control plane): 10 Gbps between hosts
Storage: 500 GB SSD to 2 TB (depending on scale)
Note: This is without considering RAID configurations.
HW reservation: 100% for CPU and memory
NICs: 2
 

Witness Node

This spec is for the witness (or arbitrator) instance of Crosswork Hierarchical Controller.

CPU: 4 cores
Memory: 32 GB
Storage: 200 GB SSD
HW reservation: 100% for CPU and memory
NICs: 2

Network Bandwidth

100 Mbps between the Primary/Secondary and Arbitrator.

Client

The client machine requirements are:

        Windows PC or Mac

        GPU

        Web browser with GPU hardware acceleration support

        Recommended:

       Screen resolution of 1920x1080

       Google Chrome web browser version 75 or later

Note: A GPU is mandatory to get the full benefit of the network 3D map.

Communications Matrix

The following lists the default port assignments. The ports can be customized as necessary to meet your network requirements.

Inbound

TCP 22: SSH remote management
TCP 8443: HTTPS for UI access

Outbound

TCP 22: NETCONF to routers
TCP 389: LDAP if using Active Directory
TCP 636: LDAPS if using Active Directory
Customer specific: HTTP for access to an SDN controller
Customer specific: HTTPS for access to an SDN controller
TCP 3082, 3083, 2361, 6251: TL1 to optical devices

Control Plane Ports (internal network between cluster nodes, not exposed)

TCP 443: Kubernetes
TCP 6443: Kubernetes
TCP 10250: Kubernetes
TCP 2379: etcd
TCP 2380: etcd
UDP 8472: VXLAN
ICMP: Ping between nodes (optional)
TCP/UDP, customer specific: syslog

Storage

The storage volume required for Crosswork Hierarchical Controller production depends on the amount of storage needed for performance counters and for daily DB backups.

The performance monitoring storage is calculated based on the number of client ports and the amount of time the counters are stored. The ballpark figure is 700 MB for 1000 ports.

The detailed formula to calculate the storage is:

<uncompressed data> = <number of ports> * <samples per day> * <number of days> * <sample size (60 bytes)>

Storage = (<uncompressed data> * 0.1) + (<daily backup size> * <number of backup days> * <number of backup months>)

Taking the following assumptions into account (a worked example follows the list):

        Samples per day – the number of PM samples collected per day

        Sample size per port – 60 bytes

        Days – the number of days the PM data is stored

        Compression ratio – data is compressed in the DB at a ratio of ~10%

        Daily backup – ~60 MB per day

        Number of backup days – 14 days

        Number of backup months – default is 12 months
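As an illustration only (the port count, sampling interval, and PM retention below are assumed values, not recommendations), consider 1,000 ports sampled 96 times per day (15-minute intervals), 90 days of PM retention, and the default backup values above:

<uncompressed data> = 1000 * 96 * 90 * 60 bytes = 518,400,000 bytes (about 518 MB)

Storage = (518 MB * 0.1) + (60 MB * 14 * 12) = about 52 MB + 10,080 MB, or roughly 10 GB

In this example the daily backups dominate the storage requirement, which still fits comfortably within the 500 GB minimum.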

Scaling

The following are the certified scaling limits for Crosswork Hierarchical Controller.

Total number of devices: 10,000
Total number of L2 links: 39,127
Total number of L3 links: 48,522
Total physical interfaces: 230,014
Total logical interfaces: 320,061
Total LAG interfaces: 169,898
Total L2 VPN services: 51,038
Total L3 VPN services: 49,931

Install Crosswork Hierarchical Controller Platform

High Availability Architecture

For more information on High Availability, see the Crosswork Hierarchical Controller Administration Guide.

Figure 1. Cisco Crosswork Hierarchical Controller Architecture

Control Plane and Northbound Networks Installation Requirements

The following are the prerequisites for Cisco Crosswork Hierarchical Controller installation.

Before installing Cisco Crosswork Hierarchical Controller:

        Install the ESXi host on servers with vSphere to support creating VMs.

        Create two networks, one for the control plane and the other for the northbound network:

       The control plane network is used for communication between the deployed VMs.

       The northbound network is used for communication between the client and the cluster.

To create the control plane and northbound networks:

1.     From the vSphere client, select the Datacenter where you want to add the ESXi host.

2.     After adding the ESXi host, create the control plane and northbound networks before deploying the standalone or High Availability configuration (a hypothetical addressing example follows this list):

       High Availability has four IPs (v4), for the primary, secondary, and witness nodes and a VIP. The VIP is the IP that exposes the active node to the user.

       Standalone has two IPs (v4).
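For illustration only, a hypothetical HA addressing plan might look like the following. All addresses are placeholders to be replaced by values from your network team, and this sketch assumes the four IPv4 addresses above sit on the northbound network, with separate private addresses per node on the control plane network (as configured later in the Customize template step):

       Northbound network: primary 192.0.2.11, secondary 192.0.2.12, witness 192.0.2.13, VIP 192.0.2.10

       Control plane network: primary 10.10.10.11, secondary 10.10.10.12, witness 10.10.10.13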

Create SSH Keys

SSH access to Crosswork Hierarchical Controller requires a public/private SSH key pair to be generated before OVA deployment:

        The public SSH key is passed as a parameter during OVA deployment.

        The private key is required to execute the remote SSH login.

        The SSH key must use the Ed25519 algorithm.

To generate the keys:

1.     Execute the ssh-keygen command:

# ssh-keygen -t ed25519 -f <PATH>/<keyname>

This generates the public and private keys:

       <keyname>.pub: Public key

       <keyname>: Private key

2.     Remove the comment from the public key before using it during the OVA deployment.
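For example, assuming the key pair generated above, one way to strip the trailing comment (the third whitespace-separated field of an OpenSSH public key) is:

# cut -d' ' -f1-2 <PATH>/<keyname>.pub

Use the resulting two fields (key type and key material) as the SSH Public Key value during deployment.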

Install Standalone Crosswork Hierarchical Controller

When you deploy the OVA template, it installs the Crosswork Hierarchical Controller platform and the various Crosswork Hierarchical Controller applications.

Note: It is suggested that you keep track of all settings in a spreadsheet.

To install Crosswork Hierarchical Controller:

1.     Right-click on the ESXi host in the vSphere Client screen, and then click Deploy OVF Template.

2.     On the Select an OVF template page, specify the location of the source OVA template:

       URL: A URL to an OVA template located online.

       Local file: A location with the OVA template.

3.     Click Next.

4.     On the Select a name and folder page, specify a unique name for the VM Instance. The virtual machine name must be a valid DNS name:

       contain no more than 253 characters

       contain only lowercase alphanumeric characters, '-' or '.'

       start with an alphanumeric character

       end with an alphanumeric character

5.     From the list of options select the location of the VM to be used.

6.     Click Next.

7.     On the Select a compute resource page, select the destination compute resource on which you want to deploy the VM.
Note: When you select the compute resource, a compatibility check runs; wait until it completes successfully.

8.     Click Next.

9.     On the Review details page, verify the template details.

10.  Click Next.

11.  On the Select storage page, set the Select virtual disk format option based on your SSD storage.

12.  Leave the VM Storage Policy set to Datastore Default.

13.  Select the storage.

14.  Click Next.

15.  In the Select networks page, set the destination networks:

       Control Plane: The control plane network. This can be a dummy port group.

       Northbound: The VM network used for the VIP address for RESTCONF or UI access.

16.  Click Next.


17.  In the Customize template page, set the values as follows:

General

Instance Hostname: The instance hostname. This is the same name entered on the Select a name and folder page and must be a valid DNS name.

SSH Public Key: The SSH public key generated by the customer’s admin, for example with:

ssh-keygen -t ed25519 -f ~/.ssh/...

Node Config

Node Name: The standalone node name. This must be a valid DNS name:

·       contain no more than 253 characters

·       contain only lowercase alphanumeric characters, '-' or '.'

·       start with an alphanumeric character

·       end with an alphanumeric character

The name must exist in the zone config, that is, it must match one of the zone assignments in the Initiator Config.

Initiator Node: This is checked by default. Leave as is. The standalone node is the initiator.

Data Volume Size (GB): The data storage limit set for the host. See Requirements. Must be at least 500.

NTP Pools (comma separated): (Optional) A comma-separated list of NTP pools.

NTP Servers (comma separated): (Optional) A comma-separated list of NTP servers.

Cluster Join Token: Filled in automatically. Leave as is.

Control Plane Node Count: Select 1.

Control Plane IP: The private IP for the node. For standalone, this must be a valid local IP in the customer’s hypervisor.

Initiator IP: For standalone, use the same IP as the Control Plane IP.

Northbound Interface

Protocol: Select Static IP or DHCP from the menu.

IP(ip[/subnet]) - if not using DHCP: The public IP and subnet mask (in CIDR notation, that is, X.X.X.X/nn) for the instance northbound network if not using DHCP. Note: The subnet mask is mandatory.

Gateway - if not using DHCP: The gateway IP for the instance northbound network if not using DHCP.

DNS: The DNS server IP.

Cluster Config (complete the entry for the standalone node)

Northbound Virtual IP: The IP of the standalone instance used for RESTCONF or UI access. Required because the standalone node is the initiator. This is the same as the IP(ip[/subnet]) - if not using DHCP value.

Zone A Node Name: The standalone node name. This is the same as the Node Name in the Node Config section.

Zone B Node Name: Leave as is.

Zone C Node Name (Arbitrator): Leave as is.

18.  Click Next.

19.  In the Review the details page, check the selections.

20.  Copy and save the properties as a backup.

21.  Click Finish.

22.  Right-click on the VM in the vSphere Client screen and select Edit Settings.


23.  For CPU, select 10, and for Memory, select 96 GB.

24.  Edit the CPU Resources and set the Reservation to 100%.

25.  Edit the Memory Resources and set the Reservation to 100%.

26.  Click OK.

27.  Power on the VM. It may take a few minutes to get SSH access.


28.  Connect to the VM using the private key associated with the public key provided in the Customize template step. Log in to the VM:

# ssh -i <private-key_file> nxf@<hco_management_ip>

       If you are prompted for a password, there is probably a problem with the key.

       If the command times out, check the IP settings.

29.  Run the following command to check the system status:

sedo system status

30.  Change the default password:

sedo security user set --access role/admin admin

sedo security user set --password

31.  You can use sedo to configure the local users. See the Crosswork Hierarchical Controller Administration Guide for more details.

32.  Browse to the Crosswork Hierarchical Controller application using the standalone IP address.
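For example, assuming the default UI port from the Communications Matrix (TCP 8443) and a placeholder standalone address of 192.0.2.10, the UI would be reached at:

https://192.0.2.10:8443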

Install HA Crosswork Hierarchical Controller

When you deploy the OVA template, it installs the Crosswork Hierarchical Controller platform and the various Crosswork Hierarchical Controller applications.

The three VMs for HA can run on a single ESXi host or across multiple ESXi hosts. When multiple ESXi hosts are used, a 10 Gbps communication channel is required between the hosts; the control plane network is associated with this same 10 Gbps interface.

To deploy Crosswork Hierarchical Controller in an HA configuration, you deploy three VMs: the primary node, the secondary node, and the witness (or arbitrator) node. The first node is configured as the initiator (which installs the cluster).

Important: Create all three VMs before turning them ON.

Note: It is suggested that you keep track of all settings in a spreadsheet.

To install Crosswork Hierarchical Controller:

1.     Right-click on the ESXi host in the vSphere Client screen, and then click Deploy OVF Template.

2.     On the Select an OVF template page, specify the location of the source OVA template:

       URL: A URL to an OVA template located online.

       Local file: A location with the OVA template.

3.     Click Next.

4.     On the Select a name and folder page, specify a unique name for the VM Instance. The virtual machine name must be a valid DNS name:

       contain no more than 253 characters

       contain only lowercase alphanumeric characters, '-' or '.'

       start with an alphanumeric character

       end with an alphanumeric character

5.     From the list of options select the location of the VM to be used.

6.     Click Next.

7.     On the Select a compute resource page, select the destination compute resource on which you want to deploy the VM.
Note: When you select the compute resource, a compatibility check runs; wait until it completes successfully.

8.     Click Next.

9.     On the Review details page, verify the template details and click Next.

10.  On the Select storage page, set the Select virtual disk format option based on your SSD storage.

11.  Leave the VM Storage Policy set to Datastore Default.

12.  Select the storage.

13.  Click Next.

14.  In the Select networks page, set the destination networks:

       Control Plane: The control plane network used for communications between the nodes in the HA cluster.

       Northbound: The VM network used for the VIP address for RESTCONF or UI access.

15.  Click Next.


16.  In the Customize template page, set the values as follows:

General

Instance Hostname: The instance hostname. This is the same name entered on the Select a name and folder page and must be a valid DNS name.

SSH Public Key: The SSH public key generated by the customer’s admin, for example with:

ssh-keygen -t ed25519 -f ~/.ssh/...

Node Config

Node Name: The node name. This must be a valid DNS name:

·       contain no more than 253 characters

·       contain only lowercase alphanumeric characters, '-' or '.'

·       start with an alphanumeric character

·       end with an alphanumeric character

The name must exist in the zone config, that is, it must match one of the zone assignments in the Initiator Config.

Initiator Node: This is checked by default. Leave checked for the primary node. Uncheck for the secondary and witness (Arbitrator) nodes.

Data Volume Size (GB): The data storage limit set for the host. See Requirements. Must be at least 500 for the primary and secondary nodes, and 200 for the witness node.

NTP Pools (comma separated): (Optional) A comma-separated list of NTP pools.

NTP Servers (comma separated): (Optional) A comma-separated list of NTP servers.

Cluster Join Token: Filled in automatically. Leave as is.

Control Plane Node Count: Select 3.

Control Plane IP: The private IP for the node.

Initiator IP: The IP of the initiator node on the control plane. The initiator installs the cluster; for HA, this is the primary node. Note: When installing the primary node, this is the same as the Control Plane IP.

Northbound Interface

Protocol: Select Static IP or DHCP from the menu.

IP(ip[/subnet]) - if not using DHCP: The public IP and subnet mask (in CIDR notation, that is, X.X.X.X/nn) for the instance northbound network if not using DHCP. Note: The subnet mask is mandatory.

Gateway - if not using DHCP: The gateway IP for the instance northbound network if not using DHCP.

DNS: The DNS server IP.

Cluster Config (complete all entries in this section for all three nodes)

Northbound Virtual IP: The external virtual IP of the cluster used for RESTCONF or UI access. Required if the node is the initiator, that is, for the primary node. Can be left blank for the secondary and witness (Arbitrator) nodes.

Zone A Node Name: The primary node name. Note: When installing the primary node, this is the same as the Node Name in the Node Config section.

Zone B Node Name: The secondary node name. Note: When installing the secondary node, this is the same as the Node Name in the Node Config section.

Zone C Node Name (Arbitrator): The witness (arbitrator) node name. Note: When installing the witness node, this is the same as the Node Name in the Node Config section.

17.  Click Next.

18.  In the Review the details page, check the selections.

19.  Copy and save the properties as a backup.

20.  Click Finish.

21.  Right-click on the VM in the vSphere Client screen and select Edit Settings.


22.  For CPU and Memory select:

       Primary and Secondary Nodes: 96 GB RAM and 10 vCPUs

       Witness Node: 32 GB RAM and 4 vCPUs

23.  Edit the CPU Resources and set the Reservation to 100%.

24.  Edit the Memory Resources and set the Reservation to 100%.

25.  Click OK.

26.  Repeat the procedure above for the secondary and witness (arbitrator) nodes.

27.  Power on all three VMs. It may take a few minutes to get SSH access.


28.  Connect using the Virtual IP and the private key associated with the public key provided in the Customize template step. Log in to the VM:

# ssh -i <private-key_file> nxf@<virtual_ip>

       If you are prompted for a password, there is probably a problem with the key.

       If the command times out, check the IP settings used during deployment.

29.  Run the following command to check the system status:

sedo system status


30.  Change the default password:

sedo security user set --access role/admin admin

sedo security user set --password

31.  You can use sedo to configure the local users. See the Crosswork Hierarchical Controller Administration Guide for more details.

32.  Browse to the Crosswork Hierarchical Controller application using the HA VIP address.

View Installed Crosswork Hierarchical Controller Applications

To view the installed Crosswork Hierarchical Controller applications:

1.     After the installation is complete, ssh to the server.

2.     Run the following command to see which applications are installed:

sedo hco apps list


The output displays the installed applications with their name and version.

Add Network Adapters and Discover Network Devices

For instructions on how to add network adapters and discover network devices, refer to the Cisco Crosswork Hierarchical Controller Administration Guide.

Create Users via cloud-init 

This is an advanced procedure and is not for use with the OVA deployment using vCenter. For more information, contact Cisco Support.

When the NxF cluster is installed, a default ‘admin’ user is created with a pre-defined permission:

        permission/admin

The password is randomly generated during the Crosswork Hierarchical Controller VM first boot. You connect via SSH with public/private keys, and the private key is used to execute the SSH login.

Optionally, you can create additional users while installing the instance by adding users in the NxFOS Heat Template yaml file.

To create users via cloud-init:

1.     In the yaml file, under the node > properties > user_data section, add the entries to the str_replace > template > nxf > initiator > security > localUsers.

2.     Set the values as shown in this example:

- username: saleem

  displayName: support

  description: support

  locked: true

  mustChangePassword: false

  expiresInDays: 0

  access:

  - permission/admin

username: The user name.

displayName: The display name.

description: A description of the user.

locked: Whether or not the user is locked.

mustChangePassword: Whether or not the user must change the password on first login. The initial password is randomly generated during the Crosswork Hierarchical Controller VM first boot.

expiresInDays: The number of days before the password expires. 0 means that the password does not expire.

access: The user permissions, for example:

access:

- permission/admin
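As a minimal sketch only, based on the template path described in step 1 (the surrounding keys in your NxFOS Heat Template may differ), the user entry sits under the localUsers list roughly as follows:

nxf:
  initiator:
    security:
      localUsers:
      - username: saleem
        displayName: support
        description: support
        locked: true
        mustChangePassword: false
        expiresInDays: 0
        access:
        - permission/admin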

 

 

Upgrade Cisco Crosswork Hierarchical Controller

This topic describes how to upgrade Crosswork Hierarchical Controller.

Note: Upgrading from Crosswork Hierarchical Controller version 7.1 to version 9.0 is a two-step process:

1.     Upgrade from version 7.1 to version 8.0. Refer to the Crosswork Hierarchical Controller Installation Guide version 8.0.

2.     Upgrade from version 8.0 to version 9.0.

Upgrade Cisco Crosswork Hierarchical Controller 8.0 to 9.0

Upgrading Crosswork Hierarchical Controller from version 8.0 to version 9.0 requires you to copy and upload the system pack to one of the nodes, pull it to the other instances, and then apply the upgrade on all nodes.

Note: Also download the adapter service packs. These are required after the upgrade, before you re-enable the adapters. The installation command MUST use the adapter names that are in use prior to upgrading, so record the names that appear in Device Manager.

To upgrade Crosswork Hierarchical Controller 8.0 to 9.0:

1.     Make a full backup of the system.

2.     Disable all the adapters. For each adapter:

a.     In the applications bar in Crosswork Hierarchical Controller, select Device Manager > Adapters.

b.     Select the required adapter in the Adapters list on the left.

c.     Select the General tab.

d.     Deselect the Enabled checkbox.

e.     Click Save.

3.     Copy the system pack provided to one of the instances (e.g. node1).

4.     Upload the system pack (from the node it was copied to, e.g. node1):

sudo sedo system upgrade upload <system-pack-name>

5.     List the available upgrades:

sudo sedo system upgrade list

6.     Pull the system pack on all other instances (there is no need to pull it to the instance on which it was uploaded):

sudo sedo system upgrade pull <system-pack-name>

7.     Apply the upgrade (on all nodes):

sudo sedo --kubeconfig /etc/kubernetes/admin.conf system upgrade apply

Note: Wait for apply to be completed on all nodes before proceeding to the next step.

8.     Reboot to complete (all nodes):

sudo reboot 

9.     Check the installed versions:

sedo version

sedo hco version

sedo nso version

10.  Download the adapter service packs.

11.  Install the adapter service packs. The installation command MUST use the name that was in use prior to upgrading. If this is not the default adapter name (that is, if the DYNAMIC_APP_GUID parameter was used in the original installation to modify the name), install the new service pack with DYNAMIC_APP_GUID=[adapter name as it was displayed in Device Manager on v8].

12.  Wait until the adapter pods are re-created using the newly installed service pack, and then validate that the adapter pods are restarted:

sedo system status

13.  Re-enable the adapters in Device Manager.

Install a Cisco Network Services Orchestrator Crosswork Hierarchical Controller Function Pack

NSO Engine Embedded Inside Crosswork Hierarchical Controller

NSO runs as a Crosswork Hierarchical Controller micro-service, alongside the Crosswork Hierarchical Controller applications and adapters.

This exposes the NSO NBI from Crosswork Hierarchical Controller and the NSO UI as a Crosswork Hierarchical Controller application (mostly used for configuration of Function Packs/NEDs).

Note: Crosswork Hierarchical Controller HA and embedded NSO integrate seamlessly. The NSO database exists on both the Crosswork Hierarchical Controller Active and Standby nodes, and the database is synchronized continuously. If the Crosswork Hierarchical Controller Active node fails, and the Standby node takes over and becomes the Active node, NSO is updated automatically and switches nodes too.

Figure 2. Network Services Orchestrator (NSO)

The Crosswork Hierarchical Controller Function Pack integrates Cisco NSO with a controller to deploy services on the controller. This integration is with either a Nokia Network Services Platform (NSP) controller or a Cisco Crosswork Network Controller (CNC). The NEDs are installed as part of the Function Pack installation.

For full details on installing and using the Network Services Orchestrator (NSO) Crosswork Hierarchical Controller Function Pack, see the:

        Cisco NSO Crosswork Hierarchical Controller - Function Pack Installation Guide

        Cisco NSO Crosswork Hierarchical Controller - Function Pack User Guide.

For full details on installing and using the Cisco NSO Routed Optical Networking Core Function Pack, see the:

        Cisco NSO Routed Optical Networking Core Function Pack Installation Guide

        Cisco NSO Routed Optical Networking Core Function Pack User Guide

        Cisco RON Solution Guide 

Install NSO Function Pack in Crosswork Hierarchical Controller Embedded Instance

The embedded NSO instance is a fully functional standalone container installation of NSO. The installation procedure is the same as the standard installation with one difference: the file system of NSO is not readily available on the host server.

To load the new function pack, the administrator must copy the function pack files onto the NSO pod, and then log into the pod shell and place the files in the correct directories. Once the files are on the NSO pod, follow the instructions in the Function Pack Installation Guide.

To install NSO Function Pack in Crosswork Hierarchical Controller Embedded Instance:

1.     Connect to the Crosswork Hierarchical Controller host server via SSH.

2.     Download the NSO function pack.

3.     Copy the NSO function pack into the NSO pod:

kubectl cp [function-pack-file] <zone-a/zone-b>/nso-manager-srv-0:/usr/app

4.     Log into the pod shell:

sedo shell <zone-a/zone-b>/nso-manager-srv

cd /usr/app/nso-temp

5.     Continue with function pack extraction and installation as specified in the Function Pack Installation Guide.

Considerations for a High Availability (HA) Deployment

HA in NSO must be disabled when installing or updating function packs.

1.     On both the active and standby nodes, in the NSO CLI execute:

admin@ncs> request high-availability disable

2.     On both the active and standby nodes, install the function pack.

3.     Restart the NSO pods to reactivate HA protection:

sudo kubectl --kubeconfig /etc/kubernetes/admin.conf -n zone-a scale statefulset nso-manager-srv --replicas=0

sudo kubectl --kubeconfig /etc/kubernetes/admin.conf -n zone-b scale statefulset nso-manager-srv --replicas=0

sudo kubectl --kubeconfig /etc/kubernetes/admin.conf -n zone-a scale statefulset nso-manager-srv --replicas=1

sudo kubectl --kubeconfig /etc/kubernetes/admin.conf -n zone-b scale statefulset nso-manager-srv --replicas=1

Example of How to Install the RON Function Pack

This describes an example of how to install a RON function pack on the NSO pod.

For the complete and most up-to-date procedures, refer to the related Function Pack Installation Guide.

1.     Copy the function pack file into the pod:

kubectl cp nso-6.1-ron-2.1.1.tar.gz zone-a/nso-manager-srv-0:/usr/app

2.     Move into the NSO pod:

sedo shell zone-a/nso-manager-srv

cd /usr/app/nso-temp

3.     Untar the function pack tar.gz file:

tar xvzf nso-6.1-ron-2.1.1.tar.gz

cd nso-6.1-ron-2.1.1/

4.     Copy the function pack packages to the rundir:

cp ron/core-fp-packages/*.tar.gz $NCS_RUN_DIR/packages/

5.     Start the NSO CLI from the packages directory to load the packages:

cd $NCS_RUN_DIR/packages/

ncs_cli -u admin

6.     Load the packages:

request packages reload

7.     Verify that the function pack has successfully loaded:

show packages package package-version | select build-info ncs version | select build-info file | select build-info package sha1 | select oper-status error-info | select oper-status up | tab

8.     Set SSH algorithms public-key:

configure

set devices global-settings ssh-algorithms public-key [ ssh-ed25519 ecdsa-sha2-nistp256 ecdsa-sha2-nistp384 ecdsa-sha2-nistp521 rsa-sha2-512 rsa-sha2-256 ssh-rsa ]

commit

9.     Start the NSO CLI from the bootstrap-data directory to load and merge the XML files:

cd /nso/run/packages/nso-6.1-ron-2.1.1/ron/bootstrap-data

ncs_cli -u admin

10.  Load bootstrap data according to the function pack installation guide:

configure

unhide debug

unhide ron

load merge commit-queue-settings.xml

commit

...

<repeat for all files in installation guide>

...

load merge RON-status-codes.xml

commit

Add Devices

The device-type and ned-id depend on the actual device you want to connect, as well as the NED version installed on NSO. Update the commands below accordingly.

To add a device:

1.     Add credentials:

set devices authgroups group <credential_name> default-map remote-name <username> remote-password <password>

commit

2.     Add device:

set devices device <device_name> address <IP> authgroup <device_authgroup_name> device-type cli ned-id <cisco-iosxr-cli-7.49>

set devices device <device_name> state admin-state unlocked

commit

request devices device <device_name> ssh fetch-host-key

request devices device <device_name> connect

request devices device <device_name> sync-from

request devices device <device_name> check-sync
