NSO Orchestration for 4G CUPS

Feature Description

The Cisco Network Services Orchestrator (NSO) based VNF orchestration enables you to manage the lifecycle of newly created Virtual Network Function (VNF) devices such as CP, UP, and RCM.

The Cisco NSO Orchestration for 4G CUPS solution provides the following functions:

  • Instantiation via NSO CLI, Web-Interface, or NSO RESTCONF API

  • Onboarding of VNF devices such as CP, UP, and RCM upon successful instantiation

  • Pushing of Day-0.5 and Day-1 CUPS configurations after successful instantiation

  • Decommissioning of VNF devices

Use Cases

The NSO orchestration solution caters to the following use cases:

  1. Instantiation of new CP, UP, and RCM

    Instantiating new 4G-based VNFs (CPs, UPs, or RCMs) for CUPS. CP can be a Virtualized Packet Core-Single Instance (VPC-SI) or Virtualized Packet Core-Distributed Instance (VPC-DI), but UP can only be a VPC-SI.

    Users are notified if there are any failures.

  2. Termination of CP, UP, and RCM

    Terminating 4G-based VNFs (CPs, UPs, or RCMs) for CUPS.

    Users are notified if there are any failures.

  3. Updating current status on the VNF dashboard

    Providing the current status on the dashboard of VNFs.

  4. Configuration of logical groups for CPs, UPs, and RCMs

    Configuring a device group in NSO to group the CPs, UPs, and RCMs, and adding the corresponding VNFs to that device group. A minimal device-group sketch is shown below.
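
    The following is a hedged sketch of such a device group, assuming hypothetical device names cp-1, up-1, and rcm-1 that are already onboarded in the NSO device tree (the group name is illustrative only):

      ubuntu@ncs# config
      ubuntu@ncs(config)# devices device-group cups-lab-group device-name [ cp-1 up-1 rcm-1 ]
      ubuntu@ncs(config-device-group-cups-lab-group)# commit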

How it Works

Architecture

The Cisco NSO orchestration engine software modules handle the Network Functions Virtualization Orchestrator (NFVO) functions. The NFV solution follows the ETSI NFV Management and Orchestration (MANO) model, as shown in the following figure.

Figure 1. NFV Solution Architecture

The following diagram illustrates, at a high-level, the components and frameworks involved in the solution.

Figure 2. NFV Solution Components

Components

The following are some of the important components of NSO:

  • Cisco NFVO Functional Pack:

    Cisco NFVO Functional Pack contains the YANG models according to the MANO specification (SOL006).

    Cisco NFVO Functional Pack contains models for cisco-etsi-nfvo, which implements the instantiation logic of MANO descriptors on VNF Managers (VNFM) and OpenStack. Virtual Network Function (VNF) and Network Service (NS) are the main services in this package. Northbound users interact with these services to start VNFs or network services.

    It also includes models for cisco-etsi-nfvo-ro, which contains the Resource Orchestration (RO) functionality. Resource Orchestration manages the allocation of physical resources in the Virtualized Infrastructure Managers (VIMs). These physical resources are used by a VNF or an NS.

  • StarOS NED for NSO:

    StarOS-based Network Element Driver (NED) interfaces with the Cisco 4G CUPS VNFs for configuration push.

  • RCM NED for NSO:

    RCM-based NETCONF NED is used to establish communication between NSO and RCM devices.

  • Cisco ESC SOL003 NED:

    This NED is used for ETSI SOL003-compliant devices. The Elastic Services Controller (ESC) is added to NSO as a SOL003-compliant device.

  • NFV Apps Mobility Package:

    This is a custom package that provides VNF lifecycle management and VNF dashboard updates.

Minimum Platform and Software Requirements

The following are the minimum platform and software requirements to support NSO Orchestration:

  • Supported VIM: OpenStack

  • Supported VNFM: Cisco ESC

  • Supported Orchestrator: NSO

  • Network Elements:

    • RCM

    • VPC-SI (UP/CP)

    • VPC-DI (CP)

Table 1. Software Versions
Software                    Minimum Version
Red Hat OpenStack           13 (Queens)
Cisco ESC                   5.5.0.86
Cisco NSO                   6.1.6.1
OpenStack NED               4.2.30
ESC NED                     5.10.0.97
StarOS NSO NED              5.52.4
Cisco NFVO FP               4.7.3
Mobility FP                 3.5
NSO Resource Management     3.5.2
Cisco NSO HCC               6.0.1

Note: VMware or OSP 16 is not supported or validated.

This feature supports the following ETSI MANO specifications:

Table 2. ETSI MANO Specifications
Specification Supported Version Description
SOL001 v2.5.1 Defines the format and structure for the VNF Descriptor
SOL003 v2.4.1 Defines all interactions over the Or-Vnfm reference point

Network and Hardware Requirements

Network Requirements:

The following table lists the NSO and ESC network requirements:

Table 3. NSO and ESC Network Requirements
Application          Management IPs    Orchestration IPs    Connection Between HA Pair
NSO (2 VMs + VIP)    3                 3                    L2 connection of 100 Mbps with latency less than 30 ms
ESC (2 VMs + VIP)    3                 3                    L2 connection of 100 Mbps with latency less than 30 ms

Hardware Requirements

The following table lists the specifications for the NSO and ESC virtual machines to support a maximum of 250 VNFs.

Table 4. NSO and ESC VM Specifications
Application Number of VMs VM CPU Cores VM RAM VM Storage VM Connectivity
NSO 2 8 16 GB RAM baseline + 10 MB RAM for every StarOS device to be supported 100 GB disk (preferably SSD) One 10 Gbps network link
ESC 2 4 16 GB 100 GB

Licensing

The NSO Orchestration for 4G CUPS is a licensed Cisco feature. Contact your Cisco Account representative for detailed information on specific licensing requirements.

Call Flows

This section describes the key call flows for the 4G CUPS orchestration functionality.

VNF Onboarding

This section describes the VNF Onboarding flow.

Figure 3. VNF Onboarding
Table 5. Call Flow Description
Step    Description
1       The network operator uses the NSO CLI to instantiate a VNF (CP, UP, or RCM). The request includes the VIM ID that hosts the VNF, and the ESC.
2       NSO validates the data provided by the user against OpenStack.
3       NSO sends a SOL003 request to the ESC to instantiate the VNF.
4       ESC sends a Grant Request to NSO.
5       NSO sends a resource grant message to the ESC with the VIM InstanceId.
6       ESC uses the OpenStack API to instantiate the VNF.
7       OpenStack brings up the VNF.
8       ESC queries OpenStack for the VNF status.
9       OpenStack replies with a VNF-Up message.
10      ESC notifies NSO about the VNF instantiation.
11      NSO pushes the Day-1 configuration onto the VNF.
12      NSO notifies the operator that the VNF provisioning is complete.

P2P Module Installation

The mobility function pack supports installation of a P2P module as part of VNF deployment. The P2P module is installed after the device is onboarded. The P2P module file must be uploaded to NSO prior to the VNF deployment. The configurable parameters indicate the file location and whether P2P installation is required.

Once the P2P installation is completed, the newly instantiated VNF will bear a P2P default priority of 99 for MFP 3.4.2 and later versions. Prior to MFP 3.4.2, the P2P default priority starts with 10. To upgrade the P2P priority using the "mobility-library" action command, refer to the procedure in Appendix C.
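
The following is a hedged sketch of setting the related configurable parameters before deployment, using the p2p-required and p2p-soFile-path parameters described later in this chapter (the module file path is the example value used there):

    ubuntu@ncs# config
    ubuntu@ncs(config)# configurable-parameters p2p-required true
    ubuntu@ncs(config)# configurable-parameters p2p-soFile-path /var/opt/ncs/patch_libp2p-2.64.1418.so.tgz
    ubuntu@ncs(config)# commit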

VNF Termination

This section describes the VNF Termination flow.

Figure 4. VNF Termination Flow
Table 6. Call Flow Description
Step    Description
1       The operator uses the NSO CLI to terminate a VNF (CP, UP, or RCM). The request includes the VNF ID, the VIM ID that hosts the VNF, and the ESC.
2       NSO sends a SOL003 request to the ESC to terminate the VNF.
3       ESC uses the OpenStack API to terminate the VNF.
4       OpenStack terminates the VNF.
5       ESC queries OpenStack for the VNF status.
6       OpenStack replies with a VNF Destroyed message.
7       ESC notifies NSO about the VNF termination.
8       NSO notifies the operator that the VNF termination is complete.

Recovery

Auto-healing is currently not supported.

To recover from a fault state to the previous state, perform one of the following steps:

  • Cancel or terminate the VNF instantiation. The system returns to its original state.

  • Cancel or re-create the VNF termination process. The system returns to its original state.

A minimal cancellation sketch follows this list.
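
The following is a hedged sketch of the cancellation step, assuming the usual NSO service behavior in which removing the nfv-vnf entry tears the instance down (the instance name test026 is taken from the instantiation example later in this chapter):

    ubuntu@ncs# show vnf-status instances test026
    ubuntu@ncs# config
    ubuntu@ncs(config)# no nfv-vnf VPC-SI test026
    ubuntu@ncs(config)# commit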

Limitation

The NSO Orchestration for 4G CUPS feature has the following limitations in this release:

  • A production NSO instance can run only on popular Linux distributions (for example, Red Hat, Cisco Linux, Ubuntu, CentOS, and so on).

  • A VNF deployment may fail if the NSO or ESC instance handling the deployment goes down. This applies to both HA and standalone ESC/NSO deployments. Operator intervention is required depending on the exact nature of the failure. In the case of a deployment followed by an automated configuration push, the deployment may succeed but the subsequent configuration push may fail, depending on the timing of the NSO failure.

Installing NSO Packages

The NSO Orchestration solution uses a collection of NEDs and other NSO packages. The following is a detailed list of the various packages and their roles. For installation instructions for these packages, see the "Packages" chapter in the NSO Administration Guide for the appropriate NSO version.
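
As a hedged illustration of the general installation flow described in that guide, the following sketch copies a NED archive into the standard packages directory and reloads packages (the directory and file name follow the examples used later in this chapter; adjust them to your installation):

    ubuntu@test-nso:~$ sudo cp ncs-6.1.6-cisco-staros-5.52.4.tar.gz /var/opt/ncs/packages/
    ubuntu@test-nso:~$ ncs_cli -C
    ubuntu@ncs# packages reload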

  1. NSO NED Packages

    Most NSO NED packages are published for independent download. Contact your Cisco representative for details on how to download them.

    ncs-6.1-rcm-nc.v21.28.mx_20240415-072244Z.tar.gz—RCM NETCONF-based NED for RCM device communication from NSO

    ncs-6.1.6-cisco-staros-5.52.4.tar.gz—CLI-based NED for StarOS device (SI or DI) communication from NSO

    ncs-6.1.1-etsi-sol003-1.13.18.tar.gz—ETSI SOL003 based NED for ESC communication from NSO

    ncs-6.1-openstack-cos-4.2.30.tar.gz—Openstack NED for Openstack communication from NSO

    ncs-6.1.2.1-cisco-etsi-nfvo-4.7.3.tar.gz—Cisco ETSI NFVO function pack that provides the NFVO functionality in NSO

    ncs-6.1.2.1-esc-5.10.0.97.tar.gz—NETCONF-based NED for ESC communication from NSO

  2. NSO Custom Packages

    These are custom-built packages for Mobility VNF orchestration. NSO custom packages are bundled in the mobility function pack tar archive.

    mobility-common.tar.gz—Common package for config and device metadata

    nfv-common.tar.gz—Common packages for VNF orchestration-related common utilities

    nfv-device-onboarding.tar.gz—Package to support NSO device onboarding

    nfv-vim.tar.gz—Package for Openstack related precheck functionality

    nfv-vnf-lcm.tar.gz—Package for VNF Instantiation and termination logic

    mop-common.tar.gz—Common packages for config MOP-related common utilities

    mobility-mop.tar.gz—Package for Mobility MOP Config Push

  3. VNF Packages Required for Orchestration (SOL003/SOL004)

    These are VNF packages that are used for onboarding a specific VNF. These packages are provided only as guidelines; in most cases, a given package must be customized to suit the deployment environment.

    VPC-SI-2P-IMAGE-BOOT—Reference SOL003/SOL004 CSAR package for SI instantiation

    RCM-IMAGE-BOOT—Reference SOL003/SOL004 CSAR package for RCM instantiation

    VPC-DI-2P-1DI-ENCRYPTVOLBOOT—Reference SOL003/SOL004 CSAR package for VPC DI instantiation with two CF and four SF. SF has two service networks.

    VPC-DI-2P-1DI-ENCRYPTVOLBOOT-LTD—Reference SOL003/SOL004 CSAR package for VPC DI instantiation with two CF and two SF. SF has two service networks.

    VPC-DI-2P-1DI-ENCRYPTVOLBOOT-LTD-1S-NETWORK—Reference SOL003/SOL004 CSAR package for VPC DI instantiation with two CF and two SF. SF has only one service network.

    create-zip.sh—Shell script to rebuild the SOL003 package, if there are any changes to SOL001 definitions or Day-0 scripts.


Note


If you are already using the Mobility Function Pack, refer to the procedure in Appendix B: Generic Upgrade Steps of Mobility Function Pack (MFP).


VNF Orchestration/Deployment and Automatic Configuration Management

This solution includes the following tasks:

  • Pre-population of config metadata for VNF orchestration.

  • Orchestration/Deployment of VNFs (CP, UP, or RCM)

  • Automatic device onboarding post VNF deployment

  • Post-deployment automatic configuration push

Pre-population of Config Metadata for VNF Orchestration

Pre-population of config metadata is required to achieve any post-deployment configuration push from NSO in automated mode. If there is no prepopulated data for a device, NSO still instantiates the VNF and onboards it as a device in NSO, but no configuration is pushed automatically.

The prepopulated config metadata has the following structure; how it is populated depends on the network scheme and data set:

container metadata-store {
	        list config-metadata {
	            key device-name;
	            leaf device-name {
	                tailf:info "onboarding device name";
	                type string;
	            }
	            leaf redundancy_scheme {
	                tailf:info "cluster-topology 1:1, N:M and N+2";
	                type string;
	            }
	            leaf device-type {
	                tailf:info "Onboarding device type vpc or rcm";
	                type string;
	            }
	            list attributes {
	                key attribute-name;
	                leaf attribute-name {
	                    tailf:info "Attribute Name";
	                    type string;
	                }
	                leaf attribute-value {
	                    tailf:info "Attribute Value";
	                    type string;
	                }
	            }
	            list configuration-type {
	                key config-type;
	                tailf:info "Configuration type Day0.5, Day1 or DayN";
	                leaf config-type {
	                    type string;
	                }
	                list files {
	                    key file-name;
	                    tailf:info "file name";
	                    leaf file-name {
	                        type string;
	                    }
	                    leaf config-scheme {
	                        type string;
	                    }
	                    // CP device info
	                    list additional-files {
	                        key device;
	                        //cp device
	                        leaf device {
	                            tailf:info "device name";
	                            type string;
	                        }
	                        list additional-file {
	                            key additional-file-name;
	                            leaf additional-file-name {
	                                tailf:info "file name";
	                                type string;
	                            }
	                        }
	                    }
	                }
	            }
	        }
	    }

The following table provides a description of the parameters:

Parameter            Description
device-name          Name of the NSO device corresponding to the VNF. Same as the VNF name.
redundancy_scheme    Type of redundancy scheme. N+2 is standalone (no redundancy).
device-type          vpc (for SI and DI) or rcm (for RCM).
configuration-type   Day-0.5 is a special configuration for N:M redundancy. This configuration enables the UP to contact the RCM, and is expected to be saved persistently.
                     Day-1 is the bulk of the configuration.
                     Day-N is generally a change to a working configuration. It does not apply to NSO orchestration flows.
file-name            Primary configuration file(s) to be pushed.
config-scheme        This parameter can have one of the following values:
                     Common: Configuration is pushed to all UPs regardless of role (Active or Standby).
                     host-specific: This scheme is similar to "Common", as the configuration is pushed to all UPs (Active or Standby). However, it is pushed only after the "common" configuration. This enables you to provide any configuration that depends on the "common" configuration, for example, the control-plane group configuration.
                     allHostSpecific: Contains the union of the host-specific configurations for all active UPs. The configuration is pushed to all the standby UPs for N:M.
                     "Active1", "Active2", ... "ActiveN": Host-specific configuration for the respective active UP. It is pushed only to that specific UP.
                     "Active1-rcm", "Active2-rcm", ... "ActiveN-rcm": This configuration is in RCM format and is pushed to the RCMs. RCM needs this scheme to perform configuration negation when a standby takes over for a specific active device.
additional-files     This parameter pushes the related configuration to other devices (for example, pushing configuration to the CP when onboarding a UP). This is not yet supported.
attribute-name       This parameter identifies any attribute (variable) in the config files for dynamic substitution. Formatted as $attribute_name.
attribute-value      Value for the attribute.
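
The following is a hedged sketch of how dynamic substitution could look, assuming a hypothetical config file that uses a $hostname placeholder and a matching attribute registered in the config metadata (the file path and values are examples only):

    # Fragment of /home/ubuntu/tmo_action/day0.5.txt before substitution (hypothetical content)
    config
      system hostname $hostname
    end

    # Matching metadata: attributes { attribute-name hostname attribute-value TEST }
    # After substitution, the line pushed to the device reads: system hostname TEST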

The following is an example of the NSO action to populate or modify the config metadata:

container config-metadata {
	        // config true;
	        tailf:action config-metadata-request {
	            tailf:info "Invoke upgrade action on the selected devices";
	            tailf:actionpoint config-metadata-request;
	            input {
	                list config-metadata {
	                    key device-name;
	                    leaf device-name {
	                        tailf:info "onboarding device name";
	                        type string;
	                    }
	                    leaf device-type {
	                        tailf:info "Onboarding device type vpc or rcm";
	                        type enumeration {
	                            enum vpc;
	                            enum rcm;
	                        }
	                    }
	                    leaf redundancy_scheme {
	                        tailf:info "cluster-topology 1:1, N:M and N+2";
	                        type enumeration {
	                            enum 1:1;
	                            enum N:M;
	                            enum RCUPS;
	                        }
	                    }
	

	                    list configuration-type {
	                        key config-type;
	                        tailf:info "Configuration type Day0.5, Day1 or DayN";
	                        leaf config-type {
	                            type enumeration {
	                                enum Day0.5;
	                                enum Day1;
	                                enum DayN;
	                            }
	                        }
	                        list files {
	                            key file-name;
	                            tailf:info "file name";
	                            leaf file-name {
	                                type string;
	                            }
	                            leaf config-scheme {
	                                type enumeration {
	                                    enum common;
	                                    enum host-specific;
	                                    enum host-specific-common;
	                                }
	                            }
	                            // CP device info
	                            list additional-files {
	                                key device;
	                                //cp device
	                                leaf device {
	                                    tailf:info "device name";
	                                    type string;
	                                }
	                                list additional-file {
	                                    key additional-file-name;
	                                    leaf additional-file-name {
	                                        tailf:info "file name";
	                                        type string;
	                                    }
	                                }
	

	                            }
	                        }
	                    }
	                    list attributes {
	                        key attribute-name;
	                        leaf attribute-name {
	                            tailf:info "Attribute Name";
	                            type string;
	                        }
	                        leaf attribute-value {
	                            tailf:info "Attribute Value";
	                            type string;
	                        }
	                    }
	                }
	                leaf delete-config-data {
	                    type boolean;
	                    default false;
	                }
	            }
	            output {
	                leaf status {
	                    type string;
	                }
	                leaf message {
	                	type string;
	                }
	            }
	        }
	    }

You can call this action from RESTCONF, as shown in the following example:

URI: http://<NSO-IP>:<NSO-REST-PORT>/restconf/data/mobility-common:config-metadata/config-metadata-request

Method: POST

Content-Type: application/yang-data+json

Payload:

{
    "config-metadata": {
        "device-name": "test2",
        "schema": "1:1",
        "attributes": {
            "attribute-name": "test",
            "attribute-value": "gh"
        },
        "configuration-type": {
            "config-type": "Day0.5",
            "files": [
                {
                    "file-name": "/home/ubuntu/tmo_action/test.txt"
                },
                {
                    "file-name": "/home/ubuntu/tmo_action/day0.5.txt"
                }
            ]
        }
    }
}

Result:

{
    "mobility-common:output": {
        "status": "Success
/home/ubuntu/tmo_action/test.txt ==> syntax error: unknown command,Error: on line 3: kkkl,
/home/ubuntu/tmo_action/day0.5.txt ==> Success"
    }
}
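
The following is a hedged curl sketch of the same RESTCONF call, assuming basic authentication and that the payload above has been saved to a local file (the file name is illustrative):

    curl -u <NSO-User-name>:<NSO-Password> \
      -H "Content-Type: application/yang-data+json" \
      -X POST \
      -d @config-metadata-payload.json \
      "http://<NSO-IP>:<NSO-REST-PORT>/restconf/data/mobility-common:config-metadata/config-metadata-request"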

You can call this action using NCS CLI, as shown in the following example:

ubuntu@ncs> request config-metadata config-metadata-request config-metadata { device-name staros-1 attributes { attribute-name hostname attribute-value TEST } configuration-type { config-type Day0.5 files { file-name /home/ubuntu/tmo_action/test.txt } files { file-name /home/ubuntu/tmo_action/day0.5.txt } } schema 1:1  }
status Success
/home/ubuntu/tmo_action/test.txt ==> syntax error: unknown command,Error: on line 3: kkkl,
/home/ubuntu/tmo_action/day0.5.txt ==> Success
[ok][2021-07-12 08:05:01]

NOTES:

  • The config-metadata-request action has an internal config validator. The config validator detects syntax errors and certain semantic errors (for example, out-of-range values) before the configuration is pushed. Config validation requires at least one device that is onboarded in NSO (either a real device or a NetSim device).

    The configurable parameters are as follows:

    container configurable-parameters {
        leaf config-pre-validation-vpc-device-name {
            type string;
        }
        leaf config-pre-validation-rcm-device-name {
            type string;
        }
    }
    

    Config validation of files is optional. If you do not want to validate the configs, you can turn off this feature using the following configurable parameter. If config validation is turned off, any error in the configuration files results in a config push error, which must then be rolled back.

    container configurable-parameters {
        leaf config-pre-validation-required {
            type boolean;
            default false;
        }
    }
    

    All such configurable parameters are held in the configurable-parameters container.
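
    The following is a hedged sketch of enabling config pre-validation with the parameters above; the device names are hypothetical NetSim devices onboarded in NSO:

    ubuntu@ncs# config
    ubuntu@ncs(config)# configurable-parameters config-pre-validation-required true
    ubuntu@ncs(config)# configurable-parameters config-pre-validation-vpc-device-name netsim-vpc-0
    ubuntu@ncs(config)# configurable-parameters config-pre-validation-rcm-device-name netsim-rcm-0
    ubuntu@ncs(config)# commit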

Onboarding ESC and Openstack as Devices

For ESC installation, see the ESC documentation. Before configuring or onboarding devices and instantiating VNFs, perform the following setup steps:

NSO and ESC Environment Setup for NFV

  1. SSH to ESC host using username and password

    ssh esc@<esc-ip>
  2. Become Sudo user

    sudo su 
  3. Edit the following file: vi /opt/cisco/esc/esc_database/etsi-production.properties

  4. Edit the information as shown below and save the file (do not change the spring user name and password). Update the NSO details accordingly. For ESC/NSO communication, use only the local subnet management IP, not the floating IP.

    spring.security.user.name=esc
    spring.security.user.password=$1$J7BUBX$Ce4vqA6JcrWCggRpYrPYg1
     
    security.pam.service=
    server.additionalConnector.port=8253
    server.additionalConnector.key-alias=esc
    server.esc.key-alias=esc
     
    nfvo.apiRoot=<NSO-IP>:9191
    nfvo.httpScheme=http
    nfvo.userName=<NSO-User-name>
    nfvo.password=<NSO-Password>
    nfvo.authenticationType=BASIC
     
    server.host=<ESC-Orch-IP>
    http.enabled=true
    https.enabled=false
    certificate.validation=false
    spring.datasource.password=${PGSQL_PASSWORD}
    spring.flyway.password=${PGSQL_PASSWORD}
    
  5. Restart the escadm service, as shown below:

    escadm restart 
    Stopping esc_service: [OK]
    Stopping escadm service: [OK]
    Starting escadm service: [OK]
    #
    
  6. Check the escadm health until it reports healthy, as shown below (this may take a few minutes):

    escadm health 
    ============== ESC =================
    vimmanager (pgid 18651) is running
    monitor (pgid 18688) is running
    mona (pgid 18741) is running
    snmp is disabled at startup
    etsi (pgid 19316) is running
    pgsql (pgid 18944) is running
    portal (pgid 19355) is running
    confd (pgid 18978) is running
    escmanager (pgid 19131) is running
    =======================================
    ESC HEALTH PASSED
    
  7. Log in to NSO, modify the following configuration according to your environment, and save it to a file:

    <config xmlns="http://tail-f.com/ns/config/1.0">
      <nfv xmlns="urn:etsi:nfv:yang:etsi-nfv-descriptors">
      <settings xmlns="http://cisco.com/ns/nso/cfp/cisco-etsi-nfvo">
        <image-server>
          <ip-address><NSO-IP></ip-address>
          <port>8010</port>
          <document-root>/var/opt/ncs/vnfpackages</document-root>
        </image-server>
        <etsi-sol3>
          <server>
            <ip-address><NSO-IP></ip-address>
            <port>9191</port>
            <use-ssl>false</use-ssl>
            <document-root>/var/opt/ncs</document-root>
            <auth-enabled>true</auth-enabled>
            <auth-types>
              <basic>
                <username><NSO-USERNAME></username>
                <password><NSO-PASSWORD></password>
              </basic>
            </auth-types>
          </server>
          <vnfm-behaviour>
            <vnfm-behaviour-override>
              <id>default-sol3</id>
              <rpc-behaviour>
                <rpc>
                  <include>
                    <vim-info>false</vim-info>
                  </include>
                </rpc>
                <modify>
                  <pre>
                    <rpc>false</rpc>
                  </pre>
                  <post>
                    <rpc>true</rpc>
                  </post>
                </modify>
              </rpc-behaviour>
              <grant>
                <store-history>false</store-history>
                <heal>
                  <authorise-grant>true</authorise-grant>
                </heal>
              </grant>
              <onboarding>
                <store-details>true</store-details>
              </onboarding>
            </vnfm-behaviour-override>
          </vnfm-behaviour>
        </etsi-sol3>
      </settings>
      </nfv>
    </config>
    
  8. Compile all the packages in the packages folder and perform a package reload.

    ubuntu@test-nso:/var/opt/ncs/packages$ ncs_cli -C
    User ubuntu last logged in 2021-09-23T08:00:34.649202+00:00, to test-nso, from 209.165.200.225 using cli-ssh
    ubuntu connected from 209.165.200.225 using ssh on test-nso
    ubuntu@ncs# packages reload
     
  9. Load merge the file, as shown below. This step enables NSO as the NFVO and runs the NFVO service on port 9191:

    ubuntu@test-nso:~$ vi config.xml
    ubuntu@test-nso:~$ ncs_cli -C
     
    User ubuntu last logged in 2021-08-04T09:10:55.819283+00:00, to test-nso, from 209.165.200.226 using cli-ssh
    ubuntu connected from 209.165.200.227 using ssh on test-nso
    ubuntu@ncs# config 
    Entering configuration mode terminal
    ubuntu@ncs(config)# load merge config.xml 
    Loading.
    1.54 KiB parsed in 0.01 sec (128.38 KiB/sec)
    ubuntu@ncs(config)# commit 
    
  10. Update the NACM rule by adding the NSO username to the "ncsadmin" group:

    ubuntu@test-nso:~$ ncs_cli -C
     
    User ubuntu last logged in 2021-08-06T09:56:26.370979+00:00, to test-nso, from 209.165.200.227 using cli-ssh
    ubuntu connected from 209.165.200.227 using ssh on test-nso
    ubuntu@ncs# config 
    Entering configuration mode terminal
    ubuntu@ncs(config)# nacm groups group ncsadmin user-name ubuntu 
    ubuntu@ncs(config-group-ncsadmin)# commit 
    
  11. Copy the necessary packages to the standard location on the NSO (typically /var/opt/ncs/packages).

  12. Perform a package reload and check the package status. The status should be UP for all packages.

    ubuntu@test-nso:~$ ncs_cli -C
     
    User ubuntu last logged in 2021-08-06T09:58:39.866838+00:00, to test-nso, from 209.165.200.227 using cli-ssh
    ubuntu connected from 209.165.200.227 using ssh on test-nso
    ubuntu@ncs# packages reload
    ubuntu@ncs# show packages package oper-status
    
    NAME                  UP  PROGRAM CODE ERROR  JAVA UNINITIALIZED PYTHON UNINITIALIZED * 
    -----------------------------------------------------------------------------
    cisco-etsi-nfvo        X      -                     -              -              
    cisco-rcm-nc-1.0       X      -                     -              -              
    cisco-staros-cli-5.38  X      -                     -              -              
    esc                    X      -                     -              -              
    etsi-sol003-gen-1.13   X      -                     -              -              
    mobility-common        X      -                     -              -              
    mop-automation         X      -                     -              -              
    mop-common             X      -                     -              -              
    nfv-common             X      -                     -              -             
    nfv-device-onboarding  X      -                     -              -              
    nfv-vim                X      -                     -              -              
    nfv-vnf-lcm            X      -                     -              -              
    openstack-cos-gen-4.2  X      -                     -              -              
    
  13. Set up the notification stream: update the /etc/ncs/ncs.conf file to add the "nfv-events" stream.

    <ncs-config>
      <notifications>
        <event-streams>
          <stream>
            <name>nfv-events</name>
            <description>Generic netconf notification stream for NFV events</description>
            <replay-support>true</replay-support>
            <builtin-replay-store>
              <enabled>true</enabled>
              <dir>${NCS_RUN_DIR}/state</dir>
              <max-size>S10M</max-size>
              <max-files>50</max-files>
            </builtin-replay-store>
          </stream>
        </event-streams>
      </notifications>
    </ncs-config>
    
  14. Restart NSO as sudo user.

    /etc/init.d/ncs stop   
    Stopping ncs (via systemctl):                              [  OK  ]
    /etc/init.d/ncs start  
    Starting ncs (via systemctl):                              [  OK  ]  
    
  15. Onboard OpenStack, the ESC ETSI SOL003 interface, and the ESC native NETCONF interface as devices in NSO via the device onboarding APIs.

    1. Onboard OpenStack as a device. The following is an example; customize it to the specific deployment. This can be configured via the NSO CLI in configuration mode. See the NSO documentation for information about authgroups (a hedged authgroup sketch follows this procedure).

      devices device openstack
      address 209.165.200.228
      port 5000
      authgroup openstack
      device-type generic ned-id openstack-cos-gen-4.2
      
    2. Onboard ESC ETSI interface as a device. The following is an example. Customize to the specific deployment.

      devices device esc-etsi
      address 209.165.200.229
      port 8250
      authgroup esc-etsi
      device-type generic ned-id etsi-sol003-gen-1.13
      
    3. Onboard ESC native NETCONF interface as a device. The following is an example. Customize to the specific deployment.

      devices device esc-netconf
      address 209.165.200.229
      ssh host-key ssh-rsa
      key-data "AAAAB3NzaC1yc2EAAAADAQABAAABAQDYwNCaa3ghJtnJSvn/
      aSPjCuoMKmssZds+J5d9JCOS\n3h3V/fCtJwiH7qMgMXnNc0LEr1fZhxQ4kg5o/
      IafmoYD7N+w/ECqWEp68sjeN+AftiZ9J74D\n+/KDonffgBCHxIVEo0XHYlojrtmpg/
      EH9/N3fQgoSzEhGItGG4uMaAzbWr1pO8AApOPlPi4r\nciL4Qemi6u4i/
      HGFr8MqQp5qcMFd8O30OlB1q1vKn9sq/9sL6EzqyUd2lMounDglEQYMgi8J\
      nyG6upsOFuvhiYRC9qfHML45quyepsJdVi2Li2QwUJLa89EDh148RlhLTJs4s2iAwBGNdvLdK\ntzLu2VGyWKqH"
      !
      authgroup esc-netconf
      device-type netconf ned-id esc
      
  16. Track the device addition status as shown below (for different devices):

    ubuntu@test-nso:~$ ncs_cli -C
     
    User ubuntu last logged in 2021-08-06T10:09:23.550686+00:00, to test-nso, from 209.165.200.227 using cli-ssh
    ubuntu connected from 209.165.200.227 using ssh on test-nso
    ubuntu@ncs# show vnf-status instances esc-netconf
                                           
    INSTANCE ID  TIMESTAMP   FUNCTION TYPE OPERATION  STATUS   STATUS MESSAGE
    -----------------------------------------------------------------------------------
    esc-netconf  2021-07-21 *    -          init   success   Device Onboarding initialized
                 2021-07-21 *    -          init   success   Device Onboarding initialized
                 2021-07-21 *    - fetch-ssh-keys  success   fetch-ssh-keys was successful
                 2021-07-21 *    -         connect success   connect was successful
                 2021-07-21 *    -       sync-from success   sync-from was successful
                 2021-07-21 *    - device-config   success   Subscribed to ESC Netconf    notification escEvent Stream
                 2021-07-21 *    -          ready  success   Device Successfully onboarded                                                                                                 
    
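
The authgroups referenced in the onboarding examples must exist before the devices can connect. The following is a hedged sketch only; substitute the actual credentials for your VIM and ESC:

    ubuntu@ncs# config
    ubuntu@ncs(config)# devices authgroups group openstack default-map remote-name <openstack-user> remote-password <openstack-password>
    ubuntu@ncs(config)# devices authgroups group esc-etsi default-map remote-name <esc-etsi-user> remote-password <esc-etsi-password>
    ubuntu@ncs(config)# devices authgroups group esc-netconf default-map remote-name <esc-user> remote-password <esc-password>
    ubuntu@ncs(config)# commit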

Prerequisites for VNF Instantiation

Before submitting the VNF deployment request, make the following configuration changes:

  1. Configurable parameters

    Set the following configurable parameters, if required:

    • configurable-parameters device-ping-sleeptimesec 30 (default value is 30 sec)

    • configurable-parameters device-ping-retries 150 (default value is 30. In case of RCM, configure it to some higher value, for example, 150)

    • configurable-parameters p2p-required true (default value is false)

    • configurable-parameters p2p-soFile-path /var/opt/ncs/patch_libp2p-2.64.1418.so.tgz

  2. Prepopulating of config metadata

    When you configure config-metadata, the device name must be the same as the VNF instance name.

    You can call this action from RESTCONF, as shown in the following example:

    URI: http://<NSO-IP>:<NSO-REST-PORT>/restconf/data/mobility-common:config-metadata/config-metadata-request

    Method: POST

    Content-Type: application/yang-data+json

    Sample Payload:

    {
        "config-metadata": {
            "device-name": "test2",
            "schema": "1:1",
            "attributes": {
                "attribute-name": "test",
                "attribute-value": "gh"
            },
            "configuration-type": {
                "config-type": "Day1",
                "files": [
                    {
                        "file-name": "/home/ubuntu/tmo_action/test.txt"
                    },
                    {
                        "file-name": "/home/ubuntu/tmo_action/day0.5.txt"
                    }
                ]
            }
        }
    }
    

Ensure that you follow the criteria described in the following figure while prepopulating the config metadata:

VNF Instantiation

A VNF is instantiated upon configuration; to instantiate the VNF, you must load the VNF configuration into NSO. The VNF references the SOL006 VNFD, as well as VIM artifacts such as OpenStack tenant networks and IP addresses. For details about the YANG definition of the VNF, see Appendix A.

Instantiating a VNF involves many components:

  • An ETSI SOL001 VNFD template packaged as a TOSCA VNF package

  • An ETSI SOL006 VNFD with the same name or ID as the VNF package

  • A VNF instance that is proprietary to the NSO

The Mobility function pack ships with some example VNF packages, which also contain the corresponding SOL006 VNFD. These examples can be used as a base, but additional customization is required to fit the deployment. An example VNF configuration is given below:

{
   "nfv-vnf-lcm:nfv-vnf":[
      {
         "network-function-type":"VPC-SI",
         "name":"test026",
         "vnfd":"VPC-SI-2P-IMAGE-BOOT",
         "instantiation-level":"default",
         "deployment-flavor":"default",
         "mgmt-user-name":"admin",
         "mgmt-password":"Csco@123",
         "host-name":"vpc-si",
         "domain-name":"cisco.com",
         "ntp-server":"209.165.201.1",
         "name-server":"209.165.201.2",
         "location":{
            "vim":{
               "name":"openstack",
               "project":"test",
               "zone-id":"nova"
            },
            "vnfm":"esc-etsi"
         },
         "network":[
            {
               "type":"VIM_NETWORK_MANAGEMENT",
               "extent":"external",
               "name":"test-mgmt",
               "subnet-name":"test-mgmt-subnet"
            },
            {
               "type":"VIM_NETWORK_ORCHESTRATION",
               "extent":"external",
               "name":"test-orch",
               "subnet-name":"test-orch-subnet"
            },
            {
               "type":"VIM_NETWORK_SERVICE_1",
               "extent":"external",
               "name":"service1",
               "subnet-name":"service1"
            },
            {
               "type":"VIM_NETWORK_SERVICE_2",
               "extent":"external",
               "name":"service2",
               "subnet-name":"service2"
            }
         ],
         "unit":[
            {
               "type":"VPC-SI",
               "image":"core-si-21.23",
               "flavor":"core-si",
               "connection-point":[
                  {
                     "name":"nic0",
                     "ip-address":[
                        {
                           "id":0,
                           "fixed-address":[
                              "209.165.201.3"
                           ]
                        }
                     ],
                     "security-group":[
                        "default"
                     ],
                     "network-type":"VIM_NETWORK_ORCHESTRATION"
                  },
                  {
                     "name":"nic1",
                     "ip-address":[
                        {
                           "id":0,
                           "fixed-address":[
                              "209.165.201.4"
                           ]
                        }
                     ],
                     "security-group":[
                        "default"
                     ],
                     "network-type":"VIM_NETWORK_MANAGEMENT"
                  },
                  {
                     "name":"nic2",
                     "ip-address":[
                        {
                           "id":0,
                           "fixed-address":[
                              "209.165.201.5"
                           ]
                        }
                     ],
                     "security-group":[
                        "default"
                     ],
                     "network-type":"VIM_NETWORK_SERVICE_1"
                  },
                  {
                     "name":"nic3",
                     "ip-address":[
                        {
                           "id":0,
                           "fixed-address":[
                              "209.165.201.6"
                           ]
                        }
                     ],
                     "security-group":[
                        "default"
                     ],
                     "network-type":"VIM_NETWORK_SERVICE_2"
                  }
               ]
            }
         ],
         "extra-parameters":[
            {
               "name":"BOOTUP_TIME",
               "value":"100"
            },
            {
               "name":"LICENSE_KEY",
               "value":"\"VER=1|DOI=1624646484|DOE=1640457684|ISS=3|NUM=212017|
CMT=SWIFT_License|LSG=5000000|LEC=10000000|LGT=5000000|FIS=Y|FR4=Y|FTC=Y|FSR=Y|
FPM=Y|FID=Y|FI6=Y|FLI=Y|FFA=Y|FCA=Y|FTP=Y|FTA=Y|FDR=Y|FDC=Y|FGR=Y|FAA=Y|FDQ=Y|
FEL=Y|BEP=Y|FAI=Y|FCP=Y|LCF=5000000|LPP=5000000|LSF=5000000|FLS=Y|FSG=Y|
LGW=5000000|HIL=XT2|LSB=5000000|LMM=5000000|FIB=Y|FND=Y|FAP=Y|FRE=Y|FHE=Y|
FUO=Y|FUR=Y|FOP=Y|FRB=Y|FCF=Y|FVO=Y|FST=Y|FSI=Y|FRV=Y|F6D=Y|F13=Y|FIM=Y|
FLP=Y|FSE=Y|FMF=Y|FEE=Y|FHH=Y|FIT=Y|FSB=Y|FDS=Y|LSE=5000000|FLR=Y|FLG=Y|
FMC=Y|FOC=Y|FOS=Y|FIR=Y|FNE=Y|FGD=Y|LIP=5000000|FOE=Y|FAU=Y|FEG=Y|FL2=Y|
FSH=Y|FLF=Y|FSP=Y|FNI=Y|FCI=Y|FME=Y|FCN=Y|FUB=Y|FSF=Y|FGO=Y|FPE=Y|FWI=Y|
FAC=Y|FIE=Y|FSM=Y|FAG=Y|FNQ=Y|FEW=Y|FAR=Y|FOX=Y|FPW=Y|FAM=Y|FGX=Y|FWT=Y|
FUA=Y|LDT=5000000|LEX=5000000|LVL=5000000|LQP=5000000|LMP=5000000|
LCU=10000000|LUU=10000000|FXS=Y|FLC=Y|FRT=Y|FSX=Y|FBS=Y|FRD=Y|FXM=Y|
LTO=10000000|FNS=Y|LNS=5000000|SIG=MC0CFBge/
0TZha2Ta7c1L5CLOL2tgDIDAhUAhIKwZxXEJJpr9Xk5buNyzZStrNM\""
            }
         ]
      }
   ]
}

The following is another example, this time instantiating an RCM VNF:

{
    "nfv-vnf-lcm:nfv-vnf": [
      {
        "network-function-type": "RCM",
        "name": "RCM-ahhashem-sol003-78",
        "vnfd": "RCM-IMAGE-BOOT",
        "instantiation-level": "default",
        "deployment-flavor": "default",
        "mgmt-user-name": "luser",
        "mgmt-password": "$8$40/jVMTHJY+Jrd7mZiwqdrKEIz6Kc5Pt2Qvnwi0/65g=",
        "host-name": "rcm",
        "domain-name": "cisco.com",
        "ntp-server": "209.165.201.1",
        "name-server": "209.165.201.1",
        "location": {
          "vim": {
            "name": "openstack",
            "project": "ahhashem",
            "zone-id": "nova"
          },
          "vnfm": "esc-etsi"
        },
        "network": [
          {
            "type": "VIM_NETWORK_MANAGEMENT",
            "name": "ahhashem-mgmt",
            "extent": "external",
            "subnet-name": "ahhashem-mgmt-subnet"
          },
          {
            "type": "VIM_NETWORK_ORCHESTRATION",
            "name": "ahhashem-orch",
            "extent": "external",
            "subnet-name": "ahhashem-orch-subnet"
          },
          {
            "type": "VIM_NETWORK_SERVICE_1",
            "name": "service1",
            "extent": "external",
            "subnet-name": "service1"
          } ,
          {
            "type": "VIM_NETWORK_SERVICE_2",
            "name": "service2",
            "extent": "external",
            "subnet-name": "service2"
          }
        ],
        "unit": [
          {
            "type": "RCM",
            "image": "core-rcm-21.23",
            "flavor": "mkal-rcm-hugepages",
            "connection-point": [
              {
                "name": "nic0",
                "ip-address":{
                    "id": 1,
                    "fixed-address": ["209.165.201.7"]
                },
                "security-group": ["default"],
                "network-type": "VIM_NETWORK_ORCHESTRATION"
              },
              {
                "name": "nic1",
                "ip-address":{
                    "id": 1,
                    "fixed-address":["209.165.201.8"]
                } ,
                "security-group": ["default"],
                "network-type": "VIM_NETWORK_MANAGEMENT"
              },
              {
                "name": "nic2",
                "ip-address": {
                    "id": 1,
                    "fixed-address": ["209.165.201.9"]
                },
                "security-group": ["default"],
                "network-type": "VIM_NETWORK_SERVICE_1"
              } ,
              {
                "name": "nic3",
                "ip-address": {
                    "id": 1,
                    "fixed-address": ["209.165.201.10"]
                },
                "security-group": ["default"],
                "network-type": "VIM_NETWORK_SERVICE_2"
              }

            ]
          }
        ],
        "extra-parameters": [
          {
            "name": "VIM_VM_NAME",
            "value": "RCM-ahhashem-sol003-78"
          },
          {
            "name": "HOST_NAME",
            "value": "rcm"
          },
            {
              "name": "NIC0_TYPE",
              "value": "virtual"
            },
            {
              "name": "NIC1_TYPE",
              "value": "virtual"
            },
            {
              "name": "NIC2_TYPE",
              "value": "direct"
            },
            {
              "name": "NIC3_TYPE",
              "value": "direct"
            },
            {
              "name": "MGMT_USER_NAME",
              "value": "luser"
            },
            {
              "name": "MGMT_PASSWORD_ROUND4096",
              "value": "$6$rounds=4096$P2wdTbEBO0LHmHi$OwbVEIarMbt
Qxbu5Us5kW0n0MOWp3QN9eVRX7WjvLm4xTJvFpl6vHez3XkKm39XJJ7dGRRIsZqXfcZRjQBA7E."
            },
            {
                "name": "SERVICE_INTERFACE_IP_1",
                "value": "209.165.201.9"
            },
            {
                "name": "SERVICE_INTERFACE_IP_2",
                "value": "209.165.201.11"
            }, 
            {
              "name": "NTP_SERVER",
              "value": ["209.165.201.12","209.165.201.13","209.165.201.14"]
            }

        ]
      }
    ]
  }
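
The following is a hedged sketch of submitting one of these payloads over RESTCONF, assuming the same NSO endpoint and basic authentication used for the config-metadata action earlier in this chapter (the payload file name is illustrative); the payload can equally be loaded through the NSO CLI:

    curl -u <NSO-User-name>:<NSO-Password> \
      -H "Content-Type: application/yang-data+json" \
      -X POST \
      -d @nfv-vnf-test026.json \
      "http://<NSO-IP>:<NSO-REST-PORT>/restconf/data"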

VNF Instantiation - Component Interactions and Flows

The following figure illustrates the complete flow of end-to-end instantiation automation:

Figure 5. VNF Instantiation Interactions

Detailed Steps:

  1. The Network operator has all the required details for VNF instantiation including the name, type, dynamic attributes, and the configuration files. The network operator places the config files into the NSO filesystem, and registers the details with NSO config DB for automation.

    This step includes the following tasks:

    • The network operator secure-copies (SCP) the config files into the NSO filesystem. This location must be on NFS, or be replicated in an NSO HA environment.

    • Registers all attribute-value pairs, dynamic substitution values, and Day-0.5 or Day-1 configurations.

    • Enables the validation of config files, and provides the testing device details.

    • If the pre-validation flag is set to true, the config metadata action internally validates all config files. Otherwise, it fails while the configuration is being applied.

  2. The network operator prepares the payload for VNF instantiation with all the details, and then submits the payload to create an instance. NSO performs basic validation and processes the order.

    This step includes the following task:

    • Validates the inputs such as password length, image, flavor, and network existence in Openstack before invoking an order

  3. NSO processes the order internally and prepares the ESC VNF instantiation order.

    This step includes the following tasks:

    • Creates NSO footprint of the service

    • Does CSAR validation

    • Invokes ESC VNF instantiation order using SOL3/SOL4 input

    • Starts listening to ESC notifications (both ETSI and NETCONF)

  4. ESC performs input validation of SOL3/SOL4 and creates the order in VIM.

    This step includes the following tasks:

    • ESC invokes VNF instantiation.

    • On successful invocation of the VNF, ESC creates MONA monitors to monitor the VNF.

    • Returns the updates via ETSI and NETCONF notifications to NSO (both Success and Failure).

  5. ESC returns the periodic updates on progress to NSO via ETSI or NETCONF notifications.

    This step includes the following tasks:

    • ESC constantly sends the ETSI and NETCONF notifications on progress.

    • ETSI notification comprises deploy – init, processing, and completed notification.

    • NETCONF notifications provide more granular information on VM status.

    • On Failure, it gives appropriate error message.

  6. On receiving the VNF instantiation completion message from ESC, NSO onboards the VNF as an NSO device.

    This step includes the following tasks:

    • Instantiation logic fetches the details from input payload, and invokes device onboarding logic.

    • NSO performs the fetch-ssh-host-key from the device.

    • NSO performs connection check.

    • NSO performs sync-from.

    • NSO executes post check command such as “show version” on device.

    • NSO adds the device into NSO device tree.

  7. NSO instantiation logic waits for device addition to complete.

    This step includes the following tasks:

    • NSO checks if device onboarding process is complete.

    • If device onboarding fails, NSO stops the execution.

  8. NSO instantiation logic reads the prepopulated config metadata to interpret the config to be pushed.

    This step includes the following tasks:

    • Reads the prepopulated config metadata and interprets the Day-0.5 or Day-1 configuration files based on the device-name (Device name is based on VNF name)

    • For the RCM-based N:M scheme, the Day-0.5 configuration is pushed.

    • For the 1:1 case, the Day-1 configuration is pushed.

    • If information is missing, instantiation completes and processing stops.

  9. NSO takes the config files from config metadata, formulates the Mobility MOP input format, and invokes the MOP for config push.

    This step includes the following tasks:

    • Invokes the Mobility MOP and gets the task-id.

    • Periodically checks for the status on task-id.

    • Saves the config permanently in the device flash for 1:1 CP or UP pairs (via the MOP).

    • Completion status is updated in vnf-status ledger.

Checking the VNF Instantiation Status

You can periodically check the status of VNF instantiation using the vnf-status command.

Any failure, processing, or completion related messages are appended to the status message.

show vnf-status instances vnf-instance-name  
INSTANCE ID TIMESTAMP TYPE OPERATION STATUS STATUS MESSAGE
---------------------------------------------------------------------
<VNF-Name> <Time-Stamp> <type> <function> <status> <message-if-any>

VNF Dashboard

VNF instantiation steps and current status of the VNF are displayed in NSO based dashboard.

VNF Deletion

The following flow diagram illustrates the complete flow of end-to-end deletion automation.

Figure 6. VNF Deletion Interactions

Detailed Steps:

  1. Network operator decides to decommission or delete the existing instance, which is in running or failed state.

    This step includes the following tasks:

    • Network operator provides the VNF-name with type for the deletion

    • NSO does the validation of the VNF existence

  2. NSO checks the VNF instance status; if there is a failed instance at the NSO end, NSO invokes ESC to delete it from the VIM, or performs a rollback.

    This step includes the following tasks:

    • NSO decides to push it to ESC or perform rollback (in case of failed instance within NSO)

    • NSO does the asynchronous request to ESC and waits for notifications.

  3. ESC does the clean-up and removes the VNF monitors.

  4. ESC generates ETSI/NETCONF notifications to NSO.

  5. NSO processes ESC notifications and performs the following:

    • Invokes the device-onboarding package for deletion of the instance

    • Removes entry from “nfv-vnf-inventory”

  6. Device onboarding package deletes the device from NSO, and the status is updated in the VNF ledger.

Checking the VNF Deletion Status

You can check the status of VNF deletion using the vnf-status command.

Any failure, processing, or completion related messages are appended to the status message.

show vnf-status instances vnf-instance-name 
INSTANCE ID TIMESTAMP TYPE OPERATION STATUS STATUS MESSAGE
---------------------------------------------------------------------
<VNF-Name> <Time-Stamp> <type> <function> <status> <message-if-any>

Removing Configuration Metadata

This is a manual step; remove the config metadata using the NSO action (see the hedged sketch below). Keeping this data doesn't have any impact.
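
The following is a hedged sketch, assuming that the delete-config-data flag in the config-metadata-request action input (shown earlier in the YANG) removes the prepopulated entry for the named device:

    ubuntu@ncs> request config-metadata config-metadata-request config-metadata { device-name test2 } delete-config-data true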

Cleaning Config Files from NSO Filesystem

You need to remove the config files manually from NSO filesystem. Keeping this data doesn’t have any impact.

Automation Process - VNF Deployment, Onboarding, and Configuration Push

The Automation process includes the following sections:

Instantiation of VNF using Input Payload

After making the necessary changes, submit the instantiation request using the input payload. The automation process for VNF instantiation then starts.

For input payload sample, see the section VNF Instantiation.

Onboarding VNF as a Device in NSO

Upon successful instantiation, the VNF is onboarded as a device on the NSO. The device name is the same as the VNF name.

Installing the P2P Module in VPC Device

If the "device-type" is of VPC type and the configurable parameter "p2p-required" is set to "true" with "p2p-soFile-path" defined, the P2P file is copied to the device flash directory, and the P2P module is then upgraded.

The P2P module is installed on the device.

Configuration Push to the Onboarded Device

The following are the static parameters that are used during automated config push:

  • operation-type = Commit

  • mop-type = Common

  • save-config-permanently = false by default; it is set to true when the device type is "vpc"

Once the config push is performed using the configuration files, a task-id is generated. NSO uses the task-id to check the status of the config push and updates the ledger entry based on that status.


Note


NSO doesn't perform configuration audit on RCM. If an RCM reboots while NSO is pushing configuration to it, NSO doesn't re-push the configuration after the reboot completes. The configuration must be re-pushed manually. NSO alerts the operator about the configuration push failure. Any configuration successfully pushed to the RCM persists across reboots of that RCM.


Appendix A: YANG definition of VNF

This section provides a sample YANG definition of VNF.

module nfv-vnf-lcm {
  namespace "http://com/cisco/cx/servicepack/nfv/vnflcm";
  prefix nfv-vnf-lcm;

  import ietf-inet-types { prefix inet; }
  import tailf-common { prefix tailf; }
  import tailf-ncs { prefix ncs; }
  import nfv-common { prefix nfv-common; }
  import tailf-kicker { prefix kicker; }
  include nfv-vnf-lcm-nano {
    revision-date 2020-02-14;
  }

  organization "Cisco-AS";

  contact "Cisco AS";

  description "Generic NFV VNF LCM service package";

  revision 2020-10-22 {
    description "Active Inventory and LCM Auto/on-demand heal support";
  }

  revision 2020-07-01 {
    description "Re-branded per new naming convention";
  }

  revision 2020-02-14 {
    description "First version, ready for testing";
  }

  notification vnf-lcm {
    description "Notification about Network Function Operation";
    uses nfv-common:network-function-notification;
  }

  notification vnf-alarm {
    description "VNF alarms";
    uses nfv-common:vnf-alarm;
  }

  container nfv-vnf-inventory {
    tailf:info "CDB model to persist the VNFs, associated project, VIM and the
        VM details";
    config false;
    tailf:cdb-oper {
      tailf:persistent true;
    }

    list vnf {
      tailf:info "VNFs with associated VMs and status";
      key name;
      leaf name {
        tailf:info "VNF Name";
        type string;
      }
      leaf vnfd {
        type string;
        tailf:info "Associated VNFD name";
      }
      leaf project {
        type string;
        tailf:info "Associated vim tenant/project";
      }
      leaf vim {
        type string;
        tailf:info "Associated VIM";
      }
      leaf status {
        type string;
        tailf:info "Overall VNF status";
      }
      list vm {
        tailf:info "Associated VMs and the status";
        key name;
        leaf name {
          type string;
          tailf:info "VM name";
        }
        leaf type {
          type string;
          tailf:info "VM Type";
        }
        leaf flavor {
          type string;
          tailf:info "VIM flavor that is used to deploy the VM";
        }
        leaf host {
          type string;
          tailf:info "Compute host where the VM has been deployed";
        }
        list connection-point {
          key nic-id;
          leaf nic-id {
            type uint8;
            tailf:info "NIC id of the connection point";
          }
          leaf ip-address {
            type inet:ip-address;
            tailf:info "IP address of the connection point";
          }
        }
        leaf status {
          type string;
          tailf:info "VM status";
        }
      }
      
      leaf netconf-notification-done {
        tailf:hidden nfv-internal;
        type empty;
      }
    }
  }

  list nfv-vnf {
    description "Generic RFS model for VNF LCM";

    key "network-function-type name";

    leaf network-function-type {
      tailf:info "virtual network function type";
      type enumeration {
        enum "VPC-SI";
        enum "VPC-DI";
        enum "CSR1KV";
        enum "GENERIC";
        enum "VCU";
        enum "VDU";
        enum "EMS";
        enum "RCM";
      }
    }

    leaf name {
      tailf:info "Unique service id";
      type string;
    }

    leaf vnfd {
      mandatory true;
      type string;
      tailf:info "VNFD to use for this type of Network Function that has to be
            onboarded on the target VIM.";
    }

    uses ncs:service-data;
    ncs:servicepoint nfv-vnf-lcm;
    uses ncs:nano-plan-data;

    tailf:action heal {
      tailf:info "Heal VNF";
      tailf:actionpoint nfv-lcm-heal-ap;
      input {
      }
      output {
        uses nfv-common:standard-action-response;
      }
    }

    tailf:action start {
      tailf:info "Start VNF";
      tailf:actionpoint nfv-lcm-start-ap;
      input {
      }
      output {
        uses nfv-common:standard-action-response;
      }
    }

    tailf:action stop {
      tailf:info "Stop VNF";
      tailf:actionpoint nfv-lcm-stop-ap;
      input {
      }
      output {
        uses nfv-common:standard-action-response;
      }
    }

    tailf:action scale {
      tailf:info "Scale VNF";
      tailf:actionpoint nfv-lcm-scale-ap;
      input {
        leaf scale-type {
          mandatory true;
          tailf:info "SCALE IN or OUT";
          type enumeration {
            enum "OUT";
            enum "IN";
          }
        }

        leaf no-of-instances {
          tailf:info "Number of scale IN or OUT instances. Default is 1";
          type uint32;
          default 1;
        }

        leaf vdu-type {
          mandatory true;
          tailf:info "vdu-type as CF/SF/VPC-SI etc";
          type string;
        }
      }
      output {
        uses nfv-common:standard-action-response;
      }
    }

    tailf:action retry {
      tailf:info "Retry VNF";
      tailf:actionpoint nfv-lcm-retry-ap;
      input {
      }
      output {
        uses nfv-common:standard-action-response;
      }
    }

    leaf instantiation-level {
      type string;
      default "default";
      tailf:info "Instantiation level defined in VNFD to use. This will determine
            the number of VMs/VDUs to be deployed.";
    }

    leaf deployment-flavor {
      type string;
      default "default";
      tailf:info "Deployment flavor defined in the VNFD to use. Describes a specific
            deployment version of a VNF with specific requirements for capacity
            and performance.";
    }
    leaf mgmt-user-name {
      type nfv-common:identifier;
      description " Management login username specific to this VNF. Default values
            can be configured per VNF type.";
    }

    leaf mgmt-password {
      tailf:suppress-echo "true";
      type tailf:aes-cfb-128-encrypted-string;
      description "Management login password specific to this VNF.";
    }

    leaf host-name {
      type inet:domain-name;
      description "Hostname to use to communicate with this network function";
    }

    leaf domain-name {
      type inet:domain-name;
      description "Domain name used to construct Fully Qualified Domain Name by
            concatenating with VM hostname: <hostname>.<domain>";
    }

    leaf ntp-server {
      description "NTP server to use for VNFs deployed in this data center";
      type inet:host;
    }

    leaf name-server {
      type inet:ip-address;
      description "Name server";
    }

    container location {
      container vim {
        leaf name {
          description "NFVI this Network Function is deployed on.";
          type leafref {
            path "/ncs:devices/ncs:device/ncs:name";
          }
          //must "/ncs:devices/ncs:device[ncs:name=current()]/ncs:platform/ncs:name
          //          = 'Openstack'" {
          //  error-message "Please select Openstack devices only";
          //}
        }
        leaf project {
          type nfv-common:identifier;
          description "VIM project used to instantiate VNFs";
          mandatory true;
        }
        leaf zone-id {
          type string;
          default "nova";
          description "VIM zone id";
        }
        //TODO might need to support user domain and project domain
      }
      leaf vnfm {
        mandatory true;
        type leafref {
          path "/ncs:devices/ncs:device/ncs:name";
        }
        //must "/ncs:devices/ncs:device[ncs:name=current()]/ncs:platform/ncs:name
        //        = 'ETSI SOL'" {
        //  error-message "Please select ETSI-SOL VNFM devices only";
        //}
        description "ESC VNFM onboarded";
      }
    }
    list network {
      key type;
      leaf type {
        type nfv-common:identifier;
      }
      leaf name {
        type nfv-common:identifier;
        mandatory true;
      }
      leaf extent {
        type nfv-common:network-extent;
      }
      leaf subnet-name {
        when "../extent='external'";
        type nfv-common:identifier;
        mandatory true;
      }
    }

    list unit {
      description "Virtual Deployment Unit, a single VM.";
      key type;

      leaf type {
        description "VDU type as defined in the VNFD of this Network Function.";
        type nfv-common:identifier;
      }
      leaf image {
        type string;
        description "Image to use for this type of Network Function. Must have been
                onboarded on the target VIM.";
      }
      leaf flavor {
        mandatory true;
        type string;
        description " Flavor to use for this type of Network Function. Must have been
                onboarded on the target VIM.";
      }
      list storage-volume {
        key id;
        description "Out of band Storage volumes to use for this network function";
        leaf id {
          type string;
        }
        leaf volume-name {
          type string;
          description "Storage Volume to use for this type of Network function";
        }
      }
      list connection-point {
        key name;
        description " Network connection point such as a network interface card, as
                defined in the descriptor.";
        leaf name {
          mandatory true;
          type nfv-common:identifier;
        }

        list ip-address {
          key id;
          ordered-by user;
          leaf id {
            type uint8;
            tailf:info "IP Address ID for connection points";
          }
          leaf-list fixed-address {
            ordered-by user;
            description "IP address(es) to assign this network interface for both
                      scaled and non-scaled VNFs. Both IPv4 and IPv6 are possible to
                      allow for dual-stack cases if this VNF requires it for Internet
                      access.";
            type inet:ip-address;
          }
        }
        
        list vip {
          key address;
          ordered-by user;
          description "Virtual IP address(es) to assign this network interface. Both
                    IPv4 and IPv6 are possible to allow for dual-stack cases if this
                    VNF requires it for Internet access. Setting this will populate
                    the allowed-address-pair list in the CVIM";

          leaf address {
            type inet:ip-address;
          }
          leaf netmask {
            type inet:ip-address;
            mandatory true;
          }
        }
        leaf-list security-group {
          type nfv-common:identifier;
          description "Security group(s) to apply to this network interface.";
        }
        leaf network-type {
          type leafref {
            path "../../../network/type";
          }
          description "Network used for this connection-point.";
        }
      }
    }
    list extra-parameters {
      description "VNF instance specific additional parameters defined in the VNFD.
            This will override the values configured in the VNFD";
      key name;
      leaf name {
        type string {
          pattern "[A-Za-z0-9_]+";
        }
      }
      leaf value {
        type string;
      }
    }
  }

  list nfv-retry-vnfs {
    tailf:info "Retry VNFs to tweak the notifications";
    config false;
    tailf:cdb-oper {
      tailf:persistent true;
    }
    tailf:hidden nfv-internal;

    key name;
    leaf name {
      tailf:info "VNF Name";
      type string;
    }
  }
}
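
For reference, the following is a minimal, hypothetical sketch of what an nfv-vnf service instance configured against the model above might look like in the NSO CLI. The VNFD, project, network, image, and flavor names are placeholders; the openstack and esc-etsi device names match the VIM and VNFM devices shown in Appendix B, and the exact CLI rendering can differ between releases.

nfv-vnf VPC-SI UP-Example-00001
 vnfd example-vpc-si-vnfd
 location vim name    openstack
 location vim project example-project
 location vim zone-id nova
 location vnfm        esc-etsi
 network management
  name        example-mgmt-net
  extent      external
  subnet-name example-mgmt-subnet
 !
 unit VPC-SI
  flavor example-si-flavor
  image  example-si-image
  connection-point nic-0
   network-type management
  !
 !
!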

Appendix B: Generic Upgrade Steps of Mobility Function Pack (MFP)

This appendix covers the following procedures:

Upgrading NSO 5.7.5.1-MFP 3.4.1 to NSO 5.8.10-MFP 3.4.2

Use the following procedure to upgrade NSO 5.7.5.1-MFP 3.4.1 to NSO 5.8.10-MFP 3.4.2. This is an MFP version upgrade with a simultaneous NSO version upgrade.

  1. Copy the NSO 5.8.10 installation bin file to the /tmp folder and upgrade NSO to version 5.8.10.

  2. Set the symbolic link to the new NSO version 5.8.10 under /opt/ncs.

  3. Copy the MFP 3.4.2 packages and NEDs, and replace the existing contents of the /var/opt/ncs/packages folder.

  4. Restart NSO with the start-with-package-reload option. This upgrades MFP from 3.4.1 to 3.4.2 along with the NSO upgrade from 5.7.5.1 to 5.8.10.

The following is a detailed procedure to upgrade NSO 5.7.5.1-MFP 3.4.1 to NSO 5.8.10-MFP 3.4.2.


Note


It is always recommended to take a backup before the upgrade so that you can recover later if the upgrade does not complete.

To back up the data, use the following commands:

$ sudo su
# source /etc/profile.d/ncs.sh
# /etc/init.d/ncs stop
# ncs-backup
# exit
$

  1. Run MFP 3.4.1 on NSO 5.7.5.1:

    root@test-nso:/var/opt/ncs# ncs --version
    5.7.5.1
    
    root@ncs# show packages package package-version 
                             PACKAGE   
    NAME                     VERSION   
    -----------------------------------
    cisco-etsi-nfvo          4.7.2     
    cisco-rcm-nc-1.6         1.6       
    cisco-staros-cli-5.43    5.43.4    
    esc                      5.7.0.73  
    etsi-sol003-gen-1.13     1.13.16   
    mobility-common          3.4.1     
    mobility-rcm-subscriber  3.4.1     
    mop-automation           3.4.1     
    mop-common               3.4.1     
    nfv-common               3.4.1     
    nfv-device-onboarding    3.4.1     
    nfv-vim                  3.4.1     
    nfv-vnf-lcm              3.4.1     
    openstack-cos-gen-4.2    4.2.26    
    
    root@ncs# show packages package oper-status    
    packages package cisco-etsi-nfvo
     oper-status up
    packages package cisco-rcm-nc-1.6
     oper-status up
    packages package cisco-staros-cli-5.43
     oper-status up
    packages package esc
     oper-status up
    packages package etsi-sol003-gen-1.13
     oper-status up
    packages package mobility-common
     oper-status up
    packages package mobility-rcm-subscriber
     oper-status up
    packages package mop-automation
     oper-status up
    packages package mop-common
     oper-status up
    packages package nfv-common
     oper-status up
    packages package nfv-device-onboarding
     oper-status up
    packages package nfv-vim
     oper-status up
    packages package nfv-vnf-lcm
     oper-status up
    packages package openstack-cos-gen-4.2
     oper-status up
    root@ncs#
    
    root@ncs# show devices list 
    NAME         ADDRESS        DESCRIPTION  NED ID                 ADMIN STATE  
    ---------------------------------------------------------------------------
    esc-etsi     64.1.0.6       -            etsi-sol003-gen-1.13   unlocked     
    esc-netconf  64.1.0.6       -            esc                    unlocked     
    openstack    10.225.202.49  -            openstack-cos-gen-4.2  unlocked     
    root@ncs#
  2. Instantiate a test VNF VPC-SI device using MFP 3.4.1 with NSO 5.7.5.1:

    root@ncs# 
    System message at 2023-10-09 07:52:05...
    Commit performed by ubuntu via http using rest.
    root@ncs# 
    System message at 2023-10-09 07:52:05...
    Commit performed by ubuntu via http using rest.
    root@ncs# 
    System message at 2023-10-09 07:52:07...
    Commit performed by ubuntu via http using rest.
    root@ncs# 
    System message at 2023-10-09 07:52:07...
    Commit performed by ubuntu via http using rest.
    root@ncs# 
    System message at 2023-10-09 07:52:08...
    Commit performed by ubuntu via http using rest.
    
    root@ncs# show vnf-status instances S1-Test-00001 | tab
                                            FUNCTION                                                                
    INSTANCE ID    TIMESTAMP                TYPE      OPERATION       STATUS      STATUS MESSAGE                    
    ----------------------------------------------------------------------------------------------------------------
    S1-Test-00001  2023-10-09 07:50:55.198  VPC-SI    deploy          init        init                              
                   2023-10-09 07:51:38.595  VPC-SI    deploy          processing  processing                        
                   2023-10-09 07:52:01.639  VPC-SI    deploy          processing  processing                        
                   2023-10-09 07:52:03.997  VPC-SI    deploy          completed   completed                         
                   2023-10-09 07:53:43.293  -         init            success     Device Onboarding initialized     
                   2023-10-09 07:53:43.874  -         fetch-ssh-keys  success     fetch-ssh-keys was successful     
                   2023-10-09 07:53:45.285  -         connect         success     connect was successful            
                   2023-10-09 07:53:46.785  -         sync-from       success     sync-from was successful          
                   2023-10-09 07:53:46.964  -         ready           success     Device Successfully onboarded     
                   2023-10-09 07:54:13.305  -         config-read     success     Config MetaData is empty or null  
    
    root@ncs# show devices list 
    NAME           ADDRESS        DESCRIPTION  NED ID                 ADMIN STATE  
    -----------------------------------------------------------------------------
    S1-Test-00001  64.1.0.110     -            cisco-staros-cli-5.43  unlocked     
    esc-etsi       64.1.0.6       -            etsi-sol003-gen-1.13   unlocked     
    esc-netconf    64.1.0.6       -            esc                    unlocked     
    openstack      10.225.202.49  -            openstack-cos-gen-4.2  unlocked     
    root@ncs# 
  3. Copy the NSO 5.8.10 installation bin file to the /tmp folder and upgrade NSO to version 5.8.10.

    root@test-nso:/var/opt/ncs# cd /tmp
    root@test-nso:/tmp# ls -lrt
    total 397840
    -rwxrwxrwx 1 ubuntu   ubuntu   203071802 Nov 18  2022 nso-5.7.5.1.linux.x86_64.installer.bin
    drwx------ 3 root     root          4096 Sep 10 03:02 systemd-private-d7c0f02148d447358a1b6b5995f1f339-systemd-resolved.service-O5tL4V
    drwx------ 3 root     root          4096 Sep 10 03:02 systemd-private-d7c0f02148d447358a1b6b5995f1f339-systemd-logind.service-Uj4bic
    drwx------ 3 root     root          4096 Sep 10 03:02 snap.lxd
    drwx------ 2 ubuntu   ubuntu        4096 Sep 12 09:45 ssh-WxVBdtyvgGzB
    drwx------ 2 ubuntu   ubuntu        4096 Sep 12 19:28 ssh-kRFako4TgqJp
    drwx------ 2 ubuntu   ubuntu        4096 Sep 12 20:25 ssh-wyrZqTmiA4o1
    drwx------ 2 ubuntu   ubuntu        4096 Sep 12 20:50 ssh-a10wclKRgSP2
    -rwxrwxrwx 1 ubuntu   ubuntu   204258218 Sep 13 05:38 nso-5.8.10.linux.x86_64.installer.bin
    drwx------ 2 ubuntu   ubuntu        4096 Sep 13 12:21 ssh-ReWAFnmi3qSl
    drwx------ 2 ubuntu   ubuntu        4096 Sep 13 12:54 ssh-dn16O8f1nkaz
    drwx------ 2 ubuntu   ubuntu        4096 Sep 20 05:49 ssh-DtgyHvctQ5S0
    drwxr-xr-x 2 root     root          4096 Oct  9 07:01 hsperfdata_root
    drwxr-xr-x 2 nsoadmin nsoadmin      4096 Oct  9 07:01 hsperfdata_nsoadmin
    
    root@test-nso:/tmp# sh ./nso-5.8.10.linux.x86_64.installer.bin --system-install --install-dir /opt/ncs --config-dir /etc/ncs --run-dir /var/opt/ncs --log-dir /var/log/ncs --run-as-user nsoadmin --non-interactive
    INFO  Using temporary directory /tmp/ncs_installer.63734 to stage NCS installation bundle
    INFO  Using /opt/ncs/ncs-5.8.10 for static files
    INFO  Doing install for running as user nsoadmin
    INFO  Unpacked ncs-5.8.10 in /opt/ncs/ncs-5.8.10
    INFO  Found and unpacked corresponding DOCUMENTATION_PACKAGE
    INFO  Found and unpacked corresponding EXAMPLE_PACKAGE
    INFO  Found and unpacked corresponding JAVA_PACKAGE
    INFO  Generating default SSH hostkey (this may take some time)
    INFO  SSH hostkey generated
    INFO  Generating self-signed certificates for HTTPS
    INFO  Environment set-up generated in /opt/ncs/ncs-5.8.10/ncsrc
    INFO  NSO installation script finished
    INFO  Found and unpacked corresponding NETSIM_PACKAGE
    cp: cannot stat '/sbin/arping': No such file or directory
    WARN  Failed to copy /sbin/arping command - capability not set
    INFO  Found ncs.crypto_keys, not migrating
    INFO  The following files have been installed with elevated privileges:
      /opt/ncs/ncs-5.8.10/lib/ncs/lib/core/pam/priv/epam: setuid-root
      /opt/ncs/ncs-5.8.10/lib/ncs/erts/bin/ncs.smp: capability cap_net_bind_service
      /opt/ncs/ncs-5.8.10/lib/ncs/bin/ip: capability cap_net_admin
    
    INFO  NCS installation complete
    
    root@test-nso:/tmp# /etc/init.d/ncs stop
    Stopping ncs: .
    
    root@test-nso:/tmp# cd /opt/ncs
    root@test-nso:/opt/ncs# ls -lrt
    total 24
    drwxr-xr-x 17 root     root 4096 Oct  9 06:41 ncs-5.7.5.1
    -rw-r--r--  1 root     root    9 Oct  9 06:41 user
    -rw-r--r--  1 root     root   80 Oct  9 06:41 installdirs
    lrwxrwxrwx  1 root     root   11 Oct  9 06:41 current -> ncs-5.7.5.1
    drwxr-xr-x  2 nsoadmin root 4096 Oct  9 06:41 packages
    drwxr-xr-x  2 nsoadmin root 4096 Oct  9 06:41 downloads
    drwxr-xr-x 17 root     root 4096 Oct  9 09:43 ncs-5.8.10
    
    Set the current NSO to version 5.8.10 using a symbolic link:
    
    root@test-nso:/opt/ncs# rm -f current
    root@test-nso:/opt/ncs# ln -s ncs-5.8.10 current
    
    root@test-nso:/opt/ncs# ls -lrt
    total 24
    drwxr-xr-x 17 root     root 4096 Oct  9 06:41 ncs-5.7.5.1
    -rw-r--r--  1 root     root    9 Oct  9 06:41 user
    -rw-r--r--  1 root     root   80 Oct  9 06:41 installdirs
    drwxr-xr-x  2 nsoadmin root 4096 Oct  9 06:41 packages
    drwxr-xr-x  2 nsoadmin root 4096 Oct  9 06:41 downloads
    drwxr-xr-x 17 root     root 4096 Oct  9 09:43 ncs-5.8.10
    lrwxrwxrwx  1 root     root   10 Oct  9 09:44 current -> ncs-5.8.10
  4. Remove the previous MFP 3.4.1 packages and NEDs from the /var/opt/ncs/packages folder and replace them with the newer MFP 3.4.2 packages and NEDs.

    root@test-nso:/opt/ncs# cd /var/opt/ncs/packages/
    root@test-nso:/var/opt/ncs/packages# ls -lrt
    total 20104
    -rw-rw-r-- 1 ubuntu ubuntu 2191794 Jan 25  2023 ncs-5.7.5.1-cisco-rcm-nc-1.6.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu 2694132 Jan 25  2023 ncs-5.7.3-etsi-sol003-1.13.16.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu  655190 Jan 25  2023 ncs-5.7.2.1-esc-5.7.0.73.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu 2685815 Jan 25  2023 ncs-5.7.2.1-cisco-etsi-nfvo-4.7.2.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu 2702317 Jan 25  2023 ncs-5.7.2-openstack-cos-4.2.26.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu 9606799 Jan 25  2023 ncs-5.7.2-cisco-staros-5.43.4.tar.gz
    -rwxrwxrwx 1 ubuntu ubuntu     435 Jan 25  2023 compile-all-packages.sh
    -rwxrwxrwx 1 ubuntu ubuntu     275 Jan 25  2023 Ha-Mop.sh
    drwxrwxr-x 6 ubuntu ubuntu    4096 Oct  9 06:55 nfv-common
    drwxrwxr-x 7 ubuntu ubuntu    4096 Oct  9 06:56 nfv-device-onboarding
    drwxrwxr-x 8 ubuntu ubuntu    4096 Oct  9 06:56 nfv-vim
    drwxrwxr-x 9 ubuntu ubuntu    4096 Oct  9 06:57 nfv-vnf-lcm
    drwxrwxr-x 8 ubuntu ubuntu    4096 Oct  9 07:00 mobility-common
    drwxrwxr-x 7 ubuntu ubuntu    4096 Oct  9 07:01 mop-common
    drwxrwxr-x 8 ubuntu ubuntu    4096 Oct  9 07:01 mobility-mop
    drwxrwxr-x 7 ubuntu ubuntu    4096 Oct  9 07:01 mobility-rcm-subscriber
    
    root@test-nso:/var/opt/ncs/packages# rm -rf *
    root@test-nso:/var/opt/ncs/packages# ls -lrt
    total 0
    root@test-nso:/var/opt/ncs/packages#
    
    Copy the MFP 3.4.2 packages along with the NEDs:
    
    root@test-nso:/var/opt/ncs/packages# ls -lrt
    total 26328
    -rw-rw-r-- 1 ubuntu ubuntu 2191794 Sep 25 05:40 ncs-5.7.5.1-cisco-rcm-nc-1.6.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu 2694132 Sep 25 05:40 ncs-5.7.3-etsi-sol003-1.13.16.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu  655190 Sep 25 05:40 ncs-5.7.2.1-esc-5.7.0.73.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu 2685815 Sep 25 05:40 ncs-5.7.2.1-cisco-etsi-nfvo-4.7.2.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu 2702317 Sep 25 05:40 ncs-5.7.2-openstack-cos-4.2.26.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu 9606799 Sep 25 05:40 ncs-5.7.2-cisco-staros-5.43.4.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu  824211 Sep 25 05:40 nfv-vnf-lcm.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu  307054 Sep 25 05:40 nfv-vim.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu  197449 Sep 25 05:40 nfv-device-onboarding.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu   59217 Sep 25 05:40 nfv-common.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu 3905393 Sep 25 05:40 mop-common.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu  113829 Sep 25 05:40 mobility-rcm-subscriber.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu  243790 Sep 25 05:40 mobility-mop.tar.gz
    -rw-rw-r-- 1 ubuntu ubuntu  746045 Sep 25 05:40 mobility-common.tar.gz
  5. Restart NSO with the start-with-package-reload option. This upgrades MFP from 3.4.1 to 3.4.2 on NSO 5.8.10.

    root@test-nso:/var/opt/ncs/packages# source /etc/profile.d/ncs.sh
    
    root@test-nso:/var/opt/ncs/packages# /etc/init.d/ncs start-with-package-reload
    Starting ncs: .
    
    root@test-nso:/var/opt/ncs/packages# ncs --version
    5.8.10
    
    root@test-nso:/var/opt/ncs/packages# ncs_cli -C
    
    root connected from 127.0.0.1 using console on test-nso
    root@ncs# show packages package package-version 
                             PACKAGE   
    NAME                     VERSION   
    -----------------------------------
    cisco-etsi-nfvo          4.7.2     
    cisco-rcm-nc-1.6         1.6       
    cisco-staros-cli-5.43    5.43.4    
    esc                      5.7.0.73  
    etsi-sol003-gen-1.13     1.13.16   
    mobility-common          3.4.2     
    mobility-rcm-subscriber  3.4.2     
    mop-automation           3.4.2     
    mop-common               3.4.2     
    nfv-common               3.4.2     
    nfv-device-onboarding    3.4.2     
    nfv-vim                  3.4.2     
    nfv-vnf-lcm              3.4.2     
    openstack-cos-gen-4.2    4.2.26    
    
    root@ncs# show packages package oper-status    
    packages package cisco-etsi-nfvo
     oper-status up
    packages package cisco-rcm-nc-1.6
     oper-status up
    packages package cisco-staros-cli-5.43
     oper-status up
    packages package esc
     oper-status up
    packages package etsi-sol003-gen-1.13
     oper-status up
    packages package mobility-common
     oper-status up
    packages package mobility-rcm-subscriber
     oper-status up
    packages package mop-automation
     oper-status up
    packages package mop-common
     oper-status up
    packages package nfv-common
     oper-status up
    packages package nfv-device-onboarding
     oper-status up
    packages package nfv-vim
     oper-status up
    packages package nfv-vnf-lcm
     oper-status up
    packages package openstack-cos-gen-4.2
     oper-status up
    root@ncs#
    
    root@ncs# show devices list 
    NAME           ADDRESS        DESCRIPTION  NED ID                 ADMIN STATE  
    -----------------------------------------------------------------------------
    S1-Test-00001  64.1.0.110     -            cisco-staros-cli-5.43  unlocked     
    esc-etsi       64.1.0.6       -            etsi-sol003-gen-1.13   unlocked     
    esc-netconf    64.1.0.6       -            esc                    unlocked     
    openstack      10.225.202.49  -            openstack-cos-gen-4.2  unlocked
  6. Push the configuration to the test VPC-SI VNF device (instantiated earlier on MFP 3.4.1 with NSO 5.7.5.1) using the mop-automation method on MFP 3.4.2 with NSO 5.8.10:

    root@test-nso:/var/opt/ncs# cat day1config.cfg 
    config
    port ethernet 1/1
    description test-description-1/1-by-mop18oct
    no shutdown
    exit
    
    root@ncs# mobility-mop:action mop-automation generate-dry-run true operation-type commit mop-type common mop-file-name { file-name day1config.cfg order 1 target-devices-list { target-device-name S1-Test-00001 } } save-config-permanently true  
    task-id 036f5e94-364b-4d5f-a95e-4663fe5ed08a
    time-stamp 2023-10-09T10:18:19+0000
    time-zone Coordinated Universal Time
    root@ncs# 
    
    root@ncs# mobility-mop:action mop-automation-status task-id 036f5e94-364b-4d5f-a95e-4663fe5ed08a
    task-id 036f5e94-364b-4d5f-a95e-4663fe5ed08a
    task-status COMPLETED
    start-date 2023-10-09T10:18:19+0000
    end-date 2023-10-09T10:18:23+0000
    time-zone Coordinated Universal Time
    operation-type commit
    action-type save
    devices-list {
        device-name S1-Test-00001
        device-status COMPLETED
        start-date 2023-10-09T10:18:19+0000
        end-date 2023-10-09T10:18:23+0000
        device-state common
        files {
            file-name day1config.cfg
            order 1
            dry-run-mop /var/opt/ncs//036f5e94-364b-4d5f-a95e-4663fe5ed08a/S1-Test-00001/day1config_commit_2023-10-09T101819+0000.cfg
            rollback-mop /var/opt/ncs//036f5e94-364b-4d5f-a95e-4663fe5ed08a/S1-Test-00001/day1config_rollback_commit_2023-10-09T101819+0000.cfg
            commit-queue-status completed
            commit-queue-id 1696846701998
        }
    }
    
    root@test-nso:/var/opt/ncs# cat /var/opt/ncs//036f5e94-364b-4d5f-a95e-4663fe5ed08a/S1-Test-00001/day1config_commit_2023-10-09T101819+0000.cfg
    config 
    port ethernet 1/1
     description test-description-1/1-by-mop18oct
    exit
    end
    
    root@test-nso:/var/opt/ncs# cat /var/opt/ncs//036f5e94-364b-4d5f-a95e-4663fe5ed08a/S1-Test-00001/day1config_rollback_commit_2023-10-09T101819+0000.cfg
    config 
    port ethernet 1/1
     no description
    exit
    end

Upgrading MFP 3.4.1 to MFP 3.4.2 without NSO Version Change

Use the following procedure to upgrade MFP 3.4.1 to MFP 3.4.2 without changing the NSO version:

  1. Copy the MFP 3.4.2 packages and NEDs, and replace the existing contents of the /var/opt/ncs/packages folder.

  2. Perform a packages reload in ncs_cli to load the upgraded MFP version 3.4.2, and then restart NSO (a sketch of the package copy and reload steps follows this list).
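
The following is a minimal sketch of the package replacement and reload, assuming the MFP 3.4.2 packages are staged under /tmp/mfp-3.4.2 (a hypothetical path); the final NSO restart follows the same pattern as in the previous procedure:

root@test-nso:/var/opt/ncs# cd /var/opt/ncs/packages
root@test-nso:/var/opt/ncs/packages# rm -rf *
root@test-nso:/var/opt/ncs/packages# cp /tmp/mfp-3.4.2/*.tar.gz .

root@test-nso:/var/opt/ncs/packages# ncs_cli -C
root@ncs# packages reload
root@ncs# show packages package package-version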

Appendix C: P2P Priority Upgrade

Use the following procedure to upgrade the P2P priority using the mobility-library action command:

  1. Perform pre-checks, including the P2P file placement and path settings, followed by VNF instantiation:

    [cloud-user@qwerty ncs]$ ncs --version
    5.8.10
     
    [cloud-user@qwerty ncs]$ ncs_cli -C
    
    User cloud-user last logged in 2023-09-20T03:23:18.655123+00:00, to qwerty, from 10.65.51.122 using cli-ssh
    cloud-user connected from 10.65.51.122 using ssh on qwerty
    cloud-user@ncs# show packages package package-version 
                             PACKAGE   
    NAME                     VERSION   
    -----------------------------------
    cisco-etsi-nfvo          4.7.2     
    cisco-rcm-nc-1.6         1.6       
    cisco-staros-cli-5.43    5.43.4    
    esc                      5.7.0.73  
    etsi-sol003-gen-1.13     1.13.16   
    mobility-common          3.4.2     
    mobility-rcm-subscriber  3.4.2     
    mop-automation           3.4.2     
    mop-common               3.4.2     
    nfv-common               3.4.2     
    nfv-device-onboarding    3.4.2     
    nfv-vim                  3.4.2     
    nfv-vnf-lcm              3.4.2     
    openstack-cos-gen-4.2    4.2.26    
    
    cloud-user@ncs# show packages package oper-status    
    packages package cisco-etsi-nfvo
     oper-status up
    packages package cisco-rcm-nc-1.6
     oper-status up
    packages package cisco-staros-cli-5.43
     oper-status up
    packages package esc
     oper-status up
    packages package etsi-sol003-gen-1.13
     oper-status up
    packages package mobility-common
     oper-status up
    packages package mobility-rcm-subscriber
     oper-status up
    packages package mop-automation
     oper-status up
    packages package mop-common
     oper-status up
    packages package nfv-common
     oper-status up
    packages package nfv-device-onboarding
     oper-status up
    packages package nfv-vim
     oper-status up
    packages package nfv-vnf-lcm
     oper-status up
    packages package openstack-cos-gen-4.2
     oper-status up
    
    [cloud-user@qwerty ncs]$ ls -lrt
    total 4740
    drwxrwxrwx.  2 nsoadmin   root             6 Sep  5 03:36 scripts
    drwxrwxrwx.  2 nsoadmin   root             6 Sep  5 03:36 streams
    drwxrwxrwx.  2 nsoadmin   root             6 Sep  5 03:36 backups
    -rwxrwxrwx.  1 nsoadmin   root          1513 Sep  5 03:36 INSTALLATION-LOG
    drwxrwxrwx.  3 nsoadmin   nsoadmin        22 Sep  5 03:37 target
    drwxrwxrwx.  7 cloud-user cloud-user     204 Sep  5 03:56 vnfpackages
    -rwxrwxrwx.  1 root       root            87 Sep  5 06:26 day1config.cfg
    -rwxrwxrwx.  1 root       root            31 Sep  5 06:47 rcm-day1config.cfg
    
    -rwxrwxrwx.  1 root       root       4253395 Sep  8 03:19 patch_libp2p-2.69.0.1534.so.tgz
    
    -rwxrwxrwx.  1 cloud-user cloud-user     142 Sep 10 14:11 daynconfig.cfg
    
    drwxrwxrwx. 10 nsoadmin   root          4096 Sep 18 02:20 packages
    -rwxrwxrwx.  1 nsoadmin   nsoadmin       333 Sep 18 02:31 storedstate
    
    drwxrwxrwx.  2 nsoadmin   root            98 Sep 18 08:59 cdb
    
    drwxrwxrwx.  2 nsoadmin   root         20480 Sep 19 23:23 rollbacks
    drwxrwxrwx.  5 nsoadmin   root          4096 Sep 19 23:26 state
    
    cloud-user@ncs# config
    Entering configuration mode terminal
    cloud-user@ncs(config)# configurable-parameters p2p-required true
    cloud-user@ncs(config)# configurable-parameters p2p-soFile-path /var/opt/ncs/patch_libp2p-2.69.0.1534.so.tgz
    cloud-user@ncs(config)# commit
    Commit complete.
    cloud-user@ncs(config)# exit
    
    cloud-user@ncs# show vnf-status instances UP-Test001-p2p
                                             FUNCTION                                                                
    INSTANCE ID     TIMESTAMP                TYPE      OPERATION       STATUS      STATUS MESSAGE                    
    -----------------------------------------------------------------------------------------------------------------
    UP-Test001-p2p  2023-09-19 23:29:42.335  VPC-SI    deploy          init        init                              
                    2023-09-19 23:30:19.377  VPC-SI    deploy          processing  processing                        
                    2023-09-19 23:30:47.948  VPC-SI    deploy          processing  processing                        
                    2023-09-19 23:30:49.269  VPC-SI    deploy          completed   completed                         
                    2023-09-19 23:31:55.555  -         init            success     Device Onboarding initialized     
                    2023-09-19 23:31:56.061  -         fetch-ssh-keys  success     fetch-ssh-keys was successful     
                    2023-09-19 23:31:57.005  -         connect         success     connect was successful            
                    2023-09-19 23:31:58.353  -         sync-from       success     sync-from was successful          
                    2023-09-19 23:31:58.523  -         ready           success     Device Successfully onboarded     
                    2023-09-19 23:37:29.386  -         config-read     success     Config MetaData is empty or null  
    
    [cloud-user@qwerty ncs]$ ssh admin@64.1.0.96
    The authenticity of host '64.1.0.96 (64.1.0.96)' can't be established.
    RSA key fingerprint is SHA256:TKCql7DQvty52OHp8WzGt0lYKiloAtEmMt1xAMQ23a0.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? Csco@123
    Please type 'yes', 'no' or the fingerprint: yes
    Warning: Permanently added '64.1.0.96' (RSA) to the list of known hosts.
    Cisco Systems QvPC-SI Intelligent Mobile Gateway
    admin@64.1.0.96's password: 
    Last login: Tue Sep 19 23:32:04 -0400 2023 on pts/1 from 64.1.0.7.
    
    No entry for terminal type "xterm-256color";
    using dumb terminal settings.
    
    [local]vpc-si# show module p2p verbose
    Module p2p
       Priority  card   version  loaded     location       update/rollback time     status
           99      1  2.69.1534    2/2    /var/opt/lib   Tue Sep 19 23:32:35 2023   success                                  <<<<<<< p2p priority starting from 99 instead of 10
            X      1  1.161.656    0/2            /lib                  (never)     N/A
    
    [local]vpc-si# 
    [local]vpc-si# 
    [local]vpc-si# exit
    Connection to 64.1.0.96 closed.
    [cloud-user@qwerty ncs]$ 
    [cloud-user@qwerty ncs]$ 
    [cloud-user@qwerty ncs]$ ncs_cli -C
    
    User cloud-user last logged in 2023-09-20T03:30:52.813071+00:00, to qwerty, from 10.65.51.122 using rest-http
    cloud-user connected from 10.65.51.122 using ssh on qwerty
    cloud-user@ncs# 
    cloud-user@ncs# 
  2. Use the mobility-library action command for the actual upgrade of P2P priority:

    cloud-user@ncs# mobility-library configure-library library-name p2p device-list { device-name UP-Test001-p2p }
    status success
    message Configured Successfully
    
    cloud-user@ncs# 
    cloud-user@ncs# 
    cloud-user@ncs# exit
    [cloud-user@qwerty ncs]$ ssh admin@64.1.0.96
    Cisco Systems QvPC-SI Intelligent Mobile Gateway
    admin@64.1.0.96's password: 
    Last login: Tue Sep 19 23:41:37 -0400 2023 on pts/1 from 64.1.0.7.
    
    No entry for terminal type "xterm-256color";
    using dumb terminal settings.
    
    [local]vpc-si# show module p2p verbose
    Module p2p
       Priority  card   version  loaded     location       update/rollback time     status
    >      98      1  2.69.1534    2/2    /var/opt/lib   Tue Sep 19 23:41:39 2023   success
    *      99      1  2.69.1534    2/2    /var/opt/lib                  (never)     N/A
            X      1  1.161.656    0/2            /lib                  (never)     N/A
    
    >  current module priority is 98
    *  some modules have not unloaded from the p2p application and are still in use
    
    [local]vpc-si#