This document describes the steps required to correct a Universally Unique IDentifier (UUID) mismatch between the Element Manager (EM) and StarOS Virtual Network Functions (VNFs) in an Ultra-M setup that hosts StarOS VNFs.
Ultra-M is a pre-packaged and validated virtualized mobile packet core solution designed to simplify the deployment of VNFs.
The Ultra-M solution consists of these Virtual Machine (VM) types:
The high-level architecture of Ultra-M and the components involved are depicted in this image:
Note: The procedures in this document are based on the Ultra-M 5.1.x release.
VNF | Virtual Network Function |
CF | Control Function |
SF | Service Function |
ESC | Elastic Service Controller |
MOP | Method of Procedure |
OSD | Object Storage Disks |
HDD | Hard Disk Drive |
SSD | Solid State Drive |
VIM | Virtual Infrastructure Manager |
VM | Virtual Machine |
EM | Element Manager |
UAS | Ultra Automation Services |
UUID | Universally Unique IDentifier |
There are three major components in an Ultra-M setup: ESC, EM, and the StarOS VNF. EM acts as a proxy for ConfD queries and sends responses on behalf of the StarOS VNF. Each of these components runs as a VM and maintains its own information. When the data/state of the VMs across these three nodes does not match, EM raises a UUID mismatch alarm. ESC makes a YANG call to EM in order to get ConfD data. ConfD holds both configuration information and operational data/state. EM translates the queries that come from ESC and sends responses as needed.
Verify that EM is in HA mode and shows as master/slave:
ubuntu@vnfd2deploymentem-1:~$ ncs --status | more
vsn: 4.1.1
SMP support: yes, using 2 threads
Using epoll: yes
available modules: backplane,netconf,cdb,cli,snmp,webui
running modules: backplane,netconf,cdb,cli,webui
status: started
cluster status:
mode: master
node id: 6-1528831279
connected slaves: 1
Log in to EM and check whether the EM cluster is healthy:
ubuntu@vnfd2deploymentem-1:~$ ncs_cli -u admin -C
admin@scm# show ems
EM VNFM
ID SLA SCM PROXY
---------------------
5 up up up
9 up up up
ubuntu@vnfd2deploymentem-1:~$ ncs_cli -u admin -C
admin@scm# show ncs-state ha
ncs-state ha mode master
ncs-state ha node-id 9-1518035669
ncs-state ha connected-slave [ 5-1518043097 ]
On ESC, validate that the NETCONF connection to EM is established:
[admin@vnfm2-esc-0 esc-cli]$ netstat -an | grep 830
tcp 0 0 0.0.0.0:830 0.0.0.0:* LISTEN
tcp 0 0 172.18.181.6:830 172.18.181.11:39266 ESTABLISHED
tcp 0 0 172.18.181.6:830 172.18.181.11:39267 ESTABLISHED
tcp 0 0 :::830 :::* LISTEN
[admin@vnfm2-esc-0 esc-cli]$
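The port check above can also be scripted. This is a minimal sketch (a hypothetical helper, not part of the product tooling) that tests whether the NETCONF port (TCP 830, as shown in the netstat output) is reachable from a host:

```python
import socket

def netconf_reachable(host, port=830, timeout=5.0):
    """Return True if a TCP connection to the NETCONF port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run with the EM address in place of `host`; a False result points to a network or service problem to investigate before you proceed.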
From ESC, ensure that all VMs are in alive state and that the service is active:
[admin@vnfm2-esc-0 esc-cli]$ ./esc_nc_cli get esc_datamodel | egrep "<vm_name>|<state>"
<state>IMAGE_ACTIVE_STATE</state>
<state>IMAGE_ACTIVE_STATE</state>
<state>IMAGE_ACTIVE_STATE</state>
<state>FLAVOR_ACTIVE_STATE</state>
<state>FLAVOR_ACTIVE_STATE</state>
<state>FLAVOR_ACTIVE_STATE</state>
<state>SERVICE_ACTIVE_STATE</state>
<vm_name>vnfd2-deployment_c1_0_13d5f181-0bd3-43e4-be2d-ada02636d870</vm_name>
<state>VM_ALIVE_STATE</state>
<vm_name>vnfd2-deployment_c4_0_9dd6e15b-8f72-43e7-94c0-924191d99555</vm_name>
<state>VM_ALIVE_STATE</state>
<vm_name>vnfd2-deployment_s2_0_b2cbf15a-3107-45c7-8edf-1afc5b787132</vm_name>
<state>VM_ALIVE_STATE</state>
<vm_name>vnfd2-deployment_s3_0_882cf1ed-fe7a-47a7-b833-dd3e284b3038</vm_name>
<state>VM_ALIVE_STATE</state>
<vm_name>vnfd2-deployment_s5_0_672bbb00-34f2-46e7-a756-52907e1d3b3d</vm_name>
<state>VM_ALIVE_STATE</state>
<vm_name>vnfd2-deployment_s6_0_6f30be77-6b9f-4da8-9577-e39c18f16dfb</vm_name>
<state>VM_ALIVE_STATE</state>
<state>SERVICE_ACTIVE_STATE</state>
<vm_name>vnfd2-deployment_vnfd2-_0_02d1510d-53dd-4a14-9e21-b3b367fef5b8</vm_name>
<state>VM_ALIVE_STATE</state>
<vm_name>vnfd2-deployment_vnfd2-_0_f17989e3-302a-4681-be46-f2ebf62b252a</vm_name>
<state>VM_ALIVE_STATE</state>
<vm_name>vnfd2-deployment_vnfd2-_0_f63241f3-2516-4fc4-92f3-06e45054dba0</vm_name>
<state>VM_ALIVE_STATE</state>
[admin@vnfm2-esc-0 esc-cli]$
Verify that the vnfm-proxy-agent is online:
[local]POD1-VNF2-PGW# show vnfm-proxy-agent status
Thursday June 21 07:25:02 UTC 2018
VNFM Proxy Agent Status:
State : online
Connected to : 172.18.180.3:2181
Bind Address : 172.18.180.13:38233
VNFM Proxy address count: 3
Verify that emctrl reports the ALIVE state:
[local]POD1-VNF2-PGW# show emctrl status
Thursday June 21 07:25:09 UTC 2018
emctrl status:
emctrl in state: ALIVE
Compare the UUIDs between the StarOS VNF and the EM in order to identify the mismatch. These procedures list the steps to get the UUIDs from the respective nodes.
From StarOS, you can get the UUIDs from either the show emctrl vdu list or the show card hardware output.
[local]POD1-VNF2-PGW# show emctrl vdu list
Thursday June 21 07:24:28 UTC 2018
Showing emctrl vdu
card[01]: name[CFC_01 ] uuid[33C779D2-E271-47AF-8AD5-6A982C79BA62]
card[02]: name[CFC_02 ] uuid[E75AE5EE-2236-4FFD-A0D4-054EC246D506]
card[03]: name[SFC_03 ] uuid[E1A6762D-4E84-4A86-A1B1-84772B3368DC]
card[04]: name[SFC_04 ] uuid[B283D43C-6E0C-42E8-87D4-A3AF15A61A83]
card[05]: name[SFC_05 ] uuid[CF0C63DF-D041-42E1-B541-6B15B0BF2F3E]
card[06]: name[SFC_06 ] uuid[65344D53-DE09-4B0B-89A6-85D5CFDB3A55]
Incomplete command
[local]POD1-VNF2-PGW# show card hardware | grep -i uuid
Thursday June 21 07:24:46 UTC 2018
UUID/Serial Number : 33C779D2-E271-47AF-8AD5-6A982C79BA62
UUID/Serial Number : E75AE5EE-2236-4FFD-A0D4-054EC246D506
UUID/Serial Number : E1A6762D-4E84-4A86-A1B1-84772B3368DC
UUID/Serial Number : B283D43C-6E0C-42E8-87D4-A3AF15A61A83
UUID/Serial Number : CF0C63DF-D041-42E1-B541-6B15B0BF2F3E
UUID/Serial Number : 65344D53-DE09-4B0B-89A6-85D5CFDB3A55
List the UUIDs in the EM:
ubuntu@vnfd2deploymentem-1:~$ ncs_cli -u admin -C
admin@scm# show vdus vdu | select vnfci
CONSTITUENT MEMORY STORAGE
DEVICE DEVICE ELEMENT IS CPU UTILS USAGE
ID ID NAME GROUP GROUP INFRA INITIALIZED VIM ID UTILS BYTES BYTES
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
control-function BOOT_generic_di-chasis_CF1_1 scm-cf-nc scm-cf-nc di-chasis true true 33c779d2-e271-47af-8ad5-6a982c79ba62 - - -
BOOT_generic_di-chasis_CF2_1 scm-cf-nc scm-cf-nc di-chasis true true e75ae5ee-2236-4ffd-a0d4-054ec246d506 - - -
session-function BOOT_generic_di-chasis_SF1_1 - - di-chasis true false e1a6762d-4e84-4a86-a1b1-84772b3368dc - - -
BOOT_generic_di-chasis_SF2_1 - - di-chasis true false b283d43c-6e0c-42e8-87d4-a3af15a61a83 - - -
BOOT_generic_di-chasis_SF3_1 - - di-chasis true false 828281f4-c0f4-4061-b324-26277d294b86 - - -
BOOT_generic_di-chasis_SF4_1 - - di-chasis true false 65344d53-de09-4b0b-89a6-85d5cfdb3a55 - - -
From this output, you can see that card 5 has a UUID mismatch between the EM and StarOS:
[local]POD1-VNF2-PGW# show emctrl vdu list
Thursday June 21 07:24:28 UTC 2018
Showing emctrl vdu
.....
card[05]: name[SFC_05 ] uuid[CF0C63DF-D041-42E1-B541-6B15B0BF2F3E]
.....
admin@scm# show vdus vdu | select vnfci
CONSTITUENT MEMORY STORAGE
DEVICE DEVICE ELEMENT IS CPU UTILS USAGE
ID ID NAME GROUP GROUP INFRA INITIALIZED VIM ID UTILS BYTES BYTES
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
session-function .......
BOOT_generic_di-chasis_SF3_1 - - di-chasis true false 828281f4-c0f4-4061-b324-26277d294b86 - - -
......
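Because StarOS prints UUIDs in uppercase and EM prints them in lowercase, the comparison must be case-insensitive. This is a small sketch (not part of the product tooling) that extracts the UUIDs from saved copies of the two outputs and reports the differences:

```python
import re

# Standard 8-4-4-4-12 hexadecimal UUID pattern.
UUID_RE = re.compile(r'[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
                     r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}')

def extract_uuids(text):
    """Return the set of UUIDs found in the text, lower-cased for comparison."""
    return {m.group(0).lower() for m in UUID_RE.finditer(text)}

def find_mismatches(staros_output, em_output):
    """UUIDs present on one side but not the other (case-insensitive)."""
    staros, em = extract_uuids(staros_output), extract_uuids(em_output)
    return {"only_in_staros": staros - em, "only_in_em": em - staros}
```

Any UUID reported on only one side is a candidate mismatch, as with card 5 in the outputs above.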
Note: If multiple cards have a UUID mismatch, correct them one at a time; move on to the next card only after the current one is done. If you attempt multiple cards at the same time, you can encounter a problem with ESC VM-indexing.
If the UUID mismatch is on a CF card, perform filesystem synchronization:
[local]VNF2# filesystem synchronize all
If the mismatched card is an SF and is active, perform a card migration in order to bring it to the standby state:
[local]VNF2# card migrate from 4 to 5
If the mismatched card is a CF and is active, perform a card switch in order to bring it to the standby state:
[local]VNF2# card switch from 2 to 1
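The pre-suspend steps above can be summarized as a small decision function. This is an illustrative sketch (the function name and return values are hypothetical); the <active>/<standby> placeholders in the returned commands must be replaced with the actual slot numbers for the setup:

```python
def pre_suspend_commands(card_type, is_active):
    """StarOS commands to run before suspending a mismatched card.

    card_type: "CF" or "SF"; is_active: True if the card is currently active.
    """
    commands = []
    if card_type == "CF":
        # A CF mismatch always requires filesystem synchronization.
        commands.append("filesystem synchronize all")
        if is_active:
            # An active CF is switched in order to bring it to standby.
            commands.append("card switch from <active> to <standby>")
    elif card_type == "SF" and is_active:
        # An active SF is migrated in order to bring it to standby.
        commands.append("card migrate from <active> to <standby>")
    return commands
```

A standby SF needs no preparation and can be suspended directly.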
Suspend the card that has the UUID mismatch from the NCS CLI in EM:
ubuntu@vnfd2deploymentem-1:~$ ncs_cli -u admin -C
admin@scm# suspend-vnfci vdu session-function vnfci BOOT_generic_di-chasis_SF3_1
success true
Note: In some rare scenarios, the suspend-vnfci CLI from EM does not initiate the service update in ESC. In EM, the logs (/var/log/em/vnfm-proxy/vnfm-proxy.log) show an error message that indicates that EM has pending requests and ignores the new request. In order to fix this issue, check EM zookeeper for any stuck pending requests and manually clear them. Refer to the last section of this document, "Clearing Pending Request in EM Zookeeper (Optional)", in order to perform this action.
Verify in yangesc.log on ESC that the transaction was accepted and wait for it to finalize:
####################################################################
# ESC on vnfm2-esc-0.novalocal is in MASTER state.
####################################################################
[admin@vnfm2-esc-0 ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
[admin@vnfm2-esc-0 esc-cli]$ tail -f /var/log/esc/yangesc.log
19:27:31,333 12-Jun-2018 INFO Type: SERVICE_ALIVE
19:27:31,333 12-Jun-2018 INFO Status: SUCCESS
19:27:31,333 12-Jun-2018 INFO Status Code: 200
19:27:31,333 12-Jun-2018 INFO Status Msg: Service group deployment completed successfully!
19:27:31,333 12-Jun-2018 INFO Tenant: core
19:27:31,333 12-Jun-2018 INFO Deployment ID: 9bcad337-d1f0-463c-8450-de7697b1e104
19:27:31,333 12-Jun-2018 INFO Deployment name: vnfd2-deployment-1.0.0-1
19:27:31,333 12-Jun-2018 INFO ===== SEND NOTIFICATION ENDS =====
07:29:49,510 21-Jun-2018 INFO ===== GET OPERATIONAL/INFO DATA =====
07:30:32,318 21-Jun-2018 INFO ===== GET OPERATIONAL/INFO DATA =====
07:36:25,083 21-Jun-2018 INFO ===== GET OPERATIONAL/INFO DATA =====
07:36:25,628 21-Jun-2018 INFO
07:36:25,628 21-Jun-2018 INFO ===== CONFD TRANSACTION STARTED =====
07:36:25,717 21-Jun-2018 INFO
07:36:25,717 21-Jun-2018 INFO ===== UPDATE SERVICE REQUEST RECEIVED (UNDER TENANT) =====
07:36:25,717 21-Jun-2018 INFO Tenant name: core
07:36:25,717 21-Jun-2018 INFO Deployment name: vnfd2-deployment-1.0.0-1
07:36:25,843 21-Jun-2018 INFO
07:36:25,843 21-Jun-2018 INFO ===== CONFD TRANSACTION ACCEPTED =====
07:37:04,535 21-Jun-2018 INFO
07:37:04,536 21-Jun-2018 INFO ===== SEND NOTIFICATION STARTS =====
07:37:04,536 21-Jun-2018 INFO Type: VM_UNDEPLOYED
07:37:04,536 21-Jun-2018 INFO Status: SUCCESS
07:37:04,536 21-Jun-2018 INFO Status Code: 200
07:37:04,536 21-Jun-2018 INFO Status Msg: VM Undeployed during deployment update, VM name: [vnfd2-deployment_s6_0_6f30be77-6b9f-4da8-9577-e39c18f16dfb]
07:37:04,536 21-Jun-2018 INFO Tenant: core
07:37:04,536 21-Jun-2018 INFO Deployment ID: 9bcad337-d1f0-463c-8450-de7697b1e104
07:37:04,536 21-Jun-2018 INFO Deployment name: vnfd2-deployment-1.0.0-1
07:37:04,536 21-Jun-2018 INFO VM group name: s6
07:37:04,537 21-Jun-2018 INFO User configs: 1
07:37:04,537 21-Jun-2018 INFO VM Source:
07:37:04,537 21-Jun-2018 INFO VM ID: cf0c63df-d041-42e1-b541-6b15b0bf2f3e
07:37:04,537 21-Jun-2018 INFO Host ID: 47853854d13d80e6d0212dabb0be2e12c12e431bf23d4e0260642594
07:37:04,537 21-Jun-2018 INFO Host Name: pod1-compute-9.localdomain
07:37:04,537 21-Jun-2018 INFO ===== SEND NOTIFICATION ENDS =====
07:37:04,550 21-Jun-2018 INFO
07:37:04,550 21-Jun-2018 INFO ===== SEND NOTIFICATION STARTS =====
07:37:04,550 21-Jun-2018 INFO Type: SERVICE_UPDATED
07:37:04,550 21-Jun-2018 INFO Status: SUCCESS
07:37:04,550 21-Jun-2018 INFO Status Code: 200
07:37:04,550 21-Jun-2018 INFO Status Msg: Service group update completed successfully
07:37:04,550 21-Jun-2018 INFO Tenant: core
07:37:04,550 21-Jun-2018 INFO Deployment ID: 9bcad337-d1f0-463c-8450-de7697b1e104
07:37:04,550 21-Jun-2018 INFO Deployment name: vnfd2-deployment-1.0.0-1
07:37:04,550 21-Jun-2018 INFO ===== SEND NOTIFICATION ENDS =====
07:41:55,912 21-Jun-2018 INFO ===== GET OPERATIONAL/INFO DATA =====
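Waiting for the update to finalize can be automated by scanning yangesc.log for the notification types shown above. This is a sketch (the helper names are hypothetical) that assumes the log format in this output:

```python
import re

# Matches lines such as: 07:37:04,536 21-Jun-2018 INFO Type: VM_UNDEPLOYED
TYPE_RE = re.compile(r'INFO\s+Type:\s+([A-Z_]+)')

def notification_types(log_text):
    """Return the notification Type values in order of appearance."""
    return TYPE_RE.findall(log_text)

def undeploy_completed(log_text):
    """True once both VM_UNDEPLOYED and SERVICE_UPDATED notifications appear."""
    return {"VM_UNDEPLOYED", "SERVICE_UPDATED"} <= set(notification_types(log_text))
```

Poll the tail of /var/log/esc/yangesc.log with this check and proceed to the resume step only after it returns True.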
After the VM is undeployed and the service is updated, resume the card that was suspended:
admin@scm# resume-vnfci vdu session-function vnfci BOOT_generic_di-chasis_SF3_1
success true
Confirm from yangesc.log that the VM is deployed again and becomes alive:
####################################################################
# ESC on vnfm2-esc-0.novalocal is in MASTER state.
####################################################################
[admin@vnfm2-esc-0 ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
[admin@vnfm2-esc-0 esc-cli]$ tail -f /var/log/esc/yangesc.log
07:41:55,912 21-Jun-2018 INFO ===== GET OPERATIONAL/INFO DATA =====
07:41:56,412 21-Jun-2018 INFO
07:41:56,413 21-Jun-2018 INFO ===== CONFD TRANSACTION STARTED =====
07:41:56,513 21-Jun-2018 INFO
07:41:56,513 21-Jun-2018 INFO ===== UPDATE SERVICE REQUEST RECEIVED (UNDER TENANT) =====
07:41:56,513 21-Jun-2018 INFO Tenant name: core
07:41:56,513 21-Jun-2018 INFO Deployment name: vnfd2-deployment-1.0.0-1
07:41:56,612 21-Jun-2018 INFO
07:41:56,612 21-Jun-2018 INFO ===== CONFD TRANSACTION ACCEPTED =====
07:43:53,615 21-Jun-2018 INFO
07:43:53,615 21-Jun-2018 INFO ===== SEND NOTIFICATION STARTS =====
07:43:53,616 21-Jun-2018 INFO Type: VM_DEPLOYED
07:43:53,616 21-Jun-2018 INFO Status: SUCCESS
07:43:53,616 21-Jun-2018 INFO Status Code: 200
07:43:53,616 21-Jun-2018 INFO Status Msg: VM Deployed in a deployment update. VM name: [vnfd2-deployment_s6_0_23cc139b-a7ca-45fb-b005-733c98ccc299]
07:43:53,616 21-Jun-2018 INFO Tenant: core
07:43:53,616 21-Jun-2018 INFO Deployment ID: 9bcad337-d1f0-463c-8450-de7697b1e104
07:43:53,616 21-Jun-2018 INFO Deployment name: vnfd2-deployment-1.0.0-1
07:43:53,616 21-Jun-2018 INFO VM group name: s6
07:43:53,616 21-Jun-2018 INFO User configs: 1
07:43:53,616 21-Jun-2018 INFO VM Source:
07:43:53,616 21-Jun-2018 INFO VM ID: 637547ad-094e-4132-8613-b4d8502ec385
07:43:53,616 21-Jun-2018 INFO Host ID: 47853854d13d80e6d0212dabb0be2e12c12e431bf23d4e0260642594
07:43:53,616 21-Jun-2018 INFO Host Name: pod1-compute-9.localdomain
07:43:53,616 21-Jun-2018 INFO ===== SEND NOTIFICATION ENDS =====
07:44:20,170 21-Jun-2018 INFO
07:44:20,170 21-Jun-2018 INFO ===== SEND NOTIFICATION STARTS =====
07:44:20,170 21-Jun-2018 INFO Type: VM_ALIVE
07:44:20,170 21-Jun-2018 INFO Status: SUCCESS
07:44:20,170 21-Jun-2018 INFO Status Code: 200
07:44:20,170 21-Jun-2018 INFO Status Msg: VM_Alive event received during deployment update, VM ID: [vnfd2-deployment_s6_0_23cc139b-a7ca-45fb-b005-733c98ccc299]
07:44:20,170 21-Jun-2018 INFO Tenant: core
07:44:20,170 21-Jun-2018 INFO Deployment ID: 9bcad337-d1f0-463c-8450-de7697b1e104
07:44:20,170 21-Jun-2018 INFO Deployment name: vnfd2-deployment-1.0.0-1
07:44:20,170 21-Jun-2018 INFO VM group name: s6
07:44:20,170 21-Jun-2018 INFO User configs: 1
07:44:20,170 21-Jun-2018 INFO VM Source:
07:44:20,170 21-Jun-2018 INFO VM ID: 637547ad-094e-4132-8613-b4d8502ec385
07:44:20,170 21-Jun-2018 INFO Host ID: 47853854d13d80e6d0212dabb0be2e12c12e431bf23d4e0260642594
07:44:20,170 21-Jun-2018 INFO Host Name: pod1-compute-9.localdomain
07:44:20,170 21-Jun-2018 INFO ===== SEND NOTIFICATION ENDS =====
07:44:20,194 21-Jun-2018 INFO
07:44:20,194 21-Jun-2018 INFO ===== SEND NOTIFICATION STARTS =====
07:44:20,194 21-Jun-2018 INFO Type: SERVICE_UPDATED
07:44:20,194 21-Jun-2018 INFO Status: SUCCESS
07:44:20,194 21-Jun-2018 INFO Status Code: 200
07:44:20,194 21-Jun-2018 INFO Status Msg: Service group update completed successfully
07:44:20,194 21-Jun-2018 INFO Tenant: core
07:44:20,194 21-Jun-2018 INFO Deployment ID: 9bcad337-d1f0-463c-8450-de7697b1e104
07:44:20,194 21-Jun-2018 INFO Deployment name: vnfd2-deployment-1.0.0-1
07:44:20,194 21-Jun-2018 INFO ===== SEND NOTIFICATION ENDS =====
Compare the UUIDs from StarOS and EM again in order to confirm that the mismatch is fixed:
admin@scm# show vdus vdu | select vnfci
CONSTITUENT MEMORY STORAGE
DEVICE DEVICE ELEMENT IS CPU UTILS USAGE
ID ID NAME GROUP GROUP INFRA INITIALIZED VIM ID UTILS BYTES BYTES
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
control-function BOOT_generic_di-chasis_CF1_1 scm-cf-nc scm-cf-nc di-chasis true true 33c779d2-e271-47af-8ad5-6a982c79ba62 - - -
BOOT_generic_di-chasis_CF2_1 scm-cf-nc scm-cf-nc di-chasis true true e75ae5ee-2236-4ffd-a0d4-054ec246d506 - - -
session-function BOOT_generic_di-chasis_SF1_1 - - di-chasis true false e1a6762d-4e84-4a86-a1b1-84772b3368dc - - -
BOOT_generic_di-chasis_SF2_1 - - di-chasis true false b283d43c-6e0c-42e8-87d4-a3af15a61a83 - - -
BOOT_generic_di-chasis_SF3_1 - - di-chasis true false 637547ad-094e-4132-8613-b4d8502ec385 - - -
BOOT_generic_di-chasis_SF4_1 - - di-chasis true false 65344d53-de09-4b0b-89a6-85d5cfdb3a55 - - -
[local]POD1-VNF2-PGW# show emctrl vdu list
Thursday June 21 09:09:02 UTC 2018
Showing emctrl vdu
card[01]: name[CFC_01 ] uuid[33C779D2-E271-47AF-8AD5-6A982C79BA62]
card[02]: name[CFC_02 ] uuid[E75AE5EE-2236-4FFD-A0D4-054EC246D506]
card[03]: name[SFC_03 ] uuid[E1A6762D-4E84-4A86-A1B1-84772B3368DC]
card[04]: name[SFC_04 ] uuid[B283D43C-6E0C-42E8-87D4-A3AF15A61A83]
card[05]: name[session-function/BOOT_generic_di-chasis_SF3_1 ] uuid[637547AD-094E-4132-8613-B4D8502EC385]
card[06]: name[SFC_06 ] uuid[65344D53-DE09-4B0B-89A6-85D5CFDB3A55]
Incomplete command
[local]POD1-VNF2-PGW# show card hardware | grep -i uuid
Thursday June 21 09:09:11 UTC 2018
UUID/Serial Number : 33C779D2-E271-47AF-8AD5-6A982C79BA62
UUID/Serial Number : E75AE5EE-2236-4FFD-A0D4-054EC246D506
UUID/Serial Number : E1A6762D-4E84-4A86-A1B1-84772B3368DC
UUID/Serial Number : B283D43C-6E0C-42E8-87D4-A3AF15A61A83
UUID/Serial Number : 637547AD-094E-4132-8613-B4D8502EC385
UUID/Serial Number : 65344D53-DE09-4B0B-89A6-85D5CFDB3A55
Note: This procedure is optional and is needed only when a suspend-vnfci request is stuck because of pending requests in EM zookeeper.
Access zookeeper:
ubuntu@ultramvnfm1em-0:~$ /opt/cisco/usp/packages/zookeeper/current/bin/zkCli.sh
<snip>
[zk: localhost:2181(CONNECTED) 0]
List the pending requests:
[zk: localhost:2181(CONNECTED) 0] ls /request
Delete all listed requests:
[zk: localhost:2181(CONNECTED) 0] rmr /request/request00000000xx
Once all the pending requests are cleared, reinitiate the suspend request.