You can optionally install Cisco VIM with third-party software known as NFVIMON, which is used to monitor the health and
performance of the NFV infrastructure. NFVIMON provides extensive monitoring and performance data for various components
of the cloud infrastructure, including Cisco UCS blade and rack servers, service profiles, Nexus top-of-rack (ToR) switches, fabric
connections, and the OpenStack instances. The monitoring system is designed so that it can monitor one or more
pods from a single management system. You can enable NFVIMON on an existing pod by extending the setup_data.yaml with the relevant
information and running the reconfigure option.
NFVIMON consists of four components: ceilometer for data collection, the collector, the Resource Manager (RM), and the control center
with Cisco Zenpacks (CC). Because NFVIMON is third-party software, care has been taken to keep its integration into VIM
loosely coupled: the VIM automation handles only the installation of the ceilometer service software needed to monitor
the pod. The installation of the other NFVIMON components (collector, Resource Manager (RM), and control center with Cisco Zenpacks
(CC)) is a Cisco Advanced Services-led activity, and those steps are outside the scope of this install guide.
Before you Begin
Ensure that you have engaged with Cisco Advanced Services on the planning and installation of the NFVIMON accessories and
their network requirements. Also, the image information for the collector, Resource Manager (RM), and control center with Cisco
Zenpacks (CC) is available only through Cisco Advanced Services. At a high level, you can have a node designated to host a
pair of collector VMs for each pod, and a common node to host the CC and RM VMs, which can aggregate and display monitoring information
from multiple pods.
The collector VMs must have two interfaces: an interface to br_mgmt of the VIM, and another interface that is routable and
reachable to the VIM installer REST API and the RM VMs. Because the collector VMs reside on an independent node, four IP addresses
from the management network of the pod must be pre-planned and reserved. The installation steps for the collector, Resource
Manager (RM), and control center with Cisco Zenpacks (CC) are part of the Cisco Advanced Services-led activity.
Installation of NFVIMON
The ceilometer service is the only component of the NFVIMON offering that is managed by the VIM orchestrator. While the ceilometer
service collects the metrics to pass OpenStack information of the pod to the collectors, the Cisco NFVI Zenpack available
on the CC/RM node gathers the node-level information. To enable NFVIMON as part of the VIM installation, update the setup_data
with the following information:
# Define the PODNAME
PODNAME: <PODNAME with no space>  # ensure that this is unique across all the pods
NFVIMON:
  MASTER:                       # Master Section
    admin_ip: <IP address of Control Centre VM>
  COLLECTOR:                    # Collector Section
    management_vip: <VIP for ceilometer/dispatcher to use>  # Should be unique across the VIM pod; should be part of br_mgmt network
    Collector_VM_Info:
      -
        hostname: <hostname of Collector VM 1>
        password: <password_for_collector_vm1>    # max length of 32
        ccuser_password: <password from master for 'ccuser' (to be used for self monitoring)>  # max length of 32
        admin_ip: <ssh_ip_collector_vm1>          # Should be part of br_api network
        management_ip: <mgmt_ip_collector_vm1>    # Should be part of br_mgmt network
      -
        hostname: <hostname of Collector VM 2>
        password: <password_for_collector_vm2>    # max length of 32
        ccuser_password: <password from master for 'ccuser' (to be used for self monitoring)>  # max length of 32
        admin_ip: <ssh_ip_collector_vm2>          # Should be part of br_api network
        management_ip: <mgmt_ip_collector_vm2>    # Should be part of br_mgmt network
    COLLECTOR_TORCONNECTIONS:   # Optional. Indicates the port where the collector is hanging off. Recommended when a Cisco NCS 5500 is used as ToR
      - tor_info: {po: <int>, switch_a_hostname: ethx/y, switch_b_hostname: ethx/y}
  DISPATCHER:
    rabbitmq_username: admin    # Pod-specific user for the dispatcher module
  NFVIMON_ADMIN: admin_name     # Optional; once enabled, only one admin is allowed; reconfigurable to add/update the user id
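For illustration, a populated NFVIMON section might look like the following sketch. All host names, passwords, and IP addresses here are hypothetical placeholders chosen for this example; substitute your own pod data.

```yaml
# Hypothetical example values -- not from a real deployment.
PODNAME: pod-sjc-01
NFVIMON:
  MASTER:
    admin_ip: 10.10.50.5            # Control Centre VM (example address)
  COLLECTOR:
    management_vip: 10.1.1.40       # example VIP on the br_mgmt network
    Collector_VM_Info:
      -
        hostname: collector-vm1
        password: Collector1Pass        # at most 32 characters
        ccuser_password: CcUser1Pass    # at most 32 characters
        admin_ip: 10.10.60.11           # br_api network
        management_ip: 10.1.1.41        # br_mgmt network
      -
        hostname: collector-vm2
        password: Collector2Pass
        ccuser_password: CcUser2Pass
        admin_ip: 10.10.60.12
        management_ip: 10.1.1.42
  DISPATCHER:
    rabbitmq_username: admin
```

Note that the two collector admin IPs, the management VIP, and the two collector management IPs account for the four reserved management-network addresses plus the br_api addresses planned earlier.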
To monitor the ToR switches, ensure that the following TORSWITCHINFO section is defined in the setup_data.yaml.
TORSWITCHINFO:
  SWITCHDETAILS:
    -
      hostname: <switch_a_hostname>    # Mandatory for NFVIMON if switch monitoring is needed
      username: <TOR switch username>  # Mandatory for NFVIMON if switch monitoring is needed
      password: <TOR switch password>  # Mandatory for NFVBENCH; mandatory for NFVIMON if switch monitoring is needed
      ssh_ip: <TOR switch ssh ip>      # Mandatory for NFVIMON if switch monitoring is needed
      ....
    -
      hostname: <switch_b_hostname>    # Mandatory for NFVIMON if switch monitoring is needed
      username: <TOR switch username>  # Mandatory for NFVIMON if switch monitoring is needed
      password: <TOR switch password>  # Mandatory for NFVIMON if switch monitoring is needed
      ssh_ip: <TOR switch ssh ip>      # Mandatory for NFVIMON if switch monitoring is needed
      ....
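A filled-in sketch of the switch section might look like the following; the switch host names, credentials, and IP addresses are hypothetical placeholders:

```yaml
# Hypothetical example -- substitute your own switch details.
TORSWITCHINFO:
  SWITCHDETAILS:
    -
      hostname: tor-switch-a
      username: admin
      password: TorSwitchAPass
      ssh_ip: 10.10.70.21
    -
      hostname: tor-switch-b
      username: admin
      password: TorSwitchBPass
      ssh_ip: 10.10.70.22
```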
To initiate the integration of NFVIMON on an existing pod, copy the setupdata into a local directory and update it manually
with information listed above, and then run the reconfiguration command as follows:
[root@mgmt1 ~]# cd /root/
[root@mgmt1 ~]# mkdir MyDir
[root@mgmt1 ~]# cd MyDir
[root@mgmt1 ~]# cp /root/openstack-configs/setup_data.yaml <my_setup_data.yaml>
[root@mgmt1 ~]# vi my_setup_data.yaml (update the setup_data to include NFVIMON related info)
[root@mgmt1 ~]# cd ~/installer-xxxx
[root@mgmt1 ~]# ciscovim reconfigure --setupfile ~/MyDir/<my_setup_data.yaml>
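Because the reconfigure step fails if the edited setup_data is malformed, it can help to sanity-check the NFVIMON entries before running the command. The sketch below is not a Cisco VIM tool; it models the section as a plain Python dict that mirrors the YAML keys above, with hypothetical placeholder values, and checks a few of the documented constraints: a PODNAME without spaces, a pair of collector VMs, and passwords of at most 32 characters.

```python
# Illustrative pre-flight check for the NFVIMON keys in setup_data.
# The dict mirrors the YAML structure; all values are hypothetical.

def validate_nfvimon(setup_data):
    """Return a list of problems found; an empty list means the checks passed."""
    problems = []

    podname = setup_data.get("PODNAME", "")
    if not podname or " " in podname:
        problems.append("PODNAME must be set and contain no spaces")

    nfvimon = setup_data.get("NFVIMON", {})
    if "admin_ip" not in nfvimon.get("MASTER", {}):
        problems.append("NFVIMON.MASTER.admin_ip is missing")

    collector = nfvimon.get("COLLECTOR", {})
    if "management_vip" not in collector:
        problems.append("NFVIMON.COLLECTOR.management_vip is missing")

    vms = collector.get("Collector_VM_Info", [])
    if len(vms) != 2:
        problems.append("expected a pair of collector VMs, found %d" % len(vms))
    for vm in vms:
        for key in ("password", "ccuser_password"):
            if len(vm.get(key, "")) > 32:
                problems.append("%s: %s exceeds 32 characters"
                                % (vm.get("hostname", "?"), key))
        for key in ("hostname", "admin_ip", "management_ip"):
            if key not in vm:
                problems.append("collector VM entry is missing %s" % key)

    return problems


# Hypothetical example data matching the structure documented above.
setup_data = {
    "PODNAME": "pod-sjc-01",
    "NFVIMON": {
        "MASTER": {"admin_ip": "10.10.50.5"},
        "COLLECTOR": {
            "management_vip": "10.1.1.40",
            "Collector_VM_Info": [
                {"hostname": "collector-vm1", "password": "p1",
                 "ccuser_password": "c1", "admin_ip": "10.10.60.11",
                 "management_ip": "10.1.1.41"},
                {"hostname": "collector-vm2", "password": "p2",
                 "ccuser_password": "c2", "admin_ip": "10.10.60.12",
                 "management_ip": "10.1.1.42"},
            ],
        },
        "DISPATCHER": {"rabbitmq_username": "admin"},
    },
}

print(validate_nfvimon(setup_data))  # prints [] when every check passes
```

In practice you would load the edited my_setup_data.yaml with a YAML parser and run the same checks on the resulting dict before invoking the reconfigure command.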
To initiate the uninstallation of NFVIMON on an existing pod, copy the setupdata into a local directory and remove the entire
NFVIMON section listed above, and then run the reconfiguration command as follows:
[root@mgmt1 ~]# cd /root/
[root@mgmt1 ~]# mkdir MyDir
[root@mgmt1 ~]# cd MyDir
[root@mgmt1 ~]# cp /root/openstack-configs/setup_data.yaml <my_setup_data.yaml>
[root@mgmt1 ~]# vi my_setup_data.yaml (update the setup_data to exclude NFVIMON related info)
[root@mgmt1 ~]# cd ~/installer-xxxx
[root@mgmt1 ~]# ciscovim --setupfile ~/MyDir/<my_setup_data.yaml> reconfigure