Post Installation Configurations
Configure Control Center Access
After the installation is complete, you must configure Control Center access. This gives the customer a customized Control Center username. For more information on Control Center access, refer to the CPS Operations Guide.
Configure NTP on Cluster Manager
To configure NTP on Cluster Manager/Installer, perform the following steps:
Procedure
Step 1
Install the NTP package by executing the following command:
Step 2
Configure the NTP servers: copy the /etc/ntp.conf file from any deployed CPS VM.
Step 3
Synchronize the time with lb01 or lb02.
Step 4
Start the NTPD service by executing the following command:
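The commands for the steps above are elided in this excerpt. A minimal sketch for a yum-based Cluster Manager follows; lb01 is used as the example source VM, and the package manager and service names should be verified against your release:

```shell
# Step 1: install the NTP package (yum-based CPS releases)
yum install -y ntp

# Step 2: copy /etc/ntp.conf from any deployed CPS VM (lb01 as an example)
scp root@lb01:/etc/ntp.conf /etc/ntp.conf

# Step 3: one-shot time synchronization against lb01 (or lb02)
ntpdate -u lb01

# Step 4: start the NTPD service
service ntpd start
```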
IPv6 Support - VMware
Enable IPv6 Support
For the VMware hypervisor, IPv6 must be enabled first.
Procedure
Step 1
Select the blade from the left panel where you want to enable IPv6 support.
Step 2
Click the Configure tab in the top menu of the right panel.
Step 3
Under Networking, click Advanced from the available options.
Step 4
Click Edit... in the upper right corner of the Advanced panel. The Edit Advanced Network Settings window opens.
Step 5
From the IPv6 support drop-down list, select Enabled. This enables IPv6 on the blade. The blade must be rebooted for this setting to take effect.
Set Up IPv4 and IPv6 Addresses for VMs
Any hosts in the CPS cluster can be configured to have IPv4 or IPv6 addresses. Currently, IPv6 is supported only for policy director (lb) external interfaces.
For more information on how to configure IPv6 addresses for VMs, refer to the section Hosts Configuration.
Converting IPv4 to IPv6 on Policy Director External Interfaces
To convert an existing CPS deployment from IPv4 to IPv6 (external IP addresses on lb* VM), perform the following steps:
Procedure
Step 1
Log in to Cluster Manager.
Step 2
Back up the relevant files using the following commands:
Step 3
Update the CSV files as per your IPv6 requirement. Sample Hosts.csv, AdditionalHosts.csv, and Vlan.csv configuration files that use IPv6 addresses are shown below:
Step 4
Execute the following commands to update the changes through Puppet and redeploy the Policy Director (lb) VMs:
Step 5
After modifying the files, run the following commands:
/var/qps/install/current/scripts/build_all.sh
/var/qps/install/current/scripts/upgrade/reinit.sh
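As a sketch of Steps 2 and 5: the backup can be a plain copy of the CSV directory (the backup location below is an example, not mandated by CPS), and the redeploy uses the two scripts named in Step 5:

```shell
# Step 2 (sketch): back up the deployment CSVs before editing them
mkdir -p /var/tmp/csv_backup
cp -a /var/qps/config/deploy/csv/*.csv /var/tmp/csv_backup/

# Step 5: rebuild the configuration and reinitialize the deployment
/var/qps/install/current/scripts/build_all.sh
/var/qps/install/current/scripts/upgrade/reinit.sh
```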
Synchronize Time Between Nodes
To synchronize time between VM nodes, perform the following steps:
Procedure
Step 1
Log in to the Cluster Manager VM.
Step 2
Execute the following command to synchronize the time between nodes:
To check the current clock skew of the system, execute the following command:
The output numbers are in seconds. Refer to the following sample output:
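The actual commands are elided in this excerpt. On CPS releases that ship the support scripts, they are typically of the following form; the script names are an assumption and should be confirmed against your installed /var/qps/bin/support directory:

```shell
# Synchronize time across the CPS VMs (HA deployment assumed)
/var/qps/bin/support/sync_times.sh ha

# Check the current clock skew; the reported values are in seconds
diagnostics.sh --clock_skew
```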
Update the VM Configuration without Re-deploying VMs
Sometimes, certain configurations in the Excel sheet need to be modified and propagated to the deployed VMs. To update the configurations from the Excel sheet, perform the following steps:
Procedure
Step 1
Make the changes in the Excel sheet.
Step 2
Save the sheets as CSV files.
Step 3
Upload the CSV files to the Cluster Manager VM in /var/qps/config/deploy/csv/.
Step 4
Execute the following commands after uploading the CSV files to the Cluster Manager VM:
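The import step can be sketched as follows; import_deploy.sh is the standard CPS CSV import script, and the rebuild and reinit scripts follow the conventions used elsewhere in this chapter:

```shell
# Import the updated CSVs into the Cluster Manager configuration
/var/qps/install/current/scripts/import/import_deploy.sh

# Rebuild the configuration and push it to the deployed VMs
/var/qps/install/current/scripts/build_all.sh
/var/qps/install/current/scripts/upgrade/reinit.sh
```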
Reserving Memory on the Virtual Machines (VMs)
To avoid performance impact, you must reserve all allocated memory to each CPS virtual machine.
It is recommended to reserve 8 GB of memory for the hypervisor. For example, if the total memory on a blade/ESXi host is 48 GB, allocate only 40 GB to CPS VMs and keep 8 GB for the hypervisor.
Note
This is required only if your ESXi host is added to vCenter. If it is not, the deployment takes care of the reservation.
Power OFF the virtual machine before configuring the memory settings.
Procedure
Step 1
Log in to your ESXi host with the vSphere Client.
Step 2
In the vSphere Client, right-click a virtual machine from the inventory and select Edit Settings....
Step 3
In the Virtual Machine Properties window, select the Resources tab and select Memory.
Step 4
In the Resource Allocation pane, set the memory reservation to the allocated memory.
Step 5
Click OK to commit the changes.
Step 6
Power ON the virtual machine.
Configure Custom Route
In lb01 and lb02, if needed, a custom route should be configured to route Diameter traffic to the PGWs.
Add a file called route-ethxx in the /etc/sysconfig/network-scripts directory.
For example: 172.20.244.5/32 via 172.16.38.18
That is, the destination subnet via the gateway of that subnet.
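As a concrete sketch, the route file contains one line per route in `<destination> via <gateway>` form. The interface name eth1 and the /tmp staging path below are illustrative; on lb01/lb02 the file belongs in /etc/sysconfig/network-scripts:

```shell
# Stage an example route-eth1 file (eth1 and both addresses are placeholders)
mkdir -p /tmp/network-scripts
cat > /tmp/network-scripts/route-eth1 <<'EOF'
172.20.244.5/32 via 172.16.38.18
EOF
cat /tmp/network-scripts/route-eth1
```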
TACACS+
TACACS+ Configuration Parameters
Basic instructions for enabling TACACS+ AAA in the system can be found in the section Configure System Parameters for Deployment. There are a number of advanced configuration options which allow administrators to tune their deployments for their specific needs. The following table lists the TACACS+ configuration parameters that can be added in the Configuration sheet:
| Parameter | Description | Value Range |
|---|---|---|
| tacacs_enabled* | A boolean value indicating whether TACACS+ AAA must be enabled or not. | Values: 1, 0, true, false. For example: tacacs_enabled,1 |
| tacacs_server* | An ordered comma-separated list of \<ip\>[:port] pairs indicating which servers need to be queried for TACACS+ AAA. | Values: NA. For example: tacacs_server,"10.0.2.154:49,172.18.63.187:49" |
| tacacs_secret* | The 'secret' key string used for encrypting the TACACS+ protocol communications. | Values: NA. For example: tacacs_secret,CPE1704TKS |
| tacacs_debug | An integer value indicating the debug level to run the software in. Currently, this is effectively boolean. | Values: 0, 1. For example: tacacs_debug,1. Default: 0 |
| tacacs_service | A string value indicating which service to be used when authorizing and auditing against the TACACS+ servers. | Value: NA. For example: tacacs_service,pcrflinuxlogin. Default: pcrflinuxlogin if no value is specified |
| tacacs_protocol | A string value indicating which protocol to be used when authorizing and auditing against the TACACS+ servers. | Value: NA. For example: tacacs_protocol,ssh. Default: ssh |
| tacacs_timeout | An integer that represents how long the software waits, in seconds, for the TACACS+ server to respond to queries. | Value: in seconds. For example: tacacs_timeout,2. Default: 5 seconds |

The * mark indicates that the parameter is mandatory. The * mark is not part of the parameter name.
AIO/Arbiter Configuration for TACACS+
Procedure
Step 1
Create the following YAML file on Cluster Manager: /etc/facter/facts.d/tacacs.yaml
tacacs_enabled: true
tacacs_server: ip address
tacacs_secret: password
Step 2
Run puppet apply to apply the configuration.
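Written out as a file, Step 1 produces a three-line YAML fact file. The server address and secret below are placeholders taken from the examples in the parameter table above:

```yaml
# /etc/facter/facts.d/tacacs.yaml
tacacs_enabled: true
tacacs_server: 10.0.2.154    # placeholder TACACS+ server IP
tacacs_secret: CPE1704TKS    # placeholder shared secret
```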
TACACS+ Enabler
The enable_tacacs+ utility can be used to configure the Cluster Manager VM for TACACS+-based authentication. The utility first validates whether TACACS+ has been configured properly using the Configuration sheet of the CPS Deployment Template (Excel spreadsheet). Assuming the required values are provided, the utility then selectively applies several Puppet manifests to enable TACACS+ authentication on the target VM.
To use the utility:
Procedure
Step 1
Get the tacacs_enabler.tar.gz package from your Cisco Technical Representative.
Step 2
Copy the utility package to the target VM.
Step 3
Acquire shell access on the target VM with the ability to execute operations with 'root' privileges.
Step 4
Extract the utility package using the tar utility on the target VM: tar -zxvf tacacs_enabler.tar.gz
Step 5
(Optional) Copy the utility to the /var/qps/bin/support directory.
Step 6
(Optional) Execute the script in 'check' mode to validate the configuration values: enable_tacacs+ clustermgr --check
Step 7
Execute the script without the '--check' command-line option to apply the configuration: enable_tacacs+ clustermgr
Step 8
Validate that TACACS+ authenticated users are now available on the target VM: id -a <TACACS+ user>
Adding Indexes for Geo Location (ANDSF)
To create indexes for Geo Location Lookups, perform the following steps:
Procedure
Step 1
Log in to the Cluster Manager with a valid username and credentials.
Step 2
Connect to the active sessionmgr VM.
Step 3
Connect to the active ANDSF mongo database.
Step 4
Once mongo is connected successfully, check that the ANDSF database is present in the list of databases: set01:PRIMARY> show dbs
Step 5
Switch to the ANDSF database.
Step 6
Check for the Geo_Location and dmtl_Policy_EXT_GEO_LOC_STATIC tables in the list of collections.
Step 7
Create the following two indexes, one on the Geo_Location table and one on dmtl_Policy_EXT_GEO_LOC_STATIC, using the following commands at the mongo prompt.
Step 8
Mongo should return a success code on creation of each index.
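The index-creation commands in Step 7 are elided in this excerpt. A sketch in the mongo shell follows; the database name, the indexed field name, and the ascending index type are assumptions for a geo-location lookup and must be checked against your schema:

```javascript
// Step 5: switch to the ANDSF database (name as shown by "show dbs"; assumed here)
use andsf
// Step 6: confirm both collections exist
show collections
// Step 7: create one index per table (the "geo_loc" field name is a placeholder)
db.Geo_Location.createIndex({ "geo_loc": 1 })
db.dmtl_Policy_EXT_GEO_LOC_STATIC.createIndex({ "geo_loc": 1 })
```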
Configure Multiple Redis Instances
Note |
All the commands mentioned in the following section should be executed on Cluster Manager. |
Before you begin
Redis instance must be enabled and running.
Procedure
Step 1
To configure multiple redis instances, update the redis_server_count parameter in the Configuration.csv spreadsheet in the QPS_deployment_config_template.xlsm deployment template file.
Step 2
After updating Configuration.csv, execute the following command to import the new configuration file into the Cluster Manager VM:
Step 3
Edit the redisTopology.ini file in the /etc/broadhop/ directory and add all redis endpoints.
For example, for three redis instances, the redisTopology.ini file looks like:
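A sketch of the file for three instances, assuming the first two run on lb01 and lb02 on port 6379 and the third on lb01 on port 6380, and assuming a `policy.redis.qserver.<n>` key format (verify the key names against the file shipped with your release):

```ini
# /etc/broadhop/redisTopology.ini (illustrative endpoints)
policy.redis.qserver.1=lb01:6379
policy.redis.qserver.2=lb02:6379
policy.redis.qserver.3=lb01:6380
```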
Step 4
After modifying the configuration file, rebuild etc.tar.gz to make the changes permanent.
Step 5
Reinitialize the environment:
Step 6
Restart the qns service.
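Steps 2, 4, 5, and 6 can be sketched with the standard CPS scripts. The import, build, and reinit paths follow the conventions used elsewhere in this chapter; restartall.sh is assumed to be the service-restart helper on your release:

```shell
# Step 2: import the updated Configuration.csv into Cluster Manager
/var/qps/install/current/scripts/import/import_deploy.sh

# Step 4: rebuild /etc/broadhop into etc.tar.gz so the change survives redeploys
/var/qps/install/current/scripts/build/build_etc.sh

# Step 5: reinitialize the environment
/var/qps/install/current/scripts/upgrade/reinit.sh

# Step 6: restart the qns services
/var/qps/bin/control/restartall.sh
```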
Configure Redis Instances for Keystore
Currently, the keystore is used internally for features such as RAN NAS Retry, Holding Rx STR, and so on.
- Keystore is a temporary cache used by the application to store temporary information. It stores information in the form of key-value pairs.
- Keystore internally uses the redis cache for storing the key-value pairs.
Note
By default, keystore uses the redis instances running on lb01:6379 and lb02:6379 if no redis instances are configured for keystore.
Before you begin
Redis instances must be installed and running on the VMs.
Procedure
If you want to add more redis instances for keystore, run the following OSGi command:
telnet qns01 9091
A range of lbs can be defined using <start lb>:<end lb>. A range of redis ports can be defined using <start port>:<end port>. For example, to use the redis instances running on ports 6379 and 6380 on lb01 through lb04, configure the parameter as follows:
The current keystore instances being used by the application can be checked by running the following command:
telnet qns01 9091
For example: