CPS Interfaces and APIs
CPS includes southbound interfaces to various policy control enforcement functions (PCEFs) in the network, and northbound interfaces to OSS/BSS systems, subscriber applications, IMS, and web applications.
Control Center GUI Interface
Purpose
Cisco Control Center enables you to do these tasks:
-
Manage subscriber data, that is, find or create and edit information about your subscribers.
-
View subscriber sessions.
-
View system sessions.
-
Populate custom reference data (CRD) tables.
URL and Port
HA: https://<lbvip01>:443
Protocol
HTTPS/HTTP
Accounts and Roles
There are two levels of administrative roles supported for Control Center: Full Privilege and View Only. The logins and passwords for these two roles are configurable in LDAP or in /etc/broadhop/authentication-password.xml.
-
Full Privilege Admin Users: These users can view, edit, and delete information and can perform all tasks. Admin users have access to all screens in Control Center.
-
View Only Admin Users: These users can view information in Control Center, but cannot edit or change information. View only administrators have access to a subset of screens in the interface.
CRD REST API
Purpose
The Custom Reference Data (CRD) REST API enables you to query, create, delete, and update CRD table data without accessing the Control Center GUI. The CRD APIs are available over an HTTP REST interface. The specific APIs are described later in this guide.
URL and Port
HA: https://<lbvip01>:443/custrefdata
A validation URL is:
HA: https://<lbvip01>:8443/custrefdata
Protocol
HTTPS/HTTP
Accounts and Roles
Security and account management is accomplished using the haproxy mechanism on the Policy Director (LB) by defining user lists, user groups, and specific users.
On Cluster Manager: /etc/puppet/modules/qps/templates/etc/haproxy/haproxy.cfg
Configure HAProxy
Update the HAProxy configuration to add an authentication and authorization mechanism to the CRD API module.
-
Back up the /etc/haproxy/haproxy.cfg file.
-
Edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add a userlist with at least one username and password, as shown:
userlist <userlist name>
user <username1> password <encrypted password>
Use the following step to generate an encrypted password hash:
-
Execute the /var/qps/install/current/scripts/bin/support/generate_encrypted_password.sh script to get the encrypted password.
-
After the script executes, the encrypted password looks like the following:
+--------------------------------------------------------------------------------------------------------------+
| Fri May 29 11:43:47 UTC 2020 |
| Encrypted key |
| $6$bc732ffd2a5ad85e$dYuQfGowAsAS6E2mQyWgGtcSUY4IKss11.4AY1u852gGwZzr4Y54rBdkHG6zQytFPXXDJGwknx.IYIeDeW.jP. |
+--------------------------------------------------------------------------------------------------------------+
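The generated hash is in SHA-512 crypt format (the `$6$` prefix). If the CPS script is not at hand, an equivalent userlist password hash can be produced with standard tooling; this is a sketch, assuming `openssl passwd -6` is available (OpenSSL 1.1.1 or later), with an illustrative password:

```shell
# Generate a SHA-512 crypt hash for an haproxy userlist entry.
# The hash differs on every run because a random salt is chosen.
hash=$(openssl passwd -6 'MySecret123')
echo "userlist cps_user_list"
echo "  user apiuser password $hash"
```

The resulting `$6$...` string is what goes after `password` in the userlist entry.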
-
Add the following lines to the frontend https-api to enable authentication and authorization for the CRD REST API, and create a new backend crd_api_servers to intercept CRD REST API requests:
mode http
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api
backend crd_api_servers
mode http
balance roundrobin
option httpclose
option abortonclose
server qns01_A qns01:8080 check inter 30s
server qns02_A qns02:8080 check inter 30s
-
Update the frontend https_all_servers by replacing api_servers with crd_api_servers for the CRD API, as follows:
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api
-
Edit /etc/haproxy/haproxy.cfg on lb01/lb02 as follows:
-
Add at least one group with a user from the userlist created earlier:
group qns-ro users readonly
group qns users apiuser
-
Add the following lines to the backend crd_api_servers, mapping the group created above with the acl:
acl authoriseUsers http_auth_group(<cps-user-list>) <user-group>
http-request auth realm CiscoApiAuth if !authoriseUsers
-
Add the following to the backend crd_api_servers to set read-only permission (GET HTTP operation) for a group of users:
http-request deny if !METH_GET authoriseUsers
HAProxy Configuration Example:
userlist cps_user_list
group qns-ro users readonly
group qns users apiuser
user readonly password
$6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75B
XKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
user apiuser password
$6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75B
XKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
frontend https-api
description API
bind lbvip01:8443 ssl crt /etc/ssl/certs/quantum.pem
mode http
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api
default_backend api_servers
reqadd X-Forwarded-Proto:\ https if { ssl_fc }
frontend https_all_servers
description Unified API,CC,PB,Grafana,CRD-API,PB-AP
bind lbvip01:443 ssl crt /etc/ssl/certs/quantum.pem no-sslv3 no-tlsv10
ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
mode http
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api
backend crd_api_servers
mode http
balance roundrobin
option httpclose
option abortonclose
server qns01_A qns01:8080 check inter 30s
server qns02_A qns02:8080 check inter 30s
acl authoriseReadonlyUsers http_auth_group(cps_user_list) qns-ro
acl authoriseAdminUsers http_auth_group(cps_user_list) qns
http-request auth realm CiscoApiAuth if !authoriseReadonlyUsers !authoriseAdminUsers
http-request deny if !METH_GET authoriseReadonlyUsers
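The effect of the two http-request rules above: members of qns-ro may only issue GET requests, members of qns may use any method, and anyone else is challenged for credentials. A small sketch of that decision logic (the function and inputs are illustrative, not part of CPS):

```shell
# Illustrative model of the backend's rules:
#   http-request auth ...                      -> unknown users are challenged
#   http-request deny if !METH_GET authoriseReadonlyUsers
decide() {  # decide <group> <http-method>
  case "$1" in
    qns-ro) [ "$2" = "GET" ] && echo allow || echo deny ;;
    qns)    echo allow ;;
    *)      echo challenge ;;   # not in any group: auth challenge
  esac
}
decide qns-ro GET    # allow
decide qns-ro POST   # deny
decide qns POST      # allow
```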
Note |
The haproxy.cfg file is generated by the Puppet tool. Any manual changes to the file in lb01/lb02 would be reverted if the pupdate or vm-init scripts are run. |
Grafana
Purpose
Grafana is a metrics dashboard and graph editor used to display graphical representations of system and application KPIs and bulk statistics of various CPS components.
URL and Port
HA: https://<lbvip01>:9443/grafana
Protocol
HTTPS/HTTP
Accounts and Roles
An administrative user account must be used to add, modify, or delete Grafana dashboards or perform other administrative actions.
Refer to the Graphite and Grafana and Prometheus and Grafana chapters in this guide for details on adding or deleting these user accounts.
HAProxy
Purpose
HAProxy is a frontend IP traffic proxy process on lb01/lb02 that routes IP traffic to other applications in CPS. The individual ports that HAProxy forwards are described in the other sections of this guide.
Depending on the Diameter configuration, the haproxy-diameter statistics bind to one of the configurations, and that URL is displayed in the about.sh output. For the various Diameter configuration options, refer to the Diameter Related Configuration section in the CPS Installation Guide for VMware.
Documentation for HAProxy is available at: http://www.haproxy.org/#docs
URL and Port
To view statistics, open a browser and navigate to the following URL:
-
For HAProxy Statistics: http://<diameterconfig>:5540/haproxy?stats
-
For HAProxy Diameter Statistics: http://<diameterconfig>:5540/haproxy-diam?stats
Accounts and Roles
Not applicable.
JMX Interface
Purpose
The Java Management Extensions (JMX) interface can be used for managing and monitoring applications and system objects.
Resources to be managed or monitored are represented by objects called managed beans (MBeans). An MBean represents a resource running in the JVM, and external applications can interact with MBeans through JMX connectors and protocol adapters to collect statistics (pull), get or set application configurations (push/pull), and be notified of events such as faults or state changes (push).
CLI Access
External applications can be configured to monitor the application over JMX. In addition, the application provides scripts that connect to it over JMX and report the required statistics and information.
Port
pcrfclient01/pcrfclient02:
-
Control Center: 9045
-
Policy Builder: 9046
lb01/lb02:
-
iomanager: 9045
-
Diameter Endpoints: 9046, 9047, 9048...
qns01/qns02/qns... : 9045
Ports should be blocked using a firewall to prevent access from outside the CPS system.
Accounts and Roles
Not applicable.
Logstash
Purpose
Logstash is a process that consolidates the log events from CPS nodes onto pcrfclient01/pcrfclient02 for logging and alarms. The logs are forwarded to the CPS application to raise the necessary alarms, and are stored at /var/log/logstash/logstash.log.
If logstash is not being monitored, check the Policy Server (qns) process using the monit summary command:
monit summary
Monit 5.25.1 uptime: 19h 45m
┌─────────────────────────────────┬────────────────────────────┬───────────────┐
│ Service Name │ Status │ Type │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ sav-pcrfclient01 │ OK │ System │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ whisper │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ stale-session-cleaner-helper │ Initializing │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ stale-session-cleaner │ Initializing │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ snmpd │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ qns-2 │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ qns-1 │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ corosync │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ memcached │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ logstash │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ collectd │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ carbon-relay │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ carbon-cache-c │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ carbon-cache-b │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ carbon-cache │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ carbon-aggregator-b │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ carbon-aggregator │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ auditrpms.sh │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ aido_client │ OK │ Process │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ monitor-qns-2 │ OK │ File │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ monitor-qns-1 │ OK │ File │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ kpi_trap │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ db_trap │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ failover_trap │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ qps_process_trap │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ admin_login_trap │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ vm_trap │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ qps_message_trap │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ ldap_message_trap │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ logstash_process_status │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ monitor_replica │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ mon_db_for_lb_failover │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ mon_db_for_callmodel │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ cpu_load_monitor │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ cpu_load_trap │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ gen_low_mem_trap │ OK │ Program │
├─────────────────────────────────┼────────────────────────────┼───────────────┤
│ auto_heal_server │ OK │ Program │
└─────────────────────────────────┴────────────────────────────┴───────────────┘
Note |
On pcrfclient node, if Policy Server (qns) process is not running, 'logstash_process_status' program stops the logstash process so that the alarm is raised from another pcrfclient node. |
CLI Access
There is no specific CLI interface for logstash.
Protocol
TCP and UDP
Ports
TCP: 5544, 5545, 7546, 6514
UDP: 6514
Accounts and Roles
Not applicable.
LDAP SSSD
Purpose
In CPS 14.0.0 and higher releases, SSSD-based authentication is supported, allowing users to authenticate against an external LDAP server and gain access to the CPS CLI. SSSD RPMs and a default sssd.conf file are installed on each CPS VM when you perform a new installation or upgrade CPS.
For more information, refer to the CPS Installation Guide for VMware.
The /etc/monit.d/sssd file has been added with the following content so that SSSD is monitored by monit:
check process sssd with pidfile /var/run/sssd.pid
start program = "/etc/init.d/sssd start" with timeout 30 seconds
stop program = "/etc/init.d/sssd stop" with timeout 30 seconds
A /etc/logrotate.d/sssd file has also been added to rotate the SSSD log files. Here is the default configuration:
/var/log/sssd/*.log {
daily
missingok
notifempty
sharedscripts
nodateext
rotate 5
size 100M
compress
delaycompress
postrotate
/bin/kill -HUP `cat /var/run/sssd.pid 2>/dev/null` 2> /dev/null || true
endscript
}
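The postrotate action sends SIGHUP to the PID recorded in sssd's pidfile so the daemon reopens its log files after rotation; the redirections and `|| true` make the action a harmless no-op when the daemon is not running. A sketch of the same pattern against a stand-in background process (not sssd):

```shell
# Emulate the postrotate action with a throwaway process and pidfile.
pidfile=$(mktemp)
sleep 30 &
echo $! > "$pidfile"
# Same shape as the sssd postrotate line: never fail, even if the
# pidfile is missing or the process is already gone.
/bin/kill -HUP "$(cat "$pidfile" 2>/dev/null)" 2>/dev/null || true
result=ok
kill "$(cat "$pidfile")" 2>/dev/null || true   # clean up the stand-in
rm -f "$pidfile"
```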
Use the monit summary command to view the list of services managed by monit. Here is an example:
monit summary
The Monit daemon 5.17.1 uptime: 4d 2h 22m
Process 'whisper' Running
Process 'sssd' Running
Process 'snmptrapd' Running
Process 'snmpd' Running
Program 'vip_trap' Status ok
Program 'gr_site_status_trap' Status ok
Process 'redis' Running
Process 'qns-4' Running
Process 'qns-3' Running
Process 'qns-2' Running
Process 'qns-1' Running
File 'monitor-qns-4' Accessible
File 'monitor-qns-3' Accessible
File 'monitor-qns-2' Accessible
File 'monitor-qns-1' Accessible
Process 'memcached' Running
Process 'irqbalance' Running
Process 'haproxy-diameter' Running
Process 'haproxy' Running
Process 'cutter' Running
Process 'corosync' Running
Program 'cpu_load_monitor' Status ok
Program 'cpu_load_trap' Status ok
Program 'gen_low_mem_trap' Status ok
Process 'collectd' Running
Process 'auditrpms.sh' Running
System 'lb01' Running
Important |
Configuring other files to support LDAP-based authentication, and the changes required in the sssd.conf file for a specific customer deployment, are out of scope of this document. For more information, consult your Cisco Technical Representative. |
Restriction |
Grafana supports LDAP authentication over httpd and does not use the SSSD feature. As a result, if the LDAP server is down, Grafana is not accessible for LDAP users. |
CLI Access
No CLI is provided.
Port
Port number is not required.
Configure Policy Builder
Procedure
Step 1 |
To provide admin access, enter username in the following file: /var/www/svn/users-access-file
|
Step 2 |
Verify if you can export CRD data from the following link: https://<server_ip>:443/central/ |
Configure Grafana
Procedure
Step 1 |
Bypass the first level authentication by updating the /etc/httpd/conf.d/grafana-proxy.conf file as follows:
|
Step 2 |
Restart httpd by running the following command: /usr/bin/systemctl restart httpd
If a "port already in use" error is displayed, execute the following steps: |
Step 3 |
Update /etc/grafana/grafana.ini file to point to LDAP authentication instead of Basic Auth as follows:
|
Step 4 |
Modify /etc/grafana/ldap.toml file to provide LDAP details (for example, search base dn, bind dn, group search base dn, member_of attribute) as follows:
|
Step 5 |
Restart the Grafana server by running the following command: service grafana-server restart
|
Step 6 |
Log in to Grafana using LDAP user credentials. |
Mongo Database
Purpose
MongoDB is used to manage session storage efficiently and to address key requirements: low-latency reads/writes, high availability, multi-key access, and so on.
CPS supports different MongoDB models based on the CPS deployment, such as HA or Geo-Redundancy. The database list is specific to your deployment.
To rotate the MongoDB logs on the Session Manager VM, open the MongoDB file by executing the following command:
cat /etc/logrotate.d/mongodb
You will see output similar to the following:
{
daily
rotate 5
copytruncate
create 640 root root
sharedscripts
postrotate
endscript
}
In the above configuration, the MongoDB logs are rotated daily, and the latest five backups of the log files are kept.
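The retention behavior of rotate 5 can be sketched: each rotation shifts the numbered backups up by one and discards anything beyond the fifth. A toy emulation (file names are illustrative):

```shell
# Emulate "daily / rotate 5 / copytruncate" with numbered suffixes.
dir=$(mktemp -d)
: > "$dir/mongodb.log"
day=0
while [ "$day" -lt 8 ]; do            # simulate 8 daily rotations
  n=5
  while [ "$n" -ge 1 ]; do            # shift .1 -> .2, ... .5 -> .6
    [ -f "$dir/mongodb.log.$n" ] && mv "$dir/mongodb.log.$n" "$dir/mongodb.log.$((n + 1))"
    n=$((n - 1))
  done
  cp "$dir/mongodb.log" "$dir/mongodb.log.1"   # copytruncate copy step
  rm -f "$dir/mongodb.log.6"                   # rotate 5: drop the oldest
  day=$((day + 1))
done
backups=$(ls "$dir" | grep -c '^mongodb\.log\.')
echo "backups kept: $backups"   # backups kept: 5
```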
The standard definition for the supported replica sets is in a configuration file. This self-explanatory configuration file contains the replica set, set name, host name, port number, data file path, and so on.
Location: /etc/broadhop/mongoConfig.cfg
Database Name | Port Number | Primary DB Host | Secondary DB Host | Arbiter | Purpose
---|---|---|---|---|---
session_cache | 27717 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Session database
balance_mgmt | 27718 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Quota/Balance database
audit | 27725 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Reporting database
spr | 27720 | sessionmgr01 | sessionmgr02 | pcrfclient01 | USuM database
cust_ref_data | 27717 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Custom Reference Data
Note |
The list provided in Table 1 is for reference purposes only. |
Note |
The port number configuration is based on what is configured in each of the respective Policy Builder plug-ins. Refer to the Plug-in Configuration chapter of the CPS Mobile Configuration Guide for the correct port numbers and the ports defined in the mongo configuration file. |
CLI Access
Use the following commands to access the MongoDB CLI:
Log in to pcrfclient01 or pcrfclient02 and run: diagnostics.sh --get_replica_status
This command outputs information about the databases configured in the CPS cluster.
Note |
If a member is shown in an unknown state, it is likely that the member, most often an arbiter, is not accessible from one of the other members. In that case, go to that member and check its connectivity with the other members. You can also log in to mongo on that member and check its actual status. |
Protocol
Not applicable.
Port
Not applicable.
Accounts and Roles
Restrict MongoDB access for read-only users: If the firewall is enabled on the system, then on all VMs, for all read-only users, IP table rules are created to reject outgoing connections to the MongoDB replica sets.
For example, a rule similar to the following is created:
REJECT tcp -- anywhere sessionmgr01 tcp dpt:27718 owner GID match qns-ro reject-with icmp-port-unreachable
With this, the qns-ro user has restricted MongoDB access on sessionmgr01 on port 27718. Such rules are added for all read-only users who are part of the qns-ro group, for all replica sets.
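Expressed as an iptables command, such a rule would look roughly like the following (a sketch inferred from the rule listing above; the exact chain and options used by CPS may differ):

```
iptables -A OUTPUT -p tcp -d sessionmgr01 --dport 27718 \
  -m owner --gid-owner qns-ro \
  -j REJECT --reject-with icmp-port-unreachable
```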
Adding New Replica-set Members
Caution |
The following procedure must be performed only during a planned Maintenance Window (MW). |
Note |
The procedure is for reference purposes only. Contact your Cisco Account representative before running the procedure. |
Procedure
Step 1 |
Update the mongoConfig.cfg file with the new replica-set members to be configured. |
Step 2 |
Login to the Cluster Manager VM of the site where you want to add new replica-set members. |
Step 3 |
Take the backup of the current /etc/broadhop/mongoConfig.cfg file. /bin/cp /etc/broadhop/mongoConfig.cfg /etc/broadhop/mongoConfig.cfg.$(date +\%Y-\%m-\%d).backup |
Step 4 |
Copy the updated mongoConfig.cfg file to /etc/broadhop/. If the file is located on a remote machine, copy it from there; if the file is present on the current VM but at a different location, copy it from that location. |
Step 5 |
Verify that the Session Manager and arbiter VMs have sufficient space available to create the new replica-set members. The verification command depends on where the new replica-set members will be created. Here is an example for the case where all the Session Manager VMs are updated: for i in $(hosts-sessionmgr.sh); do echo $i; ssh $i "df -h"; done For arbiter VMs, log in to each VM and run the command locally. |
Step 6 |
Verify whether there are existing /var/tmp/stopped-* files. The verification command depends on where the new replica-set members will be created. Here is an example for the case where all the Session Manager VMs are updated: for i in $(hosts-sessionmgr.sh); do echo $i; ssh $i "ls -ltrh /var/tmp/stopped-*"; done For arbiter VMs, log in to each VM and run the command locally. |
Step 7 |
Verify whether mongoConfig.cfg-* files exist under /var/aido/ on each Session Manager and arbiter VM. Here is a sample command: for i in $(hosts-sessionmgr.sh); do echo $i; ssh $i "ls -ltrh /var/aido/*"; done
For arbiter VMs, log in to each VM and run the command locally. |
Step 8 |
Execute
|
Step 9 |
SSH to remote Site2/Cluster2 and take the backup of mongoConfig.cfg file. |
Step 10 |
Copy the mongoConfig.cfg file from local Site1/Cluster1 to remote Site2/Cluster2 on /etc/broadhop. scp root@<local_site_cluman_ip>:/etc/broadhop/mongoConfig.cfg /etc/broadhop/ |
Step 11 |
Verify whether there are existing /var/tmp/stopped-* files. The verification command depends on where the new replica-set members will be created. Here is an example for the case where all the Session Manager VMs are updated: for i in $(hosts-sessionmgr.sh); do echo $i; ssh $i "ls -ltrh /var/tmp/stopped-*"; done For arbiter VMs, log in to each VM and run the command locally. |
Step 12 |
Verify the mongoConfig.cfg-* files under /var/aido/ on each Session Manager and arbiter VM. Here is a sample command: for i in $(hosts-sessionmgr.sh); do echo $i; ssh $i "ls -ltrh /var/aido/*"; done
For arbiter VMs, log in to each VM and run the command locally. |
Step 13 |
Execute
|
Step 14 |
On local Site1/Cluster1, execute /var/qps/install/current/scripts/build/build_etc.sh |
Step 15 |
On remote Site2/Cluster2, execute /var/qps/install/current/scripts/build/build_etc.sh |
Step 16 |
Wait for some time (approximately 5 minutes) and verify the new replica-set status by executing the following command: diagnostics.sh --get_replica_status |
Rollback Replica-set Members
Caution |
The following procedure must be performed only during a planned Maintenance Window (MW). |
Note |
The procedure is for reference purposes only. Contact your Cisco Account representative before running the procedure. |
Procedure
Step 1 |
Prepare a list of the replica-sets and setnames that you want to remove from the local and remote site. |
Step 2 |
Execute the following command to remove all the replica sets identified in Step 1: build_set.sh --session --remove-replica-set --setname setxx --force where setxx is the set name identified in Step 1. |
Step 3 |
Verify that the new replica-sets have been removed using |
Step 4 |
Copy the mongoConfig.cfg backup file saved earlier in Adding New Replica-set Members to /etc/broadhop on the Cluster Manager VM of the local site: /bin/cp /etc/broadhop/mongoConfig.cfg.*.backup /etc/broadhop/ |
Step 5 |
Verify that the mongoConfig.cfg file has the older configuration.
|
Step 6 |
Execute
|
Step 7 |
Verify whether mongoConfig.cfg-* files exist under /var/aido/ on each Session Manager and arbiter VM. Here is a sample command: for i in $(hosts-sessionmgr.sh); do echo $i; ssh $i "ls -ltrh /var/aido/*"; done
For arbiter VMs, log in to each VM and run the command locally. |
Step 8 |
SSH to remote site and rollback a mongoConfig.cfg from backup. |
Step 9 |
Execute
|
Step 10 |
Verify whether mongoConfig.cfg-* files exist under /var/aido/ on each Session Manager and arbiter VM. Here is a sample command: for i in $(hosts-sessionmgr.sh); do echo $i; ssh $i "ls -ltrh /var/aido/*"; done
For arbiter VMs, log in to each VM and run the command locally. |
Step 11 |
On the local site, apply the previous version of the mongoConfig.cfg file by executing the following command: /var/qps/install/current/scripts/build/build_etc.sh |
Step 12 |
On the remote site, apply the previous version of the mongoConfig.cfg file by executing the following command: /var/qps/install/current/scripts/build/build_etc.sh |
Step 13 |
Verify the health check using |
Replica Set Arbiter: Security
Arbiters do not replicate any data, including user and role details; they participate only in voting when a new primary is elected. Therefore, when authentication is enabled, the only way to log in to them is through the localhost exception.
For more information, refer to https://docs.mongodb.com/v3.6/core/replica-set-arbiter/#security.
A few commands (such as isMaster, ping, connectionStatus, and authenticate) do not require authentication even when authentication is enabled, because they are used to support connecting to a deployment. In contrast, the majority of commands (including rs.status(), show dbs, show collections, and so on) require authentication when authentication is enabled.
For the detailed command list, refer to https://docs.mongodb.com/v3.6/reference/command/
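The split described above can be sketched as a predicate (the exempt command list is taken from the paragraph above; anything else is treated as requiring authentication):

```shell
# Which MongoDB commands are exempt from authentication?
needs_auth() {
  case "$1" in
    isMaster|ping|connectionStatus|authenticate) echo no ;;
    *) echo yes ;;
  esac
}
needs_auth ping          # no
needs_auth rs.status     # yes
needs_auth "show dbs"    # yes
```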
Admin Database
Purpose
By default, admin replica-set holds the following databases:
-
sharding: This database holds the following information:
-
Session sharding: Session shard seeds and its databases.
-
Session type counters: Session statistics information. For example, number of sessions present in each shard for Gx, Rx, Sy, and so on.
-
Session compression dictionary data: Compression object data of session fields data.
-
Memcache rings data: Memcached rings and their sets data. Every set has two sessionmgr VMs followed by the memcached port number.
-
Secondary Key sharding: Secondary Key shards seeds and its databases.
-
License data: Session license information.
-
-
scheduler: This database holds information about the secondary-key rebuild tasks. When you execute rebuild sk rings or rebuild sk db, the application adds scheduled tasks to the "tasks" collection in this database. Once the tasks are created, the application pulls them from the collection and executes them.
-
diameter: This database holds information about which load balancer instance each peer (inbound and outbound) is connected to. It also holds the history of peers connecting (start) and disconnecting (stop), with timestamps.
-
queueing: This database holds information about the internal TCP connections between the Policy Director (LB) and Policy Server (QNS) VMs, including in- and out-queue data.
-
clusters: This database holds information about the site names and IP addresses of the ADMIN replica-set members, in the hosts collections.
-
policy_trace: This database holds traces for particular subscribers. To view the traces, you must enable tracing for the particular subscriber.
-
Keystore: This database holds the information about the redis key store configuration.
Note |
There are separate configurations available for the Trace Database and Endpoint Database in Cluster configuration under Policy Builder.
For more information on Trace Database and Endpoint Database configuration, see Adding an HA Cluster section in CPS Mobile Configuration Guide. |
Note |
In a GR deployment, if any site loses connectivity (internal/replication) to the ADMIN replica-set, the following impact is observed:
|
Protocol
Not applicable.
Port
Not applicable.
Accounts and Roles
Not applicable.
OSGi Console
Purpose
CPS is based on the Open Service Gateway initiative (OSGi), and the OSGi console is a command-line shell that can be used to analyze problems at the OSGi layer of the application.
CLI Access
Use the following command to access the OSGi console:
telnet <ip> <port>
The following commands can be executed on the OSGi console:
ss : List installed bundles and their status.
start <bundle-id> : Start the bundle.
stop <bundle-id> : Stop the bundle.
diag <bundle-id> : Diagnose the bundle.
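For example, a typical interactive session might look like the following (the host name, port, and bundle id are illustrative; see the Ports list below for the per-VM port assignments):

```
telnet qns01 9091
osgi> ss          # list bundles and their states
osgi> diag 42     # diagnose bundle id 42
osgi> disconnect  # leave the console without stopping the application
```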
Use the following OSGi commands to add or remove shards:
Command |
Description |
---|---|
|
Lists all the shards. |
|
Marks the shard for removal. If the shard is not a backup, rebalancing is required for the shard to be removed fully. If the shard is a backup, it does not require rebalancing of sessions and is removed immediately. |
|
Rebalances the buckets and migrates session with rate limit. Rate limit is optional. If rate limit is passed, it is applied at rebalance. |
|
Rebalances the buckets and schedules background task to migrate sessions. Rate limit is optional. If rate limit is passed, it is applied at rebalance. |
|
Displays the current rebalance status. Status can be one of the following:
|
|
In order for CPS to identify a stale session from the latest session, the secondary key mapping for each site stores the primary key in addition to the bucket ID and the site ID, that is, Secondary Key = <Bucket Id>; <Site Id>; <Primary Key>. To enable this feature, add the flag Enabling this flag and running |
|
Displays the status of the migration and the current cache version. |
|
List the SK shards. |
|
Adds new SK shard. For backup shard, pass the backup option. |
|
Mark SK shard for deletion. |
|
Rebalance SK buckets across SK shards in foreground. |
|
Migrate SK data in foreground. If data is already migrated it will query and skip. |
|
Rebalance SK buckets across SK shards and schedule the distribute task to migrate SK data in background on multiple QNS |
|
Schedule the distribute task to migrate SK data in background on multiple QNS. If data is already migrated it will query and skip. |
|
Show SK DB shard rebalance status. |
|
Rebuild SK DB from Session DB. Default rate limit is 1000. |
|
Show current SK DB rebuild status. |
|
Get current caching system priority order. |
|
Example:
|
|
Lists the SK shards for corresponding site ID. |
|
Adds new SK shard for mentioned site. For backup shard, add the backup option. |
|
Rebalance SK buckets across SK shards in foreground for the mentioned site. |
|
Migrate SK data in foreground for the mentioned site. If data is already migrated it will query and skip. |
|
Rebalance SK buckets across SK shards for the mentioned site ID and schedule the distribute task to migrate SK data in background on multiple Policy Servers (QNS). |
|
Schedule the distribute task to migrate SK data in background on multiple Policy Servers (QNS) for the mentioned site. If data is already migrated it will query and skip. |
|
Show SK database shard rebalance status for the mentioned site. |
|
Rebuild SK database from Session database for the mentioned site. Default rate limit is 1000. |
|
Show current SK database rebuild status for the mentioned site ID. |
Use the following OSGi command to get the information related to open application alarms in CPS:
Command |
Description |
---|---|
|
To list the open/active application alarms since last restart of policy server (QNS) process on pcrfclient01/02 VM. |
Example:
osgi> listalarms
Active Application Alarms
id=1000 sub_id=3001 event_host=lb02 status=down date=2017-11-22,10:47:34,
051+0000 msg="3001:Host: site-host-gx Realm: site-gx-client.com is down"
id=1000 sub_id=3001 event_host=lb02 status=down date=2017-11-22,10:47:34,
048+0000 msg="3001:Host: site-host-sd Realm: site-sd-client.com is down"
id=1000 sub_id=3001 event_host=lb01 status=down date=2017-11-22,10:45:17,
927+0000 msg="3001:Host: site-server Realm: site-server.com is down"
id=1000 sub_id=3001 event_host=lb02 status=down date=2017-11-22,10:47:34,
091+0000 msg="3001:Host: site-host-rx Realm: site-rx-client.com is down"
id=1000 sub_id=3002 event_host=lb02 status=down date=2017-11-22,10:47:34,
111+0000 msg="3002:Realm: site-server.com:applicationId: 7:all peers are down"
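The alarm lines above follow a regular key=value layout, so saved output can be post-processed with standard shell tools. The following is a minimal sketch, assuming the OSGi output has been captured to a local file; alarms.txt is a hypothetical capture, not a CPS artifact.

```shell
# Sketch: extract the unique hosts reporting "down" alarms from a saved
# copy of 'listalarms' output. alarms.txt is an assumed local capture.
cat > alarms.txt <<'EOF'
id=1000 sub_id=3001 event_host=lb02 status=down date=2017-11-22,10:47:34,
051+0000 msg="3001:Host: site-host-gx Realm: site-gx-client.com is down"
id=1000 sub_id=3001 event_host=lb02 status=down date=2017-11-22,10:47:34,
048+0000 msg="3001:Host: site-host-sd Realm: site-sd-client.com is down"
EOF
# Pull out each event_host value and deduplicate.
down_hosts=$(grep -o 'event_host=[^ ]*' alarms.txt | cut -d= -f2 | sort -u)
echo "$down_hosts"
```

The same pattern applies to sub_id or status fields if you need to group alarms differently.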
Use the following OSGi commands to get information related to memcache:
Note |
The memcache commands have been deprecated in CPS 20.1.0 and later releases. |
Command |
Description |
---|---|
|
Used to disable the complete memcached audit. Default: enable |
|
Used to enable the regular memcached audit. Default: enable |
|
Used to display the current regular memcached audit status. |
|
Used to enable the Full Table Scan (FTS) threshold-based audit. This works only when the audit feature is enabled. Default: true |
|
Used to disable the FTS threshold-based audit. |
|
Used to display the current FTS-based memcached audit status. |
|
Used to display the current periodic memcached audit interval. |
|
Used to update the regular memcached audit interval. The audit interval cannot be less than 360 minutes. |
|
Used to display the current FTS threshold for the FTS-based memcached audit. |
|
Used to specify the FTS threshold value for the FTS-based memcached audit. This value cannot be less than 25% of the total allowed FTS per qns. |
|
Used to display the next regular memcached audit schedule when the memcached audit is enabled. |
Ports
pcrfclientXX:
-
Control Center: 9091
-
Policy Builder: 9092
lbXX:
-
iomanager: 9091
-
Diameter Endpoints: 9092, 9093, 9094 ...
qnsXX: 9091
Ports should be blocked using a firewall to prevent access from outside the CPS cluster.
Accounts and Roles
Not applicable.
Policy Builder GUI
Purpose
Policy Builder is the web-based client interface for the configuration of policies in Cisco Policy Suite.
URL and Port
HA: https://<lbvip01>:7443/pb
Protocol
HTTPS/HTTP
Accounts and Roles
Initial accounts are created during the software installation. Refer to the CPS Operations Guide for commands to add users and change passwords.
REST API
Purpose
Allows initial investigation into a proof-of-concept API for managing a CPS system and related Custom Reference Data through an HTTPS-accessible JSON API.
CLI Access
This is an HTTPS/Web interface and has no Command Line Interface.
URL and Port
API: http://<Cluster Manager IP>:8458
Documentation: http://<Cluster Manager IP>:7070/doc/index.html
Accounts and Roles
Initial accounts are created during the software installation. Refer to the CPS Operations Guide for commands to add users and change passwords.
Rsyslog
Purpose
Enhanced log processing is provided using Rsyslog.
Rsyslog logs Operating System (OS) data locally (/var/log/messages etc.) using the /etc/rsyslog.conf and /etc/rsyslog.d/*conf configuration files.
Rsyslog outputs all WARN level logs on CPS VMs to the /var/log/warn.log file.
On all nodes, Rsyslog forwards the OS system log data to lbvip02 via UDP over the port defined in the logback_syslog_daemon_port variable as set in the CPS deployment template (Excel spreadsheet). To download the most current CPS Deployment Template (/var/qps/install/current/scripts/deployer/templates/QPS_deployment_config_template.xlsm), refer to the CPS Installation Guide for VMware or CPS Release Notes for this release.
Additional information is available in the Logging chapter of the CPS Troubleshooting Guide. Refer also to http://www.rsyslog.com/doc/ for the Rsyslog documentation.
CLI Access
Not applicable.
Protocol
UDP
Port
6514
Accounts and Roles
Account and role management is not applicable.
Rsyslog Customization
CPS provides the ability to configure forwarding of consolidated syslogs from rsyslog-proxy on Policy Director VMs to remote syslog servers (refer to the CPS Installation Guide for VMware). However, if additional customizations are made to the rsyslog configuration to forward logs to external syslog servers in the customer's network for monitoring purposes, such forwarding must be performed via dedicated action queues in rsyslog. In the absence of dedicated action queues, when rsyslog is unable to deliver a message to the remote server, its main message queue can fill up, which can lead to severe issues such as preventing SSH logging, which in turn can prevent SSH access to the VM.
Sample configuration for dedicated action queues is available in the Logging chapter of the CPS Troubleshooting Guide. Refer to rsyslog documentation on http://www.rsyslog.com/doc/v5-stable/concepts/queues.html for more details about action queues.
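As noted above, forwarding to external servers should use a dedicated action queue so that an unreachable destination cannot fill rsyslog's main queue. The fragment below is a minimal sketch in rsyslog's legacy (v5-style) directive syntax; the queue name, sizes, and remote address (remote-syslog.example.com) are illustrative assumptions, not CPS defaults. Refer to the CPS Troubleshooting Guide sample for the supported configuration.

```
# Dedicated, disk-assisted action queue for one remote destination.
# Queue name, sizes, and remote address are illustrative assumptions.
$ActionQueueType LinkedList
$ActionQueueFileName remote_fwd
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionResumeRetryCount -1
# Single @ forwards via UDP; use @@ for TCP.
*.* @remote-syslog.example.com:514
```

Because the queue settings precede the forwarding action, they apply only to that action; subsequent actions fall back to the main message queue.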
SVN Interface
Apache™ Subversion (SVN) is the versioning and revision control system used within CPS. It maintains all the CPS policy configurations and provides repositories in which files can be created, updated, and deleted. SVN records the difference each time a file is changed on the server and generates a revision number for each change.
In general, most interactions with SVN are performed via Policy Builder.
CLI Access
From a remote machine with the SVN client installed, use the following commands to access SVN:
Get all files from the server:
svn checkout --username <username> --password <password> <SVN Repository URL> <Local Path>
Example:
svn checkout --username broadhop --password broadhop http://pcrfclient01/repos/configuration/root/configuration
If <Local Path> is not provided, files are checked out to the current directory.
Store/check-in the changed files to the server:
svn commit --username <username> --password <password> <Local Path> -m "modified config"
Example:
svn commit --username broadhop --password broadhop /root/configuration -m "modified config"
Update local copy to latest from SVN:
svn update <Local Path>
Example:
svn update /root/configuration/
Check current revision of files:
svn info <Local Path>
Example:
svn info /root/configuration/
Note |
Use svn --help for a list of other commands. |
Protocol
HTTP
Port
80
Accounts and Roles
CPS 7.0 and Higher Releases
Add User with Read Only Permission
From the pcrfclient01 VM, run adduser.sh to create a new user.
/var/qps/bin/support/adduser.sh
Note |
This command can also be run from the Cluster Manager VM, but you must include the OAM (PCRFCLIENT) option:
|
Example:
[root@pcrfclient01 /]# /var/qps/bin/support/adduser.sh
Enter username: <username>
Enter group for the user: <any group>
Enter password:
Re-enter password:
Add User with Read/Write Permission
By default, the adduser.sh script creates a new user with read-only permissions. For read-write permission, you must assign the user to the qns-svn group and then run the vm-init command.
From the pcrfclient01 VM, run the adduser.sh script to create the new user.
Run the following command on both pcrfclient01 and pcrfclient02 VMs:
/etc/init.d/vm-init
You can now login and commit changes as the newly created user.
Change Password
From the pcrfclient01 VM, run the change_passwd.sh script to change the password of a user.
/var/qps/bin/support/change_passwd.sh
Example:
[root@pcrfclient01 /]# /var/qps/bin/support/change_passwd.sh
Enter username whose password needs to be changed: user1
Enter current password:
Enter new password:
Re-enter new password:
CPS Versions Earlier than 7.0
Add User
Perform all of the following commands on both the pcrfclient01 and pcrfclient02 VMs.
Use the htpasswd utility to add a new user:
htpasswd -mb /var/www/svn/.htpasswd <username> <password>
Example:
htpasswd -mb /var/www/svn/.htpasswd user1 password
In some versions, the password file is /var/www/svn/password.
Provide Access
Update the user role file /var/www/svn/users-access-file and add the username under admins (for read/write permissions) or nonadmins (for read-only permissions). For example:
[groups]
admins = broadhop
nonadmins = read-only, user1
[/]
@admins = rw
@nonadmins = r
Change Password
Use the htpasswd utility to change passwords.
htpasswd -mb /var/www/svn/.htpasswd <username> <password>
Example:
htpasswd -mb /var/www/svn/.htpasswd user1 password
TACACS+ Interface
Purpose
CPS 7.0 and above is designed to leverage the Terminal Access Controller Access Control System Plus (TACACS+) to facilitate centralized management of users. Leveraging TACACS+, the system provides system-wide authentication, authorization, and accounting (AAA) for the CPS system.
Further, the system allows users to gain different entitlements based on user role. These entitlements can be centrally managed based on the attribute-value pairs (AVPs) returned on TACACS+ authorization queries.
CLI Access
No CLI is provided.
Port
CPS communicates to the AAA backend using IP address/port combinations configured by the operator.
Account Management
Configuration is managed by the Cluster Management VM which deploys the /etc/tacplus.conf and various PAM configuration files to the application VMs. For more account management information, refer to TACACS+ Service Requirements.
For more information about TACACS+, refer to the following links:
-
TACACS+ Protocol Draft: http://tools.ietf.org/html/draft-grant-tacacs-02
-
Portions of the solution reuse software from the open source pam_tacplus project hosted at: https://github.com/jeroennijhof/pam_tacplus
For information on CLI commands, refer to Accessing the CPS CLI.
Unified API
Purpose
Unified APIs are used to reference customer data table values.
URL and Port
HA: https://<lbvip01>:8443/ua/soap
Protocol
HTTPS/HTTP
Accounts and Roles
Currently, there is no authorization for this API.
Accessing the CPS CLI
sudo supports a plugin architecture for security policies and input/output logging. The default security policy is sudoers, which is configured via the file /etc/sudoers and contains the rules that users must follow when using the sudo command.
sudo allows a system administrator to delegate authority to give certain users (or groups of users) the ability to run some (or all) commands as root or another user while providing an audit trail of the commands and their arguments.
For example:
%adm ALL=(ALL)
NOPASSWD: ALL
This means that any user in the administrator group on any host may run any command as any user without a password. The first ALL refers to hosts, the second to target users, and the last to allowed commands.
The /etc/sudoers file contains user specifications that define the commands that users may execute. When sudo is invoked, these specifications are checked in order, and the last match is used. A user specification looks like this at its most basic:
User Host = (Runas) Command
Read this as "User may run Command as the Runas user on Host". Any or all of the above may be the special keyword ALL, which always matches. User and Runas may be usernames, group names prefixed with %, numeric UIDs prefixed with #, or numeric GIDs prefixed with %#. Host may be a hostname, IP address, or a whole network (for example, 192.0.2.0/24), but not 127.0.0.1.
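As an illustration of this grammar, two hypothetical specifications (example names only, not shipped CPS policy) might look like this:

```
# Hypothetical examples -- not shipped CPS policy.
# "operator may run one script, as root, on any host"
# (path to stopall.sh assumed for illustration)
operator ALL = (root) /var/qps/bin/support/stopall.sh
# "members of group adm may run any command as any user on pcrfclient01"
%adm pcrfclient01 = (ALL) ALL
```

Because the last matching specification wins, narrower rules placed later in the file override broader ones above them.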
Group Identifiers
The group identifier of the TACACS+ authenticated user on the VM nodes. This value should reflect the role assigned to a given user, based on the following values:
-
group id=500 (qns)
The group identifier used by Policy Server (qns) user in application.
-
group id=501 (qns-su)
This group identifier should be used for users that are entitled to attain superuser (or 'root') access on the CPS VM nodes.
-
group id=504 (qns-admin)
This group identifier should be used for users that are entitled to perform administrative maintenance on the CPS VM nodes.
Note |
To execute administrative scripts from qns-admin, prefix the command with sudo. For example: sudo stopall.sh |
-
group id=505 (qns-ro)
This group identifier should be used for users that are entitled to read-only access to the CPS VM nodes.
When an authenticated user has one of the above group permissions, they can access the CPS CLI and run the predefined commands available to that user role. A list of commands available after authentication can be viewed using the sudo -l command (-l for list), or any user with root privileges can use sudo -l -U <qns-role> to see the commands available for a specific Policy Server (qns) role.
For more information, refer to https://www.sudo.ws/intro.html.
The user's home directory on the CPS VM nodes. To enable simpler management of these systems, users should be configured with a pre-deployed shared home directory based on the role (gid) they are assigned:
-
home=/home/qns-su should be used for users in the 'qns-su' group (gid=501)
-
home=/home/qns-admin should be used for users in the 'qns-admin' group (gid=504)
-
home=/home/qns-ro should be used for users in the 'qns-ro' group (gid=505)
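The gid-to-role mapping above can be expressed as a small shell lookup; this is a sketch for illustration, not a CPS utility:

```shell
# Map a CPS group identifier to its documented role name (sketch only).
gid=504
case "$gid" in
  500) role=qns ;;        # application user
  501) role=qns-su ;;     # superuser entitlement
  504) role=qns-admin ;;  # administrative maintenance
  505) role=qns-ro ;;     # read-only access
  *)   role=unknown ;;
esac
echo "$role"
```

On a VM, the gid of the current user can be obtained with id -g and fed into the same lookup.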
Support for Multiple User Login Credentials
CPS supports multiple user login credentials with different privileges for all non-cluman VMs.
Add the allow_user_for_cluman flag in the configuration.csv file to update the sudoers file. This flag controls the privileges available on the Cluster Manager (cluman):
-
When the allow_user_for_cluman flag is set to true, the sudoers file is updated with CPS users, and they can access cluman according to their privileges.
-
When the allow_user_for_cluman flag is set to false or not defined, CPS users cannot execute any commands from cluman.
The following table describes the CSV-based configuration parameter.
Parameters |
Description |
---|---|
allow_user_for_cluman |
Used to update /etc/sudoers with CPS user entries on cluman. |
This feature is supported only in VMware.
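Assuming the usual key,value row layout of configuration.csv (the exact column layout may differ in your deployment template), the flag would be added as a single row:

```
allow_user_for_cluman,true
```

After editing configuration.csv, re-run the deployment import steps described in the CPS Installation Guide for VMware so the sudoers change is applied.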