Step 1
Run diagnostics.sh to verify that the system is in a healthy state.
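For example, a full health check followed by the replica-set view used later in this procedure (running the script with no options performs the complete set of checks on most CPS releases):
diagnostics.sh
diagnostics.sh --get_replica_status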
Step 2
Log in to the primary Cluster Manager using SSH.
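For example, assuming the Site1 Cluster Manager answers to the cm-a hostname shown in the prompts later in this procedure:
ssh root@cm-a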
Step 3
Take a backup of the /etc/broadhop/mongoConfig.cfg file.
cp /etc/broadhop/mongoConfig.cfg /etc/broadhop/mongoConfig.cfg.date.BACKUP
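If you want the date placeholder filled in automatically, a shell command substitution can be used instead (the date format shown is only an example):
cp /etc/broadhop/mongoConfig.cfg /etc/broadhop/mongoConfig.cfg.$(date +%Y%m%d).BACKUP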
Step 4
Take a backup of the admin database from the Cluster Manager.
[root@cm-a ~]# mkdir admin
[root@cm-a ~]# cd admin
[root@cm-a admin]# mongodump -h sessionmgr01 --port 27721
connected to: sessionmgr01:27721
2016-09-23T16:31:13.962-0300 all dbs
** Truncated output **
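mongodump writes its output to a dump/ directory under the current working directory by default, so a quick way to confirm that the backup completed is to list it:
ls -lR dump/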
Step 5
Edit the /etc/broadhop/mongoConfig.cfg file using the vi editor. Find the section for the session replication sets and add the new session replication set members.
Note: Server names and ports are specific to each customer deployment. Make sure that the new session replication set has unique values and that the session set number is incremented.
Make sure that the ports used in MEMBER1, MEMBER2, MEMBER3, MEMBER4, and so on are the same.
#SITE1_START
[SESSION-SET2]
SETNAME=set10
OPLOG_SIZE=1024
ARBITER1=pcrfclient01a:27727
ARBITER_DATA_PATH=/var/data/sessions.1/set10
MEMBER1=sessionmgr01a:27727
MEMBER2=sessionmgr02a:27727
MEMBER3=sessionmgr01b:27727
MEMBER4=sessionmgr02b:27727
DATA_PATH=/var/data/sessions.1/set10
[SESSION-SET2-END]
#SITE2_START
[SESSION-SET5]
SETNAME=set11
OPLOG_SIZE=1024
ARBITER1=pcrfclient01b:47727
ARBITER_DATA_PATH=/var/data/sessions.1/set11
MEMBER1=sessionmgr01b:37727
MEMBER2=sessionmgr02b:37727
MEMBER3=sessionmgr01a:37727
MEMBER4=sessionmgr02a:37727
DATA_PATH=/var/data/sessions.1/set11
[SESSION-SET5-END]
Run build_etc.sh to accept the changes made in the mongoConfig.cfg file and wait for the AIDO server to create the additional replica set.
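build_etc.sh is invoked the same way as in Step 12 below, for example:
/var/qps/install/current/scripts/build/build_etc.sh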
Note: Verify that the /etc/hosts file on both sites is correctly configured with the aliases.
The Site1 /etc/hosts file should have the following content:
x.x.x.a sessionmgr01 sessionmgr01a
x.x.x.b sessionmgr02 sessionmgr02a
y.y.y.a psessionmgr01 sessionmgr01b
y.y.y.b psessionmgr02 sessionmgr02b
The Site2 /etc/hosts file should have the following content:
y.y.y.a sessionmgr01 sessionmgr01b
y.y.y.b sessionmgr02 sessionmgr02b
x.x.x.a psessionmgr01 sessionmgr01a
x.x.x.b psessionmgr02 sessionmgr02a
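A quick way to confirm that the aliases resolve correctly on a given VM is to query the local resolver, which also reads /etc/hosts (the hostnames shown follow the sample entries above):
getent hosts sessionmgr01a sessionmgr01b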
Step 6
SSH to the Cluster-A/Site1 Cluster Manager. Add the new session replication set information to the /etc/broadhop/mongoConfig.cfg file. Run build_etc.sh to accept the changes and create the new session replication set from the Cluster Manager.
To verify that the replica set has been created, run one of the following commands:
build_set.sh --session
OR
diagnostics.sh --get_replica_status
Step 7
Set the priority using the set_priority.sh command. The following are example commands:
cd /var/qps/bin/support/mongo/; ./set_priority.sh --db session
cd /var/qps/bin/support/mongo/; ./set_priority.sh --db spr
cd /var/qps/bin/support/mongo/; ./set_priority.sh --db admin
cd /var/qps/bin/support/mongo/; ./set_priority.sh --db balance
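To confirm the resulting priorities, you can run the same replica-status check that is shown in Step 20:
diagnostics.sh --get_replica_status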
Step 8
Add the shard to Cluster-B/Site2. Add the new session replication set information to the /etc/broadhop/mongoConfig.cfg file. Run build_etc.sh to accept the changes and create the new session replication set from the Cluster Manager.
To verify that the replica set has been created, run one of the following commands:
build_set.sh --session
OR
diagnostics.sh --get_replica_status
Step 9
Set the priority using the set_priority.sh command. The following are example commands:
cd /var/qps/bin/support/mongo/; ./set_priority.sh --db session
cd /var/qps/bin/support/mongo/; ./set_priority.sh --db spr
cd /var/qps/bin/support/mongo/; ./set_priority.sh --db admin
cd /var/qps/bin/support/mongo/; ./set_priority.sh --db balance
Step 10
Copy the mongoConfig.cfg file to all the nodes using copytoall.sh from the Cluster Manager.
copytoall.sh /etc/broadhop/mongoConfig.cfg /etc/broadhop/mongoConfig.cfg
Copying '/var/qps/config/mobile/etc/broadhop/mongoConfig.cfg' to '/etc/broadhop/mongoConfig.cfg' on all VMs
lb01
mongoConfig.cfg 100% 4659 4.6KB/s 00:00
lb02
mongoConfig.cfg 100% 4659 4.6KB/s 00:00
sessionmgr01
** Truncated output **
Step 11
Transfer the modified mongoConfig.cfg file to Site2 (Cluster-B).
scp /etc/broadhop/mongoConfig.cfg cm-b:/etc/broadhop/mongoConfig.cfg
root@cm-b's password:
mongoConfig.cfg 100% 4659 4.6KB/s 00:00
Step 12
SSH to Cluster-B (Cluster Manager). Run build_etc.sh to make sure the modified mongoConfig.cfg file is restored after a reboot.
/var/qps/install/current/scripts/build/build_etc.sh
Building /etc/broadhop...
Copying to /var/qps/images/etc.tar.gz...
Creating MD5 Checksum...
Step 13
Copy the mongoConfig.cfg file from Cluster-B (Cluster Manager) to all the nodes using copytoall.sh from the Cluster Manager.
copytoall.sh /etc/broadhop/mongoConfig.cfg /etc/broadhop/mongoConfig.cfg
Copying '/var/qps/config/mobile/etc/broadhop/mongoConfig.cfg' to '/etc/broadhop/mongoConfig.cfg' on all VMs
lb01
mongoConfig.cfg 100% 4659 4.6KB/s 00:00
lb02
mongoConfig.cfg 100% 4659 4.6KB/s 00:00
** Truncated output **
Step 14
(Applicable for HA and Active/Standby GR only) Add shards with the default option. Log in to OSGi mode and add the shards as follows:
telnet qns01 9091
Trying XXX.XXX.XXX.XXX...
Connected to qns01.
Escape character is '^]'.
addshard seed1[,seed2] port db-index siteid [backup]
osgi> addshard sessionmgr01,sessionmgr02 27727 1 Site1
osgi> addshard sessionmgr01,sessionmgr02 27727 2 Site1
osgi> addshard sessionmgr01,sessionmgr02 27727 3 Site1
osgi> addshard sessionmgr01,sessionmgr02 27727 4 Site1
osgi> addshard sessionmgr01,sessionmgr02 37727 1 Site1
osgi> addshard sessionmgr01,sessionmgr02 37727 2 Site1
osgi> addshard sessionmgr01,sessionmgr02 37727 3 Site1
osgi> addshard sessionmgr01,sessionmgr02 37727 4 Site1
osgi> rebalance
osgi> migrate
Migrate ...
All versions up to date - migration starting
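If your CPS release provides it, the listshards OSGi command gives a quick view of the shard table after the rebalance and migration (availability of this command in your release is an assumption):
osgi> listshards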
Step 15
Verify that the sessions have been created in the newly created replication set and are balanced.
session_cache_ops.sh --count site2
session_cache_ops.sh --count site1
Sample output:
Session cache operation script
Thu Jul 28 16:55:21 EDT 2016
------------------------------------------------------
Session Replica-set SESSION-SET4
------------------------------------------------------
Session Database : Session Count
------------------------------------------------------
session_cache : 1765
session_cache_2 : 1777
session_cache_3 : 1755
session_cache_4 : 1750
------------------------------------------------------
No of Sessions in SET4 : 7047
------------------------------------------------------
------------------------------------------------------
Session Replica-set SESSION-SET5
------------------------------------------------------
Session Database : Session Count
------------------------------------------------------
session_cache : 1772
session_cache_2 : 1811
session_cache_3 : 1738
session_cache_4 : 1714
------------------------------------------------------
No of Sessions in SET5 : 7035
------------------------------------------------------
Step 16
(Applicable for Active/Active GR only) Add shards with the Site option. Log in to OSGi mode and add the shards as follows:
telnet qns01 9091
Trying XXX.XXX.XXX.XXX...
Connected to qns01.
Escape character is '^]'.
Run listsitelookup if you are unsure about the site names. Similar information can be obtained from the /etc/broadhop/qns.conf file (-DGeoSiteName=Site1).
Note: In the listsitelookup configuration, 'LookupValues' must be unique per PrimarySiteId. If they are not unique, you must enable FTS (Full Table Scan), which impacts performance.
osgi> listsitelookup
Id PrimarySiteId SecondarySiteId LookupValues
1 Site1 Site2 pcef-gx-1.cisco.com
1 Site1 Site2 pcef-gy-1.cisco.com
1 Site1 Site2 ocs1.server.cisco.com
2 Site2 Site1 pcef2-gx-1.cisco.com
2 Site2 Site1 pcef2-gy-1.cisco.com
2 Site2 Site1 ocs1.server.cisco.com
Note: Do not run the addshard command on multiple sites in parallel. Wait for the command to finish on one site and then proceed to the second site.
Add the shards to Site1. Run the following commands from the qns of Site1:
osgi> addshard sessionmgr01,sessionmgr02 27727 1 Site1
osgi> addshard sessionmgr01,sessionmgr02 27727 2 Site1
osgi> addshard sessionmgr01,sessionmgr02 27727 3 Site1
osgi> addshard sessionmgr01,sessionmgr02 27727 4 Site1
Add the shards to Site2. Run the following commands from the qns of Site2:
osgi> addshard sessionmgr01,sessionmgr02 37727 1 Site2
osgi> addshard sessionmgr01,sessionmgr02 37727 2 Site2
osgi> addshard sessionmgr01,sessionmgr02 37727 3 Site2
osgi> addshard sessionmgr01,sessionmgr02 37727 4 Site2
Run the osgi> rebalance Site1 command from the Site1 qns.
Run the osgi> rebalance Site2 command from the Site2 qns.
Run the following command from the Site1 qns:
osgi> migrate Site1
Migrate ...
All versions up to date - migration starting
Run the following command from the Site2 qns:
osgi> migrate Site2
Migrate ...
All versions up to date - migration starting
Step 17
Verify that the sessions have been created in the newly created replication set and are balanced.
session_cache_ops.sh --count site2
session_cache_ops.sh --count site1
Sample output:
Session cache operation script
Thu Jul 28 16:55:21 EDT 2016
------------------------------------------------------
Session Replica-set SESSION-SET4
------------------------------------------------------
Session Database : Session Count
------------------------------------------------------
session_cache : 1765
session_cache_2 : 1777
session_cache_3 : 1755
session_cache_4 : 1750
------------------------------------------------------
No of Sessions in SET4 : 7047
------------------------------------------------------
------------------------------------------------------
Session Replica-set SESSION-SET5
------------------------------------------------------
Session Database : Session Count
------------------------------------------------------
session_cache : 1772
session_cache_2 : 1811
session_cache_3 : 1738
session_cache_4 : 1714
------------------------------------------------------
No of Sessions in SET5 : 7035
------------------------------------------------------
Step 18
Secondary Key Ring Configuration: This step applies only if you are adding an additional session replication set to a new session manager server. It assumes that the existing setup has the secondary key rings configured for the existing session replication servers.
Refer to the section Secondary Key Ring Configuration in the CPS Installation Guide for VMware.
Step 19
Configure the session replication set priority from the Cluster Manager.
cd /var/qps/bin/support/mongo/; ./set_priority.sh --db session
Step 20
Verify that the replica set status and priority are set correctly by running the following command from the Cluster Manager:
diagnostics.sh --get_replica_status
|-------------------------------------------------------------------------------------------------------------|
| SESSION:set10 |
| Member-1 - 27727 : 192.168.116.33 - ARBITER - pcrfclient01a - ON-LINE - -------- - 0 |
| Member-2 - 27727 : 192.168.116.71 - PRIMARY - sessionmgr01a - ON-LINE - -------- - 5 |
| Member-3 - 27727 : 192.168.116.24 - SECONDARY - sessionmgr02a - ON-LINE - 0 sec - 4 |
| Member-4 - 27727 : 192.168.116.70 - SECONDARY - sessionmgr01b - ON-LINE - 0 sec - 3 |
| Member-5 - 27727 : 192.168.116.39 - SECONDARY - sessionmgr02b - ON-LINE - 0 sec - 2 |
|---------------------------------------------------------------------------------------------------------------------|
Note: If a member is shown in an unknown state, it is likely that the member is not accessible from one of the other members, usually an arbiter. In that case, go to that member and check its connectivity with the other members. You can also log in to mongo on that member and check its actual status.
Step 21
Run diagnostics.sh to verify whether the priority for the new replication set has been configured.
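For example, using the same option as in Step 20:
diagnostics.sh --get_replica_status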
Step 22
Add the session geo tag in the MongoDBs. Repeat these steps for both session replication sets.
For more information, refer to Session Query Restricted to Local Site during Failover.
Site1 running log: This procedure applies only if the customer has local site tagging enabled.
Note: To modify priorities, you must update the members array in the replica configuration object. The array index begins with 0. The array index value is different from the value of the replica set member's members[n]._id field in the array.
mongo sessionmgr01:27727
MongoDB shell version: 2.6.3
connecting to: sessionmgr01:27727/test
set10:PRIMARY> conf = rs.conf();
{
"_id" : "set10",
"version" : 2,
"members" : [
{
"_id" : 0,
"host" : "pcrfclient01a:27727",
"arbiterOnly" : true
},
{
"_id" : 1,
"host" : "sessionmgr01a:27727",
"priority" : 5
},
{
"_id" : 2,
"host" : "sessionmgr02a:27727",
"priority" : 4
},
{
"_id" : 3,
"host" : "sessionmgr01b:27727",
"priority" : 3
},
{
"_id" : 4,
"host" : "sessionmgr02b:27727",
"priority" : 2
}
],
"settings" : {
"heartbeatTimeoutSecs" : 1
}
}
set10:PRIMARY> conf.members[1].tags = { "sessionLocalGeoSiteTag": "Site1" }
{ "sessionLocalGeoSiteTag" : "Site1" }
set10:PRIMARY> conf.members[2].tags = { "sessionLocalGeoSiteTag": "Site1"}
{ "sessionLocalGeoSiteTag" : "Site1" }
set10:PRIMARY> conf.members[3].tags = { "sessionLocalGeoSiteTag": "Site2"}
{ "sessionLocalGeoSiteTag" : "Site2" }
set10:PRIMARY> conf.members[4].tags = { "sessionLocalGeoSiteTag": "Site2"}
{ "sessionLocalGeoSiteTag" : "Site2" }
set10:PRIMARY> rs.reconfig(conf);
{ "ok" : 1 }
set10:PRIMARY> rs.conf();
{
"_id" : "set10",
"version" : 3,
"members" : [
{
"_id" : 0,
"host" : "pcrfclient01a:27727",
"arbiterOnly" : true
},
{
"_id" : 1,
"host" : "sessionmgr01a:27727",
"priority" : 5,
"tags" : {
"sessionLocalGeoSiteTag" : "Site1"
}
},
{
"_id" : 2,
"host" : "sessionmgr02a:27727",
"priority" : 4,
"tags" : {
"sessionLocalGeoSiteTag" : "Site1"
}
},
{
"_id" : 3,
"host" : "sessionmgr01b:27727",
"priority" : 3,
"tags" : {
"sessionLocalGeoSiteTag" : "Site2"
}
},
{
"_id" : 4,
"host" : "sessionmgr02b:27727",
"priority" : 2,
"tags" : {
"sessionLocalGeoSiteTag" : "Site2"
}
}
],
"settings" : {
"heartbeatTimeoutSecs" : 1
}
}
set10:PRIMARY>
Site2 TAG configuration:
Note: To modify priorities, you must update the members array in the replica configuration object. The array index begins with 0. The array index value is different from the value of the replica set member's members[n]._id field in the array.
mongo sessionmgr01b:37727
MongoDB shell version: 2.6.3
connecting to: sessionmgr01b:37727/test
set11:PRIMARY> conf = rs.conf();
{
"_id" : "set11",
"version" : 2,
"members" : [
{
"_id" : 0,
"host" : "pcrfclient01b:47727",
"arbiterOnly" : true
},
{
"_id" : 1,
"host" : "sessionmgr01b:37727",
"priority" : 5
},
{
"_id" : 2,
"host" : "sessionmgr02b:37727",
"priority" : 4
},
{
"_id" : 3,
"host" : "sessionmgr01a:37727",
"priority" : 3
},
{
"_id" : 4,
"host" : "sessionmgr02a:37727",
"priority" : 2
}
],
"settings" : {
"heartbeatTimeoutSecs" : 1
}
}
set11:PRIMARY> conf.members[1].tags = { "sessionLocalGeoSiteTag": "Site2"}
{ "sessionLocalGeoSiteTag" : "Site2" }
set11:PRIMARY> conf.members[2].tags = { "sessionLocalGeoSiteTag": "Site2"}
{ "sessionLocalGeoSiteTag" : "Site2" }
set11:PRIMARY> conf.members[3].tags = { "sessionLocalGeoSiteTag": "Site1"}
{ "sessionLocalGeoSiteTag" : "Site1" }
set11:PRIMARY> conf.members[4].tags = { "sessionLocalGeoSiteTag": "Site1"}
{ "sessionLocalGeoSiteTag" : "Site1" }
set11:PRIMARY> rs.reconfig(conf);
{ "ok" : 1 }
set11:PRIMARY> rs.conf();
{
"_id" : "set11",
"version" : 3,
"members" : [
{
"_id" : 0,
"host" : "pcrfclient01b:47727",
"arbiterOnly" : true
},
{
"_id" : 1,
"host" : "sessionmgr01b:37727",
"priority" : 5,
"tags" : {
"sessionLocalGeoSiteTag" : "Site2"
}
},
{
"_id" : 2,
"host" : "sessionmgr02b:37727",
"priority" : 4,
"tags" : {
"sessionLocalGeoSiteTag" : "Site2"
}
},
{
"_id" : 3,
"host" : "sessionmgr01a:37727",
"priority" : 3,
"tags" : {
"sessionLocalGeoSiteTag" : "Site1"
}
},
{
"_id" : 4,
"host" : "sessionmgr02a:37727",
"priority" : 2,
"tags" : {
"sessionLocalGeoSiteTag" : "Site1"
}
}
],
"settings" : {
"heartbeatTimeoutSecs" : 1
}
}
set11:PRIMARY>
Step 23
Run diagnostics.sh to verify that the system is in a healthy state.