This document describes the steps required to replace a faulty Object Storage Disk (OSD)-Compute server in an Ultra-M setup.

This procedure applies to an OpenStack environment that uses the NEWTON release, where ESC does not manage CPAR and CPAR is installed directly on the virtual machine (VM) deployed on OpenStack.
Ultra-M is a pre-packaged and validated virtualized mobile packet core solution designed to simplify the deployment of VNFs. OpenStack is the Virtual Infrastructure Manager (VIM) for Ultra-M and consists of these node types:
The high-level architecture of Ultra-M and the components involved are shown in this image:

Note: The Ultra M 5.1.x release is considered in order to define the procedures in this document.
MoP | Method of Procedure |
OSD | Object Storage Disks |
OSPD | OpenStack Platform Director |
HDD | Hard Disk Drive |
SSD | Solid State Drive |
VIM | Virtual Infrastructure Manager |
VM | Virtual Machine |
EM | Element Manager |
UAS | Ultra Automation Services |
UUID | Universally Unique IDentifier |
Backup
Before you replace a Compute node, it is important to check the current state of your Red Hat OpenStack Platform environment. It is recommended that you check the current state in order to avoid complications while the Compute replacement process runs; this flow of replacement achieves that.
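A minimal sketch of such a health check, run as the stack user on the OSPD (this particular set of commands is an assumption, not an exhaustive checklist):

[stack@director ~]$ source stackrc
[stack@director ~]$ openstack stack list                # the overcloud stack must be in a *_COMPLETE state
[stack@director ~]$ openstack compute service list      # all nova-compute services must be enabled and up
[stack@director ~]$ openstack network agent list        # all Neutron agents must be alive
[stack@director ~]$ nova list                           # note the instances and the hosts they run on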
In case of a recovery, Cisco recommends that you take a backup of the OSPD database with the use of these steps:
[root@director ~]# mysqldump --opt --all-databases > /root/undercloud-all-databases.sql
[root@director ~]# tar --xattrs -czf undercloud-backup-`date +%F`.tar.gz /root/undercloud-all-databases.sql /etc/my.cnf.d/server.cnf /var/lib/glance/images /srv/node /home/stack
tar: Removing leading `/' from member names
This process ensures that a node can be replaced without affecting the availability of any instances.

Note: Ensure that you have a snapshot of the instance so that you can restore the VM when needed. Follow this procedure on how to take a snapshot of the VM.
[stack@director ~]$ nova list --field name,host | grep osd-compute-0
| 46b4b9eb-a1a6-425d-b886-a0ba760e6114 | AAA-CPAR-testing-instance | pod2-stack-compute-4.localdomain |
Note: In the output shown here, the first column corresponds to the Universally Unique IDentifier (UUID), the second column is the VM name, and the third column is the hostname where the VM is present. The parameters from this output are used in subsequent sections.
Step 1. Open any Secure Shell (SSH) client connected to the network and connect to the CPAR instance.

It is important not to shut down all 4 AAA instances within one site at the same time; shut them down one by one.

Step 2. In order to shut down the CPAR application, run the command:
/opt/CSCOar/bin/arserver stop
The message "Cisco Prime Access Registrar Server Agent shutdown complete." must appear.

Note: If a user left the Command Line Interface (CLI) session open, the arserver stop command does not work and this message is displayed:
ERROR: You cannot shut down Cisco Prime Access Registrar while the CLI is being used.
Current list of running CLI with process id is:
2903 /opt/CSCOar/bin/aregcmd -s
In this example, the highlighted process ID 2903 must be terminated before CPAR can be stopped. If this is the case, run this command in order to terminate the process:
kill -9 <process_id>
Then repeat Step 1.

Step 3. In order to verify that the CPAR application was indeed shut down, run the command:
/opt/CSCOar/bin/arstatus
These messages must appear:
Cisco Prime Access Registrar Server Agent not running
Cisco Prime Access Registrar GUI not running
Step 1. Enter the Horizon GUI website that corresponds to the site (city) currently being worked on.

When you access Horizon, the screen observed is the one shown in this image.

Step 2. Navigate to Project > Instances, as shown in this image.

If the user in use is CPAR, only the 4 AAA instances appear in this menu.

Step 3. Shut off only one instance at a time and repeat the whole process in this document. In order to shut down the VM, navigate to Actions > Shut Off Instance, as shown in this image, and confirm your selection.

Step 4. Validate that the instance was indeed shut down by checking Status = Shutoff and Power State = Shut Down, as shown in this image.
This step ends the CPAR shutdown process.
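As an alternative to the Horizon GUI, the same shutdown can be done from the OSPD with the OpenStack CLI. This is only a sketch; the instance name is taken from the earlier nova list output, and the assumption is that corerc (the overcloud credentials file used elsewhere in this document) is the right one for your site:

[stack@director ~]$ source corerc
[stack@director ~]$ openstack server stop AAA-CPAR-testing-instance
[stack@director ~]$ openstack server show AAA-CPAR-testing-instance | grep -E "status|power"   # expect SHUTOFF / Shutdown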
Once the CPAR VMs are shut down, the snapshots can be taken in parallel, since they belong to independent computes.

The four QCOW2 files are created in parallel.

Take a snapshot of each AAA instance. (25 minutes - 1 hour) (25 minutes for instances that used a QCOW image as the source and 1 hour for instances that used a RAW image as the source)
3. Click Create Snapshot in order to proceed with the snapshot creation (this needs to be executed on the corresponding AAA instance), as shown in this image.
4. Once the snapshot is executed, click Images and verify that all of them finish and report no problems, as shown in this image.
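For reference, a snapshot can also be triggered from the CLI. This is a minimal sketch that assumes the overcloud credentials are sourced; the snapshot name is illustrative and the instance name is the one from the earlier output:

[stack@director ~]$ openstack server image create --name AAA-CPAR-testing-instance-snapshot AAA-CPAR-testing-instance
[stack@director ~]$ openstack image list | grep AAA-CPAR-testing-instance-snapshot    # wait until the image shows as active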
5. The next step is to download the snapshot in QCOW2 format and transfer it to a remote entity, in case the OSPD is lost during this process. In order to achieve this, identify the snapshot by running the command glance image-list at the OSPD level.
[root@elospd01 stack]# glance image-list
+--------------------------------------+---------------------------+
| ID                                   | Name                      |
+--------------------------------------+---------------------------+
| 80f083cb-66f9-4fcf-8b8a-7d8965e47b1d | AAA-Temporary             |
| 22f8536b-3f3c-4bcc-ae1a-8f2ab0d8b950 | ELP1 cluman 10_09_2017    |
| 70ef5911-208e-4cac-93e2-6fe9033db560 | ELP2 cluman 10_09_2017    |
| e0b57fc9-e5c3-4b51-8b94-56cbccdf5401 | ESC-image                 |
| 92dfe18c-df35-4aa9-8c52-9c663d3f839b | lgnaaa01-sept102017       |
| 1461226b-4362-428b-bc90-0a98cbf33500 | tmobile-pcrf-13.1.1.iso   |
| 98275e15-37cf-4681-9bcc-d6ba18947d7b | tmobile-pcrf-13.1.1.qcow2 |
+--------------------------------------+---------------------------+
6. Once you identify the snapshot to be downloaded (in this case, the one marked in green), you can download it in QCOW2 format with the command glance image-download, as shown here.
[root@elospd01 stack]# glance image-download 92dfe18c-df35-4aa9-8c52-9c663d3f839b --file /tmp/AAA-CPAR-LGNoct192017.qcow2 &
7. Once the download process finishes, a compression process needs to be executed, because the snapshot can be filled with ZEROES due to processes, tasks and temporary files handled by the operating system. The command to be used for file compression is virt-sparsify.
[root@elospd01 stack]# virt-sparsify AAA-CPAR-LGNoct192017.qcow2 AAA-CPAR-LGNoct192017_compressed.qcow2
This process can take some time (around 10-15 minutes). Once finished, the resulting file is the one that needs to be transferred to an external entity, as specified in the next step.
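A minimal sketch of such a transfer over SCP (the destination host and path are placeholders, not values from this procedure):

[root@elospd01 stack]# scp AAA-CPAR-LGNoct192017_compressed.qcow2 root@<remote-backup-host>:/<backup-path>/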
Verification of the file integrity is required. In order to achieve that, run the next command and look for the corrupt attribute at the end of its output.
[root@wsospd01 tmp]# qemu-img info AAA-CPAR-LGNoct192017_compressed.qcow2
image: AAA-CPAR-LGNoct192017_compressed.qcow2
file format: qcow2
virtual size: 150G (161061273600 bytes)
disk size: 18G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
[stack@director ~]$ nova list --field name,host | grep osd-compute-0
| 46b4b9eb-a1a6-425d-b886-a0ba760e6114 | AAA-CPAR-testing-instance | pod2-stack-compute-4.localdomain |
Note: In the output shown here, the first column corresponds to the Universally Unique IDentifier (UUID), the second column is the VM name, and the third column is the hostname where the VM is present. The parameters from this output are used in subsequent sections.
[heat-admin@pod2-stack-osd-compute-0 ~]$ sudo ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    13393G     11088G        2305G         17.21
POOLS:
    NAME        ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd         0          0         0         3635G           0
    metrics     1      3452M      0.09         3635G      219421
    images      2       138G      3.67         3635G       43127
    backups     3          0         0         3635G           0
    volumes     4       139G      3.70         3635G       36581
    vms         5       490G     11.89         3635G      126247
[heat-admin@pod2-stack-osd-compute-0 ~]$ sudo ceph osd tree
ID WEIGHT   TYPE NAME                          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 13.07996 root default
-2  4.35999     host pod2-stack-osd-compute-0
 0  1.09000         osd.0                           up  1.00000          1.00000
 3  1.09000         osd.3                           up  1.00000          1.00000
 6  1.09000         osd.6                           up  1.00000          1.00000
 9  1.09000         osd.9                           up  1.00000          1.00000
-3  4.35999     host pod2-stack-osd-compute-1
 1  1.09000         osd.1                           up  1.00000          1.00000
 4  1.09000         osd.4                           up  1.00000          1.00000
 7  1.09000         osd.7                           up  1.00000          1.00000
10  1.09000         osd.10                          up  1.00000          1.00000
-4  4.35999     host pod2-stack-osd-compute-2
 2  1.09000         osd.2                           up  1.00000          1.00000
 5  1.09000         osd.5                           up  1.00000          1.00000
 8  1.09000         osd.8                           up  1.00000          1.00000
11  1.09000         osd.11                          up  1.00000          1.00000
[heat-admin@pod2-stack-osd-compute-0 ~]$ systemctl list-units *ceph*
UNIT                               LOAD   ACTIVE SUB     DESCRIPTION
var-lib-ceph-osd-ceph\x2d0.mount   loaded active mounted /var/lib/ceph/osd/ceph-0
var-lib-ceph-osd-ceph\x2d3.mount   loaded active mounted /var/lib/ceph/osd/ceph-3
var-lib-ceph-osd-ceph\x2d6.mount   loaded active mounted /var/lib/ceph/osd/ceph-6
var-lib-ceph-osd-ceph\x2d9.mount   loaded active mounted /var/lib/ceph/osd/ceph-9
ceph-osd@0.service                 loaded active running Ceph object storage daemon
ceph-osd@3.service                 loaded active running Ceph object storage daemon
ceph-osd@6.service                 loaded active running Ceph object storage daemon
ceph-osd@9.service                 loaded active running Ceph object storage daemon
system-ceph\x2ddisk.slice          loaded active active  system-ceph\x2ddisk.slice
system-ceph\x2dosd.slice           loaded active active  system-ceph\x2dosd.slice
ceph-mon.target                    loaded active active  ceph target allowing to start/stop all ceph-mon@.service instances at once
ceph-osd.target                    loaded active active  ceph target allowing to start/stop all ceph-osd@.service instances at once
ceph-radosgw.target                loaded active active  ceph target allowing to start/stop all ceph-radosgw@.service instances at once
ceph.target                        loaded active active  ceph target allowing to start/stop all ceph*@.service instances at once

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.
14 loaded units listed. Pass --all to see loaded but inactive units, too. To show all installed unit files use 'systemctl list-unit-files'.
[heat-admin@pod2-stack-osd-compute-0 ~]# systemctl disable ceph-osd@0
[heat-admin@pod2-stack-osd-compute-0 ~]# systemctl stop ceph-osd@0
[heat-admin@pod2-stack-osd-compute-0 ~]# ceph osd out 0
[heat-admin@pod2-stack-osd-compute-0 ~]# ceph osd crush remove osd.0
[heat-admin@pod2-stack-osd-compute-0 ~]# ceph auth del osd.0
[heat-admin@pod2-stack-osd-compute-0 ~]# ceph osd rm 0
[heat-admin@pod2-stack-osd-compute-0 ~]# umount /var/lib/ceph/osd/ceph-0
[heat-admin@pod2-stack-osd-compute-0 ~]# rm -rf /var/lib/ceph/osd/ceph-0
Or,
[heat-admin@pod2-stack-osd-compute-0 ~]$ sudo ls /var/lib/ceph/osd
ceph-0  ceph-3  ceph-6  ceph-9
[heat-admin@pod2-stack-osd-compute-0 ~]$ /bin/sh clean.sh
[heat-admin@pod2-stack-osd-compute-0 ~]$ cat clean.sh
#!/bin/sh
set -x
CEPH=`sudo ls /var/lib/ceph/osd`
for c in $CEPH
do
  i=`echo $c |cut -d'-' -f2`
  sudo systemctl disable ceph-osd@$i || (echo "error rc:$?"; exit 1)
  sleep 2
  sudo systemctl stop ceph-osd@$i || (echo "error rc:$?"; exit 1)
  sleep 2
  sudo ceph osd out $i || (echo "error rc:$?"; exit 1)
  sleep 2
  sudo ceph osd crush remove osd.$i || (echo "error rc:$?"; exit 1)
  sleep 2
  sudo ceph auth del osd.$i || (echo "error rc:$?"; exit 1)
  sleep 2
  sudo ceph osd rm $i || (echo "error rc:$?"; exit 1)
  sleep 2
  sudo umount /var/lib/ceph/osd/$c || (echo "error rc:$?"; exit 1)
  sleep 2
  sudo rm -rf /var/lib/ceph/osd/$c || (echo "error rc:$?"; exit 1)
  sleep 2
done
sudo ceph osd tree
Once all of the OSD processes have been migrated/deleted, the node can be removed from the overcloud.

Note: When CEPH is removed, the VNF HD RAID goes into a Degraded state, but the hard disk must still be accessible.
Graceful Power Off
[stack@director ~]$ nova stop aaa2-21
Request to stop server aaa2-21 has been accepted.

[stack@director ~]$ nova list
+--------------------------------------+---------------------------+---------+------------+-------------+------------------------------------------------------------------------------------------------------------+
| ID                                   | Name                      | Status  | Task State | Power State | Networks                                                                                                     |
+--------------------------------------+---------------------------+---------+------------+-------------+------------------------------------------------------------------------------------------------------------+
| 46b4b9eb-a1a6-425d-b886-a0ba760e6114 | AAA-CPAR-testing-instance | ACTIVE  | -          | Running     | tb1-mgmt=172.16.181.14, 10.225.247.233; radius-routable1=10.160.132.245; diameter-routable1=10.160.132.231  |
| 3bc14173-876b-4d56-88e7-b890d67a4122 | aaa2-21                   | SHUTOFF | -          | Shutdown    | diameter-routable1=10.160.132.230; radius-routable1=10.160.132.248; tb1-mgmt=172.16.181.7, 10.225.247.234   |
| f404f6ad-34c8-4a5f-a757-14c8ed7fa30e | aaa21june                 | ACTIVE  | -          | Running     | diameter-routable1=10.160.132.233; radius-routable1=10.160.132.244; tb1-mgmt=172.16.181.10                  |
+--------------------------------------+---------------------------+---------+------------+-------------+------------------------------------------------------------------------------------------------------------+
The steps mentioned in this section are common, irrespective of the VMs hosted in the compute node.

Delete the OSD-Compute node from the service list.
[stack@director ~]$ openstack compute service list |grep osd-compute
| 135 | nova-compute | pod2-stack-osd-compute-1.localdomain | AZ-esc2 | enabled | up | 2018-06-22T11:05:22.000000 |
| 150 | nova-compute | pod2-stack-osd-compute-2.localdomain | nova    | enabled | up | 2018-06-22T11:05:17.000000 |
| 153 | nova-compute | pod2-stack-osd-compute-0.localdomain | AZ-esc1 | enabled | up | 2018-06-22T11:05:25.000000 |
[stack@director ~]$ openstack compute service delete 150
Delete the Neutron agents.
[stack@director ~]$ openstack network agent list | grep osd-compute-0
| eaecff95-b163-4cde-a99d-90bd26682b22 | Open vSwitch agent | pod2-stack-osd-compute-0.localdomain | None | True | UP | neutron-openvswitch-agent |
[stack@director ~]$ openstack network agent delete eaecff95-b163-4cde-a99d-90bd26682b22
Delete from the Ironic database.
[root@director ~]# nova list | grep osd-compute-0
| 6810c884-1cb9-4321-9a07-192443920f1f | pod2-stack-osd-compute-0 | ACTIVE | - | Running | ctlplane=192.200.0.109 |

[root@al03-pod2-ospd ~]$ nova delete 6810c884-1cb9-4321-9a07-192443920f1f
[root@director ~]# source stackrc
[root@director ~]# nova show pod2-stack-osd-compute-0 | grep hypervisor
| OS-EXT-SRV-ATTR:hypervisor_hostname | 05ceb513-e159-417d-a6d6-cbbcc4b167d7
[stack@director ~]$ ironic node-delete 05ceb513-e159-417d-a6d6-cbbcc4b167d7
[stack@director ~]$ ironic node-list
The deleted node must no longer be listed in the ironic node-list output.
Delete from the Overcloud
openstack overcloud node delete --templates -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml -e /home/stack/custom-templates/network.yaml -e /home/stack/custom-templates/ceph.yaml -e /home/stack/custom-templates/compute.yaml -e /home/stack/custom-templates/layout.yaml -e /home/stack/custom-templates/layout.yaml --stack <stack-name> <UUID>
[stack@director ~]$ source stackrc
[stack@director ~]$ /bin/sh delete_node.sh
+ openstack overcloud node delete --templates -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml -e /home/stack/custom-templates/network.yaml -e /home/stack/custom-templates/ceph.yaml -e /home/stack/custom-templates/compute.yaml -e /home/stack/custom-templates/layout.yaml -e /home/stack/custom-templates/layout.yaml --stack pod2-stack 7439ea6c-3a88-47c2-9ff5-0a4f24647444
Deleting the following nodes from stack pod2-stack:
- 7439ea6c-3a88-47c2-9ff5-0a4f24647444
Started Mistral Workflow. Execution ID: 4ab4508a-c1d5-4e48-9b95-ad9a5baa20ae

real    0m52.078s
user    0m0.383s
sys     0m0.086s
[stack@director ~]$ openstack stack list
+--------------------------------------+------------+-----------------+----------------------+----------------------+
| ID                                   | Stack Name | Stack Status    | Creation Time        | Updated Time         |
+--------------------------------------+------------+-----------------+----------------------+----------------------+
| 5df68458-095d-43bd-a8c4-033e68ba79a0 | pod2-stack | UPDATE_COMPLETE | 2018-05-08T21:30:06Z | 2018-05-08T20:42:48Z |
+--------------------------------------+------------+-----------------+----------------------+----------------------+
Install the New Compute Node
Navigate to Storage > Cisco 12G SAS Modular Raid Controller (SLOT-HBA) > Physical Drive Info, as shown in this image.

Navigate to Storage > Cisco 12G SAS Modular Raid Controller (SLOT-HBA) > Controller Info > Create Virtual Drive from Unused Physical Drives, as shown in this image.

Navigate to Admin > Communication Services > Communication Services, as shown in this image.

Navigate to Compute > BIOS > Configure BIOS > Advanced > Processor Configuration, as shown in this image.
JOURNAL > From physical drive number 3
OSD1    > From physical drive number 7
OSD2    > From physical drive number 8
OSD3    > From physical drive number 9
OSD4    > From physical drive number 10
Note: The images shown here and the configuration steps mentioned in this section refer to firmware version 3.0(3e); there can be slight variations if you work on other versions.
Add the New OSD-Compute Node to the Overcloud

The steps mentioned in this section are common, irrespective of the VMs hosted by the compute node.

Create an add_node.json file with only the details of the new compute server to be added. Ensure that the index number for the new compute server has not been used before. Typically, increment the next highest compute value.

Example: The previous highest was osd-compute-17, therefore osd-compute-18 was created in the case of a 2-VNF system.

Note: Be mindful of the JSON format.
[stack@director ~]$ cat add_node.json
{
    "nodes":[
        {
            "mac":[
                "<MAC_ADDRESS>"
            ],
            "capabilities": "node:osd-compute-3,boot_option:local",
            "cpu":"24",
            "memory":"256000",
            "disk":"3000",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"<PASSWORD>",
            "pm_addr":"192.100.0.5"
        }
    ]
}
[stack@director ~]$ openstack baremetal import --json add_node.json
Started Mistral Workflow. Execution ID: 78f3b22c-5c11-4d08-a00f-8553b09f497d
Successfully registered node UUID 7eddfa87-6ae6-4308-b1d2-78c98689a56e
Started Mistral Workflow. Execution ID: 33a68c16-c6fd-4f2a-9df9-926545f2127e
Successfully set all nodes to available.
[stack@director ~]$ openstack baremetal node manage 7eddfa87-6ae6-4308-b1d2-78c98689a56e

[stack@director ~]$ ironic node-list |grep 7eddfa87
| 7eddfa87-6ae6-4308-b1d2-78c98689a56e | None | None | power off | manageable | False |

[stack@director ~]$ openstack overcloud node introspect 7eddfa87-6ae6-4308-b1d2-78c98689a56e --provide
Started Mistral Workflow. Execution ID: e320298a-6562-42e3-8ba6-5ce6d8524e5c
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: c4a90d7b-ebf2-4fcb-96bf-e3168aa69dc9
Successfully set all nodes to available.

[stack@director ~]$ ironic node-list |grep available
| 7eddfa87-6ae6-4308-b1d2-78c98689a56e | None | None | power off | available | False |
OsdComputeIP:
    internal_api:
        - 11.120.0.43
        - 11.120.0.44
        - 11.120.0.45
        - 11.120.0.43     <<< take osd-compute-0 .43 and add here
    tenant:
        - 11.117.0.43
        - 11.117.0.44
        - 11.117.0.45
        - 11.117.0.43     << and here
    storage:
        - 11.118.0.43
        - 11.118.0.44
        - 11.118.0.45
        - 11.118.0.43     << and here
    storage_mgmt:
        - 11.119.0.43
        - 11.119.0.44
        - 11.119.0.45
        - 11.119.0.43     << and here
[stack@director ~]$ ./deploy.sh
++ openstack overcloud deploy --templates -r /home/stack/custom-templates/custom-roles.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml -e /home/stack/custom-templates/network.yaml -e /home/stack/custom-templates/ceph.yaml -e /home/stack/custom-templates/compute.yaml -e /home/stack/custom-templates/layout.yaml --stack ADN-ultram --debug --log-file overcloudDeploy_11_06_17__16_39_26.log --ntp-server 172.24.167.109 --neutron-flat-networks phys_pcie1_0,phys_pcie1_1,phys_pcie4_0,phys_pcie4_1 --neutron-network-vlan-ranges datacentre:1001:1050 --neutron-disable-tunneling --verbose --timeout 180
…
Starting new HTTP connection (1): 192.200.0.1
"POST /v2/action_executions HTTP/1.1" 201 1695
HTTP POST http://192.200.0.1:8989/v2/action_executions 201
Overcloud Endpoint: http://10.1.2.5:5000/v2.0
Overcloud Deployed
clean_up DeployOvercloud:
END return value: 0

real    38m38.971s
user    0m3.605s
sys     0m0.466s
[stack@director ~]$ openstack stack list
+--------------------------------------+------------+-----------------+----------------------+----------------------+
| ID                                   | Stack Name | Stack Status    | Creation Time        | Updated Time         |
+--------------------------------------+------------+-----------------+----------------------+----------------------+
| 5df68458-095d-43bd-a8c4-033e68ba79a0 | ADN-ultram | UPDATE_COMPLETE | 2017-11-02T21:30:06Z | 2017-11-06T21:40:58Z |
+--------------------------------------+------------+-----------------+----------------------+----------------------+
[stack@director ~]$ source stackrc
[stack@director ~]$ nova list |grep osd-compute-3
| 0f2d88cd-d2b9-4f28-b2ca-13e305ad49ea | pod1-osd-compute-3 | ACTIVE | - | Running | ctlplane=192.200.0.117 |

[stack@director ~]$ source corerc
[stack@director ~]$ openstack hypervisor list |grep osd-compute-3
| 63 | pod1-osd-compute-3.localdomain |
[heat-admin@pod1-osd-compute-3 ~]$ sudo ceph -s
    cluster eb2bb192-b1c9-11e6-9205-525400330666
     health HEALTH_WARN
            223 pgs backfill_wait
            4 pgs backfilling
            41 pgs degraded
            227 pgs stuck unclean
            41 pgs undersized
            recovery 45229/1300136 objects degraded (3.479%)
            recovery 525016/1300136 objects misplaced (40.382%)
     monmap e1: 3 mons at {Pod1-controller-0=11.118.0.40:6789/0,Pod1-controller-1=11.118.0.41:6789/0,Pod1-controller-2=11.118.0.42:6789/0}
            election epoch 58, quorum 0,1,2 Pod1-controller-0,Pod1-controller-1,Pod1-controller-2
     osdmap e986: 12 osds: 12 up, 12 in; 225 remapped pgs
            flags sortbitwise,require_jewel_osds
      pgmap v781746: 704 pgs, 6 pools, 533 GB data, 344 kobjects
            1553 GB used, 11840 GB / 13393 GB avail
            45229/1300136 objects degraded (3.479%)
            525016/1300136 objects misplaced (40.382%)
                 477 active+clean
                 186 active+remapped+wait_backfill
                  37 active+undersized+degraded+remapped+wait_backfill
                   4 active+undersized+degraded+remapped+backfilling
After the backfill and recovery process completes (this can take a few minutes), the cluster returns to a HEALTH_OK state:

[heat-admin@pod1-osd-compute-3 ~]$ sudo ceph -s
    cluster eb2bb192-b1c9-11e6-9205-525400330666
     health HEALTH_OK
     monmap e1: 3 mons at {Pod1-controller-0=11.118.0.40:6789/0,Pod1-controller-1=11.118.0.41:6789/0,Pod1-controller-2=11.118.0.42:6789/0}
            election epoch 58, quorum 0,1,2 Pod1-controller-0,Pod1-controller-1,Pod1-controller-2
     osdmap e1398: 12 osds: 12 up, 12 in
            flags sortbitwise,require_jewel_osds
      pgmap v784311: 704 pgs, 6 pools, 533 GB data, 344 kobjects
            1599 GB used, 11793 GB / 13393 GB avail
                 704 active+clean
  client io 8168 kB/s wr, 0 op/s rd, 32 op/s wr

[heat-admin@pod1-osd-compute-3 ~]$ sudo ceph osd tree
ID WEIGHT   TYPE NAME                          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 13.07996 root default
-2        0     host pod1-osd-compute-0
-3  4.35999     host pod1-osd-compute-2
 1  1.09000         osd.1                           up  1.00000          1.00000
 4  1.09000         osd.4                           up  1.00000          1.00000
 7  1.09000         osd.7                           up  1.00000          1.00000
10  1.09000         osd.10                          up  1.00000          1.00000
-4  4.35999     host pod1-osd-compute-1
 2  1.09000         osd.2                           up  1.00000          1.00000
 5  1.09000         osd.5                           up  1.00000          1.00000
 8  1.09000         osd.8                           up  1.00000          1.00000
11  1.09000         osd.11                          up  1.00000          1.00000
-5  4.35999     host pod1-osd-compute-3
 0  1.09000         osd.0                           up  1.00000          1.00000
 3  1.09000         osd.3                           up  1.00000          1.00000
 6  1.09000         osd.6                           up  1.00000          1.00000
 9  1.09000         osd.9                           up  1.00000          1.00000
It is possible to redeploy the previous instance with the snapshot taken in the previous steps.

Step 1. (Optional) If there is no previous VM snapshot available, connect to the OSPD node where the backup was sent and SFTP the backup back to its original OSPD node. Use sftp root@x.x.x.x, where x.x.x.x is the IP of the original OSPD, and save the snapshot file in the /tmp directory.

Step 2. Connect to the OSPD node where the instance is redeployed.

Source the environment variables with this command:
# source /home/stack/pod1-stackrc-Core-CPAR
Step 3. In order to use the snapshot as an image, it must be uploaded to Horizon as such. Run the next command to do so.
#glance image-create --file AAA-CPAR-Date-snapshot.qcow2 --container-format bare --disk-format qcow2 --name AAA-CPAR-Date-snapshot
The process can be seen in Horizon, as shown in this image.

Step 4. In Horizon, navigate to Project > Instances and click Launch Instance, as shown in this image.

Step 5. Enter the Instance Name and choose the Availability Zone, as shown in this image.

Step 6. In the Source tab, choose the image in order to create the instance. In the Select Boot Source menu, select image; a list of images is displayed. Choose the one that was previously uploaded by clicking its + sign, as shown in this image.

Step 7. In the Flavor tab, choose the AAA flavor by clicking the + sign, as shown in this image.

Step 8. Finally, navigate to the Network tab and choose the networks that the instance needs by clicking the + sign. In this case, select diameter-routable1, radius-routable1 and tb1-mgmt, as shown in this image.

Step 9. Finally, click Launch Instance in order to create it. The progress can be monitored in Horizon, as shown in this image.
After a few minutes, the instance is completely deployed and ready for use.
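For reference, the equivalent launch can be sketched from the CLI. This is only an outline: the image and network names are the ones used in this procedure, while the flavor name, network IDs and instance name are placeholders that must be adapted to the actual environment:

[stack@director ~]$ openstack server create --image AAA-CPAR-Date-snapshot --flavor <aaa-flavor> --nic net-id=<diameter-routable1-id> --nic net-id=<radius-routable1-id> --nic net-id=<tb1-mgmt-id> <instance-name>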
A floating IP address is a routable address, which means that it is reachable from outside of the Ultra M/OpenStack architecture and it is able to communicate with other nodes in the network.
Step 1. In the Horizon top menu, navigate to Admin > Floating IPs.

Step 2. Click Allocate IP to Project.

Step 3. In the Allocate Floating IP window, select the Pool to which the new floating IP belongs, the Project where it is going to be assigned, and the new Floating IP Address itself.

For example:

Step 4. Click Allocate Floating IP.

Step 5. In the Horizon top menu, navigate to Project > Instances.

Step 6. In the Action column, click the arrow that points down in the Create Snapshot button; a menu must be displayed. Select the Associate Floating IP option.

Step 7. Select the corresponding floating IP address intended to be used in the IP Address field, and choose the corresponding management interface (eth0) of the new instance where this floating IP is going to be assigned in the Port to be associated. Refer to the next image as an example of this procedure.
Step 8. Finally, click Associate.
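For reference, the same allocation and association can be sketched with the OpenStack CLI (the pool network name, instance name and allocated address here are placeholders, not values from this procedure):

[stack@director ~]$ openstack floating ip create <external-pool-network>
[stack@director ~]$ openstack server add floating ip <instance-name> <allocated-floating-ip>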
Step 1. In the Horizon top menu, navigate to Project > Instances.

Step 2. Click the name of the instance/VM that was created in the Launch a New Instance section.

Step 3. Click Console. This displays the CLI of the VM.

Step 4. Once the CLI is displayed, enter the proper login credentials:

Username: root

Password: cisco123, as shown in this image.
Step 5. In the CLI, run the command vi /etc/ssh/sshd_config in order to edit the SSH configuration.

Step 6. Once the SSH configuration file is open, press I in order to edit the file. Then look for the section shown here and change the first line from PasswordAuthentication no to PasswordAuthentication yes.

Step 7. Press ESC and enter :wq! in order to save the sshd_config file changes.
Step 8. Run the command service sshd restart.
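Steps 5 through 8 can also be condensed into a non-interactive sketch, under the assumption that the PasswordAuthentication no line is present exactly as described above (the prompt hostname is a placeholder):

[root@<instance> ~]# sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
[root@<instance> ~]# service sshd restart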
Step 9. In order to test that the SSH configuration changes have been correctly applied, open any SSH client and try to establish a remote secure connection with the floating IP assigned to the instance (that is, 10.145.0.249) and the user root.
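For example, from any host that can reach the floating IP:

ssh root@10.145.0.249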
Step 1. Open an SSH session with the IP address of the corresponding VM/server where the application is installed, as shown in this image.

Follow these steps once the activity has been completed and the CPAR services can be re-established in the site that was shut down.

Step 1. Log back into Horizon and navigate to Project > Instances > Start Instance.

Step 2. Verify that the status of the instance is Active and the power state is Running, as shown in this image.
Step 1. Run the command /opt/CSCOar/bin/arstatus at the OS level:
[root@wscaaa04 ~]# /opt/CSCOar/bin/arstatus
Cisco Prime AR RADIUS server running       (pid: 24834)
Cisco Prime AR Server Agent running        (pid: 24821)
Cisco Prime AR MCD lock manager running    (pid: 24824)
Cisco Prime AR MCD server running          (pid: 24833)
Cisco Prime AR GUI running                 (pid: 24836)
SNMP Master Agent running                  (pid: 24835)
[root@wscaaa04 ~]#
Step 2. Run the command /opt/CSCOar/bin/aregcmd at the OS level and enter the admin credentials. Verify that CPAR Health is 10/10 and then exit the CPAR CLI.
[root@aaa02 logs]# /opt/CSCOar/bin/aregcmd
Cisco Prime Access Registrar 7.3.0.1 Configuration Utility
Copyright (C) 1995-2017 by Cisco Systems, Inc.  All rights reserved.
Cluster:
User: admin
Passphrase:
Logging in to localhost

[ //localhost ]
    LicenseInfo = PAR-NG-TPS 7.2(100TPS:)
                  PAR-ADD-TPS 7.2(2000TPS:)
                  PAR-RDDR-TRX 7.2()
                  PAR-HSS 7.2()
    Radius/
    Administrators/

Server 'Radius' is Running, its health is 10 out of 10

--> exit
Step 3. Run the command netstat | grep diameter and verify that all DRA connections are established.

The output mentioned here is for an environment where Diameter links are expected. If fewer links are displayed, this represents a disconnection from the DRA that needs to be analyzed.
[root@aa02 logs]# netstat | grep diameter
tcp        0      0 aaa02.aaa.epc.:77 mp1.dra01.d:diameter ESTABLISHED
tcp        0      0 aaa02.aaa.epc.:36 tsa6.dra01:diameter  ESTABLISHED
tcp        0      0 aaa02.aaa.epc.:47 mp2.dra01.d:diameter ESTABLISHED
tcp        0      0 aaa02.aaa.epc.:07 tsa5.dra01:diameter  ESTABLISHED
tcp        0      0 aaa02.aaa.epc.:08 np2.dra01.d:diameter ESTABLISHED
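A quick way to count only the established Diameter links, so the number can be compared against what is expected for the site (a minimal sketch, not part of the original output above):

[root@aa02 logs]# netstat | grep diameter | grep -c ESTABLISHED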
Step 4. Check that the TPS log shows requests handled by CPAR. The values highlighted represent the TPS, and those are the ones that need attention.

The value of TPS must not exceed 1500.
[root@wscaaa04 ~]# tail -f /opt/CSCOar/logs/tps-11-21-2017.csv
11-21-2017,23:57:35,263,0
11-21-2017,23:57:50,237,0
11-21-2017,23:58:05,237,0
11-21-2017,23:58:20,257,0
11-21-2017,23:58:35,254,0
11-21-2017,23:58:50,248,0
11-21-2017,23:59:05,272,0
11-21-2017,23:59:20,243,0
11-21-2017,23:59:35,244,0
11-21-2017,23:59:50,233,0
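If it helps to check the whole file rather than watch it live, the peak of the third (TPS) column can be extracted with a small sketch like this (the column layout is assumed from the sample above):

[root@wscaaa04 ~]# awk -F, 'BEGIN{max=0} {if ($3+0 > max) max=$3+0} END{print "max TPS:", max}' /opt/CSCOar/logs/tps-11-21-2017.csv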
Step 5. Look for any "error" or "alarm" messages in name_radius_1_log:
[root@aaa02 logs]# grep -E "error|alarm" name_radius_1_log
Step 6. In order to verify the amount of memory used by the CPAR process, run the command:
top | grep radius
[root@sfraaa02 ~]# top | grep radius
27008 root      20   0 20.228g 2.413g  11408 S 128.3  7.7 1165:41 radius
This highlighted value must be lower than 7 GB, which is the maximum allowed at the application level.
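A non-interactive sketch of the same check, which prints the resident memory of the radius process in GB (it assumes the process name is radius, as in the output above):

[root@sfraaa02 ~]# ps -C radius -o rss= | awk '{sum+=$1} END{printf "%.2f GB\n", sum/1024/1024}'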