This document describes the steps required to replace the faulty components mentioned here in an Ultra-M setup that hosts Cisco Policy Suite (CPS) Virtual Network Functions (VNFs).

Contributed by Nitesh Bansal, Cisco Advanced Services.

Ultra-M is a pre-packaged and validated virtualized solution designed to simplify the deployment of VNFs. OpenStack is the Virtualized Infrastructure Manager (VIM) for Ultra-M and consists of these node types: Compute, OSD-Compute, Controller, and OpenStack Platform Director (OSPD).

Before you replace a faulty component, it is important to check the current state of your Red Hat OpenStack Platform environment. It is recommended that you check the current state in order to avoid complications when the replacement process begins.

For recovery, Cisco recommends that you take a backup of the OSPD database with these steps:

[root@director ~]# mysqldump --opt --all-databases > /root/undercloud-all-databases.sql
[root@director ~]# tar --xattrs -czf undercloud-backup-`date +%F`.tar.gz /root/undercloud-all-databases.sql /etc/my.cnf.d/server.cnf /var/lib/glance/images /srv/node /home/stack
tar: Removing leading `/' from member names
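As a quick sanity check, you can confirm that the archive was created and is readable. This is a minimal sketch; it assumes the default file name produced by the command above and a single matching archive:

[root@director ~]# ls -lh undercloud-backup-*.tar.gz
[root@director ~]# tar -tzf undercloud-backup-*.tar.gz | head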
This process ensures that a node can be replaced without impact to the availability of any instances.

Note: If the server is the Controller node, proceed to the Controller section; otherwise, continue with the next section.
VNF | Virtual Network Function
PD | Policy Director (Load Balancer)
PS | Policy Server (pcrfclient)
ESC | Elastic Services Controller
MOP | Method of Procedure
OSD | Object Storage Disk
HDD | Hard Disk Drive
SSD | Solid State Drive
VIM | Virtual Infrastructure Manager
VM | Virtual Machine
SM | Session Manager
QNS | Quantum Name Server
UUID | Universally Unique Identifier
Compute/OSD-Compute: a compute can host multiple types of VMs. Identify all of them and proceed with the individual steps for the particular bare-metal node and the specific VM names hosted on this compute:

[stack@director ~]$ nova list --field name,host | grep compute-10
| 49ac5f22-469e-4b84-badc-031083db0533 | SVS1-tmo_cm_0_e3ac7841-7f21-45c8-9f86-3524541d6634 | pod1-compute-10.localdomain |
| 49ac5f22-469e-4b84-badc-031083db0533 | SVS1-tmo_sm-s3_0_05966301-bd95-4071-817a-0af43757fc88 | pod1-compute-10.localdomain |
Step 1. Create a snapshot and FTP the file to another location outside the server or, if possible, off the rack itself.

openstack image create --poll <vm-name> <snapshot-name>
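Once the snapshot is created, you can confirm that the image is active and pull a local copy to transfer off the rack. This is a minimal sketch; <snapshot-name> and the /tmp path are placeholders, not names from this deployment:

openstack image list | grep <snapshot-name>
openstack image save --file /tmp/<snapshot-name>.qcow2 <snapshot-name>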
Step 2. Stop the VM from the ESC.

/opt/cisco/esc/esc-confd/esc-cli/esc_nc_cli vm-action STOP <CM vm-name>

Step 3. Verify that the VM is stopped.

[admin@esc ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
[admin@esc ~]$ ./esc_nc_cli get esc_datamodel | egrep --color "<state>|<vm_name>|<vm_id>|<deployment_name>"
<snip>
<state>SERVICE_ACTIVE_STATE</state>
SVS1-tmo_cm_0_e3ac7841-7f21-45c8-9f86-3524541d6634
VM_SHUTOFF_STATE
Step 1. Log in to the active LB and stop the services as shown:

service corosync restart

service monit stop
service qns stop
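Before you stop the VM from the ESC, you can confirm that no monit or qns processes remain. This is a generic process check, not a CPS-specific command:

ps -ef | egrep "monit|qns" | grep -v grep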
Step 2. From the ESC master:

/opt/cisco/esc/esc-confd/esc-cli/esc_nc_cli vm-action STOP <Standby PD vm-name>

Step 3. Verify that the VM is stopped.

[admin@esc ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
[admin@esc ~]$ ./esc_nc_cli get esc_datamodel | egrep --color "<state>|<vm_name>|<vm_id>|<deployment_name>"
<snip>
<state>SERVICE_ACTIVE_STATE</state>
SVS1-tmo_cm_0_e3ac7841-7f21-45c8-9f86-3524541d6634
VM_SHUTOFF_STATE
Step 1. Log in to the standby LB and stop the services.

service monit stop
service qns stop

Step 2. From the ESC master:

/opt/cisco/esc/esc-confd/esc-cli/esc_nc_cli vm-action STOP <Standby PD vm-name>

Step 3. Verify that the VM is stopped.

[admin@esc ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
[admin@esc ~]$ ./esc_nc_cli get esc_datamodel | egrep --color "<state>|<vm_name>|<vm_id>|<deployment_name>"
<snip>
<state>SERVICE_ACTIVE_STATE</state>
SVS1-tmo_cm_0_e3ac7841-7f21-45c8-9f86-3524541d6634
VM_SHUTOFF_STATE
Step 1. Stop the services:

service monit stop
service qns stop

Step 2. From the ESC master:

/opt/cisco/esc/esc-confd/esc-cli/esc_nc_cli vm-action STOP <PS vm-name>

Step 3. Verify that the VM is stopped.

[admin@esc ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
[admin@esc ~]$ ./esc_nc_cli get esc_datamodel | egrep --color "<state>|<vm_name>|<vm_id>|<deployment_name>"
<snip>
<state>SERVICE_ACTIVE_STATE</state>
SVS1-tmo_cm_0_e3ac7841-7f21-45c8-9f86-3524541d6634
VM_SHUTOFF_STATE
For graceful shutdown of the SM VM:

Step 1. Stop all the mongo services present in the session manager:

[root@sessionmg01 ~]# cd /etc/init.d
[root@sessionmg01 init.d]# ls -l sessionmgr*
[root@sessionmg01 ~]# /etc/init.d/sessionmgr-27717 stop
Stopping mongod: [ OK ]
[root@sessionmg01 ~]# /etc/init.d/sessionmgr-27718 stop
Stopping mongod: [ OK ]
[root@sessionmg01 ~]# /etc/init.d/sessionmgr-27719 stop
Stopping mongod: [ OK ]
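Before you stop the VM, you can confirm that no mongod instances remain on the session manager (each sessionmgr-<port> script controls one mongod). This is a generic process check; it should return no output:

[root@sessionmg01 ~]# ps -ef | grep mongod | grep -v grep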
Step 2. From the ESC master:

/opt/cisco/esc/esc-confd/esc-cli/esc_nc_cli vm-action STOP <SM vm-name>

Step 3. Verify that the VM is stopped.

[admin@esc ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
[admin@esc ~]$ ./esc_nc_cli get esc_datamodel | egrep --color "<state>|<vm_name>|<vm_id>|<deployment_name>"
<snip>
<state>SERVICE_ACTIVE_STATE</state>
SVS1-tmo_cm_0_e3ac7841-7f21-45c8-9f86-3524541d6634
VM_SHUTOFF_STATE
Step 1. Check whether the policy SVN is in sync with this command. If a value is returned, SVN is already in sync, there is no need to sync it from PCRFCLIENT02, and you can skip the next step. Recovery from the last backup remains available if needed.

/usr/bin/svn propget svn:sync-from-url --revprop -r0 http://pcrfclient01/repos

Step 2. Re-establish SVN master/slave synchronization between pcrfclient01 and pcrfclient02, with pcrfclient01 as the master, by executing this series of commands on PCRFCLIENT01:

/bin/rm -fr /var/www/svn/repos
/usr/bin/svnadmin create /var/www/svn/repos
/usr/bin/svn propset --revprop -r0 svn:sync-last-merged-rev 0 http://pcrfclient02/repos-proxy-sync
/usr/bin/svnadmin setuuid /var/www/svn/repos/ "Enter the UUID captured in step 2"
/etc/init.d/vm-init-client
/var/qps/bin/support/recover_svn_sync.sh
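To confirm that the synchronization is re-established, you can re-run the check from step 1; a returned value indicates the sync is healthy:

/usr/bin/svn propget svn:sync-from-url --revprop -r0 http://pcrfclient01/repos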
Step 3. Take a backup of SVN in the Cluster Manager:
config_br.py -a export --svn /mnt/backup/svn_backup_pcrfclient.tgz
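You can then verify that the backup archive was written and list its contents; this uses standard tar options against the file name from the command above:

ls -lh /mnt/backup/svn_backup_pcrfclient.tgz
tar -tzf /mnt/backup/svn_backup_pcrfclient.tgz | head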
Step 4. Shut down the services on the pcrfclient:

service monit stop
service qns stop
Step 5. From the ESC master:

/opt/cisco/esc/esc-confd/esc-cli/esc_nc_cli vm-action STOP <pcrfclient vm-name>
Step 6. Verify that the VM is stopped.

[admin@esc ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
[admin@esc ~]$ ./esc_nc_cli get esc_datamodel | egrep --color "<state>|<vm_name>|<vm_id>|<deployment_name>"
<snip>
<state>SERVICE_ACTIVE_STATE</state>
SVS1-tmo_cm_0_e3ac7841-7f21-45c8-9f86-3524541d6634
VM_SHUTOFF_STATE
Step 1. Log in to the arbiter server and shut down the services:

[root@SVS1OAM02 init.d]# ls -lrt sessionmgr*
-rwxr-xr-x 1 root root 4382 Jun 21 07:34 sessionmgr-27721
-rwxr-xr-x 1 root root 4406 Jun 21 07:34 sessionmgr-27718
-rwxr-xr-x 1 root root 4407 Jun 21 07:34 sessionmgr-27719
-rwxr-xr-x 1 root root 4429 Jun 21 07:34 sessionmgr-27717
-rwxr-xr-x 1 root root 4248 Jun 21 07:34 sessionmgr-27720

service monit stop
service qns stop

/etc/init.d/sessionmgr-[portno] stop, where [portno] is the database port on the arbiter.
Step 2. From the ESC master:

/opt/cisco/esc/esc-confd/esc-cli/esc_nc_cli vm-action STOP <pcrfclient vm-name>

Step 3. Verify that the VM is stopped.

[admin@esc ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
[admin@esc ~]$ ./esc_nc_cli get esc_datamodel | egrep --color "<state>|<vm_name>|<vm_id>|<deployment_name>"
<snip>
<state>SERVICE_ACTIVE_STATE</state>
SVS1-tmo_cm_0_e3ac7841-7f21-45c8-9f86-3524541d6634
VM_SHUTOFF_STATE
For the Elastic Services Controller (ESC)

Step 1. The configuration in ESC-HA must be backed up monthly, before and after any scale-up or scale-down operations in the ESC, and before and after configuration changes. It must be backed up in order to perform an effective disaster recovery of the ESC.

/opt/cisco/esc/confd/bin/netconf-console --host 127.0.0.1 --port 830 -u <username> -p <password> --get-config > /home/admin/ESC_config.xml
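A quick way to confirm that the export succeeded is to check that the XML file exists and is non-empty; this is a generic check against the file name used above:

ls -l /home/admin/ESC_config.xml
head -5 /home/admin/ESC_config.xml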
Step 2. Back up all the scripts and user-data files that are referenced in the PCRF cloud-config deployment XMLs:

file://opt/cisco/esc/cisco-cps/config/gr/cfg/std/pcrf-cm_cloud.cfg
file://opt/cisco/esc/cisco-cps/config/gr/cfg/std/pcrf-oam_cloud.cfg
file://opt/cisco/esc/cisco-cps/config/gr/cfg/std/pcrf-pd_cloud.cfg
file://opt/cisco/esc/cisco-cps/config/gr/cfg/std/pcrf-qns_cloud.cfg
file://opt/cisco/esc/cisco-cps/config/gr/cfg/std/pcrf-sm_cloud.cfg
Example 1:

PCRF_POST_DEPLOYMENT
LCS::POST_DEPLOY_ALIVE
FINISH_PCRF_INSTALLATION
SCRIPT
----------
script_filename /opt/cisco/esc/cisco-cps/config/gr/tmo/cfg/../cps_init.py
script_timeout 3600
Example 2:

PCRF_POST_DEPLOYMENT
LCS::POST_DEPLOY_ALIVE
FINISH_PCRF_INSTALLATION
SCRIPT
CLUMAN_MGMT_ADDRESS 10.174.132.46
CLUMAN_YAML_FILE /opt/cisco/esc/cisco-cps/config/vpcrf01/cluman_orch_config.yaml
script_filename /opt/cisco/esc/cisco-cps/config/vpcrf01/vpcrf_cluman_post_deployment.py
wait_max_timeout 3600
If the deployment opdata of the ESC (extracted in the previous step) contains any of the files highlighted above, take a backup of them.

Sample backup command:

tar -zcf esc_files_backup.tgz /opt/cisco/esc/cisco-cps/config/

Download this file to a local machine with FTP/SFTP, to a server outside the cloud.
Note: Although opdata is synced between the ESC master and slave, the directories that contain the user-data, XML, and post-deploy scripts are not synced across both instances. It is suggested that customers push the contents of the directory that contains these files with scp or sftp; these files must be identical across the ESC-Master and ESC-Standby in order to recover a deployment when the ESC VM that was master during the deployment is not available due to any unforeseen circumstances.
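For example, the directory can be pushed from the master to the standby with scp. This is a sketch; <standby-esc-ip> is a placeholder for the management address of the ESC-Standby VM:

scp -r /opt/cisco/esc/cisco-cps/config/ admin@<standby-esc-ip>:/opt/cisco/esc/cisco-cps/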
Step 1. Collect the logs from both the ESC VMs and back them up:

$ collect_esc_log.sh
$ scp /tmp/<esc-log-archive> <user>@<backup-host>:<path>
Step 2. Back up the database from the master ESC node.

Step 3. Switch to the root user, check the status of the master ESC, and verify that the output value is Master:

$ sudo bash
$ escadm status
# Set ESC to maintenance mode and verify
$ sudo escadm op_mode set --mode=maintenance
$ escadm op_mode show

Step 4. Set a variable with the file name that includes the date information, then invoke the backup tool with the file name variable from the previous step:

$ fname=esc_db_backup_$(date -u +"%y-%m-%d-%H-%M-%S")
$ sudo /opt/cisco/esc/esc-scripts/esc_dbtool.py backup --file /tmp/atlpod-esc-master-$fname.tar
Step 5. Check the backup file in the backup store and make sure that the file is there.
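For example, a listing such as this confirms that the file exists; it assumes the file name pattern used in step 4:

$ ls -lh /tmp/atlpod-esc-master-esc_db_backup_*.tar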
Step 6. Return the master ESC to normal operation mode:
$ sudo escadm op_mode set --mode=operation
If the dbtool backup utility fails, apply this workaround once on the ESC node, then repeat the backup in step 4:
$ sudo sed -i "s,'pg_dump,'/usr/pgsql-9.4/bin/pg_dump," /opt/cisco/esc/esc-scripts/esc_dbtool.py
Step 1. Log in to the ESC hosted on the node and check whether it is in the master state. If yes, switch the ESC over to standby mode:

[admin@VNF2-esc-esc-0 esc-cli]$ escadm status
0 ESC status=0 ESC Master Healthy

[admin@VNF2-esc-esc-0 ~]$ sudo service keepalived stop
Stopping keepalived: [ OK ]

[admin@VNF2-esc-esc-0 ~]$ escadm status
1 ESC status=0 In SWITCHING_TO_STOP state. Please check status after a while.

[admin@VNF2-esc-esc-0 ~]$ sudo reboot
Broadcast message from admin@vnf1-esc-esc-0.novalocal
(/dev/pts/0) at 13:32 ...
The system is going down for reboot NOW!

Step 2. Once the VM is the ESC standby, shut it down with the command shutdown -r now.
Note: If the faulty component is to be replaced on an OSD-Compute node, put Ceph into maintenance on the server before you proceed with the component replacement.

[admin@osd-compute-0 ~]$ sudo ceph osd set norebalance
set norebalance
[admin@osd-compute-0 ~]$ sudo ceph osd set noout
set noout

[admin@osd-compute-0 ~]$ sudo ceph status
    cluster eb2bb192-b1c9-11e6-9205-525400330666
     health HEALTH_WARN
            noout,norebalance,sortbitwise,require_jewel_osds flag(s) set
     monmap e1: 3 mons at {tb3-ultram-pod1-controller-0=11.118.0.40:6789/0,tb3-ultram-pod1-controller-1=11.118.0.41:6789/0,tb3-ultram-pod1-controller-2=11.118.0.42:6789/0}
            election epoch 58, quorum 0,1,2 tb3-ultram-pod1-controller-0,tb3-ultram-pod1-controller-1,tb3-ultram-pod1-controller-2
     osdmap e194: 12 osds: 12 up, 12 in
            flags noout,norebalance,sortbitwise,require_jewel_osds
      pgmap v584865: 704 pgs, 6 pools, 531 GB data, 344 kobjects
            1585 GB used, 11808 GB / 13393 GB avail
                 704 active+clean
  client io 463 kB/s rd, 14903 kB/s wr, 263 op/s rd, 542 op/s wr
Power off the specified server. The steps to replace a faulty component on the UCS C240 M4 server can be referred from the Cisco UCS C240 M4 server installation and service guide.
Refer to the recovery procedure below and perform the steps as applicable:
[stack@director ~]$ nova list |grep VNF2-DEPLOYM_s9_0_8bc6cc60-15d6-4ead-8b6a-10e75d0e134d
| 49ac5f22-469e-4b84-badc-031083db0533 | VNF2-DEPLOYM_s9_0_8bc6cc60-15d6-4ead-8b6a-10e75d0e134d | ERROR | - | NOSTATE |

[admin@VNF2-esc-esc-0 ~]$ sudo /opt/cisco/esc/esc-confd/esc-cli/esc_nc_cli recovery-vm-action DO VNF2-DEPLOYM_s9_0_8bc6cc60-15d6-4ead-8b6a-10e75d0e134d
[sudo] password for admin:
Recovery VM Action
/opt/cisco/esc/confd/bin/netconf-console --port=830 --host=127.0.0.1 --user=admin --privKeyFile=/root/.ssh/confd_id_dsa --privKeyType=dsa --rpc=/tmp/esc_nc_cli.ZpRCGiieuW

[admin@VNF2-esc-esc-0 ~]$ tail -f /var/log/esc/yangesc.log
…
14:59:50,112 07-Nov-2017 WARN Type: VM_RECOVERY_COMPLETE
14:59:50,112 07-Nov-2017 WARN Status: SUCCESS
14:59:50,112 07-Nov-2017 WARN Status Code: 200
14:59:50,112 07-Nov-2017 WARN Status Msg: Recovery: Successfully recovered VM [VNF2-DEPLOYM_s9_0_8bc6cc60-15d6-4ead-8b6a-10e75d0e134d].
[admin@esc ~]$ sudo service keepalived start
[admin@esc ~]$ escadm status
0 ESC status=0 ESC Slave Healthy
If the ESC fails to start the VM due to an unexpected state, Cisco recommends that you perform an ESC switchover with a reboot of the master ESC. The ESC switchover takes about a minute. Run the script health.sh on the new master ESC in order to check whether the status is up. The master ESC then starts the VM and fixes the VM state. This recovery task takes up to 5 minutes to complete.

You can monitor /var/log/esc/yangesc.log and /var/log/esc/escmanager.log. If you do not see the VM recovered after 5 to 7 minutes, perform a manual recovery of the affected VM(s).
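For reference, the health check mentioned above can be run as shown; the path assumes the standard ESC scripts directory, and an output such as ESC HEALTH PASSED indicates that the ESC is up:

$ /opt/cisco/esc/esc-scripts/health.sh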
If the ESC VM is not recovered, follow the procedure to deploy a new ESC VM. Contact Cisco support for the procedure.
From the OSPD, log in to the controller and verify that pcs is in a good state: all three controllers Online, and Galera shows all three controllers as Master.

Note: A healthy cluster requires two active controllers, so verify that the two remaining controllers are Online and Active.
[heat-admin@pod1-controller-0 ~]$ sudo pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: pod1-controller-2 (version 1.1.15-11.el7_3.4-e174ec8) - partition with quorum
Last updated: Mon Dec  4 00:46:10 2017
Last change: Wed Nov 29 01:20:52 2017 by hacluster via crmd on pod1-controller-0

3 nodes and 22 resources configured

Online: [ pod1-controller-0 pod1-controller-1 pod1-controller-2 ]

Full list of resources:

ip-11.118.0.42 (ocf::heartbeat:IPaddr2): Started pod1-controller-1
ip-11.119.0.47 (ocf::heartbeat:IPaddr2): Started pod1-controller-2
ip-11.120.0.49 (ocf::heartbeat:IPaddr2): Started pod1-controller-1
ip-192.200.0.102 (ocf::heartbeat:IPaddr2): Started pod1-controller-2
Clone Set: haproxy-clone [haproxy]
    Started: [ pod1-controller-0 pod1-controller-1 pod1-controller-2 ]
Master/Slave Set: galera-master [galera]
    Masters: [ pod1-controller-0 pod1-controller-1 pod1-controller-2 ]
ip-11.120.0.47 (ocf::heartbeat:IPaddr2): Started pod1-controller-2
Clone Set: rabbitmq-clone [rabbitmq]
    Started: [ pod1-controller-0 pod1-controller-1 pod1-controller-2 ]
Master/Slave Set: redis-master [redis]
    Masters: [ pod1-controller-2 ]
    Slaves: [ pod1-controller-0 pod1-controller-1 ]
ip-10.84.123.35 (ocf::heartbeat:IPaddr2): Started pod1-controller-1
openstack-cinder-volume (systemd:openstack-cinder-volume): Started pod1-controller-2
my-ipmilan-for-pod1-controller-0 (stonith:fence_ipmilan): Started pod1-controller-0
my-ipmilan-for-pod1-controller-1 (stonith:fence_ipmilan): Started pod1-controller-0
my-ipmilan-for-pod1-controller-2 (stonith:fence_ipmilan): Started pod1-controller-0

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[heat-admin@pod1-controller-0 ~]$ sudo pcs cluster standby
[heat-admin@pod1-controller-0 ~]$ sudo pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: pod1-controller-2 (version 1.1.15-11.el7_3.4-e174ec8) - partition with quorum
Last updated: Mon Dec  4 00:48:24 2017
Last change: Mon Dec  4 00:48:18 2017 by root via crm_attribute on pod1-controller-0

3 nodes and 22 resources configured

Node pod1-controller-0: standby
Online: [ pod1-controller-1 pod1-controller-2 ]

Full list of resources:

ip-11.118.0.42 (ocf::heartbeat:IPaddr2): Started pod1-controller-1
ip-11.119.0.47 (ocf::heartbeat:IPaddr2): Started pod1-controller-2
ip-11.120.0.49 (ocf::heartbeat:IPaddr2): Started pod1-controller-1
ip-192.200.0.102 (ocf::heartbeat:IPaddr2): Started pod1-controller-2
Clone Set: haproxy-clone [haproxy]
    Started: [ pod1-controller-1 pod1-controller-2 ]
    Stopped: [ pod1-controller-0 ]
Master/Slave Set: galera-master [galera]
    Masters: [ pod1-controller-1 pod1-controller-2 ]
    Slaves: [ pod1-controller-0 ]
ip-11.120.0.47 (ocf::heartbeat:IPaddr2): Started pod1-controller-2
Clone Set: rabbitmq-clone [rabbitmq]
    Started: [ pod1-controller-0 pod1-controller-1 pod1-controller-2 ]
Master/Slave Set: redis-master [redis]
    Masters: [ pod1-controller-2 ]
    Slaves: [ pod1-controller-1 ]
    Stopped: [ pod1-controller-0 ]
ip-10.84.123.35 (ocf::heartbeat:IPaddr2): Started pod1-controller-1
openstack-cinder-volume (systemd:openstack-cinder-volume): Started pod1-controller-2
my-ipmilan-for-pod1-controller-0 (stonith:fence_ipmilan): Started pod1-controller-1
my-ipmilan-for-pod1-controller-1 (stonith:fence_ipmilan): Started pod1-controller-1
my-ipmilan-for-pod1-controller-2 (stonith:fence_ipmilan): Started pod1-controller-2

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
Power off the specified server. The steps to replace a faulty component on the UCS C240 M4 server can be referred from the Cisco UCS C240 M4 server installation and service guide.
[stack@tb5-ospd ~]$ source stackrc
[stack@tb5-ospd ~]$ nova list |grep pod1-controller-0
| 1ca946b8-52e5-4add-b94c-4d4b8a15a975 | pod1-controller-0 | ACTIVE | - | Running | ctlplane=192.200.0.112 |
[heat-admin@pod1-controller-0 ~]$ sudo pcs cluster unstandby

[heat-admin@pod1-controller-0 ~]$ sudo pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: pod1-controller-2 (version 1.1.15-11.el7_3.4-e174ec8) - partition with quorum
Last updated: Mon Dec  4 01:08:10 2017
Last change: Mon Dec  4 01:04:21 2017 by root via crm_attribute on pod1-controller-0

3 nodes and 22 resources configured

Online: [ pod1-controller-0 pod1-controller-1 pod1-controller-2 ]

Full list of resources:

ip-11.118.0.42 (ocf::heartbeat:IPaddr2): Started pod1-controller-1
ip-11.119.0.47 (ocf::heartbeat:IPaddr2): Started pod1-controller-2
ip-11.120.0.49 (ocf::heartbeat:IPaddr2): Started pod1-controller-1
ip-192.200.0.102 (ocf::heartbeat:IPaddr2): Started pod1-controller-2
Clone Set: haproxy-clone [haproxy]
    Started: [ pod1-controller-0 pod1-controller-1 pod1-controller-2 ]
Master/Slave Set: galera-master [galera]
    Masters: [ pod1-controller-0 pod1-controller-1 pod1-controller-2 ]
ip-11.120.0.47 (ocf::heartbeat:IPaddr2): Started pod1-controller-2
Clone Set: rabbitmq-clone [rabbitmq]
    Started: [ pod1-controller-0 pod1-controller-1 pod1-controller-2 ]
Master/Slave Set: redis-master [redis]
    Masters: [ pod1-controller-2 ]
    Slaves: [ pod1-controller-0 pod1-controller-1 ]
ip-10.84.123.35 (ocf::heartbeat:IPaddr2): Started pod1-controller-1
openstack-cinder-volume (systemd:openstack-cinder-volume): Started pod1-controller-2
my-ipmilan-for-pod1-controller-0 (stonith:fence_ipmilan): Started pod1-controller-1
my-ipmilan-for-pod1-controller-1 (stonith:fence_ipmilan): Started pod1-controller-1
my-ipmilan-for-pod1-controller-2 (stonith:fence_ipmilan): Started pod1-controller-2

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[heat-admin@pod1-controller-0 ~]$ sudo ceph -s
    cluster eb2bb192-b1c9-11e6-9205-525400330666
     health HEALTH_OK
     monmap e1: 3 mons at {pod1-controller-0=11.118.0.10:6789/0,pod1-controller-1=11.118.0.11:6789/0,pod1-controller-2=11.118.0.12:6789/0}
            election epoch 70, quorum 0,1,2 pod1-controller-0,pod1-controller-1,pod1-controller-2
     osdmap e218: 12 osds: 12 up, 12 in
            flags sortbitwise,require_jewel_osds
      pgmap v2080888: 704 pgs, 6 pools, 714 GB data, 237 kobjects
            2142 GB used, 11251 GB / 13393 GB avail
                 704 active+clean
  client io 11797 kB/s wr, 0 op/s rd, 57 op/s wr
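Once the cluster reports HEALTH_OK again, remove the maintenance flags that were set before the replacement; this mirrors the earlier ceph osd set step:

[heat-admin@pod1-controller-0 ~]$ sudo ceph osd unset norebalance
[heat-admin@pod1-controller-0 ~]$ sudo ceph osd unset noout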