Installing Active/Active High Availability Cluster

This chapter contains the following sections:

  • Installing Active/Active High Availability Cluster
  • Setting up the User Configuration
  • Validating Active/Active High Availability Cluster Post Installation
  • Adding Default VIM Connector to the Active/Active High Available Cluster
  • Adding BGP in an Active/Active Cluster

Installing Active/Active High Availability Cluster

To configure the Active/Active HA cluster, run the following commands on OpenStack, where openrc my-server-42 is the openrc file for your OpenStack environment, and test is the stack name.

source ~/elastic-services-controller/esc-bootvm-scripts/openrc my-server-42
openstack stack create test -t aa.yaml

To check the status of the stack, use the following commands:

  • openstack stack list
  • openstack stack show test
  • openstack stack event list test

Once the stack is in CREATE_COMPLETE status, you can SSH into the VMs. The openstack stack show test command lists the IP addresses of the three VMs; use these addresses to access the VMs.
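
For example, a minimal sketch, assuming the admin user shown later in this chapter and one of the IP addresses reported by openstack stack show test:

ssh admin@172.23.0.228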

Setting up the User Configuration

You can configure some of the parameters to match your environment, such as the network, the subnet, whether to use a static IP address or DHCP for IP allocation, the flavor, the image, the password, and so on. The configurable parameters are available in aa-params.yaml, the OpenStack heat environment file, when the ESC cluster is instantiated through the heat template (aa.yaml).

To instantiate the ESC cluster, use the following example:
openstack stack create name -t aa.yaml -e aa-params.yaml
The following example shows how to use a static IP address to configure the port using the environment file:
sample@my-server-39:~/aa4.5/apr15$ more aa-params.yaml
parameters:
  network_1_name: esc-net
  subnet_name: esc-subnet
  esc_1_ip: 172.23.0.228
  esc_2_ip: 172.23.0.229
  esc_3_ip: 172.23.0.230
Following are the user-configurable parameters available in aa.yaml:
parameters:
  network_1_name:
    type: string
description: Name of the network
    default: esc-net
  subnet_name:
    type: string
description: Name of the subnet
  esc_1_ip:
    type: string
    description: static IP address of esc-1 VM.
 
  esc_2_ip:
    type: string
    description: static IP address of esc-2 VM.
 
  esc_3_ip:
    type: string
    description: static IP address of esc-3 VM.
 
 
resources:
  esc_1_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: network_1_name }
      fixed_ips: [ { "subnet": { get_param: subnet_name}, "ip_address": { get_param: esc_1_ip } } ]
 
  esc_2_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: network_1_name }
      fixed_ips: [ { "subnet": { get_param: subnet_name}, "ip_address": { get_param: esc_2_ip } } ]
 
  esc_3_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: network_1_name }
      fixed_ips: [ { "subnet": { get_param: subnet_name}, "ip_address": { get_param: esc_3_ip } } ]
 
...omitting...
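
If you want DHCP instead of static addressing (both are supported, as noted above), the following is a minimal sketch of the same port resource with the fixed_ips property omitted, so that Neutron allocates the address from the subnet's pool:

  esc_1_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: network_1_name }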
The following example shows how to use a configurable image, flavor, and VM name prefix in the environment file:
sample@my-server-39:~/aa4.5/apr15$ more aa-params.yaml
parameters:
  nameprefix: abc
  image_name: ESC-5_0_DEV_4
  flavor_name: m1.large
sample@my-server-39:~/aa4.5/apr15$
The following example shows how to use a configurable image, flavor, and VM name prefix in the heat template:
parameters:
  nameprefix:
    type: string
    description: Name prefix of vm
    default: helen
  image_name:
    type: string
    description: Name of the image
    default: ESC-5_0_DEV_4
  flavor_name:
    type: string
description: Name of the flavor
    default: m1.large
 
  esc-1:
    type: OS::Nova::Server
    properties:
      name:
        str_replace:
          template: $nameprefix-esc-1
          params:
            $nameprefix: { get_param: nameprefix }
      image: { get_param: image_name }
      flavor: { get_param: flavor_name }
      ... omitting...
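
For instance, with nameprefix: abc from the environment file above, str_replace substitutes $nameprefix in the template string, so the server name resolves to:

name: abc-esc-1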

Validating Active/Active High Availability Cluster Post Installation

To verify all the ESC nodes, use the following command. Here, all ESC nodes means each of the three VMs.

sample@my-server-39:~$ openstack --insecure server list | grep abc
| 5ea6fc79-2b2a-4064-9c6a-a83d6b06c225 | abc-test-esc-3                                              | ACTIVE  | esc-net=172.23.7.203                                                                      | ESC-5_0_DEV_13              | m1.large                 |
| 10e165d9-5015-4b64-88fe-19e874e6e7c1 | abc-test-esc-1                                              | ACTIVE  | esc-net=172.23.7.205                                                                      | ESC-5_0_DEV_13              | m1.large                 |
| 35f6bad1-865f-4155-8411-d37e2616e079 | abc-test-esc-2                                              | ACTIVE  | esc-net=172.23.7.204                                                                      | ESC-5_0_DEV_13              | m1.large                 |

To find out the leader node, SSH into one of the nodes/VMs and run the following:
[admin@sample-test-esc-1 ~]$ sudo escadm elector dump
{
    "13078@sample-test-esc-3.novalocal:42143": {
        "state": "FOLLOWER",
        "location": "13078@test1-test-esc-3.novalocal:42143",
        "service": "esc_service"
    },
    "13053@sample-test-esc-2.novalocal:50474": {
        "state": "FOLLOWER",
        "location": "13053@sample-test-esc-2.novalocal:50474",
        "service": "esc_service"
    },
    "13187@sample-test-esc-1.novalocal:59514": {
        "state": "LEADER",
        "location": "13187@sample-test-esc-1.novalocal:59514",
        "service": "esc_service"
    }
}
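
To extract only the leader entry from any node, the following is a one-line sketch that assumes the JSON output format shown above and that python3 is available on the VM:

sudo escadm elector dump | python3 -c "import json, sys; print([k for k, v in json.load(sys.stdin).items() if v['state'] == 'LEADER'])"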

Adding Default VIM Connector to the Active/Active High Available Cluster

You can add a default VIM connector to the 3 ESC VM cluster in either of the following two ways:

  1. Once the 3 ESC VM cluster boots up, use the NETCONF interface to add a default VIM connector by running the following command, where vim.xml is the default VIM connector deployment file (a sketch of this file appears after this list).
    [admin@name-esc-1 ~]$ esc_nc_cli --host db.service.consul --user admin --password <admin_password> edit-config vim.xml
  2. To configure a default VIM connector, add the default VIM connector configuration inside the heat template day 0 file. Add the following block in the write_files section under cloud-config in the aa-day0.yaml file. Once the 3 ESC VM cluster boots up, it creates a default VIM connector on its own.

    The following example shows how to configure the default VIM connector in the heat template day 0 file:
    - path: /opt/cisco/esc/esc-config/esc_params.conf
      content: |
        openstack.os_auth_url=http://10.85.103.38:35357/v3
        openstack.os_project_name=admin
        openstack.os_tenant_name=admin
        openstack.os_user_domain_name=default
        openstack.os_project_domain_name=default
        openstack.os_identity_api_version=3
        openstack.os_image_api_version=2
        openstack.os_username=admin
        openstack.os_password=password1
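
The following is a minimal sketch of what vim.xml (used in the first method) might contain, assuming the standard ESC vim_connector schema; the connector id is illustrative, and the property values reuse the esc_params.conf example above:

<esc_system_config xmlns="http://www.cisco.com/esc/esc">
  <vim_connectors>
    <vim_connector>
      <!-- illustrative connector id -->
      <id>default_openstack_vim</id>
      <type>OPENSTACK</type>
      <properties>
        <property>
          <name>os_auth_url</name>
          <value>http://10.85.103.38:35357/v3</value>
        </property>
        <property>
          <name>os_project_name</name>
          <value>admin</value>
        </property>
        <property>
          <name>os_project_domain_name</name>
          <value>default</value>
        </property>
      </properties>
      <users>
        <user>
          <id>admin</id>
          <credentials>
            <properties>
              <property>
                <name>os_password</name>
                <value>password1</value>
              </property>
            </properties>
          </credentials>
        </user>
      </users>
    </vim_connector>
  </vim_connectors>
</esc_system_config>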
    

Adding BGP in an Active/Active Cluster

To initiate the BGP process, add the anycast IP to the lo device. You can configure it in sys-cfg.yaml.

For example:
#cloud-config
write_files:
 - path: /etc/cloud/cloud.cfg.d/sys-cfg.yaml
   content: |
     network:           
       version: 1       
       config:
       - type: physical 
         name: lo
         subnets:
         - type: static
           address: 172.23.188.188/23
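
After the VM boots, you can verify that the anycast address was applied to the loopback device:

ip addr show lo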

You must specify the advertise IP for consul. In esc-config.yaml, add the following:
consul:
  advertise_addr: 172.23.1.149
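
To confirm the advertised address, you can run consul members on any node; this is a sketch assuming the consul binary is available on the VM's PATH:

consul members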
Following is an example of adding the BGP section. Because of depend_on: elector:leader, the BGP process runs only on the current cluster leader, so the anycast route is advertised from the leader:
bgp:
  depend_on: elector:leader
  anycast_ip: 172.23.188.188/23
  local_as: '65001'
  local_ip: 192.168.1.11
  local_router_id: 192.168.1.11
  remote_as: '65000'
  remote_ip: 192.168.1.12