Deploying in Linux KVM

Prerequisites and Guidelines

You must have reviewed and completed the general prerequisites described in the Deployment Overview.

In addition, the following apply when deploying in Linux KVM:

Table 1. Deployment Requirements
Version: Release 1.1.3d*

*We do not recommend deploying earlier releases.

Requirements:

  • Linux Kernel 3.10.0-957.el7.x86_64 or later with KVM libvirt-4.5.0-23.el7_7.1.x86_64 or later

  • 16 vCPUs

  • 48 GB of RAM

  • 800 GB disk

    Each node requires a dedicated disk partition.

  • The disk must have I/O latency of 20ms or less.

    You can verify the I/O latency using the following command:

    # fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data_with_se --size=22m --bs=2300 --name=mytest

    Confirm that the 99.00th=[<value>] in the fsync/fdatasync/sync_file_range section of the output is under 20ms; a sketch for extracting this value follows this list.

  • We recommend that each Application Services Engine node be deployed on a different KVM server.

  • The applications installed in the cluster may impose additional or higher requirements.

    Consult the individual applications' documentation for details.
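
To make the latency check easier to read, you can capture the fio output and search for the 99th-percentile entry. A minimal sketch, assuming the same test parameters as above; the output file name fio-output.txt is illustrative, and the section heading may vary slightly between fio versions:

# fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data_with_se --size=22m --bs=2300 --name=mytest | tee fio-output.txt
# grep -A 10 'sync percentiles' fio-output.txt | grep '99.00th'

The percentiles are reported in the unit shown in the section header (usually usec), so a 99.00th value below 20000 usec satisfies the 20ms requirement.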

Deploying Cisco Application Services Engine in Linux KVM

This section describes how to deploy a Cisco Application Services Engine cluster in Linux KVM.

Before you begin

Ensure that you have reviewed and completed the prerequisites and guidelines described in the previous section.

Procedure


Step 1

Download the Cisco Application Services Engine image.

  1. Browse to the Software Download page.

    https://software.cisco.com/download/home/286324815/type

  2. Click Application Services Engine Software.

  3. From the left sidebar, choose the Application Services Engine version you want to download.

  4. Download the Cisco Application Services Engine image for Linux KVM (case-dk9.<version>.qcow2).

Step 2

Log in to your Linux KVM as the root user.

Step 3

Copy the image to all KVM hosts.

Note

The qcow2 image on each node must reside on a dedicated disk partition.

You can use wget or scp to copy the image, for example:

# scp case-dk9.1.1.3d.qcow2 root@<kvm-host-ip>:/home/sn_base/qcow2

The following steps assume you copied the image into the /home/sn_base/qcow2 directory.
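
If you are deploying nodes on multiple KVM hosts, a short loop can push the image to each of them. A minimal sketch, assuming the hypothetical host names kvm-host1 through kvm-host3 and the same target directory:

# for host in kvm-host1 kvm-host2 kvm-host3; do scp case-dk9.1.1.3d.qcow2 root@${host}:/home/sn_base/qcow2; done

You can then run df /home/sn_base/qcow2 on each host to confirm which disk partition holds the image.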

Step 4

On each node, create a snapshot of the base qcow2 image.

You will create a snapshot of the image you downloaded and use the snapshots as the disk images for the nodes' VMs.

  1. Log in to the KVM host where you will host a node.

  2. Create a folder for the node's snapshot.

    The following steps assume you create the snapshot in the /home/mso-node1 directory.

    # mkdir -p /home/mso-node1/
    # cd /home/mso-node1
  3. Create the snapshot.

    # qemu-img create -f qcow2 -b /home/sn_base/qcow2/case-dk9.1.1.3d.qcow2 /home/mso-node1/disk0.qcow2
  4. Repeat this step for every node in the cluster.
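
Note that on KVM hosts with a newer QEMU release, qemu-img requires the backing file format to be given explicitly with -F. A hedged variant of the same snapshot command, followed by a verification step:

# qemu-img create -f qcow2 -F qcow2 -b /home/sn_base/qcow2/case-dk9.1.1.3d.qcow2 /home/mso-node1/disk0.qcow2
# qemu-img info /home/mso-node1/disk0.qcow2

The info output should list the base image as the backing file, confirming the snapshot was created correctly.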

Step 5

Deploy the VMs for all nodes.

  1. Open the KVM console and click New Virtual Machine.

  2. In the New VM screen, choose the Import existing disk image option and click Forward.

  3. In the Provide existing storage path tab, choose the disk0.qcow2 file.

    We recommend that each node's disk image is stored on its own disk partition.

  4. Choose Generic for the OS type and Version, then click Forward.

  5. Choose 48 GB of memory and 16 CPUs, then click Forward.

  6. Enter the Name of the virtual machine and check the Customize configuration before install option. Then click Finish.

  7. Select the NIC for the Virtual Network Interface and choose e1000 as the device model.

  8. Leave the default MAC address.

  9. Click Apply.

  10. Click Begin Installation.

  11. Repeat this step for the other nodes.
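
If you prefer the command line over the virt-manager GUI, virt-install can create an equivalent VM. A minimal sketch, assuming the disk path from the previous step; the VM name mso-node1 and the bridge name br0 are illustrative, so substitute values for your environment. Note that virt-install takes memory in MiB, so 48 GB is 49152:

# virt-install --name mso-node1 \
    --import \
    --memory 49152 \
    --vcpus 16 \
    --disk path=/home/mso-node1/disk0.qcow2,format=qcow2 \
    --network bridge=br0,model=e1000 \
    --os-variant generic \
    --noautoconsole

The --network option can be repeated if your design attaches separate interfaces for the data and management networks.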

Step 6

Configure the first node.

  1. Connect to the first node's console.

    You will be prompted to run the first-time setup utility:

    [...]
    [ OK ] Started atomix-boot-setup.
           Starting atomix-ready...
           Starting Initial cloud-init job (pre-networking)...
    Starting cloud data source ...
    Press any key to run first-boot setup on this console...
    
  2. Enter the cluster name for the service node.

    Cluster Name: ServiceEngine
  3. Specify that this is a master node.

    All nodes in the cluster will be set to master.

    Master Node? (Y/n): y
  4. Specify that you are configuring the first node.

    When configuring the second and third nodes, you will be able to skip some steps by downloading the configuration from the first node. Since this is the first node you are configuring, enter n.

    Download Config From Peers? (Y/n): n
  5. Enter the node name for the service node.

    Node Name: ServiceNode1
  6. Enter and confirm the password for the rescue-user.

    This password will be used for the Application Services Engine's rescue-user login, as well as the initial password for the GUI's admin user.

    Admin Password:
    Reenter Admin Password:
  7. Enter the data network information.

    You will be prompted to enter the data network IP address, netmask, and gateway.

    Optionally, you can also provide the VLAN ID for the network. For most deployments, you can leave this field blank.

    Data Network:
      IP Address/Mask: 192.168.6.172/24
      Gateway: 192.168.6.1
      Vlan ID (optional): 410
  8. Enter the management network information.

    You will be prompted to enter the management network IP address, netmask, and gateway.

    Management Network:
      IP Address/Mask: 192.168.9.172/24
      Gateway: 192.168.9.1
  9. Provide the list of other nodes.

    You will need to provide the data network IP addresses and serial numbers of the other master nodes in the cluster.

    Unlike physical appliance deployments where you obtain the serial numbers from CIMC, for Linux KVM you can choose the strings to use. We recommend using CiscoSN01, CiscoSN02, CiscoSN03 for node1, node2, node3.

    Master List (Space Separated Data Network IP,Serialnumber List)
    (Ex: 192.192.5.101,WZP22451Q1R 192.192.5.103,WZP22451Q5A): 192.168.6.173,CiscoSN02 192.168.6.174,CiscoSN03
  10. Provide the DNS details.

    You will need to specify the IP addresses of one or more DNS providers, as well as the search domains to be used by the Application Services Engine node.

    DNS:
      Providers (Space Separated IP List): 192.168.10.10
      Search Domains (Space Separated List): tme-lab.local
  11. Provide the NTP servers.

    NTP Servers (Space Separated IP List): 192.168.10.120
  12. Provide the internal networks information.

    You will need to provide the service and application subnet information.

    The application overlay network defines the address space used by the application's services running in the Application Services Engine. The services network is an internal network used by the Application Services Engine and its processes. Both of these subnets must be /16.

    Note 

    Communication between containers deployed on different Application Services Engine nodes is VXLAN-encapsulated and uses the data interfaces' IP addresses as source and destination. This means that the Application Overlay and Service Overlay addresses are never exposed outside the data network, and any traffic on these subnets is routed internally and does not leave the cluster nodes. As such, when configuring these networks, ensure that they are unique and do not overlap with any existing networks or services you may need to access from the Application Services Engine cluster nodes. (A quick overlap check is sketched after the example below.)

    Service Subnet (not exposed externally) [100.80.0.0/16]: 100.80.0.0/16
    App Subnet (not exposed externally) [172.17.0.1/16]: 172.17.0.1/16

Example:

Starting apic-sn setup utility
Setup utility for Application Services Engine with SerialNumber WZP23340A7X and running version 1.1.3d
Use ^D anytime to start over
Cluster Name: ServiceEngine
Master Node? (Y/n): y
Download Config From Peers? (Y/n): n
Node Name: ServiceNode1
Admin Password:
Reenter Admin Password:
Data Network:
  IP Address/Mask: 192.168.6.172/24
  Gateway: 192.168.6.1
  Vlan ID (optional): 410
Management Network:
  IP Address/Mask: 192.168.9.172/24
  Gateway: 192.168.9.1
Master List (Space Separated Data Network IP,Serialnumber List)
(Ex: 192.192.5.101,WZP22451Q1R 192.192.5.103,WZP22451Q5A): 192.168.6.173,CiscoSN02 192.168.6.174,CiscoSN03
DNS:
  Providers (Space Separated IP List): 192.168.10.10
  Search Domains (Space Separated List): tme-lab.local
NTP Servers (Space Separated IP List): 192.168.10.120
Service Subnet (not exposed externally) [100.80.0.0/16]: 100.80.0.0/16
App Subnet (not exposed externally) [172.17.0.1/16]: 172.17.0.1/16
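
As a quick sanity check that the overlay subnets do not collide with networks already reachable from a cluster node or KVM host, you can inspect the routing table. A minimal sketch using the default subnets entered above:

# ip route | grep -E '(^| )(100\.80\.|172\.17\.)'

No output means no existing route falls inside the proposed /16 overlay ranges; a match (for example, a docker0 bridge on 172.17.0.0/16) indicates exactly the kind of overlap to avoid.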
Step 7

Review the configuration.

After you have entered all the configuration information, review it and confirm.

Please review the config:
App Subnet: 172.17.0.1/16
Cluster Name: ServiceEngine
Cluster Size: 3
DNS:
  Domain Name: dev-infra12.case.local
  Providers:
  - 171.70.168.183
  Search Domains:
  - atomix.local
Download Config: false
Data Network:
  Gateway: 192.168.6.1
  IP Address/Mask: 192.168.6.172/24
  Vlan ID: 410
Management Network:
  Gateway: 192.168.9.1
  IP Address/Mask: 192.168.9.172/24
Master List:
- ipAddress: 192.168.6.173
  name: CiscoSN02
  serialNumber: CiscoSN02
- ipAddress: 192.168.6.174
  name: CiscoSN03
  serialNumber: CiscoSN03
NTP Servers:
- 192.168.10.120
Node Name: ServiceNode1
Node Role: Master
Node Type: Physical
Password: <hidden>
Service Subnet: 100.80.0.0/16

Re-enter config?(y/N) n

Login with rescue-user & issue acidiag health to check cluster status

CentOS Linux 7 (Core)
Kernel 4.14.174stock-1 on an x86_64

ServiceNode1 login: 
Step 8

Configure the second node.

  1. Connect to the second node's console.

    You will be prompted to run the first-time setup utility:

    [...]
    [ OK ] Started atomix-boot-setup.
           Starting atomix-ready...
           Starting Initial cloud-init job (pre-networking)...
    Starting cloud data source ...
    Press any key to run first-boot setup on this console...
    
  2. Enter the cluster name for the service node.

    Cluster Name: ServiceEngine
  3. Specify that this is a master node.

    All nodes in the cluster will be set to master.

    Master Node? (Y/n): y
  4. Specify that you've already configured the first node.

    When configuring the second and third nodes, you can skip some steps by downloading the configuration from the first node.

    Download Config From Peers? (Y/n): y
  5. Enter the node name for the service node.

    Node Name: ServiceNode2
  6. Enter and confirm the password for the rescue-user.

    We recommend configuring the same password for all nodes; however, you can choose to provide different passwords for the second and third nodes. If you provide different passwords, the first node's password will be used as the initial password for the GUI's admin user.

    Admin Password:
    Reenter Admin Password:
  7. Enter the data network information.

    You will be prompted to enter the data network IP address, netmask, and gateway.

    Optionally, you can also provide the VLAN ID for the network. For most deployments, you can skip the VLAN ID parameter.

    Data Network:
      IP Address/Mask: 192.168.6.173/24
      Gateway: 192.168.6.1
      Vlan ID (optional):
  8. Enter the management network information.

    You will be prompted to enter the management network IP address, netmask, and gateway.

    Management Network:
      IP Address/Mask: 192.168.9.173/24
      Gateway: 192.168.9.1
  9. Provide the list of other nodes.

    You will need to provide the data network IP addresses and serial numbers of the other master nodes in the cluster.

    Unlike physical appliance deployments where you obtain the serial numbers from CIMC, for Linux KVM you can choose the strings to use. We recommend using CiscoSN01, CiscoSN02, CiscoSN03 for node1, node2, node3.

    Master List (Space Separated Data Network IP,Serialnumber List)
    (Ex: 192.192.5.101,WZP22451Q1R 192.192.5.103,WZP22451Q5A): 192.168.6.172,CiscoSN01 192.168.6.174,CiscoSN03
  10. Review the configuration.

    After you have entered all the configuration information for the second and third nodes, review it and confirm.

    Please review the config:
    Cluster Name: ServiceEngine
    Cluster Size: 3
    Download Config: true
    Data Network:
      Gateway: 192.168.6.1
      IP Address/Mask: 192.168.6.173/24
      Vlan ID: 410
    Management Network:
      Gateway: 192.168.9.1
      IP Address/Mask: 192.168.9.173/24
    Master List:
    - ipAddress: 192.168.6.172
      serialNumber: CiscoSN01
    - ipAddress: 192.168.6.174
      serialNumber: CiscoSN03
    Node Name: ServiceNode2
    Node Role: Master
    Node Type: Physical
    Password: <hidden>

    Re-enter config?(y/N): n
Step 9

Repeat the previous step to configure the third node.

Step 10

Verify that the cluster is healthy.

It may take up to 30 minutes for the cluster to form and all the services to start.

After all three nodes are ready, you can log in to any one node via SSH and verify the cluster health:

  1. Verify that the cluster is up and running.

    You can check the current status of cluster deployment by logging in to any of the nodes and running the acidiag health command.

    While the cluster is converging, you may see the following outputs:

    $ acidiag health
    k8s install is in-progress
    $ acidiag health
    k8s services not in desired state - [...]
    $ acidiag health
    k8s: Etcd cluster is not ready

    When the cluster is up and running, the following output is displayed:

    $ acidiag health
    All components are healthy
  2. Log in to the Application Services Engine GUI.

    After the cluster becomes available, you can access it by browsing to any one of your nodes' management IP addresses. The default password for the admin user is the same as the rescue-user password you chose for the first node of the Application Services Engine cluster.

    When you first log in, you will be prompted to change the password.
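
To check all three nodes in one pass from a management workstation, you can loop over their management IP addresses. A minimal sketch, assuming the addresses used in this example; the third node's address 192.168.9.174 follows the pattern above but is an assumption, and the rescue-user shell must permit remote command execution:

$ for ip in 192.168.9.172 192.168.9.173 192.168.9.174; do ssh rescue-user@${ip} acidiag health; done

Each node should report "All components are healthy" once the cluster has converged.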