Multi WAE Collection

Table 1. Feature History

Feature: Multi WAE Collection Support

Release Information: Cisco WAE Release 7.6.0

Description: WAE now supports Multi WAE collection, which is particularly useful for large networks. Multi WAE allows you to split the WAE topology into smaller WAE instances. Collection is done in parallel for each WAE instance, and the plan files generated from the instances are then merged into a single plan file for the entire network.

This section contains the following topics:

Multi WAE Collection Overview

Multi WAE collection allows you to split the WAE topology into smaller WAE instances. Collection is done in parallel for each WAE instance. The plan file generated after collection from each WAE instance is then merged into a single plan file for the entire network.

Multi WAE is particularly useful for large networks. It improves performance and reduces collection time by performing collection and model building for a few NIMOs in parallel using different WAE instances. Any change in the network is propagated through notifications to the final network of the corresponding WAE instance. When the merge NIMO runs, the change is propagated to the final merged network.

Each WAE instance has its own WAE process running, with access to the WAE CLI and WAE GUI. WAE Design can also be installed on each of the WAE instances and used to view the plan files from each instance.

The SR-PCE agent is configured in the active state on only one WAE instance, which is called the WAE Scale Primary instance. The SR-PCE agent is disabled on the other WAE instances, which are called WAE Scale Secondary instances. Each WAE instance is assigned a unique identifier, called the Scale ID, which is shared between the HA primary and HA secondary instances.

Example:

If a network has around 3000 devices under a single SR-PCE server, multiple WAE instances can be configured, each processing a subnetwork of the topology. In this case the split of the network is done by the SR-PCE agent.

The BGP-LS XTC NIMO, which is the first NIMO to run after the SR-PCE agent, is configured to read the subnetwork database file instead of the full SR-PCE agent output database. The BGP-LS XTC NIMO processes the subnetwork and provides input to the other NIMOs, which then work on the subnetwork. The output from SAgE contains the topology and traffic information for the subnetwork. The merge NIMO merges the output from the SAgE network of each WAE instance.

NIMOs such as the Netflow NIMO, Multicast NIMO, and Demands NIMO require the full network as their source network and hence must be run after the merge NIMO. It is recommended to have a separate DARE and SAgE workflow with the merged network as the only source and the remaining required NIMOs, such as Demands, as dependency networks.

For multiple-AS topologies, the recommended approach is to configure a separate WAE instance for each AS and then merge their SAgE plan files to obtain the full network topology. A non-Multi-WAE approach is recommended when a multiple-AS topology is being discovered, because it is simpler and has no dependency on Multi WAE.

Splitting WAE into Multiple Instances

You can split the WAE topology into multiple instances based on area, level, AS, node count, or other user-defined configurations. The maximum number of supported WAE instances is 5.

Split Based on Area/Level/AS

The split is based on the area/level/AS and the number of nodes belonging to each area/level/AS.

  • If the number of areas/levels/AS is equal to the number of WAE instances participating in the split, then each WAE instance processes the nodes belonging to one area/level/AS.

  • If the number of areas/levels/AS is greater than the number of WAE instances, then one or more areas/levels/AS are grouped and assigned to each WAE instance depending on the node count of each area/level/AS.

  • If the number of areas/levels/AS is less than the number of WAE instances, then any area/level/AS having more than the desired node count (= total node count / number of WAE instances) is split across one or more WAE instances.
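For example (illustrative numbers): with 3000 nodes and 3 WAE instances, the desired node count is 3000 / 3 = 1000 nodes per instance. If the network has only two areas, one with 2000 nodes and one with 1000 nodes, the 2000-node area exceeds the desired node count and is therefore split across two WAE instances, while the 1000-node area is assigned to the third.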

Split Based on Node Count

The split is based purely on node count. The total node count divided by the number of WAE instances determines how many nodes are placed in each WAE instance. The remainder nodes are added to the WAE instance with the lowest Scale ID.
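For example (illustrative numbers): with 3005 nodes and 2 WAE instances, each instance is assigned 3005 / 2 = 1502 nodes, and the one remainder node is added to the instance with the lower Scale ID, giving it 1503 nodes.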

Split Based on User Defined Configurations

You can split the WAE topology based on user-defined configurations. Comma-separated IP addresses, host names, areas, levels, or AS numbers can be specified for each Scale ID and are used for the split.


Note


IP addresses provided by the SR-PCE agent are used. IP Manage is not used for the split.
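For illustration only, a user-defined split might map comma-separated values to each Scale ID as follows (hypothetical values; the exact configuration knobs depend on your release):

Scale ID 11: 10.0.0.21,10.0.0.22,node-a
Scale ID 14: 10.0.0.23,node-b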


Merging Topologies

The plan files from each of the WAE instances are merged by the inter-as-nimo; a trimmed sketch of this configuration follows the note below. The following information is merged:

  • All topology-related tables, such as Nodes, Circuits, Interfaces, BGP, and SRLG

  • All LSP-related tables, such as LSPs, SegmentLists, and NamedPaths

  • Traffic information


Note


  • It is recommended to schedule the merge NIMO to enable propagation of the topology updates through notification to the final merged network.

  • The L2VPN information is not merged because the same VPN name appears in each of the split plan files.
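A trimmed sketch of a merge network configuration, based on the Sample Merge Nimo Configuration later in this section (network and source names are illustrative):

networks network merge_nimo
 nimo inter-as-nimo single-as-merge true
 nimo inter-as-nimo sources wae-instance-1
  network      sage
  wae-scale-id 11
 !
 nimo inter-as-nimo sources wae-instance-2
  network      sage
  wae-scale-id 14
 !
!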


Licensing

Licensing for Multi WAE works similarly to the HA setup. With smart licensing, secondary users must be added for the additional WAE instances taking part in Multi WAE. With traditional licensing, the license file must include the list of MAC addresses associated with the additional WAE instances taking part in Multi WAE.

Multi WAE Collection Workflow

The following workflow describes the high-level steps to configure multi WAE collection when using the Cisco WAE CLI.


Note


A maximum of 5 WAE instances can participate in Multi WAE, and there can be at most one scale primary instance.


Before you begin

Install Multi WAE using the Ansible playbook. See the Cisco WAE Installation Guide.

Procedure


Step 1

Configure the SR-PCE agent on all servers. See Configure Agents Using the Expert Mode.

Step 2

Configure Multi WAE on all servers, as sketched below. See also the Multi WAE Configuration Examples section.
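A minimal sketch for a two-way split, based on the Sample Multi WAE Configuration later in this section (paths, user name, and IP addresses are illustrative):

wae components multi-wae split-enabled true
wae components multi-wae num-of-splits 2
wae components multi-wae user-name user1
wae components multi-wae run-path /home/user1/wae-run/
wae components multi-wae install-path /home/user1/wae-install/
wae components multi-wae igp-protocol ospf
wae components multi-wae split-type area
wae components multi-wae remote-wae-details 11
 ip-address 10.0.0.1
 role       scale-primary
!
wae components multi-wae remote-wae-details 14
 ip-address 10.0.0.8
 role       scale-secondary
!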

Step 3

Restart the SR-PCE agent (on the multi WAE scale primary server).

Note

 

On all multi WAE secondary servers, the SR-PCE agent will be disabled.

Step 4

Configure the other NIMOs on all servers: topo-bgpls-xtc-nimo, lsp-pcep-xtc-nimo, lsp-snmp-nimo, topo-vpn-nimo, topo-bgp-nimo, traffic-poll-nimo, inventory-nimo. See Network Interface Modules (NIMOs).

Note

 

The multicast NIMO and layout-nimo are not supported on individual servers because each server has only a partial topology.

Step 5

Configure the inter-as-nimo on the multi WAE scale primary instance; it merges the SAgE plans of all servers to produce the final topology. See the Sample Merge Nimo Configuration under Multi WAE Configuration Examples.

Step 6

Configure the final network on the multi WAE scale primary instance. Configure the inter-as-nimo network as the source for the final network. Add layout-nimo, the multicast NIMO, and traffic-demands-nimo to the final network. See the Sample Final network with demands configuration under Multi WAE Configuration Examples.

Step 7

Run collection for all NIMOs on all servers. Run the inter-as-nimo on the scale primary server.

Step 8

Verify and merge the final plan file.


Cisco WAE UI—Multi WAE

You can use the Cisco WAE UI to access details related to the different WAE instances and to view the split details.


Note


The Multi WAE Dashboard option is available only if Multi WAE is configured in WAE.


Navigate to Cisco WAE UI > Multi WAE Dashboard.

This is a view-only screen providing details such as the number of splits, split type, user name, install path, run path, and so on. You can also use this screen to view the IP addresses of the remote WAE and remote HA WAE instances.

You can check the status of the WAE instances participating in Multi WAE using Cisco WAE UI > Status Dashboard. See Status Dashboard.

Cisco WAE CLI—Multi WAE

The different configuration options available for multi WAE are:

ha-enabled

Indicates if HA is configured for the WAE instances taking part in the split.

igp-protocol

IGP protocol running in the network.

install-path

Install path for the WAE instance (mandatory parameter).

num-of-splits

Number of WAE instances configured to handle network split (default value = 2).

remote-wae-details

Details of all the WAE instances taking part in the split.

Note

 

The number of entries in remote-wae-details must be equal to num-of-splits; otherwise, the commit does not succeed.

run-path

Run directory for the WAE instance (mandatory parameter).

split-enabled

If set, the WAE instance is part of a Multi WAE deployment.

split-type

Indicates the split algorithm to be used to split the topology.

user-name

User name associated with the WAE instance (mandatory parameter).

remote-wae-ha-details

IP address of the WAE HA instance associated with the scale ID.


Note


Once the Multi WAE configuration is done on all the WAE instances, any changes made to the basic Multi WAE configuration (other than the advanced configuration) on the scale primary instance are propagated to all the scale secondary instances using Kafka.


To set advanced options, use:

wae components multi-wae advanced

Options:

asn

Autonomous system number for the network.

debug-health-check

Action added for debugging purposes only.

health-check-enabled

Specify if you want to enable the health check. Default is false.

health-check-interval

Specify the interval between the health check runs in minutes. Default is 5 min and minimum is 1 min.

kafkaPort

Kafka port exposed by scale primary.

load-split-config

Loads the split configuration from the YANG configuration.

node-filter

Use a node filter.

node-split-record-file-path

Path of the record file that stores the node-to-split mapping when recording is enabled.

record-node-split-mapping

Enables recording of the node-to-split mapping.

rsync-pool-size

The number of threads processing rsync tasks. Default is 5.

rsync-timeout

Timeout, in minutes, for rsync to fetch the SR-PCE topology output from the primary WAE instance. Default is 20 minutes.

single-ended-ebgp-discovery

Discover eBGP links that only have a single link end (not common).

split-action

Can be used for debugging purposes. Splits the agent topology database file into subfiles.
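For example, to enable the periodic health check with a 5-minute interval, a configuration along the following lines can be used (the health-check-enabled setting mirrors the sample configuration later in this section; setting health-check-interval the same way is an assumption):

wae components multi-wae advanced health-check-enabled true
wae components multi-wae advanced health-check-interval 5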

Use the following WAE CLI commands to retrieve information related to Multi WAE.

  • To retrieve details of every WAE instance taking part in the split, use:

    wae components multi-wae remote-wae-details <scale-id>
  • To see the Multi WAE config used by WAE, use:

    wae components multi-wae load-split-config

    Use this command to see the different Multi WAE configuration values used internally by the WAE instance in the wae-java-vm.log file; it can be used for debugging purposes.

  • To verify how the split output looks for a given Multi WAE configuration, use:

    wae components multi-wae split-action
  • To clear the Multi WAE config, use:

    wae components multi-wae clear-multi-wae-config
  • To copy the configuration from the scale primary to the scale secondary instances for a given XPath, use:

    wae components multi-wae copy-config

    Note


    • The Multi WAE configuration must first be configured on all the instances for this command to work.

    • copy-config works on the scale primary instance only.


Health Check in Multi WAE

You can check the status of the WAE instances participating in Multi WAE using Health Check. An Ansible playbook is used to determine the status of the WAE instances; the status it gathers is displayed on the Status Dashboard.

Configure Multi WAE so that all the prerequisites required for running the playbook are met:

wae components multi-wae advanced health-check

Options:

health-check-enabled

Specify if you want to enable the health check. Default is false.

health-check-interval

Specify the interval between the health check runs in minutes. Default is 5 min and minimum is 1 min.

If health check is enabled, the topo-bgpls-xtc-nimo or lsp-pcep-xtc-nimo cannot run until the health check service completes. Wait for the scheduled interval, or run the following command to enable running of the topo-bgpls-xtc-nimo:

wae components multi-wae get-remote-wae-health run-xtc-status-check true

It is not mandatory to run the health check on the scale secondary instances unless HA is configured for the scale primary. The health check status collected by the scale primary is propagated to all the scale secondary instances via Kafka.

High Availability in Multi WAE

Every WAE instance participating in Multi WAE can have an HA standby instance. Configure HA using the following command:
wae components multi-wae ha-enabled true remote-wae-ha-details

When HA is enabled on the multi WAE primary instance, Health Check must be enabled on the scale primary. When HA is enabled on a multi WAE secondary instance, Health Check must be enabled on all servers. For more information, see Health Check in Multi WAE.
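A hypothetical sketch, assuming remote-wae-ha-details is keyed by Scale ID with an ip-address leaf (mirroring remote-wae-details; the exact leaf names may differ):

wae components multi-wae ha-enabled true
wae components multi-wae remote-wae-ha-details 11
 ip-address 10.0.0.2
!
wae components multi-wae remote-wae-ha-details 14
 ip-address 10.0.0.9
!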

Multi WAE Configuration Examples

  • Sample Multi WAE Configuration

    show running-config wae components multi-wae 
    wae components multi-wae split-enabled true
    wae components multi-wae ha-enabled false
    wae components multi-wae num-of-splits 2
    wae components multi-wae user-name user1
    wae components multi-wae run-path /home/user1/750/mw/wae-run/
    wae components multi-wae install-path /home/user1/750/mw/wae-install/
    wae components multi-wae igp-protocol ospf
    wae components multi-wae remote-wae-details 11
     ip-address 10.0.0.1
     role       scale-primary
    !
    wae components multi-wae remote-wae-details 14
     ip-address 10.0.0.8
     role       scale-secondary
    !
    wae components multi-wae split-type area
    wae components multi-wae advanced health-check-enabled true
    
  • Sample Merge Nimo Configuration

    networks network merge_nimo
     nimo status active  false
     nimo status last-run 2021-08-23T07:06:14.933+00:00
     nimo status last-successful-run 2021-08-23T07:06:14.933+00:00
     nimo inter-as-nimo single-as-merge true
     nimo inter-as-nimo sources wae-redhat-1
      network        sage
     wae-scale-id   11
     !
     nimo inter-as-nimo sources wae-redhat-2
      network        sage
     wae-scale-id   14
     !
    !
    
  • Sample Final network with demands configuration

    wae components aggregators aggregator final_dare
     sources source merge_nimo
      nimo inter-as-nimo
     !
     dependencies dependency traffic_demand
      nimo traffic-demands-nimo
     !
     final-network final_sage
    !
    
  • Sample show command output

    show wae components multi-wae split-details 
           SPLIT    
    SCALE  TYPE     
    ID     VALUES   
    ----------------
    11     0,40,30  
    14     20,10
    
  • Sample show Health Status output

    show wae components multi-wae health-status 
    wae components multi-wae health-status last-run "23-Aug-21 12:30:40 IST"
    wae components multi-wae health-status active-topo-xtc-agents XTC-Standby,XTC-Active
    wae components multi-wae health-status active-lsp-xtc-agents XTC-Standby,XTC-Active
    wae components multi-wae health-status wae-details 10.0.0.1
     scale-id                    11
     role                        scale-primary
     wae-status                  true
     kafka-status                true
     ha-status                   Disabled
     topo-xtc-agents-configured  XTC-Standby,XTC-Active
     lsp-xtc-agents-configured   XTC-Standby,XTC-Active
     primary-scale-id-configured 11
     primary-scale-ip-configured 10.0.0.1
     wae-version                 "WAE v7.6.0-822-g95e355c for linux on x86_64."
    wae components multi-wae health-status wae-details 10.0.0.8
     scale-id                    14
     role                        scale-secondary
     wae-status                  true
     kafka-status                true
     ha-status                   Disabled
     topo-xtc-agents-configured  XTC-Standby,XTC-Active
     lsp-xtc-agents-configured   XTC-Standby,XTC-Active
     primary-scale-id-configured 11
     primary-scale-ip-configured 10.0.0.1
     wae-version                 "WAE v7.6.0-822-g95e355c for linux on x86_64."
    

Multi WAE Collection Limitations

  • SR-PCE agent configuration is required on all servers and is controlled from the primary server only.

  • WAE can be split into a maximum of 5 servers.

  • After any change in the Multi WAE configuration:

    • Restart the SR-PCE agent

    • Resync the aggregator

  • After changing the number of splits, WAE services must be stopped and started again.

  • Node filter configuration must be applied under the Multi WAE configuration knob. After applying it, restart the SR-PCE agent.

  • L2VPN is not supported in Multi WAE environment.

  • The recommended way to remove the Multi WAE configuration is:

    #wae components multi-wae clear-multi-wae-config