Overview

This section contains the following topics:

Cisco WAE Overview

The Cisco WAN Automation Engine (WAE) platform is an open, programmable framework that interconnects software modules, communicates with the network, and provides APIs to interface with external applications.

Cisco WAE provides the tools to create and maintain a model of the current network through continual monitoring and analysis of the network and the traffic demands that are placed on it. At any given time, this network model contains all relevant information about the network, including topology, configuration, and traffic information. You can use this information as a basis for analyzing the impact on the network of changes in traffic demands, paths, node and link failures, network optimizations, or other changes.
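
As a purely conceptual illustration (not the WAE plan-file format or any WAE API), the following Python sketch keeps topology and demand information in one model and tests whether a demand is still routable after a simulated circuit failure. All node names and data structures are made up.

    # Hypothetical, minimal illustration of "what-if" analysis on a network model.
    # This is not the WAE plan-file schema; it only mirrors the idea of holding
    # topology and demands in one model and testing the impact of a failure.
    from collections import defaultdict, deque

    # Topology: circuits between nodes.
    circuits = {("cr1.ams", "cr1.fra"), ("cr1.fra", "cr1.par"), ("cr1.ams", "cr1.par")}
    # Traffic demands: (source, destination, Mbps).
    demands = [("cr1.ams", "cr1.par", 400)]

    def reachable(src, dst, failed=frozenset()):
        """Breadth-first search over circuits that have not failed."""
        adj = defaultdict(set)
        for a, b in circuits - set(failed):
            adj[a].add(b)
            adj[b].add(a)
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for nbr in adj[node] - seen:
                seen.add(nbr)
                queue.append(nbr)
        return False

    # What-if: fail the direct Amsterdam-Paris circuit and test each demand.
    failure = {("cr1.ams", "cr1.par")}
    for src, dst, mbps in demands:
        ok = reachable(src, dst, failed=failure)
        print(f"{src} -> {dst} ({mbps} Mbps): {'still routable' if ok else 'unroutable'}")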

The Cisco WAE platform has numerous use cases, including:

  • Traffic engineering and network optimization—Compute TE LSP configurations to improve network performance, or perform local or global optimization.

  • Demand engineering—Examine the impact on network traffic flow of adding, removing, or modifying traffic demands on the network.

  • Topology and predictive analysis—Observe the impact on network performance of changes in the network topology, whether driven by design or by network failures.

  • TE tunnel programming—Examine the impact of modifying tunnel parameters, such as the tunnel path and reserved bandwidth.

  • Class of service (CoS)-aware bandwidth on demand—Examine existing network traffic and demands, and admit a set of service-class-specific demands between routers.

Cisco WAE Architecture

At its core, Cisco WAE defines an abstract network model, which can be built from an actual network by stitching together network interface modules (NIMOs).

The Cisco WAE network model is defined in YANG and is extensible via standard YANG mechanisms. WAE itself is implemented on top of a YANG run-time system that automatically generates APIs (NETCONF, RESTCONF, CLI) from the YANG models.
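
As an example of what the generated APIs make possible, the sketch below reads part of the network model over RESTCONF from Python. The host, port, credentials, and the "wae:networks" resource path are assumptions made for illustration; the actual resource names come from the YANG models exposed by your WAE installation.

    # Sketch: reading part of the network model through the auto-generated
    # RESTCONF API. The endpoint, credentials, and the "wae:networks" path are
    # assumptions -- check the YANG models your WAE version exposes for the
    # real resource names.
    import requests

    WAE_URL = "https://wae.example.com:8443/restconf/data"   # assumed endpoint
    AUTH = ("admin", "admin")                                 # assumed credentials

    resp = requests.get(
        f"{WAE_URL}/wae:networks",                            # assumed YANG path
        auth=AUTH,
        headers={"Accept": "application/yang-data+json"},     # standard RESTCONF media type
        verify=False,                                         # lab only: skip TLS verification
    )
    resp.raise_for_status()
    print(resp.json())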

Network Interface Modules

A network interface module (NIMO) is a WAE package that populates parts of the abstract network model, possibly querying the network to do so. Most NIMOs operate as follows (a conceptual sketch follows the steps):

  1. They read a source network model (or simply, a source model).

  2. They augment the source model with information obtained from the actual network.

  3. They write the result to a destination network model (or simply, a destination model).
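
The following Python sketch mirrors this read-augment-write pattern in the abstract. The function names and model structure are hypothetical and do not correspond to any real NIMO implementation.

    # Hypothetical sketch of the source-model -> augment -> destination-model
    # pattern that most NIMOs follow. Names and structures are illustrative only.
    import copy

    def poll_network_statistics():
        """Stand-in for querying the live network (SNMP, telemetry, and so on)."""
        return {("cr1.ams", "cr1.fra"): {"traffic-mbps": 1234}}

    def run_nimo(source_model):
        # 1. Read the source model (left unmodified).
        destination_model = copy.deepcopy(source_model)

        # 2. Augment it with information obtained from the actual network.
        for circuit, stats in poll_network_statistics().items():
            destination_model["circuits"].setdefault(circuit, {}).update(stats)

        # 3. Produce the destination model containing the extra information.
        return destination_model

    source = {"circuits": {("cr1.ams", "cr1.fra"): {"metric": 10}}}
    print(run_nimo(source))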

WAE includes several different NIMOs, such as:

  • Topology NIMO—Populates a basic network model with topology information (nodes, interfaces, circuits) based on the discovered IGP database augmented by SNMP queries. The topology NIMO does not have a source model.

  • LSP configuration NIMO—Augments a source model with LSP information, producing a destination model with the extra information.

  • Traffic poller NIMO—Augments a source model with traffic statistics polled from the network, producing a new destination model with extra information.

  • Layout NIMO—Adds layout properties to a source model to improve visualization. It produces a new destination model with the extra layout information. The NIMO records changes to the layout properties, so when the source model changes and the destination model is updated, the layout properties in the destination model are updated accordingly.

For a comprehensive list of all the NIMOs supported by WAE, see Network Interface Modules (NIMOs).

Network Models

A model building chain is an arrangement of NIMOs organized in such a way as to produce a network model with the desired information.

Delta Aggregation Rules Engine (DARE)

The Delta Aggregation Rules Engine (DARE) aggregator is a WAE component that brings together various NIMOs, selecting model information from each of them and consolidating the information into a single model. DARE first consolidates any configured topology NIMOs into a model, and then runs the other NIMOs against that model. For example, DARE consolidates an LSP configuration NIMO, an L3 topology NIMO, and an L1 topology NIMO into a single model; this is then followed by traffic collection, inventory collection, layout, NetFlow, and demands.

Figure: A model building chain tied together by the DARE aggregator.


Note

Because DARE is delta-based and operates on changes, configure DARE before making changes to NIMO models.


For information on how to configure the aggregator to use DARE, see NIMO Collection Consolidation.

Simple Aggregation Engine (SAgE)

The Simple Aggregation Engine (SAgE) is a WAE component that consolidates network information such as traffic, inventory, layout, NetFlow, and demands, and aggregates these changes, along with the topology changes from the DARE network, into the final network. The network information from all the NIMOs that support the native format is written to plan files. Network changes can be archived from SAgE.

The SAgE aggregator enables traffic collection, inventory collection, layout, and other collections to run in parallel.

For information on how to configure the SAgE aggregator, see Run Traffic Collection or a Custom Script Using the Network Model Composer.

Cisco WAE Modeling Daemon (WMD)

WMD receives changes from SAgE, incorporating scheduled NIMO runs, and consolidates all updates into a near real-time Master Model of the network. Cisco WAE applications (described in the next section) can connect to WMD and gain access to a copy of this near real-time model in order to use Cisco WAE OPM API functionality. WMD configuration is optional and is only required when using the Bandwidth on Demand and Bandwidth Optimization applications.

For information on how to configure WMD, see Configure the WAE Modeling Daemon (WMD).

Cisco WAE Applications

Cisco WAE provides a flexible and powerful application development infrastructure. A simple Cisco WAE application consists of:

  • The application interface, defined in a YANG model. This interface usually includes RPCs and data models. The YANG models can, if necessary, extend the Cisco WAE network model, adding new data types.

  • The application logic, implemented using the Optimization and Prediction Modules (OPMs).

    The OPMs provide a powerful Python API for manipulating network models. This API lets you operate on the network without having to worry about device-specific properties. Even if the underlying routers are replaced by routers from a different vendor, the API calls remain exactly the same.
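
    The sketch below illustrates the kind of vendor-neutral call pattern such a Python API enables. The classes and method names are invented for this example and are not the actual OPM API.

        # Hypothetical illustration of vendor-neutral model manipulation. The
        # NetworkModel and Lsp classes are invented for this sketch; consult the
        # Cisco WAE OPM API documentation for the real classes and calls.
        from dataclasses import dataclass

        @dataclass
        class Lsp:
            source: str
            destination: str
            setup_bandwidth_mbps: float = 0.0

        class NetworkModel:
            def __init__(self):
                self.lsps = []

            def add_lsp(self, source, destination, setup_bandwidth_mbps):
                """The caller never supplies vendor-specific configuration; the
                same call works regardless of which vendor's routers are deployed."""
                lsp = Lsp(source, destination, setup_bandwidth_mbps)
                self.lsps.append(lsp)
                return lsp

        model = NetworkModel()
        model.add_lsp("cr1.ams", "cr1.par", setup_bandwidth_mbps=500)
        print(model.lsps)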

Because Cisco WAE automatically generates APIs from YANG definitions, a Cisco WAE application has its APIs automatically exposed. A Cisco WAE application is, in a sense, a seamless extension of Cisco WAE functionality.

Bandwidth on Demand Application

The Bandwidth on Demand (BWoD) application uses the near real-time model of the network provided by WMD to compute and maintain paths for SR policies with bandwidth constraints that are delegated to WAE from XTC. To compute the shortest available path for an SR policy with a bandwidth constraint, and to ensure that the path is free of congestion, a Path Computation Element (PCE) must be aware of the traffic loading on the network. The WAE BWoD application extends the existing topology-aware PCE capabilities of XTC by allowing bandwidth-aware path computation for SR policies to be delegated to WAE through a new XTC REST API. Users can fine-tune the behavior of the BWoD application, and thus the paths it computes, through application options such as the network utilization threshold (the definition of congestion) and path optimization criteria.
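
The core computation can be illustrated with a small, self-contained sketch: a shortest-path search that prunes links unable to carry the requested bandwidth without crossing the utilization threshold. The topology data is made up and the logic is deliberately simplified; it is not the BWoD implementation and does not use XTC or the WAE APIs.

    # Simplified illustration of bandwidth-constrained shortest-path computation:
    # links that cannot carry the requested bandwidth without exceeding the
    # utilization threshold are pruned before running Dijkstra.
    import heapq

    # (node_a, node_b): IGP metric, capacity in Mbps, current traffic in Mbps.
    links = {
        ("a", "b"): {"metric": 10, "capacity": 1000, "traffic": 850},
        ("b", "c"): {"metric": 10, "capacity": 1000, "traffic": 100},
        ("a", "d"): {"metric": 15, "capacity": 1000, "traffic": 100},
        ("d", "c"): {"metric": 15, "capacity": 1000, "traffic": 200},
    }

    def constrained_shortest_path(src, dst, demand_mbps, threshold=0.9):
        # Keep only links with enough headroom for the new demand.
        adj = {}
        for (a, b), attrs in links.items():
            if attrs["traffic"] + demand_mbps <= threshold * attrs["capacity"]:
                adj.setdefault(a, []).append((b, attrs["metric"]))
                adj.setdefault(b, []).append((a, attrs["metric"]))
        # Plain Dijkstra over the pruned graph.
        best = {src: 0}
        heap = [(0, src, [src])]
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, path
            for nbr, metric in adj.get(node, []):
                if nbr not in best or cost + metric < best[nbr]:
                    best[nbr] = cost + metric
                    heapq.heappush(heap, (cost + metric, nbr, path + [nbr]))
        return None

    # A 200 Mbps request avoids the a-b link (850 + 200 exceeds 90% of 1000).
    print(constrained_shortest_path("a", "c", demand_mbps=200))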

For information on how to configure the BWoD application, see Bandwidth on Demand Configuration Workflow.

Bandwidth Optimization Application

The Bandwidth Optimization application is an approach to managing network traffic that focuses on deploying a small number of LSPs to achieve a specific outcome in the network. Examples of this type of tactical traffic engineering are deploying LSPs to shift traffic away from a congested link, establishing a low-latency LSP for priority voice or video traffic, or deploying LSPs to avoid certain nodes or links. WAE provides the Bandwidth Optimization application to react to and manage traffic as the state of the network changes.
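
As a simplified illustration of this tactical approach, the sketch below flags links whose utilization exceeds a threshold and lists the demands that cross them, which would be the candidates for traffic to move onto a new LSP. The data and structures are made up; this is not the application's logic.

    # Illustrative only: flag links above a utilization threshold and list the
    # demands crossing them as candidates for tactical LSPs.
    links = {
        ("a", "b"): {"capacity": 1000, "traffic": 960},
        ("b", "c"): {"capacity": 1000, "traffic": 300},
    }
    # Demands and the links their current routing crosses.
    demands = [
        {"name": "a->c voice", "mbps": 200, "path_links": [("a", "b"), ("b", "c")]},
    ]

    THRESHOLD = 0.9
    congested = {l for l, v in links.items() if v["traffic"] / v["capacity"] > THRESHOLD}
    for demand in demands:
        if congested.intersection(demand["path_links"]):
            print(f"candidate for a tactical LSP: {demand['name']} ({demand['mbps']} Mbps)")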

For information on how to configure the Bandwidth Optimization application, see Bandwidth Optimization Application Workflow.

Cisco WAE Interfaces

Cisco WAE has three interfaces that you can use to configure your network model:

Cisco WAE UI

The Cisco WAE UI provides an easy-to-use interface that hides the complexity of creating a model building chain for a network. The Cisco WAE UI combines the configuration of multiple data collections under one network and can produce a single plan file that contains the consolidated data. However, there are certain operations that cannot be performed with the Cisco WAE UI. Any configurations done using the Expert Mode or the Cisco WAE CLI may not appear in the Cisco WAE UI configuration screens. See Network Model Configuration—Cisco WAE UI.

Expert Mode

The Expert Mode is a YANG model browser with additional device and service functionality that might not be available in the WAE UI. Users might prefer to use the Expert Mode over the Cisco WAE CLI because all options for each operation are visible in the Expert Mode. See Network Model Configuration—Expert Mode.

Cisco WAE CLI

The Cisco WAE CLI is a command-line interface in which you type commands at a prompt and the system returns a response. It is the bare-bones interface for all Cisco WAE configurations. Operations available in the Expert Mode are also available in the Cisco WAE CLI. See Network Model Configuration—Cisco WAE CLI.

Network Model Creation Workflow

The following is a high-level workflow for configuring individual network models; a sketch of the equivalent API calls follows the steps. The detailed steps differ depending on which interface you use (Expert Mode, Cisco WAE UI, or Cisco WAE CLI).

If you plan to run multiple NIMOs and consolidate the information into one final network, do not run collections until after you have set up the aggregator NIMO. For more information, see NIMO Collection Consolidation.

  1. Configure device authgroups, SNMP groups, and network profile access.

  2. (Optional) Configure agents. This step is required only for collecting XTC, LAG and port interface, or multilayer information.

  3. Configure an aggregated network and sources with a topology NIMO.

  4. Configure additional collections such as demands, traffic, layout, inventory, and so on.

  5. Schedule when to run collections.

  6. Configure the archive file system location and interval at which plan files are periodically stored.

  7. (Optional) View plan files in Cisco WAE applications.
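
For readers who prefer to drive these steps through the auto-generated RESTCONF interface rather than the UI or CLI, the sketch below shows what that might look like in Python. Every resource path and payload is a placeholder assumption; the real names come from the YANG models shipped with your WAE release.

    # Sketch of driving part of the workflow above over RESTCONF. The paths
    # ("wae:nimos/network-access", "wae:networks") and payloads are placeholder
    # assumptions, not documented resource names.
    import requests

    BASE = "https://wae.example.com:8443/restconf/data"   # assumed endpoint
    AUTH = ("admin", "admin")                              # assumed credentials
    HEADERS = {"Content-Type": "application/yang-data+json"}

    def patch(path, payload):
        """Send one RESTCONF merge patch and fail loudly on errors."""
        resp = requests.patch(f"{BASE}/{path}", json=payload, auth=AUTH,
                              headers=HEADERS, verify=False)
        resp.raise_for_status()

    # Step 1: device authgroups, SNMP groups, and network access (placeholder payload).
    patch("wae:nimos/network-access", {"network-access": [{"name": "lab-access"}]})

    # Step 3: an aggregated network sourced from a topology NIMO (placeholder payload).
    patch("wae:networks", {"network": [{"name": "lab-topo"}]})

    # Steps 4 through 6 (additional collections, scheduling, archive) follow the
    # same pattern of writing configuration into the YANG data tree.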