MPLS Configuration Guide for Cisco NCS 540 Series Routers, Cisco IOS XR Release 6.3.x
Bias-Free Language
The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.
Traditional IP routing
emphasizes forwarding traffic to the destination as fast as possible. As a
result, the routing protocols compute the least-cost route to each destination
according to their metrics, and every router forwards packets hop by hop based
only on the destination IP address. Traditional IP routing therefore does not
consider the available bandwidth of a link. This can cause some links to be
over-utilized compared to others, and bandwidth is not used efficiently.
Traffic Engineering (TE) is used when the problems result from inefficient
mapping of traffic streams onto the network resources. Traffic engineering
allows you to control the path that data packets follow and to move traffic
flows from congested links to non-congested links, which is not possible with
the automatically computed destination-based shortest path.
Multiprotocol Label
Switching (MPLS) with its label switching capabilities, eliminates the need for
an IP route look-up and creates a virtual circuit (VC) switching function,
allowing enterprises the same performance on their IP-based network services as
with those delivered over traditional networks such as Frame Relay or
Asynchronous Transfer Mode (ATM). MPLS traffic engineering (MPLS-TE) relies on
the MPLS backbone to replicate and expand upon the TE capabilities of Layer 2
ATM and Frame Relay networks.
MPLS-TE learns the
topology and resources available in a network and then maps traffic flows to
particular paths based on resource requirements and network resources such as
bandwidth. MPLS-TE builds a unidirectional tunnel from a source to a
destination in the form of a label switched path (LSP), which is then used to
forward traffic. The point where the tunnel begins is called the tunnel headend
or tunnel source, and the node where the tunnel ends is called the tunnel
tailend or tunnel destination. A router through which the tunnel passes is
called the mid-point of the tunnel.
MPLS-TE uses extensions
to a link-state based Interior Gateway Protocol (IGP), such as Intermediate
System-to-Intermediate System (IS-IS) or Open Shortest Path First (OSPF). MPLS-TE
calculates TE tunnels at the LSP headend based on required and available resources
(constraint-based routing). If configured, the IGP automatically routes the
traffic onto these LSPs. Typically, a packet that crosses the MPLS-TE backbone
travels on a single LSP that connects the ingress point to the egress point.
MPLS TE automatically establishes and maintains the LSPs across the MPLS
network by using the Resource Reservation Protocol (RSVP).
Overview of MPLS-TE
Features
In MPLS traffic
engineering, IGP extensions flood the TE information across the network. Once
the IGP distributes the link attributes and bandwidth information, the headend
router calculates the best path from head to tail for the MPLS-TE tunnel. This
path can also be configured explicitly. Once the path is calculated, RSVP-TE is
used to set up the TE label switched path (LSP).
To forward the
traffic, you can configure autoroute, forwarding adjacency, or static routing. The
autoroute feature announces the routes assigned by the tailend router and its
downstream routes into the routing table of the headend router, where the tunnel
is treated as a directly connected link.
If forwarding adjacency
is enabled, the MPLS-TE tunnel is advertised as a link in an IGP network with the
link's cost associated with it. Routers outside of the TE domain can see the TE
tunnel and use it to compute the shortest path for routing traffic throughout
the network.
MPLS-TE provides a
protection mechanism known as fast reroute to minimize packet loss during a
failure. For fast reroute, you need to create backup tunnels. The auto-tunnel
backup feature enables a router to dynamically build backup tunnels when they
are needed, instead of pre-configuring each backup tunnel and then assigning it
to the protected interfaces.
DiffServ Aware
Traffic Engineering (DS-TE) enables you to configure multiple bandwidth
constraints on an MPLS-enabled interface to support various classes of service
(CoS). These bandwidth constraints can be treated differently based on the
requirement for the traffic class using that constraint.
The MPLS traffic
engineering auto-tunnel mesh feature allows you to set up a full mesh of TE
tunnels automatically with a minimal set of MPLS traffic engineering
configurations. The MPLS-TE auto bandwidth feature automatically adjusts the
tunnel bandwidth based on traffic patterns without traffic disruption.
The MPLS-TE interarea
tunneling feature allows you to establish TE tunnels spanning multiple Interior
Gateway Protocol (IGP) areas and levels, thus eliminating the requirement that
headend and tailend routers should reside in a single area.
MPLS-TE automatically
establishes and maintains label switched paths (LSPs) across the backbone by
using RSVP. The path that an LSP uses is determined by the LSP resource
requirements and network resources, such as bandwidth. Available resources are
flooded by extensions to a link state based Interior Gateway Protocol (IGP).
MPLS-TE tunnels are calculated at the LSP headend router, based on a fit
between the required and available resources (constraint-based routing). The
IGP automatically routes the traffic to these LSPs. Typically, a packet
crossing the MPLS-TE backbone travels on a single LSP that connects the ingress
point to the egress point.
The following
sections describe the components of MPLS-TE:
Tunnel
Interfaces
From a Layer 2
standpoint, an MPLS tunnel interface represents the headend of an LSP. It is
configured with a set of resource requirements, such as bandwidth and media
requirements, and priority. From a Layer 3 standpoint, an LSP tunnel interface
is the headend of a unidirectional virtual link to the tunnel destination.
MPLS-TE Path
Calculation Module
This calculation
module operates at the LSP headend. The module determines a path to use for an
LSP. The path calculation uses a link-state database containing flooded
topology and resource information.
RSVP with TE
Extensions
RSVP operates at
each LSP hop and is used to signal and maintain LSPs based on the calculated
path.
MPLS-TE Link
Management Module
This module operates
at each LSP hop, performs link call admission on the RSVP signaling messages,
and keeps track of the topology and resource information to be flooded.
Link-state
IGP
Either Intermediate
System-to-Intermediate System (IS-IS) or Open Shortest Path First (OSPF) can be
used as IGPs. These IGPs are used to globally flood topology and resource
information from the link management module.
Label Switching
Forwarding
This forwarding
mechanism provides routers with a Layer 2-like ability to direct traffic across
multiple hops of the LSP established by RSVP signaling.
Configuring
MPLS-TE
MPLS-TE requires
coordination among several global neighbor routers. RSVP, MPLS-TE, and the IGP are
configured on all routers and interfaces in the MPLS traffic engineering
network. Explicit paths and TE tunnel interfaces are configured only on the
headend routers. MPLS-TE requires some basic configuration tasks, which are
explained in this section.
Building MPLS-TE
Topology
Building the MPLS-TE
topology sets up the environment for creating MPLS-TE tunnels. This procedure
includes the basic node and interface configuration for enabling MPLS-TE. To
perform constraint-based routing, you need to enable OSPF or IS-IS as the IGP
extension.
Before You
Begin
Before you start to
build the MPLS-TE topology, the following prerequisites must be met:
A stable router
ID is required at either end of the link to ensure that the link remains stable.
If you do not assign a router ID, the system defaults to the global router ID.
Default router IDs are subject to change, which can result in an unstable link.
Enable RSVP on
the port interface.
Example
This example enables
MPLS-TE on a node and then specifies the interface that is part of the MPLS-TE.
Here, OSPF is used as the IGP extension protocol for information distribution.
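A minimal configuration sketch for the OSPF case is shown below; the interface name HundredGigabitEthernet0/0/0/3, the OSPF process ID and area, the router ID, and the RSVP bandwidth value are assumptions used only for illustration.
RP/0/RP0/CPU0:router# configure
/* Enable MPLS-TE and specify the participating interface */
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# interface HundredGigabitEthernet0/0/0/3
RP/0/RP0/CPU0:router(config-mpls-te-if)# exit
RP/0/RP0/CPU0:router(config-mpls-te)# exit
/* Enable the OSPF extensions that flood TE information */
RP/0/RP0/CPU0:router(config)# router ospf 1
RP/0/RP0/CPU0:router(config-ospf)# mpls traffic-eng router-id Loopback0
RP/0/RP0/CPU0:router(config-ospf)# area 0
RP/0/RP0/CPU0:router(config-ospf-ar)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-ospf-ar)# interface HundredGigabitEthernet0/0/0/3
RP/0/RP0/CPU0:router(config-ospf-ar-if)# exit
RP/0/RP0/CPU0:router(config-ospf-ar)# exit
RP/0/RP0/CPU0:router(config-ospf)# exit
/* Enable RSVP and reserve bandwidth on the interface */
RP/0/RP0/CPU0:router(config)# rsvp
RP/0/RP0/CPU0:router(config-rsvp)# interface HundredGigabitEthernet0/0/0/3
RP/0/RP0/CPU0:router(config-rsvp-if)# bandwidth 100000
RP/0/RP0/CPU0:router(config-rsvp-if)# commit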
This example enables
MPLS-TE on a node and then specifies the interface that is part of the MPLS-TE.
Here, IS-IS is used as the IGP extension protocol for information distribution.
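A corresponding sketch for the IS-IS case follows; the IS-IS instance name, NET, level, interface name, and RSVP bandwidth value are assumptions used only for illustration.
RP/0/RP0/CPU0:router# configure
/* Enable MPLS-TE and specify the participating interface */
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# interface HundredGigabitEthernet0/0/0/3
RP/0/RP0/CPU0:router(config-mpls-te-if)# exit
RP/0/RP0/CPU0:router(config-mpls-te)# exit
/* Enable the IS-IS extensions that flood TE information */
RP/0/RP0/CPU0:router(config)# router isis 1
RP/0/RP0/CPU0:router(config-isis)# net 49.0001.0000.0000.0001.00
RP/0/RP0/CPU0:router(config-isis)# address-family ipv4 unicast
RP/0/RP0/CPU0:router(config-isis-af)# metric-style wide
RP/0/RP0/CPU0:router(config-isis-af)# mpls traffic-eng level-2-only
RP/0/RP0/CPU0:router(config-isis-af)# mpls traffic-eng router-id Loopback0
RP/0/RP0/CPU0:router(config-isis-af)# exit
RP/0/RP0/CPU0:router(config-isis)# interface HundredGigabitEthernet0/0/0/3
RP/0/RP0/CPU0:router(config-isis-if)# address-family ipv4 unicast
RP/0/RP0/CPU0:router(config-isis-if-af)# exit
RP/0/RP0/CPU0:router(config-isis-if)# exit
RP/0/RP0/CPU0:router(config-isis)# exit
/* Enable RSVP and reserve bandwidth on the interface */
RP/0/RP0/CPU0:router(config)# rsvp
RP/0/RP0/CPU0:router(config-rsvp)# interface HundredGigabitEthernet0/0/0/3
RP/0/RP0/CPU0:router(config-rsvp-if)# bandwidth 100000
RP/0/RP0/CPU0:router(config-rsvp-if)# commit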
Creating an MPLS-TE
tunnel is a process of customizing the traffic engineering to fit your network
topology. The MPLS-TE tunnel is created at the headend router. You need to
specify the destination and path of the TE LSP.
To steer traffic
through the tunnel, you can use one of the following methods:
Static Routing
Autoroute Announce
Forwarding
Adjacency
Before You
Begin
The following
prerequisites are required to create an MPLS-TE tunnel:
You must have a
router ID for the neighboring router.
Stable router
ID is required at either end of the link to ensure that the link is successful.
If you do not assign a router ID to the routers, the system defaults to the
global router ID. Default router IDs are subject to change, which can result in
an unstable link.
Configuration
Example
This example
configures an MPLS-TE tunnel on the headend router with a destination IP
address of 192.168.92.125. The bandwidth, path-option, and forwarding
parameters of the tunnel are also configured. You can use static routing,
autoroute announce, or forwarding adjacency to steer traffic through the
tunnel.
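A minimal configuration sketch is shown below; the tunnel number, the bandwidth value, and the choice of a dynamic path-option with autoroute announce are assumptions used only for illustration.
RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# ipv4 unnumbered Loopback0
RP/0/RP0/CPU0:router(config-if)# destination 192.168.92.125
/* Reserve bandwidth for the tunnel (kbps) */
RP/0/RP0/CPU0:router(config-if)# signalled-bandwidth 100
/* Let the headend compute the path dynamically */
RP/0/RP0/CPU0:router(config-if)# path-option 1 dynamic
/* Steer traffic into the tunnel through the IGP */
RP/0/RP0/CPU0:router(config-if)# autoroute announce
RP/0/RP0/CPU0:router(config-if)# commit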
Verify the
configuration of MPLS-TE tunnel using the following command.
RP/0/RP0/CPU0:router# show mpls traffic-eng tunnels brief
Signalling Summary:
LSP Tunnels Process: running
RSVP Process: running
Forwarding: enabled
Periodic reoptimization: every 3600 seconds, next in 2538 seconds
Periodic FRR Promotion: every 300 seconds, next in 38 seconds
Auto-bw enabled tunnels: 0 (disabled)
TUNNEL NAME DESTINATION STATUS STATE
tunnel-te1 192.168.92.125 up up
Displayed 1 up, 0 down, 0 recovering, 0 recovered heads
Fast reroute (FRR)
provides link protection to LSPs enabling the traffic carried by LSPs that
encounter a failed link to be rerouted around the failure. The reroute decision
is controlled locally by the router connected to the failed link. The headend
router on the tunnel is notified of the link failure through IGP or through
RSVP. When it is notified of a link failure, the headend router attempts to
establish a new LSP that bypasses the failure. This provides a path to
reestablish links that fail, providing protection to data transfer. The path of
the backup tunnel can be an IP explicit path, a dynamically calculated path, or
a semi-dynamic path. For detailed conceptual information on fast reroute, see
the MPLS-TE Features - Details section.
Before You
Begin
The following
prerequisites are required to create an MPLS-TE tunnel:
You must have a
router ID for the neighboring router.
Stable router
ID is required at either end of the link to ensure that the link is successful.
If you do not assign a router ID to the routers, the system defaults to the
global router ID. Default router IDs are subject to change, which can result in
an unstable link.
Configuration
Example
This example
configures fast reroute on an MPLS-TE tunnel. Here, tunnel-te 2 is configured
as the backup tunnel. You can use the
protected-by command to configure path protection for
an explicit path that is protected by another path.
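A representative configuration sketch follows; the explicit-path names, the interface HundredGigabitEthernet0/0/0/3, and the backup bandwidth value are assumptions used only for illustration, and the explicit paths themselves are assumed to be defined separately.
RP/0/RP0/CPU0:router# configure
/* Enable FRR on the protected tunnel */
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# fast-reroute
RP/0/RP0/CPU0:router(config-if)# path-option 1 explicit name primary-path protected-by 10
RP/0/RP0/CPU0:router(config-if)# path-option 10 dynamic
RP/0/RP0/CPU0:router(config-if)# exit
/* Configure tunnel-te 2 as the backup tunnel */
RP/0/RP0/CPU0:router(config)# interface tunnel-te 2
RP/0/RP0/CPU0:router(config-if)# ipv4 unnumbered Loopback0
RP/0/RP0/CPU0:router(config-if)# destination 192.168.92.125
RP/0/RP0/CPU0:router(config-if)# backup-bw global-pool 5000
RP/0/RP0/CPU0:router(config-if)# path-option 1 explicit name backup-path
RP/0/RP0/CPU0:router(config-if)# exit
/* Assign the backup tunnel to the protected interface */
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# interface HundredGigabitEthernet0/0/0/3
RP/0/RP0/CPU0:router(config-mpls-te-if)# backup-path tunnel-te 2
RP/0/RP0/CPU0:router(config-mpls-te-if)# commit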
The MPLS Traffic
Engineering Auto-Tunnel Backup feature enables a router to dynamically build
backup tunnels on the interfaces that are configured with MPLS TE tunnels
instead of building MPLS-TE tunnels statically.
The MPLS-TE
Auto-Tunnel Backup feature has these benefits:
Backup tunnels are
built automatically, eliminating the need for users to pre-configure each
backup tunnel and then assign the backup tunnel to the protected interface.
Protection is
expanded—FRR does not protect IP traffic that is not using the TE tunnel or
Label Distribution Protocol (LDP) labels that are not using the TE tunnel.
The TE attribute-set
template that specifies a set of TE tunnel attributes, is locally configured at
the headend of auto-tunnels. The control plane triggers the automatic
provisioning of a corresponding TE tunnel, whose characteristics are specified
in the respective attribute-set.
Configuration
Example
This example
configures auto-tunnel backup on an interface and specifies the attribute-set
template for the auto-tunnels. In this example, unused backup tunnels are
removed every 20 minutes using a timer, and the range of tunnel interface
numbers is also specified.
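A representative configuration sketch follows; the interface name, the attribute-set name ab, and the tunnel-id range are assumptions used only for illustration.
RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
/* Remove unused backup tunnels every 20 minutes */
RP/0/RP0/CPU0:router(config-mpls-te)# auto-tunnel backup timers removal unused 20
/* Range of tunnel interface numbers used for the automatic backup tunnels */
RP/0/RP0/CPU0:router(config-mpls-te)# auto-tunnel backup tunnel-id min 60 max 100
RP/0/RP0/CPU0:router(config-mpls-te)# interface HundredGigabitEthernet0/0/0/3
RP/0/RP0/CPU0:router(config-mpls-te-if)# auto-tunnel backup
RP/0/RP0/CPU0:router(config-mpls-te-if-auto-backup)# attribute-set ab
RP/0/RP0/CPU0:router(config-mpls-te-if-auto-backup)# commit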
This example shows a
sample output for automatic backup tunnel configuration.
RP/0/RP0/CPU0:router# show mpls traffic-eng tunnels brief
TUNNEL NAME DESTINATION STATUS STATE
tunnel-te0 200.0.0.3 up up
tunnel-te1 200.0.0.3 up up
tunnel-te2 200.0.0.3 up up
tunnel-te50 200.0.0.3 up up
*tunnel-te60 200.0.0.3 up up
*tunnel-te70 200.0.0.3 up up
*tunnel-te80 200.0.0.3 up up
The backup tunnels
that bypass only a single link of the LSP path are referred to as next-hop (NHOP)
backup tunnels because they terminate at the LSP's next hop beyond the point of
failure. They protect LSPs, if a link along their path fails, by rerouting the
LSP traffic to the next hop, thus bypassing the failed link.
Configuration
Example
This example
configures a next-hop backup tunnel on an interface and specifies the
attribute-set template for the auto-tunnels. In this example, unused backup
tunnels are removed every 20 minutes using a timer, and the range of tunnel
interface numbers is also specified.
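A representative configuration sketch follows; it differs from the previous example only in that the nhop-only keyword restricts the automatically created backup tunnels to NHOP protection. The interface name, attribute-set name, and tunnel-id range are again assumptions used only for illustration.
RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# auto-tunnel backup timers removal unused 20
RP/0/RP0/CPU0:router(config-mpls-te)# auto-tunnel backup tunnel-id min 6000 max 6500
RP/0/RP0/CPU0:router(config-mpls-te)# interface HundredGigabitEthernet0/0/0/3
RP/0/RP0/CPU0:router(config-mpls-te-if)# auto-tunnel backup
/* Create only next-hop (NHOP) backup tunnels on this interface */
RP/0/RP0/CPU0:router(config-mpls-te-if-auto-backup)# nhop-only
RP/0/RP0/CPU0:router(config-mpls-te-if-auto-backup)# attribute-set ab
RP/0/RP0/CPU0:router(config-mpls-te-if-auto-backup)# commit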
Shared Risk Link
Groups (SRLGs) in MPLS traffic engineering refer to situations in which links in
a network share a common resource. These links have a shared risk: when one link
fails, other links in the group might fail too.
OSPF and IS-IS flood
the SRLG value information (including other TE link attributes such as
bandwidth availability and affinity) using a sub-type length value (sub-TLV),
so that all routers in the network have the SRLG information for each link.
The MPLS-TE SRLG feature
enhances backup tunnel path selection: when creating backup tunnels, it avoids
links that are in the same SRLG as the interfaces being protected.
Configuration
Example
This example creates
a backup tunnel and excludes the protected node IP address from the explicit
path.
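A representative configuration sketch follows; the explicit-path name, the excluded address 192.168.92.2 (the protected node), the tunnel destination, and the protected interface are assumptions used only for illustration.
RP/0/RP0/CPU0:router# configure
/* Explicit path that excludes the protected node address */
RP/0/RP0/CPU0:router(config)# explicit-path name backup-srlg
RP/0/RP0/CPU0:router(config-expl-path)# index 10 exclude-address ipv4 unicast 192.168.92.2
RP/0/RP0/CPU0:router(config-expl-path)# exit
/* Backup tunnel that uses the explicit path */
RP/0/RP0/CPU0:router(config)# interface tunnel-te 2
RP/0/RP0/CPU0:router(config-if)# ipv4 unnumbered Loopback0
RP/0/RP0/CPU0:router(config-if)# destination 192.168.92.125
RP/0/RP0/CPU0:router(config-if)# path-option 1 explicit name backup-srlg
RP/0/RP0/CPU0:router(config-if)# exit
/* Assign the backup tunnel to the protected interface */
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# interface HundredGigabitEthernet0/0/0/3
RP/0/RP0/CPU0:router(config-mpls-te-if)# backup-path tunnel-te 2
RP/0/RP0/CPU0:router(config-mpls-te-if)# commit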
MPLS-TE Flexible
Name-based Tunnel Constraints provides a simplified and more flexible means of
configuring link attributes and path affinities to compute paths for the
MPLS-TE tunnels.
In traditional TE,
links are configured with attribute-flags that are flooded with TE link-state
parameters using Interior Gateway Protocols (IGPs), such as Open Shortest Path
First (OSPF).
MPLS-TE Flexible
Name-based Tunnel Constraints lets you assign, or map, up to 32 color names for
affinity and attribute-flag attributes instead of 32-bit hexadecimal numbers.
After mappings are defined, the attributes can be referred to by the
corresponding color name.
Configuration
Example
This example shows
how to associate a tunnel with affinity constraints.
RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# affinity-map red 1
RP/0/RP0/CPU0:router(config-mpls-te)# interface HundredGigabitEthernet0/0/1/0
RP/0/RP0/CPU0:router(config-mpls-te-if)# attribute-names red
RP/0/RP0/CPU0:router(config)# interface tunnel-te 2
RP/0/RP0/CPU0:router(config-if)# affinity include red
RP/0/RP0/CPU0:router(config)# commit
Configuring
Automatic Bandwidth
Automatic bandwidth
allows you to dynamically adjust bandwidth reservation based on measured
traffic. MPLS-TE automatic bandwidth monitors the traffic rate on a tunnel
interface and resizes the bandwidth on the tunnel interface to align it closely
with the traffic in the tunnel. MPLS-TE automatic bandwidth is configured on
individual Label Switched Paths (LSPs) at every headend router.
The following table specifies the parameters that can be configured as part of automatic bandwidth configuration.
Table 1. Automatic Bandwidth Parameters
Application frequency: Configures how often the tunnel bandwidth is adjusted for each tunnel. The default value is 24 hours.
Bandwidth limit: Configures the minimum and maximum automatic bandwidth to set on a tunnel.
Bandwidth collection frequency: Enables bandwidth collection without adjusting the automatic bandwidth. The default value is 5 minutes.
Overflow threshold: Configures tunnel overflow detection.
Adjustment threshold: Configures the tunnel-bandwidth change threshold that triggers an adjustment.
Configuration
Example
This example enables automatic bandwidth on an MPLS-TE tunnel interface and configures the following automatic bandwidth variables:
Application frequency
Bandwidth limit
Adjustment threshold
Overflow detection
RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# auto-bw
RP/0/RP0/CPU0:router(config-if-tunte-autobw)# application 1000
RP/0/RP0/CPU0:router(config-if-tunte-autobw)# bw-limit min 30 max 1000
RP/0/RP0/CPU0:router(config-if-tunte-autobw)# adjustment-threshold 50 min 800
RP/0/RP0/CPU0:router(config-if-tunte-autobw)# overflow threshold 100 limit 1
RP/0/RP0/CPU0:router(config)# commit
Verification
Verify the automatic
bandwidth configuration using the
show mpls
traffic-eng tunnels auto-bw brief command.
RP/0/RP0/CPU0:router# show mpls traffic-eng tunnels auto-bw brief
Tunnel LSP Last appl Requested Signalled Highest Application
Name ID BW(kbps) BW(kbps) BW(kbps) BW(kbps) Time Left
-------------- ------ ---------- ---------- ---------- ---------- --------------
tunnel-te1 5 500 300 420 1h 10m
The MPLS-TE
auto-tunnel mesh (auto-mesh) feature allows you to set up a full mesh of TE
point-to-point (P2P) tunnels automatically with a minimal set of MPLS traffic
engineering configurations. You can configure one or more mesh-groups, and each
mesh-group requires a destination-list (IPv4 prefix-list) listing the
destinations for which tunnels are created for that mesh-group.
You can configure
MPLS-TE auto-mesh type attribute-sets (templates) and associate them to
mesh-groups. Label Switching Routers (LSRs) can create tunnels using the tunnel
properties defined in this attribute-set.
Auto-tunnel mesh
configuration minimizes the initial configuration of the network. You can
configure a tunnel properties template and mesh-groups or destination-lists on
the TE LSRs, which then create a full mesh of TE tunnels between those LSRs. This
eliminates the need to reconfigure each existing TE LSR in order to establish a
full mesh of TE tunnels whenever a new TE LSR is added to the network.
Configuration
Example
This example
configures an auto-tunnel mesh group and specifies the attributes for the
tunnels in the mesh-group.
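A representative configuration sketch follows, aligned with the verification output shown below; the mesh-group number 10, attribute-set 10, destination-list dl-65, prefix-list entries, and tunnel-id range 1000-2000 are assumptions used only for illustration.
RP/0/RP0/CPU0:router# configure
/* Destination-list (IPv4 prefix-list) containing the mesh destinations */
RP/0/RP0/CPU0:router(config)# ipv4 prefix-list dl-65
RP/0/RP0/CPU0:router(config-ipv4_pfx)# 10 permit 192.168.0.2/32
RP/0/RP0/CPU0:router(config-ipv4_pfx)# 20 permit 192.168.0.3/32
RP/0/RP0/CPU0:router(config-ipv4_pfx)# 30 permit 192.168.0.4/32
RP/0/RP0/CPU0:router(config-ipv4_pfx)# exit
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
/* Tunnel properties template for the auto-mesh tunnels */
RP/0/RP0/CPU0:router(config-mpls-te)# attribute-set auto-mesh 10
RP/0/RP0/CPU0:router(config-te-attribute-set)# autoroute announce
RP/0/RP0/CPU0:router(config-te-attribute-set)# exit
RP/0/RP0/CPU0:router(config-mpls-te)# auto-tunnel mesh
RP/0/RP0/CPU0:router(config-te-auto-mesh)# tunnel-id min 1000 max 2000
RP/0/RP0/CPU0:router(config-te-auto-mesh)# group 10
RP/0/RP0/CPU0:router(config-te-mesh-group)# attribute-set 10
RP/0/RP0/CPU0:router(config-te-mesh-group)# destination-list dl-65
RP/0/RP0/CPU0:router(config-te-mesh-group)# commit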
Verify the
auto-tunnel mesh configuration using the
show mpls
traffic-eng auto-tunnel mesh command.
RP/0/RP0/CPU0:router# show mpls traffic-eng auto-tunnel mesh
Auto-tunnel Mesh Global Configuration:
Unused removal timeout: 1h 0m 0s
Configured tunnel number range: 1000-2000
Auto-tunnel Mesh Groups Summary:
Mesh Groups count: 1
Mesh Groups Destinations count: 3
Mesh Groups Tunnels count:
3 created, 3 up, 0 down, 0 FRR enabled
Mesh Group: 10 (3 Destinations)
Status: Enabled
Attribute-set: 10
Destination-list: dl-65 (Not a prefix-list)
Recreate timer: Not running
Destination Tunnel ID State Unused timer
---------------- ----------- ------- ------------
192.168.0.2 1000 up Not running
192.168.0.3 1001 up Not running
192.168.0.4 1002 up Not running
Displayed 3 tunnels, 3 up, 0 down, 0 FRR enabled
Auto-mesh Cumulative Counters:
Last cleared: Wed Oct 3 12:56:37 2015 (02:39:07 ago)
Total
Created: 3
Connected: 0
Removed (unused): 0
Removed (in use): 0
Range exceeded: 0
Configuring MPLS
Traffic Engineering Interarea Tunneling
The MPLS TE Interarea
Tunneling feature allows you to establish MPLS TE tunnels that span multiple
Interior Gateway Protocol (IGP) areas and levels. This feature removes the
restriction that required the tunnel headend and tailend routers both to be in
the same area. The IGP can be either Intermediate System-to-Intermediate System
(IS-IS) or Open Shortest Path First (OSPF). To configure an inter-area tunnel,
you specify on the headend router a loosely routed explicit path for the tunnel
label switched path (LSP) that identifies each area border router (ABR) the LSP
should traverse using the next-address loose command. The headend router and
the ABRs along the specified explicit path expand the loose hops, each
computing the path segment to the next ABR or tunnel destination.
Configuration
Example
This example
configures an IPv4 explicit path with the ABR configured as a loose address on
the headend router.
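A representative configuration sketch follows; the explicit-path name, the ABR addresses given as loose hops, and the tunnel destination are assumptions used only for illustration.
RP/0/RP0/CPU0:router# configure
/* Loosely routed explicit path listing the ABRs as loose hops */
RP/0/RP0/CPU0:router(config)# explicit-path name interarea-path
RP/0/RP0/CPU0:router(config-expl-path)# index 10 next-address loose ipv4 unicast 172.16.255.129
RP/0/RP0/CPU0:router(config-expl-path)# index 20 next-address loose ipv4 unicast 172.16.255.131
RP/0/RP0/CPU0:router(config-expl-path)# exit
RP/0/RP0/CPU0:router(config)# interface tunnel-te 10
RP/0/RP0/CPU0:router(config-if)# ipv4 unnumbered Loopback0
RP/0/RP0/CPU0:router(config-if)# destination 192.168.92.125
RP/0/RP0/CPU0:router(config-if)# path-option 1 explicit name interarea-path
RP/0/RP0/CPU0:router(config-if)# commit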
Configuring PBTS is
a process of directing incoming traffic into specific TE tunnels based on a
classification criterion such as DSCP. Traffic forwarding decisions are made based
on the categorized traffic classes and the destination network addresses. The
following section lists the steps to configure PBTS on an MPLS-TE tunnel
network:
Define a class-map based on a classification criterion.
Define a policy-map by
creating rules for the classified traffic.
Associate a forward-class to
each type of ingress traffic.
Enable PBTS on the ingress
interface, by applying this service-policy.
Create one or more egress
MPLS-TE Tunnels (to carry packets based on priority) to the destination.
Associate the egress MPLS-TE
Tunnel to a forward-class.
For more information on PBTS, see
Policy-Based Tunnel Selection
in the
Implementing MPLS Traffic Engineering chapter.
Configuration
Example
The following
section illustrates PBTS implementation:
RP/0/RP0/CPU0:router# configure
/* Class-map; classification using DSCP */
RP/0/RP0/CPU0:router(config)# class-map match-any AF41-Class
RP/0/RP0/CPU0:router(config-cmap)# match dscp AF41
RP/0/RP0/CPU0:router(config-cmap)# exit
/* Policy-map */
RP/0/RP0/CPU0:router(config)# policy-map INGRESS-POLICY
RP/0/RP0/CPU0:router(config-pmap)# class AF41-Class
/* Associating forward class */
RP/0/RP0/CPU0:router(config-pmap-c)# set forward-class 1
RP/0/RP0/CPU0:router(config-pmap-c)# exit
RP/0/RP0/CPU0:router(config-pmap)# exit
RP/0/RP0/CPU0:router(config)# interface GigabitEthernet0/0/0/1
/* Applying service-policy to ingress interface */
RP/0/RP0/CPU0:router(config-if)# service-policy input INGRESS-POLICY
RP/0/RP0/CPU0:router(config-if)# ipv4 address 10.1.1.1 255.255.255.0
RP/0/RP0/CPU0:router(config-if)# exit
/* Creating TE-tunnels to carry traffic based on priority */
RP/0/RP0/CPU0:router(config)# interface tunnel-te61
RP/0/RP0/CPU0:router(config-if)# ipv4 unnumbered Loopback0
RP/0/RP0/CPU0:router(config-if)# signalled-bandwidth 1000
RP/0/RP0/CPU0:router(config-if)# autoroute announce
RP/0/RP0/CPU0:router(config-if)# destination 10.20.20.1
RP/0/RP0/CPU0:router(config-if)# record route
/* Associating egress TE tunnels to forward class */
RP/0/RP0/CPU0:router(config-if)# forward-class 1
RP/0/RP0/CPU0:router(config-if)# path-option 1 explicit identifier 61
RP/0/RP0/CPU0:router(config-if)# exit
Verification
Use the
show mpls forwarding
tunnels command to verify the PBTS configuration.
LDP and RSVP-TE are signaling protocols used for establishing LSPs in MPLS networks. While LDP is easy to configure and reliable,
it lacks the traffic engineering capabilities of RSVP-TE that help avoid traffic congestion. The LDP over MPLS-TE feature combines
the benefits of both LDP and RSVP. In LDP over MPLS-TE, an LDP-signalled label switched path (LSP) runs through a TE tunnel
established using RSVP-TE.
The following diagram explains a use case for LDP over MPLS-TE. In this diagram, LDP is used as the signalling protocol between
provider edge (PE) router and provider (P) router. RSVP-TE is used as the signalling protocol between the P routers to establish
an LSP. LDP is tunneled over the RSVP-TE LSP.
Restrictions and Guidelines for LDP over MPLS-TE
The following restrictions and guidelines apply for this feature in Cisco IOS-XR release 6.3.2:
MPLS services over LDP over MPLS-TE are supported when BGP neighbors are on the head or tail node of the TE tunnel.
MPLS services over LDP over MPLS-TE are supported when the TE headend router is acting as transit point for that service.
If MPLS services are originating from the TE headend, but the TE tunnel is ending before the BGP peer, LDP over MPLS-TE feature
is not supported.
If LDP optimization is enabled using the hw-module fib mpls ldp lsr-optimized command, the following restrictions apply:
EVPN is not supported.
For any prefix or label, all outgoing paths have to be LDP enabled.
Configuration Example
This example shows how to configure an MPLS-TE tunnel from provider router P1 to P2 and then enable LDP over MPLS-TE. In this
example, the destination of the tunnel from P1 is configured as the loopback address of P2.
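A representative configuration sketch on P1 follows; the tunnel number, the P2 loopback address 10.255.0.2, and the P1 router ID 10.255.0.1 are assumptions used only for illustration.
RP/0/RP0/CPU0:router# configure
/* MPLS-TE tunnel from P1 toward the loopback of P2 */
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# ipv4 unnumbered Loopback0
RP/0/RP0/CPU0:router(config-if)# destination 10.255.0.2
RP/0/RP0/CPU0:router(config-if)# autoroute announce
RP/0/RP0/CPU0:router(config-if)# path-option 1 dynamic
RP/0/RP0/CPU0:router(config-if)# exit
/* Enable LDP over the TE tunnel */
RP/0/RP0/CPU0:router(config)# mpls ldp
RP/0/RP0/CPU0:router(config-ldp)# router-id 10.255.0.1
RP/0/RP0/CPU0:router(config-ldp)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-ldp-if)# commit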
Path protection provides an end-to-end failure recovery mechanism for MPLS-TE tunnels. A secondary Label Switched Path (LSP)
is established, in advance, to provide failure protection for the protected LSP that is carrying a tunnel's TE traffic. When
there is a failure on the protected LSP, the source router immediately enables the secondary LSP to temporarily carry the
tunnel's traffic. Failover is triggered by an RSVP error message sent to the LSP headend. Once the headend receives this
error message, it switches over to the secondary tunnel. If there is a failure on the secondary LSP, the tunnel no longer
has path protection until the failure along the secondary path is cleared. Path protection can be used within a single area
(OSPF or IS-IS), external BGP [eBGP], and static routes. Both the explicit and dynamic path-options are supported for the
MPLS-TE path protection feature. You should make sure that the same attributes or bandwidth requirements are configured on
the protected option.
Before You Begin
The following prerequisites are required for enabling path protection.
You should ensure that your network supports MPLS-TE, Cisco Express Forwarding, and Intermediate System-to-Intermediate System
(IS-IS) or Open Shortest Path First (OSPF).
You should configure MPLS-TE on the routers.
Configuration Example
This example shows how to configure path protection for an MPLS-TE tunnel. The primary path-option must be present to
configure path protection. In this configuration, R1 is the headend router and R3 is the tailend router for the tunnel, while
R2 and R4 are mid-point routers. In this example, six explicit paths and one dynamic path are created for path protection. You
can have up to eight path-protection options for a primary path.
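A representative configuration sketch on the headend router R1 follows, consistent with the verification output shown below; the explicit paths named r1-r2-r3-00 and so on are assumed to be defined separately with explicit-path commands, and the tunnel number and destination address are taken from that output.
RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# interface tunnel-te 0
RP/0/RP0/CPU0:router(config-if)# ipv4 unnumbered Loopback0
RP/0/RP0/CPU0:router(config-if)# destination 192.168.92.125
RP/0/RP0/CPU0:router(config-if)# autoroute announce
/* Enable path protection for this tunnel */
RP/0/RP0/CPU0:router(config-if)# path-protection
/* Each path-option is protected by the next one in the list */
RP/0/RP0/CPU0:router(config-if)# path-option 1 explicit name r1-r2-r3-00 protected-by 2
RP/0/RP0/CPU0:router(config-if)# path-option 2 explicit name r1-r2-r3-01 protected-by 3
RP/0/RP0/CPU0:router(config-if)# path-option 3 explicit name r1-r4-r3-01 protected-by 4
RP/0/RP0/CPU0:router(config-if)# path-option 4 explicit name r1-r3-00 protected-by 5
RP/0/RP0/CPU0:router(config-if)# path-option 5 explicit name r1-r2-r4-r3-00 protected-by 6
RP/0/RP0/CPU0:router(config-if)# path-option 6 explicit name r1-r4-r2-r3-00 protected-by 7
RP/0/RP0/CPU0:router(config-if)# path-option 7 dynamic
RP/0/RP0/CPU0:router(config-if)# commit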
Use the show mpls traffic-eng tunnels command to verify the MPLS-TE path protection configuration.
RP/0/RP0/CPU0:router# show mpls traffic-eng tunnels 0
Fri Oct 13 16:24:39.379 UTC
Name: tunnel-te0 Destination: 192.168.92.125 Ifhandle:0x8007d34
Signalled-Name: router
Status:
Admin: up Oper: up Path: valid Signalling: connected
path option 1, type explicit r1-r2-r3-00 (Basis for Setup, path weight 2)
Protected-by PO index: 2
path option 2, type explicit r1-r2-r3-01 (Basis for Standby, path weight 2)
Protected-by PO index: 3
path option 3, type explicit r1-r4-r3-01
Protected-by PO index: 4
path option 4, type explicit r1-r3-00
Protected-by PO index: 5
path option 5, type explicit r1-r2-r4-r3-00
Protected-by PO index: 6
path option 6, type explicit r1-r4-r2-r3-00
Protected-by PO index: 7
path option 7, type dynamic
G-PID: 0x0800 (derived from egress interface properties)
Bandwidth Requested: 0 kbps CT0
Creation Time: Fri Oct 13 15:05:28 2017 (01:19:11 ago)
Config Parameters:
Bandwidth: 0 kbps (CT0) Priority: 7 7 Affinity: 0x0/0xffff
Metric Type: TE (global)
Path Selection:
Tiebreaker: Min-fill (default)
Hop-limit: disabled
Cost-limit: disabled
Delay-limit: disabled
Path-invalidation timeout: 10000 msec (default), Action: Tear (default)
AutoRoute: enabled LockDown: disabled Policy class: not set
Forward class: 0 (not enabled)
Forwarding-Adjacency: disabled
Autoroute Destinations: 0
Loadshare: 0 equal loadshares
Auto-bw: disabled
Fast Reroute: Disabled, Protection Desired: None
Path Protection: Enabled
BFD Fast Detection: Disabled
Reoptimization after affinity failure: Enabled
Soft Preemption: Disabled
History:
Tunnel has been up for: 01:14:13 (since Fri Oct 13 15:10:26 UTC 2017)
Current LSP:
Uptime: 01:14:13 (since Fri Oct 13 15:10:26 UTC 2017)
Reopt. LSP:
Last Failure:
LSP not signalled, identical to the [CURRENT] LSP
Date/Time: Fri Oct 13 15:08:41 UTC 2017 [01:15:58 ago]
Standby Reopt LSP:
Last Failure:
LSP not signalled, identical to the [STANDBY] LSP
Date/Time: Fri Oct 13 15:08:41 UTC 2017 [01:15:58 ago]
First Destination Failed: 192.3.3.3
Prior LSP:
ID: 8 Path Option: 1
Removal Trigger: path protection switchover
Standby LSP:
Uptime: 01:13:56 (since Fri Oct 13 15:10:43 UTC 2017)
Path info (OSPF 1 area 0):
Node hop count: 2
Hop0: 192.168.1.2
Hop1: 192.168.3.1
Hop2: 192.168.3.2
Hop3: 192.168.3.3
Standby LSP Path info (OSPF 1 area 0), Oper State: Up :
Node hop count: 2
Hop0: 192.168.2.2
Hop1: 192.168.3.1
Hop2: 192.168.3.2
Hop3: 192.168.3.3
Displayed 1 (of 4001) heads, 0 (of 0) midpoints, 0 (of 0) tails
Displayed 1 up, 0 down, 0 recovering, 0 recovered heads
MPLS-TE Features -
Details
MPLS TE Fast
Reroute Link and Node Protection
Fast Reroute (FRR)
is a mechanism for protecting MPLS TE LSPs from link and node failures by
locally repairing the LSPs at the point of failure, allowing data to continue
to flow on them while their headend routers try to establish new end-to-end
LSPs to replace them. FRR locally repairs the protected LSPs by rerouting them
over backup tunnels that bypass failed links or nodes.
Backup tunnels that
bypass only a single link of the LSP’s path provide link protection. They
protect LSPs if a link along their path fails by rerouting the LSP’s traffic to
the next hop (bypassing the failed link). These tunnels are referred to as
next-hop (NHOP) backup tunnels because they terminate at the LSP’s next hop
beyond the point of failure.
The following figure
illustrates link protection.
FRR provides node
protection for LSPs. Backup tunnels that bypass next-hop nodes along LSP paths
are called next-next-hop (NNHOP) backup tunnels because they terminate at the
node following the next-hop node of the LSP paths, bypassing the next-hop node.
They protect LSPs if a node along their path fails by enabling the node
upstream of the failure to reroute the LSPs and their traffic around the failed
node to the next-next hop. NNHOP backup tunnels also provide protection from
link failures, because they bypass the failed link and the node.
The following figure
illustrates node protection.
MPLS-TE
Forwarding Adjacency
MPLS TE forwarding
adjacency allows you to handle a TE label-switched path (LSP) tunnel as a link
in an Interior Gateway Protocol (IGP) network that is based on the Shortest
Path First (SPF) algorithm. Both Intermediate System-to-Intermediate System
(IS-IS) and Open Shortest Path First (OSPF) are supported as the IGP. A
forwarding adjacency can be created between routers regardless of their
location in the network. The routers can be located multiple hops from each
other.
As a result, a TE
tunnel is advertised as a link in an IGP network with the tunnel's cost
associated with it. Routers outside of the TE domain see the TE tunnel and use
it to compute the shortest path for routing traffic throughout the network. TE
tunnel interfaces are advertised in the IGP network just like any other links.
Routers can then use these advertisements in their IGPs to compute the SPF even
if they are not the headend of any TE tunnels.
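As a hedged illustration of how forwarding adjacency is enabled on a tunnel interface, a minimal sketch is shown below; the tunnel number is an assumption, and the optional holdtime parameter is omitted.
RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
/* Advertise this TE tunnel as a link in the IGP */
RP/0/RP0/CPU0:router(config-if)# forwarding-adjacency
RP/0/RP0/CPU0:router(config-if)# commit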
Automatic
Bandwidth
Automatic bandwidth
allows you to dynamically adjust bandwidth reservation based on measured
traffic. MPLS-TE automatic bandwidth is configured on individual Label Switched
Paths (LSPs) at every headend router. MPLS-TE automatic bandwidth monitors the
traffic rate on a tunnel interface and resizes the bandwidth on the tunnel
interface to align it closely with the traffic in the tunnel.
MPLS-TE automatic
bandwidth can perform these functions:
Monitors the periodic polling of the tunnel output rate
Resizes the tunnel bandwidth by adjusting it to the highest rate observed during a given period.
For every
traffic-engineered tunnel that is configured for an automatic bandwidth, the
average output rate is sampled, based on various configurable parameters. Then,
the tunnel bandwidth is readjusted automatically based on either the largest
average output rate that was noticed during a certain interval, or a configured
maximum bandwidth value.
While re-optimizing
the LSP with the new bandwidth, a new path request is generated. If the new
bandwidth is not available, the last good LSP remains used. This way, the
network experiences no traffic interruptions. If minimum or maximum bandwidth
values are configured for a tunnel, the bandwidth, which the automatic
bandwidth signals, stays within these values.
The output rate on a
tunnel is collected at regular intervals that are configured by using the
application command in MPLS-TE auto bandwidth
interface configuration mode. When the application period timer expires, and
when the difference between the measured and the current bandwidth exceeds the
adjustment threshold, the tunnel is re-optimized. Then, the bandwidth samples
are cleared to record the new largest output rate at the next interval. If a
tunnel is shut down and is later brought up again, the adjusted bandwidth is
lost, and the tunnel comes back up with the initially configured bandwidth.
When the tunnel comes back up, the application period is reset.
MPLS Traffic
Engineering Interarea Tunneling
The MPLS-TE
interarea tunneling feature allows you to establish TE tunnels spanning
multiple Interior Gateway Protocol (IGP) areas and levels, thus eliminating the
requirement that headend and tailend routers reside in a single area.
Interarea support
allows the configuration of a TE LSP that spans multiple areas, where its
headend and tailend label switched routers (LSRs) reside in different IGP
areas. Customers running multiple IGP area backbones (primarily for scalability
reasons) require multiarea and interarea TE. This lets you limit the amount
of flooded information, reduces the SPF duration, and lessens the impact of a
link or node failure within an area, particularly with large WAN backbones
split into multiple areas.
The following
figure shows a typical interarea TE network using OSPF.
The following
figure shows a typical interlevel (IS-IS) TE Network.
As shown in
Figure 4,
R2, R3, R7, and R4 maintain two databases for routing and TE information. For
example, R3 has TE topology information related to R2, flooded through Level-1
IS-IS LSPs plus the TE topology information related to R4, R9, and R7, flooded
as Level 2 IS-IS Link State PDUs (LSPs) (plus, its own IS-IS LSP).
Loose hop
optimization allows the reoptimization of tunnels spanning multiple areas and
solves the problem that occurs when an MPLS-TE LSP traverses hops that are not
in the LSP headend's OSPF area or IS-IS level.
to configure an interarea traffic engineering (TE) label switched path (LSP) by
specifying a loose source route of ABRs along the path. Then it is the
responsibility of the ABR (having a complete view of both areas) to find a path
obeying the TE LSP constraints within the next area to reach the next hop ABR
(as specified on the headend router). The same operation is performed by the
last ABR connected to the tailend area to reach the tailend LSR.
You must be aware
of these considerations when using loose hop optimization:
You must specify the router
ID of the ABR node (as opposed to a link address on the ABR).
When multiarea is deployed
in a network that contains subareas, you must enable MPLS-TE in the subarea for
TE to find a path when loose hop is specified.
You must specify the
reachable explicit path for the interarea tunnel.
Policy-Based Tunnel
Selection
Policy-Based Tunnel
Selection (PBTS) is a mechanism that lets you direct traffic into specific TE
tunnels based on different classification criteria. PBTS benefits Internet
service providers (ISPs) that carry voice and data traffic through their MPLS
and MPLS/VPN networks and need to route this traffic so as to provide optimized
voice service.
PBTS works by
selecting tunnels based on the classification criteria of the incoming packets,
which are based on the IP precedence or differentiated services code point
(DSCP), or the Type of Service (ToS) fields in the packets. The traffic
forwarding decisions are made based on the traffic classes AND the destination
network addresses instead of only considering the destination network.
The default-class
configured for paths is always zero (0). If there is no TE tunnel for a given
forward-class, then the default-class (0) is tried. If there is no
default-class, then the packet is tried against the lowest configured
forward-class tunnels. PBTS supports up to seven (EXP 1 - 7) EXP values
associated with a single TE tunnel.
The following figure
illustrates PBTS Network Topology:
Tunnels are created between
Ingress and Egress nodes through LSR 1-2 and LSR 1-3-4-2 paths.
High priority
traffic takes the path: Ingress->LSR1->LSR2->Egress.
Low priority
traffic takes the path: Ingress->LSR1->LSR3->LSR4->LSR2->Egress
PBTS Function
Details
The following PBTS functions are supported on the
Cisco NCS 540 Series Routers:
Classify the Ingress traffic
into different classes by creating rules using PBR configuration.
Classify packets using
DSCP/IP precedence for both IPv4 and IPv6 traffic.
After classification, set the
desired forward-class to each type of Ingress traffic.
Define one or many MPLS-TE
tunnels in the destination using Tunnel configuration.
Associate the MPLS-TE tunnels
to a specific forward-class under Tunnel configuration.
Enable PBTS on the Ingress
interface by applying the service policy that uses the configured
classification rules.
The following list
gives PBTS support information:
PBTS is supported only on
incoming IPv4/IPv6 traffic.
A maximum of eight
forward-classes per destination prefix is supported.
A maximum of 64 TE-tunnels
within each forward class is supported.
A maximum of 64 TE-tunnels
can be configured on a given destination.
Incoming labeled traffic is
not supported.
PBTS with L2VPN/L3VPN traffic
is not supported.
PBTS Forward
Class
A class-map is
defined for various types of packets and these class-maps are associated with a
forward-class. A class-map defines the matching criteria for classifying a
particular type of traffic and a forward-class defines the forwarding path
these packets should take.
After a class-map is
associated with a forwarding-class in the policy map, all the packets that
match the class-map are forwarded as defined in the policy-map. The egress
traffic engineering (TE) tunnel interfaces that the packets should take for
each forwarding-class are specified by associating the TE interface explicitly
(or implicitly, in the case of the default value) with the forward-class.
When the TE
interfaces are associated with the forward-class, they can be exported to the
routing protocol module using the
auto-route
command. This will then associate the route in the FIB database with these
tunnels. If the TE interface is not explicitly associated with a forward-class,
it gets associated with a default-class (0). All non-TE interfaces will be
routed to the forwarding plane (with forward-class set to default-class) by the
routing protocol.