Per-Tunnel QoS for DMVPN
Last Updated: September 30, 2012
The Per-Tunnel QoS for DMVPN feature introduces per-tunnel quality of service (QoS) support for Dynamic Multipoint VPN (DMVPN) and increases per-tunnel QoS performance for Internet Protocol Security (IPsec) tunnel interfaces. This feature allows you to apply a QoS policy on a DMVPN hub on a tunnel instance (per-endpoint or per-spoke basis) in the egress direction for DMVPN hub-to-spoke tunnels. The QoS policy on a DMVPN hub on a tunnel instance allows you to shape the tunnel traffic to individual spokes (parent policy) and to differentiate individual data flows going through the tunnel for policing (child policy). The QoS policy that is used by the hub for a particular endpoint or spoke is selected by the Next Hop Resolution Protocol (NHRP) group in which the spoke is configured. Even though many spokes may be configured in the same NHRP group, the tunnel traffic of each spoke is measured individually for shaping and policing.
Finding Feature Information

Your software release may not support all the features documented in this module. For the latest caveats and feature information, see Bug Search Tool and the release notes for your platform and software release. To find information about the features documented in this module, and to see a list of the releases in which each feature is supported, see the feature information table at the end of this module. Use Cisco Feature Navigator to find information about platform support and Cisco software image support. To access Cisco Feature Navigator, go to www.cisco.com/go/cfn. An account on Cisco.com is not required.

Prerequisites for per-Tunnel QoS for DMVPN
Restrictions for per-Tunnel QoS for DMVPN
Information About per-Tunnel QoS for DMVPN
Per-Tunnel QoS for DMVPN Overview

The Per-Tunnel QoS for DMVPN feature lets you apply a QoS policy on a DMVPN hub on a per-tunnel instance (per-spoke basis) in the egress direction for DMVPN hub-to-spoke tunnels. The QoS policy on a DMVPN hub on a per-tunnel instance lets you shape tunnel traffic to individual spokes (a parent policy) and differentiate individual data flows going through the tunnel for policing (a child policy). The QoS policy that the hub uses for a specific spoke is selected according to the NHRP group into which that spoke is configured. Although you can configure many spokes into the same NHRP group, the tunnel traffic for each spoke is measured individually for shaping and policing. You can use this feature with DMVPN with or without Internet Protocol Security (IPsec). When the Per-Tunnel QoS for DMVPN feature is enabled, queueing and shaping are performed at the outbound physical interface for generic routing encapsulation (GRE)/IPsec tunnel packets. The feature ensures that the GRE header, the IPsec header, and the Layer 2 header (for the physical interface) are included in the packet-size calculations for shaping and bandwidth queueing of packets under QoS.

Benefits of per-Tunnel QoS for DMVPN

Before the introduction of the Per-Tunnel QoS for DMVPN feature, QoS on a DMVPN hub could be configured to measure outbound traffic either only in the aggregate (across all spokes) or on a per-spoke basis (with extensive manual configuration). The Per-Tunnel QoS for DMVPN feature provides the following benefits:
NHRP QoS Provisioning for DMVPN

NHRP performs the provisioning for the Per-Tunnel QoS for DMVPN feature by using NHRP groups. An NHRP group, a new functionality introduced by this feature, is the group identity information signaled by a DMVPN node (a spoke) to the DMVPN hub. The hub uses this information to select a locally defined QoS policy instance for the remote node. You configure an NHRP group on the spoke router on the DMVPN GRE tunnel interface. The NHRP group name is communicated to the hub in each of the periodic NHRP registration requests sent from the spoke to the hub. NHRP group-to-QoS policy mappings are configured on the hub DMVPN GRE tunnel interface. The NHRP group string received from a spoke is mapped to a QoS policy, which is applied to that hub-to-spoke tunnel in the egress direction.

After an NHRP group is configured on a spoke, the group is not sent to the hub immediately; it is sent in the next periodic registration request. A spoke can belong to only one NHRP group per GRE tunnel interface. If a spoke is configured as part of two or more DMVPN networks (multiple GRE tunnel interfaces), the spoke can have a different NHRP group name on each of the GRE tunnel interfaces.

If an NHRP group is not received from the spoke, no QoS policy is applied to the spoke, and any existing QoS policy applied to that spoke is removed. If an NHRP group is received from the spoke when previous NHRP registrations did not include one, the corresponding QoS policy is applied. If the NHRP group received from a spoke is the same as in the previous NHRP registration request, no action is taken because a QoS policy is already applied for that spoke. If the NHRP group received from the spoke differs from the one in the previous NHRP registration request, any applied QoS policy is removed, and the QoS policy corresponding to the new NHRP group is applied.
How to Configure per-Tunnel QoS for DMVPN

To configure the Per-Tunnel QoS for DMVPN feature, you define an NHRP group on the spokes and then map the NHRP group to a QoS policy on the hub.
Configuring an NHRP Group on a Spoke

SUMMARY STEPS
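The spoke-side command sequence can be sketched as follows; the tunnel number and the group name (spoke_group1) are illustrative and assume an already-working DMVPN spoke tunnel:

```
enable
configure terminal
interface tunnel 1
 ip nhrp group spoke_group1
end
```

The group name is carried to the hub in the next periodic NHRP registration request, so the hub applies the corresponding QoS policy only after that registration is sent.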
DETAILED STEPS

Mapping an NHRP Group to a QoS Policy on the Hub
SUMMARY STEPS
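The hub-side command sequence can be sketched as follows; it assumes a hierarchical QoS policy named group1_parent has already been defined with the modular QoS CLI, and the policy and group names are illustrative:

```
enable
configure terminal
interface tunnel 1
 ip nhrp map group spoke_group1 service-policy output group1_parent
end
```

The mapping applies the named policy in the egress direction to every hub-to-spoke tunnel whose spoke registers with the matching NHRP group.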
DETAILED STEPS

Verifying per-Tunnel QoS for DMVPN
SUMMARY STEPS
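Verification can be sketched with the following privileged EXEC commands, all entered on the hub:

```
show dmvpn detail
show ip nhrp
show ip nhrp group-map
show policy-map multipoint
```

The show dmvpn detail and show ip nhrp commands report the NHRP group received from each spoke, show ip nhrp group-map lists the group-to-policy mappings and the tunnels using each policy, and show policy-map multipoint displays per-tunnel QoS statistics.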
DETAILED STEPS

Configuration Examples for per-Tunnel QoS for DMVPN
Example: Configuring an NHRP Group on a Spoke

Configuring the First Spoke

interface tunnel 1
 ip address 209.165.200.225 255.255.255.224
 no ip redirects
 ip mtu 1400
 ip nhrp authentication testing
 ip nhrp group spoke_group1
 ip nhrp map 209.165.200.226 203.0.113.1
 ip nhrp map multicast 203.0.113.1
 ip nhrp network-id 172176366
 ip nhrp holdtime 300
 ip tcp adjust-mss 1360
 ip nhrp nhs 209.165.200.226
 tunnel source fastethernet 2/1/1
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN
interface fastethernet 2/1/1
 ip address 203.0.113.2 255.255.255.0

Configuring the Second Spoke

interface tunnel 1
 ip address 209.165.200.227 255.255.255.224
 no ip redirects
 ip mtu 1400
 ip nhrp authentication testing
 ip nhrp group spoke_group1
 ip nhrp map 209.165.200.226 203.0.113.1
 ip nhrp map multicast 203.0.113.1
 ip nhrp network-id 172176366
 ip nhrp holdtime 300
 ip tcp adjust-mss 1360
 ip nhrp nhs 209.165.200.226
 tunnel source fastethernet 2/1/1
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN
interface fastethernet 2/1/1
 ip address 203.0.113.3 255.255.255.0

Configuring the Third Spoke

interface tunnel 1
 ip address 209.165.200.228 255.255.255.224
 no ip redirects
 ip mtu 1400
 ip nhrp authentication testing
 ip nhrp group spoke_group2
 ip nhrp map 209.165.200.226 203.0.113.1
 ip nhrp map multicast 203.0.113.1
 ip nhrp network-id 172176366
 ip nhrp holdtime 300
 ip tcp adjust-mss 1360
 ip nhrp nhs 209.165.200.226
 tunnel source fastethernet 2/1/1
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN
interface fastethernet 2/1/1
 ip address 203.0.113.4 255.255.255.0

Example: Mapping an NHRP Group to a QoS Policy on the Hub

The following example shows how to map NHRP groups to a QoS policy on the hub. The example shows a hierarchical QoS policy (parent: group1_parent/group2_parent; child: group1/group2) that is used to configure the Per-Tunnel QoS for DMVPN feature.
The example also shows how to map the NHRP group spoke_group1 to the QoS policy group1_parent and map the NHRP group spoke_group2 to the QoS policy group2_parent on the hub:

class-map match-all group1_Routing
 match ip precedence 6
class-map match-all group2_Routing
 match ip precedence 6
class-map match-all group2_voice
 match access-group 100
class-map match-all group1_voice
 match access-group 100
policy-map group1
 class group1_voice
  priority 1000
 class group1_Routing
  bandwidth percent 20
policy-map group1_parent
 class class-default
  shape average 3000000
  service-policy group1
policy-map group2
 class group2_voice
  priority percent 20
 class group2_Routing
  bandwidth percent 10
policy-map group2_parent
 class class-default
  shape average 2000000
  service-policy group2
interface tunnel 1
 ip address 209.165.200.225 255.255.255.224
 no ip redirects
 ip mtu 1400
 ip nhrp authentication testing
 ip nhrp map multicast dynamic
 ip nhrp map group spoke_group1 service-policy output group1_parent
 ip nhrp map group spoke_group2 service-policy output group2_parent
 ip nhrp network-id 172176366
 ip nhrp holdtime 300
 ip nhrp registration no-unique
 tunnel source fastethernet 2/1/1
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN
interface fastethernet 2/1/1
 ip address 209.165.200.226 255.255.255.224

Example: Verifying per-Tunnel QoS for DMVPN

The following example shows how to display the information about NHRP groups received from the spokes and the QoS policy that is applied to each spoke tunnel. You can enter this command on the hub.
Device# show dmvpn detail
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
N - NATed, L - Local, X - No Socket
# Ent --> Number of NHRP entries with same NBMA peer
NHS Status: E --> Expecting Replies, R --> Responding
UpDn Time --> Up or Down Time for a Tunnel
==========================================================================
Interface Tunnel1 is up/up, Addr. is 209.165.200.225, VRF ""
Tunnel Src./Dest. addr: 209.165.200.226/MGRE, Tunnel VRF ""
Protocol/Transport: "multi-GRE/IP", Protect "DMVPN"
Type:Hub, Total NBMA Peers (v4/v6): 3
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb Target Network
----- --------------- --------------- ----- -------- ----- -----------------
1 209.165.200.227 192.0.2.2 UP 00:19:20 D 192.0.2.2/32
NHRP group: spoke_group1
Output QoS service-policy applied: group1_parent
1 209.165.200.228 192.0.2.3 UP 00:19:20 D 192.0.2.3/32
NHRP group: spoke_group1
Output QoS service-policy applied: group1_parent
1 209.165.200.229 192.0.2.4 UP 00:19:23 D 192.0.2.4/32
NHRP group: spoke_group2
Output QoS service-policy applied: group2_parent
Crypto Session Details:
-----------------------------------------------------------------------------
Interface: tunnel1
Session: [0x04AC1D00]
IKE SA: local 209.165.200.226/500 remote 209.165.200.227/500 Active
Crypto Session Status: UP-ACTIVE
fvrf: (none), Phase1_id: 209.165.200.227
IPSEC FLOW: permit 47 host 209.165.200.226 host 209.165.200.227
Active SAs: 2, origin: crypto map
Outbound SPI : 0x9B264329, transform : ah-sha-hmac
Socket State: Open
Interface: tunnel1
Session: [0x04AC1C08]
IKE SA: local 209.165.200.226/500 remote 209.165.200.228/500 Active
Crypto Session Status: UP-ACTIVE
fvrf: (none), Phase1_id: 209.165.200.228
IPSEC FLOW: permit 47 host 209.165.200.226 host 209.165.200.228
Active SAs: 2, origin: crypto map
Outbound SPI : 0x36FD56E2, transform : ah-sha-hmac
Socket State: Open
Interface: tunnel1
Session: [0x04AC1B10]
IKE SA: local 209.165.200.226/500 remote 209.165.200.229/500 Active
Crypto Session Status: UP-ACTIVE
fvrf: (none), Phase1_id: 209.165.200.229
IPSEC FLOW: permit 47 host 209.165.200.226 host 209.165.200.229
Active SAs: 2, origin: crypto map
Outbound SPI : 0xAC96818F, transform : ah-sha-hmac
Socket State: Open
Pending DMVPN Sessions:
The following example shows how to display information about the NHRP groups that are received from the spokes. You can enter this command on the hub.
Device# show ip nhrp
192.0.2.240/32 via 192.0.2.240
Tunnel1 created 00:22:49, expire 00:01:40
Type: dynamic, Flags: registered
NBMA address: 209.165.200.227
Group: spoke_group1
192.0.2.241/32 via 192.0.2.241
Tunnel1 created 00:22:48, expire 00:01:41
Type: dynamic, Flags: registered
NBMA address: 209.165.200.228
Group: spoke_group1
192.0.2.242/32 via 192.0.2.242
Tunnel1 created 00:22:52, expire 00:03:27
Type: dynamic, Flags: registered
NBMA address: 209.165.200.229
Group: spoke_group2
The following example shows how to display the details of NHRP group mappings on a hub and the list of tunnels using each of the NHRP groups defined in the mappings. You can enter this command on the hub.
Device# show ip nhrp group-map
Interface: tunnel1
NHRP group: spoke_group1
QoS policy: group1_parent
Tunnels using the QoS policy:
Tunnel destination overlay/transport address
198.51.100.220/203.0.113.240
198.51.100.221/203.0.113.241
NHRP group: spoke_group2
QoS policy: group2_parent
Tunnels using the QoS policy:
Tunnel destination overlay/transport address
198.51.100.222/203.0.113.242
The following example shows how to display statistics about a specific QoS policy as it is applied to a tunnel endpoint. You can enter this command on the hub.
Device# show policy-map multipoint
Interface tunnel1 <--> 203.0.113.252
Service-policy output: group1_parent
Class-map: class-default (match-any)
29 packets, 4988 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
queue limit 750 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
shape (average) cir 3000000, bc 12000, be 12000
target shape rate 3000000
Service-policy : group1
queue stats for all priority classes:
queue limit 250 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Class-map: group1_voice (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: access-group 100
Priority: 1000 kbps, burst bytes 25000, b/w exceed drops: 0
Class-map: group1_Routing (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip precedence 6
Queueing
queue limit 150 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
bandwidth 20% (600 kbps)
Class-map: class-default (match-any)
29 packets, 4988 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
queue limit 350 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Interface tunnel1 <--> 203.0.113.253
Service-policy output: group1_parent
Class-map: class-default (match-any)
29 packets, 4988 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
queue limit 750 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
shape (average) cir 3000000, bc 12000, be 12000
target shape rate 3000000
Service-policy : group1
queue stats for all priority classes:
queue limit 250 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Class-map: group1_voice (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: access-group 100
Priority: 1000 kbps, burst bytes 25000, b/w exceed drops: 0
Class-map: group1_Routing (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip precedence 6
Queueing
queue limit 150 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
bandwidth 20% (600 kbps)
Class-map: class-default (match-any)
29 packets, 4988 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
queue limit 350 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Interface tunnel1 <--> 203.0.113.254
Service-policy output: group2_parent
Class-map: class-default (match-any)
14 packets, 2408 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
queue limit 500 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
shape (average) cir 2000000, bc 8000, be 8000
target shape rate 2000000
Service-policy : group2
queue stats for all priority classes:
queue limit 100 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Class-map: group2_voice (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: access-group 100
Priority: 20% (400 kbps), burst bytes 10000, b/w exceed drops: 0
Class-map: group2_Routing (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip precedence 6
Queueing
queue limit 50 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
bandwidth 10% (200 kbps)
Class-map: class-default (match-any)
14 packets, 2408 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
queue limit 350 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Additional References

Related Documents

Technical Assistance
Feature Information for per-Tunnel QoS for DMVPN

The following table provides release information about the feature or features described in this module. This table lists only the software release that introduced support for a given feature in a given software release train. Unless noted otherwise, subsequent releases of that software release train also support that feature. Use Cisco Feature Navigator to find information about platform support and Cisco software image support. To access Cisco Feature Navigator, go to www.cisco.com/go/cfn. An account on Cisco.com is not required.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R) Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental. © 2012 Cisco Systems, Inc. All rights reserved.