MLPoE at PTA

The Multilink PPP over Ethernet (MLPoE) at PPP Termination and Aggregation (PTA) feature allows customer premises equipment (CPE) and PTA devices to interleave high-priority, low-latency PPP-encapsulated packets between Multilink PPP fragments of lower-priority, higher-latency packets.

Finding Feature Information

Your software release may not support all the features documented in this module. For the latest caveats and feature information, see Bug Search Tool and the release notes for your platform and software release. To find information about the features documented in this module, and to see a list of the releases in which each feature is supported, see the feature information table.

Use Cisco Feature Navigator to find information about platform support and Cisco software image support. To access Cisco Feature Navigator, go to www.cisco.com/go/cfn. An account on Cisco.com is not required.

Prerequisites for MLPoE at PTA

Before configuring Multilink PPP over Ethernet (MLPoE) at PPP termination and aggregation (PTA), you must complete the following tasks:

  • Creating a Class Map

  • Creating a Policy Map

  • Defining a PPP over Ethernet Profile

  • Configuring a Virtual Template Interface

For more information, see the Configuring Multilink PPP over Broadband section.

Restrictions for MLPoE at PTA

  • In-Service Software Upgrade (ISSU) and Stateful Switchover (SSO) for Broadband MLP sessions are not supported.
  • Multilink PPP over Ethernet (MLPoE) using EtherChannel is not supported.
  • Cisco IOS XE software supports a maximum of 4000 member links using MLPoE.
  • For MLP virtual access bundles, the default Layer 3 (that is, IP and IPv6) maximum transmission unit (MTU) value is 1500. When the member links of the MLP bundle are Ethernet-like, as in MLPoEoE, MLPoEoVLAN, and MLPoEoQinQ, an MTU value of 1500 can cause problems when IP packets close to this size are sent. For example, when a 1500-byte IP packet is sent by a device over MLPoEoE, the actual packet size transmitted is 1528: 14 (Ethernet header) + 8 (PPPoE header) + 6 (MLP header) + 1500 (IP packet) = 1528. A device enforcing its maximum receive unit (MRU) might drop the incoming packet as a "giant" because it exceeds the default expected maximum packet size. The 1500-byte MTU size does not take into account any PPPoE or MLP header overhead and, hence, causes packets larger than 1492 bytes to be dropped by the peer. To address this issue, do one of the following (see the configuration sketch after this list):
    • Lower the MTU on the MLP bundle to 1492.
    • Increase the MTU on the Ethernet interface to 9216, and increase the MTU on the bundle by adjusting the MTU of the virtual template to 1508.
  • Member link session bandwidth—For MLPoE PTA variations, the bandwidth of the member link session defaults to that of the parent interface. If a bandwidth statement is added to the virtual template, the member link session uses that bandwidth instead, and this value is in turn used by MLP in the bundle member link aggregate data rate calculation (see the configuration sketch after this list).
  • If the Digital Subscriber Line Access Multiplexer (DSLAM) between the CPE and the PTA device communicates the link rate through the PPPoE dsl-sync-rate tags (Actual Data-Rate Downstream [0x82/130d] tag), this data is passed by the PTA device to the RADIUS server but is not acted upon by the ASR 1000 device; the data rate of the session remains as described in the previous bullet. This behavior is specific to PTA mode; LAC/LNS behaves differently. Use the dsl line info forwarding command on the LAC to transport the LAC access speed to the LNS.
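
The two MTU workarounds above, and the optional bandwidth override described in the member link session bandwidth bullet, might be applied as shown in the following sketch. The interface and virtual template numbers are taken from the examples later in this module and are placeholders that must be adapted to your deployment:

! Option 1: lower the bundle MTU so that a full-sized IP packet plus the
! PPPoE and MLP headers still fits in a standard 1500-byte Ethernet payload
interface virtual-template 15
 mtu 1492
!
! Option 2: raise the Ethernet MTU and give the bundle extra headroom instead
interface GigabitEthernet 0/0/0
 mtu 9216
interface virtual-template 15
 mtu 1508
!
! Optional: a bandwidth statement on the virtual template overrides the member
! link session bandwidth inherited from the parent interface (value in kb/s)
interface virtual-template 15
 bandwidth 10000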

Information About MLPoE at PTA

MLPoE at PTA Overview

Single-link PPP over Ethernet and Multilink PPP over Ethernet (MLPoE) bundles support upstream and downstream link fragmentation and interleaving (LFI). Upstream refers to the traffic from the customer premises equipment (CPE) and downstream refers to the traffic to the CPE. The receiving device (CPE for downstream and PPP termination and aggregation [PTA] for upstream) reassembles fragmented, nonpriority packets. To reduce any delay in forwarding high-priority packets, the receiving device processes high-priority PPP packets as soon as they arrive.
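
The following sketch, distilled from the complete configuration examples later in this module, shows the pieces that enable interleaving on the PTA side: a child policy with a priority class for the high-priority traffic, a shaping parent policy, and the ppp multilink interleave command on the virtual template (the bundle interface):

class-map match-all ip-prec-1
 match ip precedence 1
!
policy-map mlp-child-lfi-policy
 class ip-prec-1
  priority percent 10
!
policy-map mlp-parent-10M
 class class-default
  shape average 10000000
  service-policy mlp-child-lfi-policy
!
interface virtual-template 15
 ppp multilink
 ppp multilink interleave
 service-policy output mlp-parent-10M

With this configuration, packets that match the priority class are transmitted between MLP fragments of the shaped, lower-priority traffic instead of waiting behind them.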

The figure below shows a sample MLPoE network with LFI.

Figure 1. MLPoE DSL Network with LFI

PPP over Ethernet (PPPoE) sessions in MLPoE on a PTA device are handled as follows:

  • All supported variations of PPPoE, such as PPP over Ethernet over ATM (PPPoEoA), PPP over Ethernet over Ethernet (PPPoEoE), PPP over Ethernet over 802.1Q-in-802.1Q (PPPoEoQinQ), and PPP over Ethernet over VLAN (PPPoEoVLAN), can be used as member links for MLPoE bundles.

  • Termination of an MLPoE bundle in a virtual routing and forwarding (VRF) block is similar to terminating a PPPoE session in a VRF instance.

  • MLPoE bundles are distinguished by the username that was used to authenticate the PPPoE member link session.
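
As an illustration of the VRF and username points above, the following sketch shows a virtual template that terminates bundles in a VRF and authenticates member link sessions with CHAP so that sessions can be grouped into bundles by username. The VRF name, address pool name, and virtual template number are assumptions for illustration only and are not part of this module's examples:

! Assumed VRF and pool names for illustration
vrf definition CUSTOMER-A
 address-family ipv4
!
interface virtual-template 16
 vrf forwarding CUSTOMER-A
 ip address negotiated
 peer default ip address pool MLP-VRF-Pool
 ppp authentication chap
 ppp multilink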

How to Configure MLPoE at PTA

Configuring MLPoE at PTA

SUMMARY STEPS

  1. enable
  2. configure terminal
  3. interface type number
  4. negotiation auto
  5. pppoe enable group group-name
  6. end

DETAILED STEPS

Step 1

enable

Example:

Device> enable

Enables privileged EXEC mode.

  • Enter your password if prompted.

Step 2

configure terminal

Example:

Device# configure terminal

Enters global configuration mode.

Step 3

interface type number

Example:

Device(config)# interface GigabitEthernet 0/0/1

Specifies a Gigabit Ethernet interface for which Multilink PPP must be configured and enters interface configuration mode.

Step 4

negotiation auto

Example:

Device(config-if)# negotiation auto

Enables the autonegotiation protocol to configure the speed, duplex, and automatic flow control of the Gigabit Ethernet interface.

Step 5

pppoe enable group group-name

Example:

Device(config-if)# pppoe enable group mlpoe-bba-group-10m

Enables PPPoE sessions on an Ethernet interface or subinterface.

Step 6

end

Example:

Device(config-if)# end 

Exits interface configuration mode and returns to privileged EXEC mode.
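
After completing these steps and bringing up a session from the CPE, the PPPoE session and the resulting MLP bundle can typically be checked with the following show commands (shown as a sketch; the fields displayed vary by platform and release):

Device# show pppoe session
Device# show ppp multilink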

Configuring MLPoE over VLAN

SUMMARY STEPS

  1. enable
  2. configure terminal
  3. interface type number
  4. encapsulation dot1q vlan-id
  5. pppoe enable group group-name
  6. end

DETAILED STEPS

Step 1

enable

Example:

Device> enable

Enables privileged EXEC mode.

  • Enter your password if prompted.

Step 2

configure terminal

Example:

Device# configure terminal

Enters global configuration mode.

Step 3

interface type number

Example:

Device(config)# interface GigabitEthernet 0/0/0.1

Specifies the Gigabit Ethernet subinterface for which Multilink PPP must be configured and enters subinterface configuration mode.

Step 4

encapsulation dot1q vlan-id

Example:

Device(config-subif)# encapsulation dot1q 13

Enables IEEE 802.1q encapsulation of traffic on the specified subinterface in VLANs.

  • vlan-id is the virtual LAN identifier. The range is from 1 to 4094.

Step 5

pppoe enable group group-name

Example:

Device(config-subif)# pppoe enable group mlpoe-bba-group-10m

Enables PPPoE sessions on the subinterface.

Step 6

end

Example:

Device(config-subif)# end 

Exits subinterface configuration mode and returns to privileged EXEC mode.

Configuring MLPoE over QinQ

SUMMARY STEPS

  1. enable
  2. configure terminal
  3. interface type number
  4. encapsulation dot1q vlan-id second-dot1q {any | vlan-id | vlan-id-vlan-id | [, vlan-id-vlan-id ]}
  5. pppoe enable group group-name
  6. end

DETAILED STEPS

Step 1

enable

Example:

Device> enable

Enables privileged EXEC mode.

  • Enter your password if prompted.

Step 2

configure terminal

Example:

Device# configure terminal

Enters global configuration mode.

Step 3

interface type number

Example:

Device(config)# interface GigabitEthernet 0/0/1.1

Specifies the Gigabit Ethernet subinterface for which Multilink PPP must be configured and enters subinterface configuration mode.

Step 4

encapsulation dot1q vlan-id second-dot1q {any | vlan-id | vlan-id-vlan-id | [, vlan-id-vlan-id ]}

Example:

Device(config-subif)# encapsulation dot1q 14 second-dot1q 140

Enables IEEE 802.1q encapsulation of traffic on a specified subinterface in VLANs.

  • vlan-id is the Virtual LAN identifier. Enter a hyphen to separate the starting and ending VLAN ID values that are used to define a range of VLAN IDs. Optionally, enter a comma to separate each VLAN ID range from the next range. The range is from 1 to 4094.

Step 5

pppoe enable group group-name

Example:

Device(config-subif)# pppoe enable group mlpoe-bba-group-10m

Enables PPPoE sessions on an Ethernet interface or subinterface.

Step 6

end

Example:

Device(config-subif)# end 

Exits subinterface configuration mode and returns to privileged EXEC mode.

Configuration Examples for MLPoE at PTA

Example: Configuring MLPoE at PTA

The following example shows how to configure Multilink PPP over Ethernet (MLPoE) on the PTA device:

class-map match-all ip-prec-1
 match ip precedence 1 
!
policy-map mlp-child-lfi-policy
 class ip-prec-1
  priority percent 10
policy-map mlp-parent-250K
 class class-default
  shape average 250000
  service-policy mlp-child-lfi-policy
policy-map mlp-parent-10M
 class class-default
  shape average 10000000
  service-policy mlp-child-lfi-policy
interface virtual-template 15
 description MLPoE/oEoVLAN/oEoQinQ (single-link bundle) Virtual Template
 ip address negotiated
 peer default ip address pool MLP-IPv4-Pool
 ppp max-failure 30
 ppp chap password 0 password1
 ppp multilink
 ppp multilink interleave
 ppp multilink endpoint magic-number
 ppp timeout retry 4
 service-policy output mlp-parent-10M
bba-group pppoe mlpoe-bba-group-10M
 virtual-template 15
ip local pool MLP-IPv4-Pool 209.165.201.2 209.165.201.10
interface GigabitEthernet 0/0/0
 description MLPoE (Single-Link Bundles) Session (PTA Mode) to 7200-41 0/1
 no ip address
 negotiation auto
 pppoe enable group mlpoe-bba-group-10M

Example: Configuring MLPoE over VLAN

The following example shows how to configure Multilink PPP over Ethernet over VLAN (MLPoEoVLAN) on the PTA device:

class-map match-all ip-prec-1
 match ip precedence 1 
policy-map mlp-child-lfi-policy
 class ip-prec-1
  priority percent 10
policy-map mlp-parent-250K
 class class-default
  shape average 250000
  service-policy mlp-child-lfi-policy
policy-map mlp-parent-10M
 class class-default
  shape average 10000000
  service-policy mlp-child-lfi-policy
interface virtual-template 15
 description MLPoE/oEoVLAN/oEoQinQ (single-link bundle) Virtual Template
 ip address negotiated
 peer default ip address pool MLP-IPv4-Pool
 ppp max-failure 30
 ppp chap password 0 password1
 ppp multilink
 ppp multilink interleave
 ppp multilink endpoint magic-number
 ppp timeout retry 4
 service-policy output mlp-parent-10M
bba-group pppoe mlpoe-bba-group-10M
 virtual-template 15
ip local pool MLP-IPv4-Pool 209.165.201.2 209.165.201.10
interface GigabitEthernet 0/0/0.13
 description MLPoEoVLAN Session (Single-Link Bundles) (PTA Mode)
 encapsulation dot1Q 13
 pppoe enable group mlpoe-bba-group-10M

Example: Configuring MLPoE over QinQ

The following example shows how to configure Multilink PPP over Ethernet over 802.1Q-in-802.1Q (MLPoEoQinQ) on the PTA device:

class-map match-all ip-prec-1
 match ip precedence 1 
policy-map mlp-child-lfi-policy
 class ip-prec-1
  priority percent 10
policy-map mlp-parent-250K
 class class-default
  shape average 250000
  service-policy mlp-child-lfi-policy
policy-map mlp-parent-10M
 class class-default
  shape average 10000000
  service-policy mlp-child-lfi-policy
interface virtual-template 15
 description MLPoE/oEoVLAN/oEoQinQ (single-link bundle) Virtual Template
 ip address negotiated
 peer default ip address pool MLP-IPv4-Pool
 ppp max-failure 30
 ppp chap password 0 password1
 ppp multilink
 ppp multilink interleave
 ppp multilink endpoint magic-number
 ppp timeout retry 4
 service-policy output mlp-parent-10M
bba-group pppoe mlpoe-bba-group-10M
 virtual-template 15
ip local pool MLP-IPv4-Pool 40.1.0.1 40.1.0.6
interface GigabitEthernet 0/0/0.14
 description MLPoEoQinQ Session (Single-Link Bundles) (PTA Mode)
 encapsulation dot1Q 14 second-dot1q 140
 pppoe enable group mlpoe-bba-group-10M

Additional References for MLPoE at PTA

Related Documents

  • Cisco IOS commands: Cisco IOS Master Command List, All Releases
  • PPP commands: Dial Technologies Command Reference
  • Multilink PPP: Multilink PPP Feature Functionality on the ASR 1000 Series Aggregation Services Router

Standards and RFCs

  • RFC 1990: The PPP Multilink Protocol (MP)
  • RFC 2686: The Multi-Class Extension to Multi-Link PPP

Technical Assistance

The Cisco Support and Documentation website provides online resources to download documentation, software, and tools. Use these resources to install and configure the software and to troubleshoot and resolve technical issues with Cisco products and technologies. Access to most tools on the Cisco Support and Documentation website requires a Cisco.com user ID and password.

Link: http://www.cisco.com/cisco/web/support/index.html

Feature Information for MLPoE at PTA

The following table provides release information about the feature or features described in this module. This table lists only the software release that introduced support for a given feature in a given software release train. Unless noted otherwise, subsequent releases of that software release train also support that feature.

Use Cisco Feature Navigator to find information about platform support and Cisco software image support. To access Cisco Feature Navigator, go to www.cisco.com/go/cfn. An account on Cisco.com is not required.

Table 1. Feature Information for Multilink PPP over Ethernet at PTA

  • Feature Name: MLPoE at PTA
  • Releases: 12.2(33)XNE, Cisco IOS XE Release 3.4S, Cisco IOS XE Release 3.10S, Cisco IOS XE Release 3.12S
  • Feature Information: The Multilink PPP over Ethernet (MLPoE) at PPP Termination and Aggregation (PTA) feature allows customer premises equipment (CPE) and PTA devices to interleave high-priority, low-latency PPP-encapsulated packets between Multilink PPP fragments of lower-priority, higher-latency packets.