This document describes In-band Signaling Global MLDP, which is Profile 7 for Next Generation Multicast over VPN (mVPN). It uses an example and the implementation in Cisco IOS in order to illustrate the behavior.
The Opaque Value is used to map an MP LSP to an IP multicast flow.
The contents of the Opaque Value are derived from the multicast flow.
IPv4 PIM-SSM transit allows global PIM-SSM streams to be transported across the Multiprotocol Label Switching (MPLS) core. The Opaque Value contains the actual (S, G), which resides in the global mroute table of the ingress and egress PE routers.
Not only does the Opaque Value uniquely identify the MP LSP, but it can also carry the (S, G) stream information from the edge IP multicast network. P routers in the MP Label-Switched Path (LSP) do not need to parse the Opaque Value; they use the value as an index into their local MP LSP database in order to determine the next hop(s) to which to replicate the multicast packet. However, the Ingress PE (closest to the source) decodes the value so that it can select the correct MP LSP for the incoming (S, G) stream. The Egress PE can use the value in order to install (S, G) state into the local VRF or global mroute table.
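As a worked example, the Opaque Value 03 0008 0A010002E8010101 that appears in the show mpls mldp database outputs later in this document decodes as follows (type 3 is the IPv4 PIM-SSM transit opaque type):

03           - Opaque type: 3 (IPv4 PIM-SSM transit)
00 08        - Opaque length: 8 bytes
0A 01 00 02  - Source: 10.1.0.2
E8 01 01 01  - Group:  232.1.1.1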
P2MP LSP:
- Receiver driven; the root is learned and signaled using the MLDP P2MP FEC.
- Uniquely identified by: Root address + Opaque Value.

MP2MP LSP:
- Configuration driven; the root is manually configured.
- Uniquely identified by: Root address + Opaque Value.

Opaque Value:
- Used to carry the multicast stream information, which has meaning to the root and the leaves.
- Type 1: Defined by MLDP; contains an LSP-ID to manage the ID space for P2MP/MP2MP LSPs.
- Type 2: Defined for provisioning MLDP tunnels and used for BGP-MVPN without any overlap.
MPLS technology extensions to support multicast using labels:

             | P2MP                  | MP2MP
Label        | Downstream allocation | Upstream and downstream
Traffic      | Downstream flow       | Upstream and downstream
Root         | Ingress router        | Provider/Provider Edge
Traffic type | Data and control      | Control traffic only
LSP type     | Root to many leaves   | Many roots to many leaves
Step 1. Enable MPLS MLDP on the core nodes.
On PE1, PE2, and PE3:
# mpls mldp logging notifications
Step 2. Enable MLDP in-band signaling in the core.
On PE1, PE2 and PE3:
# ip multicast mpls mldp
# ip pim mpls source loopback 0
Step 3. Enable multicast routing.
On all nodes:
# ip multicast-routing
Step 4. Enable Protocol Independent Multicast (PIM) SSM on the Customer Edge (CE) nodes.
On CE nodes:
# ip pim ssm default
Step 5. Enable PIM sparse mode on all CE interfaces and on the CE-facing Provider Edge (PE) interfaces.
On CE1, CE2, CE3, and all CE-facing PE interfaces:
# interface x/x
# ip pim sparse-mode
# interface loopback x
# ip pim sparse-mode
Note: x represents the number of the interface that connects the PE to the CE and vice versa.
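Put together, a minimal PE1 configuration for this profile could look like this sketch. The interface roles are assumptions inferred from the outputs later in this document (Ethernet0/1 faces the core, Ethernet0/2 faces CE1); the Loopback0 address 1.1.1.1 is taken from the show output.

! PE1 - minimal sketch; interface names and roles are assumptions
ip multicast-routing
!
mpls mldp logging notifications
ip multicast mpls mldp           ! global MLDP in-band signaling
ip pim mpls source Loopback0     ! source interface used for the LSPVIFs
!
interface Loopback0
 ip address 1.1.1.1 255.255.255.255
 ip pim sparse-mode
!
interface Ethernet0/2            ! CE-facing interface
 ip pim sparse-mode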
Task 1: Verify Physical Connectivity.
Task 2: Verify BGP Address Family IPv4 unicast.
Task 3: Verify Multicast Traffic end to end.
Task 4: Verify MPLS CORE.
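As a minimal sketch, these tasks map to commands such as the ones that follow. The source address 10.1.0.2 and group 232.1.1.1 are taken from this document's topology; run the ping from the source CE once the receiver has joined.

# show ip interface brief          (Task 1)
# show bgp ipv4 unicast summary    (Task 2)
# ping 232.1.1.1 source 10.1.0.2   (Task 3, from the source CE)
# show mpls ldp neighbor           (Task 4)
# show mpls forwarding-table       (Task 4)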
The Interior Gateway Protocol (IGP), MPLS LDP, and Border Gateway Protocol (BGP) run end to end across the network.
In this section, verify the core/aggregation network: check the adjacencies, the control plane, and the data plane for traffic over the MPLS network.
In order to verify that the local and remote CE devices can communicate across the Multiprotocol Label Switching (MPLS) core, perform these steps:
Verify the control plane. Label imposition occurs when the PE router forwards based on the IP header and adds an MPLS label to the packet as it enters the MPLS network.
In the direction of label imposition, the router switches packets based on a Cisco Express Forwarding (CEF) table lookup to find the next hop and adds the appropriate label information stored in the FIB for the destination. When a router performs label swapping in the core on an MPLS packet, the router does an MPLS table lookup. The router derives this MPLS table (LFIB) from information in the CEF table and the Label Information Base (LIB).
Label disposition occurs when the PE router receives an MPLS packet, makes a forwarding decision based on the MPLS label, removes the label, and sends an IP packet. The PE router uses the LFIB for path determination for a packet in this direction. As stated previously, a special iBGP session facilitates the advertisement of VPNv4 prefixes and their labels between PE routers. At the advertising PE, BGP allocates labels for the VPN prefixes learned locally and installs them in the Label Forwarding Information Base (LFIB), which is the MPLS forwarding table.
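A minimal sketch of commands to inspect these tables on a PE or P router; the destination prefix is an assumption (PE1's Loopback0, 1.1.1.1, is used as an example).

! CEF (FIB): next hop and outgoing label for a destination prefix
# show ip cef 1.1.1.1 255.255.255.255 detail
! LFIB: local label, outgoing label, and next hop used for label swapping
# show mpls forwarding-table
! LIB: label bindings learned via LDP
# show mpls ldp bindings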
MLDP-MFI: Enabled MLDP MFI client on Ethernet0/0; status = ok
MLDP-MFI: Enabled MLDP MFI client on Ethernet0/1; status = ok
MLDP: P2MP Wildcard label request sent to 11.11.11.11:0 success
MLDP: MP2MP Wildcard label request sent to 11.11.11.11:0 success
MLDP-NBR: 11.11.11.11:0 ask LDP for adjacencies
Note: Use # debug mpls mldp all to check the preceding establishment.
PE1#sh mpls mldp neighbors
 MLDP peer ID    : 11.11.11.11:0, uptime 00:02:05 Up,
  Target Adj     : No
  Session hndl   : 1
  Upstream count : 0
  Branch count   : 0
  Path count     : 1
  Path(s)        : 10.0.1.2    LDP Ethernet0/1
  Nhop count     : 0
# ip pim mpls source loopback 0
# ip multicast mpls mldp
MLDP: Enabled IPv4 on Lspvif0 unnumbered with Loopback0
MLDP-MFI: Enabled MLDP MFI client on Lspvif0; status = ok
PIM(*): PIM subblock added to Lspvif0
MLDP: Enable pim on lsp vif: Lspvif0
MLDP: Add success lsp vif: Lspvif0 address: 0.0.0.0 application: MLDP vrf_id: 0
MLDP-DB: Replaying database events for opaque type value: 3
%LINEPROTO-5-UPDOWN: Line protocol on Interface Lspvif0, changed state to up
PIM(0): Check DR after interface: Lspvif0 came up!
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 1.1.1.1 on interface Lspvif0
Note: Use # debug mpls mldp all to check the preceding establishment.
PE1#sh int lspvif 0
Lspvif0 is up, line protocol is up
  Hardware is
  Interface is unnumbered. Using address of Loopback0 (1.1.1.1)
  MTU 17940 bytes, BW 8000000 Kbit/sec, DLY 5000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation LOOPBACK, loopback not set
Note: The MPLS MLDP database entry is not yet created because the receiver is not online yet.
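A quick check on PE1 can confirm this; a sketch (no entries are expected at this point):

# show mpls mldp database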
Receiver 3 comes online and sends a PIM (S, G) join to PE3.
PIM(0): Received v2 Join/Prune on Ethernet0/2 from 10.2.0.2, to us
PIM(0): Join-list: (10.1.0.2/32, 232.1.1.1), S-bit set
MRT(0): Create (*,232.1.1.1), RPF (unknown, 0.0.0.0, 2147483647/0)
MRT(0): RPF Track start on 10.1.0.2 for (10.1.0.2, 232.1.1.1)
MRT(0): Reset the z-flag for (10.1.0.2, 232.1.1.1)
MLDP: Enabled IPv4 on Lspvif1 unnumbered with Loopback0
MLDP-MFI: Enable lsd on int failed; not registered;
PIM(*): PIM subblock added to Lspvif1
MLDP: Enable pim on lsp vif: Lspvif1
MLDP: Add success lsp vif: Lspvif1 address: 1.1.1.1 application: MLDP vrf_id: 0
MLDP-MRIB-IP: (10.1.0.2,232.1.1.1/32) update (t=0) RPF: 0.0.0.0
MLDP-MRIB-IP: (10.1.0.2,232.1.1.1/32) set rpf nbr: 0.0.0.0
MLDP-MRIB-IP: wavl insert success (10.1.0.2, 232.1.1.1)
MLDP-MRIB-IP: no RPF neighbor, done!
MLDP-MRIB-IP: (10.1.0.2,232.1.1.1/32) update (t=1) RPF: 1.1.1.1
MLDP-MRIB-IP: (10.1.0.2,232.1.1.1/32) set rpf nbr: 1.1.1.1
MLDP-MRIB-IP: Change RPF neighbor from 0.0.0.0 to 1.1.1.1
MLDP-MRIB-IP: (10.1.0.2,232.1.1.1/32) update idb = Lspvif1, (f=2,c=2)
MLDP-MRIB-IP: add accepting interface: Lspvif1 root: 1.1.1.1
MLDP-MRIB-IP: change interface from NULL to Lspvif1
%LINEPROTO-5-UPDOWN: Line protocol on Interface Lspvif1, changed state to up
PIM(0): Check DR after interface: Lspvif1 came up!
PIM(0): Changing DR for Lspvif1, from 0.0.0.0 to 2.2.2.2 (this system)
%PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 2.2.2.2 on interface Lspvif1
Note: Use # debug mpls mldp all and # debug ip bgp ipv4 mvpn updates in order to check the preceding establishment.
Any (S, G) join from the receiver is converted to MLDP, and all messages traverse towards Lspvif1.
Because MLDP is a receiver-driven protocol, the PIM (S, G) join builds the MLDP database from the receiver towards the source. This is downstream label allocation for P2MP MLDP.
Note: In in-band signaling, Label Switched Path Virtual Interfaces (LSPVIFs) are created per ingress PE in order to implement strict RPF, i.e., to accept an (S, G) packet only if it comes from the expected remote PE; this is Lspvif1 in this case. At a source PE, the default LSPVIF is used to forward onto the core. Note that there is no significance to the LSPVIF interface numbers; i.e., Lspvif0 is not always the default interface and Lspvif1 is not always the per-PE interface. These numbers are allocated on demand as required.
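In order to list the LSPVIF interfaces and the application that created them, this command can be used (a sketch; the exact output format varies by release):

# show ip multicast mpls vif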
PE3#sh ip mroute 232.1.1.1 verbose
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       T - SPT-bit set, p - PIM Joins on route,

(10.1.0.2, 232.1.1.1), 00:19:28/00:02:42, flags: sTp
  Incoming interface: Lspvif1, RPF nbr 1.1.1.1
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse, 00:19:28/00:02:42, p

PE3#sh mpls mldp database
  * For interface indicates MLDP recursive forwarding is enabled
  * For RPF-ID indicates wildcard value
  > Indicates it is a Primary MLDP MDT Branch

LSM ID : 1   Type: P2MP   Uptime : 00:28:02
  FEC Root           : 1.1.1.1
  Opaque decoded     : [ipv4 10.1.0.2 232.1.1.1]
  Opaque length      : 8 bytes
  Opaque value       : 03 0008 0A010002E8010101
  Upstream client(s) :
    11.11.11.11:0    [Active]
      Expires        : Never         Path Set ID  : 1
      Out Label (U)  : None          Interface    : Ethernet0/3*
      Local Label (D): 24            Next Hop     : 10.0.3.2
  Replication client(s):
    MRIBv4(0)
      Uptime         : 00:28:02      Path Set ID  : None
      Interface      : Lspvif1

RR-P#sh mpls mldp database
  * For interface indicates MLDP recursive forwarding is enabled
  * For RPF-ID indicates wildcard value
  > Indicates it is a Primary MLDP MDT Branch

LSM ID : A   Type: P2MP   Uptime : 00:40:52
  FEC Root           : 1.1.1.1
  Opaque decoded     : [ipv4 10.1.0.2 232.1.1.1]
  Opaque length      : 8 bytes
  Opaque value       : 03 0008 0A010002E8010101
  Upstream client(s) :
    1.1.1.1:0    [Active]
      Expires        : Never         Path Set ID  : A
      Out Label (U)  : None          Interface    : Ethernet0/1*
      Local Label (D): 24            Next Hop     : 10.0.1.1
  Replication client(s):
    2.2.2.2:0
      Uptime         : 00:40:52      Path Set ID  : None
      Out label (D)  : 23            Interface    : Ethernet0/3*
      Local label (U): None          Next Hop     : 10.0.2.1
    3.3.3.3:0
      Uptime         : 00:40:52      Path Set ID  : None
      Out label (D)  : 24            Interface    : Ethernet0/2*
      Local label (U): None          Next Hop     : 10.0.3.1
This information is received at the source PE, based on the RPF lookup for the next hop.
MLDP-LDP: [ipv4 10.1.0.2 232.1.1.1] label mapping from: 11.11.11.11:0 label: 23 root: 1.1.1.1 Opaque_len: 11 sess_hndl: 0x1
MLDP: LDP root 1.1.1.1 added
MLDP-DB: Added [ipv4 10.1.0.2 232.1.1.1] DB Entry
MLDP-DB: [ipv4 10.1.0.2 232.1.1.1] Changing branch 11.11.11.11:0 from Null/0.0.0.0 to Ethernet0/1/10.0.1.2
MLDP-MFI: Could not add Path type: PKT, Label: 23, Next hop: 11.11.11.11, Interface: NULL to set: 3, error 1
MLDP-DB: [ipv4 10.1.0.2 232.1.1.1] Added P2MP branch for 11.11.11.11:0 label 23
MLDP-MRIB-IP: [ipv4 10.1.0.2 232.1.1.1] client update: We are root
MLDP-MRIB-IP: wavl insert success (10.1.0.2, 232.1.1.1)
MLDP-MRIB-IP: [ipv4 10.1.0.2 232.1.1.1] Created: Lspvif0 for: 0.0.0.0
MLDP-MRIB: Created adjacency for LSM ID 3
MLDP-MRIB-IP: [ipv4 10.1.0.2 232.1.1.1] Created adjacency on Lspvif0
MLDP: nhop 1.1.1.1 added
MRT(0): Set the T-flag for (10.1.0.2, 232.1.1.1)
MRT(0): (10.1.0.2,232.1.1.1), RPF install from /0.0.0.0 to Ethernet0/2/10.1.0.2
PIM(0): Insert (10.1.0.2,232.1.1.1) join in nbr 10.1.0.2's queue
MLDP-MRIB-IP: (10.1.0.2,232.1.1.1/32) update (t=1) RPF: 10.1.0.2
MLDP-MRIB-IP: (10.1.0.2,232.1.1.1/32) set rpf nbr: 10.1.0.2
MLDP-MRIB-IP: ignoring interface Ethernet0/2, no LS
Note: Use # debug mpls mldp all and # debug ip bgp ipv4 mvpn updates to check the preceding establishment.
PE1#sh ip mroute 232.1.1.1 verbose
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag, T - SPT-bit set,
       I - Received Source Specific Host Report,

(10.1.0.2, 232.1.1.1), 00:25:14/stopped, flags: sTI
  Incoming interface: Ethernet0/2, RPF nbr 10.1.0.2
  Outgoing interface list:
    Lspvif0, LSM ID: 4, Forward/Sparse, 00:25:14/00:01:45

PE1#sh mpls mldp database
  * For interface indicates MLDP recursive forwarding is enabled
  * For RPF-ID indicates wildcard value
  > Indicates it is a Primary MLDP MDT Branch

LSM ID : 4   Type: P2MP   Uptime : 00:25:25
  FEC Root           : 1.1.1.1 (we are the root)
  Opaque decoded     : [ipv4 10.1.0.2 232.1.1.1]
  Opaque length      : 8 bytes
  Opaque value       : 03 0008 0A010002E8010101
  Upstream client(s) : None
      Expires        : N/A           Path Set ID  : 4
  Replication client(s):
    11.11.11.11:0
      Uptime         : 00:25:25      Path Set ID  : None
      Out label (D)  : 24            Interface    : Ethernet0/1*
      Local label (U): None          Next Hop     : 10.0.1.2

MLDP-LDP: [id 0] Wildcard label request from: 11.11.11.11:0 label: 0 root: 6.2.0.0 Opaque_len: 0 sess_hndl: 0x1
MLDP-LDP: [ipv4 10.1.0.2 232.1.1.1] label mapping from: 11.11.11.11:0 label: 23 root: 1.1.1.1 Opaque_len: 11 sess_hndl: 0x1

Neighbor 11.11.11.11 sends the wildcard label request to PE1.
Note: The router responds to Typed Wildcard Label Requests received from a peer by replaying its label database for prefixes. It also sends Typed Wildcard Label Requests towards peers in order to request a replay of the peer's label database for prefixes.
MLDP-LDP: [ipv4 10.1.0.2 232.1.1.1] label mapping from: 11.11.11.11:0 label: 24 root: 1.1.1.1 Opaque_len: 11 sess_hndl: 0x1
MLDP: LDP root 1.1.1.1 added
MLDP-DB: Added [ipv4 10.1.0.2 232.1.1.1] DB Entry
MLDP-DB: [ipv4 10.1.0.2 232.1.1.1] Changing branch 11.11.11.11:0 from Null/0.0.0.0 to Ethernet0/1/10.0.1.2
%MLDP-5-ADD_BRANCH: [ipv4 10.1.0.2 232.1.1.1] Root: 1.1.1.1, Add P2MP branch 11.11.11.11:0 remote label 24

In order to observe the data plane, enable these debugs:
# debug ip mfib pak
# debug ip mfib mrib
Traffic from source 10.1.0.2 streams to group 232.1.1.1. It enters through Ethernet0/2, and the packet is forwarded via Lspvif0.

PIM(0): Insert (10.1.0.2,232.1.1.1) join in nbr 10.1.0.2's queue
PIM(0): Building Join/Prune packet for nbr 10.1.0.2
PIM(0): Adding v2 (10.1.0.2/32, 232.1.1.1), S-bit Join
PIM(0): Send v2 join/prune to 10.1.0.2 (Ethernet0/2)
MFIBv4(0x0): Pkt (10.1.0.2,232.1.1.1) from Ethernet0/2 (FS) accepted for forwarding
MFIBv4(0x0): Pkt (10.1.0.2,232.1.1.1) from Ethernet0/2 (FS) sent on Lspvif0, LSM NBMA/4
This packet gets tunneled into Lspvif0.
At the receiver side, the packet arrives on Lspvif1.

MFIBv4(0x0): Pkt (10.1.0.2,232.1.1.1) from Lspvif1 (FS) accepted for forwarding
MFIBv4(0x0): Pkt (10.1.0.2,232.1.1.1) from Lspvif1 (FS) sent on Ethernet0/0
PIM(0): Received v2 Join/Prune on Ethernet0/0 from 10.3.0.2, to us
PIM(0): Join-list: (10.1.0.2/32, 232.1.1.1), S-bit set
PIM(0): Update Ethernet0/0/10.3.0.2 to (10.1.0.2, 232.1.1.1), Forward state, by PIM SG Join
When the packet hits PE1, PE1 checks the LSM ID in order to determine which label to impose on the multicast packet when it forwards the traffic.
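As a sketch, these commands on PE1 tie the (S, G) to its LSM ID and imposed label; the opaque_type arguments are this document's (S, G):

# show mpls mldp database opaque_type ipv4 10.1.0.2 232.1.1.1
# show ip mfib 232.1.1.1 verbose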
Multipoint LDP (MLDP) in-band signaling enables you to carry multicast traffic across an existing IP/MPLS backbone while you avoid the use of PIM in the provider core.
On the Label-Edge Router (LER), enable PIM to use M-LDP in-band signaling for the upstream neighbors when the LER does not detect a PIM upstream neighbor.