MPLS VPN Multicast

Overview

Once basic MPLS VPN services have been implemented, there are various additional services that can run on top of them. Examples are the MPLS Multicast VPN (MVPN) and MVPN Extranet services. The MPLS Multicast VPN service simply allows your customers to route multicast traffic over your MPLS infrastructure. All multicast tree types can be configured, so ASM, SSM and Bidir are supported.

On top of that, the Multicast VPN Extranet service allows multicast flows originated at one customer’s sites to reach the sites of other customers.

Topology used

[Topology diagram]

The topology is composed of two P routers and two PE routers. P1 is the route reflector for VPNv4, and customer site separation is done with VLANs and subinterfaces on the PEs.

First, let’s check that the MPLS VPN is working properly :

CE1 routing table and interface configuration:

CE1#sh ip route
Gateway of last resort is 100.0.0.1 to network 0.0.0.0
 
S*    0.0.0.0/0 [1/0] via 100.0.0.1
      100.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
C        100.0.0.0/24 is directly connected, FastEthernet1/0
L        100.0.0.2/32 is directly connected, FastEthernet1/0
CE1#sh ip int brief
Interface                  IP-Address      OK? Method Status                Protocol
FastEthernet0/0            unassigned      YES unset  administratively down down
FastEthernet1/0            100.0.0.2       YES manual up                    up
FastEthernet1/1            unassigned      YES unset  administratively down down

CE2 routing table and interface configuration:

CE2#sh ip route
Gateway of last resort is 200.0.0.1 to network 0.0.0.0
 
S*    0.0.0.0/0 [1/0] via 200.0.0.1
      200.0.0.0/24 is variably subnetted, 2 subnets, 2 masks
C        200.0.0.0/24 is directly connected, FastEthernet1/0
L        200.0.0.2/32 is directly connected, FastEthernet1/0
CE2#sh ip int brief
Interface                  IP-Address      OK? Method Status                Protocol
FastEthernet0/0            unassigned      YES unset  administratively down down
FastEthernet1/0            200.0.0.2       YES manual up                    up
FastEthernet1/1            unassigned      YES unset  administratively down down

Ping and Traceroute from CE1 to CE2 :

CE1#ping 200.0.0.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 200.0.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 84/101/112 ms
 
CE1#traceroute 200.0.0.2 numeric
Type escape sequence to abort.
Tracing the route to 200.0.0.2
VRF info: (vrf in name/id, vrf out name/id)
  1 100.0.0.1 32 msec 40 msec 56 msec
  2 200.0.0.1 84 msec 72 msec 88 msec
  3 200.0.0.2 108 msec *  104 msec

Connectivity is OK. We can also see that this version of IOS (15.1M) is hiding the MPLS TTL by default: the P routers do not appear as hops in the traceroute.
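
This behavior is controlled by the TTL propagation knob. A minimal sketch if you want (or don’t want) the P routers to show up in customer traceroutes; whether propagation is enabled by default can vary per release:

! copy the IP TTL into the MPLS label, exposing the core hops in traceroute
mpls ip propagate-ttl
! or hide the core only from forwarded (customer) traffic
no mpls ip propagate-ttl forwarded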

Configuration

Multicast Support

So first, we need to support multicast natively in the provider network. We need this because the multicast flows that will come from the customers will travel through the provider network in… multicast.

So let’s follow the basic steps for multicast, which are:

  • Enable multicast routing globally
  • Enable PIM on all interfaces that need to support multicast (here I chose PIM sparse mode); for me this includes the loopback of P1
  • Select an RP inside the provider network; for me it will be P1, advertised through BSR

P1 configuration :

ip multicast-routing
!
interface Loopback0
 ip address 1.1.1.1 255.255.255.255
 ip pim sparse-mode
!
interface FastEthernet1/0
 ip address 192.168.1.1 255.255.255.0
 ip pim sparse-mode
 mpls ip
!
ip pim bsr-candidate Loopback0 0
ip pim rp-candidate Loopback0
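
Only P1 is shown above, but the PEs (and P2) need the equivalent core-facing configuration. A minimal sketch for PE1, where the core-facing interface name is an assumption:

ip multicast-routing
!
interface FastEthernet0/0
 description Link towards P1 (interface name assumed)
 ip pim sparse-mode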

Once PIM is configured everywhere, we can check that the RP election is OK:

PE1#sh ip pim bsr-router
PIMv2 Bootstrap information
  BSR address: 1.1.1.1 (?)
  Uptime:      12:13:15, BSR Priority: 0, Hash mask length: 0
  Expires:     00:01:22
PE2#sh ip pim bsr-router
PIMv2 Bootstrap information
  BSR address: 1.1.1.1 (?)
  Uptime:      12:13:35, BSR Priority: 0, Hash mask length: 0
  Expires:     00:02:0

If you want to make further tests, join a group on one PE loopback and generate some multicast traffic from the other PE, as in the sketch below.
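
A minimal sketch of such a test; the group address is arbitrary and PIM must be enabled on the loopback for the join to be processed:

! on PE2 (receiver side)
interface Loopback0
 ip pim sparse-mode
 ip igmp join-group 239.100.100.100

! from PE1 (source side)
PE1#ping 239.100.100.100 repeat 5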

Multicast VRF Configuration

So now that we support Multicast inside the provider network, we need to provide this service to the customer.

First we need to enable multicast for the customer VRF on the PEs. Then, as with a usual multicast configuration, we need to enable PIM on the relevant interfaces and, since I selected sparse mode, use an RP that is reachable from the customer VRF (it could be one of the CEs or one of the PEs); the RP assignment is shown after the PE configurations below.

The last piece of configuration is to select the multicast group address onto which the flows from the customer will be mapped inside the provider network:

PE1 :

ip vrf CUST_A
 rd 1:1
 mdt default 239.1.1.1
 route-target export 1:1
 route-target import 1:1
!
ip multicast-routing vrf CUST_A
!
interface FastEthernet1/0.100
 encapsulation dot1Q 100
 ip vrf forwarding CUST_A
 ip address 100.0.0.1 255.255.255.0
 ip pim sparse-mode

PE2 :

ip vrf CUST_A
 rd 1:1
 mdt default 239.1.1.1
 route-target export 1:1
 route-target import 1:1
!
ip multicast-routing vrf CUST_A
!
interface FastEthernet1/0.100
 encapsulation dot1Q 100
 ip vrf forwarding CUST_A
 ip address 200.0.0.1 255.255.255.0
 ip pim sparse-mode
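
The customer VRF also needs an RP of its own. A minimal sketch using a static RP pointing at PE1’s VRF-facing address (this is the RP that shows up in the outputs later on); the CE-side lines are an assumption about how the customer routers learn the same RP:

! on both PEs
ip pim vrf CUST_A rp-address 100.0.0.1

! on the CEs (plain global configuration, they are not VRF-aware)
ip multicast-routing
ip pim rp-address 100.0.0.1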

The default MDT is the multicast group address that will be used to carry this customer’s traffic inside the MPLS network. So to test this setup, join a multicast group from CE2:

interface FastEthernet1/0
 ip address 200.0.0.2 255.255.255.0
 ip pim sparse-mode
 ip igmp join-group 239.11.11.11
 duplex auto
 speed auto
end

And then generate some multicast traffic from CE1 :

CE1#ping 239.11.11.11
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.11.11.11, timeout is 2 seconds:
 
Reply to request 0 from 200.0.0.2, 368 ms

If you want to check the multicast routing on the provider side, don’t forget that you need to look at the multicast routing table of the customer VRF:

From PE1 :

PE1#sh ip mroute vrf CUST_A
(*, 239.11.11.11), 12:26:25/00:03:10, RP 100.0.0.1, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Tunnel3, Forward/Sparse, 12:26:25/00:03:10
 
(100.0.0.2, 239.11.11.11), 00:02:43/00:00:16, flags: T
  Incoming interface: FastEthernet1/0.100, RPF nbr 100.0.0.2
  Outgoing interface list:
    Tunnel3, Forward/Sparse, 00:02:43/00:03:10

We have the (*,G) and the (S,G) entries, with the OIL indicating Tunnel3:

PE1#sh int tunnel 3
Tunnel3 is up, line protocol is up
  Hardware is Tunnel
  Interface is unnumbered. Using address of Loopback0 (11.11.11.11)
  MTU 17916 bytes, BW 100 Kbit/sec, DLY 50000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation TUNNEL, loopback not set
  Keepalive not set
  Tunnel source 11.11.11.11 (Loopback0)
   Tunnel Subblocks:
      src-track:
         Tunnel3 source tracking subblock associated with Loopback0
          Set of tunnels with source Loopback0, 1 member (includes iterators), on interface <OK>
  Tunnel protocol/transport multi-GRE/IP
    Key disabled, sequencing disabled
    Checksumming of packets disabled

So this Tunnel3 interface is unnumbered, borrowing the address of Loopback0. The output also indicates that this is a multipoint GRE tunnel, so let’s check in Wireshark:

[Wireshark capture of the GRE-encapsulated multicast packet]

What we see is that the original multicast flow has been encapsulated in GRE/IP: the new IP header has the source address of the originating PE, and the destination is the MDT group address we specified.
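
If you don’t have a capture at hand, the same mapping can be seen from the CLI: the MDT group appears in the provider’s global multicast routing table with the PE loopbacks as sources, and the remote PE shows up as a PIM neighbor over the tunnel interface (commands only, outputs omitted):

P1#show ip mroute 239.1.1.1
PE1#show ip pim vrf CUST_A neighbor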

So now let’s be curious and configure another customer with the same MDT :

PE1(config)#ip vrf CUST_B
PE1(config-vrf)# rd 2:2
PE1(config-vrf)# mdt default 239.1.1.1
% MDT-default group 239.1.1.1 overlaps with vrf CUST_A

This means that you cannot re-use an MDT address across multiple customers.

MPLS Multicast Extranet Support

OK, so what if customer B, inside its own VRF, wants to receive the multicast feeds from customer A? In our setup, the two customers are connected to the same PE. What we need first is for both customers to use the same RP for the distribution tree, which means that we need to play with the import/export features of the VRFs.

Some thoughts here :

The RP for CUST_A is currently PE1, so we can configure the same RP for CUST_B. This is not very scalable; a better way would probably be to set up some kind of service VRF where customers can find the RP service and from which we can also control who gets connected to whom.

Import or export? The issue with the import approach is that it is needed on every PE where you want the cross-customer service to be present. The export approach lets you do it only once. Depending on what you are trying to accomplish, choose one or the other.

So at this point we should have the VRF for CUST_B with its own RD, RTs and MDT, as sketched below.
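
A minimal sketch of that CUST_B configuration on PE1; the 239.2.2.2 MDT and the 100.0.200.1/24 subinterface address match the outputs further down, while the dot1Q tag is an assumption:

ip vrf CUST_B
 rd 2:2
 mdt default 239.2.2.2
 route-target export 2:2
 route-target import 2:2
!
ip multicast-routing vrf CUST_B
!
interface FastEthernet1/0.200
 encapsulation dot1Q 200
 ip vrf forwarding CUST_B
 ip address 100.0.200.1 255.255.255.0
 ip pim sparse-mode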

Let’s export the routes from CUST_A with the RT 2:2, which is the RT of CUST_B:

ip vrf CUST_A
 rd 1:1
 mdt default 239.1.1.1
 route-target export 1:1
 route-target export 2:2
 route-target import 1:1

Now we need to take a look at the routing table of CUST_B, where the routes from CUST_A should now be present. I have added a loopback on CE1 to simulate an internal network of the customer.
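
A minimal sketch of that loopback; only the 192.168.0.0/24 prefix appears in the outputs, the .1 host address is an assumption, and the prefix is advertised towards PE1 by whatever CE routing is already in place:

interface Loopback0
 ip address 192.168.0.1 255.255.255.0

Here is the CUST_B routing table on PE1: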

PE1#sh ip route vrf CUST_B
Gateway of last resort is not set
 
      100.0.0.0/8 is variably subnetted, 3 subnets, 2 masks
B        100.0.0.0/24
           is directly connected (CUST_A), 00:06:05, FastEthernet1/0.100
C        100.0.200.0/24 is directly connected, FastEthernet1/0.200
L        100.0.200.1/32 is directly connected, FastEthernet1/0.200
B     192.168.0.0/24 [20/0] via 100.0.0.2 (CUST_A), 00:02:14

Routes are there, including the internal network 192.168.0.0/24 that we would need to filter later. Now we can specify the RP for CUST_B with the same address as for CUST_A, because the route to it is present for both customers:

ip pim vrf CUST_A rp-address 100.0.0.1
ip pim vrf CUST_B rp-address 100.0.0.1
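
As an aside, the filtering of 192.168.0.0/24 mentioned above could later be handled with an export map on CUST_A, tagging only the shared prefix with RT 2:2 instead of exporting everything; a hypothetical sketch (the prefix-list and route-map names are made up):

ip prefix-list CUST_A_SHARED permit 100.0.0.0/24
!
route-map CUST_A_EXPORT permit 10
 match ip address prefix-list CUST_A_SHARED
 set extcommunity rt 2:2 additive
!
ip vrf CUST_A
 no route-target export 2:2
 export map CUST_A_EXPORT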

Now let’s generate multicast traffic from CE1 :

CE1#ping 239.11.11.11 rep 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.11.11.11, timeout is 2 seconds:
 
Reply to request 0 from 100.0.200.2, 68 ms
Reply to request 0 from 200.0.0.2, 88 ms
Reply to request 1 from 100.0.200.2, 80 ms
Reply to request 1 from 200.0.0.2, 84 ms
Reply to request 2 from 100.0.200.2, 72 ms
Reply to request 2 from 200.0.0.2, 72 ms

Let’s take a look at the mroute on PE1 for VRF CUST_A :

PE1#sh ip mroute vrf CUST_A 239.11.11.11
IP Multicast Routing Table
(*, 239.11.11.11), 00:28:35/00:02:02, RP 100.0.0.1, flags: SJCE
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Tunnel3, Forward/Sparse, 00:28:05/00:02:02
  Extranet receivers in vrf CUST_B:
(*, 239.11.11.11), 00:28:35/stopped, RP 100.0.0.1, OIF count: 1, flags: SJC
 
(100.0.0.2, 239.11.11.11), 00:04:55/00:02:28, flags: TE
  Incoming interface: FastEthernet1/0.100, RPF nbr 100.0.0.2
  Outgoing interface list:
    Tunnel3, Forward/Sparse, 00:04:55/00:03:29
  Extranet receivers in vrf CUST_B:
  (100.0.0.2, 239.11.11.11), 00:04:55/stopped, OIF count: 1, flags: T

The mroute entries now list the extranet receivers, specifying the outgoing VRF!

Now let’s move CE4 into the CUST_B VRF to see what happens when the CEs are not connected to the same PE.
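
For that, PE2 needs the CUST_B VRF as well, mirroring the PE1 configuration. A minimal sketch, where the subinterface number, dot1Q tag and the 200.0.200.1/24 address are assumptions based on the replies seen below:

ip vrf CUST_B
 rd 2:2
 mdt default 239.2.2.2
 route-target export 2:2
 route-target import 2:2
!
ip multicast-routing vrf CUST_B
!
interface FastEthernet1/0.200
 encapsulation dot1Q 200
 ip vrf forwarding CUST_B
 ip address 200.0.200.1 255.255.255.0
 ip pim sparse-mode
!
ip pim vrf CUST_B rp-address 100.0.0.1

With CE4 joining 239.11.11.11 behind PE2, the ping from CE1 now gets a third reply: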

CE1#ping 239.11.11.11 rep 100
Reply to request 0 from 100.0.200.2, 68 ms
Reply to request 0 from 200.0.0.2, 80 ms
Reply to request 0 from 200.0.200.2, 68 ms

This is working; traffic that needs to be carried from PE1 to PE2 for CUST_B is now encapsulated with the 239.2.2.2 MDT address.

[Packet capture showing the CUST_B traffic encapsulated with the 239.2.2.2 MDT address]
