NG-MVPN Extranet: Part 2

Welcome back.

In the previous part, we configured all the service provider devices and obtained the first portion of control-plane information, populated by BGP.

This time we are going to configure the rest: the provider tunnel and PIM, then start a multicast source and have customers request some groups.

Type 1 Route

For the sake of continuity, our last result is shown again:

show route table MVPN-NG

As we already know, a Type 1 route is used by the service provider device located closest to the multicast source to determine with which PEs the inclusive PMSI (P-Multicast Service Interface) tunnels shall be established.
Provider tunnels are unidirectional by the nature of multicast, so the most important point is to have Type 1 routes from all NG-MVPN devices present on the device closest to our source, P2. Gladly, we see all of them: PE1, PE2 and PE3.

Provider tunnels

The tunnel type will be RSVP-TE, and we only have to configure a few statements on the tunnel source, provided the tunnel endpoints already support the chosen tunnel type, RSVP in our case.
An LSP template should be defined to support the dynamic nature of the P2MP LSP: if a new PE appears, the LSP source should be able to include it dynamically based on the Type 1 route information:

RSVP-TE Provider Tunnel Configuration

The configuration is pretty straightforward: define the LSP template and tell the VRF to use it for its provider tunnels.
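For reference, a minimal set-style sketch of such a configuration (the template name P2MP-TEMPLATE is my own placeholder; the instance name MVPN-NG is taken from the routing-table output above):

set protocols mpls label-switched-path P2MP-TEMPLATE template
set protocols mpls label-switched-path P2MP-TEMPLATE p2mp
set routing-instances MVPN-NG provider-tunnel rsvp-te label-switched-path-template P2MP-TEMPLATE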

As a result of this configuration, the LSP should be established immediately:

show mpls lsp p2mp [detail]

Most interesting here, apart from the success itself, are the paths chosen for the LSPs – the relevant lines are highlighted above and the paths are depicted below:

ng-mvpn_provider-tunnel-paths

Because our LSP template does not contain any explicit constraints, the RSVP-based LSPs are shown as constrained-path, but in practice the path computation relies entirely on IGP information.

Route Type 5

It would be very useful at this point to configure a multicast source and receivers, so that we can be sure the solid walls we are going to hit (and we definitely will) are not there simply because there are no senders and receivers.

The Sender

On the station connected to the same network as the ge-0/0/0.0 interface at P2, we are going to use a small piece of software that fits our environment perfectly, and configure “NSend” in this manner:

multicast-source-gui

Is the source registered?
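One way to answer that (a sketch; the instance name MVPN-NG is assumed from the earlier outputs) is to ask the RP itself, P2, what sources it has learned:

show pim source instance MVPN-NG detail
show multicast route instance MVPN-NG extensive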

Receivers

Compared with Cisco, where a static IGMP group has to be configured on a separate node representing the customer, on Juniper a static IGMP group can be configured on the PE itself, which is pretty useful and lets an administrator test the configuration without involving anyone else.

IGMP static groups on PEs
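A minimal sketch of such a static group, assuming the CE-facing interface ge-0/0/1.51 at PE1 (it appears in the monitoring output later) and one of the groups used in this lab; each PE would list the groups its customer is interested in:

set protocols igmp interface ge-0/0/1.51 static group 231.0.0.1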

Type 5

As a result of the source becoming active, a new MVPN NLRI has been originated by P2 – a Type 5.

Route Type 5

A Route Type 5 is originated by the PIM DR router after a source has been registered successfully.
This behaviour can be regulated by the mvpn configuration: only a site explicitly marked as a “sender-site”, or any site when no explicit limitation is configured, is allowed to originate Type 5 routes.
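If one wanted to make that restriction explicit, the relevant knobs sit under the mvpn protocol of the instance (a sketch, one statement or the other per site, with the instance name assumed as before):

set routing-instances MVPN-NG protocols mvpn sender-site
set routing-instances MVPN-NG protocols mvpn receiver-site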

PIM

It seems that Type 1 routes are being originated everywhere and the provider point-to-multipoint tunnel has been established as intended. The next step is to establish PIM neighborships between the PE routers and P2.
The current PIM section under the MVPN instance defines P2's loopback as the static RP:

Basic PIM section at PE
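In set form that section is roughly the following (a sketch, instance name assumed):

set routing-instances MVPN-NG protocols pim rp static address 200.200.200.200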

Obviously, anyone experienced with PIM would say “it just can't work without PIM-enabled interfaces” – and would be wrong.

Allow me to capture that moment in the outputs:

Basic PIM section at PE

No interfaces are PIM-enabled, nor does any neighborship exist.
Moreover, some show commands tell us quite plainly that the configuration isn't finished yet:
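The kind of checks I mean (a sketch of standard commands, with the instance name assumed as before):

show pim interfaces instance MVPN-NG
show pim neighbors instance MVPN-NG
show pim rps instance MVPN-NG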

So, the top priority in the PIM section is to make sure the PE routers can exchange PIM messages with the RP's IP address – 200.200.200.200 (P2's lo0.200 interface). But, as we decided at the start, “family inet-vpn unicast” is not enabled, so the RP address is not advertised by P2 and the PE routers have no way to work out how to build the RPT.

What options do we have?
There is no LSP from the PEs to P2 – provider tunnels built by RSVP are unidirectional and cannot help us here.
What about resolution options, like the case when a prefix missing from bgp.l3vpn.0 is resolved using inet.0? Nope, we don't have “200.200.200.200” in any table, so resolution is not a solution.
Well, feel free to share your ideas on how to deal with this in the comments, but here we are simply going to enable VPNv4 announcements on P2 and the PE routers.
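On Junos that is a one-liner per BGP group on P2 and on each PE (a sketch; the group name IBGP is my placeholder):

set protocols bgp group IBGP family inet-vpn unicast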

Of course, now the RP's IP address is present in the instance's inet.0 table:
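A quick way to confirm it (the table name follows from the instance name assumed earlier):

show route table MVPN-NG.inet.0 200.200.200.200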

Are the PE routers then able to work out how to reach the RP?

Check information about source at PE

The magic we are watching here is all about PIM – no PIM-enabled interfaces are present, but references to MVPN information are used instead: “Through BGP” and “Through MVPN”.
“10.0.1.11” above is the station running the multicast source software, but why is 200.200.200.200 listed as a source as well?
It is classic PIM-SM behaviour: for the first moments after start-up, the path through the RP is used to deliver the multicast traffic. This is called the shared tree, and it exists because the receiver does not yet know the actual IP of the source; only the RP address is known, and the RP is supposed to know about every source in the network. After a short delay, once multicast traffic is flowing to the receivers through the RP, a PE router learns the source IP address from the routed packets and may try to receive the flow without the RP in the middle. This behaviour can be controlled by configuration, and in our situation that could be a reasonable option: the RP sits on the shortest path to the source, so the SPT would never be better than the RPT and no switchover between the two is needed:

Almost done

All the outputs above could be mistaken for a happy ending to the story, because it seems the job is done: the routers know all the MVPN-related information, the provider tunnel is up, and PIM has used its magic to fill in the mandatory fields without any PIM-enabled interfaces.
Furthermore, if we check the multicast route at the P2 router, the topology looks fully converged:

At line 7, the directly connected interfaces towards PE2 (.202) and PE1 (.201) are in the downstream interface list, which matches the initial scheme. PE3 is a receiver for 231.0.0.3, which is not being streamed yet.
At lines 9 and 11, a lot of packets have already been transmitted!
So what is wrong?

After all, sending is not the only thing that matters: the receiver side has to be able to decapsulate the labelled packets and somehow decide where the multicast data should be sent.

But the PE routers are not capable of decapsulating and making that decision, because they have no suitable interface apart from the CE-facing ones.

Another way to prove that the multicast stream is not being sent out of the CE-facing interface is to monitor the interface statistics:

monitor interface ge-0/0/1.51 @ PE1

The “Current delta” column shows that no data is being transmitted.

So, the labelled multicast data arrives and is simply dropped, because the system has no point at which to inspect it.

As far as I know, this problem can be solved in only one way: a special virtual interface called “VT” (virtual tunnel) has to be added to the receiver's service instance.

VT tunnel configuration
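A minimal sketch of what that looks like on an MX-style box; the FPC/PIC numbers and whether tunnel-services needs to be enabled at all depend on the platform, so treat them as assumptions:

set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
set interfaces vt-0/0/0 unit 0 family inet
set routing-instances MVPN-NG interface vt-0/0/0.0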

What if we check the PE1 multicast route again?

It seems data is being transmitted, and the VT interface counts as the upstream interface.

monitor interface ge-0/0/1.51

Henry Frankenstein: Look! It’s moving. It’s alive. It’s alive… It’s alive, it’s moving, it’s alive, it’s alive, it’s alive, it’s alive, IT’S ALIVE!

Route Types 6 and 7

Of course, multicast data shall be sent only in response to explicit requests, and that part is handled by separate MVPN NLRI types – 6 and 7.

types 6 and 7 in the MVPN table

A Route Type 6 is originated by a receiver site as a request to get the multicast stream via the shared tree (*,G), i.e. through the RP. A BGP NLRI with Type 6 will not be sent towards the sender site until a Type 5 for that source is present. The route prefix “200.200.200.200:32:231.0.0.1” should be read from right to left as “requesting group 231.0.0.1 from 200.200.200.200”. The Type 6 route exists in the receiver site's routing table only.
As mentioned before, once the shared tree is working, the receiver PE will try to get the multicast stream directly from the source via the source tree (S,G), because the IP of the source is now known. At that point the receiver site sends a Route Type 7 – “10.0.1.11:32:231.0.0.1”.

If the MVPN infrastructure works in RPT-SPT mode, then a Type 6 must be announced before a Type 7. If SPT-only mode is in use, then a Type 7 can be announced on its own.
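The mode itself is a single knob under the mvpn protocol (a sketch, instance name assumed; rpt-spt is shown, spt-only being the alternative):

set routing-instances MVPN-NG protocols mvpn mvpn-mode rpt-spt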

Double group at PE2

As a dessert, let's walk through how the delivery of two multicast streams looks from PE2's point of view.

  1. The source streams two groups: 231.0.0.1 and 231.0.0.3;
  2. P2 and PE2 exchanged Type 1 routes;
  3. P2 established a provider tunnel with PE2 as one of the endpoints;
  4. P2 originated both groups as Type 5 routes;
  5. PE2 has a customer requesting some groups;
  6. PE2 examined the IGMP group report and saw a request to join groups 231.0.0.1 and 231.0.0.3;
  7. PE2 examined the service instance's MVPN table and saw a Type 5 route for both groups;
  8. PE2 originated Type 6 routes as a request to build a shared tree via the RP, which knows where the source of the requested groups is;
  9. P2 started to forward the multicast streams towards the PE2 router;
  10. PE2 performs several actions on the data:
    1. receives the labelled data;
    2. passes it to the VT interface;
    3. pops all labels;
    4. gets the packet back from the VT interface inside the VRF;
    5. examines the IP headers;
    6. sends the packet to the receiver;
  11. plus the corresponding steps for the source tree.

What will the show commands tell us?

SHOWs
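For completeness, the checks that back up the list above would be along these lines (a sketch of standard commands, with the instance name assumed as before):

show route table MVPN-NG.mvpn.0
show mpls lsp p2mp
show igmp group
show multicast route instance MVPN-NG extensive
show pim join instance MVPN-NG extensive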

Seems legit, doesn't it?

The end

That is actually all I have to show.
I really hope these two articles brought you some interesting information and that you now feel more familiar with the best-known way of delivering multicast today.


Configuration

PE2 full configuration