NG-MVPN Extranet: Part 1

We are continuing our fancy Juniper service provider journey, and today we have an exciting subject – Next-Generation Multicast VPN (NG-MVPN for short).

In a nutshell, this type of service lets a service provider deliver multicast data to a set of customer sites almost as simply as IP/MPLS BGP L3VPN delivers unicast.
The originally intended place for a multicast source is a customer site (such multicast is called “c-multicast”), and the receivers are supposed to be in the same service instance.
But in this article we are network engineers at the service provider, and we are getting familiar with NG-MVPN from the perspective of distributing our own multicast to a number of separate customers.


As a rule, NG-MVPN focuses on c-multicast, and most implementation and configuration guides represent the situation where a customer has its own multicast infrastructure and all the SP needs to do is deliver that data to the other customer sites. The described scheme is depicted below:

ng-mvpn_classic-scheme

For a change, we’ll try to create something more complicated, but not by much. This article introduces NG-MVPN usage for setups that go beyond the simple PIM relationship between PE and CE found in the classic NG-MVPN scheme.
We know that NG-MVPN is an extension to the well-known “BGP/MPLS IP Virtual Private Networks (VPNs)” (RFC 4364), which is a fairly easy service to understand. So, from the ground up, our exploration path leads to a common service provider aim – an effective way to deliver the provider’s own multicast (p-multicast) to a variety of separate customers. Our target scheme is depicted below:



I would like to provide a comprehensive explanation of why NG-MVPN may count as the best option compared with others, such as “Cisco Systems’ Solution for Multicast in BGP/MPLS IP VPNs”, known as “Draft Rosen”, but it is already perfectly described elsewhere – in the article by Diptanshu Singh at Packet Pushers.
I hope you will see further on how minimalistic and attractive this approach is from a network operations point of view.


The topology

The topology is built on the JNCIP-SP Lab, the latest version of which is available at the permanent link –

The topology contains five SP devices and three devices belonging to a few customers.
The multicast address block we use in our service is 231/8, and the two main multicast groups are and
The p-multicast source can be connected to any SP device, and a PE device would suit best, but let’s keep things cleaner and connect the source to a device not related to any customer site.
Our target scheme, superimposed on the lab topology, is depicted below:

ng-mvpn_target-scheme-on-lab

Control plane


As mentioned above, NG-MVPN is an extension to IP/MPLS BGP L3VPN, and its control plane is presented as a set of NLRI with specific route types.
NG-MVPN has its own SAFI, number 5 (MCAST-VPN), and BGP must be explicitly configured to exchange these specific route types.
The minimal amount of configuration needed to exchange multicast VPN routes looks pretty easy; the configuration block below follows the topology, where P2, our future RP, is a BGP route reflector – all PE routers have a BGP session with P2 and transparently retrieve data from the whole SP AS:

Minimal BGP configuration

The command “family inet-mvpn signaling” allows the router to send and receive inet-mvpn routes:
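A minimal sketch of what that BGP configuration could look like on a PE router (the group name and all addresses here are assumptions, not taken from the lab):

```junos
protocols {
    bgp {
        group INTERNAL {
            type internal;
            local-address 10.0.0.1;            /* PE loopback - assumed */
            family inet-vpn {
                unicast;                       /* ordinary L3VPN routes */
            }
            family inet-mvpn {
                signaling;                     /* SAFI 5 - MCAST-VPN routes */
            }
            neighbor 10.0.0.200;               /* P2, the route reflector - assumed */
        }
    }
}
```

Keep in mind that adding a new address family to an already established BGP session causes the session to flap while capabilities are renegotiated.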

Check inet-mvpn family compatibility

Service instance

We are the service provider, and if we provide a VPN to end customers, we have to honor the letter “P”, which means “Private”.
Every customer is limited to a named VRF, and the initial configuration to separate a customer’s multicast receivers looks exactly like a normal VRF routing instance for L3VPN – type/RD/VT/interface:

PE VRF basic configuration
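A sketch of such a VRF on a PE; the target value matches Customer #1’s community used in this lab, while the instance name, RD, and interface are assumptions:

```junos
routing-instances {
    MVPN-NG1 {
        instance-type vrf;
        interface ge-0/0/1.0;                  /* PE-CE interface - assumed */
        route-distinguisher 10.0.0.1:51;       /* assumed RD */
        vrf-target target:666:51;              /* Customer #1 community */
        vrf-table-label;
    }
}
```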

“vrf-target” defines a BGP extended community that represents the distributed service instance as a unique entity.
If distributing p-multicast to a wide range of different customers is our main goal, we have to create such an entity on the source router (P2) as well, with an interface connected to the source, right?
In classic L3VPN, the common way to interconnect one master site with many others is VRF route leaking.
Route leaking is a procedure whereby routes from the VRF1 RIB also exist in the VRF2 RIB; it is the vital technique behind the “hub and spoke” (RFC 7024) topology, where every spoke may connect to the hub but spokes cannot communicate with each other without a third party. That is exactly what we are looking for – customers should not be able to communicate with each other in any way.

For the sake of brevity, we are going to use a trick that helps us avoid configuring BGP import/export policies on the PE devices. The trick can be described as follows:

  1. The VRF for Customer #1 carries extended community “target:666:51”, named MVPN-NG1;
  2. The VRF for Customer #2 carries extended community “target:666:52”, named MVPN-NG2;
  3. The VRF for the p-multicast source has no extended community of its own, but:
    • All routes from the inet-mvpn family (SAFI 5) carrying extended community MVPN-NG1 or MVPN-NG2 are accepted for import into the VRF;
    • All routes originated within the VRF are marked with both extended communities, MVPN-NG1 and MVPN-NG2, simultaneously.

This trick allows routes originated within the p-multicast source VRF to be accepted on the PE devices connected to receiver sites without any explicit configuration on them – the NLRI will carry the same communities as the local VRF, so import is allowed automatically. Moreover, we don’t need to assign any unique entity to the p-multicast source VRF, because we don’t need to identify this instance on the neighbors.

The described configuration is presented below:

P2 VRF route leaking
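The trick above could be expressed with vrf-import/vrf-export policies along these lines (a sketch; policy and instance names, the RD, and the interface are assumptions):

```junos
policy-options {
    community MVPN-NG1 members target:666:51;
    community MVPN-NG2 members target:666:52;
    policy-statement MVPN-SRC-IMPORT {
        term LEAK {
            from {
                family inet-mvpn;
                community [ MVPN-NG1 MVPN-NG2 ];   /* either community matches */
            }
            then accept;
        }
        then reject;
    }
    policy-statement MVPN-SRC-EXPORT {
        term MARK {
            then {
                community add MVPN-NG1;
                community add MVPN-NG2;
                accept;
            }
        }
    }
}
routing-instances {
    MVPN-SRC {                                 /* instance name - assumed */
        instance-type vrf;
        interface ge-0/0/0.0;                  /* link to the p-multicast source */
        route-distinguisher 10.0.0.200:50;     /* assumed RD */
        vrf-import MVPN-SRC-IMPORT;
        vrf-export MVPN-SRC-EXPORT;
    }
}
```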


With NG-MVPN, seven route types (defined in RFC 6514) are enough to describe the part of a service provider network in charge of multicast distribution:

  1. Intra-AS I-PMSI A-D – auto-discovery of MVPN membership within an AS;
  2. Inter-AS I-PMSI A-D – auto-discovery of MVPN membership between ASes;
  3. S-PMSI A-D – announces a selective provider tunnel for particular (C-S, C-G) flows;
  4. Leaf A-D – lets the sender PE discover the leaves of a provider tunnel;
  5. Source Active A-D – announces an active multicast source to other PEs;
  6. Shared Tree Join – carries customer (C-*, C-G) joins toward the RP;
  7. Source Tree Join – carries customer (C-S, C-G) joins toward the source.


At this point, before the data plane is discussed, it is hard to fully describe the roles of the route types, and we will frequently come back to this table while explaining the sequence of NG-MVPN network convergence further on.
The best way is to keep the table open in another browser tab.

Data plane


NG-MVPN supports PIM as the protocol that assembles the complete puzzle of multicast-related routers. Both sparse and dense modes are supported.
Technologies that provide RP redundancy, like Auto-RP and BSR, are supported as well.
As the number of multicast customers grows, control over p-multicast flows becomes important, and only PIM sparse mode gives us all the required control levers.
Eventually, PIM sparse mode will be in use, and all PE routers will be configured with a static RP pointing to P2.

Provider tunnel

Besides the information about NG-MVPN-related routers, sources, and receivers, we have to build a transport to deliver the p-multicast data.
Delivery is unidirectional, from source to receivers, and, in contrast to Draft Rosen MVPN, there is no requirement to store any kind of multicast state on transit routers – routers connected to neither sender nor receiver. It means, at the very least, that router P1 will never know whether the traffic it forwards is multicast: in addition to a BGP-free core, we get a PIM-free core.
This situation can only hold if tunneling is in use; unidirectional tunnels from the source (P2) to the PE devices can be set up with the following technologies:

  • LDP point-to-multipoint (p2mp);
  • RSVP point-to-multipoint;
  • GRE signaled by PIM-SM (ASM/SSM).

A provider tunnel may be presented as inclusive or selective:

• If you configure a VPN to use an inclusive provider tunnel, the sender PE router signals one point-to-multipoint LSP for the VPN.
• If you configure a VPN to use selective provider tunnels, the sender PE router signals a point-to-multipoint LSP for each selective tunnel configured.

There is no doubt that selective mode gives more flexibility, because different p2mp LSPs may use different templates with a variety of specific characteristics.
As an example, traffic to multicast groups in 229/8 may have the highest priority for bandwidth reservation and follow the shortest path.

The lab is fully configured for MPLS and ready to establish a few RSVP LSPs from P2 to the PE routers.
The RSVP LSPs will be established dynamically, will be point-to-multipoint in nature, and will be created by an inclusive provider tunnel.
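An inclusive RSVP-TE provider tunnel on the sender VRF could be sketched like this (the instance name is an assumption; default-template makes the p2mp LSP fully dynamic):

```junos
routing-instances {
    MVPN-SRC {                                 /* sender VRF on P2 - name assumed */
        provider-tunnel {
            rsvp-te {
                label-switched-path-template {
                    default-template;          /* dynamic point-to-multipoint LSP */
                }
            }
        }
    }
}
```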



The initial configuration contains no commands related to the NG-MVPN service; it provides basic IP connectivity and a framework for services:

Initial configuration


First of all, the configuration of the PIM and MVPN protocols is most important.
The PIM configuration is the same for all PE routers and uses the IP of P2’s lo0.200 interface as a statically defined RP.
The PIM configuration on P2 defines P2 itself as the RP and also enables PIM on interface ge-0/0/0.0, directly connected to the p-multicast source, while preventing any neighborship on it.

Main configuration
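A sketch of the PIM part, assuming 10.0.0.200 as the lo0.200 address and hypothetical interface and instance names:

```junos
/* On every PE, inside the customer VRF: */
routing-instances {
    MVPN-NG1 {
        protocols {
            pim {
                rp {
                    static {
                        address 10.0.0.200;    /* P2 lo0.200 - assumed address */
                    }
                }
                interface ge-0/0/1.0 {
                    mode sparse;               /* PE-CE interface - assumed */
                }
            }
        }
    }
}

/* On P2, inside the source VRF: */
routing-instances {
    MVPN-SRC {
        protocols {
            pim {
                rp {
                    local {
                        address 10.0.0.200;    /* P2 is the RP itself */
                    }
                }
                interface ge-0/0/0.0 {
                    mode sparse;               /* toward the source; neighborship
                                                  can be blocked with a PIM
                                                  neighbor policy (not shown) */
                }
            }
        }
    }
}
```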

As you may have noticed, the configuration is extremely simple, and the mvpn stanza contains only one line.
Under the “protocols mvpn” stanza, it is possible to strictly define whether a local site may contain only receivers or only senders (sources); leaving this undeclared allows the site to play both roles.
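A sketch of that one-line stanza; the role-restriction statements shown in the comment are the `receiver-site`/`sender-site` knobs of the [protocols mvpn] hierarchy:

```junos
routing-instances {
    MVPN-NG1 {
        protocols {
            mvpn;                              /* the single line - both roles allowed */
        }
    }
}

/* To restrict the role explicitly:
 *   set routing-instances MVPN-NG1 protocols mvpn receiver-site   (receivers only)
 *   set routing-instances MVPN-NG1 protocols mvpn sender-site     (senders only)
 */
```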

Is the configuration shown above enough to populate our tables with some valuable data?

MVPN-related routes are kept in tables with an “mvpn” suffix (e.g. instance-name.mvpn.0). Does PE1 have anything on board?

> show route table MVPN-NG

The very first MVPN routes have arrived!
The format of MVPN routes is as follows: route-type:route-distinguisher:originator-router-id.


So, the route “1:” is a Type 1 route, made unique by the RD “” and advertised by a device with router ID “”.

An MVPN Type 1 route is originated by every provider device participating in the MVPN service. This route type is used by the sender site to obtain the list of egress routers for inclusive point-to-multipoint LSPs.
Apparently, it doesn’t matter which type of provider tunnel will be in use – and we actually haven’t configured any yet – Type 1 routes are originated anyway.

More importantly, the service instances for different customers are divided perfectly:

  • P2 sees everyone;
  • PE1 sees P2 and PE2;
  • PE3 sees P2 only.

At this point, we have the correct control plane via BGP, but we have to take care of PIM as well.
PIM will not work until the data plane comes up, and that is going to be discussed in Part 2.