Multicast is a popular feature used mainly in the IP networks of Enterprise customers. Multicast allows the efficient distribution of information between a single multicast source and multiple receivers. An example of a multicast source in a corporate network would be a financial information server provided by a third-party company such as Bloomberg or Reuters. The receivers would be individual PCs scattered around the network, all receiving the same financial information from the server. The multicast feature allows a single stream of information to be transmitted from a source device, regardless of how many receivers are active for the information from that source device. The routers automatically replicate a single copy of the stream to each interface where multicast receivers can be reached. Therefore, multicast significantly reduces the amount of traffic required to distribute information to many interested parties.
This chapter describes in detail how an MPLS VPN service provider can provide multicast services between multiple sites of a customer VPN, whether the customer already has an existing multicast network or intends to deploy the multicast feature within its network. This feature is known as multicast VPN (mVPN) and is available from Cisco IOS 12.2(13)T onward. This chapter includes an introduction to general IP Multicast concepts, an overall description of the mVPN feature and architecture, a detailed description of each IP Multicast component modified to support the mVPN feature, and a case study that shows how you can implement mVPN in an MPLS VPN backbone.
Introduction to IP Multicast
IP multicast is an efficient mechanism for transmitting data from a single source to many receivers in a network. The destination address of a multicast packet is always a multicast group address. This address comes from the IANA block 224.0.0.0 through 239.255.255.255. (Before the concept of classless interdomain routing, or CIDR, existed, this range was referred to as the Class D range.) A source transmits a multicast packet by using a multicast group address, while many receivers "listen" for traffic from that same group address.
Examples of applications that would use multicast are audio/video services such as IPTV or Windows Media Player, conferencing services such as NetMeeting, and stock tickers or financial information feeds such as those that TIBCO and Reuters provide.
NOTE
If you want to gain a more complete or detailed understanding of IP multicast, then read the Cisco Press book titled Developing IP Multicast Networks (ISBN 1-57870-077-9) or any other book that provides an overview of multicast technologies. You can obtain further information on advanced multicast topics from http://www.cisco.com/go/ipmulticast.
Multicast packets are forwarded through the network by using a multicast distribution tree. The network is responsible for replicating the same packet at each bifurcation point (the point at which the branches fork) in the tree. This means that only one copy of the packet travels over any particular link in the network, making multicast trees extremely efficient for distributing the same information to many receivers.
There are two types of distribution trees: source trees and shared trees.
Source Trees
A source tree is the simplest form of distribution tree. The source host of the multicast traffic is located at the root of the tree, and the receivers are located at the ends of the branches. Multicast traffic travels from the source host down the tree toward the receivers. The forwarding decision on which interface a multicast packet should be transmitted out is based on the multicast forwarding table. This table consists of a series of multicast state entries that are cached in the router. State entries for a source tree use the notation (S, G) pronounced S comma G. The letter S represents the IP address of the source, and G represents the group address.
NOTE
The notion of direction is used for packets that are traveling along a distribution tree. When a packet travels from a source (or root) toward a receiver, it is deemed to be traveling down the tree. If a packet is traveling from the receiver toward the source (such as a control packet), it is deemed to be traveling up the tree.
A source tree is depicted in Figure 7-1. The host 196.7.25.12 at the root of the tree is transmitting multicast packets to the destination group 239.194.0.5, of which there are two interested receivers. The forwarding cache entry for this multicast stream is (196.7.25.12, 239.194.0.5).
A source tree implies that the route between the multicast source and receivers is the shortest available path; therefore, source trees are also referred to as shortest path trees (SPTs). A separate source tree exists for every source that is transmitting multicast packets, even if those sources are transmitting data to the same group. This means that there will be an (S, G) forwarding state entry for every active source in the network. Referring to our earlier example, if another source, such as 196.7.25.18, became active that was also transmitting to group 239.194.0.5, then an additional state entry (and a different SPT) would be created as (196.7.25.18, 239.194.0.5). Therefore, source trees or SPTs provide optimal routing at the cost of additional multicast state information in the network.
Figure 7-1 Source Distribution Tree
The important thing to remember about source trees is that the receiving end can only join the source tree if it has knowledge of the IP address of the source that is transmitting the group in which it is interested. In other words, to join a source tree, an explicit (S, G) join must be issued from the receiving end. (This explicit [S, G] join is issued by the last hop router, not the receiving host. The receiving host makes the last hop router aware that it wants to receive data from a particular group, and the last hop router figures out the rest.)
Shared Trees
Shared trees differ from source trees in that the root of the tree is a common point somewhere in the network. This common point is referred to as the rendezvous point (RP). The RP is the point at which receivers join to learn of active sources. Multicast sources must transmit their traffic to the RP. When receivers join a multicast group on a shared tree, the root of the tree is always the RP, and multicast traffic is transmitted from the RP down toward the receivers. Therefore, the RP acts as a go-between for the sources and receivers. An RP can be the root for all multicast groups in the network, or different ranges of multicast groups can be associated with different RPs.
Multicast forwarding entries for a shared tree use the notation (*, G), which is pronounced star comma G. This is because all sources for a particular group share the same tree. (The multicast groups go to the same RP.) Therefore, the * or wildcard represents all sources. A shared tree is depicted in Figure 7-2. In this example, multicast traffic from the source hosts 196.7.25.18 and 196.7.25.12 travels to the RP and then down the tree toward the two receivers. There are two routing entries, one for each of the multicast groups that share the tree: (*, 239.194.0.5) and (*, 239.194.0.7). In a shared tree, if more sources become active for either of these two groups, there will still be only two routing entries because the wildcard represents all sources for each group.
Figure 7-2 Shared Distribution Tree
Shared trees are not as optimal in their routing as source trees because all traffic from sources must travel to the RP and then follow the same (*, G) path to receivers. However, the amount of multicast routing state information required is less than that of a source tree. Therefore, there is a trade-off between optimal routing and the amount of state information that must be kept.
Shared trees allow the receiving end to obtain data from a multicast group without having to know the IP address of the source. The only IP address that needs to be known is that of the RP. This can be configured statically on each router or learned dynamically by mechanisms such as Auto-RP or Bootstrap Router (BSR).
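As a simple illustration, the RP address can either be defined statically on every multicast router or announced by the RP itself using Auto-RP. The following minimal sketch reuses the RP address 196.7.25.1 that appears later in Example 7-2; the Loopback0 interface and TTL scope are assumptions for illustration only:

! Option 1: static RP definition, configured on every router in the domain
ip pim rp-address 196.7.25.1
!
! Option 2: Auto-RP, configured on the RP (candidate RP and mapping agent)
ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16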
Shared trees can be categorized into two types: unidirectional and bidirectional. Unidirectional trees are essentially what has already been discussed; sources transmit to the RP, which then forwards the multicast traffic down the tree toward the receivers.
In a bidirectional shared tree, multicast traffic can travel up and down the tree to reach receivers. Bidirectional shared trees are useful in an any-to-any environment, where many sources and receivers are evenly distributed throughout the network. Figure 7-3 shows a bidirectional tree. Source 196.7.25.18 is transmitting to two receivers A and B for group 239.194.0.7. The multicast traffic from the source host is forwarded in both directions as follows:
Up the tree toward the root (RP). When the traffic arrives at the RP, it is then transmitted down the tree toward receiver A.
Down the tree toward receiver B. (It does not need to pass the RP.)
Bidirectional trees offer improved routing optimality over unidirectional shared trees by being able to forward data in both directions while retaining a minimum amount of state information. (Remember, state information refers to the number of (S, G) or (*, G) entries that a router must hold.)
Figure 7-3 Bidirectional Shared Tree
Multicast Forwarding
Packet forwarding in a router can be divided into two types: unicast forwarding and multicast forwarding. The difference between unicast forwarding and multicast forwarding can be summarized as follows:
Unicast forwarding is concerned with where the packet is going.
Multicast forwarding is concerned with where the packet came from.
In unicast routing, the forwarding decision is based on the destination address of the packet. At each router along the path, you can derive the next-hop for the destination by finding the longest match entry for that destination in the unicast routing table. The unicast packet is then forwarded out the interface that is associated with the next-hop.
Forwarding of multicast packets cannot be done in the same manner because the destination is a multicast group address that you will most likely need to forward out multiple interfaces. Multicast group addresses do not appear in the unicast routing table; therefore, forwarding of multicast packets requires a different process. This process is called Reverse Path Forwarding (RPF), and it is the basis for forwarding multicast packets in most multicast routing protocols. In particular, RPF is used with Protocol Independent Multicast (PIM), which is the protocol used and described throughout this chapter.
RPF
Every multicast packet received on an interface at a router is subject to an RPF check. The RPF check determines whether the packet is forwarded or dropped and prevents looping of packets in the network. RPF operates like this:
When a multicast packet arrives at the router, the source address of that packet is checked to make sure that the incoming interface indeed leads back to the source. (In other words, it is on the reverse path.)
If the check passes, then the multicast packet is forwarded out the relevant interfaces (but not the RPF interface).
If the RPF check fails, the packet is discarded.
The interface used for the RPF check is referred to as the RPF interface. The way that this interface is determined depends on the multicast routing protocol in use. This chapter is concerned only with PIM, the most widely used protocol in Enterprise networks, which is discussed in the next section. PIM uses the information in the unicast routing table to determine the RPF interface. Figure 7-4 shows the process of an RPF check for a packet that arrives on the wrong interface. A multicast packet from the source 196.7.25.18 arrives on interface S0. A check of the unicast routing table shows that network 196.7.25.0 is reachable on interface S1, not S0; therefore, the RPF check fails and the packet is dropped.
Figure 7-4 RPF Check Fails
Figure 7-5 shows the RPF check for a multicast packet that arrives on the correct interface. The multicast packet from the source arrives on interface S1, which matches the interface listed for that source network in the unicast routing table. Therefore, the RPF check passes, and the multicast packet is replicated out the interfaces in the outgoing interface list (called the olist) for the multicast group.
Figure 7-5 RPF Check Succeeds
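One way to verify the interface and neighbor that a router will use for the RPF check toward a given source is the show ip rpf command. The following output is only a sketch, reusing the source address from Figure 7-4 with otherwise hypothetical values; the exact fields vary by IOS release:

Router#show ip rpf 196.7.25.18
RPF information for ? (196.7.25.18)
  RPF interface: Serial1
  RPF neighbor: ? (196.7.25.17)
  RPF route/mask: 196.7.25.0/24
  RPF type: unicast (ospf 1)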
If the RPF check had to refer to the unicast routing table for each arriving multicast packet, it would have a detrimental effect on router performance. Instead, the RPF interface is cached as part of the (S, G) or (*, G) multicast forwarding entry. When the multicast forwarding entry is created, the RPF interface is set to the interface that leads to the source network in the unicast routing table. If the unicast routing table changes, then the RPF interface is updated automatically to reflect the change.
Example 7-1 shows a multicast forwarding entry for (194.22.15.2, 239.192.20.16). You can also refer to this entry as a multicast routing table entry. The presence of the source in the (S, G) notation indicates that this entry is associated with a source tree or shortest path tree. The incoming interface is the RPF interface, which has been set to POS3/0. This setting matches the next-hop interface shown in the OSPF routing entry for the source 194.22.15.2. There are two interfaces in the olist: Serial4/0 and Serial4/2. The outgoing interface list provides the interfaces that the multicast packet should be replicated out. Therefore, packets from source 194.22.15.2 that pass the RPF check (they must come in on POS3/0) and that are destined to group 239.192.20.16 are replicated out interfaces Serial4/0 and Serial4/2.
Example 7-1 Source Tree Multicast Forwarding Entry
(194.22.15.2, 239.192.20.16), 00:03:30/00:03:27, flags: sT
  Incoming interface: POS3/0, RPF nbr 194.22.15.17
  Outgoing interface list:
    Serial4/0, Forward/Sparse-Dense, 00:03:30/00:02:55
    Serial4/2, Forward/Sparse-Dense, 00:02:45/00:02:05

Routing entry for 194.22.15.2/32
  Known via "ospf 1", distance 110, metric 2, type intra area
  Last update from 194.22.15.17 on POS3/0, 1w5d ago
  Routing Descriptor Blocks:
  * 194.22.15.17, from 194.22.15.2, 1w5d ago, via POS3/0
      Route metric is 2, traffic share count is 1
For completeness, a shared tree routing entry is shown in Example 7-2. This entry represents all sources transmitting to group 239.255.0.20. The RPF interface is shown to be FastEthernet0/1, which is the next-hop interface to the RP 196.7.25.1. Remember that the root of a shared tree is always the RP; therefore, the RPF interface for a shared tree is the reverse path back to the RP.
Example 7-2 Shared Tree Multicast Forwarding Entry
(*, 239.255.0.20), 2w5d/00:00:00, RP 196.7.25.1, flags: SJCL
  Incoming interface: FastEthernet0/1, RPF nbr 192.168.2.34
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:03:29/00:02:54
The outgoing interface lists in the preceding examples are determined by the particular multicast protocol in use.
PIM
Over the years, various multicast protocols have been developed, such as Distance Vector Multicast Routing Protocol (DVMRP), Multicast Open Shortest Path First (MOSPF), and Core Based Trees (CBT). The characteristic that these protocols have in common is that they build a multicast routing table based on their own discovery mechanisms; their RPF checks do not use the information already available in the unicast routing table.
The protocol that is the most widely deployed and relevant to this chapter is PIM. As discussed previously, PIM uses the unicast routing table to discover whether the multicast packet has arrived on the correct interface. The RPF check is protocol independent: it does not rely on any particular unicast routing protocol but simply bases its decision on the contents of the unicast routing table.
Several PIM modes are available: dense mode (PIM DM), sparse mode (PIM SM), Bidirectional PIM (PIM Bi-Dir), and a recent addition known as Source Specific Multicast (SSM).
PIM DM
The deployment of PIM DM is diminishing because it has been proven to be inefficient in comparison to PIM SM. PIM DM is based on the assumption that for every subnet in the network, at least one receiver exists for every (S, G) multicast stream. Therefore, all multicast packets are pushed or flooded to every part of the network. Routers that do not want to receive the multicast traffic because they do not have a receiver for that (S, G) send a prune message back up the tree. Branches that do not have receivers are pruned off, the result being a source distribution tree with branches that have receivers. Periodically, the prune message times out, and multicast traffic begins to flood through the network again until another prune is received.
PIM SM
PIM SM is more efficient than PIM DM in that it does not use flooding to distribute traffic. PIM SM employs the pull model, in which traffic is distributed only where it is requested. Multicast traffic is distributed to a branch only if an explicit join message has been received for that multicast group. Initially, receivers in a PIM SM network join the shared tree (rooted at the RP). If the traffic on the shared tree reaches a certain bandwidth threshold, the last hop router (that is, the one to which the receiver is connected) can choose to join a shortest-path tree to the source. This puts the receiver on a more optimal path to the source.
PIM Bi-Dir
PIM Bi-Dir creates a two-way forwarding tree, as shown in Figure 7-3. All multicast routing entries for bidirectional groups are on a (*, G) shared tree. Because traffic can travel in both directions, the amount of state information is kept to a minimum. Routing optimality is improved because traffic does not have to travel unnecessarily toward the RP. Source trees are never built for bidirectional multicast groups. Bidirectional trees in the service provider network are covered in the section "Case Study of mVPN Operation in SuperCom" later in this chapter.
SSM
SSM implies that the IP address of the source for a particular group is known before a join is issued. SSM in Cisco IOS is implemented in addition to PIM SM and co-exists with IP Multicast networks based on PIM SM. SSM always builds a source tree between the receivers and the source. The source is learned through an out-of-band mechanism. Because the source is known, an explicit (S, G) join can be issued for the source tree that obviates the need for shared trees and RPs. Because no RPs are required, optimal routing is assured; traffic travels the most direct path between source and receiver. SSM is a recent innovation in multicast networks and is recommended for new deployments, particularly in the service provider core for an mVPN environment. A practical deployment of SSM is discussed in the section, "Case Study of mVPN Operation in SuperCom" later in this chapter.
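A minimal SSM configuration sketch for a last hop router, assuming the default SSM range 232.0.0.0/8 and IGMPv3-capable receivers (the interface name is hypothetical), would look like this:

ip multicast-routing
! Restrict the default SSM range 232.0.0.0/8 to source-tree operation only
ip pim ssm default
!
interface FastEthernet0/0
 ip pim sparse-mode
 ! IGMPv3 lets receivers signal the exact (S, G) pairs they want to join
 ip igmp version 3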
Multicast is a powerful feature that allows the efficient one-to-many distribution of information. Multicast uses the concept of distribution trees, where the source is the root of the tree and the receivers are at the leaves of the tree. The routers replicate packets at each branch of the tree, known as the bifurcation point. The tree is represented as a series of multicast state entries in each router, and packets are forwarded down this tree (toward the leaves) by using RPF. There are various modes of multicast operation in networks, with PIM SM being the most popular.
Enterprise Multicast in a Service Provider Environment
The fundamental problem that service providers face today when offering native multicast services to end customers is the amount of multicast distribution information (that is [S, G] or [*, G] states) that needs to be maintained to provide the most optimal multicast traffic distribution. When a multicast source becomes active within a particular customer site, the multicast traffic must travel through the service provider network to reach all PE routers that have receivers connected to CE routers for that multicast group. To prevent unnecessary traffic delivery, the service provider must avoid sending traffic to PE routers that have no interested receivers. To accomplish this goal and achieve optimal routing, each P router in the network must maintain state information for all active customer distribution trees.
However, a problem arises in that the service provider has no visibility into how its end customers manage multicast within their enterprise. In addition, the service provider does not have control over the distribution of sources and receivers or the number of groups that the end customer chooses to use. In this situation, the P routers are required to support an unbounded amount of state information based on the enterprise customer's application of multicast.
Figure 7-6 illustrates this scenario in the SuperCom network. (This chapter uses SuperCom as the example network.) As shown in the figure, SuperCom provides native multicast services to VPN customers FastFoods and EuroBank. In this example, native multicast means that the SuperCom network provides both customers with multicast services via the global multicast routing table by using standard multicast procedures. To obtain multicast services, each EuroBank or FastFoods site must ultimately connect to a SuperCom global interface (that is, one with no VRF defined). Multicast traffic travels across the SuperCom network using standard IP multicast; no tunnels or encapsulations are used. The FastFoods organization has three active distribution trees rooted at two sources (A and B). Similarly, EuroBank has three active distribution trees rooted at three sources (C, D, and E). Each of these trees has at least one receiver that is connected to a CE somewhere in the global multicast network.
To provide optimal multicast traffic distribution, the Washington P router must hold the state information for all six trees. This applies equally to any other P and PE routers that are in the path of the distribution trees. Because all multicast routing operates in the global SuperCom table, it is possible that multicast groups that different customers use will conflict (as would be the case with multiple customers using the same RFC 1918 addressing in a unicast network). To avoid this situation, SuperCom must allocate each VPN a unique range of multicast groups.
Figure 7-6 Supporting Native Enterprise Multicast
The total amount of state information that the SuperCom network must hold is determined by the way the customer deploys multicast in his network. For each unique customer source, a separate state entry exists in the global table for each multicast group that source is serving. Deploying features such as bidirectional trees reduces the amount of multicast state information required, although traffic distribution is not as optimal. Given that the amount of state information is unbounded (cannot be limited) and the service provider must allocate and manage multicast groups, the deployment of native multicast services in this manner is not recommended from a scaling and provisioning standpoint.
A common way to provide multicast over a service provider IP or MPLS VPN network is to overlay generic routing encapsulation (GRE) tunnels between CE routers. This eliminates the need for any state information to be kept in the P routers because all multicast packets between VPN sites are encapsulated by using GRE within the service provider network. This solution also allows different enterprise customers to use overlapping multicast groups. However, the disadvantage of this solution is that unless the customer implements a full mesh of GRE tunnels between CE routers, optimal multicast routing is not achieved. In fact, bandwidth can be wasted by multicast traffic backtracking over different GRE tunnels across the P-network. Furthermore, multicast over GRE is inherently unscalable because of the potential number of tunnels required and the amount of operational and management overhead.
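For comparison, such an overlay is typically built from point-to-point GRE tunnels between CE routers with PIM enabled on the tunnel interfaces. The following sketch uses hypothetical addresses and interface names:

interface Tunnel0
 ip address 10.1.1.1 255.255.255.252
 ! PIM runs over the tunnel, so customer multicast is hidden from the P-network
 ip pim sparse-mode
 tunnel source Serial0/0
 tunnel destination 192.0.2.2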
A more scalable model for providing multicast within a VPN can be derived from the way optimal unicast routing is achieved in an MPLS VPN.
In an MPLS VPN:
A P router maintains routing information and labels for the global routing table only. It does not hold routing or state information for customer VPNs.
A CE router maintains a routing adjacency with its PE router neighbor only. CE routers do not peer with other CE routers but still have the ability to access other CE routers in their VPN through the most optimal route that the P network provides.
As you will see, the mVPN solution implemented in Cisco IOS provides a scalable and efficient method of transporting multicast traffic between sites of a VPN customer and has characteristics similar to those described in the previous bullet points.
In a service provider network that is enabled with mVPN:
A P router maintains multicast state entries for the global routing table only. It does not hold multicast state entries for customer VPNs.
A CE router maintains a multicast PIM adjacency with its PE router neighbor only. CE routers do not have multicast peerings with other CE routers, but they can exchange multicast information with other CE routers in the same VPN.
The following sections describe the mVPN architecture as implemented by Cisco IOS.
mVPN Architecture
The mVPN solution discussed in this chapter is based on Section 2 of the Internet draft Multicast in MPLS/BGP VPNs (draft-rosen-vpn-mcast).
Section 2 of this Internet draft describes the concept of multicast domains in which CE routers maintain a PIM adjacency with their local PE router only, and not with other CE routers. As mentioned previously, this adjacency characteristic is identical to that used in MPLS VPNs. Enterprise customers can maintain their existing multicast configurations, such as PIM SM/PIM DM and any RP discovery mechanisms, and they can transition to an mVPN service by using multicast domains without configuration changes. P routers do not hold state information for individual customer source trees; instead, they can hold as little as a single state entry for each VPN (assuming that PIM Bi-Dir is deployed) regardless of the number of multicast groups within that VPN.
If a service provider is using PIM SM in the core (instead of PIM Bi-Dir), then the greatest amount of state information that would be required in a P router would be roughly equivalent to the number of PE routers in the backbone multiplied by the number of VPNs defined on those PE routers. This should be significantly less than the number of potential customer multicast groups. Although you can reduce the amount of P-network state information, the real point to note here is that with multicast domains, regardless of which multicast mode the service provider is using (PIM SM, Bi-Dir, SSM), the amount of state information in the core is deterministic. The core information does not depend on the customer's multicast deployment.
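For example, a backbone of 100 PE routers that each carry 50 mVPNs would hold at most on the order of 100 x 50 = 5,000 (S, G) MDT entries in the core with PIM SM, or as few as 50 (*, G) entries with PIM Bi-Dir, and these figures do not change regardless of how many thousands of groups the customers themselves run.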
Customer networks are also free to use whatever multicast groups they need without the possibility of overlap with other VPNs. These groups are invisible to the P router network, in the same manner that VPN unicast routes are invisible to P routers in an MPLS VPN network.
Multicast Domain Overview
A multicast domain is a set of multicast-enabled virtual routing and forwarding instances (VRFs) that can send multicast traffic to each other. These multicast VRFs are referred to as mVRFs. Multicast domains map all of a customer's multicast groups that exist in a particular VPN to a single unique global multicast group in the P-network. This is achieved by encapsulating the original customer multicast packets within a provider packet by using GRE. The destination address of the GRE packet is the unique multicast group that the service provider has allocated for that multicast domain. The source address of the GRE packet is the BGP peering address of the originating PE router. A different global multicast group address is required for every multicast domain. Therefore, the set of all customer multicast states (*, G1) through (*, GN) can be mapped to a single (S, G) or (*, G) in the service provider network.
NOTE
The use of GRE in a multicast domain is not the same as the overlay solution in which point-to-point GRE tunnels are used between CE routers. The GRE tunnels used here are between PE routers in a multicast configuration. The tunnels can be considered point-to-multipoint connections if PIM SM is deployed or even multipoint-to-multipoint if using PIM Bi-Dir. Therefore, the use of GRE for multicast domains is inherently more efficient than GRE overlay.
Each PE router that is supporting an mVPN customer is part of the multicast domain for that customer. Multiple end customers can attach to a particular PE router, which means that the PE router can be a member of many multicast domains, one for each mVPN customer who is connected to it.
One of the major attractions of the multicast domain solution is that the P routers do not need a software upgrade to enable new multicast features to support mVPNs. Only native multicast is required in the core network to support multicast domains. The advantage of this is that native multicast is a mature technology in Cisco IOS; therefore, the operational risk is minimized in the service provider network when deploying multicast domains.
The P-network builds a default multicast distribution tree (Default-MDT) between PE routers for each multicast domain by using a unique multicast group address that the service provider allocates. These unique multicast groups are referred to as MDT-Groups. Each mVRF belongs to a Default-MDT. Therefore, the amount of state information that a P router must hold is not a function of the number of customer multicast groups in the network; instead, it is a function of the number of VPNs. This considerably reduces the amount of state information required in a P router. If the MDT is configured as a bidirectional tree, then it is possible to have a single (*, G) multicast state entry for each VPN.
Figure 7-7 shows the concept of multicast domains in the SuperCom network. The FastFoods and EuroBank VPNs belong to separate multicast domains. The SuperCom core creates a Default-MDT for each of these multicast domains by using the MDT-group addresses 239.192.10.1 for FastFoods and 239.192.10.2 for EuroBank. The PE routers at San Jose and Paris join both Default-MDTs as they are connected to the FastFoods and EuroBank sites. The Washington PE router only needs to connect to the Default-MDT for the EuroBank VPN.
Figure 7-7 Multicast Domains
Any EuroBank or FastFoods packets that travel along these Default-MDTs are encapsulated by using GRE. The source of the outer packet is the Multiprotocol BGP peering address of the sending PE router, and the destination is the appropriate MDT-group address. GRE essentially hides the customer multicast packet from the P-network and allows us to map many multicast groups in a VPN to a single provider multicast group. The SuperCom P routers only see the source and destination of the outer IP header that SuperCom allocates. This source and destination appear as an (S, G) state entry in the SuperCom global multicast table.
Assuming that the SuperCom network has been configured with PIM Bi-Dir, only two (*, G) states are required in each P router: (*, 239.192.10.1) and (*, 239.192.10.2). This compares favorably with the six states required in the native multicast network described earlier in Figure 7-6. Also note in our example that the amount of state information in the P-network is always bounded to two entries regardless of how many new sources and groups FastFoods or EuroBank introduce.
NOTE
A P router is only aware of the PE router source addresses and the MDT-Group addresses that form the MDTs. CE router traffic traveling along an MDT is forwarded in a GRE-encapsulated packet (P-packet) using the MDT-group address as the destination (more on this in the later section, "MDTs"). The GRE P-packet uses IP only, and no MPLS label headers are applied to MDT traffic. Only pure IP multicast exists in the core.
mVPN will be supported from IOS versions 12.2(13)T and 12.0(23)S for Cisco 7200 and 7500 series routers. Support for Cisco 10000 series routers will be available from IOS version 12.0(23)SX, and the Cisco 12000 series is supported in 12.0(26)S. The initial release will permit a VPN to participate only in a single multicast domain; access to Internet multicast or other multicast domains will not be permissible. However, it is expected that this limitation will be removed in future versions of IOS.
PIM SM or SSM are the only multicast modes supported in the P-network for mVPN.
To summarize, the goals of the multicast domain solution are as follows:
To deliver Enterprise Multicast to customers who subscribe to an MPLS VPN service
To minimize the amount of state information in the P-network (the service provider core) while providing optimal routing
To allow customers the freedom to choose their own multicast groups, multicast operations mode, RP placement, and so on
To allow multicast in the P-network to be completely separated from the operation of multicast in the customer network.
The various components used to deliver multicast domains are explained in the following sections.
Multicast VRF
On a PE router, each VRF can have an associated multicast routing and forwarding table configured, referred to as a multicast VRF (mVRF). The mVRF is the PE router's view into the enterprise VPN multicast network. The mVRF contains all the multicast routing information for that VPN. This information includes the state entries for distribution trees or RP-to-group mappings (if PIM SM is being used). When a PE router receives multicast data or control packets from a CE router interface in a VRF, multicast routing such as RPF checks and forwarding will be performed on the associated mVRF.
The PE router also can have multicast features or protocols configured in the context of the mVRF. For example, if the customer network were using static RP configurations (that is, it was not using Auto-RP to distribute RP information), then the PE router would need to be configured with the same static RP information that is used in the C-network. The multicast routing protocols in Cisco IOS such as PIM, IGMP, and MSDP have been modified to operate in the context of an mVRF and as such modify only data structures and states within that mVRF.
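For instance, if the EuroBank sites used a static RP, the matching VRF-aware configuration on the PE router could be as simple as the following sketch (the RP address 196.7.25.1 is borrowed from the earlier examples purely for illustration):

! Static RP for the EuroBank mVRF only; the global multicast table is unaffected
ip pim vrf EuroBank rp-address 196.7.25.1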
Example 7-3 shows the PIM and MSDP commands available in the context of a VRF.
Example 7-3 VRF-Aware Multicast Configuration Commands
SuperCom_Paris(config)#ip pim vrf EuroBank ?
  accept-register      Registers accept filter
  accept-rp            RP accept filter
  bsr-candidate        Candidate bootstrap router (candidate BSR)
  register-rate-limit  Rate limit for PIM data registers
  register-source      Source address for PIM Register
  rp-address           PIM RP-address (Rendezvous Point)
  rp-announce-filter   Auto-RP announce message filter
  rp-candidate         To be a PIMv2 RP candidate
  send-rp-announce     Auto-RP send RP announcement
  send-rp-discovery    Auto-RP send RP discovery message (as RP-mapping agent)
  spt-threshold        Source-tree switching threshold
  ssm                  Configure Source Specific Multicast
  state-refresh        PIM DM State-Refresh configuration

SuperCom_Paris(config)#ip msdp vrf EuroBank ?
  default-peer       Default MSDP peer to accept SA messages from
  description        Peer specific description
  filter-sa-request  Filter SA-Requests from peer
  mesh-group         Configure an MSDP mesh-group
  originator-id      Configure MSDP Originator ID
  peer               Configure an MSDP peer
  redistribute       Inject multicast route entries into MSDP
  sa-filter          Filter SA messages from peer
  sa-limit           Configure SA limit for a peer
  sa-request         Configure peer to send SA-Request messages to
  shutdown           Administratively shutdown MSDP peer
  timer              MSDP timer
  ttl-threshold      Configure TTL Threshold for MSDP Peer
In addition to the commands in the previous example, there are several multicast show commands that support VRF contexts. These are shown in Example 7-4.
Example 7-4 VRF-Aware Multicast show Commands
SuperCom_Paris#show ip pim vrf EuroBank ?
  autorp      Global AutoRP information
  bsr-router  Bootstrap router (v2)
  interface   PIM interface information
  mdt         Multicast tunnel information
  neighbor    PIM neighbor information
  rp          PIM Rendezvous Point (RP) information
  rp-hash     RP to be chosen based on group selected

SuperCom_Paris#show ip msdp vrf EuroBank ?
  count     SA count per AS
  peer      MSDP Peer Status
  sa-cache  MSDP Source-Active Cache
  summary   MSDP Peer Summary

SuperCom_Paris#show ip igmp vrf EuroBank ?
  groups      IGMP group membership information
  interface   IGMP interface information
  membership  IGMP membership information for forwarding
  tracking    IGMP Explicit Tracking information
  udlr        IGMP unidirectional link multicast routing information
Example 7-5 shows the commands to enable multicast for the EuroBank VRF. The ip multicast-routing vrf command enables multicast routing on the associated EuroBank VRF. In addition, any multicast interfaces in the EuroBank VRF also require PIM to be enabled, as shown with the ip pim sparse-mode command. The various PIM adjacencies that can exist are discussed in the following section.
Example 7-5 Enabling Multicast in a VRF
ip multicast-routing vrf EuroBank
!
interface Serial0/0
 ip vrf forwarding EuroBank
 ip address 192.168.2.26 255.255.255.252
 ip pim sparse-mode
NOTE
If the ip vrf forwarding command is removed from an interface in the PE router configuration, not only is the ip address command removed from that interface, but the ip pim sparse-mode command is removed as well.
PIM Adjacencies
Each VRF that has multicast routing enabled has a single PIM instance created on the PE router. This VRF-specific PIM instance forms a PIM adjacency with each PIM-enabled CE router in that mVRF. The customer multicast routing entries that each PIM instance creates are specific to the corresponding mVRF.
In addition to the CE router PIM adjacency, the PE router forms two other types of PIM adjacencies. The first is a PIM adjacency with other PE routers that have mVRFs in the same multicast domain. This PE router PIM adjacency is accessible through the multicast tunnel interface (MTI) and is used to transport multicast information between mVRFs (through an MDT) across the backbone. MDTs and MTIs are described later in this chapter. The PE router PIM adjacencies are maintained by using the same PIM instance that is used between the PE router and CE router for the associated mVRF.
The second type of PIM adjacency is created by the global PIM instance. The PE router maintains global PIM adjacencies with each of its IGP neighbors, which will be P routers, or directly connected PE routers (that are also providing a P router function). The global PIM instance is used to create the multicast distribution trees (MDTs) that connect the mVRFs.
NOTE
CE routers do not form PIM adjacencies with each other, nor does a CE router form an adjacency with a PE router by using the global PIM instance.
Figure 7-8 shows the different types of PIM adjacencies in the SuperCom network for the FastFoods VPN. A PIM adjacency exists between the San Francisco FastFoods CE router and the San Jose PE router, as well as between the Lyon FastFoods CE router and the Paris PE router. Because the FastFoods mVRFs are part of the same multicast domain, a PIM adjacency is active between the San Jose and Paris PE routers. Both the San Jose and Paris PE routers have a separate PIM adjacency in the global table to the Washington P router.
Figure 7-8 PIM Adjacencies
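These adjacencies can be checked per mVRF; for example, show ip pim vrf FastFoods neighbor on the San Jose PE router should list both the FastFoods CE router and the Paris PE router, the latter reachable through the multicast tunnel interface. The output below is purely illustrative, with hypothetical neighbor addresses and timers:

SuperCom_SanJose#show ip pim vrf FastFoods neighbor
PIM Neighbor Table
Neighbor Address  Interface      Uptime/Expires    Ver  DR Prio/Mode
195.12.2.1        Serial0/1      1d02h/00:01:29    v2   1 / DR
194.22.15.3       Tunnel0        1d02h/00:01:41    v2   1 /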
MDTs
MDTs are multicast tunnels through the P-network. MDTs transport customer multicast traffic, encapsulated in GRE, between mVRFs that are part of the same multicast domain. The two types of MDTs are as follows:
The Default-MDT: An mVRF uses this MDT to send low-bandwidth multicast traffic or traffic that is destined to a widely distributed set of receivers. The Default-MDT is always used to send multicast control traffic between PE routers in a multicast domain.
The Data-MDT: This MDT type is used to tunnel high-bandwidth source traffic through the P-network to interested PE routers. Data-MDTs avoid unnecessary flooding of customer multicast traffic to all PE routers in a multicast domain.
Default-MDT
When a VRF is multicast enabled (as described in Example 7-5), it must also be associated with a Default-MDT. The PE router always builds a Default-MDT to peer PE routers that have mVRFs with the same configured MDT-group address. Every mVRF is connected to a Default-MDT. An MDT is created and maintained in the P-network by using standard PIM mechanisms. For example, if PIM SM were being used in the P-network, PE routers in a particular multicast domain would discover each other by joining the shared tree for the MDT-group that is rooted at the service provider's RP.
The configuration of the Default-MDT for the FastFoods VRF is shown in Example 7-6.
Example 7-6 Configuration of the Default-MDT
ip vrf FastFoods
 rd 10:26
 route-target export 10:26
 route-target import 10:26
 mdt default 239.192.10.1
The example shows that only a single additional command is required for the existing VRF configuration. Upon application of the mdt default command, a multicast tunnel interface is created within the FastFoods mVRF, which provides access to the MDT-Group 239.192.10.1 within the SuperCom network. If other PE routers in the network are configured with the same group, then a shared or source tree is built between those PE routers.
NOTE
Enabling multicast on a VRF does not guarantee that there is any multicast activity on a CE router interface, only that there is a potential for sources and receivers to exist. After multicast is enabled on a VRF and a Default-MDT is configured, the PE router joins the Default-MDT for that domain regardless of whether sources or receivers are active. This is necessary so that the PE router can build PIM adjacencies to other PE routers in the same domain and so that, at the very least, mVPN control information can be exchanged.
At present, an mVRF can belong only to a single Default-MDT; therefore, extranets cannot be formed between mVPNs.
When a PE router joins an MDT, it becomes the root of that tree, and the remote peer PE routers become leaves of the MDT. Conversely, the local PE router becomes a leaf of the MDT that is rooted at remote PE routers. Being a root and a leaf of the same tree allows the PE router to participate in a multicast domain as both a sender and receiver. Figure 7-9 illustrates the MDT root and leaves in the SuperCom network.
Figure 7-9 MDT Roots and Leaves
NOTE
In our example, there are three (S, G) state entries, one for each PE router root of group 239.192.10.1. You can minimize the amount of state information for the MDT in the P-network to a single (*, 239.192.10.1) entry. This can be done either by setting the PIM spt-threshold to infinity for the MDT-Group or by deploying PIM Bi-Dir. However, doing so would change the MDT from a source tree to a shared tree, which in turn could affect routing optimality.
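A sketch of the first option, assuming a hypothetical standard access list that matches the MDT-Group, would be applied in the global configuration of the PE and P routers along these lines:

! Keep MDT-Group traffic on the shared tree (no switchover to the SPT)
access-list 10 permit 239.192.10.1
ip pim spt-threshold infinity group-list 10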
As mentioned previously, when a PE router forwards a customer multicast packet onto an MDT, it is encapsulated with GRE. This is so that the multicast group of a particular VPN can be mapped to a single MDT-group in the P-network. The source address of the outer IP header is the PE Multiprotocol BGP local peering address, and the destination address is the MDT-Group address assigned to the multicast domain. Therefore, the P-network is only concerned with the IP addresses in the GRE header (allocated by the service provider), not the customer-specific addressing.
The packet is then forwarded in the P-network by using the MDT-Group multicast address just like any other multicast packet with normal RPF checks being done on the source address (which, in this case, is the originating PE). When the packet arrives at a PE router from an MDT, the encapsulation is removed and the original customer multicast packet is forwarded to the corresponding mVRF. The target mVRF is derived from the MDT-Group address in the destination of the encapsulation header. Therefore, using this process, customer multicast packets are tunneled through the P-network to the appropriate MDT leaves. Each MDT is a mesh of multicast tunnels forming the multicast domain.
In Cisco IOS, access to the MDT is represented as the MTI and is discussed in a following section. Cisco IOS creates this tunnel interface automatically upon configuration of the MDT.
NOTE
GRE, as defined in RFC 2784, is the default encapsulation method for the multicast tunnel. A future possibility is to encapsulate the customer packet with MPLS (multicast forwarding using labels). This forwarding method is described in the Internet draft farinacci-mpls-multicast, "Using PIM to Distribute Labels for Multicast Routes," which you can obtain from http://www.ietf.org/. However, at the time of writing this chapter, only pure IP encapsulation and forwarding is supported for multicast domains.
Figure 7-10 shows the process of customer packet encapsulation across an MDT.
Figure 7-10 MDT Packet Encapsulation
For clarity in this and further examples, any information pertaining to the customer network will be preceded by a "C-" and information pertaining to the provider network will be preceded by a "P-". For example, a packet originating from a customer network will be referred to as a C-packet, and a PIM join message in the service provider network will be referred to as a P-join.
In the example, a source at San Francisco is sending traffic to a receiver at FastFoods Lyon by using the group 239.255.0.20. The Default-MDT for the FastFoods multicast domain has been defined to be 239.192.10.1, and this value is configured on each of the FastFoods VRFs. The San Jose PE router encapsulates multicast traffic destined to the group 239.255.0.20 from the source 195.12.2.6 at the FastFoods San Francisco site into a P-packet by using GRE encapsulation. The Type-of-Service byte of the C-packet is also copied to the P-packet. The source address of the P-packet is the BGP peering address of the San Jose PE router (194.22.15.2), and the destination address is the MDT-Group (239.192.10.1). When the P-packet arrives at the Paris PE router, the encapsulation is stripped and the original C-packet is forwarded to the receiver.
NOTE
It is recommended that the MDT-group addresses for the P-network be taken from the range defined in RFC 2365, "Administratively Scoped IP Multicast." This ensures that the provision of multicast domains does not interfere with the simultaneous support of Internet multicast in the P-network.
Data-MDT
Any traffic offered to the Default-MDT (via the multicast tunnel interface) is distributed to all PE routers that are part of that multicast domain, regardless of whether active receivers are in an mVRF at that PE router. For high-bandwidth applications that have sparsely distributed receivers, this might pose the problem of unnecessary flooding to dormant PE routers. To overcome this, a special MDT group called a Data-MDT can be created to minimize the flooding by sending data only to PE routers that have active VPN receivers. The Data-MDT is created dynamically if a particular multicast stream exceeds a bandwidth threshold. Each VRF can have a pool of Data-MDT groups allocated to it.
NOTE
Note that the Data-MDT is only created for data traffic. All multicast control traffic travels on the Default-MDT to ensure that all PE routers receive control information.
When a traffic threshold is exceeded on the Default-MDT, the PE router that is connected to the VPN source of the multicast traffic can switch the (S, G) from the Default-MDT to a group associated with the Data-MDT.
NOTE
The rate at which the threshold is checked is a fixed value, which varies between router platforms. The bandwidth threshold is checked per (S, G) multicast stream rather than an aggregate of all traffic on the Default-MDT.
The group selected for the Data-MDT is taken from a pool that has been configured on the VRF. For each source that exceeds the configured bandwidth threshold, a new Data-MDT is created from the available pool for that VRF. If there are more high-bandwidth sources than there are groups available in the pool, then the group that has been referenced the least is selected and reused. This implies that if the pool contains a small number of groups, then a Data-MDT might have more than one high-bandwidth source using it. A small Data-MDT pool ensures that the amount of state information in the P-network is minimized. A large Data-MDT pool allows more optimal routing (less likely for sources to share the same Data-MDT) at the expense of increased state information in the P-network.
NOTE
The Data-MDT is triggered only by an (S, G) entry in the mVRF, not a (*, G) entry. If a customer VPN is using PIM Bi-Dir or the spt-threshold is set to infinity, then the Default-MDT is used for all traffic regardless of bandwidth.
Example 7-7 shows how to configure a Data-MDT pool for the EuroBank VRF.
Example 7-7 Configuration of the Data-MDT
ip vrf EuroBank
 rd 10:27
 route-target export 10:27
 route-target import 10:27
 mdt default 239.192.10.2
 mdt data 239.192.20.32 0.0.0.15 threshold 1 [list <access-list>]
The mdt data command specifies a range of addresses to be used in the Data-MDT pool. Specifying the mask 0.0.0.15 allows you to use the range 239.192.20.32 through 239.192.20.47 as the address pool.
Because these are multicast group addresses (Class D addresses), there is no concept of a subnet; therefore, you can use all addresses in the mask range. The threshold is specified in kilobits per second. In this example, a threshold of 1 Kbps has been set, which means that if a multicast stream exceeds 1 Kbps, then a Data-MDT is created. The mdt data command can also limit the creation of Data-MDTs to particular (S, G) VPN entries by specifying these addresses in an <access-list>.
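As an illustration of the optional list keyword, the following hypothetical extended access list would restrict Data-MDT creation to the single EuroBank stream used later in this chapter:

! Only this customer (S, G) is eligible to trigger a Data-MDT
access-list 101 permit ip host 196.7.25.12 host 239.255.0.20
!
ip vrf EuroBank
 mdt data 239.192.20.32 0.0.0.15 threshold 1 list 101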
When a PE router creates a Data-MDT, the multicast source traffic is encapsulated in the same manner as on the Default-MDT, but the destination group is taken from the Data-MDT pool. Any PE router that has interested receivers needs to issue a P-join for the Data-MDT; otherwise, its receivers cannot see the C-packets because the stream is no longer active on the Default-MDT. For this to occur, the source PE router must inform all other PE routers in the multicast domain of the existence of the newly created Data-MDT. This is achieved by transmitting a special PIM-like control message on the Default-MDT containing the customer's (S, G) to Data-MDT group mapping. This message is called a Data-MDT join.
The Data-MDT join is an invitation to peer PE routers to join the new Data-MDT if they have interested receivers in the corresponding mVRF. The message is carried in a UDP packet destined to the ALL-PIM-ROUTERS group (224.0.0.13) with UDP port number 3232. The (S, G, Data-MDT) mapping is advertised by using the type, length, value (TLV) format, as shown in Figure 7-11.
Figure 7-11 Data-MDT Join TLV Format
Any PE routers that receive the (S, G, Data-MDT) mapping join the Data-MDT if they have receivers in the mVRF for G. The source PE router that initiated the Data-MDT waits several seconds before sending the multicast stream onto the Data-MDT. The delay is necessary to allow receiving PE routers time to build a path back to the Data-MDT root and avoid packet loss when switching from the Default-MDT.
The Data-MDT is a transient entity that exists as long as the bandwidth threshold is being exceeded. If the traffic bandwidth falls below the threshold, the source is switched back to the Default-MDT. To avoid transitions between the MDTs, traffic only reverts to the Default-MDT if the Data-MDT is at least one minute old.
NOTE
PE routers that do not have mVRF receivers for the Data-MDT will cache the (S, G, Data-MDT) mappings in an internal table so that the join latency can be minimized if a receiver appears. The Data-MDT join message is sent every minute by the source PE-router and any cached (S, G, MDT) mappings are aged out after three minutes if they are not refreshed.
Figure 7-12 shows the operation of a Data-MDT in the SuperCom network.
Figure 7-12 Data-MDT Operation
EuroBank has a high bandwidth source (196.7.25.12) located at its Paris headquarters that is servicing the EuroBank multicast group 239.255.0.20. This group has an interested receiver in EuroBank San Francisco. The following steps describe the operation of the Data-MDT:
Step 1  The source at EuroBank Paris begins to transmit. Shortly thereafter, it exceeds the bandwidth threshold.
Step 2  The Paris PE router notices that the source is exceeding the bandwidth threshold and creates a new Data-MDT from the pool configured for the EuroBank VRF, in this case 239.192.20.32.
Step 3  The Paris PE router advertises the existence of the Data-MDT via a UDP packet that contains the TLV (196.7.25.12, 239.255.0.20, 239.192.20.32). This TLV describes the Data-MDT that the customer's (S, G) is being switched over to.
Step 4  The San Jose PE router receives the (S, G, Data-MDT) mapping on the Default-MDT and issues a P-join for (*, 239.192.20.32) to the SuperCom network. This allows the San Jose PE router to join the tree for the Data-MDT in the SuperCom network.
Step 5  The PE router in Washington also receives the (S, G, Data-MDT) mapping but does not issue a P-join because no interested receivers are connected to it. Instead, the PE router caches the entry for future reference.
Step 6  After waiting for three seconds, the Paris PE router begins to transmit the multicast data for (196.7.25.12, 239.255.0.20) over the Data-MDT 239.192.20.32. The three-second delay is required to ensure that the network has had enough time to create the Data-MDT.
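The Data-MDT mappings can be inspected on the PE routers; for example, show ip pim vrf EuroBank mdt send on the Paris PE router lists the customer (S, G) entries it has switched onto Data-MDT groups. The output shown is only indicative of the format:

SuperCom_Paris#show ip pim vrf EuroBank mdt send
MDT-data send list for VRF: EuroBank
  (source, group)                     MDT-data group
  (196.7.25.12, 239.255.0.20)         239.192.20.32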
MTI
The MTI is the representation of access to the multicast domain in Cisco IOS. MTI appears in the mVRF as an interface called Tunnelx, where x is the tunnel number. For every multicast domain in which an mVRF participates, there is a corresponding MTI. (Note that the current IOS implementation supports only one domain per mVRF.) An MTI is essentially a gateway that connects the customer environment (mVRF) to the service provider's global environment (MDT). Any C-packets sent to the MTI are encapsulated into a P-packet (using GRE) and forwarded along the MDT. When the PE router sends to the MTI, it is the root of that MDT; when the PE router receives traffic from an MTI, it is the leaf of that MDT.
NOTE
Only a single MTI is necessary to access a multicast domain. The same MTI is used to forward traffic regardless of whether it is to the Default-MDT or to multiple Data-MDTs associated with that multicast domain.
PIM adjacencies are formed to all other PE routers in the multicast domain via the MTI. Therefore, for a specific mVRF, PE router PIM neighbors are all seen as reachable via the same MTI. The MTI is treated by an mVRF PIM instance as if it were a LAN interface. All PIM LAN procedures are valid over the MTI.
The PE router sends PIM control messages across the MTI so that multicast forwarding trees can be created between customer sites that are separated by the P-network. The forwarding trees referred to here are visible only in the C-network, not the P-network. To allow multicast forwarding between a customer's sites, the MTI is part of the outgoing interface list (olist) for the (S, G) or (*, G) states that originate from the mVRF.
The MTI is created dynamically upon configuration of the Default-MDT and cannot be explicitly configured. PIM Sparse-Dense (PIM SD) mode is automatically enabled so that various customer group modes can be supported. For example, if the customer were using PIM DM exclusively, then the MTI would be added to the olist in the mVRF with the entry marked Forward/Dense to allow distribution of traffic to other customer sites. If the PE router neighbors all sent a prune message back, and no prune override was received, then the MTI in the olist entry would be set to Prune/Dense exactly as if it were a LAN interface. If the customer network were running PIM SM, then the MTI would be added to the olist only on the reception of an explicit join from a remote PE router in the multicast domain.
NOTE
Although the MTI cannot be configured explicitly, it derives its IP properties from the same interface being used for Multiprotocol BGP peering. This is usually, but not necessarily, the loopback0 interface, and this interface must be multicast enabled.
The MTI is not accessible or visible to the IGP (such as OSPF or ISIS) operating in the customer network. In other words, no unicast routing is forwarded over the MTI because the interface does not appear in the unicast routing table of the associated VRF. Because PIM performs the RPF check against the unicast routing table, traffic received through an MTI has direct implications for the standard RPF procedures.
RPF Check
RPF is a fundamental requirement of multicast routing. The RPF check ensures that multicast traffic has arrived from the correct interface that leads back to the source. If this check passes, the multicast packets can be distributed out the appropriate interfaces away from the source. RPF consists of two pieces of information: the RPF interface and the RPF neighbor. The RPF interface is used to perform the RPF check by making sure that the multicast packet arrives on the interface it is supposed to, as determined by the unicast routing table. The RPF neighbor is the IP address of the PIM adjacency. It is used to forward messages such as PIM joins or prunes for the (*, G) or (S, G) entries (back toward the root of the tree where the source or RP resides). The RPF interface and neighbor are created during control plane setup of a (*, G) or (S, G) entry. During data forwarding, the RPF check is executed using the RPF interface cached in the state entry.
In an mVPN environment, the RPF check falls into three categories, depending on the type of multicast packet received:
C-packets received from a PE router customer interface in the mVRF
P-packets received from a PE router or P router interface in the global routing table
C-packets received from a multicast tunnel interface in the mVRF
The RPF check for the first two categories is performed as per legacy RPF procedures. The interface information is gleaned from the unicast routing table and cached in a state entry. For C-packets, the C-source lookup in the VRF unicast routing table returns a PE router interface associated with that VRF. For P-packets, the P-source lookup in the global routing table returns an interface connected to another P router or PE router. The results of these lookups are used as the RPF interface.
The third category, C-packets that are received from an MTI, is treated a little differently and requires some modification to the way the (S, G) or (*, G) state is created. C-packets in this category originated from remote PE routers in the network and have traveled across the P-network via the MDT. Therefore, from the mVRF's perspective, these C-packets must have been received on the MTI. However, because the MTI does not participate in unicast routing, a lookup of the C-source in the VRF does not return the tunnel interface. Instead, the route to the C-source will have been distributed by Multiprotocol BGP as a VPNv4 prefix from the remote PE router. This implies that the receiving interface is actually in the P-network. In this case, the RPF procedure has been modified so that if Multiprotocol BGP has learned a prefix that contains the C-source address, the RPF interface is set to the MTI that is associated with that mVRF.
NOTE
The modified RPF interface procedure is applicable only to mVRFs that are part of a single multicast domain. Although the multicast domain architecture can support multiple domains in an mVRF, the current Cisco implementation limits an mVRF to one domain.
The procedure for determining the RPF neighbor has also been modified. If the RPF interface is set to the MTI, then the RPF neighbor must be a remote PE router. (Remember that a PE router forms PIM adjacencies to other PE routers via the MTI.) The RPF neighbor is selected according to two criteria. First, the RPF neighbor must be the BGP next hop to the C-source, as it appears in the routing table for that VRF. Second, the same BGP next-hop address must appear as a PIM neighbor in the adjacency table for the mVRF. This is the reason that PIM must use the local BGP peering address when it sends hello packets across the MDT. The BGP table is referenced only once, during setup in the control plane (to create the RPF entries). When multicast data is forwarded, verification takes place only against the cached RPF information.
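As a quick spot check (a sketch only, borrowing the EuroBank VRF and the C-source 196.7.25.12 from the case study later in this chapter), the following commands let you confirm that the RPF interface is the MTI and that the RPF neighbor is both the BGP next hop toward the C-source and a PIM neighbor on that tunnel:

show ip rpf vrf EuroBank 196.7.25.12          ! RPF interface should be the MTI (Tunnelx); RPF neighbor a remote PE router
show ip bgp vpnv4 vrf EuroBank 196.7.25.12    ! displays the BGP next hop toward the C-source
show ip pim vrf EuroBank neighbor             ! the same next-hop address should be listed as a PIM neighbor on the MTI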
Multiprotocol BGP MDT Updates and SSM
When a PE router creates a Default-MDT group, it updates all its peers by using Multiprotocol BGP. The Multiprotocol BGP update provides two pieces of information: the MDT-Group created and the root address of the tree (which is the BGP peering address of the PE router that originated the message). At present, this information is used only to support P-networks that use SSM. If an MDT-Group range is enabled for SSM, then the source tree is joined immediately. This differs from PIM SM, where the shared tree that is rooted at the RP is initially joined.
If an MDT-Group range has been configured to operate in SSM mode on a PE router, then that PE router needs to know the source address of the MDT root to establish an (S, G) state. This is provided in the Multiprotocol BGP update. For PE routers that do not use SSM, the information received is cached in the BGP VPNv4 table.
NOTE
One of the primary advantages of SSM is that it does not depend on RPs, which eliminates the RP as a single point of failure. A practical example of SSM operation with MDTs is discussed later in the chapter.
The MDT-Group is carried in the BGP update message as an extended community attribute by using the type code of 0x0009. The attribute supports the AS format only and is shown in Figure 7-13.
Figure 7-13 MDT Extended Community Attribute
The root address of the MDT is carried in the BGP MP_REACH_NLRI attribute (AFI=1 and SAFI=128) by using the same format as a VPN-IPv4 address. We refer to it as an mVPN-IPv4 address. However, no label information is carried in the NLRI portion of the attribute. The MDT root address is carried in 2B:4B (AS # : Assigned Number) route distinguisher format but with a type code of 0x0002. The route distinguisher for the root address is shown in Figure 7-14.
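As an illustration (using the NLRI value that appears later in Example 7-15 and the field breakdown described above), the mVPN-IPv4 address 2:10:27:194.22.15.1/32 can be read as follows:

2            RD type code 0x0002 used for MDT root addresses
10           Autonomous system number (2 bytes)
27           Assigned number (4 bytes), matching the 10:27 route distinguisher used for EuroBank in the case study
194.22.15.1  MDT root address (the BGP peering address of the originating PE router)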
NOTE
The RD type code 0x0002 conflicts with the official route distinguisher format definition as described in RFC 2547bis "BGP/MPLS VPNs," available from http://www.ietf.org. This value will eventually be changed to avoid conflict with the standard.
Figure 7-14 Route Distinguisher for MDT Root Address
NOTE
Information about Data-MDTs is not carried in Multiprotocol BGP messages. The Data-MDT join message is used for this purpose.
Figure 7-15 shows how the Default-MDT would be created by using Multiprotocol BGP updates if SuperCom were configured to operate in SSM mode only. For the purposes of this example, assume that the SSM range has been defined to be 239.192.10.0/24.
Figure 7-15 Multiprotocol BGP Updates and SSM
Figure 7-15 describes the creation of the Default-MDT as follows:
Step 1. The EuroBank VRF on the Paris PE router is enabled for multicast and is configured with the Default-MDT group of 239.192.10.2.

Step 2. The Paris PE router generates a Multiprotocol BGP update message to both the Washington and San Jose PE router peers. (Note: This update message is generated even if SSM is not used.) The update contains the following information:

The MDT group that was created: 239.192.10.2

The root address of the MDT: 194.22.15.1 (the Multiprotocol BGP peering address of the Paris PE router)

Step 3. When the San Jose PE router receives the BGP update, it immediately issues a P-join to (194.22.15.1, 239.192.10.2) by using SSM procedures. The join is issued because the San Jose PE router previously defined an mVRF in the same multicast domain (same group address of 239.192.10.2).

Step 4. The Washington PE router also receives the BGP update, but because it does not have an mVRF in that domain, it stores the update for future reference.
mVPN State Flags
Several new state flags have been created to identify multicast routing entries associated with multicast domains. These flags are shown in Table 7-1.
Table 7-1 mVPN State Flags
Flag: Z (Multicast Tunnel)
This flag appears in multicast entries in the global multicast routing table. It signifies that the multicast packets are received or transmitted on a multicast tunnel (MDT) entry. This flag appears only if mVRFs are present on the PE router that is associated with this entry. The Z flag directs that the P-packet should be de-encapsulated to reveal the C-packet.

Flag: Y (Joined MDT-Data Group)
This flag appears in multicast entries for the mVRF. It signifies that data for this (*, G) or (S, G) is being received over a Data-MDT group. An entry with the Y flag signifies that this PE router received a Data-MDT join message from a source PE router and has issued a join toward it.

Flag: y (Sending to MDT-Data Group)
This flag appears in multicast entries for the mVRF. It signifies that data for this (*, G) or (S, G) is being transmitted over a Data-MDT group. The y flag signifies that this PE router instigated a new Data-MDT for this customer (S, G).
Because only a single MTI exists in the mVRF per multicast domain, both the Data-MDT and the Default-MDT use the same tunnel interface for customer traffic. The Y/y flags are necessary to distinguish Default-MDT traffic from Data-MDT traffic and ensure that customer multicast routing entries use the correct MDT-Data group by referring to an internal table that holds the (S, G, Data-MDT) mappings.
Example 7-8 shows the value of the state flags from the Paris PE router. Do not be concerned with the context of the output shown here. A full discussion on the operation of mVPN in the SuperCom network is included in a later section.
Example 7-8 mVPN State Flag
SuperCom_Paris#show ip mroute 239.192.20.32
[snip]
(*, 239.192.20.32), 1d18h/00:03:23, RP 194.22.15.3, flags: BCZ
  Bidir-Upstream: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial4/0, Forward/Sparse, 1d18h/00:02:30
    MVRF EuroBank, Forward/Sparse, 1d18h/00:00:00

SuperCom_Paris#show ip mroute vrf EuroBank 239.255.0.20
[snip]
(196.7.25.12, 239.255.0.20), 1d18h/00:03:22, flags: TY
  Incoming interface: Tunnel0, RPF nbr 194.22.15.1
  Outgoing interface list:
    Ethernet5/0, Forward/Sparse-Dense, 1d18h/00:02:50
The example shows output from two commands. The first command shows the entry for a Data-MDT 239.192.20.32 in the global multicast routing table. The Z flag is set to show it is associated with a multicast tunnel. The second command shows an entry in the EuroBank mVRF for the state (196.7.25.12, 239.255.0.20). This entry happens to be receiving traffic from the (239.192.20.32) Data-MDT in the global table as signaled by the Y flag, although the correlation is not shown in the output. Detailed examples on the operation of the Data-MDT are provided in the later section titled "Case Study of mVPN Operation in SuperCom."
mVPN Forwarding
Forwarding can be divided into two categories: C-packets received from a PE router customer interface in the mVRF (excluding the MTI), and P-packets received from a PE router global multicast interface. To simplify things, assume that control checks such as time-to-live (TTL) and RPF are always successful.
C-Packets Received from a PE Router Customer Multicast Interface
The following describes the steps that the router takes when a multicast packet arrives at the PE router from a VRF interface:
Step 1. A C-packet arrives on a VRF-configured PE router interface.

Step 2. The VRF that is configured for that interface implicitly identifies the mVRF.

Step 3. An RPF check is done on the C-packet. If it is successful, the C-packet is replicated based on the contents of the olist for the (S, G) or (*, G) entry in the mVRF. The olist might contain multicast-enabled interfaces in the same mVRF, in which case packet forwarding follows standard multicast procedures. The olist might also contain a tunnel interface that connects the multicast domain.

Step 4. If the olist contains a tunnel interface, then the packet is encapsulated by using GRE, with the source being the BGP peering address of the local PE router and the destination being the MDT group address. Whether the Default-MDT group or a Data-MDT group is selected depends on whether the y flag is set on the (S, G) entry in the mVRF (see the sketch that follows this list). The Type-of-Service byte of the C-packet is copied to the P-packet.

Step 5. The C-packet is now a P-packet in the global multicast routing table.

Step 6. The P-packet is forwarded all the way through the P-network by using standard multicast procedures. P routers are unaware of any mVPN activity and treat the packet as native multicast.
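For example, on the ingress PE router you could verify which MDT a customer (S, G) is currently being encapsulated into. This is a sketch only; the EuroBank VRF and group addresses are borrowed from the case study later in this chapter.

show ip mroute vrf EuroBank 239.255.0.20    ! the y flag on the (S, G) entry indicates encapsulation into a Data-MDT
show ip pim vrf EuroBank mdt send           ! lists the (S, G) to Data-MDT group mappings currently in use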
P-Packets Received from a PE Router Global Multicast Interface
The following describes the steps that the router takes when a multicast packet arrives at a PE router from another P router or PE router in the global routing table:
Step 1. A P-packet arrives on a PE router interface in the global network.

Step 2. The P-packet's corresponding (S, G) or (*, G) entry is looked up in the global mroute table, and a global RPF check is done.

Step 3. If the RPF check is successful, the P-packet is replicated out any P-network interfaces that appear in the olist for its (S, G) or (*, G) entry. At this point, the P-packet is still being treated as native multicast.

Step 4. If the (S, G) or (*, G) entry has the Z flag set, then this is a Default- or Data-MDT with an associated mVRF; therefore, the P-packet must be de-encapsulated to reveal the C-packet (see the sketch that follows this list).

Step 5. The destination mVRF of the C-packet is derived from the MDT group address in the P-packet. The incoming MTI is also resolved from the MDT group address.

Step 6. The C-packet is presented to the target mVRF, with the appropriate MTI set as the incoming interface. The RPF check is performed against this tunnel interface.

Step 7. The C-packet is once again a native multicast packet, but it now resides in the customer's network. The C-packet is replicated out all multicast-enabled interfaces in the mVRF that appear in the olist for the (S, G) or (*, G) entry.
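On the receiving PE router, the corresponding spot checks might look like the following (again a sketch only, borrowing addresses from the case study):

show ip mroute 239.192.10.2                 ! global MDT entry; the Z flag and the MVRF member of the olist confirm Steps 4 and 5
show ip mroute vrf EuroBank 239.255.0.20    ! the C-entry; Tunnel0 as the incoming interface confirms Step 6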
Case Study of mVPN Operation in SuperCom
Now that the various components and procedures of mVPN have been covered, it is useful to consolidate this information into a case study of mVPN operation in the SuperCom network. Figure 7-16 shows the SuperCom network topology to be used for the case study.
Figure 7-16 SuperCom mVPN Network
SuperCom is supporting two mVPN customers: EuroBank and FastFoods. Each of these customers is participating in a separate multicast domain via the PE routers at San Jose, Washington, and Paris.
EuroBank has three sites located at San Francisco, Washington, and Paris. One active source, connected to the Paris CE router, provides a multicast stream to an interested receiver at the Washington CE router. Even though the San Francisco CE router does not have receivers, the mVRF in the San Jose PE router still connects to the EuroBank multicast domain (in the event that a receiver does become active at that CE router). The EuroBank network has been configured for PIM SM mode, and the RP is located at the Paris CE router, denoted in Figure 7-16 by RPE. RP information is statically distributed. The SuperCom network has been configured so that the Default-MDT EuroBank uses between all PE routers is 239.192.10.2 (shown previously in Figure 7-7), and the EuroBank Data-MDTs are created by using addresses from the range 239.192.20.32 through 239.192.20.47.

FastFoods has two sites located at San Francisco and Lyon. One active source is connected to the San Francisco CE router and provides a multicast stream to an interested receiver at the Lyon CE router. The FastFoods network has been configured to operate in SSM mode; therefore, the Lyon CE router has issued a source-specific C-join to the server at FastFoods San Francisco. The SuperCom network has been configured so that the Default-MDT FastFoods uses between all PE routers is 239.192.10.1 (shown previously in Figure 7-7). The FastFoods Data-MDTs are created by using addresses from the range 239.192.20.16 through 239.192.20.31.
NOTE
Both FastFoods and EuroBank are using the multicast range 239.255.0.0/16 for multicast services within their VPNs. This follows the convention laid out in RFC 2365 for the use of site local addressing. Because FastFoods and EuroBank are in different multicast domains, there is no conflict of the 239.255.0.0/16 range.
SuperCom is using AS 10 and has deployed PIM Bi-Dir. This means that although the routing in the core is not the most optimal, the amount of state information is kept to a minimum. The Paris PE router acts as the RP (denoted in the figure by RPS) and serves as the root of all MDT shared trees in the SuperCom global space. The SuperCom RP to group mapping information is distributed via Auto-RP. Later in the chapter, you will learn about the operation of PIM SSM in the SuperCom network as an alternative to PIM SM.
The San Jose, Washington, and Paris PE routers join the EuroBank multicast domain (239.192.10.2) because they have a EuroBank mVRF configured (regardless of whether receivers are active). Only the San Jose and Paris PE routers join the FastFoods multicast domain (239.192.10.1). The Washington PE router does not join the FastFoods domain because it does not have a FastFoods VRF configured. Figure 7-7 shows the logical view of the Default-MDTs in the SuperCom network.
Table 7-2 provides a summary of the topology information in the SuperCom network to assist in understanding the examples in the following sections.
Table 7-2 SuperCom Topology Information
Company             Site/Category              Item                         Value
SuperCom Backbone   Paris (PE Router)          Lo0                          194.22.15.1/32
                    San Jose (PE Router)       Lo0                          194.22.15.2/32
                    Washington (PE Router)     Lo0                          194.22.15.3/32
                    Circuit Addresses          PE<->CE                      192.168.2.0/24
                    PIM                        Mode                         Bidirectional
                    PIM                        Rendezvous Point (Auto-RP)   194.22.15.1 (SuperCom Paris)
FastFoods           San Jose (CE Router)       Subnet                       195.12.2.0/24
                    San Jose (CE Router)       Source Group                 (195.12.2.6, 239.255.0.30)
                    Lyon                       Subnet                       10.2.1.0/24
                    PIM                        Mode                         SSM
                    MDT                        Default                      239.192.10.1
                    MDT                        Data                         239.192.20.16/28
EuroBank            Paris (CE Router)          Subnet                       196.7.25.0/24
                    Paris (CE Router)          Source Group                 (196.7.25.12, 239.255.0.20)
                    Washington (CE Router)     Subnet                       196.7.26.0/24
                    San Jose (CE Router)       Subnet                       10.2.1.0/24
                    PIM                        Mode                         Sparse
                    PIM                        Rendezvous Point (Static)    196.7.25.1 (EuroBank Paris)
                    MDT                        Default                      239.192.10.2
                    MDT                        Data                         239.192.20.32/28
PIM SM in the SuperCom Network
As highlighted throughout this chapter, the only requirement on the core network is that native multicast be enabled. Because we are using Auto-RP, each applicable P-network interface in SuperCom is configured with the command ip pim sparse-dense-mode. To keep the multicast routing state to a minimum, PIM Bi-Dir mode is also enabled on each SuperCom router with the global command ip pim bidir-enable. (In addition, Bi-Dir must be enabled for individual groups.)
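A minimal per-router sketch of that core configuration follows (the interface name is an assumption; repeat the interface command on each applicable core-facing interface). The Auto-RP configuration in Example 7-9 completes the picture on the Paris PE router.

ip multicast-routing                 ! enable native multicast globally on every P and PE router
ip pim bidir-enable                  ! allow bidirectional PIM groups to be processed
!
interface Serial0/2                  ! core-facing interface (name assumed)
 ip pim sparse-dense-mode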
Auto-RP is used to distribute the Default-MDT (239.192.10.0/24) and Data-MDT (239.192.20.0/24) ranges of group addresses to all other P routers and PE routers. This is accomplished by configuring the Paris PE router (the RP for SuperCom), as shown in Example 7-9.
Example 7-9 Auto-RP Configuration for SuperCom
ip access-list standard MDT-Range
 permit 239.192.10.0 0.0.0.255
 permit 239.192.20.0 0.0.0.255
!
ip pim send-rp-announce Loopback0 scope 64 group-list MDT-Range bidir
ip pim send-rp-discovery Loopback0 scope 64
You can verify distribution of RP information by examining the group-to-rendezvous point mapping cache on another PE router, as shown in Example 7-10.
Example 7-10 Confirming Auto-RP Information
SuperCom_Washington#show ip pim rp map
PIM Group-to-RP Mappings

Group(s) 239.192.10.0/24
  RP 194.22.15.1 (SuperCom_Paris), v2v1, bidir
    Info source: 194.22.15.1 (SuperCom_Paris), elected via Auto-RP
         Uptime: 3d15h, expires: 00:02:52
Group(s) 239.192.20.0/24
  RP 194.22.15.1 (SuperCom_Paris), v2v1, bidir
    Info source: 194.22.15.1 (SuperCom_Paris), elected via Auto-RP
         Uptime: 3d15h, expires: 00:02:55
The output confirms that the Default-MDT and the Data-MDT will operate in Bi-Dir mode and that the root of the shared trees created from these groups will be the Paris PE router. This means that all traffic for Default- and Data-MDTs will flow via the Paris PE router. If Bi-Dir mode were not enabled, then a shortest path tree would eventually be created for each Default- or Data-MDT by using an (S, G) pair instead of (*, G). That would create more state information in the network, but it might provide a more optimal route.
NOTE
Bi-Dir mode has only been enabled for the MDT group ranges in the SuperCom network. This does not preclude the use of other available modes such as PIM SM or SSM for other multicast groups.
PIM must also be enabled on the interface that Multiprotocol BGP uses for its peering address. This is important because the address on that interface is used as the root of the MDT and is carried in PIM hello messages via the MTI. All the SuperCom routers use loopback0 as their BGP interface and have multicast enabled, as shown in Example 7-11 for the Paris PE router.
Example 7-11 Enabling Multicast on the BGP Peering Interface
router bgp 10
 no synchronization
 no bgp default ipv4-unicast
 bgp log-neighbor-changes
 neighbor 194.22.15.2 remote-as 10
 neighbor 194.22.15.2 update-source Loopback0
 neighbor 194.22.15.3 remote-as 10
 neighbor 194.22.15.3 update-source Loopback0
 no auto-summary
!
[snip]
interface Loopback0
 ip address 194.22.15.1 255.255.255.255
 ip pim sparse-dense-mode
!
Enabling Multicast in VRFs
After basic multicast has been enabled in the core of the SuperCom network, you can enable multicast on each of the FastFoods and EuroBank VRFs. The configurations vary slightly depending on whether a Data-MDT is required (that is, multicast sources originate from this VRF) and which multicast mode the customer is using.
Example 7-12 shows the configuration for the FastFoods VRF. Every FastFoods VRF uses the same Default-MDT of 239.192.10.1. The Data-MDT range of 239.192.20.16/28 is used for any multicast stream on the Default-MDT that exceeds 1 Kbps. Note that the mdt data command only needs to be applied to the PE router at San Jose because this PE router has a FastFoods source connected. However, if FastFoods VPN sources existed at other PE routers, then the same mdt data commands could be applied. Multicast routing is enabled on the VRF by using the ip multicast-routing vrf command. Each interface that is associated with the FastFoods VRF requires PIM to be enabled. Because FastFoods has chosen to use SSM, you must make the VRF aware of this fact so that it can create the correct multicast routing entries in the FastFoods mVRF. You can accomplish this with the ip pim vrf FastFoods ssm range command.
Example 7-12 FastFoods mVRF Configuration
ip vrf FastFoods
 rd 10:26
 route-target export 10:26
 route-target import 10:26
 mdt default 239.192.10.1
 mdt data 239.192.20.16 0.0.0.15 threshold 1   ! San Jose PE router only
!
ip multicast-routing vrf FastFoods
!
interface Serial4/0
 ip vrf forwarding FastFoods
 ip address 192.168.2.18 255.255.255.252
 ip pim sparse-mode
!
ip pim vrf FastFoods ssm range FastFoods_Site_Local_Scope
!
ip access-list standard FastFoods_Site_Local_Scope
 permit 239.255.0.0 0.0.255.255
The EuroBank configuration shown in Example 7-13 is similar to FastFoods. (The MDT ranges differ, of course!) EuroBank is using a static RP configuration; therefore, you must configure each EuroBank VRF with a static group to RP mapping by using the command ip pim vrf EuroBank rp-address. The Data-MDT configuration is only required at the Paris PE router because the only source EuroBank has is in its Paris site.
Example 7-13 EuroBank mVRF Configuration
ip vrf EuroBank
 rd 10:27
 route-target export 10:27
 route-target import 10:27
 mdt default 239.192.10.2
 mdt data 239.192.20.32 0.0.0.15 threshold 1   ! Paris PE router only
!
ip multicast-routing vrf EuroBank
!
interface Serial0/0
 ip vrf forwarding EuroBank
 ip address 192.168.2.26 255.255.255.252
 ip pim sparse-mode
!
ip pim vrf EuroBank rp-address 196.7.25.1 EuroBank_Site_Local_Scope
!
ip access-list standard EuroBank_Site_Local_Scope
 permit 239.255.0.0 0.0.255.255
Multicast Tunnel Interfaces
When the Default-MDT is configured, the SuperCom PE router immediately creates a tunnel interface by using the IP characteristics from the loopback0 interface. A Multiprotocol BGP update message is then sent to all the other PE routers that are BGP peers to signal the existence of the new Default-MDT. The PE router issues a P-join to the SuperCom RP for the Default-MDT group.
Example 7-14 shows some interesting information when a Default-MDT is configured, in this case on the EuroBank VRF at the SuperCom Paris PE router. Tunnel0 is used as the MTI for the EuroBank mVRF. The interface characteristics show that traffic entering Tunnel0 is encapsulated by using GRE with a destination address of 239.192.10.2 (Default-MDT) and a source address of 194.22.15.1 (learned from loopback0).
Inside the EuroBank VRF, you can see two PIM-enabled interfaces. Serial0/0 is the connection to the EuroBank CE router in Paris, and Tunnel0 is the MTI that provides access to and from the Default-MDT. The MTI is treated as a multiaccess interface; therefore, a designated router (DR) with the IP address 194.22.15.3 has been selected by using normal PIM designated router election rules. The PIM adjacencies that are formed over the MTI are discussed in a following section. Note that the tunnel operates in SD mode and the neighbor count is 2, which corresponds to the other PE routers.
Example 7-14 EuroBank Multicast Tunnel Interface
02:05:15: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel0, changed state to up

SuperCom_Paris#show interface tunnel0
Tunnel0 is up, line protocol is up
  Hardware is Tunnel
  Interface is unnumbered. Using address of Loopback0 (194.22.15.1)
  MTU 1514 bytes, BW 9 Kbit, DLY 500000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation TUNNEL, loopback not set
  Keepalive not set
  Tunnel source 194.22.15.1 (Loopback0), destination 239.192.10.2, fastswitch TTL 255
  Tunnel protocol/transport GRE/IP Multicast, key disabled, sequencing disabled
  Checksumming of packets disabled, fast tunneling enabled
[snip]

SuperCom_Paris#show ip pim vrf EuroBank interface

Address          Interface                Ver/   Nbr    Query  DR     DR
                                          Mode   Count  Intvl  Prior
192.168.2.26     Serial0/0                v2/S   1      30     1      0.0.0.0
194.22.15.1      Tunnel0                  v2/SD  2      30     1      194.22.15.3
Example 7-15 shows debug output from the San Jose PE router of the BGP update message that was received from the Paris PE router when the EuroBank Default-MDT was created. The MDT extended community attribute shows that the MDT group is 239.192.10.2 and that the root of this group is 194.22.15.1, as shown in the mVPN-IPV4 address 2:10:27:194.22.15.1/32.
Example 7-15 EuroBank MDT BGP update
BGP(2): 194.22.15.1 rcvd UPDATE w/ attr: nexthop 194.22.15.1, origin ?, localpref 100, extended community RT:10:27 MDT:10:239.192.10.2
BGP(2): 194.22.15.1 rcvd 2:10:27:194.22.15.1/32
Example 7-16 shows the contents of the BGP VPNv4 table on the San Jose PE router for the MDT updates it has received from its peers. Two route distinguisher entries correspond to the FastFoods (2:10:26) and EuroBank (2:10:27) multicast domains. Each PE router that has advertised a Default-MDT for these domains is listed under the route distinguisher entry. As you can see, if you exclude the local San Jose PE router entry, there is one peer for FastFoods (Paris PE router 194.22.15.1), and there are two peers for EuroBank (Paris PE router and Washington PE router 194.22.15.3), as per the SuperCom topology.
Example 7-16 Default-MDT Summary BGP VPNv4 Table
SuperCom_SanJose#show ip bgp vpnv4 all | begin 2:10:26
Route Distinguisher: 2:10:26
*>i194.22.15.1/32   194.22.15.1             100      0 ?
*> 194.22.15.2/32   0.0.0.0                          0 ?
Route Distinguisher: 2:10:27
*>i194.22.15.1/32   194.22.15.1             100      0 ?
*> 194.22.15.2/32   0.0.0.0                          0 ?
*>i194.22.15.3/32   194.22.15.3             100      0 ?
Example 7-17 shows the details of the EuroBank and FastFoods MDT entries received via Multiprotocol BGP from the Paris PE router.
Example 7-17 Detail MDT BGP Entry
SuperCom_SanJose#show ip bgp vpnv4 all 194.22.15.1
BGP routing table entry for 2:10:26:194.22.15.1/32, version 38
Paths: (1 available, best #1, no table, not advertised to EBGP peer)
  Not advertised to any peer
  Local
    194.22.15.1 (metric 66) from 194.22.15.1 (194.22.15.1)
      Origin incomplete, localpref 100, valid, internal, mdt, no-import, best
      Extended Community: RT:10:26 MDT:10:239.192.10.1
BGP routing table entry for 2:10:27:194.22.15.1/32, version 37
Paths: (1 available, best #1, no table, not advertised to EBGP peer)
  Not advertised to any peer
  Local
    194.22.15.1 (metric 66) from 194.22.15.1 (194.22.15.1)
      Origin incomplete, localpref 100, valid, internal, mdt, no-import, best
      Extended Community: RT:10:27 MDT:10:239.192.10.2
NOTE
As discussed previously, this BGP information is currently accessed only by SSM procedures. All routers cache this information regardless of whether they are configured to use SSM. Other uses for this information are currently being investigated.
Even though the BGP update contains the route target extended community, the route is not imported into any VRF because of the presence of the MDT extended community attribute.
Now that the mVRFs and the MTIs have been created, it is a good time to examine the MDTs that have been created in the core of the SuperCom network.
Multicast Distribution Trees
The SuperCom network creates a separate, bidirectional tree for each of the FastFoods and EuroBank multicast domains by using standard PIM procedures. Example 7-18 shows the global multicast routing table for the Paris PE router. The other PE routers that are within the SuperCom network have similar (*, G) entries.
The multicast domains are represented by a bidirectional entry, denoted with the B flag. The Z flag signifies that this entry is a multicast tunnel and that a mVRF is connected to it, indicated by the C flag. The associated VRF appears in the olist of the entry. The olist entry also shows Serial0/2, which is a global interface that connects to the other SuperCom routers. Because the Paris PE router is the RP, the Bidir-upstream field is null. If this router were not the RP, this field would contain the local interface in the direction of the RP.
Example 7-18 Paris PE Global Multicast Routing Table
SuperCom_Paris#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.192.10.1), 06:00:44/00:03:12, RP 194.22.15.1, flags: BCZ
  Bidir-Upstream: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    MVRF FastFoods, Forward/Sparse-Dense, 06:00:44/00:00:00
    Serial0/2, Forward/Sparse-Dense, 06:00:44/00:02:57

(*, 239.192.10.2), 06:00:44/00:03:22, RP 194.22.15.1, flags: BCZ
  Bidir-Upstream: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    MVRF EuroBank, Forward/Sparse-Dense, 06:00:44/00:00:00
    Serial0/2, Forward/Sparse-Dense, 06:00:45/00:02:31
An MDT multicast entry does not necessarily have the Z flag set on all routers. For example, the Washington PE router has a connection only to the EuroBank CE router; therefore, it has no need to originate (be the root) or terminate (be the leaf) the FastFoods MDT (239.192.10.1). The Washington PE multicast table is shown in Example 7-19. The FastFoods MDT entry has only the B flag set, which means that Washington just passes traffic straight through.
Example 7-19 Washington PE Global Multicast Routing Table
SuperCom_Washington#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.192.10.1), 3d23h/00:03:27, RP 194.22.15.1, flags: B
  Bidir-Upstream: Serial4/0, RPF nbr 194.22.15.22
  Outgoing interface list:
    POS3/0, Forward/Sparse-Dense, 07:54:24/00:03:09
    Serial4/0, Bidir-Upstream/Sparse-Dense, 07:54:24/00:00:00

(*, 239.192.10.2), 3d23h/00:03:30, RP 194.22.15.1, flags: BCZ
  Bidir-Upstream: Serial4/0, RPF nbr 194.22.15.22
  Outgoing interface list:
    POS3/0, Forward/Sparse-Dense, 07:54:24/00:03:30
    Serial4/0, Bidir-Upstream/Sparse-Dense, 07:54:26/00:00:00
    MVRF EuroBank, Forward/Sparse-Dense, 07:54:27/00:00:00
mVRF PIM Adjacencies
In each mVRF, PIM adjacencies are formed with the associated FastFoods or EuroBank CE routers, and also over the MTI to the other PE routers in the multicast domain. Example 7-20 shows the adjacencies that are formed at the Paris PE router for EuroBank and FastFoods. The example shows that the Paris PE router has created two tunnel interfaces: Tunnel0 for EuroBank and Tunnel1 for FastFoods.
For EuroBank, two PIM adjacencies have been formed over Tunnel0: one to the San Jose PE router (194.22.15.2) and the other to the Washington PE router (194.22.15.3). Both of these PE routers have a EuroBank VRF that is multicast enabled. Because the tunnel behaves as a multiaccess medium, the designated router elected is the PE router with the highest IP address or the highest nominated priority. (In our example, all the priorities are set to the default of 1.)
Tunnel1 in the FastFoods VRF has formed a single PIM adjacency to the San Jose PE router, with that PE router also being elected the DR. The PIM adjacencies to CE routers in both VRFs are formed in the normal manner. Note that the neighbor addresses on the tunnels are also those used for BGP peering.
Example 7-20 VRF PIM Adjacencies
SuperCom_Paris#show ip pim vrf EuroBank neighbor
PIM Neighbor Table
Neighbor          Interface                Uptime/Expires      Ver   DR
Address                                                              Prio/Mode
192.168.2.25      Serial0/0                02:47:14/00:01:32   v2    1 / B S
194.22.15.2       Tunnel0                  02:46:38/00:01:37   v2    1 / B S
194.22.15.3       Tunnel0                  02:46:38/00:01:38   v2    1 / DR B S

SuperCom_Paris#show ip pim vrf FastFoods neighbor
PIM Neighbor Table
Neighbor          Interface                Uptime/Expires      Ver   DR
Address                                                              Prio/Mode
192.168.2.21      FastEthernet0/1          08:35:18/00:01:38   v2    1 / B S
194.22.15.2       Tunnel1                  08:34:38/00:01:40   v2    1 / DR B S
mVRF Routing Entries
Now that the MDTs are set up and the PE router PIM adjacencies have been formed, you can look at the multicast routing tables that have been created in each of the mVRFs. Example 7-21 shows the routing tables for the EuroBank VRF at the Paris and Washington PE routers. The San Jose PE router does not have active receivers or sources connected to the EuroBank San Francisco mVRF; therefore, its EuroBank multicast routing table is empty. For purposes of clarity, the output has been clipped to show relevant information only.
Example 7-21 Multicast Routing Table for EuroBank VPN
SuperCom_Paris#show ip mroute vrf EuroBank
[snip]
(*, 239.255.0.20), 09:15:02/00:03:02, RP 196.7.25.1, flags: S
  Incoming interface: Serial0/0, RPF nbr 192.168.2.25
  Outgoing interface list:
    Tunnel0, Forward/Sparse-Dense, 09:15:02/00:03:02

SuperCom_Washington#show ip mroute vrf EuroBank
[snip]
(*, 239.255.0.20), 4d01h/00:03:27, RP 196.7.25.1, flags: S
  Incoming interface: Tunnel0, RPF nbr 194.22.15.1
  Outgoing interface list:
    Ethernet5/0, Forward/Sparse, 4d01h/00:03:27
Looking at the Paris PE router (*, 239.255.0.20) routing entry, you can see that the incoming interface is Serial0/0, which connects to the EuroBank Paris CE router where the source resides. The olist contains Tunnel0, which means that any multicast traffic that is destined to this interface is encapsulated and transmitted via the Default-MDT.
There is a receiver for (*, 239.255.0.20) at the Washington PE router, which you can see in the Washington mVRF routing table. The incoming interface is Tunnel0, and the olist contains Ethernet5/0, which points to the EuroBank Washington CE router. The EuroBank Washington CE router has issued a C-join toward the EuroBank RP (196.7.25.1), which is forwarded over Tunnel0.
The multicast packets received from the EuroBank Paris source are de-encapsulated and forwarded to the EuroBank mVRF, not because the incoming interface is Tunnel0, but because of the global entry for the EuroBank MDT (*, 239.192.10.2) having the Z flag set. This can be confirmed by referring back to Example 7-19, which shows the Washington PE router's global multicast routing table.
The incoming Tunnel0 interface is used to verify the RPF check for the EuroBank source (196.7.25.12), as shown in Example 7-22. Notice that the RPF neighbor is the BGP peer address of the Paris PE router where 196.7.25.12 originated.
Example 7-22 RPF Information for EuroBank Source
SuperCom_Washington#show ip rpf vrf EuroBank 196.7.25.12
RPF information for Eurobank_Paris_Source (196.7.25.12)
  RPF interface: Tunnel0
  RPF neighbor: SuperCom_Paris (194.22.15.1)
  RPF route/mask: 196.7.25.0/24
  RPF type: unicast (bgp 10)
  RPF recursion count: 0
  Doing distance-preferred lookups across tables
The EuroBank routing tables shown here are in the PIM SM steady state; that is, the routing entries are connected to the shared tree. No source data is flowing (or the spt-threshold has not been met) from EuroBank Paris; therefore, no shortest path tree has been built back to the Paris source. If there were, you would see an (S, G) entry instead of just (*, G). If an (S, G) entry existed, then it would switch over to a Data-MDT (assuming the threshold was exceeded). You will learn the operation of the Data-MDT in a further section.
Now is a good time to look at the multicast routing entries for FastFoods, shown in Example 7-23. Once again, unnecessary information has been clipped. FastFoods is operating in SSM mode; therefore, the routing entries are denoted by the s flag, stating that each entry is part of an SSM group. SSM does not use an RP, and it always uses a source tree (S, G) instead of a shared tree (*, G). As you can see, Tunnel1 appears in the olist at the San Jose PE router and is the incoming interface at the Paris PE router, signifying that the source resides behind the San Jose PE router. Initially, the traffic stream flows over the Default-MDT; when the MDT threshold is exceeded, the traffic stream switches over to the Data-MDT.
Example 7-23 Multicast Routing Table for FastFoods VPN
SuperCom_SanJose#show ip mroute vrf FastFoods
[snip]
(195.12.2.6, 239.255.0.30), 04:15:49/00:02:35, flags: sT
  Incoming interface: Serial4/0, RPF nbr 192.168.2.17
  Outgoing interface list:
    Tunnel1, Forward/Sparse-Dense, 04:15:49/00:02:35

SuperCom_Paris#show ip mroute vrf FastFoods
[snip]
(195.12.2.6, 239.255.0.30), 14:25:19/00:02:50, flags: s
  Incoming interface: Tunnel1, RPF nbr 194.22.15.2
  Outgoing interface list:
    FastEthernet0/1, Forward/Sparse, 14:25:19/00:02:50
The important thing to remember about these examples is that the SuperCom network is oblivious to the multicast mode of operation in the FastFoods or EuroBank networks. The MTI intrinsically supports PIM SD mode; therefore, the customer's choice of multicast mode, RP placement, or RP-to-group distribution method is of little relevance to the SuperCom network.
Data-MDT Operation
As mentioned previously, due to the absence of receivers or sources, the San Francisco EuroBank mVRF at the San Jose PE router does not have routing entries, as shown in Example 7-24. You can see that although the EuroBank mVRF is empty, the San Jose PE router is still joined to the EuroBank Default-MDT shared tree (*, 239.192.10.2), regardless of whether EuroBank San Francisco has receivers (or sources).
Example 7-24 San Jose PE EuroBank mVRF and Global MDT Entry
SuperCom_SanJose#show ip mroute vrf EuroBank
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

SuperCom_SanJose#show ip mroute 239.192.10.2
[snip]
(*, 239.192.10.2), 03:50:40/00:02:41, RP 194.22.15.1, flags: BCZ
  Bidir-Upstream: POS3/0, RPF nbr 194.22.15.18
  Outgoing interface list:
    POS3/0, Bidir-Upstream/Sparse-Dense, 03:49:35/00:00:00
    MVRF EuroBank, Forward/Sparse-Dense, 03:50:40/00:00:00
This means that any traffic that the source in EuroBank Paris sends is not only received by the Washington PE router (which has an interested receiver in its path), but the P-packet also is replicated along the Default-MDT toward the San Jose PE router. At San Jose, the P-packet is decapsulated, and as there is no forwarding entry for this C-packet in the EuroBank mVRF, it is dropped. This process is illustrated in Figure 7-17.
Figure 7-17 Unnecessary Replication of Packets in MDT
To overcome this problem, you can use a Data-MDT to send packets only to the PE routers that are interested in the traffic. In the SuperCom network, assume that the two active sources at FastFoods San Jose and EuroBank Paris have started to transmit multicast traffic. After these streams exceed the MDT threshold (set at 1 Kbps), they switch over to a separate Data-MDT. The Data-MDT group is taken from the pool of addresses configured on the respective VRF at the source PE routers (that is, San Jose PE router for FastFoods and Paris PE router for EuroBank).
The Paris PE router joins the Data-MDT (*, 239.192.20.16) for FastFoods, and the Washington PE router joins the Data-MDT (*, 239.192.20.32) for EuroBank. The Data-MDTs that are created are illustrated in Figure 7-18. Notice that the San Jose PE router does not join the EuroBank Data-MDT; therefore, it does not receive unwanted multicast traffic.
Figure 7-18 Active Data-MDTs for EuroBank and FastFoods
Using the EuroBank multicast stream as an example, you will see the process of creating the Data-MDT. Because EuroBank is operating in PIM SM, the initial traffic from the EuroBank Paris source is sent over the shared tree to the EuroBank Washington CE router. A source tree (196.7.25.12, 239.255.0.20) is built back to EuroBank Paris from EuroBank Washington within the mVRF context following standard PIM SM rules. This is an important step; if the source traffic were to remain on the (*, G) shared tree, then it would be ineligible to be switched to a Data-MDT.
On the other hand, the FastFoods network does not need to switch from a shared tree because it uses SSM. Therefore, all of FastFoods' traffic is always on a source tree and eligible to be switched to a Data-MDT.
Example 7-25 shows the multicast routing entries for the EuroBank mVRF at the Paris PE router. You can see that there are two entries: the shared tree entry (*, 239.255.0.20) and the newly created source tree entry (196.7.25.12, 239.255.0.20). The interesting thing about the source tree entry is that the "y" flag is set. This means that the source tree entry has switched from the Default-MDT (because the threshold was exceeded) and is now sending its traffic by using the Data-MDT via Tunnel0.
Example 7-25 EuroBank mVRF Routing Entries at SuperCom Paris
SuperCom_Paris#show ip mroute vrf EuroBank
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.0.20), 2d02h/stopped, RP 196.7.25.1, flags: S
  Incoming interface: Serial0/0, RPF nbr 192.168.2.25
  Outgoing interface list:
    Tunnel0, Forward/Sparse-Dense, 2d02h/00:03:10

(196.7.25.12, 239.255.0.20), 00:11:06/00:03:23, flags: Ty
  Incoming interface: Serial0/0, RPF nbr 192.168.2.25
  Outgoing interface list:
    Tunnel0, Forward/Sparse-Dense, 00:11:12/00:03:10
When the threshold for (196.7.25.12, 239.255.0.20) was exceeded, the Paris PE router sent a Data-MDT TLV join message over the Default-MDT to the Washington and San Jose PE routers. Because the Washington PE router has an interested receiver, it immediately joined the new Data-MDT, whereas the San Jose PE router just cached the message. Example 7-26 shows the PIM debug messages that the Washington PE router output. Note that the messages shown here are from two PIM instances. PIM(1) is the instance running in the mVRF, and PIM(0) is the global instance. The Data-MDT join message indicates that EuroBank traffic for (196.7.25.12, 239.255.0.20) will be switched over to the Data-MDT group 239.192.20.32. This prompts the Washington PE router to issue a P-join for (*, 239.192.20.32) so that it can continue to receive the traffic.
Example 7-26 EuroBank Data-MDT TLV Join Message
PIM(1): MDT join TLV received for (196.7.25.12,239.255.0.20) MDT-data: 239.192.20.32
PIM(1): MDT-data group (*,239.192.20.32) added on interface: Loopback0
PIM(0): Check RP 194.22.15.1 into the (*, 239.192.20.32) entry
PIM(0): Building triggered (*,G) Join / (S,G,RP-bit) Prune message for 239.192.20.32
If you look at the routing entries for the EuroBank mVRF in the Washington PE router, as shown in Example 7-27, you see that the source tree entry has the "Y" flag set, indicating that traffic for (196.7.25.12, 239.255.0.20) is now being received over the Data-MDT.
Example 7-27 EuroBank mVRF Routing Entries at the Washington PE
SuperCom_Washington#show ip mroute vrf EuroBank
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.0.20), 5d19h/stopped, RP 196.7.25.1, flags: S
  Incoming interface: Tunnel0, RPF nbr 194.22.15.1
  Outgoing interface list:
    Ethernet5/0, Forward/Sparse, 5d19h/00:03:24

(196.7.25.12, 239.255.0.20), 00:45:48/00:03:28, flags: TY
  Incoming interface: Tunnel0, RPF nbr 194.22.15.1
  Outgoing interface list:
    Ethernet5/0, Forward/Sparse, 00:45:48/00:03:24
NOTE
The procedure for creating the FastFoods Data-MDT is the same as for EuroBank, except that a different pool of addresses is used. It would be superfluous to cover the scenario again for FastFoods.
Now that the Data-MDTs have been created, the last thing to examine is the entries in the SuperCom global multicast table. Example 7-28 shows the multicast routing entries at the Paris PE router. Unnecessary information, such as Auto-RP entries, has been pruned (pardon the pun) from the output to improve readability of the example. You can see that two additional shared tree routing entries correspond to the two Data-MDTs (239.192.20.16 and 239.192.20.32). Because the Paris PE router has a receiver in the FastFoods mVRF, it has joined the FastFoods Data-MDT and is forwarding traffic from (*, 239.192.20.16) to the FastFoods mVRF. The Z flag indicates this is a multicast tunnel, and the C flag indicates that an mVRF is connected. The Paris PE router is a leaf of the (*, 239.192.20.16) entry.
Example 7-28 Paris PE Data-MDT Routing Entries
SuperCom_Paris#show ip mroute
[snip]
(*, 239.192.10.1), 2d03h/00:03:28, RP 194.22.15.1, flags: BCZ
  Bidir-Upstream: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/2, Forward/Sparse-Dense, 1d17h/00:03:16
    MVRF FastFoods, Forward/Sparse-Dense, 2d03h/00:00:00

(*, 239.192.10.2), 2d03h/00:03:18, RP 194.22.15.1, flags: BCZ
  Bidir-Upstream: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    MVRF EuroBank, Forward/Sparse-Dense, 2d03h/00:00:00
    Serial0/2, Forward/Sparse-Dense, 2d03h/00:02:38

(*, 239.192.20.16), 00:50:15/00:03:26, RP 194.22.15.1, flags: BCZ
  Bidir-Upstream: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/2, Forward/Sparse-Dense, 00:50:12/00:03:20
    MVRF FastFoods, Forward/Sparse-Dense, 00:50:15/00:00:00

(*, 239.192.20.32), 19:08:21/00:03:27, RP 194.22.15.1, flags: BZ
  Bidir-Upstream: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/2, Forward/Sparse-Dense, 01:12:54/00:03:19
[snip]
There is something interesting about the last entry (*, 239.192.20.32), which is the Data-MDT for EuroBank. No C flag is present, which indicates that no mVRF is connected. This is because the Paris PE router is sending traffic to this tunnel from its connected source; the PE router is not receiving traffic from the tunnel. The Paris PE router is the root of the (*, 239.192.20.32) entry only.
To see how the EuroBank or FastFoods (S, G) entries are mapped to a particular Data-MDT, you use the show ip pim mdt command. Example 7-29 shows the (S, G, Data-MDT) details for both active multicast streams at the Paris PE router. The receive command shows that the Paris PE router has joined the Data-MDT 239.192.20.16 for the FastFoods source tree (195.12.2.6, 239.255.0.30). The send command shows that the Paris PE router is encapsulating traffic from the EuroBank source (196.7.25.12, 239.255.0.20) and is sending it to the Data-MDT 239.192.20.32.
Example 7-29 FastFoods and EuroBank Data-MDT Mappings
SuperCom_Paris#show ip pim vrf FastFoods mdt receive detail
Joined MDT-data groups for VRF: FastFoods
group: 239.192.20.16  source: 0.0.0.0  ref_count: 1
 (195.12.2.6, 239.255.0.30), 2d04h/00:03:23/00:03:00, OIF count: 1, flags: sTY

SuperCom_Paris#show ip pim vrf EuroBank mdt send
MDT-data send list for VRF: EuroBank
  (source, group)                     MDT-data group      ref_count
  (196.7.25.12, 239.255.0.20)         239.192.20.32       1
The example also shows a ref_count value. If the Data-MDTs in a pool for a given mVRF have been exhausted due to many active high-bandwidth sources, then Data-MDTs are reused based on the entry that has the lowest ref_count.
SSM in the SuperCom Core
You can deploy the SuperCom core with SSM instead of PIM SM. Doing so obviates the need for an RP, which in turn eliminates a single point of failure. The configuration to enable SSM for MDT groups is simple (see Example 7-30). This configuration is applied to all SuperCom routers in place of the RP to MDT-group mappings. The MDT-Range access-list contains the address ranges that the SuperCom network uses for both the Default-MDT and the Data-MDT. This access-list is associated with SSM by using the ip pim ssm range global command; therefore, any multicast traffic destined to these group addresses uses SSM control procedures. Note that both the Default-MDT and Data-MDT ranges are part of the SSM range.
Example 7-30 Enabling SSM
ip pim ssm range MDT-Range
!
ip access-list standard MDT-Range
 permit 239.192.10.0 0.0.0.255
 permit 239.192.20.0 0.0.0.255
When a SuperCom PE router creates a new Default-MDT through user configuration, it signals this to all of its peers by using a Multiprotocol BGP update, as discussed previously. When a PE router receives the update, it issues a source tree join back to the originator of the BGP message if the following conditions are met:
The Default-MDT group address matches the local SSM range.
A local mVRF is configured with the same Default-MDT group.
Example 7-31 shows the debug output for the BGP update and the corresponding PIM join from one of the SuperCom routers. This router has received a BGP update from 194.22.15.2 (which happens to be the San Jose PE router) stating that it has created a new Default-MDT 239.192.10.2 (EuroBank VPN). Because this router meets the conditions stated previously, it immediately issues a source join to (194.22.15.2, 239.192.10.2) for the Default-MDT.
Example 7-31 Joining the Default-MDT Using SSM
BGP(2): 194.22.15.2 rcvd UPDATE w/ attr: nexthop 194.22.15.2, origin ?, localpref 100, extended community RT:10:27 MDT:10:239.192.10.2
BGP(2): 194.22.15.2 rcvd 2:10:27:194.22.15.2/32
PIM(0): Send v2 Join on Serial0/2 to 194.22.15.2 for (194.22.15.2/32, 239.192.10.2), S-bit
How does the multicast routing table differ when you are using SSM? Compare the Paris PE router routing table in Example 7-32 using SSM with the same table using PIM Bi-Dir shown previously in Example 7-18. With PIM Bi-Dir, there were only two (*, G) shared tree entries: one for each of the EuroBank and FastFoods multicast domains. As you can see in Example 7-32, with SSM, you have five entries represented by the s flag. Because these are all source trees, you have more optimal routing because traffic does not have to traverse the RP.
Three of the entries in Example 7-32 are source trees that are rooted at remote PE routers. The Paris PE router has joined these source trees (by using SSM) based on Default-MDT information received in the Multiprotocol BGP update; therefore, the I flag has been set for these entries. The Paris PE router forwards traffic received from these entries to the corresponding mVRF in the olist. Of these three I entries, two are for the EuroBank Default-MDT (239.192.10.2) connecting to the San Jose and Washington PE routers, and the third is for the FastFoods Default-MDT (239.192.10.1) connecting to the San Jose PE router.
The other two routing entries in Example 7-32 do not have the I flag set. These entries represent the source trees for the two Default-MDTs rooted at the Paris PE router. Note the S of the (S, G) is 194.22.15.1, which is the loopback0 interface address of the Paris PE router. The remote PE routers issue a corresponding SSM join back to the Paris PE router. The outgoing interfaces point to the SuperCom core network.
Example 7-32 Paris PE Global Multicast Routing Table Using SSM
SuperCom_Paris#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(194.22.15.1, 239.192.10.1), 00:04:22/00:03:27, flags: sTZ
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/2, Forward/Sparse-Dense, 00:02:45/00:03:25

(194.22.15.2, 239.192.10.1), 00:03:02/00:02:57, flags: sTIZ
  Incoming interface: Serial0/2, RPF nbr 194.22.15.2
  Outgoing interface list:
    MVRF FastFoods, Forward/Sparse-Dense, 00:03:02/00:00:00

(194.22.15.1, 239.192.10.2), 00:04:23/00:03:25, flags: sTZ
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/2, Forward/Sparse-Dense, 00:02:47/00:03:24

(194.22.15.2, 239.192.10.2), 00:03:04/00:02:45, flags: sTIZ
  Incoming interface: Serial0/2, RPF nbr 194.22.15.2
  Outgoing interface list:
    MVRF EuroBank, Forward/Sparse-Dense, 00:03:04/00:00:00

(194.22.15.3, 239.192.10.2), 00:03:10/00:02:45, flags: sTIZ
  Incoming interface: Serial0/2, RPF nbr 194.22.15.2
  Outgoing interface list:
    MVRF EuroBank, Forward/Sparse-Dense, 00:03:10/00:00:00
SSM also uses the Data-MDT join message to trigger a join and switch over to the Data-MDT. The way it does this differs slightly from PIM SM. The Data-MDT join message contains only the (S, G, Data-MDT) mapping. PIM SM requires only the value of the Data-MDT group, so a (*, Data-MDT) P-join can be issued toward the rendezvous point by the receiving PE router. However, SSM requires the source address of the originating PE router so that an (S-PE, Data-MDT) P-join can be issued. The value of S-PE is derived from the RPF neighbor of S in the (S, G, Data-MDT) mapping. This is verified in the debug and RPF output in Example 7-33.
Example 7-33 Joining the Data-MDT Using SSM
PIM(1): MDT join TLV received for (196.7.25.12,239.255.0.20) MDT-data: 239.192.20.32
PIM(1): MDT-data group (194.22.15.1,239.192.20.32) added on interface: Loopback0
PIM(0): Send v2 Join on Serial4/0 to 194.22.15.22 for (194.22.15.1/32, 239.192.20.32), S-bit
PIM(0): Building Join/Prune message for 239.192.20.32

SuperCom_Washington#show ip rpf vrf EuroBank 196.7.25.12
RPF information for Eurobank_Paris_Source (196.7.25.12)
  RPF interface: Tunnel0
  RPF neighbor: SuperCom_Paris (194.22.15.1)
  RPF route/mask: 196.7.25.0/24
  RPF type: unicast (bgp 10)
  RPF recursion count: 0
  Doing distance-preferred lookups across tables
Summary
The Cisco Systems multicast VPN (mVPN) feature is based on the multicast domain solution described in section 2 of draft-rosen-vpn-mcast, "Multicast in MPLS/BGP VPN," which you can find at http://www.ietf.org. Multicast domains allow a service provider to offer mVPN services to its customers by using native multicasting techniques in the core. Native multicast is a mature technology in Cisco IOS; therefore, the stability of the P-network is protected because no new features or software upgrades need to be deployed on the P routers.
Scalability of the solution is ensured because the P-network has no visibility into the customer's multicast network. All multicast traffic for a given VPN is carried inside a small, bounded set of multicast tunnels (the Default-MDT plus any Data-MDTs) for that VPN. The number of multicast tunnels in the provider network is therefore predictable and is significantly lower than the number of potential multicast groups in all VPNs.
From the end customer's point of view, no changes need to be made in the customer network to connect to an mVPN service. The service provider can support all customer modes, including PIM SM, PIM DM, and PIM SSM, and any type of customer rendezvous point topology.
Routing optimality is improved in the P-network through the use of Data-MDTs for high-bandwidth sources, which ensure that customer multicast traffic is delivered only to PE routers that have an interested receiver.
