Troubleshooting Any Transport over MPLS Based VPNs

Date: Jun 10, 2005. By Mark Lewis. Sample chapter is provided courtesy of Cisco Press.

Multiprotocol Label Switching (MPLS) Layer 3 VPNs are described in Internet Draft draft-ietf-l3vpn-rfc2547bis (RFC2547bis). MPLS Layer 3 VPNs allow a service provider to provision IP connectivity for multiple customers over a shared IP backbone, while maintaining complete logical separation of customer traffic and routing information. Each customer VPN consists of several geographically dispersed sites. IP connectivity between sites is provisioned over the provider backbone.

There are two basic VPN models:

  • The overlay model, in which there is no exchange of routing information between the customer and the service provider
  • The peer model, in which routing information is exchanged between customer and service provider

MPLS Layer 3 VPNs conform to the peer model, but unlike other peer VPN architectures, each customer's routing information is maintained in separate routing and forwarding tables.

Figure 6-1 illustrates a service provider backbone with two MPLS VPNs provisioned.

In Figure 6-1 there are two VPNs, mjlnet_VPN and cisco_VPN. Each VPN has three sites, with site 1 in each VPN connected to Chengdu_PE, site 2 connected to HongKong_PE, and site 3 connected to Shanghai_PE.

The MPLS VPN topology is very flexible. The service provider can configure intranet and extranet topologies, such as hub-and-spoke and full-mesh, simply by controlling the distribution of customer routes between service provider (edge) routers.

The service provider can also act as a backbone to carry traffic between different sites of another service provider. This is known as the carrier's carrier topology.

Finally, service providers can combine to offer VPN connectivity to a customer, with some customer sites connected to one provider and other customer sites connected to other providers. This is called an interprovider VPN.

Figure 6-1. MPLS VPNs

Technical Overview

There are two main components in an MPLS VPN backbone: the customer routing and forwarding tables maintained on the provider (edge) routers, and the underlying mechanism used to transport customer traffic. When a customer data packet arrives on the ingress service provider edge router, it is encapsulated with an MPLS (VPN) label that corresponds to the best route in the appropriate customer routing and forwarding table. It is then forwarded over an MPLS label switched path (LSP) to the egress service provider edge router. Alternatively, MPLS VPN traffic may be tunneled over a non-MPLS network using IP/GRE, L2TPv3, or IP/IPSec.

An understanding of both components is essential for fast and effective troubleshooting of MPLS VPNs. A brief review of MPLS and MPLS VPN operation is included here, beginning with a description of the MPLS architecture.

MPLS Architecture

MPLS is an IETF standard, which builds upon early work done by companies such as Cisco, Ipsilon, Toshiba, and IBM. MPLS allows routers to switch packets based on labels rather than doing Layer 3 lookups. Routers that switch packets based upon labels are known as Label Switch Routers (LSRs).

MPLS offers a number of benefits, including closer integration between IP and ATM, the capability to remove BGP configuration from core routers, and applications such as VPNs and traffic engineering (MPLS/TE).

MPLS Forwarding

When an IP packet arrives at the edge of the MPLS network, the ingress LSR classifies the packet into a Forwarding Equivalence Class (FEC). The FEC is a classification that describes how packets are forwarded over an MPLS network. This can be based upon network prefix (route), quality of service, and so on. In this chapter, it is assumed that classification into an FEC is based on a network prefix (an entry in the routing/forwarding table of the ingress LSR).

Once classification has taken place, the ingress LSR imposes a label on the packet. This label corresponds to the FEC and functions as an identifier that allows LSRs to forward the packet without having to do a Layer 3 lookup.

At each hop through the MPLS backbone, the label is swapped, until the packet reaches the penultimate LSR in the path through the MPLS network. Note that, although the label is swapped, it still corresponds to the same FEC.

The penultimate hop LSR may remove or pop the label before forwarding the packet to the egress LSR. This is called penultimate hop popping, and it saves the egress LSR from having to do a label lookup, remove the label, do a Layer 3 lookup, and finally forward the packet. Instead, because the label is removed at the penultimate hop, the egress LSR can simply do a Layer 3 lookup and forward the packet accordingly.

Note that penultimate hop popping is performed only for labels corresponding to directly connected networks or aggregate routes on the egress LSR.

Figure 6-2 illustrates the forwarding of an IP packet across the MPLS backbone.

The path that a packet takes across the MPLS network is known as a Label Switched Path (LSP).

Figure 6-2. Label Switched Path

MPLS Modes

MPLS can operate in two modes:

  • Frame-mode is used over Ethernet, Frame Relay, PPP (including POS), HDLC, and ATM PVCs.
  • Cell-mode is used between label switching controlled ATM (LC-ATM) interfaces. ATM cells sent and received on LC-ATM interfaces carry labels in the VCI or VPI and VCI fields of the ATM cell headers. A device that switches ATM cells between LC-ATM interfaces using label values contained in the VPI/VCI fields in the cell headers is known as an ATM-LSR.

Labels

The precise form of the MPLS label differs depending on whether frame-mode or cell-mode MPLS is used, as detailed in the sections that follow.

Frame-Mode

In frame-mode, the label is carried as a "shim" header between the Layer 2 and Layer 3 headers. MPLS labels are 4 octets long and consist of a 20-bit label, a 3-bit Experimental (EXP) field, a bottom of label stack (S) bit, and an 8-bit Time-to-Live (TTL) field. This is illustrated in Figure 6-3.

Figure 6-3. MPLS Label

The Label field carries the label value itself. This corresponds to an FEC. The EXP field, in spite of its name, usually carries quality of service information. The bottom of stack (S) bit indicates whether the label is the last (bottom) label in the stack. The Time-to-Live (TTL) field serves exactly the same function as that contained within the IP packet header. The TTL field is decremented by 1 at every hop, and if it reaches 0, the labeled packet is discarded. This mechanism provides protection against forwarding loops in the MPLS network, as well as limiting the forwarding scope of the packet.

Cell-Mode

In cell-mode, the label is carried in the VPI/VCI fields of the ATM cell header, as shown in Figure 6-4.

Figure 6-4. MPLS Label Carried in the VPI/VCI Fields of the ATM Cell Header

Note that when the original packet is segmented into cells on the ingress ATM-LSR, the first of those cells also carries the label or labels in the form shown in Figure 6-3. This is to preserve any other information, such as quality of service, carried in the EXP bits.

Label Stack

A labeled packet is said to contain a label stack. The label stack consists of one or more labels. In a simple MPLS VPN environment, the label stack consists of two labels. If MPLS VPN traffic is being carried over an MPLS traffic-engineering (TE) tunnel, the label stack may consist of two, three, or four labels, depending on how TE is configured.

The outermost (top) label in a stack is used to carry the packet over the MPLS backbone between ingress and egress LSRs. This outer label is the IGP label.

Because the outermost label has only local significance, LSRs must use a signaling protocol to exchange label to prefix bindings. The signaling protocol can be either Cisco's proprietary Tag Distribution Protocol (TDP) or the Label Distribution Protocol (LDP).

If traffic is being carried over a traffic engineering (TE) tunnel, the outermost label corresponds to the TE tunnel. In this case, the label signaling protocol can be either the Resource Reservation Protocol (RSVP), or Constraint-based Routed Label Distribution Protocol (CR-LDP). Cisco routers use RSVP to signal traffic engineering tunnels.

Note that although the outermost (IGP) label may be either TDP/LDP or RSVP signaled, in this book the term "TE label" is used where appropriate to distinguish RSVP signaled labels.

When MPLS VPN traffic is being transported, the innermost (bottom) label corresponds to either:

  • The VPN Routing and Forwarding instance (VRF, which is discussed later in this chapter)
  • The outgoing interface on the egress PE router

This is called the VPN label. Figure 6-5 illustrates the format of the labeled packet as it is transmitted.

Figure 6-5. Labeled Packet

Label Information Base, Label Forwarding Information Base, and Cisco Express Forwarding

Labels are stored in three separate types of tables on Cisco routers:

  • The Label Information Base (LIB)—The LIB contains all label bindings received from peer LSRs, or a subset of label bindings that correspond to the best routes for network prefixes. Whether the LSR retains all labels or just a subset depends on the mode of label retention that it is using.
  • The Label Forwarding Information Base (LFIB)— The LFIB contains only those labels that correspond to the next-hop of the best route for each network prefix. The LFIB also contains outgoing interface information.

The LFIB is used for label swapping within the MPLS backbone.

  • The Cisco Express Forwarding (CEF) tables—The CEF tables contain information from the routing table, including prefixes, next-hops, and outgoing interfaces. The CEF tables also interface to the LIB and contain labels associated with prefixes.

CEF is used for label imposition at the edge of the MPLS network on the ingress LSR.
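
It can be useful when troubleshooting to inspect each of these tables directly on an IOS LSR. The following commands are a minimal reference (syntax only, no output shown): show mpls ldp bindings displays the LIB when LDP is the label distribution protocol in use, show mpls forwarding-table displays the LFIB, and show ip cef displays the CEF table.

show mpls ldp bindings
show mpls forwarding-table
show ip cef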

Control and Data Planes

There are two channels or planes of communication between LSRs in an MPLS network:

  • The control plane—Used to exchange routing information and label bindings
  • The data (or forwarding) plane—Used for the transmission of labeled or unlabeled packets

LSP Control, Label Assignment, and BGP Routes

The way that LSPs are established within the MPLS network depends on whether the LSRs are using independent or ordered LSP control, as described in the following list:

  • Independent LSP control—When independent control is used, LSRs assign labels to prefixes (FECs) independently. This means that labels are assigned irrespective of whether other LSRs have assigned labels.

Once labels have been assigned (or bound) to prefixes, these bindings are advertised to peer LSRs.

  • Ordered LSP control—When ordered control is used, an LSR assigns labels to prefixes (FECs) for which it is the egress LSR. If an LSR is not the egress for a prefix, it does not assign a label until the next-hop LSR has sent a label binding for the prefix in question.

It is possible for both independent and ordered control to coexist within a network.

In both independent and ordered control mode, labels are assigned to all prefixes in the routing table, with the exception of BGP routes (in regular MPLS operation). Instead, BGP routes are assigned the label that corresponds to their next-hop. This means, for example, that if BGP route 172.16.5.0/24 has a next-hop of 10.1.1.4, and prefix 10.1.1.4/32 is assigned label 25, then the BGP route will also be assigned label 25. This seemingly insignificant fact has pretty significant consequences.

When a packet enters an MPLS network, the ingress LSR does a route lookup, and if the longest match is a BGP route, the (IGP) label corresponding to the route's next-hop will be imposed on the packet. The packet is then forwarded across the MPLS backbone, and as long as the LSRs in the path have a label corresponding to the BGP next-hop, they are able to forward the packet.

When the packet arrives at the egress LSR, the packet is forwarded out of the correct customer interface. The upshot is that only edge LSRs need to run BGP. Core LSRs can simply run the IGP used to advertise BGP next-hop information.

It is useful to remember this fact when troubleshooting MPLS VPNs because customer routes are advertised across the MPLS VPN backbone using MP-BGP. VPN packets, therefore, use a label corresponding to the MP-BGP next-hop to cross the backbone. The MP-BGP next-hop for VPN routes is the advertising PE router's BGP update source.

Downstream Label Distribution

Label bindings are distributed from downstream to upstream LSRs. Downstream LSRs are closer to the destination network than upstream LSRs. Label distribution, therefore, takes place in the opposite direction to traffic flow.

Figure 6-6 illustrates downstream label distribution.

Figure 6-6. Downstream Label Distribution

Downstream label distribution can be either one of the following:

  • Unsolicited downstream label distribution—LSRs that use unsolicited label distribution do not wait for label bindings to be requested before advertising them to their upstream neighbors.
  • Downstream-on-demand label distribution—If LSRs use downstream-on-demand label distribution, an LSR can request a label for a prefix from its downstream peer.

Figure 6-7 illustrates downstream-on-demand label distribution.

Figure 6-7. Downstream-on-Demand Label Distribution

Label Retention

After receiving label bindings from its peers, an LSR must decide whether to retain all these bindings or only those that correspond to the best routes in the network. The two modes of label retention are as follows:

  • Liberal label retention—If an LSR operates in liberal label retention mode, all label bindings sent to it from other LSRs are retained.

The advantage of this mode of operation is that the LSR can failover to an alternate LSP if the original LSP fails. The disadvantage is that more memory is required to store the labels.

  • Conservative label retention—An LSR operating in conservative label retention mode retains only those label bindings that correspond to the best route for each network prefix. Any other label bindings are simply discarded.

The advantage of this mode is that less memory is required to store label bindings. The disadvantage is that it takes longer to failover to an alternate path if the original LSP fails.

Label Distribution Protocols

A number of label distribution protocols can be used within an MPLS network, depending upon the particular applications being used. Label distribution protocols include Tag Distribution Protocol (TDP), Label Distribution Protocol (LDP), RSVP, Multiprotocol Extensions for BGP-4 (MP-BGP), and Protocol Independent Multicast (PIM).

LDP and TDP

LDP and Cisco's proprietary TDP can both be used to advertise label bindings for IGP prefixes. Although TDP and LDP are similar, there are a number of differences. Table 6-1 outlines some of the primary differences between LDP and TDP.

Table 6-1 LDP versus TDP

LDP: IETF standard protocol
TDP: Cisco proprietary protocol

LDP: Uses an all-routers multicast address (224.0.0.2) for directly connected neighbor discovery
TDP: Uses local broadcasts

LDP: Uses UDP and TCP port 646 for neighbor discovery and session establishment
TDP: Uses UDP and TCP port 711

LDP: Provides optional MD5 authentication
TDP: No optional MD5 authentication provided


Extensions to RSVP

Extended RSVP is used in MPLS networks to signal TE tunnels. TE LSP tunnels can be used to make better use of bandwidth by taking advantage of underutilized paths through the network.

TE LSPs can be reserved based upon bandwidth requirements and administrative policies. TE LSPs can follow an explicit or dynamic path. Irrespective of whether they are explicit or dynamic, however, paths must conform to any bandwidth and administrative requirements.

Extensions to OSPF and IS-IS facilitate the flooding of link bandwidth and policy information throughout the MPLS network. This allows the TE tunnel initiating (head-end) LSR to calculate the path using a constrained shortest path first (CSPF) algorithm.

Once the path has been calculated, the tunnel is signaled using RSVP Path and Resv messages. Path messages contain a LABEL_REQUEST object (among others) and travel hop-by-hop along the path described to the tunnel tail-end. Resv messages contain a LABEL object and travel back along the path from the tail-end to the head-end LSR. The purpose of the LABEL_REQUEST object is, as the name suggests, to request a label binding for the LSP. The purpose of the LABEL object is to distribute label bindings for the LSP. TE tunnels use downstream-on-demand label distribution.

Figure 6-8 illustrates TE LSP tunnel signaling using extensions to RSVP.

Figure 6-8. TE LSP Tunnel Signaling Using Extensions to RSVP

Extensions to RSVP for traffic engineering are discussed in RFC 3209. Other useful documents include RFC 2702, which describes requirements for traffic engineering over MPLS, draft-ietf-isis-traffic, which describes IS-IS extensions for traffic engineering, and RFC 3630, which describes TE extensions for OSPF.
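
For reference, a minimal TE head-end configuration sketch on an IOS router is shown below. The interface names and loopback follow this chapter's examples, but the bandwidth values, tunnel number, and tunnel destination are illustrative assumptions rather than a definitive configuration.

mpls traffic-eng tunnels
!
interface Serial4/0
 mpls traffic-eng tunnels
 ip rsvp bandwidth 2048
!
router ospf 100
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0
!
interface Tunnel1
 ip unnumbered Loopback0
 tunnel destination 10.1.1.4
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng bandwidth 512
 tunnel mpls traffic-eng path-option 1 dynamic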

MP-BGP

MP-BGP is used in an MPLS VPN environment to advertise customer routes, associated labels, and other attributes. MP-BGP is discussed further in the next section, "MPLS Layer-3 VPNs." Multiprotocol extensions for BGP are discussed in RFC 2858.

MPLS Layer-3 VPNs

MPLS VPNs can be provisioned over a shared provider backbone. They allow IP connectivity between customer sites in a VPN.

RFC2547bis uses a number of terms to describe devices at customer sites and within the service provider's backbone:

  • Customer edge (CE) routers—Routers at the customer site that are directly connected to the service provider network.
  • Customer routers—Other routers within the customer site that are not directly connected to the service provider network.
  • Provider edge (PE) routers—Routers within the service provider backbone that connect to customer sites. Note that in an MPLS network, PE routers also function as edge LSRs.
  • Provider (P) routers—Routers within the service provider backbone that do not connect directly to customer sites. Note that in an MPLS network, P routers also function as LSRs.

Figure 6-9 illustrates CE, PE, and P routers.

Figure 6-9. CE, PE, and P Routers

Overlapping Address Space

Different customers' VPNs might use overlapping IP address space. To allow overlapping IP address space to be distinguished within the MPLS VPN backbone, PE routers translate customer routes into VPN-IPv4 prefixes.

A VPN-IPv4 prefix consists of an 8-byte Route Distinguisher (RD) and the original 4-byte IPv4 prefix. The RD is different for each customer VPN, which ensures that VPN-IPv4 prefixes are unique.
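
As a brief illustration (a minimal sketch reusing the RD values from the sample PE configuration at the end of this chapter), configuring a different RD on each customer VRF is what keeps otherwise identical IPv4 prefixes distinct as VPN-IPv4 prefixes:

ip vrf mjlnet_VPN
 rd 64512:100
!
ip vrf cisco_VPN
 rd 64512:200

With these RDs, 172.16.5.0/24 learned from mjlnet_VPN and 172.16.5.0/24 learned from cisco_VPN become two different VPN-IPv4 prefixes in MP-BGP.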

Figure 6-10 illustrates the format of a VPN-IPv4 prefix.

Figure 6-10. VPN-IPv4 Prefix Format

The RD is encoded using the format shown in Figure 6-11.

Figure 6-11. RD Format

The Type field is 2 bytes, and the Value field is 6 bytes. The three currently defined RD types are 0, 1, and 2. The Value field is broken into the Administrator and Assigned Number subfields, as shown in Figure 6-12.

Figure 6-12. Administrator and Assigned Number Subfields

If a Type 0 RD is specified, then the Administrator subfield and Assigned Number subfields are 2 bytes and 4 bytes, respectively, and are encoded as shown in Figure 6-13.

Figure 6-13. RD Type 0 Encoding

As shown in Figure 6-13, when using a type 0 RD, the Administrator subfield contains an autonomous system (AS) number.

If the service provider is using autonomous system number 64512, and the assigned number is 100, then the IPv4 prefix

172.16.5.0/24 

would translate into

0:64512:100:172.16.5.0

If a Type 1 RD is specified, the Administrator subfield and Assigned Number subfields are 4 bytes and 2 bytes, respectively, and they are encoded as shown in Figure 6-14.

Figure 6-14. RD Type 1 Encoding

As you can see, when using a type 1 RD, the Administrator subfield contains an IP address.

If the service provider is using IP address 192.168.1.1, and the assigned number is 100, then the IPv4 prefix

172.16.5.0/24 

would translate into

1:192.168.1.1:100:172.16.5.0

If a Type 2 RD is specified, the Administrator subfield and Assigned Number subfields are 4 and 2 bytes, respectively, and they are encoded as shown in Figure 6-15.

Figure 6-15. RD Type 2 Encoding

If the service provider is using autonomous system number 64512, and the assigned number is 100, then the IPv4 prefix

172.16.5.0/24 

would translate into

2:64512:100:172.16.5.0

If you are the observant type, you might have noticed a striking similarity between the format of type 0 and type 2 RDs. They do, in fact, have a similar format.

Type 0 and 1 RDs are used when translating IPv4 prefixes into VPN-IPv4 prefixes. Type 2 RDs can be used to signal Multicast VPNs (MVPNs).

VPN Routing and Forwarding Instances

To allow complete logical separation of routes belonging to different customers, separate routing tables and forwarding tables are used on PE routers. These routing and forwarding tables collectively make up what is known as a VPN Routing and Forwarding (VRF) instance. PE router interfaces connected to different customers are then associated with these VRFs.

Customer routes received on an interface are stored in the associated VRF routing table. Similarly, customer traffic received on an interface is routed according to the associated VRF.

Figure 6-16 illustrates VRF tables on the PE routers.

Figure 6-16. VRF Tables on PE Routers

The global routing table is still maintained on the PE router and contains backbone IGP routes, as well as any Internet routes.

Note that on Cisco routers it is now possible to associate incoming traffic with a VRF based on its source IP address rather than incoming interface. This feature is known as VRF Selection.

Route Target Attribute

Although RDs facilitate the disambiguation of overlapping IP address space, they are not flexible enough to allow the provisioning of complex network topologies over an MPLS VPN backbone. To allow this provisioning, a BGP extended community attribute called a Route Target (RT) is used.

The BGP extended community has two fields, the Type field and the Value field. The Type field used with Route Targets is 2 octets (Extended Type), and the Value field is 6 octets. Figure 6-17 illustrates the BGP extended community attribute.

Figure 6-17. BGP Extended Community Attribute (Extended Type)

The (Extended) Type field is subdivided into high and low order octets. The low order octet has a value of 0x02 when the extended community is a Route Target.

The Value field is subdivided into Global and Local Administrator fields. The length of these fields depends on the value of the Type high order octet. If the Type high order octet has a value of 0x00 or 0x02, the Global Administrator is 2 octets (if the high order octet is 0x00) or 4 octets (if the high order octet is 0x02), and the Local Administrator is 4 or 2 octets, respectively. In this case, an autonomous system number (either 2 or 4 octets) is carried in the Global Administrator field. The Local Administrator field carries, as the name suggests, a value assigned by the local administrator.

Figure 6-18 shows the Route Target attribute when the Type high order octet is 0x00 or 0x02.

Figure 6-18. Route Target Attribute (High Order Octet Is 0x00 or 0x02)

If the service provider uses autonomous system number 64512, and the Local Administrator number is 100, the Route Target attribute would be 64512:100. If the Type high order octet has a value of 0x01, then the Global Administrator is 4 octets, and the Local Administrator is 2 octets. In this case, an IP Address is carried in the Global Administrator field.

Figure 6-19 shows the Route Target attribute when the Type high order octet is 0x01.

Figure 6-19. Route Target Attribute (High Order Octet Is 0x01)

If the service provider uses IP address 192.168.1.1, and the Local Administrator number is 100, the Route Target attribute would be 192.168.1.1:100.

Each VRF is configured with a set of import and export RTs. When VPN-IPv4 routes are inserted into the MP-BGP table, one or more route targets are attached. These are known as export route targets.

When a PE router receives a VPN-IPv4 route, it compares the attached RTs with the import RTs for each of its VRFs. If there is at least one match, the route is installed into the VRF.

Figure 6-20. VPN-IPv4 Route Export and Import

Figure 6-20 illustrates VPN-IPv4 route export and import based on RTs.
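
To make the topology control concrete, the following is a hypothetical hub-and-spoke sketch (the VRF names and RT values are illustrative and do not appear in the figures): the hub VRF imports the RT exported by the spokes, while the spoke VRFs import only the RT exported by the hub, so spoke-to-spoke traffic is forced through the hub site.

ip vrf mjlnet_hub
 rd 64512:110
 route-target export 64512:110
 route-target import 64512:111
!
ip vrf mjlnet_spoke
 rd 64512:111
 route-target export 64512:111
 route-target import 64512:110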

VPN Route Distribution

In a MPLS Layer 3 VPN, customer edge (CE) routers advertise routes to provider edge (PE) routers using Routing Information Protocol (RIP) version 2, Enhanced Interior Gateway Routing Protocol (EIGRP), Open Shortest Path First (OSPF), or Exterior Border Gateway Protocol (EBGP).

After receiving customer routes from the CE, the PE converts them into VPN-IPv4 routes. One or more (export) RTs are then attached, and they are advertised in Multiprotocol BGP (MP-BGP) to other PE routers. The next-hop of these routes is the BGP update source of the advertising PE router.

VPN labels are also advertised, along with VPN-IPv4 prefixes. These labels identify the VRF or outgoing interface on the advertising PE router and are used for packet forwarding.

Other standard and extended BGP communities, such as Site of Origin (used for loop prevention in multihomed sites), may also be attached to the VPN-IPv4 routes.

MP-BGP routes received by PE routers are installed into VRFs depending on the attached RTs. Routes installed into a VRF are then advertised to the attached customer sites in the VPN using RIP, EIGRP, OSPF, or EBGP.

Figure 6-21 shows the advertisement of customer routes across a service provider MPLS VPN backbone.

Figure 6-21. Advertisement of Customer Routes Across the MPLS VPN Backbone

Although Figure 6-21 illustrates route advertisement only from CE2 to CE1, route advertisement from CE1 to CE2 works in exactly the same way, just in the opposite direction.

The example in this section describes the use of PE-CE routing protocols, but static routes may also be configured for PE to CE connectivity.

Forwarding VPN Traffic Across the Backbone

When VPN traffic is forwarded across the MPLS VPN backbone, a two-label stack is used. The outer label is known as the IGP label and is used to forward the traffic from the ingress PE to the egress PE. The inner label is known as the VPN label and is used to identify the VRF or outgoing interface on the egress PE.

Figure 6-22 illustrates the forwarding of a packet across an MPLS VPN backbone from host 172.16.1.1 to host 172.16.5.1.

Figure 6-22. Packet Forwarding Across an MPLS VPN Backbone

In Figure 6-22, an IP packet is sent from host 172.16.1.1 (on the left) to host 172.16.5.1 (on the right).

The IP packet is forwarded by CE1 to Chengdu_PE. Chengdu_PE does a Layer 3 lookup in the VRF mjlnet_VPN routing table and finds a route to network 172.16.5.0/24 with a next-hop of 10.1.1.4. 10.1.1.4 is the BGP update source on HongKong_PE.

Chengdu_PE then imposes a two-label stack, with the inner (VPN) label (36) corresponding to the IP prefix 172.16.5.0/24 in VRF mjlnet_VPN on HongKong_PE, and the outer (IGP) label (29) corresponding to the next-hop of route 172.16.5.0/24 (10.1.1.4).

Chengdu_PE forwards the packet to Chengdu_P. Chengdu_P consults its LFIB and swaps outer label 29 for label 25. The VPN label is unmodified.

Chengdu_P forwards the packet to HongKong_P. HongKong_P consults its LFIB, and pops (removes) outer label 25 (HongKong_P is the penultimate hop). Again, the VPN label is unmodified.

HongKong_P then forwards the packet to HongKong_PE. HongKong_PE examines the VPN label and, having found that it corresponds to VRF mjlnet_VPN, removes the label and forwards the unlabeled IP packet to CE2. Finally, CE2 forwards the packet onwards to host 172.16.5.1.

As previously noted, IP or GRE tunnels can be used to transport VPN traffic over a non-MPLS network between PE routers. In this case, the outermost MPLS label can be replaced by GRE or IP encapsulation.

MPLS VPN traffic can also be transported over a non-MPLS network using an L2TPv3 or IPSec tunnel. When L2TPv3 is used to transport VPN traffic over a non-MPLS network, the outermost MPLS label is replaced by L2TPv3 encapsulation. When MPLS VPN traffic is transported over an IPSec tunnel between PE routers, the outermost MPLS label is replaced by IP/IPSec encapsulation.

When comparing the three methods of encapsulation for transport of MPLS VPN traffic over a non-MPLS network, L2TPv3 offers a compromise between the strong security but high overhead of IP/IPSec and the very limited security of IP/GRE. The L2TPv3 cookie makes blind spoofing attacks more difficult to achieve than with IP/GRE because an attacker has to guess the cookie values in use (as well as the MPLS label value).

See draft-ietf-mpls-in-ip-or-gre, draft-townsley-l2tpv3-mpls, and draft-ietf-l3vpn-ipsec-2547 for more information on MPLS VPN transport over IP or GRE, L2TPv3, and IP/IPSec respectively. See also Chapter 5, "Troubleshooting L2TP v3 Based VPNs," for more details on L2TPv3.

Unless otherwise specified, this chapter assumes transport of MPLS VPN traffic over an MPLS backbone.

Internet Access

VPN customers often require Internet access. The MPLS VPN provider can configure this in several ways. Two of the most popular ways of configuring Internet access are packet leaking between the VRF and global routing tables via static routes, and using separate interfaces for VPN and Internet access. Other methods of providing Internet access include via a shared service VPN or a separate ISP.

Providing VPN Customers Internet Access with Packet Leaking via Static Routes

Normally, VRF and global routing tables are completely separated. The VRF routing tables contain VPN routes, and the global routing table contains backbone IGP routes and either Internet routes or a route to an Internet gateway.

If packet leaking via static routes is configured, traffic outbound from the customer VPN to the Internet is allowed to "leak" from the VRF routing tables to the global routing table. Similarly, traffic inbound from the Internet is selectively allowed to leak into the customer VPN via the VRF interface.

Leaking from the VRF to the global routing table (for traffic outbound from the customer VPN) is accomplished by configuring a static VRF route with a next-hop in the global routing table. This static VRF route is usually a default route. Similarly, a global static route pointing to the customer networks (for traffic inbound from the Internet) is configured with an outgoing VRF interface.

Figure 6-23 illustrates Internet access via route leaking.

In Figure 6-23, traffic outbound to the Internet from mjlnet_VPN site 1 arrives on the VRF interface of the PE router. The PE router routes the traffic using the default route in the VRF routing table. The next-hop is in the global routing table.

When traffic inbound from the Internet to mjlnet_VPN site 1 arrives on the PE router, the PE router forwards the traffic using the route in the global routing table. The outgoing interface of the route is the mjlnet_VPN VRF interface.

When configuring packet leaking, the global static route that points to the customer network should be redistributed into global BGP. This ensures that hosts on the Internet have a route back to the PE router. Also, the VRF static default route should be redistributed into the PE-CE routing protocol, if one is being used.
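
A minimal configuration sketch of packet leaking follows. The Internet gateway next-hop 192.168.100.1 is hypothetical; the customer prefix, VRF name, and interface are reused from the PE configuration examples later in this chapter.

! VRF static default route with a next-hop resolved in the global routing table
ip route vrf mjlnet_VPN 0.0.0.0 0.0.0.0 192.168.100.1 global
!
! Global static route to the customer network via the VRF interface
ip route 172.16.1.0 255.255.255.0 Serial4/1 172.16.4.2
!
! Redistribute the global static route into BGP so that Internet hosts have a return path
router bgp 64512
 redistribute static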

Figure 6-23. Internet Access via Route Leaking

Providing VPN Customers Internet Access with a Separate Interface

Another way of configuring Internet access for a customer site is to configure one interface for VPN connectivity on the CE router and another separate interface for Internet connectivity. The interface for Internet connectivity is associated with the global routing table on the PE router.

Figure 6-24 illustrates Internet access via a separate interface.

Figure 6-24. Internet Access via a Separate Interface

In Figure 6-24, traffic both outbound and inbound from the Internet is routed via the Internet (global) interface on the PE router. Routing can be configured between the CE and PE routers over the Internet interface in the standard way using BGP or static default routes.
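
One possible PE-side sketch is shown below, assuming the PE-CE link is an 802.1Q trunk carrying one subinterface for VPN traffic and a second subinterface, associated with the global routing table, for Internet traffic. The interface numbers, VLAN IDs, and addresses are hypothetical.

interface FastEthernet2/0.10
 encapsulation dot1Q 10
 ip vrf forwarding mjlnet_VPN
 ip address 172.16.4.1 255.255.255.252
!
interface FastEthernet2/0.20
 encapsulation dot1Q 20
 ip address 192.0.2.1 255.255.255.252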

Multicast VPNs (MVPNs)

Previously, if a customer required multicast connectivity between sites in an MPLS VPN, a mesh of point-to-point GRE tunnels between the CE routers was required. With the advent of Multicast VPNs (MVPNs), this is no longer necessary. The MVPN feature is based on Multicast Domains (MD), which are described in Internet Draft draft-rosen-vpn-mcast.

MVPNs allow a service provider to tunnel customer multicast traffic between sites over a core multicast tree (in other words, multicast over multicast tunneling).

MVPN presupposes support for PIM within the customer network, as well as the provider backbone. Supported modes include PIM Sparse-Mode (PIM-SM), PIM Bi-directional (PIM-BIDIR), and PIM Source Specific Multicast (PIM-SSM). PIM Dense Mode (PIM-DM) is also supported within the customer network.

When configuring and troubleshooting MVPN, you should have a good understanding of the following elements:

  • MVPN support on PE routers
  • Formation of PIM adjacencies in an MVPN environment
  • Default multicast forwarding in the backbone
  • Optimizing multicast forwarding in the backbone

The sections that follow discuss these elements in greater detail.

MVPN Support on the PE Router: The Multicast VRF, Multicast Tunnel, and Multicast Tunnel Interface

When MVPN is configured for a VRF on a PE router, a Multicast VRF (MVRF) and a Multicast Tunnel Interface (MTI) are created. The MVRF is the multicast routing table for the VRF. The MTI is an endpoint of the Multicast Tunnel (MT) and is used to forward customer multicast traffic between sites in an MVPN.

The MT source address is the MP-BGP update source on the PE router, and the destination address is the Multicast Distribution Tree (MDT) address.

Formation of PIM Adjacencies in an MVPN Environment

Each PE router maintains one instance of PIM for the backbone network, as well as one instance per MVRF. Provider backbone PIM adjacencies are maintained between PE routers' core interfaces and P routers. MVPN PIM adjacencies are maintained between PE and CE routers, as well as between PE routers over the MT.

Figure 6-25 illustrates PIM adjacencies in an MVPN environment.

Note that each PE router in Figure 6-25 has only one (multipoint) MTI. PE routers create a single MTI per MVRF. This means that each PE router in Figure 6-25 maintains two PIM adjacencies on its MTI.

Figure 6-25. PIM Adjacencies in an MVPN Environment

Default Multicast Forwarding in the Backbone: The Default MDT

A default Multicast Distribution Tree (default MDT) is maintained in the provider backbone for the purpose of forwarding customer multicast traffic and PIM control traffic between the MTIs on the PE routers. All PE routers participating in the MVPN join the default MDT.

Figure 6-26 illustrates the default MDT.

In Figure 6-26, the default MDT (group 239.0.0.1) has been established over the provider backbone between Chengdu_PE, HongKong_PE, and Shanghai_PE. The backbone network in this example is configured for PIM Sparse Mode (PIM-SM), with Chengdu_P as the Rendezvous Point (RP). Customer multicast traffic (in this example, group 233.253.233.1) is forwarded over the default MDT.
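
The PE-side configuration corresponding to Figure 6-26 would look something like the following sketch (interface names are reused from the configuration examples later in this chapter; PIM sparse mode is assumed, and PIM must also be enabled on the loopback used as the MP-BGP update source because it is the multicast tunnel source):

ip multicast-routing
ip multicast-routing vrf mjlnet_VPN
!
ip vrf mjlnet_VPN
 mdt default 239.0.0.1
!
interface Loopback0
 ip pim sparse-mode
!
interface Serial4/0
 ip pim sparse-mode
!
interface Serial4/1
 ip vrf forwarding mjlnet_VPN
 ip pim sparse-mode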

Figure 6-26. Default MDT

Note that in Figure 6-26, HongKong_PE drops multicast traffic for group 233.253.233.1 because there are no receivers at MVPN_mjlnet site 2. This is the disadvantage of forwarding customer multicast traffic over the default MDT. Traffic is forwarded to all PE routers participating in the MVPN, regardless of whether there are any receivers at the site to which they are connected. The solution to this issue is the data MDT.

Optimizing Multicast Forwarding in the Backbone: The Data MDT

A data MDT (see Figure 6-27) is constructed across the provider backbone when traffic for a particular customer multicast group crosses a configured bandwidth threshold. Crucially, only PE routers connected to sites with receivers for this group join the data MDT.

In Figure 6-27, the bandwidth threshold for group 233.253.233.1 has been exceeded, and a data MDT (group 239.0.1.0) has been established.

Figure 6-27. Data MDT

There is a receiver for group 233.253.233.1 at site 3, and so Shanghai_PE joins the data MDT. There are no receivers for this group at site 2, however, and so HongKong_PE does not join the data MDT.

Note that data MDTs are not established for PIM dense mode groups.
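
Data MDTs are enabled per VRF with the mdt data command. The following is a minimal sketch using the group from Figure 6-27; the group range (wildcard mask) and the 10-kbps threshold are illustrative assumptions.

ip vrf mjlnet_VPN
 mdt default 239.0.0.1
 mdt data 239.0.1.0 0.0.0.255 threshold 10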

Configuring MPLS VPNs

Misconfiguration is a common cause of problems with MPLS VPNs. In this section, therefore, MPLS VPN configuration is discussed.

When configuring an MPLS VPN, there are three types of devices that must be configured: the CE router, the PE router, and the P router. The configuration of each of these devices is discussed in this section.

It may be useful to reference Figure 6-31 on page 476 while reading this section. Note, however, that not all configuration discussed in this section is illustrated.

Configuring the CE Router

Configuration of the CE router is standard—nothing special is required. The only restriction is that the routing protocol used between the CE and PE routers must currently be RIP version 2, EIGRP, OSPF, or EBGP. Static routes can also be used.

Configuring the PE Router

Configuration of the PE router is much more complicated than that of the CE router.

The 12 basic steps involved are summarized as follows:

Step 1. Configure the loopback interface to be used as the BGP update source and LDP router ID.
Step 2. Enable CEF.
Step 3. Configure the label distribution protocol.
Step 4. Configure the TDP/LDP router ID (optional).
Step 5. Configure MPLS on core interfaces.
Step 6. Configure the MPLS VPN backbone IGP.
Step 7. Configure global BGP parameters.
Step 8. Configure MP-BGP neighbor relationships.
Step 9. Configure the VRF instances.
Step 10. Configure VRF interfaces.
Step 11. Configure PE-CE routing protocols / static routes.
Step 12. Redistribute customer routes into MP-BGP.


The sections that follow examine each step in detail.

Step 1: Configure the Loopback Interface to Be Used as the BGP Update Source and LDP Router ID

A loopback interface should be configured to act as the update source for BGP sessions, as well as the LDP router ID.

Ensure that the IP address on the loopback interface is configured with a 32-bit mask. This will prevent a lot of problems later.

For example, if the IGP used in the MPLS backbone is OSPF, and the loopback interface is not configured with a 32-bit mask, the PE router will advertise a label binding for the loopback address with the mask as specified on the loopback interface. The route advertised in OSPF to neighboring routers, on the other hand, will include a 32-bit mask. This is because OSPF advertises loopback addresses with a 32-bit mask by default (irrespective of the configured mask). The neighboring routers (LSRs) will create a label binding that corresponds to the OSPF route advertised by the PE router (using the advertised 32-bit mask), but because the label binding advertised by the PE router uses the configured non-32-bit mask, an LSP failure will result.

There are two ways around this: either configure the loopback interface on the PE router with a 32-bit mask, or configure the ip ospf network point-to-point command on the loopback interface, which will cause OSPF to advertise the mask as it is actually configured.
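
For example, if a non-32-bit loopback mask must be retained for some reason, the workaround looks like the following sketch (the /24 mask is purely illustrative):

interface Loopback0
 ip address 10.1.1.1 255.255.255.0
 ip ospf network point-to-point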

It is also very important to configure just one update source for MP-BGP if you intend to configure MVPNs. More than one update source can break MVPN.

Example 6-1 shows the configuration of the loopback interface to be used as the BGP update source and LDP router ID.

Example 6-1 Configuration of the Loopback Interface

interface Loopback0
 ip address 10.1.1.1 255.255.255.255

It's good practice to allocate one address block to use for all PE router loopback interface addresses.

Note that PE router loopback addresses should not be summarized in the core because this will break LSPs within the MPLS backbone.

Step 2: Enable CEF

Be sure to enable CEF. If CEF is not enabled on the PE router, MPLS will not function.

Example 6-2 shows how to enable CEF on the router.

Example 6-2 Enabling CEF

ip cef [distributed]

Note the keyword distributed. This is used to enable distributed CEF (dCEF). dCEF is available on high-end platforms such as the 12000 GSR and 7500 series.

Step 3: Configure the LDP

If using LDP in the MPLS backbone, you should configure LDP next. Note that TDP is the default label distribution protocol on Cisco routers. Example 6-3 shows the global configuration of LDP as the label distribution protocol.

Example 6-3 Configuration of LDP as the Label Distribution Protocol

mpls label protocol ldp

Step 4: Configure the TDP/LDP Router ID (Optional)

The next step is to configure the TDP/LDP router ID. This step is optional, but it can make the troubleshooting process easier if you are able to easily identify TDP/LDP routers in the network.

Example 6-4 shows the configuration of the LDP router ID.

Example 6-4 Configuration of the LDP Router ID

mpls ldp router-id Loopback0 [force]

In Example 6-4, the IP address on interface loopback 0 is configured as the LDP router ID. Note the optional force keyword, which ensures that the IP address on interface loopback 0, and not the IP address of any other interface, becomes the LDP router ID.

If the LDP router ID is not explicitly configured as shown in Example 6-4, the LDP ID will become the highest loopback interface address or, in the absence of a loopback interface, the highest IP address configured on a physical interface. It is definitely a good idea to ensure that the LDP ID corresponds to a loopback interface because loopback interfaces are always in an up state.

Step 5: Configure MPLS on Core Interfaces

The next step is to enable MPLS on interfaces connected to other PE and P routers. Note that when MPLS is enabled on the first interface, it is also globally enabled on the router.

Example 6-5 shows the configuration of MPLS on core frame-mode interfaces.

Example 6-5 Configuring MPLS on Core Frame-Mode Interfaces

interface Serial4/0
 mpls ip

As previously mentioned, ATM interfaces can be configured for either frame-mode or cell-mode.

Frame-mode can be configured over ATM PVCs between edge LSRs. In this case, intervening ATM switches do not participate in MPLS at all and do not need to be MPLS-enabled.

Example 6-6 shows the configuration of an ATM interface for frame-mode MPLS.

Example 6-6 Configuration of Frame-mode MPLS on an ATM Interface

interface ATM3/0.1 point-to-point
 ip address 10.20.100.1 255.255.255.0
 pvc 1/50
 encapsulation aal5snap
 !
 mpls ip

In Example 6-6, MPLS is enabled on an ATM PVC with VPI/VCI 1/50. Note that the subinterface type is point-to-point and that the mpls ip command is configured on the subinterface.

ATM interfaces can also be configured for cell-mode MPLS. These interfaces are known as Label Controlled ATM (LC-ATM) interfaces.

Example 6-7 shows the configuration of cell-mode MPLS on an ATM interface of an IOS router.

Example 6-7 Configuration of Cell-Mode MPLS on an ATM Interface

interface ATM3/0.1 mpls
 ip address 10.20.90.1 255.255.255.0
 mpls ip

In Example 6-7, the subinterface type is mpls. Also note the command mpls ip on the subinterface itself.

When cell-mode MPLS is enabled on an ATM interface, a PVC with VPI/VCI 0/32 (by default) is automatically created for control plane traffic.

Step 6: Configure the MPLS VPN Backbone IGP

Although it is possible to use any IGP for IP reachability within the MPLS VPN backbone, IS-IS and OSPF are the two most commonly chosen because they are the only two IGPs that currently support MPLS traffic engineering.

The OSPF and IS-IS protocol configurations covered in the two sections that follow are only examples.

IS-IS

The configuration of IS-IS on the PE router is, to a large extent, standard.

Example 6-8 shows the configuration of IS-IS for IP reachability within the MPLS VPN backbone.

Example 6-8 Configuration of IS-IS as the MPLS VPN Backbone IGP

router isis 
 passive-interface Loopback0
 net 49.0001.0000.0000.0001.00
 is-type level-2-only
 metric-style wide

The router isis command enables IS-IS on the PE router.

Interface loopback 0 is then enabled for IS-IS using the passive-interface loopback0 command. Note that because the interface is passive, no IS-IS packets are needlessly sent on the interface.

Be sure to advertise the BGP update source into IS-IS. If the update source is not advertised, MPLS VPNs will break.

The third command in the configuration is net 49.0001.0000.0000.0001.00. This is used to configure the network entity title (NET). 49.0001 is the area ID, 0000.0000.0001 is the system ID, and .00 is the selector value.

The next command, is-type level-2-only, configures the PE router as a Level 2 (backbone) router only.

Finally, the command metric-style wide configures the router to send and to receive only new style 24- or 32-bit metrics. Support for new style metrics is essential if you intend to use MPLS traffic engineering.

Ensure that all IS-IS routers in the backbone are consistently configured to support standard or wide metrics (or both). IS-IS must also be enabled on each of the PE router's core interfaces.

Example 6-9 shows the configuration of IS-IS on core interfaces.

Example 6-9 Configuration of IS-IS on Core Interfaces

interface FastEthernet1/0
 ip router isis 

In Example 6-9, IS-IS for IP is enabled on interface FastEthernet1/0 using the command ip router isis.

OSPF

OSPF configuration for the backbone is, again, fairly standard.

Example 6-10 shows the configuration of OSPF for IP reachability within the MPLS VPN backbone.

Example 6-10 Configuration of OSPF as the MPLS VPN Backbone IGP

router ospf 100
 passive-interface Loopback0
 network 10.0.0.0 0.255.255.255 area 0

The command router ospf 100 enables OSPF process 100 on the PE router.

All backbone interfaces in network 10.0.0.0/8 are placed in OSPF area 0 using the network 10.0.0.0 0.255.255.255 area 0 command.

Finally, the passive-interface Loopback0 prevents the sending of OSPF packets on interface loopback 0.

Be sure to advertise the BGP update source into OSPF. If the update source is not advertised, MPLS VPNs will break.

Note that if your network consists of ATM-LSRs, make sure that summarization of IGP routes is not configured on P routers. This is because ATM-LSRs have no "IP intelligence" on the data plane.

Step 7: Configure Global BGP Parameters

MP-BGP is used to advertise customer routes across the MPLS VPN backbone between PE routers. The configuration of MP-BGP is a two-step process, with neighbors being configured globally and then activated for MP-BGP route exchange under the VPNv4 (VPN-IPv4) address family.

Example 6-11 shows global BGP configuration on the PE router.

Example 6-11 Global BGP Configuration on the PE Router

router bgp 64512
 no synchronization
 neighbor 10.1.1.4 remote-as 64512
 neighbor 10.1.1.4 update-source Loopback0
 neighbor 10.1.1.6 remote-as 64512
 neighbor 10.1.1.6 update-source Loopback0
 no auto-summary

The first command, router bgp autonomous_system, enables BGP on the PE router.

Global IGP synchronization is then disabled using the no synchronization command.

The command neighbor ip_address remote-as autonomous_system configures the IP address and autonomous system of the remote PE router or route reflector.

Next comes the neighbor ip_address update-source Loopback0. This configures interface loopback 0 as the update source for the BGP session.

It is highly recommended that a single interface (preferably with a 32-bit mask) be configured as the MP-BGP update source. Not doing so might result in broken MPLS VPNs and MVPNs.

The command no auto-summary is used to ensure that routes redistributed into BGP (via the redistribute command) are not summarized at major network boundaries.

One other command that might be useful on the PE router is the no bgp default ipv4-unicast command, which disables the exchange of global BGP (Internet) routes. Only MP-BGP, and not global BGP, routes are required for MPLS VPN functionality.
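
If used, the command is simply added under the BGP process, after which neighbors must be explicitly activated under each address family for which route exchange is required:

router bgp 64512
 no bgp default ipv4-unicast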

Step 8: Activate MP-BGP Neighbors

MP-BGP is used for the exchange of VPN routes between the PE routers. MP-BGP route exchange must be activated under the VPNv4 address family.

Example 6-12 shows the activation of MP-BGP route exchange.

Example 6-12 Activation of MP-BGP Route Exchange

router bgp 64512
!
address-family vpnv4
 neighbor 10.1.1.4 activate
 neighbor 10.1.1.4 send-community extended
 neighbor 10.1.1.6 activate
 neighbor 10.1.1.6 send-community extended
 no auto-summary
 exit-address-family

The command address-family vpnv4 is used to enter the VPNv4 address family configuration mode.

The neighbor ip_address activate is used to activate MP-BGP route exchange.

The command neighbor ip_address send-community extended is configured by default and enables the exchange of BGP extended communities, such as route target and site of origin.

TIP

If you want BGP peers to also exchange standard BGP communities, you must use the keyword both in place of the extended keyword.

Finally, the no auto-summary command specifies that redistributed routes should not be summarized at major network boundaries. This command is configured by default.

Note that if route reflectors are used for VPN route exchange between PE routers, ensure that they are also configured for MP-BGP route exchange between route reflector clients.
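
On the route reflector itself, the VPNv4 portion of the configuration might look like the following sketch (10.1.1.1 is Chengdu_PE's loopback from the sample configuration; this is illustrative rather than a complete route reflector configuration):

router bgp 64512
 neighbor 10.1.1.1 remote-as 64512
 neighbor 10.1.1.1 update-source Loopback0
 !
 address-family vpnv4
  neighbor 10.1.1.1 activate
  neighbor 10.1.1.1 route-reflector-client
  neighbor 10.1.1.1 send-community extended
 exit-address-family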

Step 9: Configure the VRF Instances

The next step is the configuration of the VRFs, as demonstrated in Example 6-13.

Example 6-13 Configuration of a VRF

ip vrf mjlnet_VPN
 rd 64512:100
 route-target export 64512:100
 route-target import 64512:100

The first line of the configuration creates a VRF named mjlnet_VPN. The rd command then configures the Route Distinguisher for the VRF (in this case, 64512:100). The route-target export command specifies the RT attached to routes exported from this VRF into MP-BGP, and the route-target import command specifies the RT that MP-BGP routes must carry in order to be imported into this VRF.

Step 10: Configure VRF Interfaces

After configuring the VRF, the next step is to associate customer interfaces with it.

Example 6-14 shows the configuration of VRF interfaces on PE routers.

Example 6-14 Configuration of VRF Interfaces

interface Serial4/1
 ip vrf forwarding mjlnet_VPN

The ip vrf forwarding mjlnet_VPN command associates an interface with a customer VRF. In this case, interface serial 4/1 is associated with VRF mjlnet_VPN.

Step 11: Configure PE-CE Routing Protocols / Static Routes

Configuration of the PE-CE routing protocol varies according to whether RIP version 2, EIGRP, OSPF, or EBGP is being used. Static routes can also be used for PE-CE connectivity.

The sections that follow describe configuration of the various PE-CE routing protocols.

RIP Version 2

When configuring RIP version 2 for PE-CE routing, most of the configuration is under the IPv4 address family. Example 6-15 shows the configuration of RIP version 2 for PE-CE routing.

Example 6-15 Configuration of RIP Version 2 for PE-CE Routing

router rip
 version 2
 !
 address-family ipv4 vrf mjlnet_VPN
 version 2
 redistribute bgp 64512 metric transparent
 network 172.16.0.0
 no auto-summary
 exit-address-family

The command router rip enables RIP on the PE router. RIP version 2 is then configured using the command version 2.

Next comes the address-family ipv4 vrf vrf_name command. RIP configuration for the VRF is configured under the IPv4 address family.

By specifying version 2 globally (directly under router rip), it is inherited by all the address families configured under RIP.

Under the address family, be sure to configure redistribution from MP-BGP into RIP. Alternatively, if the customer network is large, you can originate a default route into RIP instead. Remember that customer routes are advertised between PE routers using MP-BGP. These routes are then imported into the customer VRFs. The command redistribute bgp autonomous_system metric transparent can then be used to redistribute these routes into RIP for advertisement to the attached customer site or sites.

Note the use of metric transparent. RIP metrics are preserved when they are advertised in MP-BGP (they are copied into the MED attribute), which ensures that these metrics are redistributed back into RIP unmodified.

Make sure that a metric, whether a specific metric or the keyword transparent, is configured when redistributing MP-BGP routes into RIP. If one is not specified, the routes may not be redistributed.

The rest of the configuration is pretty standard stuff, with the network command used to specify the networks enabled for RIP, and the no auto-summary command used to ensure that networks are not summarized at major network boundaries. Note that no auto-summary is on by default under the address family.

EIGRP

Configuration of EIGRP is similar to RIP, with most parameters configured under the IPv4 address-family.

Example 6-16 shows a sample configuration of EIGRP for PE-CE routing.

Example 6-16 Configuration of EIGRP for PE-CE Routing

router eigrp 10
 no auto-summary
 !
 address-family ipv4 vrf mjlnet_VPN
 redistribute bgp 64512 metric 1 1 255 1 1500
 network 172.16.0.0
 no auto-summary
 autonomous-system 100
 exit-address-family

The router eigrp 10 command enables EIGRP autonomous system 10 on the PE router.

The second command is no auto-summary. This ensures that networks are not summarized at major network boundaries.

The configuration of EIGRP for PE-CE connectivity itself is specified under an IPv4 address-family (address-family ipv4 vrf vrf_name). Each customer VRF requires a separate address family.

The configuration under the IPv4 address family starts with redistribution of MP-BGP routes from other customer sites into EIGRP using redistribute bgp autonomous_system metric metric (bandwidth, delay, reliability, load, and MTU). Make sure that you specify a metric when redistributing MP-BGP routes into EIGRP. If one is not specified, redistribution may fail. Next is the network command, which is used to specify the networks enabled for EIGRP. The no auto-summary command is configured by default under the address family.

The final command under the address family is autonomous-system autonomous_system. This is the EIGRP autonomous system number for the customer VPN. If this does not match the autonomous system number configured on the CE router, no adjacency will be formed.

OSPF

When configuring OSPF, a separate OSPF process must be configured for each customer VRF running OSPF as the PE-CE routing protocol.

Example 6-17 shows the configuration of OSPF for customer site routing.

Example 6-17 Configuration of OSPF for PE-CE Routing

router ospf 100 vrf mjlnet_VPN
 redistribute bgp 64512 subnets
 network 172.16.4.0 0.0.0.255 area 0

The first command in the configuration is router ospf process_ID vrf vrf_name. In this case, OSPF process 100 is configured for VRF mjlnet_VPN.

The redistribute bgp autonomous_system subnets command configures redistribution of MP-BGP routes (routes from remote sites) into OSPF. Note the subnets keyword. This ensures that subnets, and not just major networks, are redistributed. The network command then enables OSPF on the interfaces connected to the customer site and places them in area 0.

EBGP

Configuration of EBGP for PE-CE connectivity is pretty straightforward. Again, most of the configuration is under the IPv4 address family.

Example 6-18 shows the configuration of EBGP for PE-CE routing.

Example 6-18 Configuration of EBGP for PE-CE Routing

router bgp 64512
!
address-family ipv4 vrf mjlnet_VPN
 neighbor 172.16.4.2 remote-as 65001
 neighbor 172.16.4.2 activate
 no auto-summary
 no synchronization
 exit-address-family

The address-family ipv4 vrf vrf_name command is used to enter the IPv4 address family configuration mode.

The first command under the IPv4 address family is neighbor ip_address remote-as autonomous_system. This configures the IP address and autonomous system of the CE router.

Next is neighbor ip_address activate. This activates the BGP session with the CE router.

Finally, the no auto-summary and no synchronization commands are used to disable auto summarization at major network boundaries for routes redistributed via the redistribute command into BGP, and to disable IGP synchronization. These two commands are enabled by default.

Note that unlike with the other PE-CE routing protocols, no redistribution from MP-BGP into EBGP is necessary; BGP advertises the routes in the VRF to the CE router directly.

Static Routes

Static routes can also be used for PE-CE connectivity. Example 6-19 shows configuration of static routes for PE-CE connectivity.

Example 6-19 Configuration of Static Routes for PE-CE Connectivity

ip route vrf mjlnet_VPN 172.16.1.0 255.255.255.0 172.16.4.2 [permanent]

Configuration of static routes is the same as that for regular static routes with the network, mask, and next-hop specified. The vrf keyword must be used, however, to ensure that the static route is placed in the VRF specified (in this case, mjlnet_VPN).

Note also the permanent keyword. This can optionally be used to ensure that the route will remain in the VRF even if reachability to the next hop is lost. This can be important for stability when redistributing static routes into MP-BGP.

Step 12: Redistribute Customer Routes into MP-BGP

The final step is to configure the redistribution of customer routes into MP-BGP, as demonstrated in Example 6-20.

Example 6-20 Redistribution of Customer Routes into MP-BGP

router bgp 64512
!
address-family ipv4 vrf mjlnet_VPN
 redistribute rip
 no auto-summary
 no synchronization
 exit-address-family

The address-family ipv4 vrf vrf_name command is used to enter the IPv4 address family configuration mode.

The redistribute rip command is used to redistribute customer RIPv2 routes into MP-BGP.

If the PE-CE routing protocol is EIGRP, the command redistribute eigrp autonomous_system is used. Ensure that the autonomous system number configured corresponds to that specified under the EIGRP IPv4 address family.
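
For illustration, a sketch of this redistribution under the VRF address family might look like the following, assuming the EIGRP autonomous system 100 used for mjlnet_VPN in Example 6-16:

router bgp 64512
 !
 address-family ipv4 vrf mjlnet_VPN
  redistribute eigrp 100
  no auto-summary
  no synchronization
  exit-address-family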

For OSPF, the command redistribute ospf process_ID match internal external 1 external 2 can be used. Note that in this case, internal and external type 1 and 2 routes are redistributed.

Finally, if static routes are being used, the command redistribute static can be used. It is also worth noting that if EBGP is being used, redistribution is not required.
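
By way of illustration, a sketch covering the OSPF and static cases might look like the following. The OSPF process number 200 matches the cisco_VPN configuration shown later in Example 6-21; the redistribute static line is included purely for illustration and assumes that VRF static routes have been configured:

router bgp 64512
 !
 address-family ipv4 vrf cisco_VPN
  redistribute ospf 200 match internal external 1 external 2
  redistribute static
  no auto-summary
  no synchronization
  exit-address-family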

Finally, the no auto-summary and no synchronization commands are defaults that specify that redistributed routes should not be summarized at major network boundaries, and that synchronization should be disabled.

That concludes the configuration of the PE router.

PE Router Sample Configuration

Example 6-21 shows a complete sample configuration of a PE router.

Example 6-21 Complete Sample Configuration of a PE Router

Chengdu_PE#show running-config
Building configuration...
Current configuration : 3434 bytes
!
version 12.0
service nagle
no service pad
service tcp-keepalives-in
service timestamps debug datetime msec localtime show-timezone
service timestamps log datetime msec localtime show-timezone
service password-encryption
!
hostname Chengdu_PE
!
logging buffered 16384 debugging
enable secret 5 $1$4pDG$mVThUgDZG33pNYZ20.UKU/
!
ip subnet-zero
no ip source-route
!
! Enable Cisco Express Forwarding (CEF)
ip cef
!
!
no ip finger
no ip bootp server
!
! Configure the VPN Routing and Forwarding (VRF) instances
ip vrf mjlnet_VPN
 rd 64512:100
 route-target export 64512:100
 route-target import 64512:100
!
ip vrf cisco_VPN
 rd 64512:200
 route-target export 64512:200
 route-target import 64512:200
!
! Configure the label distribution protocol
mpls label protocol ldp
no mpls traffic-eng auto-bw timers frequency 0
!
! Configure the TDP/LDP router-id (tag-switching tdp router-id = mpls ldp router-id)
tag-switching tdp router-id Loopback0 force
!
! Configure the loopback interface to be used as the BGP update source and LDP router ID
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
 no ip directed-broadcast
!
! Configure MPLS on core interfaces
interface FastEthernet1/0
 ip address 10.20.10.1 255.255.255.0
 no ip redirects
 no ip directed-broadcast
 no ip proxy-arp
 ip router isis
 tag-switching ip
 no cdp enable
!
! Configure VRF interfaces
interface Serial4/1
 ip vrf forwarding mjlnet_VPN
 ip address 172.16.4.1 255.255.255.0
 no ip redirects
 no ip directed-broadcast
 no ip proxy-arp
 encapsulation ppp
 no cdp enable
!
interface Serial4/2
 ip vrf forwarding cisco_VPN
 ip address 192.168.4.1 255.255.255.0
 no ip redirects
 no ip directed-broadcast
 no ip proxy-arp
 no cdp enable
!
! Configure PE-CE routing protocol for cisco_VPN
router ospf 200 vrf cisco_VPN
 log-adjacency-changes
 redistribute bgp 64512 subnets
 network 192.168.4.0 0.0.0.255 area 0
!
! Configure the MPLS VPN backbone IGP
router isis
 passive-interface Loopback0
 net 49.0001.0000.0000.0001.00
 is-type level-2-only
 metric-style wide
!
! Configure PE-CE routing protocol for mjlnet_VPN
router rip
 version 2
 !
 address-family ipv4 vrf mjlnet_VPN
 version 2
 redistribute bgp 64512 metric transparent
 network 172.16.0.0
 no auto-summary
 exit-address-family
!
! Configure basic BGP parameters
router bgp 64512
 no synchronization
 bgp log-neighbor-changes
 neighbor 10.1.1.4 remote-as 64512
 neighbor 10.1.1.4 update-source Loopback0
 neighbor 10.1.1.6 remote-as 64512
 neighbor 10.1.1.6 update-source Loopback0
 no auto-summary
 !
 ! Configure MP-BGP neighbor relationships
 address-family vpnv4
 neighbor 10.1.1.4 activate
 neighbor 10.1.1.4 send-community extended
 neighbor 10.1.1.6 activate
 neighbor 10.1.1.6 send-community extended
 no auto-summary
 exit-address-family
 !
! Redistribute customer routes into MP-BGP
 address-family ipv4 vrf cisco_VPN
 redistribute ospf 200 match internal external 1 external 2
 no auto-summary
 no synchronization
 exit-address-family
 !
 address-family ipv4 vrf mjlnet_VPN
 redistribute rip
 no auto-summary
 no synchronization
 exit-address-family
!
ip classless
!
logging trap debugging
!
!
line con 0
 exec-timeout 0 0
 password 7 1511021F0725
 login
line aux 0
line vty 0 4
 password 7 110A1016141D
 login
!
end

You might notice that a number of commands discussed in this section are not immediately apparent in the configuration shown in Example 6-21. An example is the mpls ip command. In fact, the mpls keyword is translated into the tag-switching keyword. This allows backward compatibility with versions of the Cisco IOS software that do not support the mpls keyword.

The only exception to this is the mpls label protocol command, which remains in its original form.

Configuring the P Router

Configuration of P routers is, by comparison with that of PE routers, very simple.

The six basic steps in the configuration are as follows:

Step 1

Configure the loopback interface to be used as the LDP router ID.

Step 2

Enable CEF.

Step 3

Configure the label distribution protocol.

Step 4

Configure the TDP/LDP router ID (optional).

Step 5

Configure MPLS on core interfaces.

Step 6

Configure IS-IS or OSPF as the MPLS VPN backbone IGP.


As you can see, these six steps are identical to the first six steps for the configuration of the PE router. Please refer to the previous section for an explanation of each of these steps.

Example 6-22 shows a complete sample configuration of a P router.

Example 6-22 Complete Sample Configuration of a P Router

Chengdu_P#show running-config
Building configuration...
Current configuration : 1991 bytes
!
version 12.0
service nagle
no service pad
service tcp-keepalives-in
service timestamps debug datetime msec localtime show-timezone
service timestamps log datetime msec localtime show-timezone
service password-encryption
!
hostname Chengdu_P
!
logging buffered 16384 debugging
no logging console
enable secret 5 $1$4pDG$mVThUgDZG33pNYZ20.UKU/
!
ip subnet-zero
no ip source-route
!
! Enable Cisco Express Forwarding (CEF)
ip cef
!
!
no ip finger
no ip bootp server
!
! Configure the label distribution protocol
mpls label protocol ldp
no mpls traffic-eng auto-bw timers frequency 0
!
! Configure the TDP/LDP router-id
tag-switching tdp router-id Loopback0 force
!
! Configure the loopback interface to be used as the LDP router id
interface Loopback0
 ip address 10.1.1.2 255.255.255.255
 no ip directed-broadcast
!
! Configure MPLS on core interfaces
interface FastEthernet1/0
 ip address 10.20.10.2 255.255.255.0
 no ip redirects
 no ip directed-broadcast
 no ip proxy-arp
 ip router isis
 tag-switching ip
 no cdp enable
!
interface Serial1/0
 ip address 10.20.20.1 255.255.255.0
 no ip redirects
 no ip directed-broadcast
 no ip proxy-arp
 ip router isis
 encapsulation ppp
 tag-switching ip
 no fair-queue
 no cdp enable
!
interface Serial1/1
 ip address 10.20.40.1 255.255.255.0
 no ip redirects
 no ip directed-broadcast
 no ip proxy-arp
 ip router isis
 encapsulation ppp
 tag-switching ip
 no fair-queue
 no cdp enable
!
!
! Configure IS-IS as the MPLS VPN backbone IGP
router isis
 passive-interface Loopback0
 net 49.0001.0000.0000.0002.00
 is-type level-2-only
 metric-style wide
!
ip classless
!
logging trap debugging
!
line con 0
 exec-timeout 0 0
 password 7 1511021F0725
 login
line aux 0
line vty 0 4
 password 7 110A1016141D
 login
!
end

Notice again that the mpls keyword has been converted into the tag-switching keyword for backward compatibility. This completes the configuration of MPLS VPNs.

Configuring MVPNs

MVPNs are configured on PE routers in an MPLS VPN backbone. In addition, PIM must be enabled on CE and P routers. This section discusses the configuration of CE, PE, and P routers to enable MVPN in the MPLS VPN backbone.

Configuring the CE Router

MVPN supports the configuration of PIM within customer sites. PIM must be enabled on the CE interface connected to the MVPN-enabled PE router.

Example 6-23 shows the configuration of PIM on the CE router interface connected to the PE router.

Example 6-23 Configuration of PIM on CE Router Interfaces

interface Serial0
 ip pim sparse-dense-mode

In this example, the ip pim sparse-dense-mode command is used to enable PIM sparse-dense mode on the interface connected to the PE router (serial 0, in this case).

Configuring the Backbone (P Routers)

PIM must be configured on P routers within the MPLS VPN backbone. PIM configuration can be flexible, but PIM dense mode is not supported.

The service provider backbone may, for example, be configured for PIM Sparse-Mode (PIM-SM) or PIM Source Specific Multicast (PIM-SSM). If PIM-SSM is configured, then a rendezvous point (RP) is not required.

Because PIM can be configured in a variety of ways within the provider backbone, no particular configuration is given here.
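
Purely as an illustration, one possible arrangement is sparse-dense mode on the core interfaces with a statically configured RP. The following sketch assumes Chengdu_P's interfaces from Example 6-22 and an RP address of 10.1.1.2; both are hypothetical choices, and Auto-RP or PIM-SSM could be used instead:

ip multicast-routing
!
interface FastEthernet1/0
 ip pim sparse-dense-mode
!
interface Serial1/0
 ip pim sparse-dense-mode
!
ip pim rp-address 10.1.1.2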

Configuring the PE Router

The five basic steps in the configuration of the PE router are as follows:

Step 1

Globally enable multicasting and MVRFs.

Step 2

Configure default and (optionally) data MDTs.

Step 3

Configure PIM on the BGP update source interface.

Step 4

Configure PIM on the core interfaces.

Step 5

Configure PIM on the VRF interface.


The following sections detail each step.

Step 1: Globally Enable Multicasting and MVRFs

Multicasting must be enabled globally and for MVRFs.

Example 6-24 shows the configuration of global multicasting and MVRFs.

Example 6-24 MVRF Configuration

ip multicast-routing [distributed]
ip multicast-routing vrf mjlnet_VPN [distributed]

The command ip multicast-routing [distributed] is used to globally enable multicast on the PE router.

The following command, ip multicast-routing vrf vrf_name [distributed], is then used to enable multicast for a VRF. In this case, multicast is enabled for VRF mjlnet_VPN.

Note that the distributed keyword enables support for Multicast Distributed Switching (MDS). This feature is available on the Cisco 7500 and 12000 series routers.

Step 2: Configure Default and (Optionally) Data MDTs

The next step is to configure the default and data MDTs.

Example 6-25 shows the configuration of default and data MDTs.

Example 6-25 Configuration of the Default and Data MDTs

ip vrf mjlnet_VPN
 mdt default 239.0.0.1
 mdt data 239.0.1.0 0.0.0.7 threshold 50 list 101
!
access-list 101 permit ip any 233.253.233.0 0.0.0.255

The ip vrf vrf_name command is used to begin VRF configuration. The default MDT group is then configured using the mdt default group_address command. In this example, the group address 239.0.0.1 is used.

Note that the group address for the default MDT must be the same on all PE routers for the same VPN. Also, each VPN must have a unique group address for the default MDT.

The command mdt data group_address wildcard_bits [threshold threshold_value] [list access-list] is then used to configure the data MDT groups.

In this case, a pool of eight group addresses (239.0.1.0 to 239.0.1.7) is configured for data MDTs.

A threshold of 50 kbps is specified for those multicast groups that match access list 101 (233.253.233.0 to 233.253.233.255). This means that a data MDT will be dynamically created whenever multicast traffic matching access list 101 exceeds the 50-kbps threshold. When configuring data MDTs, the threshold and list parameters are optional.

Step 3: Configure PIM on the BGP Update Source Interface

The next step is the configuration of PIM on the BGP update source. Example 6-26 shows the configuration of PIM on the BGP update source interface.

Example 6-26 Configuration of PIM on the BGP Update Source Loopback Interface

interface Loopback0
 ip pim sparse-dense-mode

The ip pim sparse-dense-mode command is used to enable PIM sparse-dense mode on the BGP update source (in this case, interface loopback 0).

Make sure that only one MP-BGP update source is used on each PE router. If more than one is used, this can break MVPN.

Also ensure that ip mroute-cache [distributed] is configured on the loopback interface. Again, the distributed keyword enables support for MDS.
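
Putting these together, the loopback configuration might be sketched as follows; the distributed keyword is assumed here only for platforms that support MDS, such as the 7500 and 12000 series:

interface Loopback0
 ip pim sparse-dense-mode
 ip mroute-cache distributed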

Step 4: Configure PIM on the Core Interfaces

Next, you should configure PIM on core interfaces.

Example 6-27 shows the configuration of PIM on the core interfaces.

Example 6-27 Configure PIM on Core Interfaces

interface FastEthernet1/0
 ip pim sparse-dense-mode

The ip pim sparse-dense-mode command is used to enable PIM sparse-dense mode on the core interface (in this case, interface Fast Ethernet 1/0).

Step 5: Configure PIM on the VRF Interfaces

Finally, you should configure PIM on the VRF interfaces.

Example 6-28 shows the configuration of PIM on the VRF interfaces.

Example 6-28 Configuration of PIM on the VRF Interfaces

interface Serial4/1
 ip pim sparse-dense-mode

The ip pim sparse-dense-mode command is used to enable PIM sparse-dense mode on the VRF interface (in this case, interface serial 4/1).

Configuring TE Tunnels to Carry MPLS VPN Traffic

MPLS VPN traffic can be carried over MPLS traffic engineering (TE) tunnels.

Two different configurations are discussed in this section:

  • Transport of MPLS VPN traffic over TE tunnels between PE routers

  • Transport of MPLS VPN traffic over TE tunnels between P routers

Figure 6-28 shows both of these configurations.

Figure 6-28

Figure 6-28. Transport of MPLS VPN Traffic over TE Tunnels Between PE Routers and Between P Routers

Configuring TE Tunnels Between PE Routers

In this configuration, PE routers are configured as TE tunnel head-end and tail-end routers. P routers are configured as TE midpoint routers.

Configuring the Head-End Router

The configuration of the head-end (PE) router consists of the following steps, which are examined in detail in the sections that follow:

Step 1

Globally enable MPLS TE.

Step 2

Configure interface parameters.

Step 3

Configure the backbone IGP for MPLS TE.

Step 4

Configure the MPLS TE tunnel.


Step 1: Globally Enable MPLS TE

The first step in the configuration of the head-end router is to globally enable MPLS TE.

Example 6-29 shows how to enable MPLS TE on the head-end router.

Example 6-29 Enabling MPLS TE on the Head-end Router

mpls traffic-eng tunnels

The mpls traffic-eng tunnels command globally enables MPLS TE on the router.

Step 2: Configure Interface Parameters

The next step is to enable MPLS TE and RSVP on core interfaces.

Example 6-30 shows the configuration of MPLS TE on core interfaces.

Example 6-30 Configuration of MPLS TE on Core Interfaces

interface Serial4/0
 mpls traffic-eng tunnels
 ip rsvp bandwidth 1024

The command mpls traffic-eng tunnels enables MPLS TE on core interfaces.

The ip rsvp bandwidth interface_bandwidth command configures the maximum reservable bandwidth on the interface. In this case, 1024 kbps is configured as the amount of bandwidth reservable on the interface.

Step 3: Configure the Backbone IGP for MPLS TE

The backbone IGP must be configured to support MPLS TE. The only two IGPs that currently support MPLS TE are IS-IS and OSPF.

Example 6-31 shows the configuration of IS-IS to support MPLS TE.

Example 6-31 Configuration of IS-IS to Support MPLS TE

router isis
 metric-style wide
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng level-2

The router isis command begins IS-IS configuration on the router.

The command metric-style wide configures IS-IS to send and receive 24- and 32-bit metrics. Support for wide metrics is essential for MPLS TE.

The next command, mpls traffic-eng router-id interface, configures a router ID to be used with TE.

The final command in the configuration is mpls traffic-eng level-1 | level-2. This command enables MPLS TE within the specified IS-IS level. The appropriate IS-IS level should be specified, but note that if interarea MPLS TE is being used, both level 1 and level 2 should be specified on level 1/2 routers.

Example 6-32 shows the configuration for OSPF to support MPLS TE.

Example 6-32 Configuration of OSPF to Support MPLS TE

router ospf 10
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0

The router ospf process_id command begins OSPF configuration on the router.

Next is the mpls traffic-eng router-id interface command, which configures a router ID for MPLS TE.

Finally, the command mpls traffic-eng area area_id enables MPLS TE within the specified area. If you are using interarea MPLS TE, ensure that all areas that the tunnel will transit are specified on Area Border Routers (ABRs). On non-ABRs, configure the area in which the router is.
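
As an illustration, on a hypothetical ABR sitting between area 0 and area 1 that an interarea tunnel will transit, the OSPF configuration might be sketched as follows:

router ospf 10
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0
 mpls traffic-eng area 1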

Step 4: Configure the MPLS TE Tunnel

The next step in the configuration of the head-end router is the configuration of the TE tunnel itself.

Example 6-33 shows the configuration of the MPLS TE tunnel.

Example 6-33 Configuration of the MPLS TE Tunnel

interface Tunnel10
 ip unnumbered Loopback0
 tunnel destination 10.1.1.4
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng bandwidth 256
 tunnel mpls traffic-eng path-option 10 dynamic

In Example 6-33, tunnel interface 10 is configured for MPLS TE.

The command ip unnumbered interface is configured next. The tunnel interface must have an IP address; although ip unnumbered is not mandatory, it is recommended. Do not forget this when configuring the TE tunnel, because if an IP address is not configured on the tunnel interface, no traffic will be forwarded over it.

The next command, tunnel destination ip_address, configures the TE tunnel destination. This is the MPLS router ID of the tail-end router.

The tunnel mode mpls traffic-eng command enables the tunnel interface for MPLS TE.

Following the tunnel mode mpls traffic-eng command is tunnel mpls traffic-eng autoroute announce. This command allows the IGP to use the tunnel in SPF calculations, which, in turn, allows VPN traffic to be forwarded over the tunnel. Note that tunnel mpls traffic-eng autoroute announce is not supported for interarea tunnels.

Another method of forwarding VPN traffic over TE tunnels is to use static routes. Make sure that you configure either autoroute or static routes to direct traffic into the TE tunnel. If you do not do this, the TE tunnel will not carry VPN traffic.
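
As a sketch of the static route approach, a host route for the BGP next hop (the tail-end PE's loopback, 10.1.1.4 in Example 6-33) could be pointed at the tunnel interface:

ip route 10.1.1.4 255.255.255.255 Tunnel10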

Next is the tunnel mpls traffic-eng bandwidth bandwidth command. This command is optional and specifies an amount of bandwidth to be reserved for the TE tunnel along its path.

The final command is tunnel mpls traffic-eng path-option number dynamic. This command configures the tunnel to take a dynamically calculated route across the network to the tail-end router. Note that this command is not supported for interarea tunnels.

Another option is to use an explicit path. When using explicit paths, the path across the backbone is specified on a hop-by-hop basis.

Example 6-34 shows the configuration of an explicit path.

Example 6-34 Configuration of an Explicit Path for an MPLS TE Tunnel

interface Tunnel10
 tunnel mpls traffic-eng path-option 5 explicit name TEPATH1
!
ip explicit-path name TEPATH1 enable
 next-address 10.20.10.2
 next-address 10.20.40.2
 next-address 10.20.50.2
 next-address 10.20.30.2

In Example 6-34, an explicit path called TEPATH1 is specified on a hop-by-hop basis across the backbone. These hops can be either interface addresses or TE IDs.

When specifying an explicit path for interarea tunnels, be sure to use the loose keyword for each hop that corresponds to an IS-IS level-1/level-2 router or OSPF ABR. In addition, the hop-by-hop path should include the addresses of IS-IS level-1/level-2 routers or OSPF ABRs on the way to the tail-end router.
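
As an illustration, an interarea explicit path might be sketched as follows; the path name and addresses are placeholders for the router IDs of the transit ABRs (or IS-IS level-1/level-2 routers) and the tail-end router:

ip explicit-path name TEPATH2 enable
 next-address loose 10.1.1.3
 next-address loose 10.1.1.4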

Configuring the Midpoint and Tail-End Routers

The configuration of the midpoint and the tail-end routers is to a large extent the same as that for the head-end router.

The steps used to configure intermediate and tail-end routers are as follows:

Step 1

Globally enable MPLS TE.

Step 2

Configure interface parameters.

Step 3

Configure the IGP for MPLS TE.


As you can see, the only difference in the configuration is that a tunnel interface is not configured. Refer to the previous section for an explanation of the configuration of these steps. Remember that TE tunnels are unidirectional, so you will need another TE tunnel configured in the opposite direction for bidirectional TE traffic forwarding.
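
For instance, assuming the tail-end of the tunnel configured in Example 6-33 is HongKong_PE, the return tunnel toward Chengdu_PE (10.1.1.1) might be sketched as follows; the tunnel number 20 is arbitrary:

interface Tunnel20
 ip unnumbered Loopback0
 tunnel destination 10.1.1.1
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng bandwidth 256
 tunnel mpls traffic-eng path-option 10 dynamic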

TE Tunnels Between P Routers

The configuration of a TE tunnel to support MPLS VPN traffic between P routers is much the same as that for a tunnel between PE routers but with one key difference on the head-end router: enabling LDP (or TDP) on the tunnel interface.

Because the configuration for intermediate and tail-end routers is the same as that described in the previous section, only the configuration of the head-end router is described here.

Example 6-35 shows the configuration of the MPLS TE tunnel interface.

Example 6-35 Configuration of the MPLS TE Tunnel Interface

interface Tunnel10
 ip unnumbered Loopback0
 no ip directed-broadcast
 mpls ip
 tunnel destination 10.1.1.3
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng bandwidth 256
 tunnel mpls traffic-eng path-option 5 dynamic

The key difference in the configuration of an MPLS TE tunnel between P routers is the mpls ip command, which enables LDP (or TDP) on the tunnel interface. This is crucial if VPN traffic is not to be dropped by the tail-end router.

Note that although it is not strictly required, you may also want to configure the mpls ip command on TE tunnels between PE routers—it cannot hurt, and you never know when it might become essential because of a network topology change.

Refer to the previous section for an explanation of other commands configured in Example 6-35.

Troubleshooting MPLS VPNs

MPLS VPNs are relatively complex, but by adopting an end-to-end, step-by-step approach, you can troubleshoot them quickly and efficiently. The process of troubleshooting MPLS VPNs can be broken down into two basic elements: troubleshooting route advertisement between the customer sites, and troubleshooting the LSP across the provider backbone.

The flowcharts in Figures 6-29 and 6-30 describe the processes used for troubleshooting route advertisement between the customer sites and troubleshooting the LSPs across the provider backbone. You can use these flowcharts to quickly access the section of the chapter relevant to problems you are experiencing on your network.

Figure 6-29

Figure 6-29. Flowchart for Troubleshooting Route Advertisement Between the Customer Sites in an MPLS VPN

Figure 6-30

Figure 6-30. Flowchart for Troubleshooting the LSPs Across the Provider MPLS Backbone

These two MPLS VPN troubleshooting elements are discussed in the sections that follow. Before diving in, however, it is a good idea to try to locate the issue using the ping and traceroute commands.

The sample topology used as a reference throughout this section is illustrated in Figure 6-31.

Figure 6-31

Figure 6-31. Sample MPLS VPN Topology

Newer Cisco IOS software commands (such as show mpls ldp bindings) are used in the sections that follow. Table 6-2 at the end of the chapter shows newer commands and their older equivalents (such as show tag-switching tdp bindings). Note, however, that almost without exception, older commands use the tag-switching keyword in place of the mpls keyword, and the tdp keyword in place of the ldp keyword.

Locating the Problem in an MPLS VPN

Two commands that are particularly good for locating problems in the MPLS VPN are ping and traceroute.

The ping command can be used to give you a general idea of the location of the problem. It can be used to verify both the LSP and route advertisement across the MPLS VPN backbone.

The traceroute command, on the other hand, can be used for a more detailed examination of the LSP.

Note that if you are using Cisco IOS 12.0(27)S or above, you can also take advantage of the ping mpls and trace mpls MPLS Embedded Management commands to test LSP connectivity and trace LSPs, respectively. These commands use MPLS echo request and reply packets (labeled UDP packets on port 3503) and allow you to specify a range of options, including datagram size, sweep size range, TTL (maximum number of hops), MPLS echo request timeouts, MPLS echo request intervals, and Experimental bit settings.
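
As a sketch of how these commands might be invoked against the egress PE router's loopback (output omitted), assuming a release that supports MPLS Embedded Management:

Chengdu_PE#ping mpls ipv4 10.1.1.4/32
Chengdu_PE#trace mpls ipv4 10.1.1.4/32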

Verifying IP Connectivity Across the MPLS VPN

As previously mentioned, the ping command can be useful in locating problems in the MPLS VPN. Two tests that can be very useful are to ping from the PE router to the connected CE router, and from the ingress PE router to the egress PE router.

Can You Ping from the PE to the Connected CE?

The first step in verifying IP connectivity across the MPLS VPN is to check whether you can ping from both the ingress and egress PE routers to their respective connected CE routers. Do not forget to specify the VRF when pinging the CE router.

Example 6-36 shows a ping test from the PE router (Chengdu_PE) to the connected CE router (CE2).

Example 6-36 Pinging the Connected CE Router

Chengdu_PE#ping vrf mjlnet_VPN 172.16.4.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.4.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 148/148/152 ms
Chengdu_PE#

If the ping is not successful, there may be a problem with the configuration of the VRF interface, the configuration of the connected CE router, or the PE-CE attachment circuit.

Can You Ping from the Ingress PE to the Egress PE (Globally and in the VRF)?

If you are able to ping from the PE router to the attached CE router, you should now try pinging between the ingress and egress PE routers' BGP update sources (typically loopback interfaces), as shown in Example 6-37.

Example 6-37 Pinging Between the Ingress and Egress Routers' BGP Update Sources

Chengdu_PE#ping
Protocol [ip]:
Target IP address: 10.1.1.4
Repeat count [5]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface: 10.1.1.1
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.1.4, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 88/90/92 ms
Chengdu_PE#

If the ping is not successful, there might be a problem with the backbone IGP, or the ingress or egress router's BGP update source is not being advertised into the backbone IGP.

If you are able to ping between the ingress and egress PE routers' BGP update sources, try pinging from the VRF interface on the ingress PE to the VRF interface on the egress PE router.

Example 6-38 shows the output of a ping from the VRF interface on the ingress PE router to the VRF interface on the egress PE router.

Example 6-38 Pinging the VRF Interface on the Egress PE Router

Chengdu_PE#ping vrf mjlnet_VPN 172.16.8.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.8.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 88/90/92 ms
Chengdu_PE#

If the ping is not successful, it may indicate a problem with the VRF interface on either the ingress or egress PE router; it might indicate a problem with the LSP between the ingress and egress PE routers; or it might indicate a problem with the advertisement of customer VPN routes across the MPLS VPN backbone from the egress PE router to the ingress PE router.

Using traceroute to Verify the LSP

One very useful tool for verifying MPLS LSPs is the traceroute command.

When using traceroute on a PE or P router, the label stack used for packet forwarding is displayed.

Global traceroute can be used to trace an LSP across the MPLS backbone from the ingress to the egress PE router.

In Example 6-39, the LSP is traced from the ingress PE (Chengdu_PE) to the egress PE (HongKong_PE).

Example 6-39 Tracing the LSP from the Ingress PE to the Egress PE Router

Chengdu_PE#traceroute 10.1.1.4
Type escape sequence to abort.
Tracing the route to 10.1.1.4
 1 10.20.10.2 [MPLS: Label 20 Exp 0] 48 msec 48 msec 228 msec
 2 10.20.20.2 [MPLS: Label 17 Exp 0] 32 msec 32 msec 32 msec
 3 10.20.30.2 16 msec 16 msec *
Chengdu_PE#

Highlighted line 1 shows that ingress PE router Chengdu_PE imposes IGP label 20 on the packet and forwards it to Chengdu_P (10.20.10.2).

In highlighted line 2, Chengdu_P swaps label 20 for label 17, and the packet transits the link to HongKong_P (10.20.20.2).

In highlighted line 3, HongKong_P pops the label and forwards the unlabeled packet to egress PE router HongKong_PE (10.20.30.2).

VRF traceroute can be used to examine a labeled VPN packet as it crosses the MPLS backbone from the mjlnet_VPN VRF interface of the ingress PE router to mjlnet_VPN site 2, as shown in Example 6-40.

Example 6-40 VRF traceroute from the VRF Interface on the Ingress PE Router to mjlnet_VPN Site 2

Chengdu_PE#traceroute vrf mjlnet_VPN 172.16.8.2
Type escape sequence to abort.
Tracing the route to 172.16.8.2
 1 10.20.10.2 [MPLS: Labels 20/23 Exp 0] 96 msec 96 msec 96 msec
 2 10.20.20.2 [MPLS: Labels 17/23 Exp 0] 80 msec 80 msec 80 msec
 3 172.16.8.1 [MPLS: Label 23 Exp 0] 76 msec 76 msec 76 msec
 4 172.16.8.2 36 msec 136 msec *
Chengdu_PE#

Highlighted line 1 shows that ingress PE router Chengdu_PE imposes IGP label 20, plus VPN label 23, on the packet and forwards it to Chengdu_P (10.20.10.2).

In highlighted line 2, Chengdu_P swaps IGP label 20 for label 17, and the packet transits the link to HongKong_P (10.20.20.2). Note that the VPN label (23) remains unchanged.

In highlighted line 3, HongKong_P pops the IGP label and forwards the packet to egress PE router HongKong_PE (172.16.8.1, its mjlnet_VPN VRF interface address). Again, the VPN label remains unchanged.

Finally, in highlighted line 4, egress PE router HongKong_PE removes the VPN label and forwards the unlabeled packet to the CE router (CE2, 172.16.8.2).

TIP

If the no mpls ip propagate-ttl command is configured on the ingress PE, the MPLS backbone will be represented as 1 hop when tracing from the CE or PE routers. To allow the TTL to be propagated in traceroute on PE routers, the mpls ip propagate-ttl local command can be used.

Troubleshooting the Backbone IGP

Although in-depth troubleshooting of the backbone IGP is beyond the scope of this book, basic issues that will prevent correct operation of both OSPF and IS-IS are briefly discussed here.

Note that the troubleshooting steps for OSPF and IS-IS discussed here are generic in nature; they are equally applicable in a regular IP (non-MPLS) backbone.

Routing Protocol Is Not Enabled on an Interface

Check that OSPF or IS-IS is enabled on the interface using the show ip ospf interface or show clns interface commands.

Routers Are Not on a Common Subnet

Ensure that neighboring routers are configured on the same IP subnet.

Use the show ip interface command to verify interface IP address and mask configuration.

Passive Interface Is Configured

Ensure that an interface that should be transmitting OSPF or IS-IS packets is not configured as a passive interface.

Use the show ip protocols command to verify interface configuration.

Area Mismatch Exists

Ensure that areas are correctly configured on OSPF or IS-IS routers.

Check the OSPF area ID using the show ip ospf interface command.

Check that the IS-IS area is correctly configured using the show clns protocol command.

Network Type Mismatch Exists

Verify that there is not a network type mismatch between the interfaces of neighboring routers.

Use the show ip ospf interface command to verify the OSPF network type. Ensure that neighboring routers are configured with a consistent network type.

Use the show running-config command to check whether there is a network type mismatch between IS-IS routers. If IS-IS is configured on a point-to-point subinterface on one router, but a multipoint interface on the neighboring router, adjacency will fail.

Timer Mismatch Exists

Verify that there is not an OSPF or IS-IS timer mismatch between neighboring routers.

Use the show ip ospf interface command to check that hello and dead intervals are consistent between neighboring OSPF routers.

Use the show running-config command to check the configuration of the hello interval and hello multiplier timers on IS-IS routers.

Authentication Mismatch Exists

Check to see whether there is an authentication mismatch between the routers.

Use the debug ip ospf adj command to troubleshoot OSPF authentication issues.

Use the debug isis adj-packets command to troubleshoot IS-IS authentication issues.

General Misconfiguration Issues

Check the section "Step 6: Configure the MPLS VPN Backbone IGP," earlier in this chapter, to ensure that the backbone IGP is correctly configured.

Troubleshooting the LSP

Customer VPN traffic uses an LSP to transit the service provider backbone between the ingress and egress PE routers. When troubleshooting the LSP, you should verify correct operation of CEF, MPLS, and TDP/LDP on all LSRs along the path.

Verifying CEF

If CEF switching is not enabled on all MPLS backbone routers, label switching will not function.

In this section, you will see how to verify that CEF is enabled globally and on an interface.

CEF Is Globally Disabled

To verify that CEF switching is globally enabled on a router, use the show ip cef command, as demonstrated in Example 6-41.

Example 6-41 Verifying CEF Using the show ip cef Command (CEF Is Disabled)

Chengdu_P#show ip cef
%CEF not running
Prefix       Next Hop       Interface
Chengdu_P#

Highlighted line 1 shows that CEF is not enabled on Chengdu_P.

To enable CEF on a router, use the command ip cef [distributed]. The distributed keyword is used only on routers with a distributed architecture such as the 12000 and 7500 series routers.

Example 6-42 shows CEF being enabled on Chengdu_P.

Example 6-42 Globally Enabling CEF

Chengdu_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_P(config)#ip cef
Chengdu_P(config)#exit
Chengdu_P#

CEF is enabled in the highlighted line in Example 6-42.

In Example 6-43, the show ip cef command is again used to verify CEF.

Example 6-43 CEF Is Enabled

Chengdu_P#show ip cef
Prefix       Next Hop       Interface
0.0.0.0/0      drop         Null0 (default route handler entry)
0.0.0.0/32     receive
10.1.1.1/32     10.20.10.1      FastEthernet0/0
10.1.1.2/32     receive
10.1.1.3/32     10.20.20.2      Serial1/1
10.1.1.4/32     10.20.20.2      Serial1/1
10.20.10.0/24    attached       FastEthernet0/0
10.20.10.0/32    receive
10.20.10.1/32    10.20.10.1      FastEthernet0/0
10.20.10.2/32    receive
10.20.10.255/32   receive
10.20.20.0/24    attached       Serial1/1
10.20.20.0/32    receive
10.20.20.1/32    receive
10.20.20.2/32    attached       Serial1/1
10.20.20.255/32   receive
10.20.30.0/24    10.20.20.2      Serial1/1
224.0.0.0/4     0.0.0.0
224.0.0.0/24    receive
255.255.255.255/32 receive
Chengdu_P#

Example 6-43 shows a summary of the CEF forwarding information base (FIB).

Highlighted line 1 shows a default route to interface Null0 that reports a drop state. This indicates that packets for this FIB entry will be dropped.

In highlighted line 2, an entry for prefix 10.1.1.1/32 is shown. The entry includes the associated next-hop and (outgoing) interface.

Highlighted line 3 shows an entry for 10.1.1.2/32. This entry indicates a receive state. The receive state is used for host addresses configured on the local router. This entry corresponds to the IP address configured on Chengdu_P's interface loopback 0.

Finally, highlighted line 4 shows an entry for 10.20.10.0/24. This entry indicates an attached state. An attached state indicates that the prefix is directly reachable via the interface indicated (here, Fast Ethernet 0/0).

CEF Is Disabled on an Interface

After verifying that CEF is globally enabled, also ensure that CEF is enabled on interfaces. CEF is responsible for label imposition and, therefore, must be enabled on the VRF interfaces of PE routers.

Use the show cef interface interface_name command to verify that CEF is enabled on an interface, as shown in Example 6-44.

Example 6-44 show cef interface Command Output

Chengdu_PE#show cef interface serial 4/1
Serial4/1 is up (if_number 6)
 Corresponding hwidb fast_if_number 6
 Corresponding hwidb firstsw->if_number 6
 Internet address is 172.16.4.1/24
 ICMP redirects are never sent
 Per packet load-sharing is disabled
 IP unicast RPF check is disabled
 Inbound access list is not set
 Outbound access list is not set
 IP policy routing is disabled
 BGP based policy accounting is disabled
 Interface is marked as point to point interface
 Hardware idb is Serial4/1
 Fast switching type 7, interface type 67
 IP CEF switching disabled
 IP Feature Fast switching turbo vector
 IP Null turbo vector
 VPN Forwarding table "mjlnet_VPN"
 Input fast flags 0x1000, Output fast flags 0x0
 ifindex 5(5)
 Slot 4 Slot unit 1 Unit 1 VC -1
 Transmit limit accumulator 0x0 (0x0)
 IP MTU 1500
Chengdu_PE#

As you can see in the highlighted line, CEF is disabled on interface serial 4/1.

To enable CEF on the interface, use the ip route-cache cef command, as shown in Example 6-45.

Example 6-45 Configuration of CEF on the Interface

Chengdu_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_PE(config)#interface serial 4/1
Chengdu_PE(config-if)#ip route-cache cef
Chengdu_PE(config-if)#end
Chengdu_PE#

The highlighted line indicates that CEF is enabled on interface serial 4/1.

Verifying MPLS

If MPLS is disabled either globally or on an interface, label switching will not function.

This section discusses how to verify whether MPLS is either disabled globally or on an interface.

MPLS Is Globally Disabled

If MPLS has been globally disabled, label switching will not function on any interface.

The show mpls interfaces or show mpls forwarding-table commands can be used to verify that MPLS is enabled, as demonstrated in Example 6-46 and Example 6-47.

Example 6-46 Verifying MPLS Using the show mpls interfaces Command

Chengdu_PE#show mpls interfaces
IP MPLS forwarding is globally disabled on this router.
Individual interface configuration is as follows:
Interface       IP      Tunnel  Operational
Chengdu_PE#

The highlighted line clearly shows that MPLS is globally disabled.

Example 6-47 Verifying MPLS Using the show mpls forwarding-table Command

Chengdu_PE#show mpls forwarding-table
Tag switching is not operational.
CEF or tag switching has not been enabled.
No TFIB currently allocated.
Chengdu_PE#

Highlighted line 1 shows that either CEF or MPLS is disabled.

In highlighted line 2, you can see that no LFIB (shown as TFIB here) has been allocated.

MPLS can be enabled using the mpls ip command. In Example 6-48, MPLS is configured on Chengdu_PE.

Example 6-48 Configuration of MPLS on Chengdu_PE

Chengdu_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_PE(config)#mpls ip
Chengdu_PE(config)#exit
Chengdu_PE#

In Example 6-48, MPLS is globally enabled using the mpls ip command.

MPLS Is Disabled on an Interface

If MPLS is disabled on an interface, label switching will not function on that interface. Ensure that MPLS is enabled on all core interfaces of all PE and P routers. Note that MPLS should not be enabled on PE routers' VRF interfaces unless carrier's carrier MPLS is being used.

To verify that MPLS is enabled on core interfaces, use the show mpls interfaces command, as shown in Example 6-49.

Example 6-49 Verifying MPLS on an Interface Using the show mpls interfaces Command (MPLS Is Disabled)

Chengdu_PE#show mpls interfaces
Interface       IP      Tunnel  Operational
Chengdu_PE#

As you can see, no interfaces on Chengdu_PE are enabled for MPLS. In this case, MPLS should be enabled on core interface Fast Ethernet 1/0.

The mpls ip command is then used to enable MPLS on interface Fast Ethernet 1/0, as demonstrated in Example 6-50.

Example 6-50 Enabling MPLS on Interface Fast Ethernet 1/0 Using the mpls ip Command

Chengdu_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_PE(config)#interface fastethernet 1/0
Chengdu_PE(config-if)#mpls ip
Chengdu_PE(config-if)#end
Chengdu_PE#

In highlighted line 1, MPLS is enabled on interface fastethernet 1/0.

As shown in Example 6-51, the show mpls interfaces command is then used to confirm that MPLS is enabled on the interface.

Example 6-51 Verifying MPLS on an Interface (MPLS Is Enabled)

Chengdu_PE#show mpls interfaces
Interface       IP      Tunnel  Operational
FastEthernet1/0    Yes (ldp)   No    Yes
Chengdu_PE#

As you can see, MPLS is now enabled on interface FastEthernet1/0.

Verifying TDP/LDP

TDP and LDP are used to exchange label bindings, but if they are not functioning correctly, label bindings will not be exchanged, and MPLS will not function.

This section examines how to verify correct operation of TDP or LDP. Note that examples in this section focus on LDP.

LDP Neighbor Discovery and Session Establishment Fails

If LDP neighbor discovery fails, session establishment will fail. Similarly, if LDP session establishment fails, label bindings will not be distributed.

LDP Neighbor Discovery Fails

If LDP discovery fails, session establishment will fail between neighboring LSRs.

Figure 6-32 shows LDP neighbor discovery between Chengdu_PE and Chengdu_P.

Figure 6-32

Figure 6-32. LDP Neighbor Discovery Between Chengdu_PE and Chengdu_P

Note that Figure 6-32 shows LDP neighbor discovery between directly connected neighbors.

To verify that LDP neighbor discovery has been successful, the show mpls ldp discovery command can be used, as shown in Example 6-52.

Example 6-52 Verifying LDP Neighbor Discovery Using the show mpls ldp discovery Command (Discovery Is Successful)

Chengdu_PE#show mpls ldp discovery
 Local LDP Identifier:
  10.1.1.1:0
  Discovery Sources:
  Interfaces:
    FastEthernet1/0 (ldp): xmit/recv
      LDP Id: 10.1.1.2:0
Chengdu_PE#

Highlighted line 1 shows the local LDP ID (10.1.1.1:0), which consists of a 32-bit router ID and a 16-bit label space identifier. In this case, the router ID is 10.1.1.1, and the label space identifier is 0 (which corresponds to a platform-wide label space).

Note that if an interface is using the platform-wide label space, labels assigned on that interface are taken from a common pool shared by all interfaces. If an interface is using an interface label space, labels assigned on that interface are taken from a pool of labels specific to that interface. Frame-mode interfaces use the platform-wide label space (unless a carrier's carrier architecture is deployed), and cell-mode interfaces use an interface label space.

Highlighted line 2 shows the interface on which LDP hello messages are being transmitted to (xmit) and received from (recv) the peer LSR. Note that the label protocol configured on the interface (in this case, LDP) is also indicated here. In highlighted line 3, the peer LSR's LDP ID is shown (10.1.1.2:0).

LDP neighbor discovery can fail for a number of reasons, including the following:

  • A label protocol mismatch exists.
  • An access list blocks neighbor discovery.
  • A control-VC mismatch exists on LC-ATM interfaces.

These issues are detailed in the following sections.

Label Protocol Mismatch

If there is a mismatch between the label protocol configured on neighboring LSRs, discovery will fail.

To verify neighbor discovery, use the show mpls ldp discovery command.

Example 6-53 shows the output of show mpls ldp discovery when there is a label protocol mismatch between LSRs.

Example 6-53 Label Protocol Mismatch Between Peer LSRs

Chengdu_PE#show mpls ldp discovery
 Local LDP Identifier:
  10.1.1.1:0
  Discovery Sources:
  Interfaces:
    FastEthernet1/0 (ldp): xmit
Chengdu_PE#

As you can see, LDP hello messages are being transmitted (xmit) but not received (recv) on interface FastEthernet1/0. This might indicate that TDP is configured on the peer LSR.

To check the label protocol being used on the peer LSR, use the show mpls interfaces command, as shown in Example 6-54.

Example 6-54 Verifying the Label Protocol on the Peer LSR Using the show mpls interfaces Command

Chengdu_P#show mpls interfaces
Interface       IP      Tunnel  Operational
FastEthernet0/0    Yes (tdp)   No    Yes
Serial1/1       Yes (ldp)   No    Yes
Chengdu_P#

The highlighted line shows that TDP is indeed configured on the peer LSR's connected interface.

As shown in Example 6-55, the label protocol is changed to LDP on the interface using the mpls label protocol command.

Example 6-55 Changing the Label Protocol Using the mpls label protocol Command

Chengdu_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_P(config)#interface fastethernet0/0
Chengdu_P(config-if)#mpls label protocol ldp
Chengdu_P(config-if)#end
Chengdu_P#

The highlighted line shows that the label protocol is reconfigured to be LDP using the mpls label protocol command. Note that it is possible to configure both LDP and TDP on an interface using the mpls label protocol both command.

Once LDP has been configured on the peer LSR's interface, neighbor discovery is rechecked using the show mpls ldp discovery command, as demonstrated in Example 6-56.

Example 6-56 Verifying Neighbor Discovery (Discovery Is Now Successful)

Chengdu_PE#show mpls ldp discovery
 Local LDP Identifier:
  10.1.1.1:0
  Discovery Sources:
  Interfaces:
    FastEthernet1/0 (ldp): xmit/recv
      LDP Id: 10.1.1.2:0
Chengdu_PE#

In Example 6-56, the highlighted line shows that LDP messages are now being both sent and received on interface FastEthernet1/0. LDP neighbor discovery has been successful.

Access List Blocks LDP Neighbor Discovery

LDP neighbor discovery uses UDP port 646 and the all routers multicast address (224.0.0.2) for directly connected neighbors. If neighbors are not directly connected, then UDP port 646 is also used, but hello messages are unicast.

If an access list blocks UDP port 646 or the all routers multicast address, neighbor discovery will not function.

Note that TDP uses UDP 711 and the local broadcast address (255.255.255.255) for neighbor discovery. If neighbors are not directly connected, then unicast communication is again used.

LDP neighbor discovery can be verified using the show mpls ldp discovery command, as shown in Example 6-57.

Example 6-57 Verifying LDP Neighbor Discovery Using the show mpls ldp discovery Command

Chengdu_PE#show mpls ldp discovery
 Local LDP Identifier:
  10.1.1.1:0
  Discovery Sources:
  Interfaces:
    FastEthernet1/0 (ldp): xmit
Chengdu_PE#

As highlighted line 1 shows, LDP hello messages are being transmitted (xmit), but not received (recv) on interface FastEthernet1/0. This may indicate the presence of an access list.

To check for the presence of an access list on an interface, use the show ip interface command, as demonstrated in Example 6-58.

Note that only the relevant portion of the output is shown.

Example 6-58 Verifying the Presence of an Access List Using the show ip interface Command

Chengdu_PE#show ip interface fastethernet 1/0
FastEthernet1/0 is up, line protocol is up
 Internet address is 10.20.10.1/24
 Broadcast address is 255.255.255.255
 Address determined by non-volatile memory
 MTU is 1500 bytes
 Helper address is not set
 Directed broadcast forwarding is disabled
 Multicast reserved groups joined: 224.0.0.2
 Outgoing access list is not set
 Inbound access list is 101
 Proxy ARP is disabled
 Local Proxy ARP is disabled
 Security level is default
 Split horizon is enabled

As you can see, access list 101 is configured inbound on interface FastEthernet 1/0.

To examine access list 101, use the show ip access-lists command, as demonstrated in Example 6-59.

Example 6-59 Verifying the Contents of the Access List Using the show ip access-lists Command

Chengdu_PE#show ip access-lists 101

Extended IP access list 101
  permit tcp any any eq bgp
  permit tcp any any eq ftp
  permit tcp any any eq ftp-data
  permit tcp any any eq nntp
  permit tcp any any eq pop3
  permit tcp any any eq smtp
  permit tcp any any eq www
  permit tcp any any eq telnet
  permit udp any any eq snmp
  permit udp any any eq snmptrap
  permit udp any any eq tacacs
  permit udp any any eq tftp
Chengdu_PE#

As you can see, UDP port 646 (LDP) is not permitted by access list 101, and it is, therefore, denied by the implicit deny any statement at the end of the access list.

There are two choices here:

  • Modify the access list (a sketch of this option follows the list)
  • Remove the access list
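
If the first option is chosen, a minimal sketch would be to append entries permitting LDP to the end of the access list; TCP port 646 is also permitted here because the subsequent LDP session runs over TCP:

access-list 101 permit udp any any eq 646
access-list 101 permit tcp any any eq 646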

In this case, it is decided that the access list is unnecessary, and so it is removed, as shown in Example 6-60.

Example 6-60 Access List 101 Is Removed

Chengdu_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_PE(config)#interface fastethernet 1/0
Chengdu_PE(config-if)#no ip access-group 101 in
Chengdu_PE(config-if)#end
Chengdu_PE#

The highlighted line shows the removal of access list 101 on interface fastethernet 1/0.

After access list 101 is removed, the show mpls ldp discovery command is used to verify that the LDP neighbor discovery is functioning, as demonstrated in Example 6-61.

Example 6-61 LDP Discovery Is Now Successful

Chengdu_PE#show mpls ldp discovery
 Local LDP Identifier:
  10.1.1.1:0
  Discovery Sources:
  Interfaces:
    FastEthernet1/0 (ldp): xmit/recv
      LDP Id: 10.1.1.2:0
Chengdu_PE#

Highlighted line 1 shows that LDP hello messages are now being received (recv) on interface FastEthernet1/0.

Neighbor discovery is now successful.

Control VC Mismatch on LC-ATM Interfaces

On LC-ATM interfaces, if there is a mismatch of the VPI/VCI for the control (plane) VC, LDP neighbor discovery will fail.

Use the show mpls ldp discovery command to view the neighbor discovery status on the LSR, as shown in Example 6-62.

Example 6-62 Verifying LDP Discovery

Chengdu_PE#show mpls ldp discovery
 Local LDP Identifier:
  10.1.1.1:0
  Discovery Sources:
  Interfaces:
    ATM3/0.1 (ldp): xmit
Chengdu_PE#

Highlighted line 1 shows that LDP packets are being transmitted (xmit) but not received (recv) on interface ATM 3/0.1.

The next step is to check the control VC on the interface using the show mpls interfaces detail command, as shown in Example 6-63.

Example 6-63 Checking the Control VC Using the show mpls interfaces detail Command on the Local LSR

Chengdu_PE#show mpls interfaces atm 3/0.1 detail
Interface ATM3/0.1:
    IP labeling enabled (ldp)
    LSP Tunnel labeling not enabled
    BGP labeling not enabled
    MPLS operational
    Optimum Switching Vectors:
     IP to MPLS Turbo Vector
     MPLS Turbo Vector
    Fast Switching Vectors:
     IP to MPLS Fast Switching Vector
     MPLS Turbo Vector
    MTU = 4470
    ATM labels: Label VPI = 1, Control VC = 0/32
Chengdu_PE#

Highlighted line 1 shows that the control VC used on this LC-ATM interface is 0/32 (VPI/VCI). This is the default.

The control VC is then verified on the peer LSR, as shown in Example 6-64.

Example 6-64 Checking the Control VC on the Peer LSR

HongKong_PE#show mpls interfaces atm 4/0.1 detail
Interface ATM4/0.1:
    IP labeling enabled (ldp)
    LSP Tunnel labeling not enabled
    BGP labeling not enabled
    MPLS not operational
    Optimum Switching Vectors:
     IP to MPLS Turbo Vector
     MPLS Turbo Vector
    Fast Switching Vectors:
     IP to MPLS Fast Switching Vector
     MPLS Turbo Vector
    MTU = 4470
    ATM labels: Label VPI = 1, Control VC = 0/40
HongKong_PE#

As you can see, the control VC is 0/40 on HongKong_PE. There is a control VC mismatch between LDP peers.

To resolve this issue, the control VC is modified on HongKong_PE, as shown in Example 6-65.

Example 6-65 Reconfiguration of the Control VC on HongKong_PE

HongKong_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
HongKong_PE(config)#interface ATM4/0.1 mpls
HongKong_PE(config-subif)#mpls atm control-vc 0 32
HongKong_PE(config-subif)#end
HongKong_PE#

The control VC is reset to the default 0/32 values, as the highlighted line indicates.

Once the control VC VPI/VCI is modified, the show mpls ldp discovery command is again used to examine the LDP neighbor discovery state, as shown in Example 6-66.

Example 6-66 LDP Discovery Is Now Successful

Chengdu_PE#show mpls ldp discovery
 Local LDP Identifier:
  10.1.1.1:0
  Discovery Sources:
  Interfaces:
    ATM3/0.1 (ldp): xmit/recv
      LDP Id: 10.1.1.4:1; IP addr: 10.20.60.2
Chengdu_PE#

In highlighted line 1, LDP hello packets are being both transmitted (xmit) and received (recv) on interface ATM 3/0.1. LDP discovery is now successful.

In highlighted line 2, the LDP ID (10.1.1.4:1) of HongKong_PE is shown, together with its IP address on the connected interface (10.20.60.2).

LDP Session Establishment Fails

If LDP session establishment fails, label bindings will not be advertised to neighboring LSRs.

Figure 6-33 illustrates an LDP session between Chengdu_PE and Chengdu_P.

Figure 6-33

Figure 6-33. An LDP Session Between Chengdu_PE and Chengdu_P

To verify LDP session establishment, use the show mpls ldp neighbor command.

Example 6-67 shows the output of the show mpls ldp neighbor command when session establishment is successful.

Example 6-67 LDP Session Establishment Is Successful

Chengdu_PE#show mpls ldp neighbor
  Peer LDP Ident: 10.1.1.2:0; Local LDP Ident 10.1.1.1:0
    TCP connection: 10.1.1.2.11206 - 10.1.1.1.646
    State: Oper; Msgs sent/rcvd: 76/75; Downstream
    Up time: 00:56:44
    LDP discovery sources:
     FastEthernet1/0, Src IP addr: 10.20.10.2
    Addresses bound to peer LDP Ident:
     10.1.1.2    10.20.20.1   10.20.10.2
    Chengdu_PE#

Highlighted line 1 shows the peer LDP ID (10.1.1.2:0), as well as the local LDP ID (10.1.1.1:0).

In highlighted line 2, the TCP ports open on peer and local LSRs for the LDP session (11206 and 646, respectively) are shown.

In highlighted line 3, the session state is shown as operational (established). The number of messages sent and received (76 and 75), together with the method of label distribution (unsolicited downstream), are also shown.

The LDP session uptime is shown in highlighted line 4 (56 minutes and 44 seconds). In highlighted line 5, the discovery sources (local LSR interface and peer's connected IP address) are shown. Finally, the LDP peer's interface IP addresses are shown.

Numerous issues can prevent LDP session establishment, including the following:

  • The neighbor's LDP ID is unreachable.
  • An access list blocks LDP session establishment.
  • An LDP authentication mismatch exists.
  • VPI ranges do not overlap between LC-ATM interfaces.

The sections that follow discuss these issues in more detail.

Neighbor's LDP ID Is Unreachable

An LDP session is established over a TCP connection between LSRs. On Cisco LSRs, the endpoint of the TCP connection corresponds to the LDP ID address by default, unless peer LSRs are connected via LC-ATM interfaces. If the LDP ID of the peer is unreachable, session establishment will fail.

Use the show mpls ldp discovery command to troubleshoot this issue, as shown in Example 6-68.

Example 6-68 No Route to the LDP ID of the Neighboring LSR Exists

Chengdu_PE#show mpls ldp discovery
 Local LDP Identifier:
  10.1.1.1:0
  Discovery Sources:
  Interfaces:
    FastEthernet1/0 (ldp): xmit/recv
      LDP Id: 10.1.1.2:0; no route
Chengdu_PE#

The highlighted line shows that there is no route to the LDP ID of the neighboring LSR. As previously mentioned, LDP sessions are established between the LDP ID addresses of the neighboring LSRs. The absence of a route to the neighbor's LDP ID can be confirmed using the show ip route command, as demonstrated in Example 6-69.

Example 6-69 No Route to the LDP ID of the Peer LSR Exists in the Routing Table

Chengdu_PE#show ip route 10.1.1.2
% Subnet not in table
Chengdu_PE#

As you can see, there is no route to 10.1.1.2 (the peer's LDP ID).

When the configuration of the backbone IGP (in this case, IS-IS) is examined on the neighboring LSR, the problem is revealed.

Example 6-70 shows the output of the show running-config command. Note that only the relevant portions of the output are shown.

Example 6-70 Interface Loopback0 Is Not Advertised by IS-IS

Chengdu_P#show running-config
Building configuration...
!
tag-switching tdp router-id Loopback0 force
!
!
interface Loopback0
 ip address 10.1.1.2 255.255.255.255
 no ip directed-broadcast
!
!
router isis
 net 49.0001.0000.0000.0002.00
 is-type level-2-only
 metric-style wide
!

In highlighted line 1, the MPLS LDP ID (shown as the TDP ID) is configured as the IP address on interface Loopback0.

The configuration of interface Loopback 0 begins in highlighted line 2. The IP address is 10.1.1.2/32. This is the LDP ID.

Notice that the command ip router isis is not configured on the interface. This command is one way to advertise the interface address in IS-IS.

The configuration of IS-IS begins in highlighted line 3. Notice the absence of the passive-interface Loopback0 command. The passive-interface command alone can also be used to advertise the loopback interface, although some versions of IOS may require the ip router isis command to be configured on the loopback interface in addition to the passive-interface command.
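
As a sketch of the first alternative, IS-IS could instead be enabled directly on the loopback interface of Chengdu_P (the interface name is taken from Example 6-70):

Chengdu_P(config)#interface Loopback0
Chengdu_P(config-if)#ip router isis
Chengdu_P(config-if)#end

In the examples that follow, however, the passive-interface approach is used.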

Interface Loopback0 is not being advertised in IS-IS. IS-IS must, therefore, be configured to advertise interface Loopback0, as shown in Example 6-71.

Example 6-71 Configuring IS-IS to Advertise Interface Loopback0

Chengdu_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_P(config)#router isis
Chengdu_P(config-router)#passive-interface Loopback0
Chengdu_P(config-router)#end
Chengdu_P#

The highlighted line shows where IS-IS is configured to advertise interface loopback 0.

The LDP discovery state is now rechecked using the show mpls ldp discovery command, as shown in Example 6-72.

Example 6-72 LDP Discovery Is Now Successful

Chengdu_PE#show mpls ldp discovery
 Local LDP Identifier:
  10.1.1.1:0
  Discovery Sources:
  Interfaces:
    FastEthernet1/0 (ldp): xmit/recv
      LDP Id: 10.1.1.2:0
Chengdu_PE#

The highlighted line shows the peer LDP ID, and crucially, the absence of the "no route" message (as shown in Example 6-68) indicates that there is now a route to the neighbor's LDP ID.

The LDP session state can then be verified using the show mpls ldp neighbor command, as shown in Example 6-73.

Example 6-73 LDP Session Is Established

Chengdu_PE#show mpls ldp neighbor
  Peer LDP Ident: 10.1.1.2:0; Local LDP Ident 10.1.1.1:0
    TCP connection: 10.1.1.2.11007 - 10.1.1.1.646
    State: Oper; Msgs sent/rcvd: 12/11; Downstream
    Up time: 00:00:43
    LDP discovery sources:
     FastEthernet1/0, Src IP addr: 10.20.10.2
    Addresses bound to peer LDP Ident:
     10.20.10.2   10.1.1.2    10.20.20.1
Chengdu_PE#

Highlighted line 1 shows the peer (10.1.1.2:0) and local LDP IDs (10.1.1.1:0).

In highlighted line 2, the session state is shown as operational (established). The number of messages sent and received (12 and 11), together with the label distribution method (unsolicited downstream) are also shown.

The LDP session uptime is shown in highlighted line 3 (43 seconds). The session has now been established.

It is worth noting that, in a carrier's carrier configuration, reachability issues between the LDP ID addresses of the PE and CE routers can easily be resolved by using the mpls ldp discovery transport-address interface command. If this command is configured on the connected PE and CE interfaces, the LDP session is established between the connected interface addresses rather than between the LDP ID addresses.
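
A minimal sketch of this approach follows; the router names and interface numbers are illustrative only and do not correspond to the examples in this chapter:

! On the PE router (illustrative interface):
PE(config)#interface Serial0/0
PE(config-if)#mpls ldp discovery transport-address interface
! On the CE router (illustrative interface):
CE(config)#interface Serial0/0
CE(config-if)#mpls ldp discovery transport-address interface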

Access List Blocks LDP Session Establishment

LDP sessions are established between two peers over a unicast connection on TCP port 646. The unicast connection is between the LDP ID addresses of the adjacent LSRs. If an access list blocks TCP port 646 or the LDP ID addresses, then session establishment will fail.

When designing access lists, consider that the passive peer (the peer with the lower LDP ID) listens on TCP port 646, and the active peer (the peer with the higher LDP ID) initiates the connection from an ephemeral (short-lived) port.

Note that TDP uses TCP port 711 and a unicast connection for session establishment.

Use the show ip interface command to check for an access list on an interface, as demonstrated in Example 6-74.

Example 6-74 Verifying the Presence of an Access List

Chengdu_PE#show ip interface fastethernet 1/0
FastEthernet1/0 is up, line protocol is up
 Internet address is 10.20.10.1/24
 Broadcast address is 255.255.255.255
 Address determined by non-volatile memory
 MTU is 1500 bytes
 Helper address is not set
 Directed broadcast forwarding is disabled
 Multicast reserved groups joined: 224.0.0.2
 Outgoing access list is not set
 Inbound access list is 101
 Proxy ARP is disabled
 Local Proxy ARP is disabled
 Security level is default
 Split horizon is enabled

The highlighted line shows that access list 101 is configured inbound on interface FastEthernet 1/0.

Use the show ip access-lists command to examine access list 101, as shown in Example 6-75.

Example 6-75 Verifying the Contents of the Access List

Chengdu_PE#show ip access-lists
Extended IP access list 101
  permit icmp any any
  permit gre any any
  permit tcp any any eq bgp
  permit tcp any any eq domain
  permit tcp any any eq ftp
  permit tcp any any eq ftp-data
  permit tcp any any eq telnet
  permit tcp any any eq www
  permit udp any any eq 646
  permit udp any any eq ntp
  permit udp any any eq snmp
  permit udp any any eq snmptrap
  permit udp any any eq tacacs
  permit udp any any eq tftp

Chengdu_PE#

As you can see, access list 101 permits UDP port 646 (used for LDP discovery) but not TCP port 646 (used for the LDP session), so the session traffic is blocked by the implicit deny any statement at the end of the access list.

The two choices here are:

  • Modify the access list
  • Remove the access list

In this case, it is decided that the access list is not needed, and it is removed, as shown in Example 6-76.

Example 6-76 Removing the Access List

Chengdu_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_PE(config)#interface fastethernet 1/0
Chengdu_PE(config-if)#no ip access-group 101 in
Chengdu_PE(config-if)#end
Chengdu_PE#

The highlighted line shows the removal of access list 101 on interface fastethernet 1/0.

Once the access list is removed, session establishment is verified using the show mpls ldp neighbor command, as demonstrated in Example 6-77.

Example 6-77 LDP Session Establishment Succeeds

Chengdu_PE#show mpls ldp neighbor
  Peer LDP Ident: 10.1.1.2:0; Local LDP Ident 10.1.1.1:0
    TCP connection: 10.1.1.2.11075 - 10.1.1.1.646
    State: Oper; Msgs sent/rcvd: 15/14; Downstream
    Up time: 00:02:49
    LDP discovery sources:
     FastEthernet1/0, Src IP addr: 10.20.10.2
    Addresses bound to peer LDP Ident:
     10.1.1.2    10.20.20.1   10.20.10.2
Chengdu_PE#

Highlighted line 1 shows the peer (10.1.1.2:0) and local LDP IDs (10.1.1.1:0).

Highlighted line 2 shows that the session state is now operational (established). The number of messages sent and received (15 and 14), together with the label distribution method (unsolicited downstream), is also shown.

The LDP session uptime is shown in highlighted line 3 (2 minutes 49 seconds).
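
Note that if the access list had been needed for other traffic, the alternative would have been to modify it so that the LDP session is permitted. A minimal sketch, appending to the same access list 101, might look like the following (TCP port 646 is used for the LDP session, in addition to the UDP port 646 entry already present for discovery):

Chengdu_PE(config)#access-list 101 permit tcp any any eq 646
Chengdu_PE(config)#access-list 101 permit tcp any eq 646 any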

LDP Authentication Mismatch

LDP can be configured to use TCP MD5 authentication for its session connections. If LDP authentication is configured on one peer, but not the other, or if passwords are mismatched, session establishment will fail.

LDP Authentication Is Configured on One Peer But Not the Other

If LDP authentication is configured on one LDP peer, but not the other, session establishment will fail, and an error message will be logged.

Example 6-78 shows the error message logged if the LDP session messages do not contain an MD5 digest.

Example 6-78 LDP Authentication Is Not Configured on the Peer LSR

*Jan 20 08:34:16.775 UTC: %TCP-6-BADAUTH: No MD5 digest from 10.1.1.2(11023) to
 10.1.1.1(646)

In Example 6-78, an LDP session message has been received from LDP peer 10.1.1.2 without the expected MD5 digest.

To resolve this issue, either peer 10.1.1.2 can be configured for LDP authentication or LDP authentication can be removed on peer 10.1.1.1. In this case, LDP authentication is configured on peer 10.1.1.2, as shown in Example 6-79.

Example 6-79 Configuration of LDP Authentication on Peer 10.1.1.2

Chengdu_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_P(config)#mpls ldp neighbor 10.1.1.1 password cisco
Chengdu_P(config)#exit
Chengdu_P#

Once LDP authentication has been configured, the LDP session is established. This is verified using the show mpls ldp neighbor command, as shown in Example 6-80.

Example 6-80 LDP Session Establishment Is Successful

Chengdu_PE#show mpls ldp neighbor
  Peer LDP Ident: 10.1.1.2:0; Local LDP Ident 10.1.1.1:0
    TCP connection: 10.1.1.2.11115 - 10.1.1.1.646
    State: Oper; Msgs sent/rcvd: 12/11; Downstream
    Up time: 00:00:21
    LDP discovery sources:
     FastEthernet1/0, Src IP addr: 10.20.10.2
    Addresses bound to peer LDP Ident:
     10.1.1.2    10.20.20.1   10.20.10.2
Chengdu_PE#

In highlighted line 1, the peer (10.1.1.2:0) and local LDP IDs (10.1.1.1:0) are shown.

Highlighted line 2 shows that the session state is operational (established). This line also shows the number of messages sent and received (12 and 11), together with the label distribution method (unsolicited downstream).

Finally, highlighted line 3 shows the LDP session uptime (21 seconds).

LDP Authentication Password Mismatch

If there is an LDP authentication password mismatch between peers, session establishment will fail, and an error message will be logged.

Example 6-81 shows the error message logged if there is an LDP password mismatch.

Example 6-81 LDP Passwords Are Mismatched

*Jan 20 09:42:54.091 UTC: %TCP-6-BADAUTH: Invalid MD5 digest from 10.1.1.2
 (11034) to 10.1.1.1(646)

As the highlighted portion shows, an invalid MD5 digest is received from LDP peer 10.1.1.2.

To ensure that the LDP password is consistent, reconfigure the password on both peers (10.1.1.1 and 10.1.1.2) as shown in Example 6-82.

Example 6-82 Reconfiguration of the LDP Password

! On Chengdu_PE (10.1.1.1):
Chengdu_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_PE(config)#mpls ldp neighbor 10.1.1.2 password cisco
Chengdu_PE(config)#exit
Chengdu_PE#
! On Chengdu_P (10.1.1.2):
Chengdu_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_P(config)#mpls ldp neighbor 10.1.1.1 password cisco
Chengdu_P(config)#exit
Chengdu_P#

Once the LDP password has been reconfigured, use the show mpls ldp neighbor command to verify LDP session establishment, as demonstrated in Example 6-83.

Example 6-83 LDP Session Establishment Is Successful After Reconfiguration of the LDP Password

Chengdu_PE#show mpls ldp neighbor
  Peer LDP Ident: 10.1.1.2:0; Local LDP Ident 10.1.1.1:0
    TCP connection: 10.1.1.2.11118 - 10.1.1.1.646
    State: Oper; Msgs sent/rcvd: 12/11; Downstream
    Up time: 00:00:10
    LDP discovery sources:
     FastEthernet1/0, Src IP addr: 10.20.10.2
    Addresses bound to peer LDP Ident:
     10.1.1.2    10.20.20.1   10.20.10.2
Chengdu_PE#

The peer (10.1.1.2:0) and local LDP IDs (10.1.1.1:0) are shown in highlighted line 1.

In highlighted line 2, you can see that the session state is now operational (established). The number of messages sent and received (12 and 11), together with the label distribution method (unsolicited downstream), are also shown.

Highlighted line 3 shows the LDP session uptime (10 seconds).

VPI Ranges Do Not Overlap Between LC-ATM Interfaces

During LDP session initialization, session parameters—such as LDP protocol version, label distribution method, and (on LC-ATM interfaces) VPI/VCI ranges used for label switching—are negotiated between peers.

If there is no overlap between VPI ranges configured on LDP peers, an error message is logged and session establishment fails, as shown in Example 6-84.

Example 6-84 VPI Ranges Do Not Overlap Between LC-ATM Interfaces

*Feb 8 14:09:06.038 UTC: %TDP-3-TAGATM_BAD_RANGE: Interface ATM3/0.1, Bad
 VPI/VCI range. Can't start a TDP session

In Example 6-84, the error message indicates that the VPI/VCI negotiation has failed on interface ATM3/0.1, and the LSRs are unable to start an LDP (shown as TDP) session.

You can also use the debug mpls atm-ldp api command to troubleshoot this issue, as shown in Example 6-85.

Example 6-85 debug mpls atm-ldp api Command Output

Chengdu_PE#debug mpls atm-ldp api
LC-ATM API debugging is on
Chengdu_PE#
*Feb 8 14:27:07.226 UTC: TAGATM_API: Disjoint VPI local[1-1], peer[2-3]
Chengdu_PE#

The highlighted portion reveals that VPI range 1–1 is configured locally, and VPI range 2–3 is configured on the peer LSR. Note that the default VPI range is 1–1.

To correct this problem, the VPI range is reconfigured on the peer LSR. This is shown in Example 6-86.

Example 6-86 Reconfiguration of the VPI Range on the Peer LSR

HongKong_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
HongKong_PE(config)#interface atm4/0.1 mpls
HongKong_PE(config-subif)#no mpls atm vpi 2-3
HongKong_PE(config-subif)#end
HongKong_PE#

The highlighted line shows that the VPI range 2–3 is removed. This resets the VPI to the default range of 1–1.

After the VPI range on the peer LSR (HongKong_PE) is reconfigured, LDP session establishment is successful.
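
Note that instead of reverting to the default range, matching non-default VPI ranges could be configured explicitly on both LC-ATM interfaces. A brief sketch follows; the range 2-4 is an arbitrary illustrative value:

! On Chengdu_PE (illustrative VPI range):
Chengdu_PE(config)#interface ATM3/0.1 mpls
Chengdu_PE(config-subif)#mpls atm vpi 2-4
! On HongKong_PE (illustrative VPI range):
HongKong_PE(config)#interface ATM4/0.1 mpls
HongKong_PE(config-subif)#mpls atm vpi 2-4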

To verify successful session establishment, the show mpls ldp neighbor command is used on Chengdu_PE, as shown in Example 6-87.

Example 6-87 LDP Session Establishment Succeeds After Reconfiguration of the VPI Range on HongKong_PE

Chengdu_PE#show mpls ldp neighbor
  Peer LDP Ident: 10.1.1.4:1; Local LDP Ident 10.1.1.1:1
    TCP connection: 10.20.60.2.11036 - 10.20.60.1.646
    State: Oper; Msgs sent/rcvd: 14/14; Downstream on demand
    Up time: 00:06:03
    LDP discovery sources:
     ATM3/0.1, Src IP addr: 10.20.60.2
Chengdu_PE#

The peer (10.1.1.4:1) and local LDP IDs (10.1.1.1:1) are shown in highlighted line 1.

Note that the label space identifier used here is 1. Remember that LC-ATM interfaces do not use the platform-wide label space, which is indicated by the label space identifier 0.

Highlighted line 2 shows that the session state is now operational (established). The number of messages sent and received (14 and 14) and the label distribution method (downstream-on-demand) are also shown.

Highlighted line 3 shows the LDP session uptime (6 minutes 3 seconds).

Label Bindings Are Not Advertised Correctly

If LDP session establishment is successful, but label bindings are not advertised correctly, label switching will not function correctly.

Figure 6-34 shows the advertisement of label bindings between Chengdu_PE and Chengdu_P.

Figure 6-34

Figure 6-34. Advertisement of Label Bindings Between Chengdu_PE and Chengdu_P

To verify that labels are being advertised correctly, use the show mpls ldp bindings command, as shown in Example 6-88. The resulting output shows the contents of the Label Information Base (LIB).

Example 6-88 Verifying the Contents of the LIB

Chengdu_PE#show mpls ldp bindings
 tib entry: 10.1.1.1/32, rev 2
    local binding: tag: imp-null
    remote binding: tsr: 10.1.1.2:0, tag: 19
 tib entry: 10.1.1.2/32, rev 8
    local binding: tag: 17
    remote binding: tsr: 10.1.1.2:0, tag: imp-null
 tib entry: 10.1.1.3/32, rev 14
    local binding: tag: 20
    remote binding: tsr: 10.1.1.2:0, tag: 18
 tib entry: 10.1.1.4/32, rev 18
    local binding: tag: 22
    remote binding: tsr: 10.1.1.2:0, tag: 20
 tib entry: 10.20.10.0/24, rev 4
    local binding: tag: imp-null
    remote binding: tsr: 10.1.1.2:0, tag: imp-null
 tib entry: 10.20.20.0/24, rev 10
    local binding: tag: 18
    remote binding: tsr: 10.1.1.2:0, tag: imp-null
 tib entry: 10.20.20.1/32, rev 16
    local binding: tag: 21
 tib entry: 10.20.20.2/32, rev 6
    local binding: tag: 16
    remote binding: tsr: 10.1.1.2:0, tag: 16
 tib entry: 10.20.30.0/24, rev 12
    local binding: tag: 19
    remote binding: tsr: 10.1.1.2:0, tag: 17
Chengdu_PE#

Example 6-88 shows that label bindings are being received for all prefixes from the peer LSR.

For example, highlighted line 1 shows the LIB (shown here as TIB) entry for prefix 10.1.1.4/32. In highlighted line 2, the locally assigned label for this prefix is shown (22). In highlighted line 3, the label assigned by the peer LSR (10.1.1.2:0) for this prefix is shown (20).

The label bindings that correspond to the best routes are also contained within the LFIB. To examine the contents of the LFIB, use the show mpls forwarding-table command, as shown in Example 6-89.

Example 6-89 Verifying the Contents of the LFIB

Chengdu_PE#show mpls forwarding-table
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id      switched   interface
16     16          10.20.20.2/32     0          Fa1/0      10.20.10.2
17     Pop tag     10.1.1.2/32       0          Fa1/0      10.20.10.2
18     Pop tag     10.20.20.0/24     0          Fa1/0      10.20.10.2
19     17          10.20.30.0/24     0          Fa1/0      10.20.10.2
20     18          10.1.1.3/32       0          Fa1/0      10.20.10.2
21     Untagged    10.20.20.1/32     0          Fa1/0      10.20.10.2
22     20          10.1.1.4/32       0          Fa1/0      10.20.10.2
23     Untagged    172.16.1.0/24[V]  0          Se4/1      point2point
24     Untagged    172.16.2.0/24[V]  0          Se4/1      point2point
25     Untagged    172.16.3.0/24[V]  0          Se4/1      point2point
26     Aggregate   172.16.4.0/24[V]  2080
27     Untagged    172.16.4.2/32[V]  0          Se4/1      point2point
Chengdu_PE#

The LFIB contains the locally assigned and outgoing (advertised by the peer LSR) labels for each prefix. Additionally, the number of bytes label switched, the outgoing interface, and the next-hop are shown.

As an example, the locally assigned and outgoing labels for prefix 10.1.1.4/32 are 22 and 20, respectively (see highlighted line 1). The number of bytes switched is 0, the outgoing interface is Fast Ethernet 1/0, and the next hop is 10.20.10.2.

If label bindings are not advertised correctly, there may be a number of reasons, including the following:

  • The no mpls ldp advertise-labels command is configured on the peer LSR.
  • Conditional label advertisement blocks label bindings.
  • CEF disables local label assignment.

The sections that follow discuss these issues.

no mpls ldp advertise-labels Command Is Configured on the Peer LSR

If no label bindings are being received from a peer LSR, this may indicate that the peer LSR is configured not to advertise its locally assigned label bindings.

To verify that label bindings are being received from the peer LSR, use the show mpls ldp bindings command, as shown in Example 6-90.

Example 6-90 No Label Bindings Are Received from the Peer LSR

Chengdu_PE#show mpls ldp bindings
 tib entry: 10.1.1.1/32, rev 2
    local binding: tag: imp-null
 tib entry: 10.1.1.2/32, rev 8
    local binding: tag: 17
 tib entry: 10.1.1.3/32, rev 14
    local binding: tag: 20
 tib entry: 10.1.1.4/32, rev 18
    local binding: tag: 22
 tib entry: 10.20.10.0/24, rev 4
    local binding: tag: imp-null
 tib entry: 10.20.20.0/24, rev 10
    local binding: tag: 18
 tib entry: 10.20.20.1/32, rev 16
    local binding: tag: 21
 tib entry: 10.20.20.2/32, rev 6
    local binding: tag: 16
 tib entry: 10.20.30.0/24, rev 12
    local binding: tag: 19
Chengdu_PE#

In Example 6-90, no label bindings are being received from LSR 10.1.1.2:0.

The highlighted line shows the LIB entry for prefix 10.1.1.4/32. As you can see, there is no label binding from LSR 10.1.1.2:0; there is only a local binding.

The configuration of the peer LSR is checked using the show running-config command, as demonstrated in Example 6-91. Note that only the relevant portion of the output is shown.

Example 6-91 Checking the Configuration of the Peer LSR Using the show running-config Command

Chengdu_P#show running-config
Building configuration...
!
ip multicast-routing
mpls label protocol ldp
no tag-switching advertise-tags
!

As you can see, the no mpls ldp advertise-labels (shown as no tag-switching advertise-tags) command is configured on the peer LSR. This command disables advertisement of label bindings by the LSR.

To ensure that the LSR advertises labels, use the mpls ldp advertise-labels command, as shown in Example 6-92.

Example 6-92 Label Advertisement Is Enabled on Chengdu_P

Chengdu_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_P(config)#mpls ldp advertise-labels
Chengdu_P(config)#exit
Chengdu_P#

Once label advertisement on the peer LSR is enabled, the show mpls ldp bindings command is used to verify that the bindings are being received, as shown in Example 6-93.

Example 6-93 Label Bindings Are Now Received from the Peer LSR

Chengdu_PE#show mpls ldp bindings
 tib entry: 10.1.1.1/32, rev 2
    local binding: tag: imp-null
    remote binding: tsr: 10.1.1.2:0, tag: 19
 tib entry: 10.1.1.2/32, rev 8
    local binding: tag: 17
    remote binding: tsr: 10.1.1.2:0, tag: imp-null
 tib entry: 10.1.1.3/32, rev 14
    local binding: tag: 20
    remote binding: tsr: 10.1.1.2:0, tag: 18
 tib entry: 10.1.1.4/32, rev 18
    local binding: tag: 22
    remote binding: tsr: 10.1.1.2:0, tag: 20
 tib entry: 10.20.10.0/24, rev 4
    local binding: tag: imp-null
    remote binding: tsr: 10.1.1.2:0, tag: imp-null
 tib entry: 10.20.20.0/24, rev 10
    local binding: tag: 18
    remote binding: tsr: 10.1.1.2:0, tag: imp-null
 tib entry: 10.20.20.1/32, rev 16
    local binding: tag: 21
 tib entry: 10.20.20.2/32, rev 6
    local binding: tag: 16
    remote binding: tsr: 10.1.1.2:0, tag: 16
 tib entry: 10.20.30.0/24, rev 12
    local binding: tag: 19
    remote binding: tsr: 10.1.1.2:0, tag: 17
Chengdu_PE#

As you can see, label bindings are now being received from peer LSR 10.1.1.2:0. In highlighted line 1, the LIB entry for prefix 10.1.1.4/32 is shown. Highlighted line 2 shows the label binding for this prefix advertised by LSR 10.1.1.2:0.

Conditional Label Advertisement Blocks Label Bindings

If some, but not all, expected label bindings are being received from a peer LSR, this might indicate the presence of conditional label advertisement on the peer LSR.

You can use the show mpls ldp bindings command to examine label bindings advertised from the peer LSRs, as shown in Example 6-94.

Example 6-94 Verifying Label Bindings Advertised by Peer LSRs

Chengdu_PE#show mpls ldp bindings
 tib entry: 10.1.1.1/32, rev 4
    local binding: tag: imp-null
    remote binding: tsr: 10.1.1.2:0, tag: 19
 tib entry: 10.1.1.2/32, rev 8
    local binding: tag: 17
    remote binding: tsr: 10.1.1.2:0, tag: imp-null
 tib entry: 10.1.1.3/32, rev 14
    local binding: tag: 20
    remote binding: tsr: 10.1.1.2:0, tag: 18
 tib entry: 10.1.1.4/32, rev 18
    local binding: tag: 22
 tib entry: 10.20.10.0/24, rev 2
    local binding: tag: imp-null
    remote binding: tsr: 10.1.1.2:0, tag: imp-null
 tib entry: 10.20.20.0/24, rev 10
    local binding: tag: 18
    remote binding: tsr: 10.1.1.2:0, tag: imp-null
 tib entry: 10.20.20.1/32, rev 16
    local binding: tag: 21
 tib entry: 10.20.20.2/32, rev 6
    local binding: tag: 16
    remote binding: tsr: 10.1.1.2:0, tag: 16
 tib entry: 10.20.30.0/24, rev 12
    local binding: tag: 19
    remote binding: tsr: 10.1.1.2:0, tag: 17
Chengdu_PE#

If you look closely at the output in Example 6-94, you will notice that there are both local and remote bindings for all prefixes, with the exception of 10.1.1.4/32 (highlighted). There is no remote binding for this prefix, which indicates that the peer LSR is not advertising one.

To check for the presence of conditional label advertisement on the peer LSR, use the show running-config command, as demonstrated in Example 6-95. Note that only the relevant portion of the configuration is shown.

Example 6-95 Checking for the Presence of Conditional Label Advertisement

Chengdu_P#show running-config
Building configuration...
!
ip multicast-routing
mpls label protocol ldp
no tag-switching advertise-tags
tag-switching advertise-tags for 1
!
!
access-list 1 permit 10.1.1.2
access-list 1 permit 10.1.1.3
access-list 1 permit 10.1.1.1
access-list 1 permit 10.20.10.0 0.0.0.255
access-list 1 permit 10.20.20.0 0.0.0.255
access-list 1 permit 10.20.30.0 0.0.0.255
!

In highlighted lines 1 and 2, the peer LSR (Chengdu_P) is configured to advertise only labels for those prefixes specified in access list 1.

Highlighted lines 3 to 8 show access list 1. As you can see, prefix 10.1.1.4/32 is not permitted, which prevents the advertisement of a binding for this prefix.

To allow the advertisement of a binding for prefix 10.1.1.4/32, you can either modify or remove access list 1. In this scenario, conditional label advertisement is unnecessary, so it is removed, as shown in Example 6-96.

Example 6-96 Conditional Label Advertisement Is Removed on Chengdu_P

Chengdu_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_P(config)#mpls ldp advertise-labels
Chengdu_P(config)#no mpls ldp advertise-labels for 1
Chengdu_P(config)#exit
Chengdu_P#

In highlighted lines 1 and 2, conditional label advertisement is removed on Chengdu_P.
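
Had conditional label advertisement been required, the alternative would have been to modify access list 1 so that a label binding for 10.1.1.4/32 is also advertised. A minimal sketch of that approach on Chengdu_P follows:

Chengdu_P(config)#access-list 1 permit 10.1.1.4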

Having removed conditional label advertisement on Chengdu_P, use the show mpls ldp bindings command to confirm proper label bindings advertisement, as demonstrated in Example 6-97.

Example 6-97 Confirming Advertisement of a Label Binding for Prefix 10.1.1.4/32

Chengdu_PE#show mpls ldp bindings 10.1.1.4 32
 tib entry: 10.1.1.4/32, rev 18
    local binding: tag: 22
    remote binding: tsr: 10.1.1.2:0, tag: 20
Chengdu_PE#

As you can see, a label binding for prefix 10.1.1.4/32 has now been received from Chengdu_P.

Label bindings can also be filtered as they are received on an LSR using the mpls ldp neighbor [vrf vpn-name] neighbor-address labels accept acl command. Labels corresponding to prefixes permitted in a standard access list are accepted from the specified neighbor. Verify the presence of this command using the show mpls ldp neighbor neighbor-address detail command.
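
As a brief sketch of this inbound filter, assuming a hypothetical standard access list 10 and using the neighbor address from the earlier examples (10.1.1.2), the configuration might look like the following; only labels for prefixes permitted by access list 10 would then be accepted from that neighbor:

! Hypothetical access list; 10.1.1.2 is the neighbor from earlier examples
Chengdu_PE(config)#access-list 10 permit 10.1.1.0 0.0.0.255
Chengdu_PE(config)#mpls ldp neighbor 10.1.1.2 labels accept 10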

CEF Disables Local Label Assignment

If labels are not being bound to prefixes locally, this might indicate that CEF is disabled on the LSR.

You can use the show mpls ldp bindings command to verify local label bindings as shown in Example 6-98.

Example 6-98 Local Label Assignment Is Disabled

Chengdu_P#show mpls ldp bindings
 tib entry: 10.1.1.1/32, rev 5
    remote binding: tsr: 10.1.1.1:0, tag: imp-null
    remote binding: tsr: 10.1.1.3:0, tag: 19
 tib entry: 10.1.1.2/32, rev 2
    remote binding: tsr: 10.1.1.1:0, tag: 17
    remote binding: tsr: 10.1.1.3:0, tag: 17
 tib entry: 10.1.1.3/32, rev 7
    remote binding: tsr: 10.1.1.1:0, tag: 20
    remote binding: tsr: 10.1.1.3:0, tag: imp-null
 tib entry: 10.1.1.4/32, rev 8
    remote binding: tsr: 10.1.1.1:0, tag: 22
    remote binding: tsr: 10.1.1.3:0, tag: 20
 tib entry: 10.20.10.0/24, rev 1
    remote binding: tsr: 10.1.1.1:0, tag: imp-null
    remote binding: tsr: 10.1.1.3:0, tag: 18
 tib entry: 10.20.20.0/24, rev 4
    remote binding: tsr: 10.1.1.1:0, tag: 18
    remote binding: tsr: 10.1.1.3:0, tag: imp-null
 tib entry: 10.20.20.1/32, rev 9
    remote binding: tsr: 10.1.1.1:0, tag: 21
    remote binding: tsr: 10.1.1.3:0, tag: 16
 tib entry: 10.20.20.2/32, rev 3
    remote binding: tsr: 10.1.1.1:0, tag: 16
 tib entry: 10.20.30.0/24, rev 6
    remote binding: tsr: 10.1.1.1:0, tag: 19
    remote binding: tsr: 10.1.1.3:0, tag: imp-null
Chengdu_P#

As you can see, the LIB contains remote label bindings but no local label bindings.

As shown in Example 6-99, you can use the show ip cef summary command to check whether CEF is running.

Example 6-99 Verifying CEF Operation

Chengdu_P#show ip cef summary
IP CEF without switching (Table Version 1), flags=0x0
 4294967293 routes, 0 reresolve, 0 unresolved (0 old, 0 new), peak 0
 0 leaves, 0 nodes, 0 bytes, 4 inserts, 4 invalidations
 0 load sharing elements, 0 bytes, 0 references
 universal per-destination load sharing algorithm, id 88235174
 2(0) CEF resets, 0 revisions of existing leaves
 Resolution Timer: Exponential (currently 1s, peak 0s)
 0 in-place/0 aborted modifications
 refcounts: 0 leaf, 0 node
 Table epoch: 0
%CEF not running
Chengdu_P#

The highlighted line shows that CEF is disabled. To enable CEF, use the ip cef command, as shown in Example 6-100.

Example 6-100 Enabling CEF on Chengdu_P

Chengdu_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_P(config)#ip cef
Chengdu_P(config)#exit
Chengdu_P#

Once CEF has been enabled, the LIB is again examined using the show mpls ldp bindings command, as shown in Example 6-101.

Example 6-101 Local Label Assignment Is Now Enabled

Chengdu_P#show mpls ldp bindings
 tib entry: 10.1.1.1/32, rev 15
    local binding: tag: 19
    remote binding: tsr: 10.1.1.1:0, tag: imp-null
    remote binding: tsr: 10.1.1.3:0, tag: 19
 tib entry: 10.1.1.2/32, rev 12
    local binding: tag: imp-null
    remote binding: tsr: 10.1.1.1:0, tag: 17
    remote binding: tsr: 10.1.1.3:0, tag: 17
 tib entry: 10.1.1.3/32, rev 13
    local binding: tag: 18
    remote binding: tsr: 10.1.1.1:0, tag: 20
    remote binding: tsr: 10.1.1.3:0, tag: imp-null
 tib entry: 10.1.1.4/32, rev 16
    local binding: tag: 20
    remote binding: tsr: 10.1.1.1:0, tag: 22
    remote binding: tsr: 10.1.1.3:0, tag: 20
 tib entry: 10.20.10.0/24, rev 17
    local binding: tag: imp-null
    remote binding: tsr: 10.1.1.1:0, tag: imp-null
    remote binding: tsr: 10.1.1.3:0, tag: 18
 tib entry: 10.20.20.0/24, rev 14
    local binding: tag: imp-null
    remote binding: tsr: 10.1.1.1:0, tag: 18
    remote binding: tsr: 10.1.1.3:0, tag: imp-null
 tib entry: 10.20.20.1/32, rev 9
    remote binding: tsr: 10.1.1.1:0, tag: 21
    remote binding: tsr: 10.1.1.3:0, tag: 16
 tib entry: 10.20.20.2/32, rev 11
    local binding: tag: 17
    remote binding: tsr: 10.1.1.1:0, tag: 16
 tib entry: 10.20.30.0/24, rev 10
    local binding: tag: 16
    remote binding: tsr: 10.1.1.1:0, tag: 19
    remote binding: tsr: 10.1.1.3:0, tag: imp-null
Chengdu_P#

As you can see, the LIB now contains local label bindings.

Troubleshooting Route Advertisement Between VPN Sites

When troubleshooting route advertisement across the MPLS VPN backbone, you need to consider a number of issues. Before examining end-to-end troubleshooting of route advertisement, it is worthwhile to briefly review the issues involved.

Figure 6-35 illustrates route advertisement across the MPLS VPN backbone.

Figure 6-35

Figure 6-35. Route Advertisement Across the MPLS VPN Backbone

In Figure 6-35, route advertisement from CE2 to CE1 is as follows:

  1. CE2 advertises customer site 2 routes to HongKong_PE using the PE-CE routing protocol (assuming that static routes are not being used).
  2. HongKong_PE redistributes customer routes into MP-BGP.
  3. HongKong_PE advertises the routes across the MPLS VPN backbone to Chengdu_PE, which imports the routes into its VRF.
  4. Chengdu_PE redistributes the MP-BGP routes into the PE-CE routing protocol.
  5. Chengdu_PE advertises the routes to CE1.

Note that Chengdu_PE is the ingress PE router and HongKong_PE is the egress PE router with respect to traffic flow (which is in the opposite direction to route advertisement).

Note also that CE1 advertises customer site 1 routes across the MPLS VPN backbone to CE2 in the same manner as that used for route advertisement from CE2 to CE1.

The next sections discuss the troubleshooting itself, beginning with route advertisement from the CE router to the connected PE router.

Troubleshooting Route Advertisement Between the PE and CE Routers

The first step in ensuring correct route advertisement is to make sure that the PE router is receiving routes from its connected CE routers.

Figure 6-36 illustrates route advertisement from CE2 to HongKong_PE.

Figure 6-36

Figure 6-36. Route Advertisement from CE2 to HongKong_PE

In Figure 6-36, CE2 advertises routes 172.16.5.0/24, 172.16.6.0/24, and 172.16.7.0/24 to the egress PE, HongKong_PE.

To check that customer site routes are being received on the attached PE router, use the show ip route vrf vrf_name command.

Example 6-102 shows the output of the show ip route vrf vrf_name command on the PE router.

Example 6-102 Verifying That Customer Routes Are Being Received on the Attached PE Router

HongKong_PE#show ip route vrf mjlnet_VPN
Routing Table: mjlnet_VPN
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
    D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
    N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
    E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
    i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
    * - candidate default, U - per-user static route, o - ODR
Gateway of last resort is not set
   172.16.0.0/16 is variably subnetted, 5 subnets, 2 masks
B    172.16.4.0/24 [200/0] via 10.1.1.1, 03:04:28
B    172.16.4.2/32 [200/0] via 10.1.1.1, 03:04:28
B    172.16.1.0/24 [200/1] via 10.1.1.1, 03:04:28
B    172.16.2.0/24 [200/1] via 10.1.1.1, 03:04:28
B    172.16.3.0/24 [200/1] via 10.1.1.1, 03:04:28
HongKong_PE#

As you can see, site 2 routes (172.16.5.0/24, 172.16.6.0/24, and 172.16.7.0/24) are not in the VRF mjlnet_VPN routing table.

Note that the routes shown in Example 6-102 are from mjlnet_VPN site 1.

The most likely causes for this are the following:

  • The customer interface is misconfigured or down on either the PE or CE router.
  • There is a problem with PE-CE routing (routing protocol / statics).

The sections that follow examine these issues in more detail.

Customer Interface Is Misconfigured or Down

One common cause of PE to CE routing issues is misconfiguration of the customer interface.

Use the show ip vrf interfaces command to verify the configuration of the customer interface, as shown in Example 6-103.

Example 6-103 Verifying the Configuration of the Customer Interface Using the show ip vrf interfaces Command

HongKong_PE#show ip vrf interfaces
Interface       IP-Address   VRF               Protocol
Serial2/1       172.16.8.1   cisco_VPN              up
Serial2/2       192.168.8.1   cisco_VPN              up
HongKong_PE#

In Example 6-103, you will notice that interface serial 2/1 is assigned to VRF cisco_VPN. In fact, it should be assigned to VRF mjlnet_VPN.

To re-assign interface serial 2/1 to VRF mjlnet_VPN, use the ip vrf forwarding vrf_name command as shown in Example 6-104.

Example 6-104 Reassignment of Interface serial 2/1 to VRF mjlnet_VPN

HongKong_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
HongKong_PE(config)#interface serial 2/1
HongKong_PE(config-if)#ip vrf forwarding mjlnet_VPN
% Interface Serial2/1 IP address 172.16.8.1 removed due to enabling VRF mjlnet_VPN
HongKong_PE(config-if)#ip address 172.16.8.1 255.255.255.0
HongKong_PE(config-if)#end
HongKong_PE#

In highlighted line 1, interface Serial2/1 is reassigned to VRF mjlnet_VPN. Notice that the IP address must be reconfigured when the interface is reassigned to VRF mjlnet_VPN (highlighted lines 2 and 3).

After the interface is reassigned, the interface-to-VRF assignment is rechecked in Example 6-105.

Example 6-105 Interface Serial2/1 Is Now Correctly Assigned to VRF mjlnet_VPN

HongKong_PE#show ip vrf interfaces
Interface       IP-Address   VRF               Protocol
Serial2/1       172.16.8.1   mjlnet_VPN              up
Serial2/2       192.168.8.1   cisco_VPN              up
HongKong_PE#

As you can see, interface Serial2/1 is now correctly assigned to VRF mjlnet_VPN.

When verifying the configuration of the customer interface, ensure that an IP address is configured on the interface and that the interface is in an up state. Also be sure to check that the CE router interface connected to the PE router is in an up state, and is correctly configured.

Troubleshooting the PE-CE Routing Protocol and Static Routes

If the customer interface is correctly configured and in an up state, the next step is to troubleshoot the PE-CE routing protocol or static routes.

Static Routes Are Misconfigured

If static routes are misconfigured, connectivity between the PE and the customer site will fail.

To check that static routes are correctly configured, use the show ip route vrf vrf_name static command as shown in Example 6-106.

Example 6-106 Checking That Static Routes Are Correctly Configured Using the show ip route vrf vrf_name static Command

HongKong_PE#show ip route vrf mjlnet_VPN static
HongKong_PE#

As you can see, there are no static routes in the VRF routing table.

The next step is to check the configuration of the static routes. This can be done using the show running-config | begin ip route command, as shown in Example 6-107.

Example 6-107 Checking the Configuration of Static Routes

HongKong_PE#show running-config | begin ip route
!
ip route 172.16.5.0 255.255.255.0 Serial2/1
ip route 172.16.6.0 255.255.255.0 Serial2/1
ip route 172.16.7.0 255.255.255.0 Serial2/1
!

The output in Example 6-107 reveals the problem. The static routes are configured as global static routes and not as VRF static routes.

As shown in Example 6-108, the static routes are then reconfigured as VRF static routes.

Example 6-108 Reconfiguration of the Static Routes

HongKong_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
HongKong_PE(config)#no ip route 172.16.5.0 255.255.255.0 Serial2/1
HongKong_PE(config)#no ip route 172.16.6.0 255.255.255.0 Serial2/1
HongKong_PE(config)#no ip route 172.16.7.0 255.255.255.0 Serial2/1
HongKong_PE(config)#ip route vrf mjlnet_VPN 172.16.5.0 255.255.255.0 serial 2/1
HongKong_PE(config)#ip route vrf mjlnet_VPN 172.16.6.0 255.255.255.0 serial 2/1
HongKong_PE(config)#ip route vrf mjlnet_VPN 172.16.7.0 255.255.255.0 serial 2/1
HongKong_PE(config)#exit
HongKong_PE#

The incorrectly configured static routes are removed in highlighted lines 1 to 3. In highlighted lines 4 to 6, the VRF static routes are configured.

The show ip route vrf vrf_name static command is then used to verify that the static routes are in the VRF routing table, as shown in Example 6-109.

Example 6-109 Verifying the VRF Static Routes

HongKong_PE#show ip route vrf mjlnet_VPN static
   172.16.0.0/16 is variably subnetted, 10 subnets, 2 masks
S    172.16.5.0/24 is directly connected, Serial2/1
S    172.16.6.0/24 is directly connected, Serial2/1
S    172.16.7.0/24 is directly connected, Serial2/1
HongKong_PE#

Highlighted lines 1 to 3 show that the VRF static routes are now correctly configured.

When troubleshooting VRF static routes, also ensure that the VRF name, network prefix, mask, outgoing interface, and next-hop (if used) are correctly specified.

PE-CE Routing Protocols

If PE-CE routing is not functioning correctly, this may be because of one or more of the following issues:

  • The routing protocol is configured globally.
  • The routing protocol is not enabled on the VRF interface.
  • Routing protocol timers are mismatched.
  • The routers are not on a common subnet.
  • A passive interface is configured.
  • An access list blocks the routing protocol.
  • Distribute lists, prefix lists, or route maps block route updates.
  • An authentication mismatch exists.
  • The PE-CE routing protocol is otherwise misconfigured.

Although in-depth PE-CE IGP troubleshooting is beyond the scope of this book, this section briefly discusses these issues.

These issues are discussed from the perspective of the PE router, but make sure that you also verify the configuration of the PE-CE routing protocol on the CE router.

Routing Protocol Is Configured Globally

Make sure that the PE-CE routing protocol is not configured globally on the PE router. If the PE-CE routing protocol is RIPv2, EIGRP, or EBGP, it should be configured under the IPv4 address family. If the PE-CE routing protocol is OSPF, a separate process should be configured for the VRF.

Use the show ip protocols vrf vrf_name command to troubleshoot this issue.
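
As an illustrative sketch (not the configuration used in this chapter's examples), RIPv2 as the PE-CE routing protocol would be configured under the IPv4 VRF address family, whereas OSPF would use a separate process associated with the VRF. The OSPF process number and network statements here are arbitrary:

! RIPv2 under the VRF address family:
router rip
 version 2
 address-family ipv4 vrf mjlnet_VPN
  network 172.16.0.0
  no auto-summary
 exit-address-family
!
! OSPF as a separate process for the VRF (illustrative process number):
router ospf 100 vrf mjlnet_VPN
 network 172.16.8.0 0.0.0.255 area 0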

Routing Protocol Is Not Enabled on the VRF Interface

Verify that the PE-CE routing protocol is enabled on the VRF interface.

Use the show ip protocols vrf vrf_name command to check this.

Routing Protocol Timers Are Mismatched

Ensure that routing protocol timers match between the PE and the CE routers. For example, if the PE-CE routing protocol is OSPF, make sure the hello and dead intervals are the same.

Use the show ip protocols vrf vrf_name command to troubleshoot this issue.
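
For example, if the OSPF hello and dead intervals have been changed on one side, they can be set to match on the VRF interface. A brief sketch follows; the interface and timer values are illustrative and must match those on the CE router:

HongKong_PE(config)#interface Serial2/2
HongKong_PE(config-if)#ip ospf hello-interval 10
HongKong_PE(config-if)#ip ospf dead-interval 40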

Routers Are Not on a Common Subnet

Check that the VRF interface and the CE router interface that is connected to the PE router are correctly addressed (including the address mask).

Use the show ip vrf interfaces command to verify this on the PE router.

Passive Interface Is Configured

Make sure that the VRF interface is not configured as a passive interface. If the VRF interface is configured as a passive interface, routing updates will not be sent on the interface.

The show ip protocols vrf vrf_name command can be used to verify this.

Access List Blocks the Routing Protocol

Verify that an access list is not blocking the PE-CE routing protocol on the VRF interface.

Check for access lists using the show ip interface command.

Distribute Lists, Prefix Lists, or Route Maps Block Route Updates

Check that distribute lists, prefix lists, or route maps are not blocking route updates.

Use the show ip protocols vrf vrf_name command to verify this.

Authentication Mismatch Exists

Check to see whether there is an authentication mismatch between the PE and the CE routers.

The command to verify this issue depends on the PE-CE routing protocol.

  • For OSPF, use the debug ip ospf adj command.
  • For RIPv2, use the debug ip rip command.
  • For EIGRP, use the debug eigrp packets verbose command.
  • For EBGP, an error message is logged. Use the show logging command to see the message.

PE-CE Routing Protocol Is Otherwise Misconfigured

Simple misconfiguration is the most common cause of PE-CE routing issues.

Check the section, "Step 11: Configure PE-CE Routing Protocols / Static Routes," on page 454. Proper configuration, as well as a number of protocol-specific issues, is discussed in that section.

Other Useful PE-CE Routing Protocol Troubleshooting Commands

Regular routing protocol show and debug commands can be used to troubleshoot PE-CE routing protocols. However, there are some VRF-specific commands for RIPv2 and EIGRP that may be useful:

  • RIP—The show ip rip database vrf vrf_name command can be used to examine the RIP database.
  • EIGRP—VRF-specific commands for use with EIGRP are as follows:
    • The show ip eigrp vrf vrf_name interfaces command can be used to verify EIGRP VRF interfaces.
    • The show ip eigrp vrf vrf_name neighbors command can be used to verify EIGRP neighbors on VRF interfaces.
    • Use the show ip eigrp vrf vrf_name topology command to display the VRF EIGRP topology table.
    • The show ip eigrp vrf vrf_name traffic command can be used to view EIGRP traffic statistics for the VRF.

Redistribution of Customer Routes into MP-BGP Is Not Successful on the Egress PE Router

Once you have verified PE-CE routing, the next step in troubleshooting route exchange across the MPLS VPN backbone is to ensure that the redistribution of customer routes into MP-BGP is functioning correctly.

Figure 6-37 illustrates redistribution of customer routes into MP-BGP.

Figure 6-37

Figure 6-37. Redistribution of Customer Routes into MP-BGP

To verify that customer routes are being redistributed, use the show ip bgp vpnv4 vrf vrf_name command, as shown in Example 6-110.

Example 6-110 Customer Routes Are Not Redistributed in MP-BGP

HongKong_PE#show ip bgp vpnv4 vrf mjlnet_VPN
BGP table version is 26, local router ID is 10.1.1.4
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
       S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
  Network     Next Hop      Metric LocPrf Weight Path
Route Distinguisher: 64512:100 (default for vrf mjlnet_VPN)
*>i172.16.1.0/24  10.1.1.1         1  100   0 ?
*>i172.16.2.0/24  10.1.1.1         1  100   0 ?
*>i172.16.3.0/24  10.1.1.1         1  100   0 ?
*>i172.16.4.0/24  10.1.1.1         0  100   0 ?
*>i172.16.4.2/32  10.1.1.1         0  100   0 ?
HongKong_PE#

You will notice that the output of the show ip bgp vpnv4 vrf vrf_name command does not show any of the routes from customer site 2 (172.16.5.0/24, 172.16.6.0/24, and 172.16.7.0/24).

If routes are not being redistributed (as in Example 6-110), the first thing to check is the configuration of redistribution of PE-CE routing protocols or static routes using the show running-config command, as shown in Example 6-111.

Note that only the relevant portion of the output is shown.

Example 6-111 Checking the Configuration of Redistribution on the PE Router

HongKong_PE#show running-config | begin router bgp
router bgp 64512
 no synchronization
 bgp log-neighbor-changes
 redistribute rip
 neighbor 10.1.1.1 remote-as 64512
 neighbor 10.1.1.1 update-source Loopback0
 neighbor 10.1.1.6 remote-as 64512
 neighbor 10.1.1.6 update-source Loopback0
 no auto-summary
 !
address-family ipv4 vrf mjlnet_VPN
 no auto-summary
 no synchronization
 exit-address-family
!

Highlighted line 1 shows the problem—RIP is being redistributed into global BGP. RIP redistribution into MP-BGP should be configured under the IPv4 VRF mjlnet_VPN address family (see highlighted line 2).

Redistribution of the PE-CE routing protocol into MP-BGP is then reconfigured, as shown in Example 6-112.

Example 6-112 Reconfiguration of Redistribution from the PE-CE Routing Protocol into MP-BGP

HongKong_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
HongKong_PE(config)#router bgp 64512
HongKong_PE(config-router)#no redistribute rip
HongKong_PE(config-router)#address-family ipv4 vrf mjlnet_VPN
HongKong_PE(config-router-af)#redistribute rip
HongKong_PE(config-router-af)#end
HongKong_PE#

In highlighted line 1, redistribution of the PE-CE routing protocol into global BGP is disabled.

Redistribution of the PE-CE routing protocol into MP-BGP is then configured in highlighted line 2.

To ensure that redistribution of your PE-CE routing protocol is configured correctly, check the section "Step 12: Redistribute Customer Routes into MP-BGP" on page 458.

Once the configuration has been corrected, customer routes are redistributed into MP-BGP. This is verified using the show ip bgp vpnv4 vrf vrf_name command, as shown in Example 6-113.

Example 6-113 Customer Routes Are Now Successfully Redistributed into MP-BGP

HongKong_PE#show ip bgp vpnv4 vrf mjlnet_VPN
BGP table version is 21, local router ID is 10.1.1.4
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
       S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
  Network     Next Hop      Metric LocPrf Weight Path
Route Distinguisher: 64512:100 (default for vrf mjlnet_VPN)
*>i172.16.1.0/24  10.1.1.1         1  100   0 ?
*>i172.16.2.0/24  10.1.1.1         1  100   0 ?
*>i172.16.3.0/24  10.1.1.1         1  100   0 ?
*>i172.16.4.0/24  10.1.1.1         0  100   0 ?
*>i172.16.4.2/32  10.1.1.1         0  100   0 ?
*> 172.16.5.0/24  172.16.8.2        1     32768 ?
*> 172.16.6.0/24  172.16.8.2        1     32768 ?
*> 172.16.7.0/24  172.16.8.2        1     32768 ?
*> 172.16.8.0/24  0.0.0.0         0     32768 ?
*> 172.16.8.2/32  0.0.0.0         0     32768 ?
HongKong_PE#

The highlighted lines show that customer site 2 routes are now being successfully redistributed into MP-BGP.

MP-BGP Routes from the Egress PE Router Are Not Installed in the BGP Table of the Ingress PE Router

If customer routes are correctly redistributed into MP-BGP, the next step is to check that the routes advertised by the egress PE are correctly advertised and installed in the BGP table of the ingress PE router.

Figure 6-38 illustrates the advertisement of MP-BGP routes to the ingress PE router.

Figure 6-38

Figure 6-38. Advertisement of MP-BGP Routes to the Ingress PE Router

To examine the BGP routing table of the ingress PE router, use the show ip bgp vpnv4 vrf vrf_name command, as shown in Example 6-114.

Example 6-114 Verifying Installation of MP-BGP Routes in the BGP Table of the Ingress PE Router

Chengdu_PE#show ip bgp vpnv4 vrf mjlnet_VPN
BGP table version is 1, local router ID is 10.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
       S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
  Network     Next Hop      Metric LocPrf Weight Path
Route Distinguisher: 64512:100 (default for vrf mjlnet_VPN)
*> 172.16.1.0/24  172.16.4.2        1     32768 ?
*> 172.16.2.0/24  172.16.4.2        1     32768 ?
*> 172.16.3.0/24  172.16.4.2        1     32768 ?
*> 172.16.4.0/24  0.0.0.0         0     32768 ?
*> 172.16.4.2/32  0.0.0.0         0     32768 ?
Chengdu_PE#

As you can see, mjlnet_VPN site 2 routes (172.16.5.0/24, 172.16.6.0/24, and 172.16.7.0/24) are not in the BGP routing table on the ingress PE router (Chengdu_PE).

The show ip route vrf vrf_name command can also be used to verify that mjlnet_VPN site 2 routes are not being installed into the VRF routing table, as demonstrated in Example 6-115.

Example 6-115 Verifying the VRF Routing Table

Chengdu_PE#show ip route vrf mjlnet_VPN
Routing Table: mjlnet_VPN
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
    D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
    N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
    E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
    i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
    * - candidate default, U - per-user static route, o - ODR
Gateway of last resort is not set
   172.16.0.0/16 is variably subnetted, 5 subnets, 2 masks
C    172.16.4.0/24 is directly connected, Serial4/1
C    172.16.4.2/32 is directly connected, Serial4/1
R    172.16.1.0/24 [120/1] via 172.16.4.2, 00:00:26, Serial4/1
R    172.16.2.0/24 [120/1] via 172.16.4.2, 00:00:26, Serial4/1
R    172.16.3.0/24 [120/1] via 172.16.4.2, 00:00:26, Serial4/1
Chengdu_PE#

Again, no evidence of mjlnet_VPN site 2 routes. There are several possible causes for this, including the following:

  • The VPN-IPv4 address family is misconfigured on the ingress or egress PE router.
  • Export and import route targets are mismatched on the egress and ingress PE routers.
  • An export map is misconfigured.
  • Routes are blocked by an import map.

These issues are discussed in the sections that follow.

VPN-IPv4 Address Family Is Misconfigured on the Ingress or Egress PE Router

If customer routes are not being advertised to the ingress PE router, the first thing to check is that the VPN-IPv4 (VPNv4) address family is configured correctly on the egress and ingress PE routers.

To verify MP-BGP configuration, use the show ip bgp neighbors [neighbor-address] command, as shown in Example 6-116.

Note that only the relevant portion of the output is shown.

Example 6-116 Verifying MP-BGP Configuration Using the show ip bgp neighbors Command

HongKong_PE#show ip bgp neighbors 10.1.1.1
BGP neighbor is 10.1.1.1, remote AS 64512, internal link
 BGP version 4, remote router ID 10.1.1.1
 BGP state = Established, up for 00:14:47
 Last read 00:00:47, hold time is 180, keepalive interval is 60 seconds
 Neighbor capabilities:
  Route refresh: advertised and received(new)
  Address family IPv4 Unicast: advertised and received
  Address family VPNv4 Unicast: advertised
 Received 427 messages, 0 notifications, 0 in queue
 Sent 427 messages, 0 notifications, 0 in queue
 Default minimum time between advertisement runs is 5 seconds

Highlighted line 1 shows that the BGP session between the egress and ingress PE routers is Established.

In highlighted line 2, you will notice that the VPNv4 address family is advertised. This indicates that the local router (HongKong_PE) supports the VPNv4 (VPN-IPv4) address family. Unfortunately, there is no indication that the ingress PE router (Chengdu_PE) supports the VPNv4 address family (this would be indicated by the received keyword).

This is not good. If the neighbor does not support the VPNv4 address family, there is no chance of VPN routes being exchanged between BGP peers.

The configuration of the ingress PE router is examined using the show running-config command, as demonstrated in Example 6-117.

Example 6-117 Checking the Configuration of the Ingress PE Router

Chengdu_PE#show running-config | begin router bgp
router bgp 64512
 no synchronization
 bgp log-neighbor-changes
 neighbor 10.1.1.4 remote-as 64512
 neighbor 10.1.1.4 update-source Loopback0
 neighbor 10.1.1.6 remote-as 64512
 neighbor 10.1.1.6 update-source Loopback0
 no auto-summary
 !
 address-family ipv4 vrf cisco_VPN
 redistribute ospf 200 match internal external 1 external 2
 no auto-summary
 no synchronization
 exit-address-family
 !
 address-family ipv4 vrf mjlnet_VPN
 redistribute rip
 no auto-summary
 no synchronization
 exit-address-family
!

As you can see, the VPNv4 address family is not configured.

The VPNv4 address family is then configured on the ingress PE router, as shown in Example 6-118.

Example 6-118 Configuration of the VPNv4 Address Family on the Ingress PE Router

Chengdu_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_PE(config)#router bgp 64512
Chengdu_PE(config-router)#address-family vpnv4
Chengdu_PE(config-router-af)#neighbor 10.1.1.4 activate
Chengdu_PE(config-router-af)#end
Chengdu_PE#

In Example 6-118, the VPNv4 address family is configured (highlighted line 1), and neighbor 10.1.1.4 (HongKong_PE) is activated (highlighted line 2).

Once the VPNv4 address family has been configured on the ingress PE, the BGP VPNv4 table is again checked for mjlnet_VPN site 2 routes.

Example 6-119 shows the output of the show ip bgp vpnv4 vrf vrf_name command after configuration of the VPNv4 address family.

Example 6-119 mjlnet_VPN Site 2 Routes Are Now Installed into the BGP VPNv4 Table

Chengdu_PE#show ip bgp vpnv4 vrf mjlnet_VPN
BGP table version is 36, local router ID is 10.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
       S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
  Network     Next Hop      Metric LocPrf Weight Path
Route Distinguisher: 64512:100 (default for vrf mjlnet_VPN)
*> 172.16.1.0/24  172.16.4.2        1     32768 ?
*> 172.16.2.0/24  172.16.4.2        1     32768 ?
*> 172.16.3.0/24  172.16.4.2        1     32768 ?
*> 172.16.4.0/24  0.0.0.0         0     32768 ?
*> 172.16.4.2/32  0.0.0.0         0     32768 ?
*>i172.16.5.0/24  10.1.1.4         1  100   0 ?
*>i172.16.6.0/24  10.1.1.4         1  100   0 ?
*>i172.16.7.0/24  10.1.1.4         1  100   0 ?
*>i172.16.8.0/24  10.1.1.4         0  100   0 ?
*>i172.16.8.2/32  10.1.1.4         0  100   0 ?
Chengdu_PE#

Highlighted lines 1 to 3 show that mjlnet_VPN site 2 routes are now installed in the BGP VPNv4 table.

Note that if route reflectors are being used, ensure that they are configured within the VPNv4 address family to reflect MP-BGP (VPNv4) routes to the PE routers.
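
A minimal sketch of the relevant route reflector configuration follows, assuming a hypothetical route reflector in AS 64512 with PE clients 10.1.1.1 and 10.1.1.4:

! On the hypothetical route reflector:
router bgp 64512
 neighbor 10.1.1.1 remote-as 64512
 neighbor 10.1.1.1 update-source Loopback0
 neighbor 10.1.1.4 remote-as 64512
 neighbor 10.1.1.4 update-source Loopback0
 !
 address-family vpnv4
  neighbor 10.1.1.1 activate
  neighbor 10.1.1.1 route-reflector-client
  neighbor 10.1.1.4 activate
  neighbor 10.1.1.4 route-reflector-client
 exit-address-family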

Export and Import Route Targets Are Mismatched on the Egress and Ingress PE Routers

If export and import route targets are mismatched on the egress and ingress PE routers, MP-BGP routes will not be installed into the ingress PE router's BGP VPNv4 table.

Examine the export route target on the egress PE router using the show ip vrf detail vrf_name command as shown in Example 6-120.

Example 6-120 Verifying the Export Route Target on the Egress PE Router

HongKong_PE#show ip vrf detail mjlnet_VPN
VRF mjlnet_VPN; default RD 64512:100; default VPNID <not set>
 Interfaces:
  Serial2/1
 Connected addresses are not in global routing table
 Export VPN route-target communities
  RT:64512:100
 Import VPN route-target communities
  RT:64512:100
 No import route-map
 No export route-map
HongKong_PE#

Highlighted line 1 shows that the export route target on the egress PE router is 64512:100.

Having ascertained the export route target on the egress PE router, your next step is to verify the import route target on the ingress PE, as demonstrated in Example 6-121.

Example 6-121 Verifying the Import Route Target on the Ingress PE Router

Chengdu_PE#show ip vrf detail mjlnet_VPN
VRF mjlnet_VPN; default RD 64512:100; default VPNID <not set>
 Interfaces:
  Serial4/1
 Connected addresses are not in global routing table
 Export VPN route-target communities
  RT:64512:100
 Import VPN route-target communities
  RT:64512:400
 No import route-map
 No export route-map
Chengdu_PE#

As you can see, the import route target configured on the ingress PE router is 64512:400 (highlighted line 1). Clearly, there is a mismatch between the export route target configured on the egress PE router (64512:100) and the import route target configured on the ingress PE router (64512:400).

You can resolve this problem in one of two ways:

  • Reconfigure the export route target on the egress PE router (a sketch of this approach follows this list).
  • Reconfigure the import route target on the ingress PE router.
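A sketch of the first option is shown here; it is illustrative only and is not the fix applied in this scenario. Because the route-target export command is additive, the egress PE router could simply be configured to export the route target that the ingress PE router imports (64512:400) in addition to 64512:100:

HongKong_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
HongKong_PE(config)#ip vrf mjlnet_VPN
HongKong_PE(config-vrf)#route-target export 64512:400
HongKong_PE(config-vrf)#end
HongKong_PE#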

In this case, the import route target is reconfigured on the ingress PE router, as shown in Example 6-122.

Example 6-122 Reconfiguration of the Import Route Target on the Ingress PE Router

Chengdu_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_PE(config)#ip vrf mjlnet_VPN
Chengdu_PE(config-vrf)#no route-target import 64512:400
Chengdu_PE(config-vrf)#route-target import 64512:100
Chengdu_PE(config-vrf)#end
Chengdu_PE#

Once the import route target has been reconfigured on the ingress PE router, the BGP VPNv4 table is rechecked using the show ip bgp vpnv4 vrf vrf_name command, as shown in Example 6-123.

Example 6-123 mjlnet_VPN Routes Are Now Installed into the BGP VPNv4 Table on the Ingress PE Router

Chengdu_PE#show ip bgp vpnv4 vrf mjlnet_VPN
BGP table version is 36, local router ID is 10.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
       S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
  Network     Next Hop      Metric LocPrf Weight Path
Route Distinguisher: 64512:100 (default for vrf mjlnet_VPN)
*> 172.16.1.0/24  172.16.4.2        1     32768 ?
*> 172.16.2.0/24  172.16.4.2        1     32768 ?
*> 172.16.3.0/24  172.16.4.2        1     32768 ?
*> 172.16.4.0/24  0.0.0.0         0     32768 ?
*> 172.16.4.2/32  0.0.0.0         0     32768 ?
*>i172.16.5.0/24  10.1.1.4         1  100   0 ?
*>i172.16.6.0/24  10.1.1.4         1  100   0 ?
*>i172.16.7.0/24  10.1.1.4         1  100   0 ?
*>i172.16.8.0/24  10.1.1.4         0  100   0 ?
*>i172.16.8.2/32  10.1.1.4         0  100   0 ?
Chengdu_PE#

The highlighted lines show that mjlnet_VPN site 2 routes have now been imported into the BGP VPNv4 table.

Export Map Is Misconfigured

If an export map is misconfigured on the egress PE router, VPNv4 routes advertised from the egress PE router may not be installed in the BGP VPNv4 table on the ingress PE router.

The first step is to examine the import route targets configured on the ingress PE router using the show ip vrf detail vrf_name command, as demonstrated in Example 6-124.

Example 6-124 Verifying Import Route Targets on the Ingress PE Router Using the show ip vrf detail Command

Chengdu_PE#show ip vrf detail mjlnet_VPN
VRF mjlnet_VPN; default RD 64512:100; default VPNID <not set>
VRF Table ID = 1
 Interfaces:
  Serial4/1
 Connected addresses are not in global routing table
 Export VPN route-target communities
  RT:64512:100
 Import VPN route-target communities
  RT:64512:100
 No import route-map
 No export route-map
Chengdu_PE#

As you can see, the only import route target configured for VRF mjlnet_VPN is 64512:100.

Next you should verify MP-BGP routes on the egress PE router using the show ip bgp vpnv4 vrf vrf_name prefix command, as shown in Example 6-125.

Example 6-125 Verifying Route Targets on the Egress PE Router Using the show ip bgp vpnv4 vrf Command

HongKong_PE#show ip bgp vpnv4 vrf mjlnet_VPN 172.16.7.0/24
BGP routing table entry for 64512:100:172.16.7.0/24, version 18
Paths: (1 available, best #1, table mjlnet_VPN)
Flag: 0x820
 Advertised to update-groups:
   1
 Local
  172.16.8.2 (via mjlnet_VPN) from 0.0.0.0 (10.1.1.4)
   Origin incomplete, metric 1, localpref 100, weight 32768, valid, sourced,
 best
   Extended Community: RT:64512:300
HongKong_PE#

In Example 6-125, the MP-BGP (site 2) route 172.16.7.0/24 is verified on egress router HongKong_PE. As you can see, only route target 64512:300 is attached to this route.

Clearly there is a mismatch between the import route target configured on Chengdu_PE (64512:100) and the export route target attached to routes by egress router HongKong_PE (64512:300).

Having checked the route target attached to MP-BGP routes, you should now verify route target configuration on the egress PE router using the show ip vrf detail vrf_name command, as demonstrated in Example 6-126.

Example 6-126 Verifying Route Target Configuration on the Egress PE Router Using the show ip vrf detail Command

HongKong_PE#show ip vrf detail mjlnet_VPN
VRF mjlnet_VPN; default RD 64512:100; default VPNID <not set>
 Interfaces:
  Serial2/1
 Connected addresses are not in global routing table
 Export VPN route-target communities
  RT:64512:100
 Import VPN route-target communities
  RT:64512:100
 No import route-map
 Export route-map: AddExportRT
HongKong_PE#

In highlighted line 1, you can see that the export route target configured on the egress PE router is 64512:100, which matches the import route target configured on the ingress PE router. Why, then, is route target 64512:300 rather than 64512:100 attached to MP-BGP routes?

If you look a little further down the output, you will see that export map AddExportRT is configured on the egress PE router (highlighted line 2).

The export route map can be examined using the show route-map route_map_name command.

Example 6-127 shows the output of the show route-map route_map_name command on the egress PE router.

Example 6-127 Examining the Export Map

HongKong_PE#show route-map AddExportRT
route-map AddExportRT, permit, sequence 10
 Match clauses:
  ip address (access-lists): 10
 Set clauses:
  extended community RT:64512:300
 Policy routing matches: 0 packets, 0 bytes
HongKong_PE#

The highlighted lines show that the route map has a set clause configured to assign route target 64512:300 to routes that match access list 10.

There is one problem with the set clause in the route map, however. The additive keyword is missing. This means that the route target 64512:300 overwrites route target 64512:100 (shown in Example 6-126).

In this scenario, the intention is for route target 64512:300 to be attached to routes matching access list 10 in addition to route target 64512:100, not instead of it. The route map must, therefore, be modified so that the set clause includes the additive keyword.

Example 6-128 shows the reconfiguration of the route map to include the additive keyword.

Example 6-128 Reconfiguration of the Route Map to Include the additive Keyword

HongKong_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
HongKong_PE(config)#route-map AddExportRT permit 10
HongKong_PE(config-route-map)#no set extcommunity rt 64512:300
HongKong_PE(config-route-map)#set extcommunity rt 64512:300 additive
HongKong_PE(config-route-map)#end
HongKong_PE#

In highlighted line 1, the existing set clause is removed. In highlighted line 2, the set clause is reconfigured with the additive keyword.

After reconfiguring the export map, check the BGP VPNv4 table on the ingress PE router using the show ip bgp vpnv4 vrf vrf_name command, as shown in Example 6-129.

Example 6-129 MP-BGP Routes Are Now Correctly Installed into the BGP VPNv4 Table on the Ingress PE Router

Chengdu_PE#show ip bgp vpnv4 vrf mjlnet_VPN
BGP table version is 36, local router ID is 10.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
       S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
  Network     Next Hop      Metric LocPrf Weight Path
Route Distinguisher: 64512:100 (default for vrf mjlnet_VPN)
*> 172.16.1.0/24  172.16.4.2        1     32768 ?
*> 172.16.2.0/24  172.16.4.2        1     32768 ?
*> 172.16.3.0/24  172.16.4.2        1     32768 ?
*> 172.16.4.0/24  0.0.0.0         0     32768 ?
*> 172.16.4.2/32  0.0.0.0         0     32768 ?
*>i172.16.5.0/24  10.1.1.4         1  100   0 ?
*>i172.16.6.0/24  10.1.1.4         1  100   0 ?
*>i172.16.7.0/24  10.1.1.4         1  100   0 ?
*>i172.16.8.0/24  10.1.1.4         0  100   0 ?
*>i172.16.8.2/32  10.1.1.4         0  100   0 ?
Chengdu_PE#

As the highlighted lines indicate, mjlnet_VPN site 2 routes are now in the BGP VPNv4 table on the ingress PE router.

Routes Are Blocked by an Import Map

A misconfigured import map on the ingress PE router can cause routes not to be installed in the BGP VPNv4 table.

To verify whether an import map is configured on the ingress PE router, use the show ip vrf detail vrf_name command, as shown in Example 6-130.

Example 6-130 Verifying Whether an Import Map Is Configured Using the show ip vrf detail Command

Chengdu_PE#show ip vrf detail mjlnet_VPN
VRF mjlnet_VPN; default RD 64512:100; default VPNID <not set>
 Interfaces:
  Serial4/1
 Connected addresses are not in global routing table
 Export VPN route-target communities
  RT:64512:100
 Import VPN route-target communities
  RT:64512:100
 Import route-map: FilterImport
 No export route-map
Chengdu_PE#

The highlighted line shows that import map FilterImport is configured on the ingress PE router.

The import route map can be examined using the show route-map route_map_name command, as shown in Example 6-131.

Example 6-131 Examining the Route Map

Chengdu_PE#show route-map FilterImport
route-map FilterImport, permit, sequence 10
 Match clauses:
  ip address (access-lists): 10
 Set clauses:
 Policy routing matches: 0 packets, 0 bytes
Chengdu_PE#

As the highlighted line shows, there is one match clause in the import route map. This match clause references access list 10.

Access list 10 is then examined using the show ip access-lists access_list_number command, as shown in Example 6-132.

Example 6-132 Verifying Access List 10

Chengdu_PE#show ip access-lists 10
Standard IP access list 10
  deny  172.16.5.0, wildcard bits 0.0.0.255 (2 matches)
  deny  172.16.6.0, wildcard bits 0.0.0.255 (2 matches)
  deny  172.16.7.0, wildcard bits 0.0.0.255 (2 matches)
  deny  172.16.8.0, wildcard bits 0.0.0.255 (2 matches)
  permit any (15 matches)
Chengdu_PE#

As you can see, the mjlnet_VPN site 2 routes (172.16.5.0/24, 172.16.6.0/24, and 172.16.7.0/24) are denied by access list 10.

To resolve this problem, you can either modify or remove the import map. In this case, it is decided that the import map is unnecessary and is removed, as shown in Example 6-133.

Example 6-133 Removal of the Import Map on the Ingress PE Router

Chengdu_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_PE(config)#ip vrf mjlnet_VPN
Chengdu_PE(config-vrf)#no import map FilterImport
Chengdu_PE(config-vrf)#end
Chengdu_PE#

The highlighted line indicates the removal of the import map FilterImport.

After removing the import map, check the BGP VPNv4 table on the ingress PE router using the show ip bgp vpnv4 vrf vrf_name command, as shown in Example 6-134.

Example 6-134 mjlnet_VPN Routes Are Now Correctly Installed in the Ingress PE Router's BGP VPNv4 Table

Chengdu_PE#show ip bgp vpnv4 vrf mjlnet_VPN
BGP table version is 61, local router ID is 10.1.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
       S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
  Network     Next Hop      Metric LocPrf Weight Path
Route Distinguisher: 64512:100 (default for vrf mjlnet_VPN)
*> 172.16.1.0/24  172.16.4.2        1     32768 ?
*> 172.16.2.0/24  172.16.4.2        1     32768 ?
*> 172.16.3.0/24  172.16.4.2        1     32768 ?
*> 172.16.4.0/24  0.0.0.0         0     32768 ?
*> 172.16.4.2/32  0.0.0.0         0     32768 ?
*>i172.16.5.0/24  10.1.1.4         1  100   0 ?
*>i172.16.6.0/24  10.1.1.4         1  100   0 ?
*>i172.16.7.0/24  10.1.1.4         1  100   0 ?
*>i172.16.8.0/24  10.1.1.4         0  100   0 ?
*>i172.16.8.2/32  10.1.1.4         0  100   0 ?
Chengdu_PE#

The highlighted lines show that mjlnet_VPN site 2 routes are now in the BGP VPNv4 table on the ingress PE router.

Redistribution from MP-BGP into the PE-CE Routing Protocol on the Ingress PE Router

If routes from the egress PE router are installed into the BGP VPNv4 table of the ingress PE router, the next step is to verify redistribution of those routes into the PE-CE routing protocol (assuming that static routes or default routing are not being used).

Figure 6-39 illustrates the redistribution of MP-BGP routes into the PE-CE routing protocol.

Figure 6-39

Figure 6-39. Redistribution of MP-BGP Routes into the PE-CE Routing Protocol

In Figure 6-39, routes advertised across the MPLS VPN backbone by egress PE router HongKong_PE are redistributed into the PE-CE routing protocol on ingress PE router Chengdu_PE.

Redistribution Is Incorrectly Configured on the Ingress PE Router

If redistribution of VPNv4 routes into the PE-CE routing protocol is misconfigured and the PE routers do not advertise a default route to the CE routers, VPN routing will fail.

In this case, the PE-CE routing protocol is RIP version 2, so the redistribution of VPNv4 routes can be verified using the show ip rip database vrf vrf_name command, as shown in Example 6-135.

Example 6-135 Verifying Redistribution of BGP VPNv4 Routes into RIP Using the show ip rip database vrf Command

Chengdu_PE#show ip rip database vrf mjlnet_VPN
172.16.0.0/16  auto-summary
172.16.1.0/24
  [1] via 172.16.4.2, 00:00:15, Serial4/1
172.16.2.0/24
  [1] via 172.16.4.2, 00:00:15, Serial4/1
172.16.3.0/24
  [1] via 172.16.4.2, 00:00:15, Serial4/1
172.16.4.0/24  directly connected, Serial4/1
172.16.4.2/32  directly connected, Serial4/1
Chengdu_PE#

As you can see, none of the mjlnet_VPN site 2 routes (172.16.5.0/24, 172.16.6.0/24, and 172.16.7.0/24) are in the RIP database. Redistribution is not taking place.

The configuration of RIP is then examined using the show running-config command, as shown in Example 6-136.

Example 6-136 Examining the RIP Configuration

Chengdu_PE#show running-config | begin router rip
router rip
 version 2
 redistribute bgp 64512 metric transparent
 !
 address-family ipv4 vrf mjlnet_VPN
 version 2
 network 172.16.0.0
 no auto-summary
 exit-address-family
!

The highlighted line indicates that MP-BGP routes are being redistributed into the global RIP process rather than into RIP for VRF mjlnet_VPN. The redistribute command should instead be configured under the IPv4 address family for the VRF (address-family ipv4 vrf mjlnet_VPN).

Redistribution is then reconfigured as shown in Example 6-137.

Example 6-137 Reconfiguration of Redistribution of VPNv4 Routes into RIP on the Ingress PE Router

Chengdu_PE#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_PE(config)#router rip
Chengdu_PE(config-router)#no redistribute bgp 64512
Chengdu_PE(config-router)#address-family ipv4 vrf mjlnet_VPN
Chengdu_PE(config-router-af)#redistribute bgp 64512 metric transparent
Chengdu_PE(config-router-af)#end
Chengdu_PE#

Redistribution of MP-BGP into global RIP is disabled in highlighted line 1.

Redistribution of MP-BGP into RIP is then configured under the IPv4 address family in highlighted line 2.

Note that when configuring redistribution of MP-BGP routes into RIP, you should ensure that a metric is configured (through the metric option of the redistribute command or by using the default-metric command). If none is configured, routes will be redistributed with the metric of infinity (that is, not redistributed).
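As an illustrative sketch only (in this scenario the metric transparent keyword already supplies a metric, and the hop-count value of 2 shown here is arbitrary), a metric could be set directly on the redistribute command under the VRF address family:

router rip
 !
 address-family ipv4 vrf mjlnet_VPN
 redistribute bgp 64512 metric 2
 exit-address-family
!

The default-metric command under the same address family can be used instead.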

Now, recheck redistribution using the show ip rip database vrf vrf_name command, as shown in Example 6-138.

Example 6-138 MP-BGP Routes Are Now Successfully Redistributed into RIP

Chengdu_PE#show ip rip database vrf mjlnet_VPN
172.16.0.0/16  auto-summary
172.16.1.0/24
  [1] via 172.16.4.2, 00:00:19, Serial4/1
172.16.2.0/24
  [1] via 172.16.4.2, 00:00:19, Serial4/1
172.16.3.0/24
  [1] via 172.16.4.2, 00:00:19, Serial4/1
172.16.4.0/24  directly connected, Serial4/1
172.16.4.2/32  directly connected, Serial4/1
172.16.5.0/24  redistributed
  [2] via 10.1.1.4,
172.16.6.0/24  redistributed
  [2] via 10.1.1.4,
172.16.7.0/24  redistributed
  [2] via 10.1.1.4,
172.16.8.0/24  redistributed
  [1] via 10.1.1.4,
172.16.8.2/32  redistributed
  [1] via 10.1.1.4,
Chengdu_PE#

The highlighted lines show that MP-BGP routes are now being redistributed into RIP correctly.

In the scenario described in this section, RIP is the PE-CE routing protocol. However, if your PE-CE routing protocol is EIGRP, then the show ip eigrp vrf vrf_name topology command can be used to verify redistribution. Also, be sure to specify a metric when configuring redistribution of MP-BGP into EIGRP.
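For reference, a hedged sketch of MP-BGP-to-EIGRP redistribution follows; EIGRP is not used in this scenario, and the autonomous system number 300 and the metric values are purely illustrative:

router eigrp 300
 !
 address-family ipv4 vrf mjlnet_VPN
 autonomous-system 300
 redistribute bgp 64512 metric 10000 100 255 1 1500
 network 172.16.0.0
 no auto-summary
 exit-address-family
!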

If OSPF is your PE-CE routing protocol, the show ip ospf process_id command can be used to verify redistribution. Do not forget to specify the subnets keyword when redistributing MP-BGP into OSPF. If the subnets keyword is not specified, only major networks will be redistributed.
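Similarly, a hedged sketch of MP-BGP-to-OSPF redistribution with the subnets keyword follows. The pairing of OSPF process 200 with VRF cisco_VPN is inferred from the redistribute ospf 200 command shown in Example 6-117 and should be treated as illustrative:

router ospf 200 vrf cisco_VPN
 redistribute bgp 64512 subnets
!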

If your PE-CE routing protocol is EBGP, redistribution is not needed.
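In that case, a hedged sketch (the CE autonomous system number 65001 is illustrative; 172.16.4.2 is the mjlnet_VPN CE address shown in the earlier examples) would simply define and activate the CE neighbor under the VRF address family:

router bgp 64512
 !
 address-family ipv4 vrf mjlnet_VPN
 neighbor 172.16.4.2 remote-as 65001
 neighbor 172.16.4.2 activate
 exit-address-family
!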

For a full description of the configuration of redistribution of MP-BGP into the PE-CE routing protocol, see the section, "Step 11: Configure PE-CE Routing Protocols / Static Routes," on page 454 earlier in this chapter.

PE to CE Route Advertisement

After you have ensured that MP-BGP routes are being successfully redistributed into the PE-CE routing protocol, the next step is to verify that these routes are being advertised from the ingress PE router to the CE router.

Figure 6-40 illustrates the advertisement of routes from the ingress PE router to the CE router.

Figure 6-40

Figure 6-40. Advertisement of Routes from the Ingress PE Router to the CE Router

In Figure 6-40, Chengdu_PE advertises mjlnet_VPN site 2 routes to CE1.

To verify that routes are being advertised from the ingress PE router to the CE router, examine the routing table on the CE router using the show ip route command, as shown in Example 6-139.

Example 6-139 Verifying that Routes Are Being Advertised from the Ingress PE Router to the CE Router

mjlnet_VPN_CE1#show ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
    D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
    N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
    E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
    i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
    * - candidate default, U - per-user static route, o - ODR
Gateway of last resort is not set
   172.16.0.0/24 is subnetted, 4 subnets
C    172.16.4.0 is directly connected, Serial0
R    172.16.1.0 [120/1] via 172.16.3.2, 00:00:02, Ethernet0
R    172.16.2.0 [120/1] via 172.16.3.2, 00:00:02, Ethernet0
C    172.16.3.0 is directly connected, Ethernet0
mjlnet_VPN_CE1#

As you can see, none of the mjlnet_VPN site 2 routes (172.16.5.0/24, 172.16.6.0/24, and 172.16.7.0/24) are present in CE1's routing table.

There are a number of possible reasons that routes are not being received on the CE router, including that the CE or PE VRF interface is misconfigured or down, or that there is a problem with the PE-CE routing protocol.

To troubleshoot this issue, see the section "Troubleshooting Route Advertisement Between the PE and CE Routers" on page 512.

Case Studies

This section contains descriptions and solutions for a number of issues that do not fit easily into the main troubleshooting section.

MPLS MTU Is Too Small in the MPLS VPN Backbone

If large packets are sent across the MPLS backbone with the Don't Fragment (DF) bit set in the IP packet header, and LSR interfaces and Ethernet switches are not configured to support large labeled packets, the packets will be dropped.

In an MPLS network, link MTU sizes must take the label stack into account. In a simple MPLS VPN network without MPLS TE, a label stack depth of two is used (TDP/LDP signaled IGP label + VPN label). If MPLS traffic engineering (TE) is being used between P routers in an MPLS VPN backbone, a label stack depth of three is used (RSVP signaled TE label + TDP/LDP signaled IGP label + VPN label). And if you are using Fast Reroute with MPLS TE, that is four labels. Each label is 4 bytes, so the total size of the label stack is number of labels multiplied by 4 bytes.
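As a worked example, for a 1500-byte customer IP packet, the resulting labeled packet sizes are as follows:

Two labels (IGP + VPN): 1500 + (2 * 4) = 1508 bytes
Three labels (TE + IGP + VPN): 1500 + (3 * 4) = 1512 bytes
Four labels (FRR + TE + IGP + VPN): 1500 + (4 * 4) = 1516 bytes

The MPLS MTU on backbone links must therefore be at least 1508, 1512, or 1516 bytes, respectively, if 1500-byte customer packets are to cross the backbone without fragmentation.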

In this scenario, large packets are being dropped in the MPLS VPN backbone. Figure 6-41 illustrates the customer VPN and MPLS backbone topology used in this scenario.

Figure 6-41

Figure 6-41. Customer and MPLS VPN Backbone Topology

Path MTU across the MPLS VPN backbone is verified using the extended ping vrf vrf_name command, as shown in Example 6-140.

Example 6-140 Extended ping vrf Command Output

HongKong_PE#ping vrf mjlnet_VPN
Protocol [ip]:
Target IP address: 172.16.4.1
Repeat count [5]: 1
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface: 172.16.8.1
Type of service [0]:
Set DF bit in IP header? [no]: y
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]: y
Sweep min size [36]: 1450
Sweep max size [18024]: 1500
Sweep interval [1]:
Type escape sequence to abort.
Sending 51, [1450..1500]-byte ICMP Echos to 172.16.4.1, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!M.M.
Success rate is 92 percent (47/51), round-trip min/avg/max = 12/13/16 ms
HongKong_PE#

Highlighted lines 1 and 3 show the destination and source IP addresses used with the extended ping. In this case, the source is the VRF mjlnet_VPN interface on HongKong_PE (172.16.8.1), and the destination is the VRF mjlnet_VPN interface on Chengdu_PE (172.16.4.1).

Repeat count is set to 1 packet in highlighted line 2. This is the repeat count per packet size, which is set in highlighted lines 5 to 7.

In highlighted line 4, the Don't Fragment (DF) bit is set. In highlighted lines 5 to 7, a ping sweep of packet sizes 1450 to 1500 is entered. Highlighted line 8 shows that ping is successful for most packet sizes, but that as the packet size nears 1500 bytes, the pings fail.

Note that the "M" character here indicates reception of an ICMP destination unreachable message (ICMP message type 3) from a router in the path across the network. This ICMP unreachable message carries code 4, which indicates that fragmentation is required on the (ping) packet, but that the Don't Fragment bit is set.

The MPLS MTU size for backbone LSRs is examined using the show mpls forwarding-table prefix detail command.

When the MPLS MTU size is examined on Chengdu_P, it is revealed that it is too small (see Example 6-141).

Example 6-141 Verifying the MPLS MTU Size Using the show mpls forwarding-table Command

Chengdu_P#show mpls forwarding-table 10.1.1.1 detail
Local Outgoing  Prefix      Bytes tag Outgoing  Next Hop
tag  tag or VC  or Tunnel Id   switched  interface
18   Pop tag   10.1.1.1/32    1544    Fa1/0   10.20.10.1
    MAC/Encaps=14/14, MTU=1500, Tag Stack{}
    00049BD60C1C00D06354701C8847
    No output feature configured
  Per-packet load-sharing
Chengdu_P#

The IP address (BGP update source) of the egress PE router (Chengdu_PE) is specified in highlighted line 1. This address corresponds to the next-hop of all mjlnet_VPN site 1 routes.

Highlighted line 2 shows that the outgoing interface for this prefix is interface Fast Ethernet 1/0.

Highlighted line 3 shows that the maximum packet size that can be label switched out of interface Fast Ethernet 1/0 without being fragmented is 1500 bytes. 1500 bytes is clearly not a sufficient maximum packet size if a two-label stack (IGP + VPN) is included (1500 + 8 = 1508). Note, however, that in this case, Chengdu_P is the penultimate hop router, so it will pop the IGP label—but it is still a very good idea to accommodate a minimum of two labels here.

Chengdu_P's interface Fast Ethernet 1/0 is then configured to support large labeled packets, using the mpls mtu command as shown in Example 6-142.

Example 6-142 Configuration of the mpls mtu Command on Interface fastethernet 1/0 on Chengdu_P

Chengdu_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_P(config)#interface fastethernet 1/0
Chengdu_P(config-if)#mpls mtu 1508
Chengdu_P(config-if)#end
Chengdu_P#

The highlighted line indicates that interface fastethernet 1/0 is configured to support a label stack depth of two (1500 + [2 * 4] = 1508). In this scenario, Cisco 6500 switches are being used in the POPs (in Chengdu and HongKong), so they must also be configured for jumbo frame support.

To enable support for jumbo frames on the Cisco 6500 series switch Ethernet ports, use the set port jumbo mod/port enable command, as shown in Example 6-143.

Example 6-143 Configuration of Jumbo Frame Support on the Cisco 6500

Chengdu_POP1> (enable) set port jumbo 3/1 enable
Jumbo frames enabled on port 3/1. 

By enabling support for jumbo frames, the MTU is increased to 9216 bytes for most line cards.

After the MPLS MTU on all the applicable LSRs is reconfigured and jumbo frame support on the Cisco 6500 switches is enabled, extended ping is again used to verify that 1500-byte packets can be carried across the backbone without fragmentation.

Example 6-144 shows the output of the extended ping vrf vrf_name command after support for large labeled packets has been enabled in the MPLS VPN backbone.

Example 6-144 1500-Byte Packets Can Now Be Carried Across the MPLS VPN Backbone

HongKong_PE#ping vrf mjlnet_VPN
Protocol [ip]:
Target IP address: 172.16.4.1
Repeat count [5]:
Datagram size [100]: 1500
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface: 172.16.8.1
Type of service [0]:
Set DF bit in IP header? [no]: y
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 1500-byte ICMP Echos to 172.16.4.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/14/16 ms
HongKong_PE#

Highlighted lines 1 and 3 show the destination and source addresses of the ping packets. These are again the VRF mjlnet_VPN interface on Chengdu_PE and the VRF mjlnet_VPN interface on HongKong_PE, respectively.

In highlighted line 2, the packet size is 1500 bytes, and in highlighted line 4, the Don't Fragment (DF) bit is set.

In highlighted line 5, a success rate of 100 percent is shown.

It is also worth noting that if you are using IOS 12.0(27)S or above in your network, you can use the trace mpls command (part of the MPLS Embedded Management feature) to verify the MTU that can be supported (without fragmentation) over an LSP in the MPLS backbone. This command can display the maximum receive unit (MRU, the maximum labeled packet size) at each hop across the MPLS backbone.
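A hedged example of the command syntax only (the output is omitted here because its format varies by release) for checking the LSP toward HongKong_PE would be:

Chengdu_PE#traceroute mpls ipv4 10.1.1.4/32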

Summarization of PE Router Loopback Addresses Causes VPN Packets to Be Dropped

Routes to the next hops of VPN routes (the BGP update sources on PE routers) should not be summarized in the MPLS VPN backbone; otherwise, VPN packets will be dropped.

Figure 6-42 illustrates the topology used in this scenario.

Figure 6-42

Figure 6-42. PE Router Loopback Addresses Are Summarized in the MPLS VPN Backbone

In this scenario, traffic from mjlnet_VPN site 1 transiting the MPLS VPN backbone to mjlnet_VPN site 2 is dropped.

Example 6-145 shows the output of a ping between the VRF mjlnet_VPN interface on Chengdu_PE and host 172.16.5.1 at site 2.

Example 6-145 Ping from the mjlnet_VPN Interface on Chengdu_PE to Host 172.16.5.1

Chengdu_PE#ping vrf mjlnet_VPN 172.16.5.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.5.1, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Chengdu_PE#

In highlighted line 1, a ping test is conducted between the VRF mjlnet_VPN interface on Chengdu_PE and host 172.16.5.1. As you can see, the ping test failed with a success rate of 0 percent (highlighted line 2).

Figure 6-43 shows the routing protocol configuration within the MPLS VPN backbone.

Figure 6-43

Figure 6-43. Routing Protocol Configuration Within the MPLS VPN Backbone

In this scenario, OSPF is the backbone routing protocol, with Chengdu_PE, Chengdu_P, and HongKong_P in area 0, and HongKong_P and HongKong_PE in area 1.

When mjlnet_VPN packets transiting the network from mjlnet_VPN site 1 to mjlnet_VPN site 2 reach ingress PE Chengdu_PE, a two-label stack is imposed. The outer label is the IGP label (which corresponds to BGP next-hop 10.1.1.4 on egress PE router HongKong_PE), and the inner label is the VPN label.

VPN traffic is not successfully transiting the backbone. But is the problem something to do with the underlying LSP between Chengdu_PE and HongKong_PE, is it with VPN route exchange, or is it something else?

The next step is to verify the LSP between Chengdu_PE (10.1.1.1) and HongKong_PE (10.1.1.4) using the traceroute command, as shown in Example 6-146.

Example 6-146 traceroute to Egress Router HongKong_PE

Chengdu_PE#traceroute 10.1.1.4

Type escape sequence to abort.
Tracing the route to 10.1.1.4

 1 10.20.10.2 [MPLS: Label 17 Exp 0] 0 msec 0 msec 4 msec
 2 10.20.20.2 0 msec 0 msec 4 msec
 3 10.20.30.2 32 msec 0 msec *
Chengdu_PE#

In highlighted line 1, Chengdu_PE imposes (IGP) label 17 on the traceroute packet and forwards it to Chengdu_P.

In highlighted line 2, Chengdu_P forwards the packet to HongKong_P. There is, however, no evidence of a label stack at all. The same is true in highlighted line 3 as the packet is forwarded from HongKong_P to HongKong_PE.

Apparently there is a problem with the LSP. But what exactly is going on here? To track the answer down, it is useful to examine the LFIB/LIBs on the routers in the path. The IGP label (for next-hop 10.1.1.4) is verified on ingress PE router Chengdu_PE using the show mpls forwarding-table prefix detail command. This command displays the LFIB.

Example 6-147 shows the LFIB entry for prefix 10.1.1.4/32 on ingress PE router Chengdu_PE.

Example 6-147 LFIB Entry for Prefix 10.1.1.4

Chengdu_PE#show mpls forwarding-table 10.1.1.4 detail
Local Outgoing  Prefix      Bytes tag Outgoing  Next Hop
tag  tag or VC  or Tunnel Id   switched  interface
18   17     10.0.0.0/8    0     Fa1/0   10.20.10.2
    MAC/Encaps=14/18, MRU=1500, Tag Stack{17}
    00502AFE080000049BD60C1C8847 00011000
    No output feature configured
  Per-packet load-sharing, slots: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Chengdu_PE#

As you can see, the IGP label used for BGP next-hop 10.1.1.4 is 17 (highlighted line 2). This label corresponds to prefix 10.0.0.0/8 (highlighted line 1). This is a bit of a mystery. Why is there no label corresponding directly to 10.1.1.4/32?

The routing table is then examined using the show ip route command, as shown in Example 6-148.

Example 6-148 Global Routing Table on Chengdu_PE

Chengdu_PE#show ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
    D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
    N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
    E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
    i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
    * - candidate default, U - per-user static route, o - ODR
Gateway of last resort is not set
   10.0.0.0/8 is variably subnetted, 5 subnets, 3 masks
O    10.1.1.2/32 [110/2] via 10.20.10.2, 01:09:42, FastEthernet1/0
O    10.20.20.0/24 [110/65] via 10.20.10.2, 01:09:42, FastEthernet1/0
O IA  10.0.0.0/8 [110/66] via 10.20.10.2, 01:09:42, FastEthernet1/0
C    10.1.1.1/32 is directly connected, Loopback0
C    10.20.10.0/24 is directly connected, FastEthernet1/0
Chengdu_PE#

Highlighted line 1 shows a summary route (10.0.0.0/8). Notice the absence of a route to the loopback 0 interface on HongKong_PE (10.1.1.4/32). This is the reason that there is no label for prefix 10.1.1.4/32 in the LFIB.

The label stack for mjlnet_VPN site 2 destination 172.16.5.1 is now examined using the show mpls forwarding-table vrf vrf_name prefix detail command, as shown in Example 6-149. Note that destination 172.16.5.1 is used here only for illustrative purposes.

Example 6-149 Label Stack for mjlnet_VPN Site 2 Destination 172.16.5.1

Chengdu_PE#show mpls forwarding-table vrf mjlnet_VPN 172.16.5.1 detail
Local Outgoing  Prefix      Bytes tag Outgoing  Next Hop
tag  tag or VC  or Tunnel Id   switched  interface
None  17     172.16.5.0/24   0     Fa1/0   10.20.10.2
    MAC/Encaps=14/22, MRU=1496, Tag Stack{17 21}
    00502AFE080000049BD60C1C8847 0001100000015000
    No output feature configured
Chengdu_PE#

The highlighted portion shows the label stack (17, 21) used for mjlnet_VPN site 2 destination 172.16.5.1. The outer label (17) is the IGP label, and the inner label (21) is the VPN label.

The LFIB is then checked on Chengdu_P (the downstream LSR in the LSP to egress PE router HongKong_PE) using the show mpls forwarding-table labels label_value detail command, as shown in Example 6-150.

Example 6-150 Verifying the LFIB on Chengdu_P

Chengdu_P#show mpls forwarding-table labels 17 detail
Local Outgoing  Prefix      Bytes tag Outgoing  Next Hop
tag  tag or VC  or Tunnel Id   switched  interface
17   Pop tag   10.0.0.0/8    900    Se1/1   point2point
    MAC/Encaps=4/4, MRU=1504, Tag Stack{}
    FF030281
    No output feature configured
  Per-packet load-sharing
Chengdu_P#

The highlighted portion shows that local label 17 (the IGP label on Chengdu_PE) is popped on Chengdu_P. This means that the downstream LSR (HongKong_P) is advertising an implicit-null label for prefix 10.0.0.0/8 (the summary route).

This is verified using the show mpls ldp bindings prefix mask_length detail command on HongKong_P as shown in Example 6-151.

Example 6-151 Verifying the LIB on HongKong_P

HongKong_P#show mpls ldp bindings 10.0.0.0 8 detail
 tib entry: 10.0.0.0/8, rev 12
    local binding: tag: imp-null
     Advertised to:
     10.1.1.4:0       10.1.1.2:0
    remote binding: tsr: 10.1.1.2:0, tag: 17
HongKong_P#

The output in Example 6-151 confirms that an implicit-null (highlighted line 1) is advertised to LSR Chengdu_P (10.1.1.2, highlighted line 2) for prefix 10.0.0.0/8.

The effect of Chengdu_P popping the IGP label is that it forwards mjlnet_VPN packets (including those for 172.16.5.1) with a label stack consisting of only the VPN label to HongKong_P.

Unfortunately, HongKong_P has no knowledge of VPN labels (only PE routers do). This is confirmed using the show mpls forwarding-table labels label_value command. Remember that the VPN label is 21 (see Example 6-149).

Example 6-152 shows the LFIB entry corresponding to label value 21.

Example 6-152 LFIB on HongKong_P

HongKong_P#show mpls forwarding-table labels 21
Local Outgoing  Prefix      Bytes tag Outgoing  Next Hop
tag  tag or VC  or Tunnel Id   switched  interface
HongKong_P#

As you can see, there is no entry. The debug mpls packets command that follows in Example 6-153 is used here to illustrate what happens to mjlnet_VPN packets as they transit HongKong_P from mjlnet_VPN site 1 to mjlnet_VPN site 2.

CAUTION

The debug mpls packets command is used here to illustrate label switching on HongKong_P. Note that you should be especially careful when using this command because it can produce copious output.

Example 6-153 shows the output of the debug mpls packets command on HongKong_P.

Example 6-153 debug mpls packets Command Output on HongKong_P

HongKong_P#debug mpls packets
MPLS packet debugging is on
HongKong_P#
01:10:26: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=21
01:10:28: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=21
01:10:30: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=21
01:10:32: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=21
01:10:34: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=21
HongKong_P#

In highlighted line 1, a packet is received (recvd) with label 21 (the VPN label) on interface serial 1/0 (from Chengdu_P). Notice that the packet is not transmitted (xmit) onward to HongKong_PE (on interface Fast Ethernet 1/0).

Figure 6-44 illustrates label switching of packets across the MPLS VPN backbone when summarization is configured.

Figure 6-44

Figure 6-44. Label Switching Across the MPLS VPN Backbone When Summarization Is Configured

So, summary route 10.0.0.0/8 is causing VPN packets to be dropped by HongKong_P. The OSPF configuration on HongKong_P is examined using the show running-config command as shown in Example 6-154. Note that only the relevant portion of the output is shown.

Example 6-154 Configuration of OSPF on HongKong_P

HongKong_P#show running-config | begin router ospf
router ospf 100
 log-adjacency-changes
 area 1 range 10.0.0.0 255.0.0.0
 passive-interface Loopback0
 network 10.1.1.3 0.0.0.0 area 1
 network 10.20.20.0 0.0.0.255 area 0
 network 10.20.30.0 0.0.0.255 area 1
!

The highlighted line indicates the cause of the problem. Summary route 10.0.0.0/8 is configured for area 1 addresses. This summary blocks the advertisement of the route 10.1.1.4/32 (the BGP next-hop for mjlnet_VPN site 2 routes) to Chengdu_P and Chengdu_PE.

The summary route is then removed on HongKong_P, as shown in Example 6-155.

Example 6-155 The Summary Route Is Removed on HongKong_P

HongKong_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
HongKong_P(config)#router ospf 100
HongKong_P(config-router)#no area 1 range 10.0.0.0 255.0.0.0
HongKong_P(config-router)#end
HongKong_P#

In highlighted line 1, the summary route is removed on HongKong_P. After the summary route is removed, mjlnet_VPN traffic transits the MPLS VPN backbone successfully from site 1 to site 2.

Example 6-156 shows the output of a ping test from the VRF mjlnet_VPN interface on Chengdu_PE to host 172.16.5.1.

Example 6-156 Ping Now Succeeds Across the MPLS VPN Backbone

Chengdu_PE#ping vrf mjlnet_VPN 172.16.5.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.5.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 148/148/152 ms
Chengdu_PE#

Highlighted line 1 shows a ping test from the VRF mjlnet_VPN interface on Chengdu_PE to site 2 host 172.16.5.1. Highlighted line 2 indicates the test had a 100 percent success rate.

MPLS VPN Traffic Is Dropped on TE Tunnels Between P Routers

If TE tunnels are configured between P routers in the MPLS VPN backbone, you must take care to ensure that VPN traffic is not dropped.

Figure 6-45 shows the network topology and TE tunnel configuration used in this scenario.

Figure 6-45

Figure 6-45. Network Topology and TE Tunnel Configuration

In this scenario, a TE tunnel is configured from Chengdu_P via Shanghai_P to HongKong_P. A TE tunnel is also configured in the opposite direction from HongKong_P to Chengdu_P.

Unfortunately, when connectivity is tested from Chengdu_PE's VRF mjlnet_VPN interface to HongKong_PE's VRF mjlnet_VPN interface using ping, the success rate is 0 percent.

Example 6-157 shows the results of a ping test from Chengdu_PE's VRF mjlnet_VPN interface to HongKong_PE's VRF mjlnet_VPN interface (172.16.8.1).

Example 6-157 Ping from Chengdu_PE's VRF mjlnet_VPN Interface to HongKong_PE's VRF mjlnet_VPN Interface Fails

Chengdu_PE#ping vrf mjlnet_VPN 172.16.8.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.8.1, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Chengdu_PE#

The next step is to verify the LSP from Chengdu_PE to HongKong_PE using traceroute, as shown in Example 6-158.

Example 6-158 Verifying the LSP Using traceroute

Chengdu_PE#traceroute 10.1.1.4

Type escape sequence to abort.
Tracing the route to 10.1.1.4

 1 10.20.10.2 [MPLS: Label 25 Exp 0] 4 msec 4 msec 4 msec
 2 10.20.40.2 [MPLS: Label 23 Exp 0] 8 msec 4 msec 8 msec
 3 10.20.50.2 4 msec 4 msec 4 msec
 4 10.20.30.2 8 msec 4 msec *
Chengdu_PE#

In highlighted line 1, Chengdu_PE imposes TDP/LDP signaled IGP label 25 on the packet and forwards it to Chengdu_P. Chengdu_P then swaps label 25 for (RSVP signaled TE) label 23 and forwards the packet to Shanghai_P over the TE tunnel (highlighted line 2).

Then in highlighted line 3, something strange happens: Shanghai_P forwards the packet unlabeled to HongKong_P. To track down what is happening, the LSP for mjlnet_VPN traffic is examined hop-by-hop from Chengdu_PE to HongKong_P.

The first thing to do is to verify the label stack for destination 172.16.8.1 (the mjlnet_VPN interface on HongKong_PE) on Chengdu_PE using the show mpls forwarding-table vrf vrf_name prefix detail command, as shown in Example 6-159.

Example 6-159 Label Stack on Chengdu_PE for Packets to 172.16.8.1

Chengdu_PE#show mpls forwarding-table vrf mjlnet_VPN 172.16.8.1 detail
Local Outgoing  Prefix      Bytes tag Outgoing  Next Hop
tag  tag or VC  or Tunnel Id   switched  interface
None  25     172.16.8.0/24   0     Fa1/0   10.20.10.2
  MAC/Encaps=14/22, MRU=1496, Tag Stack{25 34}
  00D06354701C00049BD60C1C8847 0001900000022000
  No output feature configured
  Per-packet load-sharing, slots: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Chengdu_PE#

The highlighted portion shows the label stack for mjlnet_VPN destination 172.16.8.1. The outer label is the TDP/LDP signaled IGP label (25), and the inner label is the VPN label (34).

The CEF entry for prefix 10.1.1.4/32 (the next hop for 172.16.8.1) is then examined on Chengdu_P using the show ip cef prefix detail command as shown in Example 6-160.

Example 6-160 CEF Entry for Prefix 10.1.1.4/32 on Chengdu_P

Chengdu_P#show ip cef 10.1.1.4 detail
10.1.1.4/32, version 25
0 packets, 0 bytes
 tag information set
  local tag: 25
  fast tag rewrite with Tu0, point2point, tags imposed: {23}
 via 10.1.1.3, Tunnel0, 0 dependencies
  next hop 10.1.1.3, Tunnel0
  valid adjacency
  tag rewrite with Tu0, point2point, tags imposed: {23}
Chengdu_P#

Highlighted lines 1 and 2 show that (TDP/LDP signaled IGP) label 25 is removed and outgoing (RSVP signaled TE) label 23 is imposed as packets are switched over the TE tunnel (Tu0, tunnel 0).

The label stack for VPN packets at this point is as follows: the outer label is the TE label (23), and the inner label is the VPN label (34).

The TE tunnel is routed via Shanghai_P, and so the LFIB is now examined on Shanghai_P as shown in Example 6-161.

Example 6-161 LFIB on Shanghai_P

Shanghai_P#show mpls forwarding-table label 23 detail
Local Outgoing  Prefix      Bytes tag Outgoing  Next Hop
tag  tag or VC  or Tunnel Id   switched  interface
23   Pop tag   10.1.1.2 0 [71]  14618931  Se1/1   point2point
  MAC/Encaps=4/4, MTU=1504, Tag Stack{}
  FF030281
  No output feature configured
Shanghai_P#

The highlighted portion shows that TE tunnel label 23 is popped on Shanghai_P. This is OK because the TE tunnel tail-end is HongKong_P (Shanghai_P is the penultimate hop for the tunnel).

Because the outer (TE) label has now been removed, the label stack at this point consists of only the VPN label (34). Packets are then forwarded to HongKong_P.

The output of the debug mpls packets command on Shanghai_P is shown in Example 6-162.

CAUTION

The debug mpls packets command is used here to illustrate label switching on Shanghai_P. Note that you should exercise extra caution when using this command because it can produce copious output and severely impact router performance.

Example 6-162 Label Switching on Shanghai_P

Shanghai_P#debug mpls packets
Tagswitch packet debugging is on
Shanghai_P#
*Mar 1 05:06:03.562 UTC: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=23/34
*Mar 1 05:06:03.562 UTC: TAG: Se1/1: xmit: CoS=0, TTL=253, Tag(s)=34
*Mar 1 05:06:05.562 UTC: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=23/34
*Mar 1 05:06:05.562 UTC: TAG: Se1/1: xmit: CoS=0, TTL=253, Tag(s)=34
*Mar 1 05:06:07.562 UTC: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=23/34
*Mar 1 05:06:07.562 UTC: TAG: Se1/1: xmit: CoS=0, TTL=253, Tag(s)=34

In highlighted line 1, you can see an mjlnet_VPN packet received on interface serial 1/0 with label stack 23/34 (the TE and VPN labels, respectively). Then in highlighted line 2, you can see that the TE label is popped, and the packet is forwarded with only VPN label 34.

When the LFIB on HongKong_P is examined using the show mpls forwarding-table label label_value command in Example 6-163, there is no entry for incoming (VPN) label 34.

Example 6-163 LFIB on HongKong_P

HongKong_P#show mpls forwarding-table label 34 detail
Local Outgoing  Prefix      Bytes tag Outgoing  Next Hop
tag  tag or VC  or Tunnel Id   switched  interface
HongKong_P#

The absence of an LFIB entry for VPN label 34 is no surprise because HongKong_P is not a PE router—remember that only PE routers have knowledge of VPN labels. The upshot is that all mjlnet_VPN packets arriving on HongKong_P are dropped.

Figure 6-46 illustrates label switching of packets across the MPLS VPN backbone when LDP is not enabled on a TE tunnel configured between P routers.

Figure 6-46

Figure 6-46. Packets Are Dropped on HongKong_P

To solve this issue, you must find a solution where packets do not arrive at the TE tunnel tail-end (HongKong_P) with a label stack consisting of only the VPN label. You can resolve this issue by enabling MPLS (LDP) on the TE tunnel itself. MPLS (LDP) should be enabled on both the tunnel from Chengdu_P to HongKong_P and the tunnel from HongKong_P to Chengdu_P.

Example 6-164 shows the configuration of MPLS (LDP) on the TE tunnels between Chengdu_P and HongKong_P.

Example 6-164 Configuration of MPLS (LDP) on the TE Tunnels

! On Chengdu_P:
Chengdu_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_P(config)#interface tunnel 0
Chengdu_P(config-if)#mpls ip
Chengdu_P(config-if)#end
Chengdu_P#
! On HongKong_P:
HongKong_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
HongKong_P(config)#interface tunnel 0
HongKong_P(config-if)#mpls ip
HongKong_P(config-if)#end
HongKong_P#

Once MPLS has been enabled on the TE tunnels, Chengdu_P and HongKong_P discover each other over the TE tunnel (via LDP neighbor discovery). Crucially, this means that the TDP/LDP signaled IGP label is swapped instead of removed from mjlnet_VPN packets as they enter the TE tunnel.

When packets reach TE tunnel tail-end HongKong_P, it pops the IGP label (because of penultimate hop popping, requested by HongKong_PE) and forwards the packets to HongKong_PE with a label stack consisting only of the VPN label.

The path is now verified from Chengdu_PE's VRF mjlnet_VPN interface to HongKong_PE's VRF mjlnet_VPN interface using VRF traceroute, as shown in Example 6-165.

Example 6-165 traceroute Succeeds from Chengdu_PE's VRF mjlnet_VPN Interface to HongKong_PE's VRF mjlnet_VPN Interface

Chengdu_PE#traceroute vrf mjlnet_VPN
Protocol [ip]:
Target IP address: 172.16.8.1
Source address: 172.16.4.1
Numeric display [n]:
Timeout in seconds [3]:
Probe count [3]:
Minimum Time to Live [1]:
Maximum Time to Live [30]:
Port Number [33434]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Type escape sequence to abort.
Tracing the route to 172.16.8.1

 1 10.20.10.2 [MPLS: Labels 25/34 Exp 0] 4 msec 4 msec 4 msec
 2 10.20.40.2 [MPLS: Labels 23/16/34 Exp 0] 4 msec 4 msec 4 msec
 3 10.20.50.2 [MPLS: Labels 16/34 Exp 0] 4 msec 4 msec 4 msec
 4 172.16.8.1 0 msec 0 msec *
Chengdu_PE#

In highlighted line 1, Chengdu_PE imposes label stack 25/34 (TDP/LDP signaled IGP and VPN label, respectively) on the packet and forwards it to Chengdu_P.

In highlighted line 2, Chengdu_P swaps IGP label 25 for TDP/LDP signaled IGP label 16 and additionally imposes RSVP signaled TE label 23. VPN label 34 is unmodified. The packet is then forwarded over the TE tunnel to Shanghai_P.

Because the next hop, HongKong_P, is the tunnel tail-end, Shanghai_P pops the TE label. IGP label 16 and VPN label 34 are unmodified (highlighted line 3). The packet is then forwarded to HongKong_P.

HongKong_P then pops IGP label 16 because it is the penultimate hop. The VPN label is unmodified (not shown), and the packet is forwarded to HongKong_PE. VPN traffic is now successfully transiting the TE tunnel between Chengdu_P and HongKong_P.

Figure 6-47 illustrates label switching of packets across the MPLS VPN backbone when TE tunnels are configured between P routers with MPLS (LDP) enabled.

Figure 6-47

Figure 6-47. Label Switching Across the MPLS VPN Backbone When LDP Is Enabled over TE Tunnels Between P Routers

MVPN Fails When TE Tunnels Are Configured in the MPLS VPN Backbone

When configuring MVPN in an MPLS backbone with TE tunnels, the IGP (IS-IS or OSPF) must be configured to ensure that Reverse Path Forwarding (RPF) checks for multicast traffic do not fail.
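One IGP configuration commonly used to prevent such RPF failures is the multicast-intact feature, which causes multicast RPF to ignore TE tunnels when selecting the RPF interface. It is offered here only as a hedged sketch and is not necessarily the resolution applied to this scenario:

router isis
 mpls traffic-eng multicast-intact
!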

In this scenario, MVPN is configured for VRF mjlnet_VPN on PE routers Chengdu_PE, HongKong_PE, and Shanghai_PE. Additionally, TE tunnels are configured between P routers Chengdu_P, HongKong_P, and Shanghai_P. These TE tunnels are configured with autoroute. The backbone is configured for PIM sparse mode, with the Rendezvous Point (RP) on Chengdu_P.

There is a multicast server at mjlnet_VPN site 1 and multicast receivers at mjlnet_VPN sites 2 and 3. Unfortunately, the receivers are unable to receive any multicast traffic from the multicast server.

Figure 6-48 illustrates the MVPN and TE configuration in the MPLS VPN backbone.

Figure 6-48

Figure 6-48. MVPN and TE Configuration in the MPLS VPN Backbone

The first step in troubleshooting this issue is to check whether PE routers Chengdu_PE, HongKong_PE, and Shanghai_PE are correctly advertising participation in the MVPN to each other. Each PE router is checked in turn using the show ip pim mdt bgp command, as shown in Example 6-166.

Example 6-166 show ip pim mdt bgp Command Output on the PE Routers

! On Chengdu_PE:
Chengdu_PE#show ip pim mdt bgp
 Peer (Route Distinguisher + IPv4)                Next Hop
MDT group 239.0.0.1
 2:64512:100:10.1.1.4                      10.1.1.4
 2:64512:100:10.1.1.6                      10.1.1.6
Chengdu_PE#
! On HongKong_PE:
HongKong_PE#show ip pim mdt bgp
 Peer (Route Distinguisher + IPv4)                Next Hop
MDT group 239.0.0.1
 2:64512:100:10.1.1.1                      10.1.1.1
 2:64512:100:10.1.1.6                      10.1.1.6
HongKong_PE#
! On Shanghai_PE:
Shanghai_PE#show ip pim mdt bgp
Peer (Route Distinguisher + IPv4)                 Next Hop
 MDT group 239.0.0.1
  2:64512:100:10.1.1.1                      10.1.1.1
  2:64512:100:10.1.1.4                      10.1.1.4
Shanghai_PE#

Highlighted line 1 shows the default MDT address (239.0.0.1). Highlighted lines 2 and 3 show that Chengdu_PE has received advertisements signaling participation in the MVPN from both HongKong_PE (10.1.1.4) and Shanghai_PE (10.1.1.6). Note the RD used here (2:ASN:XX). This is a type 2 RD used to advertise MVPN participation.

As you can see, HongKong_PE has received advertisements from Chengdu_PE and Shanghai_PE (10.1.1.1 and 10.1.1.6). Shanghai_PE has received advertisements from Chengdu_PE and HongKong_PE (10.1.1.1 and 10.1.1.4). Participation in the MVPN is being correctly advertised between the PE routers.

Starting at Chengdu_PE, the multicast state for the default MDT is then verified using the show ip mroute command, as shown in Example 6-167.

Example 6-167 Verifying the Multicast State for Chengdu_PE

Chengdu_PE#show ip mroute 239.0.0.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
    L - Local, P - Pruned, R - RP-bit set, F - Register flag,
    T - SPT-bit set, J - Join SPT, M - MSDP created entry,
    X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
    U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel,
    Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.0.0.1), 07:28:15/stopped, RP 10.1.1.2, flags: SJCFZ
 Incoming interface: FastEthernet1/0, RPF nbr 10.20.10.2
 Outgoing interface list:
  MVRF mjlnet_VPN, Forward/Sparse-Dense, 07:28:15/00:00:00
(10.1.1.1, 239.0.0.1), 07:14:24/00:02:53, flags: PFTZ
 Incoming interface: Loopback0, RPF nbr 0.0.0.0
 Outgoing interface list: Null
Chengdu_PE#

Highlighted line 1 shows that a (*, G) entry has been created for the default MDT group address (*, 239.0.0.1). An (S, G) entry for source 10.1.1.1 (Chengdu_PE itself) is shown in highlighted line 2. Notice that the P (Pruned) flag is set. Finally, in highlighted line 3, you can see that the outgoing interface list for source 10.1.1.1 is null. This is consistent with the fact that the P flag is set. Also note the Z flag, which indicates that this entry corresponds to a multicast tunnel.

Two things are wrong here:

  • (S, G) entries should exist for sources 10.1.1.4 (HongKong_PE) and 10.1.1.6 (Shanghai_PE).
  • The outgoing interface list should be populated for source 10.1.1.1.

The next thing to check is whether Chengdu_PE has discovered PIM neighbors (specifically, Chengdu_P).

To verify PIM neighbor discovery on Chengdu_PE, use the show ip pim neighbor command as shown in Example 6-168.

Example 6-168 Verifying PIM Neighbor Discovery

Chengdu_PE#show ip pim neighbor
PIM Neighbor Table
Neighbor     Interface        Uptime/Expires  Ver  DR
Address                              Priority/Mode
10.20.10.2    FastEthernet1/0     02:27:46/00:01:43 v2  N / DR
Chengdu_PE#

The highlighted line shows that Chengdu_PE has discovered one neighbor (10.20.10.2)—Chengdu_P, which is the Rendezvous Point (RP) in the backbone network.

Moving on to Chengdu_P, the multicast state for the default MDT (group 239.0.0.1) is again checked using the show ip mroute command, as shown in Example 6-169.

Example 6-169 Multicast State for the Default MDT on Chengdu_P

Chengdu_P#show ip mroute 239.0.0.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
    R - RP-bit set, F - Register flag, T - SPT-bit set, J - Join SPT
    M - MSDP created entry, X - Proxy Join Timer Running
    A - Candidate for MSDP Advertisement
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.0.0.1), 02:40:37/00:03:06, RP 10.1.1.2, flags: S
 Incoming interface: Null, RPF nbr 0.0.0.0
 Outgoing interface list:
  FastEthernet1/0, Forward/Sparse-Dense, 02:40:32/00:03:06
  Serial1/0, Forward/Sparse-Dense, 02:19:01/now
  Serial1/1, Forward/Sparse-Dense, 01:57:48/now
(10.1.1.1, 239.0.0.1), 02:40:25/00:01:27, flags: PT
 Incoming interface: FastEthernet1/0, RPF nbr 10.20.10.1
 Outgoing interface list: Null
(10.1.1.4, 239.0.0.1), 02:19:13/00:02:29, flags: T
 Incoming interface: Null, RPF nbr 0.0.0.0
 Outgoing interface list:
  FastEthernet1/0, Forward/Sparse-Dense, 02:19:13/00:02:36
(10.1.1.6, 239.0.0.1), 02:40:06/00:02:36, flags: T
 Incoming interface: Null, RPF nbr 0.0.0.0
 Outgoing interface list:
  FastEthernet1/0, Forward/Sparse-Dense, 02:40:06/00:02:34
Chengdu_P#

Things now get very mysterious. Highlighted line 1 shows the (*, G) entry for the default MDT: (*, 239.0.0.1).

Highlighted line 2 shows the (S, G) entry for source 10.1.1.1 (Chengdu_PE). Notice that the P (Pruned) flag is set. The incoming interface is Fast Ethernet 1/0, and the RPF neighbor is 10.20.10.1, which is logical because 10.20.10.1 is Chengdu_PE. The outgoing interface list for this source is null, which is consistent with the P flag but is not what it should be: the entry should not be pruned, and the outgoing interface list should contain interfaces serial 1/0 and serial 1/1 (toward HongKong_P and Shanghai_P, respectively).

Highlighted lines 3 and 4 show the (S, G) entries for sources 10.1.1.4 (HongKong_PE) and 10.1.1.6 (Shanghai_PE). Notice that the incoming interface is null, and the RPF neighbor is 0.0.0.0. This is very strange. The incoming interface for 10.1.1.4 should be serial 1/0, and the RPF neighbor should be 10.1.1.3 (HongKong_P). Similarly, the incoming interface for 10.1.1.6 should be serial 1/1, and the RPF neighbor 10.1.1.5 (Shanghai_P).

Next, PIM neighbor discovery is checked on Chengdu_P using the show ip pim neighbor command as shown in Example 6-170.

Example 6-170 PIM Neighbor Discovery Is Checked on Chengdu_P

Chengdu_P#show ip pim neighbor
PIM Neighbor Table
Neighbor Address  Interface                Uptime    Expires   Ver  Mode
10.20.10.1        FastEthernet1/0          03:12:17  00:01:25  v2
10.20.20.2        Serial1/0                02:50:47  00:01:15  v2
10.20.40.2        Serial1/1                02:29:29  00:01:19  v2
Chengdu_P#

The highlighted lines show that HongKong_P (10.20.20.2) and Shanghai_P (10.20.40.2) are PIM neighbors, which is as it should be.

The debug ip pim command is then used to check which PIM messages are being received by Chengdu_P from the PE routers, as shown in Example 6-171 (note that only the relevant portions of the output are shown).

Example 6-171 debug ip pim Command Output on Chengdu_P

Chengdu_P#debug ip pim
PIM debugging is on
Chengdu_P#
04:35:56: PIM: Received v2 Register on Serial1/0 from 10.20.30.2
04:35:56:   (Data-header) for 10.1.1.4, group 239.0.0.1
04:35:56: PIM: RPF lookup failed to source 10.1.1.4
04:36:03: PIM: Received v2 Register on Serial1/1 from 10.20.60.2
04:36:03:   (Data-header) for 10.1.1.6, group 239.0.0.1
04:36:03: PIM: RPF lookup failed to source 10.1.1.6
Chengdu_P#

Here are some answers. Highlighted lines 1 to 3 show that HongKong_PE (10.1.1.4) is sending PIM Register messages to Chengdu_P (remember that Chengdu_P is the RP). HongKong_PE is trying to notify Chengdu_P that it is an active source for the default MDT (239.0.0.1). Unfortunately, the RPF check on the encapsulated multicast packet fails. Highlighted lines 4 to 6 show that exactly the same thing is happening for Shanghai_PE (10.1.1.6).

The RPF check state on Chengdu_P can also be verified for sources 10.1.1.4 and 10.1.1.6 using the show ip rpf command, as shown in Example 6-172.

Example 6-172 show ip rpf Command Output on Chengdu_P

! For HongKong_PE (10.1.1.4):
Chengdu_P#show ip rpf 10.1.1.4
RPF information for ? (10.1.1.4) failed, no route exists
Chengdu_P#
! For Shanghai_PE (10.1.1.6):
Chengdu_P#show ip rpf 10.1.1.6
RPF information for ? (10.1.1.6) failed, no route exists
Chengdu_P#

The highlighted lines show that the RPF check fails for both sources 10.1.1.4 and 10.1.1.6 because no route exists to these sources.

This is verified using the show ip route command as shown in Example 6-173.

Example 6-173 show ip route Command Output

! For source 10.1.1.4 (HongKong_PE):
Chengdu_P#show ip route 10.1.1.4
Routing entry for 10.1.1.4/32
 Known via "isis", distance 115, metric 20, type level-2
 Redistributing via isis
 Last update from 10.1.1.3 on Tunnel20, 03:34:51 ago
 Routing Descriptor Blocks:
 * 10.1.1.3, from 10.1.1.4, via Tunnel20
   Route metric is 20, traffic share count is 1
Chengdu_P#
! For source 10.1.1.6 (Shanghai_PE):
Chengdu_P#show ip route 10.1.1.6
Routing entry for 10.1.1.6/32
 Known via "isis", distance 115, metric 20, type level-2
 Redistributing via isis
 Last update from 10.1.1.5 on Tunnel10, 03:13:46 ago
 Routing Descriptor Blocks:
 * 10.1.1.5, from 10.1.1.6, via Tunnel10
   Route metric is 20, traffic share count is 1
Chengdu_P#

Highlighted line 1 reveals that there is in fact a route to 10.1.1.4/32 via interface tunnel 20 (a TE tunnel). Similarly, highlighted line 2 shows that there is a route to 10.1.1.6/32 via interface tunnel 10. Why does the show ip rpf command not show a route?

The answer is that multicast packets (Register messages, in this case) are received on interfaces serial 1/0 and serial 1/1, but the unicast routes back to the source of the packets are via the TE tunnels. This causes the RPF check failure. When the RPF check fails, multicast packets are dropped.

You might think that the solution to this problem is to enable PIM on the TE tunnels. Unfortunately, TE tunnels are unidirectional, so that will not work.

The answer is to allow unicast traffic to be forwarded over the TE tunnels while ensuring that multicast traffic uses the physical interfaces. This can be achieved by configuring the mpls traffic-eng multicast-intact command on the head-end router of each TE tunnel. The command is configured under either the IS-IS or the OSPF process, depending on which IGP is in use. It is enabled on Chengdu_P, HongKong_P, and Shanghai_P (the TE tunnel head-ends), as shown in Example 6-174.

Example 6-174 Configuration of mpls traffic-eng multicast-intact on Chengdu_P

Chengdu_P#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Chengdu_P(config)#router isis
Chengdu_P(config-router)#mpls traffic-eng multicast-intact
Chengdu_P(config-router)#end
Chengdu_P#

In Example 6-174, mpls traffic-eng multicast-intact is enabled on Chengdu_P. This command is similarly enabled on HongKong_P and Shanghai_P.
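
If the backbone IGP were OSPF rather than IS-IS, the equivalent configuration would look like the following sketch (the OSPF process number 1 is an assumption; use the process number actually running in your backbone):

Chengdu_P#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Chengdu_P(config)#router ospf 1
Chengdu_P(config-router)#mpls traffic-eng multicast-intact
Chengdu_P(config-router)#end
Chengdu_P#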

RPF information for sources 10.1.1.4 (HongKong_PE) and 10.1.1.6 (Shanghai_PE) is now rechecked on Chengdu_P using the show ip rpf command, as shown in Example 6-175.

Example 6-175 Rechecking RPF Information for Sources 10.1.1.4 (HongKong_PE) and 10.1.1.6 (Shanghai_PE)

! For 10.1.1.4 (HongKong_PE):
Chengdu_P#show ip rpf 10.1.1.4
RPF information for ? (10.1.1.4)
 RPF interface: Serial1/0
 RPF neighbor: ? (10.20.20.2)
 RPF route/mask: 10.1.1.4/32
 RPF type: unicast (isis)
 RPF recursion count: 0
 Doing distance-preferred lookups across tables
Chengdu_P#
! For 10.1.1.6 (Shanghai_PE):
Chengdu_P#show ip rpf 10.1.1.6
RPF information for ? (10.1.1.6)
 RPF interface: Serial1/1
 RPF neighbor: ? (10.20.40.2)
 RPF route/mask: 10.1.1.6/32
 RPF type: unicast (isis)
 RPF recursion count: 0
 Doing distance-preferred lookups across tables
Chengdu_P#

As you can see, the RPF check for source 10.1.1.4 (HongKong_PE) now uses interface serial 1/0, and the RPF check for source 10.1.1.6 (Shanghai_PE) now uses interface serial 1/1.

The multicast routing table is then examined using the show ip mroute command, as shown in Example 6-176.

Example 6-176 Examining the Multicast Routing Table

Chengdu_P#show ip mroute 239.0.0.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
    R - RP-bit set, F - Register flag, T - SPT-bit set, J - Join SPT
    M - MSDP created entry, X - Proxy Join Timer Running
    A - Candidate for MSDP Advertisement
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.0.0.1), 00:19:55/00:03:29, RP 10.1.1.2, flags: S
 Incoming interface: Null, RPF nbr 0.0.0.0
 Outgoing interface list:
  FastEthernet1/0, Forward/Sparse-Dense, 00:19:26/00:02:44
  Serial1/0, Forward/Sparse-Dense, 00:07:51/00:03:29
  Serial1/1, Forward/Sparse-Dense, 00:07:44/00:02:40
(10.1.1.1, 239.0.0.1), 00:19:41/00:02:58, flags: T
 Incoming interface: FastEthernet1/0, RPF nbr 10.20.10.1
 Outgoing interface list:
  Serial1/0, Forward/Sparse-Dense, 00:07:44/00:02:40
  Serial1/1, Forward/Sparse-Dense, 00:07:51/00:03:29
(10.1.1.4, 239.0.0.1), 00:08:31/00:03:25, flags: T
 Incoming interface: Serial1/0, RPF nbr 10.20.20.2
 Outgoing interface list:
  FastEthernet1/0, Forward/Sparse-Dense, 00:08:33/00:02:42
  Serial1/1, Forward/Sparse-Dense, 00:07:46/now
(10.1.1.6, 239.0.0.1), 00:08:45/00:03:28, flags: T
 Incoming interface: Serial1/1, RPF nbr 10.20.40.2
 Outgoing interface list:
  FastEthernet1/0, Forward/Sparse-Dense, 00:08:45/00:02:42
Chengdu_P#

Highlighted line 1 shows the (*, G) entry for the default MDT (239.0.0.1). In highlighted line 2, the (S, G) entry for source 10.1.1.1 (Chengdu_PE) is shown. The incoming interface is now FastEthernet1/0, and the RPF neighbor is 10.20.10.1 (Chengdu_PE). The outgoing interface list now contains Serial1/0 (toward HongKong_P) and Serial1/1 (toward Shanghai_P).

Highlighted line 3 shows the (S, G) entry for source 10.1.1.4 (HongKong_PE). The incoming interface is Serial1/0. The RPF neighbor is 10.20.20.2 (HongKong_P). The outgoing interface list is Serial1/1 and FastEthernet1/0.

Finally, the entry for source 10.1.1.6 (Shanghai_PE) is shown in highlighted line 4. The incoming interface is Serial1/1. The RPF neighbor is 10.20.40.2 (Shanghai_P). The outgoing interface list is FastEthernet1/0.

Default MDT traffic is now being forwarded correctly across the MPLS VPN backbone, and multicast traffic from the server at site 1 is now being received by multicast receivers at sites 2 and 3.
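
As a final sanity check (a sketch only; this output is not part of the original example set), the PIM adjacencies formed between the PE routers inside VRF mjlnet_VPN over the multicast tunnel interface, and the customer multicast state itself, could be verified on Chengdu_PE:

Chengdu_PE#show ip pim vrf mjlnet_VPN neighbor
! HongKong_PE and Shanghai_PE should now appear as PIM neighbors on the MDT
! tunnel interface.
Chengdu_PE#show ip mroute vrf mjlnet_VPN
! The customer (S, G) entries should now have populated outgoing interface lists.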

Additional Troubleshooting Commands

This section introduces additional commands that might be useful in troubleshooting MPLS VPNs.

show ip cef vrf vrf_name detail

The show ip cef [vrf vrf_name] [prefix] detail command can be used to view detailed information about CEF FIB entries.

Example 6-177 shows the VRF mjlnet_VPN CEF FIB entry for prefix 172.16.5.0/24.

Example 6-177 show ip cef vrf vrf_name detail Command Output

Chengdu_PE#show ip cef vrf mjlnet_VPN 172.16.5.0 detail
172.16.5.0/24, version 52, epoch 0, cached adjacency 10.20.10.2
0 packets, 0 bytes
 tag information set, all rewrites owned
  local tag: VPN route head
  fast tag rewrite with Fa1/0, 10.20.10.2, tags imposed {35 16}
 via 10.1.1.4, 0 dependencies, recursive
  next hop 10.20.10.2, FastEthernet1/0 via 10.1.1.4/32 (Default)
  valid cached adjacency
  tag rewrite with Fa1/0, 10.20.10.2, tags imposed {35 16}
Chengdu_PE#

Highlighted line 1 shows the prefix itself (172.16.5.0/24), the FIB version, the epoch, and the cached adjacency for next-hop 10.20.10.2.

The epoch indicates the number of times the CEF table has been rebuilt. The epoch number starts at 0 and increments with each rebuild to a maximum of 255, at which point it loops back to 0. The clear ip cef epoch command can be used to rebuild the CEF table. This command can be used if there are inconsistencies in the CEF table.

The next-hop (10.20.10.2) cached adjacency shows that a Layer 2 header has been built and stored in the adjacency table. Note that this next-hop is the resolved (physical Layer 3) next-hop and not necessarily the next-hop of the route shown in the routing table.

Highlighted line 2 shows the next-hop for the prefix. This next-hop is the one stored in the routing table. Because this is a VPN route, this is the BGP update source of the PE router that sent the route.

Finally, highlighted line 3 shows the outgoing interface (Fast Ethernet 1/0), the resolved next-hop, and the labels to be imposed on packets being forwarded to the prefix (destination).

In this case, there is a two-label stack, with an outer IGP label (35), and an inner VPN label (16). Note that CEF is responsible for imposing a label stack on IP packets as they enter the MPLS backbone.
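
As a cross-check (a sketch; the prefixes and label values are simply those seen in Example 6-177, so treat them as assumptions about this particular network), the outer IGP label {35} should match the label learned for the BGP next-hop 10.1.1.4/32, and the inner label {16} should match the VPN label advertised by the remote PE:

Chengdu_PE#show mpls forwarding-table 10.1.1.4 32
! The outgoing label for 10.1.1.4/32 should be 35 (the outer label in Example 6-177).
Chengdu_PE#show ip bgp vpnv4 vrf mjlnet_VPN 172.16.5.0
! The label advertised by HongKong_PE for 172.16.5.0/24 should be 16 (the inner label).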

show adjacency detail

You can use the show adjacency detail command to examine the contents of the CEF adjacency table.

Example 6-178 shows the CEF adjacency table for Chengdu_PE.

Example 6-178 show adjacency detail Command Output

Chengdu_PE#show adjacency detail
Protocol Interface         Address
IP    Serial4/1         point2point(20)
                  8138 packets, 630040 bytes
                  FF030021
                  CEF  expires: 00:02:18
                     refresh: 00:00:18
                  Epoch: 0
TAG   FastEthernet1/0      10.20.10.2(24)
                  6467 packets, 3867274 bytes
                  0006535AEFC0
                  00049BD60C1C8847
                  TFIB    02:27:03
                  Epoch: 0
IP    FastEthernet1/0      10.20.10.2(68)
                  0 packets, 0 bytes
                  0006535AEFC0
                  00049BD60C1C0800
                  ARP    02:26:54
                  Epoch: 0
Chengdu_PE#

Highlighted line 1 shows the adjacency table entry used for label (tag) switching packets out of interface FastEthernet1/0. The IP address of the next-hop is also shown, together with (in brackets) the number of times that this adjacency table entry is referenced by CEF FIB table entries (seen with the show ip cef command).

Highlighted line 2 shows CEF accounting statistics, including the number of packets and bytes switched.

Highlighted lines 3 and 4 show the cached Layer 2 header to be used when label switching out of interface FastEthernet1/0. Note in particular the last four hexadecimal numerals (8847). This is the Ethertype for MPLS.

Highlighted line 5 indicates the source of the adjacency table entry. In this case, CEF is interfacing to the LFIB (shown as TFIB). The time until this entry expires is also shown.

Highlighted line 6 indicates the epoch of the adjacency table (0). In highlighted line 7, the adjacency table entry used for switching IP packets out of interface FastEthernet1/0 is shown.

Note highlighted lines 8 and 9. These lines show the cached Layer 2 header used when switching IP packets out of the interface. This header is the same as that shown in highlighted lines 3 and 4, with one difference: the last four hexadecimal numerals (0800) in highlighted line 9. The Ethertype 0800 identifies IP.
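
On a PE router with many adjacencies, the output can be narrowed to a single interface by supplying an interface argument; a brief sketch:

Chengdu_PE#show adjacency FastEthernet1/0 detail
! Only the adjacencies that point out of FastEthernet1/0 (both the TAG and the
! IP entries seen in Example 6-178) are displayed.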

show mpls ldp parameters

The show mpls ldp parameters command is used to examine LDP parameters as demonstrated in Example 6-179.

Example 6-179 show mpls ldp parameters Command Output

Chengdu_PE#show mpls ldp parameters
Protocol version: 1
Downstream label generic region: min label: 16; max label: 100000
Session hold time: 180 sec; keep alive interval: 60 sec
Discovery hello: holdtime: 15 sec; interval: 5 sec
Discovery targeted hello: holdtime: 90 sec; interval: 10 sec
Downstream on Demand max hop count: 255
TDP for targeted sessions
LDP initial/maximum backoff: 15/120 sec
LDP loop detection: off
Chengdu_PE#

Highlighted line 1 shows the LDP version in use. Currently there is only one version. Highlighted line 2 shows the generic label range available for label assignment (16 to 100000). Labels 0 to 15 are reserved.

Label value 0 is "IPv4 Explicit Null," label value 1 is "Router Alert," label value 2 is "IPv6 Explicit Null," and label value 3 is "Implicit Null." Label values 4 to 15 are reserved for future use.

In highlighted line 3, the session holdtime (180 seconds) and keepalive interval (60 seconds) are shown. Note that one session is maintained between LSRs per label space. This means that a single session is maintained for all of the frame-mode interfaces connecting two neighboring LSRs, and a separate session is maintained for each LC-ATM interface connecting them.

Highlighted line 4 shows the discovery holdtime and interval (15 and 5 seconds, respectively). These parameters are used for directly connected neighbors.

Highlighted line 5 shows the discovery parameters for neighbors that are not directly connected (90 seconds for holdtime, and 10 seconds for the hello interval).

Highlighted line 6 shows the maximum hop count for label request messages in a downstream-on-demand environment (255). If an ATM-LSR receives a label request message with this hop count, the message is dropped because it is assumed that the message is looping between adjacent LSRs. This hop count is configurable using the mpls ldp maxhops command.

Highlighted line 7 shows any configured parameters for targeted LDP sessions. Targeted sessions are those between non-directly connected neighbors.

Highlighted line 8 shows the backoff timers (initial and maximum). The backoff mechanism ensures that if two LSRs are configured with incompatible LDP parameters, they will not reattempt session establishment at a constant time interval. Instead, the time interval between session establishment attempts will gradually increase.

Finally, highlighted line 9 shows whether LDP loop detection is enabled.
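
If these defaults ever need to be tuned, the corresponding global configuration commands look like the following sketch (the values shown simply restate the defaults and are illustrative assumptions, not recommendations):

Chengdu_PE#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Chengdu_PE(config)#mpls ldp holdtime 180
Chengdu_PE(config)#mpls ldp discovery hello holdtime 15
Chengdu_PE(config)#mpls ldp discovery hello interval 5
Chengdu_PE(config)#end
Chengdu_PE#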

show mpls atm-ldp capability

The show mpls atm-ldp capability command displays ATM (cell-mode) parameters negotiated during session initialization.

Example 6-180 shows the output of the show mpls atm-ldp capability command.

Example 6-180 show mpls atm-ldp capability Command Output

Chengdu_PE#show mpls atm-ldp capability
              VPI        VCI          Alloc   Odd/Even  VC Merge
ATM3/0.1      Range      Range        Scheme  Scheme    IN   OUT
 Negotiated   [1 - 1]    [33 - 1018]  UNIDIR            -    -
 Local        [1 - 1]    [33 - 1018]  UNIDIR            NO   NO
 Peer         [1 - 1]    [33 - 1023]  UNIDIR            -    -
Chengdu_PE#

Highlighted line 2 shows the VPI (1 to 1) and VCI (33 to 1018) ranges used for label switching by the local ATM-LSR. The VPI/VCI ranges of adjacent ATM-LSRs must overlap; otherwise, the session will not be established. The VPI range can be modified using the mpls atm vpi command.

Highlighted line 2 also shows that the allocation scheme is unidirectional for VCIs with the same VPI value, and that VC-merge is not supported in either the inbound or the outbound direction on this ATM interface.

Highlighted line 3 shows the same information for the peer ATM-LSR. In this case, the peer is using the VPI range 1 to 1 and the VCI range 33 to 1023.

The VPI/VCI ranges negotiated between the local and peer ATM-LSRs during session initialization (1 to 1, and 33 to 1018) are shown in highlighted line 1.

show atm vc

The show atm vc command can be used to verify the control VC, together with any Label Virtual Circuits (LVCs) created on LC-ATM interfaces.

Example 6-181 shows the output of the show atm vc command.

Example 6-181 show atm vc Command Output

Chengdu_PE#show atm vc
            VCD /                                       Peak   Avg/Min Burst
Interface   Name         VPI   VCI  Type   Encaps   SC  Kbps   Kbps    Cells  Sts
3/0.1       1            0     32   PVC    SNAP     UBR 155000                UP
3/0.1       2            1     33   TVC    MUX      UBR 155000                UP
3/0.1       3            1     34   TVC    MUX      UBR 155000                UP
3/0.1       4            1     35   TVC    MUX      UBR 155000                UP
3/0.1       5            1     36   TVC    MUX      UBR 155000                UP
3/0.1       6            1     37   TVC    MUX      UBR 155000                UP
Chengdu_PE#

The control VC is shown in highlighted line 1. It uses the default VPI/VCI of 0/32. Note additionally that it uses a SNAP encapsulation.

Highlighted line 2 shows an LVC (shown as a TVC) created on the LC-ATM interface. This LVC uses VPI/VCI 1/33.

Note the four other LVCs created on the interface, each with a unique VPI/VCI. These LVCs use MUX encapsulation because they will carry only MPLS datagrams. The control VC VPI/VCI can be modified using the mpls atm control-vc command. The LVC VPI range can be modified using the mpls atm vpi command.
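
For reference, both commands are entered on the LC-ATM (MPLS) subinterface; a minimal sketch follows (the VPI/VCI values are illustrative assumptions only, and both ends of the link must be configured consistently):

Chengdu_PE#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Chengdu_PE(config)#interface ATM3/0.1 mpls
Chengdu_PE(config-subif)#mpls atm control-vc 0 32
Chengdu_PE(config-subif)#mpls atm vpi 2 - 4
Chengdu_PE(config-subif)#end
Chengdu_PE#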

show ip bgp vpnv4 vrf vrf_name labels

The show ip bgp vpnv4 vrf vrf_name labels command can be used to examine the VPN labels assigned to VRF prefixes.

Example 6-182 shows the output of the show ip bgp vpnv4 vrf vrf_name labels command.

Example 6-182 show ip bgp vpnv4 vrf vrf_name labels Command Output

HongKong_PE#show ip bgp vpnv4 vrf mjlnet_VPN labels
   Network          Next Hop      In label/Out label
Route Distinguisher: 64512:100 (mjlnet_VPN)
   172.16.1.0/24    10.1.1.1        nolabel/26
   172.16.2.0/24    10.1.1.1        nolabel/27
   172.16.3.0/24    10.1.1.1        nolabel/28
   172.16.4.0/24    10.1.1.1        nolabel/29
   172.16.4.2/32    10.1.1.1        nolabel/30
   172.16.5.0/24    172.16.8.2      26/nolabel
   172.16.6.0/24    172.16.8.2      27/nolabel
   172.16.7.0/24    172.16.8.2      28/nolabel
   172.16.8.0/24    0.0.0.0         29/aggregate(mjlnet_VPN)
   172.16.8.2/32    0.0.0.0         30/nolabel
HongKong_PE#

Highlighted lines 1 to 5 show the VPN labels assigned to remote VRF prefixes. Highlighted lines 6 to 10 show VPN labels assigned to local customer VPN routes.
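
To see the corresponding information for every VRF at once rather than for a single VRF, the all keyword can be used instead (a brief sketch):

HongKong_PE#show ip bgp vpnv4 all labels
! Labels for every route distinguisher/VRF configured on the router are listed,
! grouped per route distinguisher as in Example 6-182.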

debug mpls ldp transport events

The LDP peer discovery mechanism can be monitored using the debug mpls ldp transport events command.

Example 6-183 shows the output of the debug mpls ldp transport events command.

Example 6-183 debug mpls ldp transport events Command Output

Chengdu_PE#debug mpls ldp transport events
LDP transport events debugging is on
Chengdu_PE#
*Jan 22 06:12:22.407 UTC: ldp: enabling ldp on Serial4/0
*Jan 22 06:12:22.407 UTC: ldp: Set intf id: intf 0x61F4BBA4, Serial4/0, not
 lc-atm, intf_id 0
*Jan 22 06:12:22.407 UTC: ldp: i/f status change: Serial4/0; cur/des flags
 0x2/0x2mcast 1
*Jan 22 06:12:22.411 UTC: ldp: Send ldp hello; Serial4/0, src/dst
 10.20.10.1/224.0.0.2, inst_id 0
*Jan 22 06:12:26.555 UTC: ldp: Send ldp hello; Serial4/0, src/dst
 10.20.10.1/224.0.0.2, inst_id 0
*Jan 22 06:12:26.695 UTC: ldp: Rcvd ldp hello; Serial4/0, from 10.20.10.2
 (10.1.1.2:0), intf_id 0, opt 0xC
*Jan 22 06:12:26.695 UTC: ldp: ldp Hello from 10.20.10.2 (10.1.1.2:0) to
 224.0.0.2, opt 0xC
*Jan 22 06:12:26.695 UTC: ldp: New adj 0x622FFB10 for 10.1.1.2:0, Serial4/0
*Jan 22 06:12:26.695 UTC: ldp: adj_addr/xport_addr 10.20.10.2/10.1.1.2
*Jan 22 06:12:26.695 UTC: ldp: local idb = Serial4/0, holdtime = 15000, peer
 10.20.10.2 holdtime = 15000
*Jan 22 06:12:26.695 UTC: ldp: Link intvl min cnt = 2, intvl = 5000,
 idb = Serial4/0
Chengdu_PE#

In highlighted line 1, LDP is enabled on interface serial 4/0. Highlighted line 2 shows that an interface ID has been set for the interface. Note that the interface ID is 0. This indicates that this is not an LC-ATM interface.

In highlighted line 3, the interface status changes, and then in highlighted lines 4 and 5, two LDP neighbor discovery messages are sent. Notice that these messages are sent to the all routers multicast address (224.0.0.2).

In highlighted line 6, a discovery (hello) message is received from the peer LSR. When a discovery message is received on an interface, initiation of a link adjacency is allowed. In highlighted line 7, the receipt of the discovery hello message is reported, and in highlighted lines 8 and 9, the LDP adjacency with the peer is confirmed.
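
The results of the discovery process traced above can be summarized at any time with the show mpls ldp discovery command (a sketch; this output is not part of the original example set):

Chengdu_PE#show mpls ldp discovery
! Each interface on which hellos are being sent is listed, together with the
! LDP identifier of any discovered peer (for example, 10.1.1.2:0 on Serial4/0).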

debug mpls ldp messages

The debug mpls ldp messages command shows LDP messages sent and received by the LSR, as demonstrated in the sample output in Example 6-184.

Example 6-184 debug mpls ldp messages Command Output

Chengdu_PE#debug mpls ldp messages sent
LDP sent PDUs, excluding periodic Keep Alives debugging is on
Chengdu_PE#
*Jan 22 06:07:06.079 UTC: ldp: Sent init msg to 10.1.1.2:0 (pp 0x0)
*Jan 22 06:07:06.083 UTC: ldp: Sent keepalive msg to 10.1.1.2:0 (pp 0x0)
*Jan 22 06:07:06.135 UTC: ldp: Sent address msg to 10.1.1.2:0 (pp 0x622AFBE0)
*Jan 22 06:07:06.139 UTC: ldp: Sent label mapping msg to 10.1.1.2:0
 (pp 0x622AFBE0)
*Jan 22 06:07:06.139 UTC: ldp: Sent label mapping msg to 10.1.1.2:0 (pp 0x622AFB
E0)
*Jan 22 06:07:06.139 UTC: ldp: Sent label mapping msg to 10.1.1.2:0 (pp 0x622AFB
E0)
Chengdu_PE#

Highlighted line 1 shows an Initialization message being sent by the local LSR to neighboring LSR 10.1.1.2:0. Initialization messages are sent during session establishment and are used to negotiate common parameters such as VPI/VCI range and VC-merge capability.

In highlighted line 2, a keepalive message is sent. This keepalive is used to monitor the underlying LDP session transport connection.

An Address message is sent to LSR 10.1.1.2:0 in highlighted line 3. The Address message is used by an LSR to advertise its interface addresses to a peer.

In highlighted line 4, a Label Mapping message is sent. This is used to advertise a label binding.

debug mpls ldp advertisements

The debug mpls ldp advertisements command is used to monitor address (interface) and label bindings advertisement to peer LSRs.

CAUTION

As with all debug commands, exercise extra caution when using this command because it can produce copious output and impact device performance.

Example 6-185 shows the output of the debug mpls ldp advertisements command.

Example 6-185 debug mpls ldp advertisements Command Output

Chengdu_PE#debug mpls ldp advertisements
LDP label and address advertisements debugging is on
Chengdu_PE#
*Jan 22 06:01:53.347 UTC: tagcon: Assign peer id; 10.1.1.2:0: id 0
*Jan 22 06:01:53.347 UTC: tagcon: peer 10.1.1.2:0 (pp 0x622B0234): advertise
 10.1.1.1
*Jan 22 06:01:53.347 UTC: tagcon: peer 10.1.1.2:0 (pp 0x622B0234): advertise 10.
20.10.1
*Jan 22 06:01:53.347 UTC: tagcon: peer 10.1.1.2:0 (pp 0x622B0234): advertise 10.
1.1.1/32, label 3 (imp-null) (#2)
*Jan 22 06:01:53.351 UTC: tagcon: peer 10.1.1.2:0 (pp 0x622B0234): advertise
 10.20.10.2/32, label 16 (#4)
*Jan 22 06:01:53.351 UTC: tagcon: peer 10.1.1.2:0 (pp 0x622B0234): advertise
 10.20.10.0/24, label 3 (imp-null) (#6)
*Jan 22 06:01:53.351 UTC: tagcon: peer 10.1.1.2:0 (pp 0x622B0234): advertise
 10.20.20.2/32, label 17 (#8)
Chengdu_PE#

In highlighted line 1, Chengdu_PE advertises an interface address (10.1.1.1) to peer LSR 10.1.1.2:0.

In highlighted line 2, a binding for prefix 10.20.10.2/32 with label 16 is advertised to peer 10.1.1.2:0.
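
Related to what this debug shows, the set of prefixes for which bindings are advertised can be restricted; a minimal sketch, assuming a standard access list 10 that matches only the infrastructure prefixes you want labeled (the access-list contents here are an assumption for illustration):

Chengdu_PE#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Chengdu_PE(config)#no mpls ldp advertise-labels
Chengdu_PE(config)#access-list 10 permit 10.1.1.0 0.0.0.255
Chengdu_PE(config)#mpls ldp advertise-labels for 10
Chengdu_PE(config)#end
Chengdu_PE#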

debug mpls ldp bindings

The debug mpls ldp bindings command can be used to examine addresses and bindings received from the peer LSR.

CAUTION

As with all debug commands, exercise extra caution when using this command because it can produce copious output and impact device performance.

Example 6-186 shows the output of the debug mpls ldp bindings command.

Example 6-186 debug mpls ldp bindings Command Output

Chengdu_PE#debug mpls ldp bindings
LDP Label Information Base (LIB) changes debugging is on
Chengdu_PE#
*Jan 22 06:16:33.571 UTC: tagcon: tibent(10.1.1.0/24): label 23 from 10.1.1.2:0
 removed
*Jan 22 06:16:33.571 UTC: tagcon: tibent(10.1.1.1/32): label 21 from 10.1.1.2:0
 removed
*Jan 22 06:16:33.571 UTC: tib: Not OK to announce label; nh 10.1.1.1 not bound
 to 10.1.1.2:0
*Jan 22 06:16:33.571 UTC: tagcon: Omit route_tag_change for: 10.1.1.1/32
    lsr 10.1.1.2:0: connected route
*Jan 22 06:16:33.571 UTC: tagcon: tibent(10.1.1.2/32): label imp-null from
 10.1.1.2:0 removed
*Jan 22 06:16:33.571 UTC: tagcon: tibent(10.1.1.3/32): label 20 from 10.1.1.2:0
 removed
*Jan 22 06:16:33.571 UTC: tagcon: tibent(10.20.10.0/24): label imp-null from
 10.1.1.2:0 removed
*Jan 22 06:16:33.571 UTC: tagcon: tibent(10.20.10.1/32): label 16 from
 10.1.1.2:0 removed
Chengdu_PE#

Highlighted lines 1 and 2 show label bindings for prefixes 10.1.1.0/24 and 10.1.1.1/32 being removed from the LIB (shown as TIB).

show and debug Command Summary

Table 6-2 summarizes show and debug commands used to troubleshoot MPLS in this chapter. Note that where appropriate, the equivalent old-style TDP command structure is shown.

Table 6-2  show and debug Command Summary

Command / Parameter                           Description

show ip cef
  <none>                                      Displays the CEF FIB
  summary                                     Displays a summary of the CEF FIB
  vrf vrf_name detail                         Shows detailed information about entries in the VRF CEF FIB

show cef
  interface                                   Shows interface CEF information

show adjacency
  detail                                      Displays detailed CEF adjacency table information

show atm vc
  <none>                                      Shows all ATM PVCs, SVCs, and LVCs

show mpls (show tag-switching)
  interfaces                                  Displays information about interfaces configured for MPLS
  interfaces detail                           Displays detailed information about interfaces configured for MPLS
  forwarding-table                            Displays the LFIB
  forwarding-table prefix detail              Displays detailed information about a prefix in the LFIB
  forwarding-table labels label_value         Shows the LFIB entry that contains the specified local label
  forwarding-table labels label_value detail  Shows detailed information about the LFIB entry that contains the specified local label

show mpls ldp (show tag-switching tdp)
  discovery                                   Shows information about LDP neighbor discovery
  neighbor                                    Shows information about LDP sessions
  bindings                                    Displays the LIB
  parameters                                  Shows configured LDP parameters

show mpls atm-ldp (show tag-switching atm-tdp)
  capability                                  Shows LDP parameters negotiated on LC-ATM interfaces

show ip vrf
  interfaces                                  Displays information about VRF interfaces
  detail vrf_name                             Displays detailed information about VRF interfaces

show ip protocols
  vrf vrf_name                                Shows routing protocol information for a specified VRF

show ip route
  vrf vrf_name                                Shows the VRF routing table

show ip rip database
  vrf vrf_name                                Shows the RIP database for a specified VRF

show ip eigrp vrf vrf_name
  interfaces                                  Shows EIGRP interface information for a specified VRF
  neighbors                                   Shows EIGRP neighbor information for a specified VRF
  topology                                    Displays the EIGRP topology table for the specified VRF
  traffic                                     Shows EIGRP traffic information for the specified VRF

show ip bgp vpnv4 vrf vrf_name
  <none>                                      Shows the (VPN-IPv4 address family) BGP table for the specified VRF
  labels                                      Shows labels associated with VPN-IPv4 prefixes for a specified VRF

show ip pim mdt bgp
  <none>                                      Shows information about MP-BGP neighbors advertising participation in MVPNs

debug mpls (debug tag-switching)
  atm-ldp api (atm-tdp api)                   Shows information about LVCs
  ldp transport events (tdp transport events) Shows information concerning LDP neighbor discovery
  ldp messages (tdp pies sent)                Displays information concerning LDP messages sent to and from the LSR
  ldp advertisements (tdp advertisements)     Shows information concerning the advertisement of labels and interface addresses
  ldp bindings (tdp bindings)                 Displays information about label bindings and addresses received from peer LDP LSRs

Review Questions

  1. What transport mechanism is used for neighbor discovery between directly connected LDP peers?

  2. What is the LIB?

  3. What is the LFIB?

  4. What is a VRF?

  5. How are overlapping customer address spaces disambiguated in the MPLS VPN backbone?

  6. Which BGP extended community is used to control which VPN routes are imported into VRFs?

  7. Which routing protocols can be used between Cisco CE and PE routers?

  8. What label distribution protocol is used to advertise VPN labels?

  9. What label distribution protocol is used with MPLS traffic engineering on Cisco routers?

  10. What are the default and data MDTs?

MPLS VPN Troubleshooting Practice Labs

This section provides some troubleshooting labs to help you to consolidate troubleshooting skills learned in this chapter.

The troubleshooting labs described are based on the network topology shown in Figure 6-49.

The goal of each lab is to restore IP connectivity across the MPLS VPN backbone between CE1 and CE2.

Figure 6-49

Figure 6-49. MPLS VPN Troubleshooting Labs Network Topology

Base configurations for the labs can be found on the Cisco Press Web site (http://www.ciscopress.com/1587051044).

Detailed instructions for loading these base configurations onto your lab routers can be found in Appendix B.

Use tag-switching troubleshooting commands when doing the labs (see Table 6-2).

Troubleshooting Lab 1

mjlnet_CE1 and mjlnet_CE2 have IP connectivity to Chengdu_PE and HongKong_PE, respectively, but no routes are exchanged across the MPLS VPN backbone between the CE routers.

Troubleshoot this issue, and restore end-to-end IP connectivity across the MPLS VPN backbone between mjlnet_CE1 and mjlnet_CE2.

Record symptoms, actions, and solutions in the space provided.

Symptoms:

Actions:

Solution:

Troubleshooting Lab 2

mjlnet_CE1 routes are being received on mjlnet_CE2, but there is still no IP connectivity between the CE routers.

Troubleshoot this issue, and restore end-to-end IP connectivity across the MPLS VPN backbone between mjlnet_CE1 and mjlnet_CE2.

Record symptoms, actions, and solutions in the space provided.

Symptoms:

Actions:

Solution:

Troubleshooting Lab 3

Route exchange is successful between mjlnet_CE1 and mjlnet_CE2, but there is no IP connectivity between them.

Troubleshoot this issue, and restore end-to-end IP connectivity across the MPLS VPN backbone between mjlnet_CE1 and mjlnet_CE2.

Record symptoms, actions, and solutions in the space provided.

Symptoms:

Actions:

Solution: