Segment Routing Control Plane
The control plane in Segment Routing (SR) plays a crucial role in managing how segment ID information is shared among network devices. Link-state Interior Gateway Protocol (IGP) mechanisms distribute segment IDs on Segment Routing networks. Both OSPF and IS-IS include protocol extensions to support the distribution of segment IDs. These extensions enable routers to maintain a comprehensive database containing information about all node and adjacency segments. Because IGPs are now responsible for distributing segment IDs, and labels in the case of the MPLS data plane, there is no need for a separate label distribution protocol, as mentioned earlier. Our control plane has become far simpler because it works with only one source of truth, the IGP, instead of having to reconcile both IGP and LDP information during failure events. It is important to note that the Segment Routing control plane can be applied to both MPLS and IPv6 data planes. In Cisco’s documentation, these are referred to as SR-MPLS and SRv6, the former running on MPLS labels and the latter on IPv6 routing. Let’s start by examining SR-MPLS and learn the details behind the protocols that provide this unified, well-organized improvement.
IS-IS Control Plane
The IS-IS control plane disseminates Segment Routing information within an autonomous system. Because LDP is not necessary, IS-IS distributes both the prefixes and the labels using extensions built into IS-IS itself. This allows for seamless deployment of Segment Routing in existing MPLS networks. Rather than modifying the protocol itself, the designers “extended” its use by providing additional protocol add-ons that carry information not originally envisioned by the protocol designers. Think of a train of railway cars, where the locomotive does not know the load being carried inside each car it is pulling. IS-IS works exactly this way: it uses Type-Length-Value (TLV) triplets along with sub-TLVs to encapsulate various information in its advertisements, so new functionality can be added to the protocol simply by defining new TLVs. It supports both IPv4 and IPv6 control planes and extends its reach to level-1, level-2, and multilevel routing. It is also capable of providing MPLS penultimate hop popping (PHP) and explicit-null signaling. Several RFCs, including RFC 8667 and RFC 8402, describe in great detail how the Prefix-SID and Adj-SID are carried in sub-TLVs. Figure 15-7 shows the format of the Prefix-SID sub-TLV.
Figure 15-7 IS-IS Prefix-SID Format
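To make the TLV idea concrete, the following is a minimal sketch of how a receiver walks a string of Type-Length-Value triplets (1-byte type, 1-byte length, variable value). The sample bytes are a hypothetical, simplified Prefix-SID sub-TLV body (flags, algorithm, 4-byte SID index); the real field layout is defined in RFC 8667 and is richer than this illustration.

```python
import struct

def parse_tlvs(data: bytes):
    """Walk a byte string of Type-Length-Value triplets (1-byte type,
    1-byte length, variable value), as used in IS-IS advertisements."""
    tlvs = []
    i = 0
    while i + 2 <= len(data):
        t, length = data[i], data[i + 1]
        tlvs.append((t, data[i + 2:i + 2 + length]))
        i += 2 + length
    return tlvs

# Hypothetical sub-TLV: type 3, length 6, then flags, algorithm, and a
# 4-byte SID index of 4 (simplified from the RFC 8667 layout).
sub_tlv = bytes([3, 6, 0x40, 0]) + struct.pack("!I", 4)
print(parse_tlvs(sub_tlv))  # [(3, b'@\x00\x00\x00\x00\x04')]
```

Because the length octet tells the parser how many bytes to skip, a router can carry unknown TLVs transparently, which is exactly the "railway cars" behavior described above.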
Table 15-3 shows the most significant TLVs you should be able to recognize on the exam.
Table 15-3 IS-IS TLVs
| TLV | Name | Description | Reference |
|---|---|---|---|
| 2 | IS Neighbors | Lists the IS-IS neighbors to which the router is connected; the metric is limited to a maximum of 63 because only 6 of the 8 metric bits are used. | ISO 10589 |
| 10 | Authentication | Information used to authenticate IS-IS PDUs. | ISO 10589 |
| 22 | Extended IS Reachability | Increases the maximum metric to 3 bytes (24 bits), addressing the TLV 2 metric limitation. | RFC 5305 |
| 134 | TE Router ID | MPLS Traffic Engineering router ID. | RFC 5305 |
| 135 | Extended IP Reachability | Provides a 32-bit metric with an “up/down” bit for route leaking from L2 to L1. | RFC 5305 |
| 149 | Segment Identifier/Label Binding | Advertises prefix-to-SID/label mappings. This functionality is called the Segment Routing Mapping Server (SRMS). | RFC 8667 |
| 222 | MT-ISN | Allows for multiple-topology adjacencies. | RFC 5120 |
| 236 | IPv6 Reachability | Describes network reachability through the specification of a routing prefix. | RFC 5308 |
| 242 | IS-IS Router CAPABILITY | Allows a router to announce its capabilities within an IS-IS level or the entire routing domain. | RFC 7981 |
IS-IS allocates the SRGB along with the Adjacency-SIDs and advertises both in IS-IS for the enabled address-families. IS-IS enables MPLS forwarding for all non-passive interfaces. Example 15-3 shows commands necessary to turn on Segment Routing in IS-IS.
Example 15-3 Commands to Turn on Segment Routing in IS-IS
IOS XR
RP/0/0/CPU0:PE4# show running-config router isis
router isis CCNP
 set-overload-bit on-startup 300
 is-type level-2-only
 net 49.0001.0000.0000.0004.00
 distribute link-state
 nsf ietf
 log adjacency changes
 lsp-gen-interval maximum-wait 10000 initial-wait 20 secondary-wait 200 level 2
 lsp-refresh-interval 65000
 max-lsp-lifetime 65535
 address-family ipv4 unicast
  metric-style wide
  metric 100 level 2
  microloop avoidance
  mpls traffic-eng level-2-only
  mpls traffic-eng router-id Loopback0
  spf-interval maximum-wait 2000 initial-wait 50 secondary-wait 200
  router-id Loopback0
  segment-routing mpls
 !
 address-family ipv6 unicast
  metric-style wide
  spf-interval maximum-wait 2000 initial-wait 50 secondary-wait 200
 !
 interface Loopback0
  passive
  address-family ipv4 unicast
   prefix-sid index 4
  !
 !
mpls traffic-eng
RP/0/0/CPU0:PE4#
OSPFv2 Control Plane
Much like IS-IS, OSPF does not rely on LDP to transmit prefix label information. It uses protocol extensions to distribute Segment Routing labels in the OSPFv2 control plane. OSPF relies on fixed-format link-state advertisements (LSAs) for its fundamental operations. Opaque LSAs were later introduced to add new protocol capabilities, accommodating features like Segment Routing and Traffic Engineering. These advertisements are flooded to OSPF neighbors opaquely, meaning that even if a transit router does not understand this information (perhaps because it runs older software), it will nonetheless transmit it to its neighboring routers. Multi-area functionality is supported, host loopback prefixes are advertised as IPv4 Prefix Segment IDs (Prefix-SIDs), and Adjacency Segment IDs (Adj-SIDs) are used for adjacencies. MPLS penultimate hop popping (PHP) and explicit-null signaling are also supported.
Note the format of Opaque LSAs in Figure 15-8.
Figure 15-8 Opaque LSA Format
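As a brief aside on the format, an Opaque LSA reuses the 32-bit Link State ID field: the high 8 bits carry the Opaque Type and the low 24 bits carry the Opaque ID (RFC 5250). A minimal sketch of that decoding:

```python
def decode_opaque_lsid(link_state_id: int):
    """Split an Opaque LSA Link State ID into Opaque Type (high 8 bits)
    and Opaque ID (low 24 bits), per RFC 5250."""
    return link_state_id >> 24, link_state_id & 0xFFFFFF

# Opaque Type 1 is the Traffic Engineering LSA; the Opaque ID value
# here (42) is just an illustrative instance number.
otype, oid = decode_opaque_lsid((1 << 24) | 42)
print(otype, oid)  # 1 42
```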
Opaque LSA types are identified by their flooding scope in Table 15-4. The best known of these (when it comes to Segment Routing) is the type 10 LSA, which distributes Traffic Engineering (TE) link attributes. It is often referred to as the TE LSA, although it has other applications as well.
Table 15-4 OSPF Opaque LSAs
| LSA Type | LSA Scope | Topology Flooding Scope | Reference |
|---|---|---|---|
| 9 | Link-local | Local network only | RFC 5250 |
| 10 | Area-local | Only within an area | RFC 5250 |
| 11 | Autonomous System | Domain-wide, same as AS-External type-5 LSAs | RFC 5250 |
Similar to IS-IS, OSPF will allocate and advertise the SRGB to its neighbors. It activates MPLS forwarding on all OSPF interfaces, excluding loopback interfaces, and assigns Adjacency-SIDs to these interfaces. Example 15-4 shows commands necessary to turn on OSPF for Segment Routing.
Example 15-4 Commands to Turn on OSPF for Segment Routing
IOS XR
RP/0/0/CPU0:PE4# show running-config router ospf
router ospf CCNP
 nsr
 distribute link-state
 log adjacency changes detail
 router-id 10.1.100.10
 segment-routing mpls
 segment-routing forwarding mpls
 fast-reroute per-prefix
 fast-reroute per-prefix ti-lfa enable
 affinity-map RED bit-position 0
 !
 nsf ietf
 ! Output omitted for brevity
 address-family ipv4 unicast
 area 0
  mpls traffic-eng
  segment-routing mpls
  interface Loopback0
   passive enable
   prefix-sid index 10
  !
  interface HundredGigE0/0/0/0
   bfd minimum-interval 20
   bfd fast-detect
   bfd multiplier 3
   cost 200
   network point-to-point
  !
  interface HundredGigE0/0/0/1
   bfd minimum-interval 20
   bfd fast-detect
   bfd multiplier 3
   cost 200
   network point-to-point
  !
 !
mpls traffic-eng
RP/0/0/CPU0:PE4#
What about OSPFv3? While OSPFv3 has the potential to accommodate Segment Routing for IPv6 and utilize a native IPv6 data plane, specific extensions outlined in an IETF draft are required for implementation. It’s worth noting that, at press time, these extensions have not been integrated into Cisco IOS XR and IOS XE.
BGP Control Plane
BGP also has the capability to function as the control plane for Segment Routing (SR), enabling prefix distribution throughout the network. In the context of Segment Routing, the BGP control plane distributes segment routing information between routers, enabling them to make forwarding decisions based on predefined segments. While seen less frequently than IS-IS and OSPF, BGP has been used effectively in practice in multiple large-scale web data centers. Such data centers can support over 100,000 services, profoundly influencing and challenging the scalability and operational efficiency of the underlying network architectures. To meet the demands of high-intensity east-west traffic found in these compute clusters, operators frequently opt for variations of Clos or Fat-tree topologies. In these massive data center networks, symmetrical topologies with numerous parallel paths connecting two server-attachment points are common. It is in this context that BGP excels and arguably surpasses the IGP approach. The assertion that “BGP is a better IGP” challenges traditional viewpoints and has sparked conversations.
What would make BGP an attractive choice? Remember that these massive data centers seek maximum bandwidth across the midpoint of the system. Such network structures are designed to be highly scalable and cost-effective and are constructed from affordable, low-end access-level switches. To maintain this level of scale, the designs call for a single protocol with simple behavior and wide vendor support. With the above in mind, when it comes to simplicity, BGP certainly has its advantages because it has a smaller state machine and fewer data structures. This may not appear intuitive at first glance, but it does not take long to realize that the BGP RIB structure is simpler than that of a Link-State Database (LSDB). There is a very clear picture of “which routing information is sent where”: there is a RIB-In and a RIB-Out, a far easier construct for tracing exact routing paths than following link-state topology constraints with areas and levels. When it comes to operational troubleshooting, this is definitely a strength. Event propagation is also more constrained in BGP because link failures have a limited propagation scope; we can argue that BGP has more stability due to the reduced “event-flooding” domains. When it comes to traffic steering, BGP allows for per-hop Traffic Engineering that can be used for unequal-cost Anycast load-balancing. In addition, BGP is widely supported by practically all vendors, so from the perspective of interoperability, BGP beats the IGPs. We have been conditioned to perceive BGP as slow and suitable primarily for inter-domain routing, but it has no issues demonstrating its adaptability and effectiveness in modern topologies. Therefore, it is advisable to approach BGP with an open mind, recognizing its potential to perform as well as, or even better than, traditional IGP alternatives in contemporary implementations.
BGP will advertise a BGP Prefix-SID associated with a prefix via the BGP Labeled Unicast (BGP-LU) IPv4/IPv6 Labeled Unicast address-families. The BGP Prefix-SID is a global SID, and its instruction forwards the packet over the ECMP-aware BGP best path to the associated prefix. RFC 8669 specifies that the Label-Index TLV must be present in the BGP Prefix-SID attribute attached to IPv4/IPv6 Labeled Unicast prefixes. This 32-bit value represents the index value in the SRGB space and has the format illustrated in Figure 15-9.
Figure 15-9 BGP Prefix SID Advertised Format
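The index-to-label relationship described above is simple arithmetic: the local MPLS label is the SRGB base plus the advertised index. A minimal sketch, assuming the default Cisco SRGB of 16000 to 23999:

```python
SRGB_BASE = 16000  # default Cisco SRGB start
SRGB_SIZE = 8000   # default SRGB holds 8000 labels (16000-23999)

def prefix_sid_label(index: int, base: int = SRGB_BASE, size: int = SRGB_SIZE) -> int:
    """Derive the local MPLS label for a Prefix-SID index:
    label = SRGB base + index, provided the index fits in the SRGB."""
    if not 0 <= index < size:
        raise ValueError("label index outside the SRGB")
    return base + index

print(prefix_sid_label(2))  # 16002
```

This is why a router advertising label index 2 is reachable via label 16002 on every node that uses the same SRGB, a property the later Anycast and SRMS examples rely on.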
The Prefix-SID for a locally originated BGP route can be set with a route-policy. Example 15-5 shows how to attach a label-index with network and redistribute commands.
Example 15-5 Attaching a Label-Index via a Route-Policy
IOS XR configuration with network command

route-policy SIDs($SID)
  set label-index $SID
end-policy
!
router bgp 100
 address-family ipv4 unicast
  network 10.1.100.4/32 route-policy SIDs(1)
  allocate-label all
IOS XR configuration with redistribute command

route-policy SIDs
  if destination in (10.1.100.4/32) then
    set label-index 1
  endif
end-policy
!
router bgp 100
 address-family ipv4 unicast
  redistribute connected route-policy SIDs
  allocate-label all
One last thing regarding having BGP as a Segment Routing control plane. Remember the Anycast load-balancing I mentioned earlier in this section? Anycast allows different nodes to advertise the same BGP prefix; it is an application of Prefix-SIDs to achieve anycast operations. Look at Figure 15-10, where I again moved some links around to represent a data center’s spine-and-leaf architecture, with the spines located at the top. PE2 and P4, while advertising their individual BGP Prefix-SIDs (16002 and 16004, respectively), have been made members of the same anycast set. Both of them advertise anycast prefix 10.1.100.24/32 with BGP-Anycast SID 20001. PE3 wants to send traffic to PE7 but would like to exclude spine PE6. BGP-Anycast SID 20001 will load-balance the traffic to any member of the anycast set, which then forwards it to PE7.
Figure 15-10 BGP-SR Anycast Load-Balancing
Additionally, because BGP Prefix-SIDs use global labels, the BGP-LU local labels are the same across all of the network’s ASBRs. As a result, these Anycast loopbacks can be used as the next hop for BGP-LU prefixes. That is pretty good resiliency! Nothing to scoff at, for sure.
SRv6 Control Plane
The SRv6 control plane manages the signaling, routing, and forwarding information for Segment Routing over IPv6 (SRv6) networks. It serves as the Segment Routing architecture tailored for the IPv6 data plane and extends the value of IPv6, influencing future IP infrastructure deployments, whether in data centers, large-scale aggregation, or backbone networks. SRv6 introduces a source-routing mechanism by encoding instructions within the IPv6 packet header.
The use of IPv6 addresses to identify objects, content, or functions applied to objects opens up significant possibilities, particularly in the realm of chaining microservices within distributed architectures or optimizing content networking. Notably, sizable networks, particularly in the Asia-Pacific region, have embraced SRv6, boasting tens of thousands of nodes on a single network as of the writing of this book.
Fundamentally, SRv6 encodes topological and service paths into the packet header. The SRv6 domain does not hold any per-flow state for Traffic Engineering or network function virtualization (NFV). Sub-50ms path protection is delivered with TI-LFA. It natively delivers all services in the packet header, without any shims or overlays. IPv4’s limitations have forced the industry to create extra tools to deal with its challenges. When IPv4 lacked sufficient address space, NAT was created to hide and conserve addresses. For engineered load-balancing, we had to invent the MPLS Entropy Label and VXLAN’s UDP source-port entropy. For separating discrete networks, MPLS VPNs and VXLAN were created. Since Traffic Engineering functions were missing in IPv4, RSVP-TE and SR-TE MPLS appeared. The Network Service Header (NSH) overcame IPv4 service-chaining limitations. All of the above is done natively in IPv6, which is why so many service providers are turning to this technology.
SRv6 (Segment Routing over IPv6) Header
At the heart of SRv6 is the IPv6 Segment Routing Header (SRH). Figure 15-11 shows the IPv6 SRH replicated from RFC 8754. This header is added to IPv6 packets to implement Segment Routing on the IPv6 forwarding plane. The SRH specifies an explicit IPv6 path, listing one or more intermediate nodes the packet should visit on the way to its final destination. The Segments Left field gives the number of segments still to be visited before the traffic reaches its final destination. The Segment List fields then indicate the sequence of nodes, as 128-bit IPv6 addresses, to be visited from bottom to top: Segment List [n] holds the first node in the path; Segment List [0] holds the last node in the path.
Figure 15-11 IPv6 Segment Routing Header Format
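The Segments Left mechanics can be sketched in a few lines. This is a minimal model of the RFC 8754 endpoint behavior, not a full SRH implementation: when Segments Left is non-zero, the endpoint decrements it and copies the next SID from the segment list into the packet's destination address.

```python
from dataclasses import dataclass

@dataclass
class SRH:
    segment_list: list  # segment_list[0] is the LAST segment, [n] the FIRST
    segments_left: int

def process_at_endpoint(srh: SRH, current_da: str) -> str:
    """Sketch of SRH endpoint processing (RFC 8754): decrement Segments
    Left and return the next destination address from the segment list."""
    if srh.segments_left == 0:
        return current_da  # final segment reached; no further SRH processing
    srh.segments_left -= 1
    return srh.segment_list[srh.segments_left]

# Illustrative three-segment path; the packet's DA starts at the first
# segment (segment_list[2]) and the next endpoint advances it.
srh = SRH(segment_list=["2001:db8::3", "2001:db8::2", "2001:db8::1"],
          segments_left=2)
print(process_at_endpoint(srh, "2001:db8::1"))  # 2001:db8::2
```

Note how the bottom-to-top ordering from the figure shows up directly: decrementing Segments Left walks the list from index n down toward index 0.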
In SRv6, each segment is represented by an IPv6 address known as a segment identifier (SID). These SIDs play a crucial role in defining specific paths or instructions for forwarding packets throughout the network. It looks a lot like a 128-bit IPv6 address, but has different semantics because it consists of two parts, with Figure 15-12 providing the visualization:
Locator: Represents an address of a specific SRv6 node performing the function.
Function: Represents a network instruction bound to the node that generates the SRv6 SID; the instruction is executed locally on the particular node identified by the locator bits.
Figure 15-12 IPv6 Segment Identifier
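The locator/function split is just a bit-field division of the 128-bit address. The sketch below assumes a 64-bit locator purely for illustration; in practice the operator chooses the boundary, and the example SID is hypothetical.

```python
import ipaddress

def split_sid(sid: str, locator_bits: int = 64):
    """Split a 128-bit SRv6 SID into its locator (high bits) and
    function (remaining low bits). The 64-bit boundary is an
    illustrative assumption, not a fixed rule."""
    value = int(ipaddress.IPv6Address(sid))
    locator = value >> (128 - locator_bits)
    function = value & ((1 << (128 - locator_bits)) - 1)
    return locator, function

# Hypothetical SID: locator identifies the node, f001 encodes a function.
loc, fn = split_sid("2001:db8:300::f001:0:0")
print(hex(loc), hex(fn))  # 0x20010db803000000 0xf00100000000
```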
You now have the ability to send packets to a node (locator) and then instruct the node to execute an action (function). This is not a subtle difference! In SR-MPLS, IGP with extensions advertised the transport mechanism, and services (L2VPNs, L3VPNs) were signaled independently via LDP or MP-BGP. You could change your transport (from MPLS to SR) without affecting the upper protocols that ran on top of it. For the first time in the industry, transport and services instructions are coupled and signaled in the SID. You will see an example of this coming up shortly where an L3VPN is written into the SID.
SRv6 Node Roles
In the context of SRv6 (Segment Routing over IPv6) networks, different nodes play distinct roles in facilitating packet forwarding and processing. These roles include
Source Node: This node has the capability to generate an IPv6 packet incorporating a Segment Routing Header (SRH), essentially forming an SRv6 packet. Alternatively, it serves as an ingress node that can apply an SRH to an existing IPv6 packet.
Transit Node: Found along the SRv6 packet’s path, the transit node functions without inspecting the SRH. The destination address of the IPv6 packet does not align with the transit node, and its role is primarily to forward the packet.
Endpoint Node: Located within the SRv6 domain, this node acts as the termination point for the SRv6 segment. The destination address of the IPv6 packet containing an SRH corresponds to the endpoint node. The endpoint node executes the specific function associated with the SID bound to the segment.
SRv6 Micro-Segment SID (uSID)
Often referred to as Micro-SID or Compressed SID, the uSID feature is an extension of the SRv6 architecture. In SRv6, the micro segment identifier, or uSID, is a specialized form of Segment Routing where packets are marked with a compact identifier for precise forwarding. Unlike traditional SRv6, which might use longer segment identifiers for various purposes, uSID is specifically designed for efficient and granular traffic steering. It provides a more streamlined approach to segment routing, particularly useful for scenarios requiring fine-grained control and scalability enhancements.
Using the established SRv6 Network Programming framework, it can encode up to six SRv6 Micro-SID (uSID) instructions within a single 128-bit SID address, termed a uSID carrier. Moreover, this extension seamlessly integrates with the existing SRv6 data plane and control plane, requiring no modifications. Notably, it ensures minimal MTU overhead: for instance, incorporating six uSIDs per uSID carrier yields 18 source-routing waypoints with just 40 bytes of overhead in the Segment Routing Header. Look at Figure 15-13, which illustrates the usage of uSID. Pay attention to how the highlighted uSIDs correspond to the router numbering/naming.
Figure 15-13 uSID in Action
The customer at CE1 is using the VPNv4 SP service to connect to the remote site CE8. Router PE2 sends VPNv4 traffic destined for CE8 to router PE3 via a traffic-engineered path visiting routers PE6 and PE7, using a single (!) SRv6 SID (note that without uSID, a sequential Segment List would have to be specified). Let’s unpack this:
PE2, PE6, PE7, and PE3 are SRv6 capable and are configured with the 32-bit SRv6 block 2001:db8::/32.
P4 and P5 run classic IPv6 forwarding and do not change the Destination Address.
PE6, PE7, and PE3 advertise their corresponding 2001:db8:0600::/48, 2001:db8:0700::/48, and 2001:db8:0300::/48 routes.
PE2 receives an IPv4 packet from CE1, encapsulates it, and sends an IPv6 packet with the destination address 2001:db8:0600:0700:0300:f001:0000:0000. This is an SRv6 uSID Carrier that contains a sequence of micro-SIDs (instructions 0600, 0700, 0300, f001, and 0000).
The 0600, 0700, and 0300 uSIDs are used to construct a traffic engineering path to PE3 with two stops along the way—PE6 and PE7. uSID f001 is a BGP-signaled instruction sent by PE3 indicating the VPNv4 service. uSID 0000 indicates the end of instructions.
What happens at P4? P4, running only classic IPv6, forwards the packet along the shortest path to PE6.
PE6 receives the packet, pops its own uSID 0600, and advances the micro-program by looking up the shortest path to the next Destination Address (DA) 2001:db8:0700::/48. Now the DA is 2001:db8:0700:0300:f001:0000:0000:0000. This behavior is called shift and forward.
PE7 receives the packet, pops its own uSID 0700, and advances the micro-program by looking up the shortest path to the next Destination Address (DA) 2001:db8:0300::/48. Now the DA is 2001:db8:0300:f001:0000:0000:0000:0000. Shift and forward again.
P5 forwards the packet to PE3, just like P4 did.
PE3 receives the packet and executes the VPNv4 function based on its own instruction f001. It decapsulates the IPv6 packet, performs an IPv4 table lookup, and forwards the IPv4 packet to CE8.
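The shift-and-forward steps above can be sketched directly on the destination address. This is an illustrative model, assuming the 32-bit block 2001:db8 from the example and 16-bit uSIDs; real platforms do this in hardware on the forwarding path.

```python
import ipaddress

def shift_and_forward(da: str) -> str:
    """Sketch of uSID 'shift and forward': the endpoint pops its own
    16-bit uSID (the group right after the 32-bit block) and shifts the
    remaining uSIDs left, padding with the 0000 end-of-carrier marker."""
    groups = ipaddress.IPv6Address(da).exploded.split(":")
    block, usids = groups[:2], groups[2:]       # 32-bit block + six uSIDs
    shifted = usids[1:] + ["0000"]              # pop first uSID, shift, pad
    return ":".join(block + shifted)

da = "2001:0db8:0600:0700:0300:f001:0000:0000"  # carrier built by PE2
da = shift_and_forward(da)                      # PE6 pops its uSID 0600
print(da)  # 2001:0db8:0700:0300:f001:0000:0000:0000
da = shift_and_forward(da)                      # PE7 pops its uSID 0700
print(da)  # 2001:0db8:0300:f001:0000:0000:0000:0000
```

After two shifts the carrier matches the DA that PE3 receives in the walkthrough, with the f001 service instruction now adjacent to PE3's block.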
SRv6/MPLS L3 Service Interworking Gateway
The SRv6/MPLS L3 Service Interworking Gateway facilitates the seamless extension of L3 services between MPLS and SRv6 domains, ensuring continuity in service delivery across both control and data planes. This feature enables interoperability between SRv6 L3VPN and existing MPLS L3VPN domains, offering a pathway for transitioning from MPLS to SRv6 L3VPN.
At the gateway node, the SRv6/MPLS L3 Service Interworking Gateway performs both transport and service termination tasks. It generates SRv6 VPN SIDs and MPLS VPN labels for all prefixes within the configured VRF for re-origination, as illustrated in Figure 15-14. The gateway supports traffic forwarding from the MPLS domain to the SRv6 domain by removing the MPLS VPN label, performing a destination prefix lookup, and applying the appropriate SRv6 encapsulation. Conversely, for traffic from the SRv6 domain to the MPLS domain, the gateway removes the outer IPv6 header, performs a destination prefix lookup, and applies the VPN and next-hop MPLS labels.
Figure 15-14 SR-MPLS SRv6 Interworking Gateway
PE3 is the interworking gateway that has one leg in the SR-MPLS domain and the other in the SRv6 domain. It performs a translation service by popping the MPLS VPN label and looking up the destination prefix in the SRv6 domain. It encapsulates the payload in the outer IPv6 header with P4’s destination address. In the opposite direction, PE3 removes the outer IPv6 header, looks up the destination prefix, and pushes MPLS label 16002 for the BGP next-hop of PE2.
Co-existence with LDP
It would be nice to never worry about LDP and RSVP, but the reality is that many of today’s engineers will have to touch these older MPLS networks. A Segment Routing control plane can co-exist with the label-switched paths (LSPs) constructed with LDP or RSVP. The MPLS architecture allows for the simultaneous use of multiple label distribution protocols, including LDP, RSVP-TE, and others. The SR control plane can coexist alongside these protocols without any interaction. In Figure 15-15, we have removed some links in our network and have thus completely flattened it. This network runs a mix of both Segment Routing (SR) and Label Distribution Protocol (LDP). It is possible to establish an end-to-end seamless Multiprotocol Label Switching (MPLS) LSP, which will ensure interoperability between these two domains. To accomplish this, one or more nodes function as Segment Routing Mapping Servers (SRMS). These SRMS entities take on the responsibility of advertising SID mappings on behalf of nodes that are not SR-capable. This mechanism enables SR-capable nodes to learn about the SIDs assigned to non-SR-capable nodes without the need for explicit individual node configurations. Let’s unpack this.
Figure 15-15 SR and LDP Domain Interoperability
Notice that this network runs both SR and LDP, which can be typical during network transitions and upgrades. PE2 and PE7 are exchanging BGP VPNv4 routes. PE2, PE3, and P4 are SR-capable. P4, P5, PE6, and PE7 use LDP. How do these two domains talk to each other end to end? First, let’s start with the LDP-to-SR direction, which is quite easy because SR-capable routers automatically translate between LDP- and SR-based labels:
PE7 learns a service route (L3VPN route, for example) for customer prefix 172.16.1.0/24 with a service/VPN label of 30001.
PE7’s BGP next-hop for this service label is associated with PE2’s lo1 10.1.100.2/32.
PE7 finds LDP label binding 24016 from its neighbor PE6 for PE2’s Forwarding Equivalence Class (FEC) of 10.1.100.2/32 and forwards the packet to PE6.
PE6 finds LDP label binding 24020 from its neighbor P5 for PE2’s FEC of 10.1.100.2/32, swaps 24016 for 24020, and forwards the packet to P5.
P5 finds LDP label binding 24036 from its neighbor P4 for PE2’s FEC of 10.1.100.2/32, swaps 24020 for 24036, and forwards the packet to P4.
P4 lacks an LDP binding originating from its next-hop PE3 for the FEC associated with PE2. What it does carry, though, is an SR node segment pointing to an IGP route leading to PE2. P4 engages in label merging, wherein it replaces its local LDP label (24036) for FEC PE2 with the corresponding SR node segment label, which is 16002.
PE3 pops label 16002 (assuming penultimate hop popping function is used) and forwards the packet to PE2.
PE2 receives the packet, looks up its service label of 30001, and drops the packet into the appropriate customer VRF.
We now have an end-to-end LDP-to-SR path. Simple. What about the opposite direction? This is where we will encounter a problem going from SR to LDP. Can you take a moment to think about what the problem would be by looking at Figure 15-15 before you examine Figure 15-16? PE2 needs to send traffic to 172.16.2.0/24 with service label 40001 that it received with the BGP next-hop of 10.1.100.7/32. Since PE2 only speaks SR, when it looks up the node segment for 10.1.100.7/32, what will it find in its label database? Nothing. Why? Because such a label mapping does not exist on the network: the operator has never configured it, so no router advertises or receives this label mapping. There must be something to associate PE7’s loopback with an SR label mapping. The answer here is the Segment Routing Mapping Server (SRMS). All analogies eventually break down, but it is possible to think of the SRMS as a sort of route reflector for SR labels. Just as in BGP, we can centrally instruct all routers in our SR domain. Look at Figure 15-16.
Figure 15-16 SR and LDP Domain Interoperability with SRMS
Walking back in the opposite direction looks like this:
PE3 is chosen as a Segment Routing Mapping Server (SRMS). In practice, it is recommended to have a redundant SRMS.
As PE7 lacks Segment Routing (SR) capability, you must create a mapping policy on the SRMS, which associates label 16007 with PE7’s lo1 10.1.100.7/32.
Now, PE2 learns a service route (an L3VPN route via BGP) for customer prefix 172.16.2.0/24 with a service/VPN label of 40001 and the BGP next-hop of 10.1.100.7/32.
PE2 finds the SR label binding 16007 it has received from the SRMS PE3 for PE7’s FEC of 10.1.100.7/32 and forwards the packet to PE3 as the IGP next-hop.
PE3 finds SR label binding 16007 pointing to its neighbor P4 as the IGP next-hop, swaps 16007 for 16007 (the global SR label stays the same), and forwards to P4.
P4 does not have an SR label for PE7’s IGP route, but it holds LDP label 24011 for this FEC. It swaps 16007 for 24011 (remember the process is called label merge) and forwards to P5.
P5 swaps 24011 for 24022 and forwards to PE6.
PE6 pops the label (due to PHP in this setup) and forwards to PE7.
PE7 receives the packet, looks up its service label of 40001, and drops the packet into the appropriate customer VRF.
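The label selection performed in the walkthrough above can be sketched as a lookup that prefers an SRMS-derived SR label when the downstream neighbor is SR-capable and otherwise falls back to the LDP binding for the same FEC. The mapping and binding values are the illustrative ones from the example, not real defaults.

```python
SRGB_BASE = 16000

# SRMS-advertised mappings: prefix -> SID index, configured centrally
# on behalf of LDP-only routers such as PE7.
srms_mappings = {"10.1.100.7/32": 7}

# LDP label bindings learned hop by hop (illustrative values).
ldp_bindings = {"10.1.100.7/32": 24011}

def outgoing_label(prefix: str, neighbor_is_sr: bool) -> int:
    """Label-merge sketch at the SR/LDP boundary: use the SR label
    derived from the SRMS mapping toward an SR neighbor, otherwise
    fall back to the LDP binding for the same FEC."""
    if neighbor_is_sr and prefix in srms_mappings:
        return SRGB_BASE + srms_mappings[prefix]
    return ldp_bindings[prefix]

print(outgoing_label("10.1.100.7/32", neighbor_is_sr=True))   # 16007, toward PE3
print(outgoing_label("10.1.100.7/32", neighbor_is_sr=False))  # 24011, as P4 swaps
```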
What should you remember here? Segment Routing Mapping Server labels are only necessary in the SR-to-LDP direction. SR and LDP labels come from separate label ranges (16000–23999 for SR and 24000 and above for LDP), so unless the operator has deliberately violated this guidance, there is no chance the network will be in a state of confusion, since the SR and LDP labels do not overlap. The network must maintain continuous SR connectivity in the SR domain, and it must also maintain continuous LDP connectivity in the LDP domain. If you understood the packet walkthrough, these points should be clear.
One last thing to know for completeness. By default, Cisco routers prefer LDP as the label imposition mechanism when the MPLS features are turned on. The way to enable SR for label imposition is shown in Example 15-6.
Example 15-6 Segment Routing Label Imposition Preferred
IS-IS

router isis 100
 address-family ipv4 | ipv6 unicast
  segment-routing mpls sr-prefer
OSPF

router ospf 100
 segment-routing mpls
 segment-routing sr-prefer