IP Routing Use Cases

By Muhammad Afaq Khan, Sep 22, 2009. Sample chapter provided courtesy of Cisco Press.

This chapter focuses on routing and switching with ASR 1000 series routers. The chapter starts with a brief discussion of the capabilities of the router family, and then reviews how those strengths can be used to address relevant problems.

This approach, with detailed configuration examples where applicable, will allow you to understand the problems, the challenges they represent, and how you can use the ASR 1000 to address them.

Introduction to the Scalable and Modular Control Plane on the ASR 1000

The control plane is a logical concept that defines the part of the router architecture responsible for building and maintaining the network topology map (also known as the routing table) and presenting it to the forwarding plane (where actual packet forwarding takes place) in the form of the Forwarding Information Base (FIB).

While the forwarding capacity of routers has scaled continuously over the years (the Cisco CRS-1, for example, scales up to 92 terabits per second [Tbps]), control-plane scale receives far less attention. When routing products are compared, the focus is usually on forwarding capacity (packets per second or bits per second).

Contrary to this popular notion, control-plane scale is equally critical to ensure that the platform has the compute cycles, in the form of Route Processor (RP) CPU, to perform the following (among other things):

  • CLI and similar external management functions, including those performed via Simple Network Management Protocol (SNMP) or Extensible Markup Language (XML)
  • Routing protocols and their associated keepalives (including crypto functions in the control plane)
  • Link-layer protocols and their associated keepalives
  • Services such as RADIUS, TACACS+, DHCP, Session Border Controller (SBC), and Performance-based Routing (PfR) Master Controller function
  • All other traffic that cannot be handled at the data plane (for example, legacy protocols such as IPX), including punt traffic

The Cisco ASR 1000 router series delivers complete separation of the control and data planes, which enables the control plane to scale independently of the data plane. Two RPs are on the market today: the ASR1000-RP1 (first generation) and the ASR1000-RP2 (second generation). The ASR1000-RP1 is based on a 1.5-GHz CPU, whereas the ASR1000-RP2 hosts a dual-core Intel 2.66-GHz processor, increasing control-plane scale many times over the ASR1000-RP1.
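
If you need to confirm which RP and ESP a given chassis carries, and the redundancy role of each, the standard inventory commands apply (output omitted here for brevity):

    ASR1006# show inventory
    ! lists the chassis, RPs, ESPs, and SIPs with their product IDs (for example, ASR1000-RP1)
    ASR1006# show platform
    ! shows per-slot state, including which RP is active and which is standby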

The central benefit of physically separating the forwarding and control planes is that if the traffic load becomes very heavy and the forwarding plane gets overwhelmed, the control plane's capability to process new routing information is unaffected.

Conversely, if the control plane gets very busy, perhaps because of a flood of new route information (or worse, peer or prefix flaps), that load does not adversely affect the capability of the forwarding plane to continue forwarding packets. This is a common problem that plagues software-based routers, where a single general-purpose CPU runs both the control and data planes.

Key applications that benefit the most, from a big picture perspective, are network virtualization, infrastructure consolidation, and rapid rollout of various network-based services.

Before delving further and discussing the actual use cases from real-world networks, a quick refresher is in order on some commonly used terms.

NSF/SSO, NSR, and Graceful Restart to Ensure Robust Routing

Nonstop forwarding (NSF) refers to the capability of the data plane to continue forwarding without interruption when the control plane disappears momentarily, most likely failing over to a standby RP. Of course, the routing information and topology might change during this window and result in a stale FIB, so switchover times should be as small as possible. The Cisco ASR 1000 provides switchover times of less than 50 ms RP to RP (or IOS daemon [IOSD] to IOSD for the ASR 1002-F/ASR 1002/ASR 1004).

Stateful switchover (SSO) refers to the capability of the control plane to hold configuration and various protocol states across this switchover, thus reducing the time before the newly active control plane is fully usable. This is also handy when doing scheduled hitless upgrades within the In Service Software Upgrade (ISSU) execution path. The time to reach SSO for the newly active RP can vary depending on the type and scale of the configuration.
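
As a minimal configuration sketch, SSO is requested under the redundancy configuration mode (on many ASR 1000 systems it is already the default; verify with show redundancy states):

    ASR1006(config)# redundancy
    ASR1006(config-red)# mode sso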

Graceful restart (GR) refers to the capability of the control plane to delay advertising the absence of a peer (one going through a control-plane switchover) for a "grace period," and thus help minimize disruption during that time (assuming the standby control plane comes up). GR is based on per-protocol extensions that are interoperable across vendors. The downside of the grace period shows when the peer fails completely and never comes back, because waiting out the grace period slows down overall network convergence. That brings us to the final concept: nonstop routing (NSR).

NSR is an internal (vendor-specific) mechanism to extend the awareness of routing to the standby routing plane so that in case of failover, the newly active routing plane can take charge of the already established sessions.
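
To make the GR/NSR contrast concrete, consider a minimal BGP sketch: bgp graceful-restart advertises the GR capability so that peers act as helpers during a restart, whereas neighbor ha-mode sso (on releases that support BGP NSR) keeps session state on the standby RP so that the peer never notices the switchover. The neighbor address and AS numbers here are hypothetical:

    ASR1006(config)# router bgp 100
    ! GR: peers retain our routes for the grace period while we restart
    ASR1006(config-router)# bgp graceful-restart
    ASR1006(config-router)# neighbor 192.0.2.1 remote-as 65001
    ! NSR: replicate this session's state to the standby RP; no peer cooperation needed
    ASR1006(config-router)# neighbor 192.0.2.1 ha-mode sso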

Table 12-1 shows the compatibility and support matrix for ASR 1000 IOS XE software 2.2, and outlines the various states that are preserved during FP/ESP failover.

Table 12-1. Protocols and Their State Preservation via NSF/SSO

  • Routing protocols: Enhanced Interior Gateway Routing Protocol (EIGRP), Open Shortest Path First Version 2 (OSPFv2), OSPFv3, Intermediate System-to-Intermediate System (IS-IS), and Border Gateway Protocol Version 4 (BGPv4)
  • IPv4 services: Address Resolution Protocol (ARP), Hot Standby Routing Protocol (HSRP), IPsec, Network Address Translation (NAT), IPv6 Neighbor Discovery Protocol (NDP), Unicast Reverse Path Forwarding (uRPF), Simple Network Management Protocol (SNMP), Gateway Load Balancing Protocol (GLBP), Virtual Router Redundancy Protocol (VRRP), and Multicast (Internet Group Management Protocol [IGMP])
  • IPv6 services: IPv6 Multicast (Multicast Listener Discovery [MLD], Protocol Independent Multicast-Source Specific Multicast [PIM-SSM], and MLD access group)
  • L2/L3 protocols: Frame Relay, PPP, Multilink PPP (MLPPP), High-Level Data Link Control (HDLC), 802.1Q, and bidirectional forwarding detection (BFD)
  • Multiprotocol Label Switching (MPLS): MPLS Layer 3 VPN (L3 VPN) and MPLS Label Distribution Protocol (LDP)
  • SBC: SBC Data Border Element (DBE)

See the "Further Reading" section at the end of this chapter to find out where to look for complete route scale testing details.

Use Case: Achieving High Availability Using NSF/SSO

To command higher revenues and consistent profitability, service providers and enterprises are increasingly putting more mission-critical, time-sensitive services on their IP infrastructure. One of the key challenges in doing so is achieving and delivering high network availability under strict service level agreement (SLA) requirements. It is universally understood that network availability is directly linked to total cost of ownership (TCO).

An enterprise has an ASR 1006 with ASR1000-ESP10 in the core of the network, running OSPF to connect to multiple distribution hub routers, not all of which are necessarily Cisco devices.

The goal is to reduce the route/prefix recomputation churn caused by RP switchover and reestablishment of OSPF peers.

To address these requirements, you need to implement Internet Engineering Task Force (IETF) NSF for OSPF, because it is interoperable with all vendors' routers that are NSF-aware (a term for a neighboring router that understands the GR protocol extensions). In this case, when the NSF-capable ASR 1000 switches over from the active RP to the standby RP, there is no packet loss at all, and downstream neighbors do not restart adjacencies.

Figure 12-1 shows the ASR 1000 core router and its neighbors, which are all NSF-aware and can act as helpers during RP SSO.

Figure 12-1 Logical view of many regional WAN aggregation routers coming into a consolidated WAN campus edge router.

To turn on IETF helper mode on all the distribution hub routers, including the Cisco ASR 1000, you need to execute the following configuration steps:

  • Step 1. Configure NSF within the given OSPF process ID:
    ASR1006# configure terminal
    ASR1006(config)# router ospf 100
    ASR1006(config-router)# nsf ietf restart-interval 300
    
          
  • Step 2. Verify that NSF is enabled on the helper router (and on the ASR 1000 itself):
    Router-helper# show ip ospf 100
    
    
     Routing Process "ospf 100" with ID 172.16.1.2
     ----output truncated----
     IETF Non-Stop Forwarding enabled      
        restart-interval limit: 300 sec    
     IETF NSF helper support enabled       
     Cisco NSF helper support enabled      
     Reference bandwidth unit is 100 mbps
        Area BACKBONE(0)
    ASR1006# sh ip ospf 100
     Routing Process "ospf 100" with ID 10.1.1.1
     ----output truncated----
     IETF Non-Stop Forwarding enabled     
         restart-interval limit: 300 sec  
         IETF NSF helper support enabled  
         Cisco NSF helper support enabled       
    
          
  • Step 3. Now verify that both RPs are up (using the show platform command) and that OSPF neighbor relationships are established (using the show ip ospf neighbor command):
    
    ! active ESP:
    
    ASR1006# show platform software ip fp active cef summary
    Forwarding Table Summary
    Name       VRF id   Table id    Protocol      Prefixes    State
    ----------------------------------------------------------------
    Default    0        0           IPv4           10000       cpp:
                                                               0x10e265d8
                                                               (created)
    ! standby ESP:
    
    ASR1006# show platform software ip fp standby cef summary
    Forwarding Table Summary
    Name       VRF id   Table id   Protocol    Prefixes  State
    ---------------------------------------------------------------------
    Default    0        0          IPv4        10000     cpp: 0x10e265d8
                                                         (created)
    

    You can also view the prefixes downloaded into both the active and standby Embedded Service Processor (ESP) before failing over the router.

    The preceding output shows that about 10K routes are created and exist in both ESPs before the failover.

  • Step 4. Now induce the RP SSO failover by entering redundancy force-switchover from the active RP's enable-mode CLI (a short sketch follows these steps). The following output shows the effects from the newly active RP:
    ASR1006# show ip ospf 100
     ----output truncated----
     IETF Non-Stop Forwarding enabled
        restart-interval limit: 300 sec, last IETF NSF restart 00:00:10 ago
    IETF NSF helper support enabled
     Cisco NSF helper support enabled
    
  • Step 5. RP SSO does not result in any packet loss, because forwarding continues during the entire process. During the switchover, you can execute the show platform command to verify that the former active RP is booting ("booting" state).

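For reference, the switchover in Step 4 is issued from the active RP's enable mode; a minimal sketch (the confirmation behavior varies by release):

    ASR1006# redundancy force-switchover
    ! the standby RP takes over as active; the ESP keeps forwarding throughout, and
    ! show platform on the newly active RP shows the former active RP rebooting
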
In case of ASR1000-ESP10 failover, some small packet loss will occur (packets that are in flight inside the QuantumFlow Processor [QFP]), although that accounts for much less than 1 ms worth of transit traffic loss.

NSF/SSO allows RPs to fail over without any packet loss, and ESPs to fail over with extremely small packet loss. The Cisco ASR 1000 thus shows the core benefits of a carrier-class router, with failover times that beat even the Automatic Protection Switching (APS) gold standard of 50 ms.

In today's networks, where SLAs are enforced and networks are participating in life- and mission-critical scenarios, a robust infrastructure with faster failover based on modern architectures is a must.

Packet Capture Using Encapsulated Remote SPAN

For various reasons, including compliance, enterprises are looking for ways to capture data for further analysis (using an intrusion detection/prevention system [IDS/IPS] or some other advanced analysis system). NetFlow proves handy for this purpose, because it provides detailed IP flow accounting information for the given network.

NetFlow, however useful, does not provide full packet capture from Layer 2 to Layer 7. This is where the Switch Port Analyzer (SPAN) function steps in, although, as the name says, it is limited to switches. SPAN, or Remote SPAN (RSPAN), in which monitored traffic can traverse a Layer 2 cloud, makes it possible to capture traffic on one switch and analyze it on another, as long as both are part of a single Layer 2 domain (as opposed to a Layer 3 routed domain). Encapsulated Remote SPAN (ERSPAN), as the name says, wraps all captured traffic in generic routing encapsulation (GRE) and so allows it to be extended across Layer 3 domains. Until recently, ERSPAN had been available only on the Catalyst 6500 and 7600 platforms.

The Cisco ASR 1000 has supported ERSPAN from its first release and can operate in two ways:

  • As source or destination for ERSPAN sessions
  • As source and destination for ERSPAN sessions at the same time

Note as well that this implementation is interoperable with the Catalyst 6500 and 7600, so traffic captured on a port/interface of an ASR 1000 can be sent as GRE packets across a Layer 3 domain to a destination monitoring station attached to a 6500/7600.

Use Case: Ethernet Frame Capture and Transport Across a Layer 3 Cloud

An enterprise has an ASR 1000 at one of its regional HQs in San Francisco and needs to capture traffic from an interface on an on-demand basis and bring it to the centralized data center location in Austin, terminating it on a Catalyst 6500 switch in the core. The San Francisco and Austin locations are connected via a shared MPLS IP VPN cloud.

To meet this requirement, you need to configure ERSPAN on the ASR 1000 in the SF HQ location as a source session and terminate it at the Catalyst 6500 switch in the core.

Figure 12-2 shows the ERSPAN source (monitored) and destination (monitoring) ports on the ASR 1000 and Catalyst 6500, respectively.

Figure 12-2 Ethernet frames captured at the WAN headend and transported to the data center via a Layer 3 cloud.

Begin with the configuration on the ASR 1000. Here we configure the source interface, the direction of traffic, and the ERSPAN session ID.

  • Step 1. Identify the ports/interfaces to be monitored and the direction of traffic to be captured (for example, Rx) by entering the following commands:
    ASR1006(config)# monitor session 1 type erspan-source
    ASR1006(config-mon-erspan-src)# source interface Fe1/0/1 rx
    ASR1006(config-mon-erspan-src)# destination
    ASR1006(config-mon-erspan-src-dst)# erspan-id 100
    ASR1006(config-mon-erspan-src-dst)# ip address 10.10.0.1
    ASR1006(config-mon-erspan-src-dst)# ip ttl 32
    ASR1006(config-mon-erspan-src-dst)# origin ip address 172.16.0.1
    
          
  • Step 2. Configure the Catalyst 6500 to receive traffic from the source session on the ASR 1000 from Step 1:
    Cat6500(config)# monitor session 2 type erspan-destination
    Cat6500(config-mon-erspan-dst)# destination interface gigabitEthernet
      2/2/0
    Cat6500(config-mon-erspan-dst)# source
    Cat6500(config-mon-erspan-dst-src)# erspan-id 100
    Cat6500(config-mon-erspan-dst-src)# ip address 172.16.0.1
    
          
    You can use the show monitor session command to verify the configuration:
    ASR1006# show monitor session 1
    Session 1
    ---------
    Type                     : ERSPAN Source Session
    Status                   : Admin Enabled
    Source Ports             :
       RX Only                : Fe1/0/1
    Destination IP Address   : 10.10.0.1
    Destination ERSPAN ID    : 100
    Origin IP Address        : 172.16.0.1
    IP TTL                   : 32
    
  • Step 3. To monitor statistics for the monitored traffic, use the show platform hardware qfp active feature erspan state command:
    ASR1006# show platform hardware qfp active feature erspan state
    ERSPAN State:
      Status    : Active
    ----output truncated----
    System Statistics:
      DROP src session replica  :         0 /        0
      DROP term session replica :         0 /        0
      DROP receive malformed    :         0 /        0
      DROP receive invalid ID   :         0 /        0
      DROP recycle queue full   :         0 /        0
      DROP no GPM memory        :         0 /        0
      DROP no channel memory    :         0 /        0
    

This achieves the purpose of capturing traffic received on the ASR 1000 (FE1/0/1) and delivering it to Catalyst 6500 GE2/2/0. The traffic is captured, encapsulated in GRE natively by the ASR 1000's QFP chipset, and routed over to the Catalyst 6500. A sniffing station attached to GE2/2/0 on the 6500 sees the complete Ethernet frame (L2 to L7) up to jumbo size (assuming the routed WAN infrastructure can carry jumbo frames end to end).

The ASR 1000, being the first midrange routing platform to support ERSPAN, adds tremendous value to end-to-end data capture and visibility from a branch or HQ to the data center, a common requirement in medium to large enterprise networks. ERSPAN packet replication is done natively by the QFP chipset, so no external modules are required. ERSPAN, when combined with NetFlow, can provide detailed end-to-end network visibility.
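
As a sketch of that combination, classic NetFlow can be enabled on the same monitored interface and exported to a collector; the collector address and UDP port here are hypothetical:

    ASR1006(config)# interface Fe1/0/1
    ASR1006(config-if)# ip flow ingress
    ASR1006(config-if)# exit
    ASR1006(config)# ip flow-export version 9
    ! hypothetical collector address and UDP port
    ASR1006(config)# ip flow-export destination 192.0.2.50 9996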

Achieving Segmentation Using MPLS over GRE and MPLS VPNs over GRE Solutions

In today's world, an enterprise campus is home to many different and often competing users. Multitenant environments such as universities, airports, and some public-sector networks (including educational networks) fall under this category.

Such enterprises leverage their high-touch intelligent networking infrastructure to provide connectivity and network services for all stakeholders. For instance, different airlines could share one physical airport network and get billed for this connectivity. This setup accelerates the return on network infrastructure investment, and it optimizes network operations and operational expenses through virtualization. Regulatory compliance, mergers and acquisitions (M&A), and network infrastructure consolidation are among the many drivers. For the users of this single physical network, it results in seamless and instant-on delivery of services, which in turn results in increased revenue streams.

MPLS (or MPLS-based applications) has gained a lot of ground because of its capability to provide this virtualization within a large enterprise network while still providing the much-needed segmentation. The relevant technologies you usually hear about are MPLS/LDP over GRE and MPLS VPNs (RFC 2547) over GRE, in addition to a host of other MPLS-based technologies.

Use Case: Self-Managed MPLS and Enterprise Private WAN Segmentation

An enterprise runs a "self-managed" or "self-deployed" MPLS core to achieve this network segmentation. Deploying MPLS (or RFC 2547) over a mesh of GRE tunnels (enterprise provider edge [PE] to enterprise PE) allows the enterprise to extend its MPLS network over almost any IP network. Additional benefits include flexibility of edge router roles (provider [P] or PE), independence from the service provider (SP) cloud (which sees those packets as plain IP packets), and an easy add-on encryption capability, something you might call MPLS over GRE over IPsec. Several large enterprises run this environment in their production networks today.

Configurations for such deployments are fairly straightforward: WAN edge routers (or customer edge [CE] routers) serve as enterprise Ps or PEs (also referred to as E-Ps or E-PEs), as documented in the text that follows.

Figure 12-3 shows the isolated self-deployed enterprise MPLS clouds that are connected together via an SP MPLS core using LDP over GRE.

Figure 12-3 Enterprise PEs (E-PE) are connected across the enterprise-owned/managed MPLS cloud.

A point-to-point GRE tunnel is set up between each WAN edge router pair if a full mesh is desired. From a control-plane perspective, the following protocols are to be run within the GRE tunnels:

  • An IGP such as EIGRP or OSPF for MPLS device reachability. (This makes the E-PE, E-P, and route reflectors [RRs], if configured, reachable to each other.)
  • LDP, to allow the formation of LSPs over which traffic is forwarded.
  • MP-iBGP for VPN route and label distribution between the E-PE devices.

You need to configure MPLS labeling, using the mpls ip command, on the tunnel interfaces rather than on the WAN edge routers' physical interfaces. You can verify the configuration with the show platform software interface command:

E-PE-SF(config)# interface Tunnel10
 description GRE tunnel to E-P-NY
 bandwidth 10000
 ip address 172.16.10.5 255.255.255.0
 ip mtu 1400
 mpls ip
 tunnel source Loopback10
 tunnel destination 10.10.10.1

E-PE-SF# sh platform software interface fp active name Tunnel10

Name: Tunnel10, ID: 24, CPP ID: 25, Schedules: 0
----output truncated----
Flags: ipv4, mpls
ICMP Flags: unreachables, redirects, no-info-reply, no-mask-reply
ICMP6 Flags: unreachables, redirects
Dirty: unknown
AOM dependency sanity check: PASS
AOM Obj ID: 1081
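
To round out the control-plane list above, the following is a minimal sketch of running OSPF across the tunnel and then confirming that LDP comes up over it. The OSPF process ID and the 192.168.255.0/24 loopback range are hypothetical:

    E-PE-SF(config)# router ospf 100
    ! advertise the tunnel subnet plus the BGP/LDP loopbacks; keep the tunnel
    ! source/destination addresses out of this IGP so that they remain reachable
    ! via the underlying SP cloud (otherwise the tunnel flaps on recursive routing)
    E-PE-SF(config-router)# network 172.16.10.0 0.0.0.255 area 0
    E-PE-SF(config-router)# network 192.168.255.0 0.0.0.255 area 0
    E-PE-SF(config-router)# end
    E-PE-SF# show mpls ldp neighbor
    ! expect an operational LDP session whose discovery source is Tunnel10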

Figure 12-4 shows the end-to-end protocol stacks for an MPLS/LDP over GRE scenario.

Figure 12-4 Protocol stacks for packets at both Ps and in the MPLS cloud.

This effectively creates an LSP from E-PE-SF to E-P-NY, and the intermediary SP cloud does not have to be an MPLS-based service.

Figure 12-5 shows the end-to-end protocol stacks for an MPLS VPNs over GRE scenario, also known as 2547 VPNs over GRE.

Figure 12-5 Protocol stacks at both PEs and in the MPLS cloud.

Full-mesh point-to-point (p2p) GRE tunnels can easily become an administrative hassle in a network with a large number of WAN edge routers. In those cases, enterprises can also consider 2547 over Dynamic Multipoint VPN (DMVPN), or 2547 over mGRE over IPsec, to ease the burden of tunnel administration. These solutions will be supported on the ASR 1000 in future IOS XE versions.

The Cisco ASR 1000 provides the flexibility necessary for the changing business environments that need virtualization in today's multitenant enterprise networks by supporting MPLS/2547 over GRE solutions at serial, Fast Ethernet, Gigabit Ethernet, and even 10 Gigabit Ethernet speeds, with the unique capability to perform all these encapsulations natively inside the single QFP chipset.

Scalable v4/VPNv4 Route Reflector

With the growing adoption of MPLS in enterprises to achieve large-scale virtualization and segmentation, there is also a need for enterprises to have their own route reflector (RR) for VPNv4 routes, deployed standalone or combined in a PE router. An RR relaxes the iBGP full-mesh requirement: instead of every PE peering with every other PE (n(n-1)/2 sessions for n PEs), each PE peers only with the RR.

Use Case: Route Reflection

Figure 12-6 shows the RR used by the enterprise in the self-managed MPLS clouds.

Figure 12-6 MAN using the same router for E-PE and VPNv4 RR roles.

To meet this requirement of avoiding a full mesh of iBGP sessions, configure the Cisco ASR 1000 as the RR for VPNv4 routes using the following steps:

  • Step 1. Configure RRs to peer with PEs to reflect VPNv4 routing information learned from other PEs:
    ASR1004-RR(config)# router bgp 100
    ASR1004-RR(config-router)# neighbor A-PE peer-group
    ASR1004-RR(config-router)# neighbor A-PE remote-as 100
    ASR1004-RR(config-router)# neighbor A-PE update-source Loopback100
    ASR1004-RR(config-router)# neighbor PE loopback# peer-group A-PE
    
          
  • Step 2. Configure RRs for VPNv4 BGP peering between PEs and RRs:
    ASR1004-RR(config-router)# address-family vpnv4
    ASR1004-RR(config-router-af)# neighbor A-PE activate
    ASR1004-RR(config-router-af)# neighbor A-PE route-reflector-client
    ASR1004-RR(config-router-af)# neighbor A-PE send-community extended
    ASR1004-RR(config-router-af)# neighbor PE loopback# peer-group A-PE
    
          
  • Step 3. Configure the PE for VPNv4 BGP peering between PEs and RRs (thus enabling PEs to exchange VPNv4 routing information with the RRs):
    ASR1004-PE(config)# router bgp 100
    ASR1004-PE(config-router)# no synchronization
    ASR1004-PE(config-router)# bgp log-neighbor-changes
    ASR1004-PE(config-router)# neighbor ASR1004-RR loopback ip# remote-as
      100
    ASR1004-PE(config-router)# neighbor ASR1004-RR loopback ip# update-
      source Loopback0
    ASR1004-PE(config-router)# address-family vpnv4
    ASR1004-PE(config-router-af)# neighbor ASR1004-RR loopback ip# activate
    ASR1004-PE(config-router-af)# neighbor 172.16.1.1 send-community
      extended
    
          

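Once the preceding steps are in place, one hedged way to confirm that the reflector sees its clients (output omitted; session counts depend on your topology):

    ASR1004-RR# show bgp vpnv4 unicast all summary
    ! expect one line per PE neighbor, with the State/PfxRcd column showing a prefix
    ! count (rather than Idle or Active) for each established VPNv4 session
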
Although this example uses the Cisco ASR 1004 as the VPNv4 RR, the same applies to an IPv4 RR. The VPNv4 route scale is purely a function of the ASR1000-RP in the system: with the ASR1000-RP1 and ASR1000-RP2, the scale is up to 1M and 4M routes, respectively, for IPv4. For VPNv4 routes, the ESP does not have to be in the data path, and therefore any ESP can be used. Currently for IPv4, FIB entries are still populated, which limits the RR scale; this will change in a future IOS XE version.

The Cisco ASR 1000, by virtue of the ASR1000-RP1 and ASR1000-RP2, provides the largest route reflector scale in the Cisco midrange routing portfolio. The ASR1000-RP2, with 16-GB DRAM and 64-bit IOS XE, truly raises the bar, allowing routes to scale up to 20M, which rivals even the largest core routers available today.

In general, the ASR1000-RP2 (16-GB DRAM) provides four times the route scale of the ASR1000-RP1 (4-GB DRAM), three times the number of peers/sessions (at a given convergence time), and at least twice the route convergence speed (for a given set of routes and peers).

Scalable and Flexible Internet Edge

When we talk about a router placed at the edge of the network facing the public Internet, a few things come to mind. An ideal router needs to be flexible and scalable with regard to features and variety of interfaces, without requiring service modules for every basic service, such as Network Based Application Recognition (NBAR), Flexible Packet Matching (FPM), firewall, and IPsec. Other critical attributes include high availability, deep packet inspection, and near-line-rate quality of service (QoS).

High availability enables applications to remain available in case of software or hardware failure that causes a data- or control-plane problem. Deep packet inspection helps classify the data based on application header or payload; it also addresses zero-day attacks.

Use Case: Internet Gateway/Edge Router

An enterprise is looking for an Internet edge router, in a compact form factor, that can natively accelerate NAT, firewall, NetFlow, and access control lists (ACL), along with supporting ISSU and RP SSO. The device should also be able to scale up to 10 Gbps if needed in the future.

To meet these requirements, you could use the ASR 1002 with the ASR1000-ESP5, which provides 5-Gbps system bandwidth with four built-in Gigabit Ethernet ports, usable as fiber or copper, facing either the inside LAN or the Internet (usually provisioned via an Ethernet link).

The ASR 1002 can also take the ASR1000-ESP10, which satisfies the 10-Gbps requirement, essentially doubling the bandwidth of the initial deployment.

Figure 12-7 shows the ASR 1002/ASR1000-ESP5 deployed at the Internet edge.

Figure 12-7 Single router used for both the WAN edge and Internet gateway router.

There are no configurations to share in this use case, but note the performance and scale numbers for the ASR 1000 series routers relevant to the previously mentioned features.

Table 12-2 shows the various features and their respective performance and scale relevant to Internet edge.

Table 12-2. Various ESPs and Their Scale and Performance for IOS Zone-Based Firewall, NetFlow, and IPsec

  • IOS zone-based firewall (L4 inspection): 5 Gbps (ASR1000-ESP5), 10 Gbps (ASR1000-ESP10), 20 Gbps (ASR1000-ESP20)
  • NetFlow (v5, v8, v9): 500K flow cache entries (ESP5), 1M (ESP10), 2M (ESP20)
  • IPsec: 1 Gbps at IMIX (ESP5), 2.5 Gbps at IMIX (ESP10), 5.2 Gbps at IMIX (ESP20); 4000 tunnels and 90 tunnels/sec with ASR1000-RP1 on all three
  • Dual IOSD failover: < 50 ms (applies to ASR 1002-F/ASR 1002/ASR 1004 chassis, regardless of ESP)

The Cisco ASR 1000 not only meets the typical Internet gateway router requirements here, but also exceeds them from both control- and data-plane perspectives. The capability to have two IOS daemons running at the same time, and providing IOSD-based SSO, is second to none!

Scalable Data Center Interconnect

Today's businesses are seeing more and more consolidation for both file and application servers into a small number of data centers. Major drivers for this trend include cost savings, regulatory compliance, and ease of backup and administration.

At the heart of this, there is also a virtualization trend in which compute cycles are being isolated or abstracted from storage. This has spawned newer technologies for virtual machine high availability and migration, such as VMotion, clustering, or even geo-clustering of servers, which require extending Layer 2 VLANs across the WAN (data center interconnect).

Now, when looking at data center interconnection and trying to tie it to application vendor requirements, almost all vendors suggest using Layer 2-adjacent servers. To satisfy or emulate the requirement of L2 adjacency across the WAN, several requirements emerge from these trends:

  • Loop prevention: This refers to isolation of Spanning Tree Protocol (STP) to each data center itself, and not extended across the data center interconnect (DCI).
  • Redundancy: This refers to the DCI solution itself not being prone to node or link failures. That, of course, requires redundancy.
  • Convergence times: Apparently, there is no set standard for this requirement for DCI. It really depends on what applications are being run (for example, a requirement driven by VMotion stipulates no more than a couple of seconds for convergence).
  • Usage of multiple paths: This is where technologies such as Virtual Switching Systems (VSS) and Multichassis EtherChannel (MEC) come into play. There is another similar solution known as virtual port channel (vPC), which essentially allows creating an EtherChannel where member links are across two different physical systems.

Three types of transport are common for DCI:

  • Dark fiber: Fiber that is not yet lit is called dark fiber. Not many organizations have access to dark fiber, but those that do see it as the most preferred way of doing DCI. It is usually limited in distance.
  • IP: This is a rather common medium and usually consists of some kind of private IP services that most SPs offer across geographies.
  • MPLS: This is one of the more common ways to connect data centers.

The ASR 1000 supports almost all forms of Gigabit Ethernet coarse/dense wavelength-division multiplexing (CWDM/DWDM) optics, although the Catalyst 6500 with VSS/MEC has a solution that meets all the requirements in this arena, including multisite DC connectivity.

For IP and MPLS transport, the ASR 1000 offers (complementing the Catalyst 6500 solution) Ethernet over MPLS and Ethernet over MPLS over GRE, starting with IOS XE 2.4, for dual-site DCI.

Figure 12-8 shows the MPLS transport and active/active EoMPLS pseudowires across DCI routers.

Figure 12-8 EoMPLS scenario where the transport cloud is MPLS.

Figure 12-9 shows the same MPLS transport and active/active EoMPLS pseudowires across DCI routers; here the DC core switches are Nexus 7000s running TrustSec to encrypt packets at Layer 2 hop by hop.

Figure 12-9 Encrypting Ethernet frames at Layer 2 using TrustSec, avoiding the use of Layer 3 encryption such as IPsec.

The solution in Figure 12-9 offers a unique advantage where any traffic leaving the premises is required to be encrypted, because TrustSec provides a native way to encrypt all traffic at Layer 2. This requirement is common in government and state agencies.

Figure 12-10 shows the IP transport and active/active EoMPLSoGRE tunnels across DCI routers.

Figure 12-10 EoMPLSoGRE scenario where the transport cloud is IP.

This can also be seen as a consolidation strategy, especially for green-field deployments, where the ASR 1000 working as a DCI LAN extension router can also serve as a consolidated unified WAN services router. This lowers the TCO considerably and at the same time allows for faster qualification: the ASR 1000 functions as the private WAN aggregation router, and perhaps even the Internet edge can be collapsed into the consolidated WAN edge.

Figure 12-11 shows the unified WAN edge, which consolidates the DCI with multiple other functions.

Figure 12-11 Example of DCI and WAN edge functional collapse in a single router.

Use Case: Encrypting Traffic over an EoMPLS Pseudowire at Layer 2 Using TrustSec

Assume that an organization wants to connect two data centers and extend multiple VLANs across the two sites to connect various clusters and geo-clusters. Assume that the transport in the middle is MPLS and that the organization uses Nexus 7000s as its data center core switches. The customer also wants to start with a clear-text (nonencrypted) DCI but later add encryption to deal with regulatory compliance. This must be met without much configuration overhead.

Now, to meet these requirements, you need to extend Layer 2 connectivity across the sites. Because the transport medium is MPLS, you can start with the clear-text requirement and deploy the ASR 1000 as shown in Figure 12-8, and later move to the deployment illustrated in Figure 12-9 by turning on TrustSec (802.1AE) link-layer encryption on the Nexus switches, transparently, without changing anything on the ASR 1000!

The examples that follow examine the configuration for both the ASR 1000 (port mode EoMPLS) and TrustSec on the Nexus 7000.

Figure 12-12 shows the final topology with both Nexus 7K and ASR 1000 connected with vPC.

Figure 12-12 Nexus 7K and ASR 1000 connected via vPC.

Example 12-1 shows the ASR 1000 port mode EoMPLS configuration, which can theoretically be used to extend 4K VLANs.

Example 12-1. Port Mode EoMPLS Configuration for the ASR 1000 with Remote Port Shutdown Enabled by Default

ASR1000-1:
ASR1000-1(config)# interface Loopback0
ASR1000-1(config-if)# ip address 192.168.100.1 255.255.255.255
!
ASR1000-1(config)# interface GigabitEthernet0/0/0
ASR1000-1(config-if)# mtu 9216
ASR1000-1(config-if)# no ip address
ASR1000-1(config-if)# negotiation auto
ASR1000-1(config-if)# xconnect 192.168.100.2 100 encapsulation mpls
!
ASR1000-1(config)# interface GigabitEthernet0/0/1
ASR1000-1(config-if)# description to ASR-2
ASR1000-1(config-if)# mtu 9216
ASR1000-1(config-if)# ip address 10.1.2.1 255.255.255.0
ASR1000-1(config-if)# load-interval 30
ASR1000-1(config-if)# negotiation auto
ASR1000-1(config-if)# mpls label protocol ldp
ASR1000-1(config-if)# mpls ip
!
!
ASR1000-1(config)# interface GigabitEthernet0/0/3
ASR1000-1(config-if)# description to ASR-2
ASR1000-1(config-if)# mtu 9216
ASR1000-1(config-if)# ip address 10.1.1.1 255.255.255.0
ASR1000-1(config-if)# load-interval 30
ASR1000-1(config-if)# negotiation auto
ASR1000-1(config-if)# mpls label protocol ldp
ASR1000-1(config-if)# mpls ip
-----------------------------------------------------------------------------------
ASR1000-2:
ASR1000-2(config)# interface Loopback0
ASR1000-2(config-if)# ip address 192.168.100.2 255.255.255.255
!
ASR1000-2(config)# interface GigabitEthernet0/0/0
ASR1000-2(config-if)# mtu 9216
ASR1000-2(config-if)# no ip address
ASR1000-2(config-if)# negotiation auto
ASR1000-2(config-if)# xconnect 192.168.100.1 100 encapsulation mpls
!
ASR1000-2(config)# interface GigabitEthernet0/0/1
ASR1000-2(config-if)# description to ASR-1
ASR1000-2(config-if)# mtu 9216
ASR1000-2(config-if)# ip address 10.1.2.2 255.255.255.0
ASR1000-2(config-if)# load-interval 30
ASR1000-2(config-if)# mpls label protocol ldp
ASR1000-2(config-if)# mpls ip
!
!
ASR1000-2(config)# interface GigabitEthernet0/0/3
ASR1000-2(config-if)# description to ASR-1
ASR1000-2(config-if)# mtu 9216
ASR1000-2(config-if)# ip address 10.1.1.2 255.255.255.0
ASR1000-2(config-if)# mpls label protocol ldp
ASR1000-2(config-if)# mpls ip

Example 12-2 shows the Nexus 7000 configuration to use TrustSec for all traffic going outbound on to the EoMPLS pseudowires (over an MPLS cloud).

Example 12-2. Nexus 7K TrustSec Configuration

Nexus-7K-1# sh run cts
version 4.1(2)
feature dot1x
feature cts
cts device-id Nexus-7K-1 password 7 qxz12345

interface Ethernet1/12
  switchport
  switchport access vlan 666
  cts manual
    sap pmk abcdef1234000000000000000000000000000000000000000000000000000000
  mtu 9216
  no shutdown

interface Vlan666
  no shutdown
  ip address 155.5.5.1/24
-----------------------------------------------------------------------------------
Nexus-7K-2# sh run cts
version 4.1(2)
feature dot1x
feature cts
cts device-id Nexus-7K-2 password 7 qxz12345
interface Ethernet1/12
  switchport
  switchport access vlan 666
  cts manual
    sap pmk abcdef1234000000000000000000000000000000000000000000000000000000
  mtu 9216
  no shutdown
interface Vlan666
  no shutdown
  mtu 9216
  ip address 155.5.5.2/24

Example 12-3 shows that the TrustSec session is established.

Example 12-3. Confirmation That TrustSec Is Negotiated and Is Up

Operational Status (TrustSec 802.1AE SAP negotiation successful):
Nexus-7K-1# show cts interface e 1/12
CTS Information for Interface Ethernet1/12:
    CTS is enabled, mode:    CTS_MODE_MANUAL
    IFC state:                CTS_IFC_ST_CTS_OPEN_STATE
    Authentication Status:   CTS_AUTHC_SKIPPED_CONFIG
      Peer Identity:
      Peer is:               Not CTS Capable
      802.1X role:           CTS_ROLE_UNKNOWN
      Last Re-Authentication:
    Authorization Status:   CTS_AUTHZ_SKIPPED_CONFIG
      PEER SGT:              0
      Peer SGT assignment:  Not Trusted
      Global policy fallback access list:
    SAP Status:               CTS_SAP_SUCCESS
      Configured pairwise ciphers: GCM_ENCRYPT
      Replay protection: Disabled
      Replay protection mode: Strict
      Selected cipher: GCM_ENCRYPT
      Current receive SPI: sci:1b54c148d80000 an:2
      Current transmit SPI: sci:225577968c0000 an:2
Operational Status (TrustSec 802.1AE SAP negotiation successful):
----------------------------------------------------------------------------------------------
Nexus-7K-2# show cts interface e 1/12
CTS Information for Interface Ethernet1/12:
    CTS is enabled, mode:      CTS_MODE_MANUAL
    IFC state:                CTS_IFC_ST_CTS_OPEN_STATE
    Authentication Status:   CTS_AUTHC_SKIPPED_CONFIG
      Peer Identity:
      Peer is:                Not CTS Capable
      802.1X role:            CTS_ROLE_UNKNOWN
      Last Re-Authentication:
    Authorization Status:   CTS_AUTHZ_SKIPPED_CONFIG
      PEER SGT:              0
      Peer SGT assignment:  Not Trusted
      Global policy fallback access list:
    SAP Status:              CTS_SAP_SUCCESS
      Configured pairwise ciphers: GCM_ENCRYPT
      Replay protection: Disabled
      Replay protection mode: Strict
      Selected cipher: GCM_ENCRYPT
      Current receive SPI: sci:225577968c0000 an:2
      Current transmit SPI: sci:1b54c148d80000 an:2

As shown in Figure 12-12, the two ASR 1000s are connected via two active/active EoMPLS pseudowires. To deal with failure scenarios, the ASR 1000 uses a feature called Remote Port Shutdown. The behavior on the ASR 1000 differs a bit from the Catalyst 6500/7600, in that the feature does not depend on interworking with Ethernet LMI.

On the ASR 1000, when the pseudowire goes down, this feature shuts down the local laser on the port configured with xconnect (Gi0/0/0 in this use case). The peer Ethernet port sees the interface going down and transitions to down/down. This lets downstream devices stop sending traffic to the port and results in almost instant convergence. EoMPLS remote port shutdown thus provides faster failover for both local/remote node and link failure scenarios.

This behavior is very helpful in the vPC scenario, because it triggers LACP (Link Aggregation Control Protocol) to converge instantly and removes the member link from the virtual port channel.
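
A quick, hedged check of pseudowire state on either edge router (VC ID 100 matches Example 12-1):

    ASR1000-1# show mpls l2transport vc 100
    ! when the VC status shows DOWN, remote port shutdown turns off the laser on the
    ! attachment circuit (Gi0/0/0 here), so the downstream device sees the link go down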

Summary

This chapter discussed six use cases in which the Cisco ASR 1000 provides the following solutions:

  • High availability using NSF/SSO in an enterprise
  • Data capture using ERSPAN in a router
  • MPLS over x solutions in a large enterprise that needs virtualization/segmentation at 10 Gbps or higher speeds
  • VPNv4 RR in a self-deployed MPLS enterprise
  • Highly available Internet gateway router
  • DCI WAN router

The goal was to walk through a diverse set of technology problem statements and solutions that are common in an enterprise and to cover how the ASR 1000 addresses them.

Chapter Review Questions

  1. Is NSF for IGPs enabled by default?

  2. What is the difference between an NSF-aware and NSF-capable router?

  3. What is ERSPAN, and which Cisco platforms support ERSPAN today?

  4. How does IOS, being a 32-bit OS, address 16-GB DRAM in ASR1000-RP2 to achieve such a high route scale?

  5. Does the ASR 1000 require a feature license to turn on and use MPLS, BGP, NAT, GRE, or NetFlow?

  6. What is DCI, and how does the pseudowire failover work for remote node/link failure?

Answers

  1. No, it is not turned on by default. You need to turn it on by entering the nsf command within IGP configuration mode.

  2. NSF-aware means that the device can participate in an NSF restart by virtue of understanding the GR signaling (such as the OSPF grace LSA), but might not undertake the restart itself. NSF-capable routers, on the other hand, both understand the GR signaling and can undergo an NSF restart themselves. The Cisco ASR 1000 is both NSF-aware and NSF-capable.

  3. ERSPAN stands for Encapsulated Remote SPAN; it encapsulates the SPAN-ed traffic inside a GRE header so that it can be routed across a Layer 3 domain. This enables capturing data on one device (such as a Cisco ASR 1000) for a given set of interfaces and direction, whereas the monitoring station can be placed several L3 hops away on another device. The Cisco Catalyst 6500, 7600, Nexus, and ASR 1000 are the only platforms that support ERSPAN today.

  4. The RP IOS package for the ASR1000-RP2, and most of the underlying software infrastructure, has been extended to 64 bits and can therefore address more than 4 GB of DRAM.

  5. No, the Cisco ASR 1000 does not require any software RTU licenses for these basic features. Hence, they can be used as long as they are available in the given IOS image.

  6. DCI stands for data center interconnect, a common way to extend Layer 2 or Layer 3 connectivity across data centers. The ASR 1000 can be used at this time for p2p connectivity across two data centers over IP and MPLS transport types. EoMPLS can be used to extend L2 connectivity and VLANs across the DCI WAN link. The ASR 1000 has a unique feature known as Remote Port Shutdown, which functions similarly to the GSR implementation. To avoid traffic blackholing and to allow faster convergence, as soon as a pseudowire goes down, the router switches off the port laser to let the peer (customer edge [CE]) port know that the link has gone down; the peer port immediately goes to down/down. This proves handy for achieving extremely fast convergence end to end. As soon as the pseudowire comes back up, the router turns the laser back on, signaling the CE port that it can resume sending traffic via the given EoMPLS PE. This feature is enabled by default and does not require any additional configuration.

Further Reading

Graceful OSPF Restart, document: http://tools.ietf.org/html/rfc3623

Configuring ERSPAN on Catalyst 6500 Switches, document: http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/span.html#wp1063324

Internet Gateway Router Design Using Cisco ASR 1000 Series Routers, document: http://tinyurl.com/l6nbcp

Cisco 6500 Virtual Switching Systems (VSS), document: http://tinyurl.com/5zph8e

Configuring vPC (Virtual Port Channel), document: http://tinyurl.com/l37wqp

Cisco Nexus 7000 Security Features, document: http://tinyurl.com/n2nx99

Data Center Interconnect, document: http://tinyurl.com/rclv2f

"Route Reflector Scale," report: http://tinyurl.com/kmc89b