
CCDP Self-Study: Designing High-Availability Services

Chapter Description

Cisco IOS high-availability technologies provide network redundancy and fault tolerance. Reliable network devices, redundant hardware components with automatic failover, and protocols like Hot Standby Router Protocol (HSRP) are used to maximize network uptime. This chapter will help you get a handle on high-availability technologies.

Designing High-Availability Enterprise Networks

The Enterprise Campus and the Enterprise Edge need maximum availability of the network resources; hence, network designers must incorporate high-availability features throughout the network. Designers must be familiar with the design guidelines and best practices for each component of an enterprise network. There are specific guidelines for designing a highly available Campus Infrastructure functional area and an Enterprise Edge functional area. Adopting a high-availability strategy for an enterprise site is a must.

Design Guidelines for High Availability

Designing a network for high availability requires designers to consider the reliability of each network hardware and software component, redundancy choices, protocol attributes, circuits and carrier options, and environmental and power features that contribute to the overall availability of the network.

To design high-availability services for an enterprise network, designers must answer the following types of questions:

  • Where should module and chassis redundancy be deployed in the network?

  • What software reliability features are required for the network?

  • What protocol attributes need to be considered?

  • What high-availability features are required for circuits and carriers?

  • What environmental and power features are required for the network?

  • What operations procedures are in place to prevent outages?

Redundancy Options

The options for device redundancy include both module and chassis redundancy. Both types of redundancy are usually most important at the Building Distribution and Campus Backbone submodules. The decision about which type of redundancy to use is based on the criticality of the resource and the cost of the redundancy.

With module redundancy, only selected modules are configured for failover. If the primary module fails, the device operating system controls the failover. Module redundancy is typically the most cost-effective redundancy option available, and is the only option (over chassis redundancy) for edge devices in point-to-point topologies.

With chassis redundancy, the entire chassis and all modules within it are redundant. In the event of a failure, the protocols running on the network, such as HSRP or VRRP, determine how the failover occurs. Chassis redundancy increases the cost and complexity of the network, which are factors to consider when selecting device redundancy. Chassis redundancy is also limited for point-to-point edge networks. To calculate the theoretical advantage gained with redundant modules or chassis, use the following formula:

Availability = 1 - [(1 - availability of device1) * (1 - availability of device2)]

The preceding availability formula is for parallel redundant devices, as opposed to the earlier formula, which was for serial availability. For example, if you implement a redundant switch fabric with 100-percent failure detection and each device's availability is 99.997 percent, the overall availability is calculated as follows:

Availability = 1 - [(1 - .99997) * (1 - .99997)]
Availability = 1 - [(.00003) * (.00003)] = 1 - [.0000000009]
Availability = .9999999991

Therefore, redundant switch fabrics increase the availability of the component to 99.99999991 percent. As mentioned, this is known as parallel availability.
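As a quick sketch, the parallel-availability formula can be expressed in Python; the function name is illustrative, and the two-device switch-fabric example uses the 99.997-percent figure from the text:

```python
def parallel_availability(*availabilities):
    """Availability of redundant (parallel) components: the system
    is unavailable only if every component is down at once, so the
    individual unavailabilities multiply."""
    unavailability = 1.0
    for a in availabilities:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

# Two redundant switch fabrics, each 99.997 percent available
print(parallel_availability(0.99997, 0.99997))  # ≈ 0.9999999991
```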

Link redundancy, implemented through parallel or serial implementations, can significantly increase availability. The following formula calculates the availability resulting from redundant parallel links and shows the theoretical advantage gained:

Availability = [1 - (1 - availability1)^2] * [1 - (1 - availability2)^2] * [1 - (1 - availability3)^2]

In the example shown in Figure 5-5, the serially implemented network is available 99.86 percent of the time, while the parallel network is available 99.97 percent of the time (based on the preceding formula).

Figure 5-5 Parallel versus Serial Implementations
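To make the serial-versus-parallel comparison concrete, the following Python sketch computes both cases for a chain of links. The per-link availability of 99.95 percent is a hypothetical value chosen for illustration, not the actual input behind the figure's numbers:

```python
def serial_availability(availabilities):
    """A chain of single links: every link must be up, so the
    individual availabilities multiply."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel_link_availability(availabilities):
    """Each hop is a pair of redundant links: a hop fails only
    when both of its links fail, per the formula in the text."""
    result = 1.0
    for a in availabilities:
        result *= 1.0 - (1.0 - a) ** 2
    return result

links = [0.9995, 0.9995, 0.9995]  # hypothetical per-link availabilities
print(f"Serial:   {serial_availability(links):.6f}")
print(f"Parallel: {parallel_link_availability(links):.6f}")
```

Doubling each link lifts the path availability well above the single-link chain, which is the theoretical advantage the formula quantifies.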

To fully determine the benefit of device, chassis, and link redundancy, designers should discover the answers to the following questions:

  • Will the solution allow for load sharing?

  • Which components are redundant?

  • What active-standby fault detection methods are used?

  • What is the MTBF for a module? What is the MTTR for a module? Should it be made redundant?

  • How long does it take to upgrade?

  • Are hot swapping and online insertion and removal (OIR) available?

Software Features and Protocol Attributes

Cisco Systems recommends implementation of the following software features:

  • Protect gateway routers with HSRP or VRRP

  • Implement resilient routing protocols, such as EIGRP, OSPF, IS-IS, RIPv2, BGP

  • Use floating static routes and access control lists (ACLs) to reduce load in case of failure

Network designers also need to consider protocol attributes, such as complexity to manage and maintain, convergence, hold times, and signal overhead.

Carrier and Circuit Types

Because the carrier network is an important component of the enterprise network and its availability, careful consideration of the following points about the carrier network is essential:

  • Understand the carrier network—Model and understand carrier availability, including the carrier diversity strategy and how that will affect the availability of your network design. Make sure you have a service level agreement (SLA) that specifies availability and offers alternate routes in case of failure. Ensure that the carrier offers diversity and that dual paths to the ISP do not terminate at the same location (a single point of failure).

  • Consider multihoming to different vendors—Multihoming to different vendors provides protection against carrier failures.

  • Monitor carrier availability—Determine if the carrier offers enhanced services, such as a guaranteed committed information rate (CIR) for Frame Relay, or differentiated services. Use carrier SLAs.

  • Review carrier notification and escalation procedures—Review the carrier's notification and escalation procedures to ensure that they can reduce downtimes.

Power Availability

Power and environmental availability affect overall network availability. The Worldwatch Institute has predicted that electrical interruptions will cost U.S. companies $80 billion a year. Implementing uninterruptible power supplies (UPSs) increases availability. Table 5-3, from American Power Conversion's Tech Note #26, describes the effect of UPSs and power array generators on overall availability.

Table 5-3 Power Supply Availability Options

Power Option                  Event Outages    Annual Downtime
Raw AC Power                  15 events        189 minutes
5-Minute UPS                  1 event          109 minutes
1-Hour UPS                    .15 events       10 minutes
UPS with Generator            .01 events       1 minute
Power Array with Generator    .001 events      .1 minute

For powering and grounding sensitive electronic equipment, refer to the IEEE recommended practice, IEEE Standard 1100-1992.
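The annual-downtime figures in Table 5-3 can be converted back into availability percentages with a short Python sketch; the option names come from the table, and the conversion itself is simple arithmetic:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

# Annual downtime in minutes, from Table 5-3
options = {
    "Raw AC Power": 189,
    "5-Minute UPS": 109,
    "1-Hour UPS": 10,
    "UPS with Generator": 1,
    "Power Array with Generator": 0.1,
}

for name, downtime in options.items():
    availability = 1 - downtime / MINUTES_PER_YEAR
    print(f"{name:28s} {availability:.6%}")
```

For example, raw AC power's 189 minutes of annual downtime corresponds to roughly 99.964-percent availability, while the power array with generator approaches "six nines."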

High-Availability Design Goals and Conclusions

The general network design conclusions with respect to high availability are as follows:

  • Reduce complexity, increase modularity and consistency

  • Consider solution manageability

  • Minimize the size of failure domains

  • Consider protocol attributes

  • Consider budget, requirements, and areas of the network that contribute the most downtime or are at greatest risk

  • Test before deployment

Consider the following cost and budget issues when designing high-availability networks:

  • One-time costs—Calculate the cost of additional components or hardware, software upgrades, new software costs, and installation.

  • Recurring costs—Consider the costs of additional WAN links and the recurring cost of equipment maintenance.

  • Complexity costs—Keep in mind that availability might be more difficult to manage and troubleshoot. More training for the support staff might be required.

Best Practices for High-Availability Network Design

Cisco has developed a set of best practices for network designers to ensure high availability of the network. Cisco's recommendations consist of the following five steps:

Step 1

Analyze technical goals and constraints—Technical goals include availability levels, throughput, jitter, delay, response time, scalability requirements, introductions of new features and applications, security, manageability, and cost. Investigate constraints, given the available resources. Prioritize goals and lower expectations that can still meet business requirements. Prioritize constraints in terms of the greatest risk or impact to the desired goal.

Step 2

Determine the availability budget for the network—Determine the expected theoretical availability of the network. Use this information to determine the availability of the system to help ensure the design will meet business requirements.
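One way to sketch an availability budget is to multiply the availabilities of every component in the end-to-end path (serial availability, per the earlier formulas). The component values below are hypothetical, not Cisco-published figures:

```python
def availability_budget(component_availabilities):
    """End-to-end (serial) availability: every component in the
    path must be up, so the availabilities multiply. The result is
    the theoretical availability the design can be expected to meet."""
    budget = 1.0
    for a in component_availabilities:
        budget *= a
    return budget

# Hypothetical path: access switch, distribution switch, core switch, WAN link
path = [0.9999, 0.99997, 0.99999, 0.9995]
print(f"Theoretical availability budget: {availability_budget(path):.5%}")
```

The budget is always lower than the weakest component, which is why the weakest links in the path deserve redundancy first.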

Step 3

Create application profiles for business applications—Application profiles help the task of aligning network service goals with application or business requirements by comparing application requirements, such as performance and availability, with realistic network service goals or current limitations.

Step 4

Define availability and performance standards—Availability and performance standards set the service expectations for the organization.

Step 5

Create an operations support plan—Define the reactive and proactive processes and procedures used to achieve the service level goal. Determine how the maintenance and service process will be managed and measured. Each organization should know its role and responsibility for any given circumstance. The operations support plan should also include a plan for spare components.

To achieve 99.99-percent availability (often referred to as "four nines"), the following problems must be eliminated:

  • Single point of failure

  • Inevitable outage for hardware and software upgrades

  • Long recovery time for reboot or switchover

  • No tested hardware spares available on site

  • Long repair times because of a lack of troubleshooting guides and process

  • Inappropriate environmental conditions

To achieve 99.999-percent availability (often referred to as "five nines"), you also need to eliminate these problems:

  • High probability of failure of redundant modules

  • High probability of more than one failure on the network

  • Long convergence for rerouting traffic around a failed trunk or router in the core

  • Insufficient operational control
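The downtime implied by "four nines" and "five nines" follows directly from the definition of availability. This short Python sketch computes the permitted annual downtime for each target:

```python
def annual_downtime_minutes(availability):
    """Minutes per year a network may be down while still meeting
    the given availability target."""
    return (1 - availability) * 365 * 24 * 60

for label, target in [("Four nines (99.99%)", 0.9999),
                      ("Five nines (99.999%)", 0.99999)]:
    print(f"{label}: {annual_downtime_minutes(target):.2f} minutes per year")
```

Four nines allows roughly 52.6 minutes of downtime per year; five nines allows only about 5.3 minutes, which is why the additional problems listed above must also be eliminated.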

Enterprise Campus Design Guidelines for High Availability

Each submodule of the Campus Infrastructure module should incorporate fault tolerance and redundancy features to provide an end-to-end highly available network. In the Building Access submodule, Cisco recommends that you implement STP along with the UplinkFast and PortFast enhancements. Rapid Spanning Tree Protocol (802.1w) and Multiple Spanning Tree Protocol (802.1s) offer benefits such as faster convergence and greater efficiency than traditional STP (802.1D). You can implement HSRP (or VRRP) in the Building Distribution submodule, with HSRP hellos passing through the switches in the Building Access submodule. At the Building Distribution submodule, Cisco recommends that you implement STP and HSRP for first-hop redundancy. Finally, the Campus Backbone submodule is a critical resource for the entire network. Cisco recommends that you incorporate device and network topology redundancy at the Campus Backbone, as well as HSRP for failover.

By leveraging the flexibility of data-link layer connectivity in the Building Access switches, the option of dual-homing the connected end systems is available. Most NICs operate in an active-standby mode with a mechanism for MAC address portability between pairs. During a failure, the standby NIC becomes active on the new Building Access switch. Another end-system redundancy option is for a NIC to operate in active-active mode, in which each host is available through multiple IP addresses. Either end-system redundancy mode requires more ports in the Building Access submodule.

The primary design objective for a server farm is to ensure high availability in the infrastructure architecture. The following are the guidelines for server farm high availability:

  • Use redundant components in infrastructure systems, where such a configuration is practical, cost effective, and considered optimal

  • Use redundant traffic paths provided by redundant links between infrastructure systems

  • Use optional end-system (server) dual homing to provide a higher degree of availability

Enterprise Edge Design Guidelines for High Availability

Each module of the Enterprise Edge functional area should incorporate high-availability features from the service provider edge to the enterprise campus network. Within the Enterprise Edge functional area, consider the following for high availability:

  • Service level agreement—Ask your service provider to write into your SLA that your backup path terminates into separate equipment at the service provider, and that your lines are not trunked into the same paths as they traverse the network.

  • Link redundancy—Use separate ports, preferably on separate routers, to each remote site. Having backup permanent virtual circuits (PVCs) through the same physical port accomplishes little or nothing, because a port is more likely to fail than any individual PVC.

  • Load balancing—Load balancing occurs when a router has two (or more) equal cost paths to the same destination. You can implement load sharing on a per-packet or per-destination basis. Load sharing provides redundancy, because it provides an alternate path if a router fails. OSPF will load share on equal-cost paths by default. EIGRP will load share on equal-cost paths by default, and can be configured to load share on unequal-cost paths. Unequal-cost load sharing is discouraged because it can create too many obscure timing problems and retransmissions.

  • Policy-based routing—If you have unequal cost paths, and you do not want to use unequal-cost load sharing, you can use policy-based routing to send lower priority traffic down the slower path.

  • Routing protocol convergence—The convergence time of the routing protocol chosen will affect overall availability of the Enterprise Edge. The main area to examine is the impact of the Layer 2 design on Layer 3 efficiency.
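Per-destination load sharing can be illustrated with a small Python sketch that hashes the destination address to choose among equal-cost paths. The interface names and addresses are hypothetical, and real routers implement this in hardware or in the switching path:

```python
import zlib

def pick_path(dst_ip, paths):
    """Per-destination load sharing: hash the destination so that
    all packets to one host follow the same path (preserving packet
    order), while different destinations spread across the paths."""
    index = zlib.crc32(dst_ip.encode()) % len(paths)
    return paths[index]

# Two hypothetical equal-cost paths to the same destination network
paths = ["Serial0/0", "Serial0/1"]
for dst in ["10.1.1.10", "10.1.2.20", "10.1.3.30"]:
    print(dst, "->", pick_path(dst, paths))
```

Per-packet load sharing would instead alternate paths for every packet, spreading load more evenly at the cost of possible packet reordering.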

Several of the generic high-availability technologies and Cisco IOS features might also be implemented at the Enterprise Edge functional area. Cisco Nonstop Forwarding enables continuous packet forwarding during route processor switchover and route convergence. Stateful failover allows a backup route processor to take immediate control from the active route processor while maintaining WAN connectivity protocols. Route Processor Redundancy (RPR) allows a standby route processor to load an IOS image and configuration, parse the configuration, and reset and reload the line cards, thereby reducing reboot time. HSRP enables two or more routers to work together in a group to emulate a single virtual router to the source hosts on the LAN. Alternatively, VRRP enables a group of routers to form a single virtual router by sharing one virtual router IP address and one virtual MAC address.

High-Availability Design Example

Providing high availability in the enterprise site can involve deploying highly fault-tolerant devices, incorporating redundant topologies, implementing STP, and configuring HSRP. Figure 5-6 shows an example enterprise-site design that incorporates high-availability features.

Figure 5-6 High-Availability Design Example

According to the example depicted in Figure 5-6, each module and submodule is utilizing the necessary and feasible high-availability technologies as follows:

  • Building Access submodule—The Building Access switches all have uplinks terminating in a pair of redundant multilayer switches at the Building Distribution submodule, which act as an aggregation point. Only one pair of Building Distribution switches is needed per building. The number of wiring-closet switches is based on port density requirements. Each Building Access switch includes fault tolerance to increase MTBF. Because the failure of an individual switch would have a smaller impact than a device failure in the Building Distribution and Campus Backbone submodules, device redundancy is not provided.

  • Building Distribution submodule—First-hop redundancy and fast failure recovery are achieved with HSRP, which runs on the two multilayer switches in the distribution layer. HSRP provides end stations with a default gateway in the form of a virtual IP address that is shared by a minimum of two routers. HSRP routers discover each other via hello packets, which are sent through the Building Access switches with negligible latency.

  • Campus Backbone submodule—In the Campus Backbone submodule, two multilayer switches are deployed; each one is configured for high fault tolerance. HSRP is implemented to allow for device redundancy. The EIGRP routing protocol is used to provide load balancing and fast convergence.

  • Server Farm module—In the Server Farm module, two multilayer switches with HSRP configured provide redundancy. The file servers are mirrored for added protection.

  • Enterprise Edge module—At the Enterprise Edge, fault-tolerant switches are deployed with link redundancy and HSRP to enable failover. Outward-facing e-commerce servers are mirrored to ensure availability.
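HSRP failover timing can be approximated with a simplified simulation: the active router sends hellos at the hello interval (3 seconds by default), and the standby takes over once the holdtime (10 seconds by default) expires without hearing a hello. The model below is a sketch that ignores preemption, priorities, and processing delay:

```python
def hsrp_failover_time(hello=3.0, holdtime=10.0, failure_at=7.5):
    """Simulate when the standby router declares the active router
    down. Hellos are sent every `hello` seconds; after `failure_at`
    seconds the active router fails silently. The standby takes over
    `holdtime` seconds after the last hello it heard."""
    last_hello = 0.0
    t = 0.0
    while True:
        t += hello
        if t > failure_at:
            break  # the active router failed before this hello
        last_hello = t
    return last_hello + holdtime

print(hsrp_failover_time())  # default timers: takeover at t = 16.0 s
print(hsrp_failover_time(hello=1.0, holdtime=3.0))  # tuned: t = 10.0 s
```

Tuning the hello and holdtime timers shortens failover at the cost of more control traffic and a higher risk of false failovers on congested links.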