
CCDA Self Study: Basic Campus Switching Design Considerations

  • Sample Chapter is provided courtesy of Cisco Press.
  • Date: Jan 16, 2004.

Campus Design

Campus building blocks consist of multilayer devices that connect to the campus backbone. A building design is appropriate for a building-sized network that contains several thousand networked devices; a campus design is appropriate for a large campus that consists of many buildings. To scale from a building model to a campus model, network designers must add a campus backbone between buildings.

This section discusses advanced network traffic considerations and building design using Layer 2 and Layer 3 switching in the access and distribution layers. It describes traffic patterns, multicast traffic, and QoS, and uses both Layer 2 and Layer 3 technologies to discuss campus backbone design. Finally, we investigate server placement within the campus and present guidelines for connectivity to the rest of the enterprise network.

Introduction to Enterprise Campus Design

As discussed in Chapter 3, "Structuring and Modularizing the Network," the Enterprise Campus network can be divided into the following modules:

  • Campus Infrastructure—This module contains the following submodules:

    • Building Access—Aggregates end user connections and provides access to the network.

    • Building Distribution—Provides aggregation of access devices and connects them to the campus backbone.

    • Campus Backbone—Interconnects the building distribution submodules with the Edge Distribution module and provides high-speed transport.

  • Server Farm—Connects the servers to the enterprise network and manages the campus server availability and traffic load balancing.

  • Edge Distribution—Connects the Enterprise Edge applications to the network campus. Security is the main consideration in this module.

  • Network Management—The Network Management module requirements are similar to those for the Server Farm module, with the exception of the bandwidth requirement. The Network Management module typically does not require high bandwidth.

This section identifies major requirements for designing campus networks within these modules.

Enterprise Campus Module Requirements

As shown in Table 4-7, each Enterprise Campus module has different requirements. For example, the table illustrates how modules that are located closer to the users require a higher degree of scalability. This means that the network designer must allow for easy future expansion of the campus network without redesigning the entire network. For example, adding new workstations to a network should result in neither high investment cost nor performance degradation.

Table 4-7 Enterprise Campus Design Requirements

| Requirement       | Building Access: Shared | Building Access: Layer 2 Switched | Building Distribution  | Campus Backbone        | Server Farm      | Edge Distribution |
|-------------------|-------------------------|-----------------------------------|------------------------|------------------------|------------------|-------------------|
| Technology        | Shared                  | Layer 2 switched                  | Layer 2 and 3 switched | Layer 2 and 3 switched | Layer 3 switched | Layer 3 switched  |
| Scalability       | High                    | High                              | Medium                 | Low                    | Medium           | Low               |
| High availability | Low                     | Medium                            | Medium                 | High                   | High             | Medium            |
| Performance       | Low                     | Low                               | Medium                 | High                   | High             | Medium            |
| Cost per port     | Low                     | Low                               | Medium                 | High                   | High             | Medium            |

End users usually do not require high performance and high availability, but these characteristics are crucial in the campus backbone and especially in the Server Farm module.

The price per port increases with increased performance and availability. The campus backbone and Server Farm require a guarantee of higher throughput so they can handle all traffic flows and not introduce additional delays or drops to the network traffic.

The Edge Distribution module does not require the same performance as in the campus backbone. However, it can require other features and functionalities that increase the overall cost.

Enterprise Campus Design Considerations

Designing an Enterprise Campus means not only dividing the network into modules, but also optimizing performance and the cost of each module while providing scalability and high availability. Before designing a campus network, you must take the following considerations relating to network traffic into account:

  • Application traffic patterns—Identify the organizational traffic flows. This includes the type of traffic and its bandwidth requirements and traffic patterns.

  • Multicast traffic—Identify the features that constrain multicast streams to the relevant ports. If present in the Enterprise Campus network and incorrectly designed, multicast traffic can use a great amount of bandwidth.

  • Delay sensitive traffic—Identify and incorporate the appropriate QoS mechanisms to manage the diverse requirements for delay and delay variations.

As Table 4-8 shows, the Enterprise Campus can be built on either a shared or switched (Layer 2 or Layer 3) foundation technology. In the building access layer, workstations with low demand can be connected via shared technology; however, this option is only suitable for some small (home) offices that have a few devices without any special bandwidth requirements. Where higher speeds are required, shared technology is not appropriate, and LAN switching is the only option. The remaining consideration is whether to use Layer 2 or Layer 3 switching technology.

Table 4-8 Enterprise Campus Design Decisions

| Requirement                            | Building Access: Shared | Building Access: Layer 2 Switched | Building Distribution    | Campus Backbone        | Server Farm      | Edge Distribution |
|----------------------------------------|-------------------------|-----------------------------------|--------------------------|------------------------|------------------|-------------------|
| Technology                             | Shared                  | Layer 2 switched                  | Layer 2 and 3 switched   | Layer 2 and 3 switched | Layer 3 switched | Layer 3 switched  |
| Application traffic                    | Distant                 | Local/distant                     | Distant                  | Distant                | Local/distant    | Distant           |
| Multicast traffic aware                | No                      | Layer 2 limited                   | Yes                      | Yes                    | Yes              | Yes               |
| QoS (delay-sensitive) traffic support  | No                      | Queuing/marking per port          | Marking per application  |                        |                  |                   |


Consideration of the applications and traffic is required to ensure that the appropriate equipment for the individual modules is selected. Application traffic patterns, multicast traffic, and QoS are important network design issues.

Layer 2 switches usually support multicast and QoS features, but with limited capability. A Layer 3 switch, or in the case of IP multicast, at least a so-called Layer 3-aware switch, might be required.

A Layer 2 multicast-aware switch that works closely with the Layer 3 device (router) can distinguish which hosts belong to the multicast stream and which do not. Thus, the Layer 2 switch can forward the multicast stream to only selected hosts.

Layer 2 QoS support is usually limited to port marking capability and queuing on only uplink trunk ports, especially on low-end switches. Layer 2 switches are usually incapable of marking or queuing based on the Layer 3 parameters of packets. However, several recent platforms have added support for Layer 2, Layer 3, and Layer 4 class of service (CoS) and type of service (ToS) packet marking and policing.

The following sections examine network traffic patterns, multicast traffic, and QoS considerations in the Enterprise Campus modules.

Network Traffic Patterns

Campus traffic patterns are generally categorized as local (within a segment or submodule) or distant (passing several segments and crossing the module boundaries).

Network traffic patterns have changed through the years. The characteristic of traditional campus networks was 80 percent local traffic and 20 percent distant traffic; this is known as the 80/20 rule. In modern campus networks, the ratio is closer to 20/80 because the servers are no longer present in the workgroup, but are instead placed separately in the Server Farm. The 20/80 ratio results in a much higher load on the backbone because the majority of the traffic from client workstations to the servers passes through the backbone.

80/20 Rule in the Campus

When designing a switched campus, network designers ensure that each switched segment corresponds to a workgroup. By placing the workgroup server in the same segment as its clients, most of the traffic can be contained. The 80/20 rule refers to the goal of containing at least 80 percent of the traffic within the local segment.

The campus-wide VLAN model is highly dependent upon the 80/20 rule. If 80 percent of the traffic is within a workgroup (VLAN), 80 percent of the packets flowing from the client to the server are switched locally.

The conventional 80/20 rule underlies traditional network design models. With the campus-wide VLAN model, the logical workgroup is dispersed across the campus, but is still organized so that 80 percent of traffic is contained within the VLAN. The remaining 20 percent of traffic leaves the network through a router.

20/80 Rule in the Campus

Many new and existing applications currently use distributed data storage and retrieval. The traffic pattern is moving toward what is now referred to as the 20/80 rule. With the 20/80 rule, only 20 percent of traffic is local to the workgroup LAN, and 80 percent of the traffic leaves the workgroup.

In a traditional network design, only a small amount of traffic passes through the Layer 3 devices. Because performance was not an issue, these devices were traditionally routers. Modern enterprise networks use servers that are located in Server Farms or in the Enterprise Edge. With an increasing amount of traffic from clients to distant servers, performance requirements are higher in the building distribution and campus backbone layers. Therefore, devices with very fast Layer 3 processing are necessary; these devices are Layer 3 switches.

Network Traffic Pattern Example

Figure 4-9 illustrates examples of the 80/20 and 20/80 rules in a campus network.

Figure 4-9 Traffic Patterns in Traditional and Modern Networks

Company A, shown on the left side of Figure 4-9, has several independent departments. Each department has its own VLAN, in which its servers and printers are located. File transfers from other departments' servers or workstations are necessary only occasionally. This traffic must pass through the distribution layer, which is represented by the Layer 3 switch. The only common resource the departments use is the mail server, which is located in the corporate network's core.

Company B, shown on the right side of Figure 4-9, also has several departments; however, they use common resources. Not only do they use file servers from their own department, but they also use services from common data storage, such as an Oracle database. This type of configuration requires a higher-performance Layer 3 switch on the distribution layer. The access layer switch (Layer 2) concentrates users into their VLANs. The servers on the other side of the network are also organized into groups and are connected to Layer 2 switches. Distribution layer switches in the middle enable fast, reliable, and redundant communication among the groups on both sides of the network. Figure 4-9 illustrates that the majority of the communication takes place between servers and users, and only a small amount of traffic is switched inside the group.

Multicast Traffic Considerations

IP multicast is a bandwidth-conserving technology that reduces traffic by simultaneously delivering a single stream of information to potentially thousands of corporate recipients.

Videoconferencing, corporate communications, distance learning, distribution of software, stock quotes, and news are some applications that take advantage of the multicast traffic stream. IP multicast delivers source traffic to multiple receivers.

IP multicast is based on the concept of a multicast group. Any group of receivers can express an interest in receiving a particular data stream. This group does not require any physical or departmental boundaries; rather, the hosts can be located anywhere on the corporate network. Hosts that are interested in receiving data that flows to a particular group must join the group using the Internet Group Management Protocol (IGMP).

Figure 4-10 illustrates a typical situation with IP multicast. Multicast-enabled routers ensure that traffic is delivered properly by using one of the multicast routing protocols, such as Protocol Independent Multicast (PIM). The router forwards the incoming multicast stream to the switch port.

Figure 4-10 Multicast Traffic Handled by Router

However, the default behavior for a Layer 2 switch is to forward all multicast traffic to every port that belongs to the same VLAN on the switch (a behavior known as flooding). This behavior defeats the purpose of the switch, which is to limit the traffic to only the ports that must receive the data.

NOTE

Support for broadcast and multicast suppression is available on several switched platforms. The suppression is done with respect to the incoming traffic rate and is either bandwidth-based or measured in packets per second. The threshold can be set to any value between 0 and 100 percent (or as a number of packets when packet-based suppression is turned on). When the data on the port exceeds the threshold, the switch suppresses further activity on the port for the remainder of the 1-second period.
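As an illustration of the suppression feature described in this note, the following minimal sketch shows the storm-control commands on a Catalyst switch running Cisco IOS. Exact syntax varies by platform and software release, and the interface and threshold values here are assumptions chosen purely for illustration.

  ! Limit broadcast and multicast traffic arriving on an access port.
  ! Levels are a percentage of port bandwidth, measured over the
  ! 1-second interval described in the note above.
  interface FastEthernet0/1
   storm-control broadcast level 20.00
   storm-control multicast level 30.00
   ! Optionally err-disable the port instead of just dropping traffic:
   storm-control action shutdown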

Static entries can sometimes be set to specify which ports should receive the multicast traffic. Dynamic configuration of these entries simplifies the switch administration.

Several methods exist for Cisco switches to deal efficiently with multicast in a Layer 2 switching environment. Following are the most common methods:

  • Cisco Group Management Protocol (CGMP)—CGMP allows switches to communicate with a router to determine whether any of the users attached to them are part of a multicast group. The multicast receiver registration is accepted by the router (using the IGMP) and communicated via CGMP to the switch; the switch adjusts its forwarding table accordingly. CGMP is a Cisco proprietary solution that is implemented on all Cisco LAN switches.

  • IGMP snooping—With IGMP snooping, the switch intercepts multicast receiver registrations and adjusts its forwarding table accordingly. IGMP snooping requires the switch to be Layer 3-aware because IGMP is a network layer protocol. Typically, the IGMP packet recognition is hardware-assisted.
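The following sketch illustrates both methods; it is not taken from the chapter, and the addresses and VLAN number are assumptions. The router side runs PIM and generates CGMP messages toward attached Cisco switches, while an IGMP snooping-capable switch can be configured (or simply left at its default) to intercept the registrations itself.

  ! Router interface facing the Layer 2 switch: enable multicast
  ! routing and PIM, and generate CGMP messages for the switch.
  ip multicast-routing
  interface FastEthernet0/0
   ip address 10.1.10.1 255.255.255.0
   ip pim sparse-dense-mode
   ip cgmp

  ! On an IGMP snooping-capable Catalyst switch (often the default):
  ip igmp snooping
  ip igmp snooping vlan 10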

NOTE

Additional methods for addressing the problem of multicast frames in a switched environment include the Generic Multicast Registration Protocol (GMRP) and the Router-Port Group Management Protocol (RGMP). GMRP, which is used between the switch and the host, is not yet widely available. RGMP is a Cisco solution for router-only multicast interconnects in a switched environment. (More information on RGMP is available in Configuring RGMP, at http://www.cisco.com/en/US/products/hw/switches/ps708/products_configuration_guide_chapter09186a008007e6f8.html.)

QoS Considerations for Delay-sensitive Traffic

A campus network transports many types of applications and data, including high-quality video and delay-sensitive data (such as real-time voice). Bandwidth-intensive applications stretch network capabilities and resources, but they can also enhance many business processes. Networks must provide secure, predictable, measurable, and sometimes guaranteed services. Achieving the required QoS by managing delay, delay variation (jitter), bandwidth, and packet loss parameters on a network can be the key to a successful end-to-end business solution. QoS mechanisms are techniques that are used to manage network resources.

The assumption that a high-capacity, nonblocking switch with multigigabit backplanes never needs QoS is incorrect. Most networks or individual network elements are oversubscribed. In fact, it is easy to create scenarios in which congestion can occur and some form of QoS is therefore required. Uplinks from the access layer to the distribution layer, or from the distribution layer to the core, most often require QoS. The sum of the bandwidths of all ports on a switch where end devices are connected is usually greater than that of the uplink port. When the access ports are fully used, congestion on the uplink port is unavoidable.

Depending on traffic flow and uplink oversubscription, bandwidth is managed with QoS mechanisms on the access, distribution, or even core switches.

QoS Categories

Layer 2 QoS is similar to Layer 3 QoS, which Cisco IOS software implements. You can configure the following four QoS categories on LAN switches:

  • Classification and marking—Packet classification features allow the partitioning of traffic into multiple priority levels, or classes of service. These features inspect the information in the frame header (Layer 2, Layer 3, and Layer 4) and determine the frame's priority. Marking is the process of changing a frame's CoS setting (or priority).

  • Scheduling—Scheduling is the process that determines the order in which queues are serviced. CoS is used on Layer 2 switches to assist in the queuing process. Layer 3 switches can also provide QoS scheduling; Layer 3 IP QoS queue selection uses the IP DiffServ Code Point (DSCP) or the IP packet's IP precedence field.

  • NOTE

    For more information on DSCP and IP precedence, refer to the Cisco Implementing Quality of Service Policies with DSCP document at http://www.cisco.com/en/US/tech/tk543/tk757/technologies_tech_note09186a00800949f2.shtml.

  • Congestion management—A network interface is often congested (even at high speeds, transient congestion is observed), and queuing techniques are necessary to ensure that traffic from the critical applications is forwarded appropriately. For example, real-time applications such as VoIP and stock trading might have to be forwarded with the least latency and jitter.

  • Policing and shaping—Policing and shaping reduce a stream of data to a predetermined rate or level. Unlike traffic shaping, in which excess frames can be buffered for a short period of time, policing simply drops or lowers the priority of frames that are out of profile.
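To make these categories concrete, the following sketch classifies, marks, and polices traffic on a Layer 3-capable Catalyst access switch using the modular QoS CLI. The access list, rates, and interface are illustrative assumptions, and command availability differs between platforms and software versions.

  ! Enable QoS processing globally.
  mls qos
  !
  ! Classification: match an assumed voice media port range (RTP).
  access-list 101 permit udp any any range 16384 32767
  class-map match-all VOICE
   match access-group 101
  !
  ! Marking and policing: set DSCP 46 (Expedited Forwarding) and
  ! drop traffic that exceeds the illustrative 1-Mbps profile.
  policy-map MARK-AND-POLICE
   class VOICE
    set ip dscp 46
    police 1000000 8000 exceed-action drop
  !
  interface FastEthernet0/1
   service-policy input MARK-AND-POLICE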

QoS in LAN Switches

When configuring QoS features, select the specific network traffic, prioritize it according to its relative importance, and use congestion-management techniques to provide preferential treatment. Implementing QoS in the network makes network performance more predictable and bandwidth use more effective.

Figure 4-11 illustrates where the various categories of QoS are implemented in LAN switches.

Figure 4-11 QoS in LAN Switches

Because they do not have knowledge of Layer 3 or higher information, access switches provide QoS based only on the switch's input port. For example, traffic from a particular host can be defined as high-priority traffic on the uplink port. The scheduling mechanism of an access switch's output port ensures that traffic from such ports is served first. The proper marking of input traffic ensures the expected service when traffic passes through the distribution and core layer switches.

Distribution and core layer switches are typically Layer 3-aware and can provide QoS selectively—not only on a port basis, but also according to higher-layer parameters, such as IP addresses, port numbers, or even QoS bits in the IP packet. These switches make QoS classification more selective by differentiating the traffic based on the application. QoS in distribution and core switches must be provided in both directions of traffic flow. The policing for certain traffic is usually implemented on the distribution layer switches.

QoS Example with Voice Traffic Across a Switch

QoS for voice over IP (VoIP) consists of keeping packet loss and delay within certain tolerable levels that do not affect the voice quality. Voice requires low jitter and low packet loss. One solution would be to simply provide sufficient bandwidth at all points in the network. A better alternative is to apply a QoS mechanism at the network's oversubscribed points.

A reasonable design goal for VoIP end-to-end network delay is 150 milliseconds, a level at which the speakers do not notice the delay. A separate outbound queue for real-time voice traffic can be provided to achieve guaranteed low delay for voice at campus speeds. Bursty data traffic, such as a file transfer, should be placed in a different queue. Packet loss is not an issue if low delay is guaranteed by providing a separate queue for voice. Figure 4-12 illustrates this situation.

Figure 4-12 QoS for VoIP Example

QoS maps well to the multilayer campus design. Packet classification is a multilayer service that is applied at the wiring-closet switch (access switch), which is the network's ingress point. VoIP traffic flows are recognized and then classified by their port number—the IP ToS is set to low delay voice for VoIP packets. Wherever the VoIP packets encounter congestion in the network, the local switch or router applies the appropriate congestion management based on this ToS value.
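A hedged sketch of this design on a Catalyst access switch follows: the wiring-closet port trusts CoS markings only when a Cisco IP phone is actually detected, and the uplink services voice from a strict-priority (expedite) queue. Interface numbers are assumptions, and queue commands vary by platform.

  mls qos
  !
  ! Wiring-closet port with an IP phone attached: trust the phone's
  ! CoS marking, but only if a Cisco phone is detected on the port.
  interface FastEthernet0/4
   switchport mode access
   mls qos trust cos
   mls qos trust device cisco-phone
  !
  ! Uplink toward the distribution layer: service the high-priority
  ! queue exhaustively so voice sees minimal delay and jitter.
  interface GigabitEthernet0/1
   priority-queue out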

Building Access and Distribution Layers Design

In a conventional campus-wide VLAN design, network designers apply Layer 2 switching to the access layer, while the distribution layer switches support Layer 3 capabilities. In small networks, both access and distribution layers can be merged into a single switch.

Building Access Layer Considerations

The access layer aggregates the workstations or hosts on a Layer 2 device (a switch or hub). The Layer 2 node represents one logical segment and is one broadcast domain. VLAN support might be required where multiple departments coexist in the same wiring closet.

The policies implemented on the access switch are based on Layer 2 information. These policies focus on and include the following features:

  • Port security

  • Access speeds

  • Traffic classification priorities that are defined on uplink ports
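For example, a typical access port might combine these Layer 2 policies as follows. This is a sketch for a Catalyst IOS switch; the VLAN, speed, and address-count values are chosen only for illustration.

  interface FastEthernet0/2
   switchport mode access
   switchport access vlan 10
   ! Port security: allow one learned MAC address and drop traffic
   ! from other addresses without shutting the port down.
   switchport port-security
   switchport port-security maximum 1
   switchport port-security violation restrict
   ! Fixed access speed and duplex for this port.
   speed 100
   duplex full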

When implementing the campus infrastructure's building access submodule, consider the following questions:

  • How many users or host ports are currently required in the wiring closet, and how many will it require in the future? Should the switches support fixed or modular configuration?

  • What cabling is currently available in the wiring closet, and what cabling options exist for uplink connectivity?

  • What Layer 2 performance does the node need?

  • What level of redundancy is needed?

  • What is the required link capacity to the distribution layer switches?

  • How will VLANs and STP be deployed? Will there be a single VLAN or several VLANs per access switch? Will the VLANs on the switch be unique or spread across multiple switches? The latter design was common a few years ago, but today campus-wide (or access layer-wide) VLANs are not desirable.

  • Are additional features, such as port security, multicast traffic management, and QoS (traffic classification based on ports), required?

Based on the answers to these questions, the network designer can select the devices that satisfy the access layer's customer requirements. The access layer should maintain the simplicity of traditional LAN switching, with the support of basic network intelligent services and business applications.

Redundant paths can be used for failover and load balancing. Layer 2 switches can support features that are able to accelerate STP timers and provide for faster convergence and switchover of traffic to the redundant link, including BackboneFast, UplinkFast, and RSTP.

Disabling STP on a device is discouraged because of possible loops. STP should be disabled only on carefully selected ports, typically only those to which hosts are connected. Several other methods enable loop-resistant deployment of STP, including BPDU filtering and the BPDU guard feature on access links where PortFast is configured. On the uplinks, BPDU skew detection, STP loop guard, and UDLD are additional measures against STP loops.

The "Layer 2 and Layer 3 Switching Design Considerations" section of this chapter discusses these STP features.

Building Access Design Examples

Figure 4-13 illustrates examples of a small and a medium-sized campus network design.

Figure 4-13 Small and Medium Campus Access Layer Designs

Following are some characteristics of a small campus network design:

  • Network servers and workstations in small campus networks connect to the same wiring closet.

  • Switches in small campus networks do not usually require high-end performance.

  • The network designer does not have to physically divide the network into a modular structure (building access and building distribution modules).

  • Low-end multilayer switches could provide the Layer 3 services closer to the end user when there are multiple VLANs at the access layer.

  • Small networks often merge the distribution and access layers.

Because of their performance requirements, medium-size campus networks are built on Layer 2 access switches and are connected by uplinks to the distribution Layer 3 switches. This forms a clear structure of building access and building distribution modules. If redundancy is required, an additional Layer 3 switch can be attached to the network's aggregation point with full link redundancy.

Building Distribution Layer Considerations

The building distribution layer aggregates the access layer and uses a combination of Layer 2 and Layer 3 switching to segment workgroups and isolate segments from failures and broadcast storms. This layer implements many policies based on access lists and QoS settings; by implementing these policies, the distribution layer protects the core network segment from the impact of access layer problems.

One of the most frequently asked questions regarding implementation of the building distribution layer is whether a Layer 2 switch is sufficient or a Layer 3 switch must be deployed. To make this decision, answer the following questions:

  • How many users will the distribution switch handle?

  • What type and level of redundancy are required?

  • As intelligent network services are introduced, can the network continue to deliver high performance for all its applications, such as Video On Demand, IP multicast, or IP telephony?

The network designer must pay special attention to the following network characteristics:

  • Performance—Distribution switches should provide wire-speed performance on all ports. This feature is important because of access layer aggregation on one side and high-speed connectivity of the core module on the other side. Future expansions with additional ports or modules can result in an overloaded switch if it is not selected properly.

  • Intelligent network services—Distribution switches should not only support fast Layer 2 and/or Layer 3 switching, but should also incorporate intelligent network services such as high availability, QoS, security, and policy enforcement.

  • Manageability and scalability—Expanding and/or reconfiguring distribution layer devices must be easy and efficient. These devices must support the required management features.

With the correct selection of distribution layer switches, the network designer can easily add new building access modules.

NOTE

Layer 3 switches are usually preferred for the distribution layer switches because this layer must support intelligent network services, such as QoS and traffic filtering.

The network designer must also decide where redundancy should be implemented and which mechanisms should be used: Layer 2 with STP, or Layer 3 (routing protocol) redundant paths. If advanced STP features such as RSTP, UplinkFast, or BackboneFast are not implemented, Layer 2 convergence after a failure can take up to 50 seconds. If these features are supported and enabled on the switch, the switchover time ranges from about 1 second (in the case of RSTP deployment) to 30 seconds. Routing protocols usually switch over in a few seconds (EIGRP in a redundant configuration is usually faster than OSPF because of OSPF's default 5-second shortest path first [SPF] recalculation delay).

Building Distribution Layer Example

Figure 4-14 illustrates a sample network. In this figure, each access layer module has two equal-cost paths to the distribution module switches. The distribution layer switch also has two equal-cost paths to the backbone to ensure fast failure recovery and possibly load sharing.

Figure 4-14 Redundancy in the Building Distribution Layer

Because of the redundancy in this example, the network designer must also address STP on the distribution layer, particularly when the possibility of Layer 2 loops exists. Figure 4-14 illustrates the access layer that is connected to both distribution switches, which are also directly interconnected. If the same VLAN spreads across all links, STP must be implemented on the access and distribution switches. If the link to the access layer fails, STP recovery time might also be a concern. STP features such as UplinkFast, BackboneFast, or RSTP can reduce the time taken to switch from one active link to another. If the connectivity to the campus backbone is based on Layer 3, STP on those ports is not necessary.

Campus Backbone Design

Low price per port and high port density can govern wiring closet environments, but high-performance wire-rate multilayer switching drives the campus backbone design.

A campus backbone should be deployed where three or more buildings are to be connected in the enterprise campus. Backbone switches reduce the number of connections between the distribution layer switches and simplify the integration of enterprise campus modules (such as the Server Farm and Edge Distribution modules). Campus backbone switches are Layer 2 and Layer 3 switches that are primarily focused on wire-speed forwarding on all interfaces. Backbone switches are differentiated by the level of performance achieved per port rather than by high port densities.

When implementing the campus backbone, the first issue the network designer must solve is the switching mechanism—and consequently, the entire campus backbone design (Layer 2, Layer 3, or mixed Layer 2/Layer 3). Other issues to consider include the following:

  • The Layer 2/Layer 3 performance needed in the campus network's backbone.

  • The number of high-capacity ports for distribution layer aggregation and for connection to other campus modules, such as the Server Farm or Edge Distribution.

  • Redundancy requirements. To provide adequate redundancy, at least two separate switches (ideally located in different buildings) must be deployed.

The following sections discuss different campus backbone designs.

Layer 2 Campus Backbone Design

The simplest Layer 2-based backbone consists of a single Layer 2 switch that represents a single VLAN, with a star topology toward distribution layer switches. A single IP subnet is used in the backbone, and each distribution switch routes traffic across the backbone subnet. In this case, no loops exist, STP does not put any links in blocking mode, and STP convergence does not affect the backbone.

Figure 4-15 illustrates another Layer 2 campus backbone that has two switches for backbone redundancy and a single VLAN configured per switch.

Figure 4-15 Single VLAN Layer 2 Campus Backbone Design

To prevent STP loops in the design in Figure 4-15, the distribution switch links to the backbone must be defined as routed interfaces (possible because the distribution switches are Layer 3 switches), not as VLAN trunks. This solution can lead to problems resulting from the many Layer 3 adjacencies between the routers attached to the Layer 2 backbone, especially when a large number of routers is involved.
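On a Layer 3 Catalyst distribution switch, converting the backbone uplink from a VLAN trunk to a routed interface is a short change, sketched below with assumed interface and addressing values.

  interface GigabitEthernet0/1
   ! Make this a routed port rather than a switched (trunk) port,
   ! so no STP runs toward the Layer 2 backbone over this link.
   no switchport
   ip address 10.0.0.2 255.255.255.0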

NOTE

One of the additional drawbacks of a Layer 2-switched backbone is the lack of mechanisms to efficiently handle broadcast and multicast frames—an entire backbone is a single broadcast domain. Although the broadcast/multicast suppression feature can prevent the flood of such packets, this traffic increases CPU utilization on network devices and consumes available bandwidth in the backbone network.

Split Layer 2 Campus Backbone Design

You can implement an alternative solution that uses Layer 2 in a backbone with two VLAN domains, each on one switch but without a connection between the switches. Figure 4-16 illustrates this solution, which is known as a split Layer 2 backbone.

Figure 4-16 Split Layer 2 Campus Backbone Design

The advantage of this design is the two equal-cost paths across the backbone, which provide for fast convergence and possible load sharing.

Although the design increases high availability, it still suffers from the usual Layer 2 problems with inefficient handling of broadcast and multicast frames. In the particular case shown in Figure 4-16, the broadcast domain is limited to a single switch (that has one VLAN).

Layer 3 Campus Backbone Design

For large enterprise networks, a backbone consisting of one or two broadcast domains is not the recommended solution. As illustrated in Figure 4-17, the most flexible and scalable campus backbone consists of Layer 3 switches.

Figure 4-17 Layer 3 Campus Backbone Design

Layer 3-switched campus backbones provide several improvements over the Layer 2 backbone, including the following:

  • A reduced number of connections between Layer 3 switches. Each Layer 3 distribution switch (router) connects to only one Layer 3 campus backbone switch. This implementation simplifies any-to-any connectivity between distribution and backbone switches.

  • Flexible topology without any spanning-tree loops. There is no Layer 2 switching in the backbone or on the distribution links to the backbone because all links are routed links. Arbitrary topologies are supported because of the routing protocol used in the backbone.

  • Multicast and broadcast control in the backbone.

  • Scalable to an arbitrarily large size.

  • Better support for intelligent network services due to Layer 3 support in the backbone switches.

One of the main considerations when using Layer 3 backbone switches is Layer 3 switching performance. Layer 3 switching requires more sophisticated devices for high-speed packet routing. Modern Layer 3 switches support routing in hardware, although the hardware might not support all features. If the hardware does not support a selected feature, the feature must be performed in software, which can dramatically reduce forwarding performance. For example, QoS and access list tables might not be processed in hardware if they have too many entries, thereby degrading switch performance.

Dual-path Layer 3 Campus Backbone Design

As illustrated in Figure 4-18, dual links to the backbone are usually deployed from each distribution layer switch to provide redundancy and load sharing in the Layer 3-switched campus backbone.

Figure 4-18 Dual-path Layer 3 Campus Backbone Design

This design's main advantage is that each distribution layer switch maintains two equal-cost paths to every destination network. Thus, recovery from any link failure is fast, and load sharing across the paths yields higher throughput in the backbone.
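As a sketch of how the two equal-cost paths arise, assume OSPF running on two routed uplinks from a distribution switch; OSPF installs both routes by default (up to four equal-cost paths), which gives fast failover and load sharing. All numbering is illustrative.

  interface GigabitEthernet0/1
   no switchport
   ip address 10.255.1.1 255.255.255.252
  !
  interface GigabitEthernet0/2
   no switchport
   ip address 10.255.2.1 255.255.255.252
  !
  router ospf 1
   ! Both backbone uplinks are in area 0; equal-cost routes through
   ! them are installed and used for load sharing automatically.
   network 10.255.0.0 0.0.255.255 area 0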

The core switches should deliver high-performance, multilayer switching solutions for an enterprise campus. They should also address requirements for the following:

  • Gigabit speeds

  • Data and voice integration

  • LAN/WAN/MAN convergence

  • Scalability

  • High availability

  • Intelligent multilayer switching in backbone/distribution and server aggregation environments

NOTE

In some situations, the campus backbone can be implemented as a mixture of Layer 2 and Layer 3 designs. Special requirements, such as the need for auxiliary VLANs for VoIP traffic and private VLANs for Server Farms, can influence the design decision.

The auxiliary VLAN feature allows IP phones to be placed into their own VLAN without any end-user intervention.

On the other hand, the private VLAN feature simplifies Server Farm designs in which the servers are separated and have no need to communicate among themselves (such as in a hosting services implementation). These servers can be placed in a private VLAN with proper port assignments on the switches to ensure that the servers do not communicate among themselves while still maintaining communication with the external world. (More information on private VLANs is available in Securing Networks with Private VLANs and VLAN Access Control Lists, at http://www.cisco.com/en/US/products/hw/switches/ps700/products_tech_note09186a008013565f.shtml.)
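The following sketch shows both features on Catalyst IOS switches that support them; the VLAN and interface numbers are assumptions. The first port carries a data VLAN plus an auxiliary (voice) VLAN for an attached IP phone; the second is a private VLAN host port for an isolated server.

  ! Access port with an auxiliary VLAN for VoIP.
  interface FastEthernet0/3
   switchport mode access
   switchport access vlan 10
   switchport voice vlan 110
  !
  ! Private VLANs: servers in isolated VLAN 201 cannot reach each
  ! other but can still reach the promiscuous gateway port.
  vlan 201
   private-vlan isolated
  vlan 200
   private-vlan primary
   private-vlan association 201
  !
  interface FastEthernet0/10
   switchport mode private-vlan host
   switchport private-vlan host-association 200 201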

Network Management Module Integration

Another consideration associated with the campus backbone is the question of network management module integration. Although a campus-wide management VLAN was used in the past, this approach has been replaced by the Layer 3 switching approach, in which the Network Management module is on its own subnet and its traffic is routed across the network.

Server Placement

Within a campus network, servers may be placed locally in a building access module, a building distribution module, or a separate Server Farm module. Servers also have numerous physical connectivity options. This section discusses these topics.

Local Server in a Building Access Module

If a server is local to a certain workgroup that corresponds to one VLAN and all workgroup members and the server are attached to an access layer switch, most of the traffic to the server is local to the workgroup. This scenario follows the conventional 80/20 rule for campus traffic distribution. If required, an access list at the distribution module switch could hide these servers from the enterprise.

Server in a Building Distribution Module

In some mid-size networks, a network designer can also attach servers to distribution switches. The designer can define these servers as building-level servers that communicate with clients in different VLANs but that are still within the same physical building. A network designer can create a direct Layer 2-switched path between a server and clients in a VLAN in two ways:

  • With multiple network interface cards (NICs), making a direct attachment to each VLAN.

  • With a trunk connection or a separate VLAN on the distribution switch for the common servers.

If required, the network designer can selectively hide servers from the rest of the enterprise by using an access list on the distribution layer switch.
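For instance, an access list such as the following sketch (with assumed addressing, where 10.1.0.0/16 is the building's address space and 10.1.20.10 is the building-level server) confines the server to its own building:

  ! Allow only hosts within the building to reach the server.
  access-list 120 permit ip 10.1.0.0 0.0.255.255 host 10.1.20.10
  access-list 120 deny   ip any host 10.1.20.10
  access-list 120 permit ip any any
  !
  interface Vlan20
   ip access-group 120 out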

Server Farm

Centralizing servers in an enterprise campus is a common practice. In some cases, the enterprise consolidates services into a single server. In other cases, servers are grouped at a data center for physical security and easier administration. These centralized servers are grouped into a Server Farm module.

Server Directly Attached to Backbone

The campus backbone generally transports traffic quickly, without any limitations. Servers in medium-sized networks can be connected directly to backbone switches, making the servers only one hop away from the users. However, controlled server access requires additional traffic control in the backbone. Policy-based (QoS and ACL) control for accessing the Server Farm is implemented in the Building Distribution or Edge Distribution modules.

Switches in the Server Farm Module

Larger enterprises place common servers in a Server Farm module and connect them to the backbone via multilayer distribution switches. Because of high traffic load, the servers are usually Fast Ethernet-attached, Fast EtherChannel-attached, or even Gigabit Ethernet-attached. Access lists at the Server Farm module's Layer 3 distribution switches implement the controlled access to these servers. Redundant distribution switches in a Server Farm module and solutions such as the Hot Standby Router Protocol (HSRP) provide fast failover. (Chapter 3 discusses HSRP.) The Server Farm module distribution switches also keep all server-to-server traffic off the backbone.
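A minimal HSRP sketch for the primary Server Farm distribution switch follows; the standby switch would use the same group with a lower priority. The VLAN, addresses, and priority value are illustrative assumptions.

  interface Vlan100
   ip address 10.100.0.2 255.255.255.0
   ! Servers use the virtual address 10.100.0.1 as their gateway.
   standby 1 ip 10.100.0.1
   standby 1 priority 110
   standby 1 preempt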

Rather than being installed on only one server, modern applications are distributed among several servers. This approach improves application availability and responsiveness. Therefore, placing servers in a common group (in the Server Farm module) and using intelligent Layer 3 switches provides the applications and servers with the required scalability, availability, responsiveness, throughput, and security.

Server Farm Design Guidelines

As shown in Figure 4-19, you can implement the Server Farm as a high-capacity building block attached to the campus backbone by using a modular design approach. One of the main concerns regarding the Server Farm is that it receives the majority of the traffic from the entire campus. Random frame drops can result because the uplink ports on switches are frequently oversubscribed. To guarantee that no random frame drops exist for business-critical applications, the network designer must apply QoS mechanisms to the server links.

NOTE

Switch oversubscription occurs when some switches allow more ports (bandwidth) in the chassis than the switch's hardware is capable of transferring through its internal structure.

Figure 4-19 Server Farm Design

You must design the Server Farm switches with less oversubscription than the switches in the building access or distribution modules. For example, if the campus consists of a few distribution modules that are connected to the backbone with Fast Ethernet, you should attach the Server Farm module to the backbone with either Gigabit Ethernet or multiple Fast Ethernet links.

The switch performance and the bandwidth of the link from the Server Farm to the backbone are not the only considerations. You must also evaluate the server's capabilities. Although server manufacturers support a variety of NIC connection rates (such as Gigabit Ethernet), the underlying network operating system might not be able to transmit at maximum capacity. As such, oversubscription ratios can be raised, thereby reducing the Server Farm's overall cost.

Server Connectivity Options

Servers can be connected in several different ways. For example, a server can attach by one or two Fast Ethernet connections. If the server is dual-attached, one interface can be active while the other is in hot standby. Installing multiple single-port or multiport NICs in the servers allows dual-homing using various modes, resulting in higher server availability.

Within the Server Farm, multiple VLANs can be used to create multiple policy domains as required. If one particular server has a unique access policy, a network designer can create a unique VLAN and subnet for that server. If a group of servers has a common access policy, the entire group can be placed in a common VLAN and subnet. Access control lists can be applied on the interfaces of the Layer 3 switches.
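For example, a common access policy for a group of web servers in one VLAN could be sketched as follows, with the subnet, ports, and VLAN chosen purely for illustration:

  ! Permit only web traffic toward the server subnet; log the rest.
  access-list 110 permit tcp any 10.100.0.0 0.0.0.255 eq www
  access-list 110 permit tcp any 10.100.0.0 0.0.0.255 eq 443
  access-list 110 deny   ip any any log
  !
  interface Vlan100
   ip access-group 110 out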

NOTE

Several other solutions improve server responsiveness and evenly distribute the load to them. Content switches provide a robust front end for Server Farms by performing functions such as load balancing of user requests across Server Farms to achieve optimal performance, scalability, and content availability. See Chapter 3 for more information on content switching.

The Effect of Applications on Switch Performance

Server Farm design requires that you consider the average frequency at which packets are generated and the packets' average size. These parameters are based on the enterprise applications' traffic patterns and number of users of the applications.

Interactive applications, such as conferencing, tend to generate high packet rates with small packet sizes. In terms of application bandwidth, the packets-per-second (pps) limitation of the Layer 3 switches might be more critical than the throughput. Applications that involve large movements of data, such as file repositories, transmit a high percentage of full-length packets. For these applications, uplink bandwidth and oversubscription ratios become key factors in the overall design. Actual switching capacities and bandwidths vary based on the mix of applications.

Designing Connectivity to the Remainder of the Enterprise Network

The Enterprise Campus functional area's Edge Distribution module connects the Enterprise Campus with the Enterprise Edge functional area.

Recall that the Enterprise Edge functional area comprises the following four modules:

  • E-commerce module—Enables enterprises to successfully deploy e-commerce applications.

  • Internet Connectivity module—Provides internal users with connectivity to Internet services.

  • Virtual Private Network (VPN)/Remote Access module—Terminates VPN traffic that is forwarded by the Internet Connectivity module, remote users, and remote sites. This module also terminates dial-in connections that are received through the Public Switched Telephone Network (PSTN).

  • WAN module—Uses different WAN technologies for routing the traffic between remote sites and the central site.

The Edge Distribution module filters and routes traffic into the core (the campus backbone). Layer 3 switches are the key devices that aggregate edge connectivity and provide advanced services. The switching speed is not as important as security in the Edge Distribution module, which isolates and controls access to servers that are located in the Enterprise Edge modules (for example, servers in an E-commerce module or public servers in an Internet Connectivity module). These servers are closer to the external users and therefore introduce a higher risk to the internal campus. To protect the core from threats, the switches in the Edge Distribution module must protect the campus against the following attacks:

  • Unauthorized access—All connections from the Edge Distribution module that pass through the campus backbone must be verified against the user and the user's rights. Filtering mechanisms must provide granular control over specific edge subnets and their ability to reach areas within the campus.

  • IP spoofing—IP spoofing is a hacker technique for impersonating another user's identity by using their IP address. Denial of Service (DoS) attacks use the IP spoofing technique to generate server requests using the stolen IP address as a source. The server does not respond to the original source, but it does respond to the stolen IP address. DoS attacks are a problem because they are difficult to detect and defend against; attackers can use a valid internal IP address for the source address of IP packets that produce the attack. A significant amount of this type of traffic renders the attacked server unavailable and interrupts business.

  • Network reconnaissance—Network reconnaissance (or discovery) sends packets into the network and collects responses from the network devices. These responses provide basic information about the internal network topology. Network intruders use this approach to find out about network devices and the services that run on them. Therefore, filtering traffic from network reconnaissance mechanisms before it enters the enterprise network can be crucial. Traffic that is not essential must be limited to prevent a hacker from performing network reconnaissance.

  • Packet sniffers—Packet sniffers, or devices that monitor and capture the traffic in the network, represent another threat. Packets belonging to the same broadcast domain are vulnerable to capture by packet sniffers, especially if the packets are broadcast or multicast. Because most of the traffic to and from the Edge Distribution module is business critical, corporations cannot afford this type of security lapse. Layer 3 switches can prevent such an occurrence.
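As one hedged example of such filtering on an Edge Distribution switch, the sketch below combines an ingress access list with unicast reverse path forwarding (uRPF, which requires CEF). It assumes the campus uses 10.0.0.0/8 internally, so inbound packets claiming those source addresses are spoofed; all numbering is illustrative.

  ip cef
  !
  ! Drop inbound packets from the Enterprise Edge that claim an
  ! internal campus source address (assumed to be 10.0.0.0/8).
  access-list 130 deny   ip 10.0.0.0 0.255.255.255 any
  access-list 130 permit ip any any
  !
  interface GigabitEthernet0/2
   ip access-group 130 in
   ! uRPF: discard packets whose source fails a reverse-route check.
   ip verify unicast reverse-path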

NOTE

Chapter 3 and Chapter 9, "Evaluating Security Solutions for the Network," further discuss security threats.

With the correct selection of network edge switches, all connectivity and security requirements can be met. Basic requirements, such as the need for ACLs, call for a switch that is Layer 3-aware. Only switches that provide advanced features such as intrusion detection can satisfy the requirements for tighter restrictions.

Design Guidelines for the Edge Distribution Module

Figure 4-20 illustrates an example of Edge Distribution design. In terms of overall functionality, the campus Edge Distribution module is similar to the Campus Building Distribution submodule in some respects. Although both modules use access control to filter traffic, the Edge Distribution module can rely on Enterprise Edge modules to perform additional security functions to some degree. Both modules use Layer 3 switching to achieve high performance, but the Edge Distribution module can offer additional security functions because its performance requirements are not as high. The Edge Distribution module provides the last line of defense for all traffic that is destined for the Campus Infrastructure module. This line of defense includes mitigation of spoofed packets, mitigation of erroneous routing updates, and provisions for network layer access control.

Alternatively, the Edge Distribution module can be combined with the Campus Backbone submodule if performance requirements are not as stringent; this is similar to combining the Server Farm module and Campus Building Distribution submodule.

Security can be implemented in this scenario by using intrusion detection line cards in the Layer 3 switches. (Network Intrusion Detection Systems [NIDSs] reduce the need for external appliances at the points where the critical edge modules connect to the campus; performance reasons can dictate that dedicated intrusion detection is implemented in the various edge modules, as opposed to simply the Edge Distribution module.)

Figure 4-20 Edge Distribution Design Example
