Cisco IP Telephony Flash Cards: Weighted Random Early Detection (WRED)

Chapter Description

This chapter provides an overview of Weighted Random Early Detection (WRED) for Cisco IP Telephony, including Question & Answer flash cards to help you prepare for the Cisco IP Telephony Exam.

QoS Design Guidelines

This section reviews, in a design context, many of the concepts presented earlier in these Quick Reference Sheets. For example, voice, data, and video applications each have unique design guidelines. These guidelines are examined in this section.

Classification Review

As a review, recall how you performed classification and marking early on in these Quick Reference Sheets. Using the three-step MQC approach, you saw how to classify traffic by such characteristics as an incoming interface, an access-list match, or an NBAR protocol match. The Network Based Application Recognition (NBAR) classification mechanism offers the most granular classification, because NBAR can look beyond Layer 3 or Layer 4 information, all the way up to Layer 7.

Marking could then be done at Layer 2 or Layer 3 using markings such as CoS (at Layer 2), IP Precedence (at Layer 3), or DSCP (at Layer 3).
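The three-step MQC approach to classification and marking described above can be sketched in IOS configuration as follows (the class and policy names are illustrative, not from the source):

```
! Step 1: Classify traffic with a class-map.
! NBAR (match protocol) can inspect up to Layer 7.
class-map match-any VOICE
 match protocol rtp audio

! Step 2: Mark the classified traffic in a policy-map.
! Here the Layer 3 DSCP value is set to EF (Expedited Forwarding).
policy-map MARK-TRAFFIC
 class VOICE
  set ip dscp ef

! Step 3: Apply the policy to an interface.
interface FastEthernet0/0
 service-policy input MARK-TRAFFIC
```

Exact command availability varies by IOS version and platform; this is a minimal sketch of the class-map / policy-map / service-policy pattern.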

Figure 36

Queuing Review

Marking traffic alone does not change the behavior of the traffic. To influence the traffic’s behavior, you can use the following other QoS mechanisms:

  • Queuing (for example, LLQ, CB-WFQ, and WRR)

  • Congestion avoidance (for example, WRED and ECN)

  • Compression (for example, TCP and RTP CB-Header Compression)

  • Traffic conditioning (for example, CB-Policing and CB-Shaping)

  • Link efficiency (for example, Link Fragmentation and Interleaving (LFI) mechanisms such as MLP, and compression mechanisms such as RTP header compression)

With the coverage of each of these QoS mechanisms, you can now select the appropriate tool or tools for a specific application. For example, if you wanted to give specific traffic priority treatment, you could use LLQ on a router or WRR on certain Catalyst switch platforms.
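For example, an LLQ configuration on a router might look like the following sketch (class name and bandwidth value are assumed for illustration):

```
! Classify voice traffic (assumed class definition)
class-map match-any VOICE
 match ip dscp ef

! LLQ: the "priority" command creates a strict-priority
! queue, here capped at 256 kbps so voice cannot starve
! other traffic classes
policy-map WAN-EDGE
 class VOICE
  priority 256
 class class-default
  fair-queue

interface Serial0/0
 service-policy output WAN-EDGE
```

The priority queue is policed to its configured rate during congestion, which is what distinguishes LLQ from a simple priority-queuing scheme.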

On a lower-speed WAN link, where bandwidth is scarce, you might choose to use TCP or RTP CB-Header Compression. In addition, you can enable the appropriate LFI mechanism for the media that you are working with, to reduce the serialization delay that large packets impose on smaller, delay-sensitive payloads.
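On a PPP WAN link, the MLP interleaving and RTP header-compression features mentioned above might be enabled as in this sketch (addressing and the 10 ms fragment-delay target are illustrative assumptions):

```
! Multilink PPP with LFI: large packets are fragmented so
! small voice packets can be interleaved between fragments
interface Multilink1
 ip address 10.1.1.1 255.255.255.252
 ppp multilink
 ppp multilink fragment-delay 10
 ppp multilink interleave
 ip rtp header-compression
 ppp multilink group 1

! Bind the physical serial link to the multilink bundle
interface Serial0/0
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```

The fragment-delay value tells IOS to size fragments so that no fragment takes more than approximately 10 ms to serialize at the link speed; command syntax varies somewhat across IOS releases.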

Cisco recommends that you perform classification and marking as close to the source as possible. However, you typically do not want to trust a user device’s markings. Therefore, you establish a “trust boundary,” where you determine the component (for example, a switch or an IP phone) that you trust to assign appropriate markings to traffic. The scheduling of packets (for example, queuing) can be performed on any device (for example, a switch or router) that supports packet scheduling. However, link efficiency mechanisms are typically deployed on WAN links, where bandwidth conservation might be more of an issue.
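On some Catalyst switch platforms, the trust boundary described above can be extended to a Cisco IP Phone with a configuration along these lines (interface and voice VLAN numbers are assumptions; syntax differs by platform):

```
! Trust the CoS markings on this port only when a
! Cisco IP Phone is detected via CDP; otherwise the
! port falls back to an untrusted state
interface FastEthernet0/5
 switchport voice vlan 110
 mls qos trust device cisco-phone
 mls qos trust cos
```

This way a PC plugged in behind (or instead of) the phone cannot have its self-assigned markings honored by the switch.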

Application-Specific QoS

Some of your applications have specific QoS guidelines that you should adhere to in the design process. For example, voice and interactive video traffic are latency sensitive and might require priority treatment. Following are a few design rules of thumb for voice and interactive video traffic:

  • One-way delay of no more than 150 ms

  • Packet loss of no more than 1 percent

  • Jitter of no more than 30 ms

  • Should be given priority treatment

Data applications vary widely in their needs. Therefore, each application on the network should be placed into a specific traffic category, where that application is sharing a policy with other applications that have similar QoS requirements.

QoS in a Service Provider Environment

As a final consideration in these QoS Quick Reference Sheets, consider some of the questions to ask when selecting a service provider. When negotiating with a service provider, you will probably put your agreement in writing in a document called a Service Level Agreement (SLA). An SLA stipulates specific service parameters that the service provider must adhere to a certain percentage of the time. For example, the SLA might state that 90 percent of the time, your traffic will experience no more than 150 ms of one-way delay. Parameters that are frequently specified in an SLA include the following:

  • Latency

  • Packet drops

  • Variable delay

  • Uptime

  • Bandwidth availability

If your service provider only provides Layer 2 service over, for example, a Frame Relay cloud, the service provider will not be inspecting your Layer 3 markings. Therefore, it would be up to you as the customer to apply QoS features (for example, LLQ, CB-Header Compression, and LFI) to your frames before they enter the service provider’s cloud.

However, some service providers do have Layer 3 awareness. In such a situation, the SLA can specify how your traffic is placed in the service provider’s defined traffic classes, perhaps based on your DSCP markings. Typically, service providers categorize your traffic into three to five classes.
