
CCNP TSHOOT Certification Guide: Understanding the Basics of QoS


  1. What Is QoS?
  2. DiffServ Mechanisms
  3. Configuration
  4. Conclusion

Article Description

Kevin Wallace, the author of CCNP TSHOOT 642-832 Official Certification Guide, discusses basic quality of service (QoS) mechanisms on Cisco routers, which are important to understand when troubleshooting VoIP issues.

From the author of

CCNP TSHOOT 642-832 Official Certification Guide


DiffServ Mechanisms

Cisco IOS routers support multiple DiffServ QoS mechanisms, which fall into the following categories:

  • Classification. Classification is the ability to recognize traffic types, which can be performed in a variety of ways. For example, an access list could be used to recognize Telnet traffic by matching TCP port 23. Another popular classification mechanism is Network-Based Application Recognition (NBAR), which can recognize the signatures of many well-known applications.
  • Marking. Once traffic is classified, it can be marked by altering bits. One such marking, occurring at Layer 2, is Class of Service (CoS). A CoS marking uses three bits, and therefore has a range of values from 0–7. Cisco tells us that values of 6 and 7 are reserved for network use, so markings for end-user traffic should go no higher than 5. In fact, Cisco recommends marking voice media frames at Layer 2 with a CoS value of 5, which a Cisco IP phone does by default.
  • IP Precedence markings and Differentiated Services Code Point (DSCP) markings are both Layer 3 markings. IP Precedence uses the three leftmost bits in an IPv4 header's Type of Service (ToS) byte. Like CoS, with three bits at its disposal, IP Precedence markings have values in the range of 0–7. Again, Cisco cautions us that values of 6 and 7 are reserved, so you should mark voice media with an IP Precedence value of 5. DSCP markings use the six leftmost bits of a ToS byte, giving us 64 potential values (in the range 0–63). Rather than allowing haphazard value assignments, the IETF has identified and named a collection of preselected values, called per-hop behaviors (PHBs).

    • Default. The default PHB has a DSCP decimal value of 0 and is often used for best-effort traffic.
    • Class Selector (CS). Class Selector PHBs range from CS1–CS7, where the number represents the equivalent IP Precedence value of that marking. In fact, CS PHBs provide pure backward compatibility with IP Precedence (useful in a network where some routers use IP Precedence markings and others use DSCP markings), because the fourth, fifth, and sixth bit positions in the ToS byte are zeroes, just as they would be in an IP Precedence marking.
    • Assured Forwarding (AF). The Assured Forwarding PHBs are divided into four classes: AF1, AF2, AF3, and AF4, where the number represents the IP Precedence value of the marking. Each of these classes has three values, however, giving us a grand total of 12 AF values: AF11, AF12, AF13, AF21, AF22, AF23, AF31, AF32, AF33, AF41, AF42, and AF43. The second number after AF refers to the drop probability of the packet when a router's queue starts to become congested: 3 reflects high drop probability, 2 indicates medium drop probability, and 1 indicates low drop probability.
    • Expedited Forwarding (EF). The Expedited Forwarding PHB is the marking Cisco recommends for use when marking voice media packets at Layer 3. The decimal equivalent value of the EF marking is 46.
  • Congestion management. Classifying and marking traffic doesn't change the behavior of the traffic; we need additional QoS mechanisms to examine those markings and make a decision based on those markings. One such QoS mechanism is congestion management, also known as queuing. Imagine that a router is receiving traffic from a LAN connection at a rate of 100 Mbps, and then trying to send out that traffic on a WAN link that has only 512 kbps of bandwidth—obviously, a big speed mismatch.
    To solve this problem, the router stores those packets temporarily in the output interface's queue. While packets are in that queue, various queuing algorithms can determine the order in which packets are emptied from the queue. The two most popular queuing mechanisms used on Cisco routers today are Class-Based Weighted Fair Queuing (CB-WFQ) and Low Latency Queuing (LLQ). CB-WFQ can allocate minimum amounts of bandwidth to different classes of traffic. LLQ takes CB-WFQ a step further by adding a priority queue. Traffic (such as voice) placed in the priority queue is sent first, up to a limit. For example, your LLQ configuration might allocate 128 kbps of bandwidth for voice traffic, allowing voice traffic to be prioritized ahead of other traffic types and sent first. However, the voice traffic cannot consume more than the allotted 128 kbps of bandwidth, which prevents the priority queue from starving out other traffic types.

  • Congestion avoidance. In connection with the Assured Forwarding collection of PHBs, I mentioned the use of a drop probability number. The mechanism that actually makes the dropping decision is Weighted Random Early Detection (WRED), which is a congestion-avoidance mechanism.
  • Traffic conditioning. Instead of guaranteeing a minimum bandwidth amount for a class of traffic, as CB-WFQ does, traffic conditioners can set a "speed limit" on specified traffic types. The two main approaches to traffic conditioning are policing and shaping. Both policing and shaping can limit the amount of bandwidth consumed by a class of traffic. By default, policing drops traffic attempting to exceed the configured speed limit, whereas shaping buffers the excess traffic and then transmits that traffic when bandwidth becomes available.
  • Link efficiency. Link efficiency mechanisms attempt to make the most efficient use of relatively limited WAN bandwidth. The two most common link efficiency approaches are compression and link fragmentation and interleaving (LFI). Compressing a packet's header and/or its payload allows the information included in the packet to be sent using less bandwidth. As a result, more information can be sent over a link without increasing the link's bandwidth. It's almost like getting free bandwidth (assuming that the router performing the compression has sufficient processing power to run the compression algorithms efficiently).
    Rather than "squeezing" information, LFI fragments large packets (such as FTP packets) into smaller pieces that can be sent out of a serial interface faster than their larger counterparts could. LFI can then interleave smaller, latency-sensitive packets (such as voice packets) among the fragmented data packets, much like shuffling a deck of cards. As a result, latency-sensitive voice packets can exit a serial interface sooner than they would if they had to wait for a large data packet to exit.
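As a concrete sketch of the classification mechanisms described above, the following IOS configuration (the class map names and ACL number are hypothetical) classifies Telnet traffic with an access list matching TCP port 23, and classifies web traffic with NBAR:

```
! Hypothetical names; adjust to your own conventions
access-list 100 permit tcp any any eq 23
!
class-map match-all TELNET
 match access-group 100
!
class-map match-all WEB
 match protocol http
```

The match protocol command invokes NBAR, which recognizes application signatures rather than relying solely on port numbers.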
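Once classified, traffic can be marked in a policy map. This sketch assumes a class map named VOICE already matches voice media packets (the policy, class, and interface names are examples) and applies the recommended DSCP EF marking:

```
policy-map MARK-LAN
 class VOICE
  set dscp ef           ! voice media marked EF (decimal 46)
 class class-default
  set dscp default      ! everything else is best-effort
!
interface FastEthernet0/0
 service-policy input MARK-LAN
```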
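The 128 kbps LLQ example above might be configured as follows (policy and class names are hypothetical, and the rates assume a slow WAN link):

```
policy-map WAN-EDGE
 class VOICE
  priority 128          ! LLQ: strict priority, capped at 128 kbps
 class BULK
  bandwidth 128         ! CB-WFQ: minimum bandwidth guarantee
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output WAN-EDGE
```

The priority command both prioritizes the class and polices it to the stated rate during congestion, which is what keeps voice from starving the other classes.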
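WRED is typically enabled per class within a CB-WFQ policy, where it begins dropping packets that carry higher AF drop-probability values more aggressively as the queue fills. A minimal sketch (the names are examples):

```
policy-map WAN-EDGE
 class DATA
  bandwidth 256
  random-detect dscp-based   ! WRED drop decisions keyed on DSCP markings
```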
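Policing and shaping are also configured per class. The following sketch (names and rates are hypothetical) polices one class to 256 kbps and drops the excess, while a second policy shapes all traffic to 512 kbps and buffers the excess instead:

```
policy-map POLICE-FTP
 class FTP
  police 256000 conform-action transmit exceed-action drop
!
policy-map SHAPE-WAN
 class class-default
  shape average 512000
```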
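One common way to configure LFI is with Multilink PPP (MLP) on a slow serial link. The fragment delay tells the router to size fragments so that no single fragment occupies the interface for more than the stated number of milliseconds. A sketch follows (the address and interface numbers are examples, and the exact fragment command syntax varies slightly by IOS version):

```
interface Multilink1
 ip address 192.168.1.1 255.255.255.252
 ppp multilink
 ppp multilink fragment delay 10   ! target 10 ms serialization per fragment
 ppp multilink interleave          ! interleave voice among the fragments
 ppp multilink group 1
!
interface Serial0/0
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```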
