
Quality of Service Design Overview

Chapter Description

This chapter provides an overview of the QoS design and deployment process. This process requires that the business-level objectives of the QoS implementation be defined clearly and that the service-level requirements of applications requiring preferential or deferential treatment be analyzed.

These enterprise applications with unique QoS requirements are discussed in this chapter:

  • Voice

  • Call-Signaling

  • Interactive-Video

  • Streaming-Video

  • Best-Effort Data

  • Bulk Data

  • Transactional Data

  • Mission-Critical Data

  • IP Routing traffic

  • Network-Management traffic

  • Scavenger traffic

Additionally, key QoS design and deployment best practices that can simplify and expedite QoS implementations are presented, including these:

  • Classification and marking principles

  • Policing and markdown principles

  • Queuing and dropping principles

  • DoS and worm mitigation principles

  • Deployment principles

More than just a working knowledge of QoS tools and syntax is needed to deploy end-to-end QoS in a holistic manner. First, it is vital to understand the service-level requirements of the various applications that require preferential (or deferential) treatment within the network. Additionally, a number of QoS design principles, shaped by extensive lab testing and customer deployments, can streamline a QoS deployment and increase the overall cohesiveness of service levels across multiple platforms.

This chapter overviews the QoS requirements of VoIP, Video (both Interactive-Video and Streaming-Video), and multiple classes of data. Within this discussion, the QoS requirements of the control plane (routing and management traffic) are considered. The Scavenger class is examined in more detail, and a strategy for mitigating DoS and worm attacks is presented.

Next, QoS design principles relating to classification, marking, policing, queuing, and deployment are discussed. These serve as guiding best practices in the design chapters to follow.

QoS Requirements of VoIP

VoIP deployments require the provisioning of explicit priority servicing for VoIP (bearer stream) traffic and a guaranteed bandwidth service for Call-Signaling traffic. These related classes are examined separately.

Voice (Bearer Traffic)

The following list summarizes the key QoS requirements and recommendations for voice (bearer traffic):

  • Voice traffic should be marked to DSCP EF per the QoS Baseline and RFC 3246.

  • Loss should be no more than 1 percent.

  • One-way latency (mouth to ear) should be no more than 150 ms.

  • Average one-way jitter should be targeted at less than 30 ms.

  • A range of 21 to 320 kbps of guaranteed priority bandwidth is required per call (depending on the sampling rate, the VoIP codec, and Layer 2 media overhead).

Voice quality is directly affected by all three QoS quality factors: loss, latency, and jitter.


Loss causes voice clipping and skips. Packet loss concealment (PLC) is a technique used to mask the effects of lost or discarded VoIP packets. The method of PLC used depends upon the type of codec. A simple method used by waveform codecs such as G.711 (PLC for G.711 is defined in G.711 Appendix I) is to replay the last received sample with increasing attenuation at each repeat; the waveform does not change much from one sample to the next. This technique can be effective at concealing the loss of up to 20 ms of samples.

The packetization interval determines the size of samples contained within a single packet. Assuming a 20-ms (default) packetization interval, the loss of two or more consecutive packets results in a noticeable degradation of voice quality. Therefore, assuming a random distribution of drops within a single voice flow, a drop rate of 1 percent in a voice stream would result in a loss that could not be concealed every 3 minutes, on average. A 0.25 percent drop rate would result in a loss that could not be concealed once every 53 minutes, on average.
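The arithmetic behind these figures can be sketched as follows, assuming independent, randomly distributed drops and a PLC algorithm that conceals any single 20-ms loss but not two consecutive losses:

```python
# Mean time between unconcealable losses in one voice flow. A loss is
# unconcealable when a packet AND its immediate successor are both dropped.

def unconcealable_loss_interval(drop_rate, pps=50):
    """Average seconds between two-consecutive-packet loss events."""
    # Probability that a given packet and the next are both dropped:
    p_pair = drop_rate ** 2
    # Expected unconcealable-loss events per second across the stream:
    events_per_second = pps * p_pair
    return 1 / events_per_second

print(unconcealable_loss_interval(0.01) / 60)    # ~3.3 minutes at 1% loss
print(unconcealable_loss_interval(0.0025) / 60)  # ~53.3 minutes at 0.25% loss
```

These results match the approximate 3-minute and 53-minute intervals cited above.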


A decision to use a 30-ms packetization interval, for a given probability of packet loss, could result in worse perceived call quality than for 20 ms because PLC could not effectively conceal the loss of a single packet.

Low-bit-rate, frame-based codecs, such as G.729 and G.723, use more sophisticated PLC techniques that can conceal up to 30 to 40 ms of loss with "tolerable" quality when the available history used for the interpolation is still relevant.

With frame-based codecs, the packetization interval determines the number of frames carried in a single packet. As with waveform-based codecs, if the packetization interval is greater than the loss that the PLC algorithm can interpolate for, PLC cannot effectively conceal the loss of a single packet.

VoIP networks typically are designed for very close to 0 percent VoIP packet loss, with the only actual packet loss being due to L2 bit errors or network failures.


Latency can cause voice quality degradation if it is excessive. The goal commonly used in designing networks to support VoIP is the target specified by ITU standard G.114 (which, incidentally, is currently under revision): This states that 150 ms of one-way, end-to-end (from mouth to ear) delay ensures user satisfaction for telephony applications. A design should apportion this budget to the various components of network delay (propagation delay through the backbone, scheduling delay because of congestion, and access link serialization delay) and service delay (because of VoIP gateway codec and dejitter buffer).

Figure 2-1 illustrates these various elements of VoIP latency (and jitter because some delay elements are variable).

Figure 2-1 Elements Affecting VoIP Latency and Jitter

If the end-to-end voice delay becomes too long, the conversation begins to sound like two parties talking over a satellite link or even a CB radio. The ITU G.114 states that a 150-ms one-way (mouth-to-ear) delay budget is acceptable for high voice quality, but lab testing has shown that there is a negligible difference in voice quality mean opinion scores (MOS) using networks built with 200-ms delay budgets. Thus, Cisco recommends designing to the ITU standard of 150 ms. If constraints exist and this delay target cannot be met, the delay boundary can be extended to 200 ms without significant impact on voice quality.


Certain organizations might view higher delays as acceptable, but the corresponding reduction in VoIP quality must be taken into account when making such design decisions.
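The apportioning exercise described above can be illustrated with a simple budget check. The component names echo the delay elements discussed in the text, but the individual millisecond values below are hypothetical examples for a single design, not recommendations:

```python
# Illustrative one-way delay budget check against the ITU G.114 target.
# Every component value here is an assumed example, not a recommendation.

G114_TARGET_MS = 150  # one-way, mouth-to-ear

delay_components_ms = {
    "codec (coding + packetization)": 25,
    "serialization (access links)": 10,
    "queuing/scheduling (congested hops)": 20,
    "propagation (backbone)": 30,
    "dejitter buffer (playout)": 50,
}

total = sum(delay_components_ms.values())
print(f"Total one-way delay: {total} ms "
      f"({'within' if total <= G114_TARGET_MS else 'exceeds'} "
      f"the {G114_TARGET_MS}-ms budget)")
```

If the sum exceeds the 150-ms target, the design must either reduce a component (for example, shrink serialization delay with a faster access link) or accept the extended 200-ms boundary.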


Jitter buffers (also known as playout buffers) are used to change asynchronous packet arrivals into a synchronous stream by turning variable network delays into constant delays at the destination end systems. The role of the jitter buffer is to trade off between delay and the probability of interrupted playout because of late packets. Late or out-of-order packets are discarded.

If the jitter buffer is set either arbitrarily large or arbitrarily small, it imposes unnecessary constraints on the characteristics of the network. A jitter buffer set too large adds to the end-to-end delay, meaning that less delay budget is available for the network; hence, the network needs to support a tighter delay target than practically necessary. If a jitter buffer is too small to accommodate the network jitter, buffer underflows or overflows can occur. In an underflow, the buffer is empty when the codec needs to play out a sample. In an overflow, the jitter buffer is already full and another packet arrives; that next packet cannot be enqueued in the jitter buffer. Both jitter buffer underflows and overflows cause voice quality degradation.

Adaptive jitter buffers aim to overcome these issues by dynamically tuning the jitter buffer size to the lowest acceptable value. Well-designed adaptive jitter buffer algorithms should not impose any unnecessary constraints on the network design by doing the following:

  • Instantly increasing the jitter buffer size to the current measured jitter value following a jitter buffer overflow

  • Slowly decreasing the jitter buffer size when the measured jitter is less than the current jitter buffer size

  • Using PLC to interpolate for the loss of a packet on a jitter buffer underflow
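The three behaviors above can be sketched schematically as follows. This is illustrative pseudologic under assumed names and an arbitrary decay step, not any vendor's actual algorithm:

```python
class AdaptiveJitterBuffer:
    """Schematic sketch of the three adaptive behaviors listed above.

    Sizes are in milliseconds; the decay step is an arbitrary example.
    """

    def __init__(self, initial_size_ms=40, decay_step_ms=1):
        self.size_ms = initial_size_ms
        self.decay_step_ms = decay_step_ms

    def on_overflow(self, measured_jitter_ms):
        # Instantly grow the buffer to the currently measured jitter
        # following an overflow.
        self.size_ms = max(self.size_ms, measured_jitter_ms)

    def on_interval(self, measured_jitter_ms):
        # Slowly shrink the buffer while measured jitter stays below
        # the current buffer size.
        if measured_jitter_ms < self.size_ms:
            self.size_ms = max(measured_jitter_ms,
                               self.size_ms - self.decay_step_ms)

    def on_underflow(self, codec):
        # On underflow, interpolate the missing sample with PLC rather
        # than playing out silence.
        return codec.conceal_lost_packet()
```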

When such adaptive jitter buffers are used—in theory—you can "engineer out" explicit considerations of jitter by accounting for worst-case per-hop delays. Advanced formulas can be used to arrive at network-specific design recommendations for jitter (based on maximum and minimum per-hop delays). Alternatively, because extensive lab testing has shown that voice quality degrades significantly when jitter consistently exceeds 30 ms, this 30 ms value can be used as a jitter target.

Because of its strict service-level requirements, VoIP is well suited to the expedited forwarding per-hop behavior, defined in RFC 3246 (formerly RFC 2598). Therefore, it should be marked to DSCP EF (46) and assigned strict-priority servicing at each node, regardless of whether such servicing is done in hardware (as in Catalyst switches through 1PxQyT queuing, discussed in more detail in Chapter 10, "Catalyst QoS Tools") or in software (as in Cisco IOS routers through LLQ, discussed in more detail in Chapter 5, "Congestion-Management Tools").

The bandwidth that VoIP streams consume (in bits per second) is calculated by adding the VoIP sample payload (in bytes) to the 40-byte IP, UDP, and RTP headers (assuming that cRTP is not in use), multiplying this value by 8 (to convert it to bits), and then multiplying again by the packetization rate (default of 50 packets per second).

Table 2-1 details the bandwidth per VoIP flow (both G.711 and G.729) at a default packetization rate of 50 packets per second (pps) and at a custom packetization rate of 33 pps. This does not include Layer 2 overhead and does not take into account any possible compression schemes, such as Compressed Real-Time Transport Protocol (cRTP, discussed in detail in Chapter 7, "Link-Specific Tools").

For example, assume a G.711 VoIP codec at the default packetization rate (50 pps). A new VoIP packet is generated every 20 ms (1 second / 50 pps). The payload of each VoIP packet is 160 bytes; with the IP, UDP, and RTP headers (20 + 8 + 12 bytes, respectively) included, this packet becomes 200 bytes in length. Converting bytes to bits requires multiplying by 8 and yields 1600 bits per packet. When multiplied by the total number of packets per second (50 pps), this arrives at the Layer 3 bandwidth requirement for uncompressed G.711 VoIP: 80 kbps. This example calculation corresponds to the first row of Table 2-1.
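This calculation can be expressed as a short sketch (header sizes per the text; cRTP not in use):

```python
# Layer 3 bandwidth of a single VoIP stream, without Layer 2 overhead.

IP_UDP_RTP_HEADER_BYTES = 20 + 8 + 12  # 40 bytes total, no cRTP

def voip_l3_bandwidth_kbps(payload_bytes, pps):
    """Payload plus headers, converted to bits, times the packet rate."""
    packet_bytes = payload_bytes + IP_UDP_RTP_HEADER_BYTES
    return packet_bytes * 8 * pps / 1000

print(voip_l3_bandwidth_kbps(160, 50))  # G.711, 20-ms interval -> 80.0 kbps
print(voip_l3_bandwidth_kbps(20, 50))   # G.729A, 20-ms interval -> 24.0 kbps
```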

Table 2-1 Voice Bandwidth (Without Layer 2 Overhead)

Codec  | Packetization Interval | Voice Payload in Bytes | Packets Per Second | Bandwidth Per Conversation
G.711  | 20 ms                  | 160                    | 50                 | 80 kbps
G.711  | 30 ms                  | 240                    | 33                 | 74 kbps
G.729A | 20 ms                  | 20                     | 50                 | 24 kbps
G.729A | 30 ms                  | 30                     | 33                 | 19 kbps


The Service Parameters menu in Cisco CallManager Administration can be used to adjust the packet rate. It is possible to configure the sampling rate above 30 ms, but this usually results in poor voice quality.

A more accurate method for provisioning VoIP is to include the Layer 2 overhead, which includes preambles, headers, flags, CRCs, and ATM cell padding. The amount of overhead per VoIP call depends on the Layer 2 media used:

  • 802.1Q Ethernet adds (up to) 32 bytes of Layer 2 overhead (when preambles are included).

  • Point-to-Point Protocol (PPP) adds 12 bytes of Layer 2 overhead.

  • Multilink PPP (MLP) adds 13 bytes of Layer 2 overhead.

  • Frame Relay adds 4 bytes of Layer 2 overhead; Frame Relay with FRF.12 adds 8 bytes.

  • ATM adds varying amounts of overhead, depending on the cell padding requirements.
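A minimal sketch of this more accurate provisioning, using the per-media overhead values listed above (ATM is omitted because its cell-padding overhead is not a fixed byte count; published figures may use different rounding conventions):

```python
# Approximate per-call bandwidth including fixed Layer 2 overhead, given
# the Layer 3 packet size (e.g., 200 bytes for G.711 at 50 pps).

L2_OVERHEAD_BYTES = {
    "802.1Q Ethernet": 32,
    "PPP": 12,
    "MLP": 13,
    "Frame Relay": 4,
    "Frame Relay with FRF.12": 8,
}

def voip_bandwidth_with_l2_kbps(l3_packet_bytes, pps, media):
    """Layer 3 packet plus Layer 2 overhead, in kbps."""
    return (l3_packet_bytes + L2_OVERHEAD_BYTES[media]) * 8 * pps / 1000

# G.711 at 50 pps over 802.1Q Ethernet: (200 + 32) * 8 * 50 = 92.8 kbps,
# which rounds to the 93 kbps shown in Table 2-2.
print(voip_bandwidth_with_l2_kbps(200, 50, "802.1Q Ethernet"))
```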

Table 2-2 shows more accurate bandwidth-provisioning guidelines for voice because it includes Layer 2 overhead.

Table 2-2 Voice Bandwidth (Including Layer 2 Overhead)

Bandwidth Consumption | 802.1Q Ethernet | PPP     | MLP     | Frame Relay with FRF.12 | ATM
G.711 at 50 pps       | 93 kbps         | 84 kbps | 86 kbps | 84 kbps                 | 106 kbps
G.711 at 33 pps       | 83 kbps         | 77 kbps | 78 kbps | 77 kbps                 | 84 kbps
G.729A at 50 pps      | 37 kbps         | 28 kbps | 30 kbps | 28 kbps                 | 43 kbps
G.729A at 33 pps      | 27 kbps         | 21 kbps | 22 kbps | 21 kbps                 | 28 kbps

Call-Signaling Traffic

The following list summarizes the key QoS requirements and recommendations for Call-Signaling traffic:

  • Call-Signaling traffic should be marked as DSCP CS3 per the QoS Baseline (during migration, it also can be marked with the legacy value of DSCP AF31).

  • 150 bps (plus Layer 2 overhead) per phone of guaranteed bandwidth is required for voice control traffic; more may be required, depending on the Call-Signaling protocol(s) in use.
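As a quick illustration of this per-phone figure (the phone count is an arbitrary example, and Layer 2 overhead is excluded):

```python
# Aggregate guaranteed bandwidth for the Call-Signaling class, using the
# 150-bps-per-phone figure above. Actual requirements may be higher,
# depending on the signaling protocol(s) in use.

SIGNALING_BPS_PER_PHONE = 150

def signaling_bandwidth_kbps(phone_count):
    return phone_count * SIGNALING_BPS_PER_PHONE / 1000

print(signaling_bandwidth_kbps(96))  # 14.4 kbps for 96 phones
```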

Originally, Cisco IP Telephony equipment marked Call-Signaling traffic to DSCP AF31. However, the assured forwarding classes, as defined in RFC 2597, were intended for flows that could be subject to markdown and aggressive dropping of marked-down values. Marking down and aggressively dropping Call-Signaling could result in noticeable delay to dial tone (DDT) and lengthy call-setup times, both of which generally translate into poor user experiences.

Therefore, the QoS Baseline changed the marking recommendation for Call-Signaling traffic to DSCP CS3 because Class-Selector code points, defined in RFC 2474, are not subject to such markdown and aggressive dropping as Assured Forwarding Per-Hop Behaviors are.

Some Cisco IP Telephony products already have begun transitioning to DSCP CS3 for Call-Signaling marking. In this interim period, both code points (CS3 and AF31) should be reserved for Call-Signaling marking until the transition is complete.

Most Cisco IP Telephony products use the Skinny Call-Control Protocol (SCCP) for Call-Signaling. Skinny is a relatively lightweight protocol and, as such, requires only a minimal amount of bandwidth protection (most of the Cisco large-scale lab testing was done by provisioning only 2 percent for Call-Signaling traffic over WAN and VPN links). However, newer versions of CallManager and SCCP have shown some "bloating" in this signaling protocol, so design recommendations have been adjusted to match (most examples in the design chapters that follow have been adjusted to allocate 5 percent for Call-Signaling traffic). This is a normal part of QoS evolution: As applications and protocols continue to evolve, so do the QoS designs required to accommodate them.

Other Call-Signaling protocols include (but are not limited to) H.225 and H.245, the Session Initiation Protocol (SIP), and the Media Gateway Control Protocol (MGCP). Each Call-Signaling protocol has unique TCP and UDP ports and traffic patterns that should be taken into account when provisioning QoS policies for them.
