QoS Essentials
QoS
QoS is a means to process, deliver, and manage real-time network traffic. Both the
telecommunications and data communications industries developed QoS techniques that operate at
layer 2 and layer 3 of the OSI model, respectively.
Two major components of QoS are:
• Data plane: Identifies packets eligible for various QoS levels and enforces the requirements.
The data plane QoS mechanisms include:
  - Buffer management: Controls access to internal temporary memory.
  - Packet scheduling: Controls access to bandwidth.
  - Packet classification, traffic shaping, and traffic policing: Identify the who, what, when,
    where, and how of QoS.
• Control plane: Determines if QoS level guarantees can be met. The control plane QoS
mechanisms include:
  - Basic management and control of network resources.
  - Signaling and configuration of data plane components.
  - Connection admission control to determine if a traffic flow may be accepted based on the
    current state of the network and required QoS parameters.
  - Resource reservation and dynamic QoS state management to set up and tear down
    reservations of data resources.
  - QoS or constraint-based routing to determine the best route through the network that
    supports the requested QoS requirements end-to-end.
  - Service provisioning, which exemplifies control plane implementation.
This section describes layer 3 QoS mechanisms, such as:
• ToS, CoS, and DiffServ
• QoS queuing and buffering
ToS, CoS, and DiffServ
The advent of voice and data integration accelerated and emphasized the need to prioritize traffic and
to design methods to meet this need. Later, the evolution from IP version 4 to IP version 6 redefined
packet classification from the IPv4 ToS field to the IPv6 CoS field. IPv4 ToS and IPv6 CoS specifications
delineate a method to indicate the priority of IP packets. You can use this method to implement
business policies and manage VoIP traffic. For example, instant messaging and Web browsing are
typically a low priority use of business infrastructures and, therefore, this traffic is assigned a low
priority. Packetized voice transmissions require high priority given the time sensitivity and integral
business function of enterprise voice communications and applications. IPv4 ToS and IPv6 CoS help
in the proper allocation of bandwidth and servicing of VoIP traffic based on service priority.
Remember, 802.1p is a popular layer 2 QoS tagging technique. 802.1p uses three bits in the layer 2
frame header to identify priority. In 1981, the IETF first defined IPv4 ToS in RFC 791 Internet
Protocol.
RFC 791 IPv4 Internet Datagram Header Format
The important QoS-related fields are Version and Type of Service. For IPv4, the 4-bit Version field
has a value of 4. The 8-bit ToS byte designates the QoS for this packet. Routers running IPv4
configured with QoS policies refer to this field to implement priority processing of traffic. RFC 1349
Type of Service in the Internet Protocol Suite describes the ToS field.
IPv4 ToS Byte
The layouts below show the various bit positions, and the tables that follow list the meaning of these
bits. The final revision, specified in RFC 1349, shows the ToS octet with a 4-bit ToS field.
IPv4 header layout (32 bits per row):

Version | IHL | Type of Service | Total Length
Identification | Flags | Fragment Offset
Time to Live | Protocol | Header Checksum
Source Address
Destination Address
Options | Padding
Bit #  | 0 1 2      | 3 4 5 6 | 7
Field  | PRECEDENCE | ToS     | MBZ
Bits | Service priority level | Purpose
Bits 0 – 2, Precedence bits:
  000  | Routine               | Set routine precedence (0)
  001  | Priority              | Set priority precedence (1)
  010  | Immediate             | Set immediate precedence (2)
  011  | Flash                 | Set flash precedence (3)
  100  | Flash-override        | Set flash-override precedence (4)
  101  | Critical              | Set critical precedence (5)
  110  | Internet              | Set internetwork control precedence (6)
  111  | Network               | Set network control precedence (7)
Bits 3 – 6, ToS bits:
  0000 | Normal                | Set normal priority
  0001 | Minimum monetary cost | Set minimum monetary cost as priority
  0010 | Maximum reliability   | Set maximum reliability as priority
  0100 | Maximum throughput    | Set maximum throughput as priority
  1000 | Minimum delay         | Set minimum delay as priority
RFC 1349 ToS Byte Classification
Bits | Service priority level | Purpose
Bits 0 – 2, Precedence bits:
  000 | Routine        | Set routine precedence (0)
  001 | Priority       | Set priority precedence (1)
  010 | Immediate      | Set immediate precedence (2)
  011 | Flash          | Set flash precedence (3)
  100 | Flash-override | Set flash-override precedence (4)
  101 | Critical       | Set critical precedence (5)
  110 | Internet       | Set internetwork control precedence (6)
  111 | Network        | Set network control precedence (7)
Bits 3 – 5, ToS bits:
  Bit# 3 | 0 = normal delay, 1 = low delay             | Set delay priority
  Bit# 4 | 0 = normal throughput, 1 = high throughput  | Set throughput priority
  Bit# 5 | 0 = normal reliability, 1 = high reliability | Set reliability priority
Bits 6 – 7 | Reserved | Reserved
RFC 791 ToS Byte Classification
The first three bits represent the IP Precedence value. The IP Precedence field qualifies ToS for the IP
datagram during transmission and is called a forwarding class. The three Precedence bits support eight
priority levels. The Precedence bit classification is the same in both tables. The industry standard term for
this QoS classification is IP Precedence, derived from the RFC 791 terminology. The IP Precedence
and ToS fields enable routers to read or tag VoIP packets and process the flow with appropriate QoS
based on configuration and policy.
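The precedence and ToS bit positions described above can be decoded with simple bit arithmetic. A minimal sketch in Python, where the field names mirror the tables and the sample octet 0xB0 is purely illustrative:

```python
# Sketch: decoding the RFC 791 ToS octet with bitwise operations.
# Bit positions follow the tables above: bits 0-2 are precedence (most
# significant), bit 3 delay, bit 4 throughput, bit 5 reliability.

PRECEDENCE_NAMES = {
    0: "Routine", 1: "Priority", 2: "Immediate", 3: "Flash",
    4: "Flash-override", 5: "Critical",
    6: "Internetwork control", 7: "Network control",
}

def decode_tos(tos: int) -> dict:
    """Split an 8-bit ToS octet into its RFC 791 subfields."""
    return {
        "precedence": PRECEDENCE_NAMES[(tos >> 5) & 0b111],
        "low_delay": bool(tos & 0b00010000),
        "high_throughput": bool(tos & 0b00001000),
        "high_reliability": bool(tos & 0b00000100),
    }

# Critical precedence (101) with low delay requested: 1011 0000 = 0xB0
print(decode_tos(0xB0))
```

A router applying a QoS policy performs the same extraction, then maps the precedence value to a forwarding treatment.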
According to RFC 791, network control precedence (value 7) designates the service value for datagrams
within a network. Internetwork control precedence (value 6) designates the service value for datagrams
used by gateway controllers. These two values always outrank application use of the lower precedence
values 0–5. When you configure a router, the design and implementation of QoS policies in your
network requires proper use of these fields.
Delay, throughput, and reliability fields are rudimentary qualifiers used to categorize traffic flows.
Some industry professionals use the term ToS to refer to these bits, instead of the complete ToS 8-bit
octet. Flagging one of these bits implied a compromise regarding the use of the other bits. Originally,
RFC 791 recommended that at most two bits be set. However, setting more than one bit proved
unmanageable. RFC 1349 redefined and clarified use of the ToS octet. However, there was never broad
agreement on, or use of, the fourth and last ToS bit prescribed in RFC 1349. The evolution of IP QoS
continued, resulting in DiffServ, which has made much of this debate moot.
For decades, a router identified traffic on the network and read or tagged IP Precedence and ToS bits
according to the policy configured on the router and application requests. IP Precedence and ToS came
in handy with the advent of voice and data integration. Meanwhile, ongoing research and development
led to the draft and release of IPv6.
RFC 2460 IPv6 Internet Datagram Header Format
The version field is set to a value of 6 for IPv6. Routers use the 8-bit Traffic Class field to identify
QoS parameters similar to those designated in IPv4. IPv6 uses 4 bits for real-time traffic such as voice
and 4 bits for non-real time traffic such as data transfer. More importantly, RFC 2460 specifications
require IPv6 service interfaces to allow upper-layer protocols to set a value for the Traffic Class field.
Routers may change these values based on the router configuration.
IPv6 header layout (32 bits per row):

Version | Traffic Class | Flow Label
Payload Length | Next Header | Hop Limit
Source Address (128 bits)
Destination Address (128 bits)
IPv6 provides another QoS-type specification, the 20-bit Flow Label field. Network or computer
applications and services assign a value to this field. Routers use the flow label value to classify a
given traffic flow instead of ports or addresses. This enables the routers to be more efficient in
classifying and processing VoIP applications and their associated QoS. This mechanism works well
with RSVP. RSVP provisions the path and the router references the flow label field to identify and
maintain the state for a given active traffic flow.
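Under the RFC 2460 layout (4-bit Version, 8-bit Traffic Class, 20-bit Flow Label), the first 32-bit word of an IPv6 header can be decoded as follows; the sample values are illustrative:

```python
# Sketch: extracting Version, Traffic Class, and Flow Label from the
# first 32-bit word of an IPv6 header.
import struct

def parse_ipv6_first_word(header: bytes) -> tuple:
    (word,) = struct.unpack("!I", header[:4])  # network byte order
    version = (word >> 28) & 0xF
    traffic_class = (word >> 20) & 0xFF
    flow_label = word & 0xFFFFF
    return version, traffic_class, flow_label

# Hypothetical header word: version 6, traffic class 0xB8, flow label 0x12345
word = (6 << 28) | (0xB8 << 20) | 0x12345
result = parse_ipv6_first_word(struct.pack("!I", word))
print(result)  # (6, 184, 74565)
```

A router classifying on the flow label simply masks out the low 20 bits, which is cheaper than inspecting transport-layer ports.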
The IPv4 ToS octet and the IPv6 CoS octet occupy the field subsequently defined as the Differentiated
Services (DS) field. IETF RFC 2474 Definition of the DS Field in the IPv4 and IPv6 Headers, released in
1998, specifies a more robust QoS mechanism than ToS. RFC 2474 designates the first six bits of the
DS field as the Differentiated Services Code Point (DSCP). RFC 2474 defines the DS field as a
replacement header field that supersedes the previously recommended IPv4 ToS octet and IPv6 CoS
octet.
Comparison of IPv4 and IPv6 Header with RFC 2474 DiffServ Field
The first three bits of the DSCP retain the functionality of the IP Precedence field. Assured Forwarding
(AF) is used if these bits equal a decimal value of 4 or less. An IP Precedence decimal value of 5 uses
Expedited Forwarding (EF); 5 is the highest forwarding preference decimal value permitted.
The IP Precedence field represents the first three bits of the DSCP field. The last three bits of DSCP
indicate Drop Precedence. During network congestion, a higher Drop Precedence value increases the
probability that a packet will be dropped.
Comparison layout:

IPv4: Version | IHL | Type of Service | Total Length
IPv6: Version | Traffic Class | Flow Label
DS field (bits 0 – 7): Differentiated Services Codepoint (DSCP, bits 0 – 5) | CU (bits 6 – 7)
Drop precedence          | Class 1       | Class 2       | Class 3       | Class 4
Low Drop Precedence      | 001010 (AF11) | 010010 (AF21) | 011010 (AF31) | 100010 (AF41)
Medium Drop Precedence   | 001100 (AF12) | 010100 (AF22) | 011100 (AF32) | 100100 (AF42)
High Drop Precedence     | 001110 (AF13) | 010110 (AF23) | 011110 (AF33) | 100110 (AF43)
AF DSCP Values
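The AF codepoints in the table follow a simple pattern: the 3-bit class, then the 2-bit drop precedence, then a trailing zero. A minimal sketch that reproduces the table values:

```python
# Sketch: computing AF DSCP codepoints. The six-bit value is the 3-bit
# class followed by the 2-bit drop precedence and a trailing zero bit.

def af_dscp(af_class: int, drop_precedence: int) -> int:
    assert 1 <= af_class <= 4 and 1 <= drop_precedence <= 3
    return (af_class << 3) | (drop_precedence << 1)

for c in range(1, 5):
    for d in range(1, 4):
        print(f"AF{c}{d}: {af_dscp(c, d):06b} (decimal {af_dscp(c, d)})")
# AF11 -> 001010 (10), AF43 -> 100110 (38), matching the table
```

This also explains why the first three bits of an AF codepoint read directly as an IP Precedence value of 1 through 4.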
RFC 2597 Assured Forwarding Per Hop Behavior (PHB) Group and RFC 3260 New Terminology and
Clarifications for DiffServ describe the recommended codepoint values, classes, and names presented
in the table above. The table below lists examples of class-based applications and related bandwidth
and delay parameters.
Class   | Sample application                                              | Minimum recommended bandwidth | Recommended delay
Class 1 | IP Telephony voice calls                                        | 5 Mbps                        | < 5 ms
Class 2 | E-commerce and operational data and voice business applications | 3.75 Mbps                     | < 20 ms
Class 3 | FTP and e-mail                                                  | 1.25 Mbps                     | < 40 ms
Class 4 | Web browsing                                                    | N/A                           | N/A
Class-Based Applications, Bandwidth, and Delay Recommendations
RFC 2475 An Architecture for Differentiated Services defines a logical structural design to execute
various levels of service across the Internet. Accordingly, six bits of the DS field represent a
codepoint, and DSCP is used to select a PHB for a given data stream. Currently, compliant routers
use only the six DSCP bits and ignore the two-bit currently unused (CU) field. Examples of PHBs
are either based on minimal delay or based on percentage of a link bandwidth. PHBs that are more
complex include multiple criteria such as delay, jitter, and bandwidth constraints. Some PHBs are
standardized and have a recommended codepoint.
Routers and other DiffServ compliant devices select PHBs by mapping the DSCP codepoints to the
configuration table. This must be a configurable feature of
DiffServ-compliant VoIP routers. The DSCP values map to configuration parameters set in the router
that process the packet according to the stipulated PHB mechanism. In other words, the configuration
parameters stipulate the PHB forwarding mechanism. Using this mechanism, routers process voice
traffic across data networks with a higher priority to mitigate VoIP QoS issues, such as loss, delay, and
jitter. In DiffServ jargon, current QoS conditions, such as loss, delay, and jitter are observable
behavior characteristics.
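A DiffServ router's codepoint-to-PHB mapping can be sketched as a lookup table. The PHB parameters below are illustrative, not a vendor configuration:

```python
# Sketch: mapping DSCP codepoints to PHB configuration entries.
# Values are illustrative; a real router's table is set by policy.

PHB_TABLE = {
    46: {"phb": "EF", "queue": "priority", "max_latency_ms": 5},
    10: {"phb": "AF11", "queue": "assured", "drop_precedence": 1},
    12: {"phb": "AF12", "queue": "assured", "drop_precedence": 2},
    0:  {"phb": "default", "queue": "best-effort"},
}

def select_phb(dscp: int) -> dict:
    # Unmatched codepoints fall back to best-effort forwarding.
    return PHB_TABLE.get(dscp, PHB_TABLE[0])

print(select_phb(46)["phb"])  # EF
print(select_phb(33)["phb"])  # default
```

The DSCP value 46 shown here is the standardized EF codepoint; its first three bits (101) read as IP Precedence 5, consistent with the discussion above.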
It is important to note that the traffic load on a given link impacts the ability to match and support
DSCP parameters with PHB and current observable behavior characteristics. Links with several PHB
aggregates compete for bandwidth and buffer resources. The PHB function allocates resources hop-by-hop. DiffServ processing with respect to current observable behavior characteristics is a function of the
technique used to configure PHBs. This technique may be absolute or relative. It is possible to
prioritize PHBs based on observable behavior characteristics, bandwidth, and buffer resource needs, or
relative to other PHBs. Categorizing PHBs into PHB groups reduces complexity and improves
uniformity. PHB groups conform to a set of common constraints. For example, a PHB group may
share common constraints, such as queuing policy, scheduling scheme, or discard strategy.
While the PHB behavioral characteristics map to DSCP values to describe forwarding mechanisms,
the algorithm employed to process QoS characteristics is not standardized. Several QoS algorithms
exist. These algorithms queue and schedule VoIP traffic flows. The distinction is that PHB executes
hop-by-hop forwarding behavior through the networks per flow. QoS algorithms queue and schedule
the packets of the flow as they pass through the router or VoIP component. Strict Priority Queuing,
WFQ, and Weighted Round Robin (WRR) Queuing are examples of QoS algorithms.
DiffServ specifications define a Differentiated Services (DS) domain. All routers in a DS domain
process traffic using a common set of PHBs. This method enables DSCP classifications to aggregate and
scale traffic flows.
DiffServ enables VoIP traffic aggregation and enhances scalability across the Internet. DiffServ
supports sophisticated techniques for policing, shaping, and classifying packets. Service providers
implement the techniques using Service Level Agreements (SLAs) that dictate provisioning policies.
SLAs may even delineate pricing schemes based on QoS traffic. SLA service classifications are
categorized as:
• Gold: Top service, and, therefore, expensive
• Silver: Good service and moderately priced
• Bronze: Best effort and inexpensive
Provisioning policies delineate the treatment of VoIP traffic entering the network using DiffServ or
other techniques such as MPLS. SLAs attempt to support QoS integral to voice and data integration
using DiffServ. Packets streaming onto the service provider VoIP backbone identified as requiring
higher QoS are queued ahead of data packets in the router’s buffer. The criteria used to classify traffic
as it enters the DS domain of the service provider include the IP address and protocol type in addition
to other factors.
DiffServ is a QoS control method that employs complex policies configured by network
administrators. It is, therefore, more flexible and scalable than ToS and CoS. However, the greater
flexibility, scalability, and robustness of DiffServ increase its complexity and management. For
example, routers may re-mark DS fields because the traffic streams travel across administrative and
DS domain boundaries from one ISP to another. This is one of the inherent problems with
implementing VoIP on the Internet. Ongoing research and development efforts continue to mitigate
and resolve these issues.
QoS Queuing and Buffering
Routers queue voice and data streams at the ingress or the egress port, but rarely at both ports. This
topic focuses on egress queues because most LAN and all WAN VoIP equipment, especially switches, provide a non-blocking matrix. Egress queuing avoids head-of-line blocking problems. In head-of-line blocking, traffic is blocked as it arrives at a given interface, thereby causing congestion on the
network or dropping packets. Neither of these conditions is desirable. This topic discusses QoS
queuing and buffering in routers. Media gateways and layer 2 Ethernet, ATM, and Frame Relay
switches also support these mechanisms to varying degrees.
When voice and data throughput enters a router, packets are queued by the router. To support various
voice and data stream QoS requirements, routers use multiple queues and scheduling algorithms.
Often, there are multiple queues dedicated per port. Traffic is prioritized, queued, and then forwarded
based on the QoS requirements of a given traffic flow and the configuration of the router. This
function is critical to support VoIP, especially when traffic flows surpass available bandwidth. Routers
prioritize traffic per flow by calculating a single hash value based on the source and destination IP
addresses, TCP/UDP port numbers, IP protocol field, and DSCP values. Then, routers assign
sequential numbers per packet per flow.
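The per-flow hash can be sketched as follows. Real routers compute this in hardware; the hash function and field order here are illustrative:

```python
# Sketch: a per-flow hash over the fields listed above (source and
# destination IP addresses, TCP/UDP ports, IP protocol, DSCP).
import hashlib

def flow_hash(src_ip: str, dst_ip: str, proto: int,
              src_port: int, dst_port: int, dscp: int) -> int:
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}|{dscp}"
    # Truncate a cryptographic digest to a 32-bit bucket index.
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")

# Packets of the same flow always hash to the same value,
# so they land in the same queue and keep their ordering.
h1 = flow_hash("10.0.0.1", "10.0.0.2", 17, 16384, 16385, 46)
h2 = flow_hash("10.0.0.1", "10.0.0.2", 17, 16384, 16385, 46)
print(h1 == h2)  # True
```

Keeping all packets of a flow in one queue is what allows the router to assign the sequential per-flow numbers mentioned above.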
Routers classify and queue traffic based on criteria found in the packet header. Packets in these queues
compete for transmission on the output port. The output port that processes packets in the queues
forwards packets based on a queuing algorithm. The queuing algorithm prioritizes QoS classifications
of traffic to service bandwidth, latency, and jitter requirements. The conventional QoS classification
algorithms are:
• Strict Priority-Based Queuing, also known as Priority Queuing (PQ)
• WRR Queuing
• WFQ
• CBWFQ
• Random Early Detection (RED) and Weighted Random Early Detection (WRED)
Additional queuing algorithms exist, and some VoIP devices allow software modification to customize
queuing algorithms.
PQ
The PQ algorithm assigns each queue a priority in strict, absolute terms, sometimes known as strict
priority queuing. A port services the packets from higher priority queues before the lower priority
queues. A port services lower priority queues when the higher priority queues are empty. These queues
share the port bandwidth.
The advantage of PQ is that VoIP applications, with their low latency and jitter requirements, are
assigned a higher priority. These applications receive preferential treatment at the expense of
lower priority traffic. Successful PQ implementation requires a comprehensive understanding of
network traffic patterns. The drawback of PQ is that low priority traffic may be stuck in a queue if the
amount of high-priority traffic exceeds the interface bandwidth. Low priority traffic may also be stuck
in a queue if the high priority traffic throughput, in aggregate, is proportionally greater than the low
priority traffic. As is common, any network component configuration that is absolute by design is
inflexible and potentially problematic.
Cisco routers implement four strict priority queues: high priority, medium priority, normal priority,
and low priority.
WRR Queuing
WRR queuing assigns a weight that logically denotes the number of packets eligible to be dequeued
during a polling cycle. The port polls each queue in a round-robin fashion. If packets are present, the
output port processes a specific number of packets depending on the weight assigned to the queue.
The drawback of WRR queuing is the inability to provide strict bandwidth guarantees to queues
because the unit of data processed per queue is a variable-size packet. For example, an application
generating small VoIP packets is unable to obtain bandwidth guarantees in the presence of applications
that generate large-size packets.
To illustrate the drawback stated in the previous paragraph, consider a queuing system that consists of
two queues servicing an output port. Assume the weight assigned to each queue is 1. Therefore, one
packet is dequeued per queue during each polling cycle. If the packets on Queue 2 are larger than the
packets on Queue 1, the port bandwidth consumed by traffic on Queue 2 will be more than that on
Queue 1.
WRR Queuing
The figure above shows smaller, time-sensitive voice packets interleaved with larger data packets. This
introduces delay and delay variation, the nemesis of voice traffic. Therefore, WRR queuing is not an
optimal algorithm for voice and data integration on a port.
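The two-queue example can be checked numerically. A minimal sketch, assuming 64-byte voice packets and 1500-byte data packets (illustrative sizes):

```python
# Sketch: WRR with weight 1 per queue. Equal packet counts do not
# yield equal bandwidth when packet sizes differ.

voice_queue = [64] * 5    # small voice packets (bytes)
data_queue = [1500] * 5   # large data packets (bytes)

voice_bytes = data_bytes = 0
for v_pkt, d_pkt in zip(voice_queue, data_queue):
    # one packet dequeued per queue per polling cycle
    voice_bytes += v_pkt
    data_bytes += d_pkt

share = voice_bytes / (voice_bytes + data_bytes)
print(f"voice share of link: {share:.1%}")
```

Despite identical weights, the voice queue here receives only about 4% of the link, which is exactly the imbalance the paragraph describes.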
WFQ
The WFQ algorithm addresses the limitations of WRR queuing. WFQ scheduling accounts for variable
packet sizes. This algorithm schedules packets from various queues by prioritizing traffic into two
WFQ sessions, low bandwidth and high bandwidth. The router assigns low bandwidth flows a higher
priority and processes them before high bandwidth flows. High bandwidth flows share bandwidth in
proportion to configured weights. In this way, routers service low bandwidth flows before high
bandwidth flows. Routers discard high bandwidth flows that exceed default or configured thresholds.
Weights can be calculated using IP Precedence.
When a queue reaches the maximum threshold, the router begins to discard packets. The TCP/IP
protocol stack detects this and begins to throttle back transmission rates to process queued packets.
This process is known as tail drop because routers drop packets at the tail of the queue or port
interface.
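Tail drop on a bounded queue can be sketched as follows; the queue depth and packet count are illustrative:

```python
# Sketch: tail drop. Once the queue reaches its maximum threshold,
# arriving packets are discarded at the tail.
from collections import deque

MAX_DEPTH = 4
queue: deque = deque()
dropped = 0

for pkt in range(8):
    if len(queue) >= MAX_DEPTH:
        dropped += 1          # tail drop: the new arrival is discarded
    else:
        queue.append(pkt)

print(len(queue), dropped)  # 4 4
```

The drops are what signal TCP senders to throttle back, as described above.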
WFQ assigns each flow an equal share of the bandwidth. This is a flow-based, General Traffic
Shaping (GTS) QoS mechanism designed for low speed interfaces of 2 Mbps or less. GTS shapes
traffic flows as they exit the router through the egress port to the network. This is done to avoid
congestion on the network by sending traffic at a rate that is acceptable to the downstream neighbor
and the policy configured on the router. GTS is applied per interface. A drawback to this algorithm is
that it does not provide latency or jitter guarantees. Therefore, WFQ is not optimal for VoIP traffic.
WFQ is the default setting enabled on Cisco router low-speed interfaces. Cisco routers implement the
WFQ standard using one queue. Cisco WFQ calculates weight equivalents using IP Precedence values
multiplied by the packet length to determine a sequence number. A higher IP Precedence value
receives higher priority.
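The sequence-number idea described above can be sketched as a small scheduler. The weight formula here is illustrative only; router implementations use vendor-specific constants:

```python
# Sketch: WFQ-style scheduling. Each packet gets a sequence number
# derived from its length and IP Precedence; packets transmit in
# sequence-number order, so small high-precedence packets go first.
import heapq

def weight(ip_precedence: int) -> int:
    # Higher precedence -> smaller weight -> earlier sequence number.
    # The constant 4096 is illustrative.
    return 4096 // (ip_precedence + 1)

heap, arrival = [], 0
for length, prec, name in [(1500, 0, "bulk"), (200, 5, "voice"), (1500, 0, "bulk2")]:
    arrival += 1  # arrival order breaks ties
    seq = weight(prec) * length
    heapq.heappush(heap, (seq, arrival, name))

order = [heapq.heappop(heap)[2] for _ in range(len(heap))]
print(order)  # ['voice', 'bulk', 'bulk2']
```

The small voice packet with precedence 5 gets a far smaller sequence number than either 1500-byte bulk packet, so it is transmitted first even though it arrived second.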
CBWFQ
CBWFQ is a class-based traffic shaping mechanism. The difference between WFQ as a GTS traffic-shaping mechanism and CBWFQ as a class-based traffic-shaping mechanism is their queue types.
CBWFQ queues map to a traffic flow classification. CBWFQ allows traffic classification based on
criteria, such as QoS, protocols, access control lists, IP Precedence, and interfaces. You configure the
class qualifying criteria and specific queue parameters to meet the class QoS demands and shape
traffic flows. The specific queue parameter options are weight value, bandwidth limitation, and
maximum number of packets allowed. For example, you can configure a class as interface-based and
then specify CBWFQ parameters by assigning a weight value and allocating a percentage of the
interface’s bandwidth. Any packets that arrive for this class beyond the configured bandwidth
threshold are dropped, that is, they experience tail drop.
CBWFQ vs. FIFO
The figure above shows the CBWFQ algorithm well suited to support VoIP QoS needs based on
queuing and scheduling configuration. The FIFO method does not provide any QoS mechanism and is,
therefore, not suitable for VoIP.
Cisco routers employ tail drop for CBWFQ by default and allow up to 64 classes.
A router uses best effort processing for unclassified traffic, unless configured with a default class.
Cisco router classification policies are called class maps. You link CBWFQ parameters or
characteristics to a class using policy maps. Service policies apply a policy map to the router interface.
CBWFQ requires Cisco Express Forwarding to be enabled.
A drawback of CBWFQ is the requirement to configure a policy statement on the interface to enqueue
traffic by class. An advantage of CBWFQ is the ability to tweak traffic using finer granularity and
processing than WFQ, producing VoIP quality handling.
RED and WRED
RED and WRED are congestion-avoidance mechanisms. RED leverages embedded TCP/IP
functionality to provide congestion-avoidance across the network. When the network begins to
experience congestion, RED starts to drop packets randomly. RED detects when packets surpass a
configured queue threshold and starts to drop packets to alleviate buffer overflows. The source router
and neighboring routers throttle back their transmission rates. When congestion subsides, the routers
increase their transmission rates. RED improves on FIFO because FIFO executes tail drop. Tail drop
causes distinct fluctuations across congested networks. RED only drops a random sampling, mitigating
distinct fluctuations.
WRED improves on RED by dropping packets using IP Precedence markings. WRED routers service
higher priority traffic based on IP Precedence. Therefore, lower priority traffic experiences a greater
probability of being dropped. RED randomly and indiscriminately chooses packets to drop. WRED
randomly but discriminately drops packets based on traffic classification.
The PQ, WRR, WFQ, and CBWFQ mechanisms primarily shape and police traffic at the network
edge. RED and WRED primarily operate at the core of the network. They are primarily percentage-based tools appropriate for large volumes of traffic. In addition, RED and WRED are relatively simple
mechanisms as compared to others. Simpler mechanisms are less processor-intensive than more
complex algorithms.
WRED complements VoIP processing. For example, it works well in conjunction with CBWFQ. As
stated earlier, Cisco routers configured for CBWFQ employ tail drop as a default for any unclassified
traffic that enters the router. You configure WRED as the default for CBWFQ on routers in VoIP
networks. This improves the probability that routers drop time-insensitive and loss-tolerant data traffic
versus VoIP traffic.
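The WRED behavior can be sketched as a per-precedence drop profile. The thresholds and probabilities below are illustrative, not recommended values:

```python
# Sketch: WRED-style drop decision. Each IP Precedence level gets its
# own threshold profile, so low-priority traffic is dropped earlier
# and more aggressively than voice.
import random

PROFILES = {  # precedence: (min_threshold, max_threshold, max_drop_prob)
    0: (10, 30, 0.5),   # low-priority data: dropped early and often
    5: (25, 40, 0.1),   # voice: dropped late and rarely
}

def drop_probability(avg_queue_depth: float, precedence: int) -> float:
    lo, hi, max_p = PROFILES[precedence]
    if avg_queue_depth < lo:
        return 0.0
    if avg_queue_depth >= hi:
        return 1.0  # behaves like tail drop above the max threshold
    return max_p * (avg_queue_depth - lo) / (hi - lo)

def should_drop(avg_queue_depth: float, precedence: int) -> bool:
    return random.random() < drop_probability(avg_queue_depth, precedence)

# At the same queue depth, data is far more likely to be dropped than voice.
print(round(drop_probability(28, 0), 2), round(drop_probability(28, 5), 2))
```

At an average depth of 28, the illustrative profile drops data packets with probability 0.45 but voice packets with probability only 0.02, which is the discrimination WRED is meant to provide.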
QoS with RSVP and MPLS
RSVP
IETF RFC 2205 Resource Reservation Protocol (RSVP) Version 1 Specification identifies this
protocol for use by a receiver of multicast or unicast traffic to initiate and set up resources required to
support this traffic flow. Therefore, it is a signaling Internet control protocol. IETF RFC 2212
Specification of Guaranteed Quality of Service complements RFC 2205 because it defines a process to
provision guaranteed delay and bandwidth QoS parameters for data communication networks. RFC
2212 relies on RSVP or an alternate method to set up the path.
RSVP is used to reserve resources such as bandwidth and QoS priorities for a given data stream.
Routers use RSVP to communicate resource requests to all routers in the path of that data stream, end-to-end. If any router in the path cannot reserve the requested resource, the path is not set up.
Otherwise, routers reserve the requested resources, and transmissions across that path are
unidirectional. After the path is established, the data stream for that traffic flow traverses the network
using the protocol and logical ports of the service or application. The application or service uses a
separate logical connection to send the actual data using the path set up by RSVP. The service or
application itself may function as a receiver or sender.
A unique multicast or unicast traffic flow, or RSVP session, is defined by the IP destination address,
the destination port ID, and the IP protocol ID. For a given RSVP session, a source node that wants to
send a data stream to a destination node sends an RSVP PATH request. This request includes the
destination IP address, application source and destination address, protocol and port numbers, and
bandwidth. Routers read the RSVP request and designate the path and next hop that meets the criteria.
When the packet exits the router, the router inserts its IP address as the source address. The next router
downstream processes the RSVP PATH request in a similar fashion. The receiving host determines the
QoS parameters, such as jitter and delay. Receivers use RSVP Reservation (RESV) messages, sent
upstream along the reverse path, to communicate QoS resource needs. Routers process RSVP RESV
and RSVP CONFIRM messages to acknowledge the reservation of the requested resources. These routers maintain
the state for the given path by sending RSVP messages to participating routers at regular intervals.
For example, RSVP guarantees the bandwidth and QoS that a real-time videoconferencing application
requires. Routers process applications of a lower priority, such as FTP or e-mail, as network
conditions allow without guarantees. For a multicast transmission such as real-time videoconferencing,
IGMP and RSVP work together. IGMP identifies the members of the multicast group, and RSVP
dedicates the necessary resources along multicast paths.
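The hop-by-hop reservation idea can be sketched as follows; the router names and capacities are hypothetical:

```python
# Sketch: admission control in the spirit of an RSVP reservation.
# Every router on the path must be able to commit the bandwidth, or
# the reservation fails end-to-end.

path = {"R1": 10_000, "R2": 2_000, "R3": 10_000}  # free kbps per hop

def reserve(path: dict, demand_kbps: int) -> bool:
    if any(free < demand_kbps for free in path.values()):
        return False  # one refusing hop fails the whole reservation
    for router in path:
        path[router] -= demand_kbps
    return True

ok_small = reserve(path, 1_000)  # every hop can commit 1000 kbps
ok_big = reserve(path, 5_000)    # R2 now has only 1000 kbps left
print(ok_small, ok_big)  # True False
```

This mirrors the all-or-nothing behavior described above: the second request is refused even though two of the three hops could have carried it.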
RSVP Functional Processes
The figure above shows RSVP traffic control mechanisms in a host and router. RFC 2205 specifies
that RSVP implements QoS using traffic control mechanisms. There are four types of traffic control
mechanisms:
• Packet classifier: Determines the QoS class. For example, the classification may use a
destination address and transport protocol hash to map a packet to a QoS flow class.
• Admission control: Checks the resource availability, such as determining if sufficient
bandwidth exists to support the request.
• Policy control: Implements QoS policy and permission configuration on the router regarding
the senders, receivers, and intermediary routers. For example, the router QoS policy may deny
certain IP addresses or application ports the ability to reserve resources.
• Packet scheduler: Tells the router how to queue the data stream based on the underlying layer
2 infrastructure. For example, the scheduler may direct the transmitter to interleave two traffic flows
based on their classification.
In this way, RSVP attempts to bridge the gap between connectionless data networks and
connection-oriented circuit networks. RSVP provides connection-oriented, real-time QoS integration
of VoIP applications on data networks. In addition,
RSVP is robust and offers the following attributes:
• Reserves resources for unidirectional traffic flows. The receiver initiates resource reservation.
• Maintains resource reservation for the duration of the given traffic flow.
• Accommodates dynamic multicast group membership and route changes.
• Allows transparent processing through non-compliant routers.
Both IPv4 and IPv6 support RSVP. Three main caveats exist with
respect to RSVP:
• Identifying priorities for VoIP applications and services across multiple networks using one
paradigm is difficult, if not impossible. Collectively, many applications with high priority and
bandwidth resource reservations cause bottlenecks and denied RSVP requests.
• RSVP is not a routing protocol. Instead, it relies on the routing protocols configured on the
network. Therefore, RSVP can perform only as well as the underlying routing network.
• RSVP does not scale well on large networks because routers need to maintain the state for
each RSVP traffic flow. RSVP is best implemented at the ingress point to a WAN or large LAN, where
it controls the resources needed at the core or backbone.
MPLS
MPLS offers solutions for voice and data convergence between IP and ATM and IP and Frame Relay
networks. MPLS is a technology that is quickly gaining support and use. Several IETF RFCs
describe a wide range of specifications.
A key feature of MPLS is traffic engineering. MPLS enables core networks and service providers to
accelerate interpretation of traffic flow paths and QoS requirements for a given data stream. Similar to
RSVP, MPLS allocates a unique path, called a tunnel, through the network for a given traffic flow.
The tunnel set up for a given traffic flow is called a Label Switched Path (LSP).
LSP resource allocation accommodates traffic flow QoS parameters. MPLS allocates this path at the
edge of the network. A Label Edge Router (LER), positioned at the edge of the network and
functioning as a layer 3 switch, assigns a label to a packet as it arrives at the ingress port. All LERs
and Label Switch Routers (LSRs) connected end-to-end throughout the core set up and confirm
traffic flow resource reservation using MPLS labels before moving the data. MPLS uses the Label
Distribution Protocol (LDP) to distribute labels. Subsequently, each downstream LSR binds a label
to the associated LSP. In this process, known as downstream-on-demand (DOD), label requests travel
downstream and label bindings are distributed back upstream. Unsolicited binding, in which an LSR
advertises labels without being asked, is another MPLS option.
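The downstream-on-demand exchange can be illustrated with a small simulation (the names here are hypothetical; real LDP carries these messages over TCP with far more structure). A label request travels toward the egress, and each node binds a label for its upstream neighbor on the way back:

```python
class LSR:
    """Toy LSR for downstream-on-demand (DOD) label distribution."""
    _pool = iter(range(100, 1 << 20))   # shared label pool, for readable output only

    def __init__(self, name, downstream=None):
        self.name = name
        self.downstream = downstream
        self.lfib = {}                  # in-label -> (out-label, next hop name)

    def request_label(self, fec):
        """Handle a label request from upstream: first obtain a binding from our
        own downstream hop (the request keeps travelling downstream), then
        allocate and return the label we advertise back upstream."""
        if self.downstream is None:     # egress LER: end of the LSP
            out_label, next_hop = None, None
        else:
            out_label = self.downstream.request_label(fec)
            next_hop = self.downstream.name
        in_label = next(LSR._pool)
        self.lfib[in_label] = (out_label, next_hop)
        return in_label

# A two-hop core: the ingress LER requests a binding for a hypothetical FEC.
egress = LSR("egress-LER")
core = LSR("core-LSR", downstream=egress)
ingress_label = core.request_label("fec-voip")   # label the ingress LER will push
```

The egress binds first (label 100), the core binds next (label 101 swapped to 100), and the ingress receives 101: bindings accumulate upstream, mirroring the DOD flow described above.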
MPLS relies on routing topologies used in the networks to establish tunnels. Border Gateway Protocol
(BGP), Open Shortest Path First (OSPF), and Intermediate System to Intermediate System (IS-IS) are
examples of routing protocols that establish link state information and create routing topologies.
MPLS constraint-based routing augments these protocols to distribute additional network information.
This additional network information constrains routing options for the tunnel. Hence, this routing is
called constraint-based (or constrained) routing.
If the underlying layer 2 technology is MPLS-compliant, the MPLS label is part of the native protocol
header. If not compliant, the MPLS label is inserted as a shim between the layer 2 header and the IP
header. MPLS labels are short, fixed-length identifiers that require less time to process than IP headers. This is useful for integrating
time-sensitive voice communications onto data networks. MPLS is also useful for voice and data
integration across disparate architectures, such as IP to Frame Relay and IP to ATM, and vice versa.
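The shim label entry mentioned above has a fixed 32-bit layout defined in RFC 3032: a 20-bit label, 3 EXP (traffic class) bits, a bottom-of-stack flag, and an 8-bit TTL. A short sketch of packing and unpacking one entry (function names are illustrative):

```python
import struct

def pack_shim(label, tc=0, bottom=True, ttl=64):
    """Pack one MPLS label stack entry per RFC 3032:
    20-bit label | 3-bit TC (EXP) | 1-bit bottom-of-stack | 8-bit TTL."""
    assert 0 <= label < (1 << 20) and 0 <= tc < 8 and 0 <= ttl < 256
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return struct.pack(">I", word)      # network byte order, 4 bytes

def unpack_shim(data):
    """Reverse of pack_shim: returns (label, tc, bottom_of_stack, ttl)."""
    (word,) = struct.unpack(">I", data)
    return word >> 12, (word >> 9) & 0x7, bool((word >> 8) & 0x1), word & 0xFF

shim = pack_shim(label=1055, tc=5, bottom=True, ttl=64)
# unpack_shim(shim) -> (1055, 5, True, 64)
```

The fixed width is exactly what makes label lookup cheaper than parsing a variable-option IP header.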
MPLS packets associated with a given traffic flow or LSP conform to the same Forwarding Equivalence
Class (FEC). The FEC identifies QoS classification and prioritization parameters based on IP source and
destination addresses, the IP protocol field, and TCP/UDP port numbers. Packets are mapped to an FEC
when they arrive at the ingress router, and communicated to each downstream router using MPLS
LDP.
FECs support a wide range of granularities for packet forwarding. For example, a coarse-grained FEC
may look only at the TCP port number. This type of FEC scales well but provides limited or no QoS
capability. A fine-grained FEC may include a subset of criteria associated with a given application
communication between two hosts. This type of FEC enables customized QoS and routing treatment of
individual VoIP data streams but does not scale well.
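The trade-off between coarse and fine granularity can be made concrete with a toy classifier (field names and values are illustrative, not from any particular router):

```python
def coarse_fec(pkt):
    """Coarse-grained FEC: classify on the TCP/UDP destination port alone.
    Scales well, but lumps unrelated flows into one class."""
    return ("port", pkt["dst_port"])

def fine_fec(pkt):
    """Fine-grained FEC: classify on the full 5-tuple, enabling per-flow
    QoS treatment at the cost of much more state."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
            pkt["src_port"], pkt["dst_port"])

# Two hypothetical VoIP flows toward the same destination port:
call_a = {"src_ip": "10.0.0.1", "dst_ip": "10.0.1.9", "proto": "udp",
          "src_port": 49170, "dst_port": 5060}
call_b = {"src_ip": "10.0.0.2", "dst_ip": "10.0.1.9", "proto": "udp",
          "src_port": 49172, "dst_port": 5060}
```

Under the coarse FEC both calls share one forwarding class; under the fine FEC each call gets its own, which is what per-stream VoIP QoS requires and what limits scalability.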
IETF RFC 3473 Generalized Multi-Protocol Label Switching (GMPLS) Signaling Resource
Reservation Protocol-Traffic Engineering (RSVP-TE) Extensions delineates signaling to support
MPLS and RSVP extensions. RFC 3209 RSVP-TE: Extensions to RSVP for LSP Tunnels defines
interoperability between RSVP extensions and MPLS LSPs. RFC 3209 specifications enable RSVP to
perform load balancing, constraint-based routing, loop detection, and rerouting based on LSP
messages.
The LER that initiates the establishment of an LSP sends an RSVP PATH message containing a
LABEL_REQUEST object and, optionally, an EXPLICIT_ROUTE object into the network. RSVP uses
this path information to establish an explicitly routed LSP hop by hop; without an EXPLICIT_ROUTE
object, the PATH message follows the routes selected by the underlying routing protocol. RSVP
piggybacks the MPLS labels necessary to identify the LSP. At each hop, the receiving LSR replies with
an RSVP RESV message sent in the reverse direction to its upstream neighbor. The RSVP RESV
message includes a LABEL object carrying the label that the downstream LSR allocated. Each router
that receives the RSVP RESV message uses the label provided by its downstream LSR for outgoing
traffic associated with that tunnel, allocates a new label of its own, and inserts it into the LABEL
object before forwarding the RESV message upstream. Using this method, RSVP resource
reservations for a given tunnel map to MPLS FEC parameters. A router correlates an MPLS label’s
FEC with the applicable RSVP path.
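A minimal sketch of this hop-by-hop label exchange, assuming a simple linear explicit route and ignoring refresh timers, error handling, and the actual RSVP message formats (the function and node names are hypothetical):

```python
def establish_lsp(explicit_route, label_start=16):
    """Toy RSVP-TE setup: a PATH message carrying an EXPLICIT_ROUTE travels
    ingress -> egress; the RESV message then walks back hop by hop, each node
    advertising a freshly allocated label to its upstream neighbor."""
    labels = {}                                 # node -> label it advertised upstream
    next_label = label_start
    for node in reversed(explicit_route[1:]):   # egress first, back toward ingress
        labels[node] = next_label
        next_label += 1
    # Each node's out-label is whatever its downstream neighbor advertised.
    lfib = {}
    for up, down in zip(explicit_route, explicit_route[1:]):
        lfib[up] = (down, labels[down])
    return lfib

lfib = establish_lsp(["LER-A", "LSR-1", "LSR-2", "LER-B"])
```

After setup, LER-A pushes the label LSR-1 advertised, LSR-1 swaps it for LSR-2's label, and so on: exactly the upstream-flowing label distribution that the RESV messages perform.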
RSVP Processing Across an MPLS Network
The figure above shows a generic MPLS network with RSVP messaging. A real world application of
this includes service providers and their customers who negotiate SLAs to ensure QoS for VoIP traffic
using MPLS and RSVP. The SLA lists management packages of service indicators, monitoring
procedures, and cost structures. The SLA includes bandwidth requirements and preferred QoS
parameters, such as delay and jitter boundaries. A particular flow maps to a tunnel using policies based
on the SLA. A tunnel must provide the preferred QoS for a given IP traffic flow. Traffic engineering
using MPLS and RSVP enables correct path selection to create the tunnel and support the flow. A
priority assigned to the tunnel influences packet scheduling, resource reservation, and discarding
mechanisms. The SESSION_ATTRIBUTE RSVP extension establishes tunnel priority.
RFC 3496 Protocol Extension for Support of Asynchronous Transfer Mode (ATM) Service Class-aware
Multiprotocol Label Switching (MPLS) Traffic Engineering provides information on RSVP-TE
extensions. The RFC describes the use of RSVP-TE with LSRs in Packet over SONET, ATM, and
Ethernet environments. This method allows for voice and data integration across dissimilar
architectures. ATM networks built to carry voice traffic use MPLS and RSVP-TE to encapsulate this
traffic in IP or MPLS packets for interoperability. In addition, an LSR may use a DiffServ object,
described in RFC 3270 MPLS Support of Differentiated Services.
Traffic arriving at the edge of the MPLS and DiffServ domain traverses the network over an LSP
established for it and is tunneled along that LSP. Forwarding techniques assigned to each LSP
specify a PHB or PHB group that supports the associated QoS guarantees for the traffic flow.
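Under the E-LSP model of RFC 3270, the PHB travels in the three EXP (traffic class) bits of the shim header. The mapping below is illustrative only; actual PHB-to-EXP assignments are deployment specific and negotiated per domain:

```python
# Illustrative PHB -> EXP mapping (assumed values following the common
# convention of reusing the DSCP class-selector bits; not from any standard
# mandatory table).
PHB_TO_EXP = {
    "EF": 5,      # expedited forwarding: VoIP bearer traffic
    "AF41": 4,    # assured forwarding class 4: interactive video
    "AF31": 3,    # assured forwarding class 3
    "CS6": 6,     # network control
    "BE": 0,      # best effort
}

def exp_for(phb):
    """Return the EXP bits for a PHB, falling back to best effort
    for anything the mapping does not recognize."""
    return PHB_TO_EXP.get(phb, 0)
```

Because EXP offers only eight code points against DiffServ's 64 DSCP values, an E-LSP domain must collapse PHBs into at most eight classes; L-LSPs avoid this by encoding the PHB in the label itself.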
MPLS continues to gain popularity as a QoS methodology used to process VoIP traffic across
disparate networks, particularly Frame Relay, ATM, and IP networks. In addition, MPLS supports
Virtual Private Network (VPN) tunneling.
[Figure: An MPLS cloud with an ingress LER, a core LSR, and an egress LER. RSVP PATH
messages travel from the ingress LER toward the egress LER; RSVP RESV messages travel back
in the reverse direction.]
Application                                  QoS requirements
Real-time voice and video conferencing       High priority, low latency, variable bandwidth
Stored video and voice multimedia replay     Medium priority, medium latency, variable bandwidth
PC to phone, phone to phone voice calls      High priority, low latency, low loss
Network management and control               High priority, controlled bandwidth
Circuit emulation                            Guaranteed bandwidth, low jitter
FTP and data application                     Low priority, no queue length constraints

Business Applications and Related QoS Variables