econ_0481_09_010.ppt
© 2000, Cisco Systems, Inc.
Wide Area Networks
Section I Policy / Shaping
The purpose of this lesson is to quickly survey the new policing and traffic shaping features in Cisco IOS Release 12.1, and to describe the problems they solve.
Customer Problems to Solve
(Diagram: a central site T1 connected via Frame Relay or ATM to remote sites with 128 kbps, 256 kbps, 512 kbps, and 768 kbps access lines)
Result: Buffering = Delay or Dropped Packets
• Central to Remote Site Speed Mismatch
• Remote to Central Site Over-subscription
• Control use of shared LAN, WAN, MAN media
–Multi-Dwelling Unit (MDU)
The slide shows a Frame Relay or ATM network. Pay close attention to the speeds of the access lines to the remote sites on the left. Suppose each site has a Committed Information Rate (CIR) close to the access speed, with bursting up to the access bandwidth.
• What happens at the central site if the bottom two sites burst at the same time?
• What happens at the central site if a server rapidly transmits data for the top left remote site?
• What happens if the bottom two left sites try to send a large amount of data to the top left site?
In this section, some of the QoS techniques that help resolve issues such as these are examined.
Policing and Shaping
(Diagram: an Internet cloud; the heavy ellipse indicates where policing and shaping occur within the network)
Policing and traffic shaping occur within the network to provide congestion management and control bursts
In this module section, policing and traffic shaping are discussed. Both of these traffic engineering methods occur within the network, as indicated by the heavy ellipse in the slide. They use the already-marked Type of Service (ToS) or Differentiated Services Code Point (DSCP) bits discussed in the previous module.
With policing, the rate at which traffic can flow is capped. This is usually done inbound to control how fast someone sends data.
With shaping, bursts are smoothed out for a steadier flow of data. Reduced burstiness helps reduce congestion in a network core.
Policing
Policing is the QoS component that limits traffic flow to a configured bit rate:
• With limited bursting capability
• Packets exceeding the configured rate or burst are dropped or have their precedence altered
A policer typically drops traffic. For example, CAR's rate-limiting policer will either drop the packet or rewrite its IP Precedence, resetting the packet header's ToS bits.
Policing is also available through the MQC.
Shaping
Shaping is the QoS feature component that regulates traffic flow to an average or peak bit rate:
• With bursting capability
• Packets exceeding the rate are queued
A shaper typically delays excess traffic using a buffer, or queuing mechanism, to hold packets and shape the flow when the data rate of the source is higher than expected.
For example, Generic Traffic Shaping (GTS) uses a weighted fair queue to delay packets in order to shape the flow.
Depending on how it is configured, Frame Relay Traffic Shaping (FRTS) uses either a Priority Queue (PQ), a Custom Queue (CQ), or a first-in, first-out (FIFO) queue for the same sort of purpose.
Traffic Policing Versus Shaping
(Graphs: traffic rate versus time for a policed flow and a shaped flow)
Policer:
• Causes TCP resends; oscillation of TCP windows
• Policer can also be a marker (CAR)
• Policer on input interface only
Shaper:
• Can adapt to network congestion (FR BECN, FECN)
This diagram shows the effects of traffic shaping.
Both policing and shaping ensure that traffic does not exceed a (contracted) bandwidth limit. Policing and shaping both limit bandwidth, but with different traffic impact:
• Policing drops more often, causing more resends
• Shaping adds variable delay
Traffic shaping smoothes traffic by storing traffic above the configured rate in a queue.
When a packet arrives at the interface for transmission, the following happens:
• If the queue is empty, the arriving packet is processed by the traffic shaper:
– If possible, the traffic shaper sends the packet
– Otherwise, the packet is placed in the queue
• If the queue is not empty, the packet is placed in the queue
When there are packets in the queue, the traffic shaper removes the number of packets it can send from the queue every time interval.
Additional details on policing and shaping can be found at:
http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/12cgcr/qos_c/qcpart4/qcpolts.htm
Committed Access Rate (CAR)
CAR performs the following functions:
• Packet classification through IP Precedence and QoS group setting
• Access bandwidth management through rate limiting (policing)
CAR's rate-limiting feature manages a network's access bandwidth policy by ensuring that traffic falling within specified rate parameters is sent, while dropping packets that exceed the acceptable amount of traffic or sending them with a different priority. CAR's default exceed action is to drop packets.
The rate-limiting function of CAR does the following:
• Allows control of the maximum rate of traffic transmitted or received on an interface
• Gives the ability to define Layer 3 aggregate or granular incoming or outgoing (ingress or egress) bandwidth rate limits and to specify traffic-handling policies when the traffic either conforms to or exceeds the specified rate limits
• Uses aggregate bandwidth rate limits to match all of the packets on an interface or sub-interface
• Uses granular bandwidth rate limits to match a particular type of traffic based on precedence, MAC address, or other parameters
CAR is often configured on interfaces at the edge of a network to limit traffic into or out of the network.
VIP-distributed CAR is a version of CAR that runs on the Versatile Interface Processor (VIP). It is supported on the Cisco 7500 routers with a VIP2-40 or greater interface processor.
Distributed Cisco Express Forwarding (dCEF) switching must be enabled on any interface that uses VIP-distributed CAR, even when only output CAR is configured.
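As a sketch of how CAR rate limiting might be applied at a network edge (the interface name, rate, and burst sizes here are illustrative, not from the original):

```
! Hypothetical CAR example: police inbound traffic to 256 kbps,
! with an 8000-byte normal burst and 16000-byte excess burst
interface Serial0
 rate-limit input 256000 8000 16000 conform-action transmit exceed-action drop
```

The configured policy can then be inspected with show interfaces rate-limit.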
CAR Marking and Policing
(Diagram: a policing engine classifies VoIP, HTTP, and FTP traffic into Gold, Silver, and Bronze classes, with separate "Conform" and "Exceed" actions)
• Classification by flexible rules
– IP Precedence / IP access list / incoming interface / MAC address
Once a packet has been measured as conforming to or exceeding a particular rate limit, the router performs one of the following actions on the packet:
• Transmit—The packet is sent.
• Drop—The packet is discarded.
• Set precedence (or perhaps DSCP bits) and transmit—The IP Precedence (ToS) bits in the packet header are rewritten. The packet is then sent. Use this action to either color (set precedence) or recolor (modify existing packet precedence) the packet.
• Continue—The packet is evaluated using the next rate policy in a chain of rate limits. If there is not another rate policy, the packet is sent.
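These actions can be combined in a chain of rate-limit statements. A hypothetical sketch (the ACL number, rates, and precedence values are invented for illustration):

```
! Conforming traffic matching ACL 101 is marked precedence 5;
! excess traffic is re-marked to precedence 0 and evaluated
! against the next rate-limit statement in the chain
interface FastEthernet0/0
 rate-limit input access-group 101 64000 8000 8000 conform-action set-prec-transmit 5 exceed-action set-prec-continue 0
 ! Aggregate limit for everything else
 rate-limit input 8000000 16000 32000 conform-action transmit exceed-action drop
!
access-list 101 permit udp any any range 16384 32767
```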
CAR Bandwidth Management
(Diagram: CAR applied between San Jose and Ottawa)
1) Packet marking through IP Precedence and QoS group settings, based on ACL or inbound interface; used for classification so that, for example, low-priority traffic can be dropped via WRED if the burst limit is exceeded
2) Apply rate limiting to a matching traffic pattern, for example, 25 kbps of traffic to "Bronze"
Today, all the packets on the network look the same and are thus handled the same, with each packet getting best-effort service. CAR provides the capability to allow the service provider or enterprise to specify a policy that determines which packets should be assigned to which traffic class. The IP header already provides a mechanism to do this, namely the three precedence bits in the ToS field of the IP header.
CAR can set policies based on information in the IP or TCP header, such as IP address, application port, physical port or sub-interface, and IP protocol, to decide how the precedence bits should be marked or "colored." Once marked, appropriate treatment can be given in the backbone to ensure that premium packets get premium service in terms of bandwidth allocation, delay control, and so on.
CAR can also be used to police precedence bits set externally to the network, either by the customer or by a downstream service provider. Thus the network can decide to either accept or override external decisions.
CAR's purpose is to identify packets of interest for packet classification or rate limiting or both, matching a specification such as:
1) All traffic
2) IP Precedence
3) MAC address
4) IP access list, standard and extended (slower)
See the following URL for additional information:
http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/12cgcr/qos_c/qcpart1/qccar.htm
CAR Action Policies
• Transmit
• Drop
• Set precedence and transmit (rewrite the IP Precedence bits and transmit)
• Continue (go to the next rate-limit or police statement in the list)
• Set precedence and continue (rewrite the IP Precedence bits and go to the next rate-limit or police statement in the list)
In Release 11.1CC, the CAR rate-limit list is not bounded in length.
Each CAR rate-limit statement is checked sequentially for a match. When a match is found, the token bucket, if there is one, is evaluated.
If the action is a "continue" action, the policer will go to the next rate-limit on the list to find a subsequent match. If a match is found, the traffic is subjected to the next applicable rate-limit.
If the end of the rate-limit list is encountered without finding a match or "continue" action, the default behavior is to transmit.
The slide lists the actions available with MQC police. They are similar to, but different than, the CAR action options.
For additional information see:
http://www.cisco.com/univercd/cc/td/doc/product/aggr/10000/10ksw/qosos.htm#47183
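A minimal MQC policing sketch, assuming the classic police syntax of this IOS era (the class name, rate, burst sizes, and ACL are hypothetical):

```
class-map match-all web
 match access-group 102
!
policy-map limit-web
 class web
  ! 128 kbps CIR with 8000-byte normal and excess bursts;
  ! excess traffic is re-marked, violating traffic is dropped
  police 128000 8000 8000 conform-action transmit exceed-action set-prec-transmit 0 violate-action drop
!
interface Serial0
 service-policy input limit-web
!
access-list 102 permit tcp any any eq www
```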
Additional Policy Map Policing and Shaping Options
Policing policy-map options:
Distributed Traffic Shaping (DTS) policy-map options:
The commands shown are some of the other options to use in the MQC policy map. They are listed here so all options can be referenced back to this location in the module section.
DTS commands will be covered in more detail later in this module. To turn on DTS, enter any of the shape commands.
There are other options in the MQC policy map. The options shown in the slide invoke queuing methods (covered in more detail in the Queuing and Scheduling module).
For example, to turn on WRED within a policy map, use any of the random-detect commands. To reserve a minimum bandwidth with CBWFQ, enter the bandwidth command.
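A policy map along these lines might combine CBWFQ bandwidth reservation with WRED; the class name, precedence value, and bandwidth are illustrative only:

```
class-map match-all critical
 match ip precedence 3
!
policy-map queue-policy
 class critical
  ! Reserve a minimum of 64 kbps for this class (CBWFQ)
  bandwidth 64
  ! Enable WRED within the class
  random-detect
 class class-default
  fair-queue
!
interface Serial0
 service-policy output queue-policy
```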
Topics
• Policing
• Traffic shaping
• LAN traffic tends to be bursty, and bursty traffic is the root of all evil…
• Shaping is highly beneficial if the downstream device is policing
–Avoids the "instantaneous congestion"
–Spaces the traffic to conform to the traffic contract
• Packet bursts are queued instead of being dropped, quickly training TCP sources to send at the desired rate
• The resulting packet stream is "smoothed" and net throughput for bursty traffic is higher
Why Traffic Shaping?
The slide lists some of the reasons for traffic shaping.
Token Bucket
• Bc tokens are added every Tc
• Over any integral multiple of Tc, the average bit rate of the interface will not exceed the mean bit rate
• The bit rate may, however, be arbitrarily fast at any time t during this period, the upper bound being the access speed
In the token bucket metaphor, tokens are put into the bucket at a certain rate: Burst Capacity (Bc) tokens every Time Interval Constant (Tc) seconds. The bucket itself has a specified capacity. If the bucket fills to capacity (Bc + Excess Burst Capacity (Be)), newly arriving tokens are discarded. Each token is permission for the source to send a certain number of bits into the network. To send a packet, the regulator must remove from the bucket a number of tokens equal in representation to the packet size.
If not enough tokens are in the bucket to send a packet, the packet either waits until the bucket has enough tokens or the packet is discarded. If the bucket is already full of tokens, incoming tokens overflow and are not available to future packets. Thus, at any time, the largest burst a source can send into the network is roughly proportional to the size of the bucket.
Note that the token bucket mechanism used for traffic shaping has both a token bucket and a data buffer, or queue; if it did not have a data buffer, it would be a policer. For traffic shaping, packets that arrive that cannot be sent immediately are delayed in the data buffer.
For traffic shaping, a token bucket permits burstiness but bounds it. It guarantees that the burstiness is bounded so that the flow will never send faster than the token bucket's capacity plus the time interval multiplied by the established rate at which tokens are placed in the bucket. It also guarantees that the long-term transmission rate will not exceed the established rate at which tokens are placed in the bucket.
Bc is known as burst capacity, Be is excess burst capacity, Tc is the time interval constant, and CIR is the Committed Information Rate. All these terms are from Frame Relay.
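These parameters are related by Tc = Bc / CIR. As a hypothetical illustration using GTS, shaping to a 64 kbps CIR with Bc = 8000 bits gives a replenishment interval of 8000 / 64000 = 125 ms (interface and values invented):

```
interface Serial0
 ! Shape to CIR 64000 bps, Bc 8000 bits, Be 0
 ! Tc = Bc / CIR = 8000 / 64000 = 0.125 s
 traffic-shape rate 64000 8000 0
```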
Token Bucket with Class-Based
Weighted Fair Queuing
• While (token bucket is not empty):
–De-queue traffic from Weighted Fair Queuing (WFQ), or traffic arrives (if WFQ is empty)
–If token bucket is not empty:
• Token bucket = token bucket less message size
• Forward the traffic
–Else: fair queue the traffic
A token bucket is a formal definition of a rate of transfer. It has three components: a burst size, a mean rate, and a time interval (Tc). Although the mean rate is generally represented as bits per second, any one of these values may be derived from the other two:
• Mean rate—Also called the committed information rate (CIR), it specifies how much data can be sent or forwarded per unit time on average.
• Burst size—Also called the Committed Burst (Bc) size, it specifies in bits per burst how much can be sent within a given unit of time to prevent scheduling concerns.
• Time interval—Also called the measurement interval, it specifies the time quantum in seconds per burst.
By definition, over any integral multiple of the interval, the bit rate of the interface will not exceed the mean rate. The bit rate may, however, be arbitrarily fast within the interval.
(Generic) Traffic Shaping
(Diagram: traffic destined for the interface is classified by extended access list functionality; matching traffic passes through "leaky bucket" shaping and configured queuing (WFQ, PQ, and so on) before reaching the transmit queue and output line; non-matching traffic goes straight to the transmit queue)
Traffic shaping allows the control of traffic going out an interface in order to match its flow to the speed of the remote, target interface and to ensure that the traffic conforms to policies contracted for it. Thus, traffic adhering to a particular profile can be shaped to meet downstream requirements, thereby eliminating bottlenecks in topologies with data-rate mismatches.
The primary reasons to use traffic shaping are to control access to available bandwidth, to ensure that traffic conforms to the policies established for it, and to regulate the flow of traffic in order to avoid congestion that can occur when the sent traffic exceeds the access speed of its remote, target interface.
Traffic shaping limits the rate of transmission of data. It limits the data transfer to one of the following:
• A specific configured rate
• A derived rate based on the level of congestion
Generic Traffic Shaping (GTS) shapes traffic by reducing outbound traffic flow to avoid congestion, constraining traffic to a particular bit rate using the token bucket mechanism. GTS applies on a per-interface basis and can use access lists to select the traffic to shape. It works with a variety of Layer 2 technologies, including Frame Relay, ATM, Switched Multimegabit Data Service (SMDS), and Ethernet.
On a Frame Relay subinterface, GTS can be set up to adapt dynamically to available bandwidth by integrating Backward Explicit Congestion Notification (BECN) signals, or set up simply to shape to a pre-specified rate. GTS can also be configured on an ATM (AIP) interface to respond to Resource Reservation Protocol (RSVP) signaled over statically configured ATM permanent virtual circuits (PVCs).
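Two hypothetical GTS variants, sketched with invented interface numbers, rates, and ACL: BECN-adaptive shaping on a Frame Relay subinterface, and shaping only the traffic matching an access list:

```
! Adapt between 128 kbps and a 64 kbps floor when BECNs arrive
interface Serial0.1 point-to-point
 traffic-shape rate 128000
 traffic-shape adaptive 64000
!
! Shape only FTP traffic (ACL 110) to 64 kbps on a LAN interface
interface Ethernet0
 traffic-shape group 110 64000
!
access-list 110 permit tcp any any eq ftp
```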
If within threshold:
• Simply forwards traffic
If not within threshold:
• Queues using a WFQ-like queue on the sub-interface
GTS is supported on most media and encapsulation types on the router. GTS can also be applied to a specific access list on an interface.
Use DTS (covered in later slides) with the VIP cards.
Distributed Traffic Shaping
(Diagram: traffic destined for the interface is classified; matching traffic is shaped and queued before reaching the transmit queue and output line; non-matching traffic goes straight to the transmit queue)
Enforces a maximum transmit rate. Temporarily reduces the transmit rate when signaled by Frame Relay (FR) Backward Explicit Congestion Notification (BECN) bits set in incoming frames.
Shapes up to 200 FR Virtual Channels (VCs) at OC-3 rates with average-size packets on a VIP2-50.
Released in 12.0(4)XE, 12.0(7)S.
Distributed Traffic Shaping (DTS) benefits:
• Offloads traffic shaping from the Route Switch Processor (RSP) to the Versatile Interface Processor (VIP)
• Supports up to 200 shape queues per VIP, supporting up to OC-3 rates when the average packet size is 250 bytes or greater and when using a VIP2-50 or better with 8 MB of SRAM. Line rates below T3 are supported with a VIP2-40.
The limitations are:
• Only IP traffic can be shaped
• dCEF must be enabled
• FastEtherChannel, Tunnel, VLAN, and ISDN/Dialer interfaces are not supported
For additional information see:
http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120limit/120xe/120xe5/dts.htm
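Since DTS is configured through the MQC shape commands, a configuration might look roughly like this (the class name, ACL, interface, and rates are illustrative, not from the original):

```
class-map match-all bulk
 match access-group 120
!
policy-map dts-out
 class bulk
  ! Shape to 128 kbps with an 8000-bit Bc
  shape average 128000 8000 0
!
! Applied to a VIP-based interface with dCEF enabled
interface Serial1/0/0
 service-policy output dts-out
!
access-list 120 permit ip any any
```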
Frame-Relay Traffic Shaping
• Rate enforcement on a per-VC basis
–Peak rate for outbound traffic can be set to match CIR or another value
• Dynamic traffic throttling on a per-VC basis
–When BECN packets indicate congestion on the network, the outbound traffic rate is automatically stepped down
• Enhanced queuing support on a per-VC basis
–Custom queuing or priority queuing can be configured for individual VCs
• Can use different VCs for different types of traffic
FRTS provides these capabilities:
• Rate enforcement on a per-VC basis—the peak rate for outbound traffic. The value can be set to match the CIR or another value.
• Dynamic traffic throttling on a per-VC basis—When BECN packets indicate congestion on the network, the outbound traffic rate is automatically stepped down; when congestion eases, the outbound traffic rate is increased. This feature is enabled by default.
• Enhanced queuing support on a per-VC basis—Either custom queuing or priority queuing can be configured for individual VCs.
By defining separate VCs for different types of traffic and specifying queuing and an outbound traffic rate for each VC, bandwidth for each type of traffic is guaranteed. By specifying different traffic rates for different VCs over the same line, virtual time-division multiplexing is performed. By throttling outbound traffic from high-speed lines in central offices to lower-speed lines in remote locations, congestion and data loss in the network are eased. Enhanced queuing also prevents congestion-caused data loss.
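A hypothetical FRTS sketch tying these capabilities together (the DLCI, rates, and map-class name are invented for illustration):

```
interface Serial0
 encapsulation frame-relay
 ! Enable FRTS on the physical interface
 frame-relay traffic-shaping
!
interface Serial0.1 point-to-point
 frame-relay interface-dlci 100
  class shape-64k
!
map-class frame-relay shape-64k
 ! Shape this VC to its 64 kbps CIR
 frame-relay cir 64000
 frame-relay bc 8000
 frame-relay be 0
 ! Step the rate down when BECNs indicate congestion
 frame-relay adaptive-shaping becn
```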
Difference Between FRTS, DTS, and GTS

GTS:
• Shaper: interface-level or group-based
• Shaping queue: WFQ
• No support for FRF.12
• Understands BECN/FECN

FRTS:
• Shaper: Frame Relay only, per DLCI
• Shaping queue: PQ, CQ, and WFQ (12.0(4)T)
• Supports FRF.12
• Understands FECN/BECN

DTS:
• Shaper: on the VIP, interface/subinterface level or group-based
• Shaping queue: WFQ
• No support for FRF.12
• Understands FECN/BECN
Cisco has long provided support for Forward Explicit Congestion Notification (FECN) for DECnet and OSI, and BECN for Systems Network Architecture (SNA) traffic using LLC2 encapsulation via RFC 1490 and DE bit support.
FRTS builds upon this existing Frame Relay support with additional capabilities that improve the scalability and performance of a Frame Relay network, increasing the density of virtual circuits and improving response time.
As is also true of GTS, FRTS can eliminate bottlenecks in Frame Relay networks that have high-speed connections at the central site and low-speed connections at branch sites. Configure rate enforcement (a peak rate configured to limit outbound traffic) to limit the rate at which data is sent on the VC at the central site.
Using FRTS, configure rate enforcement to either the CIR or some other defined value, such as the excess information rate, on a per-VC basis. The ability to let the transmission speed used by the router be controlled by criteria other than line speed (that is, by the CIR or the excess information rate) provides a mechanism for sharing media among multiple VCs. Pre-allocate bandwidth to each VC, creating a virtual time-division multiplexing network.
Also define PQ, CQ, and WFQ at the VC or sub-interface level. Using these queuing methods allows for finer granularity in the prioritization and queuing of traffic, providing more control over the traffic flow on an individual VC. If CQ is combined with the per-VC queuing and rate enforcement capabilities, the Frame Relay VCs are enabled to carry multiple traffic types, such as IP, SNA, and Internetwork Packet Exchange (IPX), with bandwidth guaranteed for each traffic type.
Summary Policy/Shaping
After completing this module section, you should be able to perform the following tasks:
• Describe the difference between policing and shaping and how each one relates to QoS
• Describe CAR, when to apply CAR, how to configure CAR
• Describe MQC policing and how to configure it
• Identify the three types of traffic shaping, their differences, and how to apply each
Section 2 Queuing
• Apply each queuing method for the right application
• Describe the difference between IP Real-time Transport Protocol (RTP) Priority and Low Latency Queuing (LLQ)
The purpose of the lesson is to quickly survey the queuing features in Cisco IOS 12.1 and to be able to describe the problems they solve.
Application Traffic
(Table: bandwidth by application across site Tiers 1-4; partially recoverable values: Market Data: 256 kbps per router, 128 kbps per pvc; 96 kbps per router, 120 kbps per pvc; 96 kbps on 1 pvc; 48 kbps; SNA: 56, 48, 36, 24 kbps; Other (WWW, Email): 628, 480 kbps)
Stock brokerage customer needs to:
• Give priority to market data
• Reserve bandwidth by application
• Provide low latency for VoIP and SNA
How?
Descriptions of the classes of traffic and policy for this customer:
• Market Data—TCP unicast; top priority; bursty
• Hoot N Holler—VoIP multicast; sensitive to jitter and excessive drop
• Systems Network Architecture (SNA)—Delay sensitive; encapsulated in TCP
• Other—Delay insensitive, drop insensitive
Market Data is seen as the most important application. The customer does not want anything to supersede it in importance or priority.
The key is to know what traffic is going over each circuit. Evaluate the traffic over a period of time so that QoS can be applied in the right locations.
Classify and Mark
• Marked (colored) at ingress at the network edge: IP Precedence / Differentiated Services Code Point (DSCP) set
• Policed and shaped at the network edge
• Potentially discarded by Weighted Random Early Detection (WRED) (congestion management)
• Assigned to the appropriate outgoing queue
• Scheduled for transmission by CBWFQ and sent out the interface
• Advanced queuing methods
• Scheduling
• Sequence of events
Basic queuing methods are examined first.
One of the most basic (not shown) is FIFO, or First-In First-Out queuing. This is the technique used in simple routers and (generally) in lines of people. No special treatment for anyone!
Generally, use a queuing technique when there is congestion, that is, not enough capacity for all the traffic. Most queuing descriptions focus on which traffic gets preferential treatment. However, if there is congestion, some traffic will have to be dropped. So all queuing schemes, implicitly, are also making a choice as to which traffic to drop.
Priority Queuing
• Rigid traffic prioritization scheme with four queues—high, medium, normal, and low
• Traffic assigned to queues by previously defined priorities and policies
• Unclassified packets are placed in the normal queue
(Diagram: the classifier sorts traffic into High, Medium, Normal, and Low queues drawn from interface buffer resources; absolute priority scheduling feeds the transmit queue and output line)
The disadvantage of priority queuing is that the higher queue is given absolute precedence over lower queues. For example, packets in the low queue are only sent when the high, medium, and normal queues are completely empty. If a queue is always full, the lower-priority queues are never serviced; they fill up and packets are lost. Thus, one particular kind of network traffic can dominate a priority queuing interface.
An effective use of priority queuing would be to place time-critical but low-bandwidth traffic in the high queue. This ensures that this traffic is sent immediately, but because of the low-bandwidth requirement, lower queues are unlikely to be starved.
In order for packets to be classified on a priority queuing interface, create policies on that interface. These policies need to filter traffic into one of the four priority queues. Any traffic that is not filtered into a queue is placed in the normal queue.
For additional information refer to:
http://cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/qos_c/qcprt2/qcdpq.htm
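A priority queuing policy along the lines described might be sketched as follows (the protocols, list numbers, and ACL are illustrative, not from the original):

```
! Time-critical, low-bandwidth Telnet goes to the high queue
priority-list 1 protocol ip high tcp telnet
! Traffic matching ACL 105 goes to the medium queue
priority-list 1 protocol ip medium list 105
! Everything unclassified lands in the normal queue
priority-list 1 default normal
!
interface Serial0
 priority-group 1
!
access-list 105 permit tcp any any eq www
```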
Custom Queuing
(Diagram: the classifier distributes traffic across up to 16 queues; weighted round-robin scheduling by byte count allocates a proportion of link bandwidth to each queue before the transmit queue and output line)
The disadvantage of custom queuing is that, as in priority queuing, policy statements on the interface must be created to classify the traffic to the queues. An effective use of custom queuing would be to guarantee bandwidth to a few critical applications to ensure reliable application performance.
In order for packets to be classified on a custom queuing interface, create custom queuing policies on that interface. These policies need to specify a ratio, or percentage, of the bandwidth on the interface that should be allocated to the queue for the filtered traffic. A queue percentage can be as small as 5%, or as large as 95%, in increments of 5%. The total bandwidth allocation for all policy statements defined on a custom queuing interface cannot exceed 95%. Any bandwidth not allocated by a specific policy statement is available to the traffic that does not satisfy the filters in the policy statements.
The queues that are defined constitute a minimum bandwidth allocation for the specified flow. If more bandwidth is available on the interface due to a light load, a queue can use the extra bandwidth. This is handled dynamically by the device.
All packets that have not been classified for custom queuing are placed in the default queue.
Figuring out how to allocate bandwidth can require either some fairly simple theoretical bandwidth calculations or a pragmatic (try-then-tweak) approach, in order to come up with the right byte-threshold limits for the custom queues.
For additional information refer to:
http://cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/qos_c/qcprt2/qcdcq.htm
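The byte-count tuning described above might be sketched like this (the queue numbers and byte counts are invented; the ratios between byte counts, not the absolute values, determine the bandwidth split):

```
! Classify web, FTP, and everything else into queues 1-3
queue-list 1 protocol ip 1 tcp www
queue-list 1 protocol ip 2 tcp ftp
queue-list 1 default 3
! Serve roughly 50% / 33% / 17% of bandwidth by byte count
queue-list 1 queue 1 byte-count 4500
queue-list 1 queue 2 byte-count 3000
queue-list 1 queue 3 byte-count 1500
!
interface Serial0
 custom-queue-list 1
```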
Weighted Fair Queuing
• Traffic automatically sorted by stream
• Low-bandwidth traffic has effective priority over high-bandwidth traffic
• Default on low-speed (E1 and below) interfaces
WFQ provides traffic priority management that automatically sorts among individual traffic streams without requiring that access lists be defined first. WFQ can also manage duplex data streams, such as those between pairs of applications, and simplex data streams, such as voice or video. In effect, there are two categories of WFQ sessions: high bandwidth and low bandwidth. Low-bandwidth traffic has effective priority over high-bandwidth traffic, and high-bandwidth traffic shares the transmission service proportionally according to assigned weights.
When WFQ is enabled for an interface, new messages for high-bandwidth traffic streams are discarded after the configured or default congestive-messages threshold has been met. However, low-bandwidth conversations, which include control message conversations, continue to queue data. As a result, the fair queue may occasionally contain more messages than its configured threshold number specifies.
With standard WFQ, packets are classified by flow. Packets with the same source IP address, destination IP address, source TCP or UDP port, and destination TCP or UDP port belong to the same flow. WFQ allocates an equal share of the bandwidth to each flow. Flow-based WFQ is also called fair queuing because all flows are equally weighted.
WFQ is the default queuing mode on interfaces that run at or below E1 speeds (2.048 Mbps or less). It is enabled by default for physical interfaces that do not use Link Access Procedure, Balanced (LAPB) encapsulation.
Versatile Interface Processors (VIPs)
• Available on Cisco 7000 routers with RSP7000 or 7500 routers with VIP2-40 or faster
For dWFQ, packets are classified by flow. Packets with the same source IP address, destination IP address, source TCP or UDP port, destination TCP or UDP port, and protocol belong to the same flow.
dWFQ can be configured on interfaces but not subinterfaces. It is not supported on Fast EtherChannel, tunnel, or other logical or virtual interfaces such as MLP.
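A hedged configuration sketch: on a VIP-based interface the same fair-queue command enables the distributed form of WFQ. The HSSI interface name below is illustrative.

```
Router(config)# interface hssi 0/0/0
Router(config-if)# fair-queue
```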
To monitor WFQ or dWFQ, use the following commands:
• show interfaces [interface] fair-queue
• show queueing fair
Example:
Router# show interfaces hssi 0/0/0 fair-queue
Hssi0/0/0 queue size 0
packets output 35, drops 0
WFQ: global queue limit 401, local queue limit 200
For additional details and configuration commands for both WFQ and dWFQ see
the following URL:
http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/12cgcr/qos_c/qcpart2/qcwfq.htm
Weighted Random Early Detection
• Upon congestion, packets with lower precedence are selectively discarded first
• Minimizes the congestion impact on higher-precedence services
Weighted Random Early Detection (WRED) is a congestion-avoidance mechanism: it aims to prevent congestion, not manage it. WRED monitors the traffic load on an interface and selectively discards lower-priority traffic when the interface starts to become congested.

For robust transport protocols (such as TCP), WRED has the effect of throttling back lower-priority traffic at the source. TCP has built-in rate-shaping mechanisms, so when congestion occurs the applications throttle back.
WRED, CQ, PQ, and WFQ are mutually exclusive on an interface. The router software produces an error message if WRED and any one of these queuing strategies are configured simultaneously.
WRED is well suited for avoiding congestion on high-speed backbone links. When testing QoS with a SmartBits analyzer, bear in mind that the SmartBits does not respond to packet drops; the testing therefore will not accurately reflect the benefits of WRED.
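A hedged configuration sketch follows. The interface and threshold values are illustrative; random-detect precedence takes a minimum threshold, a maximum threshold, and a mark-probability denominator.

```
Router(config)# interface serial 0
Router(config-if)# random-detect
Router(config-if)# random-detect precedence 0 20 40 10
Router(config-if)# random-detect precedence 5 32 40 10
```

With these values, precedence 0 traffic begins to be dropped when the average queue depth reaches 20 packets, while precedence 5 traffic is not dropped until the average depth reaches 32, so lower-precedence traffic is sacrificed first.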
A Possible Implementation of Assured Forwarding with WRED
• Different drop preference for different precedence
(Figure: within an AF class, the probability of drop increases with increasing average queue size; each drop preference has its own threshold, th1, th2, th3, below which packets are transmitted.)
The slide shows how WRED works. Between the initial threshold and the maximum queue size, the probability of dropping a packet increases. When the queue is full, every subsequent packet must be dropped.

WRED helps break up a TCP/IP phenomenon known as porpoising, in which congestion avoidance in TCP causes a number of flows to all reduce their send windows at the same time. TCP flows tend toward synchronized send windows and group slowdowns (similar to a school of porpoises surfacing in the ocean at one time). It is better to randomly sacrifice a packet from one flow so that many flows do not unnecessarily slow down at the same time.
The WRED mechanism gives a possible implementation of DiffServ Assured Forwarding (AF).
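As a hedged sketch of the AF idea, assuming an IOS image that supports DSCP-based WRED (the interface and thresholds are illustrative): within AF class 1, the higher drop precedences get lower minimum thresholds, so AF13 is dropped before AF12, which is dropped before AF11.

```
Router(config)# interface serial 0
Router(config-if)# random-detect dscp-based
Router(config-if)# random-detect dscp af11 32 40 10
Router(config-if)# random-detect dscp af12 28 40 10
Router(config-if)# random-detect dscp af13 24 40 10
```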
Flow RED
• Penalizes non-adaptive flows (for example, User Datagram Protocol (UDP))
• Prevents a single flow from taking all the buffer resources

interface Serial1
 random-detect
 random-detect flow
 random-detect flow average-depth-factor 8
 random-detect flow count 16
FRED tends to penalize non-adaptive flows. A flow is considered non-adaptive (that is, it takes up too much of the resources) when the average flow depth times the specified multiplier (scaling factor) is less than the flow's depth: average-flow-depth * scaling-factor < flow-depth. For test comparisons, FRED penalizes the non-adaptive SmartBits tester more heavily. It is also "too strict" for use with UDP voice traffic.
To enable flow-based WRED, also known as FRED, use the random-detect flow interface configuration command: random-detect flow

Use this command to enable flow-based WRED before using the random-detect flow average-depth-factor and random-detect flow count commands to further configure the parameters of flow-based WRED. For additional information, refer to:
http://cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/qos_r/qrdcmd4.htm#xtocid79759
To set the multiplier used in determining the average depth factor for a flow when FRED is enabled, use the random-detect flow average-depth-factor interface configuration command: random-detect flow average-depth-factor scaling-factor

Use this command to specify the scaling factor that flow-based WRED should use in scaling the number of buffers available per flow and in determining the number of packets allowed in the output queue for each active flow. This scaling factor is common to all flows.
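As a worked example of the non-adaptive test (the flow depths are illustrative), with a scaling factor of 8 as in the sample configuration:

```
average-flow-depth * scaling-factor = 4 * 8 = 32 packets
flow-depth = 40 packets  ->  32 < 40, so the flow is classed as non-adaptive
```

A flow holding 40 packets against a computed limit of 32 is judged to be taking more than its share of buffers, and FRED drops its packets more aggressively.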