DOCUMENT INFORMATION

Title: Optimizing Network Performance With Queuing And Compression
Publisher: Syngress Publishing
Subject: Networking
Year of publication: 2001
Pages: 39
File size: 131 KB

CONTENTS



or later interface processors. Although the VIP2-40 is the minimum required interface processor to run DWFQ, it is recommended to deploy VIP2-50s when the aggregate port speed on the VIP exceeds 45 Mbps. In addition, distributed Cisco Express Forwarding (dCEF) is required to run DWFQ.

dCEF provides increased packet routing performance because the entire forwarding information base (FIB) is resident on each VIP card. Therefore, routing table lookups happen locally on the VIP card without querying the centralized route switch processor.

In flow-based DWFQ, all traffic flows are equally weighted and guaranteed equal access to the queue. This queuing method guarantees fair access to all traffic streams, thus preventing any single flow from monopolizing resources.

To enable DWFQ, activate fair queuing by enabling "ip cef" in global configuration mode and "fair-queue" under the VIP2 interface configuration.

Review the following example:

version 12.1
!
ip cef
!
interface FastEthernet0/0
 ip address 172.20.10.2 255.255.255.0
 full-duplex
!
interface Hssi4/0
 ip address 172.20.20.2 255.255.255.0
 fair-queue
!
router ospf 100
 network 172.20.0.0 0.0.255.255 area 0
!
router#


DWFQ also has the following limitations:

■ Can be configured only on main interfaces; per IOS 12.1.0, there is no sub-interface support

■ Can be configured only on an ATM interface with AAL5SNAP encapsulation. Per IOS 12.1.0, there is no support for AAL5MUX or AAL5NLPID encapsulations

■ Is not supported on any virtual, tunnel, or FastEtherChannel interfaces

■ Cannot be configured in conjunction with RSP-based WFQ, PQ, or CQ

Priority Queuing (PQ)

PQ provides a granular means for the network administrator to determine which traffic must be queued and serviced first. With priority queuing techniques, the network administrator must understand all the traffic flows within the network. This type of control is important when specific mission-critical traffic must receive servicing. The network administrator has the control to create different interface packet queues that are serviced in a hierarchical order.

Each network flow can be categorized by the following:

■ Protocol or sub-protocol type

■ Incoming interface

■ Packet size

■ Fragments

■ Access lists

The queues are known as high, medium, normal, and low. The router services the queues from highest to lowest priority. The service order on the four queues works such that if the high queue has traffic in it, the normal queue cannot forward any packets until all packets in the high-priority queue are transmitted. This is a major issue when designing a queuing strategy for a network. The network administrator may inadvertently starve a certain network stream, making users unable to use applications and services on the network. However, this may be ideal for networks in which critical applications are not able to run because network users are running "less important" applications. Figure A.4 illustrates the PQ packet flow.

When using PQ, packets are compared with a statically defined priority list. If there is any capacity in the priority queue associated with the incoming traffic, the packet is placed in the designated queue and waits to be serviced out the interface. If there is no room left in the queue, the packet is dropped.

Figure A.4 PQ packet flow: an inbound packet is matched to the high, medium, normal, or low queue; if the selected queue is full (or a timeout condition occurs), the packet is discarded; otherwise it is placed in the appropriate queue and dispatched out the interface by the queue servicing process.


Packets that are dropped do not go into another queue. Since the queue definitions are static, a packet either fits into a queue or it does not. Even though packets are sent into queues, there is no guarantee they will be processed in time to reach their destination. This process enables network administrators to control the priority of mission-critical network traffic, but it also requires a good understanding of its effect on the flow of other network traffic. Networks implementing priority queuing require constant reassessment, since traffic pattern requirements may change over time. Traffic that was once considered high priority may become a low priority at some point.

It is important to note that priority queuing can affect CPU utilization. Cisco routers will process-switch packets on interfaces that have priority queuing enabled. The packet-switching performance will be degraded compared with other interfaces using caching schemes. Also note that priority queuing is not supported on tunnel interfaces.

Priority Queuing Examples

In a mainframe environment, there may be a lot of users "surfing" the Web and downloading files, causing performance problems with time-sensitive Systems Network Architecture (SNA) traffic and other tn3270 (Telnet) traffic. The following configuration gives the SNA traffic (using Data-Link Switching (DLSw)) and the Telnet traffic high priority, while the rest of the traffic is considered low. There may be some exceptions, which can be controlled using an access list to assign normal priority.

!
priority-list 1 protocol ip normal list 100
priority-list 1 protocol ip high tcp telnet
priority-list 1 protocol dlsw high
priority-list 1 default low

!

To use an extended access list to make specific IP traffic have normal priority on the interface, the priority-list 1 protocol ip normal list 100 command is used.

To configure Telnet traffic as high priority, the priority-list 1 protocol ip high tcp telnet command is used.

To configure DLSw traffic as high priority, the priority-list 1 protocol dlsw high command is used.

To configure traffic that does not match any of the previous statements, the priority-list 1 default low command will set a default priority. If no default queue is defined, the normal queue is used.

!
interface Serial0
 priority-group 1

!

The priority-group 1 interface command is configured under the whole interface to specify that priority list 1 is used for that interface.

c2507#show interface serial 0
Serial0 is up, line protocol is up
Hardware is HD64570

MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec, rely 255/255, load 1/255

Encapsulation FRAME-RELAY, loopback not set, keepalive set (10 sec)

LMI enq sent 0, LMI stat recvd 0, LMI upd recvd 0, DTE LMI up
LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0

LMI DLCI 1023 LMI type is CISCO frame relay DTE

Broadcast queue 0/64, broadcasts sent/dropped 0/0, interface broadcasts 0

Last input 00:00:03, output 00:00:03, output hang never
Last clearing of "show interface" counters 00:00:03
Input queue: 0/75/0 (size/max/drops); Total output drops: 0
Queueing strategy: priority-list 1

Output queue (queue priority: size/max/drops):

high: 0/20/0, medium: 0/40/0, normal: 0/60/0, low: 0/80/0

5 minute input rate 0 bits/sec, 0 packets/sec

5 minute output rate 0 bits/sec, 0 packets/sec

0 packets input, 0 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort

0 packets output, 0 bytes, 0 underruns

0 output errors, 0 collisions, 0 interface resets

0 output buffer failures, 0 output buffers swapped out

0 carrier transitions DCD=up DSR=up DTR=up RTS=up CTS=up

c2507#

Using the show interface serial 0 command, the type of queuing is displayed on the queuing strategy line of the interface output. The syntax for the queues is size/max/drops, where size is the current depth of the queue, max is the maximum depth of the queue before packets are dropped, and drops is the number of packets dropped after the max has been reached. The size and drops counters reset to 0 when the counters are cleared.

!
priority-list 1 queue-limit 30 60 60 90

!


The command priority-list 1 queue-limit <high> <med> <norm> <low> configures the different queues to different depths.

Custom Queuing (CQ)

CQ is the next progression of PQ. It guarantees some level of service to all created queues. With PQ, you can end up servicing only your high-priority queue and never servicing the low-priority queue. CQ takes the other queues into consideration, allowing a percentage of the other queues' traffic to be processed. The percentage can be defined by the protocol, source/destination address, or incoming interface. This ability to assign a percentage of the output interface ensures that each queue will be serviced regularly and guaranteed some level of bandwidth. Figure A.5 illustrates the CQ servicing process.

Figure A.5 The CQ servicing process: inbound data is matched against the custom queue list and placed in the appropriate queue (queue 1 through queue 16); the queue servicing process sends packets from the current queue until its service threshold is exceeded or no more packets remain, then moves on to the next queue.


There are 17 queues defined in CQ. Queue 0 is reserved for system messages such as keepalives and signaling, and queues 1 through 16 are available for custom configuration. The system queue is always serviced first. The algorithm allows you to specify the number of bytes to be serviced by the queue and/or the number of packets to be forwarded by the queue before moving to the next sequential queue. The result is a queuing mechanism that services each queue sequentially for the predetermined byte and/or packet count before cycling to the next queue. Bandwidth to each queue is indirectly configured in terms of byte count and queue length. When using CQ, no application receives more bandwidth than configured in its custom queue under congestive conditions.

It is important to set the byte count parameters correctly to achieve predictable results. Assume that you want to engineer a custom queue that divides the effective interface bandwidth evenly across four different applications. Now, also assume that you have not performed any traffic analysis and have configured four CQs with a byte count of 250 under the assumption that all the applications are similar. Now suppose that the applications transmit 100-, 300-, 500-, and 700-byte frames, respectively. The net result is not a 25/25/25/25 ratio. When the router services the first queue, it forwards three 100-byte packets; when it services the second queue, it forwards one 300-byte packet; when it services the third queue, it forwards one 500-byte packet; and when it services the fourth queue, it forwards one 700-byte packet. The result is an uneven distribution of traffic flowing through the queues. You must predetermine the packet size used by each flow or you will not be able to configure your bandwidth allocations correctly.

To determine the bandwidth that a custom queue will receive, use the following formula:

(queue byte count / total byte count of all queues) * bandwidth capacity of the interface
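To make the uneven result above concrete, here is a worked example; the 1544 Kbps (T1) interface rate is assumed purely for illustration. With a byte count of 250 per queue, the queues actually drain 300, 300, 500, and 700 bytes per cycle (whole packets are always forwarded), for a total of 1800 bytes:

queue 1: (300 / 1800) * 1544 Kbps ≈ 257 Kbps (about 17 percent)
queue 2: (300 / 1800) * 1544 Kbps ≈ 257 Kbps (about 17 percent)
queue 3: (500 / 1800) * 1544 Kbps ≈ 429 Kbps (about 28 percent)
queue 4: (700 / 1800) * 1544 Kbps ≈ 600 Kbps (about 39 percent)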


Custom Queuing Examples

In an environment where there is a low-speed serial connection handling all of the network traffic and more control over the different traffic types is necessary, CQ may be most suitable. In an environment where users are having problems getting Dynamic Host Configuration Protocol (DHCP) information when booting up, create a configuration that allows DHCP traffic to have a higher priority. The following configuration shows Telnet and bootpc with the highest priority and an access list with the lowest priority.

!
queue-list 1 protocol ip 1 list 100
queue-list 1 protocol ip 2 tcp telnet
queue-list 1 protocol ip 3 udp bootpc
queue-list 1 default 4

!

To use an extended access list to make specific IP traffic flow into queue 1, the queue-list 1 protocol ip 1 list 100 command is used.

For all other traffic not defined in any of the CQs, a default queue should be configured, as in the queue-list 1 default 4 command. If there is no default queue configured, the router will assume that queue 1 is the default.

!
queue-list 1 queue 1 byte-count 1000
queue-list 1 queue 2 byte-count 4000
queue-list 1 queue 3 byte-count 4000
queue-list 1 queue 4 byte-count 2000

!


Queue 1 has been configured for 1000 bytes to be drained per cycle, queue 2 has been configured for 4000 bytes, queue 3 has been configured for 4000 bytes, and default queue 4 has been configured for 2000 bytes. Configuring the byte counts of the different queues controls which queue has high priority. The higher the byte count, the more bandwidth is dedicated to that queue.
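Applying the earlier formula to these byte counts (and assuming, for illustration, packets small enough that each queue drains roughly its configured byte count per cycle), the total is 11,000 bytes per cycle and the approximate bandwidth shares are:

queue 1: 1000 / 11000 ≈ 9 percent of the interface bandwidth
queue 2: 4000 / 11000 ≈ 36 percent
queue 3: 4000 / 11000 ≈ 36 percent
queue 4: 2000 / 11000 ≈ 18 percent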

!
interface Serial 0
 custom-queue-list 1

MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec, rely 255/255, load 1/255

Encapsulation FRAME-RELAY, loopback not set, keepalive set (10 sec)

LMI enq sent 0, LMI stat recvd 0, LMI upd recvd 0, DTE LMI down
LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0

LMI DLCI 1023 LMI type is CISCO frame relay DTE

FR SVC disabled, LAPF state down
Broadcast queue 0/64, broadcasts sent/dropped 0/0, interface broadcasts 0

Last input 00:00:07, output 00:00:07, output hang never
Last clearing of "show interface" counters 00:00:03
Input queue: 0/75/0 (size/max/drops); Total output drops: 0
Queueing strategy: custom-list 1

Output queues: (queue #: size/max/drops)


0: 0/20/0 1: 0/20/0 2: 0/20/0 3: 0/20/0 4: 0/20/0 5: 0/20/0 6: 0/20/0 7: 0/20/0 8: 0/20/0 9: 0/20/0 10: 0/20/0 11: 0/20/0 12: 0/20/0 13: 0/20/0 14: 0/20/0 15: 0/20/0 16: 0/20/0

5 minute input rate 0 bits/sec, 0 packets/sec

5 minute output rate 0 bits/sec, 0 packets/sec

0 packets input, 0 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort

0 packets output, 0 bytes, 0 underruns

0 output errors, 0 collisions, 1 interface resets

0 output buffer failures, 0 output buffers swapped out

2 carrier transitions DCD=up DSR=up DTR=up RTS=up CTS=up

c2507#

!
queue-list 1 queue 1 limit 40

!

The queue-list <list> queue <queue#> limit <depth> command configures the queue depth for each custom queue.

Class-Based Weighted Fair Queuing (CBWFQ)

CBWFQ is an extended version of the standard WFQ functionality, with support for user-defined traffic classes added. With CBWFQ, the network administrator has the ability to separate traffic and place it into queues based on criteria such as protocol, access control lists (ACLs), or originating interface. Each packet is analyzed in an effort to match a defined traffic class. The packet is then forwarded to the appropriate queue for servicing.


Classes are defined by parameters called class characteristics. Examples of class characteristics are bandwidth, weight, and maximum packet limit. The bandwidth assigned is the minimum bandwidth required for that specific class of service during periods of congestion. The weight value is derived from the bandwidth value assigned to each class. In addition, the weight value is used to help calculate the average queue length and packet limit. The packet limit defines the queue depth in packets. The queue is designed to drop all packets that exceed the configured queue depth or packet limit unless a policy is applied to the class. An example of such a policy is weighted random early detection (WRED), which we will discuss a bit later.

CBWFQ does not allow more than 75 percent of the interface bandwidth to be assigned to classes. The additional 25 percent is reserved for overhead such as routing updates. The network administrator can override this threshold, but must first take into account all the bandwidth required for routing protocol updates.

A good example is an ATM-based interface. The network administrator would need to take into account the overhead required to package data into ATM cells at Layer 2, in addition to any control packet flows traversing the link.

The advantage to using CBWFQ is that it is not bound to packet flows. In CBWFQ, up to 64 classes can be defined to a more granular level than traditional WFQ. CBWFQ is not affected by the total number of flows traversing an interface, and classes do not compete for bandwidth with other classes. The caveat is that multiple flows can compete for bandwidth within a defined class; therefore, significant thought is required when defining your queuing strategy.

CBWFQ is not supported in conjunction with traffic shaping or ATM unspecified bit rate (UBR) permanent virtual circuits. Please review Figure A.6, which illustrates CBWFQ operation. CBWFQ allocates bandwidth to a queue by guaranteeing the minimum amount of bandwidth defined for each class. There are 64 definable queues; WFQ is used to allocate bandwidth within each class or queue, unlike CQ, which services each queue defined in a FIFO manner.
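As a minimal configuration sketch, CBWFQ is set up through the IOS modular QoS CLI (class-map, policy-map, and service-policy commands); the class name, access list number, and bandwidth value below are chosen purely for illustration:

! CBWFQ sketch; class name, ACL number, and bandwidth value are illustrative
access-list 101 permit tcp any any eq telnet
!
class-map TELNET-CLASS
 match access-group 101
!
! The Telnet class is guaranteed 128 Kbps during congestion; all other
! traffic falls into class-default and is handled by flow-based WFQ
policy-map WAN-EDGE
 class TELNET-CLASS
  bandwidth 128
 class class-default
  fair-queue
!
interface Serial0
 service-policy output WAN-EDGE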


Selecting a Cisco IOS Queuing Method

Steps 1 through 6 should be followed when determining which queuing option to implement:

1. Is the WAN link congested with network traffic? If there is no congestion on the link, there is no need to sort the traffic into queues. If the link is consistently congested, traffic queuing may not resolve the problem. If the link is only congested for short periods of time, queuing may resolve the flows.

2. What type of traffic is traversing the network, and is it congested? The network administrator must learn the traffic flows and study the link during peak usage. This will help determine what traffic is utilizing the link and what can be done with that traffic. The network administrator needs to determine whether control over individual streams has to be enforced and/or whether generic protocols need to be queued to improve response time. Remember, traffic utilization is dynamic and will need to be analyzed often to determine whether changes are required.

Figure A.6 CBWFQ: incoming data is assigned to a class by an ACL; up to 64 classes can be defined (plus a default class, which is allocated 25 percent of the bandwidth), and each class receives a minimum guaranteed share of bandwidth on the output queue.

3. After the traffic analysis is completed, can the traffic be serviced by WFQ? This step determines whether packet trains are utilizing the link during peak times. If so, the automatic queuing provided by WFQ may be able to meet current needs. Remember, traffic patterns are dynamic and subject to change. It is recommended that a regular traffic analysis be performed to determine whether queuing optimization is required.

4. What is your organization's queuing policy? Queuing policies are based on application requirements in conjunction with a detailed traffic study. All interfaces require a basic queuing configuration. These configuration values may need to be adjusted based on application requirements or location.

5. Does control over individual streams need to be taken into account? If certain applications are failing but enough bandwidth exists, CQ, WFQ, or CBWFQ can be utilized. This will allow the network administrator to select the critical traffic to be serviced while the other network flows utilize the remaining bandwidth.

6. Can network delay be tolerated? If so, the network administrator can develop PQ schemes. The network administrator will need to determine which flows need servicing first and then determine how the other flows can be divided into the remaining queues. If the network cannot handle delays in packet arrival, then CQ can be used. CQ can guarantee that all applications gain some access to the link. Please review the queuing selection flow chart in Figure A.7.


When addressing congestion on links that have very low physical bandwidth, consider the amount of bandwidth being used by the routing protocol selected. For locations that are stub sites (have only one link connected to the backbone), consider using a default route or gateway of last resort. This will avoid the overhead associated with dynamic routing protocols.
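As a minimal sketch of that suggestion, a stub-site router can simply point a static default route (its gateway of last resort) at the hub and run no dynamic routing protocol; the next-hop address below is illustrative:

! Stub site: static default route instead of a dynamic routing protocol
! (next-hop address is illustrative)
ip route 0.0.0.0 0.0.0.0 172.20.20.1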

Figure A.7 Queuing selection. The flow chart walks through the decisions: if the WAN is not experiencing congestion, there is no need for queuing and FIFO is used; if no strict policies are needed, use WFQ, or DWFQ on a 7500 router with a VIP2-40 or better; if a queuing policy is defined, traffic preference is defined in terms of priority, with priority queuing for delay-sensitive traffic and custom queuing otherwise.


Other things to consider include dynamic routing protocol selection, such as Routing Information Protocol (RIP) versus Open Shortest Path First (OSPF). Distance vector protocols such as RIP propagate the entire routing table every 30 seconds, requiring more bandwidth than link-state protocols such as OSPF, which propagate changes in a given topology as they occur.

Table A.3 provides a comparison of queuing techniques.

Table A.3 Queuing Technique Selection

Weighted Fair Queuing                Priority Queuing                  Custom Queuing
Low volume given priority            High queue serviced first         Round-robin service
Conversation dispatching             Packet dispatching                Threshold dispatching
Interactive traffic gets priority    Critical traffic gets priority    Allocation of available bandwidth
File transfer gets balanced access   Designed for low-bandwidth links  Designed for higher-speed, low-bandwidth links
Enabled by default                   Must configure                    Must configure

Verifying Queuing Operation

To properly verify queuing operation, use the show queuing command to identify discards in both the input and output queues.

Router1#show queuing

Current fair queue configuration:

Interface Serial 0

Input queue: 0/75/0 (size/max/drops); Total output drops: 0
Output queue: 18/64/30 (size/threshold/drops)
Conversations 2/8 (active/max active)
Reserved Conversations 0/0 (allocated/max allocated)

(depth/weight/discards) 3/4096/30
Conversation 117, linktype: ip, length: 556, flags: 0x280
source: 172.16.128.110, destination: 172.16.58.90, id: 0x1069, ttl: 59,
TOS: 0 prot: 6, source port 514, destination port 1022

(depth/weight/discards) 14/4096/0
Conversation 150, linktype: ip, length: 1504, flags: 0x280
source: 172.16.128.110, destination: 172.16.58.90, id: 0x104D, ttl: 59,

TOS: 0 prot: 6, source port 20, destination port 1554

Weighted Random Early Detection (WRED) Overview

WRED is Cisco's version of RED. When this service is used, routers will attempt to anticipate and subsequently avoid network congestion. This differs from queuing techniques that attempt to control congestion after it has occurred on an interface.

RED is designed to make packet-switched networks aware of congestion before it becomes a problem. RED tries to control the average queue size while signaling the end host, through Transmission Control Protocol's (TCP's) congestion control mechanisms, that it should slow its transmission of packets.

RED will randomly drop packets during periods of high congestion. This action causes the source machine to decrease its transmission rate. Since TCP restarts quickly once a packet is lost, it can adapt its transmission rate to one the network can support.

RED is recommended only for TCP/IP networks. It is not recommended for protocols such as AppleTalk or Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX), which respond to dropped packets by retransmitting at the original rate.

Tail Drop

Tail dropping occurs when the egress queues become so congested that no more packets can enter the queue. These packets have nowhere to go, so they are dropped from the tail end of the queue.

Once packets start to tail-drop, the current network sessions will go into timeout mode. These timeouts can cause each sender to simultaneously retransmit. Since all TCP sessions restart at the same time, more packets get congested in the queue at approximately the same interval, essentially causing a cyclic effect. In other words, traffic can go through a wave of congestion that increases and decreases at regular intervals, which is commonly referred to as a global synchronization problem.

Weighted Random Early Detection (WRED)

WRED tries to overcome the problem seen with tail dropping by randomly discarding packets before the buffers get congested. WRED determines when to start dropping packets based on the average queue length. Once the packet count within the queue exceeds the defined upper queue threshold, WRED begins dropping packets in the upper queue range. The dropping of packets is totally indiscriminate with respect to the network flow. Since packets are dropped at random within the queue, only a few sessions restart. This gives the network a chance to drain the queues. Since the remaining sessions are still flowing, the buffers can empty and allow other TCP sessions a chance to recover.
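As a minimal configuration sketch, WRED is typically enabled per interface with the random-detect command; the interface and the optional exponential weighting constant below are illustrative:

! Enable WRED on the interface; the weighting constant is illustrative
interface Serial0
 random-detect
 random-detect exponential-weighting-constant 9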

NOTE

WRED, CQ, PQ, and WFQ are mutually exclusive on an interface. The router software produces an error message if you configure WRED and any one of these queuing strategies simultaneously.


Flow-Based WRED

Flow-based WRED takes into account the types of packets and protocols it attempts to drop while keeping track of flow states. If it needs to drop any flows, it will look for new flows within the queue rather than sacrificing a currently connected flow.

To allow for irregular bursty traffic, a scaling factor is applied to the common incoming flows. This value allows each active flow to reserve a number of packets in the output queue. The value is used for all currently active flows. When the scaling factor is exceeded, the probability of packets being dropped from the flow is increased. Flow-based WRED provides a fairer method of determining which packets are tail-dropped during periods of congestion. Flow-based WRED automatically tracks flows to ensure that no single flow can monopolize resources. This is accomplished by actively monitoring traffic streams, learning which flows are not slowing down packet transmission, and fairly treating flows that do slow down packet transmission.
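As a minimal sketch, flow-based WRED is enabled on top of WRED with the random-detect flow command; the interface, the optional scaling factor, and the flow count below are illustrative:

! Enable flow-based WRED (scaling factor and flow count are illustrative)
interface Serial0
 random-detect
 random-detect flow
 random-detect flow average-depth-factor 4
 random-detect flow count 64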

Data Compression Overview

Traffic optimization is a strategy that a network designer or operator pursues when trying to reduce the cost and prolong the link life of a WAN, in particular by improving link utilization and throughput. Many techniques are used to optimize traffic flow, including PQs (as we described earlier), filters, and access lists. However, more effective techniques are found in data compression. Data compression can significantly reduce frame size and therefore reduce data travel time between endpoints. Some compression methods reduce the packet header size, while others reduce the payload. Moreover, these methods ensure that reconstruction of the frames happens correctly at the receiving end. The types of traffic and the network link type and speed need to be considered when selecting the data compression method to be applied. For example, data compression techniques used on voice and video differ from those applied to file transfers.
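As a rough illustration of the two broad categories just mentioned, Cisco IOS supports payload compression on point-to-point serial links with the compress command and TCP header compression with ip tcp header-compression; the interfaces and encapsulation below are assumed for illustration, and compression must be enabled on both ends of a link:

! Payload compression (Stacker) on a PPP serial link (illustrative)
interface Serial0
 encapsulation ppp
 compress stac
!
! TCP header compression on a low-speed serial link (illustrative)
interface Serial1
 ip tcp header-compression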
