
ASP Configuration Handbook, Part 8




DOCUMENT INFORMATION

Basic information

Title: Designing the Infrastructure
Institution: Syngress Publishing
Field: Network Design
Type: Book
Year of publication: 2001
City: Burlington
Format:
Pages: 66
Size: 607.34 KB


Contents



…such as variable-length subnet mask (VLSM), classless inter-domain routing (CIDR), and a routing protocol that can support these methods.

Possible Types of Topology Design

Once you have established your internetwork scheme, you must design a way for handling interconnections among sites within the same region or area of administrative control. In designing regional WANs, whether you are using packet-switching services or point-to-point interconnections, three basic design

approaches are common throughout the industry:

■ Star topologies

■ Fully meshed topologies

■ Partially meshed topologies

In the following pages, I will try to help you understand these topologies and how you can use them to your advantage. Remember, though, that the discussions presented in this chapter address the application of these topologies specifically to packet-switching services.

NOTE

Illustrations in this chapter use lines to show the connection of specific routers on the PSDN network. These connections are considered virtual connections, as the circuits are mapped within the routers themselves. Normally, all physical connections are made to switches within the PSDN. Unless otherwise specified, the connecting lines represent virtual connections within the PSDN.

Star Topologies

The star topology (also known as a hub and spoke) is a grouping of network devices that has a single internetworking hub; it provides connections for the external cloud networks to the backbone and access to each other, although only through the core router. Figure 8.3 illustrates a packet-switched star topology for a regional internetwork.

One of the main advantages of a star topology is simplified management and minimized tariff costs or tolls. Whereas tolls aren't much of a factor these days, they were in the past, and with some of the things that are happening politically, they could be again. However, there are significant disadvantages.

First, the core router is a single point of failure for the entire internetwork.

Second, the core router can limit overall performance for access to backbone resources; the core may not be robust enough, or have enough bandwidth, to handle all of the traffic from the external networks. Third, this topology is not very scalable, as there are generally only a certain number of ports on the core router.

Fully Meshed Topologies

In a fully meshed topology, each routing node on the edge of a packet-switching network has a direct path to every other node on the cloud. Figure 8.4 illustrates this type of arrangement.

One of the best reasons for creating a fully meshed environment is that it provides for a high level of redundancy. A fully meshed topology helps to facilitate the support of all routing protocols, but it is not tenable in large packet-switched internetworks.

Some of the main issues are due to the large number of virtual circuits that are required (one for every connection between routers). There are also problems associated with the large number of packet and broadcast replications necessary for routing protocols or application traffic, and the configuration complexity for routers.

[Figure 8.3/8.4 artwork: routers connected through a packet-switching cloud, with a core router and a regional network-to-backbone connection]


There is a middle ground, though; by combining fully meshed and star topologies into a partially meshed environment, you can improve fault tolerance without encountering the performance and management problems that are normally associated with a fully meshed internetwork. The following section discusses the partially meshed topology.

Partially Meshed Topologies

As discussed earlier, a partially meshed topology reduces several of the problems that are inherent in star and fully meshed topologies. There is a reduction in the number of routers within a region that need direct connections to all other nodes in the region; not all nodes need to be connected to all other nodes. For a meshed node to communicate with a nonmeshed node, it sends traffic through one of the hub routers. This is a lot like the star topology, but there is a redundant path available in the event the hub router becomes inoperable. Figure 8.5 illustrates such a situation.

There are many forms of partially meshed topologies. Generally, partially meshed implementations provide the optimum balance for regional topologies in terms of the number of virtual circuits, their ability to provide redundancy, and their overall performance. By providing a greater amount of connectivity and high availability, you will be able to use your bandwidth more effectively.

[Figure artwork: routers interconnected across a packet-switching network]


Broadcast Issues

Broadcast traffic presents problems when it is introduced into a packet-service environment. Broadcasts are necessary for a node to reach multiple other node stations with a single packet when the sending node does not know the address of the intended recipient, or when the routing protocols need to send hello packets and other miscellaneous services.

As an example, the level of broadcast traffic that is generated in an Enhanced IGRP environment depends on the setting of the Enhanced IGRP hello-timer interval. The size of the internetwork determines other issues. In a small network, the amount of broadcast traffic generated by Enhanced IGRP nodes might be higher than with comparable internal gateway routing protocols that run on the Internet.
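To make the timer's effect concrete, here is a rough sketch of how the hello interval drives this overhead. The neighbor count, interval values, and 60-byte packet size are assumptions for illustration only, not Cisco defaults or figures from this book:

```python
def hello_bps(neighbors, hello_interval_s, hello_bytes=60):
    """Rough average bandwidth consumed by periodic hello packets,
    one per neighbor per interval (packet size is an assumption)."""
    return neighbors * hello_bytes * 8 / hello_interval_s

# Ten neighbors: an aggressive 5-second hello vs a relaxed 60-second hello.
print(hello_bps(10, 5))   # 960.0 bits/sec
print(hello_bps(10, 60))  # 80.0 bits/sec
```

Shortening the hello timer by a factor of twelve multiplies the steady-state hello traffic by the same factor, which is why the interval setting dominates broadcast load in small networks.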

However, for large-scale internetworks, Enhanced IGRP nodes generate substantially less broadcast traffic than RIP-based nodes, for example.

NOTE

Usually, it is a good practice to manage packet replication when going over your design considerations. When integrating broadcast-type LANs (such as Ethernet) with nonbroadcast packet services (such as X.25), you should try to figure out where replication will cause bottlenecks within your network. With the multiple virtual circuits that are characteristic of connections to packet-switched environments, routers need to replicate broadcasts for each virtual circuit on a given physical link.

[Figure 8.5 artwork: routers connected through a Frame Relay network with two redundant hub routers]


Within a highly meshed environment, the replicating broadcasts can be resource intensive in terms of increased required bandwidth and number of CPU cycles. Because of this, highly meshed networks are impractical for large packet-switching networks. However, circuit meshing is essential to enable fault tolerance. You really need to balance the trade-offs in performance with requirements for redundancy. Also remember that as you scale your network, there will be other issues that fall within the same vein; as you add routing nodes, you will want to add redundancy, which will add at least two paths to your core infrastructure.

Performance Issues

When designing your WAN around a specific application service type, you should consider the characteristics of the virtual circuit. Sometimes the performance of a virtual circuit will depend on its capability to handle mixed-protocol traffic. Depending on how the traffic is queued and streamed from one node to the next, certain applications may require special handling. One solution might be to assign specific circuits to specific application and protocol types.

There are always going to be performance concerns for specific packet-switching services. That is why there is the ability to include Committed Information Rates (CIR) in Frame Relay internetworks and window size limitations in X.25 networks. (The CIR matches the maximum average rate per connection for a period of time.)

What is highly common within the ISP market is to sell guaranteed CIRs to your customers and give them the ability to "burst" (for a fee, of course) outside of the limits that they were given. As it is, a CIR is the minimum amount of bandwidth that your client is guaranteed at any point in time. The CIR is usually defined within the service level agreement (SLA) to which you and the customer agreed.
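The CIR-plus-burst arrangement described above can be sketched as a simple rate classifier. The function name and thresholds here are hypothetical illustrations, not part of any Frame Relay standard or vendor API:

```python
def classify_rate(rate_bps, cir_bps, burst_bps):
    """Classify a customer's measured rate against a guaranteed CIR
    and a burst ceiling (hypothetical billing model, for illustration)."""
    if rate_bps <= cir_bps:
        return "within CIR"          # guaranteed bandwidth per the SLA
    if rate_bps <= burst_bps:
        return "bursting"            # permitted, typically billed extra
    return "exceeds burst limit"     # subject to policing or discard

# A 256 kbps CIR with a 512 kbps burst ceiling:
print(classify_rate(100_000, 256_000, 512_000))  # within CIR
print(classify_rate(300_000, 256_000, 512_000))  # bursting
```

In practice the carrier's switches enforce these thresholds per DLCI; the point of the sketch is only the three-band structure that the SLA describes.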

Frame Relay Internetwork Design Considerations

A major concern when designing a Frame Relay implementation is scalability. As the number of remote clients and their links grows, your network must be able to accommodate these growth spurts. The network must also provide a high level of performance, yet minimize support and management requirements. Meeting all these objectives can be quite a feat. The following sections focus on some of the critical factors for Frame Relay internetworks, such as:


■ Hierarchical design

■ Regional topologies

■ Broadcast issues

■ Performance issues

The following are suggestions to provide a solid foundation for constructing scalable networks that can balance performance, fault tolerance, and cost. Again, I am only using Frame Relay as a template; it is not the only technology that you can use.

Hierarchical Design for Frame Relay Internetworks

As discussed earlier in this chapter, the arguments supporting hierarchical design for packet-switching networks apply to hierarchical design for Frame Relay networks. Remember the three factors that lead us to recommend the implementation of a hierarchical design:

How many DLCIs are going to be used within your environment, and how many will be mapped to your interfaces, depends on several factors that should be considered together:

■ … intensive will constrain the number of assignable DLCIs. For example, AppleTalk is a routed protocol characterized by high levels of broadcast overhead. Another example is Novell Internetwork Packet eXchange (IPX), which sends both routing and service updates, which results in higher broadcast bandwidth overhead. In contrast, IGRP is less broadcast intensive, because it will send routing updates less often (by default, every 90 seconds). You can modify the timer for IGRP, so it can become broadcast intensive if timers are modified to send updates more frequently.

■ … updates, are one of the most important considerations when determining the number of DLCIs that can be defined. The amount and type of broadcast traffic will be a factor in your ability to assign DLCIs within this general recommended range.

■ … expected to be high, so you should consider faster links and DLCIs with higher CIR and higher burstable limits. You should also consider implementing fewer DLCIs.

■ … use a greater amount of DLCIs per line, because a larger number of DLCIs will help to reduce the level of broadcasting.

To assist in your design considerations, here are two forms of hierarchical design that you can implement:

■ The hierarchical meshed Frame Relay internetwork

■ The hybrid meshed Frame Relay internetwork

These designs have their advantages and disadvantages, and are compared in the following sections.

Hierarchical Meshed Frame Relay Internetworks

Implementing a hierarchical mesh for Frame Relay environments can assist you in avoiding an excessively large number of DLCIs. This will allow for a more manageable, segmented environment. The hierarchical meshed environment features full meshing within the core PSDN and throughout the surrounding networks. Locating routers between network elements creates the hierarchy.

Figure 8.6 illustrates a simple hierarchical mesh. The internetwork shown illustrates a fully meshed backbone, with meshed regional internetworks and broadcast networks at the outer edges.


The advantage of the hierarchical mesh is that it scales well and helps to localize traffic. By placing routers between fully meshed portions of your network, you limit the number of DLCIs that need to be configured per physical interface, segment your internetwork, and make the network more manageable.

However, please remember these two issues when implementing a hierarchical mesh:

■ In a network with many routers with multiple DLCIs per interface, there will be excessive broadcast and packet replication, which can impair overall performance. Due to a high level of meshing throughout a network, excessive broadcasts and packet replication will be a significant resource threat. In the core, where throughput requirements are typically high, the prevention of bandwidth loss due to broadcast traffic and packet replication is particularly important.

[Figure 8.6 artwork: a Frame Relay backbone interconnecting meshed regions X, Y (sites Y1 and Y2), and Z through routers]


■ Increased costs associated with additional router interfaces. When compared with a fully meshed topology, additional routers will be necessary to split the meshed core from the meshed edge networks. However, by implementing these routers, you are creating much larger networks that scale ad infinitum when compared to a fully meshed internetwork.

Hybrid-Meshed Frame Relay Internetworks

The cost-effectiveness and strategic significance of the core network often force network designers to implement a hybrid-meshed network for their WAN internetworks. A hybrid-meshed network is composed of redundant, meshed lines in the WAN core, and partially (or fully) meshed Frame Relay PSDNs on the network edge. Routers separate the two networks. Figure 8.7 illustrates such a hybrid arrangement.

[Figure 8.7 artwork: a fully meshed backbone interconnected with point-to-point leased lines, joined through routers to a partially meshed Frame Relay regional network; access networks X1 and X2 also use a partially meshed topology]

Hybrid hierarchical mesh designs can provide higher performance in the core because they localize traffic and simplify the scaling of the network. Hybrid-meshed networks for Frame Relay can provide better traffic control in the core and allow the backbone to be composed of dedicated links, which results in greater stability.

Some of the disadvantages of hybrid hierarchical meshes include the high costs associated with leased lines, and increased broadcast and packet replication traffic.

Regional Topologies for Frame Relay Networks

There are generally three accepted designs that are relevant for a Frame Relay-based packet service regional network:

■ Star topology

■ Fully meshed topology

■ Partially meshed topology

Each of these topologies is discussed in the following sections. Generally, I have emphasized partially meshed topologies as those that are integrated into a hierarchical environment; star and fully meshed topologies are discussed more for their structural context.

Star Topologies

Star topology was addressed earlier in the section, "Possible Types of Topology Design." Star topologies are attractive because they minimize the number of DLCIs that are required, which will result in a lower-cost solution. However, some inherent issues are associated with the star topology because of bandwidth limitations. In an environment in which a backbone router is attached to a Frame Relay cloud at 768 Kbps, and the remote sites are attached at 256 Kbps, there will be some throttling of traffic coming off the core that is intended for remote sites.

A star topology does not offer the fault tolerance that is necessary for many networking situations. For example, if the link from the hub router to a specific cloud router is lost, all connectivity to that router is lost.

Fully Meshed Topologies

A fully meshed topology requires that every routing node connected to a Frame Relay network is logically linked by an assigned DLCI to every other node on the cloud. This topology is not easy to manage, support, or even implement for larger Frame Relay networks for several reasons:


■ Large, fully meshed Frame Relay networks require many DLCIs. There is a requirement for each logical link between nodes to have a DLCI. As shown in Figure 8.8, a fully connected topology requires the assignment of [x(x-1)]/2 DLCIs, where x is the number of routers that will be directly connected.

■ Broadcast and replication traffic will clog the network in large, meshed Frame Relay topologies. Routers tend to use Frame Relay as a broadcast medium. Every time a router sends a multicast frame (such as a routing update, or spanning tree update), the router will copy the frame to each DLCI for that Frame Relay interface.

This makes fully meshed topologies highly nonscalable for all but relatively small Frame Relay networks.
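The [x(x-1)]/2 growth is easy to check numerically. This short sketch (illustrative only, not from the book) shows how quickly the DLCI count outpaces the router count:

```python
def full_mesh_dlcis(routers: int) -> int:
    """Number of DLCIs (one per logical link) needed to fully mesh
    `routers` nodes on a Frame Relay cloud: x(x-1)/2."""
    return routers * (routers - 1) // 2

# Doubling the router count roughly quadruples the DLCIs required.
for x in (5, 10, 20, 50):
    print(x, full_mesh_dlcis(x))   # 5->10, 10->45, 20->190, 50->1225
```

The quadratic growth in the last column is the concrete reason full meshes stop being manageable beyond small clouds.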

Partially Meshed Topologies

By combining the star topology and the fully meshed topology, you will be able to implement a partially meshed topology. Partially meshed topologies are usually recommended for Frame Relay networks that span regional environments, as they can provide fault tolerance (through redundant star routers) and are less expensive to implement than a fully meshed environment. As a rule, you should implement at least minimum meshing to eliminate single point-of-failure issues. Virtual interfaces allow you to create networks using partially meshed Frame Relay designs, as shown in Figure 8.9.


To create this type of network, you need to assign multiple virtual interfaces (these are considered logical addresses) to individual physical interfaces. In this manner, DLCIs can be grouped or separated to maximize their functionality. As an example, a small, fully meshed cloud of Frame Relay networked routers can travel over a group of four DLCIs that are clustered on a single virtual interface, but a fifth DLCI on a separate virtual interface can provide the connectivity to a completely separate network. This all happens over a single physical interface that is connected to the Frame Relay cloud.

Broadcast Issues for Frame Relay Networks

Routers treat Frame Relay as a broadcast medium, which means that each time the router sends a multicast frame (such as a routing update, or a spanning tree update), the router must replicate that frame to each DLCI that is associated with the Frame Relay physical interface. This broadcast and replication traffic results in substantial overhead for the router and for the physical interface itself.

Consider an IP RIP environment with multiple DLCIs configured for a single physical serial interface. Every time a RIP update is sent, which occurs every 30 seconds by default, the router must replicate it and send it down the virtual interface associated with each DLCI.
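To get a feel for the replication overhead on such an interface, you can estimate the sustained update traffic; the update size and DLCI count below are assumptions for illustration, not figures from the book:

```python
def replication_bps(dlcis, update_bytes, interval_s):
    """Average bandwidth consumed on one physical interface by
    replicating a periodic routing update to every DLCI."""
    return dlcis * update_bytes * 8 / interval_s

# e.g., a 512-byte RIP update replicated to 30 DLCIs every 30 seconds
print(replication_bps(30, 512, 30))  # 4096.0 bits/sec, sustained
```

Note how the load scales linearly with the DLCI count: every additional virtual circuit on the interface adds another full copy of each update.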

[Figure 8.9 artwork: routers connected through a Frame Relay network with two redundant star routers]


There are several ways to reduce broadcast and replication traffic within your network. One of the most effective is to implement more efficient routing protocols, such as Enhanced IGRP, and to adjust timers on lower-speed Frame Relay services.

Creating a Broadcast Queue for an Interface

When it comes to designing and implementing a very large Frame Relay network, you may come across performance issues when you have many DLCIs that terminate in a single router, or that access a server that must replicate routing updates on each DLCI. These updates consume bandwidth and can cause noticeable latency in user traffic. They can also consume interface buffers, which will lead to packet loss, and therefore loss of user data and routing updates.

There is a way to avoid these problems, though. You can create a special broadcast queue on an interface. The broadcast queue can be managed independently of the normal interface queue, because it has its own buffers, and the size and service rates of traffic can be configured.

A broadcast queue has a maximum transmission rate (throughput) limit applied to the interface, which is measured in both bytes per second and packets per second. This queue is regulated to make certain that no more than the configured maximum bandwidth is provided. The broadcast queue also has priority when transmitting at a rate that is below the configured maximum, to guarantee the minimum bandwidth allocation for the interface. These transmission rate limits are implemented to avoid flooding the interface with broadcast and replication traffic.
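A minimal sketch of the dual limit just described (bytes per second and packets per second), using invented numbers rather than any vendor's defaults; a real implementation would refill these counters every second:

```python
class BroadcastQueue:
    """Toy rate limiter enforcing both a bytes/sec and a packets/sec
    ceiling over one-second windows (illustrative sketch only)."""
    def __init__(self, max_bytes_per_s, max_pkts_per_s):
        self.max_bytes = max_bytes_per_s
        self.max_pkts = max_pkts_per_s
        self.bytes_sent = 0
        self.pkts_sent = 0

    def try_send(self, size):
        # A frame goes out only if neither ceiling would be exceeded.
        if (self.bytes_sent + size <= self.max_bytes
                and self.pkts_sent + 1 <= self.max_pkts):
            self.bytes_sent += size
            self.pkts_sent += 1
            return True
        return False  # held in the queue's own buffers for the next window
```

With a 1000-byte/s, 3-packet/s ceiling, two 400-byte frames pass in a window but a third is held, since it would push the byte count to 1200.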

Committed Interface Rates

When you implement Frame Relay packet-switched networks, external service providers create a CIR, which is measured in bits per second. This is one of the main metrics. The CIR is the maximum permitted traffic level that a carrier will allow on a specific DLCI across its packet-switching environment. The CIR can be anything up to the capacity of the physical media of the connecting link.


One of the drawbacks associated with the CIR is that there are relatively few ways to automatically prevent traffic on a line from exceeding the maximum bandwidth. Although Frame Relay uses Forward Explicit Congestion Notification (FECN) and Backward Explicit Congestion Notification (BECN) to control traffic in the Frame Relay network, there isn't a standardized mapping between the Frame Relay (link) level and most upper-layer protocols at this time.

What really happens when the specified CIR is exceeded depends on the types of applications that are running on your network. For instance, TCP/IP has a backoff algorithm that treats dropped packets as an indication of congestion, and sending hosts might reduce output. You should consider the line speeds and what applications are going to be running on your network.

There is a way to buffer Frame Relay traffic so that it can handle instances when traffic exceeds the CIR for a given DLCI. These buffers spool excess traffic to reduce packet loss, especially if you are using a vigorous transport protocol such as Transmission Control Protocol (TCP). Even with these buffers in place, overflows can occur. Remember that routers have the ability to prioritize traffic; Frame Relay switches do not.

You can also specify which packets have low priority or are not time sensitive, so that if there is a need to drop traffic, they will be the first to be discarded when there is congestion. The method that allows a Frame Relay switch to identify these packets is within the packet itself and is known as the discard eligibility (DE) bit.

The Frame Relay network must be able to interpret the DE bit, though, so that an action is taken when the switch encounters a DE bit during heavy congestion. Sometimes, networks will take no action when the DE bit is set; at other times, networks use the DE bit to determine which packets to discard. Probably the best way to implement the DE bit is to set it to determine which packets should be dropped first, but also which packets have lower time sensitivity. By doing this, you can define DE lists that will identify the types of packets that are eligible to be discarded, and you can also specify DE groups to identify the DLCI and interface that is being affected.

You can specify DE lists by the protocol and/or the interface used. Characteristics such as fragmentation of the packet, a specific TCP or User Datagram Protocol (UDP) port, an access list number, or a packet size can also be factors that you should consider as DE.


To avoid packet loss, be sure that you understand what types of applications are going to be on your network and how dropped packets affect them. Try to implement unacknowledged application protocols (such as packetized voice and video) carefully; these protocols have a greater chance of buffer overflow. Dropped packets are tolerable in voice and video applications, whereas data applications such as ERP should not drop frames.

By defining separate virtual circuits for your different types of traffic, and specifying the queuing and an outbound traffic rate, you can give the customer guaranteed bandwidth for each of your applications. By being able to specify different traffic rates for different virtual circuits, you can perform virtual time division multiplexing (TDM).

You can also ease congestion and data loss in the network by throttling the outbound traffic that comes from high-speed lines in central offices and goes to low-speed lines in remote locations. This enhanced queuing will also help to prevent the congestion that causes data loss. This type of traffic shaping can be applied to both PVCs and SVCs.

Capacity Planning for Your Infrastructure

Have you ever tried to install a server, and there just isn't a spare rack, or even an available network port? This is usually an issue for service providers who did not plan properly for explosive growth. A ripple effect is that the network is probably straining under the number of clients that were added. In this section, we discuss some best practices for implementing a capacity plan.

Connection and Expansion

Capacity planning is an issue that you will need to address, along with the design and implementation procedure. If you have a general idea of where you stand for the number of servers and expected growth, you can use those as a baseline for the capacity of your network. The reason that I say "baseline" is that many things can happen in the business world over the course of six months that might cause your design to be underpowered and/or oversubscribed. Depending on the size of the current and expected network, there should be a padding area of approximately 10 to 15 percent for unexpected growth. You should reevaluate capacity after every new customer you install.
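As a back-of-the-envelope sketch of the 10 to 15 percent padding rule described above (the server counts here are hypothetical):

```python
def planned_capacity(current_load, expected_growth, padding=0.15):
    """Capacity to provision: projected load plus a padding area
    (roughly 10-15%) for unexpected growth."""
    projected = current_load + expected_growth
    return round(projected * (1 + padding))

# 80 servers today, 20 more expected: provision for about 115 with 15% padding.
print(planned_capacity(80, 20))  # 115
```

The units are deliberately generic; the same arithmetic applies whether you are padding rack slots, switch ports, or aggregate bandwidth.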

Best Practices

One of the best practices for planning is to map out where the different customer areas are located, and what the server count is going to be. Once these figures are determined, decide whether the servers need one data link or multiple connections.

With the number of connections decided, you now need to plan on subscription and maximum bandwidth provisions of the network. Sounds impressive, huh? It really isn't that difficult. What you are doing is calculating the aggregate average bandwidth of the network devices located on the segment. With these calculations in place, you can plan whether the segment is powerful enough to support the clients and resources on a given network.

So, how do you calculate the aggregate average bandwidth? The calculations are based on network topology, users' traffic patterns, and network connections. Ask questions such as, "What type of links should we use to connect the clients to the data center? What should the bandwidth requirement be for the backbone?"

You need to plan what type of link goes to each client, so that it has the proper bandwidth, yet does not allow for the monopolization of network resources, and/or completely shut down the network with oversubscription of a segment. Monopolization of a segment occurs when a user has equal bandwidth to a resource (such as an application server), and the application takes all of the available bandwidth and maintains the trunk. This will not allow other clients to access the resource.

Oversubscription of a segment occurs when multiple customers use all of the application resource's bandwidth, and therefore, other clients are unable to access the resource. While the two symptoms just described result in the same conclusion, they are different. The segment in a monopolized environment runs consistently, whereas the segment that is oversubscribed may shut itself down because it cannot pass traffic due to multiple requests flooding the buffers on the switches and routers.
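The aggregate-average-bandwidth check described above reduces to a simple comparison; the per-device figures here are invented for illustration:

```python
def segment_ok(device_avg_mbps, segment_capacity_mbps):
    """True if the aggregate average bandwidth of all devices on a
    segment fits within the segment's capacity."""
    return sum(device_avg_mbps) <= segment_capacity_mbps

# Four clients averaging 20 Mbps each on a 100 Mbps segment: fits.
print(segment_ok([20, 20, 20, 20], 100))  # True
# Add two more clients and the segment is oversubscribed.
print(segment_ok([20] * 6, 100))          # False
```

Real planning would also weigh peak bursts and traffic patterns, but even this crude sum flags the oversubscription case before customers do.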

With all of this information, you should get an idea of the type and capacity of network equipment you need to deploy at each location. Further concerns for planning capacity are based on factors such as which protocol you use, the addressing schemes, the geography, and how these fit the topology of the network.


Protocol Planning Concerns

The following section provides details on how to choose a protocol to best match your environment. With proper planning and implementation, any choice will work. By determining the physical layout of the network, you will be able to map the correct topology and form a logical addressing scheme that will grow as your network grows.

Routing Protocols

Choosing routing protocols and their configuration is an important part of every network design. You must be prepared to spend a significant amount of time implementing your policies for the network to provide optimal performance. Routing protocols are a fundamental component of networking and of creating a reachable network that can transfer data.

If designed properly, the network will build routing tables and maps that you can use to see adjacent routers and their status. There is also the ability to see network paths, congestion, and bandwidth of those links. This information helps in deciding on the optimal network paths.

The more complex routing protocols allow you to add secondary metrics. Some of those metrics include reliability, delay, load, and bandwidth. Using these metrics, the router can make routing decisions dynamically.

The basic difference in the various routing protocols lies in the sophistication of their decision-making capabilities and metric support. This is one of the main factors to consider when choosing a protocol to match the characteristics of your network.

There are two types of routing protocols, internal and external. Internal protocols are those that you would implement within your network infrastructure, and they are controlled completely within that domain. Conversely, external protocols work with external domains, such as the Internet or other ISP networks. These protocols are designed to protect your domain from external errors or misrepresentation. To read more about internal and external protocols, visit the Internet Engineering Task Force's site (www.ietf.org).

Interior Gateway Protocols

How do you decide on which interior protocol to use? The following section explains some of the inherent differences that are implemented in each of the interior protocols. Some are self-explanatory, and some are just more complex.


I hope that this section will clarify some of the terms and allow you to make reasonable, well-thought-out decisions.

Let's deal more with the interior protocols, since those are the ones that you may have used less in your ISP environment. With the exception of OSPF and IS-IS, the interior routing protocols described are distance-vector protocols, and they use distance and next-hop data to make their routing and forwarding decisions.

Some distance-vector protocols are very simplistic and don't scale well in larger environments. One example is RIP, which uses hops (the number of connections between it and its destination) as its determining factor. The largest hop count before it disregards the packet is 15, making it one of the least scalable protocols. Another drawback to RIP is that it does not take into account varying available bandwidth. If, for example, you have a packet that needs to get from network A to network D, RIP will take the path with two hops, rather than the path with three hops but higher speed (Figure 8.10).
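RIP's hop-count blind spot can be sketched in a few lines; the tiny topology below is hypothetical, loosely modeled on Figure 8.10:

```python
# Candidate paths from network A to network D:
# (hop count, speed of the slowest link in kbps).
paths = {
    "A-B-D (two 56k hops)":    (2, 56),
    "A-C-E-D (three T1 hops)": (3, 1544),
}

# RIP considers only hop count, so it prefers the slow two-hop path,
# ignoring the far greater bandwidth of the three-hop route.
rip_choice = min(paths, key=lambda p: paths[p][0])
print(rip_choice)  # A-B-D (two 56k hops)
```

A bandwidth-aware protocol comparing the second tuple element instead would pick the T1 route; this is exactly the gap that metrics like IGRP's composite metric were designed to close.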

If your network is fairly simple in terms of the topology and number of routers, a distance-vector protocol such as RIP or IGRP (discussed later in this chapter) could work fine. If you're running a multivendor network, RIP, RIPv2, IS-IS, and OSPF are common protocols across many vendors' router implementations.

This chapter lists what is available for use in semichronological order, as this is probably how you will see them listed in other reference manuals. Some of the strengths and limitations of each protocol are also listed.

[Figure 8.10 artwork: routers A through E connected by a mix of 56k and T1 links]


Routing Information Protocol

The Routing Information Protocol (RIP) was derived from Xerox Corporation's XNS protocol for IP networks. It supports IP and IPX networks.

Strengths:

■ Still viable in networks that use a constant internal subnet

■ Usable on most vendors’ equipment

■ Low cost (generally free from most vendors)

Weaknesses:

■ Scalability is minimal (15-hop maximum)

■ Path determined by hop count alone, which may not be the best path

■ Broadcasts full routing table frequently, wasting bandwidth

■ Cannot handle variable-length subnet masks (VLSM)

Interior Gateway Routing Protocol (Cisco Only)

The Interior Gateway Routing Protocol (IGRP) was created by Cisco for its own equipment. It supports IP and OSI networks.

Strengths:

■ Uses multiple metrics in decision making

■ Fast convergence

Weaknesses:

■ Runs only on Cisco equipment

■ Broadcasts routing table frequently, wasting bandwidth

Open Shortest Path First

The Open Shortest Path First (OSPF) protocol was developed by the IETF for IP networks. It supports IP networks.

Strengths:

■ Usable on most vendors’ equipment

■ Only broadcasts routing table when changes are made

Weaknesses:

■ Uses only bandwidth as a metric

■ Restricts some topologies

Integrated Intermediate System-to-Intermediate System

The Integrated Intermediate System-to-Intermediate System (IS-IS) protocol was developed by the IETF for OSI and IP networks. It supports IP and OSI networks.

Strengths:

■ Usable on most vendors’ equipment

■ Only broadcasts routing table when changes are made

■ Fast convergence

Weaknesses:

■ Uses only bandwidth as a metric

■ Restricts some topologies

RIPv2

RIPv2 was developed by the IETF for IP networks. It supports IP networks.

Strengths:

■ Added authentication and multicast ability to RIP

■ Usable on most vendors' equipment

Weaknesses:

■ Scalability is minimal (15-hop maximum)

■ Uses only hop count as a metric

Enhanced Interior Gateway Routing Protocol (Cisco Only)

The Enhanced Interior Gateway Routing Protocol (EIGRP) was designed by Cisco for multiprotocol Cisco networks. It supports IP, IPX, and AppleTalk networks.

Exterior Gateway Protocols

There is really only one external protocol to talk about: the Border Gateway Protocol (BGP). Entire books have been written on BGP, and several companies make their living working with BGP exclusively. These companies are highly paid, because you do not want to create incorrect BGP parameters and advertisements; such mistakes will definitely affect your network adversely.

I will not go into any detail here, because if you have BGP and it is up and running, you most assuredly do not want to mess with it.

Choosing the Right Interior Protocol

Using the previous references that explained the differences inherent in the interior protocols, it is time to address which routing protocol will be best for your network. There are several considerations to take into account: you want the protocol to add functionality, to be scalable, to adapt easily to future changes, to be manageable, and to be cost effective.

Scalability is an issue that will keep cropping up. You want your design to meet current and future growth. What happens if you decide to implement RIP, and then exceed the hop count within your own network? Preplanning will save you some massive restructuring headaches later in the development of the infrastructure.

Adaptability to new technologies is essential. With the world moving toward higher and higher bandwidth usage, you must take into account that what you design must be readily adaptable for future technologies (e.g., Voice over IP (VoIP) and video traffic) that will possibly be implemented. This is an area where adding value to your network will save your company money in the long run.


Another key concern is the manageability of the network. You are going to get into an area where there is more network traffic than bandwidth, and you are going to want to manage traffic and, to some extent, monitor the usage of the links on your network. With those criteria in mind, plan on protocols that will be more beneficial to you and the management of the network.

Finally, one of the toughest areas when choosing a routing protocol is cost effectiveness. Some of the protocols can only run on Cisco equipment because they are proprietary. While RIP will run on most equipment, and is very cost efficient, the network would have to be scaled down so that it could be properly implemented. A protocol such as EIGRP would be fantastic in a larger-scale environment, but it only works with Cisco equipment, so you need to keep that in mind if you have legacy equipment you are planning to phase out.

Route Selection

So why are we talking about route selection? It seems somewhat silly if there is only one route to the destination. The question is, what happens if that route should fail? What about using other routes to allow more traffic and less congestion? These reasons are why most networks are designed with multiple routes (redundancy and load balancing), so there is always an alternate connection in case of failure or to alleviate traffic issues. Routing protocols use metrics to select the best route, based on weighted decisions, from groups of existing routes.

Metrics are values that can be assigned and weighted to make decisions on routing paths within a network. Metrics are characteristics, or sets of characteristics, assigned to each link or route of a network. When traffic is passed along a link, the network equipment makes a choice on how to route the traffic by calculating the values of the metrics and assigning the traffic to the selected path.

Metrics are handled differently depending on the routing protocol. Most routing protocols can use multiple paths if they are of equal cost. Some protocols can use paths whose routes are not cost equivalents. By implementing multiple paths, you can use load balancing to improve bandwidth allocation.

With a multiple-path design, there are two widely used ways for packet distribution: per-packet load balancing and per-destination load balancing.

Per-packet load balancing uses all possible routes in proportion to the route metrics. What this means is that if all routes are equal cost, the router will cycle through them in a "round robin" selection scheme, where one packet is sent to each possible path that is available. Routers default to this method when fast switching is disabled.


Per-destination load balancing assigns routes based on the destination. Each destination is assigned an available route and maintains that route for future use. This way, traffic tends to arrive in the proper order. Routers default to this method when fast switching is enabled.
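The difference between the two modes can be sketched in a few lines of Python (the path names and packet fields are hypothetical; real routers do this in the forwarding plane):

```python
from itertools import cycle

paths = ["path_A", "path_B"]  # two equal-cost routes

def per_packet(packets, paths):
    """Round robin: each successive packet takes the next equal-cost path."""
    rr = cycle(paths)
    return [(pkt["dst"], next(rr)) for pkt in packets]

def per_destination(packets, paths):
    """Hash the destination so a given flow always rides the same path."""
    return [(pkt["dst"], paths[hash(pkt["dst"]) % len(paths)])
            for pkt in packets]

packets = [{"dst": "10.1.1.1"}, {"dst": "10.2.2.2"}, {"dst": "10.1.1.1"}]
print(per_packet(packets, paths))       # alternates path_A, path_B, path_A
print(per_destination(packets, paths))  # same destination -> same path
```

Per-packet spreads load most evenly but can reorder a flow's packets; per-destination keeps each flow on one path, which matches the ordering behavior described above.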

NOTE

TCP can accommodate out-of-order packets, and there will always be some on multipath networks. Excessive out-of-order packets can cause performance issues, so don't go overboard on redundant load balancing. Make sure to have at least one extra path between segments, and NO MORE, if redundancy is an issue.

Preplanning these rollouts will save aggravation and long nights of troubleshooting. Sometimes it is best to sit down, create a matrix with these considerations, and cross-reference what will be best for your network. Try to use a single interior routing protocol for your network. Sometimes this is not possible, so in those instances, plan for the best protocol mesh and implement as cleanly as possible.

The best advice that I offer is, "keep it simple." Complexity is not a good thing in network design. You want a stable, simple network. Now, on to the next decision-making process: addressing.

Addressing Considerations

The addressing scheme is dependent on several variables. Since most companies are now creating firewalls and implementing Network Address Translation (NAT) and/or Port Address Translation (PAT) to their public address space to conserve addresses, you have to figure out how to implement your addresses in a logical manner. Since NAT is so popular, private addressing is almost completely up to you. The three private address spaces are:

■ 10.0.0.0 through 10.255.255.255 (10.0.0.0/8)
■ 172.16.0.0 through 172.31.255.255 (172.16.0.0/12)
■ 192.168.0.0 through 192.168.255.255 (192.168.0.0/16)

These are private addresses, and cannot be routed on the Internet. Using subnet masks will allow you to configure network and host IDs to suit your needs.
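The three private (RFC 1918) ranges can be checked with Python's standard ipaddress module; a quick sketch:

```python
import ipaddress

# The three private (RFC 1918) blocks.
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr):
    """True if addr falls inside any of the three private blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in PRIVATE_BLOCKS)

print(is_rfc1918("192.168.10.5"))  # True
print(is_rfc1918("172.32.0.1"))    # False: 172.16.0.0/12 ends at 172.31.255.255
```

This kind of check is handy when validating NAT configurations, since none of these addresses should ever appear on the public side.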

Since addressing is important, and time consuming, you want to plan the allocation of addresses as neatly as possible, so that you have to readdress your network as little as possible. Plan how you want to slice up the company, whether it is by site, location, department, telecommuter, and so forth. If you are using a public address scheme, conservation is key. If you are using a private addressing scheme, you should focus on layout. Deployment should be in a logical and readily understood manner for public and private addressing. You want to keep similar users, as well as similar resources, together.

Set aside address space for current and potential (have I said those words before?) users. If you want to keep all of the servers in one address space, set aside enough for future growth. Generally, you should group users with similar needs in similar locations (with the advent of VLANs, this is not the major concern it was in the past). Remember that when you add subnets, you need to place routers between subnets that cannot speak to each other.

Since the network doesn't physically exist at this point in time, be sure to revise all of your ideas on paper. This will help with the actual implementation.

Another way to help segment traffic and design your addressing scheme is by topology.

Topology

The topology of a network is defined by sets of routers and the networks to which they connect. Routing protocols can also establish a logical topology, depending on implementation.

TCP/IP requires the creation of a hierarchical topology that establishes a Core layer, a Distribution layer, and an Access layer. For example, the OSPF and IS-IS protocols use a hierarchical design. The hierarchical topology takes precedence over any topology created through address segmentation. Therefore, if you choose a hierarchical routing protocol, you should also create the addressing topology to reflect the hierarchy. If you decide to use a flat routing protocol, the addressing will create the topology.

There are two regularly accepted ways to assign addresses in a hierarchical network. The easiest way is to assign a unique network address to all network areas (including the core). A more complex way is to assign ranges of addresses to each area. Areas should be comprised of contiguous addresses for networks and hosts. Areas should also include all the router interfaces on any of the included networks. By doing so, each area maintains its own topology database, because all interfaces run a separate copy of the basic routing algorithm.

Flat networks were originally campus networks that consisted of a single LAN to which new users were added. This LAN was a logical or physical cable into which the network devices connected. In the case of Ethernet, all the devices shared the half-duplex 10 Mbps available. The LAN was considered a collision domain, because all packets were visible to all devices on the LAN; therefore, they were free to collide, given the carrier sense multiple access with collision detection (CSMA/CD) scheme used by Ethernet.

A bridge was inserted when the collision domain of the LAN became congested. This allowed a segmentation of traffic into several collision domains, because a bridge is a store-and-forward packet switch. With this ability to cut collision traffic, network throughput increased. The drawback is that bridges pass all traffic, including flood broadcasts, multicasts, and unknown unicasts, to all segments.

All the bridged segments in the campus together form a single broadcast domain. The Spanning Tree Protocol (STP) was developed to prevent loops in the network and to route around failed connections.

Broadcast traffic sets a practical limit on the size of the broadcast domain. Managing and troubleshooting a bridged campus becomes harder as the number of users increases, because each new user adds to the broadcast domain. One misconfigured or malfunctioning workstation can disable an entire broadcast domain for an extended period of time, as it is generally hard to locate.

Configuring & Implementing…

The STP Broadcast Domain

There are some issues with the STP broadcast domain. It has a high time threshold for convergence, typically 40 to 50 seconds. It allows nonoptimized paths to exist. Redundant links carry no data because they are blocked. Broadcast storms affect the whole domain, and each network host must process all traffic. Security is limited within the domain, and troubleshooting problems is time consuming. New versions and options can reduce convergence time.


The 80/20 traffic rule has been steadily changing due to the rise of intranets and distributed applications. With new and existing applications moving toward a distributed application and storage model, accessed through Web retrieval, the traffic pattern is moving toward a 20/80 model, where only 20 percent of traffic is local to the workgroup LAN, and 80 percent of the traffic is destined for a nonlocal domain.

Application and Network Services

Knowing the various network applications that will be on the wire is essential to the planning and design of your new network. You need to take into account all of the different types of protocols, ports, and bandwidth that will be used on the wire when running different programs. For instance, standard file and print traffic has far less overhead than database and backup applications do. Each program behaves in its own way, and you need to anticipate that behavior in order to design the network properly.

Make a list of the applications that will be running on the wire, and remember, be thorough! Make sure that you take into account every possible traffic generator (whether it is an application or a network device) that you can think of, no matter how trivial, and add it to the pile of your bandwidth calculations. Make sure also to give each application its proper "weight" in your calculations.

For instance, it is highly doubtful that people will be printing 100 percent of the time on the network, so make sure that you apply the proper percentage bandwidth subscription and utilization in your calculations as to how much of the time you will see the traffic.

Although this seems like a tedious task, it is well worth the exercise. Remember, the customer will be extremely unhappy with poor performance, and in all likelihood, you, the provider, will be the one blamed if the bandwidth is choked on the first day of the new network rollout. Try to overestimate the amount of bandwidth required at the worst times of traffic by about 20 percent. That will always lead to a safe estimate of bandwidth.
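A back-of-the-envelope calculation along these lines might look like the following sketch (the applications, peak rates, and duty cycles are entirely hypothetical placeholders for your own survey data):

```python
# Per-application peak bandwidth (Mbps) and the fraction of time each
# is expected to be active on the wire -- all assumed figures.
apps = {
    "file_and_print": {"peak_mbps": 10.0, "duty_cycle": 0.30},
    "database":       {"peak_mbps": 25.0, "duty_cycle": 0.60},
    "backup":         {"peak_mbps": 40.0, "duty_cycle": 0.10},
}

def required_bandwidth(apps, headroom=0.20):
    """Sum the duty-cycle-weighted peaks, then add ~20 percent headroom."""
    expected = sum(a["peak_mbps"] * a["duty_cycle"] for a in apps.values())
    return expected * (1 + headroom)

print(round(required_bandwidth(apps), 1))  # 26.4
```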


Designing the Data Center Network

When designing the data center, you should build the network as a modular building block using multilayer switching. This way, you can segment the traffic so that it goes over specific bandwidths. For example, the Gigabit Ethernet trunk carries the server-to-server traffic, and the Fast EtherChannel trunk carries the backbone traffic, so all server-to-server traffic is kept off the backbone, which has performance and security advantages.

Data centers should have Virtual Router Redundancy Protocol (VRRP) redundancy between the multilayer switches, with access lists used to control access policy to the data center. With this setup, the core switches are separate from the distribution switches, thereby cutting down on broadcast domains and allowing for better throughput.

Note that when using the Hot Standby Router Protocol (HSRP) (Cisco specific) or the Virtual Router Redundancy Protocol (VRRP), which can also add redundancy, you should consider implementing Fast EtherChannel so you can scale bandwidth from Fast Ethernet, and from Gigabit Ethernet to Gigabit EtherChannel. You should also consider using adaptive load balancing for high availability.

Terminal Data Centers

One of the benefits of using the Windows 2000 software platform is the built-in capability to use Terminal Server on the network. Terminal Server is the Microsoft version of thin-client technology that allows workstations to receive a video snapshot of the desktop running on another machine and control that machine remotely.

The benefit of this is that bandwidth-heavy applications, such as database and other query-based applications, need not send their queries back and forth over the entire network on the "slow" WAN links. Rather, they have access to the "client" machine via high-speed switched links, and users on the network will be accessing the database applications as if they were right next to the database servers on the switch.

This technology has become increasingly popular over the last couple of years, as bandwidth has become a premium on WAN links. Microsoft has built this technology into Windows 2000 so that any Win2K server can activate this remote control capability.


Application-Aware Networking

ASPs who want to deploy their applications need to realize that the success of mission-critical applications, over both the internal LAN and the clientele WAN, is achieved by defining network policies that align the apportionment of network resources with business objectives. These policies are then enforced through the creation of a Quality of Service (QoS) method for their application and bandwidth considerations. Without these QoS controls in place, nonvital applications and services can quickly overwhelm network resources, taking available resources away from the applications that you are selling and causing poor customer satisfaction with your product.

When taking into account design considerations, remember that providing end-to-end application prioritization within a LAN will require specific features that depend on where the networking device is located.

In a service provider environment, Access layer devices should deliver the ability to detect and classify traffic based on application type, destination, or physical port.

Traffic Detection and Classification

Traffic detection uses Layer 4 IP UDP and TCP port numbers. A switch that is able to examine a packet and identify the UDP and TCP port numbers can use this information to identify the application that is using that packet. A switch can then compare a packet that is being used to run a nonmission-critical application or service, such as email, against a packet that is running an application considered mission critical, and classify each appropriately.
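A toy version of this Layer 4 classification (the port-to-application map and class names are invented for illustration; a real switch applies user-defined policy in hardware):

```python
# Hypothetical policy: map well-known destination ports to an
# application name and a priority class.
PORT_POLICY = {
    25:   ("smtp",   "standard"),  # email: nonmission-critical here
    1521: ("oracle", "premium"),   # database: mission-critical here
}

def classify(protocol, dst_port):
    """Look up the TCP/UDP destination port; unknown traffic is standard."""
    app, klass = PORT_POLICY.get(dst_port, ("unknown", "standard"))
    return {"protocol": protocol, "app": app, "class": klass}

print(classify("tcp", 25))    # email lands in the standard class
print(classify("tcp", 1521))  # database lands in the premium class
```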

Admission Control

Admission control is provided by a mechanism that can reject or remove applications based on user-defined policies. For example, a client can define a policy to temporarily stop the transmission of email packets, so that mission-critical applications can use the necessary resources.

Admission control can also be used to provide the following benefits:

■ During network congestion, you can specify that a specific application (e.g., video or multimedia traffic, which is usually considered bandwidth intensive) be dropped for the congestion period.

■ You can bar specific applications from entering the network, whichwould save network bandwidth

Traffic Classification

Traffic classification uses traffic detection to identify what applications are running. These applications can then be classified based on their user-defined policy. This classification provides a tag that identifies the priority of the packet. The classification information is carried within the precedence bits of the IP packet. The classification technique that uses the IP precedence fields is commonly referred to as Type of Service (ToS). If you are going to use either Cisco Systems Inter-Switch Link (ISL) or 802.1Q, this is usually referred to as Class of Service (CoS).

When using IP precedence (which operates at Layer 3) as the classification tag, you can deliver a QoS identifier that is independent of the transmission technology, allowing for the mapping of QoS across disparate media types such as Ethernet and ATM.
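The precedence tag lives in the three high-order bits of the IPv4 ToS byte, which a few bit operations can read and rewrite; a minimal sketch:

```python
def ip_precedence(tos_byte):
    """Extract the three high-order precedence bits of the ToS byte."""
    return (tos_byte >> 5) & 0b111

def set_precedence(tos_byte, precedence):
    """Rewrite only the precedence bits, keeping the low-order ToS bits."""
    return (tos_byte & 0b00011111) | ((precedence & 0b111) << 5)

tos = set_precedence(0x00, 5)  # precedence 5 ("critical" in RFC 791 terms)
print(hex(tos))                # 0xa0
print(ip_precedence(tos))      # 5
```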

ISL and 802.1Q use Layer 2 technologies that are more suitable to campus deployment across Ethernet networks that aren't totally IP based. Unlike Layer 2 classification (ISL and 802.1Q), Layer 3 classification (ToS) has the ability to provide consistent end-to-end classification for a packet as it negotiates LAN/WAN and intranet/Internet borders.

In the ideal environment, packet classification should only happen once. Both ISL and 802.1Q have a three-bit field that can provide up to eight priority levels, which helps in delivering consistent priority-level maps between technologies. Traffic classification provides the following benefits to an ASP:

■ It can provide a tag that will identify the priority assigned to each packet. This gives downstream devices the information they need to prioritize packet traffic.

■ It provides the mechanism that can reclassify packets based on policy. As an example, when a client's workstation applies a high priority to multimedia applications using traffic classification and detection, the application might be reclassified to a lower priority, depending on the implemented network policy.

In the Distribution and Core layers of a network, the devices should support the following features:

■ Congestion avoidance, which drops packets based on their classification or policy during periods of network congestion
■ Scheduling, which gives packets transmission preference, based on classification of the application or the policy of the network

Congestion Avoidance

By implementing Weighted Random Early Detection (WRED), you can add congestion avoidance to your infrastructure. Congestion avoidance can take advantage of TCP windowing to slow lower-priority application traffic before network congestion can affect higher-priority applications. WRED is based on the output queues of the network device.

When network traffic increases, the chance of network congestion increases, because the buffers begin to fill. Eventually, a buffer will overflow and packets will be dropped in an unrestrained fashion. TCP traffic tends to increase its rate of data transmission until packets are dropped or the maximum TCP transmit size is reached. When a data transmission mismatch happens (such as packets from a Gigabit Ethernet port destined for a Fast Ethernet port), the device's buffers will begin to fill.

Random early detection (RED) helps to prevent a buffer overflow by randomly dropping packets early, before the unrestrained drops that can adversely affect application performance. WRED enhances RED by specifying which packets are dropped when they reach a buffer's threshold.

There are generally three classes of packet classification: premium, standard, and DE. Mission-critical applications are normally mapped to the premium class, while all other traffic would be mapped to the standard threshold. If the buffer reaches 60 percent of full capacity, packets that belong in the standard class begin to be randomly dropped. As the buffer continues to fill, the drop rate for standard-class applications also increases. The premium service may be configured not to drop its packets until the buffer has reached 90 percent of its full capacity.
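Those thresholds translate into a simple drop-probability curve; the sketch below assumes a linear ramp from each class's threshold up to a full buffer (the exact curve and thresholds are configurable on real devices):

```python
# Assumed per-class thresholds matching the text: standard traffic starts
# dropping at 60% buffer fill, premium only above 90%.
THRESHOLDS = {"standard": 0.60, "premium": 0.90}

def drop_probability(klass, fill):
    """Linear ramp from the class threshold to certain drop at a full buffer."""
    start = THRESHOLDS[klass]
    if fill < start:
        return 0.0
    return min(1.0, (fill - start) / (1.0 - start))

print(round(drop_probability("standard", 0.50), 2))  # 0.0 -- below threshold
print(round(drop_probability("standard", 0.80), 2))  # 0.5 -- ramping up
print(round(drop_probability("premium", 0.80), 2))   # 0.0 -- premium still safe
```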


Since it is harder to reach the higher thresholds (unless the network becomes overly congested by a large amount of higher-priority traffic), higher-priority applications will continue to have end-to-end connectivity and performance. Consequently, traffic that is considered mission critical is unaffected by applications with a lower priority.

Scheduling

In a switch, there is a switching fabric with a finite amount of backplane capacity and time to transmit packets out a given interface. By implementing weighted round robin (WRR) in your network devices, you can provide preferential treatment to packets based on their priority. With this method, you could allow a given interface to use 70 percent of its transmit capacity for mission-critical or delay-sensitive applications, while the other 30 percent could be slotted for applications that are less critical or delay sensitive.
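A simplified WRR allocation for the 70/30 split described above (the queue names and the ten-slot cycle are assumptions; real switches interleave at line rate):

```python
def wrr_schedule(queues, weights, total_slots=10):
    """Allocate transmit slots per cycle in proportion to each queue's weight."""
    schedule = []
    for name, weight in zip(queues, weights):
        schedule += [name] * round(total_slots * weight)
    return schedule

sched = wrr_schedule(["critical", "best_effort"], [0.7, 0.3])
print(sched.count("critical"), sched.count("best_effort"))  # 7 3
```

This batches each queue's slots rather than interleaving them, which is enough to show the ratio; a production scheduler would spread the slots to bound per-queue latency.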

Scalability Considerations

We now need to discuss the issue of scalability. When you implement your design, there are going to be times when you experience explosive growth. (I believe that marketing people call it "hypergrowth.") So, how do you keep up with the Joneses? Scalability. Scale the bandwidth. Scale the equipment. In the following section, we discuss the scaling of bandwidth and some of the concerns that you will face.

Scaling Bandwidth

Bandwidth in the multilayer model can be scaled in many ways. Ethernet can be upgraded to Fast Ethernet, and Fast Ethernet can be combined into Fast EtherChannel or upgraded to Gigabit Ethernet or Gigabit EtherChannel. Access switches can be partitioned into multiple VLANs with multiple trunks, using ISL or 802.1Q to combine the different trunks. Fast EtherChannel provides more efficient utilization of bandwidth by multiplexing multiple VLANs over one trunk. To scale bandwidth within ATM backbones, you must add more OC-3 or OC-12 trunks, all the way up to OC-192.

Scaling Considerations

The great thing about designing the network with multilayer switching is that it's extremely scalable. Routing is able to scale because it is distributed, and therefore it is easy to add pieces that point to other pieces. The backbone performance scales when you add more connections and/or switches.

Since the network is compartmentalized, it is also scalable from a management and administration perspective. When issues crop up in the network, you can pin the problem down to one of the layers, and troubleshoot it from there.

When designing your network, avoid creating STP loops in the backbone. STP takes 40 to 50 seconds to converge and does not allow for load balancing across multiple paths. When using ATM for your backbone, use PNNI to handle load balancing.

You should always try to use high-level routing protocols such as OSPF, IS-IS, and Enhanced IGRP (Cisco only), which allow for path determination and load balancing. OSPF and IS-IS operating costs at the core rise linearly as the number of switches in the Distribution layer increases. What happens is that OSPF elects a designated router and a backup designated router, which connect with all of the other routers in the Distribution layer.

If multiple VLANs or ELANs are created in the backbone, a primary and a backup router are elected for each. Remember that with OSPF routing traffic, CPU overhead increases as the number of VLANs or ELANs on the backbone increases. Therefore, try to keep the number of VLANs or ELANs on a trunk to a minimum.

The following are some suggestions for best practices:

■ Remember that OSPF needs summarization to allow it to scale. It is a common practice in large campuses to make each building an OSPF area, and make the routers area border routers (ABRs).

■ Try to assign all of the subnets for a single customer from a contiguous block of addresses. This will allow you to use a single summary advertisement on the ABRs. By doing so, you will reduce the amount of routing information traffic and increase the stability of the routing table.

■ Enhanced IGRP can be configured in roughly the same way; however, there are some exceptions to the rule. Protocols such as Novell Service Advertisement Protocol (SAP), Novell Routing Information Protocol (RIP), and AppleTalk Routing Table Maintenance Protocol (RTMP) have an overhead that increases exponentially as you add connections. These protocols should be the exception rather than the rule in an ASP environment.


Multimedia Services

According to a study by the Telecommunications Industry Association, the multimedia application market (such as video on demand, VoIP, etc.) is expected to reach $16 billion in 2001. Some of the needs that are pushing the growth of these services and applications are distance learning, the virtual workplace, audio and videoconferencing, streaming media applications (video on demand), data storage (SAN/NAS), and financial applications (such as ERP and CRM).

Many of the new multimedia applications that customers want require IP multicast for proper operation. Any network communication that needs to transmit information to multiple clients can benefit from the efficiency of multicast technologies.

As an example, applications that involve one-to-many or many-to-many communications include:

■ Database replication

■ Dissemination of news feeds and stock quotes

■ Software downloads

■ Video and audio broadcast

■ Videoconferencing and collaboration

■ Web site caching

Getting reasonable full-motion, full-screen viewing requires approximately 1.5 Mbps of bandwidth. In a unicast environment, a separate video stream is sent to the network for each client who wants to view the information (this uses X × 1.5 Mbps of link bandwidth, where X = number of viewers). With a 100 Mbps Fast Ethernet interface on the server, 60 or 70 video streams will saturate the interface.

Even with a Gigabit Ethernet interface on a high-end server, the practical limit would be from 250 to 300 of the 1.5 Mbps video streams. As such, the server's interface can be a bottleneck, limiting the number of unicast video streams per server.

As you can see, this replicated unicast transmission can consume a lot of bandwidth and other network resources, and is therefore a limitation. With clients separated from the server by two Distribution and/or Core switch hops and two Access layer switch hops, a single multi-unicast stream would consume 300 Mbps
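The arithmetic behind these interface limits is straightforward; the sketch below reproduces it (the 45 percent "practical efficiency" factor for the gigabit case is an assumption chosen to match the 250-to-300-stream figure above):

```python
STREAM_MBPS = 1.5  # approximate full-motion, full-screen video stream

def max_unicast_streams(link_mbps, efficiency=1.0):
    """Upper bound on simultaneous unicast viewers one interface can feed."""
    return int(link_mbps * efficiency / STREAM_MBPS)

print(max_unicast_streams(100))                    # 66: Fast Ethernet saturates
print(max_unicast_streams(1000, efficiency=0.45))  # 300: assumed practical limit
```

Multicast sidesteps this entirely: the server sends one 1.5 Mbps stream regardless of the number of viewers.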
