Mission-Critical Network Planning (Part 8)


Magnetic tape has long been a staple for data backup. Advancements in tape technology have given rise to tape formats that can store up to 100 GB of data in a single cartridge. However, advancements in disk technology have resulted in disk alternatives that enable faster and less tedious backup and recovery operations.

Storage vault services have been in use for some time. They involve using a service provider to pick up data backups from a customer’s site and transport them to a secure facility on a regular basis. Electronic vaulting goes a step further by mirroring or backing up data over a network to a remote facility. Firms can also elect to outsource the management of their entire data operations to SSPs.

A storage network is dedicated to connecting storage devices to one another. A SAN is a network that is separate from a primary production LAN. A SAN provides connectivity among storage devices and servers. SANs are not intrinsically fault tolerant and require careful network planning and design. But they do improve scalability and availability by making it easier for applications to share data. They also offer a cost-effective way to implement data mirroring and backup mechanisms. Fibre Channel technology has found widespread use in SANs. Fibre Channel transfers files in large blocks without a lot of overhead. Point-to-point, FC-AL, and FC-SW are common Fibre Channel SAN architectures.

Rather than implementing entirely new networking technologies to accommodate storage, alternative strategies are being devised that can leverage more conventional legacy technologies. A NAS is a storage device that attaches directly to a LAN, instead of attaching to a host server. This enables users to access data from all types of platforms over an existing network. NAS can also be used in conjunction with SANs. There are also several approaches to further leverage legacy IP networks for block storage transport. FCIP and iFCP transfer Fibre Channel commands over IP, while iSCSI carries SCSI commands directly over IP. iSCSI, in particular, is of great interest, as it leverages the installed base of SCSI storage systems.

SANs enable isolated storage elements to be connected and managed. Many vendors offer automated software tools and systems to assist in this management. HSMs are software and hardware systems that manage data across many devices. They compress and move files from operational disk drives to slower, less expensive media for longer-term storage. Virtualization is a technique that abstracts the data view from that of the physical storage device. This can be most appealing to organizations having large, complex IT environments because it offers a way to centrally manage data and make better use of storage resources. Data redundancy for mission-critical needs can be more easily managed across different storage platforms.

Data restoration is just one component of an overall recovery operation. The most effective approach to recovering primary data images is to make snapshot copies of the data at specified time intervals and restore from the most recent snapshot. Recovery software and systems can automate the recovery process. Regardless of the approach, there are several basic steps to data recovery that should be followed; these were reviewed earlier in this chapter.
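The snapshot-and-restore approach above can be sketched in a few lines. This is an illustrative model, not from the source; the `Snapshot` record and timestamps are invented:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Snapshot:
    taken_at: datetime
    image: str  # identifier of the stored data image

def pick_restore_point(snapshots, failure_time):
    """Return the most recent snapshot taken before the failure."""
    candidates = [s for s in snapshots if s.taken_at <= failure_time]
    if not candidates:
        raise ValueError("no snapshot predates the failure")
    return max(candidates, key=lambda s: s.taken_at)

# Snapshots taken at fixed intervals; a failure at 09:30 restores from 06:00.
snaps = [
    Snapshot(datetime(2002, 4, 1, 0, 0), "img-00"),
    Snapshot(datetime(2002, 4, 1, 6, 0), "img-06"),
    Snapshot(datetime(2002, 4, 1, 12, 0), "img-12"),
]
best = pick_restore_point(snaps, datetime(2002, 4, 1, 9, 30))
print(best.image)  # img-06
```

The interval between snapshots bounds how much work is lost on restore, which is why the text stresses restoring from the most recent copy.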

Networked storage and storage management applications provide the ability to efficiently archive and restore information, while improving scalability, redundancy, and diversity. Not only do such capabilities aid survivability, they improve accountability and auditing during an era when organizations are being held liable for information and, depending upon the industry, must comply with regulatory requirements [55].

References

[1] Kirkpatrick, H. L., “A Business Perspective: Continuous Availability of VSAM Application Data,” Enterprise Systems Journal, October 1999, pp. 57–61.
[2] Golick, J., “Distributed Data Replication,” Network Magazine, December 1999.
[7] Gordon, C., “High Noon—Backup and Recovery: What Works, What Doesn’t and Why,” Enterprise Systems Journal, September 2000, pp. 42, 46–48.
[8] Flesher, T., “Special Challenges Over Extended Distance,” Disaster Recovery Journal.
[14] Chevere, M., “Dow Chemical Implements Highly Available Solution for SAP Environment,” Disaster Recovery Journal, Spring 2001, pp. 30–34.
[15] Marks, H., “The Hows and Whens of Tape Backups,” Network Computing, March 5, 2001, pp. 68, 74–76.
[16] Baltazar, H., “Extreme Backup,” eWeek, Vol. 19, No. 32, April 15, 2002, pp. 39–41, 43, 45.
[17] Wilson, B., “Think Gambling Only Happens in Casinos? E-Businesses Without Business Continuity Processes Take High Risk Chances Daily,” Disaster Recovery Journal, Winter.
[22] Pannhausen, G., “A Package Deal: Performance Packages Deliver Prime Tape Library Performance,” Enterprise Systems Journal, November 1999, pp. 52–55.
[23] Rigney, S., “Tape Storage: Doomsday Devices,” ZD Tech Brief, Spring 1999, pp. S9–S12.
[24] “Vaulting Provides Disaster Relief,” Communications News, July 2001, pp. 48–49.
[25] Murtaugh, J., “Electronic Vaulting Service Improves Recovery Economically,” Disaster Recovery Journal, Winter 2001, pp. 48–50.
[26] Edwards, M., “Storage Utilities Make Case for Pay-as-You-Go Service,” Communications News, August 2000, pp. 110–111.
[27] Connor, D., “How to Take Data Storage Traffic off the Network,” Network World, April.
[34] Hubbard, D., “The Wide Area E-SAN: The Ultimate Business Continuity Insurance,” Enterprise Systems Journal, November 1999, pp. 42–43.
[35] Gilmer, B., “Fibre Channel Storage,” Broadcast Engineering, June 2001, pp. 38–42.
[36] Gilmer, B., “Fibre Channel Storage,” Broadcast Engineering, August 2000, pp. 34–38.
[37] Fetters, D., “Siren Call of Online Commerce Makes SANs Appealing,” Storage Area Networks, CMP Media Supplement, May/June 1992, pp. 4SS–22SS.
[38] Clark, T., “Evolving IP Storage Switches,” Lightwave, April 2002, pp. 56–63.
[39] Helland, A., “SONET Provides High Performance SAN Extension,” Network World, January 7, 2002.
[40] Jacobs, A., “Vendors Rev InfiniBand Engine,” Network World, March 4, 2002.
[41] McIntyre, S., “Demystifying SANs and NAS,” Enterprise Systems Journal, July 2000.
[52] Connor, D., “Every Byte into the Pool,” Network World, March 11, 2002, pp. 60–64.
[53] Zaffos, S., and P. Sargeant, “Designing to Restore from Disk,” Gartner Research, November 14, 2001.
[54] Cozzens, D. A., “New SAN Architecture Benefit Business Continuity: Re-Harvesting Storage Subsystems Investment for Business Continuity,” Disaster Recovery Journal, Winter 2001, pp. 72–76.
[55] Piscitello, D., “The Potential of IP Storage,” www.corecom.com.

Continuity Facilities

All network infrastructures must be housed in some kind of physical facility. A mission-critical network facility is one that guarantees continued operation, regardless of prevailing conditions. As part of the physical network topology, the facility’s design will often be driven by the firm’s business, technology, and application architecture. The business architecture will drive the level of continuity, business functions, and processes that must be supported by the facility. The technology architecture will define the physical requirements of the facility. The application architecture will often drive location and placement of service operations and connectivity, as well as the physical facility requirements.

11.1 Enterprise Layout

Any individual facility is a single point of failure. By reducing a firm’s dependence on a single location and distributing the organization’s facilities across several locations, the likelihood of an adverse event affecting the entire organization is reduced. Geographic diversity in facilities buys time to react when an adverse event occurs, such as a physical disaster. Locations unaffected by the same disaster can provide mutual backup and fully or partially pick up service, reducing the recovery time.

Recognizing this, many firms explore geographic diversity as a protection mechanism. Decentralization avoids concentration of assets and information processing systems in a single location. It also implies greater reliance on communication networking. However, a decentralized architecture is more complex and usually more costly to maintain. For this reason, many firms have shifted to centralizing their data center operations to at least two data centers, or one data center and a recovery site. In the end, a compromise solution is usually the best approach.

The level of organizational centricity will often drive logical and physical network architecture. The number and location of branch offices, for instance, will dictate the type of wide area network (WAN) connectivity with headquarters. Quite often, firms locate their information centers, Internet access, and key operations personnel at a headquarters facility, placing greater availability requirements on the WAN architecture. Internet access is often centralized at the corporate facility such that all branch office traffic must travel over the WAN for access. On the other hand, firms with extranets that rely heavily on Internet access will likely use a more distributed Internet access approach for greater availability.


11.1.1 Network Layout

Convergence has driven the consolidation of voice, data, and video infrastructure. It has also driven the need for this infrastructure to be scalable so that it can be cost-effectively planned and managed over the long term. The prerequisite for this is an understanding of a network’s logical topology. A layered hierarchical architecture makes planning, design, and management much easier. The layers that comprise an enterprise network architecture usually consist of the following:

• Access layer. This is where user host systems connect to a network. These include elements such as hubs, switches, routers, and patch panels found in wiring or telecom closets (TCs). A failure of an access layer element usually affects local users. Depending on the availability requirements, these devices may need to be protected for survivability. These devices also see much of the network administrative activity in the form of moves, adds, or changes and are prone to human mishaps.

• Distribution layer. This layer aggregates access layer traffic and provides connectivity to the core layer. WAN, campus, and virtual local area network (VLAN) traffic originating in the access layer is distributed among access devices or through a network backbone. At this layer, one will see high-end switching and routing systems, as well as security devices such as firewalls. Because these devices carry greater volumes of traffic, they likely require greater levels of protection within a facility than access devices. They will usually require redundancy in terms of multiple components in multiple locations. Redundant and diverse cable routing between access and distribution switches is necessary. Access-layer devices are often grouped together with their connecting distribution-layer devices in switch blocks. An access-layer switch will often connect to a pair of distribution-layer switches, forming a switch block.

• Core layer. Switch blocks connect to one another through the core layer. Core layer devices are typically high-density switches that process traffic at very high line speeds. Administrative and security processing of the traffic is less likely to occur at this layer, so as not to affect traffic flow. These devices normally require physical protection because of the amount of traffic they carry and their high embedded capital cost.
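The switch-block redundancy described above (each access-layer switch homed to a pair of distribution-layer switches) can be sanity-checked mechanically. A minimal sketch; the switch names and uplink map are invented for illustration:

```python
# Hypothetical uplink map: access switch -> distribution switches it connects to.
uplinks = {
    "acc-1": ["dist-a", "dist-b"],
    "acc-2": ["dist-a", "dist-b"],
    "acc-3": ["dist-a"],          # single-homed: a survivability gap
}

def single_homed(uplinks):
    """Return access switches whose failure domain is one distribution switch."""
    return sorted(sw for sw, dists in uplinks.items() if len(set(dists)) < 2)

print(single_homed(uplinks))  # ['acc-3']
```

Running such a check against an inventory database flags access devices that would be cut off by a single distribution-layer failure.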

11.1.2 Facility Location

Network layout affects facility location. Locations that carry enterprisewide traffic should be dependable locations. As geographically redundant and diverse routing is necessary for mission-critical networks, locations that are less vulnerable to disasters and that have proximity to telecom services and infrastructure are desirable. The location of branch offices and their proximity to carrier point of presence (POP) sites influences the type of network access services and architecture.

Peripheral to a decentralized facility strategy is the concept of ruralization. This entails organizations locating their critical network facilities in separate locations, but in areas that are local to a main headquarters. Typically, these are rural areas within the same metropolitan region as the headquarters. The location is far enough away from headquarters, yet still accessible by ground transportation. This is done because a disaster is less likely to simultaneously damage both locations. If one location is damaged, staff can relocate to the other location. However, this strategy may not necessarily protect against sweeping regional disasters such as hurricanes or earthquakes.

11.1.3 Facility Layout

Compartmentalization of a networking facility is fundamental for keeping it operational in light of a catastrophic event [1]. Many facilities are organized into a central equipment room that houses technical equipment. Sometimes, these rooms are subdivided into areas based on their network functionality. For instance, networking, data center, and Web-hosting systems are located in separate areas. Because these areas are the heart of a mission-critical facility, they should be reinforced so that they can function independently from the rest of the building.

Size and location within a facility are key factors. As many of these rooms are not within public view, they are often not spaciously designed. Such rooms should have adequate spacing between rows of racks so that personnel can safely move equipment without damage. Although information technology (IT) equipment is getting smaller, greater numbers of units are being implemented, resulting in higher density environments. Swing space should be allocated to allow new technology equipment to be installed or staged prior to the removal of older equipment. A facility should be prewired for growth and provide enough empty rack space for expansion.

11.2 Cable Plant

A mission-critical network’s integrity starts with the cable plant: the cables, connectors, and devices used to tie systems together. Cable plant is often taken for granted and viewed as a less important element of an overall network operation. However, many network problems that are often unsolvable at the systems level are a result of lurking cabling oversights. Quite often, cabling problems are still the most difficult to troubleshoot and solve, and sometimes are irreparable. For these reasons, a well-developed structured cabling plan is mandatory for a mission-critical facility [2]. Mission-critical cabling plant should be designed with the following features:

• Survivability and self-healing capabilities, typically in the form of redundancy, diversity, and zoning, such that a problem in a cable segment will minimally impact network operation;

• Adequate support for the transmission and performance characteristics of the prevailing networking and host system technologies;

• Easy identification, testing, troubleshooting, and repair of cable problems;

• Easy and cost-effective moves, adds, and changes without service disruption;

• Scalable long-term growth.


11.2.1 Cabling Practices

Network cabling throughout a facility should connect through centralized locations, so that different configurations can be created through cross connection, or patching. But inside an equipment room, a decentralized approach is required. Physically separating logically clustered servers and connecting them with diversely routed cabling avoids problems that could affect a rack or section of racks. Centralization of cable distribution can increase risks, as it becomes a single point of failure. Redundant routing of cables involves placing multiple cable runs between locations. Quite often, two redundant host systems will operate in parallel for reliability. However, more often than not, the redundant devices connect to wires or fibers in the same cable, defeating the purpose of the redundancy altogether. Placing extra cables in diverse routes between locations provides redundancy and accommodates future growth or additional backup systems that may be required in an emergency.

This notion revisits the concept of logical versus physical link reliability, discussed earlier in this book. If a cable is damaged and a redundant cable path is available, then the systems on each end must either continue to send traffic on the same logical path over the redundant cable path or redirect traffic on a new logical path on the redundant cable path. In WANs, synchronous optical network (SONET) ring networks have inherent disaster avoidance so that the physical path is switched to a second path, keeping the logical path intact. In a local area network (LAN), a redundant physical path can be used between two devices only after reconvergence of the spanning tree. Newer LAN switching devices have capabilities to logically tie together identical physical links so that they can load share as well as provide redundancy.

11.2.1.1 Carrier Entry

A service entrance is the location where a service provider’s cable enters a building or property [3]. Because a service provider is only required to provide one service entrance, a redundant second entrance is usually at the expense of the property owner or tenant. Thus, a redundant service entrance must be used carefully in order to fully leverage its protective features and costs. The following are some practices to follow:

• A redundant service entrance should be accompanied by diverse cable paths outside and within the property. A cable path from the property to the carrier’s central office (CO) or POP should be completely diverse from the other cable path, never converging at any point. Furthermore, the second path should connect to a CO or POP that is different from that which serves the other path. (See Chapter 7 for different access path configurations.)

• The redundant cable should be in a separate cable sheath or conduit, instead of sharing the same sheath or conduit as the primary cable.

• Circuits, channels, or traffic should be logically split across both physical cables if possible. In a ring network access topology, as in the case of SONET, one logical path will enter through one service entrance and exit through the other.


• Having the secondary access connect to an additional carrier can protect against situations where one access provider might fail while another might survive, as in the case of a disaster.

There should be at least one main cross connect (MC) [4]. The MC should be in close proximity to or collocated with the predominant equipment room. For redundancy or security purposes, selected cable can be passed through the MC to specific locations.

11.2.1.2 Multiple Tenant Unit

Multiple tenant units (MTUs) are typical of high-rise buildings. Figure 11.1 illustrates a typical MTU cable architecture. It is clear that many potential single failure points exist. Vertical backbone cables, referred to as riser cables, are run from the MC to a TC on each floor in a star topology. For very tall MTUs, intermediate cross connects (ICs) are inserted between the TC and MC for better manageability. Sometimes, they are situated within a TC. Although risers are predominantly fiber cables, many installations include copper twisted pair and coaxial cable. All stations on each floor connect to a horizontal cross connect (HC) located in the TC [5].

Figure 11.1 Typical MTU cable architecture.

A mission-critical cable plant (Figure 11.2) should be designed to ensure that a cable cut or equipment damage in any location does not cause loss of connectivity to any TC. Although installing two TCs on a floor, each connecting to a separate riser, can add redundancy, it is most likely that each user station on a floor will be served from one TC [6]. If one of the TCs is damaged, a portion of the floor can remain operational. The TCs on each floor can be tied together through a cable that connects between the HCs [7]. Adding tie cables between TCs produces a mesh-like topology, enabling paths around a damaged IC or TC to be created. If ICs are present, connecting multiple HCs together on a floor can provide a redundant path between an HC and IC.

Figure 11.2 Mission-critical MTU cable architecture.

For further survivability, redundant riser backbones can extend from the two MCs located on different floors to each IC. The two MCs can also connect to each other for added redundancy. From each MC, a separate riser cable runs to each IC. This creates a ring-like architecture and can thus leverage networking technologies such as SONET or resilient packet ring (RPR). The redundant riser is routed via a separate conduit or shaft for diversity [8]. Although not explicitly shown, a primary and secondary cable run between each cross connect is essential in the event a cable is damaged. Although they aid cable management and reduce riser cable runs and attenuation, particularly in very tall buildings, ICs are highly vulnerable points of failure because of their collective nature.
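The value of the mesh-like topology above can be checked mechanically: no single cross connect should be a cut point whose loss disconnects the remaining ones. A minimal sketch, assuming a simplified stand-in topology (two MCs and two ICs, fully tied) rather than the exact figure:

```python
# Simplified cable topology: nodes are cross connects, edges are cable runs.
edges = {("MC1", "MC2"), ("MC1", "IC1"), ("MC1", "IC2"),
         ("MC2", "IC1"), ("MC2", "IC2"), ("IC1", "IC2")}

def connected(nodes, edges):
    """Reachability over an undirected edge set, restricted to `nodes`."""
    nodes = set(nodes)
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        for a, b in edges:
            if a == n and b in nodes:
                stack.append(b)
            elif b == n and a in nodes:
                stack.append(a)
    return seen == nodes

def cut_vertices(edges):
    """Nodes whose removal disconnects the surviving nodes."""
    nodes = {n for e in edges for n in e}
    bad = []
    for v in nodes:
        surviving = {e for e in edges if v not in e}
        if not connected(nodes - {v}, surviving):
            bad.append(v)
    return sorted(bad)

print(cut_vertices(edges))  # [] -> no single cross connect is a point of failure
```

A star topology run through one MC would instead report the MC as a cut vertex, which is exactly the exposure the tie cables are meant to remove.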

The architecture in Figure 11.2 is highly redundant. Although it may seem like expensive overkill, it is the price for embedding survivability within a mission-critical facility. Variants that are more economical can be used, depending on the level of survivability desired. In MTUs, organizations typically do not operate in all the tenant units. Thus, an organization should include terms in leasing agreements with the landlord or provider to assure that they can implement, verify, and audit the architecture redundancy that is prescribed.

11.2.1.3 Campus

Many of the cabling principles that apply to MTUs also apply to campus networks. Although campus networks can be viewed as flattened versions of the MTU architecture, they must also reliably connect systems across several buildings, as well as systems within each building. A simple campus configuration (Figure 11.3) is typically achieved by running backbone cable from a central MC residing in one building to ICs in every building on campus, in a star-like fashion. Although all endpoints are routed to the MC, logical point-to-point, star, and ring networks can still be supported [9]. For ring topologies, connections are brought to the central MC and routed back to their ICs. The star topology also enables patching a new building into the network from the central MC [10].

Figure 11.3 Simple campus network design.

The campus network design can be enhanced with features to add survivability, similar to the MTU case (Figure 11.4). Redundant MCs and tie links between ICs add further redundancy in the event a building, cross connect, or cable is damaged. The result is a mesh-like network, which adds versatility for creating point-to-point, multipoint, star, or ring topologies within the campus. Ring networks can be made by routing cable from each IC to the MC (as in the simple campus case), as well as across tie links between ICs.

In campus environments, outdoor cable can be routed underground or on aerial plant, such as poles. Underground cable often resides in conduit that is laid in the ground. The conduit must be sealed and pressurized to keep water out of the pipe and must be sloped so that any water will drain. A major misconception is that underground cable is more reliable than aerial plant. Surprisingly, field experience has shown that underground cable is more vulnerable to damage than aerial plant. Underground cable must often be exhumed for splicing and distribution.

As an option to cable, wireless technologies such as microwave or free-space optics can also be used to establish redundant links between buildings. Because buildings within a campus are usually in close proximity, line of sight is much easier to achieve. For microwave, the closeness also enables higher bandwidth links to be established.

Figure 11.4 Mission-critical campus network design.

11.2.2 Copper Cable Plant

The ubiquity of copper cable plant still makes it favored for LAN implementations. However, as enterprise networks migrate toward 100-Mbps and 1,000-Mbps speeds, the ability of the embedded copper base to adequately support higher LAN speeds remains somewhat questionable. Category 5 systems installed prior to 1995 may encounter problems in supporting Gigabit Ethernet technology. The new Category 5E and Category 6 specifications are designed to supersede Category 5. Category 5E is recommended for any new installations and can support Gigabit Ethernet. Category 6 systems allow greater headroom for cabling deficiencies than Category 5E. Such deficiencies appear in various forms. Many cabling problems stem from installation in proximity to high-voltage electrical systems and lines. Bad or improperly crimped cables, loose ends, and improperly matched or bad patch cords can create cyclic redundancy check (CRC) errors, making autonegotiation and duplex issues more pronounced. Using voice frequency cables for data transmission can produce signal distortion and degradation.

Noise is unwanted disturbance in network cable and is a common cause of signal degradation. Noise can originate from a variety of sources [11]. There are two fundamental types of noise. Background noise is found when there are no devices actively transmitting over a cable. It is attributed to many sources, such as thermal noise, semiconductor noise, and even the effects of local electrical equipment. Signal noise is noise power resulting from a transmission signal over the cable. The signal-to-noise ratio is a measure often used to characterize the level of signal power with respect to background noise [12]. A high value of this ratio indicates the degree to which audible voice or error-free data will transmit over a cable. The ratio is specified in decibels (dB).
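As a small worked example of the ratio (not from the source; the power figures are invented), signal-to-noise ratio in decibels is ten times the base-10 logarithm of the signal-to-noise power ratio:

```python
import math

def snr_db(signal_power_mw, noise_power_mw):
    """Signal-to-noise ratio in decibels from powers in milliwatts."""
    return 10 * math.log10(signal_power_mw / noise_power_mw)

# A signal 1,000 times stronger than the background noise floor:
print(round(snr_db(1.0, 0.001)))  # 30
```

Every 10-dB increase corresponds to a tenfold improvement in the signal-to-noise power ratio, which is why even modest dB differences matter for error-free transmission.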

The industry has arrived at several key test parameters that are often used to certify a copper cable installation and to troubleshoot for some of these problems [13, 14]. The parameters include the following:

• Attenuation, sometimes referred to as loss, is a problem characteristic in both copper and fiber cable. As illustrated in Figure 11.5, the amplitude of a signal on a medium decreases with distance, based on the impedance of the medium. Attenuation is greater at high frequencies than at lower frequencies. Excessive attenuation can make a signal incomprehensible to a receiving device. Attenuation is created by long cable runs, harsh bends, or poor terminations. For these reasons, an Ethernet cable run, for example, must not exceed 90 m. In fiber-optic cabling, regenerators must often be used to regenerate the light signal over long distances.

• Return loss, which appears as noise or echo, is typically encountered in older cabling plant and is attributed to defective connections and patch panels. It is a measurement of impedance consistency along a wire. Variations in impedance can cause signals to reflect back towards their origin, creating interference, as illustrated in Figure 11.5. Return loss is the summation of all signal energy that is reflected back to the point of signal origin.

• Cross talk is a phenomenon that comes in several forms (Figure 11.6) [15, 16]:

1. Near-end cross talk (NEXT) is a condition where electrical signal from one wire leaks onto another wire and induces a signal. NEXT is usually measured at the signal’s originating end, where the signal is usually strongest. Contributors to NEXT include defective jacks, plugs, and crossed or crushed wires. NEXT is insufficient for fully characterizing performance of cables subject to protocols that simultaneously use multiple twisted pairs of copper wire in a cable, such as a Category 5 cable.

2. Power-sum NEXT (PSNEXT) is a calculated value used to convey the effect of cross talk for protocols that simultaneously use several wire pairs. PSNEXT is used to quantify the relationships between one pair and the other pairs by using pair-by-pair NEXT values. This produces a PSNEXT value for each wire pair.

3. Far-end cross talk (FEXT) is a measure of conditions where an electrical signal on one wire pair induces a signal on another pair at the far end of a link. Because FEXT is a direct function of the length of a link, due to the properties of attenuation, it is insufficient to fully characterize a link.

Figure 11.6 Types of cross talk.

4. Equal-level far-end cross talk (ELFEXT) is a calculated value to compensate for the aforementioned length factor by subtracting the attenuation of the disturbing pair from the FEXT value.

5. Power-sum ELFEXT (PSELFEXT) is a calculated value designed to more fully describe FEXT characteristics for cables subject to newer protocols that simultaneously use multiple twisted pairs of copper wire in a cable, in the same fashion as PSNEXT.

• Delay skew occurs when simultaneous signals sent across several wires are received at different times. Propagation delay is the elapsed time a signal takes to traverse a link. When multiple wires are used to transmit a signal, as in the case of Ethernet, variation in the nominal propagation velocity (NPV) of the different wires can make it difficult for receivers to synchronize all of the signals. Delay skew is defined as the range between the highest and lowest propagation delay among a set of wire pairs. Differences in wire insulation among the pairs usually contribute to delay skew.

• Attenuation-to-cross talk ratio (ACR) is a calculated value that combines the merits of the preceding parameters. It represents the frequency at which a signal is drowned out by the noise created by NEXT. Because attenuation increases with frequency while NEXT increases irregularly with frequency, ACR is the frequency beyond which the attenuated signal cannot exceed NEXT.

• Power-sum ACR (PSACR) is a calculated value used to convey ACR for cables using multiple wire pairs. Both PSACR and ACR are considered the parameters that best characterize a cable’s performance.
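The power-sum and ratio parameters above are simple dB arithmetic. A hedged sketch, using the usual convention that pair-to-pair NEXT values (expressed in dB of loss) combine by summing the coupled noise powers; the example figures are invented, not measured values:

```python
import math

def psnext_db(pair_next_db):
    """Power-sum NEXT: combine pair-by-pair NEXT losses (dB) by summing
    the coupled noise powers of all disturbing pairs."""
    total_power = sum(10 ** (-n / 10) for n in pair_next_db)
    return -10 * math.log10(total_power)

def acr_db(next_db, attenuation_db):
    """Attenuation-to-cross talk ratio at one frequency: signal headroom
    over NEXT noise. Positive means the signal still exceeds the cross talk."""
    return next_db - attenuation_db

# NEXT from the three disturbing pairs onto one pair, in dB:
ps = psnext_db([40.0, 42.0, 45.0])
print(round(ps, 1))        # 37.1 -> worse (lower) than any single-pair value
print(acr_db(40.0, 18.0))  # 22.0 dB of headroom
```

This makes the text's point concrete: the power sum is always worse than the best single-pair NEXT figure, which is why multi-pair protocols are certified against PSNEXT and PSACR rather than NEXT alone.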

Outside of poor installation practices, many cable problems are also due to prevailing usage and environmental factors. For example, moisture or chemical exposure can cause corrosion, vibration or temperature extremes can lead to wear, and simple dust can obstruct connector contacts. Higher speed networking technologies such as Gigabit Ethernet are more susceptible to these types of factors. They use weaker high-speed signals that are accompanied by significant noise [17]. For this reason, they employ electronic components that are more sensitive and have tighter tolerances. Thus, usage and environmental factors that may not have been problematic for a 10BaseT network may prove otherwise for a Gigabit Ethernet network.

Upon installation, cabling is passively tested for these parameters using hardware-based tools; the most common is the time-domain reflectometer (TDR). Used for both copper and fiber plant (an optical TDR is used for fiber), this device tests for continuity, cable length, and the location of attenuation. Although passive testing is effective, actively testing cable plant with live network traffic can be more definitive in verifying a cable's performance under normal operating conditions.

11.2.3 Fiber-Optic Cable Plant

Optical-fiber cabling, in one form or another, has become a critical component in mission-critical networking. Fiber has found a home in long-haul WAN implementations and metropolitan area networks (MANs). As Gigabit Ethernet has been standardized for use with fiber-optic cabling, the use of fiber in conjunction with Ethernet LAN systems has grown more attractive [18]. Yet, many organizations cannot easily justify fiber-optic LAN infrastructure. In addition to cost, the use of fiber-based systems still requires interoperability with legacy copper infrastructure for redundancy if failures occur.

Fiber has advantages over copper in terms of performance. When compared to copper, multimode fiber can support 50 times the bandwidth of Category 5 cable. More importantly, fiber is immune to noise and electromagnetic radiation, making it impervious to many of the aforementioned factors that affect copper. New industry-standard fiber V-45 connectors have the look and feel of RJ-45 connectors, yielding the same port density as copper-based switches.

Optical TDR (OTDR) devices are used to troubleshoot fiber link problems and certify new installations [19, 20]. Fiber failures do occur due to mishaps in cabling installation and maintenance, much in the same way as copper. Poor polishing, dirty or broken connectors, poor labeling, shattered end faces, poor splicing, and excessive bends are frequently encountered problems. Fiber end face inspection is often difficult to perform, as access to fiber terminating in a patch panel can be tricky and proper inspection requires use of a video microscope. Systems have become available to automatically monitor physical fiber connections using sensors attached to fiber ports.

There are many commonalities between characterizing the performance of fiber and copper cables. Optical noise can result from amplified spontaneous emissions (ASEs) from Erbium-doped fiber amplifiers (EDFAs). Similar to copper cables, an optical signal-to-noise ratio (OSNR) is used to convey the readability of a received signal. The industry has arrived at several key test parameters that are often used to certify fiber cable installation and to troubleshoot for some of these problems. The parameters include the following [21, 22]:

• Fiber loss conveys the attenuation of a light signal over the length of a fiber cable [23]. Simply put, it is how much light is lost as it travels through a fiber. It can be a sign of poorly designed or defective cable. Cable defects can stem from undue cable stress during manufacturing or installation [24].

• Point loss is a stepwise increase in loss at some location along a fiber cable. It is usually caused by a twisted, crushed, or pinched fiber.

• Reflectance is the amount of light energy that is reflected back toward the signal source. It usually occurs at mechanical splices and connectors. It is also attributed to air gaps and misalignment between fibers. Reflectance is manifested as a high BER in digital signals. It is analogous to return loss in copper cables.

• Bandwidth of a fiber is a function of the properties of the fiber's glass core. It conveys the frequency range of light signals, in megahertz, that can be supported by the fiber.

• Drift is variation in light signal power and wavelength resulting from fluctuations in temperature, reflectance, and laser chirp. It conveys the ability of a signal to remain within its acceptable boundary limits.

• Cross talk can be manifested in a light signal as the unwanted energy from another signal. In fiber cables, it is often difficult to observe and compute.

• Four-wave mixing is the extension of cross talk beyond a given optical signal pair to other signals. The energy can appear in wavelengths in use by other channels.

• Chromatic dispersion (CD) results from different wavelengths of light traveling at different speeds within a fiber, resulting in a broadening of a transmission pulse (Figure 11.7) [25]. This makes it difficult for a receiving device to distinguish pulses.

• Polarization-mode dispersion (PMD) is caused by modes of light traveling at different speeds. It is typically found in long fiber spans and results from a variety of causes, including fiber defects, improper bending, stress-producing supports, and temperature effects (Figure 11.7).
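As a rough illustration of the chromatic dispersion effect just described, pulse spreading scales with the dispersion coefficient, the span length, and the source's spectral width. The values below are common textbook figures chosen for illustration, not measurements from the text.

```python
def cd_broadening_ps(dispersion_ps_per_nm_km, length_km, linewidth_nm):
    """Chromatic dispersion pulse broadening:
    coefficient x span length x source spectral width."""
    return dispersion_ps_per_nm_km * length_km * linewidth_nm

# Example: single-mode fiber near 1,550 nm (~17 ps/(nm*km) is a common figure),
# an 80-km span, and a 0.1-nm source linewidth:
print(cd_broadening_ps(17.0, 80.0, 0.1))  # ~136 ps of pulse spreading
```

At 10 Gbit/s a bit slot is only 100 ps wide, which is why spreading of this magnitude makes adjacent pulses hard to distinguish.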

For fiber, it is important that parameters are measured at those wavelengths at which the fiber will operate, specified in nanometers (nm). They are typically 850 and 1,300 nm for multimode fiber and 1,310 and 1,550 nm for single-mode fiber. Fiber should be tested after installation because many of these parameters can change. It is also important to understand the degree of error in any test measures. Error levels can range from 2% to 25%, depending on how the tests are conducted.
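A practical use of the loss and test-error figures above is a simple link loss budget. The sketch below is illustrative only; the per-kilometer loss, splice, and connector values are assumptions, not numbers from the text.

```python
def link_loss_db(length_km, fiber_loss_db_per_km,
                 splices, splice_loss_db, connectors, connector_loss_db):
    """Expected end-to-end loss: fiber attenuation plus splice and connector losses."""
    return (length_km * fiber_loss_db_per_km
            + splices * splice_loss_db
            + connectors * connector_loss_db)

def passes_budget(measured_db, budget_db, error_fraction=0.25):
    """Accept a measurement only if it still fits the budget at worst-case
    test error (the text cites error levels of 2% to 25%)."""
    return measured_db * (1 + error_fraction) <= budget_db

# Example: a 2-km multimode run tested at 850 nm, assuming 3.0 dB/km fiber loss,
# two splices at 0.1 dB each, and two connectors at 0.5 dB each:
expected = link_loss_db(2.0, 3.0, 2, 0.1, 2, 0.5)
print(expected)                                 # 7.2 dB expected
print(passes_budget(expected, budget_db=10.0))  # True even at +25% error
```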

11.3 Power Plant

On average, a server system typically encounters over 100 power disturbances a month. Many disturbances go unnoticed until a problem appears in the platform. Studies have shown that about one-third of all data loss is caused by power problems. Power surges and outages, next to storms and floods, are the most frequent cause of service interruptions, particularly telecom service.

Although most outages are less than 5 minutes in duration, they can occur quite frequently and go undetected. Power disturbances usually do not instantaneously damage a system. Many traditional platform problems can be ultimately traced to power line disturbances that occur over a long term, shortening their operational life.



11.3.1 Power Irregularities

Amps (A) are a measure of current flow, and volts (V) are a measure of the amount of work required to move electrons between two points. Power is the amount of current used at a voltage level and hence is the product of volts and current. Power is measured in watts (W) or volt-amps (VA), where 1W = 1.4 VA. Electricity travels in waves that can vary in size, frequency, and shape. Frequency (or cycles), measured in hertz (Hz), is a critical parameter of a power specification. If incorrectly specified, power at the wrong cycle can burn out equipment. Ideally, alternating current (ac) power should be delivered as a smooth sine wave of steady current. But this rarely happens, due to a host of real-world circumstances.
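The electrical relationships in this paragraph reduce to two lines of arithmetic. The sketch below simply applies the chapter's 1W = 1.4 VA rule of thumb; the sample server draw is hypothetical.

```python
def power_watts(volts, amps):
    """Power is the product of voltage and current."""
    return volts * amps

def watts_to_va(watts, va_per_watt=1.4):
    """Apparent power from real power, using the 1 W = 1.4 VA rule of thumb."""
    return watts * va_per_watt

# Example: a server drawing 2.5 A at 120 V ac:
w = power_watts(120, 2.5)
print(w)               # 300.0 W
print(watts_to_va(w))  # 420.0 VA for sizing conditioning gear
```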

As electronic equipment grows more sensitive to power irregularities, greater damage and disruption can ultimately result from dirty power. A brief discontinuity in power can take systems off-line and cause reboots and reinitialization of convergence processes in networking gear. Some devices will have some holdup time, which is the time for an interruption of the input waveform to affect device operation, typically measured in milliseconds [26]. Highly sensitive hardware components such as chips can still be damaged, depending on the voltage. Software-controlled machinery and adjustable speed-driven devices such as disk and tape drives can also be affected. Because power travels down cables and through conductors, low-voltage cabling can be affected as well. The following are some of the more frequently encountered power disturbances, illustrated in Figure 11.8:

• Blackouts are probably the most noticeable and debilitating of power interruptions. Although they comprise about 10% of power problems, the expected trend is for more frequent blackouts. Inadequate supplies of power to support the growth of data centers and the increased occurrence of sweltering temperatures have contributed to a rising power shortage.

• Sags or dips are momentary drops in voltage. Sags of longer duration are often referred to as brownouts. They often result from storms and utility transformer overloads. Sags are a major contributor to IT equipment problems.

Figure 11.8 Power irregularities: transient/spike, surge, line noise, brownout, frequency variation, and harmonic distortion.


• Transients are high-frequency impulses that appear in the sinusoidal waveform. They are typically caused by a variety of factors. Spikes, sometimes referred to as voltage transients, are momentary sharp increases in voltage, typically resulting from a variety of sources, including utility companies. Typically, spikes that exceed nominal voltage by about 10% can damage unprotected equipment. Quite often, electrical storms create spikes when lightning strikes power or telecom lines.

• Surges or swells are voltage increases that last longer than one cycle. They are typically caused by a sudden large load arising from abrupt equipment startup and switchovers.

• Line noise can be created by radio frequency interference (RFI) and EMI. It can come from machinery and office equipment in the vicinity. It is usually manifested in the form of random fluctuations in voltage.

• Total harmonic distortion (THD) is a phenomenon usually associated with power plant devices and IT systems. It is the tendency of a device to distort the sinusoidal power waveform, the worst case being a square wave. In a typical electrical system, 60 Hz is the dominant or first harmonic order. Devices can attenuate the waveform and reflect waste current back into an electrical distribution system [27]. To correct for this, power systems will often use filters to trap third-order harmonics [28].

• Frequency variations are changes in frequency of more than 3 Hz arising from power sources with unstable frequency. They are usually caused by cutting over power from a utility to a generator.

Many of these disturbances are also the result of mishaps and human errors by local utilities or within the general locale. Electrical repair, maintenance, and construction accidents and errors by external parties, particularly those residing in the same building, can often impact utility service or power infrastructure.
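Of the disturbances above, harmonic distortion is the easiest to quantify: THD is conventionally the RMS of the harmonic magnitudes relative to the fundamental. The sample voltages below are invented for illustration.

```python
import math

def thd_percent(fundamental, harmonics):
    """Total harmonic distortion: RMS of harmonic magnitudes
    relative to the fundamental, expressed in percent."""
    return 100 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Example: a 120-V, 60-Hz fundamental with third- and fifth-order content:
print(round(thd_percent(120.0, [9.0, 4.0]), 1))  # 8.2 (% distortion)
```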

11.3.2 Power Supply

Commercial ac power availability is typically in the range of 99.9% to 99.98%. If each power disruption requires systems to reboot and services to reinitialize, additional availability can be lost. Because this implies downtimes on the order of 2 to 8 hours per year, clearly a high-uptime mission-critical facility will require power protection capabilities.
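The downtime implied by these availability figures is easy to verify; a quick check, assuming an 8,760-hour year:

```python
def annual_downtime_hours(availability_pct):
    """Hours of downtime per year implied by an availability percentage."""
    return (1 - availability_pct / 100) * 8760  # hours in a non-leap year

print(round(annual_downtime_hours(99.9), 2))   # 8.76
print(round(annual_downtime_hours(99.98), 2))  # 1.75
```

This matches the 2-to-8-hour range cited in the text.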

A mission-critical facility should be supplied with sufficient power to support current and future operating requirements. An inventory of all equipment is required, along with how much power each is expected to consume. All mission-critical equipment should be identified. The manufacturer-specified amps and volts for each piece of equipment should be noted. Manufacturers will often cite maximum load ratings rather than normal operating power. These ratings often reflect the maximum power required to turn up a device versus steady-state operation, and will often exceed normal operating power by an average of 30%. Because ratings convey maximum load, a power plant based strictly on these values could result in costly overdesign. Using current or anticipated operating loads for each system might be a more reasonable approach, although a staggered startup of all systems may be required to avoid power overdraw. The total wattage should be calculated and increased by at least 20% to 30% as a factor of safety, or in accordance with growth plans.
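The sizing procedure just described can be sketched as a short calculation. The inventory, the 30% nameplate derating, and the 25% safety factor below are illustrative assumptions following the rough guidance in the text.

```python
def operating_load_w(nameplate_w, derate=0.30):
    """Estimate steady-state draw from a manufacturer's maximum (nameplate)
    rating, which typically exceeds operating power by about 30%."""
    return nameplate_w * (1 - derate)

def plant_capacity_w(nameplate_loads_w, safety_margin=0.25):
    """Sum estimated operating loads, then add a growth/safety factor."""
    total = sum(operating_load_w(w) for w in nameplate_loads_w.values())
    return total * (1 + safety_margin)

# Hypothetical mission-critical inventory (nameplate watts):
inventory = {"core switch": 800, "server rack": 4000, "storage array": 1200}
print(plant_capacity_w(inventory))  # ~5,250 W of required plant capacity
```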

Current power density for data centers can range from 40 to 200W per square foot [29]. Data centers are typically designed for at least 100W per square foot, with a maximum usage of 80% [30]. Quite often, maximums are not realized for a variety of reasons. Heat load, in particular, generated by such high-density configurations can be extremely difficult to manage. Power supply and heat-dissipation technology are gradually being challenged by the trend toward compaction of high-tech equipment.

Commercial servers in North America use 120V ac, with amperage ratings varying from server to server. Whereas today's high-density installations can see on the order of 40 to 80 systems per rack, this is expected to climb to 200 to 300 systems per rack in 5 years. Although advances in chip making will reduce power consumption per system, unprecedented power densities on the order of 3,000 to 5,000W per rack are still envisioned.
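Power density and rack counts can be related with an equally simple check; the room size and per-rack load below are hypothetical.

```python
def racks_supported(area_sqft, design_w_per_sqft=100, max_usage=0.80,
                    rack_load_w=4000):
    """Racks a room can power at a given design density and usage ceiling,
    using the text's 100 W/sq ft design figure and 80% maximum usage."""
    usable_w = area_sqft * design_w_per_sqft * max_usage
    return int(usable_w // rack_load_w)

# Example: a hypothetical 2,000-sq-ft room hosting 4-kW racks:
print(racks_supported(2000))  # 40 racks before the power budget is exhausted
```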

Telecom networking systems use –48V of direct current (dc). Rectifiers are used, which are devices that convert ac voltage to dc. The negative polarity reduces corrosion of underground cables, and the low voltage is more easily stored and not governed by national electric code (NEC) requirements [31]. Because of the trend in convergence, networking facilities are seeing more widespread use of ac-powered systems. In telecom facilities, ac power is delivered via inverters, which are devices that convert dc to ac, drawing power from the –48V dc supply.

Mission-critical facilities obtain their power from a combination of two primary sources. Public power is electrical power obtained from a public utility. Local power is electrical power produced internally, often by diesel generators and other types of power-generation systems. Also referred to as distributed generation or cogeneration, local power is often used as a backup or supplement to public power. At the time of this writing, the public power industry is undergoing several trends to improve the cost, production, reliability, and quality of power: deregulation, divergence of the generation and distribution businesses, open-market purchasing, and greater use of natural gas.

Redundancy is key to improved power availability. Power should be obtained from two separate and independent utilities or power grids. The power should arrive from multiple transmission systems along geographically diverse paths through independent utility substations. Two utility power feeds entering a facility via diverse paths protect against local mishaps, such as collapsed power poles.

Electrical service should be delivered to a facility at the highest possible voltage. Excess current should be available to enable voltage to reach steady state after an increase in draw following a startup or short-circuit recovery operation.

11.3.3 Power Quality

A mission-critical facility should have a constant supply of clean power, free of any irregularity or impurity. The quality of power entering a facility should fall within American National Standards Institute (ANSI) specifications. Power-conditioning devices may be required as part of the facility's electrical plant to ensure a consistent supply of pure power. In general, conditioning keeps the ac waveform smooth and


References

[1] Kast, P., "The Broadcast Station as a Mission-Critical Facility," Broadcast Engineering, November 2001, pp. 72–75.
[2] Fortin, W., "A Better Way to Cable Buildings," Communications News, September 2000, p. 38.
[3] Matthews, J. K., "Maintaining Fiber-Optic Cable at the Demarc," Cabling Installation & Maintenance, April 1999, pp. 25–30.
[4] Cerny, R. A., "Working with Fiber," Broadcast Engineering, December 1999, pp. 93–96.
[5] Clark, E., "Cable Management Shake-Up," Network Magazine, July 1999, pp. 50–55.
[6] Jensen, R., "Centralized Cabling Helps Fiber Come Out of the Closet," Lightwave, October 1999, pp. 103–106.
[7] Lupinacci, J. A., "Designing Disaster Avoidance into Your Plan," Cabling Installation & Maintenance, April 1999, pp. 17–22.
[8] King, D., "Network Design and Installation Considerations," Lightwave, May 1998, pp. 74–79.
[9] Minichiello, A. T., "Essentials of Campus Telecommunications Design," Cabling Installation & Maintenance, June 1998, pp. 45–52.
[10] Badder, A., "Building Backbone and Horizontal Cabling—A Quick Overview," Cabling Business Magazine, October 2002, pp. 60–64.
[11] Jimenez, A. C., and L. Swanson, "Live Testing Prevents Critical Failures," Cabling Installation & Maintenance, June 2000, pp. 51–52.
[12] D'Antonio, R., "Testing the POTS Portion of ADSL," Communications News, September 2000, pp. 108–110.
[13] Steinke, S., "Troubleshooting Category 6 Cabling," Network Magazine, August 2002, pp. 46–49.
[14] Lecklider, T., "Networks Thrive on Attention," Communications News, March 2000, pp. 44–49.
[15] Cook, J. W., et al., "The Noise and Cross-talk Environment for ADSL and VDSL Systems," IEEE Communications Magazine, May 1999, pp. 73–78.
[16] Goralski, W., "xDSL Loop Qualification and Testing," IEEE Communications Magazine, May 1999, pp. 79–83.
[17] Russell, T. R., "The Hidden Cost of Higher-Speed Networks," Communications News, September 1999, pp. 44–45.
[18] Jensen, R., "Extending Fiber Optic Cabling to the Work Area and Desktop," Lightwave, April 1999, pp. 55–60.
[19] Jensen, R., and S. Goldstein, "New Parameters for Testing Optical Fiber in Premises Networks," Cabling Installation & Maintenance, July 2000, pp. 65–70.
[20] Teague, C., "Speed of Light," Communications News, June 1998, pp. 44–46.
