Optical Networks: A Practical Perspective - Part 68

Photonic Packet Switching

Figure 12.17 Example of a 2 x 2 routing node using a feedback delay line architecture.

If multiple packets destined for a common output port arrive simultaneously, one of them is switched to the output port while the others are switched to the recirculating buffers.

In the context of optical switches, the buffering is implemented using feedback delay lines. In the feedback architecture of Figure 12.17, the delay lines connect the outputs of the switch to its inputs. With two delay lines and two inputs from outside, the switch is internally a 4 x 4 switch. Again, if two packets contend for a single output, one of them can be stored in a delay line. If the delay line has a length equal to one slot, the stored packet has an opportunity to be routed to its desired output in the next slot. If there is contention again, it, or the contending packet, can be stored for another slot in a delay line.
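To make the recirculation mechanism concrete, here is a minimal slot-by-slot sketch of the 2 x 2 feedback node of Figure 12.17. The arrival probability, the random tie-breaking, and the drop-when-full behavior are assumptions for illustration, not details taken from the book.

```python
import random

# Slotted 2 x 2 routing node with two one-slot recirculating delay lines.
ARRIVAL_PROB = 0.5     # assumed probability that an external input carries a packet
NUM_SLOTS = 100_000

random.seed(1)
delay_lines = [None, None]     # packets currently stored in the two 1-slot delay lines
delivered = dropped = 0

for _ in range(NUM_SLOTS):
    # Candidates this slot: recirculated packets plus new external arrivals.
    # A packet is represented simply by its desired output port (0 or 1).
    candidates = [p for p in delay_lines if p is not None]
    delay_lines = [None, None]
    for _ in range(2):                     # two external inputs
        if random.random() < ARRIVAL_PROB:
            candidates.append(random.randrange(2))

    for port in (0, 1):
        contenders = [p for p in candidates if p == port]
        if not contenders:
            continue
        delivered += 1                     # one packet wins the output port
        for packet in contenders[1:]:      # losers go to the recirculating buffers
            free = [i for i, slot in enumerate(delay_lines) if slot is None]
            if free:
                delay_lines[free[0]] = packet
            else:
                dropped += 1               # no free delay line: the packet is lost

print(f"delivered={delivered}, dropped={dropped}, "
      f"loss={dropped / (delivered + dropped):.4f}")
```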

Recirculation buffering is more effective than output buffering at resolving contentions because the buffers in this case are shared among all the outputs, as opposed to having a separate buffer per output. The trade-off is that larger switch sizes are needed in this case due to the additional switch ports needed for connecting the recirculating buffers. For example, in [HK88], it is shown that a 16 x 16 switch requires a total of 112 recirculation buffers, or about 7 buffers per output, to achieve a packet loss probability of 10^-6 at an offered load of 0.8. In contrast, we saw earlier that the output-buffered switch requires about 25 buffers per output, or a total of 400 buffers, to achieve the same packet loss probability.
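The benefit of sharing can be seen even with a crude slotted Monte Carlo model. The sketch below is not the analysis of [HK88]; it assumes Bernoulli arrivals at load 0.8 with uniformly chosen outputs, and it compares a per-output buffer limit with a shared pool of the same total size. Note that at the book's operating point (25 buffers per output, or 112 shared) the loss probability is near 10^-6 and far too rare to estimate quickly, so deliberately small buffers are used here to make the qualitative difference visible.

```python
import random

N, LOAD, SLOTS = 16, 0.8, 100_000
random.seed(7)

def simulate(per_output_buf=None, shared_buf=None):
    """Return the packet loss probability for one buffering discipline."""
    queues = [0] * N              # packets waiting for each output
    arrivals = losses = 0
    for _ in range(SLOTS):
        for _ in range(N):        # each input offers a packet with probability LOAD
            if random.random() < LOAD:
                arrivals += 1
                out = random.randrange(N)
                if per_output_buf is not None:
                    if queues[out] < per_output_buf:
                        queues[out] += 1
                    else:
                        losses += 1
                else:             # shared pool: only the total occupancy is limited
                    if sum(queues) < shared_buf:
                        queues[out] += 1
                    else:
                        losses += 1
        for out in range(N):      # each output transmits one packet per slot
            if queues[out] > 0:
                queues[out] -= 1
    return losses / arrivals

print("per-output, 4 buffers each (64 total):", simulate(per_output_buf=4))
print("shared pool of 64 buffers            :", simulate(shared_buf=64))
```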

In the feed-forward architecture considered earlier, a packet has a fixed number of opportunities to reach its desired output. For example, in the routing node shown in Figure 12.14, the packet has at most three opportunities to be routed to its correct destination: in its arriving slot and the next two immediate slots. On the other hand, in the feedback architecture, it appears that a packet can be stored indefinitely. This is not true in practice since photonic switches have several decibels of loss. The loss can be made up using amplifiers, but then we have to account for the cascaded amplifier noise as packets are routed through the delay line multiple times. The switch crosstalk also accumulates each time the packet passes through the switch.
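A tiny power-budget calculation makes the limit concrete. All numbers below (per-pass loss, launch power, receiver sensitivity) are illustrative assumptions, not values from the book.

```python
# How many recirculations can a packet survive without amplification?
loss_db_per_pass = 8.0            # assumed switch + delay-line loss per recirculation
launch_power_dbm = 0.0            # assumed packet launch power
receiver_sensitivity_dbm = -25.0  # assumed minimum detectable power

power = launch_power_dbm
passes = 0
while power - loss_db_per_pass >= receiver_sensitivity_dbm:
    power -= loss_db_per_pass
    passes += 1

print(f"Without amplification the packet survives about {passes} recirculations")
# Amplifiers can restore the power, but then ASE noise (and switch crosstalk)
# accumulates with every pass, which still bounds the number of recirculations.
```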


12.4.4 Using Wavelengths for Contention Resolution

One way to reduce the amount of buffering needed is to use multiple wavelengths. In the context of PPS, buffers correspond to fiber delay lines. Observe that we can store multiple packets at different wavelengths in the same delay line.

We start by looking at a baseline architecture for an output-buffered switch using delay lines that does not make use of multiple wavelengths. Figure 12.18 shows such an implementation, which is equivalent to the output-buffered switch of Figure 12.15 with B buffers per output. Up to B slots of delay are provided per output by using a set of B delay lines per output; T denotes the duration of a time slot. If multiple input packets arriving in a time slot need to go to the same output, one of them is switched out while the others are delayed by different amounts and stored in the different delay lines, so that the output contention is resolved. Note that the set of delay lines together can store more than B packets simultaneously. For instance, a single K-slot delay line can hold up to K packets simultaneously. Therefore the total number of packets that can be held by the set of delay lines in Figure 12.18 is 1 + 2 + ... + B = B(B + 1)/2. However, since we can have only one packet per slot transmitted out (or a total of B packets in B slots), the effective storage capacity of this set of delay lines is only B packets.
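As a concrete check, take B = 4: the delay lines of lengths 1, 2, 3, and 4 slots can hold 1 + 2 + 3 + 4 = 10 packets at the same time, but since the output can transmit only one packet per slot, at most 4 of those packets can leave during the next 4 slots. The effective buffer capacity is therefore 4 packets, not 10.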

In its simplest form, we can use wavelengths internal to the switch to reduce the number of delay lines required. Figure 12.19 shows an example of such an output-buffered switch [ZT98]. Instead of providing a set of delay lines per output, the delay lines are shared among all the outputs. Packets entering the switch are sent through a tunable wavelength converter device. (Note that tunable wavelength converter devices are still in research laboratories today; see Section 3.8 for some of the approaches being pursued.) At the output of the switch, the packets are sent through an arrayed waveguide grating (AWG). The wavelength selected by the tunable wavelength converter and the output switch fabric port to which the packet is switched together determine the delay line to which the packet is routed by the AWG.


Figure 12.18 An example of an output-buffered optical switch using fiber delay lines for buffers that does not use wavelengths for contention resolution.

Figure 12.19 An example of an output-buffered optical switch using multiple wavelengths internal to the switch and fiber delay lines for buffers. The switch uses tunable wavelength converters and arrayed waveguide gratings.

Figure 3.25 provides a description of how the AWG works in this configuration. For example, consider the first input port on the AWG. From this port, wavelength λ1 is routed to delay line 0, wavelength λ2 is routed to the single-slot delay line, wavelength λ3 is routed to the two-slot delay line, and wavelength λB is routed to the B-slot delay line. In order to allow a packet at each input of the AWG to be routed to each possible delay line, we need the number of wavelengths W = max(N, B), where N is the number of inputs. Thus the delay seen by a packet can be controlled by controlling the wavelength at the output of the tunable wavelength converter device. In this case, if we have two input packets on different ports destined to the same output, they can be converted to different wavelengths and thus routed to different delay lines, resolving the contention.
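The following sketch shows how the TWC wavelength choice selects the delay line. It assumes a cyclic ("colorless") AWG in which wavelength index w entering input port p exits output port (p + w) mod W, with output port d wired to the d-slot delay line; the exact port/wavelength mapping depends on the particular device and is an assumption here.

```python
W = 8  # number of wavelengths = number of AWG ports, with W >= max(N, B)

def awg_output_port(input_port: int, wavelength_index: int) -> int:
    """Cyclic AWG routing rule (assumed): port and wavelength add modulo W."""
    return (input_port + wavelength_index) % W

def twc_wavelength_for_delay(input_port: int, desired_delay_slots: int) -> int:
    """Choose the TWC output wavelength so the packet lands in the delay line
    providing `desired_delay_slots` of delay (delay line d hangs off AWG port d)."""
    return (desired_delay_slots - input_port) % W

# Example: a packet entering AWG input port 3 needs a 5-slot delay.
w = twc_wavelength_for_delay(input_port=3, desired_delay_slots=5)
assert awg_output_port(3, w) == 5
print(f"tune the TWC to wavelength index {w}")
```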


Assuming the same traffic model as before, with ρ = 0.8, in order to obtain a packet loss probability of 10^-6 for a 16 x 16 switch, we need a total of 25 delay lines, instead of 25 delay lines per output for the case where only a single wavelength is used inside the switch. In Section 12.6, we will study other examples of switch configurations that use wavelengths internally to perform the switching and/or buffering functions.

We next consider the situation where we have a WDM network. In this case, multiple wavelengths are used on the transmission links themselves. We can gain a further reduction in the shared buffering required, compared to a single-wavelength system, by making use of the statistical nature of bursty traffic across multiple wavelengths. Figure 12.20 shows a possible architecture [Dan97] for such a switch, again using tunable wavelength converters and delay lines. At the inputs to the switch, the wavelengths are demultiplexed and sent through tunable wavelength converters and then into the switch fabric. The delay lines are connected to the output of the switch fabric. The W wavelengths destined for a given output port share a single set of delay lines. In this case, we have additional flexibility in dealing with contention. If two packets need to go out on the same output port, either they can be delayed in time, or they can be converted to different wavelengths and switched to the output port at the same time. The TWCs convert the input packets to the desired output wavelength, and the switch routes the packets to the correct output port and the appropriate delay line for that output.

As the number of wavelengths is increased, keeping the load per wavelength constant, the amount of buffering needed decreases because, within any given time slot, the probability of finding another free wavelength is quite high. Basically, we are sharing capacity among several wavelengths and permitting better use of that capacity. [Dan97] shows that the number of delay lines required to achieve a packet loss probability of 10^-6 at an offered load of 0.8 per wavelength for a 16 x 16 switch drops from 25 per output without using multiple wavelengths to 7 per output using four wavelengths, and to 4 per output when eight wavelengths are present.
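The intuition that a free wavelength is usually available can be made concrete with a crude calculation. This is not the queueing model of [Dan97]; it simply assumes each wavelength on an output link is busy in a slot with probability ρ, independently, so that no free wavelength exists with probability ρ^W.

```python
rho = 0.8
for W in (1, 4, 8):
    print(f"W = {W}: P(all wavelengths busy) = {rho**W:.3f}")
# W = 1: 0.800, W = 4: 0.410, W = 8: 0.168 -- with more wavelengths it is much
# more likely that a contention can be resolved without any buffering at all.
```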


Figure 12.20 An example of an output-buffered optical switch capable of switching multiple input wavelengths. The switch uses TWCs and wavelength demultiplexers. The TWCs convert the input packets to the desired output wavelength, and the switch routes the packets to the correct output port and the appropriate delay line for that output.

Table 12.1 Number of delay lines required for different switch architectures. A uniformly distributed offered load of 0.8 per wavelength per input is assumed, with a packet loss probability of 10^-6. The switch size is 16 x 16.

Buffering scheme                                      Delay lines required
Output-buffered, single wavelength (Figure 12.18)     25 per output (400 total)
Recirculating (Figure 12.17)                          7 per output (112 total)
Wavelengths internal to the switch (Figure 12.19)     25 total
WDM inputs, four wavelengths (Figure 12.20)           7 per output
WDM inputs, eight wavelengths (Figure 12.20)          4 per output

Table 12.1 compares the number of delay lines required for the different buffering schemes that we considered in this section. Note that the number of delay lines is only one among the many parameters we must consider when designing switch architectures. The others include the switch fabric size, the number of wavelength converters required, and the number of wavelengths used internally (and the associated complexity of the multiplexers and demultiplexers). While we have illustrated a few sample architectures in Figures 12.17 through 12.20, many variants of these architectures are possible.


An alternative to buffering for resolving contention is deflection routing, sometimes called hot-potato routing: a packet that loses the contention for its desired output port is misrouted (deflected) to another output port instead of being stored.

Intuitively, misrouting packets rather than storing them will cause packets to take longer paths, on average, to get to their destinations, and thus will lead to increased delays and lower throughput in the network. This is the price paid for not having buffers at the switches. These trade-offs have been analyzed in detail for regular network topologies such as the Manhattan Street network [GG93], an example of which is shown in Figure 12.21, or the shufflenet [KH90, AS92], another regular interconnection network, an example of which is shown in Figure 12.22, or both [Max89, FBP95]. Regular topologies are typically used for processor interconnections and may be feasible to implement in LANs. However, they are unlikely to be used in WANs, where the topologies used are usually arbitrary. Nevertheless, these analyses shed considerable light on the issues involved in the implementation of deflection routing even in wide-area photonic packet-switching networks, and the resulting performance degradation compared to buffering in the event of a destination conflict.

Before we can discuss these results, we need to slightly modify the model of the routing node shown in Figure 12.2. While discussing this figure earlier, we said that the routing node has one input link and one output link from/to every other routing node and end node to which it is connected. In many cases, the end node is colocated with the routing node, so that information regarding packets to be transmitted or received can be exchanged almost instantaneously between these nodes. In particular, this makes it possible for the end node to inject a new packet into its associated routing node only when no other packet is intended for the same output link. Thus this newly injected packet neither gets deflected nor causes deflection of other packets. This is a reasonable assumption to make in practice.

Delay

The first consequence of deflection routing is that the average delay experienced by the packets in the network is larger than in store-and-forward networks.


Figure 12.21 The Manhattan Street network with 4^2 = 16 nodes. In a network with n^2 nodes, these nodes are arranged in a square grid with n rows and columns. Each node transmits to two nodes, one in the same row and another in the same column. Each node also receives from two other nodes, one in the same row and the other in the same column. Assuming n is even, the direction of transmission alternates in successive rows and columns.
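The construction in the caption is easy to encode. In the sketch below, the specific convention (even-numbered rows transmit one way along the row, even-numbered columns one way along the column, with wraparound) is an assumption for illustration; any alternating assignment yields a Manhattan Street network of the same kind.

```python
def manhattan_street_edges(n: int):
    """Directed links of an n x n Manhattan Street network (n even)."""
    assert n % 2 == 0
    edges = []
    for r in range(n):
        for c in range(n):
            # Row link: direction alternates with the row index (with wraparound).
            row_next = (c + 1) % n if r % 2 == 0 else (c - 1) % n
            edges.append(((r, c), (r, row_next)))
            # Column link: direction alternates with the column index.
            col_next = (r + 1) % n if c % 2 == 0 else (r - 1) % n
            edges.append(((r, c), (col_next, c)))
    return edges

edges = manhattan_street_edges(4)
print(len(edges), "directed links")   # 32: each of the 16 nodes transmits on two links
```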

Figure 12.22 The shufflenet with eight nodes. More generally, a (Δ, k) shufflenet consists of k Δ^k nodes, arranged in k columns, each with Δ^k nodes. We can think of a (Δ, k) shufflenet in terms of the state transition diagram of a k-digit shift register, with each digit in {0, 1, ..., Δ - 1}. Each node (c, a_0 a_1 ... a_{k-1}) is labeled by its column index c ∈ {0, 1, ..., k - 1} along with a k-digit string a_0 a_1 ... a_{k-1}, with a_i ∈ {0, 1, ..., Δ - 1}, 0 ≤ i ≤ k - 1. There is an edge from a node i to another node j in the following column if node j's string can be obtained from node i's string by one shift. In other words, there is an edge from node (c, a_0 a_1 ... a_{k-1}) to a node ((c + 1) mod k, a_1 a_2 ... a_{k-1} x), where x ∈ {0, 1, ..., Δ - 1}.
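The edge rule in the caption translates directly into code; this short sketch just enumerates the nodes and edges of a (Δ, k) shufflenet as defined above.

```python
from itertools import product

def shufflenet_edges(delta: int, k: int):
    """Directed edges of a (delta, k) shufflenet: (c, a0...a_{k-1}) connects to
    ((c + 1) mod k, a1...a_{k-1} x) for every digit x in {0, ..., delta - 1}."""
    edges = []
    for c in range(k):
        for digits in product(range(delta), repeat=k):
            for x in range(delta):
                src = (c, digits)
                dst = ((c + 1) % k, digits[1:] + (x,))
                edges.append((src, dst))
    return edges

edges = shufflenet_edges(2, 2)           # the eight-node shufflenet of Figure 12.22
nodes = {e[0] for e in edges} | {e[1] for e in edges}
print(len(nodes), "nodes,", len(edges), "directed edges")   # 8 nodes, 16 edges
```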


Deflected packets take longer paths and may temporarily be routed away from their destinations. As a result, in most cases, for a given arrival rate, the overall delay in deflection-routed networks is larger than the overall delay in store-and-forward networks.

Throughput

Another consequence of deflection routing is that the throughput of the network is decreased compared to routing with buffers. An informal definition of the throughput of these networks, which will suffice for our purposes here, is that it is the maximum rate at which new packets can be injected into the network from their sources. Clearly, this depends on the interconnection topology of the network and the data rates on the links. In addition, it depends on the traffic pattern, which must remain fixed in defining the throughput. The traffic pattern specifies the fraction of new packets for each source-destination pair. Typically, in all theoretical analyses of such networks, the throughput is evaluated for a uniform traffic pattern, which means that the arrival rates of new packets for all source-destination pairs in the network are equal. If all the links run at the same speed, the throughput can be conveniently expressed as a fraction of the link speed.

For Manhattan Street networks with sizes ranging from a few hundred to a few thousand nodes, deflection routing achieves 55-70% of the throughput achieved by routing with buffering [Max89]. For shufflenets in the same range of sizes, the value is only 20-30% of the throughput with buffers. However, since a shufflenet has a much higher throughput than a Manhattan Street network of the same size (for routing with buffers), the actual throughput of the Manhattan Street network in the case of deflection routing is lower than that of the shufflenet. All these results assume a uniform traffic pattern.

So what do these results imply for irregular networks? To discuss this, let us examine some of the differences in the properties of these two networks. One important property of any network is its diameter, which is the largest number of hops on the shortest path between any two nodes in the network. In other words, the diameter is the maximum number of hops between two nodes in the network. However, in most networks, the larger the diameter, the greater the number of hops that a packet has to travel, even on average, to get to its destination. The Manhattan Street network has a diameter that is proportional to √n, where n is the number of nodes in the network. On the other hand, the shufflenet has a diameter that is proportional to log2 n. (We consider shufflenets of degree 2.) Thus if we consider a Manhattan Street network and a shufflenet with the same number of nodes and edges, the Manhattan Street network will have a lower throughput for routing with buffers than the shufflenet, since each packet has to traverse more edges, on average. For arbitrary networks, we can generalize this and say that the smaller the diameter of the network, the larger the throughput for routing with buffers.

For deflection routing, a second property of the network that we must consider is its deflection index. This property was introduced in [Max89], although it was not called by this name; it was formally defined and discussed in greater detail in a later paper [GG93]. The deflection index is the largest number of hops that a single deflection adds to the shortest path between some two nodes in the network. In the Manhattan Street network, a single deflection adds at most four hops to the path length, so its deflection index is four. On the other hand, the shufflenet has a deflection index of log2 n hops. This accounts for the fact that the Manhattan Street network has a significantly larger relative throughput (the deflection routing throughput expressed as a fraction of the store-and-forward throughput) than the shufflenet: 55-70% versus 20-30%. For arbitrary networks, we can then say that the deflection index must be kept small so that the throughput remains high in the face of deflection routing.
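The two properties pull in opposite directions for these topologies. The comparison below takes the stated scalings at face value (diameter roughly √n and deflection index 4 for the Manhattan Street network; diameter and deflection index roughly log2 n for the degree-2 shufflenet), with proportionality constants assumed to be 1, so the numbers are only order-of-magnitude illustrations.

```python
import math

print(f"{'nodes':>6} {'MSN diam':>9} {'MSN defl':>9} {'SN diam':>8} {'SN defl':>8}")
for n in (256, 1024, 4096):
    msn_diameter = round(math.sqrt(n))       # ~ sqrt(n)
    msn_deflection_index = 4                 # a deflection adds at most 4 hops
    sn_diameter = round(math.log2(n))        # ~ log2(n) for a degree-2 shufflenet
    sn_deflection_index = round(math.log2(n))
    print(f"{n:>6} {msn_diameter:>9} {msn_deflection_index:>9} "
          f"{sn_diameter:>8} {sn_deflection_index:>8}")
# The shufflenet wins on diameter (better buffered throughput) but loses on the
# deflection index, which is why its relative deflection throughput is lower.
```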

Combining the two observations, we can conclude that network topologies with small diameters and small deflection indices are best suited for photonic packet-switching networks. A regular topology designed by combining the Manhattan Street and shufflenet topologies and having these properties is discussed in [GG93]. In addition to choosing a good network topology (not necessarily regular), the performance of deflection-routing networks can be further improved by using appropriate deflection rules. A deflection rule specifies the manner in which the packets to be deflected are chosen among the packets contending for the same switch output port. The results we have quoted assume that in the event of a conflict between two packets, both packets are equally likely to be deflected; this deflection rule is termed random. Another possible deflection rule, called closest-to-finish [GG93], states that when two packets are contending for the same output port, the packet that is farther away from its destination is deflected. This has the effect of reducing the average number of deflections suffered by a packet and thus increasing the throughput.
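A small sketch of the two rules for a pair of contending packets is given below. The `hops_to_go` function, which returns a packet's remaining shortest-path distance to its destination, is assumed to be supplied by the routing layer; it is not something defined in the text.

```python
import random

def choose_deflected(pkt_a, pkt_b, hops_to_go, rule="closest-to-finish"):
    """Return the packet that should be deflected; the other one gets the port."""
    if rule == "random":
        return random.choice((pkt_a, pkt_b))
    # Closest-to-finish: deflect the packet that is farther from its destination.
    return pkt_a if hops_to_go(pkt_a) > hops_to_go(pkt_b) else pkt_b

# Hypothetical usage with made-up remaining distances:
distances = {"p1": 2, "p2": 7}
loser = choose_deflected("p1", "p2", hops_to_go=distances.get)
print(loser, "is deflected")   # p2, since it is farther from its destination
```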


With deflection routing, it is possible that a packet will be deflected forever and never reach its destination. This phenomenon has been called both deadlock [GG93] and livelock [LNGP96], but the term livelock seems to be more appropriate. Livelock is somewhat similar to the routing loops encountered in store-and-forward networks (see Section 6.3), but routing loops are a transient phenomenon there, whereas livelock is an inherent characteristic of deflection routing. Livelock can be eliminated by suitably designed deflection rules. However, proving that any particular deflection rule is livelock-free seems to be hard. We refer to [GG93, BDG95] for some further discussion of this issue (under the term deadlock). One way to eliminate livelock is simply to drop packets that have exceeded a certain threshold on the hop count.
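The hop-count rule amounts to a few lines of logic at each node; the threshold value below is an arbitrary assumption for illustration.

```python
HOP_LIMIT = 32  # assumed threshold on the number of hops a packet may travel

def forward_or_drop(packet_hops: int) -> str:
    """Drop packets that have exceeded the hop-count threshold instead of
    deflecting them again; otherwise forward them normally."""
    return "drop" if packet_hops > HOP_LIMIT else "forward"

print(forward_or_drop(5), forward_or_drop(40))   # forward drop
```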

Burst switching is a variant of PPS. In burst switching, a source node transmits a header followed by a packet burst. Typically, the header is transmitted at a lower speed, and most proposals assume that it is carried on an out-of-band control channel. An intermediate node reads the packet header and activates its switch to connect the following burst stream to the appropriate output port, if a suitable output port is available. If the output port is not available, the burst is either buffered or dropped. The main difference between burst switching and conventional photonic packet switching is that bursts can be fairly long compared to the packet duration in packet switching.

In burst switching, if the bursts are sufficiently long, it is possible to ask for or reserve bandwidth in the network ahead of time, before sending the burst. Various protocols have been proposed for this purpose. For example, one such protocol, called Just-Enough-Time (JET), works as follows. A source node wanting to send a burst first sends out a header on the control channel, alerting the nodes along the path that a burst will follow. It follows the header by transmitting the burst after a certain time period. The period is large enough to provide the nodes sufficient time to process the header and set up their switches before the burst arrives.
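A first-order sketch of how a source might size this offset is shown below. The per-hop processing time, the hop count, and the simple product formula are all illustrative assumptions; the text only requires that the offset be large enough for the nodes along the path to act on the header before the burst arrives.

```python
hops = 5                      # assumed number of nodes on the path that must process the header
header_processing_s = 20e-6   # assumed per-hop header processing and switch setup time

# Offset between sending the header and sending the burst (a rough lower bound).
offset_s = hops * header_processing_s
print(f"send the burst {offset_s * 1e6:.0f} microseconds after the header")  # 100
```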
