process the header and set the switches to switch the burst through when it arrives, so that additional buffering is not needed for this purpose at the nodes.
Overall, burst switching is essentially a variation of PPS where packets have variable and fairly large sizes, and little or no buffering is used at the nodes. As with packet switching, one of the main issues with burst switching is to determine the buffer sizes needed at the nodes to achieve reasonable burst drop probabilities when there is contention. The same techniques that we discussed earlier in Section 12.4 apply here as well.
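One common way to make this dimensioning question concrete is a bufferless loss model: if an output link carries W wavelengths with full wavelength conversion and is offered A Erlangs of burst traffic, the burst drop probability is often approximated by the Erlang-B formula. The sketch below is ours, for illustration only; the analysis referred to in Section 12.4 is more general, and the example numbers are arbitrary.

# Illustrative only: Erlang-B approximation for the burst drop probability at a
# bufferless node with W wavelengths (full wavelength conversion assumed).
def erlang_b(offered_load: float, num_wavelengths: int) -> float:
    # Standard recursion: B(0) = 1, B(m) = A*B(m-1) / (m + A*B(m-1)).
    b = 1.0
    for m in range(1, num_wavelengths + 1):
        b = (offered_load * b) / (m + offered_load * b)
    return b

# Example: a link with 16 wavelengths offered 12 Erlangs of burst traffic.
print(f"burst drop probability ~ {erlang_b(12.0, 16):.4f}")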
Several PPS testbeds have been built over the last few years, and many are being built today. The main focus of most of these testbeds is the demonstration of certain key PPS functions such as multiplexing and demultiplexing, routing/switching, header recognition, optical clock recovery (synchronization or bit-phase alignment), pulse generation, pulse compression, and pulse storage. We will discuss some of these testbeds in the remainder of this section. The key features of these testbeds are summarized in Table 12.2.
12.6.1 KEOPS
KEOPS (Keys to Optical Packet Switching) [Gam98, Gui98, RMGB97] was a significant project undertaken by a group of research laboratories and universities in Europe. Its predecessor was the ATMOS (ATM optical switching) project [Mas96, RMGB97]. KEOPS demonstrated several of the building blocks for PPS and put together two separate demonstrators illustrating different switch architectures. The building blocks demonstrated include all-optical wavelength converters using cross-phase modulation in semiconductor optical amplifiers (see Section 3.8) up to 40 GHz, a packet synchronizer at 2.5 Gb/s using a tunable delay line, tunable lasers, and low-loss integrated indium phosphide Mach-Zehnder-type electro-optic switches.
The demonstrations of network functionality were performed at data rates of 2.5 Gb/s and 10 Gb/s, with the packet header being transmitted at 622 Mb/s. The KEOPS switches used wavelengths internal to the switch as a key tool in performing the switching and buffering, instead of using large optical space switches. In this sense, the KEOPS demonstrators are variations of the architecture of Figure 12.19.
Table 12.2 Key features of photonic packet-switching testbeds described in Section 12.6.

Testbed               Topology      Bit Rate               Functions Demonstrated
KEOPS                 Switch        2.5 Gb/s (per port)    4 x 4 switch, subnanosecond switching, all-optical wavelength conversion, tunable lasers, packet synchronizer
KEOPS                 Switch        10 Gb/s (per port)     16 x 16 broadcast/select, subnanosecond switching
FRONTIERNET           Switch        2.5 Gb/s (per port)    16 x 16, tunable laser
NTT                   Switch        10 Gb/s (per port)     4 x 4 broadcast/select
Synchrolan (BT Labs)  Bus           40 Gb/s (aggregate)    Bit-interleaved data transmission and reception
BT Labs               Switch        100 Gb/s (per port)    Routing in a 1 x 2 switch based on optical header recognition
Princeton             Switch        100 Gb/s (per port)    Packet compression, TOAD-based demultiplexing
AON                   Helix (bus)   100 Gb/s (aggregate)   Optical phase lock loop, pulse generation, compression, storage
CORD                  Bus           2.5 Gb/s (per port)    Contention resolution
The first demonstrator, shown in Figure 12.23, used a two-stage switching approach with wavelength routing. Here, the first stage routes the input signal to the appropriate delay line by converting it to a suitable wavelength and passing it through a wavelength demultiplexer. The second stage routes the packet to the correct output, again by using a tunable wavelength converter and a combination of wavelength demultiplexers and multiplexers. Each input has access to at least one delay line in each set of delay lines. Since the delay line in turn has access to all the output ports, the switch may be viewed as implementing a form of shared output buffering. The switch controller (not shown in the figure) schedules the incoming packets onto the delay lines as follows: each input packet is scheduled with the minimum possible delay, d, such that (1) no other packet is scheduled in the same time slot to the same output port, (2) no other packet is scheduled in the same time slot on any of the delay lines leading to the same second-stage tunable wavelength converter as the desired packet, and (3) in order to deliver packets in the sequence of their arrival, no previous packet from the same input is scheduled to the same output port with a delay larger than d. (A sketch of this scheduling rule is given at the end of this subsection.)

Another demonstrator used a broadcast-and-select approach, as shown in Figure 12.24. Here packets arriving at different inputs are assigned different wavelengths. Each packet is then broadcast into an array of delay lines providing different delays. Each delay line can store multiple packets simultaneously at different wavelengths.
Figure 12.23 The wavelength-routing packet switch used in KEOPS.
Thus each input packet is made available at the output over several slots. Of these, one particular slot is selected using a combination of wavelength demultiplexers, optical switches, and wavelength multiplexers. This switch therefore emulates an output-buffered switch with a B-slot buffer on each output. A 16 x 16 switch using this approach was demonstrated.
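The scheduling rule of the first (wavelength-routing) demonstrator can be stated as a short procedure. The following sketch is ours, not from the KEOPS papers: it assumes slotted time, delay lines providing 0 to B - 1 slots of delay, and it collapses the sets of delay lines into one logical line per (output, delay) pair, so it illustrates the three constraints only in this simplified setting.

# Simplified model of the KEOPS delay-line scheduler described above.
class KeopsScheduler:
    def __init__(self, max_delay):
        self.B = max_delay              # delays of 0 .. B-1 slots are available
        self.out_busy = set()           # (output, slot) pairs already claimed
        self.line_busy = set()          # (output, delay, slot): delay line entered in that slot
        self.max_prev_delay = {}        # (input, output) -> largest delay used so far

    def schedule(self, slot, in_port, out_port):
        """Return the minimum feasible delay for a packet arriving in `slot`,
        or None if the packet must be dropped."""
        d_min = self.max_prev_delay.get((in_port, out_port), 0)   # constraint (3): in-order delivery
        for d in range(d_min, self.B):
            if (out_port, slot + d) in self.out_busy:             # constraint (1): output free in that slot
                continue
            if (out_port, d, slot) in self.line_busy:             # constraint (2): delay line free now
                continue
            self.out_busy.add((out_port, slot + d))
            self.line_busy.add((out_port, d, slot))
            self.max_prev_delay[(in_port, out_port)] = d
            return d
        return None

# Example: two packets arriving in slot 0 for the same output get delays 0 and 1.
sched = KeopsScheduler(max_delay=4)
print(sched.schedule(0, in_port=0, out_port=2), sched.schedule(0, in_port=1, out_port=2))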
12.6.2 NTT's Optical ATM Switches
Researchers at NTT have demonstrated photonic packet switches using an approach somewhat similar to KEOPS [Yam98, HMY98]. Like the KEOPS switches, these switches also use wavelengths internal to the switch as a key element in performing the switching function. The FRONTIERNET switch [Yam98], shown in Figure 12.25, uses tunable wavelength converters in conjunction with an arrayed waveguide grating to perform the switching function, followed by delay line buffers. This is again an output-buffered switch, with two stages of selection: for each output, the first stage selects the time slot, and the second stage selects the desired wavelength within that time slot. In the experiment, the tunable converter assumes that the incoming data is electrical and uses a tunable laser and external modulator to provide a tunable optical input into an arrayed waveguide grating. A 16 x 16 switch operating at 2.5 Gb/s with optical delay line buffering was demonstrated.
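The role of the tunable converter can be seen from the usual idealized cyclic-routing property of an N x N arrayed waveguide grating, in which the input port and the wavelength index together determine the output port. The sketch below uses that idealized mapping; the actual port/wavelength assignment in the FRONTIERNET experiment may differ, and the names here are ours.

# Idealized N x N AWG routing: a signal entering in_port on wavelength index w
# leaves on port (in_port + w) mod N, so tuning the wavelength selects the output.
N = 16  # 16 x 16 switch, as in the FRONTIERNET demonstration

def awg_output_port(in_port: int, wl_index: int) -> int:
    return (in_port + wl_index) % N

def wavelength_for(in_port: int, desired_out: int) -> int:
    """Wavelength index the tunable converter must select to reach desired_out."""
    return (desired_out - in_port) % N

# Example: input 3 must convert to wavelength index 9 to reach output 12.
assert awg_output_port(3, wavelength_for(3, 12)) == 12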
In separate experiments [HMY98], the switching was accomplished by broadcasting a wavelength-encoded signal to a shared array of delay lines and selecting the appropriate time slot at the output, again like the KEOPS approach. A 4 x 4 switch at a 10 Gb/s data rate was demonstrated. The key technologies demonstrated included tunable lasers and optical delay line buffering.

Figure 12.24 The broadcast-and-select packet switch used in KEOPS.

Figure 12.25 The FRONTIERNET architecture.

12.6.3 BT Labs Testbeds
Researchers at British Telecom (BT) Laboratories demonstrated several aspects of PPS networks [CLM97] that we discussed in this chapter. Multiplexing and demultiplexing of high-speed signals in the optical domain was demonstrated in a prototype broadcast local-area network based on a bus topology called Synchrolan [LGM+97, Gun97b]. Bit interleaving was used, with each of the multiplexed channels operating at a bit rate of 2.5 Gb/s. The aggregate bit rate transmitted on the bus was 40 Gb/s. The clock signal (akin to a framing pulse) was distributed along with the bit-interleaved data channels. The availability of the clock signal meant that there was no need for optical clock recovery techniques. A separate time slot was not used for the clock signal; rather, it was transmitted with a polarization orthogonal to that of the data signals. This enabled the clock signal to be separated easily from the data. In a more recent demonstration [Gun97a], the data and clock signals were transmitted over two separate standard single-mode (nonpolarization-preserving) fibers, avoiding the need for expensive polarization-maintaining components.
A PPS node was also demonstrated separately at BT Labs [Cot95]. The optical header from an incoming packet was compared with the local address corresponding to the PPS node, using an optical AND gate (but of a different type than the ones we discussed). The rest of the packet was stored in a fiber delay line while the comparison was performed. The output of the AND gate was used to set a 1 x 2 switch so that the packet was delivered to one of two outputs based on a match, or lack of it, between the incoming packet header and the local address.
12.6.4 Princeton University Testbed
This testbed was developed in the Lightwave Communications Laboratory at Princeton University, funded by DARPA [Tol98, SBP96]. The goal was to demonstrate a single routing node in a network operating at a transmission rate of 100 Gb/s. Packet interleaving was used, and packets from electronic sources at 100 Mb/s were optically compressed to the 100 Gb/s rate using the techniques we described in Section 12.1. The limitations of the semiconductor optical amplifiers used in the packet compression process (Figure 12.6) require a 0.5 ns (50 bits at 100 Gb/s) guard band between successive packets. Optical demultiplexing of the compressed packet header was accomplished by a bank of AND gates, as described in Section 12.1. The TOAD architecture described in Section 12.1.3 was used for the AND gates.
The number of TOADs to be used is equal to the length of the packet header. Thus the optically encoded serial packet header was converted to a parallel, electronic header by a bank of TOADs.

Figure 12.26 The helical LAN topology proposed to be used in the AON TDM testbed. GBW: guaranteed bandwidth; BOD: bandwidth on demand.
12.6.5 AON
This testbed was developed by the All-Optical Network (AON) consortium consisting of AT&T Bell Laboratories, Digital Equipment Corporation, and the Massachusetts Institute of Technology [Bar96]. The aim was to develop an optical TDM LAN/MAN operating at an aggregate rate of 100 Gb/s using packet interleaving. Different classes of service, specifically guaranteed bandwidth service and bandwidth-on-demand service, were proposed to be supported. The topology used is shown in Figure 12.26. This is essentially a bus topology where users transmit in the top half of the bus and receive from the bottom half. One difference, however, is that each user is attached for transmission to two points on the bus such that the guaranteed bandwidth transmissions are always upstream from the bandwidth-on-demand transmissions. Thus the topology can be viewed as having the helical shape shown in Figure 12.26; hence the name helical LAN (HLAN) for this network.
Experiments demonstrating an optical phase lock loop were carried out. In these experiments, the frequency and phase of a 10 Gb/s electrically controlled mode-locked laser were locked to those of an incoming 40 Gb/s stream. (Every fourth pulse in the 40 Gb/s stream coincides with a pulse from the 10 Gb/s stream.) Other demonstrated technologies include short pulse generation, pulse compression, pulse storage, and wavelength conversion.
12.6.6 CORD

Figure 12.27 A block diagram of the CORD testbed.
The Contention Resolution by Delay Lines (CORD) testbed was developed by a consortium consisting of the University of Massachusetts, Stanford University, and GTE Laboratories [Chl96]. A block diagram of the testbed is shown in Figure 12.27. The testbed consisted of two nodes transmitting ATM-sized packets at 2.488 Gb/s using different transmit wavelengths (1310 nm and 1320 nm). A 3 dB coupler broadcasts all the packets to both the nodes. Each node generates packets destined to both itself and the other node. This gives rise to contentions at both the receivers. The headers of the packets from each node were carried on distinct subcarrier frequencies (3 GHz and 3.5 GHz) located outside the data bandwidth (≈ 2.5 GHz). The subcarrier headers were received by tapping off a small portion of the power (10%) from the incoming signal.
Time was divided into slots, with the slot size being equal to 250 ns. Since an ATM packet is only 424/2.488 ≈ 170 ns long, there was a fair amount of guard band in each slot. Slot synchronization between nodes was accomplished by having nodes adjust their clocks based on their propagation delay to the hub. However, a separate synchronizer node was not used, and one of the nodes itself acted as the synchronizer node (called the "master" in CORD). The data rate on the subcarrier channels was chosen to be 80 Mb/s so that a 20-bit header could be transmitted in the 250 ns slot.
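A few lines of arithmetic, using only the numbers quoted above, confirm the slot budget; the variable names are ours.

# Back-of-the-envelope check of the CORD slot budget.
atm_packet_bits = 424           # 53-byte ATM cell
line_rate_gbps = 2.488          # payload bit rate
slot_ns = 250                   # slot duration
subcarrier_rate_mbps = 80       # header subcarrier data rate

packet_ns = atm_packet_bits / line_rate_gbps                  # ~170 ns on the fiber
guard_ns = slot_ns - packet_ns                                # ~80 ns of guard band per slot
header_bits = subcarrier_rate_mbps * 1e6 * slot_ns * 1e-9     # 20 bits of header per slot

print(f"packet ~{packet_ns:.0f} ns, guard band ~{guard_ns:.0f} ns, header {header_bits:.0f} bits/slot")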
In one of the nodes, a feed-forward delay line architecture similar to that shown in Figure 12.14 was used, with a WDM demux and mux surrounding it, so that signals at the two wavelengths could undergo different delays. Thus this node had greater opportunities to resolve contentions among packets destined to it. This is the origin of the name contention resolution by delay lines for this testbed. The current testbed is built using discrete components, including lithium niobate switches, semiconductor optical amplifiers (for loss compensation), and polarization-maintaining fiber for the delay lines. An integrated version of the contention resolution optics (CRO), which would integrate the three 2 x 2 switches and semiconductor amplifiers on a single InP substrate, is under development.
Summary
Photonic packet-switched networks offer the potential of realizing packet-switched networks with much higher capacities than may be possible with electronic packet-switched networks. However, significant advances in technology are needed to make them practical, and there are some significant roadblocks to overcome. The state of optical packet-switching technology is somewhat analogous to the state of electronic circuits before the integrated circuit was invented. All the building blocks needed for optical packet switching are in a fairly rudimentary state today; in research laboratories, they are either difficult to realize, very bulky, or very expensive. For example, optical buffering is implemented using hundreds of meters of delay lines, which are bulky and can provide only limited amounts of storage. Transmitting data at 100 Gb/s and higher line rates over any significant distances of optical fiber is still a major challenge. At this time, fast optical switches have relatively high losses, including polarization-dependent losses, and are not amenable to integration, which is essential to realize large switches. Optical wavelength converters, which have been proposed for many of the architectures, are still in their infancy today. Temperature dependence of individual components can also be a significant problem when multiplexing, demultiplexing, or synchronizing signals at such high bit rates.
We also need effective ways of combating the signal degradation through these switches. For instance, a cheap all-optical 3R regenerator along the lines of what we studied in Section 3.8 would make many of these architectures more practical. For the foreseeable future, it appears that we will continue to perform all the intelligent control functions for packet switching in the electrical domain.
In the near term, we will continue to see the optical layer being used to provide circuit-switched services, with packet-switching functions being done in the electronic domain by IP routers or ATM switches. PPS, particularly with burst switching, is being positioned as a possible future replacement for the optical circuit layer, while still retaining electronic packet switching at the higher layers. The notion is that circuit-switched links are still underutilized due to the bursty nature of traffic, and using an underlying optical packet layer instead of a circuit layer will help improve link utilizations.
Further Reading
There has been a great deal of research activity related to photonic packet switching with respect to architectures and performance evaluation, as well as experiments and testbeds. See [HA00] for a recent overview, as well as [Pru93, BPS94, Mid93]. [BIPT98, MS88] are special issues devoted to this topic.
The NOLM is described in [DW88], and its use for optical demultiplexing is described in [BDN90]. The NALM is described in [FHHH90]. The architecture of the TOAD is described in [SPGK93], and its operation is analyzed in [KGSP94]. Its use for packet header recognition is described in [GSP94]. Another nonlinear optical loop mirror structure, which uses a short loop length and an SOA within the loop, is described in [Eis92]. The soliton-trapping AND gate is described in [CHI+]. Other demultiplexing methods using high-speed modulators are described in [Mik99, MEM98]. Packet compression and decompression can also be accomplished by a technique called rate conversion; see [PHR97].
For a summary of optical buffering techniques, see [HCA98, Hal97]. Many of the performance results relating to buffering in packet switches may be found in [HK88]. Optical buffering at 40 Gb/s is described in [HR98]. [Dan97, Dan98] analyze the impact of using the wavelength dimension to reduce the number of buffers.
For an overview of deflection routing, see [Bor95]. For an analysis of deflection routing on the hypercube topology, see [GH92]. Some other papers on deflection routing that may be of interest are [HC93, BP96]. [BCM+] describes an early experimental demonstration of a packet-switching photonic switch using deflection routing.
Using burst switching in the context of PPS has been proposed by [QY99, Tur99, YQD01]. Similar notions were proposed earlier in the context of electronic packet-switched networks [Ams83].
Most of the testbeds we have discussed, and some we haven't, are described in the special issues on optical networks and photonic switching [BIPT98, CHK+, FGO+96]. See also [Hun99, Gui00] for another testbed architecture and demonstration using wavelength-based switching. A design for a soliton ring network operating at 100 Gb/s and using soliton logic gates such as the soliton-trapping AND gate is described in [SID93].
We have covered WDM as well as TDM techniques in the book, but haven't explored networks based on optical code division multiple access (OCDMA). Here different transmitters make use of different codes to spread their data, either in the time domain or in the frequency domain. The codes are carefully designed so that many transmitters can transmit simultaneously without interfering with one another, and the receiver can pick out a desired transmitter's signal from the others by suitably despreading the received signal. OCDMA networks were a popular research topic in the late 1980s and early 1990s, but they suffer from even more problems than PPS networks employing high-speed TDM. See [Sal89, SB89, PSF86, FV88] for a sampling of papers on this topic, and see [Gre93] for a good overview.
Problems
12.1 In the packet multiplexing illustrated in Figure 12.6, show that the delay encountered by pulse i, i = 1, 2, ..., l, on passing through the k compression stages is (2^k - i)(T - τ). Using the fact that the pulses are separated by time T at the input, now show that pulse i occurs at the output at time (2^k - 1)(T - τ) + (i - 1)τ. Thus the pulses are separated by a time interval of τ at the output.
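A quick numerical check (not a derivation) of these timing relations can be made by tabulating the output times implied by the stated per-pulse delays; the values of T, τ, and k below are arbitrary illustrative choices of ours.

# Check that pulses delayed by (2^k - i)*(T - tau) end up spaced tau apart.
T, tau, k = 1.0, 0.01, 4
l = 2 ** k                                       # number of pulses in the packet

out_times = [(i - 1) * T + (2 ** k - i) * (T - tau) for i in range(1, l + 1)]

spacings = [out_times[i] - out_times[i - 1] for i in range(1, l)]
assert all(abs(s - tau) < 1e-12 for s in spacings)
print(f"first pulse exits at {out_times[0]:.2f}, output spacing = {spacings[0]:.2f}")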
12.2 Show that a compressed data packet of length l bits, obtained by the packet multiplexing technique illustrated in Figure 12.6, can be decompressed, in principle, by passing it through a series of k = ⌈log₂ l⌉ expansion stages, where the jth expansion stage is as shown in Figure 12.28. What should be the switching time of the on-off switches used in this scheme?
12.3 Consider the tunable delay shown in Figure 12.13. Assume that a delay of xT/2^(k-1) is to be realized, where x is a k-bit integer. Consider the binary representation of x,
Figure 12.28 An optical packet demultiplexer can be built, in principle, by passing the compressed packet through k expansion stages, where 2^k is the smallest power of two that is not smaller than the packet length l in bits. The figure shows a detailed view of expansion stage j.