22.4 Routing and Resource Allocation in Multi-Hop Networks
22.4.11 Routing for Multiple Messages – Stochastic Network Optimization
All the routing algorithms described above are essentially based on the assumption that a single message in the network is to be transmitted to one or more destination nodes. The situation becomes much more complicated when multiple messages are to be transmitted, since each transmission creates interference on other active links. Thus, a link that provides high capacity when only a single message is routed over it might become a bottleneck when multiple messages have to be transmitted over that same link. In other words, the routes that are found to be optimum for the different messages separately (e.g., by the Dijkstra algorithm) are not necessarily the best (or even good) routes when multiple messages are transmitted in the network simultaneously.
A logical next step is then to determine a joint routing and power allocation, which essentially makes sure that messages along a route do not interfere with each other significantly. By keeping routes “sufficiently” separated (where the transmit power influences how big a separation is sufficient), the throughput can be improved significantly. Still, such an algorithm misses another vital component of network design, namely scheduling. Different messages can be sent over the same node; we just have to ensure that this happens at different times – similar to road traffic at an intersection: the scheduling algorithm corresponds to a traffic light that tells the traffic from one road to stop while traffic on an intersecting road is using the intersection, so that no collisions occur. The general problem of finding routes, scheduling, and power/rate control for multiple messages is thus a very complicated, but also practically very important, problem. A number of different algorithms have been designed to solve it. As one example, the following describes a stochastic optimization approach called the backpressure algorithm.
The backpressure algorithm is a stochastic network optimization algorithm that – despite its simplicity – turns out to be optimal under certain assumptions. Let us describe the network by its states, and by control actions that determine what and how much data are transmitted with what power. The formulation as a control problem makes it possible to bring to bear useful techniques from control theory, such as Lyapunov functions. The essence of the approach is the following: each node has a buffer in which it stores arriving data, and it tries to transmit data to empty the buffer. The algorithm then computes link weights that take into account (i) the difference of the queue size between the transmitting and receiving node of a link and (ii) the data rate that can be achieved over this link.
Consider the following setup: a network contains $N$ nodes, connected by $L$ links. The transmission of the messages occurs in a slotted manner, where $t$ is the slot index. The link between two nodes $a$ and $b$ is characterized by a transmission rate $\mu_{ab}(t)$; the rates are summarized in the transmission matrix $\mu(t) = C(I(t), S(t))$, where $C$ is the transmission rate function that depends on the network topology state $S(t)$, which describes all the effects that the network cannot influence (e.g., fading), and the link control action $I(t)$, which includes all the actions of the network that can be influenced, like power control, etc. The most important example of a transmission rate function is capacity-achieving transmission, so that on the $l$-th link
$$
C_l(P(t), S(t)) = \log_2\left[ 1 + \frac{P_l(t)\,\alpha_{ll}(S(t))}{P_{\rm n} + \sum_{k \neq l} P_k(t)\,\alpha_{kl}(S(t))} \right] \qquad (22.30)
$$
where $\alpha_{kl}(S(t)) = |h_{kl}|^2$ is the power gain (inverse attenuation) of the signal transmitted by the intended TX of link $k$ to the intended RX of link $l$ when the network topology is in the state $S(t)$, $P_l(t)$ is the power used for transmission along the $l$-th link, and $P_{\rm n}$ is the noise power.
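As a concrete illustration, the following is a minimal numerical sketch of how the per-link rates of Eq. (22.30) could be evaluated for a given power vector and channel state; the function and variable names (link_rates, power_gains, noise_power) are illustrative and not taken from the source.

```python
import numpy as np

def link_rates(powers, power_gains, noise_power):
    """Evaluate Eq. (22.30): capacity-achieving rate on every link.

    powers      : length-L array of transmit powers P_l(t)
    power_gains : L x L array, power_gains[k, l] = alpha_kl(S(t)) = |h_kl|^2,
                  gain from the TX of link k to the RX of link l
    noise_power : noise power P_n at the receivers
    """
    L = len(powers)
    rates = np.zeros(L)
    for l in range(L):
        desired = powers[l] * power_gains[l, l]
        # interference from all other active links k != l at the RX of link l
        interference = sum(powers[k] * power_gains[k, l] for k in range(L) if k != l)
        rates[l] = np.log2(1.0 + desired / (noise_power + interference))
    return rates

# toy example: 3 mutually interfering links with Rayleigh-fading power gains
rng = np.random.default_rng(0)
gains = rng.exponential(scale=1.0, size=(3, 3))
print(link_rates(np.array([1.0, 1.0, 1.0]), gains, noise_power=0.1))
```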
During each timeslot, an amount of data $R_q^{\rm out}$ is transmitted by a node. At the same time, data are arriving from an external source (e.g., sensors, or computers that want to transmit data), and the node is receiving data via the wireless links from other sources; the total amount of data arriving during one timeslot is $R_q^{\rm in}$.⁸ Each node has an infinitely large buffer, which can store the arriving messages before they are sent out over the wireless links. The amount of data in the buffer (also called queue backlog) is written as $Q_q(t)$, where $q$ is the index of the considered queue; the backlogs are all written into a vector $Q(t)$. During one timeslot, the backlog of a queue at a particular node changes as
$$
Q_q(t+1) \le \max\left[Q_q(t) - R_q^{\rm out}(I(t), S(t)),\, 0\right] + R_q^{\rm in}(I(t), S(t)) \qquad (22.31)
$$

where the $\max[\cdot, 0]$ operator is used to ensure that the length of the queue cannot become negative.
We can then define a general “penalty function” $\xi$, the time average of an instantaneous penalty function – e.g., the overall power consumption of the network. An additional set of constraints $x_i$ can be defined, e.g., the (time-averaged) power consumption per node. We then aim to solve the following optimization problem:
$$
\text{Minimize } \xi \qquad (22.32)
$$
$$
\text{subject to } x_i \le x_{\rm av} \text{ for all } i \qquad (22.33)
$$

and network stability, where $x_{\rm av}$ is, e.g., the maximum admissible mean power.
Here, network stability means that the size of the queues remains bounded – in other words, that on average not more data flow into the queue than can be “shoveled out.” Note that it is important to ultimately deliver the data to their final destination, since such data delivery is the only way of making the data “vanish” from the network. As long as data are sent on their way through a multi-hop network, they are part of some queue, and therefore detrimental to achieving network stability. We also note that a possible optimization goal is simply the achievement of network stability without any further constraints (this is formally achieved by settingξ =1).
The first step in solving the problem above is to convert the additional constraints $x_i \le x_{\rm av}$ into “virtual queues” (these are not true data queues, but simply update equations):

$$
Z_i(t+1) = \max\left[Z_i(t) - x_{\rm av},\, 0\right] + x_i(I(t), S(t)) \qquad (22.34)
$$
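The virtual-queue update of Eq. (22.34) can be sketched in the same style as the data-queue update above; again, a minimal illustration with names chosen for readability, not taken from the source.

```python
def virtual_queue_update(Z, x_instant, x_av):
    """Virtual-queue update, Eq. (22.34): the time-average constraint
    x_i <= x_av is satisfied exactly when this bookkeeping queue stays stable."""
    return max(Z - x_av, 0.0) + x_instant
```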
The optimization problem is then converted into the problem of minimizing $\xi$ while stabilizing both the actual queues $Q(t)$ and the virtual queues $Z(t)$, combined into the vector $\Theta(t)$. Defining now the Lyapunov drift

$$
\begin{aligned}
\Delta_Z(\Theta(t)) &= E\left\{L_Z(\Theta(t+1)) - L_Z(\Theta(t)) \,\middle|\, \Theta(t)\right\} \\
\Delta_Q(\Theta(t)) &= E\left\{L_Q(\Theta(t+1)) - L_Q(\Theta(t)) \,\middle|\, \Theta(t)\right\} \\
\Delta(\Theta(t)) &= \Delta_Z(\Theta(t)) + \Delta_Q(\Theta(t))
\end{aligned}
\qquad (22.35)
$$

where

$$
L_Z(\Theta(t)) = \frac{1}{2}\sum_i \left[Z_i(t)\right]^2, \qquad L_Q(\Theta(t)) = \frac{1}{2}\sum_q \left[Q_q(t)\right]^2 \qquad (22.36)
$$
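As a small illustration of Eq. (22.36), the quadratic Lyapunov "energies" are straightforward to compute from the backlog vectors; this is only a sketch, and the array names are illustrative.

```python
import numpy as np

def lyapunov_functions(Q, Z):
    """Quadratic Lyapunov functions of Eq. (22.36): the two scalars are small
    only if all actual and virtual backlogs are small, so a policy that keeps
    them small (on average) keeps every queue bounded."""
    return 0.5 * np.sum(np.square(Z)), 0.5 * np.sum(np.square(Q))
```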
⁸ The half-duplex constraint can be included, e.g., by assuming that two orthogonal channels exist for each node, one on which data can be received and one on which only transmission can happen.
Upper bounds for the Lyapunov drift are given by

$$
\Delta_Z(\Theta(t)) \le B_Z - \sum_i Z_i(t)\, E\left\{x_{\rm av} - x_i(I(t), S(t)) \,\middle|\, \Theta(t)\right\} \qquad (22.37)
$$
$$
\Delta_Q(\Theta(t)) \le B_Q - \sum_q Q_q(t)\, E\left\{R_q^{\rm out}(I(t), S(t)) - R_q^{\rm in}(I(t), S(t)) \,\middle|\, \Theta(t)\right\} \qquad (22.38)
$$

where $B_Z$ and $B_Q$ are finite constants.
The “generalized max-weight policy,” i.e., the determination of the optimum control policy for the backpressure algorithm, greedily minimizes in each step
$$
\Delta_Z(\Theta(t)) + \Delta_Q(\Theta(t)) + V\, E\left\{\xi(I(t), S(t)) \,\middle|\, \Theta(t)\right\} \qquad (22.39)
$$

where $V$ is a control parameter. This can be written as minimizing

$$
V \hat{\xi}(I(t), S(t)) + \sum_i Z_i(t)\, \hat{x}_i(I(t), S(t)) - \sum_q Q_q(t)\left[\hat{R}_q^{\rm out}(I(t), S(t)) - \hat{R}_q^{\rm in}(I(t), S(t))\right] \qquad (22.40)
$$

where $\hat{\xi}(I(t), S(t)) = E\{\xi(I(t), S(t)) \,|\, I(t), S(t)\}$ and similarly for the other “hat” functions in the equation above. These functions of $I(t)$ and $S(t)$ are assumed to be known. For a given slot $t$, the network state $S(t)$ and queue backlogs $Q(t)$ and $Z(t)$ are known, though the pdf of $S$ does not need to be known.
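The following is a minimal sketch of the resulting per-slot decision, assuming the set of candidate control actions is finite and can be enumerated; all function and parameter names are illustrative, and the penalty, constraint, and rate functions are assumed to be supplied by the caller.

```python
import numpy as np

def max_weight_action(candidate_actions, state, Q, Z, V,
                      penalty, constraint_fns, rate_out, rate_in):
    """Greedy per-slot choice of Eq. (22.40) over a finite candidate set.

    candidate_actions : iterable of possible control actions I(t)
                        (e.g., all feasible power/scheduling choices)
    state             : current topology state S(t)
    Q, Z              : arrays of actual and virtual queue backlogs
    V                 : control parameter (delay vs. penalty tradeoff)
    penalty           : function (I, S) -> instantaneous penalty xi
    constraint_fns    : list of functions (I, S) -> x_i, one per virtual queue
    rate_out, rate_in : functions (I, S, q) -> R_q^out, R_q^in
    """
    best_action, best_value = None, np.inf
    for action in candidate_actions:
        value = V * penalty(action, state)
        value += sum(Z[i] * x(action, state) for i, x in enumerate(constraint_fns))
        value -= sum(Q[q] * (rate_out(action, state, q) - rate_in(action, state, q))
                     for q in range(len(Q)))
        if value < best_value:
            best_action, best_value = action, value
    return best_action
```

Enumerating all candidate actions is only feasible for small networks; the point of the sketch is the structure of the per-slot objective, not an efficient search.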
The “network stabilization” only guarantees that – as a time average – the length of the queues does not grow to infinity. There is, however, no guarantee about the delay of a particular data packet. Even the average delay of the data packets can be quite large. The control parameter V allows a tradeoff between this average delay and how close the solution of Eq. (22.40) comes to the theoretical minimum of the penalty function.
After this rather general description, let us turn to a simpler special case: we wish to minimize the average network power (without additional constraints on the per-node power). In that case, there are no “virtual” queues. We now define, for each node $n$, the set $\Omega_n$ of links for which node $n$ acts as TX; we furthermore note that at each node there can be multiple queues, each for one particular message. We thus aim to choose the power vector $P(t)$ that maximizes the expression
$$
\sum_n \left[ \sum_{l \in \Omega_n} C_l(P(t), S(t))\, W_l^*(t) - V \sum_{l \in \Omega_n} \overline{P}_l \right] \qquad (22.41)
$$
where $\overline{P}_l$ is the temporal average of $P_l(t)$, and the weights $W_l^*$ are

$$
W_l^*(t) = \max\left[Q_{{\rm t}l} - Q_{{\rm r}l},\, 0\right] \qquad (22.42)
$$

and $Q_{{\rm t}l}$ and $Q_{{\rm r}l}$ are the backlogs of the queue at the transmitting node and receiving node, respectively, of link $l$ for the packet stream (commodity) that has the biggest backlog differential at this particular link.
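The sketch below illustrates Eqs. (22.41) and (22.42) for a small network; it reuses the link_rates() helper sketched after Eq. (22.30), searches over a small discrete grid of per-link power levels, and, as a simplification, charges the instantaneous powers $P_l(t)$ in place of their temporal averages. All names are illustrative.

```python
import numpy as np
from itertools import product

def backpressure_weights(Q_tx, Q_rx):
    """Per-link weights of Eq. (22.42).

    Q_tx, Q_rx : L x K arrays of backlogs at the TX and RX node of each of
                 the L links, one column per commodity (packet stream).
    Returns the weight W_l* of every link and the commodity attaining it.
    """
    diff = Q_tx - Q_rx
    best_commodity = diff.argmax(axis=1)       # commodity with largest differential
    W = np.maximum(diff.max(axis=1), 0.0)
    return W, best_commodity

def choose_powers(power_levels, power_gains, noise_power, W, V):
    """Exhaustive search for the power vector maximizing the bracketed
    expression of Eq. (22.41), summed over all links, on a small discrete
    grid of per-link power levels."""
    L = power_gains.shape[0]
    best_P, best_val = None, -np.inf
    for P in product(power_levels, repeat=L):
        P = np.array(P)
        rates = link_rates(P, power_gains, noise_power)   # Eq. (22.30) sketch above
        val = float(np.dot(rates, W) - V * P.sum())
        if val > best_val:
            best_P, best_val = P, val
    return best_P
```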
In the even simpler case of network stabilization only, the backpressure algorithm serves the queue for which the product of the “queue length difference” at the two link ends and the transmission rate over that link is maximum.⁹
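This stabilization-only rule can be sketched in a few lines, building on the backpressure_weights() helper above; again, a minimal illustration with hypothetical names.

```python
import numpy as np

def link_to_serve(rates, Q_tx, Q_rx):
    """Stabilization-only backpressure: pick the link (and commodity) whose
    product of backlog differential and achievable transmission rate is
    largest."""
    W, commodity = backpressure_weights(Q_tx, Q_rx)   # Eq. (22.42) sketch above
    l = int(np.argmax(W * rates))
    return l, int(commodity[l])
```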
It is also noteworthy that the backpressure algorithm provides an inherent routing; data packets will ultimately end up in their intended sinks, since this is the only way to get “out of the network.”
⁹ For one-hop problems we have $Q_{{\rm r}l} = 0$, and the algorithm reduces to maximizing a weighted sum of transmission rates over each link, minus a power cost.
However, there is no guarantee that the routes taken by the packets follow a shortest path; especially in lightly loaded networks, data packets can take very circuitous routes. This is especially pronounced if we try to simply stabilize the network (without energy minimization). The problem can be alleviated by introducing a “bias” for shorter routes in the penalty function. Figure 22.13 shows the average backlog as a function of the packet generation rate at the nodes, when no power minimization is attempted. We see that the shortest-path algorithm becomes unstable (backlog becomes infinite) at much lower packet generation rates than the backpressure algorithm. The “enhanced” backpressure algorithm (one that includes a bias term) performs much better than the regular algorithm at low packet generation rates.
[Figure 22.13: two panels – “Network with no interference” and “Network with local interference” (100-node sensor network) – plot the average backlog E[U] on a log scale versus the packet generation rate λ for the DRPC, EDRPC, and shortest-path algorithms.]
Figure 22.13 Performance of different routing algorithms in an ad hoc network: DRPC (backpressure algorithm), EDRPC (enhanced backpressure), and shortest-path.
Reproduced from Georgiadis et al. [2006] © NOW Publishing.