


14.3.2 Viterbi Decoder – Classical Representation

The Viterbi algorithm [Viterbi 1967] is the most popular algorithm for MLSE. The goal of this algorithm is to find the sequence ŝ that was transmitted with the highest likelihood, given that the sequence r was received:⁷

ŝ = arg max_s Pr(r|s)    (14.40)

where the maximization is done over all possible transmit sequences s. If the perturbations of the received symbols by noise are statistically independent,⁸ then the probability of the sequence, Pr(r|s), can be decomposed into the product of the probabilities of the individual symbols:

ŝ = arg max_s ∏_i Pr(r_i|s_i)    (14.41)

⁷ This is the definition for Maximum A Posteriori (MAP) detection. However, it is equivalent to MLSE for equiprobable sequences (see Chapter 12).

⁸ If the noise is colored – i.e., correlated between the different samples – then the receiver has to use a so-called “whitening filter” (see Chapter 16).


Now, instead of maximizing the above product, we can equivalently maximize its logarithm; this finds the same optimum sequence, since the logarithm is a strictly monotonic function:

ŝ = arg max_s Σ_i log[Pr(r_i|s_i)]    (14.42)
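The equivalence of maximizing the product in Eq. (14.41) and the log-sum in Eq. (14.42) can be checked numerically. In this sketch the per-symbol likelihoods are invented purely for illustration:

```python
import math

# Hypothetical per-symbol likelihoods Pr(r_i | s_i) for three candidate sequences
candidates = {
    "000": [0.9, 0.8, 0.7],
    "011": [0.4, 0.6, 0.9],
    "110": [0.2, 0.3, 0.5],
}

def product_metric(probs):
    """Product of per-symbol likelihoods, Eq. (14.41)."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def log_metric(probs):
    """Sum of log-likelihoods, Eq. (14.42)."""
    return sum(math.log(x) for x in probs)

best_product = max(candidates, key=lambda s: product_metric(candidates[s]))
best_log = max(candidates, key=lambda s: log_metric(candidates[s]))
# The strictly monotonic log transform preserves the arg max
assert best_product == best_log
```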

The logarithmic transition functions log[Pr(r_i|s_i)] are also known as branch metrics. In the case that the decoder is given only hard decisions (estimates of which coded bits were sent), r_i = ±1, the branch metric is the Hamming distance between r_i and s_i.
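A hard-decision branch metric is then just a bit-by-bit comparison; a minimal sketch (the function name is ours, not the book's):

```python
def branch_metric(received, expected):
    """Hamming distance between a received hard-decision bit tuple and
    the coded bits labelling a trellis branch."""
    return sum(r != e for r, e in zip(received, expected))

# e.g. received 101 vs. branch label 111 differ in exactly one position
branch_metric((1, 0, 1), (1, 1, 1))
```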

The MLSE now determines the total metrics Σ_i d_H(r_i, s_i) for all possible paths through the trellis – i.e., for all possible input sequences. In the end, the path with the smallest metric is selected. Such an optimum procedure requires a large computational effort: the number of possible paths increases exponentially with the number of input bits. The key idea of the Viterbi algorithm is the following: instead of computing metrics for all possible paths (working from “top to bottom”) in the trellis, we work our way from left to right through the trellis. More precisely, we start with a set of possible states of the shift register (A_i, B_i, C_i, D_i, where i denotes the time instant, or considered input bit). Let us now consider all paths that (from the left) lead into state A. We discard a possible path s(1) if it merges at state A_i with a path s(2) that has a smaller metric. As the paths run through the same state of the trellis, there is nothing that would distinguish them from the point of view of later states; we thus choose the one with the better properties. Similarly, we choose the best paths that run through the states B_i, C_i, and D_i. After having determined the survivors for state i, we proceed to state i+1 (or rather, to the tuple of states A_{i+1}, B_{i+1}, C_{i+1}, D_{i+1}), and repeat the process. All paths in a trellis ultimately merge in a single, well-defined point, the all-zero state.⁹ At this point, there is only a single survivor – the sequence that was transmitted with the highest probability.
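The survivor-selection idea above can be sketched as a minimal hard-decision Viterbi decoder. Note that the rate-1/2 code with generators (7, 5) in octal used here is a common textbook choice and an assumption for illustration; it is not the code of Example 14.5 below:

```python
# Hard-decision Viterbi decoding over a 4-state trellis. The rate-1/2
# convolutional code with octal generators (7, 5) is assumed here for
# illustration only.
G = (0b111, 0b101)   # generator polynomials, constraint length K = 3
N_STATES = 4         # 2^(K-1) shift-register states

def encode_bit(state, bit):
    """One encoder step: return (coded output bits, next state)."""
    reg = (bit << 2) | state                        # newest bit on the left
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return out, reg >> 1

def viterbi_decode(received):
    """received: list of hard-decision bit tuples, one per trellis step."""
    INF = float("inf")
    metrics = [0.0] + [INF] * (N_STATES - 1)        # start in all-zero state
    paths = [[] for _ in range(N_STATES)]
    for r in received:
        new_metrics = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            if metrics[state] == INF:
                continue
            for bit in (0, 1):
                out, nxt = encode_bit(state, bit)
                # branch metric = Hamming distance to the received bits
                m = metrics[state] + sum(a != b for a, b in zip(r, out))
                if m < new_metrics[nxt]:            # keep only the survivor
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [bit]
        metrics, paths = new_metrics, new_paths
    return paths[0]   # tail bits force the encoder back to the zero state

# Encode 0,1,1 plus two tail zeroes, flip one channel bit, and decode.
state, coded = 0, []
for b in [0, 1, 1, 0, 0]:
    out, state = encode_bit(state, b)
    coded.append(out)
coded[1] = (1 - coded[1][0], coded[1][1])           # one channel bit error
print(viterbi_decode(coded))                        # -> [0, 1, 1, 0, 0]
```

The single channel error is corrected because every wrong path through the trellis accumulates a larger Hamming metric than the survivor of the transmitted sequence.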

Example 14.5 Example for Viterbi decoding.

Figure 14.5 shows an example of the algorithm. The basic structure of the trellis is depicted in Figure 14.5a; the bit sequence to be transmitted is shown in Figure 14.5b. The metrics shown are the Hamming distances of the received sequence – i.e., after the hard decision – compared with the theoretically possible bit sequences in the trellis.

We assume that at the outset the shift register is in the all-zero state. Figure 14.5c shows the trellis for the first 3 bits. There are two possibilities for getting from state A_0 to state A_4: by transmission of the source data sequence 0, 0, 0 (which corresponds to the coded bit sequence 000, 000, 000, and thus leads via states A_2, A_3), or by transmission of the source data sequence 1, 0, 0 (coded bit sequence 111, 011, 001 – i.e., via states B_2, C_3). In the former case, the path metric is 2; in the latter, it is 6. This allows us to immediately discard the second possibility. Similarly, we find that the transition from state A_0 to state B_4 could be created (with greatest likelihood) by the source data sequence 0, 0, 1, and not by 1, 1, 0. The following subfigures of Figure 14.5 show how the process is repeated for the ensuing incoming bits.
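The two candidate metrics in this example can be verified directly. The received bits 000 100 001 are read off the decoder input sequence in Figure 14.5b; the two candidate coded sequences are given in the text:

```python
received = "000100001"   # first three received branch words from Fig. 14.5b
path_a = "000000000"     # source 0, 0, 0 via states A_2, A_3
path_b = "111011001"     # source 1, 0, 0 via states B_2, C_3

def hamming(x, y):
    """Hamming distance between two bit strings of equal length."""
    return sum(a != b for a, b in zip(x, y))

print(hamming(received, path_a))   # -> 2, the surviving path
print(hamming(received, path_b))   # -> 6, discarded at the merge in A_4
```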

The Viterbi algorithm greatly decreases storage requirements by elimination of nonsurviving paths, but they are still considerable. It is thus undesirable to wait with the decision as to which sequence was transmitted until the last bits of the source sequence have been processed. Rather, the algorithm makes decisions about bits that are “sufficiently far” in the past. More precisely, during consideration of states A_i, B_i, C_i, D_i we decide about the symbols of the state tuple A_{i−L_Tr}, B_{i−L_Tr}, C_{i−L_Tr}, D_{i−L_Tr},

⁹ Actually, this only happens if the convolutional encoder appends enough zeroes (tail bits) to the source data sequence to force the encoder into the defined state. If this is not done, then the decoder has to consider all possible final states and compare the metrics of the paths ending in each of them.

[Figure 14.5 shows: (a) the trellis structure, with branch labels 000, 111, 011, 100, 101, 010 on states A–D; (b) the decoder input sequence, i.e., the output of the channel coder after hard decision; (c)–(f) the successive Viterbi decoding steps, annotated with branch distances, total distances, discarded branches, and the detected sequence.]

Figure 14.5 Example of Viterbi detection.

Reproduced with permission from Oehrvik [1994] © Ericsson AB.

where L_Tr is the truncation depth. This principle is shown in Figure 14.6. Data within a window of length L_Tr are stored. When moving to the next tuple in the trellis, the leftmost state tuple moves out of the considered window, and we have to make a final decision about which bits were transmitted there. The decision is made in favor of the state that contains the path with the smallest metric in the currently observed state – i.e., at the right side of the window. While this procedure is suboptimal, the performance loss can be kept small by judiciously choosing the length of the window. In practice, a duration

L_Tr = 6L    (14.43)

has turned out to be a good compromise.
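The sliding-window decision rule can be sketched in isolation: at each step, the bit L_Tr steps in the past is taken from the survivor path with the smallest current metric. The function name, the constraint length of 3, and the dummy survivor data below are all illustrative assumptions:

```python
L_TR = 6 * 3   # truncation depth per Eq. (14.43), assuming constraint length L = 3

def truncated_decision(survivor_paths, metrics, step):
    """Decide the bit transmitted at time (step - L_TR), based on the
    survivor path with the smallest metric at the current step.

    survivor_paths: one list of decided bits per trellis state
    metrics:        current accumulated metric per trellis state
    """
    if step < L_TR:
        return None                     # window not yet full, no decision
    best_state = min(range(len(metrics)), key=metrics.__getitem__)
    return survivor_paths[best_state][step - L_TR]

# Illustrative survivors: state 1 currently holds the smallest metric,
# so the bit leaving the window is taken from its path.
paths = [[0] * 20, [1] * 20, [0] * 20, [1] * 20]
print(truncated_decision(paths, [5, 2, 7, 9], 18))   # -> 1
```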


[Figure 14.6 shows a sliding window over the trellis: survivor paths P1–P4, the current decoding step at the right edge of the window, and the decided symbols leaving the window on the left.]

Figure 14.6 Principle of decision using a finite-duration sliding window.

Reproduced with permission from Mayr [1996] © B. Mayer.
