Decoding of Low Density Parity Check Codes


14.7 Low Density Parity Check Codes

14.7.3 Decoding of Low Density Parity Check Codes

As we have mentioned above, the sparse structure of the parity check matrix is key to decoding that works with reasonable complexity. But it is still far from trivial! Performing exact maximum likelihood decoding is an NP-hard problem (in other words, we have to check all possible codewords and compare them with the received signal). It is therefore common to use an iterative algorithm called belief propagation. It is this algorithm that we describe in more detail in the following.11

11 Note that the decoding algorithm can be described based on the syndrome vector (the method we will choose here) as well as the data vector.

Figure 14.20 Tanner graph for the parity check matrix $\mathbf{H} = \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{bmatrix}$.

Let the received signal vector be r. Let us next put the parity check equations into graphical form. In the so-called “Tanner graph” (see Figure 14.20),12 we distinguish two kinds of nodes:

1. Variable (bit) nodes: each variable node corresponds to one bit, and we know that it can be either in state 0 or state 1. Variable nodes correspond to the columns of the parity check matrix. We denote these nodes by circles.

2. Constraint nodes: constraint nodes (check nodes) describe the parity check equations; we know that if there are no errors present, the inputs to constraint nodes have to add up to 0. This follows from the definition of the syndrome, which is 0 if no errors are present. Constraint nodes correspond to the rows of the parity check matrix. We denote these nodes by squares.

Because there are two different types of nodes, and no connections between nodes of the same type, such a graph is also called a “bipartite” graph. In addition to constraint nodes and variable nodes, there is also external evidence, obtained by observation of the received signal, which has to influence our decisions.

Constraint nodes are connected to variable nodes if the appropriate entries in the parity check matrix are 1 – i.e., constraint node i is connected to variable node j if H_{ij} = 1. “Soft” information from the observed signals – i.e., the external evidence – is connected to the variable nodes. We also need to know the probability density function (pdf) of the amplitude of the variables – i.e., the probability that a variable node has a certain state, given the value of the external evidence.
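To make this connectivity concrete, the following minimal Python sketch (not from the book; the names A and B simply mirror the neighbor sets used later in the text) builds the Tanner-graph adjacency for the matrix of Figure 14.20: A[i] collects the variable nodes attached to constraint node i, and B[j] the constraint nodes attached to variable node j.

```python
# Tanner-graph adjacency derived from a parity check matrix (H of Figure 14.20).
H = [[1, 0, 1, 1],
     [0, 1, 1, 1]]

# A[i]: variable nodes connected to constraint node i (positions of the 1s in row i)
A = [[j for j, h in enumerate(row) if h == 1] for row in H]

# B[j]: constraint nodes connected to variable node j (positions of the 1s in column j)
B = [[i for i in range(len(H)) if H[i][j] == 1] for j in range(len(H[0]))]

print(A)  # [[0, 2, 3], [1, 2, 3]]  -> check node 0 connects to variables 0, 2, 3
print(B)  # [[0], [1], [0, 1], [0, 1]]
```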

Decoding on such a graph is done by a procedure called message passing or belief propagation. Each node collects incoming information, makes computations according to a so-called local rule, and passes the result of the computation to other nodes. Essentially, the jth variable node tells the constraint nodes it is connected to what it thinks its – i.e., the variable node’s – value is, given the external information r_j and the information from the other constraint nodes. This message is denoted λ_{i,j}. In turn, the ith constraint node tells the jth variable node what it thinks the variable node has to be, given the information that the constraint node has from all the other variable nodes; this message is called μ_{i,j}. This is shown in Figure 14.21.

Let us formulate the decoding strategy mathematically, for an AWGN channel:

1. First, the data bits decide what value they think they are, given the external evidence r only. Knowing the statistics of the noise, σ_n^2, the variable nodes can easily compute the probability of being a 1 or a 0, respectively, and pass that information to the constraint nodes. Conversely, the constraint nodes cannot pass a meaningful message to the variable nodes yet. Therefore,

$$\mu^{(0)}_{i,j} = 0, \quad \text{for all } i \qquad (14.57)$$

$$\lambda^{(0)}_{i,j} = \frac{2}{\sigma_n^2}\, r_j, \quad \text{for all } j \qquad (14.58)$$

12 There are two types of graphical representations: the “Tanner graph” used here, and the “Forney factor graph” [Loeliger 2004].


Figure 14.21 Message passing in a factor graph: the external evidence (2/σ_n^2) r_j enters variable node j, the message μ^{(l)}_{i,j} is passed from constraint node i to variable node j, and λ^{(l−1)}_{i,j} is passed from variable node j to constraint node i.

2. Then, the constraint nodes pass a different message to each variable node. Elaborating on the principle mentioned above, let us look specifically at constraint node i: assume that a set of connections ends in node i, originating from an ensemble A(i) of variable nodes.

Now, each of the check nodes has two important pieces of information: (i) it knows the values (or probabilities) of all data bits connected to this check node; (ii) furthermore, it knows that all the bits coming into a check node have to sum up to 0 mod 2 (that is the whole point of a parity check matrix). From these pieces of information, it can compute the probabilities for the value that it thinks data bit j has to have. Since we have an AWGN channel with a continuous output, and not a binary channel, we have to use LLRs instead of simple probabilities that a bit is reversed, so that the message becomes

$$\mu^{(l)}_{i,j} = 2 \tanh^{-1}\!\left( \prod_{k \in A(i)\setminus j} \tanh\!\left( \frac{\lambda^{(l-1)}_{i,k}}{2} \right) \right) \qquad (14.59)$$

where A(i)\j denotes “all the members of ensemble A(i) with the exception of j” – i.e., all variable nodes that connect to the ith constraint node, with the exception of the jth node. The superscript (l−1) denotes the (l−1)th iteration – i.e., we use the results from the previous iteration step.
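As an illustration, here is a minimal Python sketch of the check-node rule of Eq. (14.59); the function name and argument layout are chosen for this example and are not from the book. The small clipping step only guards the numerics of atanh and is not part of the formula.

```python
import math

def check_node_message(lambdas_in, j):
    """Eq. (14.59): message from one constraint node to variable node j.

    lambdas_in maps variable-node index k to lambda_{i,k} from the previous
    iteration; the tanh-product runs over all neighbors except j itself.
    """
    prod = 1.0
    for k, lam in lambdas_in.items():
        if k != j:
            prod *= math.tanh(lam / 2.0)
    prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)  # keep atanh finite
    return 2.0 * math.atanh(prod)

# With the lambda^(0) values of Example 14.8 at check node 1 (variables 3-6,
# zero-based indices 2-5), the message to variable node 5 (index 4) is about 7.1:
print(check_node_message({2: 8.3, 3: -8.7, 4: -5.2, 5: -7.9}, j=4))
```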

3. Next, we update our opinion of what the variable nodes are, based on the information passed by the constraint nodes, as well as the external evidence. This rule is very simple:

$$\lambda^{(l)}_{i,j} = \frac{2}{\sigma_n^2}\, r_j + \sum_{k \in B(j)\setminus i} \mu^{(l)}_{k,j} \qquad (14.60)$$

where B(j)\i denotes all constraint nodes that connect to the jth variable node, with the exception of i.

4. From the above, we can compute the pseudoposterior probabilities that a bit is 1 or 0:

$$L_j = \frac{2}{\sigma_n^2}\, r_j + \sum_{i} \mu^{(l)}_{i,j} \qquad (14.61)$$

based on which we make a tentative decision about the codeword. If that codeword is consistent – i.e., its syndrome is 0 – then decoding stops.
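Steps 1–4 can be combined into a complete message-passing loop. The following Python sketch is only one possible realization under assumed interfaces (H given as a list of rows, received vector r, noise variance sigma2); it follows Eqs. (14.57)–(14.61) and stops as soon as the tentative codeword has zero syndrome, but it is not code from the book.

```python
import math

def decode_ldpc(H, r, sigma2, max_iter=50):
    """Belief-propagation decoding sketch following Eqs. (14.57)-(14.61)."""
    m, n = len(H), len(H[0])
    A = [[j for j in range(n) if H[i][j]] for i in range(m)]   # variables per check node
    B = [[i for i in range(m) if H[i][j]] for j in range(n)]   # check nodes per variable
    ext = [2.0 * rj / sigma2 for rj in r]                      # external evidence (2/sigma_n^2) r_j

    # Step 1: initialize the messages (Eqs. 14.57 and 14.58).
    mu  = {(i, j): 0.0    for i in range(m) for j in A[i]}
    lam = {(i, j): ext[j] for i in range(m) for j in A[i]}

    for _ in range(max_iter):
        # Step 2: check-node update (Eq. 14.59), using the previous iteration's lambdas.
        for i in range(m):
            for j in A[i]:
                prod = 1.0
                for k in A[i]:
                    if k != j:
                        prod *= math.tanh(lam[(i, k)] / 2.0)
                prod = max(min(prod, 1 - 1e-12), -1 + 1e-12)   # numerical guard for atanh
                mu[(i, j)] = 2.0 * math.atanh(prod)

        # Step 3: variable-node update (Eq. 14.60).
        for j in range(n):
            for i in B[j]:
                lam[(i, j)] = ext[j] + sum(mu[(k, j)] for k in B[j] if k != i)

        # Step 4: pseudo-posterior LLRs (Eq. 14.61) and tentative decision;
        # positive LLR -> bit 1, matching the sign convention of Example 14.8.
        L = [ext[j] + sum(mu[(i, j)] for i in B[j]) for j in range(n)]
        c_hat = [1 if Lj > 0 else 0 for Lj in L]

        # Stop when the tentative codeword is consistent, i.e., its syndrome is 0.
        if all(sum(H[i][j] * c_hat[j] for j in range(n)) % 2 == 0 for i in range(m)):
            break
    return c_hat, L
```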

Example 14.8 Decoding of a low density parity check code.

Let us now consider a very simple example for this algorithm. Let the parity check matrix be

$$\mathbf{H} = \begin{bmatrix} 0 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 1 \end{bmatrix} \qquad (14.62)$$

Let the codeword

$$\mathbf{y} = [0 \;\; 1 \;\; 1 \;\; 0 \;\; 1 \;\; 0] \qquad (14.63)$$

be sent through an AWGN channel with σ_n^2 = 0.237, corresponding to γ = 6.25 dB, and let the received word be

$$\bar{\mathbf{r}} = [-0.71 \;\; 0.71 \;\; 0.99 \; -1.03 \; -0.61 \; -0.93] \qquad (14.64)$$

Then, according to step 1 above, the likelihood values computed from the external evidence are

$$\lambda^{(0)} = [-6.0 \;\; 6.0 \;\; 8.3 \; -8.7 \; -5.2 \; -7.9] \qquad (14.65)$$

Hard thresholding of the received likelihood values would result in a codeword error, with an error at bit position 5:

$$[0 \;\; 1 \;\; 1 \;\; 0 \;\; 0 \;\; 0] \qquad (14.66)$$

Figure 14.22 demonstrates how the message-passing algorithm iterates to the correct solution, following the recipe given above.

Figure 14.22 Example for the iterations of low density parity check message passing. Each μ^{(l)} vector lists the messages from one check node to its four connected variable nodes, in increasing order of variable index.

Initialization (l = 0): λ^{(0)} = [−6.0  6.0  8.3  −8.7  −5.2  −7.9]; tentative estimate [0 1 1 0 0 0]; syndrome [1 0 1]

Iteration l = 1: μ^{(1)} for check nodes 1, 2, 3: [−5.1  5.1  7.1  5.1], [−5.8  5.8  5.3  −5.3], [4.8  −4.8  5.2  4.6]; L^{(1)} = [−7.1  7.1  8.6  −8.9  7.2  1.8]; tentative estimate [0 1 1 0 1 1]; syndrome [1 0 1]

Iteration l = 2: μ^{(2)} for check nodes 1, 2, 3: [0.07  −0.07  3.3  −0.07], [−1.1  1.1  0.6  −0.6], [−1.6  1.6  2.8  −2.0]; L^{(2)} = [−8.7  8.7  9.0  −9.4  0.9  −9.9]; tentative estimate [0 1 1 0 1 0]; syndrome [0 0 0]
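Assuming the decode_ldpc sketch given after step 4 above (an illustrative implementation, not the book's), the outcome of Example 14.8 can be reproduced up to rounding:

```python
H = [[0, 0, 1, 1, 1, 1],
     [1, 1, 1, 1, 0, 0],
     [1, 1, 0, 0, 1, 1]]
r = [-0.71, 0.71, 0.99, -1.03, -0.61, -0.93]
sigma2 = 0.237

c_hat, L = decode_ldpc(H, r, sigma2)
print(c_hat)                       # [0, 1, 1, 0, 1, 0] -- the transmitted codeword of Eq. (14.63)
print([round(Lj, 1) for Lj in L])  # close to the final L row of Figure 14.22
```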
