Error Probability Analysis for Convolutional Codes


Consider the problem of decoding convolutional codes using a hard-decision algorithm, e.g. the Viterbi algorithm. Let us estimate the probability of error of the information symbols at the output of the Viterbi algorithm. The code transfer function proves useful for that purpose. Let us calculate the error probability for the code shown in Figure 2.17a.

As we remember, its transfer function is expressed in the form of an infinite series presented by formula (2.122). Substituting J = 1 into it results in the simplified form

$$T(D, N) = N D^6 + 2N^2 D^8 + 3N^3 D^{10} + \cdots \qquad (2.162)$$

Knowing that the considered code is linear, assume without loss of generality that the all-zero codeword has been transmitted. We say that at a given $j$th moment an error event has occurred if the all-zero path on the trellis diagram has been eliminated in favor of another path merging with the all-zero path at that moment. If the decoder has decided to select the path featuring the Hamming weight $w_H = 6$, then the error event has occurred if, among the six positions in which both paths differ, the received sequence agrees with the path of weight $w_H = 6$ in four or more positions. Note that errors occurring in the positions in which both paths do not differ have no influence on the decoder decision, as they equally increase the distance of the received sequence from the codewords associated with both candidate paths. Let us additionally assume that if errors have occurred in exactly three out of the six meaningful positions determined by the incorrect codeword of weight $w_H = 6$, then the error event occurs with probability 1/2. If a memoryless binary symmetric channel model is assumed, then binary errors are statistically independent and their probability is equal to $p$. As a result, an incorrect codeword will be chosen with the probability given by the formula

$$P_6 = \frac{1}{2}\binom{6}{3} p^3 (1-p)^3 + \sum_{i=4}^{6} \binom{6}{i} p^i (1-p)^{6-i} \qquad (2.163)$$

In the general case, in which a codeword of weight $w_H = k$ is selected instead of the all-zero codeword, we have

$$P_k = \begin{cases} \displaystyle\sum_{i=(k+1)/2}^{k} \binom{k}{i} p^i (1-p)^{k-i} & \text{for } k \text{ odd} \\[2ex] \displaystyle\frac{1}{2}\binom{k}{k/2} p^{k/2}(1-p)^{k/2} + \sum_{i=k/2+1}^{k} \binom{k}{i} p^i (1-p)^{k-i} & \text{for } k \text{ even} \end{cases} \qquad (2.164)$$
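To make the two cases of (2.164) concrete, here is a minimal Python sketch (our own illustration, not from the book; the function name is arbitrary):

```python
from math import comb

def pairwise_error_prob(k: int, p: float) -> float:
    """P_k of formula (2.164): probability that a codeword of Hamming
    weight k is preferred over the transmitted all-zero codeword on a
    BSC with crossover probability p (ties for even k count with 1/2)."""
    half = k // 2
    # strict-majority terms: more than half of the k differing positions in error
    s = sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(half + 1, k + 1))
    if k % 2 == 0:
        # tie term: exactly k/2 errors resolve the comparison with probability 1/2
        s += 0.5 * comb(k, half) * p**half * (1 - p)**half
    return s

print(pairwise_error_prob(6, 0.01))  # P_6 of (2.163): about 9.85e-6
```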

The probability of the first error event can be upper-bounded by the sum of probabilities of selection of particular incorrect codewords (paths on the trellis diagram)

$$P_E(j) \le \sum_{k=d_{free}}^{\infty} L_k P_k \qquad (2.165)$$

where $L_k$ is the number of codewords of weight $w_H = k$. Analysis of (2.162) for our code indicates that $L_6 = 1$, $L_7 = 0$, $L_8 = 2$, $L_9 = 0$, $L_{10} = 3$, etc. The bound shown in (2.165) does not depend on any particular moment $j$; therefore, formula (2.165) can be presented in the form

$$P_E \le \sum_{k=d_{free}}^{\infty} L_k P_k \qquad (2.166)$$
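Truncating the infinite sum in (2.166) after the weights listed above gives a quick numerical estimate; the following sketch assumes that terms beyond $k = 10$ are negligible for small $p$:

```python
from math import comb

def pairwise_error_prob(k, p):  # P_k, formula (2.164)
    half = k // 2
    s = sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(half + 1, k + 1))
    return s + (0.5 * comb(k, half) * (p * (1 - p))**half if k % 2 == 0 else 0.0)

# Weight spectrum read off (2.162): L6 = 1, L8 = 2, L10 = 3 (odd weights absent)
p = 0.01
spectrum = {6: 1, 8: 2, 10: 3}
bound = sum(Lk * pairwise_error_prob(k, p) for k, Lk in spectrum.items())
print(f"truncated bound (2.166) on P_E: {bound:.2e}")  # about 1.06e-5
```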

The formulae describing the probabilities $P_k$ can be upper-bounded as follows.

For $k$ odd we have

$$P_k = \sum_{i=(k+1)/2}^{k} \binom{k}{i} p^i (1-p)^{k-i} < \sum_{i=(k+1)/2}^{k} \binom{k}{i} p^{k/2} (1-p)^{k/2} = p^{k/2}(1-p)^{k/2} \sum_{i=(k+1)/2}^{k} \binom{k}{i} < p^{k/2}(1-p)^{k/2} \sum_{i=0}^{k} \binom{k}{i} = 2^k p^{k/2} (1-p)^{k/2} \qquad (2.167)$$

The last equality sign in (2.167) results from the fact that

$$\sum_{i=0}^{k} \binom{k}{i} = 2^k$$

In turn, for an even value of $k$ we have

$$P_k = \frac{1}{2}\binom{k}{k/2} p^{k/2}(1-p)^{k/2} + \sum_{i=k/2+1}^{k} \binom{k}{i} p^i (1-p)^{k-i} < \sum_{i=k/2}^{k} \binom{k}{i} p^i (1-p)^{k-i} < \sum_{i=k/2}^{k} \binom{k}{i} p^{k/2} (1-p)^{k/2} < p^{k/2}(1-p)^{k/2} \sum_{i=0}^{k} \binom{k}{i} = 2^k p^{k/2}(1-p)^{k/2} \qquad (2.168)$$
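The quantity $2\sqrt{p(1-p)}$ is the Bhattacharyya parameter of the binary symmetric channel, so (2.167) and (2.168) are Bhattacharyya-type bounds. A quick illustrative check of how loose they are:

```python
from math import comb

def pairwise_error_prob(k, p):  # exact P_k, formula (2.164)
    half = k // 2
    s = sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(half + 1, k + 1))
    return s + (0.5 * comb(k, half) * (p * (1 - p))**half if k % 2 == 0 else 0.0)

p = 0.01
for k in (6, 7, 8):
    exact = pairwise_error_prob(k, p)
    bound = (2 * (p * (1 - p))**0.5)**k  # [2*sqrt(p(1-p))]^k from (2.167)/(2.168)
    print(f"k={k}: exact={exact:.2e}  bound={bound:.2e}")
```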

Therefore

$$P_E < \sum_{k=d_{free}}^{\infty} L_k \left[2\sqrt{p(1-p)}\right]^k = T(D)\Big|_{D=2\sqrt{p(1-p)}} \qquad (2.169)$$

For small values of the probability $p$, the sum in (2.169) is dominated by its first component and then we have

$$P_E \simeq L_{d_{free}} \left[2\sqrt{p(1-p)}\right]^{d_{free}} \simeq L_{d_{free}}\, 2^{d_{free}}\, p^{d_{free}/2} \qquad (2.170)$$
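As a quick numerical check: for the considered code $L_{d_{free}} = L_6 = 1$, so taking $p = 0.01$ the approximation (2.170) gives

$$P_E \simeq 1 \cdot 2^6 \cdot (0.01)^{3} = 6.4 \times 10^{-5}$$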

Each error event, interpreted as diverging from and merging with the all-zero codeword, implies at least one error in the decoded message sequence. On the basis of the estimated probability of an error event we are able to evaluate the error probability for message bits at the decoder output. As we remember, the number of "1"s in the message sequence resulting from selection of a path different from the all-zero path can be deduced from the code transfer function $T(D, N, J)$. This number is in fact the power of the variable $N$ in each component of the series expansion of this function. Each such component characterizes a certain path different from the all-zero path. For a given error event, the number of incorrectly decoded message symbols can be estimated as the weighted sum of the probabilities $P_k$ of selecting a route on the trellis diagram with weight $k$, where the weights are the numbers $B_k$ of message symbol errors resulting from the selection of a given route, i.e.

$$P_b < \sum_{k=d_{free}}^{\infty} B_k P_k \qquad (2.171)$$

We have already shown that the probability $P_k$ can be upper-bounded using the formula

$$P_k < \left[2\sqrt{p(1-p)}\right]^k$$

Notice also that when we calculate the derivative of the code transfer function $T(D, N)$ in its series expansion form with respect to $N$ and substitute $N = 1$, we obtain the sum from formula (2.171). Namely, from (2.162) we have

$$\frac{\partial T(D, N)}{\partial N} = D^6 + 4N D^8 + 9N^2 D^{10} + \cdots \qquad (2.172)$$

Substituting $N = 1$ and $D = 2\sqrt{p(1-p)}$ in (2.172), we obtain

$$P_b < \frac{\partial T(D, N)}{\partial N}\bigg|_{N=1,\; D=2\sqrt{p(1-p)}} \qquad (2.173)$$

For small values of the single codeword bit error probability $p$, the sum in (2.173) is dominated by its first component. Then the probability of a message symbol error can be approximated by the formula

$$P_b \simeq B_{d_{free}} \left[2\sqrt{p(1-p)}\right]^{d_{free}} \simeq B_{d_{free}}\, 2^{d_{free}}\, p^{d_{free}/2} \qquad (2.174)$$

For the considered code we have $B_{d_{free}} = B_6 = 1$, which for $p = 0.01$ results in a message bit error probability equal to about $6.4 \times 10^{-5}$. For small values of the binary error probability the shortest error events dominate. These events, in turn, cause single errors in the decoded message sequences.
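The truncated bound (2.173) and the approximation (2.174) can be evaluated directly; a sketch using the coefficients $B_6 = 1$, $B_8 = 4$, $B_{10} = 9$ read off the derivative series (2.172):

```python
from math import sqrt

p = 0.01
D = 2 * sqrt(p * (1 - p))        # substitution D = 2*sqrt(p(1-p)) from (2.173)
B = {6: 1, 8: 4, 10: 9}          # coefficients of the derivative series (2.172)

pb_bound = sum(Bk * D**k for k, Bk in B.items())  # truncated sum (2.171)/(2.173)
pb_approx = B[6] * 2**6 * p**(6 / 2)              # first-term approximation (2.174)
print(f"truncated bound:        {pb_bound:.2e}")  # about 7.3e-5
print(f"approximation (2.174):  {pb_approx:.1e}") # 6.4e-5, as in the text
```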

One can show that due to hard-decision Viterbi decoding and application of bipolar modulation (see Chapter 3) the asymptotic coding gain expressed in decibels is about $10\log_{10}(R\, d_{free}/2)$.
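For instance, assuming purely for illustration a code rate $R = 1/2$ (the rate of the Figure 2.17a code is not restated in this excerpt) together with $d_{free} = 6$, the asymptotic gain would be

$$10\log_{10}\frac{R\, d_{free}}{2} = 10\log_{10} 1.5 \approx 1.8\ \text{dB}$$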

Let us extend our considerations on decoding performance to soft-decision decoding implemented by the soft-input Viterbi algorithm. We will apply our results to the SOVA presented in the previous section. Let us again assume that all message symbols are statistically independent and equiprobable. Thus, the LLR function of the message symbols $m_l$ ($l = 1, 2, \ldots, i$) is equal to zero and does not influence the metric values. The Viterbi algorithm reduces to the application of the maximum likelihood decision rule.

Consequently, the coefficient $L_\nu$ does not have an impact on the choice of the decided sequence and can be omitted. Using (2.146) we can describe the maximized metric by the formula

$$M(r_1^i \mid d_1^i) = \sum_{l=1}^{i} \sum_{k=1}^{n} r_{l,k}\, d_{l,k} \qquad (2.175)$$

Recall that the index $i$ denotes the current time instant, $1/n$ is the coding rate, $d_{l,k}$ is a bipolar code symbol in the $l$th time unit appearing in the $k$th position of the codeword, and $r_{l,k}$ is the additive Gaussian noise channel output when $d_{l,k}$ is given to the channel input, i.e., $r_{l,k} = d_{l,k} + \nu_{l,k}$. As previously, consider the probability of the error event for diverging from and then merging with the all-zero path at the $i$th moment. Counting time units starting from the moment of divergence from the all-zero path, the metric of the all-zero route, denoted as $M^{(0)}(r_1^i \mid d_1^i)$, takes the form (recall that $d_{l,k} = -1$ for a zero codeword symbol)

$$M^{(0)}(r_1^i \mid d_1^i) = -\sum_{l=1}^{i} \sum_{k=1}^{n} r_{l,k} = -\sum_{j=1}^{in} r_j, \qquad r_j = r_{l,k},\; j = k + n(l-1),\; k = 1, \ldots, n \qquad (2.176)$$

The decoder will commit an error if an incorrect path different from the all-zero route is decided. Denote the metric of this path as $M^{(1)}(r_1^i \mid d_1^i)$. Thus, the probability of an error event $P_E$ is

$$P_E = \Pr\{M^{(1)}(r_1^i \mid d_1^i) > M^{(0)}(r_1^i \mid d_1^i)\} = \Pr\{M^{(1)}(r_1^i \mid d_1^i) - M^{(0)}(r_1^i \mid d_1^i) > 0\} \qquad (2.177)$$

Let us note that, as in the hard-decision analysis, the result of the metric comparison is influenced only by those positions and signal samples in which the codewords associated with the all-zero and the other candidate path differ. Let them differ in $d = w_H$ positions. Thus, the probability of the error event when two paths differ in $d$ positions can be expressed by the formula

$$P_d = \Pr\left\{\sum_{k=1}^{d} r_{j_k} > 0\right\} \qquad (2.178)$$

where the set $\{j_1, j_2, \ldots, j_d\}$ lists all the sample indices in which the two candidate codewords differ. Recall that, due to the fact that the all-zero path is the correct one, $r_{j_k} = -\sqrt{E_c} + \nu_{j_k}$, where the noise samples are statistically independent Gaussian zero-mean variables. As a result, the sum $\sum_{k=1}^{d} r_{j_k}$ is a Gaussian random variable whose deterministic component is equal to $-d\sqrt{E_c}$, whereas its variance is the sum of the variances of each component $r_{j_k}$, i.e. it is equal to $d\sigma^2$. The probability density function of the random variable $U = \sum_{k=1}^{d} r_{j_k}$ is then given by the formula

$$p_U(u) = \frac{1}{\sqrt{2\pi d\sigma^2}} \exp\left[-\frac{(u + d\sqrt{E_c})^2}{2d\sigma^2}\right] \qquad (2.179)$$

Thus, the desired probability of the error event $P_d$ is

$$P_d = \Pr\left\{\sum_{k=1}^{d} r_{j_k} > 0\right\} = \int_0^{\infty} p_U(u)\, du \qquad (2.180)$$

Let us apply the function Q(x) that describes the area under the tail of the normalized Gaussian distribution and is often found in the tables. This function is given by the formula

$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp\left(-\frac{t^2}{2}\right) dt \qquad (2.181)$$

We can easily show, using appropriate substitutions, that

$$P_d = Q\left(\sqrt{\frac{E_c d}{\sigma^2}}\right) = Q\left(\sqrt{\frac{2 d E_c}{N_0}}\right) \qquad (2.182)$$

The meaning of the $Q$-function is shown in Figure 2.25. As we know, the path differing in $d$ positions from the all-zero path is not the only one that can appear. The possible values of $d$ can be found from the code transfer function expressed in the form of a series expansion. In general, formula (2.166) can be applied as an upper bound on the probability of an error event, resulting in

$$P_E \le \sum_{d=d_{free}}^{\infty} L_d P_d = \sum_{d=d_{free}}^{\infty} L_d\, Q\left(\sqrt{\frac{E_c d}{\sigma^2}}\right) \qquad (2.183)$$

where, as before, $L_d$ denotes the number of paths differing from the all-zero path in $d$ positions. When the argument $x$ of the $Q$-function grows, the function can be tightly upper-bounded by an exponential function of the form

$$Q(x) \le \frac{1}{2}\exp\left(-\frac{x^2}{2}\right)$$


Figure 2.25 Illustration of the Q-function
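Both the pairwise error probability (2.182) and the exponential bound on $Q(x)$ are easy to evaluate numerically via the identity $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$; a small illustrative sketch, in which the $E_c/N_0$ operating point is a hypothetical example:

```python
from math import erfc, exp, sqrt

def Q(x: float) -> float:
    """Tail of the standard normal distribution: Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def pairwise_error_soft(d: int, ec_over_n0: float) -> float:
    """P_d of (2.182) for two paths differing in d positions on the
    AWGN channel with sigma^2 = N0/2."""
    return Q(sqrt(2 * d * ec_over_n0))

# Example: hypothetical operating point Ec/N0 = 2 (about 3 dB)
for d in (6, 8, 10):
    print(f"d={d}: P_d = {pairwise_error_soft(d, 2.0):.2e}")

# The bound Q(x) <= 0.5*exp(-x^2/2) holds for x >= 0;
# both sides decay like exp(-x^2/2):
for x in (1.0, 2.0, 3.0, 4.0):
    print(f"x={x}: Q(x) = {Q(x):.3e},  bound = {0.5 * exp(-x * x / 2):.3e}")
```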

Then, as in (2.169), formula (2.183) reduces to

$$P_E \le \sum_{d=d_{free}}^{\infty} L_d P_d \le \frac{1}{2} \sum_{d=d_{free}}^{\infty} L_d D^d \Big|_{D=\exp(-E_c/2\sigma^2)} = \frac{1}{2}\, T(D)\Big|_{D=\exp(-E_c/2\sigma^2)} \qquad (2.184)$$

By analogy to (2.173), the message bit error probability can be upper-bounded in the following way

$$P_b < \frac{1}{2} \frac{\partial T(D, N)}{\partial N}\bigg|_{N=1,\; D=\exp(-E_c/2\sigma^2)} \qquad (2.185)$$

Finally, for a small noise variance the shortest route, featuring a Hamming distance from the all-zero route equal to $d_{free}$, dominates, and then, as in (2.174), the message bit error probability can be approximated in the following way

$$P_b \simeq \frac{1}{2} B_{d_{free}} \exp\left(-\frac{d_{free} E_c}{2\sigma^2}\right) = \frac{1}{2} B_{d_{free}} \exp\left(-\frac{d_{free} E_c}{N_0}\right) \qquad (2.186)$$

To end our considerations, let us illustrate the gain of soft-decision decoding over hard-decision decoding by giving some quantitative examples based on the derived approximations for high signal-to-noise ratios.

Recall the example of bit error probability for hard-decision decoding when the error probability of a single code symbol is $p = 0.01$. Using (2.174) and substituting for our code $d_{free} = 6$, $B_{d_{free}} = 1$, we again have $P_{b,hard} \simeq 6.4 \times 10^{-5}$. As we will learn in Chapter 3, the probability of an error in bipolar transmission for high signal-to-noise ratios is

$$p = Q\left(\sqrt{\frac{E_c}{\sigma^2}}\right) \simeq \frac{1}{2}\exp\left(-\frac{1}{2}\frac{E_c}{\sigma^2}\right) = \frac{1}{2}\exp\left(-\frac{E_c}{N_0}\right) \qquad (2.187)$$

Substituting (2.187) into (2.186), we have

$$P_{b,soft} \simeq \frac{1}{2} B_{d_{free}} \left[\frac{1}{2}\exp\left(-\frac{E_c}{2\sigma^2}\right)\right]^{d_{free}} 2^{d_{free}} = \frac{1}{2} B_{d_{free}}\, 2^{d_{free}}\, p^{d_{free}}$$

Thus, if $p = 0.01$, then $P_{b,soft} \simeq 3.2 \times 10^{-11}$. As we can see, the difference in performance is significant. Let us also inspect the difference in the required $E_c/N_0$ for a given probability of bit error $P_b$ for hard- and soft-decision decoding. Let us stay at $P_{b,hard} = P_{b,soft} \simeq 6.4 \times 10^{-5}$, so for hard-decision decoding $p = 0.01$. Using the approximation applied in (2.187), for hard-decision decoding we have

$$\frac{E_c}{N_0}\bigg|_{hard} = -\ln 2p = 3.91 = 5.92\ \text{dB}$$

In turn, using (2.186) we obtain

$$\frac{E_c}{N_0}\bigg|_{soft} = -\frac{1}{d_{free}} \ln 2P_{b,soft} = 1.49 = 1.74\ \text{dB}$$

so the gain achieved by the application of soft-decision decoding instead of hard-decision decoding is of the order of 4 dB! Let us note that this quantitative result is not very precise, as only approximations of the bit error probabilities have been applied for both types of decoding. Typically, we can expect about a 2 dB gain of soft-decision decoding over its hard-decision version. Anyway, one can also easily notice that the code free distance $d_{free}$ plays a crucial role in the overall decoding performance.
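The whole comparison can be reproduced in a few lines; a sketch under the approximations (2.174), (2.186) and (2.187):

```python
from math import log, log10

d_free, B_dfree = 6, 1          # parameters of the considered code
p = 0.01                        # channel bit error probability (hard decision)

pb_hard = B_dfree * 2**d_free * p**(d_free / 2)   # (2.174): 6.4e-5
pb_soft = 0.5 * B_dfree * 2**d_free * p**d_free   # soft decision: 3.2e-11

# Required Ec/N0 for Pb = 6.4e-5 under each scheme (high-SNR approximations)
pb_target = 6.4e-5
ecn0_hard = -log(2 * p)                      # inverting (2.187): 3.91 (5.92 dB)
ecn0_soft = -log(2 * pb_target) / d_free     # inverting (2.186): 1.49 (1.74 dB)

print(f"Pb_hard = {pb_hard:.1e}, Pb_soft = {pb_soft:.1e}")
print(f"gain = {10 * log10(ecn0_hard / ecn0_soft):.2f} dB")  # about 4.2 dB
```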
