The decoding methods considered so far have taken advantage of specific properties of the applied codes, so they have been called nonalgebraic decoding methods. However, an important class of decoding algorithms is the class of algebraic decoding methods, which rely on the efficient solution of a certain equation set. These methods are applicable not only to binary codes but also to nonbinary ones such as Reed-Solomon codes. Thus, they are important from the application point of view.
Consider the decoding of binary BCH codes. The method presented below can be easily extended to nonbinary codes. As we remember, the roots of the BCH code generator polynomial $g(x)$ are equal to $\alpha^{i_0}, \alpha^{i_0+1}, \ldots, \alpha^{i_0+2t-1}$, where $\alpha$ is a primitive element of the Galois field $GF(p^m)$ and $i_0$ is a certain initial natural number. Assume without loss of generality that $i_0 = 1$. Recall that the roots of the generator polynomial determine the form of the parity check matrix, which according to formula (2.65) is the following
\[
H = \begin{bmatrix}
\alpha^0 & \alpha & \cdots & \alpha^{n-1} \\
\alpha^{2\cdot 0} & \alpha^{2\cdot 1} & \cdots & \alpha^{2(n-1)} \\
\vdots & \vdots & & \vdots \\
\alpha^{2t\cdot 0} & \alpha^{2t\cdot 1} & \cdots & \alpha^{2t(n-1)}
\end{bmatrix}
\tag{2.94}
\]
We also remember that $Hr = He = s$, so the syndrome calculated for the received sequence depends exclusively on the error sequence $e$ or, equivalently, on the error polynomial $e(x)$. Denote the result of the scalar product of the $i$th row of matrix $H$ given by (2.94) and the error sequence as $s_i$. One can easily find that it is the $i$th component of the syndrome vector $s$. We have
\[
\begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_{2t} \end{bmatrix}
=
\begin{bmatrix}
\alpha^0 & \alpha & \cdots & \alpha^{n-1} \\
\alpha^{2\cdot 0} & \alpha^{2\cdot 1} & \cdots & \alpha^{2(n-1)} \\
\vdots & \vdots & & \vdots \\
\alpha^{2t\cdot 0} & \alpha^{2t\cdot 1} & \cdots & \alpha^{2t(n-1)}
\end{bmatrix}
\begin{bmatrix} e_0 \\ e_1 \\ \vdots \\ e_{n-1} \end{bmatrix}
\tag{2.95}
\]
so in the polynomial notation, for $i = 1, \ldots, 2t$, the $i$th syndrome component $s_i$ can be written in the form
\[
s_i = e(\alpha^i) = e_{n-1}(\alpha^i)^{n-1} + e_{n-2}(\alpha^i)^{n-2} + \cdots + e_1\alpha^i + e_0 \tag{2.96}
\]
Each syndrome component $s_i$ is a linear combination of the powers of the root $\alpha^i$ and therefore belongs to the Galois field $GF(p^m)$. Assume that $w \le t$ errors have occurred in the received sequence. Their positions are unknown to the decoder. Denote them as $j_1, j_2, \ldots, j_w$. Therefore the error polynomial is expressed as
\[
e(x) = e_{j_w}x^{j_w} + e_{j_{w-1}}x^{j_{w-1}} + \cdots + e_{j_2}x^{j_2} + e_{j_1}x^{j_1} \tag{2.97}
\]
In the case of binary codes the coefficients $e_{j_1}, e_{j_2}, \ldots, e_{j_w}$ are equal to binary "1"s.
Taking advantage of (2.97), we obtain equation set (2.96) in the following form
\[
\begin{aligned}
s_1 &= e_{j_w}\alpha^{j_w} + e_{j_{w-1}}\alpha^{j_{w-1}} + \cdots + e_{j_2}\alpha^{j_2} + e_{j_1}\alpha^{j_1} \\
s_2 &= e_{j_w}(\alpha^{j_w})^2 + e_{j_{w-1}}(\alpha^{j_{w-1}})^2 + \cdots + e_{j_2}(\alpha^{j_2})^2 + e_{j_1}(\alpha^{j_1})^2 \\
&\;\;\vdots \\
s_{2t} &= e_{j_w}(\alpha^{j_w})^{2t} + e_{j_{w-1}}(\alpha^{j_{w-1}})^{2t} + \cdots + e_{j_2}(\alpha^{j_2})^{2t} + e_{j_1}(\alpha^{j_1})^{2t}
\end{aligned}
\tag{2.98}
\]
so each syndrome component is described by the expression
\[
s_i = \sum_{l=1}^{w} e_{j_l}(\alpha^{j_l})^i \tag{2.99}
\]
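In practice the decoder obtains the values described by (2.96) and (2.99) simply by evaluating the received polynomial at the consecutive powers of $\alpha$. A minimal sketch in Python over $GF(2^4)$, the field used later in Example 2.7.1; the log/antilog table construction and the helper names are our own, not from the text:

```python
# GF(2^4) generated by p(x) = x^4 + x + 1: build antilog (EXP) and log (LOG) tables.
EXP, LOG = [0] * 15, [0] * 16
v = 1
for i in range(15):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0x10:
        v ^= 0b10011          # reduce modulo p(x) = x^4 + x + 1

def gmul(a, b):
    """Multiply two GF(2^4) elements via the log/antilog tables."""
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def syndrome(r_coeffs, i):
    """Evaluate r(alpha^i) by Horner's rule; r_coeffs[d] is the coefficient of x^d."""
    s = 0
    for c in reversed(r_coeffs):
        s = gmul(s, EXP[i % 15]) ^ c
    return s

# r(x) = x^12 + x^5 + x^3, the received word of Example 2.7.1
r = [0] * 13
r[12] = r[5] = r[3] = 1
print(syndrome(r, 1))   # s1 = 1
```

Since the syndrome of a codeword is zero, $r(\alpha^i) = e(\alpha^i)$, so evaluating the received polynomial gives exactly the components of (2.99).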
The main goal of the decoder is to identify the error positions $j_1, j_2, \ldots, j_w$. In general, determination of the error positions relies on calculation of the syndrome components $s_1, s_2, \ldots, s_{2t}$, followed by solution of the nonlinear equation set with respect to the unknowns $\alpha^{j_1}, \alpha^{j_2}, \ldots, \alpha^{j_w}$. The appropriate powers of the primitive element $\alpha$ are found on the basis of the obtained solutions. These powers indicate the error positions in the received sequence. Solution of the nonlinear equation set (2.98) is generally cumbersome. The conceptually simplest approach would be to find the solution by successive substitution of the unknowns $\alpha^{j_1}, \alpha^{j_2}, \ldots, \alpha^{j_w}$ by all possible powers of the primitive element $\alpha$ in equation set (2.98). Unfortunately, this is reasonable only if the number of correctable errors $t$ is small. Particular decoding methods basically differ in the method of finding the solution of equation set (2.98). In this section we will present the approach proposed by Berlekamp (1965), modified by Massey (1972) and summarized in a clear way by Lee (2000).
Instead of solving the nonlinear equation set, the Berlekamp-Massey algorithm defines an error location polynomial $\Lambda(x)$ and performs some operations on it. The polynomial has the form
\[
\Lambda(x) = \Lambda_w x^w + \Lambda_{w-1}x^{w-1} + \cdots + \Lambda_1 x + 1 \tag{2.100}
\]
\[
= (1-\alpha^{j_w}x)(1-\alpha^{j_{w-1}}x)\cdots(1-\alpha^{j_1}x) = \prod_{l=1}^{w}(1-\alpha^{j_l}x) \tag{2.101}
\]
The roots $(\alpha^{j_l})^{-1}$ $(l = 1, 2, \ldots, w)$ of the error location polynomial are inverses of the searched solutions of the nonlinear equation set (2.98). Thus, the solution of (2.98) has been replaced by construction of the polynomial $\Lambda(x)$, followed by finding its roots.
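The product form (2.101) and the root property above can be verified numerically. A sketch in the $GF(2^4)$ of Example 2.7.1, with our own helper names and an assumed set of error positions: we expand $\Lambda(x) = \prod_l (1-\alpha^{j_l}x)$ and confirm that each $(\alpha^{j_l})^{-1}$ is a root.

```python
# GF(2^4) tables for p(x) = x^4 + x + 1.
EXP, LOG = [0] * 15, [0] * 16
v = 1
for i in range(15):
    EXP[i], LOG[v] = v, i
    v = (v << 1) ^ (0b10011 if v & 0b1000 else 0)

def gmul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def poly_mul(p, q):                       # product of polynomials over GF(2^4)
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gmul(a, b)
    return out

def poly_eval(p, x):                      # p[d] is the coefficient of x^d
    s = 0
    for c in reversed(p):
        s = gmul(s, x) ^ c
    return s

positions = [3, 5, 12]                    # assumed error positions j_1, j_2, j_3
lam = [1]
for j in positions:                       # Lambda(x) = prod(1 - alpha^j x); minus = plus in GF(2^m)
    lam = poly_mul(lam, [1, EXP[j % 15]])
for j in positions:                       # each (alpha^j)^{-1} must be a root
    assert poly_eval(lam, EXP[(15 - j) % 15]) == 0
print(lam)                                # Lambda(x) = 1 + x + alpha^5 x^3
```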
Let us multiply both sides of the polynomial expression $\Lambda(x)$ given by formula (2.100) by $e_{j_l}(\alpha^{j_l})^{k+w}$, where $k$ is a certain natural number,\footnote{One should not confuse the natural number $k$ with the message block length of a codeword.} and calculate its value for $x = (\alpha^{j_l})^{-1}$. Because $(\alpha^{j_l})^{-1}$ is a root of the polynomial $\Lambda(x)$, from (2.100) we get the following dependence
\[
e_{j_l}(\alpha^{j_l})^{k+w}\left[\Lambda_w(\alpha^{j_l})^{-w} + \Lambda_{w-1}(\alpha^{j_l})^{-w+1} + \cdots + \Lambda_1(\alpha^{j_l})^{-1} + 1\right] = 0
\]
or equivalently
\[
e_{j_l}\left[\Lambda_w(\alpha^{j_l})^{k} + \Lambda_{w-1}(\alpha^{j_l})^{k+1} + \cdots + \Lambda_1(\alpha^{j_l})^{k+w-1} + (\alpha^{j_l})^{k+w}\right] = 0 \tag{2.102}
\]
Let us sum both sides of equation (2.102) for all values of the index $l = 1, 2, \ldots, w$. Grouping all components containing $\Lambda_i$, we receive the following equation
\[
\Lambda_w\sum_{l=1}^{w}e_{j_l}(\alpha^{j_l})^{k} + \Lambda_{w-1}\sum_{l=1}^{w}e_{j_l}(\alpha^{j_l})^{k+1} + \cdots + \Lambda_1\sum_{l=1}^{w}e_{j_l}(\alpha^{j_l})^{k+w-1} + \sum_{l=1}^{w}e_{j_l}(\alpha^{j_l})^{k+w} = 0 \tag{2.103}
\]
If we recall formula (2.99) for syndrome components, we see that equation (2.103) can be written in the form
\[
\Lambda_w s_k + \Lambda_{w-1}s_{k+1} + \cdots + \Lambda_1 s_{k+w-1} + s_{k+w} = 0 \tag{2.104}
\]
Equation (2.104) contains the existing components of the syndrome calculated from (2.95) if $k$ is located within the interval $[1, w]$, because if, as assumed, $w \le t$, then $k + w \le 2t$. Substituting $i = k + w$ for $w+1 \le i \le 2w$, we get the following equation
\[
\Lambda_w s_{i-w} + \Lambda_{w-1}s_{i-w+1} + \cdots + \Lambda_1 s_{i-1} + s_i = 0 \tag{2.105}
\]
so
\[
s_i = -\sum_{l=1}^{w}\Lambda_l s_{i-l} \qquad \text{for } i = w+1, w+2, \ldots, 2w \tag{2.106}
\]
Based on (2.106) we get the following matrix equation
\[
\begin{bmatrix} s_{w+1} \\ s_{w+2} \\ \vdots \\ s_{2w} \end{bmatrix}
= -
\begin{bmatrix}
s_1 & s_2 & \cdots & s_w \\
s_2 & s_3 & \cdots & s_{w+1} \\
\vdots & \vdots & & \vdots \\
s_w & s_{w+1} & \cdots & s_{2w-1}
\end{bmatrix}
\begin{bmatrix} \Lambda_w \\ \Lambda_{w-1} \\ \vdots \\ \Lambda_1 \end{bmatrix}
\tag{2.107}
\]
This is a linear equation set that needs to be solved in the finite field with respect to the coefficient set $\{\Lambda_1, \Lambda_2, \ldots, \Lambda_w\}$ of the error location polynomial. Equation (2.106) implies that the subsequent syndrome components can be found by applying the feedback shift register shown in Figure 2.16 if the coefficients $\Lambda_1, \Lambda_2, \ldots, \Lambda_w$ are known. The solution of equation set (2.107) is then equivalent to the design of such a feedback shift register that is able to generate the sequence of the syndrome components. The iterative method of deriving the coefficients $\Lambda_1, \Lambda_2, \ldots, \Lambda_w$ was given by Massey (1972). His method is related to Berlekamp's method (Berlekamp 1965), so the name of the decoding algorithm contains the names of both scientists. Let us note that the decoder does not know the number of errors $w$ that have occurred in the received sequence. Assuming that the probability of a single error in a received symbol is smaller than 1/2, the case in which fewer errors have occurred is more probable than the case in which the number of errors is higher. Thus, we should search for the register of the lowest degree that correctly generates the sequence of the syndrome components calculated for the received sequence.
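The feedback register interpretation of (2.106) is easy to simulate: given the coefficients $\Lambda_1, \ldots, \Lambda_w$ and the first $w$ syndrome components, the register reproduces $s_{w+1}, \ldots, s_{2w}$. A sketch over $GF(2^4)$ with our own helper names, using values that will appear in Example 2.7.1:

```python
# GF(2^4) tables for p(x) = x^4 + x + 1.
EXP, LOG = [0] * 15, [0] * 16
v = 1
for i in range(15):
    EXP[i], LOG[v] = v, i
    v = (v << 1) ^ (0b10011 if v & 0b1000 else 0)

def gmul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def lfsr_extend(lam, s, total):
    """lam = [Lambda_1, ..., Lambda_w]; s = [s_1, ..., s_w].
    Applies (2.106); in GF(2^m) the minus sign can be dropped (-x = x)."""
    s = list(s)
    while len(s) < total:
        s.append(0)
        for l, coeff in enumerate(lam, start=1):
            s[-1] ^= gmul(coeff, s[-1 - l])
    return s

# Lambda(x) = 1 + x + alpha^5 x^3 and s1..s3 = 1, 1, alpha^10 (Example 2.7.1)
lam = [1, 0, EXP[5]]                   # Lambda_1, Lambda_2, Lambda_3
print(lfsr_extend(lam, [1, 1, EXP[10]], 6))
```

Running the register yields $s_4 = 1$, $s_5 = \alpha^{10}$, $s_6 = \alpha^5$, i.e. exactly the remaining syndrome components of that example.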
[Figure 2.16 shows a linear feedback shift register: delay stages holding $s_{i-1}, s_{i-2}, \ldots, s_{i-w}$, feedback taps weighted by $-\Lambda_1, -\Lambda_2, \ldots, -\Lambda_w$, and adders producing the next component $s_i$.]
Figure 2.16 Linear feedback register synthesized in the Massey algorithm
The Massey algorithm is thus a method of feedback register synthesis that determines the register of minimum length. This register generates the required sequence of syndrome components. During operation of the algorithm the sequence of syndrome components generated by the current form of the feedback register is subsequently compared with the desired syndrome components calculated on the basis of the received sequence. This is done step by step until all the syndrome components are correctly produced by the register or the divergence between a component calculated by the register and that calculated on the basis of the received sequence is observed. In the case of divergence, the feedback register is modified so as to remove it. The syndrome components are generated again until their number is exhausted or the next divergence between the calculated and generated sequences appears again.
Denote the polynomial describing the correction of the feedback taps as $D(x)$. Let $L$ be the current degree of the synthesized connection polynomial $\Lambda(x)$ and let $i$ be the number of the current syndrome component. The algorithm performing the synthesis of the feedback register, leading to determination of the error location polynomial coefficients, can be formulated in the following steps (Michelson and Levesque 2003).
1. Derive the syndrome components $s_i$, $i = 1, 2, \ldots, 2t$.
2. Initialize the variables applied in the algorithm: $i = 1$, $\Lambda(x) = 1$, $D(x) = x$, $L = 0$.
3. For a new syndrome component $s_i$ calculate the discrepancy
\[
\delta = s_i + \sum_{l=1}^{L}\Lambda_l s_{i-l} \tag{2.108}
\]
4. Check the calculated value of the discrepancy $\delta$. If $\delta = 0$, go to step 9; otherwise go to step 5.
5. Modify the connection polynomial $\Lambda(x)$: let $\Lambda^*(x) = \Lambda(x) - \delta D(x)$.
6. Test the length of the feedback register. If $2L \ge i$, go to step 8, i.e. do not extend the register length; otherwise go to step 7.
7. Increase the register length and update the correction polynomial: $L := i - L$, $D(x) = \Lambda(x)\delta^{-1}$.
8. Update the connection polynomial: $\Lambda(x) := \Lambda^*(x)$.
9. Update the correction polynomial: $D(x) := xD(x)$.
10. Update the counter of the syndrome components: $i := i + 1$.
11. Check if the counter of the syndrome components has reached the final value, i.e. if $i > 2t$. If not, go to step 3; otherwise stop the procedure.
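The eleven steps above translate almost line by line into code. A sketch over $GF(2^4)$; the variable names (`delta` for $\delta$, `corr` for $D(x)$, `lam` for $\Lambda(x)$) are our own, and polynomials are stored as coefficient lists indexed by the power of $x$. In $GF(2^m)$ subtraction equals addition, so both are XOR:

```python
# GF(2^4) tables for p(x) = x^4 + x + 1.
EXP, LOG = [0] * 15, [0] * 16
v = 1
for i in range(15):
    EXP[i], LOG[v] = v, i
    v = (v << 1) ^ (0b10011 if v & 0b1000 else 0)

def gmul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def ginv(a):
    return EXP[(15 - LOG[a]) % 15]

def berlekamp_massey(s):
    """s = [s_1, ..., s_2t]; returns Lambda(x) as [1, Lambda_1, ..., Lambda_L]."""
    lam, corr, L = [1], [0, 1], 0               # step 2: Lambda(x)=1, D(x)=x, L=0
    for i in range(1, len(s) + 1):
        # step 3: discrepancy delta = s_i + sum_{l=1}^{L} Lambda_l * s_{i-l}
        delta = s[i - 1]
        for l in range(1, L + 1):
            delta ^= gmul(lam[l], s[i - 1 - l])
        if delta != 0:                          # step 4
            # step 5: Lambda*(x) = Lambda(x) - delta * D(x)
            star = [0] * max(len(lam), len(corr))
            for d in range(len(star)):
                a = lam[d] if d < len(lam) else 0
                b = corr[d] if d < len(corr) else 0
                star[d] = a ^ gmul(delta, b)
            if 2 * L < i:                       # steps 6-7: lengthen the register
                L, corr = i - L, [gmul(c, ginv(delta)) for c in lam]
            lam = star                          # step 8
        corr = [0] + corr                       # step 9: D(x) := x * D(x)
    return lam

# Syndromes of Example 2.7.1
s = [1, 1, EXP[10], 1, EXP[10], EXP[5]]
print(berlekamp_massey(s))                      # [1, 1, 0, 6], i.e. 1 + x + alpha^5 x^3
```

Note that step 7 uses the old $\Lambda(x)$ when forming $D(x) = \Lambda(x)\delta^{-1}$; only afterwards does step 8 overwrite $\Lambda(x)$ with $\Lambda^*(x)$, which the code preserves by computing `star` first.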
Let us note that the discrepancy $\delta$ is defined in such a way that in the first execution of step 3 it is equal to the first syndrome component $s_1$. Synthesis of the correction polynomial $D(x)$ is performed not only to set the discrepancy $\delta$ to zero but also to modify the polynomial $\Lambda(x)$ in such a manner that the feedback register configured according to this polynomial also generates all the preceding syndrome components. It is therefore not necessary to check the correctness of generation of the previous syndrome components by the modified feedback register. This property has a crucial influence on the algorithm complexity, which, as a result, depends linearly on the number of correctable errors.
After finding the coefficients of the polynomial $\Lambda(x)$ we have to find its roots $(\alpha^{j_l})^{-1}$, which, as we remember, are inverses of the Galois field elements indicating the error locations. Searching for the roots is often performed by substituting each nonzero element of the extension field $GF(p^m)$, in which the primitive element $\alpha$ is defined, into the polynomial $\Lambda(x)$ determined by formula (2.100), and testing whether $\Lambda(x) = 0$. If so, the tested element is a root of the polynomial $\Lambda(x)$, and the power of the primitive element related to the root's inverse determines the error location in the received sequence.
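This exhaustive substitution (a Chien-search-style scan) is straightforward over a small field. A sketch in the $GF(2^4)$ of Example 2.7.1, applied to $\Lambda(x) = \alpha^5x^3 + x + 1$; helper names are our own:

```python
# GF(2^4) tables for p(x) = x^4 + x + 1.
EXP, LOG = [0] * 15, [0] * 16
v = 1
for i in range(15):
    EXP[i], LOG[v] = v, i
    v = (v << 1) ^ (0b10011 if v & 0b1000 else 0)

def gmul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def error_positions(lam):
    """Return the exponents j with Lambda((alpha^j)^{-1}) = 0, i.e. the error locations."""
    positions = []
    for j in range(15):                     # try every nonzero field element
        x = EXP[(15 - j) % 15]              # candidate root (alpha^j)^{-1}
        acc = 0
        for c in reversed(lam):             # Horner evaluation of Lambda(x)
            acc = gmul(acc, x) ^ c
        if acc == 0:
            positions.append(j)
    return positions

lam = [1, 1, 0, EXP[5]]                     # Lambda(x) = 1 + x + alpha^5 x^3
print(sorted(error_positions(lam)))         # [3, 5, 12]
```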
Concluding, the Berlekamp-Massey algorithm of decoding BCH codes can be summarized in the following steps.
1. Derive the syndrome components $s_1, s_2, \ldots, s_{2t}$ related to the received sequence described by the polynomial $r(x)$.
2. Apply the received syndrome components $s_1, s_2, \ldots, s_{2t}$ in the Berlekamp-Massey algorithm, which calculates the coefficients of the error location polynomial $\Lambda(x)$.
3. Find the roots of the polynomial $\Lambda(x)$.
4. On the basis of the inverses of the roots of $\Lambda(x)$, determine the error polynomial $e(x)$.
5. Correct the received sequence, i.e. add the error polynomial to the received sequence polynomial $r(x)$.
Let us illustrate the operation of the Berlekamp-Massey algorithm by the following example taken from Lee (2000).
Example 2.7.1 Consider decoding of the BCH (15,5) code of correction capability $t = 3$ errors. The generator polynomial is $g(x) = x^{10} + x^8 + x^5 + x^4 + x^2 + x + 1$. This polynomial has the roots $\alpha, \alpha^2, \alpha^3, \alpha^4, \alpha^5, \alpha^6$, where $\alpha$ is the primitive element of the field $GF(2^4)$ generated by the polynomial $p(x) = x^4 + x + 1$. The list of field elements represented as powers of the primitive element has been shown in Table 2.3. Assume that the zero codeword has been transmitted, i.e. $c(x) = 0$; however, the received sequence polynomial has the form $r(x) = x^{12} + x^5 + x^3$. In reality it is the error polynomial $e(x)$, but this fact is not known to the decoder. In the first phase of the decoding algorithm the syndrome components have to be derived by calculating $s_i = r(\alpha^i)$, $i = 1, 2, \ldots, 6$. On the basis of Table 2.3 we get
\[
s_1 = r(\alpha) = \alpha^{12} + \alpha^{5} + \alpha^{3} = (\alpha^3+\alpha^2+\alpha+1) + (\alpha^2+\alpha) + \alpha^3 = 1
\]
The other syndrome components calculated in a similar way are
\[
s_2 = r(\alpha^2) = 1, \quad s_3 = r(\alpha^3) = \alpha^{10}, \quad s_4 = r(\alpha^4) = 1, \quad s_5 = r(\alpha^5) = \alpha^{10}, \quad s_6 = r(\alpha^6) = \alpha^{5}
\]
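These six values can be cross-checked numerically. A short sketch with our own $GF(2^4)$ helpers (log/antilog tables built from $p(x) = x^4 + x + 1$, as in Table 2.3):

```python
# GF(2^4) tables for p(x) = x^4 + x + 1.
EXP, LOG = [0] * 15, [0] * 16
v = 1
for i in range(15):
    EXP[i], LOG[v] = v, i
    v = (v << 1) ^ (0b10011 if v & 0b1000 else 0)

def r_at(i):
    """r(alpha^i) for r(x) = x^12 + x^5 + x^3: sum of three powers of alpha."""
    return EXP[12 * i % 15] ^ EXP[5 * i % 15] ^ EXP[3 * i % 15]

expected = [1, 1, EXP[10], 1, EXP[10], EXP[5]]   # s1..s6 as listed above
assert [r_at(i) for i in range(1, 7)] == expected
print("syndromes confirmed")
```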
Knowing the syndrome components, we have to determine the coefficients of the error location polynomial $\Lambda(x)$, taking advantage of the iterative Berlekamp-Massey algorithm. Table 2.5 presents the subsequent steps of this algorithm.
Table 2.5 Results of subsequent steps of the Berlekamp-Massey iterative procedure (Lee 2000)

i   δ          D(x)                                 Λ*(x)                Λ(x)                 L
0   −          x                                    −                    1                    0
1   s_1 = 1    x                                    1 + x                1 + x                1
2   0          x^2                                  1 + x                1 + x                1
3   α^5        α^10 x^2 + α^10 x                    α^5 x^2 + x + 1      α^5 x^2 + x + 1      2
4   0          α^10 x^3 + α^10 x^2                  α^5 x^2 + x + 1      α^5 x^2 + x + 1      2
5   α^10       α^10 x^3 + α^5 x^2 + α^5 x           α^5 x^3 + x + 1      α^5 x^3 + x + 1      3
6   0          α^10 x^4 + α^5 x^3 + α^5 x^2         α^5 x^3 + x + 1      α^5 x^3 + x + 1      3
Analyze the operation of the algorithm. At the initial moment the variables used by the algorithm are initialized, i.e. $D(x) = x$, $\Lambda(x) = 1$, $L = 0$. In the first iteration the discrepancy $\delta$ is nonzero and equal to the first syndrome component $s_1 = 1$. Thus, the connection polynomial is modified by setting $\Lambda^*(x) = 1 - \delta x = 1 + x$. Because the mathematical operations are performed in $GF(2^4)$, subtraction of 4-bit blocks representing the elements of this field is equivalent to modulo-2 addition of their components. Therefore, we will consistently apply signs of mathematical addition. Because $L = 0$ and $i = 1$, so far $2L < i$, so the feedback register length is increased ($L := i - L = 1$) and the correction polynomial is updated accordingly, i.e. $D(x) = \Lambda(x)\delta^{-1} = 1$. Next the connection polynomial is also updated, so $\Lambda(x) := \Lambda^*(x) = 1 + x$, and the correction polynomial is changed again, $D(x) := xD(x) = x$. At the end of this iteration the syndrome component counter is increased ($i := i + 1 = 2$). In the next iteration the discrepancy is calculated again: $\delta = s_2 - \sum_{l=1}^{1}\Lambda_l s_{2-l} = 1 - 1 = 0$. Consequently, in this iteration we go directly to step 9 and the feedback register length is not changed, because $s_2$ has been generated correctly by the feedback register in its current form. In turn, updating of the correction polynomial is performed, i.e. $D(x) := xD(x) = x^2$, and the iteration counter is increased by 1. The reader is encouraged to trace the next iterations of this algorithm. Finally, in the sixth iteration the following connection polynomial is received
\[
\Lambda(x) = \alpha^5x^3 + x + 1 \tag{2.109}
\]
Now the polynomial roots have to be found by substituting subsequent nonzero elements of the Galois field $GF(2^4)$ into equation (2.109) and checking whether the result is equal to zero. It turns out that the polynomial roots are $\alpha^3$, $\alpha^{10}$ and $\alpha^{12}$. However, we are interested in their inverses, because the latter, expressed as powers of the primitive element, indicate the error locations in the received sequence. The calculated root inverses are $\alpha^{12}$, $\alpha^{5}$ and $\alpha^{3}$, respectively. The error polynomial $e(x)$ thus takes the form $e(x) = x^{12} + x^5 + x^3$. The final step of the decoder is correcting the errors by performing the following polynomial calculation
\[
c(x) = r(x) + e(x) = (x^{12} + x^5 + x^3) + (x^{12} + x^5 + x^3) = 0
\]
As we can see, the final decoder decision is correct.
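The whole worked example can be reproduced end-to-end in a short program combining the five decoding steps: syndromes, Berlekamp-Massey, root search, error polynomial, and correction. This is a sketch with our own helper and function names, not code from the referenced sources; the $GF(2^4)$ tables again correspond to $p(x) = x^4 + x + 1$:

```python
# GF(2^4) tables for p(x) = x^4 + x + 1.
EXP, LOG = [0] * 15, [0] * 16
v = 1
for i in range(15):
    EXP[i], LOG[v] = v, i
    v = (v << 1) ^ (0b10011 if v & 0b1000 else 0)

def gmul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def bch_decode(r, t=3, n=15):
    """r = list of n binary coefficients of r(x); returns the corrected codeword."""
    # 1. syndromes s_i = r(alpha^i), i = 1..2t
    s = [0] * (2 * t)
    for i in range(1, 2 * t + 1):
        for d, c in enumerate(r):
            if c:
                s[i - 1] ^= EXP[d * i % 15]
    # 2. Berlekamp-Massey: lam = [1, Lambda_1, ..., Lambda_L]
    lam, corr, L = [1], [0, 1], 0
    for i in range(1, 2 * t + 1):
        delta = s[i - 1]
        for l in range(1, L + 1):
            delta ^= gmul(lam[l], s[i - 1 - l])
        if delta:
            star = [(lam[d] if d < len(lam) else 0) ^
                    gmul(delta, corr[d] if d < len(corr) else 0)
                    for d in range(max(len(lam), len(corr)))]
            if 2 * L < i:
                L = i - L
                inv = EXP[(15 - LOG[delta]) % 15]
                corr = [gmul(c, inv) for c in lam]
            lam = star
        corr = [0] + corr
    # 3-4. roots of Lambda(x): positions j with Lambda((alpha^j)^{-1}) = 0
    c = list(r)
    for j in range(n):
        x, acc = EXP[(15 - j) % 15], 0
        for coeff in reversed(lam):
            acc = gmul(acc, x) ^ coeff
        if acc == 0:
            c[j] ^= 1            # 5. flip the bit at each error location
    return c

r = [0] * 15
r[12] = r[5] = r[3] = 1          # r(x) = x^12 + x^5 + x^3 (all-zero codeword sent)
print(bch_decode(r))             # all zeros: the three errors are corrected
```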
The example presented above illustrates the process of decoding of a binary code.
However, the Berlekamp-Massey algorithm is often applied for decoding nonbinary BCH codes, such as Reed-Solomon codes. In the tutorial by Michelson and Levesque (2003) an example of RS code decoding is presented.
One could wonder why such a complicated decoding method is used. The answer is simple. If the error correction capability is relatively large (e.g. if six errors can be corrected) and the codewords are long, then nonalgebraic decoding methods become too complex. Direct search for the solution of the nonlinear equation set, by checking all nonzero elements of the field $GF(p^m)$ and exhausting the set of all possible error combinations, is also too complex. Thus, the Berlekamp-Massey algorithm becomes one of the possible solutions for decoding such codes.
The Berlekamp-Massey algorithm is a classical solution for BCH code decoding. In recent years many new soft-decision decoding algorithms have been developed; however, their presentation is beyond the scope of this introductory chapter.