Acknowledgments
First, I would like to express my sincere thanks to my supervisors, Dr. Marc Andre Armand and Dr. Mehul Motani, for their invaluable advice and patient guidance throughout the course of my project and thesis. Their knowledge and experience in the field of coding theory are immeasurable.
I would also like to thank my friend Hu Wenguang for the numerous and fruitful discussions which helped me solve specific problems in the thesis. Special thanks also go to Ye Jiangyang and Yu Yiding for sharing some enjoyable moments during the course of my research.
My gratitude also goes to the Department of Electrical and Computer Engineering, National University of Singapore, which provided the facilities to conduct the research work.
Finally, I am grateful to my family, without whose love, encouragement and support this thesis would not have been possible.
Table of Contents
Acknowledgements
Summary
List of Figures
List of Tables
List of Abbreviations

Chapter I  Introduction
  1.1 Error Control Coding
  1.2 Accomplishments and Contributions
  1.3 Thesis outline

Chapter II  List Decoding
  2.1 Introduction
  2.2 RS codes and GRS codes
  2.3 Sudan I Approach
  2.4 Sudan II Approach
  2.5 The KV Approach
  2.6 Gröbner basis Interpolation Algorithm
  2.7 Simulation Results
  2.8 Summary

Chapter III  Adaptive List Decoding
  3.1 Introduction
  3.2 Adaptive List Decoder
  3.3 Modified Adaptive List Decoder
  3.4 Puncturing Effects
  3.5 Simulation Results
  3.6 Summary

Chapter IV  Turbo Codes
  4.1 Introduction
  4.2 MAP Decoder
  4.3 Turbo Decoding Scheme
  4.4 Summary

Chapter V  Iterative List Decoding Scheme
  5.1 Introduction
  5.2 Modified Symbol-Based MAP Decoder
  5.3 Extrinsic Information from the KV List Decoder
  5.4 Iterative List Decoding Scheme
  5.5 Adaptive Iterative List Decoder
  5.6 Simulation Results
  5.7 Summary

Chapter VI  Conclusion
  6.1 Summary of Thesis
  6.2 Future work

References
LIST OF FIGURES

Figure 1.1: Block diagram of a digital communication system
Figure 2.1: The KVA Structure
Figure 2.2: FER performance for (15,9,7) RS code with the KVA
Figure 3.1: ALD Structure
Figure 3.2: ALD Flow Chart
Figure 3.3: Average list size for ALD
Figure 3.4: The distribution of list size for the ALD
Figure 3.5: ALD Decoding Radius
Figure 3.6: Average list size for the ALD and MALD
Figure 3.7: The distributions of list size for the MALD
Figure 3.8: ALD vs KVD for (15,9,7) RS codes
Figure 3.9: MALD vs KVD for (15,9,7) RS codes
Figure 3.10: Puncturing Scheme for (15,5,11) RS codes
Figure 3.11: ALD for Punctured (15,9,7) RS codes
Figure 3.12: MALD for Punctured (15,9,7) RS codes
Figure 3.13: MALD for (15,9,7) RS codes
Figure 4.1: An Example of a Trellis
Figure 4.2: Turbo Decoding Scheme
Figure 5.1: Iterative List Encoder
Figure 5.2: SISO decoder
Figure 5.3: The ILD scheme
Figure 5.4: Generalized List Decoder
Figure 5.5: The AILD Type I Flow Chart
Figure 5.6: The AILD Type II Flow Chart
Figure 5.7: ILD performance for (15,9,7) RS codes
Figure 5.8: AILD performance for (15,9,7) RS codes
LIST OF TABLES

Table 2.1: GF(8) Table
Table 2.2: The Greedy Iterative Algorithm
Table 2.3: Gröbner basis Interpolation Algorithm
Table 3.1: Stopping Criterion for the Adaptive List Decoder
Table 3.2: The Modified Adaptive List Decoder
Table 4.1: Decoding Algorithm for MAP Decoder 1
Table 4.2: Decoding Algorithm for MAP Decoder 2
Table 5.1: Decoding Algorithm for SISO Decoder 1
Table 5.2: Decoding Algorithm for SISO Decoder 2
LIST OF ABBREVIATIONS
GRS Codes Generalized Reed-Solomon Codes
MDS Codes Maximum-distance separable Codes
AG Codes Algebraic Geometry Codes
AWGN Channel Additive White Gaussian Noise Channel
RSC Encoder Recursive Systematic Convolutional Encoder
MAP Maximum a Posteriori Probability
LAPP Log a posteriori probability
KVA Ralf Koetter and Alexander Vardy algorithm
KVD Ralf Koetter and Alexander Vardy decoder
ILD Iterative list decoder
MALD Modified adaptive list decoder
MMAPD Modified symbol-based MAP decoder
AILD Adaptive iterative list decoder
Summary
Error-correcting codes are designed to solve the problem of reliable transmission of information over a noisy channel. A fundamental problem in coding theory is to decode the original message effectively even when some symbols of the received word are distorted by the channel. Traditionally, decoding algorithms have been constrained to output a unique codeword. However, list decoding, proposed by Elias and Wozencraft, generates a list of all candidate codewords that differ from the received word in a certain number of positions.
This thesis investigates adaptive and iterative list decoding algorithms for Reed-Solomon codes to achieve better performance. The research consists of two major parts.
The first part presents the adaptive list decoder (ALD), which can complement any existing list decoding algorithm. In this thesis, we use the list decoder proposed by Ralf Koetter and Alexander Vardy (the KV list decoder) as an example. We compare the output of the KV list decoder with the hard-decision received word. If the number of mismatched symbols exceeds $n - t$ for an $[n, k+1, d]$ RS code, where $t$ is the smallest positive integer satisfying $t^2 > kn$, we increase the maximum list size of the component KV list decoder by a predefined step and decode the received word again. We also propose the modified adaptive list decoder (MALD), which differs from the original decoder by introducing a new stopping criterion. The MALD not only increases the list size but also compares the decoded results from two consecutive iterations. After we generate the output of its component KV list decoder with the increased maximum list size, we compare it with the previous result. If the two decoded results match, we stop iterating and generate the output. Otherwise, we continue increasing the list size and searching for codewords.
Part two of the thesis presents an iterative list decoding algorithm (ILD). In this algorithm, a serial concatenation of the modified symbol-based MAP decoder and the KV list decoder serves as the core decoder. We calculate symbol reliability information from one component decoder and feed it back to the other. Furthermore, we propose a scheme that combines the adaptive list decoder and the iterative list decoder (AILD) to achieve better performance.
Chapter I
Introduction
1.1 Error Control Coding
The objective of data transmission is to transfer data from an information source through a physical channel to a destination reliably. A typical communication system can be represented by the block diagram shown in Figure 1.1.
Figure 1.1 Block diagram of a digital communication system
Error control codes relate to the protection of digital information against the errors that occur during data transmission or storage. Data entering the communication system from the information source first passes through a source encoder, which converts the source information into an information sequence. A channel encoder then transforms the information sequence into a coded sequence called a codeword: a new, longer sequence that contains redundancy in the form of parity-check symbols. After that, the codewords from the encoder are fed into a modulator that transforms each bit or symbol into a signal waveform that can be accepted by the channel. The channel is the transmission medium used to carry or store information; examples include wire lines, microwave radio links over free space, satellite links, and fiber-optic channels. In the receiver, a demodulator maps the received sequence to the best estimates of the transmitted codeword, and a channel decoder converts the output of the demodulator into an estimate of the transmitted information sequence.
In his classical paper [27], C. E. Shannon indicated that with proper coding, the effect of channel noise in a memoryless transmission system can be reduced to any desired level, provided that the information transmission rate is less than the channel capacity [27, 28]. To achieve this, error control coding has become an indispensable part of the modern communication system.
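As background (a standard result quoted here for reference, not derived in this thesis), the channel capacity that this statement refers to is, for the band-limited AWGN channel,

```latex
% Capacity of a band-limited AWGN channel with bandwidth W, signal power P,
% and one-sided noise power spectral density N_0 (standard result):
C \;=\; W \log_2\!\left(1 + \frac{P}{N_0 W}\right) \quad \text{bits per second}.
```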
1.1.1 Turbo Codes
Turbo codes, introduced by Berrou et al. [2], are a new paradigm for forward error correction. These codes are one of the first successful attempts at achieving error-correcting performance near the theoretical Shannon bound. For a bit-error probability of $10^{-5}$ and code rate $R = 1/2$, the authors reported performance within about 0.7 dB of the Shannon limit. Turbo codes employ a soft-decision iterative decoding algorithm, which minimizes the error probability using a MAP decoder, unlike the soft-decision Viterbi decoder, which is a maximum-likelihood decoding method.
The fact that turbo codes do not have large minimum distances causes the BER curve to flatten at low BERs. This phenomenon is called the error floor. Because of it, turbo codes are not suitable for applications requiring extremely low BER. In order to achieve very small error probabilities, we introduce Reed-Solomon (RS) codes here.
1.1.2 Reed-Solomon Codes

RS codes are readily decoded using algebraic decoding algorithms (e.g., the Berlekamp algorithm [39]). Soft-decision decoding improves the error performance over hard-decision decoding. However, soft-decision decoding usually leads to a significant increase in decoding complexity. One approach to improving performance while keeping the decoding complexity low is to generate a list of candidate codewords from the received symbols and then choose the candidate codeword with the highest reliability as the output. This is the idea behind the list decoding algorithm.
1.1.3 List Decoding
In [12], Guruswami and Sudan proposed a new list decoding algorithm, which corrects up to $\lceil n - \sqrt{n(n-d)} \rceil - 1$ errors, where $n$ is the block length and $d$ is the minimum distance of an RS code. Extending their work, an efficient soft-decision list decoding algorithm was presented by Ralf Koetter and Alexander Vardy (KVA) [17], which performs significantly better than the algorithm of [12]. The KV list decoder (KVD) makes full use of the reliability information from the received symbols to construct a multiplicity matrix, which indicates the multiplicity of each interpolation point.
1.2 Accomplishments and Contributions
Our accomplishments and contributions, which are elaborated throughout this thesis, can be briefly listed as follows:
• Briefly surveyed current research on list decoding algorithms, including the Sudan I approach [35], the Sudan II approach [12], and the KVA. Furthermore, the Gröbner basis technique proposed by Henry O'Keeffe and Patrick Fitzpatrick is included to implement the interpolation step.
• Studied the adaptive list decoder (ALD) in detail. By adaptively adjusting the list size, we achieve significantly better performance than the KVD with only a marginal increase in computational complexity.
• Modified the adaptive list decoder (MALD) using a different stopping criterion, which reduces computational complexity yet improves performance.
• Discussed the effect of the puncturing scheme on the list decoding algorithm and explained why performance deteriorates when some symbols are punctured.
• Performed simulations to evaluate the performance of both the ALD and the MALD.
• Surveyed current research on turbo codes, including the turbo encoder structure, the recursive systematic convolutional encoder, the interleaver, the MAP decoder, and the turbo decoder structure.
• Established an iterative list decoder (ILD), which greatly reduces the BER of Reed-Solomon codes by applying the turbo structure to the KVD so that the received words can be decoded iteratively.
• Obtained the performance curve of the iterative list decoder and compared it with that of the original list decoder to observe how much improvement is achieved over the AWGN channel.
• Proposed a scheme to combine the ALD and the ILD.
1.3 Thesis outline
Chapter 2 is devoted to the list decoding algorithm. In this chapter, the Sudan I approach, the Sudan II approach and the KVA are investigated. We also compare the advantages and disadvantages of these three approaches. In addition, we briefly describe the Gröbner basis interpolation algorithm for the list decoder. Simulation results are shown at the end of the chapter.
In Chapter 3, the ALD for Reed-Solomon codes is devised. Furthermore, we propose the MALD, which differs from the original decoder by introducing a new stopping criterion. In our original decoder, we compare the output of its component KVD with the hard-decision received word. In the MALD, however, we compare the current output of its component KVD with the previous output. If these two codewords are not consistent, we increase the list size by a predetermined step and decode the received word again. Simulation results show that our ALD has significantly better BER performance than the soft-decision list decoder of Koetter and Vardy for a small list size. Also presented in this chapter are simulation results for punctured RS codes with this new adaptive algorithm and some discussion of the effect of puncturing. Finally, performance comparisons of our ALD and MALD are put forward.
In Chapter 4, basic concepts and the structure of turbo codes are introduced, including the turbo encoder structure, the recursive systematic convolutional (RSC) encoder and the interleaver. In this chapter, we also present the MAP decoder and the turbo decoding principle. Their derivations and applications are given to aid understanding of the turbo decoding structure.
In Chapter 5, we apply the turbo decoding principle to the list decoder (ILD) and demonstrate its superiority over the original algorithm. In this scheme, we first introduce the modified symbol-based MAP decoding algorithm (MMAPD) for non-binary input. After that, we concatenate an MMAPD with the list decoder. After calculating symbol reliabilities from the list decoder, we feed them back to the other decoder, just as in a classical turbo decoder. In addition, we propose a scheme to combine our ALD and ILD. Simulation results are presented at the end of the chapter.
Chapter 6 draws concluding remarks for this thesis and points out some promising future research directions.
Chapter II
List Decoding

2.1 Introduction

Classical decoding, which uses the Hamming distance, can be described as: "if the number of errors is less than half of the minimum distance (i.e., $e \le \lfloor \frac{d-1}{2} \rfloor$), find the unique codeword which differs from the received vector in at most $t$ positions." The list decoding algorithm, in contrast, is: "list all the possible codewords that differ from the received vector in at most $e$ positions" [6].
List decoding was introduced by Elias [10] in the 1950s. However, no efficient list decoding algorithm was known for any error-correcting code until recently, when Sudan, and later Guruswami and Sudan [12], proposed new approaches to list decoding. Their approaches are discussed in the following sections. After their work, Ralf Koetter and Alexander Vardy [17] presented a soft-decision algorithm for list decoding, which significantly improves the performance of RS codes.
Definition 2.1 (List Decoding)
Input: Received word $r$ and a positive integer $e$.
Output: A list of all codewords $c_1, c_2, \ldots, c_m$ of the given code that differ from $r$ in at most $e$ positions.
First, we introduce Reed-Solomon codes and generalized Reed-Solomon codes.
2.2 RS codes and GRS codes
2.2.1 Reed-Solomon Codes
Reed-Solomon (RS) codes form a class of maximum-distance separable (MDS) codes. They were discovered by Irving S. Reed and Gustave Solomon in 1960 [25]. An RS code can be derived from BCH codes over $GF(2^m)$; indeed, an RS code can be regarded as a special case of BCH codes: it is defined as a primitive BCH code over $GF(2^m)$, whose length is therefore $2^m - 1$.
Definition 2.2 Let $\alpha$ be a primitive element of $GF(2^m)$. A $t$-error-correcting RS code of length $2^m - 1$ is defined by its generator polynomial $g(x)$:

$g(x) = (x+\alpha)(x+\alpha^2)\cdots(x+\alpha^{2t})$   (2.1)

Its parity-check matrix is

$H = \begin{bmatrix} 1 & \alpha & \alpha^2 & \cdots & \alpha^{n-1} \\ 1 & \alpha^2 & \alpha^4 & \cdots & \alpha^{2(n-1)} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \alpha^{2t} & \alpha^{4t} & \cdots & \alpha^{2t(n-1)} \end{bmatrix}$   (2.2)

The minimum distance of this code is exactly $2t+1$, and the code is capable of correcting $t$ or fewer symbol errors. Hence, a $t$-error-correcting RS code with symbols from $GF(2^m)$ has the following parameters: length $n = 2^m - 1$; dimension $k = 2^m - 1 - 2t$; minimum distance $d_{min} = 2t + 1$. For example, $m = 4$ and $t = 3$ give the (15, 9, 7) RS code used in the simulations of this thesis.
Another construction of a Reed-Solomon (RS) code over the finite field $GF(q)$ is as follows. Let $\alpha$ be a primitive element of $GF(q)$, let $n = q - 1$, and let $L \subset GF(q)[x]$ denote the set of polynomials of degree less than $k$. We can define an RS code by

$C = \{ (f(1), f(\alpha), \ldots, f(\alpha^{n-1})) : f \in L \}$   (2.3)

which has length $n$ and dimension $k$. Since a non-zero polynomial of degree less than $k$ has at most $k-1$ roots, each non-zero codeword has weight at least $n - (k-1) = n - k + 1$.
Example 2.1 Let us consider the (7,3) RS code over $GF(8)$ as an example. We construct the code by evaluating each message polynomial $f(x) = f_0 + f_1 x + f_2 x^2$ at the points $1, \alpha, \ldots, \alpha^6$. Then its generator matrix is

$G = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & \alpha & \alpha^2 & \alpha^3 & \alpha^4 & \alpha^5 & \alpha^6 \\ 1 & \alpha^2 & \alpha^4 & \alpha^6 & \alpha & \alpha^3 & \alpha^5 \end{bmatrix}$
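To make the evaluation-map construction concrete, here is a minimal Python sketch (ours, not part of the thesis) of the (7,3) RS encoder of Example 2.1. It assumes the primitive polynomial $x^3 + x + 1$ for building $GF(8)$; any other primitive polynomial would give an equivalent code.

```python
# Minimal sketch of evaluation-map RS encoding over GF(8), assuming the
# primitive polynomial x^3 + x + 1 for the field construction.

PRIM_POLY = 0b1011  # x^3 + x + 1
FIELD_SIZE = 8

# Build exp/log tables for GF(8).
EXP = [0] * (2 * (FIELD_SIZE - 1))
LOG = [0] * FIELD_SIZE
x = 1
for i in range(FIELD_SIZE - 1):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & FIELD_SIZE:       # degree-3 overflow: reduce by the primitive polynomial
        x ^= PRIM_POLY
for i in range(FIELD_SIZE - 1, 2 * (FIELD_SIZE - 1)):
    EXP[i] = EXP[i - (FIELD_SIZE - 1)]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def poly_eval(coeffs, point):
    """Evaluate f(x) = coeffs[0] + coeffs[1]*x + ... at `point` (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = gf_mul(acc, point) ^ c  # addition in GF(2^m) is XOR
    return acc

def rs_encode(message, n=7):
    """Encode a k-symbol message as (f(1), f(alpha), ..., f(alpha^(n-1)))."""
    return [poly_eval(message, EXP[i]) for i in range(n)]

# Example: the message polynomial f(x) = 1 + x + x^2 of the (7,3) RS code.
print(rs_encode([1, 1, 1]))  # the corresponding 7-symbol codeword
```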
2.2.2 Hard-decision Decoding of RS codes
Hard-decision decoding of RS codes determines both the locations and the values of the symbol errors. Berlekamp's iterative decoding algorithm [39] was the first efficient decoding algorithm for both binary and non-binary BCH codes. In 1975, Sugiyama, Kasahara, Hirasawa and Namekawa showed that the Euclidean algorithm for finding the greatest common divisor of two polynomials can be used for decoding both BCH codes and RS codes [36]. This Euclidean decoding algorithm is simple in concept and easy to implement. Decoding of RS codes can also be implemented in the frequency domain. The first decoding algorithm in the frequency domain was proposed by Gore [11] and later significantly improved by Blahut [3].

The Berlekamp algorithm and the Euclidean algorithm are known as algebraic decoding algorithms. RS codes are commonly decoded with an algebraic decoding algorithm in many applications to keep the decoding complexity low. To improve the error performance, list decoding can be used.
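All of these algebraic decoders begin by computing the syndromes of the received word. A small sketch, reusing the $GF(8)$ helpers from the encoding example above (the function name `syndromes` is ours, not the thesis's):

```python
# Syndrome computation sketch: S_j = r(alpha^j) for j = 1, ..., 2t, reusing
# gf_mul, EXP, poly_eval and rs_encode from the GF(8) encoding sketch above.

def syndromes(received, t):
    """All-zero syndromes mean `received` is a codeword (no detectable errors)."""
    return [poly_eval(received, EXP[j]) for j in range(1, 2 * t + 1)]

# For the (7,3) RS code of Example 2.1 (t = 2):
print(syndromes(rs_encode([1, 1, 1]), t=2))  # [0, 0, 0, 0] for a clean codeword
```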
2.2.3 Generalized Reed-Solomon Codes
Let $GF(q)$ be a field, and choose nonzero elements $v_1, \ldots, v_n \in GF(q)$ and distinct elements $\alpha_1, \ldots, \alpha_n \in GF(q)$. Set $v = (v_1, \ldots, v_n)$ and $\alpha = (\alpha_1, \ldots, \alpha_n)$. For $0 \le k \le n$, the generalized Reed-Solomon code is given by

$GRS_{n,k}(\alpha, v) = \{ (v_1 f(\alpha_1), \ldots, v_n f(\alpha_n)) : f \in GF(q)[x],\ \deg f < k \}$

Let us now find the generator matrix of $GRS_{n,k}(\alpha, v)$. Any basis $f_1(x), \ldots, f_k(x)$ of the space of polynomials of degree less than $k$ gives rise to a basis of the code. A particularly nice polynomial basis is the set of monomials $1, x, \ldots, x^{k-1}$.
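As an illustrative sketch (ours, not the thesis's), the generator matrix with respect to the monomial basis has entries $v_j \alpha_j^i$; over a prime field this is a two-line computation:

```python
# Sketch of the GRS generator matrix: row i (0 <= i < k) is
# (v_1 * alpha_1^i, ..., v_n * alpha_n^i). For simplicity this works over a
# prime field GF(p) (an assumption); extension fields would reuse gf_mul above.

def grs_generator_matrix(alphas, v, k, p):
    return [[(vj * pow(aj, i, p)) % p for aj, vj in zip(alphas, v)]
            for i in range(k)]

# Example over GF(7): n = 6, k = 2; taking all column multipliers v_j = 1
# recovers an ordinary RS code (cf. Example 2.2 below).
print(grs_generator_matrix([1, 2, 3, 4, 5, 6], [1] * 6, 2, 7))
```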
2.3 Sudan I Approach
In the 1950’s, Elias [10] introduced the original idea about the list decoding However, till recently no non-trivial list decoding algorithms were known for any error
Trang 23correcting code Sudan proposed an efficient algorithm to achieve an error-correcting rate significantly better than that of any decoding algorithm that appeared before
Given that R denotes the information rate of a code, where , Sudan’s approach to list decoding is able to achieve an error rate of
The decoding procedure is divided into two steps:

Step 1. Computation of a non-zero bivariate polynomial $Q(X,Y)$ that passes through a given set of points $\{(x_i, y_i)\}_{i=1}^{n}$ with prescribed parameters, subject to a weighted-degree constraint. That is to say, given the $n$ points, we need to find a non-zero bivariate polynomial $Q(X,Y)$ such that $Q(x_i, y_i) = 0$ for $i = 1, \ldots, n$. This is also referred to as the interpolation step.

Step 2. Finding all polynomials $f(X)$ of degree less than $k$ such that $Y - f(X)$ is a factor of $Q(X,Y)$ and, for an error pattern of weight $e$, $f(x_i) = y_i$ for at least $n - e$ values of $i$. This is also referred to as the reconstruction step.
Example 2.2 illustrates the two steps for a (6, 2) code over $GF(7)$:
1. Interpolation step: we find the bivariate polynomial $Q(X,Y)$ that passes through the received points.
2. Factorization step: the Y-roots $f(X)$ of $Q(X,Y)$ yield the candidate messages. The corresponding codewords are $c_1 = (1,1,1,1,1,1)$ and $c_2 = (5,3,1,6,4,2)$.

From Example 2.2, we can observe that the output polynomials are in fact Y-roots of $Q(X,Y)$.
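To see Definition 2.1 in action on this toy (6, 2) code over $GF(7)$, the brute-force sketch below enumerates every $f(x) = a + bx$ and keeps the codewords within distance $e$ of the received word. The received word `r` is our own choice (not from the thesis), picked so that both $c_1$ and $c_2$ land on the output list.

```python
# Brute-force list decoding (Definition 2.1) for the toy (6, 2) code over GF(7).

Q = 7                        # field size
POINTS = [1, 2, 3, 4, 5, 6]  # evaluation points

def encode(a, b):
    """Codeword for the message polynomial f(x) = a + b*x over GF(7)."""
    return [(a + b * x) % Q for x in POINTS]

def list_decode(r, e):
    """Return every codeword within Hamming distance e of r (brute force)."""
    out = []
    for a in range(Q):
        for b in range(Q):
            c = encode(a, b)
            dist = sum(ci != ri for ci, ri in zip(c, r))
            if dist <= e:
                out.append(((a, b), c, dist))
    return out

r = [1, 1, 1, 6, 4, 2]       # hypothetical received word
for (a, b), c, dist in list_decode(r, e=3):
    print(f"f(x) = {a} + {b}x -> {c} (distance {dist})")
# Prints exactly c1 (f = 1, distance 3) and c2 (f = 5x, distance 2).
```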
While this decoding procedure seems reasonable, it can only handle RS codes of rate less than 1/3. To overcome this limit, Guruswami and Sudan extended the first approach to the Sudan II approach, in which the given points are interpolated with multiplicities.
2.4 Sudan II Approach
In the Sudan I approach, we simply find a bivariate polynomial $Q(X,Y)$ which fits the $n$ points. In the Sudan II approach, each point in $\{(x_i, y_i)\}_{i=1}^{n}$ is a singularity of $Q(X,Y)$. By a singularity of a polynomial $Q(X,Y)$, we mean that $Q(X,Y)$ intersects itself at this point. The multiplicity $m$ is the number of times $Q(X,Y)$ intersects itself. We can choose $m$ to be as large as possible in order to improve performance, but the computational complexity increases with $m$.

To enforce singularities at each point, we first shift the coordinate system so that the point $(x_i, y_i)$ becomes the origin. Then we rewrite the original polynomial as a new polynomial $Q^{(i)}(X,Y) = Q(X + x_i, Y + y_i)$ in the new coordinate system. Finally, we force the coefficients of the monomials whose degree is less than the multiplicity $m$ to be zero. That is to say, for each $i$, the coefficients of all monomials of $Q^{(i)}(X,Y)$ of total degree less than $m$ (defined in (2.9)) are 0. More specifically, the $(w_1, w_2)$-weighted degree of a monomial $X^i Y^j$ is the quantity $i w_1 + j w_2$. For a bivariate polynomial $Q(X,Y)$, the $(w_1, w_2)$-weighted degree is the maximum, over all monomials with non-zero coefficients in $Q(X,Y)$, of the $(w_1, w_2)$-weighted degree of that monomial.
The (1,1)-weighted degree is simply the total degree of a bivariate polynomial. The number of monomials of $(w_x, w_y)$-weighted degree at most $\delta$ is denoted $N_{w_x,w_y}(\delta)$. Thus $N_{w_x,w_y}(\delta) > \frac{\delta^2}{2 w_x w_y}$.

In [12], Guruswami and Sudan have indicated that:
1. The number of unknown monomials of $(1, k-1)$-weighted degree at most $l$ is $N_{1,k-1}(l) > \frac{l^2}{2(k-1)}$.
2. A zero of multiplicity $m$ at each of the $n$ points imposes $n\binom{m+1}{2}$ linear equations on the coefficients of $Q(X,Y)$.

If $N_{1,k-1}(l) > n\binom{m+1}{2}$, the number of unknown monomials is greater than the number of linear equations. Only if the parameters $n$, $k$, $m$ and $l$ satisfy the above inequality does a non-zero polynomial $Q(X,Y)$ exist.
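As a quick numerical check of this counting argument, the sketch below enumerates $N_{w_x,w_y}(\delta)$ directly and finds the smallest adequate weighted degree for our running (15, 9, 7) example with multiplicity $m = 2$ (the parameter choices are ours):

```python
# N_{wx,wy}(delta): the number of monomials X^i Y^j with i*wx + j*wy <= delta,
# counted by plain enumeration over j.

def num_monomials(wx, wy, delta):
    count = 0
    j = 0
    while j * wy <= delta:
        count += (delta - j * wy) // wx + 1  # choices of i for this j
        j += 1
    return count

# For a (15, 9, 7) RS code, k - 1 = 8, so the (1, 8)-weighted degree governs
# the interpolation polynomial. Multiplicity m = 2 at n = 15 points imposes
# 15 * m * (m + 1) / 2 = 45 linear constraints.
l = 1
while num_monomials(1, 8, l) <= 45:
    l += 1
print(l, num_monomials(1, 8, l))  # smallest adequate weighted degree: 23 (48 monomials)
```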
The Sudan II approach can be briefly described as follows. Suppose we are given positive integers $n$, $k$ and $t$ such that

$t^2 > kn$   (2.8)

and $n$ distinct interpolation points $\{(x_i, y_i)\}_{i=1}^{n}$. The first step of polynomial reconstruction concerns the computation of a bivariate polynomial $Q$ satisfying a $(1, k)$-weighted degree constraint and having a zero of multiplicity $m$ at $(x_i, y_i)$ for $i = 1, 2, \ldots, n$. (This means that $Q(X,Y)$ passes through the point $(x_i, y_i)$ $m$ times, and $(x_i, y_i)$ is a singularity of $Q$ of order $m$.) The next step involves finding all polynomials $f(X)$ of degree at most $k$ such that $f$ is a Y-root of $Q$, i.e., $Y - f(X)$ is a factor of $Q(X,Y)$. For each such polynomial $f$, if $f(x_i) = y_i$ for at least $t$ values of $i$, then $f$ is included in the output list.
Given a [10,5,6] RS code, here $n = 10$, $k + 1 = 5$, $d_{min} = 6$, and the code rate is $1/2$.
Fig 2.1 The KVA Structure

2.5 The KV Approach

Let $\pi_{i,j}$ denote the entries of the reliability matrix $\Pi$, and let $\mathcal{X}$ and $\mathcal{Y}$ be the finite input and output alphabets, respectively, of an AWGN channel. Suppose $\alpha_i \in \mathcal{X}$ and $y_i \in \mathcal{Y}$.
Definition 2.4 Given a multiplicity matrix $M$ with nonnegative integer entries $m_{i,j}$, we define the cost of $M$ as

$\mathcal{C}(M) = \frac{1}{2}\sum_{i,j} m_{i,j}(m_{i,j} + 1)$

It is easy to see that the computation of the polynomial $Q_M(x,y)$ (the interpolation result) is equivalent to solving a system of linear equations. Since a given multiplicity $m$ imposes $\frac{1}{2}m(m+1)$ linear constraints on the coefficients of $Q_M(x,y)$, the cost $\mathcal{C}(M)$ is in fact the total number of linear equations. We can always find a solution to the soft interpolation task if the $(1, k-1)$-weighted degree of $Q_M(x,y)$ is chosen so that the number of degrees of freedom is greater than the number of linear constraints. Thus we can define the function

$\Delta_{1,k-1}(v) = \min\{\delta \in \mathbb{Z} : N_{1,k-1}(\delta) > v\}$   (2.15)

Notice that $\Delta_{1,k-1}(v) \le \sqrt{2(k-1)v} < \sqrt{2kv}$.
The list size of the soft-decision decoder is bounded from above by $\deg_{0,1} Q_M(x,y)$. We generate the multiplicity matrix from the reliability matrix using the greedy iterative algorithm proposed in [17]. Given the point set and the multiplicity matrix, we can find a bivariate polynomial $Q_M(x,y)$ which has a zero of multiplicity at least $m_{i,j}$ at the corresponding point for every pair $(i, j)$. The next step of the soft-decision list decoding algorithm is to find the Y-roots of $Q_M(x,y)$, just as in the Sudan II approach. After generating a list of candidate codewords, we adopt the product distribution of each codeword $c = (c_1, \ldots, c_n)$, which is defined by

$P(c) = \prod_{j=1}^{n} \Pi_{c_j, j}$   (2.17)

where $\Pi$ is the reliability matrix. We choose as output the codeword from the generated list which has the maximal product distribution (the most reliable candidate).
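In code, this selection step is a one-line maximization; the sketch below assumes $\Pi$ is indexed as `Pi[symbol][position]`:

```python
# Final selection step sketch: among candidate codewords, pick the one maximizing
# the product distribution P(c) = prod_j Pi[c_j][j] of Eq. (2.17).

import math

def product_distribution(c, Pi):
    """Reliability score of codeword c under reliability matrix Pi."""
    return math.prod(Pi[c_j][j] for j, c_j in enumerate(c))

def most_reliable(candidates, Pi):
    return max(candidates, key=lambda c: product_distribution(c, Pi))
```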
Algorithm 2.1: The greedy iterative algorithm
Input: Reliability matrix $\Pi$ and a positive integer $s$, derived from the cost of $M$ and indicating the total number of interpolation points
Output: Multiplicity matrix $M$

  $\Pi^* := \Pi$; $M := 0$;
  while $s > 0$ do
    find the position $(i, j)$ of the maximum entry $\pi^*_{i,j}$ in $\Pi^*$;
    $\pi^*_{i,j} := \pi_{i,j} / (m_{i,j} + 2)$;
    $m_{i,j} := m_{i,j} + 1$;
    $s := s - 1$;
  return multiplicity matrix $M$
Initially, we set the multiplicity matrix $M$ to be the all-zero matrix. The positive integer $s$ indicates the number of interpolation points, i.e., the number of times entries of $M$ are updated. Based on the reliability matrix calculated from the received word, we find the position $(i, j)$ of the maximal entry. Finally, we update the reliability matrix $\Pi^*$ and the multiplicity matrix $M$ using the formulas above.
Table 2.2 The Greedy Iterative Algorithm
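A direct Python transcription of Algorithm 2.1 (a sketch; the update rule follows the listing above):

```python
# Greedy multiplicity assignment sketch: `pi` is the reliability matrix as a
# list of lists and `s` the total number of interpolation points to assign.

import copy

def greedy_multiplicity(pi, s):
    rows, cols = len(pi), len(pi[0])
    m = [[0] * cols for _ in range(rows)]   # multiplicity matrix M
    pi_star = copy.deepcopy(pi)             # working copy of the reliability matrix
    while s > 0:
        # Position of the largest remaining ratio pi[i][j] / (m[i][j] + 1).
        i, j = max(((r, c) for r in range(rows) for c in range(cols)),
                   key=lambda rc: pi_star[rc[0]][rc[1]])
        pi_star[i][j] = pi[i][j] / (m[i][j] + 2)  # ratio after the increment below
        m[i][j] += 1
        s -= 1
    return m
```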
Example 2.5 From the output of the AWGN channel, we derive the reliability matrix $\Pi$:

0.01  0.0025  0.05  0.14  0.20  0  0  0  0  0
0.06  0.0025  0.09  0.14  0.05  0  0  0  0  0
  ⋮
0.01  0.0012  0.61  0.44  0.40  0  0  0  0  0
0.90  0.0038  0.10  0.21  0.15  0  0  0  0  0
2.6 Gröbner basis Interpolation Algorithm
Let $F$ be any field, and let $R = F[x_1, x_2, \ldots, x_n]$ denote the polynomial ring. We consider the following problems:
a. Given $f_1, f_2, \ldots, f_m \in R$, decide if they have a common zero in $F^n$.
b. Given $g, f_1, f_2, \ldots, f_m \in R$, decide if $g$ can be written as $g = h_1 f_1 + \cdots + h_m f_m$ for some $h_1, \ldots, h_m \in R$.

Let $\langle f_1, \ldots, f_m \rangle$ denote the set of all such polynomials $h_1 f_1 + \cdots + h_m f_m$ for all possible $h_1, \ldots, h_m \in R$, which is an ideal of $R$. In general, a nonempty subset $I \subseteq R$ is called an ideal if it is closed under addition and under multiplication by elements of $R$. An ideal can have many bases, and different bases may have different numbers of elements. Also, the elements of a basis for an ideal need not be linearly independent over $F$, but a basis can easily be reduced to one with linearly independent elements.
The ideal $I = \langle f_1, \ldots, f_m \rangle$ captures the common zeros of $f_1, f_2, \ldots, f_m$ in that: (i) every common zero of $f_1, f_2, \ldots, f_m$ is a zero of every $g \in I$; (ii) if $I$ has another basis, say $I = \langle g_1, \ldots, g_{m'} \rangle$, then a point in $F^n$ is a common zero of $f_1, f_2, \ldots, f_m$ if and only if it is a common zero of $g_1, g_2, \ldots, g_{m'}$.
Definition 2.5 A Gröbner basis for an ideal $I = \langle f_1, \ldots, f_m \rangle \subseteq R$ is a basis for $I$ from which we can "see" the common zeros of the ideal, so that the problems above can be easily solved (when $F$ is algebraically closed). Two examples illustrate the idea:

1. Suppose that $f_1, f_2, \ldots, f_m \in R$ are linear. Applying Gaussian elimination gives a "triangular" system $g_1, g_2, \ldots, g_s$. Then $\langle f_1, \ldots, f_m \rangle = \langle g_1, \ldots, g_s \rangle$. Therefore, we can conclude that $g_1, g_2, \ldots, g_s$ form a Gröbner basis.

2. Suppose that $f_1, f_2 \in F[x]$ are univariate. The Euclidean algorithm yields $d(x) = \gcd(f_1, f_2)$, and $\langle f_1, f_2 \rangle = \langle d(x) \rangle$. Here, $d(x)$ is a Gröbner basis for $\langle f_1, f_2 \rangle$.

In some sense, a Gröbner basis is a common generalization of Gaussian elimination and the Euclidean algorithm.
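For illustration (not part of the thesis), SymPy can compute such a basis for a small bivariate ideal, showing the "triangularization" effect:

```python
# Computing a Groebner basis for a small bivariate ideal with SymPy.

from sympy import groebner, symbols

x, y = symbols('x y')
f1 = x**2 + y**2 - 1   # circle
f2 = x - y             # line

# Lexicographic order "triangularizes" the system, like Gaussian elimination.
basis = groebner([f1, f2], x, y, order='lex')
print(basis)  # GroebnerBasis([x - y, 2*y**2 - 1], x, y, ...)
```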
Trang 34From [23], we see that for non-zerof ∈R, we can write f =a x1 1+a x2 2+ + a x r r, where thea iare non-zero constants and thex are terms satisfying i x1>x2 > > The x r leading term of f isx and the leading coefficient is1 We say that a Gröbner basis is strictly ordered if there are no duplicates among its leading terms and its elements are
in increasing order of leading term Any Gröbner basis can be transformed into a
strictly ordered form Here we use ord(B) to indicate the Gröbner basis B in strictly
In the initialization, q bivariate polynomials are provided as the standard Gröbner
basis Then in each iteration, we compute coefficients of each bivariate polynomial, which is shifted from
i
m (i=1, ,n)
( , )x y to (x+x y i, +y i) Then according to the coefficients of
a specific term in each polynomial, the Gröbner basis is re-computed and re-ordered
In particular, the number of the iteration steps is determined by the number of interpolation points and their multiplicities
Gröbner basis Interpolation Algorithm
Input: points $\{(x_k, y_k) : 1 \le k \le n\}$ with multiplicities $m_k$
Output: a Gröbner basis $B$ of the solution module; the interpolation polynomial is taken as $\varphi(Q)$, the first element of ord($B$)
2.7 Simulation Results
Fig 2.2 FER performance for (15,9,7) RS code with the KVA
Figure 2.2 shows the frame error rate (FER) performance of the (15,9,7) RS code using the KVD and the Berlekamp-Welch decoder operating on an AWGN channel with 16-QAM modulation. Since RS codes are mostly used when better FER performance is sought, we compare the KVD and the Berlekamp-Welch decoder in terms of FER. In this scenario, we set the maximum list size of the KVD to 1. For the purpose of comparison, the error performance of uncoded 16-QAM is also shown. From this figure, we can make the following observations:

• The KVD has better performance than the hard-decision decoder. We can observe that the KVD with maximum list size 1 achieves a 1 dB coding gain over the Berlekamp-Welch decoder at a FER of $10^{-4}$ and a 4 dB coding gain over uncoded 16-QAM at the same FER.

• The gap between the KVD and the Berlekamp-Welch algorithm is not significant. This is because we set the maximum list size of the KVD to 1. The list size measures the error-correction capacity of the decoder: a larger list size improves BER performance at the expense of higher computational complexity.
Chapter III
Adaptive List Decoding

3.1 Introduction

For an $[n, k+1, d]$ RS code, if the number of mismatched symbols between the KVD output and the hard-decision received word exceeds $n - t$, where $t$ is the smallest positive integer satisfying (2.8), we increase the maximum list size of its component KVD by a predefined step (usually 1) and decode again. This is the basic idea behind our ALD. In addition, we present the modified adaptive list decoder (MALD) in the following sections, which compares the two decoded results from two consecutive iterations. Simulation results show that our algorithms can achieve much better results than the KVD for a low fixed list size. Furthermore, simulation results indicate that if we puncture some symbols to increase the code rate, our scheme still works well, and the performance degradation due to puncturing can be compensated by further adaptive steps.
Fig 3.1 ALD Structure
3.2 Adaptive List Decoder (ALD)
A list decoder with maximum list size $l$ can be implemented to run using $O(l^6 k^3)$ operations. Hence, if we simply increase the list
size, we will increase the computational complexity dramatically. We thus propose an ALD here to achieve much better BER performance with only a modest increase in computational complexity. This adaptive approach gives us a good trade-off between computational complexity and BER performance. In [12] and [17], the decoding radius, and thus the complexity, is fixed, but frequently the actual list size is significantly smaller than its upper bound based on the decoding radius. This means that a smaller list size often suffices, leading naturally to the idea of adapting the bound on the list size (or decoding radius) as needed. The computational complexity of the ALD is clearly less than that of the decoders in [12] and [17] with fixed decoding radius.
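A high-level sketch of this adaptive loop is given below; `kv_decode` stands in for the component KV list decoder and its signature is our assumption, not defined in the thesis excerpt.

```python
# High-level sketch of the ALD loop of Section 3.1.

def adaptive_list_decode(r_hard, reliability, n, t,
                         kv_decode, max_list_size=16, step=1):
    """Re-decode with a growing list size until the ALD stopping criterion holds."""
    l = 1
    while l <= max_list_size:
        c = kv_decode(reliability, max_list=l)   # best codeword for this list size
        mismatches = sum(ci != ri for ci, ri in zip(c, r_hard))
        if mismatches <= n - t:                  # stopping criterion of Section 3.1
            return c
        l += step                                # enlarge the list and try again
    return c                                     # give up at the size cap
```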
3.2.1 ALD Key Steps
First, we distinguish three concepts:
The list size of the ALD: the maximum list size of its component KVD.
The maximum list size of the ALD: the upper bound up to which the maximum list size of its component KVD can be increased.
The average list size of the ALD: the average of the maximum list size of its component KVD when the ALD stops.