
DOCUMENT INFORMATION

Basic information

Title: Turbo Code: Turbo Code Structure and Iteration Turbo Decoding Algorithm
Authors: Nguyen Phat Dat, Pham Van Truong An, Pham Van Nam Anh, Dinh Huy Hoang
Supervisor: Tran Thi Thao Nguyen
University: Vietnam National University Ho Chi Minh City, University of Science
Major: Digital Communication
Document type: Course report
School year: 2022-2023
City: Ho Chi Minh City
Pages: 23
File size: 917.7 KB


CONTENT


VIETNAM NATIONAL UNIVERSITY

HO CHI MINH CITY UNIVERSITY OF SCIENCE

COURSE REPORT: DIGITAL COMMUNICATION

School year 2022-2023

TURBO CODE

GROUP 4

Nguyen Phat Dat – 20207090

Pham Van Truong An – 20207015

Pham Van Nam Anh – 20207017

Dinh Huy Hoang – 20207092

CLASS: 20DTV-CLC-1

LECTURER: TRAN THI THAO NGUYEN


TABLE OF CONTENTS

I THE CONCEPT OF TURBO CODES
1.1 The concept of turbo codes
1.1.1 Likelihood functions
1.1.2 The case of two signal classes
1.1.3 Log-likelihood ratio
1.1.4 The principle of iterative decoding
1.2 Log-likelihood algebra
1.2.1 Two-dimensional single-parity code
1.2.2 Product code
1.2.3 Extrinsic likelihood
II TURBO CODE STRUCTURE AND ITERATIVE TURBO DECODING ALGORITHM
2.1 Introduction
2.2 Encoder structure and iterative decoder
2.3 Turbo decoding algorithm
2.3.1 Overview of decoding algorithms
2.3.2 MAP algorithm
2.3.3 The principle of the soft-output Viterbi decoder (SOVA)
III TURBO CODE APPLICATION IN MOBILE COMMUNICATIONS
3.1 Limitations when applying Turbo codes to multimedia communications
3.2 Transmission channel characteristics
3.3 Recommendations when applying Turbo codes to multimedia communications
REFERENCES
SKILLS REPORT


I THE CONCEPT OF TURBO CODES:

The concatenated code scheme was first proposed by Forney as a method of building powerful codes by combining two or more simple component codes. The resulting codes have a much greater error-correcting capability than other error-correcting codes, and they have a structure that allows the complex decoding task to be broken into lighter stages. Concatenated code schemes are mostly used in power-limited systems such as transmitters on deep-space probes.

1.1 The concept of turbo codes:

1.1.1 Likelihood functions:

The mathematical framework for hypothesis testing rests on Bayes' theorem. For the communications field, applications involving the AWGN channel are of greatest interest, and the most useful form of Bayes' theorem expresses the a posteriori probability (APP) of a decision, in terms of a continuous-valued random variable x, as:

P(d = i | x) = p(x | d = i) P(d = i) / p(x),  i = 1, ..., M    (1.1)

and

p(x) = sum over i = 1, ..., M of p(x | d = i) P(d = i)    (1.2)

where P(d = i | x) is the APP and d = i represents the data d belonging to the i-th signal class from a set of M classes. Further, p(x | d = i) is the probability density function (pdf) of the received continuous-valued data-plus-noise signal x, conditioned on the signal class d = i. Finally, P(d = i), called the a priori probability, is the probability of occurrence of the i-th signal class.

1.1.2 The case of two signal classes:

Binary digits 1 and 0 are represented by the electrical levels +1 and -1, respectively. The variable d is used to represent the transmitted data bit, whether it appears as an electrical level or as a logical element; sometimes one format is more convenient than the other, and the difference should be clear from the context. Under this convention, the binary 0 (or the level -1) is the null element under addition.


Figure 1.1: Likelihood functions

The decision rule known as maximum a posteriori (MAP) can be thought of as a minimum-probability-of-error rule; it takes the a priori probabilities of the data into account. The general expression for the MAP rule in terms of APPs is:

choose H1 (d = +1) if P(d = +1 | x) > P(d = -1 | x); otherwise choose H2 (d = -1)    (1.3)

Equation (1.3) states that we choose hypothesis H1 (d = +1) if the APP P(d = +1 | x) is greater than the APP P(d = -1 | x); conversely, we choose hypothesis H2 (d = -1). Using Bayes' theorem of equation (1.1), the APPs in equation (1.3) can be replaced by their equivalent expressions, yielding:

choose H1 if p(x | d = +1) P(d = +1) > p(x | d = -1) P(d = -1); otherwise choose H2    (1.4)

Here the pdf p(x), which appears on both sides of the inequality in (1.3), has been canceled. Equation (1.4) is generally expressed in terms of a ratio, yielding the so-called likelihood ratio test:

[ p(x | d = +1) / p(x | d = -1) ] · [ P(d = +1) / P(d = -1) ] > 1 for H1; otherwise H2    (1.5)

1.1.3 Log-likelihood ratio:

Taking the logarithm of the likelihood ratio gives the log-likelihood ratio (LLR):

L(d | x) = log [ P(d = +1 | x) / P(d = -1 | x) ] = log [ p(x | d = +1) / p(x | d = -1) ] + log [ P(d = +1) / P(d = -1) ]    (1.6)

Thus:


L(d|x) = L(x|d) + L(d) (1.8)

Here L(x | d) is the LLR of the test statistic x, obtained by measuring the channel output x under the alternative conditions that d = +1 or d = -1 may have been transmitted, and L(d) is the a priori LLR of the data bit d.
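For BPSK of unit amplitude over AWGN with noise variance σ², the channel LLR takes the well-known form Lc(x) = 2x/σ² (the same measurement reappears in equation (1.13a) below). A minimal sketch of equation (1.8) in that setting, with all names illustrative:

```python
import numpy as np

# Channel LLR for BPSK (+1/-1) over AWGN: Lc(x) = 2*x / sigma^2.
# A positive LLR favors d = +1, a negative one favors d = -1;
# |Lc| measures how reliable the observation is.
def channel_llr(x, sigma2):
    return 2.0 * x / sigma2

rng = np.random.default_rng(0)
d = rng.choice([-1.0, 1.0], size=8)                # transmitted bipolar bits
sigma2 = 0.5                                       # noise variance (assumed known)
x = d + rng.normal(0.0, np.sqrt(sigma2), size=8)   # received samples

L_c = channel_llr(x, sigma2)                # soft channel values L(x|d)
L_prior = np.zeros_like(L_c)                # equally likely bits: L(d) = 0
L_posterior = L_c + L_prior                 # equation (1.8): L(d|x) = L(x|d) + L(d)
d_hat = np.where(L_posterior >= 0, 1, -1)   # hard decision = sign of the LLR
```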

1.1.4 The principle of iterative decoding:

In conventional receivers, the demodulator is usually designed to make soft decisions that are then passed to the decoder. The error-performance improvement of systems using such soft decisions, compared with hard decisions, is estimated at about 2 dB in AWGN. Such a decoder can be called a soft-input/hard-output decoder, because the final decoding process at the decoder output must terminate in bits (hard decisions). With turbo codes, however, the output of one component decoder feeds the input of another, so a soft-input/soft-output (SISO) decoder is needed.
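The direction of this gain can be checked with a quick Monte Carlo sketch. The rate-1/3 repetition code below is a deliberately simple stand-in, not a code from this report, so the measured gap only illustrates the soft-decision advantage rather than the full 2 dB figure:

```python
import numpy as np

# Rough check of the soft- vs hard-decision gap using a rate-1/3
# repetition code over AWGN (an illustrative stand-in for a real code).
rng = np.random.default_rng(1)
n_bits, reps, sigma = 200_000, 3, 1.0

d = rng.choice([-1.0, 1.0], size=n_bits)
x = np.repeat(d, reps) + rng.normal(0.0, sigma, size=n_bits * reps)
x = x.reshape(n_bits, reps)                 # one row per bit: its 3 noisy copies

# Hard decisions first, then a majority vote over the 3 copies.
hard = np.sign(x)
d_hard = np.sign(hard.sum(axis=1))

# Soft combining: sum the received values (equivalently, sum the LLRs).
d_soft = np.sign(x.sum(axis=1))

print("hard-decision BER:", np.mean(d_hard != d))
print("soft-decision BER:", np.mean(d_soft != d))
```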

We illustrate such a SISO decoder for a systematic code below:

Figure 1.2: Soft-input/soft-output decoder (for a systematic code); the extrinsic output is fed back for the next iteration.

1.2 Log-likelihood algebra:

To best explain the iterative feedback of soft decoder outputs, we use the concept of log-likelihood algebra. For statistically independent data bits, the sum of two log-likelihood ratios (LLRs) is defined as:

L(d1) [+] L(d2) ≜ L(d1 ⊕ d2) = log [ (e^L(d1) + e^L(d2)) / (1 + e^(L(d1) + L(d2))) ]    (1.9)


with the frequently used approximation:

L(d1) [+] L(d2) ≈ (-1) · sgn[L(d1)] · sgn[L(d2)] · min( |L(d1)|, |L(d2)| )    (1.10)

Thus, from the definition of the LLR we have, for i = 1, 2:

e^L(di) = P(di = +1) / P(di = -1)

and therefore:

P(di = +1) = e^L(di) / (1 + e^L(di))  and  P(di = -1) = 1 / (1 + e^L(di))

On the other hand:

P(d1 ⊕ d2 = +1) = P(d1 = +1) P(d2 = -1) + P(d1 = -1) P(d2 = +1)

Here d1 and d2 are statistically independent data bits taking the values +1 and -1, corresponding to logic levels 1 and 0. Under this convention, d1 ⊕ d2 yields -1 when d1 and d2 have the same value (both +1 or both -1) and +1 when they have different values.

Therefore, substituting these probabilities into the definition of L(d1 ⊕ d2) yields equation (1.9).


The sum of two LLRs, denoted by the operator [+], is thus defined as the LLR of the modulo-2 sum of the underlying statistically independent data bits. Equation (1.10) is an approximation of equation (1.9) that will be very useful in the numerical example considered later; the log-likelihood addition sign [+] denotes the operation defined in equation (1.9).

The sum of LLRs, as described by equation (1.9) or (1.10), produces some interesting results when one of the LLRs is very large or very small:

L(d) [+] ∞ = -L(d)

and

L(d) [+] 0 = 0

These properties are used in the following decoding procedure for a two-dimensional code (developed in the sections below):

1. Set the a priori LLR L(d) = 0 for the first decoding pass.

2. Decode the horizontal rows and, using equation (1.8), generate the horizontal extrinsic LLR: Leh(d̂) = L(d̂) - Lc(x) - L(d).

3. Set L(d) = Leh(d̂) for the vertical decoding of step 4.

4. Decode the vertical columns and, using equation (1.8), generate the vertical extrinsic LLR: Lev(d̂) = L(d̂) - Lc(x) - L(d).

The process then repeats from step 2, and after the final iteration the soft output is formed as in equation (1.14) below.


1.2.1 Two-dimensional single-parity code:

At the encoder, the data bits and parity bits within a given row or column satisfy, expressed as binary digits (1, 0):

pij = di ⊕ dj    (1.11)

and

di = dj ⊕ pij    (1.12)

where ⊕ denotes modulo-2 addition. The transmitted bits are represented by the sequence of data bits followed by the parity bits.

At the receiver input, the noise-corrupted bits are represented by the sequences {xi} and {xij}, where xi = di + n for each received data bit, xij = pij + n for each received parity bit, and n is an independent noise sample. The indices i and j indicate position in the encoder output array; however, it is often more convenient to write the received sequence as {xk}, where k is a time index. Both conventions will be used. Using the relationships developed above, and assuming AWGN, the LLR of the channel measurement of a signal xk received at time k is:

Lc(xk) = log [ p(xk | dk = +1) / p(xk | dk = -1) ] = 2 xk / σ²    (1.13a)

Thus we have explored the theoretical basis of the log-likelihood ratio (LLR), the concept on which the turbo decoding structure is built. To see the effect of the above algorithm, let us look at the example of a product code (that is, a code built on a two-dimensional array).

1.2.2 Product code:

The structure can be described as an array of data arranged in rows and columns. The different portions of the structure are labeled d for data, ph for horizontal parity (along the rows), and pv for vertical parity (along the columns). In effect, the data are encoded by two codes: a horizontal code and a vertical code.

Figure 1.4: Two-dimensional product code

Figure 1.5a: Encoder output bit positions

Figure 1.5b: Log-likelihood ratios at the decoder input

1.2.3 Extrinsic likelihood:

Here the term Le(d̂), the extrinsic LLR, represents the knowledge contributed by the code (corresponding to the reception of the other data bits and their a priori probabilities, combined with the reception of the corresponding parity bits). In general, the soft output for the received signal corresponding to data bit d is:

L(d̂) = Lc(x) + L(d) + Le(d̂)    (1.14)


II TURBO CODE STRUCTURE AND ITERATIVE TURBO DECODING ALGORITHM:

2.1 Introduction:

The Viterbi algorithm (VA) is a decoding algorithm that searches for the state sequence most likely to match the received signal sequence. Turbo codes were first introduced in 1993; a turbo code consists of two parallel recursive systematic convolutional codes (RSC) combined with an interleaver, together with an iterative decoding algorithm. Conventional turbo decoding algorithms share the same characteristic: they combine iterative decoding with soft-input/soft-output (SISO) component decoders. The basic VA and MAP decoding algorithms differ only in their optimality criterion.

The MAP decoding algorithm differs from the VA in that it finds, at each time instant, the individual state (and bit) with the greatest likelihood given the received signal sequence, rather than the most likely state sequence as a whole.

However, if the output is a soft quantity, the quality can be improved significantly. The intrinsic difference between them is that the state estimates of the VA must form a connected path through the trellis, while the states estimated by the MAP algorithm need not be connected into a path. Consequently, the output of one decoder can serve as prior information for the other.

We can briefly describe the MAP algorithm for SISO decoding as follows. For each information bit dk we need to determine the a posteriori probability P(dk | y), given the received sequence y. By the formula for joint probability:

P(dk | y) = p(dk, y) / p(y)

Here P(dk | y) is the a posteriori probability expressed through the pdf, and f(dk) is the a priori probability of dk. Normally we assume that, initially at the decoder input, all bits are equally likely, so that:

f(dk = 1) = f(dk = 0)

Thus, determining the bit that maximizes the a posteriori probability P(dk | y) is equivalent to determining the bit that maximizes the joint probability p(dk, y).

Turbo codes provide error-control quality within a few tenths of a dB of the Shannon limit; soon after their introduction, the achievable quality was improved by about 2 dB thanks to refinements of the turbo code. Turbo codes are increasingly applied in radio communication systems as bandwidth requirements grow with the demands of data communication services.

Thus, a turbo code has two essential ingredients: parallel concatenated convolutional encoding and iterative decoding.

2.2 Encoder structure and iterative decoder:

A turbo code is built from at least two RSC codes connected in parallel, in combination with an interleaver, and decoded with a SISO algorithm:

Figure 2.1: Turbo code encoding diagram

The input data sequence is fed directly to convolutional encoder RSC1 to generate one sequence of parity bits, and through the interleaver to RSC2 to generate a second one. The systematic and parity sequences are multiplexed, with puncturing of the parity streams used to raise the code rate. The encoder output is modulated and transmitted over the channel as shown in Figure 2.1. Figure 2.2 gives an example of an RSC encoder, in which the input sequence is passed straight through to the output as the systematic bit stream; a sketch of this encoder structure follows the figure captions below.

Figure 2.2: RSC encoder

Figure 2.3: State diagram (a) and trellis diagram (b) of the convolutional code
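The structure just described can be sketched in a few lines. The (7,5) generator pair, the random interleaver, and the alternating puncturing pattern are common textbook choices assumed here, not parameters given in the report:

```python
import numpy as np

# Sketch of a parallel concatenated (turbo) encoder: two identical RSCs,
# the second fed through an interleaver. Assumed generators: feedback
# 1 + D + D^2, feedforward 1 + D^2 (the classic (7,5) RSC).
def rsc_encode(bits):
    s1 = s2 = 0                      # register: a_{k-1}, a_{k-2}
    parity = []
    for d in bits:
        a = d ^ s1 ^ s2              # feedback: a_k = d_k + a_{k-1} + a_{k-2}
        parity.append(a ^ s2)        # feedforward 1 + D^2: p_k = a_k + a_{k-2}
        s1, s2 = a, s1
    return np.array(parity)

def turbo_encode(d, interleaver):
    p1 = rsc_encode(d)               # parity stream from RSC1
    p2 = rsc_encode(d[interleaver])  # parity stream from RSC2 (interleaved input)
    # Puncture: keep every systematic bit, alternate parity bits -> rate 1/2.
    out = []
    for k in range(len(d)):
        out.append(d[k])
        out.append(p1[k] if k % 2 == 0 else p2[k])
    return np.array(out)

rng = np.random.default_rng(0)
N = 16
d = rng.integers(0, 2, size=N)
pi = rng.permutation(N)              # random interleaver (illustrative)
print(turbo_encode(d, pi))
```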

The decoder uses iterative decoding, so the extrinsic information of one SISO decoder is used as a priori information for the other. Since the encoder uses an interleaver, the decoder contains the same interleavers together with the corresponding deinterleavers. At the decoder input, a demultiplexer separates the systematic and parity sequences and routes them to the appropriate SISO decoders. A SISO decoder has a soft input and a soft output: the input consists of the channel reliability values and the a priori information, and the output consists of the a posteriori information L(d) together with the remaining term, called extrinsic information.

Accordingly, the iterative decoder has the following structure:


Figure 2.4: Iterative decoding diagram

As shown in the diagram above, each component decoder is a SISO decoder. The coded sequence at the output of the encoder's multiplexer, after passing over the transmission channel, is fed to the demultiplexer. The demultiplexed sequences are routed to decoder 1 and, through the interleaver, to decoder 2; the extrinsic output of each decoder is interleaved or deinterleaved as appropriate and passed to the other, and the final output is taken after the deinterleaver.

2.3 Turbo decoding algorithm:

There are 2 types of algorithms:

+ MAP decoding algorithm

+ SOVA decoding algorithm


2.3.1 Overview of decoding algorithms:

Turbo decoding uses serially connected decoders, because a serial connection scheme can pass information between the connected decoders, whereas decoders in a parallel connection scheme mostly decode independently of one another. The PCCC (parallel concatenated convolutional code) has a parallel encoding structure, but its decoding process is based on the serial connection scheme. Figure 2.5 presents an overview of the decoding algorithms based on the trellis diagram.

Figure 2.5: Overview of decoding algorithms

The first family is the MAP algorithm, also known as the BCJR algorithm (Bahl-Cocke-Jelinek-Raviv, after the four people who discovered it). The second family involves algorithms that decode the most likely sequence (maximum likelihood, ML) in order to minimize sequence errors. Besides these two families of decoding algorithms, there are several other iterative decoding techniques.

2.3.2 MAP algorithm:

A turbo decoder is a combination of several component decoders (usually two) operating in an iterative (feedback) loop. Modified Viterbi algorithms can provide soft output values (reliability information) to a soft-value combiner used to resolve the output bits. Another algorithm of interest is the symbol-by-symbol maximum a posteriori (MAP) algorithm published by Bahl.
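As an illustration of the symbol-by-symbol MAP computation, here is a compact probability-domain BCJR sketch for a toy 2-state RSC with generator 1/(1+D), i.e. parity p_k = d_k ⊕ p_(k-1). The code, channel model, and every name in it are assumptions made for this example, not details from the report:

```python
import numpy as np

# BCJR (MAP) for a toy 2-state RSC: state = previous parity bit,
# parity p_k = d_k XOR p_{k-1}; BPSK (0/1 -> -1/+1) over AWGN.
def bcjr_llr(x_sys, x_par, sigma2, L_apriori):
    N, n_states = len(x_sys), 2
    nxt = lambda s, d: d ^ s                 # next state = new parity bit
    sym = lambda b: 2 * b - 1                # 0/1 -> -1/+1
    # gamma[k, s, d]: branch probability at time k, state s, input bit d
    gamma = np.zeros((N, n_states, 2))
    for k in range(N):
        e = np.exp(L_apriori[k])
        pr_d = (1.0 / (1.0 + e), e / (1.0 + e))   # prior P(d=0), P(d=1)
        for s in range(n_states):
            for d in range(2):
                p = d ^ s                    # parity bit on this branch
                dist = (x_sys[k] - sym(d))**2 + (x_par[k] - sym(p))**2
                gamma[k, s, d] = pr_d[d] * np.exp(-dist / (2 * sigma2))
    # Forward (alpha) and backward (beta) recursions, normalized.
    alpha = np.zeros((N + 1, n_states)); alpha[0, 0] = 1.0
    for k in range(N):
        for s in range(n_states):
            for d in range(2):
                alpha[k + 1, nxt(s, d)] += alpha[k, s] * gamma[k, s, d]
        alpha[k + 1] /= alpha[k + 1].sum()
    beta = np.ones((N + 1, n_states)) / n_states   # no trellis termination
    for k in range(N - 1, -1, -1):
        for s in range(n_states):
            beta[k, s] = sum(gamma[k, s, d] * beta[k + 1, nxt(s, d)] for d in range(2))
        beta[k] /= beta[k].sum()
    # A posteriori LLR of each data bit: log P(d=1|y) / P(d=0|y).
    L = np.zeros(N)
    for k in range(N):
        num = sum(alpha[k, s] * gamma[k, s, 1] * beta[k + 1, nxt(s, 1)] for s in range(2))
        den = sum(alpha[k, s] * gamma[k, s, 0] * beta[k + 1, nxt(s, 0)] for s in range(2))
        L[k] = np.log(num / den)
    return L

# Example use with all-zero data (all symbols -1) over a noisy channel:
rng = np.random.default_rng(2)
N, sigma2 = 10, 0.8
x_sys = -1 + rng.normal(0, np.sqrt(sigma2), N)
x_par = -1 + rng.normal(0, np.sqrt(sigma2), N)   # parity of all-zero input is 0
print(bcjr_llr(x_sys, x_par, sigma2, np.zeros(N)))   # mostly negative LLRs
```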


Figure 2.6: MAP iterative decoder

The decoding algorithm is performed as follows:

1. Demultiplex the received signal into the two sequences corresponding to decoder 1 and decoder 2.

2. In the first iteration, the a priori information of decoder 1 is set to 0. After decoder 1 has produced its extrinsic information, it is interleaved and passed to decoder 2, where it acts as that decoder's a priori information. Once decoder 2 has produced its extrinsic information, the iteration ends; the extrinsic information of decoder 2 is deinterleaved and returned to decoder 1 as a priori information.

3. The decoding process repeats until the specified number of iterations has been performed.

4. After the last iteration, the estimate is obtained by deinterleaving the information at decoder 2 and making a hard decision (see the sketch after this list).
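A skeleton of steps 1-4, reusing the illustrative bcjr_llr sketch from section 2.3.2 as the SISO component; the interleaver handling and the iteration count are assumptions:

```python
import numpy as np

# Iterative turbo decoding skeleton (steps 1-4 above); assumes the
# bcjr_llr sketch from 2.3.2 is in scope as the SISO component decoder.
def turbo_decode(x_sys, x_par1, x_par2, pi, sigma2, n_iter=8):
    N = len(x_sys)
    inv = np.argsort(pi)                 # deinterleaver for permutation pi
    L_apriori = np.zeros(N)              # step 2: first pass uses L(d) = 0
    for _ in range(n_iter):
        # Decoder 1 works on the systematic and first parity streams.
        L_post1 = bcjr_llr(x_sys, x_par1, sigma2, L_apriori)
        Le1 = L_post1 - L_apriori - 2 * x_sys / sigma2   # strip inputs -> extrinsic
        # Interleave; decoder 2 sees the interleaved systematic stream.
        L_post2 = bcjr_llr(x_sys[pi], x_par2, sigma2, Le1[pi])
        Le2 = L_post2 - Le1[pi] - 2 * x_sys[pi] / sigma2
        L_apriori = Le2[inv]             # deinterleave, back to decoder 1
    # Step 4: deinterleave decoder 2's a posteriori LLRs, hard decision.
    return (L_post2[inv] >= 0).astype(int)
```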

2.3.3 The principle of the soft-output Viterbi decoder (SOVA):


The SOVA decoder extends conventional Viterbi decoders with reliability information; for convolutional codes, the Viterbi algorithm produces an ML output sequence.

Figure 2.7: Concatenated SOVA decoder

2.3.3.1 Generalized SOVA decoder reliability:

Figure 2.8: Survivor and competing paths for reliability estimation

One line indicates the survivor path (assumed here to be part of the ML path) and the other shows the competing path that merges with it at time t in state S1. In Figure 2.8 the trellis has four states. A cumulative metric Vs(S1,t) is assigned to the survivor path at each node, and a cumulative metric Vc(S1,t) to the competing path at each node. The reliability of the decision at time t is then:

L(t) = | Vs(S1,t) - Vc(S1,t) |
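A tiny illustration of this reliability measure, with made-up path metrics:

```python
# Reliability of a SOVA decision at one trellis node: the gap between
# the survivor's and the best competitor's accumulated path metrics.
def sova_reliability(v_survivor, v_competitor):
    return abs(v_survivor - v_competitor)

# Made-up accumulated metrics for state S1 at time t:
print(sova_reliability(12.7, 9.4))   # large gap  -> confident decision
print(sova_reliability(12.7, 12.5))  # tiny gap   -> unreliable bit
```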
