We compare the performance of the three LST structures with convolutional component codes.
Two rate-1/2 convolutional codes with memory orders ν = 2 and ν = 5 are considered. We denote by (n, k, ν) a rate-k/n convolutional code with memory ν. The generator polynomials of these codes in octal form are (5, 7) and (53, 75), and their free Hamming distances dfree are 5 and 8, respectively. The channel is a flat slow Rayleigh fading channel. The modulation format is QPSK and the number of symbols per frame is 252. The MAP algorithm is employed to decode the convolutional codes, and the iterative PIC-DSC detector is applied with five iterations between the decoder and the detector. Figs. 6.17 and 6.18 show the performance of the three LST structures with (nT, nR) = (2, 2) for ν = 2 and ν = 5, respectively. The performance results of these two codes in LST structures with (nT, nR) = (4, 4) are shown in Figs. 6.19 and 6.20. For a given memory order, LST-c outperforms LST-b considerably and LST-a slightly. LST-a has a lower error rate than the LST-b architecture on slow fading channels, since in LST-a a codeword from one encoder is distributed across several antennas, resulting in a higher diversity order. However, LST-a is more sensitive to interference; when the number of interferers increases, or when a weaker interference canceller is used, its performance deteriorates. The convolutional code with ν = 5 achieves about 1 and 2 dB gain over the code with ν = 2 in LST-c and LST-b, respectively.
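To make the component code concrete, the (2, 1, 2) code with octal generators (5, 7) = (101, 111) can be sketched as a feedforward shift-register encoder. The function and variable names below are illustrative, not from the text:

```python
# Sketch of a rate-1/2 feedforward convolutional encoder with the octal
# generators (5, 7) used above (memory nu = 2).  The MSB of each generator
# corresponds to the current input bit.

def conv_encode(bits, gens=(0o5, 0o7), memory=2):
    """Encode a bit sequence with the (n, k, nu) = (2, 1, 2) code."""
    state = 0  # shift register holding the last `memory` input bits
    out = []
    for b in bits:
        reg = (b << memory) | state      # current bit plus register contents
        for g in gens:                   # one output bit per generator
            out.append(bin(reg & g).count("1") % 2)
        state = reg >> 1                 # shift the register
    return out

# Example: encode a short message (no trellis termination shown).
print(conv_encode([1, 0, 1, 1]))
```

As a sanity check, encoding a single 1 followed by two flushing zeros produces a codeword of Hamming weight 5, matching dfree = 5 for this code.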
Figure 6.17 Performance comparison of three different LST structures with the (2,1,2) convolutional code as a constituent code for (nT, nR) = (2, 2)
Figure 6.18 Performance comparison of three different LST structures with the (2,1,5) convolutional code as a constituent code for (nT, nR) = (2, 2)
Figure 6.19 Performance comparison of three different LST structures with the (2,1,2) convolutional code as a constituent code for (nT, nR) = (4, 4)
Comparison of Various LST Architectures
Figure 6.20 Performance comparison of three different LST structures with the (2,1,5) convolutional code as a constituent code for (nT, nR) = (4, 4)
6.4.1 Comparison of HLST Architectures with Various Component Codes
We compare the performance and decoding complexity of convolutional and low density parity check (LDPC) codes. The convolutional codes are the same as in the previous figures.
The LDPC code is a regular rate-1/2 Gallager (500, 250) LDPC code. Its parity check matrix has a fixed column weight γ = 3 and a fixed row weight ρ = 6. The minimum Hamming distance dmin of this LDPC code is 11. The dmin and the squared Euclidean distance dE2 of the three codes are given in Table 6.1.
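The distances in Table 6.1 are mutually consistent: assuming Gray-mapped QPSK with constellation points ±1 ± j, each differing coded bit contributes a squared Euclidean distance of 4, so dE2 = 4 · dmin for each code. A small illustrative check (the assumption about the constellation normalization is ours, not stated in the text):

```python
# Check that the tabulated squared Euclidean distances equal 4 * dmin,
# assuming Gray-mapped QPSK with points +-1 +- j (each bit flip moves the
# symbol by Euclidean distance 2, contributing 4 to the squared distance).

codes = {"conv nu=2": 5, "conv nu=5": 8, "LDPC": 11}
for name, dmin in codes.items():
    print(name, "dmin =", dmin, "dE2 =", 4 * dmin)
```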
The MAP and sum-product algorithms are employed to decode the convolutional and LDPC codes, respectively. Other system parameters are the same as in the previous figures with convolutional component codes. An LDPC code is represented by a factor graph, and the sum-product algorithm is a suboptimal probabilistic method for decoding graph-based codes. It is a syndrome decoding method that finds the most probable vector satisfying all syndrome constraints. The decoding complexity of the MAP algorithm increases exponentially with the memory order ν. On the other hand, the complexity of decoding the LDPC code is linearly proportional to the number of entries in the parity check matrix H.

Table 6.1 Comparison of convolutional and LDPC code distances

        Conv. ν=2   Conv. ν=5   LDPC
dmin        5           8        11
dE2        20          32        44

Table 6.2 Performance comparison of the convolutional and LDPC codes

                                     Conv. ν=2   Conv. ν=5   LDPC
LST-a                                    9.2         8.0      9.2
LST-b                                   12.7        11.6     11.0
LST-c                                    8.8         7.6      8.8
LST-c (perfect decoding feedback)        8.2         7.2      4.9
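The complexity contrast can be made concrete with a rough count (illustrative numbers, not from the text): MAP decoding of a convolutional code operates on a trellis with 2^ν states per bit, while one sum-product iteration costs roughly one message update per nonzero entry of H.

```python
# Rough complexity comparison sketch.  MAP decoding works on a trellis with
# 2**nu states, so its per-bit cost grows exponentially in the memory order.
# Sum-product decoding costs roughly one update per nonzero entry of H per
# iteration, i.e. linear in the number of entries.

def map_trellis_states(nu):
    """Number of trellis states the MAP decoder must process per bit."""
    return 2 ** nu

def ldpc_entries(n, col_weight):
    """Nonzero entries in a regular parity check matrix with n columns."""
    return n * col_weight

print(map_trellis_states(2), map_trellis_states(5))   # 4 and 32 states
print(ldpc_entries(500, 3))                           # 1500 entries in H
```

Note that the row-side count agrees with the column-side count for the (500, 250) code: 250 rows with row weight ρ = 6 also give 1500 entries.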
Table 6.2 shows the required Eb/No (in dB) of the simulated codes to achieve a FER of 10^-3 in the three LST structures with (nT, nR) = (4, 4), five iterations between the decoder and the detector, and ten iterations of the sum-product algorithm.
In LST-b, the LDPC code outperforms both convolutional codes. In both the LST-a and LST-c structures, the LDPC code achieves performance similar to the (2,1,2) convolutional code but worse than the (2,1,5) convolutional code, even though the LDPC code has a larger distance than either convolutional code. In addition, error floors appear for the LDPC code in LST structures with nR = 2, whereas no error floor occurs for either convolutional code with nR = 2 in Figs. 6.17 and 6.18. The reason is that the sum-product algorithm is more sensitive to error propagation than the MAP decoder used for the convolutional codes.
The last row of Table 6.2 shows the required Eb/No (in dB) for the three codes to achieve a FER of 10^-3 in the (4, 4) LST-c system with perfect decoding feedback. The performance differences between perfect and non-perfect decoding feedback are about 0.4 dB for the convolutional codes and 3.9 dB for the LDPC code. This means that the iterative joint detection and MAP decoding algorithm approaches the interference-free performance, whereas iterative detection with the sum-product algorithm for LDPC codes remains far from the optimum.
As the number of receive antennas increases, the detector can provide better estimates of the transmitted symbols to the channel decoder. In this situation, the distance of the code dominates the LST system performance. Figure 6.21 shows that the LDPC code outperforms both convolutional codes in a (4,8) LST-c system. We conclude that the LDPC code has a superior error correction capability, but the performance is limited by error propagation in the LST-a and LST-c structures.
Several rate-1/3 turbo codes with information length 250 were chosen as the constituent codes in LST systems on a MIMO slow Rayleigh fading channel. Gray mapping and QPSK modulation are employed in all simulations. Ten iterations are used between the detector and the decoder, and ten iterations within each turbo channel decoder. A PIC-DSC is used as the detector and the MAP algorithm in the turbo channel decoder. Figure 6.22 shows the performance of the LST-b and LST-c structures with a turbo constituent code. The generator polynomials of the component recursive convolutional code in octal form are (13, 15). The LST-c structure performs better than the LST-b structure due to a higher diversity gain. Figure 6.22 also shows the performance of LST-b and LST-c with perfect decoding feedback. The performance of LST-b is very close to that of a system with no interference, while there is about a 2 dB gap between non-perfect and perfect decoding feedback in LST-c at a FER of 10^-3. An error floor is observed in both structures, caused by the low minimum free distance of the turbo code.
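The recursive systematic convolutional (RSC) component encoder with octal generators (13, 15) can be sketched as follows. We assume the common convention of 13 as the feedback polynomial and 15 as the feedforward polynomial (memory 3); a rate-1/3 turbo encoder then transmits the systematic bit plus parity bits from two such encoders, the second fed by an interleaved copy of the input. Names are illustrative:

```python
# Hedged sketch of the (13, 15) RSC component encoder: feedback 13 (octal)
# = 1011 (taps 1, D^2, D^3), feedforward 15 (octal) = 1101 (taps 1, D, D^3).

def rsc_parity(bits, feedback=0o13, feedforward=0o15, memory=3):
    """Return the parity sequence of the (13, 15) RSC encoder."""
    state = [0] * memory  # s1, s2, s3 (most recent first)
    # Extract the D^1..D^memory tap coefficients (MSB is the D^0 tap).
    fb = [(feedback >> (memory - 1 - i)) & 1 for i in range(memory)]
    ff = [(feedforward >> (memory - 1 - i)) & 1 for i in range(memory)]
    parity = []
    for u in bits:
        a = u                          # recursion: input XOR feedback taps
        for s, g in zip(state, fb):
            a ^= s & g
        p = a                          # feedforward D^0 tap (coefficient 1)
        for s, g in zip(state, ff):
            p ^= s & g
        parity.append(p)
        state = [a] + state[:-1]       # shift the register
    return parity

# Example parity sequence for a short input (no trellis termination shown).
print(rsc_parity([1, 0, 1, 1, 0]))
```

Because the encoder is recursive, a single input 1 excites an infinite-weight parity response, which is the property that makes RSC codes suitable as turbo constituents.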
Figure 6.21 Performance comparison of LST-c with convolutional and LDPC codes for (nT, nR) = (4, 8)
Figure 6.22 Performance comparison of LST-b and LST-c with turbo codes as a constituent code for (nT, nR) = (4, 4)
Figure 6.23 Performance comparison of LST-b and LST-c with turbo codes as a constituent code for (nT, nR) = (4, 8)
Figure 6.23 shows the performance of the LST-b and LST-c structures with a turbo constituent code for (nT, nR) = (4, 8). No error floor exists in this scheme.
Figures 6.24 and 6.25 show the bit error rate performance of LST-a with interleaver sizes 252 and 1024 for the (4, 4) and (4, 8) systems, respectively. The performance of LST-a with interleaver size 1024 is superior to that with size 252 in both cases. From Fig. 6.24, one can see that the performance of the LST-a structure with the turbo code is much worse than that of the system with no interference. There is about a 2.0 dB and a 1.5 dB difference between non-perfect and perfect decoding feedback in the LST-a structure with interleaver sizes 252 and 1024, respectively, at a BER of 10^-3. Significant error floors are observed in Fig. 6.24, due to both the low minimum free distance of the turbo code and decoding feedback errors in the LST-a structure.