Examples of Discrete Memoryless Channel Models

1.7 Channel Models from the Information Theory Point of View

1.7.2 Examples of Discrete Memoryless Channel Models

Below we present a few basic discrete memoryless channel models. Despite their simplicity, they are often used as a tool in the selection of a channel code and its decoding method.

1.7.2.1 Binary Symmetric Memoryless Channel

A binary symmetric memoryless channel is the most common channel model. In this model we assume that both the input and output symbol alphabets are binary (often described by the symbols "0" and "1"), and that subsequent output symbols depend only on single input symbols. As we remember, this is the statistical description of the absence of channel memory. The assumption of symmetry means that the channel behaves statistically in the same way regardless of whether the symbol "0" or "1" is generated. The binary symmetric memoryless channel is presented in Figure 1.17a.

4 The term a priori originates from the Latin language and denotes something given in advance, before experiencing it.

Figure 1.17 Models of binary memoryless channels: (a) binary symmetric channel; (b) another form of binary symmetric channel with binary error source; (c) binary erasure channel

The arrows illustrate the occurrence of the output symbol depending on the generated input symbol, with the appropriate transition probabilities placed above them. For our channel model we have

Pr{Y = 0|X = 0} = P(0|0) = 1 − p
Pr{Y = 0|X = 1} = P(0|1) = p
Pr{Y = 1|X = 0} = P(1|0) = p
Pr{Y = 1|X = 1} = P(1|1) = 1 − p

Let us now calculate the probability of error. Denoting the a priori probabilities of the input symbols as Pr{X = 0} = α and Pr{X = 1} = 1 − α, respectively (as we know, the sum of the probabilities of occurrence of all the input symbols is equal to unity), we obtain

P(E) = Pr{Y = 1|X = 0} Pr{X = 0} + Pr{Y = 0|X = 1} Pr{X = 1}
     = pα + p(1 − α) = p                                                  (1.54)
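
To make the computation in (1.54) concrete, the short sketch below evaluates the error probability directly from the transition probabilities and the a priori probabilities. It is not taken from the book; the variable names `p` and `alpha` follow the notation of the text and their values are arbitrary illustrative choices.

```python
# Sketch of (1.54): P(E) = sum over (x, y) with y != x of P(y|x) * Pr{X = x}.
p, alpha = 0.1, 0.3                 # assumed illustrative values

P = {(0, 0): 1 - p, (1, 0): p,      # transition probabilities P(y|x), keyed as (y, x)
     (0, 1): p,     (1, 1): 1 - p}
prior = {0: alpha, 1: 1 - alpha}    # a priori input probabilities

P_error = sum(P[(y, x)] * prior[x]
              for x in (0, 1) for y in (0, 1) if y != x)
print(P_error)                      # p*alpha + p*(1 - alpha) = p (about 0.1, up to rounding)
```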

The model shown in Figure 1.17a presents the channel operation in a single moment.

In Figure 1.17b another form of the binary symmetric memoryless channel is presented. In subsequent moments, indexed by n, the input symbols xn take on the value "0" or "1".

The occurrence of errors in the channel is modeled by the exclusive-or addition of the input symbol and the binary error symbol en. If an error occurs in the channel, the error source generates the symbol en = 1, otherwise en = 0. As we know from formula (1.54), the probability of error is equal to p, so the error source is a binary digit generator that emits "1"s statistically independently of other symbols, with probability p.
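
A minimal simulation sketch of this error-source form of the channel (not from the book; the random seed and the values of `p` and `alpha` are arbitrary assumptions) confirms that the fraction of erroneous output symbols approaches p:

```python
# Error-source form of the BSC (Figure 1.17b): y_n = x_n XOR e_n,
# with Pr{e_n = 1} = p independently of everything else.
import numpy as np

rng = np.random.default_rng(1)
p, alpha, n = 0.1, 0.3, 1_000_000

x = (rng.random(n) >= alpha).astype(np.uint8)   # Pr{x_n = 0} = alpha
e = (rng.random(n) < p).astype(np.uint8)        # statistically independent error symbols
y = x ^ e                                       # exclusive-or addition in the channel

print(e.mean(), np.mean(y != x))                # both approximately p
```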

The model seems to be highly abstract. However, in practice many channel codes, and in particular decoding algorithms, are constructed by taking such an error source model into account. Very often errors in real communication channels are not uniformly distributed in time, but are grouped in error bursts. Thus, there are time intervals for which the error probability is high and intervals in which error bursts do not happen. A remedy for this disadvantageous situation is the application of a so-called interleaver at the transmitter and a deinterleaver at the receiver. They are blocks that perform mutually dual operations. The interleaver changes the order of the sequence of transmitted channel input symbols, whereas the deinterleaver recovers the initial order of the sequence of input symbols, operating on the channel output symbols.

Figure 1.18 Application of interleaver and deinterleaver in spreading of error bursts

Both operations compensate each other with respect to the data symbols; however, the error sequence is subject to deinterleaving only. Thus the error bursts occurring in the channel are spread in time and the resulting errors become almost statistically independent. Figure 1.18 illustrates the general rule of interleaving and deinterleaving. Owing to this idea the binary symmetric memoryless channel model remains valuable, despite the fact that the errors occurring in the channel are bursty and therefore statistically dependent.
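
As a rough illustration of the idea (this block interleaver and its 4 × 8 dimensions are assumptions made for the example, not the book's design), the sketch below writes symbols row by row into an array, reads them out column by column, and shows that a burst of four consecutive channel errors is pulled well apart by the deinterleaver:

```python
import numpy as np

ROWS, COLS = 4, 8                                # assumed interleaver dimensions

def interleave(block):
    # write row by row, read column by column
    return block.reshape(ROWS, COLS).T.ravel()

def deinterleave(block):
    # inverse permutation: restores the original symbol order
    return block.reshape(COLS, ROWS).T.ravel()

data = np.arange(ROWS * COLS)
assert np.array_equal(deinterleave(interleave(data)), data)   # the two operations cancel

burst = np.zeros(ROWS * COLS, dtype=int)
burst[10:14] = 1                                 # burst of 4 consecutive channel errors
spread = np.nonzero(deinterleave(burst))[0]      # error positions after deinterleaving
print(spread)                                    # [ 3 11 18 26]: errors no longer adjacent
```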

1.7.2.2 Binary Symmetric Erasure Channel

Figure 1.17c presents a binary symmetric erasure channel. As we see, the number of different output symbols is increased to three. Besides the symbols "0" and "1" there is a third symbol, denoted by "e" and called an erasure. This symbol reflects the situation in which the receiver is not able to perform detection and decide whether the received symbol is "0" or "1". This can occur if there is a transmission outage or if another transmitter placed in the receiver's vicinity temporarily saturates the receiver. The model does not take into account the possibility of conversion of "0" into "1" or vice versa. Derivation of the error probability is very simple and leads to the same result as in the case of the binary symmetric memoryless channel. Namely, assuming again that Pr{X = 0} = α and Pr{X = 1} = 1 − α, we have

P(E) = Pr{Y = e|X = 0}α + Pr{Y = e|X = 1}(1 − α)
     = pα + p(1 − α) = p                                                  (1.55)
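
A corresponding simulation sketch for the erasure channel of Figure 1.17c (again an illustration under assumed parameter values, not from the book; the character 'e' marks an erasure) shows that erasures occur with probability p and that "0" is never converted into "1" or vice versa:

```python
import numpy as np

rng = np.random.default_rng(2)
p, alpha, n = 0.1, 0.3, 1_000_000

x = np.where(rng.random(n) < alpha, '0', '1')    # Pr{X = 0} = alpha
y = np.where(rng.random(n) < p, 'e', x)          # erased with probability p, else unchanged

print(np.mean(y == 'e'))                         # approximately p, as in (1.55)
print(np.mean((y != x) & (y != 'e')))            # 0.0: no "0" <-> "1" conversions occur
```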

1.7.2.3 Memoryless Channel with Binary Input and m-ary Output

The memoryless binary input m-ary output channel model is increasingly often used in the analysis of digital communication systems with channel coding. As we remember, in the binary memoryless channel model the output symbols are binary. This model reflects the cases in which the demodulator, supplemented with a decision device, produces binary decisions. Thus, the channel decoder loses a part of the knowledge that could otherwise be used in channel decoding. An important step towards avoiding this drawback is the application of a decision device that not only generates binary decisions but also gives an additional measure of decision quality.

Figure 1.19 Binary input 8-ary output channel model

An example of such an approach is the application of an m-level quantizer in place of the binary one. The simplest model of such a system is shown in Figure 1.19a. The binary symbols are transmitted in the form of pulses of amplitude ±A, which are distorted by additive noise. A sample of the signal that undergoes m-ary quantization is taken once per pulse duration. In Figure 1.19b the quantization thresholds of an 8-level quantizer are shown against the background of the conditional probability density functions of the channel output samples, whereas in Figure 1.19c the corresponding channel model is drawn. The transition probability for a particular input and output symbol pair (not shown above the arrows indicating the transitions in Figure 1.19c) is given by the area bounded by the appropriate conditional probability density curve and the neighboring quantization thresholds. The selection of the optimum quantization thresholds will not be considered here. The described model is an example of a channel model applied in situations in which the decision device generates soft decisions, that is, when our knowledge about the received symbols is richer than the binary decisions alone.
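
The transition probabilities of such a model can be sketched numerically as follows. This is only an illustration under assumed parameters: pulse amplitude A, Gaussian noise with standard deviation sigma, and evenly spaced, non-optimized quantization thresholds; scipy is used for the Gaussian cumulative distribution function.

```python
import numpy as np
from scipy.stats import norm

A, sigma = 1.0, 0.5                              # assumed pulse amplitude and noise level
thresholds = np.linspace(-0.75, 0.75, 7)         # 7 thresholds -> 8 output symbols (not optimized)
edges = np.concatenate(([-np.inf], thresholds, [np.inf]))

def transition_probs(x):
    # P(y = j | x), j = 0..7: area under the conditional Gaussian pdf N(x, sigma^2)
    # between neighbouring quantization thresholds
    cdf = norm.cdf(edges, loc=x, scale=sigma)
    return np.diff(cdf)

for amp in (-A, +A):
    probs = transition_probs(amp)
    print(amp, probs.round(3), probs.sum())      # each conditional distribution sums to 1
```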
