Estimation in a Complex Vector Space


The extension of our discussion to the complex field is natural. Let us first consider scalar complex estimation, an extension of the basic real setup in (A.58):

y = x + w, (A.71)

where w ∼ CN(0, N0) is independent of the complex zero-mean transmit signal x.

We are interested in a linear estimate x̂ = cy, for some complex constant c. The performance metric is

MSE = E[|x − x̂|²]. (A.72)

The best linear estimate x̂ = cy can be directly calculated to be, as an extension of (A.62),

c = E[|x|²] / (E[|x|²] + N0). (A.73)

The corresponding minimum MSE is

MMSE = E[|x|²] N0 / (E[|x|²] + N0). (A.74)

The orthogonality principle (cf. (A.61)) is extended to the complex case as:

E[(x̂ − x)y*] = 0. (A.75)

The linear estimate in (A.73) is easily seen to satisfy (A.75).
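To see where (A.73) comes from, substitute x̂ = cy into (A.75) and use the independence and zero means of x and w:

0 = E[(cy − x)y*] = c E[|y|²] − E[xy*] = c (E[|x|²] + N0) − E[|x|²],

which gives exactly the constant in (A.73).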

Now let us consider estimating the scalar complex zero-mean x in a complex vector space:

y = hx + w, (A.76)

with w ∼ CN(0, N0 I) independent of x and h a fixed vector in C^n. The projection of y in the direction of h is a sufficient statistic, and we can reduce the vector estimation problem to a scalar one: estimate x from

ỹ = h*y / ‖h‖² = x + w̃, (A.77)

where w̃ ∼ CN(0, N0/‖h‖²).
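To verify the noise statistic, expand the projection: h*y/‖h‖² = x + h*w/‖h‖². The term h*w is a linear combination of jointly Gaussian components, so h*w ∼ CN(0, N0‖h‖²), and hence w̃ = h*w/‖h‖² ∼ CN(0, N0/‖h‖²), as claimed.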

Thus the best linear estimator is, as an extension of (A.67),

c = (E[|x|²] / (E[|x|²]‖h‖² + N0)) h. (A.78)

The corresponding minimum MSE is, as an extension of (A.68),

MMSE = E[|x|²] N0 / (E[|x|²]‖h‖² + N0). (A.79)
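As a sanity check, here is a minimal Monte Carlo sketch, not from the text: the parameters n, sigma2 (playing the role of E[|x|²]) and N0 are arbitrary choices. It forms the estimate x̂ = c*y with c from (A.78) and compares the empirical squared error against (A.79).

```python
# Monte Carlo check of the linear MMSE estimate (A.78) and its error (A.79).
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, N0, trials = 4, 2.0, 0.5, 200_000

h = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # fixed channel vector
x = np.sqrt(sigma2 / 2) * (rng.standard_normal(trials) + 1j * rng.standard_normal(trials))
w = np.sqrt(N0 / 2) * (rng.standard_normal((trials, n)) + 1j * rng.standard_normal((trials, n)))
y = np.outer(x, h) + w                       # y = h x + w, one row per trial

h2 = np.linalg.norm(h) ** 2
x_hat = (sigma2 / (sigma2 * h2 + N0)) * (y @ h.conj())   # x_hat = c* y, c from (A.78)

empirical = np.mean(np.abs(x - x_hat) ** 2)
theory = sigma2 * N0 / (sigma2 * h2 + N0)                # (A.79)
print(empirical, theory)                     # the two values should agree closely
```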

Summary A.3 Mean Square Estimation in a Complex Vector Space

The linear estimate with the smallest mean squared error of x from

y = x + w, (A.80)

with w ∼ CN(0, N0), is

x̂ = (E[|x|²] / (E[|x|²] + N0)) y. (A.81)

To estimate x from

y = hx + w, (A.82)

where w ∼ CN(0, N0 I),

h*y (A.83)

is a sufficient statistic, reducing the vector estimation problem to the scalar one.

The best linear estimator is

x̂ = (E[|x|²] / (E[|x|²]‖h‖² + N0)) h*y. (A.84)

The corresponding minimum mean squared error (MMSE) is:

MMSE = E[|x|²] N0 / (E[|x|²]‖h‖² + N0). (A.85)

In the special case when x ∼ CN(µ, σ²), this estimator yields the minimum mean squared error among all estimators, linear or non-linear.
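As a point of comparison, a standard Gaussian-conditioning fact (not derived in the text above, so treat the display as a supplementary note): when x ∼ CN(µ, σ²) in the model (A.82), the MMSE estimate is the conditional mean

E[x | y] = µ + (σ² / (σ²‖h‖² + N0)) h*(y − hµ),

which is affine in y and reduces to (A.84) when µ = 0.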

Exercises

Exercise A.1. Consider the n-dimensional standard Gaussian random vector w ∼ N(0, I_n) and its squared magnitude ‖w‖².

1. With n = 1, show that the density of ‖w‖² is

f_1(a) = (1/√(2πa)) exp(−a/2), a ≥ 0. (A.86)

2. For any n, show that the density of ‖w‖² (denoted by f_n(·)) satisfies the recursive relation:

f_{n+2}(a) = (a/n) f_n(a), a ≥ 0. (A.87)

3. Using the formulas for the densities for n = 1 and 2 (cf. (A.86) and (A.9), respectively) and the recursive relation in (A.87), determine the density of ‖w‖² for n ≥ 3.
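The recursion in part 2 is easy to check numerically. A quick sketch, assuming scipy is available and using the standard fact that ‖w‖² is chi-squared with n degrees of freedom:

```python
# Numerical check of the recursion (A.87): f_{n+2}(a) = (a/n) f_n(a).
import numpy as np
from scipy.stats import chi2

a = np.linspace(0.1, 10.0, 50)           # avoid a = 0, where f_1 blows up
for n in (1, 2, 3, 5):
    lhs = chi2.pdf(a, df=n + 2)          # f_{n+2}(a)
    rhs = (a / n) * chi2.pdf(a, df=n)    # (a/n) f_n(a)
    assert np.allclose(lhs, rhs)
print("recursion (A.87) holds numerically")
```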

Exercise A.2. Let {w(t)} be white Gaussian noise with power spectral density N0/2. Let s_1, . . . , s_M be a finite set of orthonormal waveforms (i.e., orthogonal and of unit energy), and define z_i = ∫_{−∞}^{∞} w(t) s_i(t) dt. Find the joint distribution of z. Hint: recall the isotropic property of the normalized Gaussian random vector (cf. (A.8)).
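For experimentation, here is an exploratory discretized sketch, with the assumptions flagged: white noise of PSD N0/2 is approximated on a grid of spacing dt by i.i.d. N(0, N0/(2 dt)) samples, and the waveforms are random vectors orthonormalized on that grid. The sample covariance of z should come out close to (N0/2) I.

```python
# Discrete-time exploration of Exercise A.2 (an approximation, not a proof).
import numpy as np

rng = np.random.default_rng(1)
N0, dt, L, M, trials = 2.0, 1e-2, 400, 3, 20_000

# M waveforms, orthonormal in the dt-weighted inner product:
# sum_n s_i[n] s_j[n] dt = delta_ij.
q, _ = np.linalg.qr(rng.standard_normal((L, M)))
s = q / np.sqrt(dt)

# i.i.d. approximation of white noise with PSD N0/2 on the grid.
w = rng.standard_normal((trials, L)) * np.sqrt(N0 / (2 * dt))
z = (w @ s) * dt                    # z_i = sum_n w[n] s_i[n] dt, per trial

print(np.cov(z, rowvar=False))      # should be close to (N0/2) * I_M
```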

Exercise A.3. Consider a complex random vector x.

1. Verify that the second-order statistics of x (i.e., the covariance matrix of the real representation [ℜ[x], ℑ[x]]^t) can be completely specified by the covariance and pseudo-covariance matrices of x, defined in (A.15) and (A.16) respectively.

2. In the case where x is circular symmetric, express the covariance matrix of [ℜ[x], ℑ[x]]^t in terms of the covariance matrix of the complex vector x only.

Exercise A.4. Consider a complex Gaussian random vector x.

1. Show that a necessary and sufficient condition for x to be circular symmetric is that the mean µ and the pseudo-covariance matrix J are zero.

2. Now suppose the relationship between the covariance matrix of [ℜ[x], ℑ[x]]^t and the covariance matrix of x in part (2) of Exercise A.3 holds. Can we conclude that x is circular symmetric?

Exercise A.5. Show that a circular symmetric complex Gaussian random variable must have i.i.d. real and imaginary components.

Exercise A.6. Let x be an n-dimensional i.i.d. complex Gaussian random vector, with the real and imaginary parts distributed as N(0, K_x), where K_x is a 2 × 2 covariance matrix. Suppose U is a unitary matrix (i.e., U*U = I). Identify the conditions on K_x under which Ux has the same distribution as x.

Exercise A.7. Let z be an n-dimensional i.i.d. complex Gaussian random vector, with the real and imaginary parts distributed as N(0, K_x), where K_x is a 2 × 2 covariance matrix. We wish to detect a scalar x, equally likely to be ±1, from:

y = hx + z, (A.88)

where x and z are independent and h is a fixed vector in C^n. Identify the conditions on K_x under which the scalar h*y is a sufficient statistic to detect x from y.

Exercise A.8. Consider estimating the real zero-mean scalar x from:

y = hx + w, (A.89)

where w ∼ N(0, (N0/2) I) is uncorrelated with x and h is a fixed vector in R^n.

1. Consider the scaled linear estimate c^t y (with the normalization ‖c‖ = 1):

x̂ := a c^t y = a (c^t h) x + a c^t w. (A.90)

Show that the constant a that minimizes the mean square error E[(x − x̂)²] is equal to

a = E[x²] (c^t h) / (E[x²] |c^t h|² + N0/2). (A.91)

2. Calculate the minimal mean square error (denoted by MMSE) of the linear estimate in (A.90) (by using the value of a in (A.91)). Show that

E[x²] / MMSE = 1 + SNR := 1 + E[x²] |c^t h|² / (N0/2). (A.92)

For every fixed linear estimator c, this shows the relationship between the corresponding SNR and MMSE (of an appropriately scaled estimate). In particular, this relation holds when we optimize over all c, leading to the best linear estimator.
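A minimal numerical sketch of this identity, with hypothetical parameters: it Monte Carlo-estimates the MSE of the scaled estimate a c^t y for a fixed unit-norm c and compares E[x²]/MMSE with 1 + SNR.

```python
# Check of (A.92) for a fixed unit-norm c (not necessarily the optimal one).
import numpy as np

rng = np.random.default_rng(2)
n, Ex2, N0, trials = 3, 1.5, 0.8, 500_000

h = rng.standard_normal(n)
c = rng.standard_normal(n)
c /= np.linalg.norm(c)                      # normalization ||c|| = 1

x = np.sqrt(Ex2) * rng.standard_normal(trials)
w = np.sqrt(N0 / 2) * rng.standard_normal((trials, n))
y = np.outer(x, h) + w                      # y = h x + w, one row per trial

g = c @ h                                   # c^t h
a = Ex2 * g / (Ex2 * g**2 + N0 / 2)         # minimizing a, as in (A.91)
mmse = np.mean((x - a * (y @ c)) ** 2)      # empirical mean square error

snr = Ex2 * g**2 / (N0 / 2)
print(Ex2 / mmse, 1 + snr)                  # the two should agree closely
```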

Information Theory Background

This appendix discusses the information theory behind the capacity expressions used in the book. Section 8.3.4 is the only part of the book that supposes an understanding of the material in this appendix. More in-depth and broader expositions of information theory can be found in standard texts such as [17] and [27].
