

R. D. DeGroat, et al. "Subspace Tracking."

2000 CRC Press LLC <http://www.engnetbase.com>.


Subspace Tracking

R. D. DeGroat

The University of Texas at Dallas

E. M. Dowling

The University of Texas at Dallas

D. A. Linebarger

The University of Texas at Dallas

66.1 Introduction

66.2 Background
EVD vs SVD • Short Memory Windows for Time Varying Estimation • Classification of Subspace Methods • Historical Overview of MEP Methods • Historical Overview of Adaptive, Non-MEP Methods

66.3 Issues Relevant to Subspace and Eigen Tracking Methods
Bias Due to Time Varying Nature of Data Model • Controlling Roundoff Error Accumulation and Orthogonality Errors • Forward-Backward Averaging • Frequency vs Subspace Estimation Performance • The Difficulty of Testing and Comparing Subspace Tracking Methods • Spherical Subspace (SS) Updating — A General Framework for Simplified Updating • Initialization of Subspace and Eigen Tracking Algorithms • Detection Schemes for Subspace Tracking

66.4 Summary of Subspace Tracking Methods Developed Since 1990
Modified Eigen Problems • Gradient-Based Eigen Tracking • The URV and Rank Revealing QR (RRQR) Updates • Miscellaneous Methods

References

66.1 Introduction

Most high resolution direction-of-arrival (DOA) estimation methods rely on subspace or eigen-based information which can be obtained from the eigenvalue decomposition (EVD) of an estimated correlation matrix, or from the singular value decomposition (SVD) of the corresponding data matrix. However, the expense of directly computing these decompositions is usually prohibitive for real-time processing. Also, because the DOA angles are typically time-varying, repeated computation is necessary to track the angles. This has motivated researchers in recent years to develop low cost eigen and subspace tracking methods. Four basic strategies have been pursued to reduce computation: (1) computing only a few eigencomponents, (2) computing a subspace basis instead of individual eigencomponents, (3) approximating the eigencomponents or basis, and (4) recursively updating the eigencomponents or basis. The most efficient methods usually employ several of these strategies.

In 1990, an extensive survey of SVD tracking methods was published by Comon and Golub [7]. They classified the various algorithms according to complexity, and basically two categories emerge: O(n²r) and O(nr²) methods, where n is the snapshot vector size and r is the number of extreme eigenpairs to be tracked. Typically, r < n or r ≪ n, so the O(nr²) methods involve significantly fewer computations than the O(n²r) algorithms. However, since 1990, a number of O(nr) algorithms have been developed. This article will primarily focus on recursive subspace and eigen updating methods developed since 1990, especially the O(nr²) and O(nr) algorithms.

66.2 Background

66.2.1 EVD vs SVD

Let X = [x₁ | x₂ | ··· | x_N] be an n × N data matrix where the kth column corresponds to the kth snapshot vector, x_k ∈ Cⁿ. With block processing, the correlation matrix for a zero mean, stationary, ergodic vector process is typically estimated as R = (1/N) X X^H, where the true correlation matrix is Φ = E[x_k x_k^H] = E[R].

The EVD of the estimated correlation matrix is closely related to the SVD of the corresponding data matrix. The SVD of X is given by X = U S V^H, where U ∈ C^(n×n) and V ∈ C^(N×N) are unitary matrices and S ∈ C^(n×N) is a diagonal matrix whose nonzero entries are positive. It is easy to see that the left singular vectors of X are the eigenvectors of X X^H = U S Sᵀ U^H, and the right singular vectors of X are the eigenvectors of X^H X = V Sᵀ S V^H. This is so because X X^H and X^H X are positive semidefinite Hermitian matrices (which have orthogonal eigenvectors and real, nonnegative eigenvalues). Also note that the nonzero singular values of X are the positive square roots of the nonzero eigenvalues of X X^H and X^H X. Mathematically, the eigen information contained in the SVD of X or the EVD of X X^H (or X^H X) is equivalent, but the dynamic range of the eigenvalues is twice that of the corresponding singular values. With finite precision arithmetic, the greater dynamic range can result in a loss of information. For example, in rank determination, suppose the smallest singular value is ε, where ε is machine precision. The corresponding eigenvalue, ε², would be considered a machine precision zero, and the EVD of X X^H (or X^H X) would incorrectly indicate a rank deficiency. Because of the dynamic range issue, it is generally recommended to use the SVD of X (or a square root factor of R). However, because additive sensor noise usually dominates numerical errors, this choice may not be critical in most signal processing applications.
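The dynamic range point can be made concrete with a short NumPy sketch (ours, not part of the chapter; it is a variant of the rank-determination example above, with the smallest singular value set to √ε so the effect is visible). The eigenvalues of X X^H come out as the squared singular values of X: the SVD still resolves the small singular value, while the corresponding eigenvalue is of order ε and indistinguishable from rounding noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 100
eps = np.finfo(float).eps

# Data matrix with a known spectrum; the smallest singular value is sqrt(eps),
# so the corresponding eigenvalue of X X^H is ~eps, i.e. at rounding-noise level.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((N, n)))
s = np.array([1.0, 1e-2, 1e-4, np.sqrt(eps)])
X = U @ np.diag(s) @ V.T                      # n x N data matrix

sv = np.linalg.svd(X, compute_uv=False)       # singular values of X
ev = np.linalg.eigvalsh(X @ X.T)[::-1]        # eigenvalues of X X^H, descending

print(np.allclose(sv ** 2, ev, atol=1e-10))   # eigenvalues = squared singular values
print("singular value dynamic range:", sv[-1] / sv[0])   # ~1.5e-8, clearly nonzero
print("eigenvalue dynamic range:    ", ev[-1] / ev[0])    # ~eps, looks rank deficient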

66.2.2 Short Memory Windows for Time Varying Estimation

Ultimately, we are interested in tracking some aspect of the eigenstructure of a time varying correlation (or data) matrix. For simplicity we will focus on time varying estimation of the correlation matrix, realizing that the EVD of R is trivially related to the SVD of X. A time varying estimator must have a short term memory in order to track changes. An example of long memory estimation is an estimator that involves a growing rectangular data window. As time goes on, the estimated quantities depend more and more on the old data, and less and less on the new data. The two most popular short memory approaches to estimating a time varying correlation matrix involve (1) a moving rectangular window and (2) an exponentially faded window. Unfortunately, an unbiased, causal estimate of the true instantaneous correlation matrix at time k, Φ_k = E[x_k x_k^H], is not possible if averaging is used and the vector process is truly time varying. However, it is usually assumed that the process is varying slowly enough within the effective observation window that the process is approximately stationary and some averaging is desirable. In any event, at time k, a length N moving rectangular data window results in a rank two modification of the correlation matrix estimate, i.e.,

$$ R_k^{(\mathrm{rect})} = R_{k-1}^{(\mathrm{rect})} + \frac{1}{N}\left( x_k x_k^H - x_{k-N} x_{k-N}^H \right) \tag{66.1} $$

where x_k is the new snapshot vector and x_{k−N} is the oldest vector, which is being removed from the estimate. The corresponding data matrix is given by X_k^(rect) = [x_k | x_{k−1} | ··· | x_{k−N+1}] and R_k^(rect) = (1/N) X_k^(rect) X_k^(rect)H. Subtracting the rank one matrix from the correlation estimate is referred to as a rank one downdate. Downdating moves all the eigenvalues down (or leaves them unchanged). Updating, on the other hand, moves all eigenvalues up (or leaves them unchanged). Downdating is potentially ill-conditioned because the smallest eigenvalue can move towards zero.

An exponentially faded data window produces a rank one modification in

$$ R_k^{(\mathrm{fade})} = \alpha R_{k-1}^{(\mathrm{fade})} + (1-\alpha)\, x_k x_k^H \tag{66.2} $$

where α is the fading factor with 0 ≤ α ≤ 1. In this case, the data matrix is growing in size, but the older data is de-emphasized with a diagonal weighting matrix, X_k^(fade) = [x_k | x_{k−1} | ··· | x₁] diag(1, α, α², ..., α^(k−1))^(1/2), and R_k^(fade) = (1 − α) X_k^(fade) X_k^(fade)H.

Of course, the two windows could be combined to produce an exponentially faded moving rectangular window, but this kind of hybrid short memory window has not been the subject of much study in the signal processing literature. Similarly, not much attention has been paid to which short memory windowing scheme is most appropriate for a given data model. Since downdating is potentially ill-conditioned, and since two rank one modifications usually involve more computation than one, the exponentially faded window has some advantages over the moving rectangular window. The main advantage of a (short) rectangular window is in tracking sudden changes. Assuming stationarity within the effective observation window, the power in a rectangular window will be equal to the power in an exponentially faded window when

$$ N \approx \frac{1}{1-\alpha} \quad\text{or equivalently}\quad \alpha \approx 1 - \frac{1}{N} = \frac{N-1}{N} \,. \tag{66.3} $$

Based on a Fourier analysis of linearly varying frequencies, equal frequency lags occur when [14]

$$ N \approx \frac{1+\alpha}{1-\alpha} \quad\text{or equivalently}\quad \alpha \approx \frac{N-1}{N+1} \,. \tag{66.4} $$

Either one of these relationships could be used as a rule of thumb for relating the effective observation window of the two most popular short memory windowing schemes.
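The two windowing schemes of Eqs. (66.1) and (66.2), and the power-matching rule of thumb α ≈ 1 − 1/N of Eq. (66.3), can be sketched as follows. This is a NumPy illustration of ours, not code from the chapter, and the variable names are our own choices.

```python
import numpy as np

def rect_update(R, x_new, x_old, N):
    """Moving rectangular window, Eq. (66.1): a rank-one update plus a rank-one downdate."""
    return R + (np.outer(x_new, x_new.conj()) - np.outer(x_old, x_old.conj())) / N

def faded_update(R, x_new, alpha):
    """Exponentially faded window, Eq. (66.2): a single rank-one modification."""
    return alpha * R + (1.0 - alpha) * np.outer(x_new, x_new.conj())

# Rule of thumb relating the two windows, Eq. (66.3): alpha ~ 1 - 1/N.
N = 50
alpha = 1.0 - 1.0 / N

rng = np.random.default_rng(1)
n = 4
snapshots = rng.standard_normal((200, n)) + 1j * rng.standard_normal((200, n))

R_rect = np.zeros((n, n), dtype=complex)
R_fade = np.zeros((n, n), dtype=complex)
buf = np.zeros((N, n), dtype=complex)       # circular buffer of the last N snapshots

for k, x in enumerate(snapshots):
    # buf[k % N] holds the snapshot from N steps ago (zeros during the warm-up phase).
    R_rect = rect_update(R_rect, x, buf[k % N], N)
    buf[k % N] = x
    R_fade = faded_update(R_fade, x, alpha)

# With matched effective windows the two estimates are close for stationary data.
print(np.linalg.norm(R_rect - R_fade) / np.linalg.norm(R_rect))
```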

66.2.3 Classification of Subspace Methods

Eigenstructure estimation can be classified as (1) block or (2) recursive. Block methods simply compute an EVD, SVD, or related decomposition based on a block of data. Recursive methods update the previously computed eigen information using new data as it arrives. We focus on recursive subspace updating methods in this article.

Most subspace tracking algorithms can also be broadly categorized as (1) modified eigen problem (MEP) methods or (2) adaptive (or non-MEP) methods. With short memory windowing, MEP methods are adaptive in the sense that they can track time varying eigen information. However, when we use the word adaptive, we mean that exact eigen information is not computed at each update; rather, an adaptive method tends to move towards an EVD (or some aspect of an EVD) at each update. For example, gradient-based, perturbation-based, and neural network-based methods are classified as adaptive because on average they move towards an EVD at each update. On the other hand, rank one, rank k, and sphericalized EVD and SVD updates are, by definition, MEP methods because exact eigen information associated with an explicit matrix is computed at each update. Both MEP and adaptive methods are supposed to track the eigen information of the instantaneous, time varying correlation matrix.


66.2.4 Historical Overview of MEP Methods

Many researchers have studied SVD and EVD tracking problems. Golub [19] introduced one of the first eigen-updating schemes, and his ideas were developed and expanded by Bunch and co-workers in [3,4]. The basic idea is to update the EVD of a symmetric (or Hermitian) matrix when modified by a rank one matrix. The rank-one eigen update was simplified in [37], when Schreiber introduced a transformation that makes the core eigenproblem real. Based on an additive white noise model, Karasalo [21] and Schreiber [37] suggested that the noise subspace be "sphericalized", i.e., that the noise eigenvalues be replaced by their average value so that deflation [4] could be used to significantly reduce computation. By deflating the noise subspace and only tracking the r dominant eigenvectors, the computation is reduced from O(n³) to O(nr²) per update. DeGroat reduced computation further by extending this concept to the signal subspace [8]. By sphericalizing and deflating both the signal and the noise subspaces, the cost of tracking the r dimensional signal (or noise) subspace is O(nr), and no iteration is involved. To make eigen updating more practical, DeGroat and Roberts developed stabilization schemes to control the loss of orthogonality due to the buildup of roundoff error [10]. Further work related to eigenvector stabilization is reported in [15,28,29,30]. Recently, a more stable version of Bunch's algorithm was developed by Gu and Eisenstat [20]. In [46], Yu extended rank one eigen updating to rank k updating.

DeGroat showed in [8] that forcing certain subspaces of the correlation matrix to be spherical, i.e., replacing the associated eigenvalues with a fixed or average value, is an easy way to deflate the size of the updating problem and reduce computation. Basically, a spherical subspace (SS) update is a rank one EVD update of a sphericalized correlation matrix. Asymptotic convergence analysis of SS updating is found in [11,13]. A four level SS update capable of automatic signal subspace rank and size adjustment is described in [9,11]. The four level and the two level SS updates are the only MEP updates to date that are O(nr) and noniterative. For more details on SS updating, see Section 66.3.6, Spherical Subspace (SS) Updating: A General Framework for Simplified Updating.

In [42], Xu and Kailath present a Lanczos based subspace tracking method with an associated detection scheme to track the number of sources. A reference list for systolic implementations of SVD based subspace trackers is contained in [12].

66.2.5 Historical Overview of Adaptive, Non-MEP Methods

Owsley pioneered orthogonal iteration and stochastic-based subspace trackers in [32]. Yang and Kaveh extended Owsley's work in [44] by devising a family of constrained gradient-based algorithms. A highly parallel algorithm, denoted the inflation method, is introduced for the estimation of the noise subspace. The computational complexity of this family of gradient-based methods varies from (approximately) n²r to (7/2)nr for the adaptation equation. However, since the eigenvectors are only approximately orthogonal, an additional nr² flops may be needed if Gram Schmidt orthogonalization is used. It may be that a partial orthogonalization scheme (see Section 66.3.2, Controlling Roundoff Error Accumulation and Orthogonality Errors) can be combined with Yang and Kaveh's methods to improve orthogonality enough to eliminate the O(nr²) Gram Schmidt computation. Karhunen [22] also extended Owsley's work by developing a stochastic approximation method for subspace computation. Bin Yang [43] used recursive least squares (RLS) methods with a projection approximation approach to develop the projection approximation subspace tracker (PAST), which tracks an arbitrary basis for the signal subspace, and PASTd, which uses deflation to track the individual eigencomponents. A multi-vector eigen tracker based on the conjugate gradient method is developed in [18]. Previous conjugate gradient-based methods tracked a single eigenvector only. Orthogonal iteration, lossless adaptive filter, and perturbation-based subspace trackers appear in [40], [36], and [5], respectively. A family of non-EVD subspace trackers is given in [16]. An adaptive subspace method that uses a linear operator, referred to as the Propagator, is given in [26]. Approximate SVD methods that are based on a QR update step followed by a single (or partial) Jacobi sweep to move the triangular factor towards a diagonal form appear in [12,17,30]. These methods can be described as approximate SVD methods because they will converge to an SVD if the Jacobi sweeps are repeated.

Subspace estimation methods based on URV or rank revealing QR (RRQR) decompositions are referenced in [6]. These rank revealing decompositions can divide a set of orthonormal vectors into sets that span the signal and noise subspaces. However, a threshold (noise power) level that lies between the largest noise eigenvalue and the smallest signal eigenvalue must be known in advance. In some ways, the URV decomposition can be viewed as an approximate SVD. For example, the transposed QR (TQR) iteration [12] can be used to compute the SVD of a matrix, but if the iteration is stopped before convergence, the resulting decomposition is URV-like.

Artificial neural networks (ANN) have also been used to estimate eigen information [35]. In 1982, Oja [31] was one of the first to develop an eigenvector estimating ANN. Using a Hebbian type learning rule, this ANN adaptively extracts the first principal eigenvector. Much research has been done in this area since 1982. For an overview and a list of references, see [35].

66.3 Issues Relevant to Subspace and Eigen Tracking Methods

66.3.1 Bias Due to Time Varying Nature of Data Model

Because direction-of-arrival (DOA) angles are typically time varying, a range of spatial frequencies is usually included in the effective observation window. Most spatial frequency estimation methods yield frequency estimates that are approximately equal to the effective frequency average in the window. Consequently, the estimates lag the true instantaneous frequency. If the frequency variation is assumed to be linear within the effective observation window, this lag (or bias) can be easily estimated and compensated [14].

66.3.2 Controlling Roundoff Error Accumulation and Orthogonality Errors

Numerical algorithms are generally defined as stable if the roundoff error accumulates in a linear fashion. However, recursive updating algorithms cannot tolerate even a linear buildup of error if large (possibly unbounded) numbers of updates are to be performed. For real time processing, periodic reinitialization is undesirable. Most of the subspace tracking algorithms involve the product of at least k orthogonal matrices by the time the kth update is computed. According to Parlett [33], the error propagated by a product of orthogonal matrices is bounded by a quantity that grows linearly with k: if the n × n matrix U_k = U_{k−1} Q_k = Q_k Q_{k−1} ··· Q₁ is a product of k matrices that are each orthogonal to working accuracy, its departure from orthogonality, measured in the Euclidean matrix norm ‖·‖_E, is on the order of kε, where ε is machine precision. Clearly, if k is large enough, the roundoff error accumulation can be significant.
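This growth is easy to observe in a toy experiment (ours, not from the chapter): accumulate many exactly constructed Givens rotations in single precision, where each factor is orthogonal only to working accuracy, and measure the departure of the product from orthogonality as k grows.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
U = np.eye(n, dtype=np.float32)          # single precision makes the drift easy to see

for k in range(1, 20001):
    # One more factor Q_k that is orthogonal to working accuracy: a random Givens rotation.
    i, j = rng.choice(n, size=2, replace=False)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    G = np.eye(n, dtype=np.float32)
    G[i, i] = G[j, j] = np.cos(theta)
    G[i, j] = np.sin(theta)
    G[j, i] = -np.sin(theta)
    U = G @ U
    if k in (100, 1000, 10000, 20000):
        # Departure from orthogonality of the accumulated product U_k = Q_k ... Q_1.
        print(k, np.linalg.norm(U.T @ U - np.eye(n)))
```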

There are really only two sources of error in updating a symmetric or Hermitian EVD: (1) the eigenvalues and (2) the eigenvectors. Of course, the eigenvectors and eigenvalues are interrelated. Errors in one tend to produce errors in the other. At each update, small errors may occur in the EVD update so that the eigenvalues become slowly perturbed and the eigenvectors become slowly nonorthonormal. The solution is to prevent significant errors from ever accumulating in either.

We do not expect the main source of error to be from the eigenvalues. According to Stewart [38], the eigenvalues of a Hermitian matrix are perfectly conditioned, having condition numbers of one. Moreover, it is easy to show that when exponential weighting is used, the accumulated roundoff error is bounded by a constant, assuming no significant errors are introduced by the eigenvectors. By contrast, if exponential windowing is not used, the bound for the accumulated error builds up in a linear fashion. Thus, the fading factor not only fades out old data, but also old roundoff errors that accumulate in the eigenvalues.

Unfortunately, the eigenvectors of a Hermitian matrix are not guaranteed to be well conditioned. An eigenvector will be ill-conditioned if its eigenvalue is closely spaced with other eigenvalues. In this case, small roundoff perturbations to the matrix may cause relatively large errors in the eigenvectors. The greatest potential for nonorthogonality, then, is between eigenvectors with adjacent (closely spaced) eigenvalues. This observation led to the development of a partial orthogonalization scheme known as pairwise Gram Schmidt (PGS) [10], which attacks the roundoff error buildup problem at the point of greatest numerical instability: nonorthogonality of adjacent eigenvectors. If the intervening rotations (orthogonal matrix products) inherent in the eigen update are random enough, the adjacent vector PGS can be viewed as a full orthogonalization spread out over time. When PGS is combined with exponential fading, the roundoff accumulation in both the eigenvectors and the eigenvalues is controlled. Although PGS was originally designed to stabilize Bunch's EVD update, it is generally applicable to any EVD, SVD, URV, QR, or orthogonal vector update. Moonen et al. [29] suggested that the bulk of the eigenvector stabilization in the PGS scheme is due to the normalization of the eigenvectors. Simulations seem to indicate that normalization alone stabilizes the eigenvectors almost as well as the PGS scheme, but not to working precision orthogonality. Edelman and Stewart provide some insight into the normalization only approach to maintaining orthogonality [15]. For additional analysis and variations on the basic idea of spreading orthogonalization out over time, see [30] and especially [28].

Many of the O(nr) adaptive subspace methods produce eigenvector estimates that are only approximately orthogonal, and normalization alone does not always provide enough stabilization to keep the orthogonality and other error measures small enough. We have found that PGS stabilization can noticeably improve both the subspace estimation performance as well as the DOA (or spatial frequency) estimation performance. For example, without PGS (but with normalization only), we found that Champagne's O(nr) perturbation-based eigen tracker (method PC) [5] sometimes gives spurious MUSIC-based frequency estimates. On the other hand, with PGS, Champagne's PC method produced improved subspace and frequency estimates. The orthogonality error was also significantly reduced. Similar performance boosts could be expected for any subspace or eigen tracking method (especially those that produce eigenvector estimates that are only approximately orthogonal, e.g., PAST and PASTd [43] or Yang and Kaveh's family of gradient based methods [44,45]). Unfortunately, normalization only and PGS are O(nr). Adding this kind of stabilization to an O(nr) subspace tracking method could double its overall computation.

Other variations on the original PGS idea involve symmetrizing the 2 × 2 transformation and making the pairwise orthogonalization cyclic [28]. The symmetric transformation assumes that the vector pairs are almost orthogonal so that higher order error terms can be ignored. If this is the case, the symmetric version can provide slightly better results at a somewhat higher computational cost. For methods that involve working precision orthogonal vectors, the original PGS scheme is overkill. Instead of doing PGS orthogonalization on each adjacent vector pair, cyclic PGS orthogonalizes only one pair of vectors per update, but cycles through all possible combinations over time. Thus, cyclic PGS covers all vector pairs without relying on the randomness of intervening rotations. Cyclic PGS spreads the orthogonalization process out in time even more than the adjacent vector PGS method. Moreover, cyclic PGS (or cyclic normalization) involves O(n) flops per update, but there is a small overhead associated with keeping track of the vector pair cycle.
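A minimal sketch of this flavor of partial orthogonalization is given below. It is our illustration: the exact PGS transformation of [10] and the symmetric variant of [28] differ in detail, but the structural idea is to Gram-Schmidt one column pair at O(n) cost, either sweeping adjacent pairs once per update or cycling through a single pair per update.

```python
import numpy as np

def pgs_pair(U, i, j):
    """Partially orthogonalize one column pair of U in place: normalize column i,
    remove its component from column j, then renormalize j.  O(n) flops per pair."""
    U[:, i] /= np.linalg.norm(U[:, i])
    U[:, j] -= (U[:, i].conj() @ U[:, j]) * U[:, i]
    U[:, j] /= np.linalg.norm(U[:, j])
    return U

def adjacent_pgs(U):
    """Adjacent-pair PGS: sweep the (0,1), (1,2), ... pairs once per update, O(nr)."""
    for i in range(U.shape[1] - 1):
        pgs_pair(U, i, i + 1)
    return U

def cyclic_pgs(U, k):
    """Cyclic PGS: orthogonalize a single pair per update (O(n)), cycling through
    all possible pairs over successive updates; k is the update index."""
    pairs = [(i, j) for i in range(U.shape[1]) for j in range(i + 1, U.shape[1])]
    i, j = pairs[k % len(pairs)]
    return pgs_pair(U, i, j)
```

In a tracker, one of these routines would be applied to the updated eigenvector (or subspace basis) matrix after each rank-one update.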

In summary, we can say that stabilization may not be needed for a small number of updates. On the other hand, if an unbounded number of updates is to be performed, some kind of stabilization is recommended. For methods that yield nearly orthogonal vectors at each update, only a small amount of orthogonalization is needed to control the error buildup. In these cases, cyclic PGS may be best. However, for methods that produce vectors that are only approximately orthogonal, a more complete orthogonalization scheme may be appropriate; e.g., a cyclic scheme with two or three vector pairs orthogonalized per update will produce better results than a single pair scheme.

66.3.3 Forward-Backward Averaging

In many subspace tracking problems, forward-backward (FB) averaging can improve subspace as well as DOA (or frequency) estimation performance. Although FB averaging is generally not appropriate for nonstationary processes, it does appear to improve spatial frequency estimation performance if the frequencies vary linearly within the effective observation window. Based on Fourier analysis of linearly varying frequencies, we infer that this is probably due to the fact that the average frequency in the window is identical for both the forward and the backward cases [14]. Consequently, the frequency estimates are reinforced by FB averaging. Besides improved estimation performance, FB averaging can be exploited to reduce computation by as much as 75% [24]. FB averaging can also reduce computer memory requirements because (conjugate symmetric or anti-symmetric) symmetries in the complex eigenvectors of an FB averaged correlation matrix (or the singular vectors of an FB data matrix) can be exposed through appropriate normalization.
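In the correlation domain, FB averaging amounts to averaging R with its conjugate flipped about both diagonals. A minimal sketch (ours; the computational and memory savings cited above come from exploiting the resulting centro-Hermitian structure, which is not shown here):

```python
import numpy as np

def fb_average(R):
    """Forward-backward average of a Hermitian correlation estimate:
    R_fb = (R + J conj(R) J) / 2, where J is the exchange (flip) matrix.
    The result is both Hermitian and persymmetric (centro-Hermitian)."""
    J = np.eye(R.shape[0])[::-1]
    return 0.5 * (R + J @ R.conj() @ J)

# Usage: R_fb = fb_average(R) before (or within) the eigen/subspace update.
```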

66.3.4 Frequency vs Subspace Estimation Performance

It has recently been shown with asymptotic analysis that a better subspace estimate does not necessarily result in a better MUSIC-based frequency estimate [23]. In subspace tracking simulations, we have also observed that some methods produce better subspace estimates, but the associated MUSIC-based frequency estimates are not always better. Consequently, if DOA estimation is the ultimate goal, subspace estimation performance may not be the best criterion for evaluating subspace tracking methods.

66.3.5 The Difficulty of Testing and Comparing Subspace Tracking Methods

A significant amount of research has been done on subspace and eigen tracking algorithms in the past few years, and much progress has been made in making subspace tracking more efficient. Not surprisingly, all of the methods developed to date have different strengths and weaknesses. Unfortunately, there has not been enough time to thoroughly analyze, study, and evaluate all of the new methods. Over the years, several tests have been devised to "experimentally" compare various methods, e.g., convergence tests [44], response to sudden changes [7], and crossing frequency tracks (where the signal subspace temporarily collapses) [8]. Some methods do well on one test, but not so well on another. It is difficult to objectively compare different subspace tracking methods because optimal operating parameters are usually unknown and therefore unused, and the performance criteria may be ill-defined or contradictory.

66.3.6 Spherical Subspace (SS) Updating — A General Framework for Simplified Updating

Most eigen and subspace tracking algorithms are based directly or indirectly on tracking some aspect of the EVD of a time varying correlation matrix estimate that is recursively updated according to Eq. (66.1) or (66.2). Since Eqs. (66.1) and (66.2) involve rank one and rank two modifications to the correlation matrix, most subspace tracking algorithms explicitly or implicitly involve rank one (or two) modification of the correlation matrix. Since rank two modifications can be computed as two rank one modifications, we will focus on rank one updating.


Basically, spherical subspace (SS) updates are simplified rank one EVD updates. The simplification involves sphericalizing subsets of eigenvalues (i.e., forcing each subset to have the same eigenlevel) so that the sphericalized subspaces can be deflated.

Based on an additive white noise signal model, Karasalo [21] and Schreiber [37] first suggested that the "noise" eigenvalues be replaced by their average value in order to reduce computation by deflation. Using Ljung's ODE-based method for analyzing stochastic recursive algorithms [25], it has recently been shown that, if the noise subspace is sphericalized, the dominant eigenstructure of a correlation matrix asymptotically converges to the true eigenstructure with probability one (under any noise assumption) [11]. It is important to realize that averaging the noise eigenvalues yields a spherical subspace in which the eigenvectors can be arbitrarily oriented as long as they form an orthonormal basis for the subspace. A rank-one modification affects only one component of the sphericalized subspace. Thus, only one of the multiple noise eigenvalues is changed by a rank-one modification. Consequently, making the noise subspace spherical (by averaging the noise eigenvalues, or replacing them with a constant eigenlevel) deflates the eigenproblem to an (r + 1) × (r + 1) problem, which corresponds to a signal subspace of dimension r, and the single noise component whose power is changed. For details on deflation, see [4].

The analysis in [11] shows that any number of sphericalized eigenlevels can be used to track various subspace spans associated with the correlation matrix. For example, if both the noise and the signal subspaces are sphericalized (i.e., the dominant and subdominant sets of eigenvalues are replaced by their respective averages), the problem deflates to a 2 × 2 eigenproblem that can be solved in closed form, noniteratively. We will call this doubly deflated SS update SA2 (Signal Averaged, Two Eigenlevels) [8]. In [13] we derived the SA2 algorithm ODE and used a Lyapunov function to show asymptotic convergence to the true subspaces w.p. 1 under a diminishing gain assumption. In fact, the SA2 subspace trajectories can be described with Lie bracket notation and follow an isospectral flow as described by Brockett's ODE [2]. A four level SS update (called SA4) was introduced in [9] to allow for information theoretic source detection (based on the eigenvalues at the boundary of the signal and noise subspaces) and automatic subspace size adjustment. A detailed analysis of SA4 and an SA4 minimum description length (SA4-MDL) detection scheme can be found in [11,41]. SA4 sphericalizes all the signal eigenvalues except the smallest one, and all the noise eigenvalues except the largest one, resulting in a 4 × 4 deflated eigenproblem. By tracking the eigenvalues that are on the boundary of the signal and noise subspaces, information theoretic detection schemes can be used to decide if the signal subspace dimension should be increased, decreased, or remain unchanged. Both SA2 and SA4 are O(nr) and noniterative.

The deflated core problem in SS updating can involve any EVD or SVD method that is desired. It can also involve other decompositions, e.g., the URVD [34]. To illustrate the basic idea of SS updating, we will explicitly show how an update is accomplished when only the smallest (n − r) "noise" eigenvalues are sphericalized. This particular SS update is called a Signal Eigenstructure (SE) update because only the dominant r "signal" eigencomponents are tracked. This case is equivalent to that described by Schreiber [37], and an SVD version is given by Karasalo [21].

To simplify and more clearly illustrate the idea of SS updating, we drop the normalization factor, (1 − α), and the k subscripts from Eq. (66.2) and use the eigendecomposition of R = U D U^H to expose a simpler underlying structure for a single rank-one update. Writing β = U^H x for the new data vector expressed in the current eigenvector basis,

$$
\begin{aligned}
\tilde R &= \alpha R + x x^H
          = U\left(\alpha D + \beta\beta^H\right)U^H
          = UG\left(\alpha D + \gamma\gamma^T\right)G^H U^H \\
         &= UGH\left(\alpha D + \zeta\zeta^T\right)H^T G^H U^H
          = \tilde U \tilde D \tilde U^H, \qquad \tilde U = UGHQ
\end{aligned}
\tag{66.12}
$$

where G = diag(β₁/|β₁|, ..., β_n/|β_n|) is a diagonal unitary transformation that has the effect of making the matrix inside the parentheses real [37] (i.e., γ = G^H β is real), H is an embedded Householder transformation that deflates the core problem by zeroing out certain elements of ζ = H^T γ (see the SE case below), and Q D̃ Q^T is the EVD of the simplified, deflated core matrix, (αD + ζζ^T). In general, H and Q will involve smaller matrices embedded in an n × n identity matrix. In order to more clearly see the details of deflation, we must concentrate on finding the eigendecomposition of the completely real matrix S = (αD + γγ^T) for a specific case. Let us consider the SE update and assume that the noise eigenvalues contained in the diagonal matrix have been replaced by their average value, d^(n), to produce a sphericalized noise subspace. We must then apply block Householder transformations to concentrate all of the power in the new data vector into a single component of the noise subspace. The update is thus deflated to an (r + 1) × (r + 1) embedded eigenproblem as shown below:

$$
S =
\begin{bmatrix} I_r & 0 \\ 0 & H^{(n)}_{n-r} \end{bmatrix}
\left(
\alpha \begin{bmatrix} D^{(s)}_r & 0 \\ 0 & d^{(n)} I_{n-r} \end{bmatrix}
+ \zeta\zeta^T
\right)
\begin{bmatrix} I_r & 0 \\ 0 & H^{(n)}_{n-r} \end{bmatrix}^T
\tag{66.15}
$$

$$
\phantom{S} =
\begin{bmatrix} I_r & 0 \\ 0 & H^{(n)}_{n-r} \end{bmatrix}
\begin{bmatrix} Q_{r+1} & 0 \\ 0 & I_{n-r-1} \end{bmatrix}
\begin{bmatrix} \tilde D^{(s)}_r & 0 & 0 \\ 0 & \tilde d^{(n)} & 0 \\ 0 & 0 & \alpha d^{(n)} I_{n-r-1} \end{bmatrix}
\begin{bmatrix} Q_{r+1} & 0 \\ 0 & I_{n-r-1} \end{bmatrix}^T
\begin{bmatrix} I_r & 0 \\ 0 & H^{(n)}_{n-r} \end{bmatrix}^T
\tag{66.16}
$$

where

$$
H = \begin{bmatrix} I_r & 0 \\ 0 & H^{(n)}_{n-r} \end{bmatrix},
\qquad
H^{(n)}_{n-r} = I_{n-r} - \frac{2\, v^{(n)} (v^{(n)})^T}{(v^{(n)})^T v^{(n)}},
\qquad
v^{(n)} = \gamma^{(n)} + |\gamma^{(n)}| \begin{bmatrix} 1 \\ 0_{(n-r-1)\times 1} \end{bmatrix},
$$

and, partitioning γ into its first r (signal) and last n − r (noise) components, γ = [γ^(s); γ^(n)],

$$
\zeta^T = (H^T \gamma)^T = \left[\, \gamma^{(s)T},\; |\gamma^{(n)}|,\; 0^T_{(n-r-1)\times 1} \,\right]. \tag{66.18}
$$

Here |γ^(n)| denotes the Euclidean norm of γ^(n), Q_{r+1} is the eigenvector matrix of the leading (r + 1) × (r + 1) block of the transformed core matrix, D̃^(s)_r contains the updated signal eigenvalues, and d̃^(n) is the single updated noise eigenvalue.

The superscripts (s) and (n) denote the signal and noise subspaces, respectively, and the subscripts denote the sizes of the various block matrices. In the actual implementation of the SE algorithm, the Householder transformations are not explicitly computed, as we will see below. Moreover, it should be stressed that the Householder transformation does not change the span of the noise subspace, but only re-orients the basis vectors within it.
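Although the chapter's own SE implementation avoids explicit Householder transformations, the structure of the update can be sketched directly in NumPy: project the new snapshot onto the tracked signal basis, use the residual direction as the single excited noise component, solve the (r + 1) × (r + 1) core eigenproblem, and re-sphericalize the remaining noise level. This is our illustrative O(nr²) sketch under Eq. (66.2) weighting, not the authors' algorithm; the function name, the use of a dense core eigensolver, and the re-sphericalization step are our own choices.

```python
import numpy as np

def se_update(Us, ds, dn, x, alpha):
    """One sketch of a Signal Eigenstructure (SE) spherical-subspace update.

    Us    : (n, r) orthonormal signal eigenvector estimates
    ds    : (r,)   signal eigenvalue estimates
    dn    : scalar average ("sphericalized") noise eigenvalue
    x     : (n,)   new snapshot vector
    alpha : exponential fading factor, as in Eq. (66.2)
    """
    n, r = Us.shape
    z = Us.conj().T @ x                      # component of x in the signal subspace
    x_perp = x - Us @ z                      # residual lying in the noise subspace
    rho = np.linalg.norm(x_perp)
    u_n = x_perp / rho if rho > 0 else x_perp    # degenerate rho ~ 0 not handled here

    # Deflated (r+1) x (r+1) core eigenproblem: only one noise direction is excited.
    y = np.concatenate([z, [rho]])
    C = alpha * np.diag(np.concatenate([ds, [dn]])) + (1 - alpha) * np.outer(y, y.conj())
    w, Q = np.linalg.eigh(C)                 # eigenvalues in ascending order

    # Rotate the tracked basis; keep the r dominant components as the new signal subspace.
    V = np.hstack([Us, u_n[:, None]]) @ Q    # O(n r^2) cost, dominating the update
    Us_new, ds_new = V[:, 1:], w[1:]
    # Re-sphericalize the noise: merge the discarded core eigenvalue with the
    # n - r - 1 untouched noise eigenvalues, which were simply scaled by alpha.
    dn_new = (w[0] + (n - r - 1) * alpha * dn) / (n - r)
    return Us_new, ds_new, dn_new
```

A full tracker would also apply normalization or PGS-style stabilization to the updated basis from time to time, as discussed in Section 66.3.2.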
