
DOCUMENT INFORMATION

Title: Tracking and Kalman Filtering Made Easy
Author: Eli Brookner
Field: Electrical Engineering
Type: Reference book
Year of publication: 1998
Number of pages: 4
File size: 43.44 KB

Contents


GENERAL RECURSIVE MINIMUM-VARIANCE GROWING-MEMORY FILTER (BAYES AND KALMAN FILTERS WITHOUT TARGET PROCESS NOISE)

In Section 6.3 we developed a recursive least-squares growing-memory filter for the case where the target trajectory is approximated by a polynomial. In this chapter we develop a recursive least-squares growing-memory filter that is not restricted to having the target trajectory approximated by a polynomial [5, pp. 461–482]. The only requirement is that Y_{n−i}, the measurement vector at time n−i, be linearly related to X_{n−i} in the error-free situation. The Y_{n−i} can be made up of multiple measurements obtained at time n−i, as in (4.1-1a), instead of a single measurement of a single coordinate, as was the case in (4.1-20), where Y_{n−1} = [y_{n−1}]. The Y_{n−i} could, for example, be a two-dimensional measurement of the target slant range and Doppler velocity. Extensions to other cases, such as the measurement of the three-dimensional polar coordinates of the target, are given in Section 16.2 and Chapter 17.

Assume that at time n we have L + 1 observations Y_n, Y_{n−1}, …, Y_{n−L} obtained at, respectively, times n, n − 1, …, n − L. These L + 1 observations are represented by the matrix Y^{(n)} of (4.1-11a). Next assume that at some later time n + 1 we have another observation Y_{n+1} given by

$$Y_{n+1} = M X_{n+1} + N_{n+1}$$

Assume also that at time n we have a minimum-variance estimate X*_{n,n} based on the past L + 1 measurements represented by Y^{(n)}. This estimate is given by (4.1-30) with W_n given by (4.5-4).

In turn, the covariance matrix S*_{n,n} is given by (4.5-5). Now, to determine the new minimum-variance estimate X*_{n+1,n+1} from the set of data consisting of Y^{(n)} and Y_{n+1}, one could again use (4.1-30) and (4.5-4) with Y^{(n)} now replaced by Y^{(n+1)}, which is Y^{(n)} of (4.1-11a) with Y_{n+1} added to it. Correspondingly, the matrices T and R^{(n)} would then be appropriately changed to account for the enlargement of Y^{(n)} to include Y_{n+1}. This approach, however, has the disadvantage that it does not make use of the extensive computations already carried out to obtain the previous minimum-variance estimate X*_{n,n} based on the past data Y^{(n)}. Moreover, it turns out that if Y_{n+1} is independent of Y^{(n)}, then the minimum-variance estimate X*_{n+1,n+1} can be obtained directly from Y_{n+1} and X*_{n,n} and their respective covariances R_{n+1} and S*_{n,n}. This is done by obtaining the minimum-variance estimate of X_{n+1} using Y_{n+1} and X*_{n,n} together with their covariances; no use is made of the original data set Y^{(n)}. This says that the estimate X*_{n,n} and its covariance matrix S*_{n,n} contain all the information we need about the previous L + 1 measurements, that is, about Y^{(n)}. Here, X*_{n,n} and its covariance matrix are sufficient statistics for the information contained in the past measurement vector Y^{(n)} together with its covariance matrix R^{(n)}. (This is similar to the situation encountered when we developed the recursive equations for the growing- and fading-memory filters in Sections 6.3, 7.2, and 1.2.6.)

[Tracking and Kalman Filtering Made Easy, Eli Brookner. Copyright © 1998 John Wiley & Sons, Inc. ISBNs: 0-471-18407-1 (Hardback); 0-471-22419-7 (Electronic).]

The recursive form of the minimum-variance estimate based on Y_{n+1} and X*_{n,n} is given by [5, p. 464]

$$X^*_{n+1,n+1} = X^*_{n+1,n} + H_{n+1}\left(Y_{n+1} - M X^*_{n+1,n}\right) \tag{9.2-1}$$

where

$$H_{n+1} = S^*_{n+1,n+1} M^T R_1^{-1} \tag{9.2-1a}$$

$$S^*_{n+1,n+1} = \left[\left(S^*_{n+1,n}\right)^{-1} + M^T R_1^{-1} M\right]^{-1} \tag{9.2-1b}$$

$$S^*_{n+1,n} = \Phi\, S^*_{n,n}\, \Phi^T \tag{9.2-1c}$$

$$X^*_{n+1,n} = \Phi\, X^*_{n,n} \tag{9.2-1d}$$

The above recursive filter is often referred to in the literature as the Bayes filter (because it can also be derived using the Bayes theorem on conditional probabilities [128]). The only requirement for the recursive minimum-variance filter to apply is that Y_{n+1} be independent of Y^{(n)}. When another measurement Y_{n+2}, independent of the previous measurements, is obtained at a later time n + 2, the above equations (indexed up by one) can be used again to obtain the estimate X*_{n+2,n+2}. If Y^{(n)} and Y_{n+1} are dependent, the Bayes filter could still be used, except that it would no longer provide the minimum-variance estimate. If the variates are reasonably uncorrelated, though, the estimate should still be a good one.
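As a concrete illustration, one cycle of the recursion (9.2-1) through (9.2-1d) can be sketched in a few lines of code for a constant-velocity target whose position is measured. The sampling interval, covariances, and initial estimate below are illustrative assumptions, not values from the text; only the structure of the update follows the equations above.

```python
import numpy as np

# One Bayes-filter cycle, (9.2-1)-(9.2-1d), for state X = [position, velocity].
# All numerical values are illustrative assumptions.
T = 1.0                                   # sampling interval (assumed)
Phi = np.array([[1.0, T], [0.0, 1.0]])    # transition matrix Phi
M = np.array([[1.0, 0.0]])                # observation matrix: position only
R1 = np.array([[4.0]])                    # measurement covariance R_1

# Minimum-variance estimate X*_{n,n} at time n and its covariance S*_{n,n}
x_nn = np.array([[10.0], [1.0]])
S_nn = np.array([[9.0, 0.0], [0.0, 1.0]])

y_next = np.array([[12.5]])               # new measurement Y_{n+1}

# (9.2-1c), (9.2-1d): predict covariance and state (no process noise)
S_pred = Phi @ S_nn @ Phi.T
x_pred = Phi @ x_nn

# (9.2-1b): updated covariance via the information (inverse-covariance) form
S_upd = np.linalg.inv(np.linalg.inv(S_pred) + M.T @ np.linalg.inv(R1) @ M)

# (9.2-1a): filter gain
H = S_upd @ M.T @ np.linalg.inv(R1)

# (9.2-1): corrected estimate, pulled from the prediction toward Y_{n+1}
x_upd = x_pred + H @ (y_next - M @ x_pred)
print(x_upd.ravel())
```

Note that the corrected position lands between the predicted measurement M X*_{n+1,n} and the new measurement Y_{n+1}, as the weighted-sum interpretation discussed later suggests.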


9.3 KALMAN FILTER (WITHOUT PROCESS NOISE)

If we apply the matrix inversion lemma given by (2.6-14) to (9.2-1b), we obtain after some manipulation the following equivalent algebraic form for the recursive minimum-variance growing-memory filter estimate [5, p. 465]:

$$X^*_{n,n} = X^*_{n,n-1} + H_n \left(Y_n - M X^*_{n,n-1}\right) \tag{9.3-1}$$

where

$$H_n = S^*_{n,n-1} M^T \left(R_1 + M S^*_{n,n-1} M^T\right)^{-1} \tag{9.3-1a}$$

$$S^*_{n,n} = \left(I - H_n M\right) S^*_{n,n-1} \tag{9.3-1b}$$

$$S^*_{n,n-1} = \Phi\, S^*_{n-1,n-1}\, \Phi^T \tag{9.3-1c}$$

$$X^*_{n,n-1} = \Phi\, X^*_{n-1,n-1} \tag{9.3-1d}$$

The preceding Kalman filter equations are the same as those given by (2.4-4a) to (2.4-4j) except that the target model dynamic noise (U_n or, equivalently, its covariance matrix Q_n) is not included. Not including the target model dynamic noise can lead to computational problems for the Kalman filter [5, Section 12.4]. For this reason this form of the Kalman filter is not generally used, and it is not the form proposed by Kalman. The Kalman filter with the target process noise included is revisited in Chapter 18.
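The equivalence that the inversion lemma establishes between the information form (9.2-1b) and the gain form (9.3-1a)/(9.3-1b) can be checked numerically. The matrix values below are illustrative assumptions chosen only so that both forms are well defined.

```python
import numpy as np

# Check that the Kalman gain form (9.3-1a)/(9.3-1b) and the Bayes
# information form (9.2-1b) yield the same updated covariance.
# All numerical values are illustrative assumptions.
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])   # transition matrix
M = np.array([[1.0, 0.0], [0.0, 1.0]])     # e.g., range and Doppler observed
R1 = np.diag([4.0, 0.25])                  # measurement covariance R_1

S_prev = np.array([[9.0, 1.0], [1.0, 2.0]])   # S*_{n-1,n-1}
S_pred = Phi @ S_prev @ Phi.T                 # (9.3-1c)

# Kalman form: one inversion of a measurement-sized matrix
H = S_pred @ M.T @ np.linalg.inv(R1 + M @ S_pred @ M.T)   # (9.3-1a)
S_kalman = (np.eye(2) - H @ M) @ S_pred                   # (9.3-1b)

# Bayes (information) form: two inversions of state-sized matrices
S_bayes = np.linalg.inv(np.linalg.inv(S_pred) + M.T @ np.linalg.inv(R1) @ M)

print(np.allclose(S_kalman, S_bayes))   # → True
```

The computational trade-off the next paragraphs describe is visible here: the Kalman form inverts one matrix of the measurement's dimension, while the Bayes form inverts two matrices of the state's dimension.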

As discussed in Sections 2.3, 2.5, and 2.6, the recursive minimum-variance growing-memory filter estimate is a weighted sum of the estimates Y_{n+1} and X*_{n+1,n}, with the weighting done according to the importance of the two estimates; see (2.3-1), (2.5-9), and (2.6-7). Specifically, it can be shown that the recursive minimum-variance estimate can be written in the form [5, p. 385]

$$X^*_{n+1,n+1} = S^*_{n+1,n+1}\left[\left(S^*_{n+1,n}\right)^{-1} X^*_{n+1,n} + M^T R_1^{-1} Y_{n+1}\right] \tag{9.4-1}$$

If the covariance matrix of y_{n+1} is dependent on n, then R_1 is replaced by R_{n+1}. The recursive minimum-variance Bayes and Kalman filter estimates are maximum-likelihood estimates when Y_{n+1} and Y^{(n)} are uncorrelated and Gaussian. All the other properties given in Section 4.5 for the minimum-variance estimate also apply. The Kalman filter has the advantage over the Bayes filter of eliminating the need for the two matrix inversions in (9.2-1b), which have a size equal to that of the state vector X*_{n,n} [which can be large, e.g., 10 for the example of (2.4-6)]. The Kalman filter, on the other hand, requires only the single matrix inversion in (9.3-1a), of an order equal to that of the measurement vector Y_{n+1} (which has a dimension of 4 when the target is measured in polar coordinates; see (2.4-7)). It is also possible to incorporate these four measurements one at a time if they are independent of each other; in this case no matrix inversion is needed.
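The one-at-a-time processing mentioned above can be sketched as follows. When the components of Y_{n+1} have independent errors (diagonal R_1), folding each scalar component in sequentially via (9.3-1)-(9.3-1b) needs only scalar divisions, and it reproduces the batch update exactly. The matrices and measurements below are illustrative assumptions.

```python
import numpy as np

# Sequential (scalar) measurement updates vs. the batch update (9.3-1).
# All numerical values are illustrative assumptions.
M = np.array([[1.0, 0.0], [0.0, 1.0]])   # two independent measured components
r = np.array([4.0, 0.25])                # diagonal of R_1
R1 = np.diag(r)

x_pred = np.array([[11.0], [1.0]])       # predicted estimate X*_{n+1,n}
S_pred = np.array([[10.0, 1.0], [1.0, 1.0]])
y = np.array([[12.5], [0.8]])            # measurement vector Y_{n+1}

# Batch update, (9.3-1) and (9.3-1a): one matrix inversion
H = S_pred @ M.T @ np.linalg.inv(R1 + M @ S_pred @ M.T)
x_batch = x_pred + H @ (y - M @ x_pred)

# Sequential update: fold in each scalar component in turn, no inversion
x_seq, S_seq = x_pred.copy(), S_pred.copy()
for i in range(len(r)):
    m = M[i:i+1, :]                                       # 1 x 2 row of M
    gain = S_seq @ m.T / (r[i] + float(m @ S_seq @ m.T))  # scalar denominator
    x_seq = x_seq + gain @ (y[i:i+1] - m @ x_seq)
    S_seq = (np.eye(2) - gain @ m) @ S_seq                # (9.3-1b) per scalar

print(np.allclose(x_batch, x_seq))       # → True
```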

In the Bayes and Kalman filters it is not necessary for Y_{n+1} to be just a single measurement at time t_{n+1}. The term Y_{n+1} could be generalized to consist of L + 1 measurements at L + 1 times, given by

$$Y_{n+1},\; Y_n,\; Y_{n-1},\; \ldots,\; Y_{n-L+1} \tag{9.4-2}$$

For this more general case we can express the above L + 1 measurements as a vector given by

$$Y^{(n+1)} = \begin{bmatrix} Y_{n+1} \\ Y_n \\ \vdots \\ Y_{n-L+1} \end{bmatrix} \tag{9.4-3}$$

Then from (4.1-5) through (4.1-10), (4.1-11) follows. It then immediately follows that (9.2-1) through (9.2-1d) and (9.3-1) through (9.3-1d) apply with M replaced by the matrix T of (4.1-11b) and with Y_{n+1} replaced by Y^{(n+1)} of (9.4-3).
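A minimal sketch of this stacked-measurement case, under the assumption (which holds with no process noise) that X_{n+1−i} = Φ^{−i} X_{n+1}, so that each past measurement relates to the current state through the row M Φ^{−i}; stacking those rows plays the role of the matrix T of (4.1-11b). The trajectory, times, and L below are illustrative assumptions.

```python
import numpy as np

# Stacked-measurement estimate of X_{n+1} from (9.4-2)/(9.4-3),
# assuming deterministic dynamics X_{n+1-i} = Phi^{-i} X_{n+1}.
# All numerical values are illustrative assumptions.
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity transition, T = 1
M = np.array([[1.0, 0.0]])                # position-only measurement
L = 2                                     # L + 1 = 3 stacked measurements

Phi_inv = np.linalg.inv(Phi)
# Rows of the stacked observation matrix: M, M Phi^{-1}, ..., M Phi^{-L}
T_mat = np.vstack([M @ np.linalg.matrix_power(Phi_inv, i) for i in range(L + 1)])

# Noise-free measurements Y_{n+1}, Y_n, Y_{n-1} of a true trajectory x = 5 + 2t
times = np.array([3.0, 2.0, 1.0])         # t_{n+1}, t_n, t_{n-1}
Y_stack = (5.0 + 2.0 * times).reshape(-1, 1)

# Least-squares estimate of X_{n+1} = [position, velocity] from the stack
x_hat, *_ = np.linalg.lstsq(T_mat, Y_stack, rcond=None)
print(x_hat.ravel())                      # position and velocity at t_{n+1}
```

With noise-free data the stack is solved exactly; with noisy data the same T_mat would be used in the weighted least-squares and recursive forms above.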

