
Kalman Filtering: Theory and Practice Using MATLAB (Part 4)


DOCUMENT INFORMATION

Title: Kalman Filtering: Theory and Practice Using MATLAB
Authors: Mohinder S. Grewal, Angus P. Andrews
Publisher: John Wiley & Sons, Inc.
Field: Control Systems / Signal Processing
Type: Textbook / Reference
Year: 2001
Pages: 55

Contents

TABLE 4.1 Linear Plant and Measurement Models

Model              Continuous time                     Discrete time
Plant              ẋ(t) = F(t)x(t) + w(t)              x_k = Φ_{k−1} x_{k−1} + w_{k−1}   (4.1)
Measurement        z(t) = H(t)x(t) + v(t)              z_k = H_k x_k + v_k               (4.2)
Plant noise        E⟨w(t)⟩ = 0                         E⟨w_k⟩ = 0                        (4.3)
                   E⟨w(t)wᵀ(s)⟩ = δ(t − s)Q(t)         E⟨w_k w_iᵀ⟩ = Δ(k − i)Q_k         (4.4)
Observation noise  E⟨v(t)⟩ = 0                         E⟨v_k⟩ = 0
                   E⟨v(t)vᵀ(s)⟩ = δ(t − s)R(t)         E⟨v_k v_iᵀ⟩ = Δ(k − i)R_k         (4.5)

Mohinder S. Grewal, Angus P. Andrews. Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-39254-5 (Hardback); 0-471-26638-8 (Electronic).


The measurement and plant noise v_k and w_k are assumed to be zero-mean Gaussian processes, and the initial value x_0 is a Gaussian variate with known mean x̂_0 and known covariance matrix P_0. Although the noise sequences w_k and v_k are assumed to be uncorrelated, the derivation in Section 4.5 will remove this restriction and modify the estimator equations accordingly.

The objective will be to find an estimate of the n-state vector x_k, represented by x̂_k, a linear function of the measurements z_1, …, z_k, that minimizes the weighted mean-squared error

E⟨[x_k − x̂_k]ᵀ M [x_k − x̂_k]⟩,   (4.6)

where M is any symmetric nonnegative-definite weighting matrix.

4.1.2 Main Points to Be Covered

Linear Quadratic Gaussian Estimation Problem We are now prepared to derive the mathematical forms of optimal linear estimators for the states of linear stochastic systems defined in the previous chapters. This is called the linear quadratic Gaussian (LQG) estimation problem. The dynamic systems are linear, the performance cost functions are quadratic, and the random processes are Gaussian.

Filtering, Prediction, and Smoothing There are three general types of estimators for the LQG problem:

• Predictors use observations strictly prior to the time that the state of the dynamic system is to be estimated:

t_obs < t_est.

• Filters use observations up to and including the time that the state of the dynamic system is to be estimated:

t_obs ≤ t_est.

TABLE 4.2 Dimensions of Vectors and Matrices in Linear Model


• Smoothers use observations beyond the time that the state of the dynamic system is to be estimated:

t_obs > t_est.

Orthogonality Principle A straightforward and simple approach using the orthogonality principle is used in the derivation¹ of estimators. These estimators will have minimum variance and be unbiased and consistent.

Unbiased Estimators The Kalman filter can be characterized as an algorithm for computing the conditional mean and covariance of the probability distribution of the state of a linear stochastic system with uncorrelated Gaussian process and measurement noise. The conditional mean is the unique unbiased estimate. It is propagated in feedback form by a system of linear differential equations or by the corresponding discrete-time equations. The conditional covariance is propagated by a nonlinear differential equation or its discrete-time equivalent. This implementation automatically minimizes the expected risk associated with any quadratic loss function of the estimation error.

Performance Properties of Optimal Estimators The statistical performance of the estimator can be predicted a priori (that is, before it is actually used) by solving the nonlinear differential (or difference) equations used in computing the optimal feedback gains of the estimator. These are called Riccati equations,² and the behavior of their solutions can be shown analytically in the most trivial cases. These equations also provide a means for verifying the proper performance of the actual estimator when it is running.

4.2 KALMAN FILTER

Observational Update Problem for System State Estimator Suppose that a measurement has been made at time t_k and that the information it provides is to be applied in updating the estimate of the state x of a stochastic system at time t_k. It is assumed that the measurement is linearly related to the state by an equation of the form z_k = H x_k + v_k, where H is the measurement sensitivity matrix and v_k is the measurement noise.

¹ For more mathematically oriented derivations, consult any of the references such as Anderson and Moore [1], Bozic [9], Brammer and Siffling [10], Brown [11], Bryson and Ho [14], Bucy and Joseph [15], Catlin [16], Chui and Chen [18], Gelb et al. [21], Jazwinski [23], Kailath [24], Maybeck [30, 31], Mendel [34, 35], Nahi [36], Ruymgaart and Soong [42], and Sorenson [47].

² Named in 1763 by Jean le Rond D'Alembert (1717–1783) for Count Jacopo Francesco Riccati (1676–1754), who had studied a second-order scalar differential equation [213], although not the form that we have here [54, 210]. Kalman gives credit to Richard S. Bucy for showing him that the Riccati differential equation is analogous to spectral factorization for defining optimal gains. The Riccati equation also arises naturally in the problem of separation of variables in ordinary differential equations and in the transformation of two-point boundary-value problems to initial-value problems [155].

Estimator in Linear Form The optimal linear estimate is equivalent to the general (nonlinear) optimal estimator if the variates x and z are jointly Gaussian (see Section 3.8.1). Therefore, it suffices to seek an updated estimate x̂_k(+), based on the observation z_k, that is a linear function of the a priori estimate and the measurement:

x̂_k(+) = K¹_k x̂_k(−) + K̄_k z_k,   (4.7)

where x̂_k(−) is the a priori estimate of x_k and x̂_k(+) is the a posteriori value of the estimate.

Optimization Problem The matrices K¹_k and K̄_k are as yet unknown. We seek those values of K¹_k and K̄_k such that the new estimate x̂_k(+) will satisfy the orthogonality principle of Section 3.8.2. This orthogonality condition can be written in the form

E⟨[x_k − x̂_k(+)] z_iᵀ⟩ = 0,   i = 1, 2, …, k − 1,   (4.8)
E⟨[x_k − x̂_k(+)] z_kᵀ⟩ = 0.   (4.9)

If one substitutes the formula for x_k from Equation 4.1 (in Table 4.1) and for x̂_k(+) from Equation 4.7 into Equation 4.8, then one will observe from Equations 4.1 and 4.2 that the data z_1, …, z_k do not involve the noise term w_k. Therefore, because the random sequences w_k and v_k are uncorrelated, it follows that E⟨w_k z_iᵀ⟩ = 0 for 1 ≤ i ≤ k.


Then Equation 4.11 can be reduced to the form

K¹_k = I − K̄_k H_k.

Clearly, this choice of K¹_k causes Equation 4.7 to satisfy a portion of the condition given by Equation 4.8, which was derived in Section 3.8. The choice of K̄_k is such that Equation 4.9 is satisfied.

Let the errors

x̃_k(+) ≜ x̂_k(+) − x_k,
x̃_k(−) ≜ x̂_k(−) − x_k.


Substituting for K¹_k, z_k, and x̃_k(−) and using the fact that E⟨x̃_k(−) v_kᵀ⟩ = 0, this last result can be modified as follows:


This last equation is the so-called "Joseph form" of the covariance update equation derived by P. D. Joseph [15]. By substituting for K̄_k from Equation 4.19, it can be put into the following forms:

Error covariance extrapolation models the effects of time on the covariance matrix of estimation uncertainty, which is reflected in the a priori values of the covariance and state estimates,

P_k(−) = E⟨x̃_k(−) x̃_kᵀ(−)⟩,
x̂_k(−) = Φ_{k−1} x̂_{k−1}(+),   (4.25)

respectively. Subtract x_k from both sides of the last equation to obtain the equations

x̂_k(−) − x_k = Φ_{k−1} x̂_{k−1}(+) − x_k,
x̃_k(−) = Φ_{k−1} [x̂_{k−1}(+) − x_{k−1}] − w_{k−1}
        = Φ_{k−1} x̃_{k−1}(+) − w_{k−1}

for the propagation of the estimation error, x̃. Postmultiply it by x̃_kᵀ(−) (on both sides of the equation) and take the expected values. Use the fact that E⟨x̃_{k−1} w_{k−1}ᵀ⟩ = 0 to obtain the result

P_k(−) = Φ_{k−1} P_{k−1}(+) Φ_{k−1}ᵀ + Q_{k−1}.


4.2.1 Summary of Equations for the Discrete-Time Kalman Estimator

The equations derived in the previous section are summarized in Table 4.3. In this formulation of the filter equations, G has been combined with the plant covariance:

1. Compute P_k(−) using P_{k−1}(+), Φ_{k−1}, and Q_{k−1}.

2. Compute K̄_k using P_k(−) (computed in step 1), H_k, and R_k.

3. Compute P_k(+) using K̄_k (computed in step 2) and P_k(−) (from step 1).

4. Compute successive values of x̂_k(+) recursively using the computed values of K̄_k (from step 3), the given initial estimate x̂_0, and the input data z_k.

TABLE 4.3 Discrete-Time Kalman Filter Equations

System dynamic model:
x_k = Φ_{k−1} x_{k−1} + w_{k−1},   w_k ~ N(0, Q_k)

Measurement model:
z_k = H_k x_k + v_k,   v_k ~ N(0, R_k)

Initial conditions:
E⟨x_0⟩ = x̂_0,   E⟨x̃_0 x̃_0ᵀ⟩ = P_0

State estimate extrapolation:
x̂_k(−) = Φ_{k−1} x̂_{k−1}(+)   (4.25)

Error covariance extrapolation:
P_k(−) = Φ_{k−1} P_{k−1}(+) Φ_{k−1}ᵀ + Q_{k−1}

Kalman gain:
K̄_k = P_k(−) H_kᵀ [H_k P_k(−) H_kᵀ + R_k]⁻¹   (4.19)

State estimate observational update:
x̂_k(+) = x̂_k(−) + K̄_k [z_k − H_k x̂_k(−)]   (4.21)

Error covariance update:
P_k(+) = [I − K̄_k H_k] P_k(−)
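The four steps above can be collected into a single filter cycle. The following is a minimal sketch in Python/NumPy rather than the book's MATLAB; the function and variable names are mine, not from the text:

```python
import numpy as np

def kalman_step(x_prev, P_prev, Phi, Q, H, R, z):
    """One cycle of the discrete-time Kalman filter (steps 1-4, Table 4.3)."""
    # Step 1: error covariance extrapolation, and the a priori state estimate
    P_minus = Phi @ P_prev @ Phi.T + Q
    x_minus = Phi @ x_prev
    # Step 2: Kalman gain (Equation 4.19)
    S = H @ P_minus @ H.T + R
    K = P_minus @ H.T @ np.linalg.inv(S)
    # Step 3: error covariance observational update
    P_plus = (np.eye(len(x_prev)) - K @ H) @ P_minus
    # Step 4: state estimate observational update (Equation 4.21)
    x_plus = x_minus + K @ (z - H @ x_minus)
    return x_plus, P_plus
```

With a large initial P_0, the first update pulls the estimate almost all the way to the first measurement, as expected from the gain formula.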


Step 4 of the Kalman filter implementation [computation of x̂_k(+)] can be implemented only for state vector propagation where simulator or real data sets are available. An example of this is given in Section 4.12.

In the design trade-offs, the covariance matrix update (steps 1 and 3) should be checked for symmetry and positive definiteness. Failure to attain either condition is a sign that something is wrong, either a program "bug" or an ill-conditioned problem. In order to overcome ill-conditioning, another equivalent expression for P_k(+) is called the "Joseph form,"⁴ as shown in Equation 4.23:

P_k(+) = [I − K̄_k H_k] P_k(−) [I − K̄_k H_k]ᵀ + K̄_k R_k K̄_kᵀ.

Note that the right-hand side of this equation is the summation of two symmetric matrices. The first of these is positive definite and the second is nonnegative definite, thereby making P_k(+) a positive definite matrix.

There are many other forms⁵ for K̄_k and P_k(+) that might not be as useful for robust computation. It can be shown that the state vector update, Kalman gain, and error covariance equations represent an asymptotically stable system, and therefore, the estimate of state x̂_k becomes independent of the initial estimate x̂_0, P_0 as k is increased.
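A quick numerical check of the equivalence: for the optimal gain, the Joseph form agrees with the shorter form [I − K̄H]P(−), while producing a symmetric result by construction. A sketch in Python/NumPy with assumed example values (not from the text):

```python
import numpy as np

def joseph_update(P_minus, K, H, R):
    # Joseph form (Equation 4.23): sum of two symmetric terms,
    # symmetric regardless of roundoff in K
    I_KH = np.eye(P_minus.shape[0]) - K @ H
    return I_KH @ P_minus @ I_KH.T + K @ R @ K.T

def short_update(P_minus, K, H):
    # conventional form [I - KH] P(-): cheaper, but roundoff can
    # destroy symmetry and positive definiteness
    return (np.eye(P_minus.shape[0]) - K @ H) @ P_minus

# assumed example values, chosen only for illustration
P_minus = np.array([[2.0, 0.5], [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)  # optimal gain, Eq. 4.19

P_joseph = joseph_update(P_minus, K, H, R)
P_short = short_update(P_minus, K, H)
```

The algebraic equality holds only when K is the optimal gain; for a perturbed gain the Joseph form remains valid (and symmetric) while the short form does not.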

Figure 4.2 shows a typical time sequence of values assumed by the ith component of the estimated state vector (plotted with solid circles) and its corresponding variance of estimation uncertainty (plotted with open circles). The arrows show the successive values assumed by the variables, with the annotation (in parentheses) on the arrows indicating which input variables define the indicated transitions. Note that each variable assumes two distinct values at each discrete time: its a priori value

Fig. 4.1 Block diagram of system, measurement model, and discrete-time Kalman filter.

⁴ After Bucy and Joseph [15].

⁵ Some of the alternative forms for computing K̄_k and P_k(+) can be found in Jazwinski [23], Kailath [24], and Sorenson [46].


corresponding to the value before the information in the measurement is used, and the a posteriori value corresponding to the value after the information is used.

EXAMPLE 4.1 Let the system dynamics and observations be given by the following equations:

x_k = x_{k−1} + w_{k−1},   z_k = x_k + v_k,
E⟨v_k⟩ = E⟨w_k⟩ = 0.
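The model of Example 4.1 is simple enough to run directly. A hedged Python/NumPy sketch (the values Q = 0.25, R = 1 and the random seed are my choices for illustration, not values from the example):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, R = 0.25, 1.0                 # assumed noise variances, chosen for illustration
x_true, x_hat, P = 0.0, 0.0, 10.0

for k in range(100):
    # simulate the model of Example 4.1: x_k = x_{k-1} + w_{k-1}, z_k = x_k + v_k
    x_true += rng.normal(0.0, np.sqrt(Q))
    z = x_true + rng.normal(0.0, np.sqrt(R))
    # temporal update (Phi = 1 and H = 1 for this model)
    P += Q
    # observational update
    K = P / (P + R)
    x_hat += K * (z - x_hat)
    P *= 1.0 - K
```

The covariance sequence is independent of the data; it settles at the positive root of p² + Qp − QR = 0, here roughly 0.390.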


When the components of the measurement noise are uncorrelated (R_k diagonal), it is advantageous to consider the components of z as independent scalar measurements, rather than as a vector measurement. The principal advantages are as follows:

1. Reduced Computation Time. The number of arithmetic computations required for processing an ℓ-vector z as ℓ successive scalar measurements is significantly less than the corresponding number of operations for vector measurement processing. (It is shown in Chapter 6 that the number of computations for the vector implementation grows as ℓ³, whereas that of the scalar implementation grows only as ℓ.)

2. Improved Numerical Accuracy. Avoiding matrix inversion in the implementation of the covariance equations (by making the expression HPHᵀ + R a scalar) improves the robustness of the covariance computations against roundoff errors.

The filter implementation in these cases requires ℓ iterations of the observational update equations using the rows of H as measurement "matrices" (with row dimension equal to 1) and the diagonal elements of R as the corresponding (scalar) measurement noise covariance. The updating can be implemented iteratively

as the following equations:

for i = 1, 2, 3, …, ℓ, using the initial values

P_k[0] = P_k(−),   x̂_k[0] = x̂_k(−);

intermediate variables

R_k[i] = ith diagonal element of the ℓ × ℓ diagonal matrix R_k,
H_k[i] = ith row of the ℓ × n matrix H_k;

and final values

P_k[ℓ] = P_k(+),   x̂_k[ℓ] = x̂_k(+).
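The iteration just described can be sketched as follows in Python/NumPy (a sketch assuming diagonal R_k; names are my choosing). Each scalar update divides by the scalar innovation variance, so no matrix inversion is needed:

```python
import numpy as np

def sequential_update(x_minus, P_minus, H, R_diag, z):
    """Process an l-vector measurement as l successive scalar updates."""
    x, P = x_minus.copy(), P_minus.copy()
    for i in range(len(z)):
        h = H[i]                          # i-th row of H, row dimension 1
        s = h @ P @ h + R_diag[i]         # scalar innovation variance
        K = P @ h / s                     # gain by division, not inversion
        x = x + K * (z[i] - h @ x)
        P = P - np.outer(K, h @ P)        # [I - K h] P
    return x, P
```

For diagonal R this reproduces the batch vector update exactly, up to roundoff.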

4.2.3 Using the Covariance Equations for Design Analysis

It is important to remember that the Kalman gain and error covariance equations are independent of the actual observations. The covariance equations alone are all that is required for characterizing the performance of a proposed sensor system before it is actually built. At the beginning of the design phase of a measurement and estimation system, when neither real nor simulated data are available, just the covariance calculations can be used to obtain preliminary indications of estimator performance. Covariance calculations consist of solving the estimator equations with steps 1–3 of the previous subsection, repeatedly. These covariance calculations will involve the plant noise covariance matrix Q, measurement noise covariance matrix R, state transition matrix Φ, measurement sensitivity matrix H, and initial covariance matrix P_0, all of which must be known for the designs under consideration.
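Since steps 1–3 never touch a measurement value, this a priori analysis can be sketched as a loop over the covariance recursion alone (Python/NumPy; the design parameters below are illustrative assumptions, not values from the text):

```python
import numpy as np

def covariance_analysis(P0, Phi, Q, H, R, steps):
    """Predict estimator accuracy before any data exist by iterating
    only the gain and covariance equations (steps 1-3)."""
    P = P0
    history = []
    for _ in range(steps):
        P = Phi @ P @ Phi.T + Q                        # step 1: extrapolation
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # step 2: Kalman gain
        P = (np.eye(P.shape[0]) - K @ H) @ P           # step 3: update
        history.append(P.copy())
    return history

# illustrative scalar design: a random-walk state observed directly
hist = covariance_analysis(np.array([[10.0]]), np.array([[1.0]]),
                           np.array([[0.04]]), np.array([[1.0]]),
                           np.array([[1.0]]), 100)
```

The history of P values is exactly the accuracy prediction a designer would examine before committing to a sensor configuration.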

Q and R are positive definite.

It is desired to find the estimate of the n-state vector x(t), represented by x̂(t), which is a linear function of the measurements z(t), 0 ≤ t ≤ T, that minimizes the scalar equation

E⟨[x(t) − x̂(t)]ᵀ M [x(t) − x̂(t)]⟩,   (4.32)

where M is a symmetric positive-definite matrix.

The initial estimate and covariance matrix are x̂_0 and P_0.

This section provides a formal derivation of the continuous-time Kalman estimator. A rigorous derivation can be achieved by using the orthogonality principle as in the discrete-time case. In view of the main objective (to obtain efficient and practical estimators), less emphasis is placed on continuous-time estimators.

Let Δt be the time interval t_k − t_{k−1}. As shown in Chapters 2 and 3, the following relationships are obtained:

Φ(t_k, t_{k−1}) = Φ_k = I + F(t_{k−1})Δt + O(Δt²),


where O(Δt²) consists of terms with powers of Δt greater than or equal to two. For measurement noise

R_k = R(t_k)/Δt,

and for process noise

Q_k = Q(t_k)Δt.

The Kalman gain of Equation 4.19 becomes, in the limit,


In similar fashion, the state vector update equation can be derived from Equations 4.21 and 4.25 by taking the limit as Δt → 0 to obtain the differential equation for the estimate:

d x̂(t)/dt = F(t) x̂(t) + K̄(t)[z(t) − H(t) x̂(t)].

In the absence of measurements, prediction reduces to the extrapolation equations

x̂_k(+) = Φ_{k−1} x̂_{k−1}(+)   (4.39)

and

P_k(+) = Φ_{k−1} P_{k−1}(+) Φ_{k−1}ᵀ + Q_{k−1}.   (4.40)

Previous values of the estimates will become the initial conditions for the above equations.

4.4.2 Accommodating Missing Data

It sometimes happens in practice that measurements that had been scheduled to occur over some time interval (t_{k1} < t ≤ t_{k2}) are, in fact, unavailable or unreliable. The estimation accuracy will suffer from the missing information, but the filter can continue to operate without modification. One can continue using the prediction algorithm given in Section 4.4 to continually estimate x_k for k > k1 using the last available estimate x̂_{k1} until the measurements again become useful (after k = k2).

It is unnecessary to perform the observational update, because there is no information on which to base the conditioning. In practice, the filter is often run with the measurement sensitivity matrix H = 0 so that, in effect, the only update performed is the temporal update.
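A sketch of how a filter cycle can tolerate a dropout (Python/NumPy; names are mine): passing no measurement performs only the temporal update, which has the same effect as setting H = 0 as described above.

```python
import numpy as np

def filter_cycle(x, P, Phi, Q, H, R, z=None):
    """One Kalman filter cycle; z=None marks a missing measurement."""
    # temporal update always runs
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    if z is None:
        return x, P              # no information: skip the observational update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

During a dropout the covariance grows by Q each step; the first good measurement afterwards shrinks it again.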


4.5 CORRELATED NOISE SOURCES

4.5.1 Correlation between Plant and Measurement Noise

We want to consider the extensions of the results given in Sections 4.2 and 4.3, allowing correlation between the two noise processes (assumed jointly Gaussian). Let the correlation be given by

E⟨w_{k1} v_{k2}ᵀ⟩ = C_k Δ(k2 − k1)   for the discrete-time case,
E⟨w(t1) vᵀ(t2)⟩ = C(t) δ(t2 − t1)   for the continuous-time case.

For this extension, the discrete-time estimators have the same initial conditions and state estimate extrapolation and error covariance extrapolation equations. However, the measurement update equations in Table 4.3 must be modified accordingly.

Time-correlated measurement noise can be modeled as

v_k = A_{k−1} v_{k−1} + Z_{k−1},   (4.41)

where Z_k is zero-mean white Gaussian noise. Equation 4.1 is augmented by Equation 4.41, and the new state vector

X_k = [x_k  v_k]ᵀ

satisfies the difference equation:


The measurement noise is zero, R_k = 0. The estimator algorithm will work as long as H_k P_k(−) H_kᵀ + R_k is invertible. Details of the numerical difficulties of this problem (when R_k is singular) are given in Chapter 6.

For continuous-time estimators, the augmentation does not work, because K̄(t) = P(t)Hᵀ(t)R⁻¹(t) is required. Therefore, R⁻¹(t) must exist. Alternate techniques are required; for detailed information see Gelb et al. [21].

4.6 RELATIONSHIPS BETWEEN KALMAN AND WIENER FILTERS

The Wiener filter is defined for stationary systems in continuous time, and the Kalman filter is defined for either stationary or nonstationary systems in either discrete time or continuous time, but with finite state dimension. To demonstrate the connections on problems satisfying both sets of constraints, take the continuous-time Kalman–Bucy estimator equations of Section 4.3, letting F, G, and H be constants, the noises be stationary (Q and R constant), and the filter reach steady state (P constant). That is, as t → ∞, then Ṗ(t) → 0. The Riccati differential equation from Section 4.3 becomes the algebraic Riccati equation

0 = F P(∞) + P(∞) Fᵀ + G Q Gᵀ − P(∞) Hᵀ R⁻¹ H P(∞)

for continuous-time systems. The positive-definite solution of this algebraic equation is the steady-state value of the covariance matrix, P(∞). The Kalman–Bucy filter equation in steady state is then
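One way to obtain P(∞) numerically is to integrate the Riccati differential equation until it stops changing; the limit satisfies the algebraic equation above. A crude Euler-integration sketch in Python/NumPy (the step size and horizon are my choices, not a recommendation from the text):

```python
import numpy as np

def steady_state_P(F, G, H, Q, R, dt=1e-3, t_max=50.0):
    """Integrate dP/dt = FP + PF^T + GQG^T - P H^T R^{-1} H P to steady state."""
    n = F.shape[0]
    P = np.zeros((n, n))
    Rinv = np.linalg.inv(R)
    for _ in range(int(t_max / dt)):
        dP = F @ P + P @ F.T + G @ Q @ G.T - P @ H.T @ Rinv @ H @ P
        P = P + dt * dP
    return P
```

For the scalar case F = 0 and G = H = Q = R = 1, the algebraic equation reduces to P(∞)² = 1, whose nonnegative root P(∞) = 1 the integration reproduces.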


4.7 QUADRATIC LOSS FUNCTIONS

The Kalman filter minimizes any quadratic loss function of estimation error. Just the fact that it is unbiased is sufficient to prove this property, but saying that the estimate is unbiased is equivalent to saying that x̂ = E⟨x⟩. That is, the estimated value is the mean of the probability distribution of the state.

4.7.1 Quadratic Loss Functions of Estimation Error

A loss function or penalty function⁶ is a real-valued function of the outcome of a random event. A loss function reflects the value of the outcome. Value concepts can be somewhat subjective. In gambling, for example, your perceived loss function for the outcome of a bet may depend upon your personality and current state of winnings, as well as on how much you have riding on the bet.

Loss Functions of Estimates In estimation theory, the perceived loss is generally a function of estimation error (the difference between an estimated function of the outcome and its actual value), and it is generally a monotonically increasing function of the absolute value of the estimation error. In other words, bigger errors are valued less than smaller errors.

Quadratic Loss Functions If x is a real n-vector (variate) associated with the outcome of an event and x̂ is an estimate of x, then a quadratic loss function for the estimation error x̂ − x has the form

L(x̂ − x) = (x̂ − x)ᵀ M (x̂ − x),   (4.42)

where M is a symmetric positive-definite matrix. One may as well assume that M is symmetric, because the skew-symmetric part of M does not influence the quadratic loss function. The reason for assuming positive definiteness is to assure that the loss is zero only if the error is zero, and the loss is a monotonically increasing function of the absolute estimation error.

4.7.2 Expected Value of a Quadratic Loss Function

Loss and Risk The expected value of loss is sometimes called risk. It will be shown that the expected value of a quadratic loss function of the estimation error

⁶ These are concepts from decision theory, which includes estimation theory. The theory might have been built just as well on more optimistic concepts, such as "gain functions," "benefit functions," or "reward functions," but the nomenclature seems to have been developed by pessimists. This focus on the negative aspects of the problem is unfortunate, and you should not allow it to dampen your spirit.


x̂ − x is a quadratic function of x̂ − E⟨x⟩, where E⟨x̂⟩ = E⟨x⟩. This demonstration will depend upon the following identities:

E_x⟨(x − E⟨x⟩)ᵀ M (x − E⟨x⟩)⟩
  = E_x⟨trace[(x − E⟨x⟩)ᵀ M (x − E⟨x⟩)]⟩   (4.45)
  = E_x⟨trace[M (x − E⟨x⟩)(x − E⟨x⟩)ᵀ]⟩   (4.46)
  = trace[M E_x⟨(x − E⟨x⟩)(x − E⟨x⟩)ᵀ⟩]   (4.47)
  = trace[M P],   (4.48)

where

P ≜ E_x⟨(x − E⟨x⟩)(x − E⟨x⟩)ᵀ⟩.   (4.49)

Risk of a Quadratic Loss Function In the case of the quadratic loss function defined above, the expected loss (risk) will be

E⟨L(x̂ − x)⟩ = E_x⟨(x̂ − x)ᵀ M (x̂ − x)⟩
  = E_x⟨[(x̂ − E⟨x⟩) − (x − E⟨x⟩)]ᵀ M [(x̂ − E⟨x⟩) − (x − E⟨x⟩)]⟩   (4.52)
  = E_x⟨(x̂ − E⟨x⟩)ᵀ M (x̂ − E⟨x⟩) + (x − E⟨x⟩)ᵀ M (x − E⟨x⟩)⟩
    − E_x⟨(x̂ − E⟨x⟩)ᵀ M (x − E⟨x⟩) + (x − E⟨x⟩)ᵀ M (x̂ − E⟨x⟩)⟩   (4.53)
  = (x̂ − E⟨x⟩)ᵀ M (x̂ − E⟨x⟩) + E_x⟨(x − E⟨x⟩)ᵀ M (x − E⟨x⟩)⟩
    − (x̂ − E⟨x⟩)ᵀ M E_x⟨x − E⟨x⟩⟩ − E_x⟨x − E⟨x⟩⟩ᵀ M (x̂ − E⟨x⟩)   (4.54)
  = (x̂ − E⟨x⟩)ᵀ M (x̂ − E⟨x⟩) + trace[M P],   (4.55)

which is a quadratic function of x̂ − E⟨x⟩ with the added nonnegative⁷ constant trace[MP].
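The identity in Equation 4.55 is easy to confirm numerically: a Monte Carlo estimate of the risk matches the quadratic-plus-trace formula. A sketch in Python/NumPy with arbitrary assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)
mean = np.array([1.0, -2.0])                 # E<x>
P = np.array([[1.0, 0.3], [0.3, 2.0]])       # covariance of x
M = np.array([[2.0, 0.0], [0.0, 1.0]])       # symmetric positive-definite weight
x_hat = np.array([1.5, -1.0])                # an arbitrary (biased) estimate

# Monte Carlo estimate of the risk E<(x_hat - x)^T M (x_hat - x)>
x = rng.multivariate_normal(mean, P, size=200_000)
e = x_hat - x
risk_mc = np.mean(np.einsum('ij,jk,ik->i', e, M, e))

# Equation 4.55: (x_hat - E<x>)^T M (x_hat - E<x>) + trace[M P]
d = x_hat - mean
risk_formula = d @ M @ d + np.trace(M @ P)
```

Shrinking x_hat toward the mean removes the first term, leaving the irreducible risk trace[MP], which is the content of the unbiasedness result in the next subsection.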

4.7.3 Unbiased Estimates and Quadratic Loss

The estimate x̂ = E⟨x⟩ minimizes the expected value of any positive-definite quadratic loss function. From the above derivation,

⁷ Recall that M and P are symmetric and nonnegative definite, and the matrix trace of any product of symmetric nonnegative definite matrices is nonnegative.


E⟨L(x̂ − x)⟩ = (x̂ − E⟨x⟩)ᵀ M (x̂ − E⟨x⟩) + trace[M P] ≥ trace[M P],

with equality only if

x̂ = E⟨x⟩,

where it has been assumed only that the mean E⟨x⟩ and covariance E_x⟨(x − E⟨x⟩)(x − E⟨x⟩)ᵀ⟩ are defined for the probability distribution of x. This demonstrates the utility of quadratic loss functions in estimation theory: they always lead to the mean as the estimate with minimum expected loss (risk).

Unbiased Estimates An estimate x̂ is called unbiased if the expected estimation error E_x⟨x̂ − x⟩ = 0. What has just been shown is that an unbiased estimate minimizes the expected value of any quadratic loss function of estimation error.

4.8 MATRIX RICCATI DIFFERENTIAL EQUATION

The need to solve the Riccati equation is perhaps the greatest single cause of anxiety and agony on the part of people faced with implementing a Kalman filter. This section presents a brief discussion of solution methods for the Riccati differential equation for the Kalman–Bucy filter. An analogous treatment of the discrete-time problem for the Kalman filter is presented in the next section. A more thorough treatment of the Riccati equation can be found in the book by Bittanti et al. [54].

4.8.1 Transformation to a Linear Equation

The Riccati differential equation was first studied in the eighteenth century as a nonlinear scalar differential equation, and a method was derived for transforming it to a linear matrix differential equation. That same method works when the dependent variable of the original Riccati differential equation is a matrix. That solution method is derived here for the matrix Riccati differential equation of the Kalman–Bucy filter. An analogous solution method for the discrete-time matrix Riccati equation of the Kalman filter is derived in the next section.

Matrix Fractions A matrix product of the sort A B⁻¹ is called a matrix fraction, and a representation of a matrix M in the form

M = A B⁻¹

will be called a fraction decomposition of M. The matrix A is the numerator of the fraction, and the matrix B is its denominator. It is necessary that the matrix denominator be nonsingular.


Linearization by Fraction Decomposition The Riccati differential equation is nonlinear. However, a fraction decomposition of the covariance matrix results in a linear differential equation for the numerator and denominator matrices. The numerator and denominator matrices will be functions of time, such that the product A(t)B⁻¹(t) satisfies the matrix Riccati differential equation and its boundary conditions.

Derivation By taking the derivative of the matrix fraction A(t)B⁻¹(t) with respect to t, using the fact⁸ that

(d/dt) B⁻¹(t) = −B⁻¹(t) Ḃ(t) B⁻¹(t),

and substituting P(t) = A(t)B⁻¹(t) into the Riccati differential equation, one obtains

Ȧ(t)B⁻¹(t) − A(t)B⁻¹(t)Ḃ(t)B⁻¹(t)
  = F(t)A(t)B⁻¹(t) + A(t)B⁻¹(t)Fᵀ(t)
    − A(t)B⁻¹(t)Hᵀ(t)R⁻¹(t)H(t)A(t)B⁻¹(t) + Q(t),   (4.63)

Ȧ(t) − A(t)B⁻¹(t){Ḃ(t)} = F(t)A(t) + Q(t)B(t) − A(t)B⁻¹(t)
  × {Hᵀ(t)R⁻¹(t)H(t)A(t) − Fᵀ(t)B(t)},   (4.64)

which is satisfied if the numerator and denominator matrices satisfy the linear equations

Ȧ(t) = F(t)A(t) + Q(t)B(t),   (4.65)
Ḃ(t) = Hᵀ(t)R⁻¹(t)H(t)A(t) − Fᵀ(t)B(t).   (4.66)


Hamiltonian Matrix This is the name⁹ given the matrix

C(t) = [ F(t)               Q(t)   ]
       [ Hᵀ(t)R⁻¹(t)H(t)   −Fᵀ(t)  ]   (4.68)

of the matrix Riccati differential equation, in terms of which Equations 4.65 and 4.66 take the form

d/dt [ A(t) ]  =  C(t) [ A(t) ]
     [ B(t) ]          [ B(t) ].

Boundary Constraints The initial values of A(t) and B(t) must also be constrained by the initial value of P(t). This is easily satisfied by taking A(t₀) = P(t₀) and B(t₀) = I, the identity matrix.

4.8.2 Time-Invariant Problem

In the time-invariant case, the Hamiltonian matrix C is also time invariant. As a consequence, the solution for the numerator A and denominator B of the matrix fraction can be represented in matrix form as the product

[ A(t) ]  =  e^{Ct} [ A(0) ]
[ B(t) ]            [ B(0) ].
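This product can be sketched in Python/NumPy (the book itself works in MATLAB). Here the matrix exponential is taken via an eigendecomposition of the Hamiltonian matrix, which is assumed diagonalizable; function and variable names are mine:

```python
import numpy as np

def riccati_matrix_fraction(F, H, Q, R, P0, t):
    """P(t) = A(t) B(t)^{-1} with [A; B] = exp(C t) [P0; I],
    where C is the Hamiltonian matrix of the Riccati equation."""
    n = F.shape[0]
    Rinv = np.linalg.inv(R)
    C = np.block([[F, Q], [H.T @ Rinv @ H, -F.T]])
    # exp(C t) via diagonalization (C assumed diagonalizable here)
    w, V = np.linalg.eig(C)
    expCt = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real
    AB = expCt @ np.vstack([P0, np.eye(n)])    # A(0) = P0, B(0) = I
    A, B = AB[:n], AB[n:]
    return A @ np.linalg.inv(B)
```

For the scalar case F = 0, H = Q = R = 1, P(0) = 0, the Riccati equation reduces to dP/dt = 1 − P², whose solution is tanh t; the matrix-fraction result agrees.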

4.8.3 Scalar Time-Invariant Problem

For this problem, the numerator A and denominator B of the "matrix fraction" AB⁻¹ will be scalars, but C will be a 2 × 2 matrix. We will here show how its exponential can be obtained in closed form. This will illustrate an application of the linearization procedure, and the results will serve to illuminate properties of the solutions, such as their dependence on initial conditions and on the scalar parameters F, H, R, and Q.

Linearizing the Differential Equation The scalar time-invariant Riccati differential equation and its linearized equivalent are

Ṗ(t) = 2F P(t) − H² P²(t)/R + Q,

d/dt [ A(t) ]  =  [ F      Q  ] [ A(t) ]
     [ B(t) ]     [ H²/R  −F  ] [ B(t) ].

⁹ After the Irish mathematician and physicist William Rowan Hamilton (1805–1865).


this equation for P as a function of the free variable t and as a function of the parameters F, H, R, and Q.

Fundamental Solution of Linear Time-Invariant Differential Equation The linear time-invariant differential equation has the general solution

[ A(t) ]  =  e^{Ct} [ A(0) ]
[ B(t) ]            [ B(0) ].

This matrix exponential will now be evaluated by using the characteristic vectors of C, which are arranged as the column vectors of the matrix

M = [ R(F + φ)/H²   R(F − φ)/H² ],   φ = √(F² + H²Q/R),
    [ 1             1           ]

with inverse

M⁻¹ = (H²/2φR) [  1    −R(F − φ)/H² ]
               [ −1     R(F + φ)/H² ],

by which it can be diagonalized as

M⁻¹ C M = [ φ    0  ]
          [ 0   −φ  ],

with the characteristic values ±φ of C along its diagonal. The exponential of the diagonalized matrix, multiplied by t, will be

e^{M⁻¹CMt} = [ e^{φt}    0       ]
             [ 0         e^{−φt} ].


Using this,one can write the fundamental solution of the linear homogeneous invariant equation as

375;

375:

General Solution of Scalar Time-Invariant Riccati Equation The generalsolution formula may now be composed from the previous results as

e 2ft; …4:70†


Singular Values of Denominator The denominator D_P(t) can easily be shown to have a zero for t₀ such that

e^{−2φt₀} = [H²P(0) + R(φ − F)] / [H²P(0) − R(φ + F)].

However, it can also be shown that t₀ < 0 if

P(0) > −(R/H²)(φ − F),

which is a nonpositive lower bound on the initial value. This poses no particular difficulty, however, since P(0) ≥ 0 anyway. (We will see in the next section what would happen if this condition were violated.)

Boundary Values Given the above formulas for P(t), its numerator N_P(t), and its denominator D_P(t), one can easily show that they have the following limiting values:

lim_{t→0} N_P(t) = 2P(0)R √(F² + H²Q/R),

lim_{t→0} P(t) = P(0),

lim_{t→∞} P(t) = (R/H²)[F + √(F² + H²Q/R)].   (4.72)


characterizing the behavior of the solutions: the asymptotic solution as t → ∞ and the time constant of decay to this steady-state solution.

Decay Time Constant The only time-dependent terms in the expression for P(t) are those involving e^{−2φt}. The fundamental decay time constant of the solution is then the algebraic function

τ = 1/(2φ) = 1 / (2√(F² + H²Q/R)).

Setting Ṗ = 0 in the steady state yields

P²(∞) H² R⁻¹ − 2F P(∞) − Q = 0,

which is also called the algebraic¹⁰ Riccati equation. This quadratic equation in P(∞) has two solutions, expressible as algebraic functions of the problem parameters:

P(∞) = [F R ± √(H²QR + F²R²)] / H².

The two solutions correspond to the two values for the signum (±). There is no cause for alarm, however. The solution that agrees with Equation 4.72 is the nonnegative one. The other solution is nonpositive. We are only interested in the nonnegative solution, because the variance P of uncertainty is, by definition, nonnegative.

Dependence on Initial Conditions For the scalar problem, the initial conditions are parameterized by P(0). The dependence of the solution on its initial value is not continuous everywhere, however. The reason is that there are two solutions to the steady-state equation. The nonnegative solution is stable in the sense that initial conditions sufficiently near to it converge to it asymptotically. The nonpositive

¹⁰ So called because it is an algebraic equation, not a differential equation. That is, it is constructed from the operations of algebra, not those of the differential calculus. The term by itself is ambiguous in this usage, however, because there are two entirely different forms of the algebraic Riccati equation. One is derived from the Riccati differential equation, and the other is derived from the discrete-time Riccati equation. The results are both algebraic equations, but they are significantly different in structure.


solution is unstable in the sense that infinitesimal perturbations of the initial condition cause the solution to diverge from the nonpositive steady-state solution and converge, instead, to the nonnegative steady-state solution.

Convergent and Divergent Solutions The eventual convergence of a solution to the nonnegative steady-state value may pass through infinity to get there. That is, the solution may initially diverge, depending on the initial values. This type of behavior is shown in Figure 4.3, which is a multiplot of solutions to an example of the Riccati equation with

P(t) = { e^{2t}[1 + P(0)] − [1 − P(0)] } / { e^{2t}[1 + P(0)] + [1 − P(0)] }

in terms of the initial value P(0). Solutions of the initial-value problem with different initial values are plotted over the time interval 0 ≤ t ≤ 2. All solutions except the one with P(0) = −1 appear to converge eventually to P(∞) = 1, but those that
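The displayed solution is easy to evaluate directly; it corresponds to parameters F = 0 and H = R = Q = 1 (my inference from the shape of the formula, so that φ = 1 and P(∞) = 1). A small Python/NumPy sketch:

```python
import numpy as np

def P_example(t, P0):
    """Closed-form solution of the scalar Riccati example plotted in Fig. 4.3."""
    num = np.exp(2.0 * t) * (1.0 + P0) - (1.0 - P0)
    den = np.exp(2.0 * t) * (1.0 + P0) + (1.0 - P0)
    return num / den
```

Every initial value except P(0) = −1 is drawn toward the stable steady state P(∞) = 1; starting exactly at P(0) = −1 keeps the solution on the unstable nonpositive branch forever.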

Fig 4.3 Solutions of the scalar time-invariant Riccati equation.
