Kalman Filtering: Theory and Practice Using MATLAB (Part 3)
Mohinder S. Grewal and Angus P. Andrews
John Wiley & Sons, Inc., New York, 2001


Random Processes and Stochastic Systems

A completely satisfactory definition of random sequence is yet to be discovered.

G. James and R. C. James, Mathematics Dictionary, D. Van Nostrand Co., Princeton, New Jersey, 1959

3.1 CHAPTER FOCUS

The previous chapter presents methods for representing a class of dynamic systems with relatively small numbers of components, such as a harmonic resonator with one mass and spring. The results are models for deterministic mechanics, in which the state of every component of the system is represented and propagated explicitly.

Another approach has been developed for extremely large dynamic systems, such as the ensemble of gas molecules in a reaction chamber. The state-space approach for such large systems would be impractical. Consequently, this other approach focuses on the ensemble statistical properties of the system and treats the underlying dynamics as a random process. The results are models for statistical mechanics, in which only the ensemble statistical properties of the system are represented and propagated explicitly.

In this chapter, some of the basic notions and mathematical models of statistical and deterministic mechanics are combined into a stochastic system model, which represents the state of knowledge about a dynamic system. These models represent what we know about a dynamic system, including a quantitative model for our uncertainty about what we know.

In the next chapter, methods will be derived for modifying the state of knowledge, based on observations related to the state of the dynamic system.

3.1.1 Discovery and Modeling of Random Processes

Brownian Motion and Stochastic Differential Equations. The British botanist Robert Brown (1773–1858) reported in 1827 a phenomenon he had observed while studying pollen grains of the herb Clarkia pulchella suspended in water, and similar observations by earlier investigators. The particles appeared to move about erratically, as though propelled by some unknown force. This phenomenon came to be called Brownian movement or Brownian motion. It has been studied extensively, both empirically and theoretically, by many eminent scientists (including Albert Einstein [157]) for the past century. Empirical studies demonstrated that no biological forces were involved and eventually established that individual collisions with molecules of the surrounding fluid were causing the motion observed. The empirical results quantified how some statistical properties of the random motion were influenced by such physical properties as the size and mass of the particles and the temperature and viscosity of the surrounding fluid.

Mathematical models with these statistical properties were derived in terms of what has come to be called stochastic differential equations. P. Langevin (1872–1946) modeled the velocity v of a particle in terms of a differential equation of the form

\frac{dv}{dt} = -\beta v + a(t),

where \beta is a damping coefficient due to the viscosity of the fluid and a(t) is a rapidly fluctuating random acceleration due to molecular collisions.
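To make the Langevin model concrete, the velocity equation can be integrated numerically. The following is a minimal Euler-Maruyama sketch (not taken from the text; the damping coefficient, noise intensity, time step, and duration are arbitrary assumed values):

% Euler-Maruyama integration of the Langevin velocity model dv = -beta*v*dt + sigma*dW
% (illustrative sketch; beta, sigma, dt, and N are assumed values, not from the book)
beta  = 1.0;          % viscous damping coefficient [1/s]
sigma = 0.5;          % intensity of the random molecular forcing
dt    = 1e-3;         % integration step [s]
N     = 10000;        % number of steps
v     = zeros(N,1);   % velocity history, starting from rest
for k = 2:N
    dW   = sqrt(dt)*randn;                      % Brownian increment with variance dt
    v(k) = v(k-1) - beta*v(k-1)*dt + sigma*dW;
end
plot((0:N-1)*dt, v), xlabel('t [s]'), ylabel('v(t)')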

White-Noise Processes and Wiener Processes. A more precise mathematical characterization of white noise was provided by Norbert Wiener, using his generalized harmonic analysis, with a result that is difficult to square with intuition. It has a power spectral density that is uniform over an infinite bandwidth, implying that the noise power is proportional to bandwidth and that the total power is infinite. (If "white light" had this property, would we be able to see?) Wiener preferred to focus on the mathematical properties of v(t), which is now called a Wiener process. Its mathematical properties are more benign than those of white-noise processes.
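A Wiener process itself is easy to simulate as the running sum of independent zero-mean Gaussian increments; the sketch below (with an arbitrary step size and path count) illustrates its behavior:

% Sample paths of a Wiener process, built from independent Gaussian increments
% with variance dt (illustrative sketch; dt, N, and M are assumed values)
dt = 1e-3; N = 5000; M = 5;                        % step size, steps per path, number of paths
W  = [zeros(1,M); cumsum(sqrt(dt)*randn(N,M))];    % each column is one path, W(0) = 0
plot((0:N)*dt, W), xlabel('t'), ylabel('W(t)')
% Across an ensemble of such paths, the variance of W(t) grows linearly with t.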


3.1.2 Main Points to Be Covered

The theory of random processes and stochastic systems represents the evolution over time of the uncertainty of our knowledge about physical systems. This representation includes the effects of any measurements (or observations) that we make of the physical process and the effects of uncertainties about the measurement processes and dynamic processes involved. The uncertainties in the measurement and dynamic processes are modeled by random processes and stochastic systems.

Properties of uncertain dynamic systems are characterized by statistical parameters such as means, correlations, and covariances. By using only these numerical parameters, one can obtain a finite representation of the problem, which is important for implementing the solution on digital computers. This representation depends upon such statistical properties as orthogonality, stationarity, ergodicity, and Markovianness of the random processes involved and the Gaussianity of probability distributions. Gaussian, Markov, and uncorrelated (white-noise) processes will be used extensively in the following chapters. The autocorrelation functions and power spectral densities (PSDs) of such processes are also used. These are important in the development of frequency-domain and time-domain models. The time-domain models may be either continuous or discrete.

Shaping filters (continuous and discrete) are developed for random-constant, random-walk, ramp, sinusoidally correlated, and exponentially correlated processes. We derive the linear covariance equations for continuous and discrete systems to be used in Chapter 4. The orthogonality principle is developed and explained with scalar examples. This principle will be used in Chapter 4 to derive the Kalman filter equations.

3.1.3 Topics Not Covered

It is assumed that the reader is already familiar with the mathematical foundations of probability theory, as covered by Papoulis [39] or Billingsley [53], for example. The treatment of these concepts in this chapter is heuristic and very brief. The reader is referred to textbooks of this type for more detailed background material.

The Itô calculus for the integration of otherwise nonintegrable functions (white noise, in particular) is not defined, although it is used. The interested reader is referred to books on the mathematics of stochastic differential equations (e.g., those by Arnold [51], Baras and Mirelli [52], Itô and McKean [64], Sobczyk [77], or Stratonovich [78]).

3.2 PROBABILITY AND RANDOM VARIABLES

The relationships between unknown physical processes, probability spaces, and random variables are illustrated in Figure 3.1. The behavior of the physical processes is investigated by what is called a statistical experiment, which helps to define a model for the physical process as a probability space. Strictly speaking, this is not a model for the physical process itself, but a model of our own understanding of the physical process. It defines what might be called our "state of knowledge" about the physical process, which is essentially a model for our uncertainty about the physical process.

A random variable represents a numerical attribute of the state of the physical process. In the following subsections, these concepts are illustrated by using the numerical score from tossing dice as an example of a random variable.

3.2.1 An Example of a Random Variable

EXAMPLE 3.1: Score from Tossing a Die. A die (the singular of dice) is a cube with its six faces marked by patterns of one to six dots. It is thrown onto a flat surface such that it tumbles about and comes to rest with one of these faces on top. This can be considered an unknown process in the sense that which face will wind up on top is not reliably predictable before the toss. The tossing of a die in this manner is an example of a statistical experiment for defining a statistical model for the process.

Each toss of the die can result in but one outcome, corresponding to which one of the six faces of the die is on top when it comes to rest. Let us label these outcomes o_a, o_b, o_c, o_d, o_e, o_f. The set of all possible outcomes of a statistical experiment is called a sample space. The sample space for the statistical experiment with one die is the set S = {o_a, o_b, o_c, o_d, o_e, o_f}.

Fig 3.1 Conceptual model for a random variable.


A random variable assigns real numbers to outcomes. There is an integral number of dots on each face of the die. This defines a "dot function" d: S → ℝ on the sample space S, where d(o) is the number of dots showing for the outcome o of the statistical experiment. Assign the values

d(o_a) = 1,  d(o_b) = 2,  d(o_c) = 3,  d(o_d) = 4,  d(o_e) = 5,  d(o_f) = 6.

This function is an example of a random variable. The useful statistical properties of this random variable will depend upon the probability space defined by statistical experiments with the die.

Events and sigma algebras. The statistical properties of the random variable d depend on the probabilities of sets of outcomes (called events) forming what is called a sigma algebra¹ of subsets of the sample space S. Any collection of events that includes the sample space itself, the empty set (the set with no elements), and the set unions and set complements of all its members is called a sigma algebra over the sample space. The set of all subsets of S is a sigma algebra with 2^6 = 64 events.

The probability space for a fair die. A die is considered "fair" if, in a large number of tosses, all outcomes tend to occur with equal frequency. The relative frequency of any outcome is defined as the ratio of the number of occurrences of that outcome to the number of occurrences of all outcomes. Relative frequencies of outcomes of a statistical experiment are called probabilities. Note that, by this definition, the sum of the probabilities of all outcomes will always be equal to 1. This defines a probability p(e) for every event e (a set of outcomes) equal to

p(e) = \frac{\#(e)}{\#(S)},

where #(e) is the cardinality of e, equal to the number of outcomes o ∈ e. Note that this assigns probability zero to the empty set and probability one to the sample space.

The probability distribution of the random variable d is a nondecreasing function P_d(x) defined for every real number x as the probability of the event for which the score is less than or equal to x. It has the formal definition

P_d(x) \stackrel{\mathrm{def}}{=} p(d^{-1}((-\infty, x])),
d^{-1}((-\infty, x]) \stackrel{\mathrm{def}}{=} \{o \mid d(o) \le x\}.

1. Such a collection of subsets e_i of a set S is called an algebra because it is a Boolean algebra with respect to the operations of set union (e_1 ∪ e_2), set intersection (e_1 ∩ e_2), and set complement (S \ e), corresponding to the logical operations or, and, and not, respectively. The "sigma" refers to the summation symbol Σ, which is used for defining the additive properties of the associated probability measure. However, the lowercase symbol σ is used for abbreviating "sigma algebra" to "σ-algebra."

For every real value of x, the set {o | d(o) ≤ x} is an event. For example, P_d(x) = p(S) = 1 for any x ≥ 6, as plotted in Figure 3.2. Note that P_d is not a continuous function in this particular example.
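The staircase form of P_d can be reproduced directly from the definition; the following sketch (an illustration, not from the text) evaluates P_d(x) for a fair die on a fine grid:

% Probability distribution function P_d(x) for the score of one fair die
% (illustrative sketch echoing Figure 3.2)
faces = 1:6;                 % values taken by the random variable d
p     = ones(1,6)/6;         % fair die: each outcome has probability 1/6
x     = 0:0.01:7;            % evaluation grid
Pd    = zeros(size(x));
for i = 1:numel(x)
    Pd(i) = sum(p(faces <= x(i)));   % probability of the event {o : d(o) <= x}
end
stairs(x, Pd), xlabel('x'), ylabel('P_d(x)')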

3.2.2 Probability Distributions and Densities

Random variables f are required to have the property that, for every real a and b such that -∞ ≤ a ≤ b ≤ +∞, the outcomes o such that a < f(o) < b form an event e ∈ A. This property is needed for defining the probability distribution function P_f. The probability distribution function may not be a differentiable function. However, if it is differentiable, then its derivative

p_f(x) = \frac{dP_f(x)}{dx}

is called the probability density function of f.

3.2.3 Gaussian Probability Densities

The probability distribution of the average score from tossing n dice (i.e., the total number of dots divided by the number of dice) tends toward a particular type of distribution as n → ∞, called a Gaussian distribution.³ It is the limit of many such distributions, and it is common to many models for random phenomena. It is commonly used in stochastic system models for the distributions of random variables.
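The tendency toward a Gaussian limit can be checked empirically. The sketch below (an illustration only; the number of dice and of trials are assumed values) compares the histogram of the average score of n dice with the Gaussian density having the same mean and variance:

% Average score of n fair dice versus the limiting Gaussian density
% (illustrative sketch; n and the number of trials are assumed values)
n = 10; trials = 1e5;
scores = randi(6, n, trials);          % n dice per trial, each uniform on {1,...,6}
avg    = mean(scores, 1);              % average score for each trial
mu = 3.5; sigma2 = (35/12)/n;          % mean and variance of the average of n fair dice
histogram(avg, 'Normalization', 'pdf'), hold on
xg = linspace(1, 6, 200);
plot(xg, exp(-(xg - mu).^2/(2*sigma2))/sqrt(2*pi*sigma2), 'LineWidth', 1.5)
xlabel('average score'), legend('empirical', 'Gaussian limit')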

Univariate Gaussian Probability Distributions. The notation n(x̄, σ²) is used to denote a probability distribution with density function

p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x-\bar{x})^2}{2\sigma^2}\right),

where x̄ is the mean and σ² is the variance of the distribution.

2. Named for the French mathematician Félix Borel (1871–1956).

3. It is called the Laplace distribution in France. It has had many discoverers besides Gauss and Laplace, including the American mathematician Robert Adrain (1775–1843). The physicist Gabriel Lippmann (1845–1921) is credited with the observation that "mathematicians think it [the normal distribution] is a law of nature and physicists are convinced that it is a mathematical theorem."

The term normal distribution is another name for the Gaussian distribution. Because so many other things are called normal in mathematics, it is less confusing if we call it Gaussian.

Gaussian Expectation Operators and Generating Functions. Because the Gaussian probability density function depends only on the difference x - x̄, the expectation operator

E_x⟨f(x)⟩ = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{+\infty} f(x)\, e^{-(x-\bar{x})^2/2\sigma^2}\, dx = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{+\infty} f(x+\bar{x})\, e^{-x^2/2\sigma^2}\, dx

has the form of a convolution integral. This has important implications for problems in which it must be implemented numerically, because the convolution can be implemented more efficiently as a fast Fourier transform of f, followed by a pointwise product of its transform with the Fourier transform of p, followed by an inverse fast Fourier transform of the result. One does not need to take the numerical Fourier transform of p, because its Fourier transform can be expressed analytically in closed form. Recall that the Fourier transform of p is called its generating function. Gaussian generating functions are also (possibly scaled) Gaussian density functions:

\hat{p}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} p(x)\, e^{-j\omega x}\, dx = \frac{1}{\sqrt{2\pi}}\, e^{-\sigma^2\omega^2/2},

which, up to a scale factor, is itself a Gaussian density in ω with variance σ^{-2}.

For an n-dimensional Gaussian random variable x with mean x̄ and covariance matrix P, the density has the form

p(x) = \frac{1}{\sqrt{(2\pi)^n \det P}} \exp\left(-\tfrac{1}{2}(x-\bar{x})^T P^{-1} (x-\bar{x})\right).

The multivariate Gaussian generating function has the form

\hat{p}(\omega) = \frac{1}{\sqrt{(2\pi)^n \det P^{-1}}} \exp\left(-\tfrac{1}{2}\,\omega^T P\, \omega\right),

where ω is an n-vector. This is also a multivariate Gaussian probability distribution n(0, P^{-1}) if the scaled form of the Fourier transform shown in Equation 3.11 is used.
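The convolution/FFT implementation described above can be sketched in a few lines. The grid, the value of σ, and the integrand f below are arbitrary assumptions, and circular wrap-around at the grid edges is ignored because both f and the Gaussian kernel are taken to be negligible there:

% Gaussian expectation E<f(x)>, evaluated for every mean xbar on a grid, as the
% convolution of f with the zero-mean Gaussian density (illustrative sketch)
sigma = 0.5;
dx    = 0.01;  x = (-20:dx:20)';                      % common grid for f, the kernel, and the result
f     = exp(-abs(x)).*cos(3*x);                       % example integrand, decays near the grid edges
p     = exp(-x.^2/(2*sigma^2))/(sqrt(2*pi)*sigma);    % zero-mean Gaussian density
Ef    = real(ifft(fft(f).*fft(ifftshift(p))))*dx;     % (f * p)(xbar) on the grid x
% Spot check against direct numerical integration at xbar = 1:
xbar      = 1.0;
Ef_direct = trapz(x, f.*exp(-(x - xbar).^2/(2*sigma^2))/(sqrt(2*pi)*sigma));
[interp1(x, Ef, xbar), Ef_direct]                     % the two values should agree closely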

3.2.4 Joint Probabilities and Conditional Probabilities

The joint probability of two events e_a and e_b is the probability of their set intersection p(e_a ∩ e_b), which is the probability that both events occur. The joint probability of independent events is the product of their probabilities.

The conditional probability of event e, given that event e_c has occurred, is defined as the probability of e in the "conditioned" probability space with sample space e_c. This is a probability space defined on the sigma algebra A | e_c of the set intersections of all events e ∈ A (the original sigma algebra) with the conditioning event e_c. The probability measure on the "conditioned" sigma algebra A | e_c is defined in terms of the joint probabilities in the original probability space by the rule⁴

p(e \mid e_c) = \frac{p(e \cap e_c)}{p(e_c)}.

4. Discovered by the English clergyman and mathematician Thomas Bayes (1702–1761). Conditioning on impossible events is not defined. Note that the conditional probability is based on the assumption that e_c has occurred. This would seem to imply that e_c is an event with nonzero probability, which one might expect from practical applications of Bayes' rule.

In the one-die example above, the sample space contained 6 outcomes and 2^6 = 64 events. For two dice, the sample space has 36 possible outcomes (6 independent outcomes for each of two dice) and 2^36 = 68,719,476,736 possible events. If each die is fair and their outcomes are independent, then all outcomes with two dice have probability (1/6) × (1/6) = 1/36, and the probability of any event is the number of outcomes in the event divided by 36 (the number of outcomes in the sample space). Using the same notation as the previous (one-die) example, let the outcome from tossing a pair of dice be represented by an ordered pair (in parentheses) of the outcomes of the first and second die, respectively. Then the score s((o_i, o_j)) = d(o_i) + d(o_j), where o_i represents the outcome of the first die and o_j represents the outcome of the second die. The corresponding probability distribution function of the score x for two dice is shown in Figure 3.3a.

The event corresponding to the condition that the first die have either four or five dots showing contains all outcomes in which o_i = o_d or o_e, which is the set

e_c = {(o_d, o_a), (o_d, o_b), (o_d, o_c), (o_d, o_d), (o_d, o_e), (o_d, o_f),
      (o_e, o_a), (o_e, o_b), (o_e, o_c), (o_e, o_d), (o_e, o_e), (o_e, o_f)}

of 12 outcomes. It has probability p(e_c) = 12/36 = 1/3.

Fig 3.3 Probability distributions of dice scores.

By applying Bayes' rule, the conditional probabilities of all events corresponding to unique scores can be calculated as shown in Figure 3.4. The corresponding probability distribution function for two dice with this conditioning is shown in Figure 3.3b.
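The same conditional probabilities can be obtained by brute-force enumeration of the 36 outcomes, which is a convenient check on Figure 3.4. The sketch below applies Bayes' rule p(e | e_c) = p(e ∩ e_c)/p(e_c) to the conditioning event of the example (first die showing four or five):

% Conditional distribution of the two-dice score given the conditioning event e_c
% (illustrative enumeration sketch)
[d1, d2] = meshgrid(1:6, 1:6);          % all 36 equally likely outcomes
score    = d1(:) + d2(:);
inEc     = (d1(:) == 4 | d1(:) == 5);   % outcomes belonging to e_c
pEc      = sum(inEc)/36;                % = 12/36 = 1/3
for s = 2:12
    pJoint = sum(score == s & inEc)/36;           % p({score = s} ∩ e_c)
    fprintf('P(score = %2d | e_c) = %.4f\n', s, pJoint/pEc);
end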

3.3 STATISTICAL PROPERTIES OF RANDOM VARIABLES

3.3.1 Expected Values of Random Variables

Expected values. The symbol E is used as an operator on random variables. It is called the expectancy, expected value, or average operator, and the expression E_x⟨f(x)⟩ is used to denote the expected value of the function f applied to the ensemble of possible values of the random variable x. The symbol under the E indicates the random variable (RV) over which the expected value is to be evaluated. When the RV in question is obvious from context, the symbol underneath the E will be eliminated. If the argument of the expectancy operator is also obvious from context, the angular brackets can also be dispensed with, using Ex instead of E⟨x⟩, for example.

Moments. The nth moment of a scalar RV x with probability density p(x) is defined by the formula⁵

E⟨x^n⟩ = \int_{-\infty}^{+\infty} x^n\, p(x)\, dx.

The nth central moment of x is defined as

E⟨(x - E⟨x⟩)^n⟩ = \int_{-\infty}^{+\infty} (x - E⟨x⟩)^n\, p(x)\, dx.

These definitions of a moment apply to discrete-valued random variables if we simply substitute summations in place of integrations in the definitions.
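As a quick illustration of these definitions (not from the text; a unit-rate exponential density is chosen only as a convenient stand-in), the moments can be computed by numerical integration and compared with sample moments from random draws:

% nth moments and central moments of a scalar RV with an assumed exponential density,
% checked against sample moments (illustrative sketch)
lambda = 1;                        % rate of the example exponential density
x  = 0:1e-3:40;                    % integration grid (density negligible beyond 40)
px = lambda*exp(-lambda*x);        % p(x)
m1 = trapz(x, x.*px);              % first moment E<x>
mom = zeros(1,4); cmom = zeros(1,4);
for n = 1:4
    mom(n)  = trapz(x, x.^n.*px);            % nth moment
    cmom(n) = trapz(x, (x - m1).^n.*px);     % nth central moment
end
samples = -log(rand(1e6,1))/lambda;          % draws from the same exponential density
[mom(2), mean(samples.^2); cmom(2), var(samples,1)]   % quick comparison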

3.3.2 Functions of Random Variables

A function of the RV x is the operation of assigning to each value of x another value, for example y, according to a rule or function g. This is represented by

y = g(x).      (3.23)

5. We here restrict the order of the moment to the positive integers. The zeroth-order moment would otherwise always evaluate to 1.

The probability density of y can be obtained from the density of x. If Equation 3.23 can be solved for x, yielding the unique solution x = g^{-1}(y), then

p_y(y) = p_x(g^{-1}(y)) \left| \frac{d\,g^{-1}(y)}{dy} \right|.

A random process (or stochastic process) assigns to each outcome s of a statistical experiment a function of time x(t, s); the collection of all such time functions is a family of functions called a random process or stochastic process. A random process is called discrete if its argument t is restricted to a discrete set of values t_k, in which case it is also called a random sequence and written x_k.

It is clear that the value of a random process x(t) at any particular time t = t_0, namely x(t_0, s), is a random variable [or a random vector if x(t_0, s) is vector valued].

3.4.2 Mean, Correlation, and Covariance

Let x(t) be an n-vector random process. Its mean is

E⟨x(t)⟩ = \int_{-\infty}^{+\infty} x\, p(x, t)\, dx,

where p(x, t) is the probability density of x(t). For a random sequence, the integral is replaced by a sum.

The correlation of the vector-valued process x(t) is defined by the n × n matrix E⟨x(t_1)\, x^T(t_2)⟩, and its covariance by

P(t_1, t_2) = E⟨[x(t_1) - E⟨x(t_1)⟩][x(t_2) - E⟨x(t_2)⟩]^T⟩ = E⟨x(t_1)\, x^T(t_2)⟩ - E⟨x(t_1)⟩\,E⟨x^T(t_2)⟩.      (3.35)

When the process x(t) has zero mean (i.e., E⟨x(t)⟩ = 0 for all t), its correlation and covariance are equal.

The correlation matrix of two RPs x(t), an n-vector, and y(t), an m-vector, is given by the n × m matrix E⟨x(t_1)\, y^T(t_2)⟩, and their cross-covariance matrix by

E⟨[x(t_1) - E⟨x(t_1)⟩][y(t_2) - E⟨y(t_2)⟩]^T⟩.      (3.38)

3.4.3 Orthogonal Processes and White Noise

Two RPs x(t) and y(t) are called uncorrelated if their cross-covariance matrix is identically zero for all t_1 and t_2:

E⟨[x(t_1) - E⟨x(t_1)⟩][y(t_2) - E⟨y(t_2)⟩]^T⟩ = 0.      (3.39)

The processes x(t) and y(t) are called orthogonal if their correlation matrix is identically zero:

E⟨x(t_1)\, y^T(t_2)⟩ = 0.      (3.40)

The random process x(t) is called uncorrelated (in time) if

E⟨[x(t_1) - E⟨x(t_1)⟩][x(t_2) - E⟨x(t_2)⟩]^T⟩ = Q(t_1, t_2)\, \delta(t_1 - t_2),      (3.41)

where δ(t) is the Dirac delta "function"⁶ (actually, a generalized function), defined by δ(t) = 0 for t ≠ 0 and \int_{-\infty}^{+\infty} \delta(t)\, dt = 1. Similarly, a random sequence x_k is called uncorrelated if

E⟨[x_k - E⟨x_k⟩][x_j - E⟨x_j⟩]^T⟩ = Q(k, j)\, \Delta(k - j),      (3.43)

where Δ(·) is the Kronecker delta function⁷, defined by Δ(0) = 1 and Δ(k) = 0 for k ≠ 0.

6. Named for the English physicist Paul Adrien Maurice Dirac (1902–1984).

7. Named for the German mathematician Leopold Kronecker (1823–1891).


A process x(t) is considered independent if for any choice of distinct times t_1, t_2, ..., t_n, the random variables x(t_1), x(t_2), ..., x(t_n) are independent. That is,

p_{x(t_1), \ldots, x(t_n)}(s_1, \ldots, s_n) = \prod_{i=1}^{n} p_{x(t_i)}(s_i).      (3.45)

Independence (all of the moments) implies no correlation (which restricts attention to the second moments), but the opposite implication is not true, except in such special cases as Gaussian processes (see Section 3.2.3). Note that whiteness means uncorrelated in time rather than independent in time (i.e., including all moments), although this distinction disappears for the important case of white Gaussian processes (see Chapter 4).

3.4.4 Strict-Sense and Wide-Sense Stationarity

The random process x(t) (or random sequence x_k) is called strict-sense stationary if all its statistics (meaning p[x(t_1), x(t_2), ...]) are invariant with respect to shifts of the time origin:

p(x_1, x_2, \ldots, x_n;\ t_1, \ldots, t_n) = p(x_1, x_2, \ldots, x_n;\ t_1 + \varepsilon, t_2 + \varepsilon, \ldots, t_n + \varepsilon).      (3.46)

The random process x(t) (or x_k) is called wide-sense stationary (WSS) (or "weak-sense" stationary) if its mean is a constant,

E⟨x(t)⟩ = c,      (3.47)

and

E⟨x(t_1)\, x^T(t_2)⟩ = Q(t_2 - t_1) = Q(\tau),      (3.48)

where Q is a matrix with each element depending only on the difference t_2 - t_1 = τ. Therefore, when x(t) is stationary in the weak sense, its first- and second-order statistics are independent of the time origin, while strict stationarity by definition implies that statistics of all orders are independent of the time origin.

3.4.5 Ergodic Random Processes

A process is considered ergodic⁸ if all of its statistical parameters (mean, variance, and so on) can be determined from arbitrarily chosen member functions. A sample function x(t) is ergodic if its time-averaged statistics equal the ensemble averages.

8. The term ergodic came originally from the development of statistical mechanics for thermodynamic systems. It is taken from the Greek words for energy and path. The term was applied by the American physicist Josiah Willard Gibbs (1839–1903) to the time history (or path) of the state of a thermodynamic system of constant energy. Gibbs had assumed that a thermodynamic system would eventually take on all possible states consistent with its energy. This was shown to be impossible from function-theoretic considerations in the nineteenth century. The so-called ergodic hypothesis of James Clerk Maxwell (1831–1879) is that the temporal means of a stochastic system are equivalent to the ensemble means. The concept was given firmer mathematical foundations by George David Birkhoff and John von Neumann around 1930 and by Norbert Wiener in the 1940s.


3.4.6 Markov Processes and Sequences

An RP x(t) is called a Markov process⁹ if its future state distribution, conditioned on knowledge of its present state, is not improved by knowledge of previous states:

p\{x(t_i) \mid x(\tau),\ \tau \le t_{i-1}\} = p\{x(t_i) \mid x(t_{i-1})\},      (3.49)

where the times t_1 < t_2 < t_3 < \cdots < t_i.

Similarly, a random sequence (RS) x_k is called a Markov sequence if

p(x_i \mid x_k,\ k \le i-1) = p\{x_i \mid x_{i-1}\}.      (3.50)

The solution to a general first-order differential or difference equation with an independent process (uncorrelated normal RP) as a forcing function is a Markov process. That is, if x(t) and x_k are n-vectors satisfying

\dot{x}(t) = F(t)\,x(t) + G(t)\,w(t)      (3.51)

or

x_k = \Phi_{k-1}\,x_{k-1} + G_{k-1}\,w_{k-1},      (3.52)

where w(t) and w_{k-1} are r-dimensional independent random processes and sequences, then the solutions x(t) and x_k are vector Markov processes and sequences, respectively.

3.4.7 Gaussian Processes

An n-dimensional RP x(t) is called Gaussian (or normal) if its probability density function is Gaussian, as given by the formulas of Section 3.2.3, with covariance matrix

P = E⟨[x(t) - E⟨x(t)⟩][x(t) - E⟨x(t)⟩]^T⟩      (3.53)

for the random variable x.

Gaussian random processes have some useful properties:

1. A Gaussian RP x(t) that is WSS is also stationary in the strict sense.

2. Orthogonal Gaussian RPs are independent.

3. Any linear function of a jointly Gaussian RP results in another Gaussian RP.

4. All statistics of a Gaussian RP are completely determined by its first- and second-order statistics.

9. Defined by Andrei Andreevich Markov (1856–1922).


3.4.8 Simulating Multivariate Gaussian Processes

Cholesky decomposition methods are discussed in Chapter 6 and Appendix B. We show here how these methods can be used to generate uncorrelated pseudorandom vector sequences with zero mean (or any specified mean) and a specified covariance P.

There are many programs that will generate pseudorandom sequences of uncorrelated Gaussian scalars {s_i | i = 1, 2, 3, ...} with zero mean and unit variance. Arrange n successive scalars as a vector s_k, so that E⟨s_k s_k^T⟩ = I, and let C be a Cholesky factor of P, so that C C^T = P. Then the vectors

w_k = C\, s_k

have E⟨w_k⟩ = 0 and E⟨w_k w_k^T⟩ = C\,E⟨s_k s_k^T⟩\,C^T = C C^T = P. The same technique can be used to obtain pseudorandom Gaussian vectors with a given mean v by adding v to each w_k. These techniques are used in simulation and Monte Carlo analysis of stochastic systems.
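A minimal sketch of this technique in MATLAB (the covariance, mean, and sample count below are arbitrary example values) is:

% Pseudorandom Gaussian vectors w_k with specified covariance P and mean v,
% generated from unit-variance scalars via a Cholesky factor (illustrative sketch)
P = [4 1 0; 1 3 1; 0 1 2];        % desired covariance (symmetric positive definite)
v = [1; -2; 0.5];                 % desired mean
N = 1e5;                          % number of vectors to generate
C = chol(P, 'lower');             % C*C' = P
S = randn(3, N);                  % columns of independent zero-mean, unit-variance scalars
W = C*S + v;                      % each column w_k = C*s_k + v
cov(W')                           % sample covariance, close to P
mean(W, 2)                        % sample mean, close to v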

3.4.9 Power Spectral Density

Let x(t) be a zero-mean scalar stationary RP with autocorrelation

\psi_x(\tau) = E⟨x(t)\, x(t+\tau)⟩.

Its power spectral density (PSD) is the Fourier transform of the autocorrelation function,

\Psi_x(\omega) = \int_{-\infty}^{+\infty} \psi_x(\tau)\, e^{-j\omega\tau}\, d\tau.

The following are properties of autocorrelation functions:

1. Autocorrelation functions are symmetrical ("even" functions).

2. An autocorrelation function attains its maximum value at the origin.

3. Its Fourier transform is nonnegative (greater than or equal to zero).

These properties are satisfied by valid autocorrelation functions.

Setting τ = 0 in Equation 3.68 gives the mean power of the process,

\psi_x(0) = E⟨x^2(t)⟩ = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \Psi_x(\omega)\, d\omega.

EXAMPLE 3.3. If \psi_x(\tau) = \sigma^2 e^{-\alpha|\tau|}, find the associated PSD.
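One way to evaluate the transform (a short worked sketch, not part of the recovered text) is to split the integral at τ = 0:

\Psi_x(\omega) = \int_{-\infty}^{+\infty} \sigma^2 e^{-\alpha|\tau|} e^{-j\omega\tau}\,d\tau
= \sigma^2\left(\int_{-\infty}^{0} e^{(\alpha - j\omega)\tau}\,d\tau + \int_{0}^{+\infty} e^{-(\alpha + j\omega)\tau}\,d\tau\right)
= \sigma^2\left(\frac{1}{\alpha - j\omega} + \frac{1}{\alpha + j\omega}\right)
= \frac{2\sigma^2\alpha}{\alpha^2 + \omega^2},

which is the PSD that reappears in Example 3.5 below.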

EXAMPLE 3.4. This is an example of a second-order Markov process generated by passing WSS white noise with zero mean and unit variance through a second-order "shaping filter" with the dynamic model of a harmonic resonator. (This is the same example introduced in Chapter 2 and will be used again in Chapters 4 and 5.) The transfer function of the dynamic system is

H(s) = \frac{a s + b}{s^2 + 2\zeta\omega_n s + \omega_n^2}.

Definitions of ζ, ω_n, and σ are the same as in Example 2.7, and H(s) can be realized by a two-state state-space model. The block diagram corresponding to the state-space model is shown in Figure 3.5.
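A realization of this shaping filter can be sketched in controllable canonical form and driven by a discrete stand-in for unit-PSD white noise. The realization below and all numerical values (ζ, ω_n, a, b, the time step, and the record length) are assumptions for illustration, not the book's own state-space model:

% Second-order shaping filter y = H(s)*w with H(s) = (a*s + b)/(s^2 + 2*zeta*wn*s + wn^2),
% realized in companion form and driven by approximate unit-PSD white noise (sketch)
zeta = 0.1; wn = 2*pi*1;              % damping ratio and natural frequency [rad/s]
a = 0; b = wn^2;                      % numerator coefficients (assumed example values)
F = [0 1; -wn^2 -2*zeta*wn];          % companion-form dynamics matrix
G = [0; 1];                           % noise input coupling
H = [b a];                            % output row vector
dt = 1e-3; N = round(60/dt);          % time step and number of samples (60 s of data)
x  = zeros(2, N); y = zeros(1, N);
for k = 2:N
    w      = randn/sqrt(dt);                       % discrete stand-in for unit-PSD white noise
    x(:,k) = x(:,k-1) + (F*x(:,k-1) + G*w)*dt;     % Euler integration of the state equation
    y(k)   = H*x(:,k);
end
plot((0:N-1)*dt, y), xlabel('t [s]'), ylabel('y(t)')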

The mean power of a scalar random process is given by

E⟨x^2(t)⟩ = \psi_x(0) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \Psi_x(\omega)\, d\omega.

Fig 3.5 Diagram of a second-order Markov process.

Fig 3.6 Block diagram representation of a linear system.

The output y(t) of the linear system shown in Figure 3.6, with impulse response h(t) and input x(t), is given by

y(t) = \int_{-\infty}^{+\infty} h(t - \tau)\, x(\tau)\, d\tau.      (3.76)

This type of integral is called a convolution integral. Manipulation of Equation 3.76 leads to relationships between the autocorrelation functions of x(t) and y(t) and, in the frequency domain, between their power spectral densities:

\Psi_y(\omega) = |H(j\omega)|^2\, \Psi_x(\omega),      (3.80)

where H is the system transfer function shown in Figure 3.6, defined in Laplace transform notation as

H(s) = \int_{0}^{\infty} h(t)\, e^{-st}\, dt,

where s = jω.

3.5.1 Stochastic Differential Equations for Random Processes

A Note on the Calculus of Stochastic Differential Equations. Differential equations involving random processes are called stochastic differential equations. Introducing random processes as inhomogeneous terms in ordinary differential equations has ramifications beyond the level of rigor that will be followed here, but the reader should be aware of them. The problem is that random processes are not integrable functions in the conventional (Riemann) calculus. The resolution of this problem requires foundational modifications of the calculus to obtain many of the results presented. The Riemann integral of the "ordinary" calculus must be modified to what is called the Itô calculus. The interested reader will find these issues treated more rigorously in the books by Bucy and Joseph [15] and Itô [113].

A linear stochastic differential equation as a model of an RP with initial conditions has the form

\dot{x}(t) = F(t)\,x(t) + G(t)\,w(t) + C(t)\,u(t),
z(t) = H(t)\,x(t) + v(t) + D(t)\,u(t),      (3.82)

where the variables are defined as

x(t) = n × 1 state vector,
z(t) = ℓ × 1 measurement vector,
u(t) = r × 1 deterministic input vector,
F(t) = n × n time-varying dynamic coefficient matrix,
C(t) = n × r time-varying input coupling matrix,
H(t) = ℓ × n time-varying measurement sensitivity matrix,
D(t) = ℓ × r time-varying output coupling matrix,
G(t) = n × r time-varying process noise coupling matrix,
w(t) = r × 1 zero-mean uncorrelated "plant noise" process,
v(t) = ℓ × 1 zero-mean uncorrelated "measurement noise" process,

and the expected values as

E⟨w(t)⟩ = 0,  E⟨v(t)⟩ = 0,
E⟨w(t_1)\, w^T(t_2)⟩ = Q(t_1)\, \delta(t_2 - t_1),
E⟨v(t_1)\, v^T(t_2)⟩ = R(t_1)\, \delta(t_2 - t_1).

EXAMPLE 3.5. Continuing with Example 3.3, let the RP x(t) be a zero-mean stationary normal RP having autocorrelation

\psi_x(\tau) = \sigma^2 e^{-\alpha|\tau|}.

The corresponding power spectral density is

\Psi_x(\omega) = \frac{2\sigma^2\alpha}{\omega^2 + \alpha^2}.      (3.84)

This type of RP can be modeled as the output of a linear system with input w(t), a zero-mean white Gaussian noise with PSD equal to unity. Using Equation 3.80, one can derive the transfer function H(jω) for the following model:

H(j\omega) = \frac{\sigma\sqrt{2\alpha}}{\alpha + j\omega},

so that |H(j\omega)|^2 \cdot 1 = \frac{2\sigma^2\alpha}{\omega^2 + \alpha^2} = \Psi_x(\omega).

3.5.2 Discrete Model of a Random Sequence

A vector discrete-time recursive equation for modeling a random sequence (RS) with initial conditions can be given in the form

x_k = \Phi_{k-1}\,x_{k-1} + G_{k-1}\,w_{k-1} + \Gamma_{k-1}\,u_{k-1},
z_k = H_k\,x_k + v_k + D_k\,u_k.

TABLE 3.1 System Models of Random Processes
(Columns: Random Process; Autocorrelation Function and Power Spectral Density; Shaping Filter Diagram; State-Space Formulation.)

White noise:  \psi_x(\tau) = \sigma^2\delta(\tau),  \Psi_x(\omega) = \sigma^2.  Shaping filter: none; always treated as measurement noise.

Random walk:  \psi_x(\tau) undefined.  State-space model: \dot{x} = w(t).

This is the complete model with deterministic inputs u_k as discussed in Chapter 2 (Equations 2.28 and 2.29) and random sequence noise w_k and v_k as described in Chapter 4, with

x_k = n × 1 state vector,
z_k = ℓ × 1 measurement vector,
u_k = r × 1 deterministic input vector,
\Phi_{k-1} = n × n time-varying matrix,
G_{k-1} = n × r time-varying matrix,
H_k = ℓ × n time-varying matrix,
D_k = ℓ × r time-varying matrix,
\Gamma_{k-1} = n × r time-varying matrix,

E⟨w_k⟩ = 0,  E⟨v_k⟩ = 0,
E⟨w_{k_1}\, w^T_{k_2}⟩ = Q_{k_1}\, \Delta(k_2 - k_1),
E⟨v_{k_1}\, v^T_{k_2}⟩ = R_{k_1}\, \Delta(k_2 - k_1),
E⟨w_{k_1}\, v^T_{k_2}⟩ = M_{k_1}\, \Delta(k_2 - k_1).

EXAMPLE 3.6. Let {x_k} be a zero-mean stationary Gaussian RS with autocorrelation

\psi_x(k_2 - k_1) = \sigma^2 e^{-\alpha|k_2 - k_1|}.

This type of RS can be modeled as the output of a linear system with input w_k being zero-mean white Gaussian noise with PSD equal to unity.

A difference equation model for this type of process can be defined as

x_k = \Phi\,x_{k-1} + G\,w_{k-1},  z_k = x_k.      (3.88)

In order to use this model, we need to solve for the unknown parameters Φ and G as functions of the parameter α. To do so, we first multiply Equation 3.88 by x_{k-1} on both sides and take the expected values to obtain the equations

E⟨x_k\, x_{k-1}⟩ = \Phi\,E⟨x_{k-1}\, x_{k-1}⟩ + G\,E⟨w_{k-1}\, x_{k-1}⟩,
\sigma^2 e^{-\alpha} = \Phi\,\sigma^2,

assuming the w_k are uncorrelated and E⟨w_k⟩ = 0, so that E⟨w_{k-1}\, x_{k-1}⟩ = 0. One obtains the solution Φ = e^{-α}.

Next, square the state variable defined by Equation 3.88 and take its expected value:

E⟨x_k^2⟩ = \Phi^2\,E⟨x_{k-1}\, x_{k-1}⟩ + G^2\,E⟨w_{k-1}\, w_{k-1}⟩,      (3.90)
\sigma^2 = e^{-2\alpha}\,\sigma^2 + G^2,

because the variance E⟨w_{k-1}^2⟩ = 1, and the parameter G = \sigma\sqrt{1 - e^{-2\alpha}}.

The complete model is then

x_k = e^{-\alpha}\,x_{k-1} + \sigma\sqrt{1 - e^{-2\alpha}}\;w_{k-1},

with E⟨w_k⟩ = 0 and E⟨w_{k_1}\, w_{k_2}⟩ = \Delta(k_2 - k_1).

The dynamic process model derived in Example 3.6 is called a shaping filter. Block diagrams of this and other shaping filters are given in Table 3.2, along with their difference equation models.
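The derived shaping filter is easy to simulate and check against its intended autocorrelation. In the sketch below, α, σ, and the record length are assumed values:

% Shaping filter of Example 3.6: x_k = e^(-alpha)*x_(k-1) + sigma*sqrt(1 - e^(-2*alpha))*w_(k-1);
% compare the sample autocorrelation with sigma^2*exp(-alpha*|lag|) (illustrative sketch)
alpha = 0.2; sigma = 2; N = 1e6;
Phi = exp(-alpha); G = sigma*sqrt(1 - exp(-2*alpha));
x = zeros(N,1);
x(1) = sigma*randn;                  % start in steady state so the sequence is stationary
for k = 2:N
    x(k) = Phi*x(k-1) + G*randn;
end
for lag = 0:5
    c_sample = mean(x(1:end-lag).*x(1+lag:end));
    fprintf('lag %d: sample %.3f   theory %.3f\n', lag, c_sample, sigma^2*exp(-alpha*lag));
end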

3.5.3 Autoregressive Processes and Linear Predictive Models

A linear predictive model for a signal is a representation in the form

x_k = a_1 x_{k-1} + a_2 x_{k-2} + \cdots + a_n x_{k-n} + u_k,

where u_k is the prediction error (process noise). This can be written as the state-space model

\begin{bmatrix} x_k \\ x_{k-1} \\ x_{k-2} \\ \vdots \\ x_{k-n+1} \end{bmatrix}
=
\begin{bmatrix}
a_1 & a_2 & \cdots & a_{n-1} & a_n \\
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & 0
\end{bmatrix}
\begin{bmatrix} x_{k-1} \\ x_{k-2} \\ x_{k-3} \\ \vdots \\ x_{k-n} \end{bmatrix}
+
\begin{bmatrix} u_k \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},      (3.93)

where the "state" is the n-vector of the last n samples of the signal and the covariance matrix Q_k of the associated process noise u_k will be filled with zeros, except for the term Q_{11} = E⟨u_k^2⟩.
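The companion-form model above can be built and simulated directly. The coefficients and noise variance below are assumed example values (chosen to give a stable model), not parameters from the text:

% Companion-form state-space realization of a linear predictive (autoregressive) model
% x_k = a1*x_(k-1) + ... + an*x_(k-n) + u_k (illustrative sketch)
a = [0.5 -0.3 0.1];                   % AR coefficients a1, ..., an (stable example)
n = numel(a);
Phi = [a; eye(n-1), zeros(n-1,1)];    % first row carries the coefficients, the rest shifts the state
q = 1;                                % Q is all zeros except Q(1,1) = E<u_k^2> = q
N = 5000;
xi = zeros(n,1);                      % state: the last n samples of the signal
xhist = zeros(N,1);
for k = 1:N
    u  = sqrt(q)*randn;
    xi = Phi*xi + [u; zeros(n-1,1)];
    xhist(k) = xi(1);                 % current signal sample x_k
end
plot(xhist), xlabel('k'), ylabel('x_k')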


TABLE 3.2 Stochastic System Models of Discrete Random Sequences
(Columns: Process Type; Autocorrelation; Block Diagram; State-Space Model.)


3.6 SHAPING FILTERS AND STATE AUGMENTATION

Shaping Filters. The focus of this section is nonwhite models for stationary processes. For many physical systems encountered in practice, it may not be justified to assume that all noises are white Gaussian noise processes. It can be useful to generate an autocorrelation function or PSD from real data and then develop an appropriate noise model using differential or difference equations. These models are called shaping filters. They are driven by noise with a flat spectrum (white-noise processes), which they shape to represent the spectrum of the actual system. It was shown in the previous section that a linear time-invariant system (shaping filter) driven by WSS white Gaussian noise provides such a model. The state vector can be "augmented" by appending to it the state vector components of the shaping filter, with the resulting model having the form of a linear dynamic system driven by white noise.

3.6.1 Correlated Process Noise Models

Shaping Filters for Process Noise. Let a system model be given by

\dot{x}(t) = F(t)\,x(t) + G(t)\,w_1(t),  z(t) = H(t)\,x(t) + v(t),      (3.95)

where w_1(t) is nonwhite, for example, correlated Gaussian noise. As given in the previous section, v(t) is a zero-mean white Gaussian noise. Suppose that w_1(t) can be modeled by a linear shaping filter driven by zero-mean white Gaussian noise.¹⁰

10. See the example in Section 3.7 for WSS processes.
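To illustrate the augmentation idea (a sketch in notation of my own choosing, not the book's example), suppose the plant is a double integrator driven by exponentially correlated noise w_1, and w_1 is produced by the first-order shaping filter of Example 3.5. Appending the shaping-filter state to the plant state gives an augmented linear model driven only by white noise:

% State augmentation with a first-order shaping filter (illustrative sketch)
F  = [0 1; 0 0];  G = [0; 1];              % example plant: double integrator driven by w1
alpha = 0.5; sigma = 1;                    % shaping filter parameters (assumed)
Fsf = -alpha;  Gsf = sigma*sqrt(2*alpha);  Hsf = 1;   % x_sf' = Fsf*x_sf + Gsf*w2,  w1 = Hsf*x_sf
% Augmented state za = [x; x_sf]:  za' = Fa*za + Ga*w2, with w2 white
Fa = [F, G*Hsf; zeros(1,2), Fsf];
Ga = [zeros(2,1); Gsf];
H  = [1 0];                                % original measurement matrix
Ha = [H, 0];                               % measurement of the augmented state
disp(Fa), disp(Ga), disp(Ha)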
