When replacing the arguments t1, t2 in equations (4.4.58), (4.4.60) by t and τ, we obtain

Rξ(t, τ) = E{ξ(t)ξ(τ)}

and

Covξ(t, τ) = E{(ξ(t) − µ(t))(ξ(τ) − µ(τ))}

If t = τ, then

Covξ(t, t) = E{(ξ(t) − µ(t))²}

where Covξ(t, t) is equal to the variance of the random variable ξ(t). The abbreviated form Covξ(t) = Covξ(t, t) is also often used.
Consider now mutually dependent stochastic processes ξ1(t), ξ2(t), . . . , ξn(t) that are elements of the stochastic process vector ξ(t). In this case, the mean values and the auto-covariance functions are often sufficient characteristics of the process.
The vector mean value of the vector ξ(t) is given as

µ(t) = E{ξ(t)}
The expression

Covξ(t1, t2) = E{(ξ(t1) − µ(t1))(ξ(t2) − µ(t2))^T}    (4.4.67)

is the corresponding auto-covariance matrix of the stochastic process vector ξ(t).
The auto-covariance matrix is symmetric, thus

Covξ(t1, t2) = Covξ^T(t2, t1)
If a stochastic process is normally distributed, then knowledge of its mean value and covariance is sufficient for obtaining any other characteristics of the process.
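As a brief numerical illustration (not part of the original text), the following Python sketch estimates the mean vector and the auto-covariance matrix of a vector stochastic process by ensemble averaging over many realisations; the two-dimensional process, its parameters, and all variable names are only illustrative assumptions.

import numpy as np

# Sketch: ensemble estimates of mu(t) and Cov_xi(t1, t2) = E[(xi(t1)-mu(t1))(xi(t2)-mu(t2))^T]
# for a hypothetical two-dimensional process built from correlated increments.
rng = np.random.default_rng(0)
n_real, n_time = 20_000, 50
increments = rng.multivariate_normal([0.0, 0.1], [[1.0, 0.3], [0.3, 0.5]],
                                     size=(n_real, n_time))
xi = np.cumsum(increments, axis=1)          # realisations, shape (n_real, n_time, 2)
mu = xi.mean(axis=0)                        # estimate of mu(t) for every t
t1, t2 = 10, 30
d1 = xi[:, t1, :] - mu[t1]
d2 = xi[:, t2, :] - mu[t2]
cov_t1_t2 = d1.T @ d2 / n_real              # estimate of Cov_xi(t1, t2), a 2 x 2 matrix
print(mu[t1], mu[t2])
print(cov_t1_t2)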
For the investigation of stochastic processes, the following expression is often used
µ̄ = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} ξ(t) dt

µ̄ is not time dependent and follows from observations of the stochastic process over a sufficiently large time interval, and ξ(t) is any realisation of the stochastic process. In general, the following expression is used
µ̄m = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} ξ^m(t) dt

For m = 2 this expression gives µ̄2.
Stochastic processes are divided into stationary and non-stationary. In the case of a stationary stochastic process, all probability densities f1, f2, . . . , fn do not depend on the start of the observations and the one-dimensional probability density is not a function of time t. Hence, the mean value (4.4.55) and the variance (4.4.56) are not time dependent either.
Many stationary processes are ergodic, i.e. the following holds with probability equal to one
µ = ∫_{−∞}^{∞} x f1(x) dx = µ̄ = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} ξ(t) dt
The usual assumption in practice is that stochastic processes are stationary and ergodic.
The properties (4.4.72) and (4.4.73) show that for the investigation of the statistical properties of a stationary and ergodic process, it is only necessary to observe one of its realisations over a sufficiently large time interval.
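A minimal sketch of this ergodicity property, assuming a simple first-order autoregressive model as the stationary ergodic process (the model and its parameters are hypothetical): the time average of one long realisation approaches the ensemble mean.

import numpy as np

# Time average over one long realisation vs. the ensemble mean of an ergodic process.
rng = np.random.default_rng(0)
mu, phi, n = 2.0, 0.9, 200_000                    # assumed mean, AR(1) coefficient, samples
xi = np.empty(n)
xi[0] = mu
for k in range(1, n):
    xi[k] = mu + phi * (xi[k - 1] - mu) + rng.standard_normal()
time_average = xi.mean()                          # discrete analogue of (1/2T) * integral of xi(t) dt
print(time_average, mu)                           # the two values should be close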
Stationary stochastic processes have a two-dimensional density function f2 that is independent of the time instants t1, t2, but depends on the difference τ = t2 − t1 that separates the two random variables ξ(t1), ξ(t2). As a result, the auto-correlation function (4.4.58) can be written as
Rξ(τ) = E{ξ(t1)ξ(t2)} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 f2(x1, x2, τ) dx1 dx2
For a stationary and ergodic process, equations (4.4.72), (4.4.73) hold and the expression E{ξ(t)ξ(t + τ)} can be written as the time average

E{ξ(t)ξ(t + τ)} = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} ξ(t)ξ(t + τ) dt
Hence, the auto-correlation function of a stationary ergodic process is in the form
Rξ(τ) = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} ξ(t)ξ(t + τ) dt    (4.4.75)
The auto-correlation function of a process characterises the statistical dependence between the values of the process at the times t and t + τ. If a stationary ergodic stochastic process is concerned, its auto-correlation function can be determined from any of its realisations.
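The following sketch (an illustration with an assumed first-order autoregressive realisation, not the text's own example) estimates Rξ(τ) = E{ξ(t)ξ(t + τ)} from a single realisation by time averaging.

import numpy as np

# Estimate the auto-correlation function of a stationary ergodic process from one realisation.
rng = np.random.default_rng(1)
phi, n = 0.8, 100_000
xi = np.zeros(n)
for k in range(1, n):
    xi[k] = phi * xi[k - 1] + rng.standard_normal()

def autocorrelation(x, max_lag):
    # time-average estimate of R(tau) = E{x(t) x(t+tau)} for tau = 0, 1, ..., max_lag
    return np.array([np.dot(x[:len(x) - m], x[m:]) / (len(x) - m) for m in range(max_lag + 1)])

R = autocorrelation(xi, 4)
print(R)                              # for this AR(1) model R(tau) is roughly R(0) * phi**tau
print(R[0] * phi ** np.arange(5))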
The auto-correlation function Rξ(τ) is symmetric:

Rξ(τ) = Rξ(−τ)
For τ = 0 the auto-correlation function is determined by the expected value of the square of the random variable:

Rξ(0) = E{ξ²(t)}
For τ → ∞ the auto-correlation function is given as the square of the expected value. This can easily be proved:
Rξ(τ) = E{ξ(t)ξ(t + τ)} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 f2(x1, x2, τ) dx1 dx2
For τ → ∞, ξ(t) and ξ(t + τ) are mutually independent. Using (4.4.53), which can be applied to a stochastic process, yields
Rξ(∞) = ∫_{−∞}^{∞} x1 f(x1) dx1 ∫_{−∞}^{∞} x2 f(x2) dx2 = µ²
The auto-correlation function attains its maximum value at τ = 0 and the following holds:

Rξ(0) ≥ |Rξ(τ)|
The cross-correlation function of two mutually ergodic stochastic processes ξ(t), η(t) can be given as

Rξη(τ) = E{ξ(t)η(t + τ)}

or

Rξη(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 y2 f2(x1, y2, τ) dx1 dy2 = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} ξ(t)η(t + τ) dt
Consider now a stationary ergodic stochastic process with the corresponding auto-correlation function Rξ(τ). This auto-correlation function provides information about the stochastic process in the time domain. The same information can be obtained in the frequency domain by taking the Fourier transform of the auto-correlation function. The Fourier transform Sξ(ω) of Rξ(τ) is given as
Sξ(ω) = ∫_{−∞}^{∞} Rξ(τ) e^{−jωτ} dτ
Correspondingly, the auto-correlation function Rξ(τ ) can be obtained if Sξ(ω) is known using the inverse Fourier transform
Rξ(τ) = (1/(2π)) ∫_{−∞}^{∞} Sξ(ω) e^{jωτ} dω    (4.4.85)
Rξ(τ) and Sξ(ω) are non-random characteristics of stochastic processes. Sξ(ω) is called the power spectral density of a stochastic process. This function is of great importance for the investigation of transformations of stochastic signals entering linear dynamical systems.
The power spectral density is an even function of ω:

Sξ(ω) = Sξ(−ω)
For its determination, the following relations can be used
Sξ(ω) = 2 ∫_0^∞ Rξ(τ) cos ωτ dτ

Rξ(τ) = (1/π) ∫_0^∞ Sξ(ω) cos ωτ dω    (4.4.88)
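As a numerical check of the cosine-transform pair above (a sketch with assumed values, using the exponential auto-correlation Rξ(τ) = D e^{−a|τ|} that appears later in the text):

import numpy as np

# S(w) = 2 * int_0^inf R(tau) cos(w*tau) dtau evaluated numerically for R(tau) = D*exp(-a*tau)
# and compared with the closed form 2aD/(a^2 + w^2).
a, D = 2.0, 1.5                                # assumed parameters
tau = np.linspace(0.0, 20.0, 200_001)
dtau = tau[1] - tau[0]
R = D * np.exp(-a * tau)
for w in (0.0, 1.0, 5.0):
    S_numeric = 2.0 * np.sum(R * np.cos(w * tau)) * dtau
    S_exact = 2.0 * a * D / (a**2 + w**2)
    print(w, S_numeric, S_exact)               # both columns should agree closely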
The cross-power spectral density Sξη(ω) of two mutually ergodic stochastic processes ξ(t), η(t) with zero means is the Fourier transform of the associated cross-correlation function Rξη(τ ):
Sξη(ω) = ∫_{−∞}^{∞} Rξη(τ) e^{−jωτ} dτ
The inverse relation, which gives the cross-correlation function Rξη(τ) if Sξη(ω) is known, is

Rξη(τ) = (1/(2π)) ∫_{−∞}^{∞} Sξη(ω) e^{jωτ} dω
If we substitute τ = 0 into (4.4.75) and (4.4.85), the following relations are obtained
E{ξ²(t)} = Rξ(0) = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} ξ²(t) dt    (4.4.91)
E{ξ²(t)} = Rξ(0) = (1/(2π)) ∫_{−∞}^{∞} Sξ(ω) dω = (1/π) ∫_0^∞ Sξ(ω) dω    (4.4.92)
Equation (4.4.91) describes the energy characteristics of a process. The right-hand side of the equation can be interpreted as the average power of the process. Equation (4.4.92) determines the same power, but expressed in terms of the power spectral density. The average power is given by the area under the spectral density curve and Sξ(ω) characterises the distribution of the signal power over frequency. For Sξ(ω) the following holds:

Sξ(ω) ≥ 0
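A small numerical cross-check of the two power expressions (a sketch with an assumed exponential-correlation process; the upper integration limit is truncated, so the result is only approximate):

import numpy as np

# Average power from R(0) versus (1/pi) * int_0^inf S(w) dw for S(w) = 2aD/(a^2 + w^2).
a, D = 2.0, 1.5
w = np.linspace(0.0, 5000.0, 500_001)
S = 2.0 * a * D / (a**2 + w**2)
power_from_spectrum = np.sum(S) * (w[1] - w[0]) / np.pi
power_from_R0 = D                              # R(0) = D for this process
print(power_from_spectrum, power_from_R0)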
4.4.4 White Noise
Consider a stationary stochastic process with a constant power spectral density for all frequencies,

Sξ(ω) = V

This process has a "white" spectrum and is called white noise. Its power spectral density is shown in Fig. 4.4.4a. From (4.4.92) it follows that the average power of white noise is infinitely large, as
E{ξ²(t)} = (1/π) ∫_0^∞ V dω → ∞
Therefore, such a process does not exist under real conditions.
The auto-correlation function of white noise can be determined from (4.4.88)
Rξ(τ) = (1/π) ∫_0^∞ V cos ωτ dω = V δ(τ)

where

δ(τ) = (1/π) ∫_0^∞ cos ωτ dω
because the Fourier transform of the delta function Fδ(jω) is equal to one and the inverse Fourier transform is of the form
δ(τ) = (1/(2π)) ∫_{−∞}^{∞} Fδ(jω) e^{jωτ} dω = (1/(2π)) ∫_{−∞}^{∞} e^{jωτ} dω
     = (1/(2π)) ∫_{−∞}^{∞} cos ωτ dω + j (1/(2π)) ∫_{−∞}^{∞} sin ωτ dω = (1/π) ∫_0^∞ cos ωτ dω

since the sine integral vanishes by odd symmetry.
The auto-correlation function of white noise (Fig. 4.4.4b) is determined by the delta function and is equal to zero for any non-zero value of τ. White noise is an example of a stochastic process where ξ(t) and ξ(t + τ) are independent.
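In discrete-time simulations, white noise of intensity V is commonly approximated by independent samples with variance V/∆t; the sketch below (with assumed values) shows the resulting auto-correlation estimate, which resembles the scaled delta function.

import numpy as np

# Discrete-time stand-in for white noise of intensity V: samples with variance V/dt.
rng = np.random.default_rng(3)
V, dt, n = 0.5, 0.01, 200_000
xi = np.sqrt(V / dt) * rng.standard_normal(n)
R = [np.dot(xi[:n - m], xi[m:]) / (n - m) for m in range(4)]
print(R)          # approximately [V/dt, 0, 0, 0]: non-zero only at zero lag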
A physically realisable white noise can be introduced if its power spectral density is constrained to a finite frequency band: Sξ(ω) = V for |ω| < ω1 and Sξ(ω) = 0 otherwise.
The associated auto-correlation function can be given as
Rξ(τ) = (V/π) ∫_0^{ω1} cos ωτ dω = V sin(ω1τ)/(πτ)
The following relation also holds
µ̄2 = D = (V/(2π)) ∫_{−ω1}^{ω1} dω = V ω1/π
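A quick numerical evaluation of the band-limited white-noise auto-correlation (with assumed V and ω1, not values from the text): at τ close to zero it approaches the variance D = Vω1/π, and it oscillates and decays for larger τ.

import numpy as np

# R(tau) = (V/pi) * sin(w1*tau)/tau for band-limited white noise.
V, w1 = 0.5, 10.0
tau = np.array([1e-6, 0.1, 0.5, 1.0, 5.0])
R = (V / np.pi) * np.sin(w1 * tau) / tau
print(R)
print(V * w1 / np.pi)        # D, the value approached as tau -> 0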
Sometimes, the relation (4.4.94) is approximated by a continuous function. Often, the following relation can be used
Sξ(ω) = 2aD/(a² + ω²)    (4.4.102)
The associated auto-correlation function is of the form
Rξ(τ) = (1/(2π)) ∫_{−∞}^{∞} [2aD/(a² + ω²)] e^{jωτ} dω = D e^{−a|τ|}    (4.4.103)
Figure 4.4.5 depicts the power spectral density and the auto-correlation function of this process. The equations (4.4.102), (4.4.103) describe many stochastic processes well. For example, if a ≫ 1, the approximation is usually very good.
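A process with exactly this pair of characteristics can be generated by passing white noise through a first-order filter; the sketch below simulates such a process with an Euler scheme (all parameters assumed) and compares the estimated auto-correlation with D e^{−a|τ|}.

import numpy as np

# First-order filter driven by approximate white noise: R(tau) ~ D*exp(-a*|tau|).
rng = np.random.default_rng(2)
a, D, dt, n = 1.0, 2.0, 0.01, 500_000
noise = np.sqrt(2.0 * a * D * dt) * rng.standard_normal(n)
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = (1.0 - a * dt) * x[k] + noise[k]

lags = np.arange(0, 300, 50)                        # tau = lag * dt
R_est = np.array([np.dot(x[:n - m], x[m:]) / (n - m) for m in lags])
print(R_est)
print(D * np.exp(-a * lags * dt))                   # theoretical values for comparison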
Figure 4.4.4: Power spectral density and auto-correlation function of white noise: a) Sξ(ω) = V, b) Rξ(τ) = V δ(τ)
Figure 4.4.5: Power spectral density and auto-correlation function of the process given by (4.4.102) and (4.4.103)
4.4.5 Response of a Linear System to Stochastic Input
Consider a continuous linear system with constant coefficients
dx(t)/dt = Ax(t) + Bξ(t),    x(0) = ξ0

where x(t) = [x1(t), x2(t), . . . , xn(t)]^T is the state vector and ξ(t) = [ξ1(t), ξ2(t), . . . , ξm(t)]^T is a stochastic process vector entering the system. A, B are constant matrices of dimensions n × n and n × m, respectively. The initial condition ξ0 is a vector of random variables.
Suppose that the expectation E[ξ0] and the covariance matrix Cov(ξ0) are known and given as

E[ξ0] = x0,    Cov(ξ0) = E[(ξ0 − x0)(ξ0 − x0)^T] = Cov0
Further, suppose that ξ(t) is independent of the initial condition vector ξ0 and that its mean value µ(t) and its auto-covariance function Covξ(t, τ) are known, i.e.

E[(ξ(t) − µ(t))(ξ(τ) − µ(τ))^T] = Covξ(t, τ), for t ≥ 0, τ ≥ 0    (4.4.109)
As ξ0 is a vector of random variables and ξ(t) is a vector of stochastic processes, x(t) is a vector of stochastic processes as well. We would like to determine its mean value E[x(t)], covariance matrix Covx(t) = Covx(t, t), and auto-covariance matrix Covx(t, τ) for given ξ0 and ξ(t).
Any stochastic state trajectory can be determined for given initial conditions and stochastic inputs as
x(t) = Φ(t)ξ0 + ∫_0^t Φ(t − α)Bξ(α) dα

where Φ(t) = e^{At} is the system transition matrix.
Denoting x̄(t) = E[x(t)] and x0 = E[ξ0], the following holds

x̄(t) = Φ(t)x0 + ∫_0^t Φ(t − α)Bµ(α) dα
This corresponds to the solution of the differential equation

dx̄(t)/dt = Ax̄(t) + Bµ(t)

with initial condition x̄(0) = x0.
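For a constant input mean µ, the convolution formula can be evaluated in closed form as x̄(t) = e^{At}x0 + A^{−1}(e^{At} − I)Bµ (assuming A is invertible); the sketch below, with assumed matrices, checks this against a direct Euler integration of the differential equation for x̄(t).

import numpy as np
from scipy.linalg import expm

# Mean propagation: closed-form convolution result vs. Euler integration of dx_bar/dt = A x_bar + B mu.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
mu = np.array([1.0])                                # constant input mean
x0 = np.array([1.0, 0.0])
t_end, dt = 5.0, 1e-4
x_closed = expm(A * t_end) @ x0 + np.linalg.solve(A, expm(A * t_end) - np.eye(2)) @ (B @ mu)
x = x0.copy()
for _ in range(int(t_end / dt)):
    x = x + dt * (A @ x + B @ mu)
print(x_closed)
print(x)                                            # should agree to integration accuracy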
To find the covariance matrix and the auto-correlation function, consider first the deviation x(t) − x̄(t):

x(t) − x̄(t) = Φ(t)[ξ0 − x0] + ∫_0^t Φ(t − α)B[ξ(α) − µ(α)] dα    (4.4.116)
It is obvious that x(t) − x̄(t) is the solution of the following differential equation

dx(t)/dt − dx̄(t)/dt = A[x(t) − x̄(t)] + B[ξ(t) − µ(t)]    (4.4.117)

with initial condition x(0) − x̄(0) = ξ0 − x0.
From equation (4.4.116) it follows for Covx(t) that

Covx(t) = E[(x(t) − x̄(t))(x(t) − x̄(t))^T]
        = E{ (Φ(t)[ξ0 − x0] + ∫_0^t Φ(t − α)B[ξ(α) − µ(α)] dα)
           × (Φ(t)[ξ0 − x0] + ∫_0^t Φ(t − β)B[ξ(β) − µ(β)] dβ)^T }    (4.4.119)

and after some manipulations,
Covx(t) = Φ(t)E[(ξ0 − x0)(ξ0 − x0)^T]Φ^T(t)
        + ∫_0^t Φ(t)E[(ξ0 − x0)(ξ(β) − µ(β))^T]B^T Φ^T(t − β) dβ
        + ∫_0^t Φ(t − α)B E[(ξ(α) − µ(α))(ξ0 − x0)^T]Φ^T(t) dα
        + ∫_0^t ∫_0^t Φ(t − α)B E[(ξ(α) − µ(α))(ξ(β) − µ(β))^T]B^T Φ^T(t − β) dβ dα    (4.4.120)

Finally, using equations (4.4.107), (4.4.109), (4.4.110) yields
Covx(t) = Φ(t)Cov0Φ^T(t) + ∫_0^t ∫_0^t Φ(t − α)B Covξ(α, β) B^T Φ^T(t − β) dβ dα    (4.4.121)

Analogously, for Covx(t, τ) it holds that
Covx(t, τ) = Φ(t)Cov0Φ^T(τ) + ∫_0^t ∫_0^τ Φ(t − α)B Covξ(α, β) B^T Φ^T(τ − β) dβ dα    (4.4.122)

Consider now a particular case when the system input is a white noise vector, characterised by

E[(ξ(t) − µ(t))(ξ(τ) − µ(τ))^T] = V(t)δ(t − τ)
for t ≥ 0, τ ≥ 0, V(t) = V^T(t) ≥ 0    (4.4.123)

The state covariance matrix Covx(t) can be determined if the auto-covariance matrix of the vector white noise ξ(t),

Covξ(t, τ) = V(t)δ(t − τ)    (4.4.124)

is used in equation (4.4.121), which yields
Covx(t) = Φ(t)Cov0Φ^T(t) + ∫_0^t ∫_0^t Φ(t − α)B V(α)δ(α − β) B^T Φ^T(t − β) dβ dα    (4.4.125)
        = Φ(t)Cov0Φ^T(t) + ∫_0^t Φ(t − α)B V(α) B^T Φ^T(t − α) dα    (4.4.126)
The covariance matrix Covx(t) of the state vector x(t) is the solution of the matrix differential equation

dCovx(t)/dt = A Covx(t) + Covx(t)A^T + B V(t) B^T

with initial condition Covx(0) = Cov0.
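A sketch of this covariance propagation with assumed matrices: the matrix differential equation is integrated by simple Euler steps and its steady state is compared with the algebraic Lyapunov solution returned by SciPy.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# dCov/dt = A Cov + Cov A^T + B V B^T, integrated by Euler steps; A is stable.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
V = np.array([[0.5]])                               # white-noise intensity
Cov = np.zeros((2, 2))                              # Cov_x(0) = Cov_0 = 0
dt = 1e-3
for _ in range(20_000):                             # integrate up to t = 20
    Cov = Cov + dt * (A @ Cov + Cov @ A.T + B @ V @ B.T)

Cov_stationary = solve_continuous_lyapunov(A, -B @ V @ B.T)   # A X + X A^T + B V B^T = 0
print(Cov)
print(Cov_stationary)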
The auto-covariance matrix Covx(t, τ) of the state vector x(t) is given by applying (4.4.124) to (4.4.122). After some manipulations it follows that

Covx(t, τ) = Φ(t − τ)Covx(τ) for t > τ
If a linear continuous system with constant coefficients is asymptotically stable, is observed from time −∞, and its input is a stationary white noise vector, then x(t) is a stationary stochastic process.
The mean value x̄ = E[x(t)] is the solution of the equation

Ax̄ + Bµ = 0

where µ is the vector of constant mean values of the stationary white noises at the system input. The covariance matrix

Covx = E[(x(t) − x̄)(x(t) − x̄)^T]

is a constant matrix and is given as the solution of

A Covx + Covx A^T + B V B^T = 0

where V is a symmetric positive-definite constant matrix defined by

E[(ξ(t) − µ)(ξ(τ) − µ)^T] = V δ(t − τ)
The auto-covariance matrix

E[(x(t1) − x̄)(x(t2) − x̄)^T] = Covx(t1, t2) ≡ Covx(t1 − t2, 0)    (4.4.135)

is, in the case of stationary processes, dependent only on τ = t1 − t2 and can be determined as

Covx(τ, 0) = e^{Aτ} Covx for τ > 0
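Continuing the same assumed example as above, the stationary auto-covariance Covx(τ, 0) = e^{Aτ}Covx can be evaluated directly with a matrix exponential:

import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Stationary auto-covariance Cov_x(tau, 0) = expm(A*tau) @ Cov_x for tau > 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
V = np.array([[0.5]])
Cov_x = solve_continuous_lyapunov(A, -B @ V @ B.T)
for tau in (0.0, 0.5, 2.0):
    print(tau)
    print(expm(A * tau) @ Cov_x)                    # decays towards zero since A is stable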
Example 4.4.1: Analysis of a first order system
Consider the mixing process example from page 67, given by the state equation

dx(t)/dt = ax(t) + bξ(t)

where x(t) is the output concentration, ξ(t) is a stochastic input concentration, a = −1/T1, b = 1/T1, and T1 = V/q is the time constant defined as the ratio of the constant tank volume V and the constant volumetric flow q.
Suppose that x(0) = ξ0, where ξ0 is a random variable.
Figure 4.4.6: Block-scheme of a system with transfer function G(s)
Further assume that the following probability characteristics are known
E[ξ0] = x0
E[(ξ0 − x0)²] = Cov0
E[(ξ(t) − µ)(ξ(τ) − µ)] = V δ(t − τ) for t, τ ≥ 0
E[(ξ(t) − µ)(ξ0 − x0)] ≡ 0 for t ≥ 0
The task is to determine the mean value E[x(t)], the variance Covx(t), and the auto-covariance function in the stationary case, Covx(τ, 0).
The mean value E[x(t)] is given as

x̄(t) = e^{at} x0 − (b/a)(1 − e^{at})µ
As a < 0, the output concentration for t → ∞ is an asymptotically stationary stochastic process with the mean value

x̄∞ = −(b/a)µ
The output concentration variance is determined from (4.4.126) as

Covx(t) = e^{2at} Cov0 − (b²/(2a))(1 − e^{2at})V

Again, for t → ∞ the variance is given as
lim_{t→∞} Covx(t) = −b²V/(2a)

The auto-covariance function in the stationary case can be written as
Covx(τ, 0) = −e^{a|τ|} b²V/(2a)
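A numerical evaluation of the example with assumed values T1 = 2, µ = 1, V = 0.4, x0 = 0, Cov0 = 0.1 (these numbers are not from the original text):

import numpy as np

# First-order mixing example: mean, variance and their stationary limits.
T1, mu, V, x0, Cov0 = 2.0, 1.0, 0.4, 0.0, 0.1
a, b = -1.0 / T1, 1.0 / T1
for t in (0.0, 1.0, 5.0, 20.0):
    x_bar = np.exp(a * t) * x0 - (b / a) * (1.0 - np.exp(a * t)) * mu
    cov_x = np.exp(2 * a * t) * Cov0 - (b**2 / (2 * a)) * (1.0 - np.exp(2 * a * t)) * V
    print(t, x_bar, cov_x)
print(-(b / a) * mu, -(b**2) * V / (2 * a))         # stationary mean and variance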
4.4.6 Frequency Domain Analysis of a Linear System with Stochastic Input
Consider a continuous linear system with constant coefficients (Fig. 4.4.6). The system response to a stochastic input signal is a stochastic process determined by its auto-correlation function and power spectral density. The probability characteristics of the stochastic output signal can be found if the process input and the system characteristics are known.
Let u(t) be any realisation of a stationary stochastic process in the system input and y(t) be the associated system response
y(t) = ∫_{−∞}^{∞} g(τ1)u(t − τ1) dτ1    (4.4.140)

where g(t) is the impulse response. The mean value of y(t) can be determined in the same way as

E[y(t)] = ∫_{−∞}^{∞} g(τ1)E[u(t − τ1)] dτ1
Analogously to (4.4.140), which determines the system output at time t, at another time t + τ it holds that

y(t + τ) = ∫_{−∞}^{∞} g(τ2)u(t + τ − τ2) dτ2
The auto-correlation function of the output signal is thus given as
Ryy(τ) = E[y(t)y(t + τ)]
       = E{ ∫_{−∞}^{∞} g(τ1)u(t − τ1) dτ1 ∫_{−∞}^{∞} g(τ2)u(t + τ − τ2) dτ2 }    (4.4.143)

or

Ryy(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(τ1)g(τ2)E[u(t − τ1)u(t + τ − τ2)] dτ1 dτ2    (4.4.144)
As the following holds
E[u(t − τ1)u(t + τ − τ2)] = E[u(t − τ1)u{(t − τ1) + (τ + τ1 − τ2)}]    (4.4.145)

then it follows that

Ryy(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(τ1)g(τ2)Ruu(τ + τ1 − τ2) dτ1 dτ2
where Ruu(τ + τ1 − τ2) is the input auto-correlation function with the argument (τ + τ1 − τ2). The mean value of the squared output signal is given as

E[y²(t)] = Ryy(0) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(τ1)g(τ2)Ruu(τ1 − τ2) dτ1 dτ2
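A numerical sketch of the double-integral expression for the mean squared output, assuming g(t) = e^{−t} for t ≥ 0 and Ruu(τ) = D e^{−a|τ|} (both choices are illustrative, not from the text); for these choices the exact value is D/(1 + a).

import numpy as np

# Mean squared output as a double integral over g(tau1) g(tau2) R_uu(tau1 - tau2).
a, D = 2.0, 1.5
tau = np.linspace(0.0, 20.0, 2001)
d = tau[1] - tau[0]
T1, T2 = np.meshgrid(tau, tau, indexing="ij")
integrand = np.exp(-T1) * np.exp(-T2) * D * np.exp(-a * np.abs(T1 - T2))
y2_numeric = integrand.sum() * d * d
print(y2_numeric, D / (1.0 + a))                    # should be approximately equal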
The output power spectral density is given as the Fourier transform of the associated auto-correlation function:

Syy(ω) = ∫_{−∞}^{∞} Ryy(τ) e^{−jωτ} dτ
       = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(τ1)g(τ2)Ruu[τ + (τ1 − τ2)] e^{−jωτ} dτ1 dτ2 dτ    (4.4.148)

Multiplying the subintegral term in the above equation by (e^{jωτ1}e^{−jωτ2})(e^{−jωτ1}e^{jωτ2}) = 1 yields
Syy(ω) = ∫_{−∞}^{∞} g(τ1)e^{jωτ1} dτ1 ∫_{−∞}^{∞} g(τ2)e^{−jωτ2} dτ2 ∫_{−∞}^{∞} Ruu[τ + (τ1 − τ2)] e^{−jω(τ + τ1 − τ2)} dτ    (4.4.149)

Now we introduce a new variable τ0 = τ + τ1 − τ2, yielding
Syy(ω) = ∫_{−∞}^{∞} g(τ1)e^{jωτ1} dτ1 ∫_{−∞}^{∞} g(τ2)e^{−jωτ2} dτ2 ∫_{−∞}^{∞} Ruu(τ0)e^{−jωτ0} dτ0    (4.4.150)

The last integral is the input power spectral density
Suu(ω) = ∫_{−∞}^{∞} Ruu(τ0)e^{−jωτ0} dτ0

The second integral is the Fourier transform of the impulse response g(t), i.e. it is the frequency transfer function of the system
G(jω) = ∫_{−∞}^{∞} g(t)e^{−jωt} dt

Finally, the following holds for the first integral:

G(−jω) = ∫_{−∞}^{∞} g(t)e^{jωt} dt
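As a closing numerical check (with an assumed impulse response g(t) = e^{−t}, t ≥ 0, which is not part of the original text), the second and first integrals above evaluate to G(jω) = 1/(1 + jω) and to its complex conjugate G(−jω):

import numpy as np

# Direct quadrature of the Fourier integrals defining G(jw) and G(-jw) for g(t) = exp(-t).
t = np.linspace(0.0, 60.0, 600_001)
g = np.exp(-t)
dt = t[1] - t[0]
for w in (0.5, 2.0):
    G_pos = np.sum(g * np.exp(-1j * w * t)) * dt    # second integral: G(jw)
    G_neg = np.sum(g * np.exp(1j * w * t)) * dt     # first integral:  G(-jw)
    print(G_pos, 1.0 / (1.0 + 1j * w))              # numeric vs. exact G(jw)
    print(G_neg, np.conj(G_pos))                    # G(-jw) equals conj(G(jw))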