Analysis and Control of Linear Systems - Chapter 5



Signals: Deterministic and Statistical Models

5.1 Introduction

This chapter is dedicated to signal modeling procedures, and in particular to stationary random signals. After having discussed the spectral characterization of deterministic signals, with the help of the Fourier transform and the energy spectral density, we will define the power spectral density of stationary random signals. We will show that a simple model, a linear shaper filter excited by a white noise, makes it possible to approach a spectral density with the help of a reduced number of parameters, and we will present a few standard structures of shaper filters. Next, we will extend this modeling to the case of linear processes with deterministic input, in which the noises and disturbances can be considered as additional stationary noises. Further on, we will present the state-space representation of such a model and its relation with Markovian processes.

5.2 Signals and spectral analysis

A continuous-time deterministic signal y(t), t ∈ ℝ is, by definition, a function from ℝ to ℂ:

y : ℝ → ℂ
    t ↦ y(t)

where the variable t designates time. In short, we speak of a continuous signal even if the signal considered is not continuous in the usual mathematical sense.

Chapter written by Eric LECARPENTIER


A discrete-time deterministic signal y[k], k ∈ ℤ is, by definition, a sequence of complex numbers:

y = ( y[k] )_{k∈ℤ}

In short, we often speak of a discrete signal. In general, the signals considered, be they continuous-time or discrete-time, have real values, but the generalization to complex signals made here does not entail any theoretical problem.

The spectral analysis of deterministic signals consists of decomposing them into simpler signals (for example, sine curves), in the same way as a point in space is located by its three coordinates. The most famous technique is the Fourier transform, named after the French mathematician J.B. Fourier (1768–1830), which consists of using cisoid functions as basis vectors.

The Fourier transform ŷ(f) of a continuous-time signal y(t) is a function ŷ : f ↦ ŷ(f) of a real variable with complex values, defined for any f by:

ŷ(f) = ∫_{−∞}^{+∞} y(t) e^{−j2πft} dt    [5.1]

We note from now on that if the variable t is homogeneous to a time, then the variable f is homogeneous to a frequency. We will admit that the Fourier transform is defined (i.e. the integral above converges) if the signal has finite energy. The Fourier transform does not entail any loss of information. Indeed, knowing ŷ(f), y(t) can be rebuilt by the following inverse formula; for any t:

y(t) = ∫_{−∞}^{+∞} ŷ(f) e^{j2πft} df    [5.2]

The Fourier transform is in fact the restriction of the two-sided Laplace transform y̆(s) to the imaginary axis: ŷ(f) = y̆(j2πf), with, for any s ∈ ℂ:

y̆(s) = ∫_{−∞}^{+∞} y(t) e^{−st} dt    [5.3]

Likewise, the Fourier transform (or normalized frequency transform) ŷ(ν) of a discrete-time signal y[k] is a function of the form:

ŷ : ℝ → ℂ
    ν ↦ ŷ(ν)


defined for any ν by:

ŷ(ν) = Σ_{k=−∞}^{+∞} y[k] e^{−j2πνk}    [5.4]

We will accept that the Fourier transform of a discrete-time signal is defined (i.e. the above series converges) if the signal has finite energy. It is periodic of period 1. It is in fact the restriction of the two-sided z transform y̆(z) to the unit circle: ŷ(ν) = y̆(e^{j2πν}), with, for any z ∈ ℂ:

y̆(z) = Σ_{k=−∞}^{+∞} y[k] z^{−k}    [5.5]

The Fourier transform does not entail any loss of information. Indeed, knowing ŷ(ν), we can rebuild y[k] by the following inverse formula; for any k:

y[k] = ∫_{−1/2}^{+1/2} ŷ(ν) e^{j2πνk} dν    [5.6]
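As a numerical sketch of the transform pair [5.4]/[5.6] (an illustration added here, not from the original text; numpy's FFT samples the transform on a finite frequency grid ν = n/N):

```python
import numpy as np

# numpy's fft evaluates the finite analogue of [5.4] at nu = n/N, and ifft
# applies the inverse formula [5.6] numerically; the roundtrip recovers y[k]
# exactly up to floating-point error. Signal and length are arbitrary choices.
rng = np.random.default_rng(5)
y = rng.standard_normal(16) + 1j * rng.standard_normal(16)  # complex signal

Y = np.fft.fft(y)              # forward transform, cf. [5.4]
y_back = np.fft.ifft(Y)        # inverse transform, cf. [5.6]

print(np.allclose(y, y_back))  # True
```

This illustrates the "no loss of information" claim: the sequence is fully recoverable from its transform.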

The Fourier transform (continuous-time or discrete-time) verifies the following fundamental property: it transforms the convolution into a simple product. Let y1(t) and y2(t) be two functions of a real variable; their convolution (y1 ⊗ y2)(t) is defined for any t by:

(y1 ⊗ y2)(t) = ∫_{−∞}^{+∞} y1(τ) y2(t − τ) dτ    [5.7]

Likewise, let y1[k] and y2[k] be two sequences; their convolution (y1 ⊗ y2)[k] is defined for any k by:

(y1 ⊗ y2)[k] = Σ_{m=−∞}^{+∞} y1[m] y2[k − m]    [5.8]

The convolution verifies the commutative and associative properties, and its neutral element is:
– the Dirac impulse δ(t) for functions (δ(t) = 0 if t ≠ 0, ∫_{−∞}^{+∞} δ(t) dt = 1);
– the Kronecker sequence δ[k] for sequences (δ[0] = 1, δ[k] = 0 if k ≠ 0).

In addition, the convolution of a function or sequence with a delayed neutral element delays it by the same quantity. It is easily verified that the Fourier transform of the convolution is the product of the transforms:

(y1 ⊗ y2)ˆ = ŷ1 · ŷ2    [5.9]
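The convolution theorem can be checked numerically for finite sequences (a sketch added here, not from the text; the two sequences are arbitrary, and zero-padding makes the circular DFT convolution coincide with the linear one):

```python
import numpy as np

# Check the convolution theorem [5.8]/[5.9] for finite sequences: the DFT of
# the (zero-padded) convolution equals the product of the DFTs.
y1 = np.array([1.0, 2.0, 3.0])
y2 = np.array([0.5, -1.0, 0.25, 4.0])

conv = np.convolve(y1, y2)             # direct convolution sum [5.8]
L = len(y1) + len(y2) - 1              # zero-pad so circular == linear
via_fft = np.fft.ifft(np.fft.fft(y1, L) * np.fft.fft(y2, L)).real

print(np.allclose(conv, via_fft))      # True
```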


On the other hand, the Fourier transform preserves energy (Parseval's theorem). Indeed, the energy of a continuous-time signal y(t) or of a discrete-time signal y[k] can be calculated by integrating the squared modulus of its Fourier transform ŷ(f) or of its normalized frequency transform ŷ(ν):
– continuous-time signals: ∫_{−∞}^{+∞} |y(t)|² dt = ∫_{−∞}^{+∞} |ŷ(f)|² df;
– discrete-time signals: Σ_{k=−∞}^{+∞} |y[k]|² = ∫_{−1/2}^{+1/2} |ŷ(ν)|² dν.

The function or sequence |ŷ|² is called the energy spectral density of the signal y, because its integral (or its sum) returns the energy of the signal y.
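Parseval's theorem for discrete-time signals can be illustrated numerically (a sketch added here, not from the text; note that numpy's unnormalized DFT satisfies Σ|Y|² = N Σ|y|²):

```python
import numpy as np

# Discrete-time Parseval check: the energy computed in the time domain equals
# the energy computed from the DFT samples, once the DFT's 1/N normalization
# is accounted for. Signal and length are arbitrary illustrative choices.
rng = np.random.default_rng(0)
N = 256
y = rng.standard_normal(N)

energy_time = np.sum(np.abs(y) ** 2)
Y = np.fft.fft(y)                      # samples of the transform at nu = n/N
energy_freq = np.sum(np.abs(Y) ** 2) / N

print(abs(energy_time - energy_freq))  # numerically ~0
```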

The Fourier transform, defined only for finite energy signals, can be extended to periodic or impulse signals (with the help of the mathematical theory of distributions). We give a few examples below.

EXAMPLE 5.1 (DIRAC IMPULSE). The transform of the Dirac impulse is the unit function:

δ̂(f) = 1    [5.10]

EXAMPLE 5.2 (UNIT CONSTANT). It is not of finite energy, but admits a Fourier transform in the sense of distribution theory, which is a Dirac impulse:

1̂(f) = δ(f)    [5.11]

EXAMPLE 5.3 (CONTINUOUS-TIME CISOID). We have the following transformation:

y(t) = e^{j2πf0t}  →  ŷ(f) = δ(f − f0)    [5.12]

Therefore, the Fourier transform of the cisoid of frequency f0 is an impulse centered at f0. By using the linearity of the Fourier transform, we easily obtain the Fourier transform of a real sine curve, irrespective of its initial phase; in particular:

y(t) = cos(2πf0t)  →  ŷ(f) = (1/2) ( δ(f − f0) + δ(f + f0) )    [5.13]

y(t) = sin(2πf0t)  →  ŷ(f) = (−j/2) ( δ(f − f0) − δ(f + f0) )    [5.14]
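Formula [5.13] has a finite, sampled analogue that is easy to check (a sketch added here, not from the text; the DFT of a cosine whose frequency falls exactly on a bin concentrates on the two bins corresponding to ±f0):

```python
import numpy as np

# Sampled illustration of [5.13]: the DFT of cos(2*pi*f0*k) is non-zero only
# at the bins for +f0 and -f0 (mod N), each with weight 1/2 after dividing
# by N. f0 is chosen on a bin so there is no spectral leakage; parameters
# are arbitrary illustrative choices.
N = 256
f0 = 10 / 256                    # exactly bin 10
k = np.arange(N)
y = np.cos(2 * np.pi * f0 * k)

Y = np.fft.fft(y) / N            # normalized DFT coefficients
peaks = np.flatnonzero(np.abs(Y) > 1e-6)
print(peaks)                     # [ 10 246] i.e. +f0 and -f0 (mod N)
```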

EXAMPLE 5.4 (KRONECKER SEQUENCE). We immediately obtain:

δ̂(ν) = 1    [5.15]


EXAMPLE 5.5 (UNIT SEQUENCE). The Fourier transform of the constant sequence 1_ℤ[k] is the frequency impulse comb Ξ1:

1̂_ℤ(ν) = Ξ1(ν) = Σ_{k=−∞}^{+∞} δ(ν − k)    [5.16]

EXAMPLE 5.6 (DISCRETE-TIME CISOID). We have the following transform:

y[k] = e^{j2πν0k}  →  ŷ(ν) = Ξ1(ν − ν0)    [5.17]

Thus, the Fourier transform of the cisoid of frequency ν0 is a frequency comb centered at ν0.

Very often, the spectral analysis of deterministic signals is reduced to visualizing the energy spectral density, but numerous physical phenomena come along with disturbing phenomena, called “noises”; for example, mechanical systems generate vibratory or acoustic signals which are not periodic and have infinite energy.

The mathematical characterization of such signals is particularly well formalized in the case of stationary and ergodic random signals:
– random: this means that, in the same experimental conditions, two different experiments generate two different signals. The mathematical treatment can thus only be probabilistic, the observed signal being considered as the realization of a random variable;
– stationary: the statistical characteristics are independent of the time origin;
– ergodic: all statistical information is contained in a single realization of infinite duration.

In any case, the complete characterization of such signals is expressed with the help of the joint probability law of the values taken by the signal at different instants, irrespective of these instants and their number. For example, for a Gaussian random signal, this joint law is Gauss's probability law. For a white (or independent) random signal, this joint density is equal to the product of the marginals (to clear up a common confusion, note that these two notions are not equivalent: a Gaussian signal can be white or not, and a white signal can be Gaussian or not). In practice, second order statistical analysis deals only with the first and second order moments, i.e. the mean and the autocorrelation function.

A discrete-time random signal y[k], k ∈ ℤ is called stationary in the broad sense if its mean m_y and its autocorrelation function r_yy[κ], defined by:

m_y = E( y[k] )
r_yy[κ] = E( ( y[k] − m_y )* ( y[k+κ] − m_y ) )    ∀κ ∈ ℤ    [5.18]


are independent of the index k, i.e. independent of the time origin. σ² = r_yy[0] is the variance of the signal considered; r_yy[κ]/σ² is the correlation coefficient between the signal at instant k and the signal at instant k + κ. It is traditional to remain limited to the mean and the autocorrelation function in order to characterize a stationary random signal, even though this characterization, referred to as second order, is very incomplete (it is sufficient only for Gaussian signals).

In practice, there is only one realization y[k], k ∈ ℤ of a random signal y[k], for which we can define its time mean ⟨y[k]⟩:

⟨y[k]⟩ = lim_{N→∞} 1/(2N+1) Σ_{k=−N}^{N} y[k]    [5.19]

The random signal y[k] is called ergodic for the mean if the mean m_y is equal to the time mean of any realization y[k] of this random signal:

E( y[k] ) = ⟨y[k]⟩    (ergodicity for the mean)    [5.20]

In what follows, we will suppose that the random signal y[k] is ergodic for the mean and, to simplify, of zero mean.

The random signal y[k] is called ergodic for the autocorrelation if the autocorrelation function r_yy[κ] is equal to the time mean ⟨y*[k] y[k+κ]⟩ calculated from any realization y[k] of this random signal:

E( y*[k] y[k+κ] ) = ⟨y*[k] y[k+κ]⟩    ∀κ ∈ ℤ    (ergodicity for the autocorrelation)    [5.21]

this time mean being defined for any κ by:

⟨y*[k] y[k+κ]⟩ = lim_{N→∞} 1/(2N+1) Σ_{k=−N}^{N} y*[k] y[k+κ]    [5.22]

The simplest example of a stationary random signal ergodic for the autocorrelation is the cisoid a e^{j(2πν0k+φ)}, k ∈ ℤ, with initial phase φ uniformly distributed between 0 and 2π, whose autocorrelation function is a² e^{j2πν0κ}, κ ∈ ℤ. However, ergodicity is lost if the amplitude is also random. In practice, ergodicity can only rarely be rigorously verified. In general, it is a hypothesis, necessary in order to obtain the second order statistical characteristics of the random signal considered from a single realization.
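Under the ergodic hypothesis, time averages over one long realization stand in for ensemble averages. The following sketch (added here, not from the text; the white-noise model, seed, and record length are illustrative assumptions) estimates the mean and autocorrelation of formulas [5.19] and [5.22] from a single realization:

```python
import numpy as np

# One long realization of white Gaussian noise of variance q: the time mean
# approximates m_y = 0, the lag-0 time average approximates r_yy[0] = q, and
# a non-zero lag approximates r_yy[5] = 0 (whiteness).
rng = np.random.default_rng(1)
N = 200_000
q = 2.0
y = rng.normal(0.0, np.sqrt(q), N)

time_mean = y.mean()                   # estimate of m_y, cf. [5.19]
r0 = np.mean(y * y)                    # estimate of r_yy[0], cf. [5.22]
r5 = np.mean(y[:-5] * y[5:])           # estimate of r_yy[5]

print(time_mean, r0, r5)               # ~0, ~2, ~0
```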


Under the ergodic hypothesis, the variance σ_y² of the signal considered is equal to the power ⟨|y[k]|²⟩ of any realization y:

σ_y² = ⟨|y[k]|²⟩ = lim_{N→∞} 1/(2N+1) Σ_{k=−N}^{N} |y[k]|²    [5.23]

i.e. the energy of the signal y multiplied by the truncation window 1_{−N,N} (equal to 1 on the interval {−N, …, N} and zero otherwise), divided by the length of this interval, when N → +∞. With the help of Parseval's theorem, we obtain:

σ_y² = lim_{N→∞} 1/(2N+1) ∫_{−1/2}^{+1/2} |(y 1_{−N,N})ˆ(ν)|² dν
     = ∫_{−1/2}^{+1/2} lim_{N→∞} 1/(2N+1) |(y 1_{−N,N})ˆ(ν)|² dν    [5.24]

Hence, through formula [5.24], we have decomposed the power of the signal along the frequency axis, with the help of the function ν ↦ lim_{N→∞} 1/(2N+1) |(y 1_{−N,N})ˆ(ν)|². In numerous works, the power spectral density (or power spectrum, or spectrum) of a stationary random signal is defined by this function. However, in spite of the ergodic hypothesis, we can show that this function depends on the realization considered. We will define here the power spectral density (or power spectrum) S_yy as the mean of this function:

S_yy(ν) = lim_{N→∞} E[ 1/(2N+1) |(y 1_{−N,N})ˆ(ν)|² ]    [5.25]
        = lim_{N→∞} E[ 1/(2N+1) | Σ_{k=−N}^{N} y[k] e^{−j2πνk} |² ]    [5.26]

Hence, we have two characterizations of a stationary random signal in the broad sense, ergodic for the autocorrelation. The Wiener-Khintchine theorem makes it possible to show the equivalence of these two characterizations. Under the hypothesis that the sequence (κ r_yy[κ]) is absolutely summable, i.e.:

Σ_{κ=−∞}^{+∞} |κ r_yy[κ]| < ∞    [5.27]

then the power spectral density is the Fourier transform of the autocorrelation function and the two characterizations defined above coincide:

S_yy(ν) = r̂_yy(ν)    [5.28]
        = Σ_{κ=−∞}^{+∞} r_yy[κ] e^{−j2πνκ}    [5.29]


Indeed, by developing expression [5.26], we obtain:

S_yy(ν) = lim_{N→∞} 1/(2N+1) E[ Σ_{n=−N}^{N} Σ_{k=−N}^{N} y[n] y*[k] e^{−j2πν(n−k)} ]

        = lim_{N→∞} 1/(2N+1) Σ_{n=−N}^{N} Σ_{k=−N}^{N} r_yy[n−k] e^{−j2πν(n−k)}

        = lim_{N→∞} 1/(2N+1) Σ_{κ=−2N}^{2N} r_yy[κ] e^{−j2πνκ} × card{ (n,k) | κ = n−k, |n| ≤ N, |k| ≤ N }

where the cardinal equals 2N+1−|κ|, so that:

        = lim_{N→∞} Σ_{κ=−2N}^{2N} ( 1 − |κ|/(2N+1) ) r_yy[κ] e^{−j2πνκ}

        = r̂_yy(ν) − lim_{N→∞} 1/(2N+1) Σ_{κ=−2N}^{2N} |κ| r_yy[κ] e^{−j2πνκ}

Under hypothesis [5.27], the second term above vanishes and we obtain formula [5.29].
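The Wiener-Khintchine equivalence can be checked numerically for a signal whose autocorrelation is known in closed form (a sketch added here, not from the text; the MA(1) model, its parameter a, the record length, and the number of trials are illustrative assumptions):

```python
import numpy as np

# For the MA(1) signal y[k] = e[k] + a*e[k-1] with e unit-variance white
# noise, the autocorrelation is r[0] = 1 + a^2, r[±1] = a, 0 elsewhere, so
# formula [5.29] gives S_yy(nu) = (1 + a^2) + 2*a*cos(2*pi*nu).
# We compare this with the averaged periodogram definition [5.25]/[5.26].
rng = np.random.default_rng(2)
a, N, trials = 0.5, 1024, 400

nu = np.fft.fftfreq(N)                 # frequency grid matching fft ordering
S_theory = (1 + a**2) + 2 * a * np.cos(2 * np.pi * nu)

S_est = np.zeros(N)
for _ in range(trials):                # approximate E{ (1/N) |DFT(y)|^2 }
    e = rng.standard_normal(N + 1)
    y = e[1:] + a * e[:-1]
    S_est += np.abs(np.fft.fft(y)) ** 2 / N
S_est /= trials

print(np.mean(np.abs(S_est - S_theory)))  # small
```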

These considerations can be briefly reiterated for continuous-time signals. A continuous-time random signal y(t), t ∈ ℝ is called stationary in the broad sense if its mean m_y and its autocorrelation function r_yy(τ), defined by:

m_y = E( y(t) )
r_yy(τ) = E( ( y(t) − m_y )* ( y(t+τ) − m_y ) )    ∀τ ∈ ℝ    [5.30]

are independent of time t.

For a realization y(t), t ∈ ℝ of a random signal y(t), the time mean ⟨y(t)⟩ is defined by:

⟨y(t)⟩ = lim_{T→∞} 1/(2T) ∫_{−T}^{T} y(t) dt    [5.31]

The ergodicity for the mean is written:

E( y(t) ) = ⟨y(t)⟩    [5.32]

In what follows, we will suppose that the random signal y(t) is ergodic for the mean and, to simplify, of zero mean.


The random signal y(t) is ergodic for the autocorrelation if:

E( y*(t) y(t+τ) ) = ⟨y*(t) y(t+τ)⟩    ∀τ ∈ ℝ    [5.33]

this time mean being defined for any τ by:

⟨y*(t) y(t+τ)⟩ = lim_{T→∞} 1/(2T) ∫_{−T}^{T} y*(t) y(t+τ) dt    [5.34]

The power spectral density S_yy is expressed by:

S_yy(f) = lim_{T→∞} E[ 1/(2T) |(y 1_{−T,T})ˆ(f)|² ]    [5.35]
        = lim_{T→∞} E[ 1/(2T) | ∫_{−T}^{T} y(t) e^{−j2πft} dt |² ]    [5.36]

If the function (τ r_yy(τ)) is absolutely integrable, i.e.:

∫_{−∞}^{+∞} |τ r_yy(τ)| dτ < ∞    [5.37]

then the power spectral density is the Fourier transform of the autocorrelation function:

S_yy(f) = r̂_yy(f)    [5.38]
        = ∫_{−∞}^{+∞} r_yy(τ) e^{−j2πfτ} dτ    [5.39]

The power spectral density is thus a way to characterize the spectral content of a stationary random signal. For a white signal, the autocorrelation function is expressed, with q > 0, by:

r_yy(τ) = q δ(τ)    [5.40]

Through the Fourier transform, we see immediately that such a signal has a constant power spectral density, equal to q.

Under the ergodic hypothesis, for discrete-time signals, the power spectral density can easily be estimated with the help of the periodogram; given a recording of N points y[0], …, y[N−1], and based on expression [5.26], the periodogram is written:

I_yy(ν) = 1/N |(y 1_{0,N−1})ˆ(ν)|²    [5.41]
        = 1/N | Σ_{k=0}^{N−1} y[k] e^{−j2πνk} |²    [5.42]


where 1_{0,N−1} is the rectangular window equal to 1 on the interval {0, …, N−1} and zero otherwise. With regard to the initial definition of the power spectral density, we have lost the mathematical expectation operator as well as the passage to the limit. This estimator is not consistent, and several variants have been proposed: Bartlett's periodograms, modified periodograms, Welch's periodograms, the correlogram, etc. The major drawback of the periodogram, and even more so of its variants, is its poor resolution, i.e. its capability to separate the spectral components coming from sine curves of close frequencies. More recently, methods based on signal modeling have been proposed, which enable better resolution performances than those of the periodogram.
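The inconsistency of the raw periodogram, and the variance reduction obtained by Bartlett's averaging, can be illustrated numerically (a sketch added here, not from the text; the white-noise model and segment sizes are illustrative assumptions):

```python
import numpy as np

# For white noise of variance q, the true spectrum is flat: S(nu) = q.
# The raw periodogram [5.41] fluctuates around q with a variance that does
# not decrease as N grows; averaging periodograms of non-overlapping segments
# (Bartlett's method) reduces that variance.
rng = np.random.default_rng(3)
q, N = 1.0, 4096
y = rng.normal(0.0, np.sqrt(q), N)

# Raw periodogram over the whole record: roughly unbiased, not consistent
I_raw = np.abs(np.fft.fft(y)) ** 2 / N

# Bartlett: average the periodograms of 64 segments of length 64
M = 64
segs = y.reshape(-1, M)
I_bartlett = np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2 / M, axis=0)

print(I_raw.std(), I_bartlett.std())   # Bartlett fluctuates far less around q
```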

5.3 Generator processes and ARMA modeling

Let us take a stable linear process with impulse response h, excited by a stationary random signal e, with output y:

y = h ⊗ e    [5.43]

Hence, we directly obtain that the signal y is stationary and that its autocorrelation function is expressed by:

r_yy = h ⊗ h*⁻ ⊗ r_ee    [5.44]

where h*⁻ represents the conjugated and time-reversed impulse response (h*⁻(t) = (h(−t))*). Through the Fourier transform, the power spectral density of y is expressed by:

S_yy = |ĥ|² S_ee    [5.45]

In particular, if e is a white noise of spectrum q, then:

S_yy = q |ĥ|²    [5.46]

Conversely, given a stationary random signal y with a power spectral density S_yy, if there exist an impulse response h and a positive real number q such that we can write formula [5.46], we say that this system is a generating process (or a shaper filter) for y. Everything takes place as if we could consider the signal y as the output of a linear process with impulse response h excited by a white noise of spectrum q.

This modeling depends, however, on the whole impulse response h of the shaper filter. In order to obtain a model with a finite number of parameters, only one solution is known to date: the system of impulse response h has a rational transfer function. Consequently, we are limited to the signals whose power spectral density is a rational fraction in j2πf for continuous time and in e^{j2πν} for discrete time. Nevertheless, the theory of rational approximation indicates that we can always get as close as we wish to a given function with a rational function of sufficient degree.
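A minimal rational shaper filter can be simulated to check formula [5.46] numerically (a sketch added here, not from the text; the AR(1) recursion, its pole a, the record lengths, and the number of trials are illustrative assumptions):

```python
import numpy as np

# Shaper-filter sketch: white noise e of spectrum q through the AR(1)
# recursion y[k] = a*y[k-1] + e[k], i.e. H(z) = 1/(1 - a z^-1), should yield
# S_yy(nu) = q / |1 - a e^{-j 2 pi nu}|^2, cf. [5.46].
rng = np.random.default_rng(4)
a, q, N, trials = 0.6, 1.0, 1024, 400

nu = np.fft.fftfreq(N)
S_theory = q / np.abs(1 - a * np.exp(-2j * np.pi * nu)) ** 2

S_est = np.zeros(N)
for _ in range(trials):
    e = rng.normal(0.0, np.sqrt(q), N + 200)
    y = np.empty_like(e)
    y[0] = e[0]
    for k in range(1, len(e)):          # shaper-filter recursion
        y[k] = a * y[k - 1] + e[k]
    y = y[200:]                         # drop the start-up transient
    S_est += np.abs(np.fft.fft(y)) ** 2 / N
S_est /= trials

print(np.mean(np.abs(S_est - S_theory)))  # small
```

Only two parameters (a and q) shape the whole estimated spectrum, which is the point of the modeling approach described above.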
