
Discrete Time Systems Part 4 pptx


DOCUMENT INFORMATION

Title: Discrete Time Systems Part 4
Author: Sinopoli et al.
Institution: University of Systems and Control
Field: Control Systems
Type: Lecture presentation
Year: 2000
City: Unknown
Pages: 30
Size: 429.2 KB

Contents



Fig. 1. Upper and lower bounds for the error covariance.

Also, since Φ1(P) = P, we have that

Now notice that the bounds (29) and (57) differ only in the position of the step functions H(·).

Hence, the result follows from (69) and (70).

3.4 Example

Consider the system below, which is taken from Sinopoli et al. (2004),

A = [ 1.25  0
      1     1.1 ],    C = [ 1  1 ],

with λ = 0.5. In Figure 1 we show the upper bound F̄_T(x) and the lower bound F_T(x), for T = 3, T = 5 and T = 8. We also show an estimate of the true CDF F(x), obtained from a Monte Carlo simulation using 10,000 runs. Notice that, as T increases, the bounds become tighter, and for T = 8 it is hard to distinguish between the lower and the upper bounds.
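A Monte Carlo estimate of this kind can be sketched as follows. The matrices A and C and the rate λ are those of the example; the noise covariances Q and R, the run length and the number of runs are assumptions made here for illustration (the text does not specify them at this point), and run_once and F_hat are hypothetical helper names.

```python
import numpy as np

# Monte Carlo estimate of the CDF of Tr(P), the trace of the Kalman-filter
# error covariance under Bernoulli measurement losses with arrival rate lam.
# Q, R, the run length and the number of runs are illustrative assumptions.
rng = np.random.default_rng(0)
A = np.array([[1.25, 0.0], [1.0, 1.1]])
C = np.array([[1.0, 1.0]])
Q = np.eye(2)          # assumed process-noise covariance
R = np.array([[1.0]])  # assumed measurement-noise covariance
lam = 0.5              # probability that a measurement arrives

def run_once(steps=150):
    """One realization of the random Riccati recursion; returns Tr(P)."""
    P = np.eye(2)
    for _ in range(steps):
        P = A @ P @ A.T + Q                  # time update
        if rng.random() < lam:               # measurement received
            K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
            P = P - K @ C @ P                # measurement update
    return np.trace(P)

samples = np.sort([run_once() for _ in range(400)])

def F_hat(x):
    """Empirical CDF of Tr(P) evaluated at x."""
    return np.searchsorted(samples, x, side="right") / len(samples)

print(F_hat(samples[-1]))  # 1.0 by construction
```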


4 Bounds for the expected error covariance

In this section we derive upper and lower bounds for the trace G of the asymptotic EEC, i.e.,

4.1 Lower bounds for the EEC

In view of (76), a lower bound for G can be obtained from an upper bound of F(x). One such bound is F̄_T(x), derived in Section 3.1. A limitation of F̄_T(x) is that F̄_T(x) = 1 for all x > φ(P, S_0^T); hence it is too conservative for large values of x. To get around this, we introduce an alternative upper bound for F(x), denoted by F̃(x).

Our strategy for doing so is to group the sequences S_m^T, m = 0, 1, …, 2^T − 1, according to the number of consecutive lost measurements at their end. Then, from each group, we only consider the worst sequence, i.e., the one producing the smallest EEC trace.

Notice that the sequences S_m^T with m < 2^(T−z), 0 ≤ z ≤ T, are those having the last z elements equal to zero. Then, from (25) and (26), it follows that

∫_0^∞ (1 − F(x)) dx < ∞, i.e., whenever the asymptotic EEC is finite.


We can now use both F̄_T(x) and F̃(x) to obtain a lower bound G_T for G as follows:

G_T = ∫_0^∞ [1 − min{F̄_T(x), F̃(x)}] dx.    (80)

The next lemma states the regions in which each bound is less conservative.
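Numerically, a bound of the form (80) amounts to integrating 1 − min of two CDF upper bounds over a truncated grid. The sketch below uses two placeholder CDFs, not the bounds derived in the text, purely to illustrate the quadrature:

```python
import numpy as np

# Sketch of evaluating a bound of the form (80) by quadrature:
#   G_T = integral over [0, inf) of (1 - min{F1(x), F2(x)}) dx,
# with the infinite range truncated. F1 and F2 are placeholder CDF upper
# bounds (NOT the bounds derived in the text), chosen for easy inspection.
def F1(x):
    # saturates at 1 for x >= 5, mimicking a bound that is exact early on
    return np.minimum(1.0, x / 5.0)

def F2(x):
    # exponential-type bound, tighter for large x
    return 1.0 - np.exp(-x / 4.0)

x = np.linspace(0.0, 200.0, 20001)            # truncation of [0, inf)
integrand = 1.0 - np.minimum(F1(x), F2(x))
# trapezoidal rule, written out to avoid version-specific numpy helpers
G_T = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(x) / 2.0))
print(G_T)  # finite because both bounds approach 1
```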

Lemma 4.1. The following properties hold true:

Now, notice that the summation in (86) includes, but is not limited to, all the sequences finishing with K zeroes. Hence

Proving (82) is trivial, since F̄_T(x) = 1 for x > Z(0, T).

We can now present a sequence of lower bounds G_T, T ∈ ℕ, for the EEC G. We do so in the next theorem.


Theorem 4.1. Let E_j, 0 < j ≤ 2^T, denote the set of numbers Tr


Now, F_T(x) can be written as

and (90) follows from (100), (102) and (105).

To show (95), we use Lemma 7.1 (in the Appendix), with b = (1 − λ) and X = APAᵀ + Q − P.

The result then follows immediately.

4.2 Upper bounds for the EEC

Using an argument similar to the one in the previous section, we will use lower bounds of the CDF to derive a family of upper bounds G_{T,N}, T ≤ N, N ∈ ℕ, of G. Notice that, in general, there exists δ > 0 such that 1 − F_T(x) > δ for all x. Hence, using F_T(x) in (76) would result in G being infinite valued. To avoid this, we will present two alternative lower bounds for F(x).

The lower bounds F_{T,N}(x) and F_N(x) are stated in the following two lemmas.


For each T ≤ n ≤ N, let

Then, for all x > p*(N),


where, for each n ≤ N and all φ(P*(N), S_{n−1}

where Z_m^N, m = 0, …, L − 1, denotes the set of sequences of length N with fewer than N_0 ones, with Z_0^N = S_0^N, but otherwise arranged in any arbitrary order (i.e.,

In this case, for each n, we obtain a lower bound for F_n(x) by considering in (57) that Tr(φ(∞, S_n

where S^N denotes the sequence required to obtain P*(N).

To conclude the proof we need to compute the probability p_{N,n} of receiving a sequence of length N + n that does not contain a subsequence of length N with at least N_0 ones. This is done in Lemma 7.2 (in the Appendix), where it is shown that


We do so in the next theorem.

Theorem 4.2. Let T and N be two given positive integers with N_0 ≤ T ≤ N and such that, for all 0 ≤ m < 2^N, |S_m^N| ≥ N_0 ⇒ φ(∞, S_m^N) < ∞. Let J be the number of sequences such that O(S_m^T) has full column rank. Let E_0 = 0 and E_j, 0 < j ≤ J, denote the set of numbers Tr{φ(∞, S_m^T)}, 0 < m ≤ J, arranged in ascending order (i.e., E_j = Tr

Proof: First, notice that F_T(x) is defined for all x > 0, whereas F_{T,N}(x) is defined on the range p*(T) < x ≤ p*(N) and F_N(x) on p*(N) < x. Now, for all x ≤ p*(T), we have


the probability of receiving a sequence of length T + n with at least N_0 ones. Hence, F_{T,N}(x) is greater than F_T(x) on the range p*(T) < x ≤ p*(N). Also, F_N(x) measures the probability of receiving a sequence of length N with a subsequence of length T with N_0 or more ones. Hence, it is greater than F_T(x) on p*(N) < x. Therefore, we have that

We will use each of these three bounds to compute each term in (129). To obtain (130), notice that F_T(x) can be written as


Fig. 2. Comparison of the bounds of the cumulative distribution function (curves shown for λ = 0.8 and λ = 0.5).

where maxsv(M) denotes the maximum singular value of M. Then, to obtain (136), we use the result in Lemma 7.1 (in the Appendix) with b = maxsv(M) and X = AP*(N)Aᵀ + Q − P*(N).

5.2 Bounds on the EEC

In this section we compare our proposed EEC bounds with those in Sinopoli et al. (2004) and Rohr et al. (2010).


Bound                          Lower       Upper
From Sinopoli et al. (2004)    4.57        11.96
From Rohr et al. (2010)        -           10.53

Table 1. Comparison of EEC bounds using a scalar system.

Bound                          Lower        Upper
From Sinopoli et al. (2004)    2.15×10^4    2.53×10^5
From Rohr et al. (2010)        -            1.5×10^5
Proposed                       9.54×10^4    3.73×10^5

Table 2. Comparison of EEC bounds using a system with a single unstable eigenvalue.

5.2.1 Scalar example

Consider the scalar system (153) with λ = 0.5. For the lower bound (90) we use T = 14, and for the upper bound (129) we use T = N = 14. Notice that in the scalar case N_0 = 1; that is, whenever a measurement is received, an upper bound for the EC is promptly available, and using N > T will not give any advantage. Also, for the upper bound in Rohr et al. (2010), we use a window length of 14 sampling times (notice that no lower bound for the EEC is proposed in Rohr et al. (2010)).

In Table 1 we compare the bounds resulting from the three works. We see that although the three upper bounds are roughly similar, our proposed lower bound is significantly tighter than the one resulting from Sinopoli et al. (2004).

5.2.2 Example with single unstable eigenvalue

Consider the following system, taken from Sinopoli et al. (2004), where λ = 0.5 and


7 Appendix

Lemma 7.1. Let 0 < b < 1 be a scalar, X ∈ ℝ^{n×n} be a positive-semidefinite matrix and A ∈ ℝ^{n×n} be diagonalizable, i.e., it can be written as


where the last equality follows from the property

Let δ_{i,j} denote the (i, j)-th entry of bD⃗D⃗ᵀ, and pow(Y, j) denote the matrix obtained after raising each entry of Y to the j-th power. Then, if every entry of bD⃗D⃗ᵀ has magnitude smaller than one, we have that

D⃗D⃗ᵀ is formed by the products of the eigenvalues of A, so the series will converge if and only if

and the result follows since Tr{Dn{Y}} = Tr{Y}.
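The convergence mechanism behind Lemma 7.1 can be checked numerically: for diagonalizable A with eigenvalues d_i and a scalar b, the series of weighted traces converges when every product b·d_i·d_j has magnitude below one. The matrices and the value of b below are illustrative choices, not taken from the lemma:

```python
import numpy as np

# Numerical check of the convergence condition used in Lemma 7.1:
#   S = sum over j of  b**j * Tr(A^j X (A^j)^T)
# converges whenever |b * d_i * d_j| < 1 for every eigenvalue pair (d_i, d_j).
# A, X and b are illustrative values.
A = np.array([[1.25, 0.0], [1.0, 1.1]])
X = np.eye(2)          # positive semidefinite, as the lemma requires
b = 0.5

d = np.linalg.eigvals(A)
assert all(abs(b * di * dj) < 1.0 for di in d for dj in d)  # condition holds

S, Aj = 0.0, np.eye(2)
for j in range(400):   # partial sums settle geometrically under the condition
    S += (b ** j) * np.trace(Aj @ X @ Aj.T)
    Aj = Aj @ A
print(S)
```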

Lemma 7.2. Let u, z, N_0, L and M be as defined in Lemma 4.3. The probability p_{N,n} of receiving a sequence of length N + n that does not contain a subsequence of length N with at least N_0 ones is given by

Proof:

Let Z_m^N, m = 0, …, L − 1, and U₊(Z_m^T, γ) be as defined in Lemma 4.3. Also, for each N, let V_t^N = {γ_t, γ_{t−1}, …, γ_{t−N+1}}. Let W_t be the probability distribution of the sequences Z^N, i.e.,

W_t = [ P(V_t^N = Z_0^N) · · · P(V_t^N = Z_{L−1}^N) ].


Hence, for a given n, the distribution W_n of V_n^N is given by

Finally, to obtain the probability p_{N,n}, we add all the entries of the vector W_n by pre-multiplying W_n by u. Doing so, and substituting (177) in (175), we obtain
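As a sanity check on Lemma 7.2, p_{N,n} can also be computed by brute-force enumeration for small N and n; p_brute below is a hypothetical helper, not the lemma's closed-form expression:

```python
from itertools import product

# Brute-force version of the probability in Lemma 7.2: the chance that a
# Bernoulli(lam) arrival sequence of length N + n contains NO window of
# length N with at least N0 ones.
def p_brute(N, n, N0, lam):
    total = 0.0
    for seq in product((0, 1), repeat=N + n):
        # reject sequences in which some length-N window has >= N0 ones
        if any(sum(seq[i:i + N]) >= N0 for i in range(n + 1)):
            continue
        k = sum(seq)  # number of received measurements (ones)
        total += (lam ** k) * ((1.0 - lam) ** (N + n - k))
    return total

# With N0 = N, only an uninterrupted run of N ones triggers rejection:
print(p_brute(N=3, n=2, N0=3, lam=0.5))  # -> 0.75
```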

8 References

Anderson, B. & Moore, J. (1979). Optimal Filtering, Prentice-Hall, Englewood Cliffs, NJ.

Ben-Israel, A. & Greville, T. N. E. (2003). Generalized Inverses: Theory and Applications, CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, 15, second edn, Springer-Verlag, New York.

Dana, A., Gupta, V., Hespanha, J., Hassibi, B. & Murray, R. (2007). Estimation over communication networks: Performance bounds and achievability results, American Control Conference, 2007. ACC '07, pp. 3450–3455.

Faridani, H. M. (1986). Performance of Kalman filter with missing measurements, Automatica 22(1): 117–120.

Liu, X. & Goldsmith, A. (2004). Kalman filtering with partial observation losses, IEEE Conference on Decision and Control.

Rohr, E., Marelli, D. & Fu, M. (2010). Statistical properties of the error covariance in a Kalman filter with random measurement losses, Decision and Control, 2010. CDC 2010. 49th IEEE Conference on.

Schenato, L. (2008). Optimal estimation in networked control systems subject to random delay and packet drop, IEEE Transactions on Automatic Control 53(5): 1311–1317.

Schenato, L., Sinopoli, B., Franceschetti, M., Poolla, K. & Sastry, S. (2007). Foundations of control and estimation over lossy networks, Proc. IEEE 95(1): 163.

Shi, L., Epstein, M. & Murray, R. (2010). Kalman filtering over a packet-dropping network: A probabilistic perspective, Automatic Control, IEEE Transactions on 55(3): 594–604.

Sinopoli, B., Schenato, L., Franceschetti, M., Poolla, K., Jordan, M. & Sastry, S. (2004). Kalman filtering with intermittent observations, IEEE Transactions on Automatic Control 49(9): 1453–1464.


Rodrigo Souto, João Ishihara and Geovany Borges

of the sensors, but also modifies the project itself to fix all of them (engineering hours, more material to buy, a heavier product). b) Some states are impossible to measure physically because they are a mathematically useful representation of the system, such as the attitude parameterization of an aircraft.

Suppose we have access to all the states of a system. What can we do with them? As the states contain all the necessary information about the system, one can use them to:

a) Implement a state-feedback controller (Simon, 2006). At almost the same time that state estimation theory was being developed, optimal control was growing in popularity, mainly because its theory can guarantee closed-loop stability margins. However, the Linear-Quadratic-Gaussian (LQG) control problem (the most fundamental optimal control problem) requires knowledge of the states of the model, which motivated the development of state estimation for those states that could not be measured in the plant to be controlled.

b) Process monitoring. In this case, knowledge of the state allows the monitoring of the system. This is very useful for navigation systems, where it is necessary to know the position and the velocity of a vehicle, for instance, an aircraft or a submarine. In a radar system, this is its very purpose: to keep tracking the position and velocity of all targets of interest in a given area. For an autonomous robot, it is very important to know its current position relative to an inertial reference in order to keep it moving toward its destination. For a doctor, it is important to monitor the concentration of a given medicine in a patient.

Kalman Filtering for Discrete Time Uncertain Systems



c) Process optimization. Once it is possible to monitor the system, the natural consequence is to make it work better. An actual application is the next generation of smart planes. Based on the current position and velocity of a set of aircraft, it is possible for a computer to better schedule arrivals, departures and routes in order to minimize flight time, which also considers the waiting time for a slot in an airport to land the aircraft. Reducing the flight time means less fuel consumed, reducing the operating costs for the company and the environmental cost for the planet. Another application is based on knowledge of the positions and velocities of cell phones in a network, allowing an improved handover process (the process of transferring an ongoing call or data session from one channel connected to the core network to another), implying a better connection for the user and smarter network resource utilization.

d) Fault detection and prognostics. This is another immediate consequence of process monitoring. For example, suppose we are monitoring the current of an electrical actuator. In case this current drops below a certain threshold, we can conclude that the actuator is no longer working properly. We have just detected a failure, and a warning message can be sent automatically. In military applications, this is especially important when a system can be damaged by external causes. Based on the knowledge of a failure occurrence, it is possible to switch the controller in order to try to overcome the failures. For instance, some aircraft prototypes were still able to fly and land after losing 60% of a wing. Thinking about the actuator system, but in a prognostics approach, we can monitor its current and note that it is dropping over time. Usually, this is not an abrupt process: it takes some time for the current to drop below its acceptable threshold. Based on the decreasing rate of the current, one is able to estimate when the actuator will stop working, and then replace it before it fails. This information is very important when we think about the safety of a system, preventing accidents in cars, aircraft and other critical systems.

e) Reduce noise effects. Even in cases where the states are measured directly, state estimation schemes can be useful to reduce noise effects (Anderson & Moore, 1979). For example, a telecommunications engineer may want to know the frequency and the amplitude of a sine wave received at an antenna. The environment and the hardware used may introduce perturbations that disturb the sine wave, making the required measurements imprecise. A state-space model of a sine wave and the estimation of its state can improve the precision of the amplitude and frequency estimates.
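The sine-wave example can be sketched with a standard Kalman filter built on an oscillator (rotation) state-space model; the frequency, noise levels and initial conditions below are illustrative assumptions, not a design from the text:

```python
import numpy as np

# Filtering a noisy sine wave with a Kalman filter on an oscillator
# (rotation) state-space model. omega, q, r and the initial conditions
# are illustrative assumptions.
rng = np.random.default_rng(1)
omega = 0.2                                   # assumed known frequency (rad/sample)
F = np.array([[np.cos(omega), -np.sin(omega)],
              [np.sin(omega),  np.cos(omega)]])
H = np.array([[1.0, 0.0]])                    # we observe one component
q, r = 1e-5, 0.25                             # process / measurement noise variances

x = np.array([2.0, 0.0])                      # true state: sine wave of amplitude 2
xh, P = np.zeros(2), np.eye(2)                # filter estimate and covariance
err_raw, err_filt = [], []
for _ in range(500):
    x = F @ x                                 # noiseless oscillator (q models mismatch)
    y = H @ x + rng.normal(0.0, np.sqrt(r))   # noisy scalar measurement
    xh = F @ xh                               # predict
    P = F @ P @ F.T + q * np.eye(2)
    K = P @ H.T / (H @ P @ H.T + r)           # update
    xh = xh + (K * (y - H @ xh)).ravel()
    P = (np.eye(2) - K @ H) @ P
    err_raw.append(((y - H @ x) ** 2).item())
    err_filt.append(((H @ xh - H @ x) ** 2).item())

print(np.mean(err_raw), np.mean(err_filt))    # filtering should reduce the error
```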

When the states are not directly available, the above applications can still be performed by using estimates of the states. The most famous algorithm for state estimation is the Kalman filter (Kalman, 1960). It was initially developed in the 1960s and achieved wide success in aerospace applications. Due to its generic formulation, the same estimation theory could be applied to other practical fields, such as meteorology and economics, achieving the same success as in the aerospace industry. At present, the Kalman filter is the most popular algorithm for estimating the states of a system. Despite its great success, there are some situations where the Kalman filter does not achieve good performance (Ghaoui & Clafiore, 2001). Advances in technology have led to smaller and more sensitive components. The degradation of these components became more frequent and more noticeable. Also, the number and complexity of these components kept growing in systems, making it more and more difficult to model them all. Even when possible, it became unfeasible to simulate the system with this amount of detail. For these reasons (unmodeled dynamics and more pronounced parameter changes), it became hard to provide the accurate models assumed by the Kalman filter. Also, in many applications, it is not easy to obtain the required statistical information about


noises and perturbations affecting the system. A new theory capable of dealing with plant uncertainties was required, leading to robust extensions of the Kalman filter. This new theory is referred to as robust estimation (Ghaoui & Clafiore, 2001).

This chapter presents a robust prediction algorithm used to perform the state estimation of discrete-time systems. The first part of the chapter describes how to model an uncertain system. In the following, the chapter presents the new robust technique used when dealing with linear inaccurate models. A numerical example is given to illustrate the advantages of using a robust estimator when dealing with an uncertain system.

2 State Estimation

Estimation theory was developed to solve the following problem: given the values of an observed signal through time,1 also known as the measured signal, we require to estimate (smooth, correct or predict) the values of another signal that cannot be accessed directly or is corrupted by noise or external perturbations.

The first step is to establish a relationship (or a model) between the measured and the estimated signals. Then we shall define the criterion we will use to evaluate the model. In this sense, it is important to choose a criterion that is compatible with the model. The estimation problem is shown briefly in Figure 1.

Fig. 1. Block diagram representing the estimation problem.

In Figure 1, we wish to estimate the signal x. The signal y contains the measured values from the plant. The signal w indicates an unknown input signal, and it is usually represented by a stochastic process with known statistical properties. The estimation problem is about designing an algorithm that is able to provide x̂, using the measurements y, that is close to x for several realizations of y. This same problem can also be classically formulated as a minimization of the estimation-error variance. In the figure, the error is represented by e and can be defined as x̂ minus x. When we are dealing with a robust approach, our concern is to minimize an upper bound for the error variance, as will be explained later in this chapter.
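In the simplest scalar instance of this problem, y = x + w with zero-mean independent x and w, the variance-minimizing linear estimate is x̂ = a·y with a = var(x)/(var(x) + var(w)); the numbers below are illustrative:

```python
import numpy as np

# Scalar instance of the estimation problem: y = x + w, estimate x by the
# linear rule xh = a*y. For zero-mean, independent x and w, the gain
# minimizing E[(xh - x)^2] is a = var_x / (var_x + var_w). Values are
# illustrative, not taken from the chapter.
rng = np.random.default_rng(2)
var_x, var_w = 4.0, 1.0
x = rng.normal(0.0, np.sqrt(var_x), 100_000)
w = rng.normal(0.0, np.sqrt(var_w), 100_000)
y = x + w

a_opt = var_x / (var_x + var_w)              # = 0.8
e_opt = np.mean((a_opt * y - x) ** 2)        # ~ var_x*var_w/(var_x+var_w) = 0.8
e_naive = np.mean((y - x) ** 2)              # ~ var_w = 1.0
print(a_opt, e_opt, e_naive)
```

Using the raw measurement y directly leaves an error variance of var(w); shrinking toward the prior mean trades a little bias for a strictly smaller mean-squared error.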

The following notation will be used throughout this chapter: ℝ^n represents the n-dimensional Euclidean space, ℝ^{n×m} is the set of real n × m matrices, E{·} denotes the expectation operator, cov{·} stands for the covariance, Z† represents the pseudo-inverse of the matrix Z, and diag{·} stands for a block-diagonal matrix.

3 Uncertain system modeling

The following discrete-time model is a representation of a linear uncertain plant:

1 Signal here is used to denote a data vector or a data set.

Date posted: 20/06/2014, 01:20
