Essentials of Control Techniques and Theory




That settles the denominator. How do we arrange the zeros, though? Our output now needs to contain derivatives of x1.

We can only get away with this form y = Cx if there are more poles than zeros. If they are equal in number, we must first perform one stage of "long division" of the numerator polynomial by the denominator to split off a Du term proportional to the input. The remainder of the numerator will then be of a lower order than the denominator and so will fit into the pattern. If there are more zeros than poles, give up.

Now whether it is a simulation or a filter, the system can be generated in terms of a few lines of software. If we were meticulous, we could find a lot of unanswered questions about the stability of the simulation, about the quality of the approximation, and about the choice of step length. For now let us turn our attention to the computational techniques of convolution.

Q 14.7.1

We wish to synthesize the filter s²/(s² + 2s + 1) in software. Set up the state equations and write a brief segment of program.


Time, Frequency, and Convolution

Although the coming sections might seem something of a mathematician's playground, they are extremely useful for getting an understanding of the underlying principles of functions of time and the way that dynamic systems affect them. In fact, many of the issues of convolution can be much more easily explored in terms of discrete time and sampled systems, but first we will take the more traditional approach of infinite impulses and vanishingly small increments of time.

15.1 Delays and the Unit Impulse

We have already looked into the function of time that has a Laplace transform which is just 1. This is the "delta function" δ(t), an impulse occurring at t = 0. The unit step has Laplace transform 1/s, and so we can think of the delta function as its derivative. Before we go on, we must derive an important property of the Laplace transform, the "shift theorem."

If we have a function of time, x(t), and if we pass this signal through a time delay τ, then the output is the same signal that was input τ seconds earlier, x(t − τ). The bilateral Laplace transform of this output will be

∫_−∞^∞ x(t − τ) e^(−st) dt


If we write T for t − τ, then dt will equal dT, and the integral becomes e^(−sτ) X(s), where X(s) is the Laplace transform of x(t). If we delay a signal by time τ, its Laplace transform is simply multiplied by e^(−sτ).
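Written out in full, the substitution proceeds as follows, using the same symbols as above:

```latex
\int_{-\infty}^{\infty} x(t-\tau)\,e^{-st}\,dt
  = \int_{-\infty}^{\infty} x(T)\,e^{-s(T+\tau)}\,dT
  = e^{-s\tau}\int_{-\infty}^{\infty} x(T)\,e^{-sT}\,dT
  = e^{-s\tau}X(s)
```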

Since we are considering the bilateral Laplace transform, integrated over all time both positive and negative, we could consider time advances as well. Clearly all signals have to be very small for large negative t, otherwise their contribution to the integral would be enormous when multiplied by the exponential.

We can immediately start to put the shift theorem to use. It tells us that the transform of δ(t − τ), the unit impulse shifted to occur at t = τ, is e^(−sτ). We could of course have worked this out from first principles.

We can regard the delta function as a "sampler." When we multiply it by any function of time x(t) and integrate over all time, we will just get the contribution from the product at the time the delta function is non-zero:

∫_−∞^∞ δ(t − τ) x(t) dt = x(τ) (15.1)

In the case of the transform of the shifted impulse, we can think of the answer as sampling e^(−st) at the value t = τ.

Let us briefly indulge in a little philosophy about the "meaning" of functions. We could think of x(t) as a simple number, the result of substituting some value of t into a formula for computing x.

We can instead expand our vision of the function to consider the whole graph of x(t) plotted against time, as in a step response. In control theory we have to take this broader view, regarding inputs and outputs as time "histories," not just as simple values. This is illustrated in Figure 15.1.


Now we can view Equation 15.1 as a sampling process, allowing us to pick one single value of the function out of the time history. But just let us exchange the symbols t and τ in the equation and suddenly the perspective changes. The substitution has no absolute mathematical effect, but it expresses our time history x(t) as the sum of an infinite number of impulses of size x(τ)dτ:

x(t) = ∫_−∞^∞ x(τ) δ(t − τ) dτ (15.2)

This result may not look important, but it opens up a whole new way of looking at the response of a system to an applied input.

15.2 The Convolution Integral

Let us first define the situation. We have a system described by a transfer function G(s), with input function u(t) and output y(t), as in Figure 15.2.

If we apply a unit impulse to the system at t = 0, the output will be g(t), where the Laplace transform of g(t) is G(s). This is portrayed in Figure 15.3.

How do we go about deducing the output function for any general u(t)?

Perhaps the most fundamental property of a linear system is the "principle of superposition." If we know the output response to a given input function and also to another function, then if we add the two input functions together and apply them, the output will be the sum of the two corresponding output responses.

In mathematical terms, if u1(t) produces the response y1(t) and u2(t) produces response y2(t), then an input of u1(t) + u2(t) will give an output y1(t) + y2(t).

Now an input of the impulse δ(t) to G(s) provokes an output g(t). An impulse applied at time t = τ, u(τ)δ(t − τ), gives the delayed response u(τ)g(t − τ). If we apply several impulses in succession, the output will be the sum of the individual responses, as shown in Figure 15.4.


Figure 15.3: For a unit impulse input, G(s) gives an output g(t).

Notice that as the time parameter in the u-bracket increases, the time in the g-bracket reduces. At some later time t, the effect of the earliest impulse will have had longest to decay. The latest impulse has an effect that is still fresh.

Now we see the significance of Equation 15.2. It allows us to express the input signal u(t) as an infinite train of impulses u(τ)δτ δ(t − τ). So to calculate the output,


we add all the responses to these impulses. As we let δτ tend to zero, this becomes

y(t) = ∫_−∞^∞ u(τ) g(t − τ) dτ (15.3)

This is the convolution integral.

We do not really need to integrate over all infinite time. If the input does not start until t = 0, the lower limit can be zero. If the system is "causal," meaning that it cannot start to respond to an input before the input happens, then the upper limit can be t.

15.3 Finite Impulse Response (FIR) Filters

We see that instead of simulating a system to generate a filter's response, we could set up an impulse response time function and produce the same result by convolution. With infinite integrals lurking around the corner, this might not seem such a wise way to proceed!

In looking at digital simulation, we have already cut corners by taking a finite step-length and accepting the resulting approximation. A digital filter must similarly accept limitations in its performance in exchange for simplification. Instead of an infinite train of impulses, u(t) is now viewed as a train of samples at finite intervals. The infinitesimal u(τ)dτ has become u(nT)T. Instead of impulses, we have numbers to input into a computational process.

The impulse response function g(t) is similarly broken down into a train of sample values, using the same sampling interval. Now the infinitesimal operations of integration are coarsened into the summation

y(nT) = Σ_(r=−∞)^∞ u(rT) g((n − r)T) T (15.4)

The infinite limits still do not look very attractive. For a causal system, however, we need go no higher than r = n, while if the first signal was applied at r = 0 then this can be the lower limit.

Summing from r = 0 to n is a definite improvement, but it means that we have to sum an increasing number of terms as time advances. Can we do any better?

Most filters will have a response which eventually decays after the initial impulse is applied. The one-second lag 1/(s + 1) has an initial response of unity, gives an output of around 0.37 after one second, but after 10 seconds the output has decayed to less than 0.00005. There is a point where g(t) can safely be ignored, where indeed it is


less than the resolution of the computation process. Instead of regarding the impulse response as a function of infinite duration, we can cut it short to become a Finite Impulse Response. Why the capital letters? Because this is the basis of the FIR filter.

We can rearrange Equation 15.4 by writing n − r instead of r and vice versa. Now if we can say that g(rT) is zero for all r < 0, and also for all r > N, the summation limits become

y(nT) = Σ_(r=0)^N g(rT) u((n − r)T) T

The output now depends on the input u at the time in question, and on its past N values. These values are multiplied by appropriate fixed coefficients and summed to form the output, then moved along one place to admit the next input sample value. The method lends itself ideally to a hardware implementation with a "bucket-brigade" delay line, as shown in Figure 15.5.

The following software suggestion can be made much more efficient in time and storage; it concentrates on showing the method. Assume that the impulse response has already been set up in the array g[i], where i ranges from 0 to N. We provide another array u[i] of the same length to hold past values.

//Move up the input samples to make room for a new one
for(i=N;i>0;i--){
u[i]=u[i-1];
}
u[0]=input;

//Now compute the output
y=0;
for(i=0;i<N+1;i++){
y=y+u[i]*g[i];
}
//y now holds the output value

This still seems more trouble than the simulation method; what are the advantages? Firstly, there is no question of the process becoming unstable. Extremely sharp filters can be made for frequency selection or rejection which would have poles very close to the stability limit. Since the impulse response is defined exactly, stability is assured.

Next, the rules of causality can be bent a little. Of course the output cannot precede the input, but by considering the output signal to be delayed, the impulse response can have a "leading tail." Take the non-causal smoothing filter discussed earlier, for example. This has a bell-shaped impulse response, symmetrical about t = 0, as shown in Figure 15.6. By delaying this function, all the important terms can be contained in a positive range of t. There are many applications, such as offline sound and picture filtering, where the added delay is no embarrassment.

15.4 Correlation

This is a good place to mention that close relative of convolution, correlation. You will have noticed that convolution combines two functions of time by running the time parameter forward in the one and backward in the other. In correlation the parameters run in the same direction.

Figure 15.6: By delaying a non-causal response, it can be made causal. (A non-causal response is impossible in real time; a time shift makes a causal approximation.)


The use of correlation is to compare two time functions and find how one is influenced by the other. The classic example of correlation is found in the satellite global positioning system (GPS). The satellite transmits a pseudo-random binary sequence (PRBS) which is picked up by the receiver. Here it is correlated against the various signals that are known to have been transmitted, so that the receiver is able to determine both whether the signal is present and by how much it has been delayed on its journey.

So how does it do it?

The correlation integral is

Φ_xy(τ) = ∫ x(t) y(t + τ) dt (15.5)

giving a function of the time-shift between the two functions. The coarse acquisition signal used in GPS for public, rather than military, purposes is a PRBS sequence of length 1023. It can be regarded as a waveform of levels of value +1 or −1. If we multiply the sequence by itself and integrate over one cycle, the answer is obviously 1023. What makes the sequence "pseudo random" is that if we multiply its values by the sequence shifted by any number of pulses, the integral gives just −1.

Figure 15.7: Illustration of a pseudo-random sequence.

Figure 15.8: Autocorrelation function of a PRBS. The value is 1023 at t = 0.

Figure 15.7 shows all 1023 pulses of such a sequence as lines of either black or white. The autocorrelation function, the correlation of the sequence with itself, is as shown in Figure 15.8, but here the horizontal scale has been expanded to show


just a few pulse-widths of shift. In fact this autocorrelation function repeats every 1023 shifts; it is cyclic.

When we correlate the transmitted signal against the signal at the receiver, we will have a similar result to Figure 15.8, but shifted by the time it takes for the transmitted signal to reach the receiver. In that way the distance from the satellite can be estimated. (The receiver can reconstruct the transmitted signal because it "knows" the time and the formula that generates the sequence.)

There are in fact 32 different "songs" of this length that the satellites can transmit, and the cross-correlation of any pair is essentially zero. Thus, the receiver can identify distances to any of the satellites that are in view. From four or more such distances, using an almanac and ephemeris to calculate the exact positions of the satellites, the receiver can solve for x, y, z, and t, refining its own clock's value to a nanosecond.

So what is a treatise on GPS doing in a control textbook?

There are some valuable principles to be seen. In this case, the "transfer function" of the path from satellite to receiver is a time-delay. The cross-correlation enables this to be measured accurately. What happens when we calculate the cross-correlation between the input and the output of any control system? We have

Φ_uy(τ) = ∫ u(t) y(t + τ) dt = ∫ u(t) ∫ g(σ) u(t + τ − σ) dσ dt


Now if we reverse the order of integration, we have

Φ_uy(τ) = ∫ g(σ) ∫ u(t) u(t + τ − σ) dt dσ = ∫ g(σ) Φ_uu(τ − σ) dσ

The cross-correlation function is the function of time that would be output from the system if the input's autocorrelation function were applied instead of the input. This is illustrated in Figure 15.9.

Provided we have an input function that has enough bandwidth, so that its autocorrelation function is "sharp enough," we can deduce the transfer function by cross-correlation. This can enable adaptive controllers to adapt.

More to the point, we can add a PRBS to any linear input and so "tickle" the system rather than hitting it with a single impulse. In the satellite system, the


PRBS is clocked at one megahertz and repeats after a millisecond. But it is easy to construct longer sequences; one such sequence only repeats after a month! So in a multi-input system, orthogonal sequences can be applied to the various inputs to identify the individual transfer functions as impulse responses.

15.5 Conclusion

We have seen that the state description of a system is bound tightly to its representation as an array of transfer functions. We have requirements which appear to conflict. On the one hand, we seek a formalism which will allow as much of the work as possible to be undertaken by computer. On the other, we wish to retain an insight into the nature of the system and its problems, so that we can use intelligence in devising a solution.

Do we learn more from the time domain, thinking in terms of matrix equations and step and impulse responses, or does the transfer function tell us more, with its possibilities of frequency response and root locus?

In the next chapter, we will start to tear the state equations apart to see what the system is made of. Maybe we can get the best of both worlds.
