
Applied Structural and Mechanical Vibrations: Theory, Methods and Measuring Instrumentation

12 Stochastic processes and random vibrations

12.1 Introduction

A large number of phenomena in science and engineering either defy any attempt at a deterministic description or lend themselves to such a description only at the price of enormous difficulties. Examples are not hard to find: the height of waves in a rough sea, the noise from a jet engine, the electrical noise of an electronic component or, if we remain within the field of vibrations, the vibrations of an aeroplane flying through a patch of atmospheric turbulence, the vibrations of a car travelling on a rough road or the response of a building to earthquake and wind loads. Without doubt, it is legitimate to ask whether any of the above or similar phenomena are intrinsically deterministic and we are simply incapable of a deterministic description because of their complexity; but the fact remains that we have no way to predict an exact value at a future instant of time, no matter how many records we take or observations we make. However, it is also a fact that repeated observations of these and similar phenomena show that they exhibit certain patterns and regularities that fit into a probabilistic description. This suggests a different and more pragmatic approach, which has turned out to be successful in a large number of practical situations: we simply leave open the question about the intrinsic nature of these phenomena and, for all practical purposes, tackle the problem by defining them as 'random' and adopting a description in terms of probabilistic statements and statistical averages.

In other words, we base the decision of whether a certain phenomenon is deterministic or random on the ability to reproduce the data by controlled experiments. If repeated runs of the same experiment produce identical results (within the limits of experimental error), then we regard the phenomenon in question as deterministic; if, on the other hand, different runs of the same experiment do not produce identical results but show patterns and regularities which allow a satisfactory description (and satisfactory predictions) in terms of probability laws, then we speak of a random phenomenon.


12.2 The concept of stochastic process

First of all, a note on terminology: although some authors distinguish between the terms, in what follows we will adopt the common usage in which 'stochastic' is synonymous with 'random' and the two terms can be used interchangeably.

Now, if we refer back to the preceding chapter, it can be noted that the concepts of event and random variable can be conveniently considered as forming two levels of a hierarchy in order of increasing complexity: the information about an event is given by a single number (its probability), whereas the information about a random variable requires knowledge of the probability of many events. If we take a step further up in the hierarchy we run into the concept of stochastic, or random, process.

Broadly speaking, any process that develops in time or space and can be modelled according to probabilistic laws is a stochastic or random process. More specifically, a stochastic process $X(z)$ consists of a family of random variables indexed by a parameter $z$ which, in turn, can be either discrete or continuous and varies within an index set $Z$, i.e. $z \in Z$. In the former case one speaks of a discrete parameter process, while in the latter case we speak of a continuous parameter process.

For our purposes, the interest will be focused on random processes $X(t)$ that develop in time, so that the index parameter will be time $t$ varying within a time interval $T$; such processes can also be generally indicated with the symbol $\{X(t),\ t \in T\}$. In general, the fact that the parameter $t$ varies continuously does not imply that the set of possible values of $X(t)$ is continuous, although this is often the case. A typical example of a random time record with zero mean (velocity in this specific example, although this is not important for our present purposes) looks like Fig. 12.1, which was created by using a set of software-generated random numbers.

Also note that a random process can develop in both time and space: consider for example the vibration of a tall and slender structure under the action of wind during a windstorm. The effect of turbulence will be random not only in time but also with respect to the vertical space coordinate $y$ along the structure.

The basic idea of a stochastic process is that for any given value of $t$, e.g. $t = t_0$, $X(t_0)$ is a random variable, meaning that we can consider its cumulative distribution function (cdf)

$F_X(x; t_0) = P[X(t_0) \le x]$  (12.1a)

or its probability density function (pdf)

$p_X(x; t_0) = \dfrac{\partial F_X(x; t_0)}{\partial x}$  (12.1b)

where we write $F_X(x; t_0)$ and $p_X(x; t_0)$ to point out the fact that, in general, these functions depend on the particular instant of time $t_0$. Note, however,


that if we adhere strictly to the notation of the preceding chapter we should write $F_{X(t_0)}(x)$ and $p_{X(t_0)}(x)$. By the same token, we can have information on the behaviour of the process $X(t)$ at two particular instants of time $t_1$ and $t_2$ by considering the joint cdf

$F_X(x_1, x_2; t_1, t_2) = P[X(t_1) \le x_1,\ X(t_2) \le x_2]$  (12.2a)

and the corresponding joint pdf

$p_X(x_1, x_2; t_1, t_2) = \dfrac{\partial^2 F_X(x_1, x_2; t_1, t_2)}{\partial x_1\, \partial x_2}$  (12.2b)

or, for any finite number of instants $t_1, t_2, \ldots, t_n$, we can consider the function

$F_X(x_1, \ldots, x_n; t_1, \ldots, t_n) = P[X(t_1) \le x_1, \ldots, X(t_n) \le x_n]$  (12.3)

and its corresponding joint pdf, so that by increasing the value of $n$ we can describe the probabilistic structure of the random process in finer and finer detail. Note that knowledge of the joint distribution function (12.3) gives information for any $m < n$ (e.g. the function of eq. (12.2a), where $m = 2$), since these distribution functions are simply its marginal distribution functions. Similarly, we may extend the concepts above by considering more than one stochastic process, say $X(t)$ and $Y(t')$, and follow the discussion of Chapter 11 to define their joint pdfs for various possible sets of the index parameters $t$ and $t'$.

Fig. 12.1 Random (velocity) time record.

Now, since we can characterize a random variable $X$ by means of its moments and since, for a fixed instant of time, the stochastic process $X(t)$ defines a random variable, we can calculate its first moment (mean value) as

$\mu_X(t) = E[X(t)] = \displaystyle\int_{-\infty}^{\infty} x\, p_X(x; t)\, dx$  (12.4)

or its $m$th-order moment

$E[X^m(t)] = \displaystyle\int_{-\infty}^{\infty} x^m\, p_X(x; t)\, dx$  (12.5)

and the central moments as in eq. (11.36). In the general case, all these quantities now obviously depend on $t$ because they may vary for different instants of time; in other words, if we fix for example two instants of time $t_1$ and $t_2$, we can consider the autocorrelation function

$R_{XX}(t_1, t_2) = E[X(t_1) X(t_2)]$  (12.6)

and the autocovariance function

$K_{XX}(t_1, t_2) = E\{[X(t_1) - \mu_X(t_1)][X(t_2) - \mu_X(t_2)]\}$  (12.7)

which are related by

$K_{XX}(t_1, t_2) = R_{XX}(t_1, t_2) - \mu_X(t_1)\, \mu_X(t_2)$  (12.8)

Particular cases of eqs (12.6) and (12.7) occur when $t_1 = t_2 = t$, so that we obtain, respectively, the mean squared value and the variance

$E[X^2(t)] = R_{XX}(t, t), \qquad \sigma_X^2(t) = K_{XX}(t, t)$  (12.9)

When two processes are studied simultaneously, the counterpart of eq. (12.6) is the cross-correlation function

$R_{XY}(t_1, t_2) = E[X(t_1) Y(t_2)]$  (12.10)


which is related to the cross-covariance

$K_{XY}(t_1, t_2) = E\{[X(t_1) - \mu_X(t_1)][Y(t_2) - \mu_Y(t_2)]\}$  (12.11)

by the equation

$K_{XY}(t_1, t_2) = R_{XY}(t_1, t_2) - \mu_X(t_1)\, \mu_Y(t_2)$  (12.12)

Consider now the idea of statistical sampling. With a random variable $X$ we usually perform a series of independent observations and collect a number of samples, i.e. a set of possible values of $X$. Each observation $x_j$ is a number and, by collecting a sufficient number of observations, we can get an idea of the underlying probability distribution of the random variable $X$. In the case of a stochastic process $X(t)$, each observation $x_j(t)$ is a time record similar to the one shown in Fig. 12.1, and our experiment consists of collecting a sufficient number of time records which can be used to estimate probabilities, expected values etc. A collection of a number—say $n$—of time records $x_1(t), x_2(t), \ldots, x_n(t)$ is the engineer's representation of the process and is called an ensemble. A typical ensemble of four time histories is shown in Fig. 12.2.

As an example, consider the vibrations of an aeroplane in a region of frequent atmospheric turbulence, given the fact that the same plane flies through that region many times a year. During a specific flight we measure a vibration time history $x_1(t)$; during a second flight in similar conditions we measure $x_2(t)$, and so on, where each record extends over the duration of the crossing (for instance, about 15 min if this is the time the plane takes to fly through that region). The statistical population for this random process is the infinite set of time histories that, in principle, could be recorded in similar conditions.

Fig. 12.2 Ensemble of four time histories for the stochastic process X(t).

We are thus led to a two-dimensional interpretation of the stochastic process, which we can indicate, whenever convenient, with the symbol $X(j, t)$: for a specific value of $t$, say $t = t_0$, $X(j, t_0)$ is a random variable and $x_1(t_0), x_2(t_0), \ldots$ are particular realizations, i.e. observed values, of $X(j, t_0)$; on the other hand, for a fixed $j$, say $j = j_0$, $X(j_0, t)$ is simply a function of time, i.e. a sample function $x_{j_0}(t)$.

With the data at our disposal, the quantities of eqs (12.4)–(12.9) must be understood as ensemble expected values, that is, expected values calculated across the ensemble. However, it is not always possible to collect an ensemble of time records, and the question could be asked whether we can gain some information on a random process just by recording a sufficiently long time history and by calculating temporal expected values, i.e. expected values calculated along the sample function at our disposal. An example of such a quantity is the temporal mean $\langle x \rangle$ obtained from a time history $x(t)$ as

$\langle x \rangle = \displaystyle\lim_{T \to \infty} \dfrac{1}{T} \int_0^T x(t)\, dt$  (12.13)

The answer to the question is that this is indeed possible in a number of cases and depends on some specific assumptions that can often (reasonably) be made about the characteristics of many stochastic processes of interest.
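As a quick numerical illustration of the two kinds of average, the following Python sketch builds a synthetic ensemble and compares the ensemble mean of eq. (12.4) at a fixed instant with the temporal mean of eq. (12.13) along one record. The way the process is constructed and all parameter values are our own choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensemble: n_rec realizations of a zero-mean process, built here
# (purely for illustration) by smoothing white noise with a decaying kernel.
n_rec, n_t, dt = 200, 4096, 0.01
kernel = np.exp(-np.arange(0.0, 1.0, dt) / 0.1)
x = np.array([np.convolve(rng.standard_normal(n_t), kernel, mode="same")
              for _ in range(n_rec)])

# Ensemble average at a fixed instant t0: mean across realizations (eq 12.4)
t0 = 2000
ensemble_mean = x[:, t0].mean()

# Temporal average along a single realization (finite-T version of eq 12.13)
temporal_mean = x[0, :].mean()

# For an ergodic zero-mean process both estimates should be close to zero,
# agreeing with each other within statistical error.
print(ensemble_mean, temporal_mean)
```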

12.2.1 Stationary and ergodic processes

Strictly speaking, a stationary process is a process whose probabilistic structure does not change with time or, in more mathematical terms, is invariant under an arbitrary shift of the time axis. Stated this way, it is evident that no physically realizable process is stationary, because all processes must begin and end at some time. Nevertheless, the concept is very useful for sufficiently long time records, where by the expression 'sufficiently long' we mean here that the process has a duration which is long compared to the period of its lowest spectral components.

There are many kinds of stationarity, depending on what aspect of the process remains unchanged under a shift of the time axis. For example, a process is said to be mean-value stationary if

$E[X(t + r)] = E[X(t)]$  (12.14a)

for any value of the shift $r$. Equation (12.14a) implies that the mean value is the same for all times, so that for a mean-value stationary process

$\mu_X(t) = \mu_X$  (12.14b)


Similarly, a process is second-moment stationary if

$E[X(t_1 + r) X(t_2 + r)] = E[X(t_1) X(t_2)]$  (12.15a)

for any value of the shift $r$. For eq. (12.15a) to be true, it is not difficult to see that the autocorrelation and covariance functions must not depend on the individual values of $t_1$ and $t_2$ but only on their difference $\tau = t_2 - t_1$, so that we can simply write

$R_{XX}(t_1, t_2) = R_{XX}(\tau), \qquad K_{XX}(t_1, t_2) = K_{XX}(\tau)$  (12.15b)

By the same token, for two stochastic processes $X(t)$ and $Y(t)$ we can speak of joint second-moment stationarity when $E[X(t_1 + r) Y(t_2 + r)] = E[X(t_1) Y(t_2)]$ for any $r$. At this point it is easy to extend these concepts and define, for a given process, covariant stationarity and $m$th-moment stationarity or, for two processes, joint covariant stationarity, etc. It must be noted that stationarity always reduces the number of necessary time arguments by one: i.e. in the general case the mean depends on one time argument, while for a stationary process it does not depend on time (zero time arguments); the autocorrelation depends on two time arguments in the general case and only on one time argument ($\tau$) in the stationary case, and so on.

Other forms of stationarity are defined in terms of probability distributions rather than in terms of moments. A process is first-order stationary if

$F_X(x; t + r) = F_X(x; t)$  (12.16)

for all values of $x$, $t$ and $r$; second-order stationary if

$F_X(x_1, x_2; t_1 + r, t_2 + r) = F_X(x_1, x_2; t_1, t_2)$  (12.17)

for all values of $x_1, x_2, t_1, t_2$ and $r$. Similarly, the concept can be extended to $m$th-order stationarity, although the most important types in practical situations are first- and second-order stationarity.

In general, a main distinction is made between strictly stationary processes and weakly stationary processes, strict stationarity meaning that the process is $m$th-order stationary for any value of $m$, and weak stationarity meaning that the process is mean-value and covariant stationary (note that some authors define weak stationarity as stationarity up to order 2).

If we consider the interrelationships among the various types of stationarity, for our purposes it suffices to say that $m$th-order stationarity implies all stationarities of lower order, while the same does not apply to $m$th-moment stationarity. Furthermore, $m$th-order stationarity also implies $m$th-moment stationarity so that, necessarily, an $m$th-order stationary process is also stationary up to the $m$th moment. Note, however, that it is not always possible to establish a hierarchy among different types of stationarity: for example, it is not possible to say which is stronger between second-moment stationarity and first-order stationarity, because they simply correspond to different behaviours. First-order stationarity certainly implies that all moments $E[X^m(t)]$—which are calculated by using $p_X(x, t)$—are invariant under a time shift, but it gives us no information about the relationship between $X(t_1)$ and $X(t_2)$ when $t_1 \ne t_2$.

Before turning to the issue of ergodicity, it is interesting to investigate some properties of the functions we have introduced above. The first property is the symmetry of the autocorrelation and autocovariance functions, i.e.

$R_{XX}(t_1, t_2) = R_{XX}(t_2, t_1), \qquad K_{XX}(t_1, t_2) = K_{XX}(t_2, t_1)$  (12.18)

which, whenever the appropriate stationarity applies, become

$R_{XX}(\tau) = R_{XX}(-\tau), \qquad K_{XX}(\tau) = K_{XX}(-\tau)$  (12.19)

meaning that autocorrelation and autocovariance are even functions of $\tau$.

Also, if we note that

(12.22a)

so that, as it often happens in vibrations, if the process is stationary with zeromean, then When from eq (12.22a) it follows that

(12.22b)Two things should be noted at this point: first (Chapter 11), Gaussianrandom processes are completely characterized by the first two moments,

Trang 9

i.e by the mean value and the autocovariance or autocorrelation function.

In particular, for a stationary Gaussian process all the information we need

is the constant µ X and one of the two functions R XX ( ) or K XX ( ) Second,

for most random processes the autocovariance function rapidly decays to zero with increasing values of $|\tau|$ (i.e. $K_{XX}(\tau) \to 0$ as $|\tau| \to \infty$) because, as can be intuitively expected, at increasingly larger values of $\tau$ there is an increasing loss of correlation between the values of $X(t)$ and $X(t + \tau)$. Broadly speaking, the rapidity with which $K_{XX}(\tau)$ drops to zero as $|\tau|$ is increased can be interpreted as a measure of the 'degree of randomness' of the process.

If two weakly stationary processes are also cross-covariant stationary, it can easily be shown that the cross-correlation functions $R_{XY}(\tau)$ and $R_{YX}(\tau)$ are neither odd nor even; in general $R_{XY}(\tau) \ne R_{XY}(-\tau)$ but, owing to the property of invariance under a time shift, they satisfy the relations

$R_{XY}(\tau) = R_{YX}(-\tau), \qquad K_{XY}(\tau) = K_{YX}(-\tau)$  (12.23)

while eq. (12.12) becomes

$K_{XY}(\tau) = R_{XY}(\tau) - \mu_X \mu_Y$  (12.24)

The final property of cross-correlation and cross-covariance functions of stationary processes is the so-called cross-correlation inequalities, which we state without proof:

$R_{XY}^2(\tau) \le R_{XX}(0)\, R_{YY}(0), \qquad |R_{XY}(\tau)| \le \tfrac{1}{2}\left[R_{XX}(0) + R_{YY}(0)\right]$  (12.25)

(We leave the proof to the reader; the starting point is the fact that $E\{[aX(t) + Y(t + \tau)]^2\} \ge 0$, where $a$ is a real number.)
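For completeness, here is a minimal sketch of the argument hinted at above (our reconstruction of the suggested exercise):

```latex
% Starting point: for any real a,  E{ [aX(t) + Y(t+\tau)]^2 } >= 0.
% Expanding and using stationarity:
\[
  a^{2} R_{XX}(0) + 2a\,R_{XY}(\tau) + R_{YY}(0) \;\ge\; 0 .
\]
% A quadratic in a that is nonnegative for every real a must have a
% nonpositive discriminant, hence
\[
  R_{XY}^{2}(\tau) \;\le\; R_{XX}(0)\,R_{YY}(0),
\]
% which is the first of eqs (12.25). Setting a = \pm 1 instead gives
\[
  \lvert R_{XY}(\tau)\rvert \;\le\; \tfrac{1}{2}\bigl[\,R_{XX}(0) + R_{YY}(0)\bigr],
\]
% which is the second.
```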

Stated simply, a process is strictly ergodic if a single, sufficiently long time record can be assumed to be representative of the whole process. In other words, if one assumes that a sample function $x(t)$—in the course of a sufficiently long time $T$—passes through all the values accessible to it, then the process can reasonably be classified as ergodic. In fact, since $T$ is large, we can subdivide our time record into a number $n$ of long sections of time length $\Theta$, so that the behaviour of $x(t)$ in each section will be independent of its behaviour in any other section. These $n$ sections then constitute as good a representative ensemble of the statistical behaviour of $x(t)$ as any ensemble that we could possibly collect. It follows that time averages should then be equivalent to ensemble averages.

Assuming that a process is ergodic simplifies both the data acquisition phase and the analysis phase. In fact, on one hand we do not need to collect an ensemble of time histories—which is often difficult in many practical situations—and, on the other hand, the single time history at our disposal can be used to calculate all the quantities of interest by replacing ensemble averages with time averages, i.e. by averaging along the sample rather than across the number of samples that form an ensemble. Ergodicity implies stationarity and hence, depending on the process characteristic we want to consider, we can define many types of ergodicity. For example, the process $X(t)$ is ergodic in mean value if the expression

$\langle x \rangle_T = \dfrac{1}{T} \displaystyle\int_0^T x(t)\, dt$  (12.26)

where $x(t)$ is a realization of $X(t)$, tends to $E[X(t)]$ as $T \to \infty$. Mean-value stationarity is obviously implied (incidentally, note that the reverse is not necessarily true, i.e. a mean-value stationary process may or may not be mean-value ergodic, and the same applies for other types of stationarity) because the limit of (12.26) cannot depend on time and hence (eq. (12.13)) must coincide with the temporal mean $\langle x \rangle$. In practice, the assumption of ergodicity is often an educated guess rather than a solid argument, but we must always keep in mind that in real-world situations the data at our disposal are very seldom in the form of a numerous ensemble or in the form of an extremely long time history.

Stationarity, in turn—besides the fact that we can rely on engineering common sense in many cases of interest—can be checked by hypothesis testing, noting that, in general, it is seldom possible to test for more than mean-value and covariance stationarity. This can be done, for example, by subdividing our sample into shorter sections, calculating sample averages for each section and then examining how these section averages compare with each other and with the corresponding average for the whole sample. On the basis of the amount of variation that we are willing to accept from one section to another in order to accept the assumption of stationarity, the statistical procedures of hypothesis testing provide us with the appropriate means to make a decision.
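The following sketch shows the sort of section-by-section check described above. It is only a crude illustration: the record is synthetic, the decision threshold is an arbitrary choice of ours, and a real application would use a formal statistical test.

```python
import numpy as np

rng = np.random.default_rng(1)

# A stand-in for a recorded time history (synthetic, zero-mean noise here).
x = rng.standard_normal(60_000)

# Split the record into sections and compare section averages, as in the text.
n_sections = 10
sections = np.array_split(x, n_sections)
means = np.array([s.mean() for s in sections])

# Crude decision rule: flag nonstationarity in the mean if the section means
# spread by more than a few standard errors (a formal hypothesis test would
# be used in practice; the factor 4 is arbitrary).
sem = x.std() / np.sqrt(len(sections[0]))
print("section means :", np.round(means, 4))
print("stationary in mean?", np.ptp(means) < 4 * sem)
```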

For instance, in common engineering practice, the vibration from continuous traffic is considered a random stationary ergodic process, and the length of the time record depends on the statistical error we are willing to accept. If, as generally happens, we accept a bias error of 4% and a variance error of 10%, the required time record length is given by a formula in [1] involving the modal damping $\zeta$ and the natural frequency $\nu_n$ of the $n$th mode of the building. Also, as far as wind effects on structures are concerned, it should be noted that the vast majority of available results based on wind tunnel testing and/or analytical turbulence modelling are obtained under the assumption that the atmospheric flow is stationary. Hurricane flows, however, are highly nonstationary, and some efforts to study nonstationary flow effects have recently been reported (e.g. Adhikari and Yamaguchi [2]). For the interested reader, it is worth mentioning that a technique which is becoming more and more popular for the study of nonstationary processes is 'wavelet analysis', although in what follows we will be concerned with stationary processes only (wide-sense stationary processes at least, unless otherwise stated).

12.3 Spectral representation of random processes

We noted in preceding chapters that the vibration analysis of linear systems can be performed either directly in the time domain or in the frequency domain via the classical tool of the Fourier transform. The two descriptions, in principle, are equivalent, but the frequency domain is often preferred because it provides a perspective which lends itself more easily to engineering interpretation and synthesis of results. This is, indeed, the case also in the field of random vibrations.

However, if we consider a general stochastic process $X(t)$, two major difficulties arise. First, the expression

$X(\omega) = \displaystyle\int_{-\infty}^{\infty} X(t)\, e^{-i\omega t}\, dt$

defines a new stochastic process on the index set of possible $\omega$ values, meaning that if we insert under the integral sign a particular realization $x(t)$ of $X(t)$, we do not obtain a frequency representation of the process but only of one member of it. Second, if the process is stationary (i.e. it goes on forever) the Dirichlet condition

$\displaystyle\int_{-\infty}^{\infty} |x(t)|\, dt < \infty$  (12.29)

is not satisfied and the sample function $x(t)$ is not Fourier transformable.

These difficulties can be overcome by recalling the observation (Section 12.2.1) that for a large number of stationary random processes of engineering interest the autocorrelation tends to zero as the separation time $\tau$ tends to infinity (we assume, without loss of generality, processes with zero mean; when this is not the case, the following discussion applies to the covariance function). More specifically, the autocorrelation function of many processes is of the form

$R_{XX}(\tau) = e^{-\alpha|\tau|}\, f(\tau)$  (12.30)

where $\alpha$ is a positive constant and $f(\tau)$ is a well-behaved function of $\tau$. Mathematically, this means that the autocorrelation function satisfies the Dirichlet condition and hence is Fourier transformable. This leads to the definition of the (auto)spectral density

$S_{XX}(\omega) = \dfrac{1}{2\pi} \displaystyle\int_{-\infty}^{\infty} R_{XX}(\tau)\, e^{-i\omega\tau}\, d\tau$  (12.31a)

whose relationship with the spectral density expressed in terms of ordinary frequency $\nu$ (with $\omega = 2\pi\nu$) is $S_{XX}(\nu) = 2\pi S_{XX}(\omega)$, and whose units are those of $X^2$ per unit of angular frequency. Inverse Fourier transform of eq. (12.31a) yields

$R_{XX}(\tau) = \displaystyle\int_{-\infty}^{\infty} S_{XX}(\omega)\, e^{i\omega\tau}\, d\omega$  (12.31b)
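A discrete analogue of the transform pair (12.31a, b) can be checked numerically. The sketch below is our construction, with FFT-based stand-ins for the integrals: it verifies that transforming the sample autocorrelation reproduces the raw spectral estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete stand-in for the pair (12.31a, b): the FFT of the (circular,
# biased) sample autocorrelation equals the raw spectral estimate |X|^2/n.
n = 8192
x = rng.standard_normal(n)                 # one realization, zero mean

X = np.fft.fft(x)
R = np.fft.ifft(np.abs(X) ** 2).real / n   # sample autocorrelation R[k]

S_from_R = np.fft.fft(R).real              # "Fourier transform of R"
periodogram = np.abs(X) ** 2 / n           # direct spectral estimate
print(np.allclose(S_from_R, periodogram))  # True: same frequency content
```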


Before proceeding further, let us consider some properties of these spectral densities. First, the symmetry properties of the (real) autocorrelation and cross-correlation functions (see eqs (12.19) and (12.23)) lead to

$S_{XX}(-\omega) = S_{XX}(\omega), \qquad S_{XY}(-\omega) = S_{XY}^{*}(\omega) = S_{YX}(\omega)$

so that, in particular, $S_{XX}(\omega)$ is a real and even function of $\omega$. This evenness leads to an alternative form of spectral density, the one-sided spectral density, which is usually denoted $G_{XX}(\omega)$ and is defined for positive frequencies only, as

$G_{XX}(\omega) = 2 S_{XX}(\omega), \qquad 0 \le \omega < \infty$  (12.34)

The second consideration we want to make is that eq. (12.31b) for $\tau = 0$ gives

$E[X^2] = R_{XX}(0) = \displaystyle\int_{-\infty}^{\infty} S_{XX}(\omega)\, d\omega$  (12.35)

This property is often used for calculations of variance values and shows that the variance of the (zero-mean) stationary process can be obtained as the area under the autospectral density curve.

If we now proceed in our discussion, the question may arise as to whether, by Fourier transforming the correlation function, we are really considering the frequency content of the original process. The answer is yes, and the following argument will provide some insight. Consider a stationary process $X(t)$ and a realization $x(t)$ of infinite duration. Let us define the Fourier transformable truncated version of $x(t)$ as

$x_T(t) = \begin{cases} x(t), & |t| \le T/2 \\ 0, & |t| > T/2 \end{cases}$  (12.36)

so that $x_T(t) \to x(t)$ as $T \to \infty$, and we can consider the truncated version of the correlation function

$R_T(\tau) = \dfrac{1}{T} \displaystyle\int_{-\infty}^{\infty} x_T(t)\, x_T(t + \tau)\, dt$  (12.37)

Now, if we call $X_T(\omega)$ the Fourier transform of $x_T(t)$, it is not difficult to show that

$\dfrac{1}{2\pi T}\, |X_T(\omega)|^2 = \dfrac{1}{2\pi} \displaystyle\int_{-\infty}^{\infty} R_T(\tau)\, e^{-i\omega\tau}\, d\tau$  (12.38)

The desired result can now be obtained from eq. (12.38) by taking the ensemble average and passing to the limit as $T \to \infty$; under these operations the right-hand side tends to $S_{XX}(\omega)$, so that

$S_{XX}(\omega) = \displaystyle\lim_{T \to \infty} \dfrac{1}{2\pi T}\, E\!\left[|X_T(\omega)|^2\right]$  (12.39a)

At this point, one might be tempted to argue that the ensemble average should not be needed if the process is ergodic. However, this is not so: the reason lies in the fact that the truncated function $|X_T(\omega)|^2 / (2\pi T)$, which is an estimator of the true spectral density, is not a 'consistent' estimator, and its quality does not improve even for very large $T$. Hence, the version of eq. (12.39a) without the ensemble average, i.e.

$\displaystyle\lim_{T \to \infty} \dfrac{1}{2\pi T}\, |X_T(\omega)|^2$  (12.39b)

applies to deterministic signals only.

This short argument, besides confirming our point that Fourier transforming the autocorrelation function preserves the frequency content of the original stationary signal, also shows that the spectral density obtained from a single sample is not a good estimator of the desired (and unknown) $S_{XX}(\omega)$. The typical approach to avoid this sampling difficulty is to replace the raw estimate by a 'smoothed' version whose variance tends to zero as $T \to \infty$. We will not go into more detail here and refer the reader to the specific literature (e.g. Papoulis [3], Bendat and Piersol [4]).
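The inconsistency of the raw estimate, and the benefit of smoothing, can be seen in a few lines. In this sketch (ours; a Welch-type average over an ensemble of segments stands in for the 'smoothed' version) the relative scatter of a single periodogram stays near 100% however long the record, while averaging 64 segments cuts it roughly by a factor of eight.

```python
import numpy as np

rng = np.random.default_rng(3)

# The raw periodogram has ~100% relative scatter regardless of record length;
# averaging over an ensemble of segments reduces it by ~1/sqrt(n_seg).
def periodogram(x):
    return np.abs(np.fft.rfft(x)) ** 2 / len(x)

n_seg, seg_len = 64, 1024
segments = rng.standard_normal((n_seg, seg_len))   # white noise: flat S_XX

single = periodogram(segments[0])
averaged = np.mean([periodogram(s) for s in segments], axis=0)

print("single segment :", single.std() / single.mean())      # ~1
print("64-segment mean:", averaged.std() / averaged.mean())  # ~1/8
```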

12.3.1 Spectral densities: some useful results

This section gives some general results which can be particularly useful when dealing with random processes. First of all, many transformations on random processes are in the form of linear, time-invariant operators and can be mathematically represented as an operator $A$ which transforms a sample function $x(t)$ into another function $y(w)$, i.e. $y(w) = A[x(t)]$, where $w$ may be time as well (for example if $A$ is the derivative operator) or another variable. Here we give, without proof, the following results (more details will be given in subsequent sections):

• When the relevant quantities exist, the operator $A$ and the operation of ensemble averaging can be exchanged, i.e. $E\{A[X(t)]\} = A\{E[X(t)]\}$.
• A weakly (strongly) stationary random process is transformed into a weakly (strongly) stationary random process.
• The linear operator $A$ transforms a Gaussian process into a Gaussian process.

A second useful result can be obtained if we consider the meaning of the derivative process $\dot{X}(t)$, whose cross-correlation with $X(t)$ is

$R_{X\dot{X}}(\tau) = \dfrac{d R_{XX}(\tau)}{d\tau}$  (12.40a)

and also, since $R_{XX}(\tau)$ is an even function of $\tau$,

$\left.\dfrac{d R_{XX}(\tau)}{d\tau}\right|_{\tau = 0} = 0$  (12.40b)

so that eqs (12.40a and b) imply

$R_{X\dot{X}}(0) = E[X(t)\,\dot{X}(t)] = 0$  (12.40c)

and only a little thought is needed to show that $R_{X\dot{X}}(\tau)$ is an odd function of $\tau$. The result of eq. (12.40c) can also be obtained by noting that $E[X^2(t)]$ is a constant for a covariant stationary process; this implies $\frac{d}{dt} E[X^2(t)] = 2 E[X(t)\,\dot{X}(t)] = 0$. In this regard, it is worth mentioning the often exploited fact that a maximum value for $R_{XX}(\tau)$ corresponds to a zero crossing for $R_{X\dot{X}}(\tau)$, i.e. a zero crossing for the cross-correlation between the processes $X(t)$ and $\dot{X}(t)$. By a similar reasoning to the above we can show that

$R_{\dot{X}\dot{X}}(\tau) = -\dfrac{d^2 R_{XX}(\tau)}{d\tau^2}$  (12.41)

and that the second derivative of $R_{XX}(\tau)$ is an even function of $\tau$. Similarly, in the frequency domain the spectral densities of the derivative processes are $\omega^2 S_{XX}(\omega)$ and $\omega^4 S_{XX}(\omega)$, so that

$E[\dot{X}^2] = \displaystyle\int_{-\infty}^{\infty} \omega^2 S_{XX}(\omega)\, d\omega, \qquad E[\ddot{X}^2] = \displaystyle\int_{-\infty}^{\infty} \omega^4 S_{XX}(\omega)\, d\omega$  (12.46)

showing that, if $x(t)$ is a displacement time history, we can calculate the mean square velocity and acceleration from knowledge of the spectral density $S_{XX}(\omega)$.
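Eq. (12.46) can be checked numerically. The sketch below synthesizes a band-limited record, estimates its spectral density, and compares the frequency-domain integral with the time-domain mean square velocity; the construction and all values are our own choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Band-limited synthetic record built directly in the frequency domain,
# so that both sides of eq (12.46) are well defined.
n, dt = 16384, 1e-3
freqs = np.fft.fftfreq(n, d=dt)
omega = 2 * np.pi * freqs

X = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * (np.abs(freqs) < 50.0)
X[0] = 0.0                        # enforce zero mean
x = np.fft.ifft(X).real           # one realization

T = n * dt
Xc = np.fft.fft(x) * dt           # approximation of the continuous transform
S = np.abs(Xc) ** 2 / (2 * np.pi * T)   # two-sided spectral density estimate

lhs = np.mean(np.gradient(x, dt) ** 2)          # time-domain E[xdot^2]
rhs = np.sum(omega ** 2 * S) * (2 * np.pi / T)  # integral of w^2 * S_XX(w)
print(lhs, rhs)   # close; np.gradient slightly attenuates the band edge
```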


The final topic we want to consider in this section is the distinction that is usually made between narrow-band and wide-band random processes, these definitions having to do with the form of their spectral densities. Working, in a sense, backwards, we can investigate what kind of time histories and autocorrelation functions result in narrow-band and wide-band processes. Broadly speaking, a narrow-band process has a spectral density which is very small except within a narrow band of frequencies, i.e. $S_{XX}(\omega) \approx 0$ except in the neighbourhood of a frequency $\omega_0$. A typical example is given by the spectral density shown in Fig. 12.3, which is different from zero only in an interval of width $\Delta\omega$ centred at $\omega_0$ (and, by symmetry, at $-\omega_0$), where it has the constant value $S_0$.

In order to obtain the autocorrelation function we can simplify the calculations by noting that we are dealing with even functions of their arguments; then the inverse Fourier transform of $S_{XX}(\omega)$ can be written as a cosine Fourier transform and, with $\omega_1 = \omega_0 - \Delta\omega/2$ and $\omega_2 = \omega_0 + \Delta\omega/2$, we get

$R_{XX}(\tau) = 2 S_0 \displaystyle\int_{\omega_1}^{\omega_2} \cos\omega\tau\, d\omega = \dfrac{4 S_0}{\tau} \cos(\omega_0 \tau)\, \sin\!\left(\dfrac{\Delta\omega\, \tau}{2}\right)$  (12.47)

which is plotted in Figs 12.4(a) and (b) for representative values of $\omega_0$ and $\Delta\omega$, with $S_0 = 1$. Figure 12.4(b) shows a detail of Fig. 12.4(a) in the vicinity of the origin.

Fig 12.3 Spectral density of narrow-band process.
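The closed form (12.47) is easy to verify against a direct numerical evaluation of the inverse transform; in the sketch below the values of S0, ω0 and Δω are arbitrary choices of ours.

```python
import numpy as np

# Narrow-band spectrum of Fig. 12.3: S_XX = S0 for w1 <= |w| <= w2, else 0.
S0, w0, dw = 1.0, 20.0, 2.0          # arbitrary illustrative values
w1, w2 = w0 - dw / 2, w0 + dw / 2

tau = np.linspace(0.01, 5.0, 500)

# Closed form, eq (12.47)
R_closed = (4 * S0 / tau) * np.cos(w0 * tau) * np.sin(dw * tau / 2)

# Direct numerical evaluation of R(tau) = 2*S0 * integral_{w1}^{w2} cos(w*tau) dw
w = np.linspace(w1, w2, 2001)
R_numeric = np.array([2 * S0 * np.mean(np.cos(w * t)) * (w2 - w1) for t in tau])

print(np.max(np.abs(R_closed - R_numeric)))   # small: the two agree closely
```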


In essence, for a typical narrow-band process $\Delta\omega \ll \omega_0$, so that the autocorrelation graph is a cosine oscillation at the frequency $\omega_0$ enveloped by the slowly varying term $(4 S_0/\tau) \sin(\Delta\omega\, \tau/2)$, which decays to zero for increasing values of $|\tau|$. In the limit of very small values of $\Delta\omega$, the spectral density becomes a Dirac delta 'function' ($S_{XX}(\omega) = S_0\, \Delta\omega\, \delta(\omega - \omega_0)$ for $\omega > 0$, plus its mirror image at $-\omega_0$, and zero otherwise). In fact, in this limit eq. (12.47) gives $R_{XX}(\tau) \to 2 S_0\, \Delta\omega \cos(\omega_0 \tau)$.

By analogy, we can infer that a time history of the narrow-band process whose correlation function is given by eq. (12.47) is surely not a sinusoidal function but, nonetheless, it may look 'quite sinusoidal', with a low degree of randomness.

Fig. 12.4 Autocorrelation of narrow-band process.

At the other extreme we find the so-called wide-band processes, whose spectral densities are significantly different from zero over a broad band of frequencies. An example can be given by a process with a spectral density as in Fig. 12.3, but where now $\omega_1$ and $\omega_2$ are much further apart on the abscissa axis. For illustrative purposes we can choose $\omega_2$ much larger than $\omega_1$ (i.e. $\Delta\omega$ comparable with $\omega_0$) and draw a graph of the autocorrelation function, which is still given by eq. (12.47). This graph is shown in Fig. 12.5 where, again, we set $S_0 = 1$.

The fictitious process whose spectral density is equal to a constant $S_0$ over all values of frequency represents a mathematical idealization called 'white noise' (by analogy with white light, which has an approximately flat spectrum over the whole visible range of electromagnetic radiation). For this process it is evident that the spectral density is nonintegrable; however, we can once more use the Dirac delta function and note that the Fourier transform of the autocorrelation function

$R_{XX}(\tau) = 2\pi S_0\, \delta(\tau)$  (12.49)


yields the desired spectral density $S_{XX}(\omega) = S_0$ (12.50). A more realistic process, called 'band-limited white noise', has a constant spectral density $S_0$ for $|\omega| \le \omega_c$, where $\omega_c$ is a cutoff frequency. In this case

$R_{XX}(\tau) = \dfrac{2 S_0 \sin(\omega_c \tau)}{\tau}$  (12.51)

and ideal white noise is obtained by letting $\omega_c \to \infty$; if now we define the parameter $\varepsilon = 1/\omega_c$, which tends to zero in the above limit, we get

$R_{XX}(\tau) = 2\pi S_0 \displaystyle\lim_{\varepsilon \to 0} \dfrac{\sin(\tau/\varepsilon)}{\pi\tau} = 2\pi S_0\, \delta(\tau)$  (12.52)

because one of the representations of the delta function as a limit (Chapter 2) is the Dirichlet or 'diffraction peak' representation, which reads

$\delta(x) = \displaystyle\lim_{\varepsilon \to 0} \dfrac{\sin(x/\varepsilon)}{\pi x}$

The autocorrelation function of a band-limited white-noise signal (eq. (12.51)) is shown in Fig. 12.6, for $S_0 = 1$ and a representative value of $\omega_c$.
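The limit in eq. (12.52) can also be checked numerically: as ω_c grows, R_XX(τ) of eq. (12.51) becomes taller and narrower while the area under it stays fixed at 2πS0, which is exactly the behaviour of 2πS0 δ(τ). The sketch below (parameter values ours) illustrates this.

```python
import numpy as np

# As the cutoff w_c increases, R(0) = 2*S0*w_c grows while the area under
# R_XX(tau) stays near 2*pi*S0 ~ 6.283, consistent with a delta function.
S0 = 1.0
tau = np.linspace(-50.0, 50.0, 2_000_001)
dtau = tau[1] - tau[0]
mid = len(tau) // 2

for wc in (1.0, 10.0, 100.0):
    # eq (12.51): 2*S0*sin(wc*tau)/tau, written with np.sinc to handle tau = 0
    # (np.sinc(x) = sin(pi*x)/(pi*x), so sin(wc*t)/t = wc*sinc(wc*t/pi))
    R = 2.0 * S0 * wc * np.sinc(wc * tau / np.pi)
    print(f"wc={wc:6.1f}  R(0)={R[mid]:8.1f}  area={np.sum(R) * dtau:.3f}")
```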

Fig 12.5 Autocorrelation of wide-band process.


For obvious reasons, white-noise processes are also called 'delta-correlated', where this term focuses the attention on the time-domain correlation rather than on the flatness of the frequency-domain spectral density. At this point it is not difficult to figure out that the time histories of such processes are very erratic and show a high degree of randomness (e.g. Fig. 12.1), the reason being the fact that the random variables $X(t)$ and $X(t + \tau)$ are practically uncorrelated even for small values of $\tau$. This confirms the qualitative statement of Section 12.2.1 that the rapidity with which the correlation function decays to zero is a measure of the degree of randomness of the process under investigation. Conversely, in the frequency domain some quantities have been devised in order to assign a numerical value to the concept of bandwidth of a random process; the interested reader is referred, for example, to Lutes and Sarkani [5], or Vanmarcke [6].

12.4 Random excitation and response of linear systems

We are now in a position to start the investigation of how linear vibrating systems respond to the action of one or more stochastic excitation inputs. The situations we are going to consider are those in which a random (and generally stationary, unless otherwise stated) input is fed into a deterministic linear system to produce a random output. For our purposes, the fact that the system is deterministic means that its physical characteristics—mass, stiffness and damping—are well-defined quantities independent of time. A higher level of sophistication is represented by the case in which these parameters are also considered as random variables and contribute to the randomness of the output in their own right. In this regard it may be interesting to mention that the response of random-parameter systems to deterministic initial conditions and under the action of deterministic loads is, as a matter of fact, a random quantity (e.g. Köylüoglu [7]). In our approach, however, the system characteristics are fully represented by the impulse response function $h(t)$ in the time domain or by the frequency response function $H(\omega)$ in the frequency domain.

Fig. 12.6 Autocorrelation of band-limited white noise.

The basic input-output relations can then be obtained as follows. Consider a linear physical system subjected to a forcing function in the form of a stationary random process $F(t)$ and let its response be the random output process $X(t)$. The mental picture we need is one of a large number of experiments where realizations $f(t)$ of the input force excite our deterministic system which, in turn, responds with realizations $x(t)$ of the output. If we refer back to Chapter 5 (eq. (5.24)), the output of a typical sample experiment can be written as the Duhamel (or convolution) integral

$x(t) = \displaystyle\int_0^{\infty} h(\theta)\, f(t - \theta)\, d\theta$  (12.53)

so that, if the mean input level is given by $E[F(t)] = \mu_F$, the first thing we can do is to calculate the mean output level $E[X(t)]$ by taking the ensemble average of both members of eq. (12.53). Since it is legitimate to exchange the ensemble average operator with integration (this is always possible for stable systems subjected to random input provided that the mean square of the input is finite) we get

$E[X(t)] = \displaystyle\int_0^{\infty} h(\theta)\, E[F(t - \theta)]\, d\theta = \mu_F \displaystyle\int_0^{\infty} h(\theta)\, d\theta$  (12.54)

Real and stable systems always possess some degree of damping, which makes the function $h(t)$ decay to zero after some time. In these circumstances, eq. (12.54) shows that a stationary input produces a stationary output. If, for example, our system is a simple damped SDOF system whose impulse response function is given by the second of eqs (5.7a), it is not difficult to determine that

$\mu_X = \dfrac{\mu_F}{k}$  (12.55)

showing that the mean input level is transmitted as any other static load.


Incidentally, we note that we do not even need to calculate the integral in eq. (12.55); in fact, since $H(\omega) = \int_0^{\infty} h(t)\, e^{-i\omega t}\, dt$, it follows that

$\displaystyle\int_0^{\infty} h(\theta)\, d\theta = H(0)$  (12.56)

and for an SDOF system (e.g. eq. (4.42)) we have $H(0) = 1/k$, which leads precisely to the result of eq. (12.55). More generally, eq. (12.54) can also be written as

$\mu_X = H(0)\, \mu_F$  (12.57)
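As a quick check of eqs (12.54)-(12.57), the sketch below integrates the standard underdamped SDOF impulse response numerically and confirms that a constant mean input μ_F is transmitted as μ_X = μ_F H(0) = μ_F/k. Parameter values are illustrative choices of ours.

```python
import numpy as np

# Underdamped SDOF impulse response h(t) = exp(-z*wn*t) * sin(wd*t) / (m*wd)
m, c, k = 1.0, 0.4, 100.0
wn = np.sqrt(k / m)
zeta = c / (2.0 * np.sqrt(k * m))
wd = wn * np.sqrt(1.0 - zeta ** 2)

t = np.linspace(0.0, 200.0, 2_000_001)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)

# Eq (12.54): mu_X = mu_F * integral of h; should equal mu_F * H(0) = mu_F / k
mu_F = 3.0
mu_X = mu_F * np.sum(h) * (t[1] - t[0])
print(mu_X, mu_F / k)   # both ~0.03
```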

Note that here and in what follows we represent the input as a force signal and the output as a displacement signal because this is the representation that we used for most of the book. It is evident that this is merely a matter of convenience and it does not necessarily need to be so. The essence of the discussion remains the same and only a small effort is required to adjust to situations where different input and output quantities are considered.

If we now assume, without loss of generality, that the input process has zero mean value, we can turn our attention to the correlation function and write, by virtue of eq. (12.53),

$x(t_1)\, x(t_2) = \displaystyle\int_0^{\infty}\!\!\int_0^{\infty} h(\theta_1)\, h(\theta_2)\, f(t_1 - \theta_1)\, f(t_2 - \theta_2)\, d\theta_1\, d\theta_2$  (12.58)

Taking the ensemble average on both sides we get

$R_{XX}(t_1, t_2) = \displaystyle\int_0^{\infty}\!\!\int_0^{\infty} h(\theta_1)\, h(\theta_2)\, R_{FF}(t_1 - \theta_1, t_2 - \theta_2)\, d\theta_1\, d\theta_2$  (12.59)

Since the input is stationary, $R_{FF}$ depends only on the time difference and, with $\tau = t_2 - t_1$, the response autocorrelation becomes

$R_{XX}(\tau) = \displaystyle\int_0^{\infty}\!\!\int_0^{\infty} h(\theta_1)\, h(\theta_2)\, R_{FF}(\tau + \theta_1 - \theta_2)\, d\theta_1\, d\theta_2$

(in particular, for a delta-correlated input this collapses to a single integral, i.e. $R_{XX}(\tau) = 2\pi S_0 \int_0^{\infty} h(\theta)\, h(\theta + \tau)\, d\theta$). Taking the Fourier transform on both sides of this last double-integral expression we get

$S_{XX}(\omega) = \displaystyle\int_0^{\infty} h(\theta_1) \int_0^{\infty} h(\theta_2) \left\{ \dfrac{1}{2\pi} \displaystyle\int_{-\infty}^{\infty} R_{FF}(\tau + \theta_1 - \theta_2)\, e^{-i\omega\tau}\, d\tau \right\} d\theta_2\, d\theta_1$

Then, in the integral within braces we make the change of variable $u = \tau + \theta_1 - \theta_2$, so that $d\tau = du$ and $e^{-i\omega\tau} = e^{-i\omega u}\, e^{i\omega\theta_1}\, e^{-i\omega\theta_2}$, and the equation above becomes

$S_{XX}(\omega) = H^{*}(\omega)\, H(\omega)\, S_{FF}(\omega)$  (12.61a)

which is the fundamental 'single-input single-output' relationship in the frequency domain for stationary random processes. Explicitly,

$S_{XX}(\omega) = |H(\omega)|^2\, S_{FF}(\omega)$  (12.61b)
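The relationship (12.61b) can be checked by brute force: feed discrete white noise through a numerically integrated SDOF system and compare the smoothed output spectrum with |H(ω)|² S_FF(ω). The sketch below is entirely our construction (integration scheme, parameter values and segment lengths included) and only aims at order-one agreement.

```python
import numpy as np

rng = np.random.default_rng(5)

# SDOF system m*x'' + c*x' + k*x = f(t) driven by discrete white noise.
m, c, k = 1.0, 0.4, 100.0
dt, n = 0.01, 200_000
f = rng.standard_normal(n) / np.sqrt(dt)    # white force: S_FF ~ constant

# Semi-implicit Euler time integration (crude but adequate for a sketch)
x = np.zeros(n)
v = 0.0
for i in range(1, n):
    a = (f[i - 1] - c * v - k * x[i - 1]) / m
    v += a * dt
    x[i] = x[i - 1] + v * dt

# Segment-averaged (smoothed) spectral density estimates
def psd(sig, seg=4096):
    blocks = sig[: len(sig) // seg * seg].reshape(-1, seg)
    return (np.abs(np.fft.rfft(blocks, axis=1)) ** 2).mean(0) * dt / (2 * np.pi * seg)

S_ff, S_xx = psd(f), psd(x)
omega = 2 * np.pi * np.fft.rfftfreq(4096, d=dt)
H2 = 1.0 / ((k - m * omega ** 2) ** 2 + (c * omega) ** 2)   # |H|^2, eq (12.67b)

band = (omega > 1.0) & (omega < 60.0)
print(np.median(S_xx[band] / (H2[band] * S_ff[band])))  # ~1, as (12.61b) predicts
```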


Also, by virtue of this last relationship we can obtain another expression for the variance (of the zero-mean response) by writing the equation

$\sigma_X^2 = R_{XX}(0) = \displaystyle\int_{-\infty}^{\infty} S_{XX}(\omega)\, d\omega$

from which it follows that

$\sigma_X^2 = \displaystyle\int_{-\infty}^{\infty} |H(\omega)|^2\, S_{FF}(\omega)\, d\omega$  (12.62)

By a similar derivation, the input-output cross-spectral density is found to be

$S_{FX}(\omega) = H(\omega)\, S_{FF}(\omega)$

which, being a complex quantity, can be broken down into a pair of equations to give both magnitude and phase information. This latter statement is of great practical importance because it means that the complete FRF of our system (i.e. magnitude and phase) can be obtained when both $S_{FX}(\omega)$ and $S_{FF}(\omega)$ are known, i.e.

$H(\omega) = \dfrac{S_{FX}(\omega)}{S_{FF}(\omega)}$  (12.64b)

thus justifying the H1 FRF estimate of eq. (10.28a), which was given in Chapter 10 without much explanation. (Note that eq. (10.28a) is written in terms of one-sided spectral densities, the difference being of practical relevance only because these are the quantities displayed by spectrum analysers. Mathematically, the difference is irrelevant.)

By the same line of reasoning, it is now just a simple matter to obtain the output-input cross-relationships

$R_{XF}(\tau) = \displaystyle\int_0^{\infty} h(\theta)\, R_{FF}(\tau + \theta)\, d\theta, \qquad S_{XF}(\omega) = H^{*}(\omega)\, S_{FF}(\omega)$  (12.65)

from which we can obtain another expression for $H(\omega)$. In fact, putting together eq. (12.61b) and the second of eqs (12.65) we have

$S_{XX}(\omega) = H(\omega)\, H^{*}(\omega)\, S_{FF}(\omega) = H(\omega)\, S_{XF}(\omega)$

from which it follows that

$H(\omega) = \dfrac{S_{XX}(\omega)}{S_{XF}(\omega)}$  (12.66)

thus justifying the H2 FRF estimate of eq. (10.29).
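A sketch of the H1 estimate at work: with synthetic frequency-domain data and additive output noise that is uncorrelated with the input, averaging the cross- and auto-spectra recovers both magnitude and phase of H(ω). Everything here (noise model, parameter values, averaging count) is our own choice.

```python
import numpy as np

rng = np.random.default_rng(6)

# FRF of an SDOF system (eq 12.67a); parameter values are illustrative.
m, c, k = 1.0, 0.4, 100.0
omega = np.linspace(0.1, 40.0, 400)
H = 1.0 / (k - m * omega ** 2 + 1j * c * omega)

# Synthetic "measurements": X = H*F plus output noise uncorrelated with F.
n_avg = 256
shape = (n_avg, omega.size)
F = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
noise = 0.1 * np.abs(H) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
X = H * F + noise

S_FF = np.mean(np.conj(F) * F, axis=0).real   # input auto-spectrum
S_FX = np.mean(np.conj(F) * X, axis=0)        # input-output cross-spectrum
H1 = S_FX / S_FF                              # eq (12.64b)

i = np.argmin(np.abs(omega - 10.0))           # near resonance, w_n = 10 rad/s
print(abs(H[i]), abs(H1[i]))                  # magnitudes agree
print(np.angle(H[i]), np.angle(H1[i]))        # phases agree too
```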

Example 12.1 SDOF system subjected to broad-band excitation. From preceding chapters we know that the FRF of an SDOF system with parameters $m$, $k$ and $c$ is given by

$H(\omega) = \dfrac{1}{k - m\omega^2 + ic\omega}$  (12.67a)

so that

$|H(\omega)|^2 = \dfrac{1}{(k - m\omega^2)^2 + c^2\omega^2}$  (12.67b)

Under the action of a random excitation with spectral density $S_{FF}(\omega)$, the system's response in the frequency domain is given by eq. (12.61b), i.e.

$S_{XX}(\omega) = \dfrac{S_{FF}(\omega)}{(k - m\omega^2)^2 + c^2\omega^2}$  (12.68)

where, as usual, $\omega_n = \sqrt{k/m}$ is the system's natural frequency. If the excitation is in the form of a broad-band process whose spectral density is reasonably flat over

a broad range of frequencies, we can approximate it as an 'equivalent' white noise by assuming $S_{FF}(\omega) \cong S_{FF}(\omega_n) \equiv S_0$. The reason for this assumption comes from the fact that, for small damping, the function (12.67b) is sharply peaked in the vicinity of $\omega_n$ and small everywhere else—Fig. 12.7 being an example for $m = 10$, $k = 100$ and a small value of damping. As a consequence, the product $|H(\omega)|^2 S_{FF}(\omega)$ will also show a similar behaviour, thus justifying the approximation above. In physical terms, our system acts as a band-pass filter which significantly amplifies only the frequency components in the vicinity of its natural frequency and produces a narrow-band process at the output.

The variance of the output process can then be obtained from eq. (12.62) as

$\sigma_X^2 \cong S_0 \displaystyle\int_{-\infty}^{\infty} |H(\omega)|^2\, d\omega = \dfrac{\pi S_0}{kc}$  (12.69a)

where the last result can be obtained from tables of integrals. (Tables of integrals of $|H(\omega)|^2$ for FRFs that are ratios of polynomials in $i\omega$ with real constant coefficients $A_j$ and $B_j$ are given, for example, in Newland [8].)

Fig 12.7 FRF magnitude squared (SDOF).
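The closed-form result of eq. (12.69a) is easily verified by numerical integration of eq. (12.62) with S_FF = S0; in the sketch below m and k follow the Fig. 12.7 example, while c and S0 are our own choices.

```python
import numpy as np

# m and k as in the Fig. 12.7 example; c and S0 are illustrative choices.
m, k, c, S0 = 10.0, 100.0, 2.0, 1.0

omega = np.linspace(-400.0, 400.0, 2_000_001)
H2 = 1.0 / ((k - m * omega ** 2) ** 2 + (c * omega) ** 2)

sigma2_numeric = S0 * np.sum(H2) * (omega[1] - omega[0])  # eq (12.62), S_FF = S0
sigma2_closed = np.pi * S0 / (k * c)                      # eq (12.69a)
print(sigma2_numeric, sigma2_closed)                      # both ~0.0157
```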


References

1. International Standard ISO 4866–1990, Mechanical Vibration and Shock—Vibration of Buildings—Guidelines for the Measurement of Vibrations and Evaluation of Their Effects on Buildings, International Organization for Standardization, 1990.
2. Adhikari, R. and Yamaguchi, H., A study on the nonstationarity in wind and wind-induced response of tall buildings for adaptive active control, in Wind Engineering, Proceedings of the 9th Wind Engineering Conference, Vol. 3.
3. Papoulis, A., Probability, Random Variables and Stochastic Processes, McGraw-Hill, New York.
4. Bendat, J.S. and Piersol, A.G., Random Data—Analysis and Measurement Procedures, 2nd edn, John Wiley, New York, 1986.
5. Lutes, L.D. and Sarkani, S., Stochastic Analysis of Structural and Mechanical Vibrations, Prentice Hall, Englewood Cliffs, NJ, 1997.
6. Vanmarcke, E.H., Properties of spectral moments with applications to random vibration, Journal of the Engineering Mechanics Division, ASCE, 98(EM2), 425–446, 1972.
7. Köylüoglu, H.U., Stochastic Response and Reliability Analyses of Structures with Random Properties Subject to Stationary Random Excitation, Ph.D. Dissertation, Princeton University, Jan. 1995.
8. Newland, D.E., An Introduction to Random Vibrations, Spectral and Wavelet Analysis, 3rd edn, Longman Scientific and Technical, 1993.
9. Rice, S.O., Mathematical analysis of random noise, Bell System Technical Journal.
10. Sólnes, J., Stochastic Processes and Random Vibrations: Theory and Practice, John Wiley, New York, 1997.
11. ASTM Standard E468, Annual Book of ASTM Standards, E468–2, Section 3, Vol. 03.01, ASTM, Philadelphia, 1983, pp. 577–587.
12. Dowling, N.E., Fatigue failure predictions for complicated stress-strain histories, Journal of Materials, 7(1), 71–87, 1972.
13. Fuchs, H.O. and Stephens, R.I., Metal Fatigue in Engineering, John Wiley, New York, 1980.
14. Downing, S.D. and Socie, D.F., Simple rainflow counting algorithms, International Journal of Fatigue, 4(1), 31–40, 1982.
