MDOF and continuous systems: response to random excitation

Part of the document "Ch12 stochastic processes and" (pages 34–55)

The fundamental input-output relationships which have been obtained in the preceding sections form the basis of the stochastic analysis of linear systems. In order of increasing difficulty—although it is just a matter of extending those ideas—we can now turn our attention to the response of more complex systems.

If we consider a general system subjected to the action of n random inputs, the time response at the jth point of the structure is given by a straightforward extension of eq (12.74), i.e.

$$x_j(t)=\sum_{k=1}^{n}\int_{-\infty}^{\infty}h_{jk}(\theta)\,f_k(t-\theta)\,d\theta \qquad (12.87)$$

where we already know from preceding chapters that h_jk(t) expresses, in physical coordinates, the response at point j due to a Dirac delta excitation at point k. If, as is often the case, we model our system as an assemblage of n masses connected by appropriate springs and dampers, we are in fact dealing with a discrete n-DOF system whose response can be written in matrix form as

$$\mathbf{x}(t)=\int_{-\infty}^{\infty}\mathbf{h}(\theta)\,\mathbf{f}(t-\theta)\,d\theta \qquad (12.88)$$

where h(t) is the (symmetrical) IRF matrix in physical coordinates; it is evident that eq (12.87) is just the jth element of the n×1 vector x(t).

Now, before proceeding further, we make a short digression to extend the ideas of Section 11.5 to random processes. In fact, we note that n generic random processes can be arranged in the n×1 vector X(t), so that the correlation matrix (see also eq (11.69)) can be defined as

$$\mathbf{R}_{XX}(\tau)=E\!\left[\mathbf{X}(t)\,\mathbf{X}^{T}(t+\tau)\right] \qquad (12.89)$$

where—since this is the case of most interest for our purposes—we assume that the n processes are stationary and with zero mean.

Then, returning to our main discussion, we can write

$$\mathbf{x}^{T}(t+\tau)=\int_{-\infty}^{\infty}\mathbf{f}^{T}(t+\tau-\theta_2)\,\mathbf{h}^{T}(\theta_2)\,d\theta_2 \qquad (12.90)$$

so that, putting together eqs (12.88) and (12.90) to form the product x(t) x^T(t+τ) and taking expectations on both sides, we get the response correlation matrix as

$$\mathbf{R}_{XX}(\tau)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\mathbf{h}(\theta_1)\,\mathbf{R}_{FF}(\tau+\theta_1-\theta_2)\,\mathbf{h}^{T}(\theta_2)\,d\theta_1\,d\theta_2 \qquad (12.91)$$

where R_FF is the input correlation matrix. Explicitly, the jth diagonal element of eq (12.91) represents the autocorrelation of the jth response and reads

$$R_{x_j x_j}(\tau)=\sum_{k=1}^{n}\sum_{l=1}^{n}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}h_{jk}(\theta_1)\,R_{F_k F_l}(\tau+\theta_1-\theta_2)\,h_{jl}(\theta_2)\,d\theta_1\,d\theta_2 \qquad (12.92)$$

Note that if we have two inputs and the jth response is the only output, then eq (12.92) becomes eq (12.77).

At this point we can turn to the frequency domain by noting that the spectral density matrix S_XX(ω) is the Fourier transform of R_XX(τ). Fourier transformation of both sides of eq (12.91) leads to

$$\mathbf{S}_{XX}(\omega)=\mathbf{H}^{*}(\omega)\,\mathbf{S}_{FF}(\omega)\,\mathbf{H}^{T}(\omega) \qquad (12.93)$$

where S_FF(ω) is the input spectral density matrix, H(ω) is the FRF matrix and the asterisk denotes complex conjugation. The diagonal elements of the matrix S_XX(ω) are the response autospectral densities

$$S_{x_j x_j}(\omega)=\sum_{k=1}^{n}\sum_{l=1}^{n}H_{jk}^{*}(\omega)\,S_{F_k F_l}(\omega)\,H_{jl}(\omega) \qquad (12.94a)$$

while the off-diagonal elements (j ≠ m) are the response cross-spectral densities

$$S_{x_j x_m}(\omega)=\sum_{k=1}^{n}\sum_{l=1}^{n}H_{jk}^{*}(\omega)\,S_{F_k F_l}(\omega)\,H_{ml}(\omega) \qquad (12.94b)$$

and again, in the case of n inputs and one output, eq (12.94b) becomes eq (12.78b). A word of caution is necessary to point out that the matrices R_XX(τ) and S_XX(ω) are sometimes called the autocorrelation matrix and the autospectral density matrix, although most of their elements are, as a matter of fact, cross-correlation functions and cross-spectral density functions. Only their diagonal elements are autocorrelation functions.
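The matrix product S_XX = H* S_FF H^T of eq (12.93) can be sketched numerically. The fragment below builds the 2×2 FRF matrix of a hypothetical two-mass chain (all numerical values are illustrative assumptions, not taken from the text) and forms the response spectral density matrix by direct summation; the diagonal of the result should come out real and non-negative, as autospectral densities must, and the matrix itself Hermitian.

```python
# Hedged sketch of eq (12.93) for an assumed 2-DOF system.
def frf_matrix(omega, M, C, K):
    """FRF matrix H(ω) = (K - ω²M + iωC)^{-1}, 2x2 inverse written out by hand."""
    A = [[K[i][j] - omega**2 * M[i][j] + 1j * omega * C[i][j]
          for j in range(2)] for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

def response_psd(H, Sff):
    """S_XX = H* S_FF H^T, element-wise: S_jk = sum_{l,m} H*_jl Sff_lm H_km."""
    return [[sum(H[j][l].conjugate() * Sff[l][m] * H[k][m]
                 for l in range(2) for m in range(2))
             for k in range(2)] for j in range(2)]

# hypothetical 2-DOF chain: unit masses, springs of stiffness 100, light damping
M = [[1.0, 0.0], [0.0, 1.0]]
K = [[200.0, -100.0], [-100.0, 100.0]]
C = [[0.4, -0.2], [-0.2, 0.2]]
Sff = [[1.0, 0.0], [0.0, 1.0]]   # two uncorrelated unit white-noise inputs

H = frf_matrix(5.0, M, C, K)
Sxx = response_psd(H, Sff)
```

With a real symmetric S_FF the quadratic-form structure of eq (12.94a) guarantees the real, non-negative diagonal checked below.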

By similar arguments as above, we can now obtain the cross-correlation matrix R_FX(τ) between the inputs and the outputs, i.e.

$$\mathbf{R}_{FX}(\tau)=\int_{-\infty}^{\infty}\mathbf{R}_{FF}(\tau-\theta)\,\mathbf{h}^{T}(\theta)\,d\theta \qquad (12.95)$$

whose diagonal elements are the cross-correlations R_{F_j x_j}(τ) between the input and the output at the same point, while the off-diagonal elements are the cross-correlation functions R_{F_j x_k}(τ) with j ≠ k.

Both sides of eq (12.95) can be Fourier transformed to obtain the cross-spectral density matrix

$$\mathbf{S}_{FX}(\omega)=\mathbf{S}_{FF}(\omega)\,\mathbf{H}^{T}(\omega) \qquad (12.96)$$

All the input-output relationships above have been obtained without taking into account the fact that the equations of motion of an n-DOF system can often be uncoupled into n SDOF equations. In Chapter 6 we determined how and when this can be done, depending on the form of the damping matrix which, in common practice, is sometimes neglected altogether—in which case uncoupling is always possible—or often assumed to be either ‘proportional’ (eq (6.141)) or in the ‘Caughey’ form of eq (6.146). This possibility leads to the concept of ‘normal or modal coordinates’ and, in the analysis of response to deterministic excitation (Chapter 7), to the concepts of modal IRFs h_j(t) and modal FRFs H_j(ω).

These latter functions, in turn, can be arranged in the form of diagonal matrices, i.e. the matrices

$$\hat{\mathbf{h}}(t)=\mathrm{diag}\left[h_1(t),\ldots,h_n(t)\right], \qquad \hat{\mathbf{H}}(\omega)=\mathrm{diag}\left[H_1(\omega),\ldots,H_n(\omega)\right]$$

which are related to the IRF and FRF matrices in physical coordinates by the relationships (7.37a) and (7.34a), i.e.

$$\mathbf{h}(t)=\mathbf{P}\,\hat{\mathbf{h}}(t)\,\mathbf{P}^{T}, \qquad \mathbf{H}(\omega)=\mathbf{P}\,\hat{\mathbf{H}}(\omega)\,\mathbf{P}^{T} \qquad (12.97)$$

where P is the n×n matrix of mass-orthonormal eigenvectors, so that P^T M P = I and P^T K P = diag(ω_j²). In this light, we can now express the response quantities of eqs (12.91), (12.93), (12.95) and (12.96) in terms of modal characteristics. For example, eqs (12.91) and (12.93) become

$$\mathbf{R}_{XX}(\tau)=\mathbf{P}\left[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\hat{\mathbf{h}}(\theta_1)\,\mathbf{P}^{T}\mathbf{R}_{FF}(\tau+\theta_1-\theta_2)\,\mathbf{P}\,\hat{\mathbf{h}}(\theta_2)\,d\theta_1\,d\theta_2\right]\mathbf{P}^{T} \qquad (12.98)$$

and

$$\mathbf{S}_{XX}(\omega)=\mathbf{P}\,\hat{\mathbf{H}}^{*}(\omega)\,\mathbf{P}^{T}\,\mathbf{S}_{FF}(\omega)\,\mathbf{P}\,\hat{\mathbf{H}}(\omega)\,\mathbf{P}^{T} \qquad (12.99)$$

where it should be noted that in eqs (12.98) and (12.99) we took into account the symmetries and the fact that P*=P because P is a matrix of real eigenvectors. Note that, explicitly, the (jk)th element of the spectral density matrix of eq (12.99) is written

$$S_{x_j x_k}(\omega)=\sum_{l=1}^{n}\sum_{s=1}^{n}p_{jl}\,p_{ks}\,H_{l}^{*}(\omega)\,H_{s}(\omega)\left[\mathbf{p}_{l}^{T}\,\mathbf{S}_{FF}(\omega)\,\mathbf{p}_{s}\right] \qquad (12.100)$$

where H_l(ω) and H_s(ω) are the lth and sth modal FRFs, respectively, p_l denotes the lth eigenvector (the lth column of P) and p_jl its jth element. Suppose now that the excitation is in the form of ‘nearly white noise’ processes, i.e. processes whose spectral densities are approximately constant within the frequency band of interest.

The sum of eq (12.100) will contain some terms where the magnitudes squared of modal FRFs appear—say, for example, |H_l(ω)|² when s=l—and other terms where the cross-products H_l^*(ω) H_s(ω) appear, with s ≠ l. If—as often happens for the lowest-order modes of many structures—the modes of our system are well separated and lightly damped, then the magnitudes of the terms of the latter type will generally be much smaller than those of the former type and the contributions from cross-modal terms can be neglected without a significant loss of accuracy. In other words, the spectral densities on the l.h.s. of eq (12.100) will show peaks at the natural frequencies of the system so that, for all practical purposes, we can say that our system behaves as a selective filter which amplifies only the input contributions near its natural frequencies.
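The claim that cross-modal terms are negligible for well-separated, lightly damped modes can be checked with a small numerical sketch of eq (12.100). The 2-DOF modal data below are hypothetical (illustrative frequencies, damping ratios and eigenvector matrix), with a single unit white-noise input at the first DOF so that the cross terms do not vanish identically.

```python
import math

# Hedged sketch: full double sum of eq (12.100) vs. its diagonal (l = s) part,
# for an assumed 2-DOF system with well-separated, lightly damped modes.
wn = [10.0, 35.0]                 # assumed modal frequencies (rad/s)
zeta = [0.01, 0.01]               # assumed modal damping ratios
P = [[0.8, 0.6], [0.6, -0.8]]     # illustrative orthonormal eigenvector matrix

def Hmode(l, w):
    """Modal FRF with unit modal mass (assumed form)."""
    return 1.0 / (wn[l]**2 - w**2 + 2.0j * zeta[l] * wn[l] * w)

def Sxx_jj(j, w, cross=True):
    """(j, j) element of eq (12.100); with a single unit input at DOF 0,
    the factor p_l^T Sff p_s reduces to P[0][l] * P[0][s]."""
    total = 0.0 + 0.0j
    for l in range(2):
        for s in range(2):
            if not cross and l != s:
                continue
            total += (P[j][l] * P[j][s] * Hmode(l, w).conjugate() * Hmode(s, w)
                      * P[0][l] * P[0][s])
    return total.real

full = Sxx_jj(1, 10.0, cross=True)    # evaluated at the first resonance
diag = Sxx_jj(1, 10.0, cross=False)   # cross-modal terms dropped
```

At the first resonance the |H_1(ω)|² term dominates by several orders of magnitude, so dropping the cross products changes the result only marginally.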

If now we turn our attention to continuous systems, two general remarks can be made before proceeding any further.

First of all, it is very common to model continuous systems as MDOF systems by lumping masses at a number of locations. In general, a higher number of degrees of freedom corresponds to a better approximation of the eigenvalues, eigenvectors and response characteristics of the real system under investigation. In any case, whenever we adopt such an approach it is evident that, in the case of random excitation, the above input-output relationships apply.

The second remark is that—when we do not follow this approach and we model our system as a truly continuous system—we must recall some general observations made in Chapter 8. In that chapter we noted that the behaviour of linear continuous systems bears noteworthy similarities to the behaviour of linear MDOF systems when the mathematical framework adopted to deal with these latter systems—i.e. finite-dimensional vector spaces—is extended to the idea of infinite-dimensional vector spaces with inner product, i.e. the so-called Hilbert spaces. The system’s matrices of the MDOF case are replaced, in the continuous case, by appropriate symmetrical differential operators, the finite sets of eigenvalues and eigenvectors are replaced by countably infinite sets of eigenvalues and eigenfunctions, and the expansion theorem is replaced by a series expansion in terms of eigenfunctions. Broadly speaking, we can say that the price we must pay is a higher level of mathematical difficulty in ‘setting the stage’ for our analysis.

However, if we look at the problem from a practical point of view and note that many fundamental concepts introduced in the case of MDOF systems retain their validity, we may observe that the dynamical behaviour of our system can still be described in terms of IRFs h(r, s, t) or FRFs H(r, s, ω). The symbols used here are very general: h(r, s, t) indicates the response in the direction of the unit vector e_r at the position identified by the vector r (with respect to a fixed origin) due to a Dirac delta excitation applied (at t=0) in the direction of the unit vector e_s at the point identified by the vector s, while H(r, s, ω) indicates the steady-state response at point r along the direction e_r due to a unit harmonic excitation applied at s in the direction e_s. As usual, the following relations hold between these two functions:

$$H(\mathbf{r},\mathbf{s},\omega)=\int_{-\infty}^{\infty}h(\mathbf{r},\mathbf{s},t)\,e^{-i\omega t}\,dt, \qquad h(\mathbf{r},\mathbf{s},t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}H(\mathbf{r},\mathbf{s},\omega)\,e^{i\omega t}\,d\omega \qquad (12.101)$$

Therefore, suppose we are dealing, for example, with a one-dimensional continuous system such as a string or a beam, and let the realization f(x_k, t) of a stationary random process F(t) be the only excitation, applied at the point x=x_k. The autocorrelation and autospectral density functions of the response w(x, t) at the specified point x=x_m (we are following the notation of Chapter 8; in this case, however, w(x, t) represents a realization of the response process W(t) at point x) can be directly obtained from eqs (12.58) and (12.61b) as

$$R_{ww}(x_m,\tau)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}h(x_m,x_k,\theta_1)\,h(x_m,x_k,\theta_2)\,R_{ff}(\tau+\theta_1-\theta_2)\,d\theta_1\,d\theta_2, \quad S_{ww}(x_m,\omega)=\left|H(x_m,x_k,\omega)\right|^{2}S_{ff}(\omega) \qquad (12.102)$$

Then, the mean squared displacement response at x=x_m can be obtained (eq (12.35)), for example, from

$$E\left[w^{2}(x_m,t)\right]=R_{ww}(x_m,0)=\int_{-\infty}^{\infty}S_{ww}(x_m,\omega)\,d\omega \qquad (12.103a)$$

and the mean squared velocity response can be obtained from the first of eqs (12.46), i.e.

$$E\left[\dot{w}^{2}(x_m,t)\right]=\int_{-\infty}^{\infty}\omega^{2}S_{ww}(x_m,\omega)\,d\omega \qquad (12.103b)$$

If now we extend our reasoning to the case of multiple (say n) inputs and multiple outputs, we can be interested, for example, in the response characteristics at the point x=xm. In the form of the output autospectral density, the desired result is given by (eq (12.94a))

$$S_{ww}(x_m,\omega)=\sum_{r=1}^{n}\sum_{s=1}^{n}H^{*}(x_m,x_r,\omega)\,S_{f_r f_s}(\omega)\,H(x_m,x_s,\omega) \qquad (12.104)$$

where, for r ≠ s, S_{f_r f_s}(ω) is the cross-spectral density between the two inputs applied at x=x_r and x=x_s while, for r=s, the function S_{f_r f_r}(ω) is the autospectral density of the input applied at x=x_r. If, on the other hand, we are interested in the cross-spectral density between the outputs at points x=x_m and x=x_k, we get (eq (12.94b))

$$S_{w_m w_k}(\omega)=\sum_{r=1}^{n}\sum_{s=1}^{n}H^{*}(x_m,x_r,\omega)\,S_{f_r f_s}(\omega)\,H(x_k,x_s,\omega) \qquad (12.105)$$

At this point, provided that damping is either neglected or ‘proportional’, we can recall from Chapter 8 that the IRFs and FRFs in physical coordinates can be expressed as series expansions in terms of the system mode shapes φ_j(x) and of modal IRFs or FRFs, respectively. These relations are given by eqs (8.185b) and (8.190), which we rewrite here for our present convenience:

$$h(x,s,t)=\sum_{j}\varphi_{j}(x)\,\varphi_{j}(s)\,h_{j}(t), \qquad H(x,s,\omega)=\sum_{j}\varphi_{j}(x)\,\varphi_{j}(s)\,H_{j}(\omega) \qquad (12.106)$$

When eqs (12.106) are substituted into the appropriate input-output relations we obtain the response characteristics in terms of modal contributions and, as a consequence, it is possible to retain only the modes of interest and exclude the others.
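The structure of the series expansion (12.106) can be sketched numerically. The sine mode shapes and modal data below are hypothetical (unit-normalized shapes, as for a simply supported system, with assumed natural frequencies and damping), chosen only to illustrate truncation of the modal sum; they are not the book's worked values.

```python
import math

# Hedged sketch of the truncated modal series H(x, s, ω) = Σ_j φ_j(x) φ_j(s) H_j(ω)
L = 1.0                                       # assumed span
zeta = 0.02                                   # assumed modal damping ratio

def phi(j, x):
    """Illustrative unit-normalized sine mode shape."""
    return math.sqrt(2.0 / L) * math.sin(j * math.pi * x / L)

def Hmode(j, w):
    """Modal FRF with hypothetical natural frequencies ω_j = (jπ)²."""
    wj = (j * math.pi)**2
    return 1.0 / (wj * wj - w * w + 2.0j * zeta * wj * w)

def H(x, s, w, nmodes):
    """Modal series of eq (12.106), truncated after nmodes terms."""
    return sum(phi(j, x) * phi(j, s) * Hmode(j, w) for j in range(1, nmodes + 1))

H_few = H(0.3, 0.3, 5.0, 10)    # excitation frequency below the first resonance
H_many = H(0.3, 0.3, 5.0, 50)
```

Below the first resonance the higher-mode terms decay rapidly, so a short truncation already reproduces the full sum; the symmetry h_jk = h_kj noted earlier appears here as the reciprocity H(x, s, ω) = H(s, x, ω).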

We end this section here by noting that we have limited our discussion of continuous systems only to point excitations. The subject of distributed random loads has not been discussed and the interested reader is referred to specific literature on random vibrations, for example, Newland [8].

12.6 Analysis of narrow-band processes: a few selected topics

This section considers briefly some topics of general interest in many applications. The choice is subjective and has been made only with the intention of introducing the reader to some concepts and ideas in the vast and specialized field of random vibrations.

12.6.1 Stationary narrow-band processes: threshold crossing rates

For isolated and lightly damped structural modes, we noted in preceding sections that the system output to a broad-band excitation is a narrow-band process X(t) whose spectral density has a significant amplitude only in a limited range of frequencies in the vicinity of the mode natural frequency. Let us consider a time history x(t) of such a process and ask if we can obtain some information on the number of times that our sample function crosses a given threshold level x=a in a given time interval T. More specifically, we will be interested in the number of upward crossings—i.e. crossings with a positive slope—in time T. Let N_a^+(T) be the number of such crossings in a typical sample function of duration T. Averaging over samples we obtain the mean number

$$\overline{N}_{a}^{+}(T)=E\left[N_{a}^{+}(T)\right] \qquad (12.107)$$

and, since the process is stationary, we can easily expect that a sample twice as long will contain twice as many upward crossings. This leads to the conclusion that this mean number is directly proportional to T, so that we can write

$$\overline{N}_{a}^{+}(T)=\nu_{a}^{+}\,T \qquad (12.108)$$

where we interpret ν_a^+ as the average frequency of upward crossings of the threshold x=a, i.e. the number of crossings per unit time. Now, by isolating a short section of a sample time history (say, of length dt, between the instants t_0 and t_0+dt), let us consider a typical situation in which an upward crossing is very likely to occur. The first condition to be met is that at the beginning of the interval—i.e. at time t_0—we must have x<a. The second obvious condition is that the derivative dx/dt be positive. However, this is not enough: if we want an upward crossing to occur within the interval we must require that

$$x+\dot{x}\,dt>a \qquad (12.109)$$

which, in essence, means that the slope must be steep enough to arrive at the threshold value within the time interval dt. Rearranging eq (12.109) we get the equivalent condition

$$a-\dot{x}\,dt<x<a$$

Since t_0 is arbitrary and the two conditions must be satisfied simultaneously, we can obtain the probability ν_a^+ dt of an upward crossing within the interval dt by expressing it as the double integral

$$\nu_{a}^{+}\,dt=\int_{0}^{\infty}\int_{a-\dot{x}\,dt}^{a}p(x,\dot{x})\,dx\,d\dot{x} \qquad (12.110)$$

where p(x, ẋ) is the joint pdf of the process X and its time derivative Ẋ. Now, since the interval dt is very small, it is reasonable to approximate the integral in dx as

$$\int_{a-\dot{x}\,dt}^{a}p(x,\dot{x})\,dx\cong p(a,\dot{x})\,\dot{x}\,dt$$

so that eq (12.110) becomes

$$\nu_{a}^{+}\,dt=dt\int_{0}^{\infty}\dot{x}\,p(a,\dot{x})\,d\dot{x}$$

and hence

$$\nu_{a}^{+}=\int_{0}^{\infty}\dot{x}\,p(a,\dot{x})\,d\dot{x} \qquad (12.111)$$

which is a general result valid for any probability distribution. A special case of eq (12.111) which deserves particular attention is the case of a Gaussian process, i.e. a zero-mean process for which X and Ẋ are independent, with joint density function

$$p(x,\dot{x})=\frac{1}{2\pi\,\sigma_{x}\sigma_{\dot{x}}}\exp\left(-\frac{x^{2}}{2\sigma_{x}^{2}}-\frac{\dot{x}^{2}}{2\sigma_{\dot{x}}^{2}}\right) \qquad (12.112)$$

Substitution of eq (12.112) into eq (12.111) leads to

$$\nu_{a}^{+}=\frac{1}{2\pi}\,\frac{\sigma_{\dot{x}}}{\sigma_{x}}\exp\left(-\frac{a^{2}}{2\sigma_{x}^{2}}\right) \qquad (12.113)$$

which is a result obtained by Rice [9].
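Rice's result can be verified numerically: the sketch below evaluates the integral of eq (12.111) with the Gaussian pdf of eq (12.112) by the trapezoidal rule and compares it with the closed form of eq (12.113). The values of σ_x and σ_ẋ are illustrative assumptions.

```python
import math

# Numerical check of Rice's formula, eqs (12.111)-(12.113).
sx, sv = 1.0, 10.0     # assumed σ_x and σ_ẋ (e.g. a narrow-band mode near 10 rad/s)
a = 1.5                # threshold level

def p_joint(x, v):
    """Joint Gaussian pdf of eq (12.112): zero mean, X and Ẋ independent."""
    return math.exp(-x*x/(2*sx*sx) - v*v/(2*sv*sv)) / (2*math.pi*sx*sv)

# eq (12.111): ν_a⁺ = ∫_0^∞ v p(a, v) dv, by the trapezoidal rule
n, vmax = 20000, 8 * sv
dv = vmax / n
vals = [i*dv * p_joint(a, i*dv) for i in range(n + 1)]
nu_numeric = dv * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# eq (12.113): closed form
nu_rice = (sv / (2*math.pi*sx)) * math.exp(-a*a/(2*sx*sx))
```

The integrand decays as a Gaussian, so truncating at 8σ_ẋ and using a fine trapezoidal grid reproduces the closed form to high accuracy.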

In this regard we can, for example, calculate the quantity ν_a^+ for the SDOF system of Example 12.1 when the input process is a stationary Gaussian white noise with spectral density S_0. From eq (12.69a) we have

$$\sigma_{x}^{2}=\frac{\pi S_{0}}{2m^{2}\zeta\omega_{n}^{3}}$$

Then, the variance of the derived process Ẋ is obtained by combining the results of eq (12.62) and the first of eqs (12.46) to get

$$\sigma_{\dot{x}}^{2}=\int_{-\infty}^{\infty}\omega^{2}\left|H(\omega)\right|^{2}S_{0}\,d\omega=\frac{\pi S_{0}}{2m^{2}\zeta\omega_{n}}=\omega_{n}^{2}\,\sigma_{x}^{2}$$

where H(ω) is given by eq (12.67a) and the last result on the r.h.s. has been obtained from tabulated integrals. Substitution in eq (12.113) gives

$$\nu_{a}^{+}=\frac{\omega_{n}}{2\pi}\exp\left(-\frac{a^{2}}{2\sigma_{x}^{2}}\right) \qquad (12.114)$$

where ωn is the system natural frequency.
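The key step behind eq (12.114) is the ratio σ_ẋ/σ_x = ω_n, which can be checked by direct quadrature of the two variance integrals. The sketch below assumes a unit-mass SDOF FRF and a two-sided unit white spectrum (illustrative conventions; the book's constants may differ by normalization factors, but the ratio is insensitive to them).

```python
import math

# Hedged check of the variances behind eq (12.114): with |H(ω)|² for a unit-mass
# SDOF and S0 = 1, σ_x² = ∫|H|² dω and σ_ẋ² = ∫ω²|H|² dω give σ_ẋ/σ_x = ωn.
wn, zeta = 10.0, 0.05     # illustrative system parameters

def H2(w):
    """|H(ω)|² for a unit-mass SDOF: 1 / [(ωn² - ω²)² + (2ζωnω)²]."""
    return 1.0 / ((wn*wn - w*w)**2 + (2*zeta*wn*w)**2)

dw, wmax = 0.01, 2000.0
sx2 = sv2 = 0.0
for i in range(int(wmax / dw)):          # midpoint rule, doubled for symmetry
    w = (i + 0.5) * dw
    sx2 += 2.0 * dw * H2(w)
    sv2 += 2.0 * dw * w * w * H2(w)

ratio = math.sqrt(sv2 / sx2)             # should be close to ωn
```

The displacement integral also reproduces the tabulated value π/(2ζω_n³) quoted above, confirming that the zero-crossing rate of the response is simply ω_n/2π.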

12.6.2 Stationary narrow-band processes: peak distributions

Consider again a sample function x(t) of duration T of a stationary random process X(t). If we call p_p(a) the peak probability density function, then the probability that a peak chosen at random has an amplitude that exceeds the amplitude a is given by

$$\text{Prob(peak}>a)=\int_{a}^{\infty}p_{p}(x)\,dx \qquad (12.115)$$

Now, since we are considering a narrow-band process, any time history x(t) is generally well behaved and not very dissimilar from a sinusoidal oscillation with varying amplitude. In this circumstance it is reasonable to assume that any upward crossing of the level x=a will result in one peak with amplitude greater than a, so that the number of such peaks in the time interval T is given by ν_a^+ T. Also, we can say that each upward crossing of the threshold x=0 corresponds to one ‘cycle’ of our smoothly varying time history, so that there are, on average, ν_0^+ T ‘cycles’ in the time interval T. (Note that these assumptions are generally not true for wide-band processes, which have highly erratic time histories. In this circumstance it cannot be assumed that each upcrossing of the threshold corresponds to one peak (or maximum) only.) Then, in the same interval, the fraction of peaks greater than a can be expressed as the ratio ν_a^+ T/(ν_0^+ T) and

$$\int_{a}^{\infty}p_{p}(x)\,dx=\frac{\nu_{a}^{+}}{\nu_{0}^{+}} \qquad (12.116)$$

Differentiating both sides with respect to a gives the desired result, i.e. the probability density function for the occurrence of peaks

$$p_{p}(a)=-\frac{1}{\nu_{0}^{+}}\,\frac{d\nu_{a}^{+}}{da} \qquad (12.117)$$

If, in particular, the narrow-band process has a Gaussian distribution, we can use eq (12.113) to obtain

$$p_{p}(a)=\frac{a}{\sigma_{x}^{2}}\exp\left(-\frac{a^{2}}{2\sigma_{x}^{2}}\right) \qquad (12.118)$$

where we took into account (eq (12.113)) that ν_0^+ = σ_ẋ/(2πσ_x). The distribution of eq (12.118) is well known in probability theory and is called the Rayleigh distribution. From this result it is easy to determine the probability that a peak chosen at random will exceed the level a: this is

$$\text{Prob(peak}>a)=\exp\left(-\frac{a^{2}}{2\sigma_{x}^{2}}\right) \qquad (12.119a)$$

or the probability that a peak chosen at random is less than level a, i.e. the Rayleigh cumulative probability distribution

$$P(a)=1-\exp\left(-\frac{a^{2}}{2\sigma_{x}^{2}}\right) \qquad (12.119b)$$

Although the Rayleigh distribution is widely used in a large number of practical problems, it must be noted that the distribution of peaks may differ significantly from eq (12.118) if the underlying probability distribution of the original process is not Gaussian. In these cases, the Weibull distribution

generally provides better results. This distribution in its general form is a two-parameter distribution and is often found in statistics books written as

$$P(x)=1-\exp\left[-\left(\frac{x}{\beta}\right)^{\alpha}\right] \qquad (12.120a)$$

where α is a parameter which determines the shape of the distribution and β is a scale parameter which determines the spread of the values. From eq (12.120a) the Weibull probability density function can be obtained by differentiating with respect to x (see the third of eqs (11.20))

$$p(x)=\frac{\alpha}{\beta}\left(\frac{x}{\beta}\right)^{\alpha-1}\exp\left[-\left(\frac{x}{\beta}\right)^{\alpha}\right] \qquad (12.120b)$$

For our purposes, however, we can follow Newland and note that if we call a_0 the median (eq (11.45)) of the Rayleigh distribution (12.119b), we have

$$P(a_{0})=1-\exp\left(-\frac{a_{0}^{2}}{2\sigma_{x}^{2}}\right)=\frac{1}{2}$$

so that a_0² = 2σ_x² ln 2, from which it follows σ_x² = a_0²/(2 ln 2). Substitution of this result into eq (12.119b) gives the Rayleigh distribution in the form

$$P(a)=1-2^{-(a/a_{0})^{2}} \qquad (12.121)$$

which, in turn, is a special case of the one-parameter Weibull distribution (eq (12.120a) with α = k and β = a_0/(ln 2)^{1/k}), i.e.

$$P(a)=1-2^{-(a/a_{0})^{k}} \qquad (12.122)$$

From eq (12.122) we obtain the Weibull pdf

$$p_{p}(a)=\frac{k\ln 2}{a_{0}}\left(\frac{a}{a_{0}}\right)^{k-1}2^{-(a/a_{0})^{k}} \qquad (12.123)$$

which is sketched in Fig. 12.9 for three different values of k, the case k=2 representing the Rayleigh pdf.

(The reader is invited to sketch a graph of the Weibull cumulative probability distributions of eq (12.122) for the same values of k.)
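The invited sketch can also be made numerically. The fragment below evaluates the one-parameter Weibull pdf and cumulative distribution of eqs (12.122) and (12.123) in median units (a_0 = 1) for the same three shape values: each pdf should integrate to 1, each CDF should equal 1/2 at the median, and k=2 reproduces the Rayleigh case.

```python
import math

# Sketch of the one-parameter Weibull family of eqs (12.122)-(12.123), a0 = 1.
def weibull_pdf(a, k, a0=1.0):
    """pdf of eq (12.123): (k ln2/a0) (a/a0)^(k-1) 2^(-(a/a0)^k)."""
    return (k * math.log(2) / a0) * (a / a0)**(k - 1) * 2.0**(-(a / a0)**k)

def weibull_cdf(a, k, a0=1.0):
    """CDF of eq (12.122): 1 - 2^(-(a/a0)^k)."""
    return 1.0 - 2.0**(-(a / a0)**k)

# midpoint-rule check that each pdf integrates to 1
da, amax = 0.001, 30.0
areas = {k: da * sum(weibull_pdf((i + 0.5) * da, k) for i in range(int(amax / da)))
         for k in (1, 2, 3)}
```

Evaluating `weibull_pdf` on a grid for k = 1, 2, 3 gives exactly the three curves of Fig. 12.9: larger k concentrates the density more tightly around the median a_0 = 1.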

At this point we may ask about the highest peak which can be expected within a time interval T. The average number of cycles in time T (and hence, for a narrow-band process, the average number of peaks) is given by N = ν_0^+ T, where ν_0^+ has been introduced above in this section. Noting that there is no loss of generality in considering the amplitude of peaks in median units, let us call A the (unknown) maximum peak amplitude expected, on average, in time T. In other words, we are putting ourselves in the situation in which the equation N · Prob(peak > A) = 1 applies, which in turn implies

$$\text{Prob(peak}>A)=\frac{1}{N}=\frac{1}{\nu_{0}^{+}T} \qquad (12.124)$$

Furthermore, we know from eq (12.116) that

$$\text{Prob(peak}>A)=\frac{\nu_{A}^{+}}{\nu_{0}^{+}} \qquad (12.125a)$$

and from eq (12.122) that

$$\text{Prob(peak}>A)=1-P(A)=2^{-A^{k}} \qquad (12.125b)$$

so that, equating eqs (12.125a) and (12.125b) and taking (12.124) into account, we get

$$2^{-A^{k}}=\frac{1}{\nu_{0}^{+}T}$$

Fig. 12.9 Weibull pdf for different values of k.

from which it follows that

$$A=\left[\log_{2}\left(\nu_{0}^{+}T\right)\right]^{1/k} \qquad (12.126a)$$

Finally, noting that A expresses the maximum amplitude in median units and can therefore be written as A = a_max/a_0, where a_max is the maximum amplitude in its appropriate units, we get

$$a_{\max}=a_{0}\left[\log_{2}\left(\nu_{0}^{+}T\right)\right]^{1/k} \qquad (12.126b)$$

Equation (12.126b) is a general expression for narrow-band processes when we can reasonably assume that any upcrossing of the zero level corresponds to a full cycle (and hence to a peak), so that the average number of cycles (peaks) in time T is given by N = ν_0^+ T. It is left to the reader to sketch a graph of eq (12.126b), plotting a_max/a_0 as a function of the number of cycles N.

For example, if the peak distribution of our process is a Weibull distribution with k=1, eq (12.126b) shows that, on average, a peak with an amplitude higher than four times the median can be expected every 2⁴ = 16 cycles or, in other words, one peak out of 16 peaks will exceed, on average, four times the median. If, on the other hand, the peak distribution is a Rayleigh distribution, the average number of cycles needed to observe one peak higher than four times the median (i.e. A=4, k=2) is given by

$$N=2^{4^{2}}=2^{16}=65\,536$$

or, in other words, one peak out of approximately 65 500 peaks will exceed an amplitude of four times the median. Qualitatively, a similar result should be expected just by visual inspection of Fig. 12.9, where we note that higher values of k correspond to more strongly peaked probability density functions in the vicinity of the median, and hence to lower probabilities for the occurrence of peak values significantly different from a_0. The interested reader can find further developments along this line of reasoning, for example, in Newland [8] or Sólnes [10].
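The numbers quoted above follow directly from inverting eq (12.126a); a short worked check (amplitudes in median units, a_0 = 1):

```python
import math

# Worked check of the examples above via eqs (12.126a)-(12.126b), a0 = 1.
def cycles_to_exceed(A, k):
    """Average number of cycles N for the expected maximum peak to reach A medians:
    inverting eq (12.126a), N = 2^(A^k)."""
    return 2.0 ** (A ** k)

def expected_max(N, k):
    """eq (12.126b) with a0 = 1: a_max = (log2 N)^(1/k)."""
    return math.log(N, 2) ** (1.0 / k)

n_k1 = cycles_to_exceed(4.0, 1)     # Weibull with k = 1: 2^4 = 16 cycles
n_k2 = cycles_to_exceed(4.0, 2)     # Rayleigh (k = 2): 2^16 = 65 536 cycles
```

The steep growth of N with k is exactly the qualitative conclusion drawn from Fig. 12.9.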

12.6.3 Notes on fatigue damage due to random excitation

Fatigue is the process by which the strength of a structural member is degraded by the cyclic application of load (stress) or strain, so that the fatigue load a structure can withstand is often significantly less than the load it could sustain if applied only once. Broadly
