12.7 Summary and comments
In the light of the preliminary results on probability and statistics given in Chapter 11, this chapter has considered the subject of random vibrations. Random vibrations arise in a number of situations in engineering practice. More specifically, when it is not possible to give a deterministic description of the vibratory phenomenon under investigation but repeated observations show some underlying patterns and regularities, we resort to a description in terms of statistical quantities and we speak of a 'random (or stochastic) process'. This is precisely the subject of Section 12.2, where we also note that, in practical situations, the engineer's representation of a random process is a so-called 'ensemble', i.e. a number of sufficiently long time histories (samples) which can be used, by averaging across the ensemble at specific instants of time, to calculate (or better, 'estimate') all the quantities of interest.

Luckily, a large number of natural vibratory phenomena have (or can reasonably be assumed to have) some properties that allow a noteworthy simplification of the analysis. These properties are stationarity and ergodicity (Section 12.2.1). There exist different levels of stationarity and ergodicity but, broadly speaking, the first property has to do with the fact that certain statistical descriptors of the process do not change with time, while the second refers to the circumstance in which a sufficiently long time record can be considered as representative of the whole process. Furthermore, ergodicity implies stationarity and, in practice, when there is evidence that a given process is stationary, ergodicity is also tacitly assumed, so that we can (1) record only one (sufficiently long) time history and (2) describe the process by taking time averages along this single sample rather than calculating ensemble averages across a number of different samples, the two types of averages being equal because of ergodicity. It should be noted, however, that the assumption of ergodicity is, more often than not, an educated guess rather than a proven fact.
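The equality of time and ensemble averages for an ergodic process can be sketched numerically. The following Python fragment is a minimal illustration assuming NumPy; the Gaussian process with mean 1.5, the ensemble size and the record length are arbitrary choices made here for illustration, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of sample time histories from a (stationary, ergodic) process:
# here, illustrative Gaussian white noise with mean 1.5 and unit variance.
n_samples, n_steps = 200, 5000
ensemble = 1.5 + rng.standard_normal((n_samples, n_steps))

# Ensemble average: average across the samples at one fixed instant of time.
ensemble_mean_at_t = ensemble[:, 1234].mean()

# Time average: average along a single (sufficiently long) time history.
time_mean_single = ensemble[0, :].mean()

print(ensemble_mean_at_t, time_mean_single)  # both close to 1.5
```

Both estimates approach the process mean; for a non-ergodic process the two numbers would, in general, differ.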
Just as deterministic vibrations can be analysed in the time domain or in the frequency domain, the same can be done with random vibrations. However, some complications of a mathematical nature do arise, and the problem is tackled by Fourier-transforming correlation functions rather than the time signal itself (the Wiener-Khintchine relations). This procedure leads to the concept of spectral density, whose definition and properties are the subject of Section 12.3, and to the notions of narrow-band and wide-band random processes.
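The Wiener-Khintchine route can be sketched numerically: the spectral density is estimated as the Fourier transform of a (biased) autocorrelation estimate rather than of the signal itself. The fragment below is a minimal sketch assuming NumPy; the white-noise signal and record length are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
x = rng.standard_normal(n)  # white noise, unit variance

# Biased autocorrelation estimate R[k] = (1/n) * sum_t x[t] x[t+k],
# returned for lags -(n-1) .. (n-1) with lag 0 at the centre.
r = np.correlate(x, x, mode="full") / n

# Wiener-Khintchine: the (two-sided) spectral density is the Fourier
# transform of the autocorrelation function.
S = np.fft.fft(np.fft.ifftshift(r)).real

# For unit-variance white noise the two-sided spectral density is flat at 1.
print(S.mean())
```

This discrete construction reproduces the periodogram exactly, so S is nonnegative (up to roundoff) and its average equals the mean-square value of the signal.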
Then, with all the above results at our disposal, we can consider the problem of determining the (random) response of a (deterministic) linear system to a random stationary source of excitation. Proceeding in order of increasing complexity (one input and one output, one output and more than one input, MDOF and continuous systems), we do so in Sections 12.4 and 12.5, where we establish the fundamental input-output relationships for linear systems and note that, once again, the system's characteristics are represented in terms of IRFs in the time domain and FRFs in the frequency domain. Moreover, also in this case, the output characteristics can be expressed in terms of modal IRFs and FRFs.
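A minimal single-input, single-output illustration of these relationships can be sketched as follows (assuming NumPy; the first-order recursive system and its parameter a are hypothetical choices, standing in for a generic linear system with FRF H):

```python
import numpy as np

rng = np.random.default_rng(2)
a, n = 0.8, 200_000
x = rng.standard_normal(n)          # stationary white-noise input, S_xx = 1

# Linear system: first-order recursion y[t] = a*y[t-1] + x[t], i.e. the
# discrete FRF is H(w) = 1 / (1 - a*exp(-i*w)).
y = np.empty(n)
y[0] = x[0]
for t in range(1, n):
    y[t] = a * y[t - 1] + x[t]

# Integrating the input-output relation S_yy = |H|^2 * S_xx over frequency
# gives, for this system, a theoretical output variance of 1 / (1 - a^2).
var_theory = 1.0 / (1.0 - a**2)
var_est = y[n // 10 :].var()        # discard the initial transient
print(var_est, var_theory)
```

Discarding the initial portion of the response matters: as discussed in Section 12.4, the output is nonstationary until the system adjusts to the stationary input.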
Also, in the final part of Section 12.4, we pay due attention to the fact that the steady-state condition, in which a stationary input produces a stationary output, is not reached immediately after the onset of the input: some time has to pass before the system, so to speak, adjusts to its new state of motion. During this time, the response is clearly nonstationary, because its statistical characteristics (typically, its mean value if different from zero, and its variance) vary from zero to their stationary values.
Finally, in order to give the reader an idea of the richness of the subject of random vibrations, which is now a specialized field of activity and research in its own right, Section 12.6 deals with specific topics of particular interest. Sections 12.6.1 and 12.6.2 are strictly related and consider, respectively, the threshold crossing rates and peak distributions of stationary narrow-band processes, while Section 12.6.3 introduces some basic concepts of fatigue damage of engineering materials and gives a brief account of how, based on our limited knowledge of the details of material fatigue, we can attack the frequently encountered problem of fatigue damage due to random excitation. When this excitation is in the form of a narrow-band random process, it is also shown how the results of Sections 12.6.1 and 12.6.2 can be used to estimate the mean time to failure.
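As a small numerical taste of Section 12.6.1, the mean zero up-crossing rate of a narrow-band stationary process can be checked by direct counting: for such a process it is close to the centre frequency of the band. The fragment below is an illustrative sketch assuming NumPy; the 10 Hz centre frequency, the bandwidth and the record length are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical narrow-band stationary process centred at f0 = 10 Hz,
# built as a superposition of cosines with random frequencies and phases.
f0, bw, n_comp = 10.0, 0.5, 100
freqs = f0 + bw * (rng.random(n_comp) - 0.5)
phases = 2 * np.pi * rng.random(n_comp)

dt = 1e-3
t = np.arange(0.0, 50.0, dt)
x = np.zeros_like(t)
for f, p in zip(freqs, phases):
    x += np.cos(2 * np.pi * f * t + p)

# Count zero up-crossings; for a narrow-band process the mean up-crossing
# rate of the zero level is close to the centre frequency f0.
up = np.count_nonzero((x[:-1] < 0.0) & (x[1:] >= 0.0))
rate = up / (t[-1] - t[0])
print(rate)
```

Counting crossings of a nonzero threshold in the same way leads directly to the threshold crossing rates used in the estimates of Section 12.6.3.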
References

1. International Standard ISO 4866–1990, Mechanical Vibration and Shock—Vibration of Buildings—Guidelines for the Measurement of Vibrations and Evaluation of Their Effects on Buildings.
2. Adhikari, R. and Yamaguchi, H., A study on the nonstationarity in wind and wind-induced response of tall buildings for adaptive active control, Wind Engineering, Proceedings of the 9th Wind Engineering Conference, Vol. 3, pp. 1455–1466, Wiley Eastern Ltd., New Delhi, 1995.
3. Papoulis, A., Signal Analysis, McGraw-Hill, New York, 1981.
4. Bendat, J.S. and Piersol, A.G., Random Data—Analysis and Measurement Procedures, 2nd edn, John Wiley, New York, 1986.
5. Lutes, L.D. and Sarkani, S., Stochastic Analysis of Structural and Mechanical Vibrations, Prentice Hall, Englewood Cliffs, NJ, 1997.
6. Vanmarcke, E.H., Properties of spectral moments with applications to random vibration, Journal of the Engineering Mechanics Division, ASCE, 98(EM2), 425–446, 1972.
7. Köylüoglu, H.U., Stochastic Response and Reliability Analyses of Structures with Random Properties Subject to Stationary Random Excitation, Ph.D. Dissertation, Princeton University, Jan. 1995.
8. Newland, D.E., An Introduction to Random Vibrations, Spectral and Wavelet Analysis, 3rd edn, Longman Scientific and Technical, 1993.
9. Rice, S.O., Mathematical analysis of random noise, Bell System Technical Journal, 23 (1944) and 24 (1945); reprinted in Wax, N. (ed.), Selected Papers on Noise and Stochastic Processes, Dover, New York, 1954.
10. Sólnes, J., Stochastic Processes and Random Vibrations: Theory and Practice, John Wiley, New York, 1997.
11. ASTM Standard E468, American Society for Testing and Materials, Annual Book of ASTM Standards, E468–2, Section 3, Vol. 03.01, ASTM, Philadelphia.
14. Downing, S.D. and Socie, D.F., Simple rainflow counting algorithms, International Journal of Fatigue, 4(1), 31–40, 1982.
Further reading to Part I

Ainsworth, M., Levesley, J., Light, W.A. and Marletta, M. (eds) Wavelets, Multilevel Methods and Elliptic PDEs, Oxford Science Publications, Clarendon Press.
Cercignani, C. Spazio, Tempo, Movimento. Introduzione alla Meccanica Razionale (in Italian), Zanichelli, Bologna, 1976.
Champeney, D.C. Fourier Transforms in Physics, Adam Hilger, Bristol, 1985.
Chisnell, R.F. Vibrating Systems, Routledge & Kegan Paul, London, 1960.
Clough, R.W. and Penzien, J. Dynamics of Structures, McGraw-Hill, New York, 1975.
Diana, G. and Cheli, F. Dinamica e Vibrazioni dei Sistemi Meccanici, Vols 1 and 2, Utet, Torino, 1993.
Genta, G. Vibrazioni delle Strutture e delle Macchine, Levrotto & Bella, Torino, 1996. (Also available in English, Vibration of Structures and Machines, Springer-Verlag, New York, 1993–5.)
Griffin, M.J. Handbook of Human Vibration, Academic Press, London, 1990.
Hartog, J.P.D. Mechanical Vibrations, Dover, New York, 1984.
Hewlett-Packard Application Note 243–3, The Fundamentals of Modal Testing.
Köylüoglu, H.U. Theory and Applications of Structural Vibrations, CIV 362—Lecture Notes, Princeton University, 1995.
Landau, L.D. and Lifshitz, E.M. Meccanica, Editori Riuniti, Rome, 1982. (Also available in English: Landau and Lifshitz—Course of Theoretical Physics, Vol. 1, Mechanics, Pergamon Press.)
Lebedev, N.N., Skalskaya, I.P. and Uflyand, Y.S. Worked Problems in Applied Mathematics, Dover, New York, 1965.
Lembgrets, F. Parameter estimation in modal analysis, L.M.S. Seminar on Modal Analysis, Milan, 25–26 May, 1992.
Milton, J.S. and Arnold, J.C. Introduction to Probability and Statistics, 2nd edn, McGraw-Hill, New York, 1990.
Mitchell, L.D. Modal test methods—quality, quantity and unobtainable, Sound and Vibration, Nov., 10–17, 1994.
Norton, M.P. Fundamentals of Noise and Vibration Analysis for Engineers, Cambridge University Press, Cambridge, 1989.
Ohayon, R. and Soize, C. Structural Acoustics and Vibration, Academic Press, London, 1998.
Pettofrezzo, A.J. Matrices and Transformations, Dover, New York, 1966.
Petyt, M. Introduction to Finite Element Vibration Analysis, Cambridge University Press, Cambridge, 1990.
Piersol, A.G. Optimum resolution bandwidth for spectral analysis of stationary random vibration data, Shock and Vibration, 1(1), 33–43, 1993–4.
Przemieniecki, J.S. Theory of Matrix Structural Analysis, Dover, New York, 1968.
Reddy, B.D. Introductory Functional Analysis with Applications to Boundary Value Problems and Finite Elements, Springer-Verlag, New York, 1998.
Richardson, M.H. Is it a mode shape, or an operating deflection shape? Sound and Vibration, Jan., 54–61, 1997.
Rudin, W. Real and Complex Analysis, McGraw-Hill, New York, 1966.
Rudin, W. Functional Analysis, McGraw-Hill, New York, 1973.
Scavuzzo, R.J. and Pusey, H.C. Principles and Techniques of Shock Data Analysis, edited and produced by the Shock and Vibration Information Analysis Center (SAVIAC, Arlington, Virginia), SVM-16, 2nd edn, 1996.
Shephard, G.C. Spazi Vettoriali di Dimensioni Finite, Cremonese, Rome, 1969. (Also available in English: Vector Spaces of Finite Dimension, Oliver & Boyd.)
Timoshenko, S., Young, D.H. and Weaver, W. Jr Vibration Problems in Engineering, 4th edn, John Wiley, New York, 1974.
Towne, D.H. Wave Phenomena, Dover, New York, 1967.
Ventsel, E.S. Teoria delle probabilità, Edizioni MIR, 1983.
Vu, H.V. and Esfandiari, R.S. Dynamic Systems Modeling and Analysis, McGraw-Hill, New York, 1998.
Zaveri, K. Modal Analysis of Large Structures—Multiple Exciter Systems, Bruel & Kjaer, 1985.
Part II
Measuring instrumentation
Vittorio Ferrari
13.1 Introduction

With the intention of highlighting correct measurement practice, this chapter presents the fundamental concepts involved with measurement and measuring instruments. The first two sections, on the measurement process and uncertainty, form a general introduction. Then three sections follow which describe the functional model of measuring instruments and their static and dynamic behaviour. Afterwards, a comprehensive treatment of the loading effect caused by the measuring instrument on the measured system is presented, which makes use of two-port models and of the electromechanical analogy. Worked-out examples are included. Finally, a survey of the terminology used for specifying the characteristics of measuring instruments is given.

This chapter is intended to be propaedeutic and is not essential to the next two chapters; the reader more interested in the technical aspects can skip to Chapters 14 and 15, regarding transducers and the electronic instrumentation.
13.2 The measurement process and the measuring instrument
Measurement is the experimental procedure by which we can obtain quantitative knowledge of a component, system or process in order to describe, analyse and/or exert control over it. This requires that one or more quantities or properties which are descriptive of the measurement object, called the measurands, are individuated. The measurement process then basically consists of assigning numerical values to such quantities or, more formally stated, of yielding measures of the measurands. This should be accomplished in an empirical and objective way, i.e. based on experimental procedures and following rules which are independent of the observer. As a relevant consequence of the numerical nature of the measure of a quantity, measures can be used to express facts and relationships involving quantities through the formal language of mathematics.
The practical execution of measurements requires the availability and proper use of measuring instruments. A measuring instrument has the ultimate and essential role of extending the capability of the human senses by performing a comparison of the measurand against a reference and providing the result expressed in a suitable measuring unit. The output of a measuring instrument represents the measurement signal, which in today's instruments is most frequently presented in electrical form.
The process of comparison against a reference may be direct or, more often, indirect. In the former case, the instrument provides the capability of comparing the unknown measurand against reference samples of variable magnitude and detecting the occurrence of the equality condition (e.g. the arm-scale with sample masses, or the graduated length ruler). In the latter case, the instrument's functioning is based on one or more physical laws and phenomena embodied in its construction, which produce an observable effect that is related to the measurand in a quantitatively known fashion (e.g. the spring dynamometer).
The indirect comparison method is often the more convenient and practicable one; think, for instance, of the measurement of an intensive quantity such as temperature. Motion and vibration measuring instruments most frequently rely on an indirect measuring method.
Regardless of whether the measuring method is direct or indirect, it is fundamental for achieving objective and universally valid measures that the adopted references are in an accurately known relationship with some conventionally agreed standard. Given a measuring instrument and a standard, the process of determination and maintenance of this relationship is called calibration. A calibrated and properly used instrument ensures that its measures are traceable to the adopted standard, and they can therefore be assumed to be comparable to the measures obtained by different instruments and operators, provided that calibration and proper use are in turn guaranteed.
If we refer back to the definition of measurement, it can be recognized that measurement is intrinsically connected with the concept of information. In fact, measuring instruments can be thought of as information-acquiring machines which are required to provide and maintain a prescribed functional relationship between the measurand and their output [1]. However, measurement should not be considered merely as the collection of information from the real world, but rather as the extraction of information, which requires understanding, skill and attention from the experimenter. In particular, it should be noted that even the most powerful signal postprocessing techniques and data treatment methods can only help in retrieving the information embedded in the raw measurement data; they have no capability of increasing the information content. As such, they should not be misleadingly regarded as a substitute for good measurements, nor as a fix for poor measurement data. Therefore, carrying out good measurements is of primary importance and should be considered an unavoidable prerequisite to any further analysis. A fundamental limit to the achievable knowledge of the measurement object is set at this stage, and there is no way to overcome such a limit in subsequent steps other than by performing better measurements.
13.3 Measurement errors and uncertainty
After realizing the importance of making good measurements as a necessary first step, we may want to be able to determine when measurements are good or, at least, satisfactory for our needs. In other words, we become concerned with the problem of qualifying measurement results on the basis of some quantifiable parameter which characterizes them and allows us to assess their reliability. We are essentially interested in knowing how well the result of the measurement represents the value of the quantity being measured. Traditionally, this issue has been addressed by making reference to the concept of measuring error, and error analysis has long been considered an essential part of measurement science.
The concept of error is based on the reasonable assumption that a measurement result only approximates the value of the measurand and is unavoidably different from it, i.e. it is in error, due to imperfections inherent in operation under nonideal conditions. Blunders coming from gross defects or malfunctioning in the instrumentation, or from improper actions by the operator, are not considered measuring errors and of course should be carefully avoided.
In general, errors are viewed as having two components, namely a random and a systematic component. Random errors are considered to arise from unpredictable variations of influence effects and factors which affect the measurement process, producing fluctuations in the results of repeated observations of the measurand. These fluctuations spoil the ideal one-to-one relationship between the measurand and its measured value. Random errors cannot be compensated for, but only treated statistically. By increasing the number of repetitions, the average effect of random errors approaches zero or, more formally stated, their expectation or expected value is zero. Systematic errors are considered to arise from effects which influence the measurement results in a systematic way, i.e. always in the same direction and by the same amount. They can originate from known imperfections in the instrumentation or in the procedure, as well as from unknown or overlooked effects. The latter sources in principle always exist, due to the incompleteness of our knowledge, and can only, hopefully, be reduced to a negligible level. Conversely, the former sources, as they are known, can be compensated for by applying a proper correction factor to the measurement results. After the correction, the expected value of the systematic errors is zero.
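The different behaviour of the two error components under repetition and correction can be illustrated with a short simulation (a sketch assuming NumPy; the measurand value, the bias and the noise level are hypothetical numbers chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

true_value = 20.00           # hypothetical measurand (say, a length in mm)
bias = 0.15                  # known systematic error of the instrument
noise_sd = 0.05              # standard deviation of the random error

readings = true_value + bias + noise_sd * rng.standard_normal(1000)

# Averaging repeated observations drives the random error toward zero...
mean_reading = readings.mean()
# ...but leaves the systematic error untouched; a correction removes it.
corrected = mean_reading - bias
print(mean_reading, corrected)
```

Averaging shrinks the random scatter, but only the explicit correction removes the systematic part.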
Although followed for a long time, the approach based on the concept of measurement error has an intrinsic inconsistency, due to the impossibility of determining the value of a quantity with absolute certainty. In fact, the true value of a quantity is unknown and ultimately unknowable, since it could only be determined by measurement which, in turn, is recognizably imperfect and can only provide approximate results. As a consequence, the measurement error is unknowable as well, since it represents the deviation of the measurement result from the unknowable true value. As such, the concept of error cannot provide a quantitative and consistent means to qualify measurement results on a theoretically sound basis.
As a solution to the problem, a different approach has been developed in the last few decades and is currently adopted and recommended by the international metrological and standardization institutions [2]. It is based on recognizing that when performing a measurement we obtain only an estimate of the value of the measurand, and we are uncertain of its correctness to some extent. This degree of uncertainty is, however, quantifiable, even though we do not know precisely how much we are in error, since we do not know the true value. The term measurement uncertainty can therefore be introduced and defined as the parameter that characterizes the dispersion of the values that could be attributed to the measurand. In other words, the uncertainty is an estimate of the range of values within which the true value of a measurand lies according to our presently available knowledge. Therefore uncertainty is a measure of the 'possible error' in the estimated value of a measurand as obtained by measurement.
It is worth noting that the result of a measurement can unknowably be very close to the value of the measurand, hence having a small error; nonetheless, it may have a large uncertainty. On the other hand, even when the uncertainty is small there is no absolute guarantee that the error is small, since some systematic effect may have been overlooked because it is unknown or not recognized and, as such, not corrected for in the measurement result.

From this standpoint, a different meaning can be attributed to the term true value, in which the adjective 'true' loses its connotation of uniqueness and becomes formally unnecessary. The true value, or simply the value, of a measurand can be conventionally considered as the value obtained when the measurement with the lowest possible uncertainty according to the presently available knowledge is performed, i.e. when an exemplar measuring method which minimizes and corrects for every recognized influencing effect is used.
In practical cases, the idea of an exemplar method should be commensurate with the accuracy needed for the particular application; for instance, when we measure the length of a table with a ruler we consciously disregard the influence of temperature on both the table and the ruler, since we consider this effect to be negligible for our present measuring needs. We simply acknowledge that our result has an uncertainty which is higher than the best obtainable, but is suitable for our purposes.
However, we may be in the situation of negligible uncertainty of the instrument (the ruler in this case) compared to that caused by temperature on the measurement object (the table), in which case we are able to detect and measure the thermal expansion. The converse situation is that of negligible uncertainty of the measurement object compared to that of the measuring instrument and procedure. This is the case encountered when testing an instrument by using a reference or standard of low enough uncertainty to be ignored. Thus the value of the reference or standard can be conventionally assumed as the true value, and the test thought of as a means to determine the errors of the measuring instrument and procedure. Quantifying such errors and correcting those due to systematic effects is actually no different from performing a calibration of the measuring instrument under test.
Summarizing, the introduction of the concept of uncertainty removes the inconsistency of the theory of errors and directly provides an operational means for characterizing the validity of measurement results. In practice, there are many possible sources of uncertainty which, in general, are not independent, for example: incomplete definition of the measurand; the effect of interfering environmental conditions and noise; inexact calibration and the finite discrimination capability of measuring instruments, together with variations in their readings in repeated observations under apparently identical conditions; unconscious personal bias in the operation of the experimenter.
In principle, the influence of each conceivable source of uncertainty could be evaluated by the statistics of repeated observations. In practical cases this is essentially impossible and, therefore, many sources of uncertainty can be more conveniently quantified a priori by analysing, with scientific judgement, the pool of available information, such as tabulated data, previous measurement results and instrument specifications. The results of the two evaluation methods are called type A and type B uncertainties respectively; they are classified as different according to their derivation, but do not differ in nature and, therefore, are directly comparable. A detailed treatment of the methods used to evaluate uncertainty can be found in [2] and [3].
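The way the two evaluations are combined in practice can be sketched as follows (the readings and the ±0.01 data-sheet limit below are hypothetical numbers; the procedure follows the standard practice described in [2]):

```python
import numpy as np

# Hypothetical repeated observations of the same measurand.
readings = np.array([9.98, 10.02, 10.01, 9.97, 10.03, 9.99, 10.00, 10.02])

n = len(readings)
mean = readings.mean()
s = readings.std(ddof=1)      # experimental standard deviation
u_a = s / np.sqrt(n)          # Type A standard uncertainty of the mean

# A Type B contribution taken, say, from an instrument data sheet:
# a +/-0.01 limit treated as a uniform distribution.
u_b = 0.01 / np.sqrt(3)

# Combined standard uncertainty (the two contributions are independent).
u_c = np.hypot(u_a, u_b)
print(mean, u_a, u_c)
```

Being directly comparable, the Type A and Type B contributions are combined in quadrature into a single standard uncertainty.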
13.4 Measuring instrument functional model
Irrespective of the measured variable and the operating principle involved, a measuring instrument can be represented by the block diagram of Fig. 13.1. This is a simplified and general model which focuses on the fundamental features that, with various degrees of sophistication in the implementation, are typical of every measuring instrument.
The measuring instrument can be seen as composed of three cascaded blocks, which provide an information transfer path from the measurand quantity to the observer. The first block, named the sensing element, is the
no better than 0.1 °C to the naked eye, which is, nevertheless, all that is needed in many applications.
It should be observed that the distinction between functional blocks does not necessarily reflect a physical separation of such blocks in real instruments. On the contrary, there are many cases in which the various functions are somewhat distributed among different pieces of hardware, so that it is difficult, besides being essentially useless, to distinguish and parse them.
Nowadays, most of the measurement tasks in any field are performed by instruments and systems which measure physical quantities by electronic means. Basically, the use of electronics in measuring instrumentation offers higher performance, improved functionality and reduced cost compared to purely mechanical systems. A very general block representation of an electronic measuring instrument or system is given in Fig. 13.1(b), which is fairly similar to that of Fig. 13.1(a), with some important differences. In this case the sensing function is performed by sensors, or transducers, which respond to the physical stimulus caused by the measurand with a corresponding electrical signal. Such a signal is then amplified, possibly converted into a digital format, and processed in order to extract the information of interest contained in the sensor signal and filter out the unwanted spurious components. All such processing operations are carried out in the electrical domain, irrespective of the nature of the measurand, and therefore they may take advantage of the high capabilities of modern electronic processing circuitry.
The results obtained can then be presented to the observer through a display stage, and/or possibly stored in some form of memory device, most typically electronic or magnetic. The memory storage capability offered by many electronic measuring instruments is of fundamental importance, as it enables analysis, processing and comparisons of measurement data to be performed offline, that is, arbitrarily later than the time when the data are captured. Some instruments are optimized for extremely fast cycles of data storage-retrieval-processing, so that they can perform specialized functions, such as filtering, correlation or frequency transforms, in real time, i.e. with a delay that is inessential for the particular application.
Transducers and electronic signal amplification and processing will be treated in Chapters 14 and 15, respectively.
A fundamental fact resulting from both block diagrams of Fig. 13.1 is that the measuring instrument occupies the position at the interface between the observer and the measurand. Moreover, all of them are under the global influence of the surrounding environment. This influence is generally a cause of interference on the information transfer path from the measurand to the observer, producing a perturbing action which ultimately worsens the measurement uncertainty. This fact may be represented by considering the output y of a measuring instrument to be a function not only of the measurand x, as we would ideally like, but also of a number of further quantities q_i related to the effects of the boundary conditions. Such quantities are named the influencing or interfering quantities. Typical influencing quantities may be of an environmental nature, such as temperature, barometric pressure and humidity, or related to the instrument operation, such as posture, loading conditions and power supply.
Besides observing that y, x and the quantities q_i are actually functions of time, y(t), x(t) and q_i(t), we may even consider time itself as an influencing quantity, since in the most general case the output of a real measuring instrument depends to some extent on the time t at which the measurement is performed. This means that the same input combination of measurand and influencing quantities applied at different instants of the instrument's operating life may, in general, produce different output values, due to instrument ageing and drift. Considered as an influencing quantity, time has a peculiar nature due to the fact that, unlike what can theoretically be done for the q_i's, the observer cannot exert any kind of control over it.
Developing a formal description of measuring instruments which globally takes into account all the variables involved as functions of time, with the aim of deriving the time evolution of the output, is a difficult task. Usually, a more practicable approach is followed which, besides, provides a better understanding of the instrument's performance and a deeper insight into its operation. It consists of distinguishing between static and dynamic behaviour, each of which can be analysed separately. Operation under static conditions can be analysed by neglecting the time dependence of the measurand and the influencing quantities, therefore avoiding the solution of complicated partial differential equations. The consequent reduction in complexity enables a detailed description of the output-to-measurand relationship and the evaluation of the impact due to influencing quantities.

On the other hand, the analysis of dynamic operation is essentially performed by taking into account the time evolution of the measurand only, and the resultant time dependence of the instrument output, thereby requiring only ordinary differential equations. The effect of the influencing quantities on dynamic behaviour is generally evaluated by a semiquantitative extension of the results obtained for the static analysis. Though this approach is not strictly rigorous, it offers a viable solution to an otherwise unmanageable problem and, as such, it is of great practical utility.
13.5 Static behaviour of measuring instruments
Let us assume that the measurand x and the influencing quantities q_i are constant and independent of time. It should be noted that this assumption is not in contradiction with regarding x and the q_i's as variables. In fact, we consider that x and the q_i's are subject to variations over a range of values, but we do not take into account the time needed by such variations to take place. In other words, we consider only the static combinations of constant inputs once the transients have died out. Under such an assumption, the relationship between the instrument output y and the measurand x, the q_i's and the time t at which the measurement is performed is given by the conversion function

y = f(x, q_1, q_2, …, q_n, t)   (13.1)

The quantities

∂f/∂x, ∂f/∂q_i and ∂f/∂t   (13.2)

represent the sensitivities of the measuring instrument in response to the measurand x, the ith influence quantity q_i and the time t. The term ∂f/∂t is responsible for the time stability of the conversion characteristic or, better, for its instability. Higher values of ∂f/∂t imply a more pronounced ageing effect on the instrument and require a more frequent calibration. An instrument for which ∂f/∂t = 0 is called time-invariant.

The instrument is the more selective for x the lower the values of the terms ∂f/∂q_i are compared to ∂f/∂x, so that their effect on the output is negligible with respect to that of the measurand. If all the ∂f/∂q_i terms were ideally zero, the instrument would respond to the measurand only and would be called specific for x. In real cases, given the desired level of accuracy and having estimated the ranges of variability of x and the q_i's, the comparison between ∂f/∂x and the ∂f/∂q_i's allows us to determine the influence quantities which actually play a role and need to be taken into account in the case at hand.
In principle, the contribution of the significant influence quantities could be experimentally evaluated by varying each of them in turn over a given interval, while keeping the measurand and the other q_i's constant and monitoring the instrument output. In practice, this is hardly possible, and usually the contribution is estimated partly from experimental data and partly from theoretical predictions.
Of course, it is expected that the instrument is mostly responsive to the measurand x and, therefore, the above procedure is primarily applied to the experimental determination of the measurand-to-output relationship. The curve obtained in this way is the static calibration or conversion characteristic of the instrument under given conditions of the influencing quantities. Under varying conditions, a family of calibration characteristics is obtained, which contains information on the impact of the considered q_i's.
Assuming a reference condition in which the influencing quantities are kept constant at their nominal or average values q_0i, and ageing effects are neglected, it follows that the output y depends on the measurand only and eq (13.1) reduces to

y = f(x) (13.3)

The function f represents the instrument's static conversion characteristic in the reference condition. For the instrument to be of practical utility, f(x) should be monotonic so that its inverse function, which relates the instrument reading with the measurand, is single-valued.
The term S = df/dx is called the sensitivity of the instrument with respect to the measurand x. In general, the sensitivity is not constant throughout the measurand range but is itself a function of x, i.e. S = S(x). In most cases, however, the instrument is built to ensure a relationship of proportionality between y and x of the type y = kx + y_0. In these cases the instrument is said to be linear if y_0 = 0 and incrementally linear if y_0 ≠ 0, and the sensitivity S becomes a constant given by the coefficient k, which is typically called the instrument scale factor, calibration factor or conversion coefficient. The term y_0 is called the instrument offset and represents the output at zero applied measurand. Figure 13.2 shows the conversion characteristics for both an incrementally linear and a nonlinear instrument. For incrementally linear instruments, the variations in the coefficients k and y_0 about their reference values induced by the influencing quantities are generally adopted to specify their effect. Taking temperature as an example, we may therefore find widespread usage of the terms temperature coefficient of the scale factor and of the offset, meaning the temperature-induced variations in k and y_0 respectively.
It is very important to point out that nonlinear instruments may be linearized by considering a small interval of the input x about an average value x_0, and approximating dy with S(x_0)dx in such an interval. For suitably small variations around x_0 the sensitivity can therefore be assumed to be constant and equal to S(x_0), and the instrument can be considered locally linear. This procedure is the so-called small-signal linearization.
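As a numerical sketch of small-signal linearization (the square-law characteristic f(x) = c·x², the working point x_0 and all coefficients below are hypothetical, chosen only for illustration):

```python
# Small-signal linearization of a hypothetical square-law static
# characteristic y = f(x) = c * x**2 about a working point x0.

c = 2.0                       # assumed conversion coefficient
f = lambda x: c * x**2        # nonlinear static characteristic
S = lambda x0: 2.0 * c * x0   # analytic sensitivity S(x0) = df/dx at x0

x0, dx = 5.0, 0.01            # working point and a small deviation

y_true = f(x0 + dx)           # exact output
y_lin = f(x0) + S(x0) * dx    # locally linear approximation

# The residual is the second-order term c*dx**2, negligible for small dx:
print(y_true - y_lin)
```

The smaller dx is relative to x_0, the better the locally linear model tracks the true characteristic, which is the essence of small-signal operation.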
The property of linearity is extremely important for measuring instruments, as it is for every system, since it implies the validity of the superposition principle. Essentially, this means that a linear system responds to the sum of two inputs with an output which is the sum of the two single responses caused by each input when applied alone. As a consequence, linear systems, and linear instruments in particular, produce an output which is a scaled replica of the input, i.e. the readings of the instrument provide an undistorted image of the measurand variations.
Fig 13.2 Examples of (a) incrementally linear and (b) nonlinear conversion
characteristics.
It is worth noting that for an instrument to have a linear conversion characteristic there is no need for each of the blocks of Fig 13.1 to be linear. In fact, this is only a sufficient condition for overall linearity, and we may as well have several blocks with nonlinear behaviours which mutually cancel, giving rise to a globally linear instrument. This property is very often exploited when input or intermediate stages are intrinsically nonlinear, and such a nonlinearity is compensated for within an additional conversion stage or even within the presentation stage. As an example, you may think of an instrument that, to correct a nonlinearity of some intermediate stages, uses a needle indicator whose reading scale has unequally spaced marks, as happens in logarithmic paper. Of course, an unfortunate drawback of this expedient is the possible reduction of the indicator readability in some parts of its range.
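The cancellation of nonlinear stages can be sketched as follows; the exponential sensing stage and the logarithmic presentation stage are hypothetical, chosen so that the compensation is exact:

```python
import math

# Hypothetical two-stage instrument: a nonlinear input stage whose
# nonlinearity is compensated by the presentation stage.

def sensing_stage(x):
    # intrinsically nonlinear transduction (exponential response)
    return math.exp(0.5 * x)

def presentation_stage(v):
    # logarithmic reading scale that undoes the exponential
    return 2.0 * math.log(v)

def instrument(x):
    # cascade of the two stages
    return presentation_stage(sensing_stage(x))

# Each block is nonlinear, yet the overall conversion is linear
# (here the identity, i.e. unit scale factor and zero offset):
for x in (0.0, 1.0, 2.5, 4.0):
    print(x, instrument(x))
```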
13.6 Dynamic behaviour of measuring instruments
Let us consider that the measurand x is actually a function of time x(t) and assume that the effect of the influencing quantities is negligible. Besides, suppose that time ageing and drift phenomena are absent or, as generally happens, very slow compared to the time evolution of the measurand signal, so that they can be overlooked and the instrument considered as time-invariant. Then we may rewrite eq (13.3), which now takes the form

y(t) = F[x(t)] (13.4)
F is conceptually different from f in eq (13.3), since F is an operator, in the sense that it represents a correspondence between entities which are themselves functions of time and not scalar values as for f. In eq (13.4) F defines the dynamic conversion characteristic of the measuring instrument and generally contains time derivatives and integrals of both x(t) and y(t), giving rise to nonlinear integrodifferential equations.
We restrict the field of the many mathematical forms that F can take by assuming that it has the property of linearity and, therefore, we limit ourselves to considering linear instruments.
Briefly, a linear dynamic system, and an instrument in particular, is one for which the superposition principle is valid when input and output, respectively considered as cause and effect, are regarded as functions of time.
It is worth pointing out that the linearity of F, which could be indicated as dynamic linearity, is not equivalent to the linearity of f, that is the static linearity described in the preceding section. In fact, they refer to two different ideas of the concept of linearity, namely operational in the former case and functional in the latter. Indeed, dynamic linearity is a more restrictive condition than static linearity. That is, we may have a system for which the superposition principle holds for constant values of the input and, on the contrary, does not apply when the input is considered as a function of time.
For example, a system for which the input-output relationship is given by y = bx + c(dx/dt)^2 is not linear in the dynamical sense, though it is statically linear, since for x independent of time the output becomes y = bx. Conversely, dynamic linearity implies static linearity.
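The distinction can be checked numerically on a hypothetical system of the form y = b·x + c·(dx/dt)², which reduces to y = bx for constant inputs; the coefficients and test inputs below are arbitrary:

```python
import math

# A system that is statically linear but dynamically nonlinear:
# y = b*x + c*(dx/dt)**2

b, c = 3.0, 1.0   # assumed coefficients

def response(x, dxdt):
    return b * x + c * dxdt**2

t = 0.3
# two time-varying inputs, x1 = sin(t) and x2 = t, with their derivatives
x1, d1 = math.sin(t), math.cos(t)
x2, d2 = t, 1.0

y_of_sum = response(x1 + x2, d1 + d2)            # response to x1 + x2
sum_of_y = response(x1, d1) + response(x2, d2)   # sum of responses

# Superposition fails for time-varying inputs (difference 2*c*d1*d2 != 0):
print(y_of_sum - sum_of_y)

# but holds for constant inputs (derivatives equal to zero):
print(response(x1 + x2, 0.0) - (response(x1, 0.0) + response(x2, 0.0)))
```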
For a time-invariant, dynamically linear instrument for which input and output are real functions of time, eq (13.4) takes the form of a linear ordinary differential equation with constant coefficients, which can be generally written as

a_n d^n y/dt^n + … + a_1 dy/dt + a_0 y = b_m d^m x/dt^m + … + b_1 dx/dt + b_0 x (13.5)

The coefficients a_i and b_i are a combination of instrument parameters assumed to be independent of time, and are therefore real and constant numbers. Equations of the form of eq (13.5) are encountered in a wide range of fields of engineering and science, and standard methods have been developed for their solution. We will not go into detail about this aspect, on which the interested reader can find many exhaustive references, such as [4]. We would rather like to point out the main lines of reasoning that can be followed to approach the problem, and illustrate the modelling of measuring instruments as dynamic systems [5, 6].
The first approach is that of directly solving eq (13.5) in the time domain. It is well known that the general form of the solution y(t) is

y(t) = y_f(t) + y_i(t) (13.6)

where y_f(t) is the forced response, and y_i(t) is the free response determined by the initial conditions. In turn, y_f(t) is the sum of a steady-state term y_fS(t) and a transient term y_fT(t).
The time-domain approach becomes rather complex unless low-order systems with simple input functions are considered, and is therefore of limited practical utility. Instead, it is very fruitful to take advantage of the property of linearity and the consequent validity of the superposition principle. The generic input x(t) can be decomposed as a finite or infinite sum of elementary functions for which eq (13.5) simplifies to a set of readily solvable algebraic equations. The solutions of such equations are then summed to produce the overall response y(t) to the original stimulus x(t). Depending on the type of the elementary functions used as a decomposition basis, either complex exponentials e^(iωt) or damped complex exponentials e^((α+iω)t) with α real, the above procedure leads to the methods of Fourier and Laplace transform respectively.
In the Fourier transform method, solving eq (13.5) in the time domain becomes equivalent to solving the following complex algebraic equation in the frequency domain:

Y(ω) = T(ω) X(ω) (13.7)

where X(ω) and Y(ω) are the Fourier transforms of x(t) and y(t), given by

X(ω) = ∫_{-∞}^{+∞} x(t) e^(-iωt) dt,  Y(ω) = ∫_{-∞}^{+∞} y(t) e^(-iωt) dt (13.8)

and T(ω) is called the frequency, or sinusoidal, response function of the system. For a given angular frequency ω, T(ω) is a complex number whose magnitude and argument respectively represent the gain and phase shift between the sinusoidal input of angular frequency ω and the corresponding sinusoidal output.
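The extraction of gain and phase from a frequency response function can be sketched with complex arithmetic; the first-order form T(ω) = k/(1 + iωτ) and the numeric values below are assumptions made only for this illustration:

```python
import cmath
import math

# Frequency response function of an assumed first-order instrument:
# T(omega) = k / (1 + i*omega*tau)

k, tau = 10.0, 0.01   # assumed sensitivity and time constant

def T(omega):
    return k / (1.0 + 1j * omega * tau)

omega = 1.0 / tau               # evaluate at the corner frequency
gain = abs(T(omega))            # magnitude: output/input amplitude ratio
phase = cmath.phase(T(omega))   # argument: phase shift in radians

print(gain)                 # k/sqrt(2), i.e. the -3 dB point
print(math.degrees(phase))  # -45 degrees at omega = 1/tau
```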
The Fourier transform method can be applied to the class of functions of time for which the transform exists, i.e. for which the integral given in eq (13.8) converges. In the most general case, such functions are suitably regular nonperiodic functions, with their transform being nonzero over a continuous spectrum of frequencies. A subset of such functions is represented by the periodic functions, for which the integral of eq (13.8) becomes a summation over a discrete spectrum of frequencies and the transform becomes the series of Fourier coefficients. The method of analysis based on the expression of periodic functions of time as Fourier series is called harmonic analysis.
In the Laplace transform method, damped complex exponentials are used as the elementary functions constituting the decomposition basis, thereby extending the transform method to functions which are not Fourier-transformable but, nevertheless, have great practical importance, such as the linear ramp and exponential functions. Again, solving eq (13.5) in the time domain becomes equivalent to solving the following complex algebraic equation in the domain of the complex angular frequency s = α + iω:

Y(s) = T(s) X(s) (13.9)
where X(s) and Y(s) are complex functions, being the Laplace (or L-) transforms of x(t) and y(t) given by

X(s) = ∫_{0}^{+∞} x(t) e^(-st) dt (13.10a)

Y(s) = ∫_{0}^{+∞} y(t) e^(-st) dt (13.10b)

and the complex function T(s) is called the transfer function of the system. As the L-transform of the impulse function δ(t) is unity, T(s) is the L-transform of the system response when subject to an impulsive stimulus, sometimes called a ballistic excitation.
As can be seen by comparing eqs (13.8) with eqs (13.10), the L-transform is a generalization of the Fourier transform based on substituting iω with s = α + iω, where α is such that the integral converges. The Laplace transform method offers the desirable advantage that it takes into account the initial conditions in a consistent way, thereby being a powerful tool for dealing with transient problems.
The use of the transform method enables us to describe the dynamic behaviour of a linear instrument by simply analysing its frequency response function or its transfer function. In turn, these may be derived by defining elementary blocks which compose the instrument and properly combining the respective T(ω) or T(s) of such blocks. As a rule, the frequency response or transfer function of cascaded blocks is the product of the individual T(ω) or T(s). As a result, a block representation of the instrument can be obtained, which is shown in Fig 13.3 for both the Fourier and Laplace transforms.
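The cascade rule can be sketched by evaluating two hypothetical first-order blocks at a point on the imaginary axis; since the overall T(s) is the product of the individual ones, their magnitudes multiply and the gains in decibels add:

```python
import math

# Cascade of two blocks: the overall transfer function is the product
# of the individual T(s). Both blocks are hypothetical first-order
# stages of the form T(s) = k / (1 + s*tau).

def first_order(k, tau):
    return lambda s: k / (1.0 + s * tau)

T1 = first_order(2.0, 0.1)
T2 = first_order(5.0, 0.02)
cascade = lambda s: T1(s) * T2(s)

db = lambda z: 20.0 * math.log10(abs(z))   # gain in decibels

s = 1j * 3.0   # evaluate on the imaginary axis, s = i*omega
# dB gains of cascaded blocks add, since magnitudes multiply:
print(db(cascade(s)), db(T1(s)) + db(T2(s)))

# DC gain (s = 0) is the product of the individual static gains:
print(abs(cascade(0)))
```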
It is important to point out that several relevant features of the system under consideration can be analysed directly in the frequency domain by using T(ω) and T(s), without the need to formulate the problem in the time domain, thereby avoiding the related difficulties. T(ω) and T(s) can be experimentally measured by monitoring the outputs generated by swept-sine and impulse inputs respectively. In practice, it is sometimes preferable to use a step excitation in place of the impulse, which may be more difficult to generate. Since the unitary step function 1(t) is the integral of the impulse δ(t), if the system is linear the step response can be differentiated to obtain the impulse response and, thereafter, the transfer function T(s).

Fig 13.3 Block-diagram representation of a measuring instrument in the (a) Fourier and (b) Laplace domains.
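The step-to-impulse procedure can be sketched numerically; the first-order step response y(t) = k(1 − e^(−t/τ)) and its parameters below are assumed purely for illustration. Differentiating the recorded step response recovers the impulse response (k/τ)e^(−t/τ):

```python
import math

# Recovering the impulse response of a linear system from its step
# response by differentiation (assumed first-order example).

k, tau = 1.0, 0.5   # assumed sensitivity and time constant

def step_response(t):
    # response to the unit step 1(t)
    return k * (1.0 - math.exp(-t / tau))

def impulse_response(t):
    # analytic derivative of the step response
    return (k / tau) * math.exp(-t / tau)

t, dt = 0.8, 1e-6
# central-difference derivative of the "measured" step response
h_est = (step_response(t + dt) - step_response(t - dt)) / (2.0 * dt)

print(h_est, impulse_response(t))   # the two agree closely
```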
Three main models of measuring instruments can be distinguished, which differ in their dynamic behaviour according to the degree n of the denominator of their respective transfer functions, alternatively seen as the order of the differential equation in the time domain. They are the zeroth-, first- and second-order instrument models.
For a zeroth-order instrument the input-output relationship in the time domain is given by

y(t) = k x(t) (13.11)

which in the s-domain becomes

T(s) = Y(s)/X(s) = k (13.12)

We can observe that, both in the time- and frequency-domain representations, the output is proportional to the input. This means that y(t) instantly follows x(t) whatever its time evolution is, differing from it only by the scale factor k, which represents the instrument sensitivity. In particular, the step response of a zeroth-order instrument is a step function itself, as shown in Fig 13.4, and the sinusoidal frequency response T(ω) is flat throughout the frequency axis (Fig 13.5). An example of a zeroth-order instrument is a resistive potentiometer displacement transducer.
For a first-order instrument the input-output relationship in the time domain is given by

τ dy/dt + y = k x(t) (13.13)

which in the s-domain becomes

T(s) = k/(1 + sτ) (13.14)

where τ is the instrument time constant. The step response reaches 63.2% of its final value at t = τ and asymptotically reaches the steady-state value k. The system then behaves as a first-order low-pass filter with a -3 dB bandwidth extending from DC, i.e. zero frequency, to ω_c = 1/τ. An example of a first-order instrument is a thermometer with a finite thermal resistance and capacitance.
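The first-order behaviour can be sketched by integrating a model of the form τ·dy/dt + y = k·x with a simple Euler scheme for a unit step input; the parameter values below are arbitrary. At t = τ the output has reached about 63.2% of its final value:

```python
import math

# Euler integration of a first-order instrument model
# tau * dy/dt + y = k * x, driven by a unit step x(t) = 1.

k, tau = 1.0, 2.0    # assumed sensitivity and time constant
dt = 1e-4            # integration step
y, t = 0.0, 0.0
y_at_tau = None

while t < 5.0 * tau:
    if y_at_tau is None and t >= tau:
        y_at_tau = y                   # sample the response at t = tau
    y += dt * (k * 1.0 - y) / tau      # dy/dt = (k*x - y)/tau
    t += dt

print(y_at_tau)   # about k*(1 - 1/e), i.e. roughly 0.632
print(y)          # close to the steady-state value k
```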
For a second-order instrument the input-output relationship in the time domain is given by

d²y/dt² + 2ζω_n dy/dt + ω_n² y = k ω_n² x(t) (13.15)

which in the s-domain becomes

T(s) = k ω_n²/(s² + 2ζω_n s + ω_n²) (13.16)

where ω_n is the undamped natural angular frequency and ζ is the damping ratio.
Fig 13.7 Frequency response of a first-order instrument: (a) magnitude; (b) phase
([6, p 122], reproduced with permission).
Fig 13.8 Step response of a second-order instrument ([6, p 129], reproduced with permission).
Fig 13.9 Frequency response of a second-order instrument: (a) magnitude; (b) phase ([6, p 135], reproduced with permission).
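A numerical sketch of the second-order model can be obtained by Euler integration of a system of the form d²y/dt² + 2ζω_n·dy/dt + ω_n²·y = kω_n²·x for a unit step input; the sensitivity, natural frequency and damping ratio below are arbitrary. After the transient dies out the output settles at the static value k·x:

```python
# Euler integration of a second-order instrument model
# d2y/dt2 + 2*zeta*wn*dy/dt + wn**2 * y = k * wn**2 * x, unit step input.

k, wn, zeta = 1.0, 10.0, 0.7   # assumed sensitivity, natural freq., damping
dt = 1e-5
y, v, t = 0.0, 0.0, 0.0        # output, its derivative, time

while t < 2.0:                 # long enough for the transient to die out
    a = k * wn**2 * 1.0 - 2.0 * zeta * wn * v - wn**2 * y
    y += dt * v
    v += dt * a
    t += dt

print(y)   # settles near the static value k*x = 1.0
```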