Theorem A.5 If A ∈ C^(n×n) has eigenvalues λ1, λ2, …, λn, the following statements are equivalent:
1 A is normal.
2 A is unitarily diagonalizable.
3 Σ_(i,j) |a_ij|² = Σ_j |λ_j|².
4 There is an orthonormal set of n eigenvectors of A.
The equivalence of 1 and 2 in Theorem A.5 is often called the spectral theorem for normal matrices. For our present purposes we recall that a Hermitian matrix is just a special case of a normal matrix, and we stress that—as expected—the statement of the theorem says nothing about A having distinct eigenvalues (in fact, two or more eigenvalues could be equal).
Then, summarizing the results of the preceding discussion, we can say that a complex Hermitian matrix (or a real symmetric matrix) A:
1 has real eigenvalues;
2 is always nondefective (which means that—regardless of the existence of multiple eigenvalues—there always exists a set of n linearly independent eigenvectors, which, in addition, are mutually orthogonal);
3 is unitarily (orthogonally) similar to the diagonal matrix of eigenvalues diag(λ_j). Moreover, the unitary (orthogonal) similarity matrix is the matrix X of eigenvectors, in which the jth column is the jth eigenvector.
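These three properties can be checked numerically. The following is a minimal sketch; the matrix A is an arbitrary illustrative choice, and numpy's `eigh` routine is used because it is designed for Hermitian/symmetric matrices:

```python
import numpy as np

# A real symmetric matrix (a special case of a Hermitian, hence normal, matrix).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh returns real eigenvalues and an orthonormal set of eigenvectors
# (the columns of X), as guaranteed for Hermitian matrices.
lam, X = np.linalg.eigh(A)

# X is orthogonal (unitary in the real case): X^T X = I.
assert np.allclose(X.T @ X, np.eye(3))

# A is orthogonally similar to diag(lambda_j).
assert np.allclose(X.T @ A @ X, np.diag(lam))
```

Note that A deliberately has no special structure beyond symmetry: the decomposition exists regardless of whether the eigenvalues are distinct.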
We close this section by briefly considering special classes of Hermitian matrices. An n×n Hermitian matrix A is said to be positive definite if

x^H A x > 0    (A.32a)

for all nonzero vectors x. If the strict inequality in eq (A.32a) is weakened to

x^H A x ≥ 0    (A.32b)

then A is said to be positive semidefinite. Moreover, by simply reversing the inequalities in eqs (A.32a) and (A.32b), we can define the concepts of negative definite and negative semidefinite matrices.
Note that, if A is Hermitian, the definitions above tacitly imply that the term x^H A x—which is called the Hermitian form generated by A—is always a real number, and so we can also speak of a positive definite Hermitian form (eq (A.32a)) or a positive semidefinite Hermitian form (eq (A.32b)).
The real counterparts of Hermitian forms are called quadratic forms and are expressions of the type x^T A x, where A is a real symmetric matrix.
Quadratic forms arise naturally in many branches of physics and engineering, and—as we also saw throughout many chapters of this book—the subject of engineering vibrations is no exception. Clearly, the appropriate definition of a positive definite matrix reads in this case

x^T A x > 0    (A.33)

for all nonzero vectors x. Similarly, the relation x^T A x ≥ 0 for all nonzero vectors x defines a positive semidefinite matrix.
For our purposes the following result will suffice, and we refer the interested reader to specialized literature for more details.
Theorem A.6 A Hermitian matrix is positive semidefinite if and only if all its eigenvalues are nonnegative. It is positive definite if and only if all its eigenvalues are positive (clearly, this same theorem applies to real symmetric matrices).
Finally, it is left to the reader to show that the trace and the determinant of a positive definite matrix are also positive.
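Theorem A.6 and the trace/determinant property can be illustrated numerically; the matrix below is an arbitrary positive definite example:

```python
import numpy as np

# An illustrative symmetric positive definite matrix.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Theorem A.6: positive definite <=> all eigenvalues positive.
lam = np.linalg.eigvalsh(A)
assert np.all(lam > 0)

# Consequently the trace (sum of the eigenvalues) and the determinant
# (product of the eigenvalues) are positive as well.
assert np.trace(A) > 0 and np.linalg.det(A) > 0

# Direct check of the quadratic form x^T A x > 0 on random nonzero vectors.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(2)
    assert x @ A @ x > 0
```

The trace/determinant claim follows immediately from the eigenvalue characterization, since both are the sum and product of positive numbers.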
A.4 Matrices and linear operators
Some aspects of the strict relationship between linear operators on a vector space and matrices have been somewhat anticipated in Section A.1. Given a basis in an n-dimensional vector space V on a scalar field F, the statement that the mapping x ↦ [x] (i.e. the mapping that associates the vector x with its components relative to the chosen basis) is an isomorphism constitutes a fundamental result which allows us to manipulate vectors by simply operating on their components. In fact, according to these developments, we saw in Section A.1 how the components of a vector change when we choose a different basis in the same vector space (in mathematical terminology, the sentence ‘the mapping x ↦ [x] is an isomorphism but it is not a canonical isomorphism’ translates the fact that it is indeed injective and surjective, but the coordinates of a given vector change under a change of basis and therefore depend on the choice of the basis).
In a similar way, when we have to deal with linear operators on a vector space, it can be shown that—after a basis has been chosen in the space V—any given linear operator is represented by an n×n matrix. It can also be shown that—given a basis in V—the mapping that associates a given linear operator with its representative matrix relative to the chosen basis is an isomorphism between the vector space of linear operators from V to V and the vector space of n×n square matrices. Simple examples of such mappings are the null operator—i.e. the operator Z for which Zx = 0 for all x—which is represented by the null matrix, and the identity operator—i.e. the operator I for which Ix = x for all x—which is represented by the unit matrix.
In general, however, when a different basis is chosen in V, the same linear operator is represented by a different matrix. So, the question arises: since different matrices may represent the same linear operator, what is the relationship between any two of them? The answer is that any two matrices which represent the same linear operator are similar. Let us examine these points in more detail.
First of all, we must determine what we mean by a matrix representation of a given linear operator. To this end, let V be an n-dimensional vector space and let T : V → V be a linear transformation on V. If we choose a basis u1, …, un in the vector space, the action of T on any vector is determined once one knows the vectors Tu1, …, Tun, because any x has a unique representation x = Σ_i x_i u_i. Every vector Tu_i, in turn, can be written as a linear combination

Tu_i = Σ_k t_ki u_k    (A.34)

The n² coefficients t_ki can be arranged in a square matrix T, which is called the matrix representation of the operator T relative to the basis u1, …, un. The entries of the matrix clearly depend on the chosen basis, and this fact can be emphasized by indicating this matrix by [T]_u so that, by choosing a different basis v1, …, vn in V, we will obtain the matrix representation [T]_v of T.
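The construction of [T]_u can be sketched numerically. The operator T and the basis below are illustrative assumptions, not data taken from the text:

```python
import numpy as np

# Hypothetical linear operator on R^2: T(x1, x2) = (x1 + x2, 2*x2).
def T(x):
    return np.array([x[0] + x[1], 2.0 * x[1]])

# A (non-standard) basis u1, u2, stored as the columns of U.
U = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Column i of [T]_u holds the coefficients t_ki of Tu_i = sum_k t_ki u_k,
# i.e. the coordinates of T(u_i) relative to the basis u.
Tu = np.column_stack([T(U[:, i]) for i in range(2)])
T_u = np.linalg.solve(U, Tu)

# Consistency check: for any x, [Tx]_u = [T]_u [x]_u.
x = np.array([3.0, -2.0])
assert np.allclose(np.linalg.solve(U, T(x)), T_u @ np.linalg.solve(U, x))
```

The key step is `np.linalg.solve(U, ...)`, which computes coordinates relative to a basis given as matrix columns.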
At this point, before examining the relationship between two different representations of T, we need a preliminary result: we will show that—in a given n-dimensional vector space in which two bases u1, …, un and v1, …, vn have been chosen—the ‘change-of-basis’ matrix is always nonsingular. In fact, we can write

v_i = Σ_j c_ji u_j,    u_j = Σ_i ĉ_ij v_i    (A.35)

where i = 1, 2, …, n in the first equation (and the n² coefficients c_ji can be arranged in a square matrix C, which is the change-of-basis matrix from the basis u to v) and j = 1, 2, …, n in the second equation (and the n² coefficients ĉ_ij can be arranged in a square matrix Ĉ, which is the change-of-basis matrix from the basis v to u). Then from eqs (A.35) we get

v_i = Σ_j c_ji u_j = Σ_k (Σ_j ĉ_kj c_ji) v_k

and since any vector can be expressed uniquely as a linear combination of the vectors v1, …, vn, the term within brackets must satisfy

Σ_j ĉ_kj c_ji = δ_ki    (A.36a)

By the same token, it can also be shown immediately that

Σ_i c_ki ĉ_ij = δ_kj    (A.36b)

Equations (A.36a) and (A.36b) in matrix form read, respectively,

ĈC = I,    CĈ = I    (A.37)

meaning that Ĉ = C⁻¹ (or, equivalently, C = Ĉ⁻¹). Therefore, a change-of-basis matrix C is always nonsingular.
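A small numerical sketch of eqs (A.35)–(A.37), with two arbitrarily chosen bases of R³:

```python
import numpy as np

# Two hypothetical bases of R^3, stored as the columns of U and V
# (illustrative choices; any two bases would do).
U = np.eye(3)
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Column i of C holds the coefficients c_ji of v_i = sum_j c_ji u_j,
# i.e. the coordinates of v_i relative to the basis u.
C = np.linalg.solve(U, V)
# C_hat goes the other way: u_j = sum_i chat_ij v_i.
C_hat = np.linalg.solve(V, U)

# Eqs (A.36)-(A.37): the two change-of-basis matrices are mutual inverses,
# so C is necessarily nonsingular.
assert np.allclose(C_hat @ C, np.eye(3))
assert np.allclose(C @ C_hat, np.eye(3))
```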
Also, with a slight change of notation, we can re-express the result of Section A.1 by noting that, since any vector x can be written as

x = Σ_i x_i^(v) v_i = Σ_j x_j^(u) u_j    (A.38)

we can substitute the first of eqs (A.35) into the first of eqs (A.38) to obtain

x = Σ_i x_i^(v) Σ_j c_ji u_j = Σ_j (Σ_i c_ji x_i^(v)) u_j

which is equivalent to the matrix equation

[x]_u = C [x]_v    (A.39)

where the notation [x]_v means that we are considering the components of x relative to the basis v1, …, vn. Similarly, [x]_u indicates the components of x relative to the basis u1, …, un, and C indicates the change-of-basis matrix.
The rather cumbersome (but self-explanatory) notation of eq (A.39) will now serve our purposes in order to obtain the relation between two matrix representations of the same linear operator. In fact, in terms of components, the action of a linear operator T on a vector x can be written

[y]_v = [T]_v [x]_v,    [y]_u = [T]_u [x]_u    (A.40)

where we defined y = Tx. Now, substituting eq (A.39) and its counterpart for the vector y into the second of eqs (A.40) yields

C [y]_v = [T]_u C [x]_v

so that, premultiplying both sides by the matrix C⁻¹, we get

[y]_v = C⁻¹ [T]_u C [x]_v

which implies (compare with the first of eqs (A.40))

[T]_v = C⁻¹ [T]_u C    (A.41)

that is, the matrices [T]_u and [T]_v are similar, the similarity matrix being the change-of-basis matrix C.
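Eq (A.41) can be verified numerically; the matrix [T]_u and the second basis below are illustrative choices:

```python
import numpy as np

# Hypothetical setup: the operator is given by its matrix on the standard
# basis u, and a second basis v is given by the columns of V.
T_u = np.array([[0.0, 1.0],
                [-2.0, 3.0]])   # [T]_u, u being the standard basis
V = np.array([[1.0, 1.0],
              [1.0, 2.0]])      # columns v1, v2

C = V                            # since u is the standard basis, C = V
T_v = np.linalg.inv(C) @ T_u @ C   # eq (A.41)

# Similar matrices share the intrinsic quantities of the operator:
assert np.allclose(np.sort(np.linalg.eigvals(T_u)),
                   np.sort(np.linalg.eigvals(T_v)))
assert np.isclose(np.trace(T_u), np.trace(T_v))
assert np.isclose(np.linalg.det(T_u), np.linalg.det(T_v))
```

The last three assertions anticipate the closing remark of this section: eigenvalues (and hence trace and determinant) are invariant under a similarity transformation.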
Example A.5 As a simple example in R², let us consider the two bases u1, u2 and v1, v2.
Explicitly, the first of eqs (A.35) now reads v_i = Σ_j c_ji u_j, from which we can form the change-of-basis matrix C. Similarly, from the second of eqs (A.35) we obtain the change-of-basis matrix Ĉ so that, as expected (eqs (A.37)), ĈC = CĈ = I or, equivalently, the same result in the more cumbersome notation introduced above.
Now, consider a linear transformation T : R² → R² acting on a vector x (the proof of linearity is left to the reader). The representative matrix of T relative to the basis u1, u2 is obtained from the equations Tu_i = Σ_k t_ki u_k, from which the matrix [T]_u follows. Finally, from eq (A.41) we get [T]_v = C⁻¹[T]_u C, which is exactly, as can be directly verified from the equations Tv_i = Σ_k t′_ki v_k, the representative matrix of T relative to the basis v1, v2.
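Since the numerical data of Example A.5 is not recoverable here, the following sketch recreates the same computation with assumed stand-in bases and operator:

```python
import numpy as np

# Assumed stand-ins: u = standard basis of R^2, v1 = (1, 1), v2 = (1, -1),
# and the operator [T]_u = [[1, 2], [0, 1]] (all illustrative choices).
V = np.array([[1.0, 1.0],
              [1.0, -1.0]])     # columns are v1, v2; also C, since u is standard
T_u = np.array([[1.0, 2.0],
                [0.0, 1.0]])

C = V
T_v = np.linalg.inv(C) @ T_u @ C   # eq (A.41)

# Direct verification: expanding T(v_i) in the basis v reproduces
# column i of [T]_v, exactly as in the example.
for i in range(2):
    coords = np.linalg.solve(V, T_u @ V[:, i])
    assert np.allclose(coords, T_v[:, i])
```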
If, in addition, the two bases that we consider in the complex (real) linear
space V are orthonormal bases—this obviously implies that an inner product has been defined in V—the similarity matrix is unitary (orthogonal).
In fact, let for example V be a real n-dimensional linear space and let u1, …, un and v1, …, vn be two orthonormal bases in V. Then, from the first of eqs (A.35) and from the orthogonality condition ⟨u_k, u_l⟩ = δ_kl we get

⟨v_i, v_j⟩ = Σ_k c_ki c_kj

so that the equality ⟨v_i, v_j⟩ = δ_ij reads in matrix form

C^T C = I    (A.42)

which implies C⁻¹ = C^T and shows that, in a real linear space, we pass from one orthonormal basis to another orthonormal basis by means of an orthogonal change-of-basis matrix. In terms of linear operators, this means that two different matrix representations A and B of the same linear operator are orthogonally similar and B = C^T A C, where C is the change-of-basis matrix.
Clearly, if V is a complex linear space, we get C^H C = I (i.e. C is unitary; recall that the inner product in a complex space is not homogeneous in one of the slots) and the matrices A and B are unitarily similar, that is B = C^H A C.
We will not go into further details here, but a final observation is in order: specific properties of linear operators are reflected by specific characteristics of the matrices which may represent such operators; these characteristics, in turn, are generally invariant under a similarity transformation. As an illustrative example of this situation, it can be shown that if a square matrix A is Hermitian, then S^H A S is Hermitian for every unitary matrix S; this is because a Hermitian matrix represents a Hermitian operator, and another matrix representing the same operator must necessarily retain this characteristic (the definition of a Hermitian operator is beyond our scope, and the interested reader is referred to specific literature). Also recall the corollary to Theorem A.4 stating that eigenvalues are invariant under a similarity transformation: this circumstance reflects the fact that eigenvalues are intrinsic characteristics of a given linear operator and do not change when different matrices are used for its representation.
In the light of these considerations, we may recall the discussion on n-DOF systems (see Chapters 6 and 7, and also Chapter 9 for some important results on the characterization of eigenvalues) and note that the stiffness and mass of a given vibrating system can be envisioned as (symmetric) linear operators on the system's n-dimensional configuration space. Then, the essence of the modal approach consists of finding the orthogonal basis—the basis of eigenvectors—in which such operators have a diagonal representation. Solving the eigenvalue problem is the process by which we determine the basis of eigenvectors. The inconvenience of dealing with a generalized eigenvalue problem rather than with a standard eigenvalue problem translates into the fact that we have to diagonalize two matrices simultaneously instead of diagonalizing a single matrix. As stated before, however, this is only a minor inconvenience which does not significantly modify the essence of the mathematical treatment.
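The simultaneous diagonalization mentioned here can be sketched as follows. The 2-DOF mass and stiffness matrices are assumed illustrative values, and the generalized problem Kx = λMx is reduced to a standard symmetric problem via the Cholesky factor of M:

```python
import numpy as np

# Illustrative 2-DOF mass and stiffness matrices (assumed values).
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])
K = np.array([[6.0, -2.0],
              [-2.0, 4.0]])

# Reduce K x = lambda M x to a standard symmetric problem using the
# Cholesky factor M = L L^T: (L^-1 K L^-T) y = lambda y, with x = L^-T y.
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
A = Linv @ K @ Linv.T
lam, Y = np.linalg.eigh(A)
X = Linv.T @ Y    # back-transform the eigenvectors (modal matrix)

# The modal matrix diagonalizes both matrices simultaneously:
# X^T M X = I (mass-normalized modes) and X^T K X = diag(lambda_j).
assert np.allclose(X.T @ M @ X, np.eye(2))
assert np.allclose(X.T @ K @ X, np.diag(lam))
```

This Cholesky reduction is one standard route from the generalized to the standard symmetric eigenproblem, illustrating why the generalized case is only a minor complication.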
Reference
1 Wilkinson, J.H., The Algebraic Eigenvalue Problem, Clarendon Press, Oxford,
1988.
Further reading
Bickley, W.G. and Thomson, R.S.H.G., Matrices—Their Meaning and Manipulation, The English Universities Press, 1964.
Horn, R.A. and Johnson, C.R., Matrix Analysis, Cambridge University Press, 1985.
Pettofrezzo, A.J., Matrices and Transformations, Dover, New York, 1966.
Quarteroni, A., Sacco, R. and Saleri, F., Matematica Numerica, Springer-Verlag, Italy, 1998.
Shephard, G.C., Spazi Vettoriali di Dimensioni Finite, Cremonese, Rome, 1969 (in Italian) (translated from the original English edition Vector Spaces of Finite Dimension, University Mathematical Texts, Oliver & Boyd).
Voïévodine, V., Algèbre Linéaire, Mir, Moscow, 1976 (in French).
B Some considerations on the assessment of vibration intensity
B.1 Introduction
In a number of circumstances one of the main tasks of vibration analysis is to ‘assess the vibration intensity’. This phrase, which is rather vague, can be interpreted as assigning to a specific vibration phenomenon a ‘figure of merit’ which can be used to predict the potential damaging effects, if any, of such a phenomenon. In these cases, one also speaks of ‘assessment of vibration severity’.
Given the very large number of possible practical situations, it is obvious that the primary factors to be considered are, broadly speaking, the type, nature and duration of the excitation and the physical system which is affected by the vibration. Accordingly, there exist a number of specialized fields of investigation which study different aspects of the problem and consider, for example, the effect of shocks and vibrations on humans, buildings, various types of structures, electronic components, etc. In this appendix, also in the light of the fact that it can be extremely difficult to categorize a complex phenomenon with a single number (as a matter of fact, there seems to exist no internationally accepted standard), we will obviously limit ourselves to some general considerations.
B.2 Definitions
In order to ‘assess vibration intensity’, the first definition we consider is the so-called Zeller’s power (or strength) of vibration, which takes into account the acceleration amplitude a, in cm/s², and the frequency ν and is defined by the relation

Z = a²/ν = 16π⁴ν³x²    (B.1)

Zeller’s power is in units of cm²/s³, and in the second expression on the r.h.s. of eq (B.1) we call x (in cm) the displacement amplitude.
From Zeller’s power, another two quantities can be calculated: the first is the so-called vibrar unit, the strength S of a vibration in vibrar units being given by

S = 10 log₁₀(Z/Z₀)    (B.2)

where the reference value Z₀ is taken as 0.1 cm²/s³. The second quantity is
called the pal, and the strength in pal (according to the original definition given by Zeller [1]) is calculated as

P = 10 log₂(Z/Z₀) ≈ 33.2 log₁₀(Z/Z₀) = 3.32 S    (B.3)

where the second and third expressions on the r.h.s. are obtained from the fact that log₂ x = log₁₀ x/log₁₀ 2 ≈ 3.32 log₁₀ x. Another definition of pal dates back to the German standard DIN 4150 of 1939 (current version 1986 [2]) and defines the strength of vibration in terms of velocity ratios, i.e.

(B.4)

where v₀ is a reference velocity and v_rms is the root mean square value of the measured vibration velocity. Note that we use a different symbol to indicate the strength according to the DIN definition because this is different from Zeller’s definition of eq (B.3).
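As a short worked example of eqs (B.1) and (B.2), assuming the decibel-like form S = 10 log₁₀(Z/Z₀) for the vibrar strength, and with illustrative amplitude and frequency values:

```python
import numpy as np

# Harmonic vibration with displacement amplitude x = 0.01 cm at
# frequency nu = 20 Hz (illustrative values).
x = 0.01      # cm
nu = 20.0     # Hz

# Acceleration amplitude a = (2 pi nu)^2 x, in cm/s^2.
a = (2.0 * np.pi * nu) ** 2 * x

# Zeller's power, eq (B.1), in cm^2/s^3; both forms must agree.
Z = a ** 2 / nu
assert np.isclose(Z, 16.0 * np.pi ** 4 * nu ** 3 * x ** 2)

# Strength in vibrar units, eq (B.2), with reference Z0 = 0.1 cm^2/s^3.
Z0 = 0.1
S = 10.0 * np.log10(Z / Z0)
```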
The current German standard DIN 4150, Part 2 [2] deals with the effects of vibrations on people in residential buildings and considers the frequency range from 1 to 80 Hz. In this standard, the measured value of the principal harmonic vibration is used to calculate a factor of intensity perception KB by means of the formula

KB = 0.8 d ν²/√(1 + 0.032ν²)    (B.5)

where d is the displacement amplitude in millimetres and ν is the principal vibration frequency in hertz. The calculated KB value (in mm/s) is then compared with an acceptable reference value which takes into account such factors as: use of the building, frequency of occurrence, duration of the vibration and time of day. For example, for small office buildings and office premises and a continuous or repeated source of vibration, the acceptable