
Maintaining the outstanding features and practical approach that led the bestselling first edition to become a standard textbook in engineering classrooms worldwide, Clarence de Silva's Vibration: Fundamentals and Practice, Second Edition remains a solid instructional tool for modeling, analyzing, simulating, measuring, monitoring, testing, controlling, and designing for vibration in engineering systems. It condenses the author's distinguished and extensive experience into an easy-to-use, highly practical text that prepares students for real problems in a variety of engineering fields.


de Silva, Clarence W. “Appendix C.” Vibration: Fundamentals and Practice. Boca Raton: CRC Press LLC, 2000.


Appendix C Review of Linear Algebra

Linear algebra, the algebra of sets, vectors, and matrices, is useful in the study of mechanical vibration. In practical vibrating systems, interactions among various components are inevitable. There are many response variables associated with many excitations. It is thus convenient to consider all excitations (inputs) simultaneously as a single variable, and also all responses (outputs) as a single variable. The use of linear algebra makes the analysis of such a system convenient. The subject of linear algebra is complex and is based on a rigorous mathematical foundation. This appendix reviews the basics of vectors and matrices, which form the foundation of linear algebra.

C.1 VECTORS AND MATRICES

In the analysis of vibrating systems, vectors and matrices will be useful in both time and frequency domains. First, consider the time-domain formulation of a vibration problem. For a single-degree-of-freedom system with a single forcing excitation f(t) and a corresponding single displacement response y, the dynamic equation is

$m\ddot{y} + c\dot{y} + ky = f(t)$   (C.1)

Note that, in this single-dof case, the quantities f, y, m, c, and k are scalars. If the system has n degrees of freedom, with excitation forces f1(t), f2(t), …, fn(t) and associated displacement responses y1, y2, …, yn, then the equations of motion can be expressed as

$M\ddot{\mathbf{y}} + C\dot{\mathbf{y}} + K\mathbf{y} = \mathbf{f}(t)$   (C.2)

in which

$\mathbf{y} = [y_1 \; y_2 \; \cdots \; y_n]^T$ = displacement vector (nth-order column vector)

$\mathbf{f} = [f_1 \; f_2 \; \cdots \; f_n]^T$ = forcing excitation vector (nth-order column vector)

$M$ = mass matrix (n × n square matrix)

$C$ = damping matrix (n × n square matrix)

$K$ = stiffness matrix (n × n square matrix)


In this manner, vectors and matrices are introduced into the formulation of a multi-degree-of-freedom vibration problem. Further, vector-matrix concepts will enter into the picture in subsequent analysis; for example, in modal analysis, as discussed in Chapters 5 and 11.
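As a concrete illustration of equation (C.2), the short NumPy sketch below assembles mass, damping, and stiffness matrices for a hypothetical two-degree-of-freedom chain of masses and evaluates the excitation vector implied by assumed response vectors. All numerical values, and the use of NumPy itself, are illustrative assumptions and are not taken from the text.

```python
import numpy as np

# Hypothetical 2-DOF chain: two masses connected by springs and dampers (values assumed)
m1, m2 = 2.0, 1.0            # masses
c1, c2 = 0.3, 0.1            # damping coefficients
k1, k2 = 50.0, 20.0          # stiffnesses

M = np.array([[m1, 0.0],
              [0.0, m2]])              # mass matrix (2 x 2)
C = np.array([[c1 + c2, -c2],
              [-c2,      c2]])         # damping matrix (2 x 2)
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])         # stiffness matrix (2 x 2)

# Assumed instantaneous displacement, velocity, and acceleration vectors
y      = np.array([0.010, 0.020])
y_dot  = np.array([0.00, 0.10])
y_ddot = np.array([-0.50, 0.30])

# Equation (C.2): M*y'' + C*y' + K*y = f(t); evaluate the left-hand side
f = M @ y_ddot + C @ y_dot + K @ y
print(f)        # forcing vector consistent with the assumed motion
```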

Next consider the frequency-domain formulation. In the single-degree-of-freedom case, the system equation can be given as

$y = Gu$   (C.3)

where

u = frequency spectrum (Fourier spectrum) of the forcing excitation (input)

y = frequency spectrum (Fourier spectrum) of the response (output)

G = frequency-transfer function (frequency-response function) of the system

The quantities u, y, and G are scalars because each one is a single quantity, and not a collection of several quantities.

Next consider a two-degree-of-freedom system having two excitations u1 and u2, and two responses y1 and y2; yi now depends on both u1 and u2. It follows that one needs four transfer functions to represent all the excitation-response relationships that may exist in this system. One can use the four transfer functions G11, G12, G21, and G22. For example, the transfer function G12 relates the excitation u2 to the response y1. The associated two equations that govern the system are

$y_1 = G_{11}u_1 + G_{12}u_2, \qquad y_2 = G_{21}u_1 + G_{22}u_2$   (C.4)

Instead of considering the two excitations (two inputs) as two separate quantities, one can consider them as a single “vector” u having the two components u1 and u2. As before, one can write this as a “column” vector:

$\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$

Alternately, one can write a “row” vector as

$\mathbf{u} = [u_1, u_2]$


It is common to use the column-vector representation. Similarly, one can express the two outputs y1 and y2 as a vector y. Consequently, the column vector is given by

$\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$

and the row vector by

$\mathbf{y} = [y_1, y_2]$

It should be kept in mind that the order in which the components (or elements) are given is important, because the vector [u1, u2] is not equal to the vector [u2, u1]. In other words, a vector is an “ordered” collection of quantities.

Summarizing, one can express a collection of quantities, in an orderly manner, as a single vector. Each quantity in the vector is known as a component or an element of the vector. What each component means will depend on the particular situation. For example, in a dynamic system, it can represent a quantity such as voltage, current, force, velocity, pressure, flow rate, temperature, or heat transfer rate. The number of components (elements) in a vector is called the order, or dimension, of the vector.

Next, the concept of a matrix is introduced, using the frequency-domain example given above. Note that one needs four transfer functions to relate the two excitations to the two responses. Instead of considering these four quantities separately, one can express them as a single matrix G having four elements. Specifically, the transfer function matrix for the present example is

$G = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}$

Note that the matrix has two rows and two columns. Hence, the size or order of the matrix is 2 × 2. Since the number of rows is equal to the number of columns in this example, one has a square matrix. If the number of rows is not equal to the number of columns, one has a rectangular matrix.

Actually, a matrix can be interpreted as a collection of vectors. Hence, in the previous example, the matrix G is an assembly of the two column vectors

$\begin{bmatrix} G_{11} \\ G_{21} \end{bmatrix}$ and $\begin{bmatrix} G_{12} \\ G_{22} \end{bmatrix}$

or, alternatively, an assembly of the two row vectors

$[G_{11}, G_{12}]$ and $[G_{21}, G_{22}]$

C.2 VECTOR-MATRIX ALGEBRA

The advantage of representing the excitations and the responses of a vibrating system as the vectors u and y, and the transfer functions as the matrix G, is clear from the fact that the excitation-response (input-output) equations can be expressed as the single equation

$\mathbf{y} = G\mathbf{u}$   (C.5)

instead of the collection of scalar equations (C.4).

Hence, the response vector y is obtained by “premultiplying” the excitation vector u by the transfer function matrix G. Of course, certain rules of vector-matrix multiplication have to be agreed on in order that this single equation is consistent with the two scalar equations given by equations (C.4). Also, one must agree on rules for the addition of vectors or matrices.

A vector is a special case of a matrix. Specifically, a third-order column vector is a matrix having three rows and one column; hence, it is a 3 × 1 matrix. Similarly, a third-order row vector is a matrix having one row and three columns; accordingly, it is a 1 × 3 matrix. It follows that one only needs to know matrix algebra, and the vector algebra will follow from the results for matrices.
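The following sketch illustrates equations (C.4) and (C.5) numerically: a 2 × 2 transfer-function matrix evaluated at one frequency multiplies the excitation vector, and the result matches the two scalar input-output relations. The complex values are arbitrary assumptions chosen only for demonstration.

```python
import numpy as np

# Assumed transfer-function matrix G at a single frequency (arbitrary complex values)
G = np.array([[1.0 + 0.5j, 0.2 - 0.1j],
              [0.3 + 0.0j, 0.8 - 0.4j]])

# Assumed excitation spectrum components u1, u2 at that frequency
u = np.array([2.0 + 1.0j, -1.0 + 0.5j])

# Single vector-matrix equation (C.5): y = G u
y = G @ u

# Scalar equations (C.4), written out component by component
y1 = G[0, 0] * u[0] + G[0, 1] * u[1]
y2 = G[1, 0] * u[0] + G[1, 1] * u[1]

assert np.allclose(y, [y1, y2])   # the two formulations agree
print(y)
```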

C.2.1 Matrix Addition and Subtraction

Only matrices of the same size can be added. The result (sum) will also be a matrix of the same size. In matrix addition, one adds the corresponding elements (i.e., the elements at the same position) in the two matrices, and writes the results at the corresponding places in the resulting matrix. As an example, for a 2 × 3 matrix A and a second matrix B of the same size, the sum is formed element by element:

$A + B = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} & a_{13}+b_{13} \\ a_{21}+b_{21} & a_{22}+b_{22} & a_{23}+b_{23} \end{bmatrix}$

The order in which the addition is done is immaterial. Hence,

$A + B = B + A$   (C.6)

In other words, matrix addition is commutative.

Matrix subtraction is defined just like matrix addition, except that the corresponding elements are subtracted (or sign-changed and added); the (i, j) element of A − B is $a_{ij} - b_{ij}$. An example is given below.
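A brief numerical sketch of element-wise addition, subtraction, and the commutativity property (C.6); the 2 × 3 matrices used here are arbitrary illustrative values, not an example from the text.

```python
import numpy as np

A = np.array([[1, 0, -3],
              [2, 6,  1]])
B = np.array([[4, -2, 0],
              [1,  3, 5]])

print(A + B)                          # element-wise sum
print(A - B)                          # element-wise difference
assert np.array_equal(A + B, B + A)   # matrix addition is commutative, equation (C.6)
```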

C.2.2 Null Matrix

The null matrix is a matrix for which the elements are all zeros. Hence, when one adds a null matrix to an arbitrary matrix, the result is equal to the original matrix. One can define a null vector in a similar manner. One can write

$A + \mathbf{0} = A$   (C.7)

As an example, the 2 × 2 null matrix is

$\mathbf{0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$

C.2.3 Matrix Multiplication

Consider the product AB of two matrices A and B. One can write this as

$C = AB$   (C.8)

As such, B is premultiplied by A or, equivalently, A is postmultiplied by B. For this multiplication to be possible, the number of columns in A must be equal to the number of rows in B. Then, the number of rows of the product matrix C is equal to the number of rows in A, and the number of columns in C is equal to the number of columns in B.

The actual multiplication is done by multiplying the elements in a given row (say, the ith row) of A by the corresponding elements in a given column (say, the jth column) of B and summing these products. The result is the element $c_{ij}$ of the product matrix C. Note that $c_{ij}$ denotes the element that is common to the ith row and the jth column of matrix C. Thus,

$c_{ij} = \sum_k a_{ik} b_{kj}$   (C.9)

As an example, suppose

$A = \begin{bmatrix} 1 & 2 & -1 \\ 3 & -3 & 4 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & -2 & 4 & -4 \\ 2 & 4 & 2 & 0 \\ 5 & -1 & 0 & -3 \end{bmatrix}$

Note that the number of columns in A is equal to 3, and the number of rows in B is also equal to 3. Hence, one can perform the premultiplication of B by A, giving the 2 × 4 product

$C = AB = \begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ c_{21} & c_{22} & c_{23} & c_{24} \end{bmatrix}$

For example,

$c_{11} = 1 \times 1 + 2 \times 2 + (-1) \times 5 = 0$

$c_{12} = 1 \times (-2) + 2 \times 4 + (-1) \times (-1) = 7$

$c_{24} = 3 \times (-4) + (-3) \times 0 + 4 \times (-3) = -24$, etc.


The product matrix is

$C = \begin{bmatrix} 0 & 7 & 8 & -1 \\ 17 & -22 & 6 & -24 \end{bmatrix}$

It should be noted that both products AB and BA are not always defined; and even when they are defined, the two results are not equal in general. Unless both A and B are square matrices of the same order, the two product matrices will not be of the same order.

Summarizing, matrix multiplication is not commutative:

$AB \neq BA$   (C.10)
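The product rule (C.9) can be checked directly for the example above: forming each element $c_{ij}$ as a row-by-column sum reproduces the built-in matrix product. This is a NumPy sketch; the library choice is an assumption, not part of the text.

```python
import numpy as np

A = np.array([[1,  2, -1],
              [3, -3,  4]])
B = np.array([[1, -2, 4, -4],
              [2,  4, 2,  0],
              [5, -1, 0, -3]])

# Build the product element by element, following equation (C.9)
C = np.zeros((A.shape[0], B.shape[1]), dtype=int)
for i in range(A.shape[0]):
    for j in range(B.shape[1]):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(A.shape[1]))

assert np.array_equal(C, A @ B)   # agrees with the built-in product
print(C)
```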

C.2.4 Identity Matrix

An identity matrix (or unity matrix) is a square matrix whose diagonal elements are all equal to 1 and all the remaining (off-diagonal) elements are zeros. This matrix is denoted by I. For example, the third-order identity matrix is

$I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

It is easy to see that when any matrix is multiplied by an identity matrix (provided, of course, that the multiplication is possible), the product is equal to the original matrix; thus,

$AI = IA = A$   (C.11)

C.3 MATRIX INVERSE

An operation similar to scalar division can be defined with regard to the inverse of a matrix. A proper inverse is defined only for a square matrix and, even for a square matrix, an inverse might not exist. The inverse of a matrix is defined as follows.

Suppose that a square matrix A has the inverse B. Then, these must satisfy the equation

$AB = I$   (C.12)

or, equivalently,

$BA = I$   (C.13)

where I is the identity matrix, as previously defined.

The inverse of A is denoted by $A^{-1}$. The inverse exists for a matrix if and only if the determinant of the matrix is non-zero. Such matrices are termed nonsingular. The determinant is discussed in Section C.3.3; but, before explaining a method for determining the inverse of a matrix, one can verify that

$\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$

is the inverse of

$\begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}$

To show this, simply multiply the two matrices and show that the product is the second-order unity matrix. Specifically,

$\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$

or

$\begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
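A quick numerical confirmation of the verification above, together with a check against a general-purpose inverse routine (a sketch; the use of numpy.linalg is an assumption, not the text's method).

```python
import numpy as np

A = np.array([[ 1.0, -1.0],
              [-1.0,  2.0]])
B = np.array([[2.0, 1.0],
              [1.0, 1.0]])      # claimed inverse of A

I = np.eye(2)
assert np.allclose(A @ B, I)    # AB = I, equation (C.12)
assert np.allclose(B @ A, I)    # BA = I, equation (C.13)

# The same inverse obtained numerically
assert np.allclose(np.linalg.inv(A), B)
```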

C.3.1 Matrix Transpose

The transpose of a matrix is obtained by simply interchanging the rows and the columns of the matrix. The transpose of A is denoted by $A^T$. For example, the transpose of the 2 × 3 matrix

$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}$

is the 3 × 2 matrix

$A^T = \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \\ a_{13} & a_{23} \end{bmatrix}$

Note that the first row of the original matrix has become the first column of the transposed matrix, and the second row of the original matrix has become the second column of the transposed matrix.

If $A^T = A$, then the matrix A is symmetric. Another useful result on the matrix transpose is expressed by

$(AB)^T = B^T A^T$   (C.14)

It follows that the transpose of a matrix product is equal to the product of the transposed matrices, taken in the reverse order.
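The reversal rule (C.14) is easy to check numerically; the matrices below are arbitrary illustrative values.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3
B = np.array([[1,  0],
              [2, -1],
              [0,  3]])          # 3 x 2

assert np.array_equal((A @ B).T, B.T @ A.T)   # (AB)^T = B^T A^T, equation (C.14)
print(A.T)                                    # rows of A become columns of A^T
```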

C.3.2 Trace of a Matrix

The trace of a square matrix is given by the sum of its diagonal elements. The trace of matrix A is denoted by tr(A):

$\mathrm{tr}(A) = \sum_i a_{ii}$   (C.15)

For example, the trace of a 3 × 3 matrix A whose diagonal elements are −2, −4, and 3 is given by

$\mathrm{tr}(A) = (-2) + (-4) + 3 = -3$
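A minimal check of equation (C.15) using an assumed 3 × 3 matrix with diagonal −2, −4, 3; the off-diagonal entries are arbitrary.

```python
import numpy as np

A = np.array([[-2,  1,  0],
              [ 3, -4,  2],
              [ 5,  1,  3]])

assert np.trace(A) == sum(A[i, i] for i in range(3))   # equation (C.15)
print(np.trace(A))   # -3
```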

C.3.3 Determinant of a Matrix

The determinant is defined only for a square matrix. It is a scalar value computed from the elements of the matrix. The determinant of a matrix A is denoted by det(A) or |A|.

Instead of giving a complex mathematical formula for the determinant of a general matrix in terms of the elements of the matrix, one can compute the determinant as follows. First consider the 2 × 2 matrix

$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$

Its determinant is given by

$\det(A) = a_{11} a_{22} - a_{12} a_{21}$

Next consider the 3 × 3 matrix

$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$

Its determinant can be expressed as

$\det(A) = a_{11} M_{11} - a_{12} M_{12} + a_{13} M_{13}$

where

$M_{11} = \det\begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix}, \quad M_{12} = \det\begin{bmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{bmatrix}, \quad M_{13} = \det\begin{bmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}$


Note that $M_{ij}$ is the determinant of the matrix obtained by deleting the ith row and the jth column of the original matrix. The quantity $M_{ij}$ is known as the minor of the element $a_{ij}$ of the matrix A. If the proper sign is attached to the minor, depending on the position of the corresponding matrix element, one has a quantity known as the cofactor. Specifically, the cofactor $C_{ij}$ corresponding to the minor $M_{ij}$ is given by

$C_{ij} = (-1)^{i+j} M_{ij}$   (C.16)

Hence, the determinant of the 3 × 3 matrix can be given by

$\det(A) = a_{11} C_{11} + a_{12} C_{12} + a_{13} C_{13}$

Note that in the two formulas given above for computing the determinant of a 3 × 3 matrix, one has expanded along the first row of the matrix. The same answer is obtained, however, if one expands along any row or any column. Specifically, when expanded along the ith row, one obtains

$\det(A) = a_{i1} C_{i1} + a_{i2} C_{i2} + a_{i3} C_{i3}$

Similarly, if one expands along the jth column, then

$\det(A) = a_{1j} C_{1j} + a_{2j} C_{2j} + a_{3j} C_{3j}$

These ideas of computing a determinant can be easily extended to 4 × 4 and higher-order matrices in a straightforward manner. Hence, one can write

$\det(A) = \sum_j a_{ij} C_{ij} = \sum_i a_{ij} C_{ij}$   (C.17)
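The cofactor expansion (C.17) can be coded recursively for any square matrix; the sketch below compares it with NumPy's determinant routine on a random matrix. The implementation details are assumptions made for illustration, not the text's algorithm.

```python
import numpy as np

def det_by_cofactors(A):
    """Determinant by cofactor expansion along the first row, equation (C.17)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # remove row 0 and column j
        cofactor = (-1) ** j * det_by_cofactors(minor)           # sign (-1)^(1+j) in 1-based indexing
        total += A[0, j] * cofactor
    return total

A = np.random.rand(4, 4)
assert np.isclose(det_by_cofactors(A), np.linalg.det(A))
```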

C.3.4 Adjoint of a Matrix

The adjoint of a matrix is the transpose of the matrix whose elements are the cofactors of the corresponding elements of the original matrix. The adjoint of matrix A is denoted by adj(A).

As an example, in the 3 × 3 case, one has

$\mathrm{adj}(A) = \begin{bmatrix} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix}^T$

In particular, it is easily seen that the adjoint of the matrix

$A = \begin{bmatrix} 1 & 2 & -1 \\ 0 & 3 & 2 \\ 1 & 1 & 1 \end{bmatrix}$

is given by

$\mathrm{adj}(A) = \begin{bmatrix} 1 & -3 & 7 \\ 2 & 2 & -2 \\ -3 & 1 & 3 \end{bmatrix}$

Accordingly, each entry of adj(A) is the cofactor of the corresponding element of A, written in the transposed position. Hence, in general,

$\mathrm{adj}(A) = [C_{ij}]^T$   (C.18)

C.3.5 Inverse of a Matrix

At this point, one can define the inverse of a square matrix. Specifically,

$A^{-1} = \dfrac{\mathrm{adj}(A)}{\det(A)}$   (C.19)

Hence, in the 3 × 3 matrix example given before, since the adjoint has already been determined, it remains only to compute the determinant in order to obtain the inverse. Now, expanding along the first row of the matrix, the determinant is given by

$\det(A) = 1 \times 1 + 2 \times 2 + (-1) \times (-3) = 8$

Accordingly, the inverse is given by

$A^{-1} = \frac{1}{8}\begin{bmatrix} 1 & -3 & 7 \\ 2 & 2 & -2 \\ -3 & 1 & 3 \end{bmatrix}$

For two square matrices A and B,

$(AB)^{-1} = B^{-1} A^{-1}$   (C.20)


As a final note, if the determinant of a matrix is 0, the matrix does not have an inverse; that matrix is then singular. Some important matrix properties are summarized in Box C.1.
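For the 3 × 3 example above, the adjoint-over-determinant formula (C.19) and the product rule (C.20) can be checked numerically. The cofactor-based construction below is a sketch, and the second matrix B is an arbitrary assumption.

```python
import numpy as np

def adjoint(A):
    """Transpose of the cofactor matrix, equation (C.18)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)   # cofactor C_ij, equation (C.16)
    return C.T

A = np.array([[1.0, 2.0, -1.0],
              [0.0, 3.0,  2.0],
              [1.0, 1.0,  1.0]])

A_inv = adjoint(A) / np.linalg.det(A)          # equation (C.19)
assert np.allclose(A_inv, np.linalg.inv(A))

B = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 3.0, 1.0]])                # arbitrary nonsingular matrix
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))   # equation (C.20)
```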

C.4 VECTOR SPACES

C.4.1 Field

Consider a set of scalars. If, for any α and β from the set, α + β and αβ are also elements in the set, and if

1. α + β = β + α and αβ = βα (Commutativity)
2. (α + β) + γ = α + (β + γ) and (αβ)γ = α(βγ) (Associativity)
3. α(β + γ) = αβ + αγ (Distributivity)

are satisfied, and if

1. Identity elements 0 and 1 exist in the set such that α + 0 = α and 1α = α
2. Inverse elements exist in the set such that α + (−α) = 0 and α·α⁻¹ = 1

then the set is a field. For example, the set ℝ of real numbers is a field.

C.4.2 Vector Space

Properties:

1. Vector addition (x + y) and scalar multiplication (αx) are defined.

BOX C.1 Summary of Matrix Properties

Addition: $A_{m \times n} + B_{m \times n} = C_{m \times n}$

Multiplication: $A_{m \times n} B_{n \times r} = C_{m \times r}$

Identity: $AI = IA = A$, where $I$ is the identity matrix

Commutativity: $AB \neq BA$ in general

Transposition: $(AB)^T = B^T A^T$

Inverse: $AA^{-1} = A^{-1}A = I$ and $(AB)^{-1} = B^{-1}A^{-1}$

Associativity: $(AB)C = A(BC)$

Distributivity: $C(A + B) = CA + CB$

Distributivity: $(A + B)D = AD + BD$
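A short script that spot-checks several of the identities collected in Box C.1 on randomly generated matrices; the sizes and the random-test approach are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
B = rng.random((3, 3))
C = rng.random((3, 3))

assert np.allclose((A @ B) @ C, A @ (B @ C))            # associativity
assert np.allclose(C @ (A + B), C @ A + C @ B)          # distributivity
assert np.allclose((A + B) @ C, A @ C + B @ C)          # distributivity
assert np.allclose((A @ B).T, B.T @ A.T)                # transposition
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A)) # inverse of a product
print("All Box C.1 identities verified on this example.")
```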
