Handbook of Mathematics for Engineers and Scientists, part 32




Example 3. Consider the real symmetric matrix
$$
A = \begin{pmatrix} 11 & -6 & 2 \\ -6 & 10 & -4 \\ 2 & -4 & 6 \end{pmatrix}.
$$
Its eigenvalues are λ1 = 3, λ2 = 6, λ3 = 18, and the respective eigenvectors are
$$
X_1 = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}, \quad
X_2 = \begin{pmatrix} 2 \\ 1 \\ -2 \end{pmatrix}, \quad
X_3 = \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix}.
$$
Consider the matrix S with the columns X1, X2, and X3:
$$
S = \begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & -2 \\ 2 & -2 & 1 \end{pmatrix}.
$$
Taking Ā1 = S^T A S, we obtain a diagonal matrix:
$$
\bar A_1 = S^{\mathrm T} A S =
\begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & -2 \\ 2 & -2 & 1 \end{pmatrix}
\begin{pmatrix} 11 & -6 & 2 \\ -6 & 10 & -4 \\ 2 & -4 & 6 \end{pmatrix}
\begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & -2 \\ 2 & -2 & 1 \end{pmatrix}
= \begin{pmatrix} 27 & 0 & 0 \\ 0 & 54 & 0 \\ 0 & 0 & 162 \end{pmatrix}.
$$
Taking Ā2 = S^{-1} A S, we obtain a diagonal matrix with the eigenvalues on the main diagonal:
$$
\bar A_2 = S^{-1} A S =
-\frac{1}{27}
\begin{pmatrix} -3 & -6 & -6 \\ -6 & -3 & 6 \\ -6 & 6 & -3 \end{pmatrix}
\begin{pmatrix} 11 & -6 & 2 \\ -6 & 10 & -4 \\ 2 & -4 & 6 \end{pmatrix}
\begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & -2 \\ 2 & -2 & 1 \end{pmatrix}
= \begin{pmatrix} 3 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 18 \end{pmatrix}.
$$
We note that Ā1 = 9 Ā2 (since the columns of S are mutually orthogonal with squared length 9, so that S^T S = 9I).
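Example 3 can be verified numerically; the sketch below, using NumPy, reproduces both the congruence transform S^T A S and the similarity transform S^{-1} A S with the matrices from the example:

```python
import numpy as np

# Matrix and eigenvector columns from Example 3.
A = np.array([[11, -6, 2],
              [-6, 10, -4],
              [2, -4, 6]], dtype=float)
S = np.array([[1, 2, 2],
              [2, 1, -2],
              [2, -2, 1]], dtype=float)

A1 = S.T @ A @ S               # congruence transform: diagonal, entries 9*eigenvalues
A2 = np.linalg.inv(S) @ A @ S  # similarity transform: eigenvalues on the diagonal

print(np.round(A1))  # diag(27, 54, 162)
print(np.round(A2))  # diag(3, 6, 18)
```

Because S happens to be symmetric with S^T S = 9I, the two diagonal matrices differ exactly by the factor 9.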

5.2.3-8. Characteristic equation of a matrix.

The algebraic equation of degree n
$$
f_A(\lambda) \equiv \det(A - \lambda I) \equiv \det\,[a_{ij} - \lambda\delta_{ij}] =
\begin{vmatrix}
a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22}-\lambda & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}-\lambda
\end{vmatrix} = 0
$$
is called the characteristic equation of the matrix A, and its left-hand side f_A(λ) is called the characteristic polynomial. The spectrum of the matrix A (i.e., the set of all its eigenvalues) coincides with the set of all roots of its characteristic equation. The multiplicity of every root λ_i of the characteristic equation is equal to the algebraic multiplicity m_i of the eigenvalue λ_i.

Example 4. The characteristic equation of the matrix
$$
A = \begin{pmatrix} 4 & -8 & 1 \\ 5 & -9 & 1 \\ 4 & -6 & -1 \end{pmatrix}
$$
has the form
$$
f_A(\lambda) \equiv \det
\begin{pmatrix} 4-\lambda & -8 & 1 \\ 5 & -9-\lambda & 1 \\ 4 & -6 & -1-\lambda \end{pmatrix}
= -\lambda^3 - 6\lambda^2 - 11\lambda - 6 = -(\lambda+1)(\lambda+2)(\lambda+3) = 0.
$$
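The characteristic polynomial of Example 4 can be checked directly; `np.poly` applied to a square matrix returns the coefficients of det(λI − A), which for this 3×3 matrix is −f_A(λ):

```python
import numpy as np

A = np.array([[4, -8, 1],
              [5, -9, 1],
              [4, -6, -1]], dtype=float)

# Coefficients of det(lambda*I - A) = lambda^3 + 6*lambda^2 + 11*lambda + 6.
coeffs = np.poly(A)
print(np.round(coeffs))   # [ 1.  6. 11.  6.]
print(np.roots(coeffs))   # the eigenvalues -1, -2, -3 (in some order)
```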

Similar matrices have the same characteristic equation.

If λ_j is an eigenvalue of the matrix A, then λ_j^p is an eigenvalue of the matrix A^p (p = 0, ±1, ..., ±N for a nondegenerate A; otherwise p = 0, 1, ..., N), where N is a natural number; moreover, a polynomial f(A) of the matrix A has the eigenvalue f(λ_j).


Suppose that the spectra of the matrices A and B consist of the eigenvalues λ_j and μ_k, respectively. The spectrum of a block-diagonal matrix composed of the matrices A1, ..., An is the union of the spectra of A1, ..., An; the algebraic multiplicities of the same eigenvalue of the matrices A1, ..., An are summed.

Regarding bounds for eigenvalues, see Paragraph 5.6.3-4.

5.2.3-9. Cayley–Hamilton theorem. Sylvester theorem.

CAYLEY–HAMILTON THEOREM. Each square matrix A satisfies its own characteristic equation; i.e., f_A(A) = 0.

Example 5. Let us illustrate the Cayley–Hamilton theorem by the matrix in Example 4:
$$
f_A(A) = -A^3 - 6A^2 - 11A - 6I
= -\begin{pmatrix} 70 & -116 & 19 \\ 71 & -117 & 19 \\ 64 & -102 & 11 \end{pmatrix}
- 6\begin{pmatrix} -20 & 34 & -5 \\ -21 & 35 & -5 \\ -18 & 28 & -1 \end{pmatrix}
- 11\begin{pmatrix} 4 & -8 & 1 \\ 5 & -9 & 1 \\ 4 & -6 & -1 \end{pmatrix}
- 6\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
= 0.
$$
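A one-line numerical confirmation of the Cayley–Hamilton identity for this matrix:

```python
import numpy as np

A = np.array([[4, -8, 1],
              [5, -9, 1],
              [4, -6, -1]], dtype=float)
I = np.eye(3)

# Cayley-Hamilton: A satisfies f_A(A) = -A^3 - 6A^2 - 11A - 6I = 0.
F = -np.linalg.matrix_power(A, 3) - 6 * np.linalg.matrix_power(A, 2) - 11 * A - 6 * I
print(np.allclose(F, 0))  # True
```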

A scalar polynomial p(λ) is called an annihilating polynomial of a square matrix A if p(A) = 0; by the Cayley–Hamilton theorem, the characteristic polynomial f_A(λ) is an annihilating polynomial of A. The unique monic annihilating polynomial of least degree is called the minimal polynomial of A and is denoted by ψ(λ). The minimal polynomial is a divisor of every annihilating polynomial.

By dividing an arbitrary polynomial f(λ) by an annihilating polynomial p(λ), i.e., writing f(λ) = q(λ)p(λ) + r(λ), one obtains f(A) = r(A); the remainder r(λ) serves as an interpolation polynomial of A.

Example 6 Let

f(A) = A4+ 4A3+ 2A2– 12A– 10I, where the matrix A is defined in Example 4 Dividing f (λ) by the characteristic polynomial f A (λ) = –λ3–

6λ2– 11λ– 6, we obtain the remainder r(λ) =3λ2+ 4λ+ 2 Consequently,

f(A) = r(A) =3A2+ 4A+ 2I.
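That the quartic and its remainder agree on A can be confirmed numerically:

```python
import numpy as np

A = np.array([[4, -8, 1],
              [5, -9, 1],
              [4, -6, -1]], dtype=float)
I = np.eye(3)
P = np.linalg.matrix_power

f_of_A = P(A, 4) + 4 * P(A, 3) + 2 * P(A, 2) - 12 * A - 10 * I
r_of_A = 3 * P(A, 2) + 4 * A + 2 * I  # remainder modulo the characteristic polynomial
print(np.allclose(f_of_A, r_of_A))    # True
```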

THEOREM. Every analytic function f(A) of a square n×n matrix A with distinct eigenvalues λ1, λ2, ..., λn can be represented as a polynomial in the same matrix:
$$
f(A) = \frac{1}{\Delta(\lambda_1, \lambda_2, \ldots, \lambda_n)} \sum_{k=1}^{n} \Delta_k A^{k-1},
$$
where Δ(λ1, λ2, ..., λn) is the Vandermonde determinant of λ1, ..., λn and Δ_k is the determinant obtained from it by replacing the k-th row by (f(λ1), f(λ2), ..., f(λn)).

Example 7. Let us find r(A) by this formula for the polynomial in Example 6.
We find the eigenvalues of A from the characteristic equation f_A(λ) = 0: λ1 = -1, λ2 = -2, and λ3 = -3. Then the Vandermonde determinant is equal to Δ(λ1, λ2, λ3) = -2, and the other determinants are Δ1 = -4, Δ2 = -8, and Δ3 = -6. It follows that
$$
f(A) = \frac{1}{-2}\bigl[(-6)A^2 + (-8)A + (-4)I\bigr] = 3A^2 + 4A + 2I.
$$
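The determinants of Example 7 can be computed mechanically. The sketch below builds the Vandermonde matrix with rows (1, ..., 1), (λ1, ..., λn), (λ1², ..., λn²) and replaces the k-th row by (f(λ1), ..., f(λn)), as in the theorem above:

```python
import numpy as np

lam = np.array([-1.0, -2.0, -3.0])  # eigenvalues of A from Example 4
f = lambda x: x**4 + 4*x**3 + 2*x**2 - 12*x - 10

V = np.vander(lam, increasing=True).T  # rows: ones, lambdas, lambdas squared
Delta = np.linalg.det(V)               # Vandermonde determinant = -2

coeffs = []
for k in range(3):                     # Delta_k: replace the k-th row by f(lam)
    Vk = V.copy()
    Vk[k] = f(lam)
    coeffs.append(np.linalg.det(Vk))

# Coefficients of I, A, A^2 in f(A): Delta_k / Delta = 2, 4, 3.
print(np.round(np.array(coeffs) / Delta))  # [2. 4. 3.]
```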


The Cayley–Hamilton theorem can also be used to find the powers and the inverse of a matrix A (since if f_A(A) = 0, then A^k f_A(A) = 0 for any positive integer k).

Example 8. For the matrix in Examples 4–7, one has f_A(A) = -A^3 - 6A^2 - 11A - 6I = 0. Hence we obtain
$$
A^3 = -6A^2 - 11A - 6I.
$$
By multiplying this expression by A, we obtain
$$
A^4 = -6A^3 - 11A^2 - 6A.
$$
Now we use the representation of the cube of A via lower powers of A and eventually arrive at the formula
$$
A^4 = 25A^2 + 60A + 36I.
$$
For the inverse matrix, by analogy with the preceding, we obtain
$$
A^{-1} f_A(A) = A^{-1}(-A^3 - 6A^2 - 11A - 6I) = -A^2 - 6A - 11I - 6A^{-1} = 0.
$$
The definitive result is
$$
A^{-1} = -\tfrac{1}{6}\,(A^2 + 6A + 11I).
$$
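Both formulas of Example 8 check out numerically:

```python
import numpy as np

A = np.array([[4, -8, 1],
              [5, -9, 1],
              [4, -6, -1]], dtype=float)
I = np.eye(3)
P = np.linalg.matrix_power

# Powers and the inverse from the Cayley-Hamilton relation.
pow_ok = np.allclose(P(A, 4), 25 * P(A, 2) + 60 * A + 36 * I)
inv_ok = np.allclose(np.linalg.inv(A), -(P(A, 2) + 6 * A + 11 * I) / 6)
print(pow_ok, inv_ok)  # True True
```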

In some cases, an analytic function of a matrix A can be computed by the formula in the following theorem.

SYLVESTER'S THEOREM. If all eigenvalues of a matrix A are distinct, then
$$
f(A) = \sum_{k=1}^{n} f(\lambda_k)\, Z_k, \qquad
Z_k = \frac{\prod_{i \ne k} (A - \lambda_i I)}{\prod_{i \ne k} (\lambda_k - \lambda_i)},
$$
and, moreover, Z_k^m = Z_k (m = 1, 2, 3, ...).
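A minimal NumPy sketch of Sylvester's formula for the matrix of Example 4 (its eigenvalues -1, -2, -3 are taken from Example 7); f(x) = x² is used so the result can be compared against A·A, and the idempotency Z_k² = Z_k is also checked:

```python
import numpy as np
from functools import reduce

A = np.array([[4, -8, 1],
              [5, -9, 1],
              [4, -6, -1]], dtype=float)
lam = [-1.0, -2.0, -3.0]
I = np.eye(3)

# Sylvester projectors Z_k = prod_{i!=k}(A - lam_i I) / prod_{i!=k}(lam_k - lam_i).
Z = []
for k in range(3):
    num = reduce(np.matmul, [A - lam[i] * I for i in range(3) if i != k])
    den = np.prod([lam[k] - lam[i] for i in range(3) if i != k])
    Z.append(num / den)

A_squared = sum(lam[k] ** 2 * Z[k] for k in range(3))  # f(A) for f(x) = x^2
print(np.allclose(A_squared, A @ A))  # True
print(np.allclose(sum(Z), I))         # True: the projectors resolve the identity
print(np.allclose(Z[0] @ Z[0], Z[0])) # True: Z_k are idempotent
```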

5.3 Linear Spaces

5.3.1 Concept of a Linear Space. Its Basis and Dimension

5.3.1-1. Definition of a linear space.

A linear space (or a vector space) over a field of scalars (usually, the field of real numbers or the field of complex numbers) is a set V of elements x, y, z, ... of any nature for which the following conditions hold:

I. There is a rule that establishes a correspondence between any pair of elements x, y of V and a third element z of V, called the sum of x and y and denoted by z = x + y.

II. There is a rule that establishes a correspondence between any pair x, λ, where x is an element of V and λ is a scalar, and an element u of V, called the product of a scalar λ and a vector x and denoted by u = λx.

III. The following eight axioms are assumed for the above two operations:
1. Commutativity of the sum: x + y = y + x.
2. Associativity of the sum: (x + y) + z = x + (y + z).
3. There is a zero element 0 such that x + 0 = x for any element x.
4. For any element x there is an opposite element x′ such that x + x′ = 0.
5. A special role of the unit scalar 1: 1 · x = x for any element x.
6. Associativity of the multiplication by scalars: λ(μx) = (λμ)x.
7. Distributivity with respect to the addition of scalars: (λ + μ)x = λx + μx.
8. Distributivity with respect to a sum of vectors: λ(x + y) = λx + λy.

This is the definition of an abstract linear space. We obtain a specific linear space if the nature of the elements and the operations of addition and multiplication by scalars are concretized.


Example 1. Consider the set of all free vectors in three-dimensional space. If addition of these vectors and their multiplication by scalars are defined as in analytic geometry (see Paragraph 4.5.1-1), this set becomes a linear space denoted by B3.

Example 2. Consider the set {x} whose elements are all positive real numbers. Let us define the sum of two elements x and y as the product of x and y, and define the product of a real scalar λ and an element x as the λth power of the positive real x. The number 1 is taken as the zero element of the space {x}, and the opposite of x is taken equal to 1/x. It is easy to see that the set {x} with these operations of addition and multiplication by scalars is a linear space.
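Example 2 can be made concrete in a few lines of Python; the names `add` and `smul` are ad hoc, and only a handful of the axioms are spot-checked on sample values (powers of two, so the float comparisons are exact):

```python
# Positive reals as a linear space: x (+) y := x*y, lambda (.) x := x**lambda.
add = lambda x, y: x * y        # "vector addition"
smul = lambda lam, x: x ** lam  # "multiplication by a scalar"

x, y, lam, mu = 2.0, 8.0, 3.0, -2.0
assert add(x, y) == add(y, x)                           # axiom 1: commutativity
assert add(x, 1.0) == x                                 # axiom 3: 1 is the zero element
assert add(x, 1.0 / x) == 1.0                           # axiom 4: 1/x is the opposite of x
assert smul(lam + mu, x) == add(smul(lam, x), smul(mu, x))  # axiom 7
assert smul(lam, smul(mu, x)) == smul(lam * mu, x)          # axiom 6
print("axioms hold on the sample values")
```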

Example 3. Consider the n-dimensional coordinate space R^n, whose elements are ordered sets of n arbitrary real numbers (x1, ..., xn). The generic element of this space is denoted by x, i.e., x = (x1, ..., xn), and the reals x1, ..., xn are called the coordinates of the element x. From the algebraic standpoint, the set R^n may be regarded as the set of all row vectors with n real components.
The operations of addition of elements of R^n and their multiplication by scalars are defined by the following rules:
$$
(x_1, \ldots, x_n) + (y_1, \ldots, y_n) = (x_1 + y_1, \ldots, x_n + y_n),
$$
$$
\lambda (x_1, \ldots, x_n) = (\lambda x_1, \ldots, \lambda x_n).
$$

Remark. If the field of scalars λ, μ, ... in the above definition is the field of all real numbers, the corresponding linear spaces are called real linear spaces. If the field of scalars is that of all complex numbers, the corresponding space is called a complex linear space. In many situations, it is clear from the context which field of scalars is meant.

The above axioms imply the following properties of an arbitrary linear space:
1. The zero vector is unique, and for any element x the opposite element is unique.
2. For any element x, one has 0 · x = 0 (the zero vector).
3. For any element x, the opposite element is equal to (-1) · x.
4. The difference of two elements x and y, i.e., the element z such that z + y = x, is unique.

5.3.1-2. Basis and dimension of a linear space. Isomorphisms of linear spaces.

An element y of a linear space V is called a linear combination of elements x1, ..., xk of V if there exist scalars α1, ..., αk such that
$$
y = \alpha_1 x_1 + \cdots + \alpha_k x_k.
$$
Elements x1, ..., xk of the space V are said to be linearly dependent if there exist scalars α1, ..., αk such that |α1|² + ··· + |αk|² ≠ 0 and
$$
\alpha_1 x_1 + \cdots + \alpha_k x_k = 0.
$$
Elements x1, ..., xk of the space V are said to be linearly independent if for any scalars α1, ..., αk such that |α1|² + ··· + |αk|² ≠ 0, we have
$$
\alpha_1 x_1 + \cdots + \alpha_k x_k \ne 0.
$$

THEOREM. Elements x1, ..., xk of a linear space V are linearly dependent if and only if one of them is a linear combination of the others.

Remark. If at least one of the elements x1, ..., xk is equal to zero, then these elements are linearly dependent. If some of the elements x1, ..., xk are linearly dependent, then all these elements are linearly dependent.

Example 4. The elements i1 = (1, 0, ..., 0), i2 = (0, 1, ..., 0), ..., in = (0, 0, ..., 1) of the space R^n (see Example 3) are linearly independent. For any x = (x1, ..., xn) ∈ R^n, the vectors x, i1, ..., in are linearly dependent.
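Example 4 translates directly into a rank computation: n independent rows have rank n, and appending any further vector of R^n cannot raise the rank, so the extended system is dependent. A small check with n = 4:

```python
import numpy as np

n = 4
basis = np.eye(n)                       # rows i_1, ..., i_n
x = np.array([3.0, -1.0, 2.0, 5.0])     # an arbitrary element of R^n

rank_basis = np.linalg.matrix_rank(basis)
rank_ext = np.linalg.matrix_rank(np.vstack([basis, x]))
print(rank_basis)  # 4: i_1, ..., i_n are independent
print(rank_ext)    # still 4: x, i_1, ..., i_n are dependent (5 rows, rank 4)
```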


A basis of a linear space V is defined as any system of linearly independent vectors e1, ..., en such that for any element x of the space V there exist scalars x1, ..., xn such that
$$
x = x_1 e_1 + \cdots + x_n e_n.
$$
This relation is called the representation of an element x in terms of the basis e1, ..., en, and the scalars x1, ..., xn are called the coordinates of the element x in that basis.

UNIQUENESS THEOREM. The representation of any element x ∈ V in terms of a given basis e1, ..., en is unique.

Let e1, ..., en be any basis in V, and let vectors x and y have the coordinates x1, ..., xn and y1, ..., yn in that basis. Then the coordinates of the vector x + y in that basis are x1 + y1, ..., xn + yn, and the coordinates of the vector λx are λx1, ..., λxn for any scalar λ.

Example 5. Any three noncoplanar vectors form a basis in the linear space B3 of all free vectors. The n elements i1 = (1, 0, ..., 0), i2 = (0, 1, ..., 0), ..., in = (0, 0, ..., 1) form a basis in the linear space R^n. Any basis of the linear space {x} from Example 2 consists of a single element; this element can be chosen arbitrarily among the nonzero elements of this space.

A linear space V is said to have dimension n (dim V = n) if it contains n linearly independent elements and any n + 1 of its elements are linearly dependent; V is called infinite-dimensional if for any positive integer N it contains N linearly independent elements.

THEOREM 1. If V is a linear space of dimension n, then any n linearly independent elements of that space form its basis.

THEOREM 2. If a linear space V has a basis consisting of n elements, then dim V = n.

Example 6. The dimension of the space B3 of all free vectors is equal to 3. The dimension of the space R^n is equal to n. The dimension of the space {x} is equal to 1.

Linear spaces V and V′ are said to be isomorphic if there is a one-to-one correspondence between the elements of these spaces such that if x ↔ x′ and y ↔ y′, then x + y ↔ x′ + y′ and λx ↔ λx′ for any scalar λ.

Remark. If linear spaces V and V′ are isomorphic, then the zero element of one space corresponds to the zero element of the other.

THEOREM. Any two n-dimensional real (or complex) linear spaces V and V′ are isomorphic.

5.3.1-3. Affine space.

An affine space is a nonempty set A of elements (called points), together with an associated linear space V, for which the following conditions hold:

I. There is a rule that assigns to every ordered pair of points A, B ∈ A a vector of V, called the vector issuing from the point A with endpoint at B and denoted by $\overrightarrow{AB}$.

II. The following conditions (called axioms of affine space) hold:
1. For every point A ∈ A and every vector a ∈ V, there exists a unique point B such that $\overrightarrow{AB}$ = a.
2. $\overrightarrow{AB} + \overrightarrow{BC} = \overrightarrow{AC}$ for any three points A, B, C ∈ A.


By definition, the dimension of an affine space A is the dimension of the associated linear space V. Any linear space may be regarded as an affine space.

If A = (a1, ..., an) and B = (b1, ..., bn) are points of the affine space R^n, then the corresponding vector $\overrightarrow{AB}$ from the linear space R^n is defined by $\overrightarrow{AB} = (b_1 - a_1, \ldots, b_n - a_n)$.

Let A be an n-dimensional affine space with the associated linear space V. A coordinate system in the affine space A is a fixed point O ∈ A together with a fixed basis e1, ..., en ∈ V. The point O is called the origin of this coordinate system.

One says that the point M has affine coordinates (or simply coordinates) x1, ..., xn in this coordinate system, and one writes M = (x1, ..., xn), if x1, ..., xn are the coordinates of the radius-vector $\overrightarrow{OM}$ in the basis e1, ..., en, i.e., $\overrightarrow{OM} = x_1 e_1 + \cdots + x_n e_n$.

5.3.2 Subspaces of Linear Spaces

5.3.2-1. Concept of a linear subspace and a linear span.

A nonempty subset L of a linear space V is called a linear subspace of V if the following conditions hold: the sum of any two elements of L belongs to L, and the product of any element of L by any scalar belongs to L. The null subspace {0} and the space V itself are called improper subspaces. All other subspaces are called proper subspaces.

Example 1. The subset B2 consisting of all free vectors parallel to a given plane is a subspace in the linear space B3 of all free vectors.

The linear span L(x1, ..., xm) of vectors x1, ..., xm in a linear space V is, by definition, the set of all linear combinations of these vectors, i.e., the set of all vectors of the form
$$
\alpha_1 x_1 + \cdots + \alpha_m x_m,
$$
where α1, ..., αm are arbitrary scalars. The linear span L(x1, ..., xm) is the least subspace of V containing the elements x1, ..., xm.

Suppose that e1, ..., ek is a basis of a k-dimensional subspace of an n-dimensional linear space V. Then this basis can be supplemented by elements ek+1, ..., en of the space V, so that the system e1, ..., ek, ek+1, ..., en forms a basis in the space V.

THEOREM ON THE DIMENSION OF A LINEAR SPAN. The dimension of a linear span L(x1, ..., xm) of elements x1, ..., xm is equal to the maximal number of linearly independent vectors in the system x1, ..., xm.
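In coordinates, the maximal number of independent vectors among x1, ..., xm is the rank of the matrix with rows x1, ..., xm, so the dimension of the span is a rank computation; a small example:

```python
import numpy as np

# dim L(x1, x2, x3) = rank of the matrix with rows x1, x2, x3.
X = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],    # = 2*x1, so it adds nothing to the span
              [0.0, 1.0, 1.0]])
dim_span = np.linalg.matrix_rank(X)
print(dim_span)  # 2
```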

5.3.2-2 Sum and intersection of subspaces


THEOREM. The sum of dimensions of arbitrary subspaces L1 and L2 of a finite-dimensional linear space is equal to the sum of the dimension of their intersection and the dimension of their sum:
$$
\dim L_1 + \dim L_2 = \dim (L_1 \cap L_2) + \dim (L_1 + L_2).
$$

Example 2. Let V be the linear space of all free vectors (in three-dimensional space). Denote by L1 the subspace of all free vectors parallel to the plane OXY, and by L2 the subspace of all free vectors parallel to the plane OXZ. Then the sum of the subspaces L1 and L2 coincides with V, and their intersection consists of all free vectors parallel to the axis OX.
The dimension of each subspace L1 and L2 is equal to two, the dimension of their sum is equal to three, and the dimension of their intersection is equal to unity.
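Example 2 in coordinates, with the subspaces given by spanning rows; the intersection dimension is recovered from the dimension formula rather than computed directly:

```python
import numpy as np

# L1 = span(e_x, e_y) (plane OXY), L2 = span(e_x, e_z) (plane OXZ).
L1 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
L2 = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])

dim1 = np.linalg.matrix_rank(L1)                      # 2
dim2 = np.linalg.matrix_rank(L2)                      # 2
dim_sum = np.linalg.matrix_rank(np.vstack([L1, L2]))  # 3: the whole space
dim_int = dim1 + dim2 - dim_sum                       # 1: the OX axis
print(dim1, dim2, dim_sum, dim_int)  # 2 2 3 1
```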

5.3.2-3. Representation of a linear space as a direct sum of its subspaces.

A linear space V is said to be the direct sum of its subspaces V1 and V2 if each element x ∈ V admits the unique representation x = x1 + x2, where x1 ∈ V1 and x2 ∈ V2. In this case, one writes V = V1 ⊕ V2.

Example 3. The space V of all free vectors (in three-dimensional space) can be represented as the direct sum of the subspace V1 formed by all free vectors parallel to the plane OXY and the subspace V2 formed by all free vectors parallel to the axis OZ.

THEOREM. An n-dimensional space V is a direct sum of its subspaces V1 and V2 if and only if the intersection of V1 and V2 is the null subspace and dim V = dim V1 + dim V2.

Remark. If R is the sum of its subspaces R1 and R2, but not the direct sum, then the representation x = x1 + x2 is nonunique, in general.

5.3.3 Coordinate Transformations Corresponding to Basis Transformations in a Linear Space

5.3.3-1. Basis transformation and its inverse.

Let e1, ..., en and ẽ1, ..., ẽn be two arbitrary bases of an n-dimensional linear space V. Suppose that the elements ẽ1, ..., ẽn are expressed via e1, ..., en by the formulas
$$
\begin{aligned}
\tilde e_1 &= a_{11} e_1 + a_{12} e_2 + \cdots + a_{1n} e_n, \\
\tilde e_2 &= a_{21} e_1 + a_{22} e_2 + \cdots + a_{2n} e_n, \\
&\;\;\vdots \\
\tilde e_n &= a_{n1} e_1 + a_{n2} e_2 + \cdots + a_{nn} e_n.
\end{aligned}
$$
Thus, the transition from the basis e1, ..., en to the basis ẽ1, ..., ẽn is determined by the matrix
$$
A = [a_{ij}] =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}.
$$
The transition from the basis ẽ1, ..., ẽn to the basis e1, ..., en is determined by the matrix B = [b_ij] = A^{-1}. Thus, we can write
$$
\tilde e_i = \sum_{j=1}^{n} a_{ij} e_j, \qquad
e_i = \sum_{j=1}^{n} b_{ij} \tilde e_j \qquad (i = 1, 2, \ldots, n).
$$

Posted: 02/07/2014
