Handbook of Mathematics for Engineers and Scientists, Part 30




Remark. If a matrix is real (i.e., all its entries are real), then its transpose and its adjoint coincide.

A square matrix A is said to be normal if A^*A = AA^*. A normal matrix A is said to be unitary if A^*A = AA^* = I, i.e., A^* = A^{-1} (see Paragraph 5.2.1-6).

5.2.1-4 Trace of a matrix

The trace of a square matrix A ≡ [a_{ij}] of size n×n is the sum S of its diagonal entries,

S = \operatorname{Tr}(A) = \sum_{i=1}^{n} a_{ii}.

If λ is a scalar and the square matrices A and B have the same size, then

\operatorname{Tr}(A + B) = \operatorname{Tr}(A) + \operatorname{Tr}(B),  \operatorname{Tr}(λA) = λ\operatorname{Tr}(A),  \operatorname{Tr}(AB) = \operatorname{Tr}(BA).
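These identities are easy to check numerically. Below is a minimal NumPy sketch (the matrices and the scalar are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
lam = 2.5

# Tr(A + B) = Tr(A) + Tr(B)
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
# Tr(lam*A) = lam*Tr(A)
assert np.isclose(np.trace(lam * A), lam * np.trace(A))
# Tr(AB) = Tr(BA), even though AB != BA in general
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```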

5.2.1-5 Linear dependence of row vectors (column vectors)

A row vector (column vector) B is a linear combination of row vectors (column vectors) A_1, …, A_k if there exist scalars α_1, …, α_k such that

B = α_1 A_1 + ⋯ + α_k A_k.

Row vectors (column vectors) A_1, …, A_k are said to be linearly dependent if there exist scalars α_1, …, α_k (α_1^2 + ⋯ + α_k^2 ≠ 0) such that

α_1 A_1 + ⋯ + α_k A_k = O,

where O is the zero row vector (column vector).

Row vectors (column vectors) A_1, …, A_k are said to be linearly independent if, for any α_1, …, α_k (α_1^2 + ⋯ + α_k^2 ≠ 0), we have

α_1 A_1 + ⋯ + α_k A_k ≠ O.

THEOREM. Row vectors (column vectors) A_1, …, A_k are linearly dependent if and only if one of them is a linear combination of the others.
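In practice, linear dependence is tested by stacking the vectors into a matrix and comparing its rank (see Paragraph 5.2.2-3) with the number of vectors; a minimal NumPy sketch with illustrative vectors:

```python
import numpy as np

def linearly_dependent(vectors):
    """Return True if the given row vectors are linearly dependent."""
    M = np.vstack(vectors)
    return np.linalg.matrix_rank(M) < M.shape[0]

A1 = np.array([1.0, 0.0, 2.0])
A2 = np.array([0.0, 1.0, 1.0])
A3 = A1 + 2 * A2  # a linear combination of A1 and A2

print(linearly_dependent([A1, A2, A3]))  # True
print(linearly_dependent([A1, A2]))      # False
```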

5.2.1-6 Inverse matrices

Let A be a square matrix of size n×n, and let I be the unit matrix of the same size.

A square matrix B of size n×n is called a right inverse of A if AB = I. A square matrix C of size n×n is called a left inverse of A if CA = I. If one of the matrices B or C exists, then the other exists too, and the two matrices coincide. In this case, the matrix A is said to be nondegenerate (nonsingular).

THEOREM. A square matrix is nondegenerate if and only if its rows (columns) are linearly independent.

Remark. Generally, instead of the terms “left inverse matrix” and “right inverse matrix,” the term “inverse matrix” is used with regard to the matrix B = A^{-1} for a nondegenerate matrix A, since AB = BA = I.

UNIQUENESS THEOREM. The matrix A^{-1} is the unique matrix satisfying the condition AA^{-1} = A^{-1}A = I for a given nondegenerate matrix A.

Remark. For the existence theorem, see Paragraph 5.2.2-7.


Properties of inverse matrices:

(AB)^{-1} = B^{-1}A^{-1},  (λA)^{-1} = λ^{-1}A^{-1},
(A^{-1})^{-1} = A,  (A^{-1})^T = (A^T)^{-1},  (A^{-1})^* = (A^*)^{-1},

where the square matrices A and B are assumed to be nondegenerate and the scalar λ ≠ 0.

The problem of finding the inverse matrix is considered in Paragraphs 5.2.2-7, 5.2.4-5, and 5.5.2-3.
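The properties above are easy to verify numerically; a NumPy sketch (random matrices of this kind are almost surely nondegenerate, which we assume here):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
lam = 3.0
Ainv = np.linalg.inv(A)

# AA^{-1} = A^{-1}A = I
assert np.allclose(A @ Ainv, np.eye(3)) and np.allclose(Ainv @ A, np.eye(3))
# (AB)^{-1} = B^{-1}A^{-1}
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv)
# (lam*A)^{-1} = lam^{-1}*A^{-1}
assert np.allclose(np.linalg.inv(lam * A), Ainv / lam)
# (A^{-1})^T = (A^T)^{-1}
assert np.allclose(Ainv.T, np.linalg.inv(A.T))
```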

5.2.1-7 Powers of matrices

A product of several matrices equal to one and the same matrix A can be written as a positive integer power of the matrix A: AA = A^2, AAA = A^2A = A^3, etc. For a positive integer k, one defines A^k = A^{k-1}A as the kth power of A. For a nondegenerate matrix A, one defines A^0 = AA^{-1} = I and A^{-k} = (A^{-1})^k. Powers of a matrix have the following properties:

A^p A^q = A^{p+q},  (A^p)^q = A^{pq},

where p and q are arbitrary positive integers and A is an arbitrary square matrix; or p and q are arbitrary integers and A is an arbitrary nondegenerate matrix.

There exist matrices A whose positive integer power A^k is equal to the zero matrix even if A ≠ O. If A^k = O for some integer k > 1, then A is called a nilpotent matrix.

A matrix A is said to be involutive if it coincides with its inverse: A = A^{-1}, or A^2 = I.
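For instance, every strictly triangular matrix is nilpotent; a small NumPy illustration:

```python
import numpy as np

# A != O, yet A^3 = O, so A is nilpotent.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

print(np.linalg.matrix_power(A, 2))  # still nonzero
print(np.linalg.matrix_power(A, 3))  # the zero matrix
```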

5.2.1-8 Polynomials and matrices. Basic functions of matrices

A polynomial with matrix argument is the expression obtained from a scalar polynomial f(x) by replacing the scalar argument x with a square matrix X:

f(X) = a_0 I + a_1 X + a_2 X^2 + \cdots,

where a_i (i = 0, 1, 2, …) are real or complex coefficients. The polynomial f(X) is a square matrix of the same size as X.
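Such a polynomial can be evaluated with Horner's scheme; a short illustrative Python sketch (the coefficients and the matrix X are made up for the example):

```python
import numpy as np

def matrix_polynomial(coeffs, X):
    """Evaluate f(X) = a0*I + a1*X + a2*X^2 + ... by Horner's scheme.

    coeffs = [a0, a1, a2, ...] are scalar coefficients.
    """
    n = X.shape[0]
    result = np.zeros_like(X, dtype=float)
    for a in reversed(coeffs):
        result = result @ X + a * np.eye(n)
    return result

X = np.array([[1.0, 2.0],
              [0.0, 3.0]])
# f(x) = 2 + x + x^2, so f(X) = 2I + X + X^2
print(matrix_polynomial([2.0, 1.0, 1.0], X))
```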

A polynomial with matrix coefficients is an expression obtained from a polynomial f(x) by replacing its coefficients a_i (i = 0, 1, 2, …) with matrices A_i (i = 0, 1, 2, …) of the same size:

F(x) = A_0 + A_1 x + A_2 x^2 + \cdots.

Example 3. For the matrix

A = \begin{pmatrix} 4 & -8 & 1 \\ 5 & -9 & 1 \\ 4 & -6 & -1 \end{pmatrix},

the characteristic matrix (see Paragraph 5.2.3-2) is a polynomial with matrix coefficients and argument λ:

F(λ) ≡ A - λI = A_0 + A_1 λ = \begin{pmatrix} 4-λ & -8 & 1 \\ 5 & -9-λ & 1 \\ 4 & -6 & -1-λ \end{pmatrix},

where

A_0 = A = \begin{pmatrix} 4 & -8 & 1 \\ 5 & -9 & 1 \\ 4 & -6 & -1 \end{pmatrix},  A_1 = -I = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}.

The corresponding adjugate matrix (see Paragraph 5.2.2-7) can also be represented as a polynomial with matrix coefficients:

G(λ) = \begin{pmatrix} λ^2+10λ+15 & -8λ-14 & λ+1 \\ 5λ+9 & λ^2-3λ-8 & λ+1 \\ 4λ+6 & -6λ-8 & λ^2+5λ+4 \end{pmatrix} = A_0 + A_1 λ + A_2 λ^2,

where

A_0 = \begin{pmatrix} 15 & -14 & 1 \\ 9 & -8 & 1 \\ 6 & -8 & 4 \end{pmatrix},  A_1 = \begin{pmatrix} 10 & -8 & 1 \\ 5 & -3 & 1 \\ 4 & -6 & 5 \end{pmatrix},  A_2 = I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.

The variable x in a polynomial with matrix coefficients can be replaced by a matrix X, which yields a polynomial of matrix argument with matrix coefficients. In this situation, one distinguishes between the “left” and the “right” values:

F(X) = A_0 + A_1 X + A_2 X^2 + \cdots,  \widetilde{F}(X) = A_0 + XA_1 + X^2 A_2 + \cdots.

The exponential function of a square matrix X can be represented as the following convergent series:

e^X = I + X + \frac{X^2}{2!} + \frac{X^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{X^k}{k!}.

The inverse matrix has the form

(e^X)^{-1} = e^{-X} = I - X + \frac{X^2}{2!} - \frac{X^3}{3!} + \cdots = \sum_{k=0}^{\infty} (-1)^k \frac{X^k}{k!}.

Remark. Note that e^X e^Y ≠ e^Y e^X in general. The relation e^X e^Y = e^{X+Y} holds only for commuting matrices X and Y.

Some other functions of matrices can be expressed in terms of the exponential function:

\sin X = \frac{1}{2i}(e^{iX} - e^{-iX}),  \cos X = \frac{1}{2}(e^{iX} + e^{-iX}),
\sinh X = \frac{1}{2}(e^{X} - e^{-X}),  \cosh X = \frac{1}{2}(e^{X} + e^{-X}).
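These relations can be checked with SciPy's matrix exponential (this sketch assumes SciPy is available; the matrices are illustrative):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

# (e^X)^{-1} = e^{-X}
assert np.allclose(np.linalg.inv(expm(X)), expm(-X))
# e^X e^Y != e^{X+Y} for non-commuting X and Y ...
print(np.allclose(expm(X) @ expm(Y), expm(X + Y)))  # almost surely False
# ... but equality holds for commuting matrices, e.g., Y = X^2:
assert np.allclose(expm(X) @ expm(X @ X), expm(X + X @ X))
```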

5.2.1-9 Decomposition of matrices

THEOREM 1. For any square matrix A, the matrix S_1 = \frac{1}{2}(A + A^T) is symmetric and the matrix S_2 = \frac{1}{2}(A - A^T) is skew-symmetric. The representation of A as the sum of a symmetric and a skew-symmetric matrix is unique: A = S_1 + S_2.

THEOREM 2. For any square matrix A, the matrices H_1 = \frac{1}{2}(A + A^*) and H_2 = \frac{1}{2i}(A - A^*) are Hermitian, and the matrix iH_2 is skew-Hermitian. The representation of A as the sum of a Hermitian and a skew-Hermitian matrix is unique: A = H_1 + iH_2.

THEOREM 3. For any square matrix A, the matrices AA^* and A^*A are nonnegative Hermitian matrices (see Paragraph 5.7.3-1).

THEOREM 4. Any square matrix A admits a polar decomposition

A = QU and A = U_1 Q_1,

where Q and Q_1 are nonnegative Hermitian matrices, Q^2 = AA^* and Q_1^2 = A^*A, and U and U_1 are unitary matrices. The matrices Q and Q_1 are always unique, while the matrices U and U_1 are unique only in the case of a nondegenerate A.
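A numerical sketch of Theorems 1 and 4 for a real matrix (it assumes SciPy is available; scipy.linalg.polar with side='left' returns the factors of A = QU in the order U, Q):

```python
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))

# Theorem 1: A = S1 + S2 with S1 symmetric and S2 skew-symmetric.
S1 = 0.5 * (A + A.T)
S2 = 0.5 * (A - A.T)
assert np.allclose(S1, S1.T) and np.allclose(S2, -S2.T)
assert np.allclose(A, S1 + S2)

# Theorem 4: polar decomposition A = QU, Q nonnegative Hermitian, U unitary.
U, Q = polar(A, side='left')
assert np.allclose(A, Q @ U)
assert np.allclose(Q @ Q, A @ A.T)  # Q^2 = AA^* (A^* = A^T for real A)
```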


5.2.1-10 Block matrices.

Let us split a given matrix A ≡ [a_{ij}] (i = 1, 2, …, m; j = 1, 2, …, n) of size m×n into separate rectangular cells with the help of (M - 1) horizontal and (N - 1) vertical lines. Each cell is a matrix A_{αβ} ≡ [a_{ij}] (i = i_α, i_α + 1, …, i_α + m_α - 1; j = j_β, j_β + 1, …, j_β + n_β - 1) of size m_α×n_β and is called a block of the matrix A. Here i_α = m_{α-1} + i_{α-1} and j_β = n_{β-1} + j_{β-1}.

Then the given matrix A can be regarded as a new matrix whose entries are the blocks: A ≡ [A_{αβ}] (α = 1, 2, …, M; β = 1, 2, …, N). This matrix is called a block matrix.

Example 4. The matrix

A = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\ a_{31} & a_{32} & a_{33} & a_{34} & a_{35} \\ a_{41} & a_{42} & a_{43} & a_{44} & a_{45} \\ a_{51} & a_{52} & a_{53} & a_{54} & a_{55} \end{pmatrix}

can be regarded as the block matrix

A ≡ \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}

of size 2×2 with the entries being the blocks

A_{11} ≡ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix},  A_{12} ≡ \begin{pmatrix} a_{14} & a_{15} \\ a_{24} & a_{25} \end{pmatrix},
A_{21} ≡ \begin{pmatrix} a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} \\ a_{51} & a_{52} & a_{53} \end{pmatrix},  A_{22} ≡ \begin{pmatrix} a_{34} & a_{35} \\ a_{44} & a_{45} \\ a_{54} & a_{55} \end{pmatrix}

of size 2×3, 2×2, 3×3, 3×2, respectively.

Basic operations with block matrices are practically the same as those with common matrices, the role of the entries being played by blocks:
1. For matrices A ≡ [a_{ij}] ≡ [A_{αβ}] and B ≡ [b_{ij}] ≡ [B_{αβ}] of the same size and the same block structure, their sum C ≡ [C_{αβ}] = [A_{αβ} + B_{αβ}] is a matrix of the same size and the same block structure.
2. For a matrix A ≡ [a_{ij}] of size m×n regarded as a block matrix A ≡ [A_{αβ}] of size M×N, multiplication by a scalar is defined by λA = [λA_{αβ}] = [λa_{ij}].
3. Let A ≡ [a_{ik}] ≡ [A_{αγ}] and B ≡ [b_{kj}] ≡ [B_{γβ}] be two block matrices such that the number of columns of each block A_{αγ} is equal to the number of rows of the block B_{γβ}. Then the product of the matrices A and B can be regarded as the block matrix C ≡ [C_{αβ}] = [\sum_γ A_{αγ} B_{γβ}] (see the sketch after this list).
4. For a matrix A ≡ [a_{ij}] of size m×n regarded as a block matrix A ≡ [A_{αβ}] of size M×N, the transpose has the form A^T = [A^T_{βα}].
5. For a matrix A ≡ [a_{ij}] of size m×n regarded as a block matrix A ≡ [A_{αβ}] of size M×N, the adjoint matrix has the form A^* = [A^*_{βα}].
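Rule 3 is blocked matrix multiplication as used in practice; a minimal NumPy check with 4×4 matrices partitioned into 2×2 blocks (illustrative random data):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def blocks(M):
    """Partition a 4x4 matrix into four 2x2 blocks."""
    return M[:2, :2], M[:2, 2:], M[2:, :2], M[2:, 2:]

A11, A12, A21, A22 = blocks(A)
B11, B12, B21, B22 = blocks(B)

# C_{alpha beta} = sum_gamma A_{alpha gamma} B_{gamma beta}
C = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
              [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
assert np.allclose(C, A @ B)  # blockwise product equals the ordinary product
```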

Let A be a nondegenerate matrix of size n×n represented as the block matrix

A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},

where A_{11} and A_{22} are square matrices of size p×p and q×q, respectively (p + q = n). Then the following relations, called the Frobenius formulas, hold:

A^{-1} = \begin{pmatrix} A_{11}^{-1} + A_{11}^{-1} A_{12} N A_{21} A_{11}^{-1} & -A_{11}^{-1} A_{12} N \\ -N A_{21} A_{11}^{-1} & N \end{pmatrix},

A^{-1} = \begin{pmatrix} K & -K A_{12} A_{22}^{-1} \\ -A_{22}^{-1} A_{21} K & A_{22}^{-1} + A_{22}^{-1} A_{21} K A_{12} A_{22}^{-1} \end{pmatrix}.

Here N = (A_{22} - A_{21} A_{11}^{-1} A_{12})^{-1} and K = (A_{11} - A_{12} A_{22}^{-1} A_{21})^{-1}; in the first formula, the matrix A_{11} is assumed nondegenerate, and in the second formula, A_{22} is assumed nondegenerate.
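The first Frobenius formula is easy to verify numerically; a NumPy sketch with an illustrative 5×5 matrix split into blocks with p = 2, q = 3:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 5))
p = 2
A11, A12 = A[:p, :p], A[:p, p:]
A21, A22 = A[p:, :p], A[p:, p:]

inv = np.linalg.inv
N = inv(A22 - A21 @ inv(A11) @ A12)  # inverse of the Schur complement

Ainv = np.block([
    [inv(A11) + inv(A11) @ A12 @ N @ A21 @ inv(A11), -inv(A11) @ A12 @ N],
    [-N @ A21 @ inv(A11),                            N],
])
assert np.allclose(Ainv, inv(A))
```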


The direct sum of two square matrices A and B of size m×m and n×n, respectively, is the block matrix

C = A ⊕ B = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}

of size (m + n)×(m + n).

Properties of the direct sum of matrices:
1. For any square matrices A, B, and C, the following relations hold:
(A ⊕ B) ⊕ C = A ⊕ (B ⊕ C) (associativity),
Tr(A ⊕ B) = Tr(A) + Tr(B) (trace property).
2. For nondegenerate square matrices A and B, the following relation holds:
(A ⊕ B)^{-1} = A^{-1} ⊕ B^{-1}.
3. For square matrices A_m, B_m of size m×m and square matrices A_n, B_n of size n×n, the following relations hold:
(A_m ⊕ A_n) + (B_m ⊕ B_n) = (A_m + B_m) ⊕ (A_n + B_n),
(A_m ⊕ A_n)(B_m ⊕ B_n) = A_m B_m ⊕ A_n B_n.
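In SciPy the direct sum is available as block_diag; a short check of the trace and inverse properties (assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(6)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))

C = block_diag(A, B)  # the direct sum A ⊕ B
assert np.isclose(np.trace(C), np.trace(A) + np.trace(B))
# (A ⊕ B)^{-1} = A^{-1} ⊕ B^{-1}
assert np.allclose(np.linalg.inv(C),
                   block_diag(np.linalg.inv(A), np.linalg.inv(B)))
```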

5.2.1-11 Kronecker product of matrices

The Kronecker product of two matrices A ≡ [a_{i_a j_a}] and B ≡ [b_{i_b j_b}] of size m_a×n_a and m_b×n_b, respectively, is the matrix C ≡ [c_{kh}] of size m_a m_b × n_a n_b with entries

c_{kh} = a_{i_a j_a} b_{i_b j_b}  (k = 1, 2, …, m_a m_b;  h = 1, 2, …, n_a n_b),

where the index k is the serial number of the pair (i_a, i_b) in the sequence (1, 1), (1, 2), …, (1, m_b), (2, 1), (2, 2), …, (m_a, m_b), and the index h is the serial number of the pair (j_a, j_b) in a similar sequence. This Kronecker product can be represented as the block matrix C = [a_{i_a j_a} B].

Note that if A and B are square matrices, the number of rows in C is equal to the number of rows in A, and the number of rows in D is equal to the number of rows in B, then

(A ⊗ B)(C ⊗ D) = AC ⊗ BD.

The following relations hold:

(A ⊗ B)^T = A^T ⊗ B^T,  Tr(A ⊗ B) = Tr(A) Tr(B).
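NumPy's kron implements the Kronecker product with exactly this block layout; a sketch verifying the relations above (illustrative matrices with compatible shapes):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))  # square
B = rng.standard_normal((2, 2))  # square
C = rng.standard_normal((3, 2))  # as many rows as A
D = rng.standard_normal((2, 4))  # as many rows as B

# (A ⊗ B)(C ⊗ D) = AC ⊗ BD
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
# (A ⊗ B)^T = A^T ⊗ B^T and Tr(A ⊗ B) = Tr(A)Tr(B)
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
assert np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))
```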

5.2.2 Determinants

5.2.2-1 Notion of determinant

With each square matrix A ≡ [a_{ij}] of size n×n one can associate a numerical characteristic, called its determinant. The determinant of such a matrix can be defined by induction with respect to the size n.

For a matrix of size 1×1 (n = 1), the first-order determinant is equal to its only entry, Δ ≡ det A = a_{11}. For a matrix of size 2×2 (n = 2), the second-order determinant is equal to the difference between the product of its entries on the main diagonal and the product of its entries on the secondary diagonal:

Δ ≡ det A ≡ \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11} a_{22} - a_{12} a_{21}.


For a matrix of size 3×3 (n = 3), the third-order determinant is

Δ ≡ det A ≡ \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{21}a_{32}a_{13} - a_{13}a_{22}a_{31} - a_{12}a_{21}a_{33} - a_{23}a_{32}a_{11}.

This expression is obtained by the triangle rule (Sarrus scheme), illustrated by the following diagrams, where entries occurring in the same product with a given sign are joined by segments:

[Diagrams of the triangle rule: the “+” diagram joins the main diagonal and the two triangles with sides parallel to it; the “-” diagram joins the secondary diagonal and the two triangles with sides parallel to it.]

For a matrix of size n×n (n > 2), the nth-order determinant is defined as follows, under the assumption that the (n-1)st-order determinant has already been defined for a matrix of size (n-1)×(n-1).

Consider a matrix A = [a_{ij}] of size n×n. The minor M^i_j corresponding to an entry a_{ij} is defined as the (n-1)st-order determinant of the matrix of size (n-1)×(n-1) obtained from the original matrix A by removing the ith row and the jth column (i.e., the row and the column whose intersection contains the entry a_{ij}). The cofactor A^i_j of the entry a_{ij} is defined by A^i_j = (-1)^{i+j} M^i_j (i.e., it coincides with the corresponding minor if i + j is even, and is the opposite of the minor if i + j is odd).

The nth-order determinant of the matrix A is defined by

Δ ≡ det A ≡ \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} = \sum_{k=1}^{n} a_{ik} A^i_k = \sum_{k=1}^{n} a_{kj} A^k_j.

The first sum is called the ith row expansion of the determinant of A, and the second sum is called the jth column expansion of the determinant of A.

Example 1. Let us find the third-order determinant of the matrix

A = \begin{pmatrix} 1 & -1 & 2 \\ 6 & 1 & 5 \\ 2 & -1 & -4 \end{pmatrix}.

To this end, we use the second-column expansion of the determinant:

det A = \sum_{i=1}^{3} (-1)^{i+2} a_{i2} M^i_2 = (-1)^{1+2} × (-1) × \begin{vmatrix} 6 & 5 \\ 2 & -4 \end{vmatrix} + (-1)^{2+2} × 1 × \begin{vmatrix} 1 & 2 \\ 2 & -4 \end{vmatrix} + (-1)^{3+2} × (-1) × \begin{vmatrix} 1 & 2 \\ 6 & 5 \end{vmatrix}
= 1 × [6 × (-4) - 5 × 2] + 1 × [1 × (-4) - 2 × 2] + 1 × [1 × 5 - 2 × 6] = -49.
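The inductive definition translates directly into code. An educational Python sketch of the first-row cofactor expansion, checked against Example 1 (np.linalg.det is the practical tool, since the expansion costs O(n!)):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by first-row cofactor expansion (educational only)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        # Minor: delete row 0 and column k; sign (-1)^{1+(k+1)} = (-1)^k.
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)
        total += (-1) ** k * A[0, k] * det_cofactor(minor)
    return total

A = np.array([[1.0, -1.0,  2.0],
              [6.0,  1.0,  5.0],
              [2.0, -1.0, -4.0]])
print(det_cofactor(A))   # -49.0, as in Example 1
print(np.linalg.det(A))  # the same value, up to rounding
```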

5.2.2-2 Properties of determinants

Basic properties:
1. Invariance with respect to transposition of matrices: det A = det A^T.
2. Antisymmetry with respect to the permutation of two rows (or columns): if two rows (columns) of a matrix are interchanged, its determinant preserves its absolute value but changes its sign.


3. Linearity with respect to a row (or column) of the corresponding matrix: suppose that the ith row of a matrix A ≡ [a_{ij}] is a linear combination of two row vectors, (a_{i1}, …, a_{in}) = λ(b_1, …, b_n) + μ(c_1, …, c_n); then

det A = λ det A_b + μ det A_c,

where A_b and A_c are the matrices obtained from A by replacing its ith row with (b_1, …, b_n) and (c_1, …, c_n), respectively. This fact, together with the first property, implies that a similar linearity relation holds if a column of the matrix A is a linear combination of two column vectors.

Some useful corollaries of the basic properties:
1. The determinant of a matrix with two equal rows (columns) is equal to zero.
2. If all entries of a row are multiplied by λ, the determinant of the resulting matrix is multiplied by λ.
3. If a matrix contains a row (column) consisting of zeros, then its determinant is equal to zero.
4. If a matrix has two proportional rows (columns), its determinant is equal to zero.
5. If a matrix has a row (column) that is a linear combination of its other rows (columns), its determinant is equal to zero.
6. The determinant of a matrix does not change if a linear combination of some of its rows is added to another row of that matrix.

THEOREM (NECESSARY AND SUFFICIENT CONDITION FOR A MATRIX TO BE DEGENERATE). The determinant of a square matrix is equal to zero if and only if its rows (columns) are linearly dependent.

5.2.2-3 Minors. Basic minors. Rank and defect of a matrix

Let A ≡ [a_{ij}] be a matrix of size n×n. Its mth-order (m ≤ n) minor of the first kind, denoted by M^{i_1 i_2 … i_m}_{j_1 j_2 … j_m}, is the mth-order determinant of a submatrix obtained from A by removing some n - m of its rows and n - m of its columns. Here, i_1, i_2, …, i_m are the indices of the rows and j_1, j_2, …, j_m are the indices of the columns involved in that submatrix. The (n - m)th-order minor of the second kind, denoted by \bar{M}^{i_1 i_2 … i_m}_{j_1 j_2 … j_m}, is the (n - m)th-order determinant of the submatrix obtained from A by removing the rows and the columns involved in M^{i_1 i_2 … i_m}_{j_1 j_2 … j_m}. The cofactor of the minor M^{i_1 i_2 … i_m}_{j_1 j_2 … j_m} is defined by

A^{i_1 i_2 … i_m}_{j_1 j_2 … j_m} = (-1)^{i_1 + i_2 + ⋯ + i_m + j_1 + j_2 + ⋯ + j_m} \bar{M}^{i_1 i_2 … i_m}_{j_1 j_2 … j_m}.

Remark. Minors of the first kind can be introduced for any rectangular matrix A ≡ [a_{ij}] of size m×n. Its kth-order (k ≤ min{m, n}) minor M^{i_1 i_2 … i_k}_{j_1 j_2 … j_k} is the determinant of the submatrix obtained from A by removing some m - k of its rows and n - k of its columns.

LAPLACE THEOREM. Given m rows with indices i_1, …, i_m (or m columns with indices j_1, …, j_m) of a square matrix A, its determinant Δ is equal to the sum of the products of all mth-order minors M^{i_1 i_2 … i_m}_{j_1 j_2 … j_m} in those rows (resp., columns) and their cofactors A^{i_1 i_2 … i_m}_{j_1 j_2 … j_m}, i.e.,

Δ ≡ det A = \sum_{j_1, j_2, …, j_m} M^{i_1 i_2 … i_m}_{j_1 j_2 … j_m} A^{i_1 i_2 … i_m}_{j_1 j_2 … j_m} = \sum_{i_1, i_2, …, i_m} M^{i_1 i_2 … i_m}_{j_1 j_2 … j_m} A^{i_1 i_2 … i_m}_{j_1 j_2 … j_m}.

Here, in the first sum i_1, …, i_m are fixed, and in the second sum j_1, …, j_m are fixed.

Let A ≡ [a_{ij}] be a matrix of size m×n with at least one nonzero entry. Then there is a positive integer r ≤ min{m, n} for which the following conditions hold (see the sketch below):

(i) the matrix A has an rth-order nonzero minor, and
(ii) every minor of A of order r + 1 and higher (if it exists) is equal to zero.
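This characterization can be tested directly, if inefficiently, by enumerating minors; an illustrative Python sketch (np.linalg.matrix_rank is the practical tool):

```python
import numpy as np
from itertools import combinations

def rank_by_minors(A):
    """Largest order r of a nonzero minor of A (educational only)."""
    m, n = A.shape
    for r in range(min(m, n), 0, -1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if not np.isclose(np.linalg.det(A[np.ix_(rows, cols)]), 0.0):
                    return r
    return 0

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # proportional to the first row
              [0.0, 1.0, 1.0]])
print(rank_by_minors(A), np.linalg.matrix_rank(A))  # 2 2
```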
