Handbook of Mathematics for Engineers and Scientists, part 31


The integer r satisfying these two conditions is called the rank of the matrix A and is denoted by r = rank(A). Any nonzero rth-order minor of the matrix A is called its basic minor. The rows and the columns whose intersection yields its basic minor are called basic rows and basic columns of the matrix. The rank of a matrix is equal to the maximal number of its linearly independent rows (columns). This implies that for any matrix, the number of its linearly independent rows is equal to the number of its linearly independent columns.

When calculating the rank of a matrix A, one should pass from submatrices of a smaller size to those of a larger size. If, at some step, one finds a submatrix A_k of size k×k such that it has a nonzero kth-order determinant and the (k + 1)st-order determinants of all submatrices of size (k + 1)×(k + 1) containing A_k are equal to zero, then it can be concluded that k is the rank of the matrix A.
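This minor-based procedure can be illustrated with a short Python sketch (a brute-force illustration of the definition, not an efficient algorithm; in practice one would use Gaussian elimination, and the function names below are ours, not the handbook's):

```python
from itertools import combinations

def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def rank(a):
    # Rank = the largest k such that some k-by-k submatrix has a nonzero determinant.
    rows, cols = len(a), len(a[0])
    r = 0
    for k in range(1, min(rows, cols) + 1):
        if not any(det([[a[i][j] for j in ci] for i in ri]) != 0
                   for ri in combinations(range(rows), k)
                   for ci in combinations(range(cols), k)):
            return r  # no nonzero kth-order minor exists: the rank is k - 1
        r = k
    return r

# The second row is the sum of the first and third, so the rank is 2.
A = [[1, 2, 3],
     [3, 3, 3],
     [2, 1, 0]]
print(rank(A))  # 2
```

The refinement described in the text (testing only the (k + 1)-minors that contain an already-found nonzero k-minor) prunes this exhaustive search considerably.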

Properties of the rank of a matrix:

1. For any matrices A and B of the same size, the following inequality holds:

rank(A + B) ≤ rank(A) + rank(B).

2. For a matrix A of size m×n and a matrix B of size n×k, the Sylvester inequality holds:

rank(A) + rank(B) – n ≤ rank(AB) ≤ min{rank(A), rank(B)}.

For a square matrix A of size n×n, the value d = n – rank(A) is called the defect of the matrix A, and A is called a d-fold degenerate matrix. The rank of a nondegenerate square matrix A = [a_ij] of size n×n is equal to n.

THEOREM ON BASIC MINOR. Basic rows (resp., basic columns) of a matrix are linearly independent. Any row (resp., any column) of a matrix is a linear combination of its basic rows (resp., columns).

5.2.2-4 Expression of the determinant in terms of matrix entries

1◦. Consider a system of mutually distinct numbers β1, β2, ..., βn, with each βi taking one of the values 1, 2, ..., n. In this case, the system β1, β2, ..., βn is called a permutation of the set 1, 2, ..., n. If we interchange two elements in a given permutation β1, β2, ..., βn, leaving the remaining n – 2 elements intact, we obtain another permutation, and this transformation of β1, β2, ..., βn is called a transposition. All permutations can be arranged in such an order that the next is obtained from the previous by a single transposition, and one can start from an arbitrary permutation.

Example 2. Let us demonstrate this statement in the case of n = 3 (there are n! = 6 permutations). If we start from the permutation 1 2 3, then we can order all permutations, for instance, like this (at each step two numbers are interchanged):

1 2 3 → 2 1 3 → 3 1 2 → 1 3 2 → 2 3 1 → 3 2 1.

Thus, from any given permutation of n symbols, one can pass to any other permutation by finitely many transpositions.

One says that in a given permutation, the elements βi and βj form an inversion if βi > βj for i < j. The total number of inversions in a permutation β1, β2, ..., βn is denoted by N(β1, β2, ..., βn). A permutation is said to be even if it contains an even number of inversions; otherwise, the permutation is said to be odd.

Example 3. The permutation 4 5 1 3 2 (n = 5) contains N(4 5 1 3 2) = 7 inversions and is, therefore, odd. Any transposition of it (for instance, the one resulting in the permutation 4 3 1 5 2) yields an even permutation.
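Counting inversions directly gives the parity of a permutation; a minimal Python sketch (the function names are ours):

```python
def inversions(perm):
    # Number of pairs (i, j) with i < j whose entries are out of order.
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

def is_even(perm):
    return inversions(perm) % 2 == 0

print(inversions([4, 5, 1, 3, 2]))  # 7 -> odd, as in Example 3
print(inversions([4, 3, 1, 5, 2]))  # 6 -> even, after one transposition
```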

The nth-order determinant of a matrix A = [a_ij] of size n×n can be defined as follows:

det A = Σ_{β1, β2, ..., βn} (–1)^N(β1, β2, ..., βn) a_{β1 1} a_{β2 2} · · · a_{βn n},

where the sum is over all possible permutations β1, β2, ..., βn of the set 1, 2, ..., n.


Example 4. Using the last formula, let us calculate the third-order determinant of the matrix from Example 1. The numbers β1, β2, β3 represent permutations of the set 1, 2, 3. We have

Δ ≡ det A = (–1)^N(1,2,3) a11 a22 a33 + (–1)^N(1,3,2) a11 a32 a23 + (–1)^N(2,1,3) a21 a12 a33
  + (–1)^N(2,3,1) a21 a32 a13 + (–1)^N(3,1,2) a31 a12 a23 + (–1)^N(3,2,1) a31 a22 a13
  = (–1)^0 × 1 × 1 × (–4) + (–1)^1 × 1 × (–1) × 5 + (–1)^1 × 6 × (–1) × (–4)
  + (–1)^2 × 6 × (–1) × 2 + (–1)^2 × 2 × (–1) × 5 + (–1)^3 × 2 × 1 × 2 = –49,

which coincides with the result of Example 1.
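The permutation-sum definition translates directly into code. Reading the entries off the products in Example 4 (a11 = 1, a12 = –1, a13 = 2, a21 = 6, a22 = 1, a23 = 5, a31 = 2, a32 = –1, a33 = –4), a Python sketch reproduces the value –49 (the function names are ours):

```python
from itertools import permutations

def sign(perm):
    # (-1)^N, where N is the number of inversions in perm.
    n_inv = sum(1 for i in range(len(perm))
                  for j in range(i + 1, len(perm))
                  if perm[i] > perm[j])
    return -1 if n_inv % 2 else 1

def det_by_permutations(a):
    # Sum over all permutations beta of sign(beta) * a[beta(1)][1] * ... * a[beta(n)][n].
    n = len(a)
    total = 0
    for beta in permutations(range(n)):
        term = sign(beta)
        for col in range(n):
            term *= a[beta[col]][col]
        total += term
    return total

A = [[1, -1, 2],
     [6, 1, 5],
     [2, -1, -4]]
print(det_by_permutations(A))  # -49
```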

2◦. The nth-order determinant can also be defined as follows:

Δ ≡ det A = Σ_{β1=1}^n Σ_{β2=1}^n · · · Σ_{βn=1}^n δ_{β1 β2 ... βn} a_{β1 1} a_{β2 2} · · · a_{βn n},

where δ_{β1 β2 ... βn} is the Levi-Civita symbol:

δ_{β1 β2 ... βn} =  0, if some of β1, β2, ..., βn coincide,
                    1, if β1, β2, ..., βn form an even permutation,
                   –1, if β1, β2, ..., βn form an odd permutation.

5.2.2-5 Calculation of determinants

1. Determinants of specific matrices are often calculated with the help of the formulas for row expansion or column expansion (see Paragraph 5.2.2-1). For this purpose, it is convenient to take rows or columns containing many zero entries.

2. The determinant of a triangular (upper or lower) or a diagonal matrix is equal to the product of its entries on the main diagonal. In particular, the determinant of the unit matrix is equal to 1.

3. The determinant of a strictly triangular (upper or lower) matrix is equal to zero.

4. For block matrices, the following formula can be used:

| A  B |
| O  C | = det A det C,

where A, B, C are square matrices of size n×n and O is the zero matrix of size n×n.

5◦. The Vandermonde determinant is the determinant of the Vandermonde matrix:

Δ(x1, x2, ..., xn) ≡
| 1          1          · · ·  1          |
| x1         x2         · · ·  xn         |
| x1^2       x2^2       · · ·  xn^2       |
| · · ·      · · ·      · · ·  · · ·      |
| x1^(n–1)   x2^(n–1)   · · ·  xn^(n–1)   |
= Π_{1≤j<i≤n} (xi – xj).
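The product formula makes the Vandermonde determinant cheap to evaluate; the sketch below checks it against a direct determinant computation (a small illustration with our own helper names):

```python
from math import prod

def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def vandermonde_det(xs):
    # Closed form: product of (x_i - x_j) over all pairs with j < i.
    return prod(xs[i] - xs[j] for i in range(len(xs)) for j in range(i))

xs = [1, 2, 4, 7]
V = [[x ** k for x in xs] for k in range(len(xs))]  # row k holds the kth powers
print(vandermonde_det(xs), det(V))  # both 540
```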

5.2.2-6 Determinant of a sum and a product of matrices

The determinant of the product of two matrices A and B of the same size is equal to the product of their determinants,

det(AB) = det A det B.

The determinant of the direct sum of a matrix A of size m×m and a matrix B of size n×n is equal to the product of their determinants,

det(A ⊕ B) = det A det B.


The determinant of the direct (Kronecker) product of a matrix A of size m×m and a matrix B of size n×n is calculated by the formula

det(A ⊗ B) = (det A)^n (det B)^m.
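The direct-product rule can be verified numerically; below is a small Python check with a 2×2 matrix A (m = 2) and a 3×3 matrix B (n = 3), so det(A ⊗ B) = (det A)^3 (det B)^2. The helper names are ours:

```python
def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def kron(a, b):
    # Direct (Kronecker) product of an m x m matrix and an n x n matrix.
    m, n = len(a), len(b)
    return [[a[i // n][j // n] * b[i % n][j % n] for j in range(m * n)]
            for i in range(m * n)]

A = [[1, 2], [3, 5]]                    # det A = -1
B = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]   # det B = 2
print(det(kron(A, B)), det(A) ** 3 * det(B) ** 2)  # both -4
```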

5.2.2-7 Relation between the determinant and the inverse matrix

EXISTENCE THEOREM. A square matrix A is invertible if and only if its determinant is different from zero.

The adjugate (classical adjoint) of a matrix A = [a_ij] of size n×n is the matrix C = [c_ij] of size n×n whose entries coincide with the cofactors of the entries of the transpose A^T, i.e.,

c_ij = A_ji  (i, j = 1, 2, ..., n).     (5.2.2.9)

The inverse of a square matrix A = [a_ij] of size n×n is the matrix of size n×n obtained from the adjugate matrix by dividing all its entries by det A, i.e.,

A^(–1) = ( A11/det A  A21/det A  · · ·  An1/det A )
         ( A12/det A  A22/det A  · · ·  An2/det A )
         ( · · ·       · · ·     · · ·  · · ·     )
         ( A1n/det A  A2n/det A  · · ·  Ann/det A ).     (5.2.2.10)
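Formula (5.2.2.10) can be sketched in exact arithmetic with Python's fractions module (an illustration; the function names are ours):

```python
from fractions import Fraction

def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def cofactor(a, i, j):
    # A_ij: signed determinant of the submatrix with row i and column j deleted.
    minor = [r[:j] + r[j + 1:] for k, r in enumerate(a) if k != i]
    return (-1) ** (i + j) * det(minor)

def inverse(a):
    # Entry (i, j) of A^-1 is the cofactor A_ji divided by det A (note the transposed indices).
    d = Fraction(det(a))
    n = len(a)
    return [[cofactor(a, j, i) / d for j in range(n)] for i in range(n)]

A = [[2, 1], [7, 4]]   # det A = 1, so the inverse equals the adjugate
inv = inverse(A)       # [[4, -1], [-7, 2]], as exact Fractions
```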

JACOBI THEOREM. For minors of the matrix of cofactors of a matrix A, the following relations hold:

| A_{i1 j1}  A_{i1 j2}  · · ·  A_{i1 jk} |
| A_{i2 j1}  A_{i2 j2}  · · ·  A_{i2 jk} |
| · · ·      · · ·      · · ·  · · ·     |
| A_{ik j1}  A_{ik j2}  · · ·  A_{ik jk} | = (det A)^(k–1) A_{i1 i2 ... ik, j1 j2 ... jk}.

5.2.3 Equivalent Matrices. Eigenvalues

5.2.3-1 Equivalence transformation

Matrices A and Ã of size m×n are said to be equivalent if there exist nondegenerate matrices S and T of size m×m and n×n, respectively, such that A and Ã are related by the equivalence transformation

Ã = SAT.

THEOREM. Two matrices of the same size are equivalent if and only if they are of the same rank.

One also calls matrices A and Ã left (right) equivalent if there is a nondegenerate square matrix S such that Ã = SA (respectively, Ã = AS).

If a sequence of such transformations reduces A to the unit matrix, SAT = I, then A = S^(–1)T^(–1) = LU, where L = S^(–1) and U = T^(–1) are lower and upper triangular matrices, respectively. This representation is also called the LU-decomposition.

Any equivalence transformation can be reduced to a sequence of elementary transformations of the following types:

1. Interchange of two rows (columns).
2. Multiplication of a row (column) by a nonzero scalar.
3. Addition to some row (column) of another row (column) multiplied by a scalar.


These elementary transformations are accomplished with the help of elementary matrices obtained from the unit matrix by the corresponding operations with its rows (columns). With the help of elementary transformations, an arbitrary matrix A of rank r > 0 can be reduced to normal (canonical) form, which has a block structure with the unit matrix I of size r×r in the top left corner.

Example 1. The LU-decomposition of the matrix

A = ( 2 1 4 )
    ( 3 2 1 )
    ( 1 3 3 )

can be obtained with the help of the following sequence of elementary transformations, where each Si acts on the rows (premultiplication), each Ti acts on the columns (postmultiplication), and the matrix after the colon is the current result (rows are separated by semicolons):

S1 = (1/2 0 0; 0 1 0; 0 0 1):  (1 1/2 2; 3 2 1; 1 3 3),
S2 = (1 0 0; –3 1 0; 0 0 1):  (1 1/2 2; 0 1/2 –5; 1 3 3),
S3 = (1 0 0; 0 1 0; –1 0 1):  (1 1/2 2; 0 1/2 –5; 0 5/2 1),
T1 = (1 –1/2 0; 0 1 0; 0 0 1) and T2 = (1 0 –2; 0 1 0; 0 0 1):  (1 0 0; 0 1/2 –5; 0 5/2 1),
S4 = (1 0 0; 0 2 0; 0 0 1):  (1 0 0; 0 1 –10; 0 5/2 1),
S5 = (1 0 0; 0 1 0; 0 –5/2 1):  (1 0 0; 0 1 –10; 0 0 26),
T3 = (1 0 0; 0 1 10; 0 0 1):  (1 0 0; 0 1 0; 0 0 26),
S6 = (1 0 0; 0 1 0; 0 0 1/26):  (1 0 0; 0 1 0; 0 0 1).

These transformations amount to the equivalence transformation I = SAT, where

S = S6 S5 S4 S3 S2 S1 = ( 1/2     0      0    )
                        ( –3      2      0    )
                        ( 7/26  –5/26   1/26  )

and T = T1 T2 T3 = ( 1  –1/2  –7 )
                   ( 0   1    10 )
                   ( 0   0     1 ).

Hence, we obtain

L = S^(–1) = ( 2   0    0 )
             ( 3  1/2   0 )
             ( 1  5/2  26 )

and U = T^(–1) = ( 1  1/2    2 )
                 ( 0   1   –10 )
                 ( 0   0     1 ).
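The decomposition in Example 1 (L lower triangular, U upper triangular with unit diagonal) is the Crout variant of LU-factorization; a compact sketch in exact arithmetic (no pivoting, so it assumes the leading entries never vanish; the function name is ours):

```python
from fractions import Fraction

def lu_crout(a):
    # A = L U with L lower triangular and U unit upper triangular.
    n = len(a)
    L = [[Fraction(0)] * n for _ in range(n)]
    U = [[Fraction(0)] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = Fraction(1)
        for i in range(j, n):       # fill column j of L
            L[i][j] = Fraction(a[i][j]) - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):   # fill row j of U
            U[j][i] = (Fraction(a[j][i]) - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

A = [[2, 1, 4], [3, 2, 1], [1, 3, 3]]
L, U = lu_crout(A)  # reproduces the L and U of Example 1
```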

5.2.3-2 Similarity transformation

Two square matrices A and Ã of the same size are said to be similar if there exists a square nondegenerate matrix S of the same size, the so-called transforming matrix, such that A and Ã are related by the similarity transformation

Ã = S^(–1)AS or A = S Ã S^(–1).

Properties of similar matrices:

1. If A and B are square matrices of the same size and C = A + B, then

C̃ = Ã + B̃, i.e., S^(–1)(A + B)S = S^(–1)AS + S^(–1)BS.


2. If A and B are square matrices of the same size and C = AB, then

C̃ = Ã B̃, i.e., S^(–1)(AB)S = (S^(–1)AS)(S^(–1)BS).

3. If A is a square matrix and C = λA, where λ is a scalar, then

C̃ = λÃ, i.e., S^(–1)(λA)S = λ S^(–1)AS.

4. Two similar matrices have the same rank, the same trace, and the same determinant.

Under some additional conditions, there exists a similarity transformation that turns a square matrix A into a diagonal matrix with the eigenvalues of A (see Paragraph 5.2.3-5) on the main diagonal. There are three cases in which a matrix can be reduced to diagonal form:

1. All eigenvalues of A are mutually distinct (see Paragraph 5.2.3-5).
2. The defects of the matrices A – λiI are equal to the multiplicities mi of the corresponding eigenvalues λi (see Paragraph 5.2.3-6). In this case, one says that the matrix has a simple structure.
3. A is a symmetric matrix.
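Property 4 (invariance of trace and determinant under similarity) is easy to check numerically; a minimal 2×2 sketch in pure Python, with S and its inverse written out by hand:

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

def det2(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

A = [[2, 1], [1, 3]]
S = [[1, 1], [0, 1]]
S_inv = [[1, -1], [0, 1]]              # inverse of S, written out by hand
A_sim = matmul(matmul(S_inv, A), S)    # the similar matrix S^-1 A S
print(trace(A), trace(A_sim))          # both 5
print(det2(A), det2(A_sim))            # both 5
```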

For a matrix of general structure, one can only find a similarity transformation that reduces the matrix to the so-called quasidiagonal canonical form or the canonical Jordan form with a quasidiagonal structure. The main diagonal of the latter matrix consists of the eigenvalues of A, each repeated according to its multiplicity. The entries just above the main diagonal are equal either to 1 or 0. The other entries of the matrix are all equal to zero. The matrix in canonical Jordan form is a diagonal block matrix whose blocks form its main diagonal, each block being either a diagonal matrix or a so-called Jordan cell of the form

Λk = ( λk   1    0   · · ·  0  )
     ( 0    λk   1   · · ·  0  )
     ( 0    0    λk  · · ·  0  )
     ( · · ·                   )
     ( 0    0    0   · · ·  λk ).

5.2.3-3 Congruent and orthogonal transformations

Square matrices A and Ã of the same size are said to be congruent if there is a nondegenerate square matrix S such that A and Ã are related by the so-called congruent or congruence transformation

Ã = S^T AS or A = S Ã S^T.

This transformation is characterized by the fact that it preserves the symmetry of the original matrix.

For any symmetric matrix A of rank r, there is a congruent transformation that reduces A to a canonical form, which is a diagonal matrix of the form

Ã = S^T AS = ( Ip                )
             (      –I(r–p)      )
             (               O   ),

where Ip and I(r–p) are unit matrices of size p×p and (r – p)×(r – p). The number p is called the index of the matrix A, and s = p – (r – p) = 2p – r is called its signature.

THEOREM. Two symmetric matrices are congruent if and only if they are of the same rank and have the same index (or signature).

A similarity transformation defined by an orthogonal matrix S (i.e., S^T = S^(–1)) is said to be orthogonal. In this case,

Ã = S^(–1)AS = S^T AS.


Example 2. Consider a three-dimensional orthogonal coordinate system with the axes OX1, OX2, OX3 and a new coordinate system obtained from this one by its rotation by the angle ϕ around the axis OX3, i.e.,

x̃1 = x1 cos ϕ – x2 sin ϕ,  x̃2 = x1 sin ϕ + x2 cos ϕ,  x̃3 = x3.

The matrix of this coordinate transformation has the form

S3 = ( cos ϕ  –sin ϕ  0 )
     ( sin ϕ   cos ϕ  0 )
     (  0       0     1 ).

Rotations of the given coordinate system by the angles ψ and θ around the axes OX1 and OX2, respectively, correspond to the matrices

S1 = ( 1    0       0     )
     ( 0  cos ψ   –sin ψ  )
     ( 0  sin ψ    cos ψ  ),

S2 = (  cos θ   0   sin θ )
     (   0      1    0    )
     ( –sin θ   0   cos θ ).

The matrices S1, S2, S3 are orthogonal (Sj^(–1) = Sj^T).

The transformation that consists of simultaneous rotations around the coordinate axes by the angles ψ, θ, ϕ is defined by the matrix S = S3 S2 S1.
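The orthogonality relation Sj^(–1) = Sj^T can be checked numerically; a small Python sketch for S3 (the helper names are ours):

```python
from math import cos, sin

def rot3(phi):
    # The matrix S3 of Example 2: rotation by phi around the axis OX3.
    return [[cos(phi), -sin(phi), 0.0],
            [sin(phi),  cos(phi), 0.0],
            [0.0,       0.0,      1.0]]

def transpose(a):
    return [list(col) for col in zip(*a)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

S = rot3(0.7)
P = matmul(transpose(S), S)  # S^T S should be the 3x3 identity, i.e. S^T = S^-1
```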

5.2.3-4 Conjunctive and unitary transformations

1◦. Square matrices A and Ã of the same size are said to be conjunctive if there is a nondegenerate matrix S such that A and Ã are related by the conjunctive transformation

Ã = S*AS or A = S Ã S*,

where S* is the adjoint of S.

2◦. A similarity transformation of a matrix A is said to be unitary if it is defined by a unitary matrix S (i.e., S* = S^(–1)). In this case,

Ã = S^(–1)AS = S*AS.

Some basic properties of the above matrix transformations are listed in Table 5.3.

TABLE 5.3
Matrix transformations

Transformation | Ã                  | Invariants
Similarity     | S^(–1)AS           | Rank, determinant, eigenvalues
Congruent      | S^T AS             | Rank and symmetry
Orthogonal     | S^(–1)AS = S^T AS  | Rank, determinant, eigenvalues, and symmetry
Conjunctive    | S*AS               | Rank and self-adjointness
Unitary        | S^(–1)AS = S*AS    | Rank, determinant, eigenvalues, and self-adjointness

5.2.3-5 Eigenvalues and spectra of square matrices

An eigenvalue of a square matrix A is any real or complex λ for which the matrix F(λ) = A – λI is degenerate. The set of all eigenvalues of a matrix A is called its spectrum, and F(λ) is called its characteristic matrix. The inverse of an eigenvalue, μ = 1/λ, is called a characteristic value.


Properties of the spectrum of a matrix:

1. Similar matrices have the same spectrum.
2. If λ is an eigenvalue of a normal matrix A (see Paragraph 5.2.1-3), then λ̄ is an eigenvalue of the matrix A*; Re λ is an eigenvalue of the matrix H1 = (1/2)(A + A*); and Im λ is an eigenvalue of the matrix H2 = (1/(2i))(A – A*).
3. All eigenvalues of a normal matrix are real if and only if this matrix is similar to a Hermitian matrix.
4. All eigenvalues of a unitary matrix have absolute values equal to 1.
5. A square matrix is nondegenerate if and only if all its eigenvalues are different from zero.

A nonzero (column) vector X (see Paragraphs 5.2.1-1 and 5.2.1-2) satisfying the condition

AX = λX

is called an eigenvector of the matrix A corresponding to the eigenvalue λ. Eigenvectors corresponding to distinct eigenvalues of A are linearly independent.

5.2.3-6 Reduction of a square matrix to triangular form

THEOREM. For any square matrix A there exists a similarity transformation Ã = S^(–1)AS such that Ã is a triangular matrix.

The diagonal entries of any triangular matrix similar to a square matrix A of size n×n coincide with the eigenvalues of A; each eigenvalue λi of A occurs mi ≥ 1 times on the diagonal. The positive integer mi is called the algebraic multiplicity of the eigenvalue λi. Note that

Σi mi = n.

The trace Tr(A) is equal to the sum of all eigenvalues of A, each eigenvalue counted according to its multiplicity, i.e.,

Tr(A) = Σi mi λi.

The determinant det A is equal to the product of all eigenvalues of A, each eigenvalue counted according to its multiplicity, i.e.,

det A = Πi λi^mi.
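For a 2×2 matrix the eigenvalues come from the characteristic polynomial λ² – Tr(A)λ + det A = 0, which makes the two relations easy to verify by hand or in code (a sketch for the real-spectrum case; the function name is ours):

```python
from math import sqrt

def eig2(a):
    # Roots of lambda^2 - Tr(A) * lambda + det(A) = 0 (assumes a real spectrum).
    tr = a[0][0] + a[1][1]
    d = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    disc = sqrt(tr * tr - 4 * d)
    return (tr + disc) / 2, (tr - disc) / 2

A = [[2, 1], [1, 2]]       # eigenvalues 3 and 1
l1, l2 = eig2(A)
print(l1 + l2, l1 * l2)    # trace 4.0 and determinant 3.0
```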

5.2.3-7 Reduction of a square matrix to diagonal form

THEOREM 1. If A is a square matrix similar to some normal matrix, then there is a similarity transformation Ã = S^(–1)AS such that the matrix Ã is diagonal.

THEOREM 2. Two Hermitian matrices A and B can be reduced to diagonal form by the same similarity transformation if and only if AB = BA.

THEOREM 3. For any Hermitian matrix A, there is a nondegenerate matrix S such that Ã = S*AS is a diagonal matrix. The entries of Ã are real.

THEOREM 4. For any real symmetric matrix A, there is a real nondegenerate matrix S such that Ã = S^T AS is a diagonal matrix.
