
Handbook of Mathematics for Engineers and Scientists (Part 34)



5.5.2-2 General square system of linear equations.

A square system of linear equations has the form

$$AX = B, \qquad (5.5.2.2)$$

where A is a square matrix.

1◦ If the determinant of system (5.5.2.2) is different from zero, i.e., det A ≠ 0, then the system has a unique solution,

$$X = A^{-1}B.$$

2◦ Cramer rule. If the determinant of the matrix of system (5.5.2.2) is different from zero, i.e., Δ = det A ≠ 0, then the system admits a unique solution, which is expressed by

$$x_1 = \frac{\Delta_1}{\Delta}, \quad x_2 = \frac{\Delta_2}{\Delta}, \quad \ldots, \quad x_n = \frac{\Delta_n}{\Delta}, \qquad (5.5.2.3)$$

where Δ_k (k = 1, 2, ..., n) is the determinant of the matrix obtained from A by replacing its kth column with the column of free terms:

$$\Delta_k = \begin{vmatrix}
a_{11} & a_{12} & \cdots & b_1 & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & b_2 & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & b_n & \cdots & a_{nn}
\end{vmatrix}.$$

Example 1. Using the Cramer rule, let us find the solution of the system of linear equations

$$2x_1 + x_2 + 4x_3 = 16, \qquad 3x_1 + 2x_2 + x_3 = 10, \qquad x_1 + 3x_2 + 3x_3 = 16.$$

The determinant of its basic matrix is different from zero,

$$\Delta = \begin{vmatrix} 2 & 1 & 4 \\ 3 & 2 & 1 \\ 1 & 3 & 3 \end{vmatrix} = 26 \neq 0,$$

and we have

$$\Delta_1 = \begin{vmatrix} 16 & 1 & 4 \\ 10 & 2 & 1 \\ 16 & 3 & 3 \end{vmatrix} = 26, \quad
\Delta_2 = \begin{vmatrix} 2 & 16 & 4 \\ 3 & 10 & 1 \\ 1 & 16 & 3 \end{vmatrix} = 52, \quad
\Delta_3 = \begin{vmatrix} 2 & 1 & 16 \\ 3 & 2 & 10 \\ 1 & 3 & 16 \end{vmatrix} = 78.$$

Therefore, by the Cramer rule (5.5.2.3), the only solution of the system has the form

$$x_1 = \frac{\Delta_1}{\Delta} = \frac{26}{26} = 1, \qquad x_2 = \frac{\Delta_2}{\Delta} = \frac{52}{26} = 2, \qquad x_3 = \frac{\Delta_3}{\Delta} = \frac{78}{26} = 3.$$
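To make the rule concrete, here is a minimal Python sketch (using NumPy determinants) that reproduces Example 1; the function name and variable names are illustrative, not taken from the handbook.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by the Cramer rule (5.5.2.3); assumes det A != 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    delta = np.linalg.det(A)
    if np.isclose(delta, 0.0):
        raise ValueError("det A = 0: the Cramer rule does not apply")
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                      # replace the kth column with the free terms
        x[k] = np.linalg.det(Ak) / delta  # x_k = Delta_k / Delta
    return x

A = [[2, 1, 4], [3, 2, 1], [1, 3, 3]]
b = [16, 10, 16]
print(cramer_solve(A, b))   # -> [1. 2. 3.], as in Example 1
```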

3◦ Gaussian elimination of unknown quantities.

Two systems are said to be equivalent if their sets of solutions coincide.

The method of Gaussian elimination consists in the reduction of a given system to an equivalent system with an upper triangular basic matrix. The latter system can be easily solved. This reduction is carried out in finitely many steps. On every step, one performs an elementary transformation of the system (or the corresponding augmented matrix) and obtains an equivalent system. The elementary transformations are of the following three types:

1. Interchange of two equations (or the corresponding rows of the augmented matrix).

2. Multiplication of both sides of one equation (or the corresponding row of the augmented matrix) by a nonzero constant.

3. Adding to both sides of one equation both sides of another equation multiplied by a nonzero constant (adding to some row of the augmented matrix another row multiplied by a nonzero constant).


Suppose that det A ≠ 0. Then by consecutive elementary transformations, the augmented matrix of the system A₁ [see (5.5.1.4)] of size n×(n+1) can be reduced to the form

$$U_1 \equiv \begin{pmatrix}
1 & u_{12} & \cdots & u_{1n} & y_1 \\
0 & 1 & \cdots & u_{2n} & y_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & y_n
\end{pmatrix},$$

and one obtains an equivalent system with an upper triangular basic matrix,

$$\begin{aligned}
x_1 + u_{12}x_2 + u_{13}x_3 + \cdots + u_{1n}x_n &= y_1,\\
x_2 + u_{23}x_3 + \cdots + u_{2n}x_n &= y_2,\\
&\;\;\vdots\\
x_n &= y_n.
\end{aligned}$$

This system is solved by the so-called "backward substitution": inserting x_n = y_n (obtained from the last equation) into the preceding (n−1)st equation, one finds x_{n−1}. Then inserting the values obtained for x_n, x_{n−1} into the (n−2)nd equation, one finds x_{n−2}. Proceeding in this way, one finally finds x_1. This back substitution process is described by the formulas

$$x_k = y_k - \sum_{s=k+1}^{n} u_{ks} x_s \qquad (k = n-1,\ n-2,\ \ldots,\ 1).$$
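The reduction to the form U₁ followed by back substitution can be sketched in a few lines of Python; the version below additionally uses row interchanges (partial pivoting) so that a nonzero pivot is always available, and all names are illustrative rather than taken from the handbook.

```python
import numpy as np

def gauss_solve(A, b):
    """Reduce [A | b] to the form U1 (unit upper triangular), then back-substitute."""
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    n = len(M)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # partial pivoting: pick the largest pivot
        M[[k, p]] = M[[p, k]]                  # elementary transformation 1: row interchange
        M[k] /= M[k, k]                        # elementary transformation 2: scale the row
        for i in range(k + 1, n):
            M[i] -= M[i, k] * M[k]             # elementary transformation 3: add a multiple of a row
    # backward substitution: x_k = y_k - sum_{s>k} u_{ks} x_s
    x = M[:, -1].copy()
    for k in range(n - 2, -1, -1):
        x[k] -= M[k, k + 1:n] @ x[k + 1:]
    return x

A = [[2, 1, 4], [3, 2, 1], [1, 3, 3]]
b = [16, 10, 16]
print(gauss_solve(A, b))   # -> [1. 2. 3.] for the system of Example 1
```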

Suppose that det A = 0 and rank(A) = r, 0 < r < n. In this case, the system is either inconsistent (i.e., has no solutions) or has infinitely many solutions. By elementary transformations and, possibly, reindexing the unknown quantities (i.e., introducing new unknown quantities y_1 = x_{σ(1)}, ..., y_n = x_{σ(n)}, where σ(1), ..., σ(n) is a permutation of the indices 1, 2, ..., n), one obtains a system of the form (for the sake of brevity, we retain the notation x_j for the reindexed unknown quantities)

$$\begin{aligned}
c_{11}x_1 + \cdots + c_{1r}x_r + c_{1,r+1}x_{r+1} + \cdots + c_{1n}x_n &= d_1,\\
&\;\;\vdots\\
c_{rr}x_r + c_{r,r+1}x_{r+1} + \cdots + c_{rn}x_n &= d_r,\\
0 &= d_{r+1},\\
&\;\;\vdots\\
0 &= d_n,
\end{aligned}$$

where the matrix [c_{ij}] (i, j = 1, 2, ..., r) of size r×r is nondegenerate. If at least one of the right-hand sides d_{r+1}, ..., d_n is different from zero, then the system is inconsistent. If d_{r+1} = ... = d_n = 0, then the last n − r equations can be dropped and it remains to find all solutions of the first r equations. Transposing all terms containing the variables x_{r+1}, ..., x_n to the right-hand sides and regarding these variables as arbitrary free parameters, we obtain a linear system for the unknown quantities x_1, ..., x_r with the nondegenerate basic matrix [c_{ij}] (i, j = 1, 2, ..., r).

Example 2. Let us find a solution of the system from Example 1 by the Gaussian elimination method.

By elementary transformations of the augmented matrix, we obtain

$$\begin{pmatrix} 2 & 1 & 4 & 16 \\ 3 & 2 & 1 & 10 \\ 1 & 3 & 3 & 16 \end{pmatrix} \to
\begin{pmatrix} 1 & 1/2 & 2 & 8 \\ 0 & 1/2 & -5 & -14 \\ 0 & 5/2 & 1 & 8 \end{pmatrix} \to
\begin{pmatrix} 1 & 1/2 & 2 & 8 \\ 0 & 1 & -10 & -28 \\ 0 & 0 & 26 & 78 \end{pmatrix} \to
\begin{pmatrix} 1 & 1/2 & 2 & 8 \\ 0 & 1 & -10 & -28 \\ 0 & 0 & 1 & 3 \end{pmatrix}.$$


The transformed system has the form

$$x_1 + \tfrac{1}{2}x_2 + 2x_3 = 8, \qquad x_2 - 10x_3 = -28, \qquad x_3 = 3.$$

Hence, we find that

$$x_3 = 3, \qquad x_2 = -28 + 10x_3 = 2, \qquad x_1 = 8 - \tfrac{1}{2}x_2 - 2x_3 = 1.$$

4◦ Gauss-Jordan elimination of unknown quantities.

This method consists of applying elementary transformations for reducing a system with a nondegenerate basic matrix to an equivalent system with the identity matrix. On the kth step (k = 1, 2, ..., n), the rows of the augmented matrix A₁ obtained on the preceding step can be transformed as follows:

$$a_{kj}' = \frac{a_{kj}}{a_{kk}}, \qquad b_k' = \frac{b_k}{a_{kk}} \qquad (j = k, k+1, \ldots, n),$$

$$a_{ij}' = a_{ij} - a_{ik}\,\frac{a_{kj}}{a_{kk}}, \qquad b_i' = b_i - a_{ik}\,\frac{b_k}{a_{kk}} \qquad (i = 1, 2, \ldots, n,\ i \neq k;\ j = k, k+1, \ldots, n),$$

where the primed quantities are the new entries and the unprimed quantities are those obtained on the preceding step, provided that the diagonal element obtained on each step is not equal to zero. After n steps, the basic matrix is transformed to the identity matrix and the right-hand side turns into the desired solution.
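A compact Python sketch of one possible implementation of this Gauss-Jordan pass, reducing [A | b] to [I | x], is given below; it assumes the pivot obtained on each step is nonzero, and the names are illustrative.

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Reduce the augmented matrix [A | b] to [I | x] by Gauss-Jordan elimination."""
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    n = len(M)
    for k in range(n):
        if np.isclose(M[k, k], 0.0):
            raise ValueError("zero pivot: reindexing of the unknowns would be required")
        M[k] /= M[k, k]                 # a'_kj = a_kj / a_kk,  b'_k = b_k / a_kk
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]  # a'_ij = a_ij - a_ik a_kj / a_kk
    return M[:, -1]

print(gauss_jordan_solve([[2, 1, 4], [3, 2, 1], [1, 3, 3]], [16, 10, 16]))  # -> [1. 2. 3.]
```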

Example 3. For the linear system from Examples 1 and 2 we have

$$\begin{pmatrix} 2 & 1 & 4 & 16 \\ 3 & 2 & 1 & 10 \\ 1 & 3 & 3 & 16 \end{pmatrix} \to
\begin{pmatrix} 1 & 1/2 & 2 & 8 \\ 0 & 1/2 & -5 & -14 \\ 0 & 5/2 & 1 & 8 \end{pmatrix} \to
\begin{pmatrix} 1 & 0 & 7 & 22 \\ 0 & 1 & -10 & -28 \\ 0 & 0 & 26 & 78 \end{pmatrix} \to
\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 3 \end{pmatrix},$$

and therefore x_1 = 1, x_2 = 2, x_3 = 3.

The diagonal element obtained on some step of the above elimination procedure may happen to be equal to zero. In this case, the formulas become more complicated and reindexing of the unknown quantities may be required.

5◦ Method of LU-decomposition.

This method is based on the representation of the basic matrix A as the product of a lower triangular matrix L and an upper triangular matrix U, i.e., in the form A = LU. This factorization is called a triangular representation or the LU-representation of a matrix (see also Paragraph 5.2.3-1).

Given such an LU-representation of the matrix A, the system AX = B can be represented in the form LUX = B, and its solution can be obtained by solving the following two systems: LY = B, UX = Y. Due to the triangular structure of the matrices L = [l_ij] and U = [u_ij], these systems can be solved with the help of the formulas

$$y_i = \frac{1}{l_{ii}} \Bigl( b_i - \sum_{j=1}^{i-1} l_{ij} y_j \Bigr) \qquad (i = 1, 2, \ldots, n),$$

$$x_k = y_k - \sum_{s=k+1}^{n} u_{ks} x_s \qquad (k = n,\ n-1,\ \ldots,\ 1),$$

provided that l_ii ≠ 0.


There exist various methods for the construction of LU-decompositions. In particular, if the following conditions hold:

$$a_{11} \neq 0, \qquad \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \neq 0, \qquad \ldots, \qquad \det A \neq 0,$$

then the elements of the desired matrices L and U can be calculated by the formulas

$$l_{ij} = \begin{cases}
a_{ij} - \displaystyle\sum_{s=1}^{j-1} l_{is} u_{sj} & \text{for } i \geq j, \\[2mm]
0 & \text{for } i < j,
\end{cases} \qquad
u_{ij} = \frac{1}{l_{ii}} \Bigl( a_{ij} - \sum_{s=1}^{i-1} l_{is} u_{sj} \Bigr) \quad \text{for } i < j,$$

with u_{ii} = 1 and u_{ij} = 0 for i > j (so that U has a unit diagonal, in agreement with the back substitution formula above).
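Below is a minimal Python sketch of this construction (the scheme above, with unit diagonal in U) together with the substitutions LY = B and UX = Y; it assumes the leading minors of A are nonzero, and the function and variable names are illustrative.

```python
import numpy as np

def lu_crout(A):
    """Build A = L U with L lower triangular and U unit upper triangular."""
    A = np.asarray(A, float)
    n = len(A)
    L, U = np.zeros((n, n)), np.eye(n)
    for j in range(n):
        for i in range(j, n):                      # l_ij = a_ij - sum_{s<j} l_is u_sj,  i >= j
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        for k in range(j + 1, n):                  # u_jk = (a_jk - sum_{s<j} l_js u_sk) / l_jj
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    return L, U

def lu_solve(L, U, b):
    """Solve L y = b (forward substitution), then U x = y (backward, u_kk = 1)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = y.copy()
    for k in range(n - 2, -1, -1):
        x[k] -= U[k, k + 1:] @ x[k + 1:]
    return x

L, U = lu_crout([[2, 1, 4], [3, 2, 1], [1, 3, 3]])
print(lu_solve(L, U, [16, 10, 16]))   # -> [1. 2. 3.]
```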

5.5.2-3 Solutions of a square system with different right-hand sides.

1◦ One often has to solve a system of linear equations with a given basic matrix A and different right-hand sides. For instance, consider the systems AX(1) = B(1), ..., AX(m) = B(m). These m systems can be regarded as a single matrix equation AX = B, where X and B are matrices of size n×m whose columns coincide with X(j) and B(j) (j = 1, 2, ..., m).

Example 5. Suppose that we have to solve the equation AX = B with the given basic matrix A and different right-hand sides:

$$A = \begin{pmatrix} 1 & 2 & -3 \\ 3 & -2 & 1 \\ -2 & 1 & 3 \end{pmatrix}, \qquad
B^{(1)} = \begin{pmatrix} 7 \\ 1 \\ 5 \end{pmatrix}, \qquad
B^{(2)} = \begin{pmatrix} 10 \\ 6 \\ -5 \end{pmatrix}.$$

Using the Gauss-Jordan procedure, we obtain

$$\begin{pmatrix} 1 & 2 & -3 & 7 & 10 \\ 3 & -2 & 1 & 1 & 6 \\ -2 & 1 & 3 & 5 & -5 \end{pmatrix} \to
\begin{pmatrix} 1 & 2 & -3 & 7 & 10 \\ 0 & -8 & 10 & -20 & -24 \\ 0 & 5 & -3 & 19 & 15 \end{pmatrix} \to
\begin{pmatrix} 1 & 0 & -1/2 & 2 & 4 \\ 0 & 1 & -5/4 & 5/2 & 3 \\ 0 & 0 & 13/4 & 13/2 & 0 \end{pmatrix} \to
\begin{pmatrix} 1 & 0 & 0 & 3 & 4 \\ 0 & 1 & 0 & 5 & 3 \\ 0 & 0 & 1 & 2 & 0 \end{pmatrix}.$$

Therefore,

$$X^{(1)} = \begin{pmatrix} 3 \\ 5 \\ 2 \end{pmatrix}, \qquad X^{(2)} = \begin{pmatrix} 4 \\ 3 \\ 0 \end{pmatrix}.$$
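The same idea in a short Python sketch: stack the right-hand sides of Example 5 as columns of B and solve the matrix equation AX = B once (here simply with numpy.linalg.solve rather than the hand elimination above).

```python
import numpy as np

A = np.array([[1, 2, -3],
              [3, -2, 1],
              [-2, 1, 3]], dtype=float)
B = np.column_stack([[7, 1, 5], [10, 6, -5]])   # columns B(1), B(2)

X = np.linalg.solve(A, B)                       # columns X(1), X(2)
print(X)   # -> [[3. 4.]
           #     [5. 3.]
           #     [2. 0.]]
```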

2◦ If B = I, where I is the identity matrix of size n×n, then the solution of the matrix equation AX = I coincides with the matrix X = A^{-1}.

Example 6. Find the inverse of the matrix

$$A = \begin{pmatrix} 2 & 1 & 0 \\ -3 & 0 & 7 \\ -5 & 4 & -1 \end{pmatrix}.$$

Let us transform the augmented matrix of the system, using the Gauss-Jordan method. We get

$$\begin{pmatrix} 2 & 1 & 0 & 1 & 0 & 0 \\ -3 & 0 & 7 & 0 & 1 & 0 \\ -5 & 4 & -1 & 0 & 0 & 1 \end{pmatrix} \to
\begin{pmatrix} 1 & 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 3/2 & 7 & 3/2 & 1 & 0 \\ 0 & 13/2 & -1 & 5/2 & 0 & 1 \end{pmatrix} \to
\begin{pmatrix} 1 & 0 & -7/3 & 0 & -1/3 & 0 \\ 0 & 1 & 14/3 & 1 & 2/3 & 0 \\ 0 & 0 & -94/3 & -4 & -13/3 & 1 \end{pmatrix} \to
\begin{pmatrix} 1 & 0 & 0 & 14/47 & -1/94 & -7/94 \\ 0 & 1 & 0 & 19/47 & 1/47 & 7/47 \\ 0 & 0 & 1 & 6/47 & 13/94 & -3/94 \end{pmatrix},$$

and the last three columns give the inverse matrix,

$$A^{-1} = \begin{pmatrix} 14/47 & -1/94 & -7/94 \\ 19/47 & 1/47 & 7/47 \\ 6/47 & 13/94 & -3/94 \end{pmatrix}.$$
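As a quick numerical cross-check of the fractions obtained above (not part of the handbook's procedure), one can invert the matrix with NumPy and display the entries as fractions:

```python
import numpy as np
from fractions import Fraction

A = np.array([[2, 1, 0], [-3, 0, 7], [-5, 4, -1]], dtype=float)
Ainv = np.linalg.inv(A)
# show the entries as exact-looking fractions for comparison with the text
print([[str(Fraction(x).limit_denominator(1000)) for x in row] for row in Ainv])
# -> [['14/47', '-1/94', '-7/94'], ['19/47', '1/47', '7/47'], ['6/47', '13/94', '-3/94']]
```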


5.5.2-4 General system of m linear equations with n unknown quantities.

Suppose that system (5.5.1.1) is consistent and its basic matrix A has rank r. First, in the matrix A, one finds a submatrix of size r×r with nonzero rth-order determinant and drops the m − r equations whose coefficients do not belong to this submatrix (the dropped equations follow from the remaining ones and can, therefore, be neglected). In the remaining equations, the n − r unknown quantities (free unknown quantities) that are not involved in the said submatrix should be transferred to the right-hand sides. Thus, one obtains a system of r equations with r unknown quantities, which can be solved by any of the methods described in Paragraph 5.5.2-2.

Remark. If the rank r of the basic matrix and the rank of the augmented matrix of system (5.5.1.1) are equal to the number of the unknown quantities n, then the system has a unique solution.
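The rank comparisons described here and in the Remark can be sketched directly in Python with numpy.linalg.matrix_rank; the function below and its names are illustrative, not from the handbook.

```python
import numpy as np

def classify_system(A, b):
    """Classify A x = b by comparing rank A, rank [A | b], and the number of unknowns."""
    A = np.asarray(A, float)
    aug = np.hstack([A, np.asarray(b, float).reshape(-1, 1)])
    r, r_aug, n = np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug), A.shape[1]
    if r < r_aug:
        return "inconsistent"
    return "unique solution" if r == n else f"infinitely many solutions ({n - r} free unknowns)"

print(classify_system([[2, 1, 4], [3, 2, 1], [1, 3, 3]], [16, 10, 16]))  # unique solution
print(classify_system([[1, 2], [2, 4]], [1, 3]))                         # inconsistent
```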

5.5.2-5 Solutions of homogeneous and corresponding nonhomogeneous systems.

1◦ Suppose that the basic matrix A of the homogeneous system (5.5.1.3) has rank r and its submatrix in the left top corner, B = [a_ij] (i, j = 1, ..., r), is nondegenerate. Let M = det B ≠ 0 be the determinant of that submatrix. Any solution x_1, ..., x_n has n − r free components x_{r+1}, ..., x_n, and its first components x_1, ..., x_r are expressed via the free components as follows:

$$\begin{aligned}
x_1 &= -\frac{1}{M}\bigl[x_{r+1} M_1(a_{i,r+1}) + x_{r+2} M_1(a_{i,r+2}) + \cdots + x_n M_1(a_{in})\bigr],\\
x_2 &= -\frac{1}{M}\bigl[x_{r+1} M_2(a_{i,r+1}) + x_{r+2} M_2(a_{i,r+2}) + \cdots + x_n M_2(a_{in})\bigr],\\
&\;\;\vdots\\
x_r &= -\frac{1}{M}\bigl[x_{r+1} M_r(a_{i,r+1}) + x_{r+2} M_r(a_{i,r+2}) + \cdots + x_n M_r(a_{in})\bigr],
\end{aligned} \qquad (5.5.2.4)$$

where M_j(a_{ik}) is the determinant of the matrix obtained from B by replacing its jth column with the column whose components are a_{1k}, a_{2k}, ..., a_{rk}:

$$M_j(a_{ik}) = \begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1k} & \cdots & a_{1r} \\
a_{21} & a_{22} & \cdots & a_{2k} & \cdots & a_{2r} \\
\vdots & \vdots & & \vdots & & \vdots \\
a_{r1} & a_{r2} & \cdots & a_{rk} & \cdots & a_{rr}
\end{vmatrix}$$

(the column a_{1k}, ..., a_{rk} occupies the jth position).

2◦ Using (5.5.2.4), we obtain the following n − r linearly independent solutions of the original system (5.5.1.3) by setting the free components (x_{r+1}, ..., x_n) equal to (1, 0, ..., 0), (0, 1, ..., 0), ..., (0, 0, ..., 1), respectively:

$$\begin{aligned}
X_1 &= \Bigl(-\tfrac{M_1(a_{i,r+1})}{M},\ -\tfrac{M_2(a_{i,r+1})}{M},\ \ldots,\ -\tfrac{M_r(a_{i,r+1})}{M},\ 1,\ 0,\ \ldots,\ 0\Bigr)^T,\\
X_2 &= \Bigl(-\tfrac{M_1(a_{i,r+2})}{M},\ -\tfrac{M_2(a_{i,r+2})}{M},\ \ldots,\ -\tfrac{M_r(a_{i,r+2})}{M},\ 0,\ 1,\ \ldots,\ 0\Bigr)^T,\\
&\;\;\vdots\\
X_{n-r} &= \Bigl(-\tfrac{M_1(a_{in})}{M},\ -\tfrac{M_2(a_{in})}{M},\ \ldots,\ -\tfrac{M_r(a_{in})}{M},\ 0,\ 0,\ \ldots,\ 1\Bigr)^T.
\end{aligned}$$

Any solution of system (5.5.1.3) can be represented as their linear combination

$$X = C_1 X_1 + C_2 X_2 + \cdots + C_{n-r} X_{n-r}, \qquad (5.5.2.5)$$

where C_1, C_2, ..., C_{n−r} are arbitrary constants. This formula gives the general solution of the homogeneous system.
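For a numerical illustration of (5.5.2.5), the sketch below computes a basis X_1, ..., X_{n−r} of solutions of a homogeneous system via the SVD (a standard null-space construction rather than the determinant formulas above); the matrix used is a hypothetical example.

```python
import numpy as np

def null_space_basis(A, tol=1e-12):
    """Return columns spanning {x : A x = 0}, i.e., n - r independent solutions."""
    A = np.asarray(A, float)
    _, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol * s.max())) if s.size else 0   # numerical rank
    return Vt[r:].T                                        # one column per free parameter

# hypothetical homogeneous system with rank 2 and 4 unknowns -> 2 basis solutions
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 1.0, 3.0]])
N = null_space_basis(A)
print(N.shape)                    # (4, 2): n - r = 2 independent solutions
print(np.allclose(A @ N, 0.0))    # True: every column solves A x = 0
```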


3◦ Relations between solutions of the nonhomogeneous system (5.5.1.1) and solutions of the corresponding homogeneous system (5.5.1.3):

1. The sum of any solution of the nonhomogeneous system (5.5.1.1) and any solution of the corresponding homogeneous system (5.5.1.3) is a solution of system (5.5.1.1).

2. The difference of any two solutions of the nonhomogeneous system (5.5.1.1) is a solution of the homogeneous system (5.5.1.3).

3. The sum of a particular solution X_0 of the nonhomogeneous system (5.5.1.1) and the general solution (5.5.2.5) of the corresponding homogeneous system (5.5.1.3) yields the general solution X of the nonhomogeneous system (5.5.1.1).
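Property 3 translates into a simple numerical recipe, sketched below with a hypothetical consistent system: take any particular solution of A x = b (here from numpy.linalg.lstsq) and add an arbitrary combination of null-space basis vectors.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 1.0, 3.0]])
b = np.array([10.0, 10.0])

# a particular solution X0 of A x = b (least squares gives an exact one here, since the system is consistent)
x0, *_ = np.linalg.lstsq(A, b, rcond=None)

# general solution of A x = 0: columns of N span the null space (SVD construction)
_, s, Vt = np.linalg.svd(A)
N = Vt[np.sum(s > 1e-12):].T

C = np.array([2.0, -1.0])        # arbitrary constants C1, C2 as in (5.5.2.5)
x = x0 + N @ C                   # by property 3, again a solution of A x = b
print(np.allclose(A @ x, b))     # True
```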

5.6 Linear Operators

5.6.1 Notion of a Linear Operator. Its Properties

5.6.1-1 Definition of a linear operator.

An operator A acting from a linear space V of dimension n to a linear space W of dimension m is a mapping A: V → W that establishes a correspondence between each element x of the space V and some element y of the space W. This fact is denoted by y = Ax or y = A(x).

An operator A: V → W is said to be linear if for any elements x_1 and x_2 of the space V and any scalar λ, the following relations hold:

A(x_1 + x_2) = Ax_1 + Ax_2 (additivity of the operator),

A(λx_1) = λAx_1 (homogeneity of the operator).

A linear operator A: V → W is said to be bounded if it has a finite norm, which is defined as follows:

$$\|A\| = \sup_{x \neq 0} \frac{\|Ax\|}{\|x\|} = \sup_{\|x\|=1} \|Ax\| \geq 0.$$

Remark. If A is a linear operator from a Hilbert space V into itself, then

$$\|A\| = \sup_{x \in V,\ x \neq 0} \frac{\|Ax\|}{\|x\|} = \sup_{\|x\|=1} \|Ax\| = \sup_{x,\,y \neq 0} \frac{|(x, Ay)|}{\|x\|\,\|y\|} = \sup_{\|x\|=\|y\|=1} |(x, Ay)|.$$

THEOREM. Any linear operator in a finite-dimensional normed space is bounded.

The set of all linear operators A: V → W is denoted by L(V, W).

A linear operator O in L(V, W) is called the zero operator if it maps any element x of V to the zero element of the space W: Ox = 0.

A linear operator A in L(V, V) is also called a linear transformation of the space V.

A linear operator I in L(V, V) is called the identity operator if it maps each element x of V into itself: Ix = x.
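In a fixed basis of a finite-dimensional space, a linear operator is represented by a matrix and applying the operator is a matrix-vector product; the short sketch below (illustrative matrix and vectors) checks additivity and homogeneity numerically and evaluates the norm for the Euclidean case, where the operator norm equals the largest singular value of the matrix.

```python
import numpy as np

# matrix representation of an operator A: R^2 -> R^2 in the standard basis (illustrative)
A = np.array([[2.0, 1.0], [0.0, 3.0]])

x1, x2, lam = np.array([1.0, -2.0]), np.array([0.5, 4.0]), 2.5
print(np.allclose(A @ (x1 + x2), A @ x1 + A @ x2))   # additivity
print(np.allclose(A @ (lam * x1), lam * (A @ x1)))   # homogeneity

# operator norm sup ||Ax||/||x|| for the Euclidean norm = largest singular value
print(np.linalg.norm(A, 2), np.linalg.svd(A, compute_uv=False)[0])
```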

5.6.1-2 Basic operations with linear operators.

The sum of two linear operators A and B in L(V, W) is a linear operator denoted by A + B and defined by

(A + B)x = Ax + Bx for any x ∈ V.

The product of a scalar λ and a linear operator A in L(V, W) is a linear operator denoted by λA and defined by

(λA)x = λAx for any x ∈ V.


The opposite operator for an operator A ∈ L(V, W) is an operator denoted by −A and defined by

−A = (−1)A.

The product of two linear operators A and B in L(V, V) is a linear operator denoted by AB and defined by

(AB)x = A(Bx) for any x ∈ V.

Properties of linear operators in L(V, V):

1. (AB)C = A(BC) (associativity of the product of three operators),

2. λ(AB) = (λA)B (associativity of multiplication of a scalar and two operators),

3. (A + B)C = AC + BC (distributivity with respect to the sum of operators),

where λ is a scalar; A, B, and C are linear operators in L(V, V).

Remark. Property 1 allows us to define the product A_1 A_2 ··· A_k of finitely many operators in L(V, V) and the kth power of an operator A,

$$A^k = \underbrace{AA \cdots A}_{k \text{ times}}.$$

The following relations hold:

$$A^{p+q} = A^p A^q, \qquad (A^p)^q = A^{pq}. \qquad (5.6.1.1)$$
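Since composition of operators corresponds to multiplication of their matrices, relations (5.6.1.1) can be checked numerically; a small illustrative sketch with numpy.linalg.matrix_power:

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])   # illustrative operator matrix
p, q = 3, 4

Ap = np.linalg.matrix_power(A, p)
Aq = np.linalg.matrix_power(A, q)
print(np.allclose(np.linalg.matrix_power(A, p + q), Ap @ Aq))        # A^(p+q) = A^p A^q
print(np.allclose(np.linalg.matrix_power(Ap, q),
                  np.linalg.matrix_power(A, p * q)))                  # (A^p)^q = A^(pq)
```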

5.6.1-3 Inverse operators.

A linear operator B is called the inverse of an operator A in L(V, V) if AB = BA = I. The inverse operator is denoted by B = A^{-1}. If the inverse operator exists, the operator A is said to be invertible or nondegenerate.

Remark. If A is an invertible operator, then A^{-k} = (A^{-1})^k = (A^k)^{-1} and relations (5.6.1.1) still hold.

A linear operator A from V to W is said to be injective if it maps any two different elements of V into different elements of W, i.e., for x_1 ≠ x_2, we have Ax_1 ≠ Ax_2.

If A is an injective linear operator from V to V, then each element y ∈ V is an image of some element x ∈ V: y = Ax.

THEOREM. A linear operator A: V → V is invertible if and only if it is injective.
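In the matrix representation, the inverse operator is the inverse matrix, which exists exactly when det A ≠ 0; a brief illustrative check of AB = BA = I:

```python
import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])   # det A = 1, so the operator is invertible
B = np.linalg.inv(A)                     # matrix of the inverse operator
I = np.eye(2)
print(np.allclose(A @ B, I) and np.allclose(B @ A, I))   # True: AB = BA = I
```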

5.6.1-4 Kernel, range, and rank of a linear operator.

The kernel of a linear operator A: V → V is the set of all x in V such that Ax = 0. The kernel of an operator A is denoted by ker A and is a linear subspace of V.

The range of a linear operator A: V → V is the set of all y in V such that y = Ax for some x in V. The range of a linear operator A is denoted by im A and is a subspace of V.

Properties of the kernel, the range, and their dimensions:

1. For a linear operator A: V → V in an n-dimensional space V, the following relation holds: dim(im A) + dim(ker A) = n.

2. Let V_1 and V_2 be two subspaces of a linear space V with dim V_1 + dim V_2 = dim V. Then there exists a linear operator A: V → V such that V_1 = im A and V_2 = ker A.

A subspace V_1 of the space V is called an invariant subspace of a linear operator A: V → V if for any x in V_1, the element Ax also belongs to V_1. A linear operator A: V → V is said to be reducible if V can be represented as a direct sum V = V_1 ⊕ ··· ⊕ V_N of two or more invariant subspaces V_1, ..., V_N of the operator A, where N is a natural number.

Example 1. ker A and im A are invariant subspaces of any linear operator A: V → V.
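A numerical illustration of property 1 (the rank-nullity relation), with a hypothetical rank-deficient matrix playing the role of the operator: the matrix rank gives dim(im A) and the null-space dimension gives dim(ker A).

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])     # hypothetical operator on V = R^3 (rank 2)

n = A.shape[0]
dim_im = np.linalg.matrix_rank(A)                 # dim(im A)
s = np.linalg.svd(A, compute_uv=False)
dim_ker = int(np.sum(s <= 1e-12 * s.max()))       # dim(ker A), computed numerically
print(dim_im, dim_ker, dim_im + dim_ker == n)     # 2 1 True
```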
