
DOCUMENT INFORMATION

Basic information

Title: Answers to Selected Problems in Multivariable Calculus with Linear Algebra and Series
Authors: William F. Trench, Bernard Kolman
Institution: Drexel University
Subject: Multivariable Calculus with Linear Algebra
Document type: answer key
Pages: 94
File size: 1.99 MB


Contents


Page 1

Answers to Selected Problems in Multivariable Calculus with Linear Algebra and Series

WILLIAM F. TRENCH AND BERNARD KOLMAN, Drexel University

Page 3

= r[a_ij] + r[b_ij].

Write (a) S = 1 + 2 + ··· + n and (b) S = n + (n − 1) + ··· + 1. Then add (a) and (b) to obtain
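Adding these termwise pairs gives the standard conclusion:

2S = (1 + n) + (2 + (n-1)) + \cdots + (n + 1) = n(n + 1), \qquad S = \frac{n(n+1)}{2}.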

Page 4

T-10 B = (A + A^T)/2, C = (A − A^T)/2. For uniqueness, let

(i) A = B_1 + C_1, where B_1^T = B_1 and C_1^T = −C_1.

Then (ii) A^T = B_1^T + C_1^T = B_1 − C_1. Adding (i) and (ii) yields B_1 = (A + A^T)/2 = B; subtracting (ii) from (i) yields C_1 = (A − A^T)/2 = C.
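A concrete 2 × 2 illustration of this decomposition (my own example, not from the text):

A = \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix}, \qquad B = \tfrac{1}{2}(A + A^T) = \begin{pmatrix} 1 & 3 \\ 3 & 3 \end{pmatrix}, \qquad C = \tfrac{1}{2}(A - A^T) = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},

with B symmetric, C skew-symmetric, and B + C = A.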

Page 5

AI_n = A.

(a) Suppose the i-th row of A consists entirely of zeros. Let C = AB; then

k ≥ i for all terms in the last sum; hence b_kj = 0 for all these terms. Therefore, d_ij = 0 if i > j.

Page 6

(i) a ≠ ±3; (ii) a = −3; (iii) a = 3

(i) a ≠ ±1; (ii) a = −1; (iii) a = 1

T-1 Multiplication of a row of [A | Y] by a nonzero constant corresponds to multiplication of an equation in the system AX = Y by the same constant. This does not change the solutions of the system. A similar argument applies to the other elementary row operations.
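A small illustration (my own example): multiplying the first row of an augmented matrix by 3 rescales the corresponding equation without changing its solution set,

\left[\begin{array}{cc|c} 1 & 2 & 5 \\ 4 & 1 & 6 \end{array}\right] \longrightarrow \left[\begin{array}{cc|c} 3 & 6 & 15 \\ 4 & 1 & 6 \end{array}\right],

since x + 2y = 5 and 3x + 6y = 15 have exactly the same solutions.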

2

4

6

Page 7

A row in the augmented matrix of a system in n unknowns, with zeros in the first n columns and a 1 in the (n + 1)-st, corresponds to the equation 0·x_1 + ··· + 0·x_n = 1, which has no solution.

For the converse, let [A | Y] be row equivalent to [B | Z], which is in row echelon form. Since B has no row with a leading 1 in the (n + 1)-st column, it follows (with the notation of Def. 2.3) that j_1 < j_2 < ··· < j_k ≤ n. Hence, for i = 1, 2, ..., k, the i-th equation of BX = Z can be solved for x_{j_i} in terms of the remaining n − k unknowns, which can be specified arbitrarily.

By definition of the elementary row operations, the system BX = 0 is obtained by performing on AX = 0 operations which do not change the solutions of the latter.

Suppose ad − bc ≠ 0, b ≠ 0, d ≠ 0. Then the following matrices are row equivalent:
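One chain of row-equivalent matrices that works under these hypotheses (a reconstruction; the book's own chain may differ):

\begin{pmatrix} a & b \\ c & d \end{pmatrix} \to \begin{pmatrix} ad & bd \\ cb & db \end{pmatrix} \to \begin{pmatrix} ad-bc & 0 \\ cb & db \end{pmatrix} \to \begin{pmatrix} 1 & 0 \\ cb & db \end{pmatrix} \to \begin{pmatrix} 1 & 0 \\ 0 & db \end{pmatrix} \to I_2,

using, in order: multiply row 1 by d and row 2 by b; subtract row 2 from row 1; divide row 1 by ad − bc; subtract cb times row 1 from row 2; divide row 2 by db.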

Suppose ad − bc ≠ 0 and b = 0. Then d ≠ 0 and a ≠ 0. The following matrices are row equivalent:

T-2

T-3

T-4

Page 8

A similar argument disposes of the case where d = 0. If ad − bc = 0, then A is row equivalent to

Page 9

T-6 … the only 2 × 2 matrices, other than I_2, in reduced row echelon form. Since BX = 0 and CX = 0 have nontrivial solutions, so does AX = 0, again because the systems are row equivalent (Thm. 2.2).

Page 11

Suppose A is m × n and B is p × q. Since AB is defined, (1) n = p, and AB is m × q. Since BA is defined, (2) m = q, and BA is p × n. Since AB = BA, (3) m = p and (4) q = n. The conclusion now follows from (1), (2), (3), and (4).

A^T (A^{-1})^T = (A^{-1} A)^T = I^T = I and (A^{-1})^T A^T = (A A^{-1})^T = I^T = I; hence (A^T)^{-1} = (A^{-1})^T. If A = A^T, then A^{-1} = (A^T)^{-1} = (A^{-1})^T; hence A^{-1} is symmetric.

Page 12

Let E = [e_ij], EA = [ā_ij], and I = [δ_ij], where δ_ij = 0 if i ≠ j and δ_ij = 1 if i = j.

(i) Let the row operation be multiplication of the r-th row by c ≠ 0. Then e_ij = δ_ij if i ≠ r and e_rj = c δ_rj. Now ā_ij = Σ_k δ_ik a_kj = a_ij if i ≠ r, and ā_rj = c Σ_k δ_rk a_kj = c a_rj. Hence B = EA.

(ii) Let the row operation be the interchange of rows r and s. Then e_ij = δ_ij if i ≠ r and i ≠ s, e_rj = δ_sj, and e_sj = δ_rj. Hence B = EA.

(iii) Let the operation be addition of c times the r-th row to the s-th row (r ≠ s). Then e_ij = δ_ij + c δ_is δ_rj and

ā_ij = Σ_k (δ_ik + c δ_is δ_rk) a_kj

T-3

Page 13

= Σ_k δ_ik a_kj + c δ_is Σ_k δ_rk a_kj = a_ij + c δ_is a_rj;

hence B = EA.

Since e_ij = δ_ij + c δ_is δ_rj and f_ij = δ_ij − c δ_is δ_rj,

(EF)_ij = Σ_k e_ik f_kj = Σ_k (δ_ik + c δ_is δ_rk)(δ_kj − c δ_ks δ_rj).
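For concreteness, 3 × 3 instances of the three types of elementary matrices described in (i)–(iii) above (my own illustration, not from the text):

E_{(i)} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c & 0 \\ 0 & 0 & 1 \end{pmatrix} \ (\text{multiply row 2 by } c), \quad
E_{(ii)} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \ (\text{interchange rows 1 and 3}), \quad
E_{(iii)} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ c & 0 & 1 \end{pmatrix} \ (\text{add } c \text{ times row 1 to row 3});

in each case EA performs the stated operation on the rows of A.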

In the notation of Definition 2.3, 1 ≤ j_1 < j_2 < ··· < j_n ≤ n, which implies that j_1 = 1, j_2 = 2, ..., j_n = n. Hence a_ii = 1 for i = 1, ..., n. Since A is in reduced row echelon form, it follows that a_ij = 0 if i ≠ j; hence A = I_n.

T-4

T-5

Page 14

T-6

T-7

T-8

Suppose a_rj = 0, 1 ≤ j ≤ n. Let B be an arbitrary n × n matrix and C = AB. Then

c_rj = Σ_{k=1}^{n} a_rk b_kj = Σ_{k=1}^{n} 0·b_kj = 0,  1 ≤ j ≤ n;

hence C has a row of zeros. Therefore AB ≠ I_n for any B, which means that A is singular.
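A minimal 2 × 2 illustration (my own example): if the second row of A is zero, the second row of AB is zero for every B, so AB can never equal I_2:

A = \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}, \qquad AB = \begin{pmatrix} b_{11} + 2b_{21} & b_{12} + 2b_{22} \\ 0 & 0 \end{pmatrix} \ne I_2.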

Let A be row equivalent to B, which is in reduced row echelon form. Then B = E_k ··· E_1 A, where E_1, ..., E_k are elementary matrices. Consequently, B is nonsingular if and only if A is nonsingular. From Exercise T-5, it now follows that A is nonsingular if and only if B = I_n.

The conclusion follows from Exercise T-4, Section 1.2, and Corollary 3.2.

T-9

\frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}

The result follows from Theorems 3.4 and 3.6.
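A quick check that the displayed matrix is indeed the inverse when ad − bc ≠ 0:

\frac{1}{ad-bc}\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = \frac{1}{ad-bc}\begin{pmatrix} ad-bc & 0 \\ 0 & ad-bc \end{pmatrix} = I_2.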

Page 15

28 (a)

1 2 1

3" - 3" 3

1 1 2(b) - 3 - 3 3

I I I

3 3-"6

30 27

Page 16

Interchange of adjacent elements j_r and j_{r+1} increases the number of inversions by 1 if j_r < j_{r+1}, or decreases it by 1 if j_r > j_{r+1}. Now suppose s − r ≥ 2. An interchange of j_r and j_s can be effected by successively interchanging j_r with j_{r+1}, ..., j_s (s − r interchanges of adjacent elements) and then interchanging j_s with j_{s−1}, ..., j_{r+1} (s − r − 1 more), for a total of 2(s − r) − 1 adjacent interchanges. Since this is an odd integer, the conclusion follows.
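A quick numerical check (my own example): the permutation 1 3 2 4 has one inversion; interchanging the elements 1 and 4 gives 4 3 2 1, which has six inversions. The change is five, an odd number, so the parity changes.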

Let D′ be the determinant obtained by interchanging two columns of D. Let D^T and D′^T be obtained by interchanging the rows and columns of D and D′, respectively. Then D′^T is obtained by interchanging two rows of D^T; hence D′^T = −D^T (Thm. 4.2). However, by Thm. 4.1, D^T = D and D′^T = D′. Therefore D′ = −D.

Use Thm. 4.4.

Similar to the proof given for (a) and (c).

The right side of the equation

Page 17

T-7 Let the elementary operation be:

(a) interchange of two rows. Then det E = −1 and det EB = −det B; hence det EB = (det E)(det B).

(b) multiplication of a row by a constant c ≠ 0. Then det E = c and det EB = c det B; hence det EB = (det E)(det B).

(c) addition of a multiple of a row to another row. Then det E = 1 and det EB = det B; hence det EB = (det E)(det B).
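Concrete 2 × 2 instances of the three determinants (my own examples):

\det\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = -1, \qquad \det\begin{pmatrix} c & 0 \\ 0 & 1 \end{pmatrix} = c, \qquad \det\begin{pmatrix} 1 & 0 \\ c & 1 \end{pmatrix} = 1.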

T-8 det(E_1 E_2 ··· E_k B) = (det E_1) det(E_2 ··· E_k B)

= (det E_1)(det E_2) det(E_3 ··· E_k B)

= ··· = (det E_1)(det E_2) ··· (det E_k) det B

= (det E_1 E_2)(det E_3) ··· (det E_k) det B = ···

Page 18

= det(E_1 E_2 ··· E_k) det B.

T-9 From Eq. (23), adj A = D A^{-1}; the result follows from Exercise T-3.

T-10 If det A ≠ 0, then A is nonsingular (Thm. 4.10); hence AX = 0 has only the trivial solution (Cor. 3.3, Sect. 1.3). If det A = 0, then A is singular (Thm. 4.10); hence A = E_1 ··· E_k B, where B is in reduced row echelon form and has a row of zeros (Cor. 3.2, Sect. 1.3). The system BX = 0 has the same solutions as the system B_1X = 0, where B_1 is obtained by omitting the last (zero) row of B. Since B_1X = 0 is a homogeneous system of n − 1 equations in n unknowns, it has a solution X ≠ 0 (Thm. 2.6, Sect. 1.2). Since AX = 0 and BX = 0 have the same solutions, AX = 0 also has the nontrivial solution X.

Page 19

T-18 If A = A^{-1}, then A·A = I; hence (det A)^2 = 1, so det A = ±1.

\begin{pmatrix}
\dfrac{yz}{(x-y)(x-z)} & \dfrac{-(y+z)}{(x-y)(x-z)} & \dfrac{1}{(x-y)(x-z)} \\
\dfrac{zx}{(y-z)(y-x)} & \dfrac{-(z+x)}{(y-z)(y-x)} & \dfrac{1}{(y-z)(y-x)} \\
\dfrac{xy}{(z-x)(z-y)} & \dfrac{-(x+y)}{(z-x)(z-y)} & \dfrac{1}{(z-x)(z-y)}
\end{pmatrix}

T-20 Apply Definition 4.3 to A − tI.

T-21 Let A be upper triangular; then a_ij = 0 if j < i.

T-1 a·0 = a(0 + 0) = a·0 + a·0; now add −(a·0) to the first and last members to obtain 0 = a·0.

T-2 0 = 0·U = [1 + (−1)]U = 1·U + (−1)U = U + (−1)U; hence (−1)U = −U.

Page 20

T-3 Add −U to both sides and use the associative law.

T-4 Write (a − b)U = 0 and use (vi) of Thm. 1.1.

T-7 Suppose T is a subspace of S, and U is in T. Then, by (b) of Def. 1.2, (−1)U is in T, and then, by (a) of Def. 1.2, U + (−1)U = 0 is in T. The remaining properties of Def. 1.1 hold in T, since they hold in S. Conversely, if T is a vector space, then (a) and (b) hold, by Def. 1.2.

T-8 Use the following properties of continuous functions: (a) cf is continuous if c is constant and f is continuous; (b) f + g is continuous if f and g are.

T-9 In T-8, replace "continuous" by "n-times differentiable."

Page 22

As to spanning S[X_1, X_2, X_3], it suffices to show that each vector is a linear combination of the other two: X_1 = X_2 + X_3, X_2 = X_1 − X_3, X_3 = X_1 − X_2.

T-1

T-2

Page 23

T-3 Let T = {X_1, ..., X_m} be a linearly independent set. If X is in S and X is not a linear combination of X_1, ..., X_m, then T′ = {X_1, ..., X_m, X} is linearly independent, which is a contradiction, since T′ has more than m elements. Thus T spans S; since T is linearly independent, it is a basis for S.

T-4 Let dim S = dim T = n. Let {X_1, ..., X_n} be a basis for T. Then {X_1, ..., X_n} is also a basis for S, by Ex. T-3. Hence S = T.

T-5 If T = {U_1, ..., U_n} is not linearly independent, then one of the U_i's is a linear combination of the others. Remove this vector from T to obtain T′. Then T′ also spans S. But this is a contradiction, since it implies that dim S < n.

T-6 By Thm. 2.6, there is a basis which contains U_1, ..., U_n. Since dim S = n, this basis cannot contain any other vectors.

T-7 Without loss of generality, suppose that U_1, ..., U_m (m ≤ n) are linearly independent and, if m < n, that U_{m+1}, ..., U_n can be written as linear combinations of U_1, ..., U_m. Then {U_1, ..., U_m} spans S[U_1, ..., U_n]; hence dim S[U_1, ..., U_n] = m.

T-8 Recall that det A^T = det A.

T-9 Let S_1 = {U_1, ..., U_k} and S_2 = {U_1, ..., U_k, U_{k+1}, ..., U_n}.

(a) If a_1 U_1 + ··· + a_k U_k = 0, where not all of the

Page 24

a_i are zero, then a_1 U_1 + ··· + a_k U_k + 0·U_{k+1} + ··· + 0·U_n is a nontrivial linear combination of vectors in S_2 which vanishes.

(b) If S_1 is linearly dependent, then so is S_2, by (a).

1, −5, 5, −2 (with respect to the natural basis); 2, 3, 5, −1, 13 (with respect to B and C).

Page 25

L(0) = L(0 + 0) = L(0) + L(0); hence L(0) = 0. Suppose V_1 and V_2 are in range L; then L(U_1) = V_1 and L(U_2) = V_2, where U_1 and U_2 are in S. Then L(U_1 + U_2) = L(U_1) + L(U_2) = V_1 + V_2; hence V_1 + V_2 is in range L. If a is a constant, then L(aU_1) = aV_1; hence aV_1 is in range L. Thus range L is a subspace of T, by Def. 1.2, Sect. 2.1.

Page 26

T-4 Every X in S can be written as X = a_1 U_1 + ··· + a_n U_n; hence every vector in range L can be written as Y = L(a_1 U_1 + ··· + a_n U_n) = a_1 L(U_1) + ··· + a_n L(U_n). Therefore {L(U_1), ..., L(U_n)} spans range L.

T-5 Let U = β_1 U_1 + ··· + β_n U_n and V = γ_1 U_1 + ··· + γ_n U_n, and suppose L(U) = L(V). Then β_i = γ_i (1 ≤ i ≤ n), and therefore U = V; hence L is one-to-one. If X = [β_1, ..., β_n]^T is an arbitrary vector in R^n, then L(β_1 U_1 + ··· + β_n U_n) = X; hence L is onto.

T-6 If (X) = X for every X in R , then

T-8 Let {U_1, ..., U_n} be a basis for S. If a_1 L(U_1) + ··· + a_n L(U_n) = 0, then L(a_1 U_1 + ··· + a_n U_n) = 0 (Thm. 3.1). Thus a_1 U_1 + ··· + a_n U_n = 0, because ker L = {0}, and

Page 27

a_1 = ··· = a_n = 0, since U_1, ..., U_n are linearly independent. Therefore {L(U_1), ..., L(U_n)} is linearly independent and, from Exercise T-4, spans range L; hence it is a basis for range L.

T-9 Let A = [a_ij] be m × n. Define L(A) = [a_11, a_12, ..., a_1n, a_21, a_22, ..., a_2n, ..., a_m1, ..., a_mn]^T.

T-10 If L is one-to-one, then dim(ker L) = 0 (Thm. 3.3); then by Thm. 3.5, dim(range L) = n = dim T. Hence L is onto. Conversely, if L is onto, then dim(range L) = n and dim(ker L) = 0 (Thm. 3.5); hence L is one-to-one (Thm. 3.3).

T-11 Let U_1, ..., U_k be linearly independent in S. If L(U_1), ..., L(U_k) are linearly dependent, then there are scalars a_1, ..., a_k, not all zero, such that

a_1 L(U_1) + ··· + a_k L(U_k) = Θ.

Let U = a_1 U_1 + ··· + a_k U_k (which is nonzero); then L(U) = Θ, so L is not one-to-one (Thm. 3.3). For the converse, let U ≠ 0; then L(U) ≠ Θ, by hypothesis; hence L is one-to-one (Thm. 3.3).

T-12 Use Ex. T-11 and the fact that L: S → R^n, defined by L(U) = (U)_B, is one-to-one.

Page 28

T-13 In Thm. 3.8, let T = S, and let L(U) = U for every U (L = identity). The result follows from Eq. (15).

Page 29

10. 3   12. 3   14. {X_1, X_2}   16. yes

T-1 Let A be an m × n matrix, and suppose A_1 consists of columns j_1, ..., j_k of A, where 1 ≤ j_1 < ··· < j_k ≤ n and 1 ≤ k ≤ n. If the rows of A are linearly dependent, there are constants c_1, ..., c_m, not all zero, such that

c_1 [a_11, a_12, ..., a_1n] + c_2 [a_21, a_22, ..., a_2n] + ··· + c_m [a_m1, a_m2, ..., a_mn] = [0, 0, ..., 0].

Then

T-3 Let L: R^n → R^m be defined by L(X) = AX. By Theorem 3.5, dim(range L) + dim(ker L) = n. Since R(A) = dim(range L) and N(A) = dim(ker L), the conclusion follows.
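A worked instance of R(A) + N(A) = n (my own example):

A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix}: the second row is twice the first, so R(A) = 1, while AX = 0 reduces to x_1 + 2x_2 + 3x_3 = 0, whose solution space has dimension N(A) = 2; indeed 1 + 2 = 3 = n.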

T-4 The same sequence of elementary row operations that leads from A to B automatically leads from A_1 to B_1.

Page 30

Suppose columns p_1, ..., p_r of A form a basis for the column space of A; then they are linearly independent. If A_1 is the submatrix consisting of columns p_1, ..., p_r of A, then A_1 is of rank r and consequently has a nonzero subdeterminant of order r (Theorem 4.3). Let B_1 be the submatrix consisting of columns p_1, ..., p_r of B. Since B_1 is row equivalent to A_1, and elementary row operations preserve nonvanishing of determinants, B_1 has a nonzero subdeterminant of order r. The proof of the converse is similar.

Let B be as defined in Thm. 4.6, with p_1 = 1, p_2 = 2, ..., p_k = k. Then det B ≠ 0 if and only if b_11 = b_22 = ··· = b_kk = 1. Now the conclusion follows from Thm. 4.6.

Let A be an m × n matrix in row echelon form with k nonzero rows, which we denote by R_1, ..., R_k; thus R_i = [r_i1, r_i2, ..., r_in]. With j_1, ..., j_k as introduced in Def. 2.3, Sect. 1.2, r_ij = 0 if j < j_i and r_{i j_i} = 1. If

c_1 R_1 + c_2 R_2 + ··· + c_k R_k = [0, 0, ..., 0],

examination of the j_1-th component on the left yields c_1 = 0, then examination of the j_2-th component yields c_2 = 0, and so forth. Therefore R_1, ..., R_k are linearly independent.

T-5

T-6

T-7

Page 31

T-8 AX = Y has a solution if and only if Y is a linear combination of the columns of A.

T-9 Follows from Cor. 2.1, Sect. 2.2.

T-10 From Ex. T-9, det A = 0 if and only if R(A) < n; now

X (d) = - X j/30 / 6

Page 33

X·Y = a_1(X_1·Y) + a_2(X_2·Y) + ··· + a_m(X_m·Y) = 0.

(a) In particular, take X = Y, which yields

Page 34

(b) X·(Y − Z) = 0 for all X in R^n. Apply (a) to

in the same direction

T-8 |X + Y|^2 = (X + Y)·(X + Y) = |X|^2 + |Y|^2 + 2(X·Y), from which the conclusion follows.
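For instance, when X·Y = 0 this identity reduces to the Pythagorean relation |X + Y|^2 = |X|^2 + |Y|^2.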

T-9 Follows directly from Def. 5.2.

T-10 Follows directly from Def. 5.2 and the definition of matrix multiplication.

T-11 Take X = U − V in part (d) of Thm. 5.1 to obtain (a) and (b). For (c), simply observe that |−X| = |X|. For (d), let X = U − W and Y = W − V in Thm. 5.3.
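A worked version of the step for (d), assuming Thm. 5.3 is the triangle inequality |X + Y| ≤ |X| + |Y|:

|U − V| = |(U − W) + (W − V)| ≤ |U − W| + |W − V|.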

Page 35

L 2t

Page 36

fio

1

ilo

Page 37

T-1 Suppose AX_1 = λX_1 and AX_2 = λX_2; then A(X_1 + X_2) = λX_1 + λX_2 = λ(X_1 + X_2) and, if c is a scalar, then A(cX_1) = cAX_1 = cλX_1 = λ(cX_1). The conclusion follows.

T-2 See Exercise T-20, Sect. 1.4.

T-3 Since X_1, ..., X_{j−1}, X_j are linearly dependent, there are constants a_1, ..., a_{j−1}, b, not all zero, such that

Page 38

T-4 The remainder of the proof follows directly from Eq. (18): if λ ≠ 0, then (18) determines a uniquely; if λ = 0, then (18) cannot be satisfied.

If B = P^{-1}AP and C = Q^{-1}BQ, then C = (Q^{-1}P^{-1}) A (PQ) = (PQ)^{-1} A (PQ).

T-6 Let X_1, ..., X_n be the columns of P; then

T-8 In the equation preceding Thm. 6.12, observe that P^{-1}AP = diag[λ_1, ..., λ_n].

Page 39

T-9 Let B = P^{-1}AP, and note that I = P^{-1}IP. Then B − λI = P^{-1}AP − λP^{-1}IP = P^{-1}(A − λI)P; hence det(B − λI) = (det P^{-1}) det(A − λI) (det P) = det(A − λI). Similar matrices therefore have the same eigenvalues.

T-10 If A is triangular, then A − λI is also triangular;

T-12 The constant term in p(λ) = det(A − λI) is, on the one hand, equal to the product of the roots of p(λ) (which are the eigenvalues of A) and, on the other hand, equal to p(0) = det A.

T-13 If I = A^T A, then det I = det(A^T A) = (det A^T)(det A). Since det I = 1 and det A^T = det A, it follows that (det A)^2 = 1; hence det A = ±1.

T-14 If A^T A = B^T B = I, then (AB)^T(AB) = B^T A^T A B = B^T B = I; hence AB is orthogonal.

T-15 (P^{-1}AP)^T = (P^T A P)^T = P^T A^T (P^T)^T = P^T A^T P = P^{-1}AP.

T-16 Let X_1, ..., X_n be orthonormal eigenvectors of A, and let P be the orthogonal matrix which has X_1, ..., X_n as columns. Then A = PDP^{-1}, where D is diagonal. Now the result follows from Exercise T-15.
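A short check of the symmetry this yields (assuming the result in question is that A is symmetric, and using P^{-1} = P^T for an orthogonal P):

A^T = (P D P^{-1})^T = (P D P^T)^T = (P^T)^T D^T P^T = P D P^T = A,

since a diagonal D satisfies D^T = D.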
