
W W L CHEN

© W W L Chen, 1997, 2008.

This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied, with or without permission from the author. However, this document may not be kept on any information storage and retrieval system without permission from the author, unless such system is not accessible to any individuals other than its owners.

Chapter 8

LINEAR TRANSFORMATIONS

8.1 Euclidean Linear Transformations

By a transformation from Rn into Rm, we mean a function of the type T : Rn → Rm, with domain Rn and codomain Rm. For every vector x ∈ Rn, the vector T(x) ∈ Rm is called the image of x under the transformation T, and the set

R(T) = {T(x) : x ∈ Rn},

of all images under T, is called the range of the transformation T.

Remark. For our convenience later, we have chosen to use R(T) instead of the usual T(Rn) to denote the range of the transformation T.

For every x = (x1, . . . , xn) ∈ Rn, we can write

T(x) = T(x1, . . . , xn) = (y1, . . . , ym).

Here, for every i = 1, . . . , m, we have

yi = Ti(x1, . . . , xn),    (1)

where Ti : Rn → R is a real valued function.

Definition. A transformation T : Rn → Rm is called a linear transformation if there exists a real matrix

A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}

such that for every x = (x1, . . . , xn) ∈ Rn, we have T(x1, . . . , xn) = (y1, . . . , ym), where

y1 = a11x1 + . . . + a1nxn,
. . .
ym = am1x1 + . . . + amnxn.    (2)

The matrix A is called the standard matrix for the linear transformation T.

Remarks. (1) In other words, a transformation T : Rn → Rm is linear if the equation (1) for every i = 1, . . . , m is linear.

(2) If we write x ∈ Rn and y ∈ Rm as column matrices, then (2) can be written in the form y = Ax, and so the linear transformation T can be interpreted as multiplication of x ∈ Rn by the standard matrix A.
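To make the matrix description concrete, here is a minimal NumPy sketch; the matrix A below is an arbitrary choice for illustration, not one of the chapter's examples.

```python
import numpy as np

# A hypothetical 2x3 standard matrix A, so T : R^3 -> R^2 with T(x) = Ax.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

def T(x):
    """Apply the linear transformation with standard matrix A."""
    return A @ x

x = np.array([1.0, 1.0, 2.0])
print(T(x))                       # image of x under T
# Linearity: T(u + v) = T(u) + T(v) and T(cu) = cT(u).
u, v, c = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0]), 2.5
print(np.allclose(T(u + v), T(u) + T(v)), np.allclose(T(c * u), c * T(u)))
```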

Definition. A linear transformation T : Rn → Rm is said to be a linear operator if n = m. In this case, we say that T is a linear operator on Rn.

Example 8.1.1. The linear transformation T : R5 → R3, defined by three equations of the type (2) in five variables, has as its standard matrix the 3 × 5 matrix of the coefficients of those equations.

Example 8.1.3. Suppose that I is the identity n × n matrix. The linear operator T : Rn → Rn, where T(x) = Ix for every x ∈ Rn, is the identity operator on Rn. Clearly T(x) = x for every x ∈ Rn.


PROPOSITION 8A. Suppose that T : Rn → Rm is a linear transformation, and that {e1, . . . , en} is the standard basis for Rn. Then the standard matrix for T is given by

A = ( T(e1) . . . T(en) ),

where T(ej) is a column matrix for every j = 1, . . . , n.
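Proposition 8A translates directly into a computation: apply T to each standard basis vector and use the results as columns. A small NumPy sketch (the helper name is ours; the reflection used as a test case anticipates the example below):

```python
import numpy as np

def standard_matrix(T, n):
    """Build the standard matrix of a linear map T : R^n -> R^m
    column by column from the images of the standard basis (Proposition 8A)."""
    columns = [T(np.eye(n)[:, j]) for j in range(n)]
    return np.column_stack(columns)

# Reflection across the x2-axis: T(x1, x2) = (-x1, x2).
reflect = lambda x: np.array([-x[0], x[1]])
print(standard_matrix(reflect, 2))   # [[-1. 0.], [0. 1.]]
```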

8.2 Linear Operators on R2

In this section, we consider a number of simple linear operators on R2 and their standard matrices.

Example 8.2.1. Consider reflection across the x2-axis, so that T(x1, x2) = (−x1, x2). Clearly T(e1) = \begin{pmatrix} -1 \\ 0 \end{pmatrix} and T(e2) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, and so it follows from Proposition 8A that the standard matrix is given by

A = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}.

It is not difficult to see that the standard matrices for reflection across the x1-axis and across the line x1 = x2 are given respectively by

\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} and \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.

We give a summary in the table below:

Linear operator | Equations | Standard matrix
Reflection across the x2-axis | y1 = −x1, y2 = x2 | \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}
Reflection across the x1-axis | y1 = x1, y2 = −x2 | \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
Reflection across the line x1 = x2 | y1 = x2, y2 = x1 | \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}


Example 8.2.2. For orthogonal projection onto the x1-axis, we have T(x1, x2) = (x1, 0), with standard matrix

\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.

Similarly, the standard matrix for orthogonal projection onto the x2-axis is given by

\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.

We give a summary in the table below:

Linear operator | Equations | Standard matrix
Orthogonal projection onto the x1-axis | y1 = x1, y2 = 0 | \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
Orthogonal projection onto the x2-axis | y1 = 0, y2 = x2 | \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}

Example 8.2.3. For anticlockwise rotation by an angle θ, we have T(x1, x2) = (y1, y2), where

y1 = x1 cos θ − x2 sin θ and y2 = x1 sin θ + x2 cos θ.

It follows that the standard matrix is given by

A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.

We give a summary in the table below:

Linear operator | Equations | Standard matrix
Anticlockwise rotation by angle θ | y1 = x1 cos θ − x2 sin θ, y2 = x1 sin θ + x2 cos θ | \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
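A quick numerical check of the rotation matrix, assuming nothing beyond the formula above:

```python
import numpy as np

def rotation(theta):
    """Standard matrix for anticlockwise rotation of R^2 by angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

R = rotation(np.pi / 2)           # quarter turn
print(R @ np.array([1.0, 0.0]))   # e1 maps (approximately) to e2
x = np.array([3.0, 4.0])
print(np.linalg.norm(R @ x), np.linalg.norm(x))   # rotation preserves length
```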

Example 8.2.4. For contraction or dilation by a non-negative factor k, we have T(x1, x2) = (kx1, kx2), with standard matrix

A = \begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix}.

The operator is called a contraction if 0 < k < 1 and a dilation if k > 1, and can be extended to negative values of k by noting that for k < 0, we have

\begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} -k & 0 \\ 0 & -k \end{pmatrix}.

This describes contraction or dilation by the non-negative scalar −k followed by reflection across the origin. We give a summary in the table below:

Linear operator | Equations | Standard matrix
Contraction or dilation by factor k | y1 = kx1, y2 = kx2 | \begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix}


Example 8.2.5. For expansion or compression in the x1-direction by a positive factor k, we have T(x1, x2) = (kx1, x2), with standard matrix

A = \begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}.

This can be extended to negative values of k by noting that for k < 0, we have

\begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} -k & 0 \\ 0 & 1 \end{pmatrix}.

This describes expansion or compression in the x1-direction by the positive factor −k followed by reflection across the x2-axis. Similarly, for expansion or compression in the x2-direction by a non-zero factor k, we have the standard matrix

\begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}.

We give a summary in the table below:

Linear operator | Equations | Standard matrix
Expansion or compression in the x1-direction | y1 = kx1, y2 = x2 | \begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}
Expansion or compression in the x2-direction | y1 = x1, y2 = kx2 | \begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}

[Figures: the effect of T for the cases k = 1 and k = −1.]

Example 8.2.6. For shears in the x1-direction with factor k, we have T(x1, x2) = (x1 + kx2, x2), with standard matrix

\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}.


Similarly, for shears in the x2-direction with factor k, we have standard matrix

\begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}.

We give a summary in the table below:

Linear operator | Equations | Standard matrix
Shear in the x1-direction with factor k | y1 = x1 + kx2, y2 = x2 | \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}
Shear in the x2-direction with factor k | y1 = x1, y2 = kx1 + x2 | \begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}

The standard matrix of a linear operator obtained by applying several of the above operators in succession can be found by tracking the images of the standard basis vectors. For example, under one such succession of three operators,

e1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} ↦ \begin{pmatrix} -1 \\ 0 \end{pmatrix} ↦ \begin{pmatrix} -1 \\ 0 \end{pmatrix} ↦ \begin{pmatrix} -1 \\ 0 \end{pmatrix} = T(e1),
e2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} ↦ \begin{pmatrix} 0 \\ 1 \end{pmatrix} ↦ \begin{pmatrix} 3 \\ 1 \end{pmatrix} ↦ T(e2),

and Proposition 8A then gives the standard matrix A = ( T(e1) T(e2) ) of the composite operator T.

Let us summarize the above and consider a few special cases of invertible linear operators with k ≠ 0. Clearly, if A is the standard matrix for an invertible linear operator T, then the inverse matrix A−1 is the standard matrix for the inverse linear operator T−1.
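As a small illustration of the last remark, the inverse of the standard matrix of a shear is the standard matrix of the inverse shear; a NumPy sketch with factor 3 (our own choice):

```python
import numpy as np

shear = np.array([[1.0, 3.0],
                  [0.0, 1.0]])            # shear in the x1-direction with factor 3
inverse = np.linalg.inv(shear)            # standard matrix of the inverse operator
print(inverse)                            # the shear with factor -3
x = np.array([2.0, 5.0])
print(np.allclose(inverse @ (shear @ x), x))   # T^{-1}(T(x)) = x
```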

Next, let us consider the question of elementary row operations on 2 × 2 matrices. It is not difficult to see that an elementary row operation performed on a 2 × 2 matrix A has the effect of multiplying the matrix A by some elementary matrix E to give the product EA. We have the following table.

Elementary row operation | Elementary matrix E
Interchanging the two rows | \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
Multiplying row 1 by a non-zero factor k | \begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}
Multiplying row 2 by a non-zero factor k | \begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}
Adding k times row 2 to row 1 | \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}
Adding k times row 1 to row 2 | \begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}

Since every invertible matrix can be reduced to the identity matrix by finitely many elementary row operations, every invertible matrix is a product of elementary matrices, and each elementary matrix above is the standard matrix of a reflection, an expansion or compression, or a shear. We have proved the following result.

PROPOSITION 8B. Suppose that the linear operator T : R2 → R2 has standard matrix A, where A is invertible. Then T is the product of a succession of finitely many reflections, expansions, compressions and shears.
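The idea behind Proposition 8B can be sketched in code: row-reduce A to the identity, record the elementary matrices used, and invert them. This is only a sketch for the 2 × 2 case, with the helper name and the test matrix chosen by us:

```python
import numpy as np

def elementary_factors(A):
    """Factor an invertible 2x2 matrix A into elementary matrices (Gauss-Jordan sketch).
    Records E1, ..., Es with Es ... E1 A = I, then returns their inverses,
    so that A = inv(E1) inv(E2) ... inv(Es)."""
    A = A.astype(float).copy()
    ops = []
    if A[0, 0] == 0:                                   # need a non-zero pivot
        E = np.array([[0.0, 1.0], [1.0, 0.0]])         # interchange the two rows
        ops.append(E); A = E @ A
    E = np.array([[1.0 / A[0, 0], 0.0], [0.0, 1.0]])   # scale row 1
    ops.append(E); A = E @ A
    E = np.array([[1.0, 0.0], [-A[1, 0], 1.0]])        # add a multiple of row 1 to row 2
    ops.append(E); A = E @ A
    E = np.array([[1.0, 0.0], [0.0, 1.0 / A[1, 1]]])   # scale row 2
    ops.append(E); A = E @ A
    E = np.array([[1.0, -A[0, 1]], [0.0, 1.0]])        # add a multiple of row 2 to row 1
    ops.append(E); A = E @ A
    return [np.linalg.inv(E) for E in ops]             # A is the product of these, in order

A = np.array([[2.0, 1.0], [4.0, 3.0]])
product = np.eye(2)
for F in elementary_factors(A):
    product = product @ F
print(np.allclose(product, A))   # True: each factor is a reflection, expansion/compression or shear
```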

In fact, we can prove the following result concerning images of straight lines.

PROPOSITION 8C. Suppose that the linear operator T : R2 → R2 has standard matrix A, where A is invertible. Then
(a) the image under T of a straight line is a straight line;
(b) the image under T of a straight line through the origin is a straight line through the origin; and
(c) the images under T of parallel straight lines are parallel straight lines.

Proof. Suppose that T(x1, x2) = (y1, y2). Since A is invertible, we have x = A−1y, where

x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} and y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.

A straight line in R2 has equation αx1 + βx2 = γ for suitable α, β, γ ∈ R, or, in matrix form,

( α β ) \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = ( γ ),   and hence   ( α β ) A−1 \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = ( γ ).

Let ( α′ β′ ) = ( α β ) A−1. Then the image of the straight line is given by α′y1 + β′y2 = γ, again a straight line. This proves (a). To prove (b), note that straight lines through the origin correspond to γ = 0. To prove (c), note that parallel straight lines correspond to different values of γ for the same values of α and β.
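A numerical spot check of part (a), using an arbitrary invertible matrix of our own choosing: points on a straight line remain collinear after the transformation.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])                  # an arbitrary invertible matrix
t = np.linspace(-2.0, 2.0, 5)
line = np.stack([1.0 + 3.0 * t, 2.0 - t])   # points on the line x = (1, 2) + t(3, -1)
image = A @ line                            # images under T
# The images are again collinear: successive difference vectors are parallel.
d = np.diff(image, axis=1)
cross = d[0, :-1] * d[1, 1:] - d[1, :-1] * d[0, 1:]
print(np.allclose(cross, 0.0))              # True
```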

8.3 Elementary Properties of Euclidean Linear Transformations

In this section, we establish a number of simple properties of euclidean linear transformations.

PROPOSITION 8D. Suppose that T1 : Rn → Rm and T2 : Rm → Rk are linear transformations. Then T = T2 ◦ T1 : Rn → Rk is also a linear transformation.

Proof. Since T1 and T2 are linear transformations, they have standard matrices A1 and A2 respectively. In other words, we have T1(x) = A1x for every x ∈ Rn and T2(y) = A2y for every y ∈ Rm. It follows that T(x) = T2(T1(x)) = A2A1x for every x ∈ Rn, so that T has standard matrix A2A1.
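In code, composing the transformations and multiplying their standard matrices give the same result; the sketch below uses rotation by π/2 and projection onto the x1-axis, as in Example 8.3.1 below.

```python
import numpy as np

A1 = np.array([[0.0, -1.0],
               [1.0,  0.0]])     # anticlockwise rotation by pi/2
A2 = np.array([[1.0, 0.0],
               [0.0, 0.0]])      # orthogonal projection onto the x1-axis

T1 = lambda x: A1 @ x
T2 = lambda x: A2 @ x

x = np.array([2.0, 3.0])
# Composition of the maps agrees with multiplication of the standard matrices.
print(np.allclose(T2(T1(x)), (A2 @ A1) @ x))   # True
print(A2 @ A1, A1 @ A2)                         # the two compositions differ
```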

Example 8.3.1. Suppose that T1 : R2 → R2 is anticlockwise rotation by π/2 and T2 : R2 → R2 is orthogonal projection onto the x1-axis. Then the respective standard matrices are

A1 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} and A2 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.

It follows that the standard matrices for T2 ◦ T1 and T1 ◦ T2 are respectively

A2A1 = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} and A1A2 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.

Example 8.3.2. Suppose that T1 : R2 → R2 is anticlockwise rotation by θ and T2 : R2 → R2 is anticlockwise rotation by φ. Then the respective standard matrices are

A1 = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} and A2 = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}.

It follows that the standard matrix for T2 ◦ T1 is

A2A1 = \begin{pmatrix} \cos\phi\cos\theta - \sin\phi\sin\theta & -\cos\phi\sin\theta - \sin\phi\cos\theta \\ \sin\phi\cos\theta + \cos\phi\sin\theta & \cos\phi\cos\theta - \sin\phi\sin\theta \end{pmatrix} = \begin{pmatrix} \cos(\phi+\theta) & -\sin(\phi+\theta) \\ \sin(\phi+\theta) & \cos(\phi+\theta) \end{pmatrix}.

Hence T2 ◦ T1 is anticlockwise rotation by φ + θ.
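A quick numerical confirmation of this identity:

```python
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta, phi = 0.3, 1.1
# Composing two rotations gives the rotation by the sum of the angles.
print(np.allclose(rotation(phi) @ rotation(theta), rotation(phi + theta)))   # True
```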

Example 8.3.3. The reader should check that in R2, reflection across the x1-axis followed by reflection across the x2-axis gives reflection across the origin.

Linear transformations that map distinct vectors to distinct vectors are of special importance.


Definition. A linear transformation T : Rn → Rm is said to be one-to-one if for every x′, x″ ∈ Rn, we have x′ = x″ whenever T(x′) = T(x″).

Example 8.3.4. If we consider linear operators T : R2 → R2, then T is one-to-one precisely when the standard matrix A is invertible. To see this, suppose first of all that A is invertible. If T(x′) = T(x″), then Ax′ = Ax″. Multiplying on the left by A−1, we obtain x′ = x″. Suppose next that A is not invertible. Then there exists x ∈ R2 such that x ≠ 0 and Ax = 0. On the other hand, we clearly have A0 = 0. It follows that T(x) = T(0), so that T is not one-to-one.

PROPOSITION 8E. Suppose that the linear operator T : Rn → Rn has standard matrix A. Then the following statements are equivalent:
(a) The matrix A is invertible.
(b) The linear operator T is one-to-one.
(c) The range of T is Rn; in other words, R(T) = Rn.

Proof. ((a)⇒(b)) Suppose that T(x′) = T(x″). Then Ax′ = Ax″. Multiplying on the left by A−1 gives x′ = x″.

((b)⇒(a)) Suppose that T is one-to-one. Then the system Ax = 0 has unique solution x = 0 in Rn. It follows that A can be reduced by elementary row operations to the identity matrix I, and is therefore invertible.

((a)⇒(c)) For any y ∈ Rn, clearly x = A−1y satisfies Ax = y, so that T(x) = y.

((c)⇒(a)) Suppose that {e1, . . . , en} is the standard basis for Rn. Let x1, . . . , xn ∈ Rn be chosen to satisfy T(xj) = ej, so that Axj = ej, for every j = 1, . . . , n. Write

C = ( x1 . . . xn ).

Then AC = ( Ax1 . . . Axn ) = ( e1 . . . en ) = I, so that A is invertible.

Definition. Suppose that the linear operator T : Rn → Rn has standard matrix A, where A is invertible. Then the linear operator T−1 : Rn → Rn, defined by T−1(x) = A−1x for every x ∈ Rn, is called the inverse of the linear operator T.

Remark. Clearly T−1(T(x)) = x and T(T−1(x)) = x for every x ∈ Rn.

Example 8.3.5. Consider a linear operator T : R2 → R2, defined by T(x) = Ax for every x ∈ R2, where A is an invertible 2 × 2 matrix; the inverse operator is then given by T−1(x) = A−1x for every x ∈ R2.
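Since the explicit matrix of Example 8.3.5 is not reproduced above, the sketch below uses an arbitrary invertible matrix of our own choosing to illustrate the definition of the inverse operator.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [5.0, 2.0]])               # arbitrary invertible matrix (det = 1), our choice
A_inv = np.linalg.inv(A)                 # standard matrix of the inverse operator T^{-1}

x = np.array([4.0, -2.0])
print(np.allclose(A_inv @ (A @ x), x))   # T^{-1}(T(x)) = x
print(np.allclose(A @ (A_inv @ x), x))   # T(T^{-1}(x)) = x
```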

Next, we study the linearity properties of euclidean linear transformations which we shall use later to discuss linear transformations in arbitrary real vector spaces.


PROPOSITION 8F. A transformation T : Rn → Rm is linear if and only if the following two conditions are satisfied:
(a) For every u, v ∈ Rn, we have T(u + v) = T(u) + T(v).
(b) For every u ∈ Rn and c ∈ R, we have T(cu) = cT(u).

Proof. Suppose first of all that T : Rn → Rm is a linear transformation. Let A be the standard matrix for T. Then for every u, v ∈ Rn and c ∈ R, we have

T(u + v) = A(u + v) = Au + Av = T(u) + T(v)

and

T(cu) = A(cu) = c(Au) = cT(u).

Suppose now that (a) and (b) hold. To show that T is linear, we need to find a matrix A such that T(x) = Ax for every x ∈ Rn. Suppose that {e1, . . . , en} is the standard basis for Rn. As suggested by Proposition 8A, we write

A = ( T(e1) . . . T(en) ),

where T(ej) is a column matrix for every j = 1, . . . , n. For any vector x = (x1, . . . , xn) ∈ Rn, we can write x = x1e1 + . . . + xnen, and so it follows from (a) and (b) that

T(x) = x1T(e1) + . . . + xnT(en) = Ax,

as required.

Definition. Suppose that T : Rn → Rn is a linear operator. Then any real number λ ∈ R is called an eigenvalue of T if there exists a non-zero vector x ∈ Rn such that T(x) = λx. This non-zero vector x ∈ Rn is called an eigenvector of T corresponding to the eigenvalue λ.

Remark. Note that the equation T(x) = λx is equivalent to the equation Ax = λx, where A is the standard matrix of T. It follows that there is no distinction between eigenvalues and eigenvectors of T and those of the standard matrix A. We therefore do not need to discuss this problem any further.
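Because the eigenvalues and eigenvectors of T are exactly those of its standard matrix, they can be computed directly from A; a short sketch with an arbitrary matrix of our own choosing:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                     # arbitrary symmetric example
eigenvalues, eigenvectors = np.linalg.eig(A)   # eigenpairs of the standard matrix
print(eigenvalues)                             # eigenvalues 3 and 1 (in some order)
v = eigenvectors[:, 0]
print(np.allclose(A @ v, eigenvalues[0] * v))  # T(v) = lambda * v
```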

8.4 General Linear Transformations

Suppose that V and W are real vector spaces. To define a linear transformation from V into W, we are motivated by Proposition 8F, which describes the linearity properties of euclidean linear transformations.


By a transformation from V into W, we mean a function of the type T : V → W, with domain V and codomain W. For every vector u ∈ V, the vector T(u) ∈ W is called the image of u under the transformation T.

Definition. A transformation T : V → W from a real vector space V into a real vector space W is called a linear transformation if the following two conditions are satisfied:
(LT1) For every u, v ∈ V, we have T(u + v) = T(u) + T(v).
(LT2) For every u ∈ V and c ∈ R, we have T(cu) = cT(u).

Definition. A linear transformation T : V → V from a real vector space V into itself is called a linear operator on V.

Example 8.4.1. Suppose that V and W are two real vector spaces. The transformation T : V → W, where T(u) = 0 for every u ∈ V, is clearly linear, and is called the zero transformation from V to W.

Example 8.4.2. Suppose that V is a real vector space. The transformation I : V → V, where I(u) = u for every u ∈ V, is clearly linear, and is called the identity operator on V.

Example 8.4.3. Suppose that V is a real vector space, and that k ∈ R is fixed. The transformation T : V → V, where T(u) = ku for every u ∈ V, is clearly linear. This operator is called a dilation if k > 1 and a contraction if 0 < k < 1.

As a further example, consider the vector space Pn of all polynomials with real coefficients and degree at most n, and the transformation T : Pn → Pn that reverses the order of the coefficients: for every polynomial

p = p0 + p1x + . . . + pnxn

in Pn, we let

T(p) = pn + pn−1x + . . . + p0xn.

Suppose now that q = q0 + q1x + . . . + qnxn is another polynomial in Pn. Then

p + q = (p0 + q0) + (p1 + q1)x + . . . + (pn + qn)xn,

so that

T(p + q) = (pn + qn) + (pn−1 + qn−1)x + . . . + (p0 + q0)xn
= (pn + pn−1x + . . . + p0xn) + (qn + qn−1x + . . . + q0xn) = T(p) + T(q).


Also, for any c ∈ R, we have cp = cp0 + cp1x + . . . + cpnxn, so that

T(cp) = cpn + cpn−1x + . . . + cp0xn = c(pn + pn−1x + . . . + p0xn) = cT(p).

Hence T is a linear transformation.
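Representing a polynomial in Pn by its array of coefficients, the coefficient-reversing transformation and its linearity can be checked numerically; a minimal sketch (the representation and names are ours):

```python
import numpy as np

def T(p):
    """Coefficient-reversing map on P_n: p0 + p1 x + ... + pn x^n -> pn + ... + p0 x^n.
    A polynomial is represented by its coefficient array (p0, p1, ..., pn)."""
    return p[::-1].copy()

p = np.array([1.0, 0.0, 2.0, -3.0])   # 1 + 2x^2 - 3x^3 in P_3
q = np.array([4.0, 1.0, 0.0, 5.0])
c = 2.5
print(np.array_equal(T(p + q), T(p) + T(q)))   # additivity
print(np.array_equal(T(c * p), c * T(p)))      # homogeneity
```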

Example 8.4.6. Let V denote the vector space of all real valued functions differentiable everywhere in R, and let W denote the vector space of all real valued functions defined on R. Consider the transformation T : V → W, where T(f) = f′ for every f ∈ V. It is easy to check from properties of derivatives that T is a linear transformation.

Consider a linear transformation T : V → W from a finite dimensional real vector space V into a real vector space W. Suppose that {v1, . . . , vn} is a basis of V. Then every u ∈ V can be written uniquely in the form u = β1v1 + . . . + βnvn, where β1, . . . , βn ∈ R. It follows that

T(u) = T(β1v1 + . . . + βnvn) = T(β1v1) + . . . + T(βnvn) = β1T(v1) + . . . + βnT(vn).

We have therefore proved the following generalization of Proposition 8A.

PROPOSITION 8G. Suppose that T : V → W is a linear transformation from a finite dimensional real vector space V into a real vector space W. Suppose further that {v1, . . . , vn} is a basis of V. Then T is completely determined by T(v1), . . . , T(vn).

Example 8.4.8. Consider a linear transformation T : P2 → R, where T(1) = 1, T(x) = 2 and T(x2) = 3. Since {1, x, x2} is a basis of P2, this linear transformation is completely determined. In particular, we have, for example,

T(5 − 3x + 2x2) = 5T(1) − 3T(x) + 2T(x2) = 5.

Example 8.4.9. Consider a linear transformation T : R4 → R, where T(1, 0, 0, 0) = 1, T(1, 1, 0, 0) = 2, T(1, 1, 1, 0) = 3 and T(1, 1, 1, 1) = 4. Since {(1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1)} is a basis of R4, this linear transformation is completely determined. In particular, we have, for example,

T(6, 4, 3, 1) = T(2(1, 0, 0, 0) + (1, 1, 0, 0) + 2(1, 1, 1, 0) + (1, 1, 1, 1))
= 2T(1, 0, 0, 0) + T(1, 1, 0, 0) + 2T(1, 1, 1, 0) + T(1, 1, 1, 1) = 14.
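Proposition 8G also gives a computational recipe: express u in the given basis and combine the prescribed values. A NumPy sketch of the calculation in Example 8.4.9:

```python
import numpy as np

basis = np.array([[1, 0, 0, 0],
                  [1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [1, 1, 1, 1]], dtype=float).T   # basis vectors as columns
values = np.array([1.0, 2.0, 3.0, 4.0])           # T on the basis vectors

u = np.array([6.0, 4.0, 3.0, 1.0])
coords = np.linalg.solve(basis, u)    # coordinates of u relative to the basis
print(coords)                         # [2. 1. 2. 1.]
print(coords @ values)                # 14.0, as in Example 8.4.9
```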

We also have the following generalization of Proposition 8D.

PROPOSITION 8H. Suppose that V, W, U are real vector spaces. Suppose further that T1 : V → W and T2 : W → U are linear transformations. Then T = T2 ◦ T1 : V → U is also a linear transformation.

Proof. Suppose that u, v ∈ V. Then

T(u + v) = T2(T1(u + v)) = T2(T1(u) + T1(v)) = T2(T1(u)) + T2(T1(v)) = T(u) + T(v).

Also, if c ∈ R, then

T(cu) = T2(T1(cu)) = T2(cT1(u)) = cT2(T1(u)) = cT(u).


8.5 Change of Basis

Suppose that V is a real vector space, with basis B = {u1, . . . , un}. Then every vector u ∈ V can be written uniquely as a linear combination

u = β1u1 + . . . + βnun,   where β1, . . . , βn ∈ R.    (3)

It follows that the vector u can be identified with the vector (β1, . . . , βn) ∈ Rn.

Definition. Suppose that u ∈ V and (3) holds. Then the matrix

[u]B = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix}

is called the coordinate matrix of u relative to the basis B = {u1, . . . , un}.

Example 8.5.1. [Not reproduced here: an explicit basis of Rn and the coordinate matrices of vectors relative to it.]

Consider now the function φ : V → Rn, where φ(u) = [u]B for every u ∈ V, identifying each vector in V with its coordinate matrix relative to the basis B. Note that

[u + v]B = [u]B + [v]B and [cu]B = c[u]B,

so that φ(u + v) = φ(u) + φ(v) and φ(cu) = cφ(u) for every u, v ∈ V and c ∈ R. Thus φ is a linear transformation, and preserves much of the structure of V. We also say that V is isomorphic to Rn. In practice, once we have made this identification between vectors and their coordinate matrices, we can basically forget about the basis B and imagine that we are working in Rn with the standard basis.

Clearly, if we change from one basis B = {u1, . . . , un} to another basis C = {v1, . . . , vn} of V, then we also need to find a way of calculating [u]C in terms of [u]B for every vector u ∈ V. To do this, note that each of the vectors v1, . . . , vn can be written uniquely as a linear combination of the vectors u1, . . . , un. Suppose that for i = 1, . . . , n, we have

vi = a1iu1 + . . . + aniun,   where a1i, . . . , ani ∈ R.

For a vector u ∈ V, write [u]C = \begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{pmatrix}. Then

u = γ1v1 + . . . + γnvn
= γ1(a11u1 + . . . + an1un) + . . . + γn(a1nu1 + . . . + annun)
= (γ1a11 + . . . + γna1n)u1 + . . . + (γ1an1 + . . . + γnann)un
= β1u1 + . . . + βnun.

Hence

β1 = γ1a11 + . . . + γna1n,
. . .
βn = γ1an1 + . . . + γnann.

Written in matrix notation, we have

\begin{pmatrix} \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{pmatrix}.

We have proved the following result.

PROPOSITION 8J. Suppose that B = {u1, . . . , un} and C = {v1, . . . , vn} are two bases of a real vector space V. Then for every u ∈ V, we have

[u]B = P[u]C,

where the columns of the matrix

P = ( [v1]B . . . [vn]B )

are precisely the coordinate matrices of the elements of C relative to the basis B.

Remark. Strictly speaking, Proposition 8J gives [u]B in terms of [u]C. However, note that the matrix P is invertible (why?), so that [u]C = P−1[u]B.

Definition. The matrix P in Proposition 8J is sometimes called the transition matrix from the basis C to the basis B.
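When V = Rn and both bases are given explicitly, the transition matrix can be computed by solving linear systems; a sketch with two bases of R2 chosen by us for illustration:

```python
import numpy as np

# Columns are the basis vectors; both sets are bases of R^2.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # basis B = {u1, u2}
C = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # basis C = {v1, v2}

# Column j of P is [v_j]_B, obtained by solving B [v_j]_B = v_j.
P = np.linalg.solve(B, C)           # transition matrix from C to B

u = np.array([5.0, 4.0])            # a vector of R^2
u_C = np.linalg.solve(C, u)         # coordinate matrix [u]_C
u_B = np.linalg.solve(B, u)         # coordinate matrix [u]_B
print(np.allclose(u_B, P @ u_C))    # Proposition 8J: [u]_B = P [u]_C
```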


Example 8.5.2. [Not reproduced here: explicit bases and the transition matrix between them, used to convert coordinate matrices.]

Example 8.5.3. Consider the vector space P2. [Not reproduced here: bases of P2 and the transition matrix between them.]

8.6 Kernel and Range

Consider first of all a euclidean linear transformation T : Rn → Rm. Suppose that A is the standard matrix for T. Then the range of the transformation T is given by

R(T) = {T(x) : x ∈ Rn} = {Ax : x ∈ Rn}.


It follows that R(T) is the set of all linear combinations of the columns of the matrix A, and is therefore the column space of A. On the other hand, the set

{x ∈ Rn : Ax = 0}

is the nullspace of A.

Recall that the sum of the dimension of the nullspace of A and the dimension of the column space of A is equal to the number of columns of A. This is known as the rank-nullity theorem. The purpose of this section is to extend this result to the setting of linear transformations. To do this, we need the following generalization of the idea of the nullspace and the column space.
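For a concrete matrix the two dimensions can be computed numerically; a short NumPy sketch with an arbitrary 3 × 5 matrix of our own choosing:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0, 3.0],
              [0.0, 1.0, 1.0, 2.0, 1.0],
              [1.0, 3.0, 1.0, 3.0, 4.0]])    # arbitrary 3 x 5 example (row 3 = row 1 + row 2)

rank = np.linalg.matrix_rank(A)              # dimension of the column space, R(T)
nullity = A.shape[1] - rank                  # dimension of the nullspace, ker(T)
print(rank, nullity, rank + nullity == A.shape[1])   # 2 3 True
```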

Definition. Suppose that T : V → W is a linear transformation from a real vector space V into a real vector space W. Then the set

ker(T) = {u ∈ V : T(u) = 0}

is called the kernel of T, and the set

R(T) = {T(u) : u ∈ V}

is called the range of T.

Example 8.6.1. For a euclidean linear transformation T with standard matrix A, we have shown that ker(T) is the nullspace of A, while R(T) is the column space of A.

Example 8.6.2. Suppose that T : V → W is the zero transformation. Clearly we have ker(T) = V and R(T) = {0}.

Example 8.6.3. Suppose that T : V → V is the identity operator on V. Clearly we have ker(T) = {0} and R(T) = V.

Example 8.6.4. Suppose that T : R2 → R2 is orthogonal projection onto the x1-axis. Then ker(T) is the x2-axis, while R(T) is the x1-axis.

Example 8.6.5. Suppose that T : Rn → Rn is one-to-one. Then ker(T) = {0} and R(T) = Rn, in view of Proposition 8E.

Example 8.6.6. Consider the linear transformation T : V → W, where V denotes the vector space of all real valued functions differentiable everywhere in R, where W denotes the space of all real valued functions defined in R, and where T(f) = f′ for every f ∈ V. Then ker(T) is the set of all differentiable functions with derivative 0, and so is the set of all constant functions in R.

Example 8.6.7. Consider the linear transformation T : V → R, where V denotes the vector space of all real valued functions Riemann integrable over the interval [0, 1], and where

T(f) = \int_0^1 f(x) dx

for every f ∈ V. Then ker(T) is the set of all Riemann integrable functions in [0, 1] with zero mean, while R(T) = R.

PROPOSITION 8K. Suppose that T : V → W is a linear transformation from a real vector space V into a real vector space W. Then ker(T) is a subspace of V, while R(T) is a subspace of W.
