Vector Spaces and Subspaces

Part of the document: Lyche, Numerical Linear Algebra and Matrix Factorizations, 2020 (pages 25–31).

Many mathematical systems have properties analogous to vectors in $\mathbb{R}^2$ or $\mathbb{R}^3$.

Definition 1.1 (Real Vector Space) A real vector space is a nonempty set $V$, whose objects are called vectors, together with two operations $+ : V \times V \to V$ and $\cdot : \mathbb{R} \times V \to V$, called addition and scalar multiplication, satisfying the following axioms for all vectors $u, v, w$ in $V$ and scalars $c, d$ in $\mathbb{R}$.

(V1) The sum $u + v$ is in $V$,
(V2) $u + v = v + u$,
(V3) $u + (v + w) = (u + v) + w$,
(V4) There is a zero vector $0$ such that $u + 0 = u$,
(V5) For each $u$ in $V$ there is a vector $-u$ in $V$ such that $u + (-u) = 0$,
(S1) The scalar multiple $c \cdot u$ is in $V$,
(S2) $c \cdot (u + v) = c \cdot u + c \cdot v$,
(S3) $(c + d) \cdot u = c \cdot u + d \cdot u$,
(S4) $c \cdot (d \cdot u) = (cd) \cdot u$,
(S5) $1 \cdot u = u$.

The scalar multiplication symbol $\cdot$ is often omitted, writing $cv$ instead of $c \cdot v$. We define $u - v := u + (-v)$. We call $V$ a complex vector space if the scalars consist of all complex numbers $\mathbb{C}$. In this book a vector space is either real or complex.

From the axioms it follows that

1. The zero vector is unique.

2. For each $u \in V$ the negative $-u$ of $u$ is unique.

3. $0u = 0$, $c0 = 0$, and $-u = (-1)u$.

Here are some examples.

1. The spaces $\mathbb{R}^n$ and $\mathbb{C}^n$, where $n \in \mathbb{N}$, are real and complex vector spaces, respectively.

2. Let $D$ be a subset of $\mathbb{R}$ and $d \in \mathbb{N}$. The set $V$ of all functions $f, g : D \to \mathbb{R}^d$ is a real vector space with

$(f + g)(t) := f(t) + g(t)$, $(cf)(t) := c f(t)$, for $t \in D$, $c \in \mathbb{R}$.

Two functions $f, g$ in $V$ are equal if $f(t) = g(t)$ for all $t \in D$. The zero element is the zero function given by $f(t) = 0$ for all $t \in D$, and the negative of $f$ is given by $-f = (-1)f$. In the following we will use boldface letters for functions only if $d > 1$.

3. For $n \geq 0$ the space $\Pi_n$ of polynomials of degree at most $n$ consists of all polynomials $p : \mathbb{R} \to \mathbb{R}$, $p : \mathbb{R} \to \mathbb{C}$, or $p : \mathbb{C} \to \mathbb{C}$ of the form

$p(t) := a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n$, (1.2)

where the coefficients $a_0, \ldots, a_n$ are real or complex numbers. $p$ is called the zero polynomial if all coefficients are zero. All other polynomials are said to be nontrivial. The degree of a nontrivial polynomial $p$ given by (1.2) is the smallest integer $0 \leq k \leq n$ such that $p(t) = a_0 + \cdots + a_k t^k$ with $a_k \neq 0$. The degree of the zero polynomial is not defined. $\Pi_n$ is a vector space if we define addition and scalar multiplication as for functions.

Definition 1.2 (Linear Combination) For $n \geq 1$ let $\mathcal{X} := \{x_1, \ldots, x_n\}$ be a set of vectors in a vector space $V$ and let $c_1, \ldots, c_n$ be scalars.

1. The sum $c_1 x_1 + \cdots + c_n x_n$ is called a linear combination of $x_1, \ldots, x_n$.

2. The linear combination is nontrivial if $c_j x_j \neq 0$ for at least one $j$.

3. The set of all linear combinations of elements in $\mathcal{X}$ is denoted $\mathrm{span}(\mathcal{X})$.

4. A vector space is finite dimensional if it has a finite spanning set; i.e., there exist $n \in \mathbb{N}$ and $\{x_1, \ldots, x_n\}$ in $V$ such that $V = \mathrm{span}(\{x_1, \ldots, x_n\})$.
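For column vectors in $\mathbb{R}^m$, membership in a span can be tested numerically by solving a least squares problem for the coefficients and checking the residual. A minimal sketch using NumPy; the helper name `in_span` and the tolerance `1e-10` are choices made for this illustration, not part of the text:

```python
import numpy as np

def in_span(vectors, x, tol=1e-10):
    """True if x is (numerically) a linear combination of the given vectors."""
    X = np.column_stack(vectors)               # vectors become columns of an m-by-n matrix
    c, *_ = np.linalg.lstsq(X, x, rcond=None)  # best coefficients c1, ..., cn
    return np.linalg.norm(X @ c - x) < tol     # zero residual <=> x is in the span

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
print(in_span([v1, v2], v1 + 2 * v2))              # True: a combination by construction
print(in_span([v1, v2], np.array([0.0, 0.0, 1.0])))  # False: outside the plane span{v1, v2}
```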

Example 1.1 (Linear Combinations)

1. Any $x = [x_1, \ldots, x_m]^T$ in $\mathbb{C}^m$ can be written as a linear combination of the unit vectors as $x = x_1 e_1 + x_2 e_2 + \cdots + x_m e_m$. Thus $\mathbb{C}^m = \mathrm{span}(\{e_1, \ldots, e_m\})$ and $\mathbb{C}^m$ is finite dimensional. Similarly $\mathbb{R}^m$ is finite dimensional.

2. Let $\Pi = \cup_n \Pi_n$ be the space of all polynomials. $\Pi$ is a vector space that is not finite dimensional. For suppose $\Pi$ is finite dimensional. Then $\Pi = \mathrm{span}(\{p_1, \ldots, p_m\})$ for some polynomials $p_1, \ldots, p_m$. Let $d$ be an integer such that the degree of $p_j$ is less than $d$ for $j = 1, \ldots, m$. A polynomial of degree $d$ cannot be written as a linear combination of $p_1, \ldots, p_m$, a contradiction.

1.2.1 Linear Independence and Bases

Definition 1.3 (Linear Independence) A set $\mathcal{X} = \{x_1, \ldots, x_n\}$ of nonzero vectors in a vector space is linearly dependent if $0$ can be written as a nontrivial linear combination of $x_1, \ldots, x_n$. Otherwise $\mathcal{X}$ is linearly independent.

A set of vectors $\mathcal{X} = \{x_1, \ldots, x_n\}$ is linearly independent if and only if

$c_1 x_1 + \cdots + c_n x_n = 0 \implies c_1 = \cdots = c_n = 0$. (1.3)

Suppose $\{x_1, \ldots, x_n\}$ is linearly independent. Then

1. If $x \in \mathrm{span}(\mathcal{X})$ then the scalars $c_1, \ldots, c_n$ in the representation $x = c_1 x_1 + \cdots + c_n x_n$ are unique.

2. Any nontrivial linear combination of $x_1, \ldots, x_n$ is nonzero.
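For column vectors, condition (1.3) can be checked numerically: the set is linearly independent exactly when the matrix with these vectors as columns has rank equal to their number. A sketch with NumPy (`matrix_rank` decides "zero" using an SVD-based tolerance internally):

```python
import numpy as np

def independent(vectors):
    """True if the given column vectors are linearly independent (rank equals count)."""
    X = np.column_stack(vectors)
    return np.linalg.matrix_rank(X) == len(vectors)

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
print(independent([e1, e2]))           # True
print(independent([e1, e2, e1 + e2]))  # False: the third vector is a nontrivial combination
```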

Lemma 1.1 (Linear Independence and Span) Suppose $v_1, \ldots, v_n$ span a vector space $V$ and that $w_1, \ldots, w_k$ are linearly independent vectors in $V$. Then $k \leq n$.

Proof Suppose $k > n$. Write $w_1$ as a linear combination of elements from the set $X_0 := \{v_1, \ldots, v_n\}$, say $w_1 = c_1 v_1 + \cdots + c_n v_n$. Since $w_1 \neq 0$, not all the $c$'s are equal to zero. Pick a nonzero $c$, say $c_{i_1}$. Then $v_{i_1}$ can be expressed as a linear combination of $w_1$ and the remaining $v$'s, so the set $X_1 := \{w_1, v_1, \ldots, v_{i_1-1}, v_{i_1+1}, \ldots, v_n\}$ must also be a spanning set for $V$. We repeat this for $w_2$ and $X_1$. In the linear combination $w_2 = d_{i_1} w_1 + \sum_{j \neq i_1} d_j v_j$ we must have $d_{i_2} \neq 0$ for some $i_2$ with $i_2 \neq i_1$, for otherwise $w_2 = d_{i_1} w_1$, contradicting the linear independence of the $w$'s. So the set $X_2$ consisting of the $v$'s with $v_{i_1}$ replaced by $w_1$ and $v_{i_2}$ replaced by $w_2$ is again a spanning set for $V$. Repeating this process $n - 2$ more times we obtain a spanning set $X_n$ where $v_1, \ldots, v_n$ have been replaced by $w_1, \ldots, w_n$. Since $k > n$ we can then write $w_k$ as a linear combination of $w_1, \ldots, w_n$, contradicting the linear independence of the $w$'s. We conclude that $k \leq n$.

Definition 1.4 (Basis) A finite set of vectors $\{v_1, \ldots, v_n\}$ in a vector space $V$ is a basis for $V$ if

1. $\mathrm{span}\{v_1, \ldots, v_n\} = V$.

2. $\{v_1, \ldots, v_n\}$ is linearly independent.

Theorem 1.1 (Basis Subset of a Spanning Set) Suppose $V$ is a vector space and that $\{v_1, \ldots, v_n\}$ is a spanning set for $V$. Then we can find a subset $\{v_{i_1}, \ldots, v_{i_k}\}$ that forms a basis for $V$.

Proof If $\{v_1, \ldots, v_n\}$ is linearly dependent we can express one of the $v$'s as a nontrivial linear combination of the remaining $v$'s and drop that $v$ from the spanning set. Continue this process until the remaining $v$'s are linearly independent. They still span the vector space and therefore form a basis.

Corollary 1.1 (Existence of a Basis) A vector space is finite dimensional (cf. Definition 1.2) if and only if it has a basis.

Proof Let $V = \mathrm{span}\{v_1, \ldots, v_n\}$ be a finite dimensional vector space. By Theorem 1.1, $V$ has a basis. Conversely, if $V = \mathrm{span}\{v_1, \ldots, v_n\}$ and $\{v_1, \ldots, v_n\}$ is a basis, then it is by definition a finite spanning set.

Theorem 1.2 (Dimension of a Vector Space) Every basis for a vector space $V$ has the same number of elements. This number is called the dimension of the vector space and denoted $\dim V$.

Proof Suppose $X = \{v_1, \ldots, v_n\}$ and $Y = \{w_1, \ldots, w_k\}$ are two bases for $V$. By Lemma 1.1 we have $k \leq n$. Using the same lemma with $X$ and $Y$ switched we obtain $n \leq k$. We conclude that $n = k$.

The set of unit vectors $\{e_1, \ldots, e_n\}$ forms a basis for both $\mathbb{R}^n$ and $\mathbb{C}^n$.

Theorem 1.3 (Enlarging Vectors to a Basis) Every linearly independent set of vectors $\{v_1, \ldots, v_k\}$ in a finite dimensional vector space $V$ can be enlarged to a basis for $V$.

Proof If $\{v_1, \ldots, v_k\}$ does not span $V$ we can enlarge the set by one vector $v_{k+1}$ which cannot be expressed as a linear combination of $\{v_1, \ldots, v_k\}$. The enlarged set is also linearly independent. Continue this process. Since the space is finite dimensional it must stop after a finite number of steps.
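In $\mathbb{R}^m$ the proof of Theorem 1.3 suggests a concrete procedure: try candidate vectors (here the unit vectors, which span $\mathbb{R}^m$) and keep each one that is not already in the current span, detected by a rank increase. A sketch of this idea; the function name and use of unit vectors as candidates are choices for this illustration:

```python
import numpy as np

def enlarge_to_basis(vectors, m):
    """Extend linearly independent vectors in R^m to a basis, trying unit vectors as candidates."""
    basis = list(vectors)
    for i in range(m):
        e = np.zeros(m)
        e[i] = 1.0                               # candidate unit vector e_{i+1}
        trial = np.column_stack(basis + [e])
        if np.linalg.matrix_rank(trial) == len(basis) + 1:
            basis.append(e)                      # e is not in the current span: keep it
    return basis

v = np.array([1.0, 1.0, 0.0])
B = enlarge_to_basis([v], 3)
print(len(B))  # 3: a full basis for R^3 containing v
```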

1.2.2 Subspaces

Definition 1.5 (Subspace) A nonempty subset $S$ of a real or complex vector space $V$ is called a subspace of $V$ if

(V1) The sum $u + v$ is in $S$ for any $u, v \in S$.

(S1) The scalar multiple $cu$ is in $S$ for any scalar $c$ and any $u \in S$.

Using the operations in $V$, any subspace $S$ of $V$ is a vector space, i.e., all 10 axioms (V1)–(V5) and (S1)–(S5) are satisfied for $S$. In particular, $S$ must contain the zero element of $V$. This follows since the operations of vector addition and scalar multiplication are inherited from $V$.

Example 1.2 (Examples of Subspaces)

1. $\{0\}$, where $0$ is the zero vector, is a subspace, the trivial subspace. The dimension of the trivial subspace is defined to be zero. All other subspaces are nontrivial.

2. $V$ is a subspace of itself.

3. $\mathrm{span}(\mathcal{X})$ is a subspace of $V$ for any $\mathcal{X} = \{x_1, \ldots, x_n\} \subseteq V$. Indeed, it is easy to see that (V1) and (S1) hold.

4. The sum of two subspaces $S$ and $T$ of a vector space $V$ is defined by

$S + T := \{s + t : s \in S \text{ and } t \in T\}$. (1.4)

Clearly (V1) and (S1) hold, and it is a subspace of $V$.

5. The intersection of two subspaces $S$ and $T$ of a vector space $V$ is defined by

$S \cap T := \{x : x \in S \text{ and } x \in T\}$. (1.5)

It is a subspace of $V$.

6. The union of two subspaces $S$ and $T$ of a vector space $V$ is defined by

$S \cup T := \{x : x \in S \text{ or } x \in T\}$. (1.6)

In general it is not a subspace of $V$.

7. A sum of two subspaces $S$ and $T$ of a vector space $V$ is called a direct sum and denoted $S \oplus T$ if $S \cap T = \{0\}$.

Theorem 1.4 (Dimension Formula for Sums of Subspaces) Let $S$ and $T$ be two finite dimensional subspaces of a vector space $V$. Then

$\dim(S + T) = \dim(S) + \dim(T) - \dim(S \cap T)$. (1.7)

In particular, for a direct sum

$\dim(S \oplus T) = \dim(S) + \dim(T)$. (1.8)

Proof Let $\{u_1, \ldots, u_p\}$ be a basis for $S \cap T$, where $\{u_1, \ldots, u_p\} = \emptyset$, the empty set, in the case $S \cap T = \{0\}$. We use Theorem 1.3 to extend $\{u_1, \ldots, u_p\}$ to a basis $\{u_1, \ldots, u_p, s_1, \ldots, s_q\}$ for $S$ and a basis $\{u_1, \ldots, u_p, t_1, \ldots, t_r\}$ for $T$. Every $x \in S + T$ can be written as a linear combination of

$\{u_1, \ldots, u_p, s_1, \ldots, s_q, t_1, \ldots, t_r\}$,

so these vectors span $S + T$. We show that they are linearly independent and hence a basis. Suppose $u + s + t = 0$, where $u := \sum_{j=1}^p \alpha_j u_j$, $s := \sum_{j=1}^q \rho_j s_j$, and $t := \sum_{j=1}^r \sigma_j t_j$. Now $s = -(u + t)$ belongs to both $S$ and $T$ and hence $s \in S \cap T$. Therefore $s$ can be written as a linear combination of $u_1, \ldots, u_p$, say $s := \sum_{j=1}^p \beta_j u_j$. But then $0 = \sum_{j=1}^p \beta_j u_j - \sum_{j=1}^q \rho_j s_j$, and since $\{u_1, \ldots, u_p, s_1, \ldots, s_q\}$ is linearly independent we must have $\beta_1 = \cdots = \beta_p = \rho_1 = \cdots = \rho_q = 0$ and hence $s = 0$. We then have $u + t = 0$, and by linear independence of $\{u_1, \ldots, u_p, t_1, \ldots, t_r\}$ we obtain $\alpha_1 = \cdots = \alpha_p = \sigma_1 = \cdots = \sigma_r = 0$. We have shown that the vectors $\{u_1, \ldots, u_p, s_1, \ldots, s_q, t_1, \ldots, t_r\}$ constitute a basis for $S + T$. But then

$\dim(S + T) = p + q + r = (p + q) + (p + r) - p = \dim(S) + \dim(T) - \dim(S \cap T)$

and (1.7) follows. Equation (1.7) implies (1.8) since $\dim\{0\} = 0$.
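For subspaces of $\mathbb{R}^m$ given by spanning matrices, the quantities in (1.7) are easy to compute: $\dim S$ is the rank of a spanning matrix, $\dim(S + T)$ is the rank of the two matrices placed side by side, and rearranging (1.7) then gives the dimension of the intersection. A sketch; the two specific planes below are hypothetical examples chosen for illustration:

```python
import numpy as np

# S = span of the columns of BS, T = span of the columns of BT, subspaces of R^3
BS = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # the xy-plane
BT = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # the xz-plane

dim_S = np.linalg.matrix_rank(BS)
dim_T = np.linalg.matrix_rank(BT)
dim_sum = np.linalg.matrix_rank(np.hstack([BS, BT]))  # dim(S + T): rank of the combined columns
dim_cap = dim_S + dim_T - dim_sum                     # dim(S ∩ T), rearranging (1.7)
print(dim_S, dim_T, dim_sum, dim_cap)  # 2 2 3 1: the two planes meet in the x-axis
```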

It is convenient to introduce a matrix transforming a basis in a subspace into a basis for the space itself.

Lemma 1.2 (Change of Basis Matrix) Suppose $S$ is a subspace of a finite dimensional vector space $V$, and let $\{s_1, \ldots, s_n\}$ be a basis for $S$ and $\{v_1, \ldots, v_m\}$ a basis for $V$. Then each $s_j$ can be expressed as a linear combination of $v_1, \ldots, v_m$, say

$s_j = \sum_{i=1}^m a_{ij} v_i$ for $j = 1, \ldots, n$. (1.9)

If $x \in S$ then $x = \sum_{j=1}^n c_j s_j = \sum_{i=1}^m b_i v_i$ for some coefficients $b := [b_1, \ldots, b_m]^T$, $c := [c_1, \ldots, c_n]^T$. Moreover $b = Ac$, where $A = [a_{ij}] \in \mathbb{C}^{m \times n}$ is given by (1.9). The matrix $A$ has linearly independent columns.

Proof Equation (1.9) holds for some $a_{ij}$ since $s_j \in V$ and $\{v_1, \ldots, v_m\}$ spans $V$. Since $\{s_1, \ldots, s_n\}$ is a basis for $S$ and $\{v_1, \ldots, v_m\}$ a basis for $V$, every $x \in S$ can be written $x = \sum_{j=1}^n c_j s_j = \sum_{i=1}^m b_i v_i$ for some scalars $(c_j)$ and $(b_i)$. But then

$\sum_{i=1}^m b_i v_i = x = \sum_{j=1}^n c_j s_j \overset{(1.9)}{=} \sum_{j=1}^n c_j \sum_{i=1}^m a_{ij} v_i = \sum_{i=1}^m \Big( \sum_{j=1}^n a_{ij} c_j \Big) v_i$.

Since $\{v_1, \ldots, v_m\}$ is linearly independent it follows that $b_i = \sum_{j=1}^n a_{ij} c_j$ for $i = 1, \ldots, m$, or $b = Ac$. Finally, to show that $A$ has linearly independent columns, suppose $b := Ac = 0$ for some $c = [c_1, \ldots, c_n]^T$. Define $x := \sum_{j=1}^n c_j s_j$. Then $x = \sum_{i=1}^m b_i v_i$, and since $b = 0$ we have $x = 0$. But since $\{s_1, \ldots, s_n\}$ is linearly independent it follows that $c = 0$.

The matrixAin Lemma1.2is called achange of basis matrix.
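When $V = \mathbb{R}^m$ and the basis $v_1, \ldots, v_m$ forms the columns of a matrix $M$, each column $a_j$ of the change of basis matrix in (1.9) solves $M a_j = s_j$. A minimal sketch verifying $b = Ac$; the particular matrices below are made up for illustration:

```python
import numpy as np

M = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])  # columns v1, v2, v3: a basis for R^3
S = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])       # columns s1, s2: a basis for a subspace S

A = np.linalg.solve(M, S)        # column j solves M a_j = s_j, so S = M A  (A is 3-by-2)
c = np.array([2.0, -1.0])        # coordinates of some x in the basis s1, s2
x = S @ c                        # the vector x itself
b = A @ c                        # its coordinates in the basis v1, v2, v3
print(np.allclose(M @ b, x))     # True: b = A c represents the same vector x
```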

1.2.3 The Vector Spaces $\mathbb{R}^n$ and $\mathbb{C}^n$

When $V = \mathbb{R}^m$ or $\mathbb{C}^m$ we can think of $n$ vectors in $V$, say $x_1, \ldots, x_n$, as a set $\mathcal{X} := \{x_1, \ldots, x_n\}$ or as the columns of an $m \times n$ matrix $X = [x_1, \ldots, x_n]$. A linear combination can then be written as a matrix times a vector, $Xc$, where $c = [c_1, \ldots, c_n]^T$ is the vector of scalars. Thus

$\mathcal{R}(X) := \{Xc : c \in \mathbb{R}^n\} = \mathrm{span}(\mathcal{X})$.
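The identity $Xc = c_1 x_1 + \cdots + c_n x_n$ is just the column-wise reading of matrix-vector multiplication, which a one-line NumPy check confirms:

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([0.0, 1.0, 0.0])
X = np.column_stack([x1, x2])  # the 3-by-2 matrix with columns x1, x2
c = np.array([2.0, -1.0])

lhs = X @ c                    # matrix times vector
rhs = 2.0 * x1 - 1.0 * x2      # the same linear combination written out
print(np.allclose(lhs, rhs))   # True
```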

Definition 1.6 (Column Space, Null Space, Inner Product and Norm) Associated with an $m \times n$ matrix $X = [x_1, \ldots, x_n]$, where $x_j \in V$, $j = 1, \ldots, n$, are the following subspaces of $V$.

1. The subspace $\mathcal{R}(X)$ is called the column space of $X$. It is the smallest subspace containing $\mathcal{X} = \{x_1, \ldots, x_n\}$. The dimension of $\mathcal{R}(X)$ is called the rank of $X$. The matrix $X$ has rank $n$ if and only if it has linearly independent columns.

2. $\mathcal{R}(X^T)$ is called the row space of $X$. It is generated by the rows of $X$ written as column vectors.

3. The subspace $\mathcal{N}(X) := \{y \in \mathbb{R}^n : Xy = 0\}$ is called the null space or kernel of $X$. The dimension of $\mathcal{N}(X)$ is called the nullity of $X$ and denoted $\mathrm{null}(X)$.

4. The standard inner product is

$\langle x, y \rangle := y^* x = x^T \overline{y} = \sum_{j=1}^n x_j \overline{y}_j$. (1.10)

5. The Euclidean norm is defined by

$\|x\|_2 := \Big( \sum_{j=1}^n |x_j|^2 \Big)^{1/2} = \sqrt{x^* x}$. (1.11)

Clearly $\mathcal{N}(X)$ is nontrivial if and only if $X$ has linearly dependent columns. Inner products and norms are treated in more generality in Chaps. 5 and 8.
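With the convention of (1.10), $\langle x, y \rangle = y^* x$ conjugates the second argument; in NumPy this is `np.vdot(y, x)`, since `vdot` conjugates its first argument, and (1.11) is `np.linalg.norm`. A small sketch for complex vectors:

```python
import numpy as np

x = np.array([1.0 + 1.0j, 2.0])
y = np.array([1.0j, 1.0])

inner = np.vdot(y, x)       # <x, y> = y* x = sum_j x_j * conj(y_j)
norm_x = np.linalg.norm(x)  # ||x||_2 = sqrt(x* x)
print(inner)                # (3-1j)
print(np.isclose(norm_x**2, np.vdot(x, x).real))  # True: ||x||_2^2 = <x, x>
```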

The following theorem is shown in any basic course in linear algebra. See Exercise 7.10 for a simple proof using the singular value decomposition.

Theorem 1.5 (Counting Dimensions of Fundamental Subspaces) Suppose $X \in \mathbb{C}^{m \times n}$. Then

1. $\mathrm{rank}(X) = \mathrm{rank}(X^*)$.

2. $\mathrm{rank}(X) + \mathrm{null}(X) = n$.

3. $\mathrm{rank}(X^*) + \mathrm{null}(X^*) = m$.
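The counts in Theorem 1.5 are easy to observe numerically: the rank via `matrix_rank`, and the nullity as the number of columns not accounted for by nonzero singular values. A sketch for a small rank-deficient example (the tolerance `1e-12` is an arbitrary choice for this illustration):

```python
import numpy as np

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # a 2-by-3 matrix of rank 2
m, n = X.shape

r = np.linalg.matrix_rank(X)
rH = np.linalg.matrix_rank(X.conj().T)  # rank of the conjugate transpose X*
s = np.linalg.svd(X, compute_uv=False)  # the min(m, n) singular values
nullity = n - np.sum(s > 1e-12)         # null(X) = n minus the number of nonzero singular values
print(r == rH, r + nullity == n)        # True True
```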
