
Linear Algebra and Geometry




DOCUMENT INFORMATION

Basic information

Title: Linear Algebra and Geometry
Authors: K.T. Leung, Doris L.C. Chen
Institution: Hong Kong University
Field: Mathematics
Document type: Book
Year of publication: 1974
City: Hong Kong
Number of pages: 320
File size: 4.41 MB


Structure

  • Chapter I LINEAR SPACE
    • §1 General Properties of Linear Space: A. Abelian groups B. Linear spaces C. Examples D. Exercises
    • §2 Finite-Dimensional Linear Space: A. Linear combinations B. Base C. Linear independence
    • §3: A. Existence of base B. Dimension C. Exercises
    • §4 Subspace: A. General properties B. Operations on subspaces C. Direct sum D. Quotient space E. Exercises
  • Chapter II LINEAR TRANSFORMATIONS
    • §5: A. Linear transformation and examples B. Composition C. Isomorphism D. Kernel and image E. Factorization F. Exercises
    • §6 The Linear Space Hom(X, Y): A. The algebraic structure of Hom(X, Y) B. …
    • §7: A. General properties of dual space B. Dual transformations …
  • Chapter III AFFINE GEOMETRY
    • A. Points and vectors B. Barycentre C. Linear varieties …
    • A. General properties B. The category of affine spaces
  • Chapter IV PROJECTIVE GEOMETRY
    • §11 Projective Space: A. Points at infinity B. Definition of projective space … duality I. Exercises
    • §12 Mappings of Projective Spaces: A. Projective isomorphism B. Projectivities C. Semi-…
  • Chapter V MATRICES
    • §13: A. Notations B. Addition and scalar multiplication of matrices …
    • §14: A. Matrix of a linear transformation B. Square matrices C. Change of bases D. Exercises
    • §15 Systems of Linear Equations: A. The rank of a matrix B. The solutions of a system …
  • Chapter VI MULTILINEAR FORMS
  • Chapter VII EIGENVALUES
    • A. Definitions B. Euclidean algorithm C. Greatest common divisor …
    • A. Invariant subspaces B. Eigenvectors and eigenvalues C. Characteristic polynomials D. Diagonalizable endomorphisms …
    • A. Triangular form B. Hamilton-Cayley theorem C. Canonical decomposition D. Nilpotent endomorphisms …
  • Chapter VIII INNER PRODUCT SPACES
    • §21: A. Inner product and norm B. Orthogonality C. Schwarz's inequality D. Normed linear space E. Exercises
    • §22 Linear Transformations of Euclidean Spaces: A. The conjugate isomorphism B. The adjoint transformation …
    • §23: A. Orthogonality B. The conjugate isomorphism C. The adjoint D. Self-adjoint transformations E. Isometries …

Content

Chapter I LINEAR SPACE

§1 General Properties of Linear Space

A. Abelian groups B. Linear spaces C. Examples D. Exercises

Understanding the operations of addition and scalar multiplication is essential, as they follow specific rules that assign sums and products to pairs of objects. These operations must adhere to certain requirements known as axioms. To articulate these concepts more clearly, we utilize the framework of set theory.

In a set A, an internal composition law is defined as a mapping r: A × A → A. For any pair of elements (a, b) in A × A, the result of this mapping, r(a, b), is referred to as the composite of a and b, typically written a r b.

The addition of vectors, of forces, and of solutions exemplifies internal composition laws. An external composition law in A, by contrast, involves elements of another set: it is a mapping from B × A to A for some set B.

In abstract algebra, an algebraic structure is defined on a set A when A is equipped with one or more composition laws, internal or external, that adhere to specific axioms. These axioms, such as commutativity and associativity, are not arbitrary; they are established properties found in various applications. The scalar multiplications of vectors and of forces, which map pairs of B × A to A, exemplify external composition laws. Abstract algebra is thus the mathematical framework for studying these structures and their properties.

With this in mind, we introduce the algebraic structure of the abelian group and study the properties thereof.

DEFINITION 1.1 Let A be a set. An internal composition law r: A × A → A is said to define an algebraic structure of abelian group on A if and only if the following axioms are satisfied.

[G1] For any elements a, b and c of A, (a r b) r c = a r (b r c).

[G2] For any elements a and b of A, a r b = b r a.

[G3] There is an element 0 in A such that 0 r a = a for every element a of A.

[G4] Let 0 be a fixed element of A satisfying [G3]. Then for every element a of A there is an element -a of A such that (-a) r a = 0.

In this case, the ordered pair (A, r) is called an abelian group.

It follows from axiom [G3] that if (A, r) is an abelian group, then A is non-empty: it contains at least the element 0.

A non-empty set A can carry several abelian group structures, represented as (A, r1) and (A, r2), each with a different internal composition law. It is incorrect to define an abelian group solely as a non-empty set with an internal composition law satisfying specific axioms, since such a law exists for every non-empty set, and multiple such laws exist if the set is not a singleton. Therefore, it is essential to differentiate between the set A and the abelian group (A, r). For simplicity, when there is no ambiguity, we may refer to the abelian group (A, r) simply as A, with A denoting the underlying set without its algebraic structure. In this context, a subset or element of the abelian group A refers to a subset or element of the set A itself.

An abelian group is exemplified by the set of all integers, Z, combined with standard integer addition. The axioms governing this group, labeled [G1] to [G4], reflect familiar properties of basic arithmetic. Additionally, many established arithmetic properties of integers correspond to concepts in the abstract framework of abelian groups. In this section, we will utilize the integer group Z as a fundamental model to explore the general characteristics of abelian groups.
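As a concrete check, the axioms instantiate for (Z, +) as follows:

\begin{align*}
\text{[G1]}\quad & (a + b) + c = a + (b + c) && \text{associativity of integer addition,}\\
\text{[G2]}\quad & a + b = b + a && \text{commutativity,}\\
\text{[G3]}\quad & 0 + a = a && \text{the integer 0 is a neutral element,}\\
\text{[G4]}\quad & (-a) + a = 0 && \text{every integer } a \text{ has the additive inverse } -a .
\end{align*}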

For convenience of formulation, we shall use the following notations and abbreviations.

(i) The internal composition law r of (A, r) is referred to as the addition of the abelian group A.

(ii) In an abelian group A, the composite a r b is written as the sum a + b of the elements a and b, which are referred to as the summands. While the specific notation is not critical to the underlying theory, carefully chosen symbols can streamline complex formulations and enhance calculation efficiency.

(iii) A neutral element of the abelian group A is an element 0 of A satisfying [G3] above, i.e., 0 + a = a for every a ∈ A.

(iv) For any element a of the abelian group A, an additive inverse of a is an element -a of A satisfying [G4] above, i.e., a + (-a) = 0.

(v) As a consequence of the notations chosen above, the abelian group A is called an additive abelian group or simply an additive group.

In this section, we will explore the fundamental properties of additive groups, highlighting their significance within our theoretical framework. We will present these properties as theorems, relying solely on the axioms of the definition and previously established properties. Consequently, all characteristics of additive groups arise purely from their definitions.

THEOREM 1.2 For any two elements a and b of an additive group A, there is one and only one element x of A such that a + x = b.

To prove the existence and uniqueness of an element \( x \) in \( A \) such that \( a + x = b \), we first assume there are two elements \( x \) and \( x' \) in \( A \) satisfying this equation. By manipulating the equations, we find that \( x = x' \), demonstrating uniqueness. For existence, we verify that the element \( -a + b \) of \( A \) meets the required condition, since substituting it into the equation yields \( a + (-a + b) = b \). Thus, we have established both the existence and uniqueness of \( x \).
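The manipulations alluded to here can be written out in full, using only [G1] to [G4]:

\begin{align*}
\text{Uniqueness:}\quad x &= 0 + x = ((-a) + a) + x = (-a) + (a + x) = (-a) + b\\
 &= (-a) + (a + x') = ((-a) + a) + x' = 0 + x' = x' .\\
\text{Existence:}\quad a + ((-a) + b) &= (a + (-a)) + b = 0 + b = b .
\end{align*}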

In an additive group A, for any elements a and b, the equation a + x = b has a unique solution x, written b - a and referred to as the difference between b and a. This notation leads to the conclusions that a - a = 0 and 0 - a = -a for all elements a in A.

COROLLARY 1.3 In an additive group, there is exactly one neutral element and for each element a there is exactly one additive inverse -a of a.

Here is another interesting consequence of 1.2 (or 1.3). In an additive group A, for each element x of A, x = 0 if and only if a + x = a for some a of A. In particular, we get -0 = 0.

The axioms [G1] and [G2], known as the associative law and the commutative law of addition, highlight key properties of addition in an additive group. The associative law states that for the sum of three elements, (a + b) + c = a + (b + c); the grouping does not affect the final result, allowing us to write simply a + b + c. This principle extends to any number of elements, such as a + b + c + d, so that addition is consistent regardless of how the elements are grouped.

Given elements a1, a2, …, aN of an additive group A, then for any positive integer n such that 0 < n < N, we have the recursive definition \( a_1 + a_2 + \cdots + a_n + a_{n+1} = (a_1 + a_2 + \cdots + a_n) + a_{n+1} \).

The associative law [G1] can be generalized into

[G1'] \( (a_1 + a_2 + \cdots + a_m) + (a_{m+1} + a_{m+2} + \cdots + a_{m+n}) = a_1 + a_2 + \cdots + a_{m+n} . \)

The proof of [G1'] is carried out by induction on the number n. For n = 1, [G1'] follows from the recursive definition. Under the induction assumption that [G1'] holds for an n ≥ 1, we get

\begin{align*}
(a_1 + \cdots + a_m) + (a_{m+1} + \cdots + a_{m+n+1}) &= (a_1 + \cdots + a_m) + [(a_{m+1} + \cdots + a_{m+n}) + a_{m+n+1}]\\
&= [(a_1 + \cdots + a_m) + (a_{m+1} + \cdots + a_{m+n})] + a_{m+n+1}\\
&= (a_1 + \cdots + a_{m+n}) + a_{m+n+1} = a_1 + \cdots + a_{m+n+1} .
\end{align*}

This establishes the generalized associative law [G1'] of addition.
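For instance, with m = n = 2, [G1'] gives the spelled-out instance

\[
(a_1 + a_2) + (a_3 + a_4) = a_1 + a_2 + a_3 + a_4 ,
\]

so any two ways of bracketing four summands yield the same element.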

A simple consequence of the generalized associative law is that we can now write a multiple sum a + ⋯ + a (n summands) of an element a of an additive group A as na.

The commutative law [G2] of addition means that the sum a + b is independent of the order in which the summands a and b appear.

In other words, we can permute the summands in a sum without changing it. Generalizing this, we get

[G2'] For any permutation (φ(1), φ(2), …, φ(n)) of the n-tuple (1, 2, …, n), \( a_{\varphi(1)} + a_{\varphi(2)} + \cdots + a_{\varphi(n)} = a_1 + a_2 + \cdots + a_n \).

Here a permutation is a bijective mapping φ of the set (1, 2, …, n) onto itself. The proof of [G2'] is by induction on n. For n = 1 the statement holds trivially. Assuming its validity for n - 1 ≥ 1, we choose k such that φ(k) = n. By deleting the number φ(k) from the sequence (φ(1), …, φ(k), …, φ(n)) we obtain a permutation of the (n-1)-tuple (1, …, n-1). Using the induction assumption, we get

\( a_{\varphi(1)} + \cdots + \widehat{a_{\varphi(k)}} + \cdots + a_{\varphi(n)} = a_1 + \cdots + a_{n-1} , \)

where the summand under \( \widehat{\ \ } \) is deleted. Now

\( a_{\varphi(1)} + \cdots + a_{\varphi(n)} = ( a_{\varphi(1)} + \cdots + \widehat{a_{\varphi(k)}} + \cdots + a_{\varphi(n)} ) + a_{\varphi(k)} = (a_1 + \cdots + a_{n-1}) + a_n = a_1 + \cdots + a_n . \)

The generalized commutative law [G2'], combined with [G1'], confirms that the sum of a finite number of elements of an additive group does not depend on the placement of brackets or on the order of the summands.

The summation sign Σ is a convenient notation: the sum a1 + a2 + ⋯ + an is written as Σai or Σ(ai : i = 1, …, n). When the range of summation is evident from the context, we write simply Σai. The individual elements ai (i = 1, …, n) are referred to as the summands of the sum Σai.

By employing this notation, we simplify the handling of double summations. For each i = 1, …, m and j = 1, …, n, let \( a_{ij} \) be an element of an additive group A. These mn group elements can be organized into a rectangular array:

\( \begin{matrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & & & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{matrix} \)
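The point of this array is that, by [G1'] and [G2'], the sum of all mn entries may be formed row by row or column by column with the same result; in symbols,

\[
\sum_{i=1}^{m} \Bigl( \sum_{j=1}^{n} a_{ij} \Bigr) \; = \; \sum_{j=1}^{n} \Bigl( \sum_{i=1}^{m} a_{ij} \Bigr) .
\]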

§2 Finite-Dimensional Linear Space

A. Linear combinations B. Base C. Linear independence

In the linear space V2 of vectors in the Euclidean plane E, we begin by selecting a non-zero vector a issuing from a point O. Applying addition and scalar multiplication to this single vector, we can generate other vectors of V2, all of the form λa with λ a real number. The endpoints of these vectors lie on the single straight line in E through O determined by a; these are precisely the vectors that can be "reached" by this "algebraic mechanism" from the single vector a, and the remaining vectors of V2 cannot be so reached. Now take two vectors a and b, both non-zero and non-collinear, meaning that their endpoints and the common initial point O do not lie on one line. Repeated applications of addition and scalar multiplication to a and b yield vectors of the form λa + μb. Since every vector of V2 can be expressed in the form λa + μb, the vectors a and b can be considered a pair of "key vectors" for the linear space V2.

Let us now clarify our position by giving precise definitions to those terms between inverted commas. Let X be a linear space over A, x and y two vectors of X, and λ and μ two scalars of A. Then the scalar multiplication of X allows us to form in X the multiples λx and μy of x and y respectively; and the addition of X allows us to form in X the sum λx + μy. We call the vector λx + μy of X a linear combination of the vectors x and y.

The concept of linear combination is fundamental in linear space theory, as it encompasses the two essential operations, sum and product, of this algebraic structure. A linear combination of vectors x and y is a vector of the form λx + μy; choosing particular values of λ and μ shows that the vectors 0, x, y, the sum x + y and the multiples λx and μy are all linear combinations of x and y. Extending this idea, for scalars λ1, …, λn and vectors x1, …, xn we may form the linear combination λ1x1 + ⋯ + λnxn. More generally, if (λ_i), i ∈ I, is a family of scalars with finite support and (x_i), i ∈ I, a family of vectors, then the family (λ_i x_i) also has finite support, and the sum Σλ_i x_i is a vector of the linear space, called a linear combination of the family of vectors (x_i).
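As a concrete numerical illustration, in the real linear space R² we have

\[
2\,(1,2) + 3\,(1,-1) = (2,4) + (3,-3) = (5,1),
\]

so (5, 1) is a linear combination of the vectors (1, 2) and (1, -1) with scalars λ1 = 2 and λ2 = 3.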

If a vector x is a linear combination of a family of vectors (x_i), i ∈ I, and each x_i is itself a linear combination of a family of vectors (y_j), j ∈ J, then x is also a linear combination of the vectors of the family (y_j).

In the last section, we have given a precise definition to the "algebraic mechanism" mentioned in the introductory remarks. In this section, we try to do the same to the "key vectors" through the concept of base.

In a linear space X there may be families of vectors such that every vector of X is a linear combination of vectors of the family. This seemingly simple situation gives rise to significant definitions and inquiries in the theory of linear spaces.

DEFINITION 2.1 Let X be a linear space and (x_i), i ∈ I, a family of vectors of X. The family (x_i) is said to generate the linear space X if and only if each vector x of X is a linear combination of the vectors of the family. In this case (x_i) is called a family of generators of the linear space X.

The empty family generates the zero space, while the family of all non-zero vectors of X generates the entire space. It is possible to thin out the latter family considerably while still retaining the ability to generate the space: for instance, we may eliminate all scalar multiples of a fixed non-zero vector and subsequently remove redundant vectors from the remaining set, approaching a minimal generating family. Such a minimal family offers significant advantages in efficiency and clarity of representation.

DEFINITION 2.2 A base of a linear space X is a family (x_i), i ∈ I, of generators of X such that no proper subfamily of (x_i) generates X.

An important question arises: does every linear space have a base? In §3A, we will demonstrate that a base does indeed exist for every linear space, answering this question affirmatively. For now, let us examine some specific cases.

In the real linear space V2 of 1.7, any two non-zero vectors (O, A) and (O, B), where the points O, A and B are not collinear, form a base of V2.

In the real (complex) linear space R^n (C^n), the family (e_i), i = 1, …, n, of vectors of R^n (C^n), where e1 = (1, 0, …, 0), e2 = (0, 1, 0, …, 0), …, en = (0, …, 0, 1), forms a base of R^n (C^n), called the canonical base of R^n (C^n).
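In the canonical base every vector expands coordinate-wise; for example, in R³,

\[
(5, -2, 7) = 5\,e_1 - 2\,e_2 + 7\,e_3 .
\]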

For the real linear space RC^n we find that the family a1 = (1, 0, …, 0), a2 = (0, 1, 0, …, 0), …, an = (0, …, 0, 1), b1 = (i, 0, …, 0), b2 = (0, i, 0, …, 0), …, bn = (0, …, 0, i) of vectors of RC^n forms a base of RC^n.

The family (p_k), k = 0, 1, …, of polynomials, where p_k = T^k, forms an infinite but countable base of the linear space R[T].

The free linear space generated by a set S over A admits the family (f_s), s ∈ S, as a base.

In this section and the upcoming ones, we will explore the properties of bases in a linear space to lay the groundwork for proving the theorem regarding the existence of a basis.

In a linear space X, consider a finite subfamily (x1, …, xn) of a base B. None of these vectors can be a linear combination of the others; otherwise removing it from B would leave a proper subfamily of B that still generates X, contradicting the definition of a base. This essential characteristic of base vectors is encapsulated in the following theorem.

THEOREM 2.3 Let (x1, …, xn) be a family of n (n > 0) vectors of a linear space X over A. Then the following statements are equivalent:

(i) none of the vectors x1, …, xn is a linear combination of the others.

(ii) if, for any scalars λ1, …, λn of A, λ1x1 + ⋯ + λnxn = 0, then λ1 = λ2 = ⋯ = λn = 0.

PROOF. (ii) follows from (i): Assume that λ_j ≠ 0 and λ1x1 + ⋯ + λnxn = 0. Then we would get

\( x_j = -\lambda_j^{-1}\,( \lambda_1 x_1 + \cdots + \widehat{\lambda_j x_j} + \cdots + \lambda_n x_n ) , \)

where the summand on the right-hand side under the symbol \( \widehat{\ \ } \) is deleted. This would mean that the vector x_j is a linear combination of the others, contradicting (i).

(i) follows from (ii): Assume that x_j is a linear combination of the other vectors. Then we would get

\( x_j = \lambda_1 x_1 + \cdots + \widehat{\lambda_j x_j} + \cdots + \lambda_n x_n , \)

where the summand under \( \widehat{\ \ } \) is deleted. Therefore \( \lambda_1 x_1 + \cdots + \lambda_n x_n = 0 \) where λ_j = -1 ≠ 0, contradicting (ii).

DEFINITION 2.4a A finite family (x1, …, xn) of vectors of a linear space X over A is said to be linearly independent if and only if it satisfies the conditions of 2.3; otherwise it is said to be linearly dependent.

The empty family, every finite subfamily of a base of X, and every subfamily of a linearly independent family are linearly independent. A one-vector family (x) is linearly independent if and only if the vector x is non-zero. For a pair of vectors (x, y) to be linearly independent, both must be non-zero and x must not be a multiple of y. Moreover, in a linearly independent family no two vectors are equal, i.e., x_i ≠ x_j for i ≠ j; therefore, if the family (x1, …, xn) is linearly independent, then x1, …, xn are n distinct vectors and {x1, …, xn} is a set of n linearly independent vectors. Essentially, a finite family of linearly independent vectors amounts to the same thing as a finite set of linearly independent vectors.

A necessary and sufficient condition for a family (y1, …, ym) of vectors to be linearly dependent is that there exist scalars λ1, …, λm, not all zero, such that λ1y1 + ⋯ + λmym = 0. This dependence is evident if any vector of the family is zero or if two vectors of the family are equal. Importantly, a linearly dependent family may give rise to an independent set: the family (b, b) is dependent while the set {b} is independent when b ≠ 0. On the other hand, both the family (0, b) and the set {0, b} are linearly dependent. This highlights the need to differentiate between a linearly dependent family of vectors and the corresponding set of vectors.
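A small check in R² makes the distinction concrete: the family ((1,2), (2,4)) is linearly dependent, while the family ((1,2), (1,-1)) is linearly independent, since

\begin{align*}
& 2\,(1,2) + (-1)\,(2,4) = (0,0) \quad \text{is a non-trivial relation, while}\\
& \lambda_1\,(1,2) + \lambda_2\,(1,-1) = (0,0) \implies \lambda_1 + \lambda_2 = 0 ,\; 2\lambda_1 - \lambda_2 = 0 \implies \lambda_1 = \lambda_2 = 0 .
\end{align*}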

THEOREM 2.5a Let (x1, …, xn) be a linearly independent family of vectors of a linear space X. Then

\( \lambda_1 x_1 + \cdots + \lambda_n x_n = \mu_1 x_1 + \cdots + \mu_n x_n \)

if and only if λ_i = μ_i for all i = 1, …, n.

PROOF. If λ1x1 + ⋯ + λnxn = μ1x1 + ⋯ + μnxn, then we get

\( (\lambda_1 - \mu_1)x_1 + \cdots + (\lambda_n - \mu_n)x_n = 0 . \)

Since the family is linearly independent, we get λ_i - μ_i = 0, and therefore λ_i = μ_i, for all i = 1, …, n. The converse is trivial.

§3

A. Existence of base B. Dimension C. Exercises

In this section, we will demonstrate that every linear space possesses a base, utilizing Zorn's lemma as a crucial component of the proof. Zorn's lemma, an important axiom of set theory, serves as a powerful and effective tool for managing infinite sets.

In set theory, let A be a set and C a non-empty collection of subsets of A. A subcollection W of C is called a chain if for any members D and E of W, either D ⊂ E or E ⊂ D. An upper bound of a chain W in C is a member U of C such that every member D of W is a subset of U. Furthermore, a member M of C is called a maximal element of C if it is not a proper subset of any other member of C. This framework leads to the formulation of Zorn's lemma, a fundamental principle of set theory.

If every chain W of C has an upper bound in C, then C has a maximal element.

A base of a linear space X may equivalently be characterized as a maximal set of linearly independent vectors of X, and it is in this form that the existence theorem will be proved.

THEOREM 3.1 Every linear space has a base.

PROOF. If X is finite-dimensional, it has a finite base by the results of §2, and there is nothing more to prove. If X is not finite-dimensional, we consider the collection C of all sets of linearly independent vectors of X.

Since X is not finite-dimensional, X ≠ 0. Therefore C is non-empty; for instance, {x} ∈ C for every non-zero vector x of X. We shall show that every chain in C has an upper bound in C. Let W be a chain in C, and consider the union U of all D ∈ W. Since W is a chain, for any n (n ≠ 0) vectors x1, …, xn of U there is a member D of W such that x_i ∈ D for i = 1, …, n. Since D belongs to C, the vectors x1, …, xn are linearly independent. This means that U is a set of linearly independent vectors of X; therefore U ∈ C, and U is an upper bound of the chain W. By Zorn's lemma, C has a maximal element M. M is then a maximal set of linearly independent vectors of X, and hence a base of X.

We can now generalize 2.10 to the supplementation theorem below.

THEOREM 3.2 Let B be a base of a linear space X and S a set of linearly independent vectors of X. Then there exists a subset B' of B such that S ∩ B' = ∅ and S ∪ B' is a base of X.

PROOF. Let C be the collection of those subsets D of S ∪ B such that S ⊂ D and D is linearly independent. As in the proof of 3.1, Zorn's lemma shows that C possesses a maximal element M. This M is a linearly independent set of generators of X, hence a base of X. Consequently, the set B' = M \ S meets the conditions of the theorem.

The general theorem on the invariance of dimensionality in linear spaces cannot be derived directly from the supplementation theorem, as is possible in the finite-dimensional case. To formulate and prove this theorem properly, specific results from set theory are required.

To each set S there is associated a unique set Card(S), called the cardinal number of S, in such a way that for any two sets S and T, Card(S) = Card(T) if and only if S and T are equipotent, i.e., there is a bijective mapping between them. When S is equipotent to a subset of T, or equivalently when there is a surjective mapping of T onto S, we write Card(S) ≤ Card(T). The well-known SCHRÖDER-BERNSTEIN theorem states that if Card(S) ≤ Card(T) and Card(T) ≤ Card(S), then Card(S) = Card(T). In what follows, we shall also make use of the following theorem: if A is an infinite set and P a set of finite subsets of A such that every element x of A belongs to some element S of P, then Card(A) = Card(P).

The general theorem on the invariance of dimensionality is given as follows.

THEOREM 3.3 Let X be a linear space. Then for any two bases B and C of X, Card(B) = Card(C).

PROOF. For a finite-dimensional linear space X the theorem is evidently valid, so we consider the case where X is infinite-dimensional; then both B and C are infinite. We analyze the sets F(B) and F(C) of all finite subsets of B and of C respectively; by the set-theoretic theorem quoted above, Card(F(B)) = Card(B) and Card(F(C)) = Card(C). To each finite subset S of B we now associate a finite subset T_S of C as follows.

Let T_S be the set of all vectors of C that are linear combinations of vectors of S; by 2.10, T_S is finite. Consequently T_S belongs to F(C). A mapping φ from F(B) to F(C) is thus defined by φ(S) = T_S for every S in F(B). Denoting the direct image of F(B) under φ by Φ, we get Card(Φ) ≤ Card(F(B)).

Since B is a base of X, every vector c of C is a linear combination of a finite number of vectors of B. This implies that each vector c of C belongs to some T_S, i.e., to some member of Φ; hence Card(C) ≤ Card(Φ) ≤ Card(F(B)) = Card(B). By symmetry, we also find that Card(B) ≤ Card(C). Consequently, by the SCHRÖDER-BERNSTEIN theorem, we conclude that Card(B) = Card(C).

DEFINITION 3.4 The dimension of a linear space X over A, denoted by dim_A X or simply by dim X, is Card(B), where B is any base of X.

1. Show that in the linear space R[T] of polynomials the families P = (p0, …, pk, …) and Q = (q0, …, qk, …), where pk = T^k and qk = (T - λ)^k, λ ≠ 0 being a constant, are two bases. Express the vectors qk explicitly in terms of the pk.

2. Prove that the linear space F of all real-valued functions defined on the closed interval [a, b], where a ≠ b, is an infinite-dimensional linear space.

3. Let α be any cardinal number. Prove that there is a linear space X over A such that dim X = α.

§4 Subspace

A. General properties B. Operations on subspaces C. Direct sum D. Quotient space E. Exercises

Many linear spaces occur within larger linear spaces: the real linear space R^n is contained in RC^n; similarly, the space V2 of plane vectors is contained in the three-dimensional Euclidean space V3; and in the linear space R[T] of all polynomials with real coefficients in an indeterminate T, the collection of polynomials of degree less than a fixed positive integer n forms an n-dimensional linear space.

A subset Y of a linear space X over A is called a subspace of X if Y is itself a linear space over A under the same addition and scalar multiplication as X. Specifically, Y is a subspace if and only if the sum x + y and the multiple λx belong to Y for every x, y ∈ Y and every scalar λ ∈ A, and the axioms [G1] to [G4] and [M1] to [M3] are satisfied.

According to the definition, if Y is a subspace of a linear space X, then Y must be a non-empty subset of X. In any linear space X, the smallest subspace is the zero subspace 0 = {0}, which contains only the zero vector; conversely, X itself is the largest subspace of X. Furthermore, if the dimension of X is greater than 1, it can be established that there are additional subspaces apart from 0 and X.

THEOREM 4.1 Let X be a linear space over A. Then Y is a subspace of X if and only if Y is a non-empty subset of X and λx + μy belongs to Y for any vectors x, y of Y and any scalars λ, μ of A.

To prove that Y is a subspace of X, we first note that Y is non-empty and contains all linear combinations λx + μy of any two of its vectors. Since the sums x + y and multiples λx of vectors x and y of Y again belong to Y, the axioms [G1], [G2], [M1], [M2] and [M3], being valid in X, are also valid in Y. Furthermore, the zero vector and the negative -x of any vector x of Y are linear combinations of vectors of Y, so the axioms [G3] and [G4] are also satisfied. Thus Y is a subspace of X.

In a linear space X, any subset or family S of vectors determines the set Y of all linear combinations of vectors of S. By 4.1, Y is a subspace of X, called the subspace of X generated or spanned by S. In particular, 0 is the subspace generated by ∅.

The following theorems give information on the dimension of a subspace.

THEOREM 4.3 If Y is a subspace of a linear space X, then dim Y ≤ dim X.

PROOF. By 3.1, X and Y have bases, say B and C respectively. By 3.3 and 3.4, we have dim X = Card(B) and dim Y = Card(C). Since every vector of C is a linear combination of vectors of B, we have Card(C) ≤ Card(B) as in the proof of 3.3. Therefore dim Y ≤ dim X.

For finite-dimensional linear spaces we have a more precise result.

THEOREM 4.4 Let Y be a subspace of a finite-dimensional linear space X. Then dim Y ≤ dim X. Furthermore, dim Y = dim X if and only if Y = X.

The theorem's first part is a special case of 4.3, but we give an independent proof by showing that Y has a finite base. If Y is the zero space, the empty family is a base. If Y contains a non-zero vector x1, the set S1 = {x1} is linearly independent. If S1 generates Y, it is a base; if not, there is a vector x2 of Y that is not a linear combination of vectors of S1, and by the linear independence criterion the set S2 = {x1, x2} is linearly independent. If S2 generates Y, it is a base; otherwise we continue the process, adjoining one vector at a time. The procedure must terminate in at most n = dim X steps, since any n + 1 vectors of X are linearly dependent. Thus Y has a base of at most n vectors, which establishes the first part of the theorem; the second part follows readily.
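As a concrete instance of this stepwise procedure, take X = R³ and let Y be the plane of all vectors (x, y, 0):

\[
S_1 = \{(1,1,0)\}, \qquad (1,0,0) \in Y \setminus \operatorname{span} S_1, \qquad S_2 = \{(1,1,0),\,(1,0,0)\} .
\]

S2 generates Y, so S2 is a base of Y and the process stops after 2 = dim Y ≤ 3 = dim X steps.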

The set L(X) of all subspaces of a linear space X over A is (partially) ordered by inclusion. Thus for any three subspaces Y', Y" and Y"' of X: if Y' ⊃ Y" and Y" ⊃ Y"', then Y' ⊃ Y"'; and Y' = Y" if and only if Y' ⊃ Y" and Y" ⊃ Y'.

We shall introduce two operations in L(X). For any two subspaces Y' and Y" of X, the intersection Y' ∩ Y" = {x ∈ X : x ∈ Y' and x ∈ Y"} is, by 4.1, also a subspace of X. In the sense of inclusion, Y' ∩ Y" is the largest subspace of X that is included in both Y' and Y".

By this, we mean that (i) Y' ⊃ Y' ∩ Y" and Y" ⊃ Y' ∩ Y", and (ii) if Y' ⊃ Z and Y" ⊃ Z for some subspace Z of X, then Y' ∩ Y" ⊃ Z. In contrast to the intersection, the union of two subspaces Y' and Y" is not in general a subspace of X. Instead we consider the subspace generated by the union of Y' and Y", denoted by Y' + Y" and called the sum of Y' and Y"; it is the smallest subspace of X that includes both Y' and Y". By this, we mean that (i) Y' + Y" ⊃ Y' and Y' + Y" ⊃ Y", and (ii) if Z ⊃ Y' and Z ⊃ Y" for some subspace Z of X, then Z ⊃ Y' + Y". It is not difficult to see that

Y' + Y" = {z ∈ X : z = x + y for some x ∈ Y' and y ∈ Y"}.

The following rules hold in L(X) for any three elements Y', Y" and Y"' of L(X).

On the dimensions of the intersection and the sum of subspaces we have the following useful theorem.

THEOREM 4.5 Let Y' and Y" be subspaces of a finite-dimensional linear space X. Then dim Y' + dim Y" = dim(Y' + Y") + dim(Y' ∩ Y").

PROOF. Let the vectors x1, …, xr constitute a base of the intersection Y' ∩ Y". By the supplementation theorem they can be supplemented by vectors y1, …, ys of Y' to a base x1, …, xr, y1, …, ys of Y', and by vectors z1, …, zt of Y" to a base x1, …, xr, z1, …, zt of Y". The theorem is established by showing that the vectors x1, …, xr, y1, …, ys, z1, …, zt form a base of the sum Y' + Y". Clearly these vectors generate the subspace Y' + Y", and therefore it remains to be proved that they are linearly independent. Assume that

(1) λ1x1 + ⋯ + λrxr + μ1y1 + ⋯ + μsys + ν1z1 + ⋯ + νtzt = 0;

then the vector

(2) x = λ1x1 + ⋯ + λrxr + μ1y1 + ⋯ + μsys ∈ Y',

which by (1) equals

(3) x = -ν1z1 - ⋯ - νtzt ∈ Y",

is a vector of Y' ∩ Y". Since the vectors x1, …, xr form a base of Y' ∩ Y", we get from (2) that μ1 = μ2 = ⋯ = μs = 0. Hence (1) becomes

λ1x1 + ⋯ + λrxr + ν1z1 + ⋯ + νtzt = 0.

But the vectors x1, …, xr, z1, …, zt form a base of Y"; therefore λ1 = ⋯ = λr = ν1 = ⋯ = νt = 0. Hence the r + s + t vectors in question are linearly independent.
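A quick numerical check of 4.5: in X = R³ let Y' and Y" be two distinct planes through the origin, so that Y' + Y" = R³ and Y' ∩ Y" is a line. Then

\[
\dim Y' + \dim Y'' = 2 + 2 = 4 = 3 + 1 = \dim(Y' + Y'') + \dim(Y' \cap Y'') .
\]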

We have seen that for any two subspaces Y' and Y" of a linear space X we have Y' + Y" = {z ∈ X : z = x + y for some x ∈ Y' and some y ∈ Y"}.

The representation z = x + y of a vector of the sum by a sum of vectors of the summands is not unique in general. Take, for instance, the case where Y' = Y" = X.

Suppose a vector z of the sum Y' + Y" has two representations z = x + y = x' + y', where x and x' are vectors of Y' and y and y' are vectors of Y" with x ≠ x'. Then the non-zero vector t = x - x' = y' - y belongs both to Y' and to Y", i.e., to the intersection Y' ∩ Y". Conversely, if t is a non-zero vector of Y' ∩ Y", then any vector z = x + y of Y' + Y" has the second representation z = (x + t) + (y - t) with x + t ≠ x. Therefore the condition Y' ∩ Y" = 0 is both necessary and sufficient for each vector of Y' + Y" to be uniquely representable as the sum of a vector of Y' and a vector of Y". This leads to a formal definition.

DEFINITION 4.6 Let Y' and Y" be subspaces of a linear space X. Then the sum Y' + Y" is called a direct sum and is denoted by Y' ⊕ Y" if for every vector x of Y' + Y" there is one and only one vector y' of Y' and one and only one vector y" of Y" such that x = y' + y".

It follows from the discussion above that Y' + Y" is a direct sum if and only if Y' ∩ Y" = 0. Furthermore, for a direct sum Y' ⊕ Y", the union B' ∪ B" of any base B' of Y' and any base B" of Y" is a base of Y' ⊕ Y". Therefore dim(Y' ⊕ Y") = dim Y' + dim Y".
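For example, R² is the direct sum of its two coordinate axes Y' = {(x, 0)} and Y" = {(0, y)}: every vector decomposes uniquely as

\[
(x, y) = (x, 0) + (0, y), \qquad Y' \cap Y'' = 0, \qquad \dim \mathbb{R}^2 = 1 + 1 .
\]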

A straightforward extension of the idea of direct sum to that of more than two subspaces is as follows:

DEFINITION 4.7 Let Y_i (i = 1, …, p) be subspaces of a linear space X. Then the sum Y1 + ⋯ + Yp is called a direct sum and is denoted by Y1 ⊕ ⋯ ⊕ Yp if for every vector x of Y1 + ⋯ + Yp there is one and only one vector y_i of Y_i (i = 1, …, p) such that x = y1 + ⋯ + yp.

Similarly, for Y1 ⊕ ⋯ ⊕ Yp, the union B1 ∪ ⋯ ∪ Bp of bases B_i of Y_i is a base of Y1 ⊕ ⋯ ⊕ Yp. We remark here that the direct sum Y1 ⊕ ⋯ ⊕ Yp requires Y_j ∩ (Y1 + ⋯ + Ŷ_j + ⋯ + Yp) = 0 for j = 1, …, p, where Ŷ_j indicates that the summand Y_j is deleted, and not just Y_i ∩ Y_j = 0 for i ≠ j. In V2, let L1, L2 and L3 be the subspaces generated by the non-zero vectors (O, A1), (O, A2) and (O, A3), where the points O, A_i, A_j are not collinear for i ≠ j. Then the sum L1 + L2 + L3 is clearly not a direct sum, although L_i ∩ L_j = 0 for i ≠ j.
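In coordinates, identify V2 with R² and take L1, L2, L3 to be spanned by (1,0), (0,1) and (1,1) respectively. Then L_i ∩ L_j = 0 for i ≠ j, yet

\[
(1,1) = 1\cdot(1,1) = (1,0) + (0,1) \in L_3 \cap (L_1 + L_2),
\]

so the vector (1,1) of L1 + L2 + L3 admits two distinct decompositions and the sum is not direct.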

The concept of direct sum is important chiefly because for any subspace Y' of a linear space X there is a complementary subspace Y", i.e., one with X = Y' ⊕ Y"; in other words, every subspace can be complemented. Indeed, by the supplementation theorem, if B' is a base of Y', there exists a set B" of linearly independent vectors of X such that B' ∩ B" = ∅ and B' ∪ B" is a base of X. The subspace Y" generated by B" clearly satisfies the required condition X = Y' ⊕ Y". We have therefore proved the following theorem.

THEOREM 4.6 For any subspace Y' of a linear space X, there is a subspace Y" of X, called a complementary subspace of Y' in X, such that X = Y' ⊕ Y".

It follows that if Y" is a complementary subspace of Y' in X, then Y' is a complementary subspace of Y" in X. Furthermore dim X = dim Y' + dim Y"; in particular, when X is finite-dimensional, dim Y" = dim X - dim Y'.

We remark that a complementary subspace is not unique (except in trivial cases). Indeed, from the example above we see that L1 ⊕ L2 = V2 and L1 ⊕ L3 = V2, so that both L2 and L3 are complementary subspaces of L1 in V2.

