Hirschfeld: Finite projective spaces of three dimensions
Edmunds and Evans: Spectral theory and differential operators
Pressley and Segal: Loop groups, paperback
Evens: Cohomology of groups
Hoffman and Humphreys: Projective representations of the symmetric groups: Q-Functions and Shifted Tableaux
Amberg, Franciosi, and de Giovanni: Products of groups
Gurtin: Thermomechanics of evolving phase boundaries in the plane
Faraut and Koranyi: Analysis on symmetric cones
Shawyer and Watson: Borel’s methods of summability
Lancaster and Rodman: Algebraic Riccati equations
Thévenaz: G-algebras and modular representation theory
Baues: Homotopy type and homology
D’Eath: Black holes: gravitational interactions
Lowen: Approach spaces: the missing link in the topology–uniformity–metric triad
Cong: Topological dynamics of random dynamical systems
Donaldson and Kronheimer: The geometry of four-manifolds, paperback
Woodhouse: Geometric quantization, second edition, paperback
Hirschfeld: Projective geometries over finite fields, second edition
Evans and Kawahigashi: Quantum symmetries of operator algebras
Klingen: Arithmetical similarities: Prime decomposition and finite group theory
Matsuzaki and Taniguchi: Hyperbolic manifolds and Kleinian groups
Macdonald: Symmetric functions and Hall polynomials, second edition, paperback
Catto, Le Bris, and Lions: Mathematical theory of thermodynamic limits: Thomas-Fermi type models
McDuff and Salamon: Introduction to symplectic topology, paperback
Holschneider: Wavelets: An analysis tool, paperback
Goldman: Complex hyperbolic geometry
Colbourn and Rosa: Triple systems
Kozlov, Maz’ya and Movchan: Asymptotic analysis of fields in multi-structures
Maugin: Nonlinear waves in elastic crystals
Dassios and Kleinman: Low frequency scattering
Ambrosio, Fusco and Pallara: Functions of bounded variation and free discontinuity problems
Slavyanov and Lay: Special functions: A unified theory based on singularities
Joyce: Compact manifolds with special holonomy
Carbone and Semmes: A graphic apology for symmetry and implicitness
Boos: Classical and modern methods in summability
Higson and Roe: Analytic K-homology
Semmes: Some novel types of fractal geometry
Iwaniec and Martin: Geometric function theory and nonlinear analysis
Johnson and Lapidus: The Feynman integral and Feynman's operational calculus, paperback
Lyons and Qian: System control and rough paths
Ranicki: Algebraic and geometric surgery
Ehrenpreis: The Radon transform
Lennox and Robinson: The theory of infinite soluble groups
Ivanov: The Fourth Janko Group
Huybrechts: Fourier-Mukai transforms in algebraic geometry
Hida: Hilbert modular forms and Iwasawa theory
Boffi and Buchsbaum: Threading homology through algebra
Threading Homology Through Algebra: Selected Patterns
Great Clarendon Street, Oxford OX2 6DP
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide in
Oxford New York Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece Guatemala
Hungary Italy Japan Poland Portugal Singapore South Korea Switzerland
Thailand Turkey Ukraine Vietnam
Oxford is a registered trade mark of Oxford University Press
in the UK and in certain other countries
Published in the United States
by Oxford University Press Inc., New York
© Oxford University Press, 2006
The moral rights of the authors have been asserted
Database right Oxford University Press (maker)
First published 2006
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above
You must not circulate this book in any other binding or cover and you must impose the same condition on any acquirer
British Library Cataloguing in Publication Data
Data available
Library of Congress Cataloging in Publication Data
Data available
Typeset by Newgen Imaging Systems (P) Ltd., Chennai, India
Printed in Great Britain on acid-free paper by Biddles Ltd., King's Lynn, Norfolk
ISBN 0–19–852499–4 978–0–19–852499–1
1 3 5 7 9 10 8 6 4 2
A coloro che amo (To those I love)
To Betty, wife and lifelong friend.
Though she can’t identify each tree, she shares with me the delight
of walking through the forest.
From a little before the middle of the twentieth century, homological methods have been applied to various parts of algebra (e.g. Lie Algebras, Associative Algebras, Groups [finite and infinite]). In 1956, the book by H. Cartan and S. Eilenberg, Homological Algebra [33], achieved a number of very important results: it gave rise to the new discipline, Homological Algebra, it unified the existing applications, and it indicated several directions for future study. Since then, the number of developments and applications has grown beyond counting, and there has, in some instances, even been enough time to see various methods threading their way through apparently disparate, unrelated branches of algebra. What we aim for in this book is to take a few homological themes (Koszul complexes and their variations, resolutions in general) and show how these affect the perception of certain problems in selected parts of algebra, as well as their success in solving a number of them. The expectation is that an educated reader will see connections between areas that he had not seen before, and will learn techniques that will help in further research in these areas.
What we include will be discussed shortly in some detail; what we leave out deserves some mention here. This is not a compendium of homological algebra, nor is it a text on commutative algebra, combinatorics, or representation theory, although it makes significant contact with all of these fields. We are not attempting to provide an encyclopedic work. As a result, we leave out vast areas of these subjects and only select those parts that offer a coherence from the point of view we are presenting. Even on that score we can make no claim to completeness.
Our Chapter I, called "Recollections and Perspectives," reviews parts of Polynomial Ring and Power Series Ring Theory, Linear Algebra, and Multilinear Algebra, and ties these with ideas that the reader should be very familiar with. As the title of the chapter suggests, this is not a compendium of "assumed known" items, but a presentation from a certain perspective, mainly homological. For example, almost everyone knows about divisibility and factoriality; we give a criterion for factoriality that ties it immediately to a homological interpretation (and one which found significant application in solving a long-open question in regular local ring theory).
The next three chapters of this book pull together a group of classical results, all coming from and generalizing the techniques associated with the Koszul complex. Perhaps the major result in Chapter II, on local rings, is the homological characterization of a regular local ring by means of its global dimension. Section II.6 includes a proof of the factoriality of regular local rings which is much closer to the original one, rather than the Kaplansky proof that is frequently quoted.
We have also included a section on multiplicity theory, mainly to carry through the theme of the Koszul complex, and a section on the Homological Conjectures, as they provide a good roadmap for still open problems as well as a historical guide through much of what has been going on in the area this book is sketching. Chapter III deals with a class of complexes developed with the following aim in view: to associate a complex to an arbitrary finite presentation matrix of a module (the Koszul complex does this for a cyclic module), and to have that complex play the same role in the proof of the generalized Cohen–Macaulay Theorem that the Koszul complex plays in the classical case. We have made an explicit connection, in terms of a chain homotopy, between an older, "fatter" class of complexes, and a slimmer, more "svelte" class. We have also included a last section in which we define a generalized multiplicity which has found interesting applications, of late. Chapter IV applies some of the properties of these complexes to a systematic study of finite free resolutions, ending in a "syzygy-theoretic" proof of the unique factorization theorem (or "factoriality") in regular local rings.
The last three chapters and the Appendix not only focus on determinantal ideals and characteristic-free representation theory, but also involve a good deal of combinatorics. Chapter V employs the homological techniques developed in the previous part in the study of a number of types of determinantal ideals, namely Pfaffians and powers of Pfaffians. In Chapter VI we develop the basics of a characteristic-free representation theory of the general linear group (which has already made its appearance in earlier chapters). Because of the generality aspired to, heavy use is made of letter-place methods, an idea used more by combinatorialists than by commutative algebraists. As some of the proofs require more detail than is probably helpful for those encountering this material for the first time, we decided to place these details in a separate Appendix: Appendix A. Much of the development of this chapter rests heavily on the notion of straight tableaux introduced by B. Taylor. In Chapter VII we first present a number of results that immediately follow from this more general theory. Then examples are given to indicate what further use has been made of it, and in most cases references are given to detailed proofs. It is in this part of the chapter that we see the important influence of the work of A. Lascoux in characteristic zero. We give some of the background to the Hashimoto example of the dependence of the Betti numbers of determinantal ideals on characteristic. We deal with resolutions of Weyl modules in general, and skew-hooks in particular, and we make connections with intertwining numbers, Z-forms, and several other open problems.
The intended readership of this book ranges from third-year and above graduate students in mathematics, to the accomplished mathematician who may or may not be in any of the fields touched on, but who would like to see what developments have taken place in these areas and perhaps launch himself into some of the open problems suggested. Because of this assumption, we are allowing ourselves to depend heavily on material that can be found in what we regard as comprehensive and accessible texts, such as the textbook by D. Eisenbud. We may at times, though, include a proof of a result here even if it does appear in such a text, if we think that the method of proof is typical of many of that kind.
I.3.1 R[X1, ..., Xt] as a symmetric algebra
II.4 Codimension and finitistic global dimension
II.8 Intersection multiplicity and the homological conjectures
III.1.1 The graded Koszul complex and its "derivatives"
III.1.2 Definitions of the hooks and their explicit bases
III.3.2 Comparison of the fat and slim complexes
IV.3 Proof of the first structure theorem
V.1.2 Resolution of a certain Pfaffian ideal
V.1.3 Algebra structures on resolutions
V.2.1 Intrinsic description of the matrix X
VI.2 Weyl and Schur modules associated to shape matrices
VI.3.1 Positive places and the divided power algebra
VI.3.2 Negative places and the exterior algebra
VI.3.3 The symmetric algebra (or negative letters and places)
VI.4 Place polarization maps and Capelli identities
VI.6 Some kernel elements of Weyl and Schur maps
VI.7 Tableaux, straightening, and the straight basis theorem
VI.7.1 Tableaux for Weyl and Schur modules
VI.7.3 Taylor-made tableaux, or a straight-filling algorithm
VI.7.4 Proof of linear independence of straight tableaux
VII.2 Direct sums and filtrations for skew-shapes
VII.5 Resolutions revisited; the Hashimoto counterexample
VII.6.5 Comparison with the Lascoux resolutions
A.1 Theorem VI.3.2, Part 1: the double standard tableaux generate
A.2 Theorem VI.3.2, Part 2: linear independence of double standard tableaux
A.3 Modifications required for Theorems VI.3.3 and VI.3.4
RECOLLECTIONS AND PERSPECTIVES
This chapter is neither a collection of results which we assume to be known, nor the place to prove some results probably unknown to the reader but needed in the following. Although it resembles a little of both things, it is essentially a selection of topics, some elementary, some more advanced, which we feel are adequate, or even necessary, to prepare the ground for the material of the chapters to come. Since it is almost impossible to tell which "basic" material is truly universally known, and which is not, we can only assure the reader that those terms in this chapter which are unfamiliar can be easily found in the book by D. Eisenbud, [41].
I.1 Factorization
In this section, we deal with the basic topic of divisibility. In doing so, we review a few properties of some rings which are of importance to us. For more details, we refer the reader to Reference [87].
I.1.1 Factorization domains
Let R be an integral domain, that is, a commutative ring (with 1) having no zero divisors. Given a and b in R, we say that a is a divisor of b (written a | b) if b = ac for some c in R. If a | b and b | a, then b = ua for some unit u, and a and b are called associates. Being associate is an equivalence relation. a is a proper divisor of b if a divides b, but is neither a unit nor an associate of b.

In terms of ideals, a | b means (b) ⊆ (a); u being a unit is equivalent to (u) = R; a and b being associates says (a) = (b); and a properly divides b if and only if (b) ⊂ (a).
Definition I.1.1 An element c ∈ R is called a greatest common divisor (gcd) of a and b in R if c | a, c | b, and c is divisible by every d such that d | a and d | b. An element c ∈ R is called a least common multiple (lcm) of a and b in R if a | c, b | c, and c divides every d such that a | d and b | d.

Given a and b in R, gcd(a, b) may or may not exist. If it does, it is unique up to associates. Similarly for lcm(a, b).
Remark I.1.2 If a, b ∈ R − {0} and lcm(a, b) exists, then gcd(a, b) also exists, and lcm(a, b) · gcd(a, b) = ab, up to units. If a, b ∈ R − {0} and gcd(a, b) exists, lcm(a, b) may not exist (cf. Example I.1.11 later on). However, if gcd(a, b) exists for all choices of a and b in R − {0}, then lcm(a, b) exists for all choices of a and b in R − {0}.
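In Z, the relation of Remark I.1.2 can be checked directly. The following sketch (ours, not the book's) computes the lcm via the identity lcm(a, b) · gcd(a, b) = ab, which in Z holds up to sign:

```python
from math import gcd

# lcm via the identity of Remark I.1.2 (up to units, i.e. up to sign in Z)
def lcm(a, b):
    return abs(a * b) // gcd(a, b)

for a, b in [(6, 9), (4, 10), (7, 13)]:
    assert lcm(a, b) * gcd(a, b) == abs(a * b)
```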
Definition I.1.3 A non-zero, non-invertible element a ∈ R is called irreducible if it does not have any proper divisors. A non-zero, non-invertible element a ∈ R is called prime if, whenever a | bc, then either a | b or a | c.
In terms of ideals, a is irreducible if and only if (a) is maximal among the proper principal ideals of R; a is prime if and only if (a) is a prime ideal. If a is prime, then it is irreducible, but the converse does not hold (Example I.1.11 will exhibit irreducible elements that are not prime).
Definition I.1.4 An integral domain R is called a factorization domain if every non-zero, non-invertible a ∈ R can be expressed as a product of irreducible elements. A factorization domain is called a unique factorization domain (UFD) if every factorization into irreducibles is unique up to permutation of the factors and multiplication of the factors by units.

In terms of ideals, an integral domain R is a factorization domain if and only if there is no strictly ascending infinite chain of principal ideals in R. In particular, every principal ideal domain (PID) is a factorization domain: given any ascending chain of ideals, the union of these ideals is a (principal) ideal, hence equal to one of the ideals of the sequence, so the chain stabilizes.

In fact, a PID is always a UFD, by part (ii) of the following proposition.
Proposition I.1.5 Let R be a factorization domain. The following are equivalent:

(i) R is a UFD.

(ii) lcm(a, b) exists for every choice of a, b in R.

(iii) Every irreducible element is prime.
Proof (i) ⇒ (ii) As in Z, one expresses a and b as products of powers (with non-negative exponents) of suitable irreducibles (the same ones for a and b): say a = u · p1^α1 ··· pr^αr and b = v · p1^β1 ··· pr^βr, with u and v units; then p1^max(α1,β1) ··· pr^max(αr,βr) is easily checked to be an lcm(a, b).
(ii) ⇒ (iii) The gcd exists in R since the lcm does. If c = gcd(a, b), then cd = gcd(ad, bd) for every d. For, given any two ideals 𝔞 and 𝔟, d(𝔞 ∩ 𝔟) = d𝔞 ∩ d𝔟; hence d · lcm(a, b) = lcm(ad, bd); using that ad · bd = gcd(ad, bd) · lcm(ad, bd) and ab = c · lcm(a, b), we are through.
Let an irreducible element c divide ab, and assume that c ∤ b: we claim that c | a. Since c is irreducible and c ∤ b, b and c are coprime, that is, 1 = gcd(b, c).
It follows that a = gcd(ab, ac); since c divides ab by assumption, c must divide gcd(ab, ac), as claimed.
Corollary I.1.6 If R is a UFD, then hd_R(R/(a, b)) ≤ 2 for all a and b in R. (Here hd stands for homological dimension, sometimes called projective dimension and denoted by pd.)
Proof If a = 0 = b, the quotient ring is R and the homological dimension is 0.

If a = 0 and b ≠ 0, the quotient ring is R/(b) and we consider the exact complex of R-modules:

0 → K → R --(b)--> R → R/(b) → 0,

where K stands for the kernel of the map given by multiplication by b. Since R is a domain, cb = 0 implies c = 0, and K = (0). Hence hd_R(R/(b)) ≤ 1.
If both a and b are different from 0, we consider the following exact complex:

0 → K → R² --(a,b)--> R → R/(a, b) → 0,

where K stands for the kernel of the map given by the matrix (a, b). We want to show that K is free over R, as this will give us our result on the homological dimension of R/(a, b).
If (a) : b denotes the ideal {r ∈ R | rb ∈ (a)}, clearly K = (a) : b, because rb ∈ (a) if and only if rb = sa for some (unique) s ∈ R, that is, (−s)a + rb = 0. As R-modules, (a) : b ∼= b((a) : b), and obviously b((a) : b) = (a) ∩ (b). By the previous proposition, (a) ∩ (b) is a principal ideal; hence it is a rank 1 free R-module, so that K is free, as required. 2
Because we have not made significant use of homological dimension, we will put off giving a formal definition of that term here; the reader will find it in the next section (Definition I.2.25). The crucial fact that we needed in the proof of the above corollary was just that K is free.
We will see in Chapter II that if R is a noetherian local ring, then R is a UFD if and only if hd_R(R/(a, b)) ≤ 2 for all a and b in R. This will lead to proving that regular local rings are UFD.
Remark I.1.7 We have noticed in the proof of Proposition I.1.5 that if R is a UFD, then c = gcd(a, b) implies cd = gcd(ad, bd) for every d. It follows that gcd(a, b) = 1 and a | bc together imply a | c. In terms of ideals, this means that if a and b are coprime in R, then b is a non-zero divisor in R/(a), although R/(a) may no longer be an integral domain. Conversely, if b is a non-zero divisor in R/(a), then gcd(a, b) = 1; for otherwise a/gcd(a, b) would kill b in R/(a). This set-up will be generalized in Chapter II by the notion of M-sequence.
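As a quick illustration in Z (our addition, not part of the book's text), coprimality of a and b is exactly the condition for b to be a non-zero divisor modulo a:

```python
from math import gcd

def is_nonzerodivisor_mod(b, a):
    # b is a non-zero divisor in Z/(a) iff r*b is never 0 mod a for r = 1, ..., a-1
    return all((r * b) % a != 0 for r in range(1, a))

assert gcd(4, 9) == 1 and is_nonzerodivisor_mod(9, 4)
assert gcd(4, 6) == 2 and not is_nonzerodivisor_mod(6, 4)  # 2*6 = 12 kills 2 mod 4
```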
Remark I.1.8 The complex

0 → R² --(a,b)--> R → R/(a, b) → 0

is a truncation of the following Koszul complex (to be described in Chapter II):

0 → R --(−b,a)--> R² --(a,b)--> R → R/(a, b) → 0.
Does the latter complex coincide with the resolution of (a, b), a ≠ 0 ≠ b, described in the proof of Corollary I.1.6? Recalling the identification K = (a) : b,

im(−b, a) = {(s, t) ∈ R² | s = −br, t = ar for some r ∈ R}

corresponds to (a) ⊆ (a) : b, and we are asking whether (a) = (a) : b. We claim that equality holds if and only if gcd(a, b) = 1. For, by the previous remark, gcd(a, b) = 1 means that b is a non-zero divisor in R/(a), and so (a) : b vanishes in R/(a).
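The defining property of the Koszul complex above, namely that consecutive maps compose to zero, can be checked mechanically. The following sketch (ours, not from the book) does so over R = Z for a few sample pairs:

```python
# The two Koszul maps for a pair (a, b): their composite is a*(-b*r) + b*(a*r) = 0.
def d2(r, a, b):
    # R -> R^2, the map given by the column (-b, a)
    return (-b * r, a * r)

def d1(v, a, b):
    # R^2 -> R, the map given by the row matrix (a, b)
    s, t = v
    return a * s + b * t

for a, b in [(2, 3), (5, -7), (0, 4)]:
    for r in range(-3, 4):
        assert d1(d2(r, a, b), a, b) == 0
```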
When unique factorization of elements does not hold in our integral domain R, we might relax the condition a bit and ask: what kind of ring allows for the unique factorization of principal ideals into prime ideals? We do not know the answer to that, but we can ask for a generalization of principal ideal domains, namely: what kind of ring R (besides a PID) may have unique factorization of ideals into products of prime ideals?
Actually, there is a name for such a ring: Dedekind domain. It turns out that this condition is equivalent to a combination of other properties, namely that of being noetherian, normal (or integrally closed), and being of dimension one. Some of these terms will be discussed in great detail in Chapter II, but we briefly point out here that one of the many characterizations of a noetherian ring is that every ideal is finitely generated (another one is that no infinite strictly ascending chain of ideals can exist). This is certainly true of a PID, so every PID is noetherian. To say that the dimension of a ring is equal to one turns out to mean (see Section II.3) that every non-zero prime ideal is maximal, and the observations immediately preceding Definition I.1.3 imply that in a PID all non-zero prime ideals are indeed maximal. Finally, it is clear that every PID (in fact, every UFD) is normal, so that we know every PID is a noetherian, normal domain of dimension one. However, these three properties do not quite characterize a PID; rather, there is the following theorem.
Theorem I.1.9 For an integral domain R, the following are equivalent:

(i) R is a noetherian normal domain of dimension 1.

(ii) Every proper ideal 𝔞 of R can be expressed as a product of prime ideals, in a unique way, up to permutations of the factors.

Proof Cf., for example, Reference [87, chapter 5, section 6, theorem 13, p. 275]. 2
So we are led to make the following definition.
Definition I.1.10 An integral domain is called a Dedekind domain if it
satisfies the equivalent conditions of Theorem I.1.9.
The ring of algebraic integers in any algebraic number field is always a Dedekind domain. Some very accessible ones are the rings Z + Z√n, with n a squarefree element of Z − {0, 1} such that n is not congruent to 1 modulo 4 (this latter condition ensuring that this is the ring of integers in Q(√n)).
The family of Dedekind domains properly includes the family of principal ideal domains, for Dedekind domains may have ideals which are not principal.
Example I.1.11 Let R = Z + Z√−5, a = 1 + √−5, b = 3. gcd(a, b) exists and equals 1: for if s + t√−5 divides both a and b, then its norm N(s + t√−5) = s² + 5t² must divide both N(a) = 6 and N(b) = 9, hence their gcd 3; but s² + 5t² | 3 forces t = 0 and s = ±1. If 𝔞 = (a, b) were a principal ideal, gcd(a, b) = 1 would imply 𝔞 = R. But √−5 ∉ 𝔞, since otherwise √−5 = αa + βb would give 5 = 6N(α) + 9N(β), and 3 should divide 5, a contradiction. Finally, notice that lcm(a, b) does not exist: if s + t√−5 were an lcm(a, b), then s² + 5t² would be divisible by both 6 and 9, hence by lcm(6, 9) = 18; moreover, since both 6 = (1 + √−5)(1 − √−5) = 2 · 3 and 3 + 3√−5 are common multiples of a and b, s + t√−5 should divide both of them, so that s² + 5t² would divide both 36 and 54, hence gcd(36, 54) = 18; thus s² + 5t² = 18, which is impossible.
If one had simply wanted to prove that this ring is not a PID, it would have sufficed to point out that 6 = 2 × 3 = (1 + √−5)(1 − √−5), show that each of these factors is irreducible, and conclude that this contradicts UFD, hence PID. The rather longer discussion above, though, actually produces a non-principal ideal.

While it may be slightly disappointing that there are Dedekind domains that are not principal ideal domains, it is a well-known property of Dedekind domains that all their ideals can be generated by at most two elements (cf., e.g. Reference [87, chapter 5, section 7, theorem 17, p. 279]). So at least we are not too far off the mark.
If R is any commutative ring, the collection of prime ideals of R is called the spectrum of R, written Spec(R). The set of maximal ideals of R is called the maximal spectrum, and is denoted by Max(R).

By Theorem I.1.9, given a Dedekind domain R, Spec(R) = {0} ∪ Max(R).
Proposition I.1.12 If R is a Dedekind domain such that |Max(R)| < ∞, then R is a PID.

Proof Let Max(R) = {m1, m2, ..., mt}. For every i = 1, 2, ..., t there exists an element ai ∈ mi such that ai ∉ mi² and ai ∉ mj for j ≠ i (by the Chinese Remainder Theorem, Reference [87], chapter 5, section 7, theorem 17, p. 279). Then (ai) = mi, and the principality of all maximal ideals implies the principality of every other ideal (by part (ii) of Theorem I.1.9). 2
If we localize a Dedekind domain at a non-zero prime m, Rm is still Dedekind, since condition (i) of Theorem I.1.9 is preserved by localization. (We assume the reader is familiar with the process of forming rings of quotients with respect to a multiplicative subset. Essentially this is just the "fractions" having arbitrary elements of the ring on top, and elements of the multiplicative subset as denominators. All the bells and whistles of localization are explained in Reference [41], section 2.1.) In fact, Rm is a PID (by the last proposition), because 0 and mRm are its only prime ideals. If mRm = (a) for some a ∈ Rm, then every other ideal of Rm is of type (a^n) for some positive n.
Notice that since Rp is a PID for every p ∈ Spec(R), unique factorization of elements is locally true for every Dedekind domain.

Local Dedekind domains are known as discrete valuation rings.
I.1.2 Polynomial and power series rings
Given any commutative ring (with 1), say R, a (formal) power series in t indeterminates over R, t ∈ N − {0}, is a function f : N^t → R. Power series can be added and multiplied. Addition is simply addition of functions. Multiplication is defined by

(fg)(n1, ..., nt) = Σ_{mi+li=ni} f(m1, ..., mt) g(l1, ..., lt).

The set of all power series in t indeterminates over R turns out to be a commutative ring (with 1) with respect to the indicated operations. The customary notation for this ring is R[[X1, ..., Xt]], for one identifies f : N^t → R with the formal sum Σ f(n1, ..., nt) X1^n1 ··· Xt^nt.
Given R as above, a polynomial in t indeterminates over R is a power series f : N^t → R which is zero almost everywhere. The corresponding symbol Σ f(n1, ..., nt) X1^n1 ··· Xt^nt is usually meant to be restricted to the (finitely many) non-zero values f(n1, ..., nt), thereby giving a finite formal sum. Polynomials form a subring of the ring of power series, denoted by R[X1, ..., Xt]. Clearly, R[X1, ..., Xt] = (R[X1, ..., Xt−1])[Xt].
Often one writes R[[X]] and R[X] instead of R[[X1, ..., Xt]] and R[X1, ..., Xt], meaning that X = {X1, ..., Xt}.

The following proposition collects some properties valid when |X| = t = 1.
Proposition I.1.13 Let R be a commutative ring (with 1).

(i) f ∈ R[X] is invertible in R[X] if and only if a0 is invertible in R and all other coefficients are nilpotent in R (as usual, we assume f = Σ_{i=0}^n ai X^i; an element of a ring is nilpotent if some power of it is equal to 0).

(ii) f ∈ R[[X]] is invertible in R[[X]] if and only if a0 is invertible in R (as usual, we assume f = Σ_{i=0}^∞ ai X^i).

(iii) R has no zero divisors if and only if R[X] has no zero divisors, if and only if R[[X]] has no zero divisors.
Proof We only prove (ii), not because it is harder, but because we need it soon.

If f is invertible in R[[X]], there exists g ∈ R[[X]], g = Σ_{i=0}^∞ bi X^i say, such that fg = 1. Hence

fg = a0b0 + (a0b1 + a1b0)X + (a0b2 + a1b1 + a2b0)X² + ··· = 1

forces a0b0 = 1, and a0 is a unit in R.

Conversely, assume that a0 is a unit in R and look for some g as above, such that fg = 1. The following equalities must be satisfied:

a0b0 = 1, a0b1 + a1b0 = 0, a0b2 + a1b1 + a2b0 = 0, ...

The invertibility of a0 allows us to solve these equations for b0, b1, b2, ..., one after the other. 2
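The recursion in this proof can be carried out literally. Here is a sketch (ours, not the book's) over R = Q that solves for b0, b1, b2, ... one coefficient at a time:

```python
from fractions import Fraction

def invert_series(a, n_terms):
    # solve a0*b0 = 1 and, for n >= 1, a0*bn = -(a1*b_{n-1} + ... + an*b0)
    a0_inv = Fraction(1) / a[0]
    b = [a0_inv]
    for n in range(1, n_terms):
        s = sum(a[i] * b[n - i] for i in range(1, min(n, len(a) - 1) + 1))
        b.append(-a0_inv * s)
    return b

# 1/(1 - X) = 1 + X + X^2 + ... , the geometric series
assert invert_series([Fraction(1), Fraction(-1)], 5) == [Fraction(1)] * 5
```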
The last statement of Proposition I.1.13 hints at a general question: what properties of R are inherited by R[X] and R[[X]]?

For instance, if R is a Euclidean domain (i.e. a domain where one has division with remainder), the domain R[X] need not be Euclidean.
Proposition I.1.14 If R is noetherian, then both R[X] and R[[X]] are noetherian.

Proof For R[X], this is the Hilbert basis theorem (cf., for example, Reference [87], chapter 4, section 1, theorem 1, p. 201). For R[[X]], there is a proof very much in the spirit of the proof of the Hilbert basis theorem (cf., e.g. Reference [87], chapter 7, section 1, theorem 4, p. 138). 2
Corollary I.1.15 If R is noetherian, then both R[X1, ..., Xt] and R[[X1, ..., Xt]] are noetherian.

When R is a noetherian domain, both R[X1, ..., Xt] and R[[X1, ..., Xt]] (being noetherian domains) are factorization domains: they cannot contain any strictly ascending infinite chain of principal ideals. This remark leads to the following question: if R is a UFD, is it true that R[X1, ..., Xt] and R[[X1, ..., Xt]] are UFD?
Unlike Proposition I.1.14, we cannot give a unique answer: we will prove in a moment that R[X1, ..., Xt] does inherit the property of being a UFD from R; but R[[X1, ..., Xt]] may not be a UFD. The first counterexample was given by P. Samuel in 1961 (see [77]).
Theorem I.1.16 If R is a UFD, then R[X1, ..., Xt] is a UFD.

Proof By induction on t, it suffices to show that R[X] is a UFD. Since we already know that R[X] is a factorization domain, part (ii) of Proposition I.1.5 says that it is enough to prove that a lcm(f, g) exists for any two polynomials f and g in R[X].

Let Q denote the field of quotients of R. Since Q is a field, Q[X] is a Euclidean domain, hence a PID, hence a UFD. So a lcm(f, g) certainly exists in Q[X]. Call it h. Clearly, h can be expressed as h = c(h) · h′, where h′ ∈ R[X] and has coprime coefficients, while c(h) ∈ Q. Write f = c(f) · f′ and g = c(g) · g′, where f′ and g′ are assumed to have coprime coefficients. Since R is a UFD by hypothesis, a lcm(c(f), c(g)) exists in R. Call it c. Then c · h′ is a lcm(f, g) in R[X], as required. 2

Although a similar theorem does not hold for R[[X1, ..., Xt]], we have the following partial result.
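The content c(f) used in this proof, the gcd of the coefficients of f, is straightforward to compute. The helper below (our naming, not the book's) illustrates it for R = Z:

```python
from math import gcd
from functools import reduce

def content(coeffs):
    # c(f): the gcd of the coefficients of f, for f with coefficients in Z
    return reduce(gcd, coeffs)

# f = 4 + 6X + 10X^2 has content 2, so f = 2 * (2 + 3X + 5X^2)
assert content([4, 6, 10]) == 2
assert content([2, 3, 5]) == 1   # the cofactor has coprime coefficients
```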
Proposition I.1.17 If R is a field, then R[[X1, ..., Xt]] is a UFD.

Proof We cannot reduce to the case t = 1. But since we already know that R[[X1, ..., Xt]] is a factorization domain, it suffices to show that every irreducible element generates a (principal) prime ideal (cf. part (iii) of Proposition I.1.5). This can be accomplished by induction on t, and using the statement of the previous theorem (cf., e.g. Reference [87], chapter 7, section 1, theorem 6, p. 148). 2
We give another property of K[[X1, ..., Xt]], when K is a field.
Proposition I.1.18 If K is a field, then K[[X1, ..., Xt]] is a local ring with maximal ideal m = (X1, ..., Xt).

Proof The proof of part (ii) of Proposition I.1.13 works word for word in every ring (R[[X1, ..., Xs−1]])[[Xs]]. Since in our case R is a field, the non-units of K[[X1, ..., Xt]] are the elements with zero constant term. That is, (X1, ..., Xt) consists of all the non-invertible elements of K[[X1, ..., Xt]]. 2

When t = 1, K[[X]] is in fact a discrete valuation ring (= local Dedekind domain), hence a PID (cf. Proposition I.1.12), for it is not hard to check that every proper ideal in K[[X]] is a power of m = (X).
I.2 Linear algebra
In this section we deal with linear algebra over a commutative ring, not just over a field. In doing so, we review some basics of homological algebra. For more details, we refer the reader to References [15], [33], and [41].
I.2.1 Free modules
Let R be a commutative ring (with 1). An R-module is an immediate generalization of a vector space. That is, if K is a field and V a vector space over K, we know that V is an abelian group, K acts on V, and this action satisfies certain conditions. One notices that the conditions in no way make use of the fact that K is a field; thus we may replace K by the commutative ring R, write M for V, and get the definition for a module M over the ring R.
The usual definitions of linearly independent subset, linearly dependent subset, generators, submodule, submodule generated by a subset, that are used for vector spaces apply mutatis mutandis to R-modules. The difference, as we will see, lies in the fact that our base ring is not in general a field; thus such things as the existence of a basis for every vector space do not hold true for modules over arbitrary rings. (Recall that a basis of a module is a linearly independent subset which generates the module.) Yet the existence of maximal linearly independent subsets of a module is proved in exactly the same way as is done for vector spaces. It may be, however, that the empty set is a maximal linearly independent subset of a module, but the module is not necessarily the zero module.
For example, the abelian group Z/(2), considered as a Z-module, has two elements, but its maximal linearly independent set is the empty set. For {0} is not independent, and {1} is not independent because 2 · 1 = 0.
Thus, while for vector spaces we have the fact that a maximal independent subset is a basis for (hence generates) the vector space, this is no longer the case for general modules.
Definition I.2.1 An R-module M is called free if it has a basis.
We note immediately that the zero module is free (its basis is the empty set). The following result shows that the basis of a free module can have any cardinality.
Proposition I.2.2 Given any non-empty set I, there is a free R-module with basis in one-to-one correspondence with I.
Proof Let M be the set {f : I → R | f is zero almost everywhere}. It is an R-module with respect to the operations (f1 + f2)(i) = f1(i) + f2(i) and (rf)(i) = rf(i). Clearly M has an R-basis {fi}i∈I, where fi stands for the map sending i to 1 and every other element of I to 0. □
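A concrete model of this construction, sketched in Python over R = Z (all names below are ours, chosen for illustration): elements of the free module on I are finitely supported functions I → R, stored as dictionaries.

```python
# Free R-module on a set I, modeled as finitely supported maps I -> R.
# Here R = Z (Python ints); function names are illustrative, not from the text.

def add(f, g):
    """(f + g)(i) = f(i) + g(i); zeros are dropped so supports stay finite."""
    h = {i: f.get(i, 0) + g.get(i, 0) for i in set(f) | set(g)}
    return {i: v for i, v in h.items() if v != 0}

def scale(r, f):
    """(r f)(i) = r * f(i)."""
    return {i: r * v for i, v in f.items() if r * v != 0}

def basis(i):
    """The basis element f_i: sends i to 1, every other element of I to 0."""
    return {i: 1}

# The element 3*f_a + 2*f_b of the free module on I = {'a', 'b', ...}:
x = add(scale(3, basis('a')), scale(2, basis('b')))
assert x == {'a': 3, 'b': 2}
assert add(basis('a'), scale(-1, basis('a'))) == {}   # f_a - f_a = 0
```

The dictionary-of-nonzero-values representation is exactly the "zero almost everywhere" condition in the proof.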
Homomorphisms of R-modules, often called R-maps, are defined as in the case of vector spaces.
The free module built in the proof of Proposition I.2.2 is canonically R-isomorphic to ⊕i∈I Ri, where Ri is a copy of the R-module R for every i ∈ I. The basis of ⊕i∈I Ri corresponding to {fi}i∈I in this isomorphism is called the canonical basis of ⊕i∈I Ri. When I is finite, say |I| = t, we write R^t instead of ⊕i∈I Ri.
Remark I.2.3 A free R-module M having a finite basis B cannot have an infinite basis B′. For every element of B can be expressed as a linear combination of finitely many elements of B′, and if C is the (finite, linearly independent) set of all the elements of B′ involved in the expressions of the elements of B, then every element of M is generated by C, so that C = B′.
Proposition I.2.4 If B = {m1, ..., mt} and B′ = {m′1, ..., m′s} are two finite bases of the same free R-module M, then t = s.
Proof Write each mi (respectively, m′j) as an R-linear combination of the elements of B′ (respectively, B):

mi = Σj aij m′j,   m′j = Σi bji mi.

Call S the t × s matrix (aij) and T the s × t matrix (bji). Clearly, ST equals the t × t identity matrix It, and TS = Is. If t ≠ s, say t > s, consider the square matrices of order t obtained by appending t − s zero columns to S and t − s zero rows to T. Their product is still It, but det It = 1, while the determinant of the product equals the product of the determinants, which is 0: a contradiction. Similarly for t < s, using TS = Is. □
We remark that the definition of determinant is the same as for fields, and that det S · det T = det(ST) is purely formal and does not require that the base ring be a field.
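A numerical check of the padding argument, sketched over R = Z in pure Python (the matrices are our own toy example): padding a t × s matrix with zero columns and an s × t matrix with zero rows leaves the product unchanged, yet both padded square matrices have determinant 0, so their product can never be the identity.

```python
# Over R = Z, with t = 3 > s = 2: pad S with zero columns and T with zero rows.
# The product is unchanged, but both square factors have determinant 0.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def det3(M):
    """Determinant of a 3 x 3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

S = [[1, 0], [0, 1], [1, 1]]          # a t x s matrix
T = [[1, 0, 0], [0, 1, 0]]            # an s x t matrix with T S = I_s
assert matmul(T, S) == [[1, 0], [0, 1]]

S_bar = [row + [0] for row in S]      # append t - s zero columns
T_bar = T + [[0, 0, 0]]               # append t - s zero rows
assert matmul(S_bar, T_bar) == matmul(S, T)    # padding preserves the product
assert det3(S_bar) == 0 and det3(T_bar) == 0   # hence det(product) = 0, never 1
```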
Definition I.2.5 A free R-module is called finite if all its bases have finitely many elements. A finite free R-module has rank t if it has a basis consisting of t elements (hence every basis consists of t elements).
Remark I.2.6 If a free R-module M happens to have a finite system of generators, then M must be a finite free R-module. Just argue as in Remark I.2.3, calling B the system of generators and B′ any basis.
Proposition I.2.7 Let F be a rank t free R-module with basis B = {f1, ..., ft}. Let S = (sij) be a t × t matrix with entries in R, and let C = {m1, ..., mt} ⊆ F be defined by mi = Σj sij fj. Then the following are equivalent.
(i) det S is a unit in R.
(ii) C is another basis of F.
(iii) C is a generating system of F.
Proof (i) ⇒ (ii): Inverting S shows that C generates F (S−1 exists, for det S invertible allows the use of the customary cofactor formula); independence is easy.
(ii) ⇒ (iii): Trivial.
(iii) ⇒ (i): Call T the matrix expressing B in terms of C; then TS = It, and 1 = det It = det T · det S shows that det S is a unit. □
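A sketch of the criterion over R = Z (our own small example, in pure Python): a square integer matrix carries the canonical basis of Z^2 to another basis exactly when its determinant is a unit of Z, that is, ±1, in which case the adjugate formula gives an integral inverse.

```python
# Basis-change criterion over Z: det S a unit (= +-1) iff S is invertible over Z.

def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def inv2(M):
    """Inverse via the adjugate formula; entries are integers iff det = +-1."""
    (a, b), (c, d) = M
    det = det2(M)
    return [[d / det, -b / det], [-c / det, a / det]]

S = [[2, 1], [1, 1]]
assert det2(S) == 1                               # a unit in Z
assert inv2(S) == [[1.0, -1.0], [-1.0, 2.0]]      # integral inverse: basis change

S2 = [[2, 0], [0, 1]]                             # det = 2, not a unit in Z
assert det2(S2) == 2
# The columns of S2 generate only 2Z x Z: (1, 0) = a*(2, 0) + b*(0, 1)
# has no integer solution a, so they are not a generating system of Z^2.
assert all(2 * a != 1 for a in range(-10, 11))
```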
Corollary I.2.8
(i) Every generating system {m1, ..., mt} of R^t is a basis.
(ii) Every generating system of R^t has at least t elements.
(iii) If ϕ : R^n → R^m is an R-epimorphism, then n ≥ m.
Proposition I.2.9 Let ϕ be an R-morphism from R^t to R^t, and let S be its matrix with respect to the canonical basis of R^t. Then ϕ is injective if and only if det S is a non-zero divisor in R.
Proof First part: det S a non-zero divisor ⇒ ϕ injective.
Suppose ϕ(x) = 0 for some x = (x1, ..., xt) ≠ 0, say x1 ≠ 0. Let U = (x | I2 | · · · | It) be the matrix obtained from It by replacing its first column by x, where Ii stands for the i-th column of It; then det U = x1. Since Sx = 0, we get SU = (0 | S2 | · · · | St), where Si similarly stands for the i-th column of S. Thus det S · x1 = det(SU) = 0, forcing x1 = 0 since det S is a non-zero divisor: a contradiction.
Second part: ϕ injective ⇒ det S a non-zero divisor.
It suffices to show that if det S is a zero divisor, then ϕ is not injective, that is, the columns of S are not an independent system in R^t. If t = 1, this is obvious. If t ≥ 2, the statement is a corollary of the following more general result. Let m1, ..., ms be elements of R^t (t ≥ 2). If a ∈ R − {0} kills all the maximal minors of the matrix (m1 · · · ms), then {m1, ..., ms} is not an independent system.
The proof is by induction on the number s of t-tuples.
If s = 1, then am1 = 0 with a ≠ 0 prevents m1 from being independent.
We now assume that t ≥ s > 1. If a also kills all maximal minors of the matrix (m2 · · · ms), then {m2, ..., ms} is not an independent system (by the induction hypothesis) and a fortiori {m1, ..., ms} is not either.
If a does not kill all maximal minors of (m2 · · · ms), let b be one of those minors such that ab ≠ 0; let us say that b is given by the last s − 1 rows of the matrix (m2 · · · ms). We now use the assumption that a kills all maximal minors of (m1 · · · ms).
Let T denote the t × t matrix whose first t − s columns are the first t − s columns of It and whose last s columns are m1, ..., ms. Since det T equals a maximal minor of (m1 · · · ms), a det T = 0. If T̃ denotes the companion (adjugate) matrix of T, and T̃i is the i-th column of T̃, then T T̃ = det T · It implies

T (a T̃t−s+1) = a det T · It−s+1 = 0,

a relation among the columns of T. The coefficients of the first t − s columns in this relation are, up to sign, a times maximal minors of (m1 · · · ms), hence zero, while the coefficient of m1 is, up to sign, ab. Hence 0 = ab m1 + (∗m2) + · · · + (∗ms), so that (since ab ≠ 0) {m1, ..., ms} cannot be an independent system.
Finally, we consider the case s > t, that is, (m1 · · · ms) is a t × s matrix with t < s. In that case, its maximal minors are of order t; if a kills all the maximal minors, in particular it kills det(m1 · · · mt). This implies (case s = t above) that {m1, ..., mt} cannot be an independent system; a fortiori, {m1, ..., ms} cannot be either.
This concludes the proof of the more general result on R^t, as well as of the proposition. □
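A tiny numerical sketch of the "only if" direction (our own example, for R = Z/4Z and t = 1): the 1 × 1 matrix (2) has determinant 2, a zero divisor, and multiplication by 2 is indeed not injective.

```python
# For R = Z/4Z, the map phi(x) = 2x has matrix S = (2); det S = 2 is a
# zero divisor (2 * 2 = 0 mod 4), and phi kills the non-zero element 2.
n = 4
S = 2                                               # the single entry of S
zero_divisor = any(S * r % n == 0 for r in range(1, n))
assert zero_divisor                                 # det S is a zero divisor
kernel = [x for x in range(n) if S * x % n == 0]
assert kernel == [0, 2]                             # phi(2) = 0: not injective
```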
Corollary I.2.10
(i) If ϕ : R^n → R^m is an R-monomorphism, then n ≤ m.
(ii) A non-zero ideal a of R is a finite free R-module if and only if it is principal, and generated by a non-zero divisor.
Proof (i) If n > m, then there would be a monomorphism R^n −ϕ→ R^m → R^m ⊕ R^(n−m) = R^n associated with the n × n matrix obtained by appending n − m zero rows to the matrix S of ϕ with respect to the canonical bases. But that matrix has determinant 0, contradicting the proposition.
(ii) If a is finite R-free, say a ≅ R^t, the inclusion a → R gives a monomorphism R^t → R, and t ≤ 1 by part (i). So a = Ra = (a) for some independent (that is, non-zero divisor) a ∈ R. The converse is obvious. □
I.2.2 Projective modules
Every R-module M is the quotient of a free R-module F. For if {mi}i∈I is a generating system of M (if necessary, the generating system may consist of all the elements of M), then we call F the free module of Proposition I.2.2. If {fi}i∈I is the basis of F defined in that proposition, the R-epimorphism ϕ : F → M sending fi to mi for every i does the job.
If N denotes the kernel of ϕ, then there exists another R-epimorphism ψ : E → N with E an R-free module, and one gets the following exact complex:

(∗) E −ψ→ F −ϕ→ M → 0,

which is called a free presentation of M (0 stands for the zero module and M → 0 is the zero map).
We recall that complex means: wherever you have two consecutive arrows, the image of the left arrow is included in the kernel of the right arrow. Exact complex means that the inclusion is always an equality.
If |I| < ∞ (i.e. M is finitely generated), the above F is a finite free R-module, but E need not be finite. Yet in some cases (for instance when R is noetherian), E does have a finite basis, and M is said to be finitely presented. Then ψ can be expressed by a matrix (relative to some fixed bases of E and F), carrying information on M = coker(ψ).
Let us go back to the R-epimorphism ϕ : F → M and consider the exact complex (a short exact sequence is what it is generally called):

0 → ker(ϕ) → F → M → 0.
Does it imply that F ≅ M ⊕ ker(ϕ)?
More generally, does an exact complex of R-modules

(∗∗) 0 → M′ −α→ M −β→ M′′ → 0

imply that M ≅ M′ ⊕ M′′?
It is clear that the answer cannot be positive, in general (just think of the exact complex of Z-modules

0 → Z −α→ Z −β→ Z/(n) → 0,

where α is multiplication by the positive integer n and β is the canonical projection).
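For a concrete non-splitting check (our own computation, for n = 4): a splitting δ : Z/(n) → Z would be a group map, determined by k = δ(1), and k must satisfy n · k = 0 in Z; this forces k = 0, so β ∘ δ can never be the identity.

```python
# The sequence 0 -> Z --n--> Z --beta--> Z/(n) -> 0 does not split (n = 4):
# any homomorphism delta: Z/(n) -> Z sends 1 to some k with n*k = 0 in Z.
n = 4
candidates = [k for k in range(-50, 51) if n * k == 0]   # admissible images of 1
assert candidates == [0]                                 # only the zero map
k = candidates[0]
assert (k % n) != 1       # beta(delta(1)) = k mod n is never 1 = id(1)
```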
Definition I.2.11 The exact complex (∗∗) is called split if it implies M ≅ M′ ⊕ M′′ by means of an isomorphism ϕ : M → M′ ⊕ M′′ such that ϕ−1|M′ equals α and the composite M −ϕ→ M′ ⊕ M′′ → M′′ (the second map being the canonical projection) equals β.
Proposition I.2.12 The following are equivalent for the exact sequence (∗∗).
(i) (∗∗) is split.
(ii) There exists an R-map γ : M → M′ such that γ ◦ α = idM′.
(iii) There exists an R-map δ : M′′ → M such that β ◦ δ = idM′′.
Sometimes one says that an exact complex M −β→ M′′ → 0 is split, meaning that there exists δ : M′′ → M such that β ◦ δ = idM′′. Similarly for an exact complex 0 → M′ −α→ M.
If the module M′′ of (∗∗) happens to be R-free, condition (iii) of Proposition I.2.12 is automatically satisfied, and (∗∗) is split. For if {fi}i∈I is a basis of M′′, it suffices to choose one mi ∈ β−1(fi) for every i and define δ by means of δ(fi) = mi for every i.
The above is just an instance of an important property satisfied by free modules.
Proposition I.2.13 Given any exact complex M −π→ N → 0 of R-modules, and any R-map ϕ : F → N with F free, there exists an R-map ψ : F → M (called a lifting of ϕ) such that ϕ = π ◦ ψ.
Proof Fix a basis {fi}i∈I of F and choose one mi ∈ π−1(ϕ(fi)) for every i. Then define ψ by means of ψ(fi) = mi for every i. □
The proposition leads to a well-known generalization of free modules.
Definition I.2.14 An R-module P is called projective if whenever we are given an exact complex M −π→ N → 0 of R-modules and an R-map ϕ : P → N, there exists an R-map ψ : P → M such that ϕ = π ◦ ψ.
Proposition I.2.15 Given an R-module P, the following are equivalent.
(i) P is projective.
(ii) Every exact complex M −π→ P → 0 is split.
(iii) P is a direct summand of a free R-module.
(iv) Every exact complex M −π→ N → 0 induces an exact complex

HomR(P, M) −π◦→ HomR(P, N) → 0,

where the left arrow sends f to π ◦ f.
Proof We just show (iii) ⇒ (iv). Let F = P ⊕ Q with F free. Then

HomR(F, M) −π◦→ HomR(F, N) → 0

is exact, thanks to Proposition I.2.13, and we are done because

HomR(P ⊕ Q, X) = HomR(P, X) ⊕ HomR(Q, X). □
Proof Cf., for example, Reference [41, corollary 4.8, p. 124]. □
Proposition I.2.17 Let (R, m) be a local ring (not necessarily noetherian) and let M ≠ 0 be a finitely generated R-module. If M is R-projective, then M is R-free.
Proof Part (i) of the lemma implies that M/mM (= M ⊗R R/m) is a non-zero finitely generated R/m-vector space, thus it has a basis, say {m1, ..., mt}. By part (ii) of the lemma, it follows that {m1, ..., mt} is a generating system of M, hence there exists an epimorphism ϕ : F → M with F a rank t free module. Since M is projective, ϕ splits and F = M ⊕ N for a suitable N. It follows that F/mF = M/mM ⊕ N/mN as R/m-vector spaces. But F/mF and M/mM both have dimension t, so that N/mN must have dimension 0, that is, N = mN; then N = 0 (Nakayama's lemma), and M ≅ F is free. □
In the proof of Proposition I.2.17, we indicated that M/mM = M ⊗R R/m without explaining the symbol "⊗R", known as the tensor product. We will assume that this operation is well known to the reader; if in doubt, a straightforward treatment may be found in Reference [41], sections 2.2 and A.2.2. We also point out here that in Proposition I.2.18 we will write Mm = M ⊗R Rm, thereby indicating that the localization of a module is the same as taking the tensor product of the module with the ring of quotients of the ring. These are facts that we will use often in what follows.
• In connection with the use of tensor products in this book, we offer a caveat to the reader. While we often try to indicate the ring over which a given tensor product is taken, especially when a certain ambiguity exists, there are many times that the base ring is omitted. We believe that when this occurs, the context is such as to leave no question in the reader's mind what the base ring is.
There are other cases when projective R-modules and free R-modules coincide, for instance when R is a PID (see Corollary I.2.23 below) and when R = K[X1, ..., Xn], K a field (Quillen–Suslin Theorem).
A case in which the family of projective R-modules properly contains the family of free R-modules is given by R a Dedekind domain: see Remark I.2.21 below.
The property of being projective is a local property, in the sense of the following proposition.
Proposition I.2.18 Let M be a finitely presented R-module. M is projective if and only if Mm is Rm-free for every m ∈ Max(R).
Proof The only if part follows from Proposition I.2.17, once we remark that M R-projective implies Mm = M ⊗R Rm is Rm-projective (this comes from Proposition I.2.15 (iii) and the fact that ⊗R commutes with ⊕).
The if part uses Proposition I.2.15 (iv), coupled with the remark that a map is onto if and only if its localizations at all maximal ideals are onto. The assumption that M is finitely presented ensures that

Rm ⊗R HomR(M, N) ≅ HomRm(Mm, Nm). □
We now briefly turn our attention to a special class of rings.
Definition I.2.19 A nontrivial commutative ring (with 1) R is called hereditary if every ideal of R is a projective R-module.
Clearly every PID is hereditary, because of Corollary I.2.10 (ii). Another example is given by Dedekind domains, due to the following result.
Proposition I.2.20 Every non-zero ideal a of a Dedekind domain R is a projective R-module.
Proof Since R is noetherian, a is finitely presented as an R-module. Hence, by the last proposition, a is R-projective if and only if am is Rm-free for every m ∈ Max(R). But each Rm is a discrete valuation ring, hence a PID, so the non-zero ideal am is principal and generated by a non-zero divisor; by Corollary I.2.10 (ii), am is then Rm-free. □
Theorem I.2.22 If R is hereditary, then every submodule of a free R-module is R-isomorphic to a direct sum of ideals of R.
Proof Cf., for example, Reference [33], chapter I, theorem 5.3, p. 13. □
Corollary I.2.23 If R is a PID, then every submodule of a free R-module is free; in particular, every projective R-module P is free.
Corollary I.2.24 R is hereditary if and only if every submodule of a projective R-module is projective.
Proof If part: since R is R-projective (being R-free of rank 1), every submodule of R (= every ideal of R) is projective.
Only if part: every R-projective P is a direct summand of some free R-module; by the theorem, every submodule M of P is a direct sum of ideals, hence of projective modules (by definition of hereditary ring); but a direct sum of projective modules is projective. □
I.2.3 Projective resolutions
We push the analysis of non-free modules a little further.
Given any R-module M, with a generating system {mi}i∈I, we have already constructed a free presentation of M, that is, an exact complex

F1 −ϕ1→ F0 −ϕ0→ M → 0.

Taking in turn a free presentation of ker(ϕ1), by means of a free R-module F2, a longer exact complex is obtained:

F2 −ϕ2→ F1 −ϕ1→ F0 −ϕ0→ M → 0.

Iterating, one gets an exact complex

· · · −ϕn+1→ Fn −ϕn→ · · · −ϕ2→ F1 −ϕ1→ F0 −ϕ0→ M → 0

of free R-modules (except M), which is called a free resolution of M.
Sometimes one says that the free resolution of M is just the exact complex

· · · −ϕn+1→ Fn −ϕn→ · · · −ϕ2→ F1 −ϕ1→ F0

and that M = coker(ϕ1).
By the same token, we may also consider projective resolutions of a module M, namely, an exact sequence

(∗∗) · · · −dn+1→ Pn −dn→ · · · −d2→ P1 −d1→ P0 → M → 0,

where each module Pi is projective. Since free modules are projective, and we have already shown the existence of free resolutions, the existence of projective resolutions is assured.
If for some non-negative integer n we have Fn (Pn) ≠ 0 and Fn+t (Pn+t) = 0 for every t > 0, we say that M has a finite free (projective) resolution of length n.
A famous theorem due to D. Hilbert states that if R = K[x1, ..., xt], K a field, then every finitely generated R-module has a finite free resolution of length less than or equal to t. (Note that in the case considered by Hilbert, each of the modules occurring in the resolution is finitely generated, since R is noetherian.)
We can now write down the formal definition of homological dimension that we promised at the end of the proof of Corollary I.1.6.
Definition I.2.25 A module, M, has homological (free) dimension less than or equal to n if it has a projective (free) resolution of length equal to n. Otherwise it is said to have infinite homological (free) dimension.
Since every free module is projective, if the R-module M has a finite free resolution of length n, then obviously its homological dimension hdRM cannot be larger than n. However, it is quite possible for the free dimension of a module to exceed its homological dimension, as the module R/(a, b) of Example I.1.11 shows.
Now let N be another R-module and tensor the projective resolution (∗∗) of M by N. Exactness survives at the right end (since ⊗R preserves surjectivity), but the truncated complex

· · · −dn+1⊗1→ Pn ⊗R N −dn⊗1→ · · · −d2⊗1→ P1 ⊗R N −d1⊗1→ P0 ⊗R N

may have nontrivial homology.
Definition I.2.26 For every n ≥ 0, the n-th torsion module, TorRn(M, N), is the homology of the above truncated complex in dimension n, that is, ker(dn ⊗ 1)/im(dn+1 ⊗ 1).
(Notice that TorR0(M, N) = M ⊗R N.)
It is well known that the definition of TorRn(M, N) does not depend on the choice of the projective resolution of M. Furthermore, if one picks a projective resolution of N and tensors it by M, the resulting truncated complex yields the same homology modules. That is, TorRn(M, N) = TorRn(N, M).
Clearly, TorRn(M, N) = 0 for every n ≥ 1, whenever either M or N is projective.
As one might expect, Tor has some connection with the notion of torsion. If N is an R-module, an element a ∈ N is said to be a torsion element if there is a non-zero divisor r ∈ R such that ra = 0. The subset Ntor = {a ∈ N | a is a torsion element} is clearly a submodule of N, and is called the torsion submodule of N. N is called torsion-free if Ntor = 0; it is called a torsion module if N = Ntor.
Example I.2.27 Let R be a commutative ring and r ∈ R a non-zero divisor. Then

0 → R −r→ R → R/(r) → 0

is a projective (in fact free) resolution of R/(r), and for every R-module N, we have that

TorR1(R/(r), N) = {elements of N killed by r}.

If TorR1(R/(r), N) = 0 for every non-zero divisor r ∈ R, then N is a torsion-free R-module. In particular, N free implies N torsion-free (by the remark coming immediately before this example).
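The description of TorR1 in the example can be computed directly when R = Z and N = Z/(m) (our own worked case): the elements of Z/(m) killed by r form a subgroup with gcd(r, m) elements.

```python
# Tor_1^Z(Z/(r), N) for N = Z/(m), read off from the resolution
# 0 -> Z --r--> Z -> Z/(r) -> 0: it is {a in N : r*a = 0}, a subgroup
# of N with gcd(r, m) elements.
from math import gcd

def tor1(r, m):
    """Elements of Z/(m) killed by r."""
    return sorted(a for a in range(m) if (r * a) % m == 0)

assert tor1(4, 6) == [0, 3]            # gcd(4, 6) = 2 elements
assert len(tor1(4, 6)) == gcd(4, 6)
assert tor1(5, 6) == [0]               # 5 is invertible mod 6: Tor_1 vanishes
```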
It is well known that Tor "restores exactness" in the following sense. Given a short exact sequence of R-modules:

0 → M′ → M → M′′ → 0,

tensoring by N may destroy exactness on the left. But one can prove that there is a long exact sequence:

· · · → TorR1(M′, N) → TorR1(M, N) → TorR1(M′′, N) → M′ ⊗R N → M ⊗R N → M′′ ⊗R N → 0.
Proposition I.2.28 Let R be a PID and N a finitely generated R-module. Then N is torsion-free if and only if N is free.
Proof The if part is in the example above. Let us prove the only if part. Thanks to Corollary I.2.23, Proposition I.2.18, and the fact that localization preserves finite generation and torsion-freeness, we may assume that R is local, hence a discrete valuation ring with maximal ideal (a), say.
If the R/(a)-vector space N ⊗R R/(a) has dimension t, by Lemma I.2.16 (ii) we can find a rank t free R-module F such that

0 → K → F → N → 0

is exact (K simply stands for the kernel of the R-surjection F → N). Tensoring by R/(a), one gets the long exact sequence:

· · · → TorR1(R/(a), N) → K ⊗R R/(a) → F ⊗R R/(a) → N ⊗R R/(a) → 0.

But the indicated Tor1 is zero (because N is torsion-free and the last example applies), and the two vector spaces involving F and N have equal dimensions by construction. Thus K ⊗R R/(a) = 0, that is, K/(a)K = 0, and by Lemma I.2.16 K = 0; hence N ≅ F is free. □
In the last proposition, the assumption that N is finitely generated cannot be removed: the Z-module Q is torsion-free, but not free.
Corollary I.2.29 Let R be a commutative ring, and N an R-module. Then N/Ntor is torsion-free. Thus, if R is a PID and N is finitely generated, then N is the direct sum of a free module and a torsion module.
Proof If r ∈ R is a non-zero divisor, and ā ∈ N/Ntor is such that rā = 0, then ra ∈ Ntor. Hence there exists a non-zero divisor s ∈ R such that 0 = s(ra) = (sr)a. Since s, r are non-zero divisors, it follows that a ∈ Ntor, that is, ā = 0, which proves that N/Ntor is torsion-free. But by the last proposition, if R is a PID and N is finitely generated, N/Ntor is R-free if it is torsion-free.
Since N/Ntor is R-free, the short exact sequence

0 → Ntor → N → N/Ntor → 0

is split, and N ≅ Ntor ⊕ N/Ntor. □
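The decomposition can be watched on a small example (our own, for R = Z and N = Z/(4) ⊕ Z, elements stored as pairs): the torsion submodule is exactly the Z/(4) factor.

```python
# For N = Z/(4) (+) Z over Z, elements are pairs (a mod 4, b); x is torsion
# iff some non-zero integer r kills it, which forces b = 0.
def is_torsion(x, bound=20):
    a, b = x
    return any((r * a) % 4 == 0 and r * b == 0 for r in range(1, bound))

elements = [(a, b) for a in range(4) for b in range(-2, 3)]
torsion = [x for x in elements if is_torsion(x)]
assert torsion == [(a, 0) for a in range(4)]    # N_tor = Z/(4) (+) 0
```

The quotient N/Ntor is then free of rank 1, matching the corollary.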
Another example of "exactness restoration" is provided by Ext.
Let M and N be two R-modules. Taking a (possibly infinite) projective resolution of M, such as (∗∗) above, the following complex is obtained:
0 → HomR(M, N) → HomR(P0, N) −◦d1→ HomR(P1, N) −◦d2→ · · · ,

where ◦dn sends a map Pn−1 → N to the composite Pn −dn→ Pn−1 → N. We still have exactness at HomR(M, N), just because P0 projects onto M, but the truncated complex

0 → HomR(P0, N) −◦d1→ HomR(P1, N) −◦d2→ · · ·

may have nontrivial homology in dimension zero as well as elsewhere.
Definition I.2.30 For every n ≥ 0, the n-th extension module, ExtnR(M, N), is the homology of the above truncated complex in dimension n, that is, ker(◦dn+1)/im(◦dn).
(Notice that since Ext0R(M, N) is the kernel of the map ◦d1, and since

0 → HomR(M, N) → HomR(P0, N) −◦d1→ HomR(P1, N)

is exact, we have Ext0R(M, N) = HomR(M, N).)
As in the case of Tor, one can check that the definition of ExtnR(M, N) does not depend on the choice of the projective resolution of M. But if one wants to get the same homology modules starting with a resolution of N, it is necessary to pick an injective resolution of N and apply HomR(M, −).
It is clear that ExtnR(M, N) = 0 for every n ≥ 1, whenever M is projective.
Moreover, one can prove that given a short exact sequence of R-modules:

0 → M′ −α→ M −β→ M′′ → 0,

there exists a long exact sequence (note the reverse arrows):

· · · ← Ext1R(M′, N) ← Ext1R(M, N) ← Ext1R(M′′, N) ← HomR(M′, N) ← HomR(M, N) ← HomR(M′′, N) ← 0.
By definition, an R-module L is called an extension of M by N if there exists a short exact sequence

0 → N → L → M → 0.

If the sequence is split, the extension is trivial, because L ≅ N ⊕ M.
As the reader may have guessed, these extensions may be turned into a group which is isomorphic to Ext1R(M, N), with the zero element being the split extension. Thus Ext1R(M, N) = 0 if and only if all extensions of M by N are trivial.
If part: Any short exact sequence 0 → K → F → M → 0 with F free is split by assumption; hence M is projective and ExtnR(M, N) = 0 for every n ≥ 1, as required. □
I.3 Multilinear algebra
In this section, we deal with some algebras to be used extensively in the future, namely symmetric, divided power, and exterior algebras. Our approach here is via the structure of Hopf algebra, which involves the notions of algebra, coalgebra, multiplication, comultiplication, and antipode map, among others. We do this in order to make clear the relationship between the symmetric and divided power algebras, and to bring out their many useful properties. For more details, we refer the reader to References [15], [16], and [41].
I.3.1 R[X1, ..., Xt] as a symmetric algebra
Let R be a commutative ring (with 1), and let S denote the polynomial ring R[X1, ..., Xt]. S is an R-module with respect to the usual addition of polynomials and to their multiplication by elements of R. Moreover, multiplication of general elements of S defines a map mS : S × S → S which is R-bilinear. Hence one has an R-morphism S ⊗R S → S, still denoted by mS. If one further considers the R-map uS : R → S defined by means of uS(1) = 1, the fact that multiplication of polynomials is associative and has a neutral element implies that S is an R-algebra, in the sense of the following definition.
Definition I.3.1 Given a commutative ring R (with 1), an R-algebra is an R-module A endowed with two R-morphisms

m : A ⊗R A → A (multiplication),   u : R → A (unit),

such that m ◦ (m ⊗ 1) = m ◦ (1 ⊗ m) and m ◦ (u ⊗ 1) = m ◦ (1 ⊗ u) = 1 under the canonical identifications R ⊗R A ≅ A ≅ A ⊗R R (we have omitted subscripts, and 1 stands for the identity on A).
One should remark that an R-algebra A is always a ring, multiplication being given by the composite map mA ◦ χ, where χ stands for the canonical bilinear map A × A → A ⊗R A, and 1 being provided by uA(1R). It is a commutative ring if the following diagram commutes:
A ⊗ A −τ→ A ⊗ A
  m ↓           ↓ m
  A −−−1−−→ A

(here τ is the R-linear map sending a1 ⊗ a2 to a2 ⊗ a1).
By an ideal (right, left, two-sided) of the R-algebra A we always mean an ideal (right, left, two-sided) of the ring A.
Given two R-algebras A and B, an R-algebra homomorphism ϕ : A → B is an R-map also compatible with the ring structures of A and B.
An important feature of a polynomial is its degree (total degree, that is). If for every i ∈ N, Si stands for the R-linear span of all monomials of total degree i, S decomposes (as an R-module) as the direct sum ⊕i∈N Si. Moreover, the product of f ∈ Si and g ∈ Sj belongs to Si+j. This means that S is a graded algebra, in the sense of the following definition.
Definition I.3.2 An R-algebra A is called graded if it decomposes (as an R-module) as ⊕i∈N Ai, in such a way that AiAj ⊆ Ai+j for every i and j. The (non-zero) elements of Ai are called homogeneous elements of degree i.
Let now a be an ideal of S. Clearly the ring S/a is again an R-algebra. But S/a inherits the graduation of S if and only if a has a system of generators which are homogeneous.
Remark I.3.3 Let A be any graded R-algebra, a a two-sided ideal of A. The R-module A/a always inherits from A the structure of an R-algebra. But the R-algebra A/a inherits the graduation of A if and only if a has a system of generators which are homogeneous.
We now turn to a significant property of the R-algebra S. Let F be the rank t free R-module corresponding to the set I = {x1, ..., xt} (recall Proposition I.2.2). Denote by {fxi} the canonical basis of F and define the R-map ϕ : F → S by means of ϕ(fxi) = xi. Clearly, ϕ(F) is a set of (commuting) generators for the R-algebra S.
Proposition I.3.4 For every R-morphism ψ : F → A, with A an R-algebra, such that the elements of ψ(F) commute in A, there exists a unique R-algebra morphism χ : S → A verifying ψ = χ ◦ ϕ.
Proof If we define

χ(Σ c(s1, ..., st) x1^s1 · · · xt^st) = Σ c(s1, ..., st) ψ(fx1)^s1 · · · ψ(fxt)^st,

then χ is a well-defined R-algebra morphism (the elements of ψ(F) commute, so the products on the right are unambiguous), and it is clearly the unique one verifying ψ = χ ◦ ϕ. □
Definition I.3.5 Let M be an R-module. We call a symmetric algebra on M any pair (S(M), ϕ) satisfying the following requirements: S(M) is an R-algebra, ϕ : M → S(M) is an R-morphism, the elements of ϕ(M) commute in S(M), and for every R-map ψ : M → A, with A an R-algebra, such that the elements of ψ(M) commute in A, there exists a unique R-algebra morphism χ : S(M) → A verifying ψ = χ ◦ ϕ.
It is clear that, if symmetric algebras on M do exist, there is a unique isomorphism from one to the other which is the identity on M.
Theorem I.3.6 Every M admits a symmetric algebra.
We sketch a proof of the theorem.
The idea is to build first an R-algebra T(M) and an R-map σ : M → T(M) with the following properties:
(1) for every R-map ψ : M → A, with A an R-algebra, there exists a unique R-algebra morphism ρ : T(M) → A verifying ψ = ρ ◦ σ;
(2) the elements of σ(M) generate T(M).
Then one defines S(M) as the quotient T(M)/a, where a is the two-sided ideal generated by all the elements σ(m1)σ(m2) − σ(m2)σ(m1) with m1 and m2 in M. The map ϕ is defined as the composite:

M −σ→ T(M) → T(M)/a.

Clearly, the elements of ϕ(M) commute in S(M). If ψ : M → A is given, such that the elements of ψ(M) commute in A, the map ρ : T(M) → A verifying ρ ◦ σ = ψ vanishes on a, and we call χ the map T(M)/a → A induced by ρ. Finally, since T(M) is generated by the elements of σ(M), ϕ(M) generates S(M), and the uniqueness of χ follows.
A pair (T(M), σ), necessarily unique up to isomorphism, can simply be obtained by taking the R-module ⊕i∈N M^⊗i and by imposing that every product (m1 ⊗ · · · ⊗ mi)(m′1 ⊗ · · · ⊗ m′j) be equal to m1 ⊗ · · · ⊗ mi ⊗ m′1 ⊗ · · · ⊗ m′j. (We mean M^⊗0 = R.) The map σ identifies M with the summand M^⊗1.
One should remark that T(M) (called the tensor algebra on M) is graded by construction, and that the ideal a is generated by homogeneous elements. Hence S(M) inherits the graduation of T(M). One usually writes S(M) = ⊕i∈N Si(M). In particular, ϕ identifies M with S1(M), and the elements of S1(M) generate S(M). Thus S(M) is commutative.
The construction of S(M) is functorial [meaning that every R-map M1 → M2 induces an R-algebra morphism S(M1) → S(M2)] and commutes with base change [meaning that S(M ⊗R R′) = S(M) ⊗R R′ for every ring homomorphism R → R′]. These properties follow from those of the tensor algebra.
Our polynomial ring S, being a symmetric algebra, is also a graded coalgebra, and in fact a graded Hopf algebra.
Definition I.3.7 Given a commutative ring R (with 1), an R-coalgebra is an R-module A endowed with two R-morphisms

cA : A → A ⊗R A (comultiplication),   εA : A → R (counit),

making the diagrams dual to those of Definition I.3.1 commute.
It should be noticed that the diagrams occurring in the last definition are obtained by reversing arrows in those describing algebra properties.
S(M) is an R-coalgebra with respect to the following counit and comultiplication.
ε is the identity on S0(M) = R and the zero map on Si(M) for every i ≥ 1.
c is constructed from the diagonal mapping ∆ : m ∈ M → (m, m) ∈ M × M: since ∆ is R-linear, a morphism S(M) → S(M × M) of R-algebras is induced; but S(M × M) ≅ S(M) ⊗R S(M) as R-algebras, and c is defined to be the composite S(M) → S(M × M) ≅ S(M) ⊗R S(M).
Remark I.3.8 In the above, the following definition is understood: given two R-algebras A and B, the R-algebra A ⊗R B is the R-module A ⊗R B with multiplication defined by

(a1 ⊗ b1)(a2 ⊗ b2) = a1a2 ⊗ b1b2.
As for an explicit description of c in the case of S(M), since the isomorphism S(M × M) ≅ S(M) ⊗R S(M) is induced by (m1, m2) → m1 ⊗ 1 + 1 ⊗ m2, and c is a morphism of algebras, one gets
c(m1 · · · mk) = (m1 ⊗ 1 + 1 ⊗ m1) · · · (mk ⊗ 1 + 1 ⊗ mk) = Σ (mi1 · · · mih) ⊗ (mj1 · · · mj(k−h)),
the sum ranging over all pairs of strictly increasing sequences i1 < · · · < ih and j1 < · · · < jk−h such that

{i1, ..., ih} ∪ {j1, ..., jk−h} = {1, ..., k}

(including the cases in which one of the two sequences is empty).
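The index pairs in this sum are easy to enumerate (a sketch, with function names of our own choosing): each pair is determined by the subset {i1, ..., ih}, so there are 2^k summands in c(m1 · · · mk).

```python
# Enumerate the pairs of complementary strictly increasing sequences
# indexing the sum in c(m1 ... mk); for k = 3 there are 2^3 = 8 of them.
from itertools import combinations

def splittings(k):
    """All pairs (I, J) of increasing sequences with I ∪ J = {1, ..., k}."""
    full = set(range(1, k + 1))
    out = []
    for h in range(k + 1):
        for I in combinations(sorted(full), h):
            J = tuple(sorted(full - set(I)))
            out.append((I, J))
    return out

pairs = splittings(3)
assert len(pairs) == 2 ** 3
assert ((), (1, 2, 3)) in pairs          # the case of an empty sequence
assert ((1, 3), (2,)) in pairs
```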
One should remark that S(M) is a graded coalgebra, in the sense that all maps are compatible with the given gradings.
Remark I.3.9 Since the comultiplication of S(M) is obtained from the diagonal map M → M × M, it is customary to call the comultiplication of any coalgebra the diagonal map, replacing the letter c by ∆.
In order to explain what a (graded) Hopf algebra is, we need the definition of the product of coalgebras. Let A and B be two given R-coalgebras. The R-coalgebra A ⊗R B is the R-module A ⊗R B with counit εA⊗RB(a ⊗ b) = εA(a)εB(b) and with diagonal map

∆A⊗RB = (1A ⊗ τ ⊗ 1B) ◦ (∆A ⊗ ∆B),

where τ interchanges the two middle factors.
Definition I.3.10 A Hopf algebra over R is an R-module A, which is both an algebra and a coalgebra over R, satisfying the following properties:
(1) the multiplication m and the unit u are coalgebra morphisms;
(2) the diagonal map ∆ and the counit ε are algebra morphisms;
(3) there exists an R-module map sA : A → A (called the antipode) such that

m ◦ (sA ⊗ 1) ◦ ∆ = m ◦ (1 ⊗ sA) ◦ ∆ = u ◦ ε.
We leave as an exercise the verification that S(M) is indeed a graded Hopf algebra, that is, a Hopf algebra with all structure maps preserving the graded structure. The antipode s is defined as 1 on all Si(M) with i even, and as −1 on all Si(M) with i odd.
In fact, S(M) is a commutative graded Hopf algebra, meaning that it is both a commutative graded R-algebra and a cocommutative graded R-coalgebra.
A little bit of care is necessary here, for the general definition of (co-)commutativity in the graded case does not suit the intuitive ideas suggested by our polynomial ring.
Definition I.3.11 Let A = ⊕i∈N Ai and B = ⊕i∈N Bi be two graded R-modules. Let τB,A : B ⊗R A → A ⊗R B be the map defined on homogeneous elements by means of τB,A(b ⊗ a) = (−1)^{deg(b) deg(a)} a ⊗ b.
If A has the structure of a graded R-algebra, we say that it is a commutative graded R-algebra if mA ◦ τA,A = mA, that is, if ab = (−1)^{deg(a) deg(b)} ba for homogeneous a, b ∈ A.
If B has the structure of a graded R-coalgebra, we say it is a cocommutative graded R-coalgebra if τB,B ◦ ∆B = ∆B.
Remark I.3.12 The last definition also has an impact on the construction of the graded R-algebra A ⊗R B. The multiplication defined on A ⊗R B in Remark I.3.8 is the composition

A ⊗ B ⊗ A ⊗ B −1A ⊗ τ ⊗ 1B→ A ⊗ A ⊗ B ⊗ B −mA ⊗ mB→ A ⊗ B

(with τ(b ⊗ a) = a ⊗ b) and applies to non-graded algebras, as well as to graded algebras concentrated in even degrees (such as the symmetric algebras). But in the
general graded case, it is replaced by the following, slightly different, composition
A ⊗ B ⊗ A ⊗ B −1A ⊗ τB,A ⊗ 1B→ A ⊗ A ⊗ B ⊗ B −mA ⊗ mB→ A ⊗ B,

with τB,A as in the last definition. That is,

(a1 ⊗ b1)(a2 ⊗ b2) = (−1)^{deg(b1) deg(a2)} a1a2 ⊗ b1b2.
In the future, unless otherwise stated, we always adopt for graded algebras the specified graded versions of (co-)commutativity and tensor product. We also assume that the elements of Si(M) have degree 2i.
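The sign rule can be sketched mechanically (our own toy bookkeeping, not the text's notation): track elements of A ⊗ B as (coefficient, deg in A, deg in B) and multiply with the Koszul sign (−1)^{deg(b1) deg(a2)}.

```python
# Sign rule in the graded tensor product: multiply markers
# (coefficient, deg_a, deg_b) with the Koszul sign (-1)^(deg(b1)*deg(a2)).
def graded_mult(x, y):
    c1, a1, b1 = x
    c2, a2, b2 = y
    sign = (-1) ** (b1 * a2)
    return (sign * c1 * c2, a1 + a2, b1 + b2)

odd = (1, 1, 1)                                # a ⊗ b with deg(a) = deg(b) = 1
assert graded_mult(odd, odd) == (-1, 2, 2)     # the sign (-1)^(1*1) appears
even = (1, 2, 2)                               # even degrees: no sign, as for S(M)
assert graded_mult(even, even) == (1, 4, 4)
```

The last assertion illustrates why symmetric algebras, concentrated in even degrees under the convention just stated, see no signs at all.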
I.3.2 The divided power algebra
We again let F be a rank t free R-module, and let a basis of F be {x1, ..., xt}. We use E (rather than F∗) to denote the R-module HomR(F, R), the dual of F, with the basis dual to {x1, ..., xt} (the so-called dual basis of {x1, ..., xt}).
For every n ∈ N, the symmetric group Sn acts on the n-fold tensor product Tn(E) = E ⊗R · · · ⊗R E by means of σ(e1 ⊗ · · · ⊗ en) = eσ(1) ⊗ · · · ⊗ eσ(n). The elements of Tn(E) fixed by such an action are called symmetric tensors of order n and form an R-submodule of Tn(E), denoted by Dn(E).
There is an R-map, sym : ⊕n∈N Tn(E) → ⊕n∈N Dn(E), called symmetrization, defined by means of

sym(e1 ⊗ · · · ⊗ en) = Σσ∈Sn eσ(1) ⊗ · · · ⊗ eσ(n).
Proposition I.3.13 D(E) is a graded R-algebra.
Proof Given z ∈ Di(E) and z′ ∈ Dj(E), the product zz′ ∈ Di+j(E) is defined to be Σσ∈Si,j σ(z ⊗ z′), where

Si,j = {σ ∈ Si+j | σ(1) < · · · < σ(i) and σ(i + 1) < · · · < σ(i + j)}. □
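The shuffle set Si,j can be enumerated directly (a sketch, with names of our own): a shuffle is determined by the image set of the first i slots, so |Si,j| = C(i+j, i).

```python
# Enumerate the (i, j)-shuffles S_{i,j} used in the divided power product:
# permutations of {1, ..., i+j} increasing on the first i and the last j slots.
from itertools import permutations
from math import comb

def shuffles(i, j):
    return [p for p in permutations(range(1, i + j + 1))
            if list(p[:i]) == sorted(p[:i]) and list(p[i:]) == sorted(p[i:])]

assert len(shuffles(2, 2)) == comb(4, 2)       # 6 shuffles
assert len(shuffles(1, 3)) == comb(4, 1)       # 4 shuffles
```

Brute-force enumeration is fine here since only small i + j occur in examples; a production version would generate shuffles from subsets directly.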
Remark I.3.14 If e1, ..., en belong to E, then their product e1 · · · en in D(E) equals sym(e1 ⊗ · · · ⊗ en).
Definition I.3.15 D(E) is called the divided power algebra on the free R-module E.
If we assume that the elements of Dn(E) have degree 2n, then D(E) is commutative as a graded R-algebra.