(Oxford Mathematical Monographs) Giandomenico Boffi and David Buchsbaum, Threading Homology through Algebra: Selected Patterns, Oxford University Press, USA (2006)

OX FO R D M AT H E M AT I C A L M O N O G R A P H S
Series Editors

J. M. BALL W. T. GOWERS
N. J. HITCHIN L. NIRENBERG
R. PENROSE A. WILES


OX FO R D M AT H E M AT I C A L M O N O G R A P H S

Hirschfeld: Finite projective spaces of three dimensions
Edmunds and Evans: Spectral theory and differential operators
Pressley and Segal: Loop groups, paperback
Evens: Cohomology of groups
Hoffman and Humphreys: Projective representations of the symmetric groups: Q-Functions
and Shifted Tableaux
Amberg, Franciosi, and Giovanni: Products of groups
Gurtin: Thermomechanics of evolving phase boundaries in the plane
Faraut and Koranyi: Analysis on symmetric cones
Shawyer and Watson: Borel’s methods of summability
Lancaster and Rodman: Algebraic Riccati equations
Thévenaz: G-algebras and modular representation theory
Baues: Homotopy type and homology
D’Eath: Black holes: gravitational interactions
Lowen: Approach spaces: the missing link in the topology–uniformity–metric triad
Cong: Topological dynamics of random dynamical systems
Donaldson and Kronheimer: The geometry of four-manifolds, paperback
Woodhouse: Geometric quantization, second edition, paperback
Hirschfeld: Projective geometries over finite fields, second edition
Evans and Kawahigashi: Quantum symmetries of operator algebras


Klingen: Arithmetical similarities: Prime decomposition and finite group theory
Matsuzaki and Taniguchi: Hyperbolic manifolds and Kleinian groups
Macdonald: Symmetric functions and Hall polynomials, second edition, paperback
Catto, Le Bris, and Lions: Mathematical theory of thermodynamic limits: Thomas-Fermi type
models
McDuff and Salamon: Introduction to symplectic topology, paperback
Holschneider: Wavelets: An analysis tool, paperback
Goldman: Complex hyperbolic geometry
Colbourn and Rosa: Triple systems
Kozlov, Maz’ya and Movchan: Asymptotic analysis of fields in multi-structures
Maugin: Nonlinear waves in elastic crystals
Dassios and Kleinman: Low frequency scattering
Ambrosio, Fusco and Pallara: Functions of bounded variation and free discontinuity problems
Slavyanov and Lay: Special functions: A unified theory based on singularities
Joyce: Compact manifolds with special holonomy
Carbone and Semmes: A graphic apology for symmetry and implicitness
Boos: Classical and modern methods in summability
Higson and Roe: Analytic K-homology
Semmes: Some novel types of fractal geometry
Iwaniec and Martin: Geometric function theory and nonlinear analysis
Johnson and Lapidus: The Feynman integral and Feynman's operational calculus, paperback
Lyons and Qian: System control and rough paths
Ranicki: Algebraic and geometric surgery
Ehrenpreis: The radon transform
Lennox and Robinson: The theory of infinite soluble groups
Ivanov: The Fourth Janko Group
Huybrechts: Fourier-Mukai transforms in algebraic geometry
Hida: Hilbert modular forms and Iwasawa theory
Boffi and Buchsbaum: Threading homology through algebra




Threading Homology Through
Algebra: Selected Patterns
GIANDOMENICO BOFFI
Università G. d'Annunzio

DAVID A. BUCHSBAUM
Department of Mathematics, Brandeis University

CLARENDON PRESS · OXFORD
2006




Great Clarendon Street, Oxford OX2 6DP
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide in
Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam
Oxford is a registered trade mark of Oxford University Press
in the UK and in certain other countries
Published in the United States
by Oxford University Press Inc., New York
© Oxford University Press, 2006
The moral rights of the authors have been asserted
Database right Oxford University Press (maker)
First published 2006
All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
without the prior permission in writing of Oxford University Press,
or as expressly permitted by law, or under terms agreed with the appropriate
reprographics rights organization. Enquiries concerning reproduction
outside the scope of the above should be sent to the Rights Department,
Oxford University Press, at the address above
You must not circulate this book in any other binding or cover
and you must impose the same condition on any acquirer
British Library Cataloguing in Publication Data
Data available
Library of Congress Cataloging in Publication Data
Data available
Typeset by Newgen Imaging Systems (P) Ltd., Chennai, India
Printed in Great Britain
on acid-free paper by
Biddles Ltd., King’s Lynn, Norfolk
ISBN 0–19–852499–4

978–0–19–852499–1


1 3 5 7 9 10 8 6 4 2



A coloro che amo
To Betty, wife and lifelong friend.
Though she can’t identify each tree, she shares with me the delight
of walking through the forest.






PREFACE
From a little before the middle of the twentieth century, homological methods
have been applied to various parts of algebra (e.g. Lie Algebras, Associative
Algebras, Groups [finite and infinite]). In 1956, the book by H. Cartan and
S. Eilenberg, Homological Algebra [33], achieved a number of very important
results: it gave rise to the new discipline, Homological Algebra, it unified the
existing applications, and it indicated several directions for future study. Since
then, the number of developments and applications has grown beyond counting,
and there has, in some instances, even been enough time to see various methods
threading their way through apparently disparate, unrelated branches of algebra.
What we aim for in this book is to take a few homological themes (Koszul
complexes and their variations, resolutions in general) and show how these affect
the perception of certain problems in selected parts of algebra, as well as their
success in solving a number of them. The expectation is that an educated reader
will see connections between areas that he had not seen before, and will learn
techniques that will help in further research in these areas.
What we include will be discussed shortly in some detail; what we leave out
deserves some mention here. This is not a compendium of homological algebra,
nor is it a text on commutative algebra, combinatorics, or representation theory, although it makes significant contact with all of these fields. We are not
attempting to provide an encyclopedic work. As a result, we leave out vast areas
of these subjects and only select those parts that offer a coherence from the
point of view we are presenting. Even on that score we can make no claim to
completeness.
Our Chapter I, called “Recollections and Perspectives,” reviews parts of Polynomial Ring and Power Series Ring Theory, Linear Algebra, and Multilinear
Algebra, and ties these with ideas that the reader should be very familiar with.
As the title of the chapter suggests, this is not a compendium of “assumed
known” items, but a presentation from a certain perspective—mainly homological. For example, almost everyone knows about divisibility and factoriality; we
give a criterion for factoriality that ties it immediately to a homological interpretation (and one which found significant application in solving a long-open
question in regular local ring theory).
The next three chapters of this book pull together a group of classical results,
all coming from and generalizing the techniques associated with the Koszul complex. Perhaps the major result in Chapter II, on local rings, is the homological
characterization of a regular local ring by means of its global dimension. Section
II.6 includes a proof of the factoriality of regular local rings which is much closer
to the original one than the Kaplansky proof that is frequently quoted.





We have also included a section on multiplicity theory, mainly to carry through
the theme of the Koszul complex, and a section on the Homological Conjectures,
as they provide a good roadmap for still open problems as well as a historical
guide through much of what has been going on in the area this book is sketching.
Chapter III deals with a class of complexes developed with the following aim in
view: to associate a complex to an arbitrary finite presentation matrix of a module (the Koszul complex does this for a cyclic module), and to have that complex
play the same role in the proof of the generalized Cohen–Macaulay Theorem that
the Koszul complex plays in the classical case. We have made an explicit connection, in terms of a chain homotopy, between an older, “fatter” class of complexes,
and a slimmer, more “svelte” class. We have also included a last section in which
we define a generalized multiplicity which has found interesting applications, of
late. Chapter IV applies some of the properties of these complexes to a systematic study of finite free resolutions, ending in a “syzygy-theoretic” proof of the
unique factorization theorem (or “factoriality”) in regular local rings.
The last three chapters and the Appendix not only focus on determinantal
ideals and characteristic-free representation theory, but also involve a good deal
of combinatorics. Chapter V employs the homological techniques developed in
the previous part in the study of a number of types of determinantal ideals,
namely Pfaffians and powers of Pfaffians. In Chapter VI we develop the basics
of a characteristic-free representation theory of the general linear group (which
has already made its appearance in earlier chapters). Because of the generality
aspired to, heavy use is made of letter-place methods, an idea used more by
combinatorialists than by commutative algebraists. As some of the proofs require
more detail than is probably helpful for those encountering this material for the
first time, we decided to place these details in a separate Appendix: Appendix A.
Much of the development of this chapter rests heavily on the notion of straight
tableaux introduced by B. Taylor. In Chapter VII we first present a number of
results that immediately follow from this more general theory. Then examples
are given to indicate what further use has been made of it, and in most cases
references are given to detailed proofs. It is in this part of the chapter that we see
the important influence of the work of A. Lascoux in characteristic zero. We give
some of the background to the Hashimoto example of the dependence of the Betti
numbers of determinantal ideals on characteristic. We deal with resolutions of
Weyl modules in general, and skew-hooks in particular, and we make connections
with intertwining numbers, Z-forms, and several other open problems.
The intended readership of this book ranges from third-year and above graduate students in mathematics, to the accomplished mathematician who may or
may not be in any of the fields touched on, but who would like to see what developments have taken place in these areas and perhaps launch himself into some
of the open problems suggested. Because of this assumption, we are allowing
ourselves to depend heavily on material that can be found in what we regard as
comprehensive and accessible texts, such as the textbook by D. Eisenbud. We
may at times, though, include a proof of a result here even if it does appear in
such a text, if we think that the method of proof is typical of many of that kind.



CONTENTS

I    Recollections and Perspectives
     I.1 Factorization
         I.1.1 Factorization domains
         I.1.2 Polynomial and power series rings
     I.2 Linear algebra
         I.2.1 Free modules
         I.2.2 Projective modules
         I.2.3 Projective resolutions
     I.3 Multilinear algebra
         I.3.1 R[X1, . . . , Xt] as a symmetric algebra
         I.3.2 The divided power algebra
         I.3.3 The exterior algebra

II   Local Ring Theory
     II.1 Koszul complexes
     II.2 Local rings
     II.3 Hilbert–Samuel polynomials
     II.4 Codimension and finitistic global dimension
     II.5 Regular local rings
     II.6 Unique factorization
     II.7 Multiplicity
     II.8 Intersection multiplicity and the homological conjectures

III  Generalized Koszul Complexes
     III.1 A few standard complexes
         III.1.1 The graded Koszul complex and its "derivatives"
         III.1.2 Definitions of the hooks and their explicit bases
     III.2 General setup
         III.2.1 The fat complexes
         III.2.2 Slimming down
     III.3 Families of complexes
         III.3.1 The "homothety homotopy"
         III.3.2 Comparison of the fat and slim complexes
     III.4 Depth-sensitivity of T(q; f)
     III.5 Another kind of multiplicity

IV   Structure Theorems for Finite Free Resolutions
     IV.1 Some criteria for exactness
     IV.2 The first structure theorem
     IV.3 Proof of the first structure theorem
         IV.3.1 Part (a)
         IV.3.2 Part (b)
     IV.4 The second structure theorem

V    Exactness Criteria at Work
     V.1 Pfaffian ideals
         V.1.1 Pfaffians
         V.1.2 Resolution of a certain pfaffian ideal
         V.1.3 Algebra structures on resolutions
         V.1.4 Proof of Part 2 of Theorem V.1.8
     V.2 Powers of pfaffian ideals
         V.2.1 Intrinsic description of the matrix X
         V.2.2 Hooks again
         V.2.3 Some representation theory
         V.2.4 A counting argument
         V.2.5 Description of the resolutions
         V.2.6 Proof of Theorem V.2.4

VI   Weyl and Schur Modules
     VI.1 Shape matrices and tableaux
         VI.1.1 Shape matrices
         VI.1.2 Tableaux
     VI.2 Weyl and Schur modules associated to shape matrices
     VI.3 Letter-place algebra
         VI.3.1 Positive places and the divided power algebra
         VI.3.2 Negative places and the exterior algebra
         VI.3.3 The symmetric algebra (or negative letters and places)
         VI.3.4 Putting it all together
     VI.4 Place polarization maps and Capelli identities
     VI.5 Weyl and Schur maps revisited
     VI.6 Some kernel elements of Weyl and Schur maps
     VI.7 Tableaux, straightening, and the straight basis theorem
         VI.7.1 Tableaux for Weyl and Schur modules
         VI.7.2 Straightening tableaux
         VI.7.3 Taylor-made tableaux, or a straight-filling algorithm
         VI.7.4 Proof of linear independence of straight tableaux
         VI.7.5 Modifications for Schur modules
         VI.7.6 Duality
     VI.8 Weyl–Schur complexes

VII  Some Applications of Weyl and Schur Modules
     VII.1 The fundamental exact sequence
     VII.2 Direct sums and filtrations for skew-shapes
     VII.3 Resolution of determinantal ideals
         VII.3.1 The Lascoux resolutions
         VII.3.2 The submaximal minors
         VII.3.3 Z-forms
     VII.4 Arithmetic considerations
         VII.4.1 Intertwining numbers
         VII.4.2 Z-forms again
     VII.5 Resolutions revisited; the Hashimoto counterexample
     VII.6 Resolutions of Weyl modules
         VII.6.1 The bar complex
         VII.6.2 The two-rowed case
         VII.6.3 A three-rowed example
         VII.6.4 Resolutions of skew-hooks
         VII.6.5 Comparison with the Lascoux resolutions

A    Appendix for Letter-Place Methods
     A.1 Theorem VI.3.2, Part 1: the double standard tableaux generate
     A.2 Theorem VI.3.2, Part 2: linear independence of double standard tableaux
     A.3 Modifications required for Theorems VI.3.3 and VI.3.4
     A.4 Modifications required for Theorem VI.8.4

References

Index


I
RECOLLECTIONS AND PERSPECTIVES

This chapter is neither a collection of results which we assume to be known nor
the place to prove some results probably unknown to the reader but needed in
what follows. Although it resembles a little of both, it is essentially a selection of topics, some elementary, some more advanced, which we feel are adequate,
or even necessary, to prepare the ground for the material of the chapters to come.
Since it is almost impossible to tell which “basic” material is truly universally
known, and which is not, we can only assure the reader that those terms in this
chapter which are unfamiliar can be easily found in the book by D. Eisenbud, [41].
I.1 Factorization
In this section, we deal with the basic topic of divisibility. In doing so, we review
a few properties of some rings, which are of importance to us. For more details,
we refer the reader to Reference [87].
I.1.1 Factorization domains
Let R be an integral domain, that is, a commutative ring (with 1) having no
zero divisors. Given a and b in R, we say that a is a divisor of b (written a | b)
if b = ac for some c in R. If a | b and b | a, then b = ua for some unit u, and
a and b are called associates. Being an associate is an equivalence relation. An element a is a
proper divisor of b if it divides b but is neither a unit nor an associate of b.
In terms of ideals, a | b means (b) ⊆ (a); u being a unit is equivalent to
(u) = R; a and b being associates means (a) = (b); and a properly divides b if and only
if (b) ⊂ (a).
Definition I.1.1 An element c ∈ R is called a greatest common divisor
(gcd) of a and b in R if c | a, c | b, and c is divisible by every d such that d | a
and d | b. An element c ∈ R is called a least common multiple (lcm) of
a and b in R if a | c, b | c, and c divides every d such that a | d and b | d.
Given a and b in R, gcd(a, b) may or may not exist. If it does, it is unique up
to associates. Similarly for lcm(a, b).
Remark I.1.2 If a, b ∈ R −{0}, and lcm(a, b) exists, then also gcd(a, b) exists,
and lcm(a, b) · gcd(a, b) = ab, up to units. If a, b ∈ R − {0}, and gcd(a, b)
exists, lcm(a, b) may not exist (cf. Example I.1.11 later on). However, if gcd(a, b)





exists for all choices of a and b in R − {0}, then lcm(a, b) exists for all choices
of a and b in R − {0}.
If 1 is a greatest common divisor for a and b, we will say that a and b are
coprime.
In terms of ideals, c being a common divisor of a and b means (a, b) ⊆ (c),
while c being a common multiple of a and b means (a) ∩ (b) ⊇ (c). The equality
(a, b) = (c) implies gcd(a, b) = c (but not conversely), while (a) ∩ (b) = (c) is
equivalent to lcm(a, b) = c.
Definition I.1.3 A non-zero, non-invertible element a ∈ R is called irreducible if it does not have any proper divisors. A non-zero, non-invertible element
a ∈ R is called prime if, whenever a | bc, then either a | b or a | c.
In terms of ideals, a is irreducible if and only if (a) is maximal among the
proper principal ideals of R; a is prime if and only if (a) is a prime ideal. If a is
prime, then it is irreducible. But the converse does not hold.
Definition I.1.4 An integral domain R is called a factorization domain if
every non-zero, non-invertible a ∈ R can be expressed as a product of irreducible
elements. A factorization domain is called a unique factorization domain
(UFD) if every factorization into irreducibles is unique up to permutation of
the factors and multiplication of the factors by units.
In terms of ideals, an integral domain R is a factorization domain if and only if
there is no strictly ascending infinite chain of principal ideals in R. In particular,
every principal ideal domain (PID) is a factorization domain: given any ascending
chain of ideals, the union of these ideals is again an ideal, hence principal, and its
generator already lies in some ideal of the sequence, so the chain stabilizes.
In fact, a PID is always a UFD, by part (ii) of the following proposition.
Proposition I.1.5 Let R be a factorization domain. The following are
equivalent.

(i) R is a UFD.
(ii) lcm(a, b) exists for every choice of a, b in R.
(iii) Every irreducible element is prime.
Proof (i) ⇒ (ii) As in Z, one expresses a and b as products of powers (with
non-negative exponents) of suitable irreducibles (the same ones for a and b):
a = ∏i fi^{si} and b = ∏i fi^{ti}, say. Then lcm(a, b) = ∏i fi^{max(si, ti)}.
(ii) ⇒ (iii) The gcd exists in R since the lcm does. If c = gcd(a, b), then
cd = gcd(ad, bd) for every d. For, given any two ideals 𝔞 and 𝔟, d(𝔞 ∩ 𝔟) = d𝔞 ∩ d𝔟;
hence d·lcm(a, b) = lcm(ad, bd); using that ad·bd = gcd(ad, bd)·lcm(ad, bd) and
ab = c·lcm(a, b), we are through.
Let an irreducible element c divide ab, and assume that c ∤ b: we claim that
c | a. Since c is irreducible and c ∤ b, b and c are coprime, that is, 1 = gcd(b, c).




It follows that a = gcd(ab, ac); since c divides ab by assumption, c must divide
gcd(ab, ac), as claimed.
(iii) ⇒ (i) As in the ring of integers.
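To make the exponent recipe in the proof of (i) ⇒ (ii) concrete, here is a small Python sketch (an illustration over Z, not part of the original text): it factors two integers over a common set of irreducibles and reads gcd and lcm off the minimum and maximum exponents, so that gcd(a, b) · lcm(a, b) = ab, as in Remark I.1.2.

```python
def factorize(n):
    """Trial-division factorization of a positive integer, as {prime: exponent}."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def gcd_lcm(a, b):
    """gcd and lcm read off the min/max exponents over a common set of irreducibles."""
    fa, fb = factorize(a), factorize(b)
    g = l = 1
    for p in set(fa) | set(fb):
        g *= p ** min(fa.get(p, 0), fb.get(p, 0))
        l *= p ** max(fa.get(p, 0), fb.get(p, 0))
    return g, l

g, l = gcd_lcm(12, 18)      # 12 = 2^2 * 3, 18 = 2 * 3^2
print(g, l)                 # 6 36
print(g * l == 12 * 18)     # True
```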

Corollary I.1.6 If R is a UFD, then hd_R(R/(a, b)) ≤ 2 for all a and b in
R. (Here hd stands for homological dimension, sometimes called projective
dimension and denoted by pd.)
Proof If a = 0 = b, the quotient ring is R and the homological dimension is 0.
If a = 0 and b ≠ 0, the quotient ring is R/(b) and we consider the exact
complex of R-modules:

0 → K → R --b--> R → R/(b) → 0,

where K stands for the kernel of the map given by multiplication by b. Since R
is a domain, cb = 0 implies c = 0, and K = (0). Hence hd_R(R/(b)) ≤ 1.
If both a and b are different from 0, we consider the following exact complex:

0 → K → R² --(a, b)--> R → R/(a, b) → 0,
where K stands for the kernel of the map given by the matrix (a, b). We want
to show that K is free over R, as this will give us our result on the homological
dimension of R/(a, b).
If (a) : b denotes the ideal {r ∈ R | rb ∈ (a)}, clearly K = (a) : b, because
rb ∈ (a) if and only if rb = sa for some (unique) s ∈ R, that is, (−s)a + rb = 0.
As R-modules, (a) : b ≅ b((a) : b), and obviously b((a) : b) = (a) ∩ (b).
By the previous proposition, (a) ∩ (b) is a principal ideal; hence it is a rank 1
free R-module, and so is K.

Because we have not made significant use of homological dimension, we will
put off giving a formal definition of that term here; the reader will find it in the
next section (Definition I.2.25). The crucial fact that we needed in the proof of
the above corollary was just that K is free.
We will see in Chapter II that if R is a noetherian local ring, then R is a UFD
if and only if hd_R(R/(a, b)) ≤ 2 for all a and b in R. This will lead to proving

that regular local rings are UFD.
Remark I.1.7 We have noticed in the proof of Proposition I.1.5 that if R is
a UFD, then c = gcd(a, b) implies cd = gcd(ad, bd) for every d. It follows that
gcd(a, b) = 1 and a | bc together imply a | c. In terms of ideals, this means that if
a and b are coprime in R, then b is a non-zero divisor in R/(a), although R/(a)
may no longer be an integral domain. Conversely, if b is a non-zero divisor in
R/(a), then gcd(a, b) = 1; for otherwise a/gcd(a, b) would kill b in R/(a). This
setup will be generalized in Chapter II by the notion of M-sequence.
Remark I.1.8 The complex

0 → R² --(a, b)--> R → R/(a, b) → 0

is a truncation of the following Koszul complex (to be described in Chapter II):

0 → R --(−b, a)^T--> R² --(a, b)--> R → R/(a, b) → 0.

Does the latter complex coincide with the resolution of R/(a, b), a ≠ 0 ≠ b, described
in the proof of Corollary I.1.6? Recalling the identification K = (a) : b,

im((−b, a)^T) = {(s, t) ∈ R² | s = −br, t = ar for some r ∈ R}

corresponds to (a) ⊆ (a) : b and we are asking whether (a) = (a) : b. We claim
that equality holds if and only if gcd(a, b) = 1. For, by the previous remark
gcd(a, b) = 1 means that b is a non-zero divisor in R/(a), and so (a) : b vanishes
in R/(a).
When unique factorization of elements does not hold in our integral domain
R, we might relax the condition a bit and ask: what kind of ring allows for the
unique factorization of principal ideals into prime ideals? We do not know the
answer to that, but we can ask for a generalization of principal ideal domains,
namely what kind of ring, R, (besides a PID) may have unique factorization of
ideals into products of prime ideals?
Actually, there is a name for such a ring: Dedekind domain. It turns out
that this condition is equivalent to a combination of other properties, namely
that of being noetherian, normal (or integrally closed), and being of dimension

one. Some of these terms will be discussed in great detail in Chapter II, but we
briefly point out here that one of the many characterizations of a noetherian
ring is that every ideal is finitely generated (another one is that no infinite
strictly ascending chain of ideals can exist). This is certainly true of a PID, so
every PID is noetherian. To say that the dimension of a ring is equal to one turns
out to mean (see Section II.3) that every non-zero prime ideal is maximal, and
the observations immediately preceding Definition I.1.3 imply that in a PID all
non-zero prime ideals are indeed maximal. Finally, it is clear that every PID (in fact, every
UFD) is normal, so that we know every PID is a noetherian, normal domain of
dimension one. However, these three properties do not quite characterize a PID;
rather, there is the following theorem.
Theorem I.1.9 For an integral domain R, the following are equivalent.

(i) R is a noetherian normal domain of dimension 1.
(ii) Every proper ideal 𝔞 of R can be expressed as a product of prime ideals,
in a unique way, up to permutations of the factors.

Proof Cf., for example, Reference [87, chapter 5, section 6, theorem 13, p. 275].


So we are led to make the following definition.




Definition I.1.10 An integral domain is called a Dedekind domain if it
satisfies the equivalent conditions of Theorem I.1.9.
The ring of algebraic integers in any algebraic number field is always a Dedekind domain. Some very accessible ones are the rings Z + Z√n with n a squarefree
element of Z − {0, 1} such that n is not congruent to 1 modulo 4 (this latter
condition ensuring that this is the ring of integers in Q(√n)).
The family of Dedekind domains properly includes the family of principal ideal
domains, for Dedekind domains may have ideals which are not principal.


Example I.1.11 Let R = Z + Z√−5, a = 1 + √−5, b = 3. gcd(a, b) exists and
equals 1, for if s + t√−5 divides both a and b, then its norm N(s + t√−5) =
s² + 5t² must divide both 6 and 9, hence their gcd 3; but s² + 5t² | 3 forces t = 0
and s = ±1. If 𝔞 = (a, b) were a principal ideal, gcd(a, b) = 1 would imply 𝔞 = R.
But √−5 ∉ 𝔞, since otherwise √−5 = αa + βb would give 5 = 6N(α) + 9N(β)
and 3 should divide 5, a contradiction. Finally, notice that lcm(a, b) does not
exist: if s + t√−5 were a lcm(a, b), s² + 5t² would be divisible by both 6 and 9,
hence by lcm(6, 9) = 18; moreover, since both 6 = (1 + √−5)(1 − √−5) = 2 · 3
and 3 + 3√−5 are common multiples of a and b, s + t√−5 should divide both,
so s² + 5t² should divide gcd(36, 54) = 18; thus s² + 5t² = 18, which is impossible.
If one had simply wanted to prove that this ring is not a PID, it would have
sufficed to point out that 6 = 2 × 3 = (1 + √−5)(1 − √−5), show that each of
these factors is irreducible, and conclude that this contradicts UFD, hence PID.
The rather longer discussion above, though, actually produces a non-principal
ideal.
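The norm computations in Example I.1.11 are easy to check by brute force; the following Python sketch (an illustration, not part of the original text) searches for elements s + t√−5 of a given norm s² + 5t².

```python
def norm_solutions(n, bound=20):
    """All (s, t) with s^2 + 5*t^2 == n; a box search suffices,
    since |s| and |t| are bounded in terms of n."""
    return [(s, t) for s in range(-bound, bound + 1)
                   for t in range(-bound, bound + 1)
                   if s * s + 5 * t * t == n]

# A common divisor of a = 1 + sqrt(-5) and b = 3 has norm dividing gcd(6, 9) = 3:
print(norm_solutions(3))    # []  (no element of norm 3)
print(norm_solutions(1))    # [(-1, 0), (1, 0)]  (only the units, so gcd(a, b) = 1)

# A least common multiple of a and b would need norm exactly 18:
print(norm_solutions(18))   # []  (impossible, so lcm(a, b) does not exist)
```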
While it may be slightly disappointing that there are Dedekind domains that
are not principal ideal domains, it is a well-known property of Dedekind domains
that all their ideals can be generated by at most two elements (cf., e.g. Reference [87, chapter 5, section 7, theorem 17, p. 279]). So at least we are not too
far off the mark.
If R is any commutative ring, the collection of prime ideals of R is called the
spectrum of R, written Spec(R). The set of maximal ideals of R is called the
maximal spectrum, and is denoted by Max(R).
By Theorem I.1.9, given a Dedekind domain R, Spec(R) = {0} ∪ Max(R).

Proposition I.1.12 If R is a Dedekind domain such that |Max(R)| < ∞, then
R is a PID.

Proof Let Max(R) = {m1, m2, . . . , mt}. For every i = 1, 2, . . . , t there exists
an element ai ∈ mi such that ai ∉ mi² and ai ∉ mj for j ≠ i (by the Chinese
Remainder Theorem, Reference [87], chapter 5, section 7, theorem 17, p. 279).
Then (ai) = mi, and the principality of all maximal ideals implies the principality
of every other ideal (by part (ii) of Theorem I.1.9).

If we localize a Dedekind domain at a non-zero prime m, R_m is still Dedekind,
since condition (i) of Theorem I.1.9 is preserved by localization. (We assume the
reader is familiar with the process of forming rings of quotients with respect to
a multiplicative subset. Essentially this is just the "fractions" having arbitrary
elements of the ring on top, and elements of the multiplicative subset as denominators. All the bells and whistles of localization are explained in Reference [41],
section 2.1.) In fact, R_m is a PID (by the last proposition), because 0 and mR_m
are its only prime ideals. If mR_m = (a) for some a ∈ R_m, then every other ideal of
R_m is of type (a^n) for some positive n.
Notice that since R_p is a PID for every p ∈ Spec(R), unique factorization of
elements is locally true for every Dedekind domain.
Local Dedekind domains are known as discrete valuation rings.
I.1.2 Polynomial and power series rings
Given any commutative ring (with 1), say R, a (formal) power series in t
indeterminates over R, t ∈ N − {0}, is a function f : N^t → R. Power series can be
added and multiplied. Addition is simply addition of functions. Multiplication
is defined by (fg)(n1, . . . , nt) = Σ_{mi+li=ni} f(m1, . . . , mt) g(l1, . . . , lt). The set of
all power series in t indeterminates over R turns out to be a commutative ring
(with 1) with respect to the indicated operations. The customary notation for
this ring is R[[X1, . . . , Xt]], for one identifies f : N^t → R with the formal sum
Σ f(n1, . . . , nt) X1^{n1} · · · Xt^{nt}.
In particular, Σ an X^n stands for the function f : N → R such that f(n) = an
for every n ∈ N.
Clearly, R[[X1, . . . , Xt]] = (R[[X1, . . . , Xt−1]])[[Xt]].
Given R as above, a polynomial in t indeterminates over R is a power
series f : N^t → R which is zero almost everywhere. The corresponding symbol
Σ f(n1, . . . , nt) X1^{n1} · · · Xt^{nt} is usually meant to be restricted to the
(finitely many) non-zero values f(n1, . . . , nt), thereby giving a finite formal sum.
Polynomials form a subring of the ring of power series, denoted by R[X1, . . . , Xt].
Clearly, R[X1, . . . , Xt] = (R[X1, . . . , Xt−1])[Xt].
Often one writes R[[X]] and R[X] instead of R[[X1, . . . , Xt]] and
R[X1, . . . , Xt], meaning that X = {X1, . . . , Xt}.
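The definitions above are easy to model directly: a finitely supported function f : N^t → R is just a dictionary from exponent tuples to coefficients, and the product is the stated convolution. A minimal Python sketch (an illustration, not part of the original text):

```python
from itertools import product

def convolve(f, g):
    """Product of two finitely supported maps N^t -> R, given as dicts
    {exponent_tuple: coefficient}: (fg)(n) = sum over m + l = n of f(m)*g(l)."""
    h = {}
    for m, l in product(f, g):
        n = tuple(mi + li for mi, li in zip(m, l))
        h[n] = h.get(n, 0) + f[m] * g[l]
    return {n: c for n, c in h.items() if c != 0}

# (1 + X1)(1 - X1) = 1 - X1^2 in R[X1, X2], i.e. t = 2:
f = {(0, 0): 1, (1, 0): 1}
g = {(0, 0): 1, (1, 0): -1}
print(convolve(f, g))   # {(0, 0): 1, (2, 0): -1}
```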
The following proposition collects some properties valid when |X| = t = 1.
Proposition I.1.13 Let R be a commutative ring (with 1).

(i) f ∈ R[X] is invertible in R[X] if and only if a0 is invertible in R and all
other coefficients are nilpotent in R (as usual, we assume f = Σ_{i=0}^{n} ai X^i;
an element of a ring is nilpotent if some power of it is equal to 0).
(ii) f ∈ R[[X]] is invertible in R[[X]] if and only if a0 is invertible in R (as
usual, we assume f = Σ_{i=0}^{∞} ai X^i).
(iii) R has no zero divisors if and only if R[X] has no zero divisors if and only
if R[[X]] has no zero divisors.





Proof We only prove (ii), not because it is harder, but because we need it soon.
If f is invertible in R[[X]], there exists g ∈ R[[X]], say $g = \sum_{i=0}^{\infty} b_i X^i$, such that fg = 1. Hence
$$fg = a_0b_0 + (a_0b_1 + a_1b_0)X + (a_0b_2 + a_1b_1 + a_2b_0)X^2 + \cdots = 1$$
forces a0 b0 = 1, and a0 is a unit in R.
Conversely, assume that a0 is a unit in R and look for some g as above, such that fg = 1. The following equalities must be satisfied:
$$a_0b_0 = 1, \qquad a_0b_1 + a_1b_0 = 0, \qquad a_0b_2 + a_1b_1 + a_2b_0 = 0, \qquad \dots$$
The invertibility of a0 allows us to solve these equations for b0, b1, b2, . . . one after the other, and we are done.
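The recursion in this proof can be carried out mechanically. The sketch below (our own names; exact coefficients over Q via `fractions.Fraction`) computes the first n coefficients of 1/f from the equations a₀b₀ = 1 and a₀bₖ = −(a₁b₍ₖ₋₁₎ + · · · + aₖb₀).

```python
# A sketch of the recursive inversion used in the proof of I.1.13(ii),
# assuming a_0 is invertible (here: a non-zero rational).
from fractions import Fraction

def series_inverse(a, n):
    """First n coefficients of 1/f, where a lists the coefficients of f."""
    b = [Fraction(1) / a[0]]                       # a_0 b_0 = 1
    for k in range(1, n):
        # a_0 b_k + a_1 b_{k-1} + ... + a_k b_0 = 0, solved for b_k
        s = sum(a[i] * b[k - i] for i in range(1, min(k, len(a) - 1) + 1))
        b.append(-s / a[0])
    return b

# f = 1 - X has inverse 1 + X + X^2 + ... (the geometric series)
a = [Fraction(1), Fraction(-1)]
print([int(c) for c in series_inverse(a, 5)])      # [1, 1, 1, 1, 1]
```

The computation works over any commutative ring in which a₀ has been given an inverse; rationals are used here only for convenience.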

The last statement of Proposition I.1.13 hints at a general question: what
properties of R are inherited by R[X] and R[[X]]?
For instance, if R is a Euclidean domain (i.e. a domain where one has division
with remainder), the domain R[X] need not be Euclidean.
Proposition I.1.14 If R is noetherian, then both R[X] and R[[X]] are noetherian.

Proof For R[X], this is the Hilbert basis theorem (cf., for example, Reference [87], chapter 4, section 1, theorem 1, p. 201). For R[[X]], there is a proof very much in the spirit of the proof of the Hilbert basis theorem (cf., e.g. Reference [87], chapter 7, section 1, theorem 4, p. 138).

Corollary I.1.15 If R is noetherian, then both R[X1, . . . , Xt] and R[[X1, . . . , Xt]] are noetherian.

When R is a noetherian domain, both R[X1, . . . , Xt] and R[[X1, . . . , Xt]] (being noetherian domains) are factorization domains: they cannot contain any strictly ascending infinite chain of principal ideals. This remark leads to the following question: if R is a UFD, is it true that R[X1, . . . , Xt] and R[[X1, . . . , Xt]] are UFDs?
Unlike Proposition I.1.14, the answer is not uniform: we will prove in a moment that R[X1, . . . , Xt] does inherit the property of being a UFD from R; but R[[X1, . . . , Xt]] may not be a UFD. The first counterexample was given by P. Samuel in 1961 (see [77]).
Theorem I.1.16 If R is a UFD, then R[X1, . . . , Xt] is a UFD.
Proof By induction on t, it suffices to show that R[X] is a UFD. Since we
already know that R[X] is a factorization domain, part (ii) of Proposition I.1.5
says that it is enough to prove that a lcm(f, g) exists for any two polynomials
f and g in R[X].
Let Q denote the field of quotients of R. Since Q is a field, Q[X] is a Euclidean

domain, hence a PID, hence a UFD. So a lcm(f, g) certainly exists in Q[X].

Call it h. Clearly, h can be expressed as h = c(h) · h′, where h′ ∈ R[X] has coprime coefficients, while c(h) ∈ Q.
Write f = c(f) · f′ and g = c(g) · g′, where f′ and g′ are assumed to have coprime coefficients. Since R is a UFD by hypothesis, a lcm(c(f), c(g)) exists in R. Call it c. Then c · h′ is a lcm(f, g) in R[X], as required.
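The content/primitive-part factorization used in this proof is concrete over R = Z: c(f) is the gcd of the coefficients, and f = c(f) · f′ with f′ having coprime coefficients. The sketch below (helper names are ours) also checks on one example the multiplicativity of contents, c(fg) = c(f)c(g), a fact of the same flavour (Gauss's lemma) that underlies such arguments.

```python
# Contents of integer polynomials, represented as coefficient lists
# (lowest degree first).  Purely illustrative.
from math import gcd
from functools import reduce

def content(f):
    """gcd of the coefficients of f in Z[X]."""
    return reduce(gcd, (abs(c) for c in f))

def poly_mul(f, g):
    prod = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += a * b
    return prod

f = [6, 4, 2]        # 6 + 4X + 2X^2, content 2
g = [9, 3]           # 9 + 3X,        content 3
print(content(f), content(g), content(poly_mul(f, g)))   # 2 3 6
```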
Although a similar theorem does not hold for R[[X1, . . . , Xt]], we have the following partial result.

Proposition I.1.17 If R is a field, then R[[X1, . . . , Xt]] is a UFD.

Proof We cannot reduce to the case t = 1. But since we already know that R[[X1, . . . , Xt]] is a factorization domain, it suffices to show that every irreducible element generates a (principal) prime ideal (cf. part (iii) of Proposition I.1.5). This can be accomplished by induction on t, using the statement of the previous theorem (cf., e.g. Reference [87], chapter 7, section 1, theorem 6, p. 148).
We give another property of K[[X1, . . . , Xt]], when K is a field.
Proposition I.1.18 If K is a field, then K[[X1, . . . , Xt]] is a local ring with maximal ideal m = (X1, . . . , Xt).

Proof The proof of part (ii) of Proposition I.1.13 works word for word in every ring (R[[X1, . . . , Xs−1]])[[Xs]]. Since in our case R is a field, the non-units of K[[X1, . . . , Xt]] are the elements with zero constant term. That is, (X1, . . . , Xt) consists of all the non-invertible elements of K[[X1, . . . , Xt]].
When t = 1, K[[X]] is in fact a discrete valuation ring (= local Dedekind
domain), hence a PID (cf. Proposition I.1.12), for it is not hard to check that
every proper ideal in K[[X]] is a power of m = (X).
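The discrete-valuation structure of K[[X]] can be seen numerically on truncated series: every non-zero element factors as (unit) · X^v, where v is the order of vanishing, and the order of a product is the sum of the orders. The helpers below are our own sketch, with exact rational coefficients standing in for a field K.

```python
# X-adic valuation on truncated power series over Q (a stand-in for K).
from fractions import Fraction

def order(f):
    """Index of the first non-zero coefficient (the X-adic valuation)."""
    for i, c in enumerate(f):
        if c != 0:
            return i
    raise ValueError("zero (truncated) series")

def mul_trunc(f, g, n):
    """Product of two series, truncated to the first n coefficients."""
    prod = [Fraction(0)] * n
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < n:
                prod[i + j] += a * b
    return prod

f = [Fraction(0), Fraction(0), Fraction(3), Fraction(1)]   # 3X^2 + X^3
g = [Fraction(0), Fraction(5), Fraction(2)]                # 5X + 2X^2
print(order(f), order(g), order(mul_trunc(f, g, 8)))       # 2 1 3
```

Additivity of the order on products reflects exactly the fact that every proper ideal of K[[X]] is a power of (X).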
I.2 Linear algebra
In this section we deal with linear algebra over a commutative ring, not just
over a field. In doing so, we review some basics of homological algebra. For more
details, we refer the reader to References [15], [33], and [41].
I.2.1 Free modules
Let R be a commutative ring (with 1). An R-module is an immediate generalization of a vector space. That is, if K is a field and V a vector space over K, we
know that V is an abelian group, K acts on V , and this action satisfies certain
conditions. One notices that the conditions in no way make use of the fact that
K is a field; thus we may replace K by the commutative ring R, write M for V ,
and get the definition for a module M over the ring R.

The usual definitions of linearly independent subset, linearly dependent subset, generators, submodule, submodule generated by a subset, that are used for
vector spaces apply mutatis mutandis to R-modules. The difference, as we will
see, lies in the fact that our base ring is not in general a field; thus such things
as the existence of a basis for every vector space do not hold true for modules

over arbitrary rings. (Recall that a basis of a module is a linearly independent subset which generates the module.) Yet the existence of maximal linearly
independent subsets of a module is proved in exactly the same way as is done
for vector spaces. It may be, however, that the empty set is a maximal linearly independent subset of a module, but the module is not necessarily the zero
module.
For example, the abelian group Z/(2), considered as a Z-module, has two elements, but its only maximal linearly independent subset is the empty set. For {0} is not independent, and {1} is not independent because 2 · 1 = 0.
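This example is small enough to verify by brute force: every element of the Z-module Z/2Z is killed by the non-zero scalar 2, so no singleton is linearly independent.

```python
# Concrete check: in the Z-module Z/2Z, each element m satisfies 2*m = 0,
# so neither {0} nor {1} is a linearly independent subset.
elements = [0, 1]
killed = all((2 * m) % 2 == 0 for m in elements)
print(killed)   # True
```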
Thus, while for vector spaces we have the fact that a maximal independent
subset is a basis for (hence generates) the vector space, this is no longer the case
for general modules.
Definition I.2.1 An R-module M is called free if it has a basis.

We note immediately that the zero module is free (its basis is the empty set). The following result shows that the basis of a free module can have any cardinality.
Proposition I.2.2 Given any non-empty set I, there is a free R-module with
basis in one-to-one correspondence with I.
Proof Let M be the set {f : I → R | f is zero almost everywhere}. It is
an R-module with respect to the operations (f1 + f2 )(i) = f1 (i) + f2 (i) and
(rf )(i) = rf (i). Clearly M has an R-basis {fi }i∈I , where fi stands for the map
sending i to 1 and all other elements of I to 0.
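The construction in this proof is easy to model directly: elements of the free module are finitely supported functions I → R, stored here as dictionaries that omit the zeros. The ring R = Z and the index set {'x', 'y', 'z'} below are our own illustrative choices.

```python
# A minimal model of the free module of Proposition I.2.2 over R = Z.

def add(f1, f2):
    """Pointwise sum of two finitely supported functions."""
    out = {i: f1.get(i, 0) + f2.get(i, 0) for i in set(f1) | set(f2)}
    return {i: c for i, c in out.items() if c != 0}

def scale(r, f):
    """Scalar multiple r*f."""
    return {i: r * c for i, c in f.items() if r * c != 0}

def basis(i):
    """The basis element f_i: sends i to 1 and every other index to 0."""
    return {i: 1}

m = add(scale(2, basis('x')), scale(-3, basis('z')))
print(sorted(m.items()))   # [('x', 2), ('z', -3)]
```

Every element is, visibly, a unique finite Z-linear combination of the basis elements fᵢ, which is exactly the freeness assertion.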

Homomorphisms of R-modules, often called R-maps, are defined as in the
case of vector spaces.
The free module built in the proof of Proposition I.2.2 is canonically R-isomorphic to ⊕_{i∈I} R_i, where R_i is a copy of the R-module R for every i ∈ I. The basis of ⊕_{i∈I} R_i corresponding to {f_i}_{i∈I} in this isomorphism is called the canonical basis of ⊕_{i∈I} R_i. When I is finite, say |I| = t, we write R^t instead of ⊕_{i∈I} R_i.
Remark I.2.3 A free R-module M having a finite basis B cannot have an infinite basis B′. For every element of B can be expressed as a linear combination of finitely many elements of B′, and if C is the (finite, linearly independent) set of all the elements of B′ involved in the expressions of the elements of B, then every element of M is generated by C, so that C = B′.

www.pdfgrip.com


10

Recollections and perspectives

Proposition I.2.4 If B = {m1, . . . , mt} and B′ = {m′1, . . . , m′s} are two finite bases of the same free R-module M, then t = s.

Proof Write each mi (respectively, m′j) as an R-linear combination of the elements of B′ (respectively, B):
$$m_i = \sum_{j=1}^{s} a_{ij}\, m'_j, \qquad m'_j = \sum_{i=1}^{t} b_{ji}\, m_i.$$
Call S the t × s matrix (aij) and T the s × t matrix (bji). Clearly, ST equals the t × t identity matrix It, and TS = Is. If t ≠ s, say t > s, consider the square matrices of order t:
$$S' = \bigl(S \mid t-s \text{ zero columns}\bigr) \qquad \text{and} \qquad T' = \begin{pmatrix} T \\ t-s \text{ zero rows} \end{pmatrix}.$$
Their product S′T′ is still It, but det It = 1, while det(S′T′) = det S′ · det T′ = 0: a contradiction. Similarly for t < s, using TS = Is.
We remark that the definition of determinant is the same as for fields, and that det S′ · det T′ = det(S′T′) is purely formal and does not require that the base ring be a field.
Definition I.2.5 A free R-module is called finite if all its bases have finitely many elements. A finite free R-module has rank t if it has a basis consisting of t elements (hence every basis consists of t elements).

Remark I.2.6 If a free R-module M happens to have a finite system of generators, then M must be a finite free R-module. Just argue as in Remark I.2.3, calling B the system of generators and B′ any basis.
Proposition I.2.7 Let F be a rank t free R-module with basis B = {f1, . . . , ft}. Let S be a t × t matrix with entries in R. Let C = {m1, . . . , mt} ⊆ F be defined by
$$\begin{pmatrix} m_1 \\ \vdots \\ m_t \end{pmatrix} = S \begin{pmatrix} f_1 \\ \vdots \\ f_t \end{pmatrix}.$$
Then the following are equivalent.
(i) det S is a unit in R.
(ii) C is another basis of F.
(iii) C is a generating system of F.







Proof (i) ⇒ (ii): The equality
$$S^{-1}\begin{pmatrix} m_1 \\ \vdots \\ m_t \end{pmatrix} = \begin{pmatrix} f_1 \\ \vdots \\ f_t \end{pmatrix}$$
shows that C generates F (S−1 exists because det S is invertible, which allows the use of the customary adjugate formula); independence is easy.
(ii) ⇒ (iii): Trivial.
(iii) ⇒ (i): Call T the matrix expressing B in terms of C; then TS = It, and det S is a unit in R.
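Over R = Z the units are ±1, so by this proposition an integer matrix S carries the canonical basis of Z^t to another basis exactly when det S = ±1; the adjugate formula then makes the inverse visibly integral. The 2 × 2 helpers below are our own sketch.

```python
# Basis changes of Z^2 via matrices of determinant +-1.

def det2(S):
    return S[0][0] * S[1][1] - S[0][1] * S[1][0]

def inverse2(S):
    """Inverse of a 2x2 integer matrix whose determinant is a unit in Z."""
    d = det2(S)
    assert d in (1, -1), "det S must be a unit in Z"
    adj = [[S[1][1], -S[0][1]], [-S[1][0], S[0][0]]]   # adjugate of S
    return [[e // d for e in row] for row in adj]       # exact division by +-1

S = [[2, 1], [1, 1]]     # det = 1, so the rows of S form a basis of Z^2
print(inverse2(S))       # [[1, -1], [-1, 2]]
```

By contrast, a matrix such as [[2, 0], [0, 1]] has determinant 2, not a unit in Z, and its rows generate a proper submodule of Z².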

Corollary I.2.8
(i) Every generating system {m1, . . . , mt} of R^t is a basis.
(ii) Every generating system of R^t has at least t elements.
(iii) If ϕ : R^n → R^m is an R-epimorphism, then n ≥ m.
Proposition I.2.9 Let ϕ be an R-morphism from R^t to R^t, and let S be its matrix with respect to the canonical basis of R^t. Then ϕ is injective if and only if det S is a non-zero divisor in R.
Proof First part: det S a non-zero divisor ⇒ ϕ injective.
If not, $S\mathbf{x} = 0$ for some non-zero column vector $\mathbf{x} = (x_1, \dots, x_t)^{\mathsf T}$, x1 ≠ 0 say. Then
$$S\bigl(\mathbf{x} \mid I_t^2 \cdots I_t^t\bigr) = \bigl(0 \mid S^2 \cdots S^t\bigr),$$
where $I_t^i$ stands for the i-th column of It, and similarly for $S^i$. Taking determinants, det S · x1 = 0, a contradiction.
Second part: ϕ injective ⇒ det S a non-zero divisor.
It suffices to show that if det S is a zero divisor, then ϕ is not injective, that is, the columns of S are not an independent system in R^t. If t = 1, this is obvious. If t ≥ 2, the statement is a corollary of the following more general result.
Let m1, . . . , ms be elements of R^t (t ≥ 2). If a ∈ R − {0} kills all the maximal minors of the matrix (m1 · · · ms), then {m1, . . . , ms} is not an independent system.
The proof is by induction on the number s of t-tuples.
If s = 1, then am1 = 0 with a ≠ 0 prevents m1 from being independent.
We now assume that t ≥ s > 1. If a also kills all maximal minors of the matrix (m2 · · · ms), then {m2, . . . , ms} is not an independent system (by the induction hypothesis) and a fortiori {m1, . . . , ms} is not either.
If a does not kill all maximal minors of (m2 · · · ms), let b be one of those minors such that ab ≠ 0; let us say that b is given by the last s − 1 rows of the matrix (m2 · · · ms). We now use the assumption that a kills all maximal minors of (m1 · · · ms).


Let T denote the t × t matrix
$$T = \left(\begin{matrix} I_{t-s} \\ 0 \end{matrix} \,\middle|\, m_1 \cdots m_s\right),$$
whose first t − s columns are the first t − s columns of It. Since det T equals a maximal minor of (m1 · · · ms), we have a det T = 0. If T̃ denotes the adjugate matrix of T (the transposed matrix of cofactors) and T̃i is the i-th column of T̃, then T T̃ = det T · It implies
$$T\left(\tilde T_1 \cdots \tilde T_{t-s} \,\middle|\, a\tilde T_{t-s+1} \,\middle|\, \tilde T_{t-s+2} \cdots \tilde T_t\right) = \det T \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & a & & \\ & & & \ddots & \\ & & & & 1 \end{pmatrix},$$
with a in position (t − s + 1, t − s + 1). Comparing the (t − s + 1)-th columns, it follows (since a det T = 0) that
$$0 = T\,(a\tilde T_{t-s+1}) = T \begin{pmatrix} 0 \\ \vdots \\ 0 \\ ab \\ * \\ \vdots \\ * \end{pmatrix}, \qquad \text{with } ab \text{ in row } t-s+1;$$
the zeros in the first t − s rows occur because, up to sign, the corresponding entries of $a\tilde T_{t-s+1}$ are a times maximal minors of (m1 · · · ms), all of which a kills. Hence 0 = ab m1 + (∗)m2 + · · · + (∗)ms, so that (since ab ≠ 0) {m1, . . . , ms} cannot be an independent system.
Finally, we consider the case s > t, that is, (m1 · · · ms) is a t × s matrix with t < s. In that case, its maximal minors are of order t; if a kills all the maximal minors, in particular it kills det(m1 · · · mt). This implies (case s = t of the induction hypothesis) that {m1, . . . , mt} cannot be an independent system; a fortiori, {m1, . . . , ms} cannot be either.
This concludes the proof of the more general result on R^t, as well as of the proposition.
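Proposition I.2.9 is easy to see in action over R = Z/6Z. The matrix chosen below (our own example) has determinant 2, a zero divisor (2 · 3 = 0 in Z/6Z), and correspondingly the map it defines on R² fails to be injective: a non-zero vector lands on 0.

```python
# A zero-divisor determinant forces a non-trivial kernel over Z/6Z.
M = 6
S = [[2, 0], [0, 1]]          # det S = 2, a zero divisor in Z/6Z

def apply(S, x, m):
    """Matrix-vector product over Z/mZ."""
    return [sum(S[i][j] * x[j] for j in range(len(x))) % m for i in range(len(S))]

x = [3, 0]                    # non-zero in (Z/6Z)^2
print(apply(S, x, M))         # [0, 0]
```

Over a field (or any ring where 2 is a non-zero divisor), the same matrix would of course define an injective map.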

Corollary I.2.10
(i) If ϕ : R^n → R^m is an R-monomorphism, then n ≤ m.
(ii) A non-zero ideal a of R is a finite free R-module if and only if it is principal and generated by a non-zero divisor.

Proof (i) If n > m, there would be a monomorphism $R^n \xrightarrow{\ \varphi\ } R^m \hookrightarrow R^m \oplus R^{n-m} = R^n$ associated with the n × n matrix
$$S' = \begin{pmatrix} S \\ n-m \text{ zero rows} \end{pmatrix},$$
where S is the matrix of ϕ with respect to the canonical bases. But det S′ = 0 contradicts the proposition.
(ii) If a is finite R-free, say a ≅ R^t, the inclusion a → R gives a monomorphism R^t → R, and t ≤ 1 by part (i). So a = Ra = (a) for some independent (that is, non-zero divisor) a ∈ R. The converse is obvious.


I.2.2 Projective modules
Every R-module M is the quotient of a free R-module F . For if {mi }i∈I is a
generating system of M (if necessary, the generating system may consist of all
the elements of M ), then we call F the free module of Proposition I.2.2. If {fi }i∈I
is the basis of F defined in that proposition, the R-epimorphism ϕ : F → M
sending fi to mi for every i does the job.
If N denotes the kernel of ϕ, then there exists another R-epimorphism ψ : E → N with E an R-free module. And one gets the following exact complex:
$$E \xrightarrow{\ \psi\ } F \xrightarrow{\ \varphi\ } M \to 0, \qquad (*)$$
which is called a free presentation of M (0 stands for the zero module and M → 0 is the zero map).
We recall that complex means, wherever you have two consecutive arrows,
the image of the left arrow is included in the kernel of the right arrow. Exact
complex means that the inclusion is always an equality.
If |I| < ∞ (i.e. M is finitely generated), the above F is a finite free R-module,
but E need not be finite. Yet in some cases (for instance when R is noetherian),
E does have a finite basis, and M is said to be finitely presented. Then ψ can
be expressed by a matrix (relative to some fixed bases of E and F ), carrying
information on M = coker(ψ).
Let us go back to the R-epimorphism ϕ : F → M and consider the exact complex (a short exact sequence is what it is generally called):
$$0 \to \ker(\varphi) \to F \to M \to 0.$$
Does it imply that F ≅ M ⊕ ker(ϕ)?
More generally, does an exact complex of R-modules
$$0 \to M' \xrightarrow{\ \alpha\ } M \xrightarrow{\ \beta\ } M'' \to 0 \qquad (**)$$
imply that M ≅ M′ ⊕ M″?
It is clear that the answer cannot be positive, in general (just think of the exact complex of Z-modules
$$0 \to \mathbb{Z} \xrightarrow{\ \alpha\ } \mathbb{Z} \xrightarrow{\ \beta\ } \mathbb{Z}/(n) \to 0,$$
where α is multiplication by the integer n ≥ 2 and β is the canonical projection).
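One quick way to see that this complex cannot split is torsion: Z ⊕ Z/(n) contains a non-zero element of finite order, whereas in Z only 0 has finite order, so the two modules cannot be isomorphic. A check for n = 4 (our own choice of n):

```python
# The element (0, 1) of Z (+) Z/4Z is non-zero but killed by 4,
# which is impossible for a non-zero element of Z.
n = 4
a, b = 0, 1                    # the element (0, 1) of Z (+) Z/(n)
print((n * a, (n * b) % n))    # (0, 0): non-zero torsion
```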
Definition I.2.11 The exact complex (∗∗) is called split if it implies M ≅ M′ ⊕ M″ by means of an isomorphism ϕ : M → M′ ⊕ M″ such that ϕ−1|M′ equals α and the composite
$$M \xrightarrow{\ \varphi\ } M' \oplus M'' \xrightarrow{\ \mathrm{pr}_2\ } M''$$
equals β.
Conditions for being split are easily proven.
