Noncommutative determinants, Cauchy–Binet formulae, and Capelli-type identities

I. Generalizations of the Capelli and Turnbull identities

Sergio Caracciolo
Dipartimento di Fisica and INFN, Università degli Studi di Milano
via Celoria 16, I-20133 Milano, ITALY
Sergio.Caracciolo@mi.infn.it

Alan D. Sokal*
Department of Physics, New York University
4 Washington Place, New York, NY 10003 USA
sokal@nyu.edu

Andrea Sportiello
Dipartimento di Fisica and INFN, Università degli Studi di Milano
via Celoria 16, I-20133 Milano, ITALY
Andrea.Sportiello@mi.infn.it
Submitted: Sep 20, 2008; Accepted: Aug 3, 2009; Published: Aug 7, 2009
Mathematics Subject Classification: 15A15 (Primary); 05A19, 05A30, 05E15, 13A50, 15A24,
15A33, 15A72, 17B35, 20G05 (Secondary)
Abstract
We prove, by simple manipulation of commutators, two noncommutative generalizations of the Cauchy–Binet formula for the determinant of a product. As special cases we obtain elementary proofs of the Capelli identity from classical invariant theory and of Turnbull's Capelli-type identities for symmetric and antisymmetric matrices.

Key Words: Determinant, noncommutative determinant, row-determinant, column-determinant, Cauchy–Binet theorem, permanent, noncommutative ring, Capelli identity, Turnbull identity, Cayley identity, classical invariant theory, representation theory, Weyl algebra, right-quantum matrix, Cartier–Foata matrix, Manin matrix

* Also at Department of Mathematics, University College London, London WC1E 6BT, England.
1 Introduction
Let R be a commutative ring, and let A = (a_{ij})_{i,j=1}^n be an n × n matrix with elements in R. Define as usual the determinant

det A := Σ_{σ∈S_n} sgn(σ) Π_{i=1}^n a_{iσ(i)} .    (1.1)
One of the first things one learns about the determinant is the multiplicative property:

det(AB) = (det A)(det B) .    (1.2)
More generally, if A and B are m × n matrices, and I and J are subsets of [n] := {1, 2, ..., n} of cardinality |I| = |J| = r, then one has the Cauchy–Binet formula:

det (A^T B)_{IJ} = Σ_{L ⊆ [m], |L| = r} (det (A^T)_{IL}) (det B_{LJ}) ,    (1.3)

where M_{IL} denotes the submatrix of M with rows indexed by I and columns indexed by L (taken in increasing order).
Suppose now that the ring R is noncommutative. Then the definition (1.1) becomes ambiguous, since the order of the factors matters; two natural choices are the column-determinant

col-det M := Σ_{σ∈S_n} sgn(σ) M_{σ(1)1} M_{σ(2)2} ··· M_{σ(n)n}    (1.4)

and the row-determinant

row-det M := Σ_{σ∈S_n} sgn(σ) M_{1σ(1)} M_{2σ(2)} ··· M_{nσ(n)} .    (1.5)

(Note that col-det A = row-det A^T.) Of course, in the absence of commutativity these "determinants" need not have all the usual properties of the determinant.
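For readers who want to experiment, here is a minimal sketch (ours, not part of the paper) of the definitions (1.4)/(1.5) in SymPy, using noncommutative symbols; it checks col-det A = row-det A^T for a 3 × 3 matrix:

```python
# A minimal sketch (ours): column- and row-determinants (1.4)/(1.5)
# over a ring of noncommuting symbols, via SymPy.
from functools import reduce
from itertools import permutations

import sympy as sp
from sympy.combinatorics import Permutation

def col_det(M):
    # col-det M = sum_sigma sgn(sigma) M[sigma(1),1] M[sigma(2),2] ... M[sigma(n),n]
    n = M.rows
    return sp.expand(sum(
        Permutation(list(p)).signature()
        * reduce(lambda u, v: u * v, (M[p[j], j] for j in range(n)))
        for p in permutations(range(n))))

def row_det(M):
    # row-det M = sum_sigma sgn(sigma) M[1,sigma(1)] M[2,sigma(2)] ... M[n,sigma(n)]
    n = M.rows
    return sp.expand(sum(
        Permutation(list(p)).signature()
        * reduce(lambda u, v: u * v, (M[i, p[i]] for i in range(n)))
        for p in permutations(range(n))))

n = 3
A = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"a_{i}{j}", commutative=False))
assert sp.expand(col_det(A) - row_det(A.T)) == 0   # col-det A = row-det A^T
```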
Our goal here is to prove the analogues of (1.2)/(1.3) for a fairly simple noncommutative case: namely, that in which the elements of A are in a suitable sense "almost commutative" among themselves (see below) and/or the same for B, while the commutators [x, y] := xy − yx of elements of A with those of B have the simple structure [a_{ij}, b_{kl}] = −δ_{ik} h_{jl}.^1 More precisely, we shall need the following type of commutativity among the elements of A and/or B:

^1 The minus sign is inserted for future convenience. We remark that this formula makes sense even if the ring R lacks an identity element, as δ_{ik} h_{jl} is simply a shorthand for h_{jl} if i = k and 0 otherwise.
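To make the commutator hypothesis concrete, here is a quick sanity check (ours, not from the paper): for multiplication operators A = X and derivative operators B = ∂ acting on polynomials, the structure [a_{ij}, b_{kl}] = −δ_{ik} h_{jl} holds with h_{jl} = δ_{jl}:

```python
# Sanity check (ours): a_ij = multiply by x_ij, b_kl = d/dx_kl satisfy
# [a_ij, b_kl] = -delta_ik h_jl with h_jl = delta_jl.
from itertools import product

import sympy as sp

m, n = 2, 3
X = sp.Matrix(m, n, lambda i, j: sp.Symbol(f"x_{i}{j}"))
f = X[0, 0]**2 * X[1, 2] + 3 * X[0, 1] * X[1, 0]   # arbitrary test polynomial

a = lambda i, j: (lambda g: X[i, j] * g)           # a_ij = multiply by x_ij
b = lambda k, l: (lambda g: sp.diff(g, X[k, l]))   # b_kl = d/dx_kl

for i, j, k, l in product(range(m), range(n), range(m), range(n)):
    comm = a(i, j)(b(k, l)(f)) - b(k, l)(a(i, j)(f))   # [a_ij, b_kl] applied to f
    expected = -(1 if (i == k and j == l) else 0) * f  # -delta_ik delta_jl f
    assert sp.expand(comm - expected) == 0
```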
Definition 1.1. Let M = (M_{ij}) be a (not-necessarily-square) matrix with elements in a (not-necessarily-commutative) ring R. Then we say that M is column-pseudo-commutative in case

[M_{ij}, M_{kl}] = [M_{il}, M_{kj}]  for all i, j, k, l    (1.6)

and

[M_{ij}, M_{il}] = 0  for all i, j, l .    (1.7)

We say that M is row-pseudo-commutative in case M^T is column-pseudo-commutative.
In Sections 2 and 3 we will explain the motivation for this strange definition, and show that it really is the natural type of commutativity for formulae of Cauchy–Binet type.^2 Suffice it to observe now that column-pseudo-commutativity is a fairly weak condition: for instance, it is weaker than assuming that [M_{ij}, M_{kl}] = 0 whenever j ≠ l. In many applications (though not all, see Example 3.6 below) we will actually have [a_{ij}, a_{kl}] = [b_{ij}, b_{kl}] = 0 for all i, j, k, l. Note also that (1.6) implies (1.7) if the ring R has the property that 2x = 0 implies x = 0.
The main result of this paper is the following:
Proposition 1.2 (noncommutative Cauchy–Binet). Let R be a (not-necessarily-commutative) ring, and let A and B be m × n matrices with elements in R. Suppose that

[a_{ij}, b_{kl}] = −δ_{ik} h_{jl}    (1.8)

where (h_{jl})_{j,l=1}^n are elements of R. Then, for any I, J ⊆ [n] of cardinality |I| = |J| = r, writing I = {i_1 < ··· < i_r} and J = {j_1 < ··· < j_r}:

(a) If A is column-pseudo-commutative, then

Σ_{L ⊆ [m], |L| = r} (col-det (A^T)_{IL})(col-det B_{LJ}) = col-det[(A^T B)_{IJ} + Q^col]    (1.9)

where

(Q^col)_{αβ} = (r − β) h_{i_α j_β}  for 1 ≤ α, β ≤ r.    (1.10)

(b) If B is column-pseudo-commutative, then

Σ_{L ⊆ [m], |L| = r} (row-det (A^T)_{IL})(row-det B_{LJ}) = row-det[(A^T B)_{IJ} + Q^row]    (1.11)

where

(Q^row)_{αβ} = (α − 1) h_{i_α j_β}  for 1 ≤ α, β ≤ r.    (1.12)

(c) If [a_{ij}, a_{kl}] = 0 and [b_{ij}, b_{kl}] = 0 whenever j ≠ l, then

Σ_{L ⊆ [m], |L| = r} (det (A^T)_{IL})(det B_{LJ}) = col-det[(A^T B)_{IJ} + Q^col] = row-det[(A^T B)_{IJ} + Q^row] .    (1.13)
^2 Similar notions arose already two decades ago in Manin's work on quantum groups [38–40]. For this reason, some authors [15] call a row-pseudo-commutative matrix a Manin matrix; others [30–32] call it a right-quantum matrix. See the historical remarks at the end of Section 2.
These identities can be viewed as a kind of "quantum analogue" of (1.3), with the matrices Q^col and Q^row supplying the "quantum correction". It is for this reason that we have chosen the letter h to designate the matrix arising in the commutator.
Please note that the hypotheses of Proposition 1.2 presuppose that 1 ≤ r ≤ n (otherwise I and J would be nonexistent or empty). But r > m is explicitly allowed: in this case the left-hand side of (1.9)/(1.11)/(1.13) is manifestly zero (since the sum over L is empty), but Proposition 1.2 makes the nontrivial statement that the noncommutative determinant on the right-hand side is also zero.
Note also that the hypothesis in part (c) — what we shall call column-commutativity, see Section 2 — is sufficient to make the determinants of A and B well-defined without any ordering prescription. We have therefore written det (rather than col-det or row-det) for these determinants.
Replacing A and B by their transposes and interchanging m with n in Proposition 1.2, we get the following "dual" version in which the commutator −δ_{ik} h_{jl} is replaced by −h_{ik} δ_{jl}:

Proposition 1.2′. Let R be a (not-necessarily-commutative) ring, and let A and B be m × n matrices with elements in R. Suppose that

[a_{ij}, b_{kl}] = −h_{ik} δ_{jl}    (1.14)

where (h_{ik})_{i,k=1}^m are elements of R. Then, for any I, J ⊆ [m] of cardinality |I| = |J| = r, writing I = {i_1 < ··· < i_r} and J = {j_1 < ··· < j_r}:

(a) If A is row-pseudo-commutative, then

Σ_{L ⊆ [n], |L| = r} (col-det A_{IL})(col-det (B^T)_{LJ}) = col-det[(A B^T)_{IJ} + Q^col]    (1.15)

where Q^col is defined in (1.10).

(b) If B is row-pseudo-commutative, then

Σ_{L ⊆ [n], |L| = r} (row-det A_{IL})(row-det (B^T)_{LJ}) = row-det[(A B^T)_{IJ} + Q^row]    (1.16)

where Q^row is defined in (1.12).

(c) If [a_{ij}, a_{kl}] = 0 and [b_{ij}, b_{kl}] = 0 whenever i ≠ k, then

Σ_{L ⊆ [n], |L| = r} (det A_{IL})(det (B^T)_{LJ}) = col-det[(A B^T)_{IJ} + Q^col] = row-det[(A B^T)_{IJ} + Q^row] .    (1.17)

In particular, taking h_{jl} = h δ_{jl} in Proposition 1.2 and h_{ik} = h δ_{ik} in Proposition 1.2′ for a single element h ∈ R, we obtain:
Corollary 1.3. Let R be a (not-necessarily-commutative) ring, and let A and B be m × n matrices with elements in R. Suppose that

[a_{ij}, a_{kl}] = 0    (1.18a)
[b_{ij}, b_{kl}] = 0    (1.18b)
[a_{ij}, b_{kl}] = −h δ_{ik} δ_{jl}    (1.18c)

where h ∈ R. Then, for any positive integer r, we have

Σ_{L ⊆ [m], |L| = r} (det (A^T)_{IL})(det B_{LJ}) = col-det[(A^T B)_{IJ} + Q^col]    (1.19a)
   = row-det[(A^T B)_{IJ} + Q^row]    (1.19b)

for I, J ⊆ [n] with |I| = |J| = r, and

Σ_{L ⊆ [n], |L| = r} (det A_{IL})(det (B^T)_{LJ}) = col-det[(A B^T)_{IJ} + Q^col]    (1.19c)
   = row-det[(A B^T)_{IJ} + Q^row]    (1.19d)

for I, J ⊆ [m] with |I| = |J| = r, where
Q^col = h diag(r − 1, r − 2, ..., 0)    (1.20a)

Q^row = h diag(0, 1, ..., r − 1) .    (1.20b)

The cognoscenti will of course recognize Corollary 1.3 as (an abstract version of) the Capelli identity [6–8] of classical invariant theory. In Capelli's identity, the ring R is the Weyl algebra A_{m×n}(K) over some field K of characteristic 0 (e.g. Q, R or C) generated by an m × n collection X = (x_{ij}) of commuting indeterminates ("positions") and the corresponding collection ∂ = (∂/∂x_{ij}) of differential operators (proportional to "momenta"); we then take A = X and B = ∂, so that (1.18) holds with h = 1.
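As a concrete sanity check (ours, not the paper's proof), the m = n = r = 2 case of Corollary 1.3 can be verified in SymPy by applying both sides, as differential operators, to a test polynomial:

```python
# Check (ours) of the Capelli identity for m = n = r = 2, A = X, B = d, h = 1:
#   det(X) det(d) = col-det[ X^T d + diag(1, 0) ]   as operators on polynomials.
import sympy as sp

X = sp.Matrix(2, 2, lambda i, j: sp.Symbol(f"x_{i}{j}"))

def T(i, j):
    # (X^T d)_{ij} = sum_k x_{ki} d/dx_{kj}, acting as an operator
    return lambda g: sum(X[k, i] * sp.diff(g, X[k, j]) for k in range(2))

f = X[0, 0]**3 * X[1, 1] + 5 * X[0, 1] * X[1, 0]**2 \
    + X[0, 0] * X[0, 1] * X[1, 0] * X[1, 1]            # test polynomial

# left-hand side: det(X) * (d11 d22 - d12 d21) f
lhs = X.det() * (sp.diff(f, X[0, 0], X[1, 1]) - sp.diff(f, X[0, 1], X[1, 0]))

# right-hand side, column-determinant ordering: (T11 + 1) T22 f - T21 T12 f
rhs = T(0, 0)(T(1, 1)(f)) + T(1, 1)(f) - T(1, 0)(T(0, 1)(f))

assert sp.expand(lhs - rhs) == 0
```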
The Capelli identity has a beautiful interpretation in the theory of group representations [23]: Let K = R or C, and consider the space K^{m×n} of m × n matrices with elements in K, parametrized by coordinates X = (x_{ij}). The group GL(m) × GL(n) acts on K^{m×n} by

(M, N) : X ↦ M X N^T    (1.21)

where M ∈ GL(m), N ∈ GL(n) and X ∈ K^{m×n}. Then the infinitesimal action associated to (1.21) gives a faithful representation of the Lie algebra gl(m) ⊕ gl(n) by vector fields on K^{m×n} with linear coefficients:

L_{ij} = Σ_{k=1}^n x_{ik} ∂/∂x_{jk}    (1.22)

R_{ij} = Σ_{k=1}^m x_{ki} ∂/∂x_{kj} ,    (1.23)

which satisfy the commutation relations characteristic of gl(m) ⊕ gl(n). Furthermore, the action (L, R) extends uniquely to a homomorphism from the universal enveloping algebra U(gl(m) ⊕ gl(n)) into the Weyl algebra A_{m×n}(K) [which is isomorphic to the algebra PD(K^{m×n}) of polynomial-coefficient differential operators on K^{m×n}]. As explained in [23, secs. 1 and 11.1], it can be shown abstractly that any element of the Weyl algebra that commutes with both L and R must be the image via L of some element of the center of U(gl(m)), and also the image via R of some element of the center of U(gl(n)). The Capelli identity (1.19) with A = X and B = ∂ gives an explicit formula for the generators Γ_r [1 ≤ r ≤ min(m, n)] of this subalgebra, from which it is manifest from (1.19a or b) that Γ_r belongs to the image under R of U(gl(n)) and commutes with the image under L of U(gl(m)), and from (1.19c or d) the reverse fact. See [21–23, 25, 35, 54, 55, 58] for further discussion of the role of the Capelli identity in classical invariant theory and representation theory, as well as for proofs of the identity.
Let us remark that Proposition 1.2′ also contains Itoh's [25] Capelli-type identity for the generators of the left action of o(m) on m × n matrices (see Example 3.6 below). Let us also mention one important (and well-known) application of the Capelli identity: namely, it provides a simple proof of the "Cayley" identity^3 for n × n matrices,

det(∂) (det X)^s = s(s + 1) ··· (s + n − 1) (det X)^{s−1} .    (1.24)
To derive (1.24), one simply applies both sides of the Capelli identity (1.19) to (det X)^s: the "polarization operators" L_{ij} = (X∂^T)_{ij} and R_{ij} = (X^T∂)_{ij} act in a very simple way on det X, thereby allowing col-det(X∂^T + Q^col) (det X)^s and col-det(X^T∂ + Q^col) (det X)^s to be computed easily; they both yield det X times the right-hand side of (1.24).^4 In fact, by a similar method we can use Proposition 1.2 to prove a generalized "Cayley" identity that lives in the Weyl algebra (rather than just the polynomial algebra) and from which the standard "Cayley" identity can be derived as an immediate corollary: see Proposition A.1 and Corollaries A.3 and A.4 in the Appendix. See also [11] for alternate combinatorial proofs of a variety of Cayley-type identities.
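The n = 2 case of (1.24) can also be checked mechanically; the following sketch (ours) verifies the polynomial identity for several integer exponents s:

```python
# Quick check (ours) of the "Cayley" identity (1.24) for n = 2:
#   det(d) (det X)^s = s (s+1) (det X)^(s-1),
# verified as a polynomial identity for integer s.
import sympy as sp

x11, x12, x21, x22 = sp.symbols("x11 x12 x21 x22")
detX = x11 * x22 - x12 * x21

for s in range(1, 6):
    lhs = sp.diff(detX**s, x11, x22) - sp.diff(detX**s, x12, x21)  # det(d) applied
    rhs = s * (s + 1) * detX**(s - 1)
    assert sp.expand(lhs - rhs) == 0
```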
Since the Capelli identity is widely viewed as "mysterious" [2, p. 324] but also as a "powerful formal instrument" [58, p. 39] and a "relatively deep formal result" [52, p. 40], it is of interest to provide simpler proofs. Moreover, since the statement (1.19)/(1.20) of the Capelli identity is purely algebraic/combinatorial, it is of interest to give a purely algebraic/combinatorial proof, independent of the apparatus of representation theory. Such a combinatorial proof was given a decade ago by Foata and Zeilberger [20] for the case m = n = r, but their argument was unfortunately somewhat intricate, based on the construction of a sign-reversing involution. The principal goal of the present paper is to provide an extremely short and elementary algebraic proof of Proposition 1.2 and hence of the Capelli identity, based on simple manipulation of commutators. We give this proof in Section 3.
In 1948 Turnbull [53] proved a Capelli-type identity for symmetric matrices (see also [57]), and Foata and Zeilberger [20] gave a combinatorial proof of this identity as well. Once again we prove a generalization:
Proposition 1.4 (noncommutative Cauchy–Binet, symmetric version). Let R be a (not-necessarily-commutative) ring, and let A and B be n × n matrices with elements in R. Suppose that

[a_{ij}, b_{kl}] = −h (δ_{ik} δ_{jl} + δ_{il} δ_{jk})    (1.25)

where h is an element of R.

(a) Suppose that A is column-pseudo-commutative and symmetric; and if n = 2, suppose further that either

(i) the ring R has the property that 2x = 0 implies x = 0, or

(ii) [a_{12}, h] = 0.
^3 The operator Ω = det(∂) was indeed introduced by Cayley on the second page of his famous 1846 paper on invariants [13]; it became known as Cayley's Ω-process and went on to play an important role in classical invariant theory (see e.g. [18, 21, 35, 47, 51, 58]). But we strongly doubt that Cayley ever knew (1.24). See [1, 11] for further historical discussion.

^4 See e.g. [54, p. 53] or [23, pp. 569–570] for derivations of this type.
Then, for any I, J ⊆ [n] of cardinality |I| = |J| = r, we have

Σ_{L ⊆ [n], |L| = r} (col-det A_{IL})(col-det B_{LJ}) = col-det[(AB)_{IJ} + Q^col]    (1.26)

where

(Q^col)_{αβ} = (r − β) h δ_{i_α j_β}    (1.27)

for 1 ≤ α, β ≤ r.
(b) Suppose that B is column-pseudo-commutative and symmetric; and if n = 2, suppose further that either

(i) the ring R has the property that 2x = 0 implies x = 0, or

(ii) [b_{12}, h] = 0.

Then, for any I, J ⊆ [n] of cardinality |I| = |J| = r, we have

Σ_{L ⊆ [n], |L| = r} (row-det A_{IL})(row-det B_{LJ}) = row-det[(AB)_{IJ} + Q^row]    (1.28)

where

(Q^row)_{αβ} = (α − 1) h δ_{i_α j_β}    (1.29)

for 1 ≤ α, β ≤ r.
Turnbull [53] and Foata–Zeilberger [20] proved their identity for a specific choice of matrices A = X^sym and B = ∂^sym in a Weyl algebra, but it is easy to see that their proof depends only on the commutation properties and symmetry properties of A and B. Proposition 1.4 therefore generalizes their work in three principal ways: they consider only the case r = n, while we prove a general identity of Cauchy–Binet type^5; they assume that both A and B are symmetric, while we show that it suffices for one of the two to be symmetric; and they assume that both [a_{ij}, a_{kl}] = 0 and [b_{ij}, b_{kl}] = 0, while we show that only one of these plays any role and that it moreover can be weakened to column-pseudo-commutativity.^6 We prove Proposition 1.4 in Section 4.^7
^5 See also Howe and Umeda [23, sec. 11.2] for a formula valid for general r, but involving a sum over minors analogous to (1.19).

^6 This last weakening is, however, much less substantial than it might appear at first glance, because a matrix M that is column-pseudo-commutative and symmetric necessarily satisfies 2[M_{ij}, M_{kl}] = 0 for all i, j, k, l (see Lemma 2.5 for the easy proof). In particular, in a ring R in which 2x = 0 implies x = 0, column-pseudo-commutativity plus symmetry implies full commutativity.

^7 In the first preprint version of this paper we mistakenly failed to include the extra hypotheses (i) or (ii) in Proposition 1.4 when n = 2. For further discussion, see Section 4 and in particular Example 4.2.
Finally, Howe and Umeda [23, eq. (11.3.20)] and Kostant and Sahi [33] independently discovered and proved a Capelli-type identity for antisymmetric matrices.^8 Unfortunately, Foata and Zeilberger [20] were unable to find a combinatorial proof of the Howe–Umeda–Kostant–Sahi identity; and we too have been (thus far) unsuccessful. We shall discuss this identity further in Section 5.
Both Turnbull [53] and Foata–Zeilberger [20] also considered a different (and admittedly less interesting) antisymmetric analogue of the Capelli identity, which involves a generalization of the permanent of a matrix A: the column-permanent

col-per M := Σ_{σ∈S_n} M_{σ(1)1} M_{σ(2)2} ··· M_{σ(n)n}

and the analogously defined row-permanent. Once again we prove a generalization (Proposition 1.5):

(a) If A is antisymmetric off-diagonal (i.e., a_{ij} = −a_{ji} for i ≠ j) and [a_{ij}, h] = 0 for all i, j, then the column-permanental Capelli-type identity (1.34) holds, where

(Q^col)_{αβ} = (r − β) h δ_{i_α j_β}    (1.35)

for 1 ≤ α, β ≤ r.
^8 See also [29] for related work.
(b) If B is antisymmetric off-diagonal (i.e., b_{ij} = −b_{ji} for i ≠ j) and [b_{ij}, h] = 0 for all i, j, then the corresponding row-permanental identity (1.36) holds with Q^row in place of Q^col.

Note that no requirements are imposed on the [a, a] and [b, b] commutators (but see the Remark at the end of Section 4).
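While the precise statement of Proposition 1.5 is proved in Section 4, the following toy computation (entirely ours) illustrates the kind of permanental identity at stake: for the strictly antisymmetric 2 × 2 Weyl pair A = (0 x; −x 0), B = (0 d/dx; −d/dx 0) with h = 1, one checks (col-per A)(col-per B) = col-per[AB + diag(h, 0)] as operators:

```python
# Toy check (ours) of a Turnbull-type permanental identity for n = r = 2:
# A = (0, x; -x, 0), B = (0, d/dx; -d/dx, 0), h = 1, applied to a polynomial.
import sympy as sp

x = sp.Symbol("x")
f = x**5 + 3 * x**2                    # test polynomial
D = lambda g: sp.diff(g, x)

# LHS: (col-per A)(col-per B) f = (-x^2)(-d^2/dx^2) f = x^2 f''
lhs = x**2 * D(D(f))

# RHS: col-per[AB + diag(1, 0)] f, where AB = diag(-x d/dx, -x d/dx)
# and the off-diagonal entries of AB vanish:
g = -x * D(f)                          # (AB)_22 applied to f
rhs = -x * D(g) + g                    # ((AB)_11 + 1) applied to g

assert sp.expand(lhs - rhs) == 0
```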
Let us remark that if [a_{ij}, b_{kl}] = 0, then the left-hand side of (1.34)/(1.36) is simply a sum of products of ordinary permanents; we stress, however, that no formula of Cauchy–Binet type exists for permanents of general matrices.
Turnbull [53] and Foata–Zeilberger [20] proved their identity for a specific choice of matrices A = X^antisym and B = ∂^antisym in a Weyl algebra, but their proof again depends only on the commutation properties and symmetry properties of A and B. Proposition 1.5 therefore generalizes their work in four principal ways: they consider only the case r = n, while we prove a general identity for minors; they assume that both A and B are antisymmetric, while we show that it suffices for one of the two to be antisymmetric plus an arbitrary diagonal matrix; and they assume that [a_{ij}, a_{kl}] = 0 and [b_{ij}, b_{kl}] = 0, while we show that these commutators play no role. We warn the reader that Foata–Zeilberger's [20] statement of this theorem contains a typographical error, inserting a factor sgn(σ) that ought to be absent (and hence inadvertently converting col-per to col-det).^10 We prove Proposition 1.5 in Section 4.^11
Finally, let us briefly mention some other generalizations of the Capelli identity that have appeared in the literature. One class of generalizations [41, 45, 46, 48] gives formulae for further elements in the (center of the) universal enveloping algebra U(gl(n)), such as the so-called quantum immanants. Another class of generalizations extends these
^10 The spurious factor sgn(σ) occurs in the permanents on the left-hand side of (1.34)/(1.36).
^11 In the first preprint version of this paper we mistakenly failed to include the hypotheses that [a_{ij}, h] = 0 or [b_{ij}, h] = 0. See the Remark at the end of Section 4.
formulae to Lie algebras other than gl(n) [23–28, 33, 34, 41, 42, 56]. Finally, a third class of generalizations finds analogous formulae in more general structures such as quantum groups [49, 50] and Lie superalgebras [44]. Our approach is rather more elementary than all of these works: we ignore the representation-theory context and simply treat the Capelli identity as a noncommutative generalization of the Cauchy–Binet formula. A different generalization along vaguely similar lines can be found in [43].
The plan of this paper is as follows: In Section 2 we make some preliminary comments about the properties of column- and row-determinants. In Section 3 we prove Propositions 1.2 and 1.2′ and Corollary 1.3. We also prove a variant of Proposition 1.2 in which the hypothesis on the commutators [a_{ij}, a_{kl}] is weakened, at the price of a slightly weaker conclusion (see Proposition 3.8). In Section 4 we prove Propositions 1.4 and 1.5. Finally, in Section 5 we discuss whether these results are susceptible of further generalization. In the Appendix we prove a generalization of the "Cayley" identity (1.24).
In a companion paper [10] we shall extend these identities to the (considerably more difficult) case in which [a_{ij}, b_{kl}] = −g_{ik} h_{jl} for general matrices (g_{ik}) and (h_{jl}), whose elements do not necessarily commute.
Note added. Subsequent to the posting of the present paper in preprint form, Chervov, Falqui and Rubtsov [16] posted an extremely interesting survey of the algebraic properties of row-pseudo-commutative matrices (which they call "Manin matrices") when the ring R is an associative algebra over a field of characteristic ≠ 2. In particular, Section 6 of [16] contains an interesting generalization of the results of the present paper.^12 To state this generalization, note first that the hypotheses of our Proposition 1.2(a) are

(i) A is column-pseudo-commutative, and

(ii) [a_{ij}, b_{kl}] = −δ_{ik} h_{jl} for all i, j, k, l.
^12 Chervov et al. [16] also reformulated the hypotheses and proofs by using Grassmann variables (= exterior algebra) along the lines of [25, 28]. This renders the proofs slightly more compact, and some readers may find that it renders the proofs more transparent as well (this is largely a question of taste). But we do think that the hypotheses of the theorems are best stated without reference to Grassmann variables.

^13 Here we have made the translations from their notation to ours (M → A^T, Y → B, Q → H) and written their hypotheses without reference to Grassmann variables. Their Conditions 1 and 2 then correspond to (ii′′) and (iii), respectively.
Chervov et al. [16, Section 6.5] also provide an interesting rejoinder to our assertion above that no formula of Cauchy–Binet type exists for permanents. They show that if one defines a modified permanent for submatrices involving possibly repeated indices, which includes a factor 1/ν! for each index that is repeated ν times, then one obtains a formula of Cauchy–Binet type in which the intermediate sum is over r-tuples of not-necessarily-distinct indices l_1 ≤ l_2 ≤ ··· ≤ l_r. Moreover, this formula of Cauchy–Binet type extends to a Capelli-type formula involving a "quantum correction" [16, Theorems 11–13]. In our opinion this is a very interesting observation, which goes a long way to restore the analogy between determinants and permanents (and which in their formalism reflects the analogy between Grassmann algebra and the algebra of polynomials).
2 Properties of column- and row-determinants
In this section we shall make some preliminary observations about the properties of column- and row-determinants, stressing the following question: Which commutation properties among the elements of the matrix imply which of the standard properties of the determinant? Readers who are impatient to get to the proof of our main results can skim this section lightly. We also call the reader's attention to the historical remarks appended at the end of this section, concerning the commutation hypotheses on matrix elements that have been employed for theorems in noncommutative linear algebra.

Let us begin by recalling two elementary facts that we shall use repeatedly in the proofs throughout this paper:
Lemma 2.1 (Translation Lemma). Let A be an abelian group, and let f : S_n → A. Then, for any τ ∈ S_n, we have

Σ_{σ∈S_n} f(σ ◦ τ) = Σ_{σ∈S_n} f(σ) .

Lemma 2.2 (Involution Lemma). Let A be an abelian group, and let f : S_n → A. Suppose that

f(σ ◦ (ij)) = f(σ)

for all σ ∈ S_n [where (ij) denotes the transposition interchanging i with j]. Then

Σ_{σ∈S_n} sgn(σ) f(σ) = 0 .

Proof. We have

Σ_{σ∈S_n} sgn(σ) f(σ) = Σ_{σ even} f(σ) − Σ_{σ odd} f(σ) = Σ_{σ even} f(σ) − Σ_{σ′ even} f(σ′ ◦ (ij)) = 0 ,

where in the second line we made the change of variables σ′ = σ ◦ (ij) and used sgn(σ′) = −sgn(σ) [or equivalently used the Translation Lemma]. □
With these trivial preliminaries in hand, let us consider noncommutative determinants. Let M = (M_{ij}) be a matrix (not necessarily square) with entries in a ring R. Let us call M

• commutative if [M_{ij}, M_{kl}] = 0 for all i, j, k, l;

• row-commutative if [M_{ij}, M_{kl}] = 0 whenever i ≠ k [i.e., all pairs of elements not in the same row commute];

• column-commutative if [M_{ij}, M_{kl}] = 0 whenever j ≠ l [i.e., all pairs of elements not in the same column commute];

• weakly commutative if [M_{ij}, M_{kl}] = 0 whenever i ≠ k and j ≠ l [i.e., all pairs of elements not in the same row or column commute].

Clearly, if M has one of these properties, then so do all its submatrices M_{IJ}. Also, M is commutative if and only if it is both row- and column-commutative.
Weak commutativity is a sufficient condition for the determinant to be defined unambiguously without any ordering prescription, since all the matrix elements in the product (1.1) differ in both indices. Furthermore, weak commutativity is sufficient for single determinants to have most of their basic properties:

Lemma 2.3. For weakly commutative square matrices:

(a) The determinant is antisymmetric under permutation of rows or columns.

(b) The determinant of a matrix with two equal rows or columns is zero.

(c) The determinant of a matrix equals the determinant of its transpose.
The easy proof, which uses the Translation and Involution Lemmas, is left to the reader (it is identical to the usual proof in the commutative case). We simply remark that if the ring R has the property that 2x = 0 implies x = 0, then antisymmetry under permutation of rows (or columns) implies the vanishing with equal rows (or columns). But if the ring has elements x ≠ 0 satisfying 2x = 0 (for instance, if R = Z_n for n even), then a slightly more careful argument, using the Involution Lemma, is needed to establish the vanishing with equal rows (or columns).
The situation changes, however, when we try to prove a formula for the determinant of a product of two matrices, or more generally a formula of Cauchy–Binet type. We are then inevitably led to consider products of matrix elements in which some of the indices may be repeated — but only in one of the two positions. It therefore turns out (see Proposition 3.1 below) that we need something like row- or column-commutativity; indeed, the result can be false without it (see Example 3.2).
Some analogues of Lemma 2.3(a,b) can nevertheless be obtained for the column- and row-determinants under hypotheses weaker than weak commutativity. For brevity let us restrict attention to column-determinants; the corresponding results for row-determinants can be obtained by exchanging everywhere "row" with "column".

We then have the following trivial result:
Lemma 2.4. For arbitrary square matrices:

(a) The column-determinant is antisymmetric under permutation of rows: if M′_{ij} = M_{τ(i)j} for a permutation τ, then col-det M′ = sgn(τ) col-det M.

(b) The column-determinant of a matrix with two equal rows is zero.

Indeed, statements (a) and (b) follow immediately from the Translation Lemma and the Involution Lemma, respectively.
On the other hand, the column-determinant is not in general antisymmetric under permutation of columns, nor is the column-determinant of a matrix with two equal columns necessarily zero. [For instance, in the Weyl algebra in one variable over a field of characteristic ≠ 2, we have

col-det ( d  d ; x  x ) = dx − xd = 1 ,

which is neither equal to −1 nor to 0.] It is therefore natural to seek sufficient conditions for these two properties to hold. We now proceed to give a condition, weaker than weak commutativity, that entails the first property and almost entails the second property.
Let us begin by observing that µ_{ijkl} := [M_{ij}, M_{kl}] is manifestly antisymmetric under the simultaneous interchange i ↔ k, j ↔ l. So symmetry under one of these interchanges is equivalent to antisymmetry under the other. Let us therefore say that a matrix M has

• row-symmetric (and column-antisymmetric) commutators if [M_{ij}, M_{kl}] = [M_{kj}, M_{il}] for all i, j, k, l;

• column-symmetric (and row-antisymmetric) commutators if [M_{ij}, M_{kl}] = [M_{il}, M_{kj}] for all i, j, k, l.

Let us further introduce the same types of weakening that we did for commutativity, saying that a matrix M has

• weakly row-symmetric (and column-antisymmetric) commutators if [M_{ij}, M_{kl}] = [M_{kj}, M_{il}] whenever i ≠ k and j ≠ l;

• weakly column-symmetric (and row-antisymmetric) commutators if [M_{ij}, M_{kl}] = [M_{il}, M_{kj}] whenever i ≠ k and j ≠ l.

(Note that row-symmetry is trivial when i = k, and column-symmetry is trivial when j = l.) Obviously, each of these properties is inherited by all the submatrices M_{IJ} of M. Also, each of these properties is manifestly weaker than the corresponding type of commutativity.
The following fact is sometimes useful:

Lemma 2.5. Suppose that the square matrix M has either row-symmetric or column-symmetric commutators and is either symmetric or antisymmetric. Then 2[M_{ij}, M_{kl}] = 0 for all i, j, k, l. In particular, if the ring R has the property that 2x = 0 implies x = 0, then M is commutative.

Proof. Suppose that M has row-symmetric commutators (the column-symmetric case is analogous) and that M^T = ±M. Then [M_{ij}, M_{kl}] = [M_{kj}, M_{il}] = [M_{jk}, M_{li}] = [M_{lk}, M_{ji}] = [M_{kl}, M_{ij}], where the first and third equalities use the row-symmetric commutators, and the second and fourth equalities use symmetry or antisymmetry. □
Returning to the properties of column-determinants, we have:

Lemma 2.6. If the square matrix M has weakly row-symmetric commutators:

(a) The column-determinant is antisymmetric under permutation of columns, i.e. if M′_{ij} = M_{iτ(j)} for a permutation τ, then col-det M′ = sgn(τ) col-det M.

(b) If M has two equal columns, then 2 col-det M = 0. (In particular, if R is a ring in which 2x = 0 implies x = 0, then col-det M = 0.)

(c) If M has two equal columns and the elements in those columns commute among themselves, then col-det M = 0.
Proof. (a) It suffices to prove the claim when τ is the transposition exchanging i with i + 1 (for arbitrary i). We then write col-det M′ − sgn(τ) col-det M as a single sum over σ ∈ S_n; by the weak row-symmetry of the commutators, the summand is invariant under σ ↦ σ ◦ (i, i + 1), so the Involution Lemma implies that the sum is zero.

(b) is an immediate consequence of (a).

(c) Using (a), we may assume without loss of generality that the two equal columns are adjacent (say, in positions 1 and 2). Then, in the sum (1.4) defining col-det M, the first two factors M_{σ(1)1} and M_{σ(2)2} = M_{σ(2)1} lie in the two equal columns and hence commute by hypothesis, so the summand is invariant under σ ↦ σ ◦ (12); the Involution Lemma then yields col-det M = 0. □
The embarrassing factor of 2 in Lemma 2.6(b) is not simply an artifact of the proof; it is a fact of life when the ring R has elements x ≠ 0 satisfying 2x = 0:
Example 2.7. Let R be the ring of 2 × 2 matrices with elements in the field GF(2), and let α and β be any two noncommuting elements of R [for instance, α = ( 1 0 ; 0 0 ) and β = ( 0 1 ; 0 0 ), say]. Then the matrix M = ( α α ; β β ) has two equal columns, and it has weakly row-symmetric commutators (since [x, y] = [y, x] for any x, y in a ring where 2x = 0); but col-det M = αβ − βα ≠ 0, even though trivially 2 col-det M = 0.
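With the concrete choices suggested above, the computation can be checked in a few lines (ours):

```python
# Concrete check (ours) of Example 2.7: over GF(2), M = (alpha alpha; beta beta)
# has two equal columns yet col-det M = alpha beta - beta alpha != 0
# (while 2 * col-det M = 0 automatically in characteristic 2).
import numpy as np

alpha = np.array([[1, 0], [0, 0]])
beta  = np.array([[0, 1], [0, 0]])

col_det_M = (alpha @ beta - beta @ alpha) % 2   # M11 M22 - M21 M12, mod 2
print(col_det_M)                                # [[0 1] [0 0]]: nonzero
assert col_det_M.any()
assert ((2 * col_det_M) % 2 == 0).all()         # 2x = 0 for every x over GF(2)
```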
In Proposition 3.8 below, we shall prove a variant of Proposition 1.2 that requires the matrix A^T only to have row-symmetric commutators, but at the price of multiplying everything by this embarrassing factor of 2.
If we want to avoid this factor of 2 by invoking Lemma 2.6(c), then (as will be seen in Section 3) we shall need to impose a condition that is intermediate between row-commutativity and row-symmetry: namely, we say (as in Definition 1.1) that M is

• row-pseudo-commutative if [M_{ij}, M_{kl}] = [M_{kj}, M_{il}] for all i, j, k, l and [M_{ij}, M_{kj}] = 0 for all i, j, k;

• column-pseudo-commutative if [M_{ij}, M_{kl}] = [M_{il}, M_{kj}] for all i, j, k, l and [M_{ij}, M_{il}] = 0 for all i, j, l.

(Of course, the [M, M] = [M, M] condition need be imposed only when i ≠ k and j ≠ l, since in all other cases it is either trivial or else a consequence of the [M, M] = 0 condition.) We thus have M row-commutative =⇒ M row-pseudo-commutative =⇒ M has row-symmetric commutators; furthermore, the converse to the second implication holds whenever R is a ring in which 2x = 0 implies x = 0. Row-pseudo-commutativity thus turns out to be exactly the strengthening of row-symmetry that we need in order to apply Lemma 2.6(c) and thus avoid the factor of 2 in Proposition 3.8, i.e. to prove the full Proposition 1.2.
The following intrinsic characterizations of row-pseudo-commutativity and row-symmetry are perhaps of some interest^14:

Proposition 2.8. Let M = (M_{ij}) be an m × n matrix with entries in a (not-necessarily-commutative) ring R.

(a) Let x_1, ..., x_n be commuting indeterminates, and define for 1 ≤ i ≤ m the elements x̃_i = Σ_{j=1}^n M_{ij} x_j. Then M is row-pseudo-commutative if and only if the elements x̃_1, ..., x̃_m commute among themselves.

(b) Let η_1, ..., η_m be Grassmann indeterminates (i.e. η_i² = 0 and η_i η_j = −η_j η_i), and define for 1 ≤ j ≤ n the elements η̃_j = Σ_{i=1}^m η_i M_{ij}. Then:

(i) The matrix M has row-symmetric commutators if and only if the elements η̃_1, ..., η̃_n anticommute among themselves (i.e. η̃_i η̃_j = −η̃_j η̃_i).

(ii) The matrix M is row-pseudo-commutative if and only if the elements η̃_1, ..., η̃_n satisfy all the Grassmann relations η̃_i η̃_j = −η̃_j η̃_i and η̃_j² = 0.

Proof (sketch). Expanding [x̃_i, x̃_k] and collecting the coefficient of x_j x_l shows that the x̃'s commute if and only if [M_{ij}, M_{kl}] + [M_{il}, M_{kj}] = 0 for j ≠ l and [M_{ij}, M_{kj}] = 0 for all j, i.e. if and only if M is row-pseudo-commutative; the anticommutators η̃_j η̃_l + η̃_l η̃_j are handled analogously. Finally,

η̃_j² = Σ_{i<k} η_i η_k [M_{ij}, M_{kj}] ,

which vanishes if and only if [M_{ij}, M_{kj}] = 0 for all i, k. □

^14 Proposition 2.8 is almost identical to a result of Chervov and Falqui [15, Proposition 1], from whom we got the idea; but since they work in an associative algebra over a field of characteristic ≠ 2, they don't need to distinguish between row-pseudo-commutativity and row-symmetry. They attribute this result to Manin [38, top p. 199] [39, 40], but we are unable to find it there (or perhaps we have simply failed to understand what we have read). However, a result of similar flavor can be found in [38, p. 193, Proposition] [39, pp. 7–8, Theorem 4], and it is probably this to which the authors are referring.
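As a quick numerical illustration (ours) of one direction of Proposition 2.8(a): a row-commutative (hence row-pseudo-commutative) matrix, built so that entries in different rows live in different tensor factors, yields commuting elements x̃_i:

```python
# Numerical illustration (ours) of Proposition 2.8(a), forward direction:
# row 1 of M acts on the first tensor factor, row 2 on the second, so entries
# in different rows commute; the elements x~_i = sum_j M_ij x_j then commute.
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2)
P, Q = rng.integers(-3, 4, (2, 2)), rng.integers(-3, 4, (2, 2))
R, S = rng.integers(-3, 4, (2, 2)), rng.integers(-3, 4, (2, 2))

M = [[np.kron(P, I2), np.kron(Q, I2)],
     [np.kron(I2, R), np.kron(I2, S)]]

x1, x2 = 3, -7                         # values of the commuting indeterminates
xt1 = x1 * M[0][0] + x2 * M[0][1]
xt2 = x1 * M[1][0] + x2 * M[1][1]

assert np.allclose(xt1 @ xt2, xt2 @ xt1)   # [x~_1, x~_2] = 0
# ...even though M11 and M12 typically do NOT commute:
print(np.allclose(M[0][0] @ M[0][1], M[0][1] @ M[0][0]))
```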
Some historical remarks. 1. Row-commutativity has arisen in some previous work on noncommutative linear algebra, beginning with the work of Cartier and Foata on noncommutative extensions of the MacMahon master theorem [12, Théorème 5.1]. For this reason, many authors [15, 30–32] call a row-commutative matrix a Cartier–Foata matrix. See e.g. [12, 19, 30, 32, 37] for theorems of noncommutative linear algebra for row-commutative matrices; and see also [32, secs. 5 and 7] for some beautiful q- and q⃗-generalizations.
2. Row-pseudo-commutativity has also arisen previously, beginning (indirectly) with Manin's early work on quantum groups [38–40]. Thus, some authors [15] call a row-pseudo-commutative matrix a Manin matrix; others [30–32] call it a right-quantum matrix. Results of noncommutative linear algebra for row-pseudo-commutative matrices include Cramer's rule for the inverse matrix [15, 31, 39] and the Jacobi identity for cofactors [31], the formula for the determinant of block matrices [15], Sylvester's determinantal identity [30], the Cauchy–Binet formula (Section 3 below), the Cayley–Hamilton theorem [15], the Newton identities between tr M^k and coefficients of det(tI + M) [15], and the MacMahon master theorem [31, 32]; see also [32, secs. 6 and 8] [30, 31] for some beautiful q- and q⃗-generalizations. See in particular [32, Lemma 12.2] for Lemma 2.6 specialized to row-pseudo-commutative matrices.
The aforementioned results suggest that row-pseudo-commutativity is the natural hypothesis for (most? all?) theorems of noncommutative linear algebra involving the column-determinant. Some of these results were derived earlier and/or have simpler proofs under the stronger hypothesis of row-commutativity.

We thank Luigi Cantini for drawing our attention to the paper [15], from which we traced the other works cited here.
3. Subsequent to the posting of the present paper in preprint form, Chervov, Falqui and Rubtsov [16] posted an extremely interesting survey of the algebraic properties of row-pseudo-commutative matrices (which they call "Manin matrices") when the ring R is an associative algebra over a field of characteristic ≠ 2. This survey discusses the results cited in #2 above, plus many more; in particular, Section 6 of [16] contains an interesting generalization of the results of the present paper on Cauchy–Binet formulae and Capelli-type identities. These authors state explicitly that "the main aim of [their] paper is to argue the following claim: linear algebra statements hold true for Manin matrices in a form identical to the commutative case" [16, first sentence of Section 1.1].
4. The reader may well wonder (as one referee of the present paper did): Since the literature already contains two competing terminologies for the class of matrices in question ("Manin" and "right-quantum"), why muddy the waters by proposing yet another terminology ("row-pseudo-commutative") that is by no means guaranteed to catch on? We would reply by stating our belief that a "good" terminology ought to respect the symmetry A ↦ A^T; or in other words, rows and columns ought to be treated on the same footing, with neither one privileged over the other. (For the same reason, we endeavor to treat the row-determinant and the column-determinant on an equal basis.) We do not claim that our terminology is ideal — perhaps someone will find one that is more concise and easier to remember — but we do think that this symmetry property is important.
3 Proof of the ordinary Capelli-type identities
In this section we shall prove Proposition 1.2; then Proposition 1.2′ and Corollary 1.3 follow immediately. At the end we shall also prove a variant (Proposition 3.8) in which the hypotheses on the commutators are slightly weakened, with a corresponding slight weakening of the conclusion.
It is convenient to begin by reviewing the proof of the classical Cauchy–Binet formula (1.3) where the ring R is commutative. First fix L = {l_1, ..., l_r} with l_1 < ··· < l_r, and compute

(col-det (A^T)_{IL})(col-det B_{LJ})
   = Σ_{σ,τ∈S_r} sgn(σ) sgn(τ) a_{l_1 i_{σ(1)}} ··· a_{l_r i_{σ(r)}} b_{l_{τ(1)} j_1} ··· b_{l_{τ(r)} j_r}    (3.1a)
   = Σ_{σ,τ∈S_r} sgn(σ) sgn(τ) a_{l_{τ(1)} i_{σ(τ(1))}} ··· a_{l_{τ(r)} i_{σ(τ(r))}} b_{l_{τ(1)} j_1} ··· b_{l_{τ(r)} j_r} ,    (3.1b)

where in (3.1b) we have reordered the (commuting) a's according to τ. Setting σ′ = σ ◦ τ [so that sgn(σ) sgn(τ) = sgn(σ′)] and using the Translation Lemma, and then observing that summing over sets L and permutations τ amounts to summing over r-tuples of distinct indices, we obtain

Σ_{L ⊆ [m], |L| = r} (col-det (A^T)_{IL})(col-det B_{LJ}) = Σ_{l_1, ..., l_r ∈ [m] distinct} f(l_1, ..., l_r)    (3.2)

where

f(l_1, ..., l_r) := Σ_{σ∈S_r} sgn(σ) a_{l_1 i_{σ(1)}} ··· a_{l_r i_{σ(r)}} b_{l_1 j_1} ··· b_{l_r j_r} .    (3.3)

Since f(l_1, ..., l_r) = 0 whenever two of the arguments take the same value (the a-factors then form a determinant with two equal columns), we may drop the distinctness requirement:

Σ_{L ⊆ [m], |L| = r} (col-det (A^T)_{IL})(col-det B_{LJ}) = Σ_{l_1, ..., l_r = 1}^{m} f(l_1, ..., l_r)    (3.4a)
   = Σ_{σ∈S_r} sgn(σ) (A^T B)_{i_{σ(1)} j_1} ··· (A^T B)_{i_{σ(r)} j_r} = col-det (A^T B)_{IJ} ,    (3.4b)

where we have here commuted the b's through the a's. Note that the order of the elements of B remains unchanged throughout these manipulations.

Let us also remark that this proof is valid even if r > m: the starting sum (3.2) is then empty, since there do not exist distinct elements l_1, ..., l_r ∈ [m]; but the sum (3.4a) is nonempty, since repetitions among the l_1, ..., l_r are now allowed, and we prove the nontrivial result that det (A^T B)_{IJ} = 0. (Of course, in the commutative case this is no surprise, since the matrix A^T B has rank at most m; but the corresponding noncommutative result will be less trivial.)
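The classical argument can be spot-checked numerically (our sketch), including the degenerate case r > m:

```python
# Numerical spot-check (ours) of the classical Cauchy-Binet formula (1.3),
# including the degenerate case r > m where both sides vanish.
from itertools import combinations

import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 4
A = rng.integers(-4, 5, (m, n))
B = rng.integers(-4, 5, (m, n))

def minor(M, rows, cols):
    return round(np.linalg.det(M[np.ix_(list(rows), list(cols))]))

I, J, r = (0, 2), (1, 3), 2
lhs = minor(A.T @ B, I, J)
rhs = sum(minor(A.T, I, L) * minor(B, L, J) for L in combinations(range(m), r))
assert lhs == rhs

# r = 3 > m: the sum over L is empty, and indeed det (A^T B)_{IJ} = 0
assert minor(A.T @ B, (0, 1, 2), (1, 2, 3)) == 0
```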
Now let us examine this proof more closely, in order to see what commutation properties of the matrix elements were really needed to make it work. In the passage from (3.1a) to (3.1b), the essence of the argument was that the product of the a's may be reordered (i.e. the columns of the submatrix of A^T may be permuted inside the column-determinant), for which, by Lemma 2.6(a), it suffices for A^T to have weakly row-symmetric commutators. In the argument that f(l_1, ..., l_r) = 0 whenever two or more arguments take the same value, we need to apply Lemma 2.6(c) to a matrix that is a submatrix of A^T with possibly repeated columns; therefore we need, in addition to weak row-symmetry, the additional hypothesis that the matrix elements of A^T within each column commute among themselves — or in other words, we need A^T to be row-pseudo-commutative (Definition 1.1). Finally, in the step from (3.4a) to (3.4b), we commuted the b's through the a's. We have therefore proven:
Proposition 3.1 (easy noncommutative Cauchy–Binet). Let R be a (not-necessarily-commutative) ring, and let A and B be m × n matrices with elements in R. Suppose that

(a) A^T is row-pseudo-commutative, i.e. A is column-pseudo-commutative, i.e. [a_{ij}, a_{kl}] = [a_{il}, a_{kj}] whenever i ≠ k and j ≠ l, and [a_{ij}, a_{il}] = 0 whenever j ≠ l;

(b) the matrix elements of A commute with those of B, i.e. [a_{ij}, b_{kl}] = 0 for all i, j, k, l.

Then, for any I, J ⊆ [n] of cardinality |I| = |J| = r, we have

Σ_{L ⊆ [m], |L| = r} (col-det (A^T)_{IL})(col-det B_{LJ}) = col-det (A^T B)_{IJ} .    (3.6)
Note that no hypothesis whatsoever is needed concerning the commutators [b_{ij}, b_{kl}]. There is also a dual result using row-det, in which B is required to be column-pseudo-commutative and no hypothesis is needed on the [a, a] commutators.

The hypothesis in Proposition 3.1 that A be column-pseudo-commutative really is necessary:
Example 3.2. Let α and β be any noncommuting elements of the ring R, and let

A = ( α β ; 0 0 )  and  B = ( 1 1 ; 0 0 ) .

Then A is row-commutative but not column-pseudo-commutative, while the elements of B commute with everything. We have det A^T = det B = 0, but

col-det(A^T B) = col-det ( α α ; β β ) = αβ − βα ≠ 0 .
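A direct check (ours) of this example, realizing α and β as noncommuting 2 × 2 real matrices and the entries 0, 1 of B as scalar multiples of the identity:

```python
# Block-matrix check (ours) of Example 3.2: det A^T = det B = 0, yet
# col-det(A^T B) = alpha@beta - beta@alpha is nonzero, so (3.6) fails
# when A is not column-pseudo-commutative.
import numpy as np

alpha = np.array([[0., 1.], [0., 0.]])
beta  = np.array([[0., 0.], [1., 0.]])
Z, I  = np.zeros((2, 2)), np.eye(2)

A  = [[alpha, beta], [Z, Z]]                    # A = (alpha beta; 0 0)
At = [[A[j][i] for j in range(2)] for i in range(2)]   # transpose
B  = [[I, I], [Z, Z]]                           # B = (1 1; 0 0)

def matmul2(P, Q):                              # 2x2 product over the matrix ring
    return [[sum(P[i][k] @ Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def col_det2(M):                                # M11 M22 - M21 M12
    return M[0][0] @ M[1][1] - M[1][0] @ M[0][1]

AtB = matmul2(At, B)                            # equals (alpha alpha; beta beta)
assert not col_det2(At).any() and not col_det2(B).any()   # det A^T = det B = 0
print(col_det2(AtB))                            # alpha@beta - beta@alpha != 0
assert col_det2(AtB).any()
```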