Other Kinds of Matrix Multiplication

Part of the document Matrix Algebra: Theory, Computations and Applications in Statistics (PDFDrive) (pages 120–124)

The most common kind of product of two matrices is the Cayley product, and when we speak of matrix multiplication without qualification, we mean the Cayley product. Three other types of matrix multiplication that are useful are Hadamard multiplication, Kronecker multiplication, and inner product multiplication.

3.2.10.1 The Hadamard Product

Hadamard multiplication is defined for matrices of the same shape as the multiplication of each element of one matrix by the corresponding element of the other matrix. Hadamard multiplication is often denoted by $\odot$; for two matrices $A_{n\times m}$ and $B_{n\times m}$ we have

$$A \odot B = \begin{bmatrix} a_{11}b_{11} & \cdots & a_{1m}b_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1}b_{n1} & \cdots & a_{nm}b_{nm} \end{bmatrix}.$$

Hadamard multiplication immediately inherits the commutativity, associativity, and distribution over addition of the ordinary multiplication of the underlying field of scalars. Hadamard multiplication is also called array multiplication and element-wise multiplication. Hadamard matrix multiplication is a mapping

$$\mathbb{R}^{n\times m} \times \mathbb{R}^{n\times m} \to \mathbb{R}^{n\times m}.$$

The identity for Hadamard multiplication is the matrix of appropriate shape whose elements are all 1s.
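In NumPy, for example, the Hadamard product is simply the element-wise `*` operator on arrays of the same shape (a minimal sketch; the matrix values are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[7.0, 8.0, 9.0],
              [1.0, 2.0, 3.0]])

# Element-wise (Hadamard) product: same shape in, same shape out.
H = A * B

# The matrix of all 1s of the same shape is the Hadamard identity.
J = np.ones_like(A)

print(np.array_equal(A * J, A))  # True
print(np.array_equal(A * B, B * A))  # True: Hadamard product is commutative
```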

3.2.10.2 The Kronecker Product

Kronecker multiplication, denoted by $\otimes$, is defined for any two matrices $A_{n\times m}$ and $B_{p\times q}$ as

$$A \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1m}B \\ \vdots & \ddots & \vdots \\ a_{n1}B & \cdots & a_{nm}B \end{bmatrix}.$$

The Kronecker product of $A$ and $B$ is $np\times mq$; that is, Kronecker matrix multiplication is a mapping

$$\mathbb{R}^{n\times m} \times \mathbb{R}^{p\times q} \to \mathbb{R}^{np\times mq}.$$

The Kronecker product is also called the “right direct product” or just direct product. (A left direct product is a Kronecker product with the factors reversed. In some of the earlier literature, “Kronecker product” was used to mean a left direct product.) Note the similarity of the Kronecker product of matrices with the direct product of sets, defined on page 5, in the sense that the result is formed from ordered pairs of elements from the two operands.

Kronecker multiplication is not commutative, but it is associative and it is distributive over addition, as we will see below. (Again, this parallels the direct product of sets.)

The identity for Kronecker multiplication is the 1×1 matrix with the element 1; that is, it is the same as the scalar 1.
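A quick NumPy illustration of the shape, the non-commutativity, and the $1\times 1$ identity, using `np.kron` (a sketch with illustrative values):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

AB = np.kron(A, B)   # 4x4: each a_ij is replaced by the block a_ij * B
BA = np.kron(B, A)

print(AB.shape)                # (4, 4)
print(np.array_equal(AB, BA))  # False: Kronecker product is not commutative

# The 1x1 matrix [1] is the Kronecker identity.
print(np.array_equal(np.kron(np.array([[1.0]]), A), A))  # True
```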

We can understand the properties of the Kronecker product by expressing the $(i,j)$ element of $A\otimes B$ in terms of the elements of $A$ and $B$:

$$(A\otimes B)_{i,j} = A_{\lfloor(i-1)/p\rfloor+1,\;\lfloor(j-1)/q\rfloor+1}\,B_{i-p\lfloor(i-1)/p\rfloor,\;j-q\lfloor(j-1)/q\rfloor}. \tag{3.96}$$

Some additional properties of Kronecker products that are immediate results of the definition are, assuming the matrices are conformable for the indicated operations,

$$(aA)\otimes(bB) = ab(A\otimes B) = (abA)\otimes B = A\otimes(abB), \quad \text{for scalars } a, b, \tag{3.97}$$

$$(A+B)\otimes C = A\otimes C + B\otimes C, \tag{3.98}$$

$$(A\otimes B)\otimes C = A\otimes(B\otimes C), \tag{3.99}$$

$$(A\otimes B)^{\mathrm{T}} = A^{\mathrm{T}}\otimes B^{\mathrm{T}}, \tag{3.100}$$

$$(A\otimes B)(C\otimes D) = AC\otimes BD, \tag{3.101}$$

$$I\otimes A = \mathrm{diag}(A,\ldots,A), \tag{3.102}$$

$$A\otimes I = (a_{ij}I). \tag{3.103}$$

These properties are all easy to see by using equation (3.96) to express the $(i,j)$ element of the matrix on either side of the equation, taking into account the sizes of the matrices involved. For example, in the first equation, if $A$ is $n\times m$ and $B$ is $p\times q$, the $(i,j)$ element on the left-hand side is

$$a\,A_{\lfloor(i-1)/p\rfloor+1,\;\lfloor(j-1)/q\rfloor+1}\;b\,B_{i-p\lfloor(i-1)/p\rfloor,\;j-q\lfloor(j-1)/q\rfloor}$$

and that on the right-hand side is

$$ab\,A_{\lfloor(i-1)/p\rfloor+1,\;\lfloor(j-1)/q\rfloor+1}\,B_{i-p\lfloor(i-1)/p\rfloor,\;j-q\lfloor(j-1)/q\rfloor}.$$

They are all this easy! Hence, they are Exercise 3.6.
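These identities are also easy to spot-check numerically. The sketch below verifies the element formula (3.96) and properties (3.97)–(3.101) with NumPy on random matrices of arbitrary conformable shapes (the shapes and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, q = 3, 2, 4, 5
A  = rng.standard_normal((n, m))
A2 = rng.standard_normal((n, m))   # same shape as A, for (3.98)
B  = rng.standard_normal((p, q))
C  = rng.standard_normal((m, 6))   # A @ C is conformable, for (3.101)
D  = rng.standard_normal((q, 2))   # B @ D is conformable, for (3.101)
a, b = 2.0, -3.0

K = np.kron(A, B)

# Equation (3.96), written with 1-based i, j as in the text
# (NumPy indexing is 0-based, hence the shifts by 1).
eq96 = all(
    np.isclose(K[i - 1, j - 1],
               A[(i - 1) // p, (j - 1) // q]
               * B[i - p * ((i - 1) // p) - 1, j - q * ((j - 1) // q) - 1])
    for i in range(1, n * p + 1) for j in range(1, m * q + 1)
)

checks = [
    eq96,
    np.allclose(np.kron(a * A, b * B), a * b * K),          # (3.97)
    np.allclose(np.kron(A + A2, B), K + np.kron(A2, B)),    # (3.98)
    np.allclose(np.kron(K, C), np.kron(A, np.kron(B, C))),  # (3.99)
    np.allclose(K.T, np.kron(A.T, B.T)),                    # (3.100)
    np.allclose(K @ np.kron(C, D), np.kron(A @ C, B @ D)),  # (3.101)
]
print(all(checks))  # True
```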

The determinant of the Kronecker product of two square matrices $A_{n\times n}$ and $B_{m\times m}$ has a simple relationship to the determinants of the individual matrices:

$$\det(A\otimes B) = \det(A)^m \det(B)^n. \tag{3.104}$$

The proof of this, like many facts about determinants, is straightforward but involves tedious manipulation of cofactors. The manipulations in this case can be facilitated by using the vec-permutation matrix. See Harville (1997) for a detailed formal proof.
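Although the formal proof is tedious, equation (3.104) is easy to illustrate numerically (a sketch with small random square matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))

# det(A (x) B) = det(A)^m * det(B)^n for A n x n, B m x m.
lhs = np.linalg.det(np.kron(A, B))
rhs = np.linalg.det(A) ** m * np.linalg.det(B) ** n
print(np.isclose(lhs, rhs))  # True
```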

From equation (3.100) we see that the Kronecker product of symmetric matrices is symmetric.

Another property of the Kronecker product of square matrices is

$$\mathrm{tr}(A\otimes B) = \mathrm{tr}(A)\,\mathrm{tr}(B). \tag{3.105}$$

This is true because the trace of the product is merely the sum of all possible products of the diagonal elements of the individual matrices.
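A one-line numerical check of the trace identity (3.105), with arbitrary square shapes chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((5, 5))

# tr(A (x) B) = tr(A) tr(B): the diagonal of A (x) B consists of
# all products a_ii * b_jj, so summing it factors into the two traces.
lhs = np.trace(np.kron(A, B))
rhs = np.trace(A) * np.trace(B)
print(np.isclose(lhs, rhs))  # True
```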

The Kronecker product and the vec function often find uses in the same application. For example, an $n\times m$ normal random matrix $X$ with parameters $M$, $\Sigma$, and $\Psi$ can be expressed in terms of an ordinary $nm$-variate normal random variable $Y = \mathrm{vec}(X)$ with parameters $\mathrm{vec}(M)$ and $\Sigma\otimes\Psi$. (We discuss matrix random variables briefly on page 220. For a fuller discussion, the reader is referred to a text on matrix random variables such as Carmeli (1983) or Kollo and von Rosen (2005).)

A useful relationship between the vec function and Kronecker multiplication is

$$\mathrm{vec}(ABC) = (C^{\mathrm{T}}\otimes A)\,\mathrm{vec}(B) \tag{3.106}$$

for matrices $A$, $B$, and $C$ that are conformable for the multiplication indicated.

This is easy to show and is left as an exercise.
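Equation (3.106) is also easy to confirm numerically. The one subtlety in NumPy is that vec stacks the *columns* of a matrix, which corresponds to reshaping in Fortran (column-major) order (a sketch; shapes and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
C = rng.standard_normal((5, 2))

def vec(X):
    # Stack the columns of X into a single vector (column-major order).
    return X.reshape(-1, order="F")

# vec(ABC) = (C^T (x) A) vec(B), equation (3.106).
lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
print(np.allclose(lhs, rhs))  # True
```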

3.2.10.3 The Inner Product of Matrices

An inner product of two matrices of the same shape is defined as the sum of the dot products of the vectors formed from the columns of one matrix with the vectors formed from the corresponding columns of the other matrix; that is, if $a_1,\ldots,a_m$ are the columns of $A$ and $b_1,\ldots,b_m$ are the columns of $B$, then the inner product of $A$ and $B$, denoted $\langle A, B\rangle$, is

$$\langle A, B\rangle = \sum_{j=1}^{m} \langle a_j, b_j\rangle. \tag{3.107}$$

Similarly as for vectors (page 23), the inner product is sometimes called a “dot product”, and the notation $A\cdot B$ is sometimes used to denote the matrix inner product. (I generally try to avoid use of the term dot product for matrices because the term may be used differently by different people. In Matlab, for example, “dot product”, implemented in the dot function, can refer either to the $1\times m$ matrix consisting of the individual terms in the sum in equation (3.107), or to the $n\times 1$ matrix consisting of the dot products of the vectors formed from the rows of $A$ and $B$. In the NumPy linear algebra package, the dot function implements Cayley multiplication! This is probably because someone working with Python realized the obvious fact that the defining equation of Cayley multiplication, equation (3.43) on page 75, is actually the dot product of the vector formed from the elements in the $i$th row of the first matrix and the vector formed from the elements in the $j$th column of the second matrix.)

For real matrices, equation (3.107) can be written as

$$\langle A, B\rangle = \sum_{j=1}^{m} a_j^{\mathrm{T}} b_j. \tag{3.108}$$

As in the case of the product of vectors, the product of matrices defined as in equation (3.108) over the complex field is not an inner product because the first property (on page 24 or as listed below) does not hold.

For conformable matrices $A$, $B$, and $C$, we can easily confirm that this product satisfies the general properties of an inner product listed on page 24:

• If $A \neq 0$, then $\langle A, A\rangle > 0$, and $\langle 0, A\rangle = \langle A, 0\rangle = \langle 0, 0\rangle = 0$.

• $\langle A, B\rangle = \langle B, A\rangle$.

• $\langle sA, B\rangle = s\langle A, B\rangle$, for a scalar $s$.

• $\langle (A+B), C\rangle = \langle A, C\rangle + \langle B, C\rangle$.

As with any inner product (restricted to objects in the field of the reals), its value is a real number. Thus the matrix inner product is a mapping

$$\mathbb{R}^{n\times m} \times \mathbb{R}^{n\times m} \to \mathbb{R}.$$

We see from the definition above that the inner product of real matrices satisfies

$$\langle A, B\rangle = \mathrm{tr}(A^{\mathrm{T}}B), \tag{3.109}$$

which could alternatively be taken as the definition.

Rewriting the definition of $\langle A, B\rangle$ as $\sum_{j=1}^{m}\sum_{i=1}^{n} a_{ij}b_{ij}$, we see that for real matrices

$$\langle A, B\rangle = \langle A^{\mathrm{T}}, B^{\mathrm{T}}\rangle. \tag{3.110}$$
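The equivalent forms of the matrix inner product in equations (3.107)–(3.110) can be confirmed with a few lines of NumPy (a sketch; the shapes and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

# Sum of dot products of corresponding columns, equations (3.107)-(3.108).
ip_cols = sum(A[:, j] @ B[:, j] for j in range(A.shape[1]))

# Trace form, equation (3.109), and the element-wise sum.
ip_trace = np.trace(A.T @ B)
ip_sum = np.sum(A * B)

print(np.isclose(ip_cols, ip_trace))           # True
print(np.isclose(ip_cols, ip_sum))             # True
print(np.isclose(ip_trace, np.trace(A @ B.T)))  # True: equation (3.110)
```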

Like any inner product, inner products of matrices obey the Cauchy-Schwarz inequality (see inequality (2.26), page 24),

$$\langle A, B\rangle \le \langle A, A\rangle^{1/2}\,\langle B, B\rangle^{1/2}.$$
