GENERAL SOLVABILITY THEOREM FOR Ax = b


Although up to this point we have concentrated mostly on solving Ax = b for nonsingular matrices A, the more general problem is of substantial practical and theoretical interest. Consider the system of equations

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ..........................................        (4.4.1)
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

or

    Ax = b,        (4.4.2)

where A is an m x n matrix with elements a_ij, x is an n-dimensional vector, and

b is an m-dimensional vector. We will consider the general situation in which m and n need not be the same and the rank r of A will be less than or equal to min(m, n). Square nonsingular and singular matrices constitute special cases of the general theory.

A non-square problem may or may not have a solution, and when a solution does exist it may not be unique. For instance, the system

    x_1 + x_2 + x_3 = 1
    2x_1 + x_2 + 2x_3 = 2        (4.4.3)

has the solution

    x_1 = 1 - x_3,    x_2 = 0        (4.4.4)

for arbitrary x_3, and thus a solution exists but is not unique. On the other hand, the system

    x_1 + 2x_2 + x_3 = 1
    2x_1 + 4x_2 + 2x_3 = 1        (4.4.5)

has no solution, since multiplying the first equation by 2 and subtracting it from the second yields the contradiction

    0 = -1.        (4.4.6)

Thus, the equations in Eq. (4.4.3) are compatible, whereas those in Eq. (4.4.5) are not. The theory of solvability of Ax = b must address this compatibility issue for the general case.
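The compatibility question in these two small examples is a rank test, anticipating the solvability theorem proved later in this section. A minimal numerical sketch (NumPy is our choice of tool here, not something the text itself uses):

```python
import numpy as np

def is_compatible(A, b):
    """Ax = b is solvable iff rank[A, b] == rank A (the solvability theorem below)."""
    A = np.asarray(A, dtype=float)
    aug = np.column_stack([A, np.asarray(b, dtype=float)])
    return bool(np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(A))

# The incompatible system of Eq. (4.4.5): x1 + 2x2 + x3 = 1, 2x1 + 4x2 + 2x3 = 1
print(is_compatible([[1, 2, 1], [2, 4, 2]], [1, 1]))   # False: the 0 = -1 contradiction
# Changing the second right-hand side to 2 makes the two equations proportional
print(is_compatible([[1, 2, 1], [2, 4, 2]], [1, 2]))   # True
```

Appending b to A can only raise the rank; solvability is exactly the case where it does not.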

134 CHAPTER 4 GENERAL THEORY OF SOLVABILITY OF LINEAR ALGEBRAIC EQUATIONS

As an aid in examining the general case, we will first discuss Gauss-Jordan elimination for the 3 x 4 matrix

        | a_11  a_12  a_13  a_14 |
    A = | a_21  a_22  a_23  a_24 |        (4.4.7)
        | a_31  a_32  a_33  a_34 |

At this point, we know nothing about A except the values of the a_ij. For the elimination process, we note that the matrix I_ij defined in the previous section has the useful properties that pre-multiplication of A by I_ij (i.e., I_ij A) interchanges rows i and j, whereas post-multiplication of A by I_ij (i.e., A I_ij) interchanges columns i and j. For example, if

    I_12 = | 0  1  0 |
           | 1  0  0 |        (4.4.8)
           | 0  0  1 |

then it is easy to see that

    I_12 A = | a_21  a_22  a_23  a_24 |
             | a_11  a_12  a_13  a_14 |        (4.4.9)
             | a_31  a_32  a_33  a_34 |

And if

    I_12 = | 0  1  0  0 |
           | 1  0  0  0 |        (4.4.10)
           | 0  0  1  0 |
           | 0  0  0  1 |

then

    A I_12 = | a_12  a_11  a_13  a_14 |
             | a_22  a_21  a_23  a_24 |        (4.4.11)
             | a_32  a_31  a_33  a_34 |

Note that, for an m x n matrix A, the matrix I_ij must be an m x m matrix for pre-multiplication and an n x n matrix for post-multiplication.
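The interchange properties of I_ij are easy to verify numerically; a sketch with 0-based indices (NumPy assumed):

```python
import numpy as np

def I_swap(n, i, j):
    """The n x n matrix I_ij: the identity with rows i and j interchanged (0-based)."""
    P = np.eye(n)
    P[[i, j]] = P[[j, i]]
    return P

A = np.arange(12).reshape(3, 4)                    # a generic 3 x 4 matrix
# pre-multiplication by the 3 x 3 I_12 interchanges rows 1 and 2
assert np.array_equal(I_swap(3, 0, 1) @ A, A[[1, 0, 2]])
# post-multiplication by the 4 x 4 I_12 interchanges columns 1 and 2
assert np.array_equal(A @ I_swap(4, 0, 1), A[:, [1, 0, 2, 3]])
# I_ij is its own inverse
assert np.array_equal(I_swap(3, 0, 1) @ I_swap(3, 0, 1), np.eye(3))
```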

For Gauss-Jordan elimination, one other matrix is needed. This is the square matrix J_ij(k), i != j, whose elements are j_ll = 1 for l = 1, ..., n, j_ij = k, and j_lm = 0 otherwise. To generate J_ij(k), we can start with the unit matrix I and add k as the {ij}th element. For example, the 3 x 3 version of J_23(k) is

    J_23(k) = | 1  0  0 |
              | 0  1  k |        (4.4.12)
              | 0  0  1 |

Notice that the product of J_23(k) with the matrix defined by Eq. (4.4.7) is

    J_23(k) A = | a_11            a_12            a_13            a_14           |
                | a_21 + k a_31   a_22 + k a_32   a_23 + k a_33   a_24 + k a_34  |        (4.4.13)
                | a_31            a_32            a_33            a_34           |

Thus, pre-multiplying A by J_ij(k) produces a matrix in which k times the elements of the jth row has been added to the elements of the ith row. According to the elementary properties of determinants, the determinants of A and J_ij(k) A are the same.

Since J_ij(k) can be generated by multiplying the ith column of I by k and adding the result to the jth column, it follows that the determinant of J_ij(k) is 1. We also note that the inverse of J_ij(k) is J_ij(-k), because

    J_ij(-k) J_ij(k) = I,        (4.4.14)

by inspection. Since I_ij I_ij = I, the matrix I_ij is its own inverse. The matrices I_ij and J_ij(k) are the tools we need to establish the solvability of any linear system represented by Ax = b.
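The three properties of J_ij(k) just established (row addition under pre-multiplication, unit determinant, and inverse J_ij(-k)) can be checked directly; a sketch with 0-based indices:

```python
import numpy as np

def J(n, i, j, k):
    """The n x n matrix J_ij(k): the identity with k placed at position (i, j), i != j."""
    M = np.eye(n)
    M[i, j] = k
    return M

A = np.arange(12, dtype=float).reshape(3, 4)
# pre-multiplying by J_23(k) adds k times row 3 to row 2 (1-based), as in Eq. (4.4.13)
B = J(3, 1, 2, 5.0) @ A
assert np.array_equal(B[1], A[1] + 5.0 * A[2])
assert np.isclose(np.linalg.det(J(3, 1, 2, 5.0)), 1.0)                # |J_ij(k)| = 1
assert np.array_equal(J(3, 1, 2, -5.0) @ J(3, 1, 2, 5.0), np.eye(3))  # Eq. (4.4.14)
```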

In Gauss-Jordan elimination, the matrix A is transformed into A_tr by multiplication with the square nonsingular matrices I_ij and J_ij(k). It is important to remember that this does not change the rank of A. Consider the matrix defined by Eq. (4.4.7). If the rank r of A is not 0, a nonzero element can be put in the {11} position by the interchange of rows and/or columns. As was shown above, this can be accomplished by pre- and/or post-multiplying A by the appropriate I_ij matrices. For simplicity of discussion, suppose that r > 0 (otherwise, A = [0]) and that a_11 != 0.

Then the first step of Gauss elimination is accomplished by the matrix operation

    J_21(-a_21/a_11) A = | a_11  a_12  a_13  a_14 |
                         | 0     b_22  b_23  b_24 |        (4.4.15)
                         | a_31  a_32  a_33  a_34 |

where

    J_21(-a_21/a_11) = | 1           0  0 |
                       | -a_21/a_11  1  0 |        (4.4.16)
                       | 0           0  1 |

and b_22 = a_22 - a_12 a_21/a_11, b_23 = a_23 - a_13 a_21/a_11, and b_24 = a_24 - a_14 a_21/a_11. In

the next step, Eq. (4.4.15) is multiplied by J_31(-a_31/a_11) to obtain

    J_31(-a_31/a_11) J_21(-a_21/a_11) A = | a_11  a_12  a_13  a_14 |
                                          | 0     b_22  b_23  b_24 |        (4.4.17)
                                          | 0     b_32  b_33  b_34 |

where b_3j = a_3j - a_1j a_31/a_11, j = 2, 3, 4.

If the rank r_A = 1, then all the elements b_ij in Eq. (4.4.17) are 0 and the transformed matrix is simply

    A_tr = | a_11  a_12  a_13  a_14 |
           | 0     0     0     0    |        (4.4.18)
           | 0     0     0     0    |

If r_A > 1, then at least one of the elements b_ij must be nonzero. By pre- and/or post-multiplication of Eq. (4.4.17) by the appropriate I_ij matrices, row and/or column interchanges can be carried out to place a nonzero element in the {22} position. For illustration, assume that b_22 = b_32 = b_23 = 0 and b_33 != 0. Then

    I_23 J_31(-a_31/a_11) J_21(-a_21/a_11) A I_23 = | a_11  a_13  a_12  a_14 |
                                                    | 0     b_33  0     b_34 |        (4.4.19)
                                                    | 0     0     0     b_24 |

where the pre-multiplying I_23 interchanges rows 2 and 3 and the post-multiplying I_23 interchanges columns 2 and 3, so that b_33 now sits in the {22} position.

Pre-multiplication of the above matrix by J_12(-a_13/b_33) yields

    | a_11  0     a_12  b_14 |
    | 0     b_33  0     b_34 |        (4.4.20)
    | 0     0     0     b_24 |

where b_14 = a_14 - a_13 b_34/b_33.

If the rank r_A of A is 2, then b_24 = 0, i.e., the transformed matrix is

    A_tr = | a_11  0     a_12  b_14 |
           | 0     b_33  0     b_34 |        (4.4.21)
           | 0     0     0     0    |

If, however, r_A = 3, then b_24 != 0, and post-multiplication of Eq. (4.4.20) by I_34 yields

    | a_11  0     b_14  a_12 |
    | 0     b_33  b_34  0    |        (4.4.22)
    | 0     0     b_24  0    |

Pre-multiplication by J_13(-b_14/b_24) followed by J_23(-b_34/b_24) gives the transformed matrix

    A_tr = | a_11  0     0     a_12 |
           | 0     b_33  0     0    |        (4.4.23)
           | 0     0     b_24  0    |

Equations (4.4.18), (4.4.21), and (4.4.23) are the Gauss-Jordan transformations of A for the cases r_A = 1, 2, and 3, respectively (with the special conditions b_22 = b_32 = b_23 = 0 introduced for illustration purposes). The relationship between A_tr and A, say in the case of Eq. (4.4.23), is

    A_tr = P A Q,        (4.4.24)

where

    P = J_23(-b_34/b_24) J_13(-b_14/b_24) J_12(-a_13/b_33) I_23 J_31(-a_31/a_11) J_21(-a_21/a_11)        (4.4.25)

and

    Q = I_23 I_34 = | 1  0  0  0 |
                    | 0  0  0  1 |        (4.4.26)
                    | 0  1  0  0 |
                    | 0  0  1  0 |

P is a product of nonsingular 3 x 3 matrices, and so it is a nonsingular 3 x 3 matrix itself. Likewise, Q is a nonsingular 4 x 4 matrix. For the particular cases resulting in Eqs. (4.4.25) and (4.4.26), we find |P| = -1 and |Q| = +1.

From this 3 x 4 example, the matrix transformation in the Gauss-Jordan elimination method becomes clear for the general matrix

        | a_11  a_12  ...  a_1n |
    A = | a_21  a_22  ...  a_2n |        (4.4.27)
        | ...   ...        ...  |
        | a_m1  a_m2  ...  a_mn |

If the rank of A is r, then, through a sequence of pre- and post-multiplications by the I_ij and J_ij matrices, the matrix can be transformed into

                  | γ_11  0     ...  0     γ_1,r+1  ...  γ_1n |
                  | 0     γ_22  ...  0     γ_2,r+1  ...  γ_2n |
                  | ...              ...   ...           ...  |
    A_tr = PAQ =  | 0     0     ...  γ_rr  γ_r,r+1  ...  γ_rn |        (4.4.28)
                  | 0     0     ...  0     0        ...  0    |
                  | ...              ...   ...           ...  |
                  | 0     0     ...  0     0        ...  0    |


in which P is a product of the I_ij and J_ij matrices and Q is a product of the I_ij matrices, and where γ_ij = 0 for i != j, i, j = 1, ..., r, and γ_ij = 0 for i > r. The values of γ_ij for i <= r, j > r may or may not be 0. Thus, the matrix A_tr can be partitioned as

    A_tr = | A_1  A_2 |
           | O_3  O_4 |        (4.4.29)

where A_1 is an r x r matrix having nonzero elements γ_ii only on the main diagonal; A_2 is an r x (n - r) matrix with the elements γ_ij, i = 1, ..., r and j = r + 1, ..., n; O_3 is an (m - r) x r matrix with only zero elements; and O_4 is an (m - r) x (n - r) matrix with only zero elements.

Since the determinants |P| and |Q| must be equal to ±1, the rank of A_tr is the same as the rank of A. Thus, Gauss-Jordan elimination must lead to exactly r nonzero diagonal elements γ_ii: if there were fewer than r, the rank of A would be less than r, and if there were more, it would be greater than r. Either case would contradict the hypothesis that the rank of A is r.
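The whole reduction to the partitioned form of Eq. (4.4.29) can be sketched as code by tracking P and Q while applying the I_ij interchanges and J_ij(k) eliminations. This illustrates the algebra only; it is not a numerically robust routine (it takes any pivot above a fixed tolerance rather than pivoting for stability):

```python
import numpy as np

def gauss_jordan_form(A):
    """Reduce A to A_tr = P @ A @ Q with an r x r diagonal block, Eq. (4.4.29)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    P, Q, Atr = np.eye(m), np.eye(n), A.copy()
    r = 0
    while r < min(m, n):
        idx = np.argwhere(np.abs(Atr[r:, r:]) > 1e-12)   # any usable pivot
        if idx.size == 0:
            break                                        # remaining block is zero
        i, j = idx[0][0] + r, idx[0][1] + r
        Atr[[r, i]] = Atr[[i, r]]                        # I_ij row interchange
        P[[r, i]] = P[[i, r]]
        Atr[:, [r, j]] = Atr[:, [j, r]]                  # I_ij column interchange
        Q[:, [r, j]] = Q[:, [j, r]]
        for i2 in range(m):                              # J_ij(k) eliminations in column r
            if i2 != r and Atr[i2, r] != 0:
                k = -Atr[i2, r] / Atr[r, r]
                Atr[i2] += k * Atr[r]
                P[i2] += k * P[r]
        r += 1
    return Atr, P, Q

A = np.array([[1.0, 2, 1, 3],
              [2.0, 4, 2, 6],      # twice the first row, so rank 2
              [1.0, 2, 2, 4]])
Atr, P, Q = gauss_jordan_form(A)
assert np.allclose(P @ A @ Q, Atr)                       # A_tr = PAQ, Eq. (4.4.24)
assert np.linalg.matrix_rank(Atr) == np.linalg.matrix_rank(A) == 2
assert np.allclose(Atr[2], 0)                            # the m - r zero rows
```

The sample matrix is invented for illustration; P and Q come out nonsingular because they are products of I_ij and J_ij(k) matrices only.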

The objective of this section is to determine the solvability of the system of equations

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    ..........................................        (4.4.30)
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

represented by

    Ax = b.        (4.4.31)

If P and Q are the matrices in Eq. (4.4.28), it follows that Eq. (4.4.31) can be

transformed into

    P A Q Q^{-1} x = P b        (4.4.32)

or

    A_tr y = α,        (4.4.33)

where

    y = Q^{-1} x   and   α = P b.        (4.4.34)

In tableau form Eq. (4.4.33) reads

    γ_11 y_1 + γ_1,r+1 y_{r+1} + ... + γ_1n y_n = α_1
    γ_22 y_2 + γ_2,r+1 y_{r+1} + ... + γ_2n y_n = α_2
    .................................................
    γ_rr y_r + γ_r,r+1 y_{r+1} + ... + γ_rn y_n = α_r        (4.4.35)
    0 = α_{r+1}
    ...........
    0 = α_m

If the vector b is such that not all α_i, i > r, are 0, then the equations in Eq. (4.4.35) are inconsistent and so no solution exists. If α_i = 0 for all i > r, then the solution to Eq. (4.4.35) is

    y_1 = -β_{1,r+1} y_{r+1} - ... - β_{1n} y_n + α_1/γ_11
    y_2 = -β_{2,r+1} y_{r+1} - ... - β_{2n} y_n + α_2/γ_22
    .......................................................        (4.4.36)
    y_r = -β_{r,r+1} y_{r+1} - ... - β_{rn} y_n + α_r/γ_rr

for arbitrary values of y_{r+1}, ..., y_n, and where

    β_ij = γ_ij/γ_ii.        (4.4.37)

Of course, if Eq. (4.4.35) or Eq. (4.4.33) has a solution y, then the vector x = Qy is a solution to Ax = b (i.e., to Eq. (4.4.31)).
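Given A_tr in the partitioned form of Eq. (4.4.29) and the transformed right-hand side α, the particular solution implicit in Eq. (4.4.36) and the coefficients β_ij = γ_ij/γ_ii of Eq. (4.4.37) can be assembled mechanically. A sketch (the sample matrix is invented for illustration):

```python
import numpy as np

def transformed_solutions(Atr, alpha, r):
    """For A_tr in the form of Eq. (4.4.29) (diagonal r x r block, zero rows below),
    return a particular solution and the n - r homogeneous solutions of Eq. (4.4.46)."""
    m, n = Atr.shape
    if not np.allclose(alpha[r:], 0):
        raise ValueError("inconsistent: some alpha_i with i > r is nonzero")
    gamma = np.diag(Atr)[:r]                         # the gamma_ii pivots
    beta = Atr[:r, r:] / gamma[:, None]              # beta_ij = gamma_ij / gamma_ii
    y_p = np.concatenate([alpha[:r] / gamma, np.zeros(n - r)])
    # y_h^(j): -beta in the first r slots, 1 in slot r + j, zeros elsewhere
    y_h = [np.concatenate([-beta[:, j], np.eye(n - r)[j]]) for j in range(n - r)]
    return y_p, y_h

Atr = np.array([[2.0, 0, 3, 1],
                [0.0, 5, 0, 4],
                [0.0, 0, 0, 0]])
alpha = np.array([4.0, 10, 0])
y_p, y_h = transformed_solutions(Atr, alpha, r=2)
assert np.allclose(Atr @ y_p, alpha)                 # particular solution works
for y in y_h:
    assert np.allclose(Atr @ y, 0)                   # each y_h^(j) is homogeneous
```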

Let us explore further the conditions of solvability of Eq. (4.4.33). The augmented matrix corresponding to this equation is

                 | γ_11  0     ...  0     γ_1,r+1  ...  γ_1n  α_1     |
                 | 0     γ_22  ...  0     γ_2,r+1  ...  γ_2n  α_2     |
                 | ...              ...   ...           ...   ...     |
    [A_tr, α] =  | 0     0     ...  γ_rr  γ_r,r+1  ...  γ_rn  α_r     |        (4.4.38)
                 | 0     0     ...  0     0        ...  0     α_{r+1} |
                 | ...              ...   ...           ...   ...     |
                 | 0     0     ...  0     0        ...  0     α_m     |

As indicated above, Eq. (4.4.33) has a solution if and only if α_i = 0 for i > r. This is equivalent to the condition that the rank of the augmented matrix [A_tr, α] is


the same as the rank of the matrix A_tr, which is the same as the rank r of A. If A_tr y = α has a solution, so does Ax = b.

When the rank of [A_tr, α] is r, what is the rank of the augmented matrix [A, b]? From the corollary of the previous section, it follows that the rank of C, where

    C = P[A, b] = [PA, Pb] = [PA, α],        (4.4.39)

is the same as the rank of [A, b], since P is a square nonsingular matrix. Note also that the augmented matrix

    [A_tr, α] = [PAQ, α]        (4.4.40)

has the same rank as C, since PAQ and PA differ only in an interchange of columns, an elementary operation that does not change the rank of a matrix. Thus, the rank of [A, b] is identical to the rank of [A_tr, α].

Collectively, what we have shown above implies the solvability theorem:

THEOREM. The equation Ax = b has a solution if and only if the rank of the augmented matrix [A, b] is the same as the rank of A.

Since the solution is not unique when A is a singular square matrix or a non-square matrix, we need to explore further the nature of the solutions in these cases.

First, we assume that the solvability condition is obeyed (α_i = 0, i > r) and consider again Eq. (4.4.36). A particular solution to this set of equations is

    y_i = α_i/γ_ii,  i = 1, ..., r,
    y_i = 0,         i = r + 1, ..., n,        (4.4.41)

or

          | α_1/γ_11 |
          | ...      |
    y_p = | α_r/γ_rr |        (4.4.42)
          | 0        |
          | ...      |
          | 0        |

However, if y_p is a solution to Eq. (4.4.36), then so is y_p + y_h, where y_h is any solution to the homogeneous equations

    y_1 = -β_{1,r+1} y_{r+1} - ... - β_{1n} y_n
    ...........................................        (4.4.43)
    y_r = -β_{r,r+1} y_{r+1} - ... - β_{rn} y_n.

One simple solution to this set of equations can be obtained by letting y_{r+1} = 1 and y_i = 0, i > r + 1. The solution is

              | -β_{1,r+1} |
              | ...        |
    y_h^(1) = | -β_{r,r+1} |        (4.4.44)
              | 1          |
              | 0          |
              | ...        |
              | 0          |

Similarly, choosing y_{r+1} = 0, y_{r+2} = 1, and y_i = 0, i > r + 2, gives the solution

              | -β_{1,r+2} |
              | ...        |
    y_h^(2) = | -β_{r,r+2} |        (4.4.45)
              | 0          |
              | 1          |
              | 0          |
              | ...        |
              | 0          |

Using this method, the set of solutions y_h^(1), ..., y_h^(n-r) can be generated, in which

              | -β_{1,r+j} |
              | ...        |
    y_h^(j) = | -β_{r,r+j} |
              | 0          |
              | ...        |        (4.4.46)
              | 1          |   <- (r + j)th row
              | ...        |
              | 0          |

for j = 1, ..., n - r. There are two important aspects of these homogeneous solutions. First, they are linearly independent, since the (r + j)th element of y_h^(j) is 1, whereas the (r + j)th element of every other y_h^(k), k != j, is 0, and so any linear combination of the y_h^(k), k != j, will still have a zero (r + j)th element. The second aspect is that any solution to Eq. (4.4.43) can be expressed as a linear combination of the vectors y_h^(j). To see this, suppose that y_{r+1}, ..., y_n are given arbitrary values


in Eq. (4.4.43). The solution can be written as

          | -β_{1,r+1} y_{r+1} - ... - β_{1n} y_n |
          | ...                                   |
    y_h = | -β_{r,r+1} y_{r+1} - ... - β_{rn} y_n |        (4.4.47)
          | y_{r+1}                               |
          | ...                                   |
          | y_n                                   |

which, in turn, can be expressed as

    y_h = sum_{j=1}^{n-r} y_{r+j} y_h^(j).        (4.4.48)

The essence of this result is that the equation A_tr y = α has exactly n - r linearly independent homogeneous solutions. Any other homogeneous solution will be a linear combination of these linearly independent solutions.

In general, the solution to A_tr y = α can be expressed as

    y = y_p + sum_{j=1}^{n-r} c_j y_h^(j),        (4.4.49)

where y_p is a particular solution obeying A_tr y_p = α, the y_h^(j) are the linearly independent solutions obeying A_tr y_h^(j) = 0, and the c_j are arbitrary complex numbers.

Because y_h^(j) is a solution to the homogeneous equation A_tr y_h = 0, the vectors x_h^(j) = Q y_h^(j) are solutions to the homogeneous equation A x_h^(j) = 0. The set {x_h^(j)}, j = 1, ..., n - r, is also linearly independent. To prove this, assume that the set is linearly dependent. Then there exist numbers a_1, ..., a_{n-r}, not all of which are 0, such that

    sum_{j=1}^{n-r} a_j x_h^(j) = 0.        (4.4.50)

But multiplying Eq. (4.4.50) by Q^{-1} and recalling that y_h^(j) = Q^{-1} x_h^(j) yields

    sum_{j=1}^{n-r} a_j y_h^(j) = 0.        (4.4.51)

Since the vectors y_h^(j) are linearly independent, the only set {a_j} obeying Eq. (4.4.51) is a_1 = ... = a_{n-r} = 0, which contradicts the hypothesis that the x_h^(j) are linearly dependent. Thus, the vectors x_h^(j), j = 1, ..., n - r, must be linearly independent.


We summarize the findings of this section with the complete form of the solvability theorem:

SOLVABILITY THEOREM. The equation

    Ax = b        (4.4.52)

has a solution if and only if the rank of the augmented matrix [A, b] is equal to the rank r of the matrix A. The general solution has the form

    x = x_p + sum_{j=1}^{n-r} c_j x_h^(j),        (4.4.53)

where x_p is a particular solution satisfying the inhomogeneous equation A x_p = b, the set {x_h^(j)} consists of n - r linearly independent vectors satisfying the homogeneous equation A x_h^(j) = 0, and the coefficients c_j are arbitrary complex numbers.

For those who find proofs tedious, this theorem is the "take-home lesson" of this section. Its beauty is its completeness and generality. A is an m x n matrix, where m need not equal n and the rank r can be equal to or less than min(m, n). If the rank of [A, b] equals that of A, we know exactly how many solutions to look for; if not, we know not to look for any, and we know to search for where the problem lies if we thought we had posed a solvable physical problem.
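The theorem translates directly into a numerical recipe. The sketch below uses the SVD to obtain a null-space basis instead of Gauss-Jordan elimination (a substitution of method, not what the text does), but it returns exactly the objects of Eq. (4.4.53): a particular solution x_p and n - r independent homogeneous solutions:

```python
import numpy as np

def general_solution(A, b, tol=1e-10):
    """Return (x_p, N): a particular solution and a null-space basis as columns of N,
    or None when rank[A, b] > rank A (no solution, by the solvability theorem)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    r = np.linalg.matrix_rank(A, tol=tol)
    if np.linalg.matrix_rank(np.column_stack([A, b]), tol=tol) != r:
        return None
    x_p = np.linalg.lstsq(A, b, rcond=None)[0]     # one particular solution
    _, _, Vt = np.linalg.svd(A)
    N = Vt[r:].T                                   # the n - r homogeneous directions
    return x_p, N

A = np.array([[1.0, 1, 1], [2, 1, 2]])             # rank 2, so n - r = 1
b = np.array([1.0, 2])
x_p, N = general_solution(A, b)
assert np.allclose(A @ (x_p + 0.7 * N[:, 0]), b)   # any member of Eq. (4.4.53) works
assert N.shape == (3, 1)
assert general_solution([[1.0, 2, 1], [2, 4, 2]], [1.0, 1]) is None   # Eq. (4.4.5)
```

Every solution of Ax = b is x_p plus an arbitrary combination of the columns of N, in one-to-one correspondence with the c_j of Eq. (4.4.53).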

EXAMPLE 4.4.1. Consider the electric circuit shown in Fig. 4.4.1. The V's denote voltages at the conductor junctions indicated by solid circles, and the conductance c of each of the conductors (straight lines) is given. The current I across a conductor is given by Ohm's law, I = c ΔV, where ΔV is the voltage drop between conductor junctions. A current i enters the circuit where the voltage is V_I and leaves where it is V_O. The values of V_I and V_O are set by external conditions.

FIGURE 4.4.1  [Circuit diagram: eight junctions V_1, ..., V_8 connected by conductors between the input voltage V_I and the output voltage V_O.]


From the conservation of current at each junction and from the values of the conductances, we can determine the voltages V_1, ..., V_8. The conservation conditions at each junction are

    2(V_2 - V_1) + (V_6 - V_1) + 5(V_3 - V_1) + 4(V_4 - V_1) + (V_I - V_1) = 0
    2(V_1 - V_2) + (V_6 - V_2) = 0
    5(V_1 - V_3) + 3(V_6 - V_3) = 0
    4(V_1 - V_4) + 2(V_5 - V_4) = 0        (4.4.54)
    2(V_4 - V_5) = 0
    (V_1 - V_6) + (V_2 - V_6) + 3(V_3 - V_6) + 2(V_O - V_6) = 0
    3(V_8 - V_7) = 0

or

    -13V_1 + 2V_2 + 5V_3 + 4V_4 + V_6 = -V_I
      2V_1 - 3V_2 + V_6 = 0
      5V_1 - 8V_3 + 3V_6 = 0
      4V_1 - 6V_4 + 2V_5 = 0        (4.4.55)
      2V_4 - 2V_5 = 0
      V_1 + V_2 + 3V_3 - 7V_6 = -2V_O
      -3V_7 + 3V_8 = 0,

which, in matrix form, is written as

    AV = b,        (4.4.56)

where

        | -13   2   5   4   0   1   0  0 |
        |   2  -3   0   0   0   1   0  0 |
        |   5   0  -8   0   0   3   0  0 |
    A = |   4   0   0  -6   2   0   0  0 |        (4.4.57)
        |   0   0   0   2  -2   0   0  0 |
        |   1   1   3   0   0  -7   0  0 |
        |   0   0   0   0   0   0  -3  3 |

and

        | -V_I  |
        | 0     |
        | 0     |
    b = | 0     |        (4.4.58)
        | 0     |
        | -2V_O |
        | 0     |

We will assume that V_I = 1 and V_O = 0 and use Gauss-Jordan elimination to find the voltages V_1, ..., V_8. Note that since A is a 7 x 8 matrix, its rank is at most 7, and so if the system has a solution it is not unique. The physical reason is that the conductor between junctions 7 and 8 is "floating." Our analysis will tell us what we know physically, namely, that the voltages V_7 and V_8 are equal to each other but are otherwise undetermined by the context of the problem.

Solution. We perform Gauss-Jordan elimination on A, transforming the problem into the form of Eq. (4.4.32). The resulting transformation matrix P (Eq. (4.4.59)) is a nonsingular 7 x 7 matrix whose entries are the rational numbers accumulated from the elimination steps, and

    Q = I_8,

the n = 8 identity matrix (no column interchanges are needed). The matrix A then becomes

          | -13  0       0      0         0         0        0   0 |
          | 0   -35/13   0      0         0         0        0   0 |
          | 0    0      -41/7   0         0         0        0   0 |
    PAQ = | 0    0       0     -846/205   0         0        0   0 |        (4.4.60)
          | 0    0       0      0        -436/423   0        0   0 |
          | 0    0       0      0         0        -303/109  0   0 |
          | 0    0       0      0         0         0       -3   3 |

(4.4.60) and the vector b is transformed into

a = Pb - (

./22iOT/

V 303 *^0

V1313 *^C

V2121 *^0 / 9588 1/

V4141 ^ O 74120 y 128169 ^O

-(2Vo

+

4-

+ + +

-h

1229 T/\

303 1/

101 ^i)

1715 17 \ 2121 '^1/

37506 y \ 20705 ^i)

57988 y \ 128169 ^l)

109 *^I/

(4.4.61)

(4.4.62) 0

Since for this example y = x, the voltages can be obtained by inspection. Substituting V_I = 1 and V_O = 0, we find

    V_1 = 0.439,  V_2 = 0.386,  V_3 = 0.380,  V_4 = 0.439,  V_5 = 0.439,  V_6 = 0.281.

The last equation in A_tr y = α reduces to V_7 = V_8. Thus, the values of V_7 and V_8 cannot be determined uniquely from the equation system. By Gauss-Jordan elimination, we found that the rank of A is equal to the number of rows (7), and augmenting A with b does not change the rank. Therefore, although a solution does exist, it is not unique, since A is a non-square matrix with m < n.
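The circuit calculation can be repeated numerically with the coefficient matrix of Eq. (4.4.57) as reconstructed above (the matrix entries are taken from the conservation equations, so they share whatever assumptions went into that reconstruction):

```python
import numpy as np

A = np.array([[-13,  2,  5,  4,  0,  1,  0, 0],
              [  2, -3,  0,  0,  0,  1,  0, 0],
              [  5,  0, -8,  0,  0,  3,  0, 0],
              [  4,  0,  0, -6,  2,  0,  0, 0],
              [  0,  0,  0,  2, -2,  0,  0, 0],
              [  1,  1,  3,  0,  0, -7,  0, 0],
              [  0,  0,  0,  0,  0,  0, -3, 3]], dtype=float)
b = np.array([-1.0, 0, 0, 0, 0, 0, 0])   # Eq. (4.4.58) with V_I = 1, V_O = 0

# rank A = rank [A, b] = 7 < n = 8: a solution exists but is not unique
assert np.linalg.matrix_rank(A) == 7
assert np.linalg.matrix_rank(np.column_stack([A, b])) == 7

V = np.linalg.lstsq(A, b, rcond=None)[0]  # one particular solution
assert np.allclose(V[:6], [0.439, 0.386, 0.380, 0.439, 0.439, 0.281], atol=1e-3)
# the single homogeneous direction raises V7 and V8 together: V7 = V8 is free
assert np.allclose(A @ np.r_[np.zeros(6), 1.0, 1.0], 0)
```

The null vector with equal seventh and eighth components is the algebraic statement that the floating conductor's voltage is undetermined.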

EXAMPLE 4.4.2 (Chemical Reaction Equilibria). We, of course, know that molecules are made up of atoms. For example, water is composed of two hydrogen atoms, H, and one oxygen atom, O. We say that the chemical formula for water is H2O. Likewise, the formula for methane is CH4, indicating that methane is composed of one carbon atom, C, and four hydrogen atoms. We also know that chemical reactions can interconvert molecules; e.g., in the reaction

    2H2 + O2 = 2H2O,        (4.4.63)

two hydrogen molecules combine with one molecule of oxygen to form two molecules of water. And in the reaction

    2CH4 + O2 = 2CH3OH,        (4.4.64)

two molecules of methane combine with one molecule of oxygen to form two molecules of methanol.

Suppose that there are m atoms, labeled a_1, ..., a_m, some or all of which are contained in molecule M_j. The chemical formula for M_j is then

    M_j = (a_1)_{a_1j} (a_2)_{a_2j} ... (a_m)_{a_mj},        (4.4.65)

where a_ij is the number of atoms of type a_i in M_j. M_j is thus totally specified by the column vector

          | a_1j |
    a_j = | a_2j |        (4.4.66)
          | ...  |
          | a_mj |

For example, if H, O, and C are the atoms 1, 2, and 3, and methane is designated as molecule 1, then the corresponding vector

          | 4 |
    a_1 = | 0 |        (4.4.67)
          | 1 |

tells us that the formula for methane is (H)4(O)0(C)1 or CH4.

If we are interested in reactions involving the n molecules M_1, ..., M_n, the vectors specifying the atomic compositions of the molecules form the atomic matrix

                          | a_11  a_12  ...  a_1n |
    A = [a_1, ..., a_n] = | a_21  a_22  ...  a_2n |        (4.4.68)
                          | ...               ... |
                          | a_m1  a_m2  ...  a_mn |

We know from the solvability theory developed above that if the rank of A is r, then only r of the column vectors of A are linearly independent. The remaining n - r vectors are linear combinations of these r vectors; i.e., if {a_1, ..., a_r} denotes the set of linearly independent vectors, then there exist numbers β_kj such that

    a_k = sum_{j=1}^{r} β_kj a_j,   k = r + 1, ..., n.        (4.4.69)

These equations represent chemical reactions among the molecular species. Since each vector represents a different molecule, Eq. (4.4.69) implies that the number of independent molecular components is r and that a minimum of n - r reactions accounts for all of the different molecular species, since each equation in Eq. (4.4.69) contains a species not present in the other equations.

As an application, consider the atoms H, O, and C and the molecules H2, O2, H2O, CH4, and CH3OH. The atomic matrix is given by

             H  O  C
    H2       2  0  0
    O2       0  2  0
    H2O      2  1  0
    CH4      4  0  1
    CH3OH    4  1  1

The rank of this matrix is 3, and so there are three independent molecular components and there have to be at least 5 - 3 = 2 reactions to account for all the molecular species. H2, O2, and CH4 can be chosen to be the independent components (because a_1, a_2, and a_4 are linearly independent), and the equilibria of the two reactions in Eqs. (4.4.63) and (4.4.64) suffice to account thermodynamically for reactions among the species. H2, O2, and H2O cannot be chosen, since a_3 = a_1 + a_2/2, reflecting the physical fact that carbon is missing from these three molecules.

ILLUSTRATION 4.4.1 (Virial Coefficients of a Gas Mixture). Statistical mechanics provides a rigorous set of mixing rules for describing gas mixtures with the virial equation of state. The compressibility of a mixture at low density can be written as

    z = 1 + B_mix ρ + C_mix ρ^2 + ...,        (4.4.70)

where ρ is the molar density of the fluid and the virial coefficients B_mix, C_mix, etc., are functions only of temperature and composition. At low enough density, we can truncate the series after the second term. From statistical mechanics, we can define pair coefficients B_ij(T) that are functions only of temperature. The second virial coefficient for an N-component mixture is then given by

    B_mix = sum_i sum_j y_i y_j B_ij(T),        (4.4.71)

where y_i refers to the mole fraction of component i.

We desire to find the virial coefficients for a three-component gas mixture from the experimental values of B_mix given in Table 4.4.1. For a three-component system, the relevant coefficients are B_11, B_22, B_33, B_12, B_13, and B_23. The mixing rule is given by

    B_mix = y_1^2 B_11 + y_2^2 B_22 + y_3^2 B_33 + 2 y_1 y_2 B_12 + 2 y_1 y_3 B_13 + 2 y_2 y_3 B_23.        (4.4.72)

TABLE 4.4.1  Second Virial Coefficient at 200 K for Ternary Gas Mixture

    B_mix    y_1    y_2    y_3
    10.3     0      0.25   0.75
    17.8     0      0.50   0.50
    26.2     0      0.75   0.25
    13.7     0.50   0.25   0.25
    7.64     0.75   0      0.25

We can recast this problem in matrix form as follows. We define the vectors

        | B_mix,1 |             | B_11 |
        | B_mix,2 |             | B_22 |
    b = | B_mix,3 |   and   x = | B_33 |        (4.4.73)
        | B_mix,4 |             | B_12 |
        | B_mix,5 |             | B_13 |
                                | B_23 |

and the 5 x 6 matrix A by

        | y_1,1^2  y_2,1^2  y_3,1^2  2y_1,1 y_2,1  2y_1,1 y_3,1  2y_2,1 y_3,1 |
        | y_1,2^2  y_2,2^2  y_3,2^2  2y_1,2 y_2,2  2y_1,2 y_3,2  2y_2,2 y_3,2 |
    A = | y_1,3^2  y_2,3^2  y_3,3^2  2y_1,3 y_2,3  2y_1,3 y_3,3  2y_2,3 y_3,3 |        (4.4.74)
        | y_1,4^2  y_2,4^2  y_3,4^2  2y_1,4 y_2,4  2y_1,4 y_3,4  2y_2,4 y_3,4 |
        | y_1,5^2  y_2,5^2  y_3,5^2  2y_1,5 y_2,5  2y_1,5 y_3,5  2y_2,5 y_3,5 |

where y_i,j refers to the mole fraction of the ith component in the jth measurement in Table 4.4.1.

Similarly, the subscripts in the components of b refer to the measurements. Solving for the virial coefficients has then been reduced to solving the linear system

    Ax = b.        (4.4.75)

(i) Using the solvability theorems of this chapter, determine if a solution exists for Eq. (4.4.75) using the data in Table 4.4.1. If a solution exists, is it unique? Find the most general solution to the equation if one exists. If a solution does not exist, explain why.

We can find the "best fit" solution to Eq. (4.4.75) by applying the least squares analysis from Chapter 3. We define the quantity

    L = sum_{i=1}^{5} ( b_i - sum_{j=1}^{6} a_ij x_j )^2,        (4.4.76)

which represents a measure of the error in the parameters x_j. The vector x that minimizes L can be found from the requirement that each of the derivatives of L with respect to x_k be equal to 0. The solution (see Chapter 3, Problem 8) is

    A^T A x = A^T b.        (4.4.77)


Here x contains the best fit parameters B_ij for the given data.

(ii) Show that Eq. (4.4.77) has a unique solution for the data given in Table 4.4.1. Find the best fit virial coefficients. How does this best fit solution compare to the general solution (if it exists) found in (i)?
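The setup for both parts can be sketched as follows. It assumes the symmetric mixing rule of Eq. (4.4.72) with the factors of 2 written into the design matrix, prints the ranks needed for part (i) without giving the answer away, and checks that the least-squares solution satisfies the normal equations of Eq. (4.4.77):

```python
import numpy as np

# data of Table 4.4.1
Bmix = np.array([10.3, 17.8, 26.2, 13.7, 7.64])
y1 = np.array([0, 0, 0, 0.50, 0.75])
y2 = np.array([0.25, 0.50, 0.75, 0.25, 0])
y3 = np.array([0.75, 0.50, 0.25, 0.25, 0.25])

# 5 x 6 design matrix of Eq. (4.4.74); x = (B11, B22, B33, B12, B13, B23)
A = np.column_stack([y1**2, y2**2, y3**2, 2*y1*y2, 2*y1*y3, 2*y2*y3])

# part (i): the solvability test is a rank comparison
rA = np.linalg.matrix_rank(A)
r_aug = np.linalg.matrix_rank(np.column_stack([A, Bmix]))
print("rank A =", rA, " rank [A, b] =", r_aug)   # solvable iff the two agree

# part (ii): least-squares solution; it satisfies the normal equations of Eq. (4.4.77)
x, *_ = np.linalg.lstsq(A, Bmix, rcond=None)
assert np.allclose(A.T @ A @ x, A.T @ Bmix)
```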
