Using Equation (10.21) and Equation (10.24), we have
U* = (A x^a / w1)^w1 (B x^b / w2)^w2 = (A/w1)^w1 (B/w2)^w2
This approach can be extended to a function F(x1, x2, …, xn), as done in Equation (10.22) for a single variable. The optimum of the objective function is obtained by differentiating it with respect to each of the independent variables xi, in turn, and setting the derivative equal to zero. If each of these equations is multiplied by the corresponding xi, a system of linear equations for the weighting factors is obtained. When these equations are employed with Equation (10.27), the independent variables xi are eliminated from the optimum value of the objective function U. Therefore, the optimum and the weighting factors are obtained by the geometric programming procedure outlined and applied earlier.
It is seen that the weighting factors depend only on the exponents, not on the coefficients, in the various terms. This means that the relative importance of each term remains unchanged as long as the exponents are the same. However, the optimum value and its location will change if the coefficients vary, for instance, because of changes in cost per unit item, energy consumption, and so on. The exponents represent the dependence of the objective function on the different variables and are often fixed for a given system or process.
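The dependence of the weighting factors on the exponents alone can be illustrated with a short sketch for a two-term, single-variable objective function of the form U(x) = A x^a + B x^b. The function name and the sample values below are illustrative assumptions, not taken from the text.

```python
# Geometric-programming sketch for a two-term objective U(x) = A*x**a + B*x**b.
# The weighting factors satisfy w1 + w2 = 1 and a*w1 + b*w2 = 0, so they
# depend only on the exponents a and b, not on the coefficients A and B.

def gp_two_term(A, a, B, b):
    w1 = b / (b - a)           # from w1 + w2 = 1 and a*w1 + b*w2 = 0
    w2 = -a / (b - a)
    U_star = (A / w1) ** w1 * (B / w2) ** w2   # optimum value of U
    return w1, w2, U_star

# Example (assumed values): U(x) = 8/x + 2x, i.e., a = -1, b = 1, giving
# w1 = w2 = 1/2 and U* = (16)**0.5 * (4)**0.5 = 8, which matches direct
# minimization (dU/dx = 0 at x = 2, where U = 8).
w1, w2, U_star = gp_two_term(8.0, -1, 2.0, 1)
print(w1, w2, U_star)
```

Changing the coefficients A and B shifts U* and the optimum location, but leaves w1 and w2 unchanged, consistent with the observation above.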
10.1.4 CONSTRAINED OPTIMIZATION
Geometric programming can also be used for optimizing systems with equality constraints. The degree of difficulty is again taken as zero, so that the total number of polynomial terms in the objective function and the constraints is greater than the number of independent variables by one. Let us consider the constrained optimization problem given by an objective function with the three polynomial terms u1, u2, and u3 and an equality constraint with the two terms u4 and u5. Then the optimum value of the objective function may be written as
U* = (u1/w1)^w1 (u2/w2)^w2 (u3/w3)^w3    (10.32)
1 = (u4/w4)^w4 (u5/w5)^w5    (10.33)

with

w4 + w5 = 1, w4 = u4, and w5 = u5
Equation (10.33) may be raised to the power of an arbitrary constant p, and the
objective function may be written as
U = (u1/w1)^w1 (u2/w2)^w2 (u3/w3)^w3 [(u4/w4)^w4 (u5/w5)^w5]^p    (10.34)
This expression is differentiated with respect to the independent variables xi, one at a time, and the resulting equations are multiplied by xi. The constant p is arbitrary and can be taken as L/U*. The equations for the w's then follow from the condition that the weighting factors of the objective-function terms sum to unity and from the requirement that the net exponent of each xi be zero.

Equation (10.34) gives the optimum value of the objective function, and the independent variables are obtained from the expressions for the weighting factors, as was done before. The sensitivity coefficient Sc = -L = -pU* and has the same physical interpretation as discussed in Chapter 8 for the Lagrange multiplier method, i.e., it is the negative of the rate of change in the optimum with respect to a change in the adjustable parameter E in the constraint G = g - E = 0. The preceding approach may be extended easily to more than one constraint as long as the degree of difficulty is zero. The following examples illustrate the use of the method for constrained optimization.
The objective function U may be taken as the area, given by
U(x, y, z) = xz + 2xy + 2yz
with the constraint due to the total volume given as
xyz = 5
In order to apply geometric programming, the constraint is written as
0.2(xyz) = 1

All four relevant terms in the objective function and in the constraint are polynomials, and the number of independent variables is three. Therefore, the degree of difficulty is D = 4 - (3 + 1) = 0.
From geometric programming for constrained optimization, the optimum value
of the objective function may be written as
U* = (xz/w1)^w1 (2xy/w2)^w2 (2yz/w3)^w3 (0.2xyz)^(pw4)
In order to eliminate the independent variables x, y, and z from the preceding equation for the objective function at the optimum, we have, respectively,

w1 + w2 + pw4 = 0
w2 + w3 + pw4 = 0
w1 + w3 + pw4 = 0

Also,

w1 + w2 + w3 = 1
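These four linear equations can be solved directly. The small Gauss-Jordan routine below is an illustrative sketch (not from the text), with q standing for the product p*w4.

```python
# Solve the weighting-factor equations for the container example:
#   w1 + w2 + q = 0   (x exponents)
#   w2 + w3 + q = 0   (y exponents)
#   w1 + w3 + q = 0   (z exponents)
#   w1 + w2 + w3 = 1  (weighting factors of objective terms sum to one)
# where q = p*w4.

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (no external libraries).
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # pivot row
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

A = [[1, 1, 0, 1],
     [0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 1, 0]]
b = [0, 0, 0, 1]
w1, w2, w3, q = solve(A, b)
print(w1, w2, w3, q)  # 1/3, 1/3, 1/3, -2/3
```

This is the linear-system solution step that makes the zero-degree-of-difficulty case so convenient.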
Solving these equations gives w1 = w2 = w3 = 1/3 and pw4 = -2/3. Therefore,

U* = (3xz)^(1/3) (6xy)^(1/3) (6yz)^(1/3) (0.2xyz)^(-2/3) = 3(4)^(1/3) (5)^(2/3) = 13.92

The weighting factors also give the individual terms at the optimum as xz = 2xy = 2yz = U*/3, and these relations, together with the constraint xyz = 5, determine the dimensions.
Therefore, these equations are solved to obtain x = 2.15 m, y = 1.08 m, and z = 2.15 m at the optimum. Again, it can be confirmed that the area obtained at the optimum is a minimum by calculating U for small changes in x, y, and z from the optimum values. This simple example illustrates the use of geometric programming for constrained nonlinear optimization. Even though the requirements of polynomial expressions and zero degree of difficulty limit the applicability of this approach, the method is useful in a variety of problems, particularly in thermal systems, where polynomials are frequently used to represent the characteristics.
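The computations in this example can be sketched in a few lines, using only the relations derived above; the variable names are illustrative.

```python
# Constrained GP solution for the open-container example:
# minimize U = xz + 2xy + 2yz subject to 0.2*x*y*z = 1 (i.e., xyz = 5).
# The weighting-factor equations give w1 = w2 = w3 = 1/3 and p*w4 = -2/3.

w = 1.0 / 3.0
pw4 = -2.0 / 3.0

# Optimum value: U* = (3xz)^(1/3) (6xy)^(1/3) (6yz)^(1/3) (0.2xyz)^(-2/3);
# the variables cancel, leaving only the coefficients.
U_star = (3 * 6 * 6) ** w * 0.2 ** pw4
print(U_star)  # about 13.92

# Each objective term equals its weighting factor times U*:
# xz = 2xy = 2yz = U*/3. With xyz = 5 this gives x = z = 2y and 4*y**3 = 5.
y = (5.0 / 4.0) ** (1.0 / 3.0)
x = z = 2.0 * y
print(x, y, z)  # about 2.15, 1.08, 2.15
print(x * z + 2 * x * y + 2 * y * z)  # recovers U*
```

Evaluating U at slightly perturbed dimensions confirms that this stationary point is a minimum, as noted in the text.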
Formulate this optimization problem and apply geometric programming to determine the optimum.

Solution

The constant in the objective function does not affect the optimum, and the second constraint must be written in a form suitable for applying geometric programming. Therefore, the optimization problem may be written as
U = 5x^2y + 10/T^2, subject to the two equality constraints written in polynomial form,
with the following equations for the unknowns w1, w2, w3, w4, p1, and p2:
w1 + w2 = 1
p1w3 = p1, p2w4 = p2

together with an exponent equation for each of the variables x, y, and T. Solving this linear system yields the weighting factors and the constants p1 and p2.
Therefore, the optimum cost is C* = 1.5 + 4.976 = 6.476. Employing the equations 5x^2y = w1U* and 10/T^2 = w2U*, along with the constraints, we obtain x = 0.796, y = 1.256, and T = 3.170. The two Lagrange multipliers are L1 = p1U* and L2 = p2U* = 1.99, yielding the corresponding sensitivity coefficients (Sc)1 = -L1 and (Sc)2 = -L2. Therefore, the first constraint is more important: an increase of 0.1 in the constant, which is unity, in the first constraint will increase the dimensionless cost by 0.5971. Similarly, an increase of 0.1 in the constant in the second constraint decreases the cost by 0.199. This information can be used to adjust the design variables for convenience and to use readily available items for the final design.
10.1.5 NONZERO DEGREE OF DIFFICULTY
For the application of geometric programming to the optimization of systems, we have considered only those cases where the degree of difficulty D is zero. For this particular circumstance, the method requires the solution of linear equations and, consequently, provides a simple approach for optimization. However, there are obviously many problems for which the degree of difficulty is not zero, as can be seen from the examples discussed in preceding chapters. If the degree of difficulty is higher than zero, geometric programming can be used, but it involves solving a system of nonlinear equations. This considerably complicates the solution, and it is then probably best to use some other optimization technique. Efficient computational algorithms may also be developed for solving such nonlinear systems, as discussed earlier in Chapter 4. Then geometric programming may be employed for a broader range of problems than if we are constrained to problems with zero degree of difficulty. Inequality constraints can also be converted into equality constraints, as discussed in earlier chapters, for applying this method of optimization.

Despite the possibility of solving problems with degree of difficulty greater than zero, geometric programming is clearly best suited to cases where it is zero. Therefore, effort is often directed at reducing a problem with a nonzero degree of difficulty to one with zero degree of difficulty. One technique for achieving this is condensation, in which terms of similar characteristics may be combined to reduce the number of terms. For instance, in the rectangular container problem of Example 10.4, if an additional term 200z arises due to side supports to the box, the objective function becomes
U = xz + 2xy + 2yz + 200z

With five polynomial terms and three variables, the degree of difficulty is now one. Similar terms may then be combined into a single term to bring the degree of difficulty back to zero. In some cases, information on the physical characteristics of the system may be used to eliminate relatively unimportant terms. The number of independent variables may also be reduced by holding one or more constant for the optimization in order to bring the degree of difficulty to zero. All such techniques and procedures expand the application of geometric programming. For additional information on geometric programming, the references given earlier may be consulted.

10.2 LINEAR PROGRAMMING
Linear programming is an important optimization technique that has been applied to a wide range of problems, particularly those in economics, industrial engineering, power transmission, and material flow. This method is applicable if the objective function as well as the constraints are linear functions of the independent variables. The constraints may be equalities or inequalities. Since its first appearance about 60 years ago, linear programming has found increasing use due to the need to model, manage, and optimize large systems such as those concerned with production, traffic, and telecommunications (Hadley, 1962; Murtaugh, 1981; Dantzig, 1998; Gass, 2004; Karloff, 2006). A large number of efficient optimization algorithms for linear programming have been developed and are available commercially as well as in the public domain. For instance, MATLAB toolboxes have software that can be easily employed to solve linear programming problems for system or process optimization.
The applicability of linear programming to thermal systems is somewhat limited because of the generally nonlinear equations that represent these systems. However, there are problems concerned with the distribution and allocation of resources in various industries, such as manufacturing and the petroleum industry, which may be solved by linear programming techniques. In addition, because of the availability of efficient linear programming software, nonlinear optimization problems are solved, in certain cases, by converting these into a sequence of linear problems, as discussed in Chapter 4 for nonlinear algebraic systems. Iteration is then used, starting with an initial guessed solution, to converge to the optimum. A common method of linearization is to use the known values from the previous iteration for the nonlinear terms. For instance, an objective function of the form U = 3x1x2^2 + 4x2x1 may be linearized as

U = 3x1(x2^l)^2 + 4x2(x1^l)

where the superscript l indicates values from the previous iteration, the others being from the current iteration. Therefore, the function becomes linear because the quantities within the parentheses are taken as known, and linear programming can be used, with iteration, to obtain the solution. However, despite these efforts, linear programming finds its greatest use in the various areas mentioned previously, rather than in thermal engineering, for which nonlinear optimization techniques are often necessary. Therefore, only the essential features of this optimization technique and a few representative examples are given here. For further details, various references already given may be consulted.
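The linearization scheme just described can be sketched as follows for a function of the form U = 3x1x2^2 + 4x2x1; the iterate values chosen below are arbitrary assumptions.

```python
# Successive-linearization illustration: nonlinear factors are frozen at the
# previous iterate (superscript l in the text), making U linear in the
# current unknowns x1 and x2.

def U_exact(x1, x2):
    return 3 * x1 * x2**2 + 4 * x2 * x1

def U_linearized(x1, x2, x1_prev, x2_prev):
    # x1 and x2 appear only linearly; x1_prev and x2_prev are known numbers
    # carried over from the previous iteration.
    return 3 * x1 * x2_prev**2 + 4 * x2 * x1_prev

x1_prev, x2_prev = 1.5, 0.8   # previous-iteration values (assumed)

# At the previous iterate itself, the linearized and exact objectives agree,
# which is the consistency property the iteration relies on.
print(U_exact(x1_prev, x2_prev))
print(U_linearized(x1_prev, x2_prev, x1_prev, x2_prev))
```

In an actual computation, the linearized problem would be solved by linear programming at each step and the frozen values updated until the iterates converge.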
Formulation and Graphical Method
The problem statement for linear programming is given in terms of the objective function and the constraints, both of which must be linear functions of the independent variables. Therefore, the objective function that is to be minimized or maximized is written as

U = b1x1 + b2x2 + … + bnxn

subject to constraints of the form

Gi = ai1x1 + ai2x2 + … + ainxn,  i = 1, 2, …, m

where bi, aij, and Ci are constants. The constraints may be equalities or inequalities, with Gi greater or smaller than the constants Ci. There are n variables and m linear equations and/or inequalities that involve these variables. In linear programming, because of inequality constraints, n may be greater than, equal to, or smaller than m, unlike the method of Lagrange multipliers, which is applicable only for equality constraints and for n larger than m. We are interested in finding the values of these variables that satisfy the given equations and inequalities and also maximize or minimize the linear objective function U.
Let us illustrate the application of linear programming with the following problem, given by Equation (10.40), involving two variables x and y. As shown in Figure 10.4, lines of constant U are parallel straight lines on the x-y axes, with the value of U increasing as one moves away from the origin. Therefore, the maximum value of U is obtained by the line that touches point A, which is at the intersection of the two constraints. At this point, x = 2.8, y = 1.6, and U = 17.2. Therefore, the optimum occurs on the boundary of the feasible domain. This is a particular feature of linear programming, and most efficient algorithms seek to move rapidly along the boundary, including the axes, to obtain the optimum.
Similarly, the optimum value of U may be obtained for a different set of constraints. For instance, let us replace Equation (10.40c) by

x ≤ 2  or  4x + y ≤ 8    (10.41)

In the first case, the optimum is obtained at x = 2 and y = 8/3, yielding U = 46/3. Again, the optimum is given by the line of constant U passing through the point given by the intersection of the two constraints. In the second case, the optimum is at x = 1 and y = 4, giving U = 13.0. As expected, the optima occur at the boundary of the feasible domain.
FIGURE 10.4 Graphical method for solving the linear programming problems given by
Equation (10.40) and Equation (10.41).
Similarly, Equation (10.40c) may be written as an equality by adding a slack variable s2, with s2 ≥ 0. The other inequalities just considered may also be written as equalities by using slack variables. Therefore,

x + s3 = 2, and 4x + y + s4 = 8    (10.44)

The slack variables indicate the difference from the constraint. Therefore, the optimization problem considered here now involves the four variables x, y, s1, and s2, and there are only two equations, i.e., n = 4 and m = 2. To find the optimum value of U, two variables, in turn, are set equal to zero, and the remaining variables are obtained from a solution of the two equations. For this problem, there are six such combinations, this number being, in general, n!/[m!(n - m)!], where n! is n factorial. The optimum is the extremum obtained from these combinations. It can easily be shown that the results given earlier from a graphical solution are also obtained by employing algebra, as outlined here. The following example further illustrates the extraction of the optimum by linear programming.
Example 10.7
A company produces x quantity of one product and y of another, with the profit
on the two items being four and three units, respectively. Item 1 requires one hour
of facility A and three hours of facility B for fabrication, whereas item 2 requires three hours of facility A and two hours of facility B, as shown schematically in
Figure 10.5 The total number of hours per week available for the two facilities
is 200 and 300, respectively. Formulate the optimization problem and solve it by linear programming to obtain the allocation between the two items for maximum profit.
FIGURE 10.5 Schematic showing the utilization of facilities A and B to manufacture
items 1 and 2 in Example 10.7.
Solution

The objective function is the profit, U = 4x + 3y, which is to be maximized. Using the slack variables s1 and s2, the constraints on the available facility hours become

x + 3y + s1 = 200
3x + 2y + s2 = 300

Two variables are set equal to zero, in turn, and these equations are solved for the other variables. The value of the objective function is determined in each case. If s1 or s2 is negative, the solution is not allowed, since these were assumed to be positive to satisfy the constraints. The six combinations yield the following results:

x = 0, y = 0: s1 = 200, s2 = 300, U = 0
x = 0, s1 = 0: y = 66.67, s2 = 166.67, U = 200
x = 0, s2 = 0: y = 150, s1 = -250 (not allowed)
y = 0, s1 = 0: x = 200, s2 = -300 (not allowed)
y = 0, s2 = 0: x = 100, s1 = 100, U = 400
s1 = 0, s2 = 0: x = 71.43, y = 42.86, U = 414.3

Therefore, a maximum value of 414.3 is obtained for U at x = 71.43 and y = 42.86. This problem can also be solved graphically, as shown in Figure 10.6, to confirm that the optimum arises at the intersection of the two constraints and is thus on the boundary of the feasible region.
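The enumeration of basic solutions described above can be sketched in code for this example; the routine below is illustrative and simply tries all n!/[m!(n - m)!] = 6 ways of keeping two of the four variables (x, y, s1, s2) nonzero.

```python
from itertools import combinations

# Brute-force basic-solution enumeration for Example 10.7:
# maximize U = 4x + 3y subject to x + 3y + s1 = 200 and 3x + 2y + s2 = 300,
# with all four variables nonnegative. Variables are ordered [x, y, s1, s2];
# each basic solution sets two of them to zero and solves the remaining
# 2x2 linear system.

A = [[1.0, 3.0, 1.0, 0.0],
     [3.0, 2.0, 0.0, 1.0]]
b = [200.0, 300.0]
profit = [4.0, 3.0, 0.0, 0.0]   # slack variables earn no profit

def solve2(a11, a12, a21, a22, b1, b2):
    # Solve a 2x2 linear system by Cramer's rule; None if singular.
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        return None
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

best = None
for keep in combinations(range(4), 2):   # C(4, 2) = 6 combinations
    sol = solve2(A[0][keep[0]], A[0][keep[1]],
                 A[1][keep[0]], A[1][keep[1]], b[0], b[1])
    if sol is None or min(sol) < 0:      # negative values are not allowed
        continue
    U = profit[keep[0]] * sol[0] + profit[keep[1]] * sol[1]
    if best is None or U > best[0]:
        best = (U, keep, sol)

print(best)  # maximum profit about 414.3 at x = 71.43, y = 42.86
```

This brute-force search is practical only for tiny problems; the simplex algorithm discussed next visits the same boundary points far more selectively.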
Simplex Algorithm
Extensive literature is available on linear programming, and many efficient algorithms have been developed for solving large optimization problems consisting of five or more terms. Most computer systems include software packages for linear programming. Among these is the simplex algorithm, which searches through the many possible combinations for the optimum value of the objective function. It is based on the Gauss-Jordan elimination procedure, outlined in Chapter 4, for solving a set of simultaneous linear equations. Therefore, the normalization of