Various optimization schemes have been proposed in the past 15 years. The scheme proposed by Vining and Myers^2 (referred to as VM) is to optimize a primary response subject to an appropriate secondary response constraint using the Lagrangian multiplier approach:

$$\min_{\mathbf{x}} \ (\text{or } \max_{\mathbf{x}}) \ y_{\text{primary}}$$

subject to $y_{\text{secondary}} = \varepsilon$, where $\varepsilon$ is a specific target value.
Del Castillo and Montgomery^3 (referred to as DM) solved the problem using the generalized reduced gradient algorithm, which is available in some software packages such as Excel Solver. It has been demonstrated that the approach can handle the cases of target-is-best, larger-is-better, and smaller-is-better.
Lin and Tu^4 (referred to as LT) noted that the optimization scheme based on Lagrangian multipliers may be unrealistic, since it forces the estimated mean to a specific value. They thus propose minimizing the mean squared error (MSE) instead:
$$\min_{\mathbf{x}} \ \mathrm{MSE} = (\hat{y}_\mu - T)^2 + \hat{y}_\sigma^2.$$
Copeland and Nelson^5 (referred to as CN) pointed out that minimizing the MSE places no restriction on how far the resulting value of $\hat{y}_\mu$ might be from the target value $T$, and suggested the direct minimization of an objective function. For the target-is-best situation, the proposed scheme is to minimize the function $\hat{y}_\sigma + \varepsilon$, where

$$\varepsilon =
\begin{cases}
(\hat{y}_\mu - T)^2, & \text{if } (\hat{y}_\mu - T)^2 > \Delta^2, \\
0, & \text{if } (\hat{y}_\mu - T)^2 \le \Delta^2,
\end{cases}$$

and $\Delta$ is the maximum allowable deviation of the mean from its target.
A quality loss function model proposed by Ames et al.^6 for multiple response surfaces can also be applied to the dual response case. The scheme is:

$$\min\ \mathrm{QLP} = \sum_{i=1}^{m} \omega_i (\hat{y}_i - T_i)^2,$$

where $\hat{y}_i$ is the estimated response for quality criterion $i$, with respective target value $T_i$, and $\omega_i$ is the corresponding weight for each response.
Recently, Kim and Lin^7 (referred to as KL) proposed a fuzzy approach for dual response surface optimization based on the desirability function formulation,^8 a popular multiresponse optimization technique. Vining^9 extended the work of Khuri and Conlon,^{10} Pignatiello,^{11} and Ames et al.,^6 and proposed an optimization scheme using a squared error loss function that explicitly incorporates the variance-covariance structure of the responses.
20.2.1 Optimization scheme
For a dual response system in which targets can be specified, it is natural to try to achieve the ideal targets as closely as possible. While there are several ways to achieve such an objective, we propose a formulation using the idea of goal programming,^{12,13} in which both the objective function and the constraints are of quadratic form, as follows:
$$\begin{aligned}
\min\quad & \delta_\mu^2 + \delta_\sigma^2 \\
\text{subject to}\quad & \hat{y}_\mu(\mathbf{x}) - \omega_\mu \delta_\mu = T^*_\mu, \\
& \hat{y}_\sigma(\mathbf{x}) - \omega_\sigma \delta_\sigma = T^*_\sigma, \\
\text{and either}\quad & \mathbf{x}\mathbf{x}^T \le r^2 \quad \text{or} \quad \mathbf{x}_l \le \mathbf{x} \le \mathbf{x}_u.
\end{aligned} \qquad (20.1)$$
The function to be minimized, $\delta_\mu^2 + \delta_\sigma^2$, is the objective function, where $\delta_\mu$ and $\delta_\sigma$ are unrestricted scalar variables (i.e. they can be positive or negative). The subsequent two equations are the constraints, where $\omega_\mu, \omega_\sigma$ ($\ge 0$) are user-defined weights; $T^* = \{T^*_\mu, T^*_\sigma\}$ are the ideal response mean and standard deviation associated with the set of response functions $\hat{Y}(\mathbf{x}) = \{\hat{y}_\mu(\mathbf{x}), \hat{y}_\sigma(\mathbf{x})\}$, which are typically of quadratic form; and $\mathbf{x} = [x_1, x_2, \ldots, x_a]^T$ is the set of control factors. The solution space of control factors that is typically of interest is either $\mathbf{x}\mathbf{x}^T \le r^2$ for a spherical region, where $r$ is the radius of the zone of interest, or $\mathbf{x}_l \le \mathbf{x} \le \mathbf{x}_u$ for a rectangular region. The products $\omega_i \delta_i$ ($i = \mu, \sigma$) introduce an element of slackness into the problem, which would otherwise require that the targets $T_i$ ($i = \mu, \sigma$) be rigidly met; this happens if $\omega_\mu$ and $\omega_\sigma$ are both set to zero. The setting of $\omega_\mu$ and $\omega_\sigma$ thus enables the user to control the relative degree of under- or over-achievement of the targets for the mean and standard deviation, respectively. The relative magnitudes of the $\omega_i$ also represent the user's perception of the relative ease with which the optimal solution of $\hat{y}_i(\mathbf{x})$ will achieve the targets $T^*_i$. By using different parameter values, alternative Pareto solutions with varying degrees of under-attainment can be found. A Pareto solution^{14} (also called a noninferior or efficient solution) is one in which an improvement in meeting one target requires a degradation in meeting another.
The above formulation can be run in Excel Solver using the simple template shown in Figure 20.1. In general, it is recommended that the optimization program be run from several different starting points. Although the scheme may not guarantee a global optimum, it is a simple and practical technique that gives results close to the global optimal solution in many instances. The issues involved in using Solver are discussed by Del Castillo and Montgomery.^3 An algorithm (DRSALG) and its ANSI FORTRAN implementation, which guarantee global optimality in a spherical experimentation region, are presented by Del Castillo et al.^{15,16} Recently, Fan^{17} presented an algorithm, DR2, that can guarantee a global optimal solution for nondegenerate problems and returns a near-global one for degenerate problems within a radial region of experimentation. A comprehensive discussion and comparison of various algorithms are also given in the papers cited.

Figure 20.1 Screen shot for implementing the proposed formulation in Excel Solver.
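The formulation can equally be set up with any general-purpose constrained nonlinear solver. The following is a minimal sketch in Python using scipy.optimize; the quadratic response coefficients, targets, weights, and region radius are hypothetical placeholders rather than values taken from this chapter. It treats $\delta_\mu$ and $\delta_\sigma$ as additional decision variables, exactly as in (20.1).

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Hypothetical fitted dual response surfaces (quadratic in the control factors).
# In practice these coefficients come from the fitted mean and standard-deviation models.
def y_mu(x):       # estimated mean response
    return 80.0 - 2.0 * x[0] + 1.5 * x[1] + 0.8 * x[0] ** 2 + 0.5 * x[1] ** 2

def y_sigma(x):    # estimated standard-deviation response
    return 5.0 + 0.6 * x[0] - 0.4 * x[1] + 0.3 * x[0] ** 2 + 0.2 * x[1] ** 2

T_mu, T_sigma = 75.0, 0.0   # ideal targets T*_mu and T*_sigma (assumed values)
w_mu, w_sigma = 1.0, 1.0    # user-defined weights
r = 2.0                     # radius of the spherical region x x^T <= r^2

def objective(z):
    # z = [x1, x2, delta_mu, delta_sigma]; minimize delta_mu^2 + delta_sigma^2
    return z[2] ** 2 + z[3] ** 2

def goal_constraints(z):
    # y_mu(x) - w_mu*delta_mu - T*_mu = 0  and  y_sigma(x) - w_sigma*delta_sigma - T*_sigma = 0
    x = z[:2]
    return [y_mu(x) - w_mu * z[2] - T_mu,
            y_sigma(x) - w_sigma * z[3] - T_sigma]

constraints = [
    NonlinearConstraint(goal_constraints, 0.0, 0.0),                    # equality goal constraints
    NonlinearConstraint(lambda z: z[0] ** 2 + z[1] ** 2, 0.0, r ** 2),  # spherical region
]

# Run from several starting points, as recommended, and keep the best feasible solution.
best = None
for start in ([0.0, 0.0, 0.0, 0.0], [1.0, -1.0, 0.0, 0.0], [-1.0, 1.0, 0.0, 0.0]):
    res = minimize(objective, np.array(start), method="SLSQP", constraints=constraints)
    if res.success and (best is None or res.fun < best.fun):
        best = res

if best is not None:
    print("x* =", best.x[:2], "deltas =", best.x[2:], "objective =", best.fun)
```

As with the Solver template, this gives a local solution; restarting from different points and keeping the best result is the simple safeguard used here.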
20.2.2 Comparison of proposed scheme with existing techniques
The proposed method provides a general framework that unifies some of the existing techniques. This can be seen as follows.
Consider LT's formulation^4 in which the MSE is minimized:

$$\min_{\mathbf{x}} \ \mathrm{MSE} = (\hat{y}_\mu - T)^2 + \hat{y}_\sigma^2. \qquad (20.2)$$

From equation (20.1), if we set $\omega_\mu = \omega_\sigma = 1$ and $T^*_\sigma = 0$, we have

$$\min\ \delta_\mu^2 + \delta_\sigma^2 \quad \text{subject to} \quad \hat{y}_\mu(\mathbf{x}) - \delta_\mu = T^*_\mu, \quad \hat{y}_\sigma(\mathbf{x}) - \delta_\sigma = 0. \qquad (20.3)$$

It follows that $\delta_\mu = \hat{y}_\mu(\mathbf{x}) - T^*_\mu$ and $\delta_\sigma = \hat{y}_\sigma(\mathbf{x})$. Thus equation (20.3) is equivalent to (20.2), since

$$\delta_\mu^2 + \delta_\sigma^2 = \left(\hat{y}_\mu(\mathbf{x}) - T^*_\mu\right)^2 + \hat{y}_\sigma(\mathbf{x})^2.$$
Next, consider VM and DM's target-is-best formulation,^{2,3} which can be written as

$$\min\ \hat{y}_\sigma \quad \text{subject to} \quad \hat{y}_\mu = T^*_\mu. \qquad (20.4)$$

From (20.1), if we set $\omega_\mu = 0$, $\omega_\sigma = 1$, and $T^*_\sigma = 0$, then $\delta_\mu$ has no bearing on (20.1) and $\delta_\sigma = \hat{y}_\sigma(\mathbf{x})$. It follows that (20.1) is equivalent to

$$\min\ \hat{y}_\sigma^2 + \text{constant} \quad \text{subject to} \quad \hat{y}_\mu(\mathbf{x}) = T^*_\mu,$$

which is the same as (20.4).
It is clear that, by using suitable settings for $\omega_\mu$, $\omega_\sigma$, and $T^*_i$, other existing formulations can be replicated. These are summarized in Table 20.1, where $T_\mu^{\max}$ and $T_\mu^{\min}$ are respectively the maximum and minimum possible values of the mean response, so that $\min\ (\hat{y}_\mu - T_\mu^{\max})^2 \equiv \max\ \hat{y}_\mu$ and $\min\ (\hat{y}_\mu - T_\mu^{\min})^2 \equiv \min\ \hat{y}_\mu$. In all the above schemes, the solution space is restricted to $\mathbf{x}\mathbf{x}^T \le r^2$ or $\mathbf{x}_l \le \mathbf{x} \le \mathbf{x}_u$.
Although an equivalent formulation for the CN approach^5 cannot be readily obtained from the proposed scheme, we shall show in the numerical example that identical results can be obtained by tuning the weights. It is worth noting that our scheme not only provides slackness for the mean target, as in LT and CN, but also includes slackness for the variance response target, which the other approaches do not consider. We discuss later how the trade-off between meeting the mean and variance targets can be expressed explicitly by tuning the weights. Such flexibility, which is not available in previous approaches, enhances the practical appeal of the current formulation.
Table 20.1 Comparison of proposed scheme with existing techniques.

Case 1. LT: Lin and Tu^4
Existing optimization scheme: $\min_{\mathbf{x}} \ (\hat{y}_\mu - T^*_\mu)^2 + \hat{y}_\sigma^2$
Proposed scheme: $\omega_\mu = 1$, $\omega_\sigma = 1$, $T^*_\sigma = 0$
$\Rightarrow \delta_\mu = \hat{y}_\mu(\mathbf{x}) - T^*_\mu$, $\delta_\sigma = \hat{y}_\sigma(\mathbf{x})$
$\Rightarrow \delta_\mu^2 + \delta_\sigma^2 = (\hat{y}_\mu(\mathbf{x}) - T^*_\mu)^2 + \hat{y}_\sigma(\mathbf{x})^2$

Case 2. VM and DM: Target-is-best^{2,3}
Existing optimization scheme: $\min\ \hat{y}_\sigma$ subject to $\hat{y}_\mu = T^*_\mu$
Proposed scheme: $\omega_\mu = 0$, $\omega_\sigma = 1$, $T^*_\sigma = 0$
$\Rightarrow \delta_\sigma = \hat{y}_\sigma$; $\min\ \hat{y}_\sigma^2$ subject to $\hat{y}_\mu = T^*_\mu$

Case 3. VM and DM: Larger-is-better^{2,3}
Existing optimization scheme: $\max\ \hat{y}_\mu$ subject to $\hat{y}_\sigma = T^*_\sigma$
Proposed scheme: $\omega_\mu = 1$, $\omega_\sigma = 0$, $T^*_\mu = T_\mu^{\max}$
$\Rightarrow \delta_\mu = \hat{y}_\mu - T_\mu^{\max}$; $\min\ (\hat{y}_\mu - T_\mu^{\max})^2$ subject to $\hat{y}_\sigma = T^*_\sigma$

Case 4. VM and DM: Smaller-is-better^{2,3}
Existing optimization scheme: $\min\ \hat{y}_\mu$ subject to $\hat{y}_\sigma = T^*_\sigma$
Proposed scheme: $\omega_\mu = 1$, $\omega_\sigma = 0$, $T^*_\mu = T_\mu^{\min}$
$\Rightarrow \delta_\mu = \hat{y}_\mu - T_\mu^{\min}$; $\min\ (\hat{y}_\mu - T_\mu^{\min})^2$ subject to $\hat{y}_\sigma = T^*_\sigma$

Case 5. Quality loss function model^6
Existing optimization scheme: $\min\ \mathrm{QLP} = \sum_{i=1}^{m} \omega_i (\hat{y}_i - T_i)^2$
Proposed scheme: for $\omega_\mu, \omega_\sigma \ne 0$ the constraints can be written as $\delta_\mu = (\hat{y}_\mu - T^*_\mu)/\omega_\mu$ and $\delta_\sigma = (\hat{y}_\sigma - T^*_\sigma)/\omega_\sigma$, giving $\min\ \left[(\hat{y}_\mu - T^*_\mu)/\omega_\mu\right]^2 + \left[(\hat{y}_\sigma - T^*_\sigma)/\omega_\sigma\right]^2$
It can also be seen from Table 20.1 that the three common goals, the target-is-best, the larger-is-better and the smaller-is-better, are represented by cases 2 through 4, respectively.
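To make the mapping concrete, the settings in Table 20.1 translate directly into parameter choices for a solver set-up such as the sketch in Section 20.2.1. For instance, reusing the variable names from that sketch (all numeric target values below are hypothetical):

```python
# Case 1 (LT, minimize MSE): equal weights and a zero standard-deviation target.
w_mu, w_sigma, T_mu, T_sigma = 1.0, 1.0, 75.0, 0.0

# Case 2 (VM/DM, target-is-best): no slack on the mean, slack only on the
# standard deviation, so the constraint forces y_mu(x) = T_mu exactly.
w_mu, w_sigma, T_mu, T_sigma = 0.0, 1.0, 75.0, 0.0

# Case 3 (VM/DM, larger-is-better): slack only on the mean, with the mean target
# set to the largest value attainable over the region (assumed known or pre-computed).
T_mu_max = 90.0
w_mu, w_sigma, T_mu, T_sigma = 1.0, 0.0, T_mu_max, 2.0
```

With these assignments in place, re-running the minimize() call from the earlier sketch reproduces the corresponding special case.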
20.2.3 Setting $\omega_\mu$ and $\omega_\sigma$
Besides establishing the dual response surfaces for the problem under consideration, a crucial step in implementing the proposed scheme is the setting of the values of $\omega_\mu$ and $\omega_\sigma$. Readers are referred elsewhere for the construction of the dual response surfaces.^{18-21} Here, we focus on choosing the weights $\omega_\mu$ and $\omega_\sigma$.
The weights, in some sense, reflect a practitioner’s preference for the closeness of the mean and standard deviation of the response to their respective target values. The interpretations of several possible schemes are as follows (for the justifications, see Table 20.1).
• Set $\omega_\mu = \omega_\sigma = 1$. This scheme leads to minimizing the MSE, since the sum of the squared distances of the mean and variance responses from their respective targets, i.e. the true mean and zero, is minimized. It is in fact the same objective as in Lin and Tu.^4
• Set $\omega_\mu = 0$, $\omega_\sigma = 1$, and $T^*_\sigma = 0$. This scheme reduces to the target-is-best formulation.^{2,3}

• Set $\omega_\mu = 1$, $\omega_\sigma = 0$, and $T^*_\mu = T_\mu^{\max}$. This scheme reduces to the larger-is-better formulation.^{2,3}

• Set $\omega_\mu = 1$, $\omega_\sigma = 0$, and $T^*_\mu = T_\mu^{\min}$. This scheme reduces to the smaller-is-better formulation.^{2,3}
In order to assess the effect of the weights on the final solution, a sensitivity analysis should be performed. This is best done by experimenting with different convex combinations of the weights, that is, $\omega_\mu + \omega_\sigma = 1$, $\omega_\mu > 0$, $\omega_\sigma > 0$. In this way, $\omega_\mu$ and $\omega_\sigma$ can be varied parametrically to generate a set of noninferior solutions for this multiple objective optimization problem (see the weighting method in Cohon^{22} and Ehrgott^{23}).
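One way to carry out such a sensitivity analysis is to sweep the convex combinations programmatically and record the mean and standard deviation obtained at each solution. The sketch below continues from the hypothetical set-up of Section 20.2.1 (it reuses y_mu, y_sigma, T_mu, T_sigma, r, and the scipy imports defined there):

```python
def solve_goal_program(w_mu, w_sigma, T_mu, T_sigma, r):
    # Solve formulation (20.1) for one pair of weights and return the optimal x.
    cons = [
        NonlinearConstraint(
            lambda z: [y_mu(z[:2]) - w_mu * z[2] - T_mu,
                       y_sigma(z[:2]) - w_sigma * z[3] - T_sigma], 0.0, 0.0),
        NonlinearConstraint(lambda z: z[0] ** 2 + z[1] ** 2, 0.0, r ** 2),
    ]
    res = minimize(lambda z: z[2] ** 2 + z[3] ** 2,
                   np.zeros(4), method="SLSQP", constraints=cons)
    return res.x[:2]

# Sweep convex combinations w_mu + w_sigma = 1 (avoiding the degenerate endpoints,
# where one of the goal constraints would have to be met exactly) to trace
# noninferior solutions.
for w in np.linspace(0.05, 0.95, 19):
    x_opt = solve_goal_program(w, 1.0 - w, T_mu, T_sigma, r)
    print(f"w_mu={w:.2f}  w_sigma={1.0 - w:.2f}  "
          f"y_mu={y_mu(x_opt):.2f}  y_sigma={y_sigma(x_opt):.2f}")
```

Plotting the recorded pairs of mean and standard deviation then gives a picture of the trade-off available to the decision maker.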
Note that for weights between 0 and 1, our formulation is similar to that of the quality loss model proposed by Ames et al.^6 Thus, the choice of weights can be tuned to represent the preference for quality loss.
In addition, there are two interesting settings of the weights that could also represent the preference of a decision maker.
• Set the weights equal to their corresponding targets. This results in

$$\delta_\mu = \frac{\hat{y}_\mu(\mathbf{x}) - T^*_\mu}{T^*_\mu} \quad \text{and} \quad \delta_\sigma = \frac{\hat{y}_\sigma(\mathbf{x}) - T^*_\sigma}{T^*_\sigma}.$$

Here, $\delta_\mu$ and $\delta_\sigma$ actually represent the percentage of under- or over-attainment of the targets for the dual responses. Thus minimizing $\delta_\mu^2 + \delta_\sigma^2$ is identical to minimizing the percentage deviations from the respective targets with the same preference.
• Set the weights equal to the reciprocals of the targets. From equation (20.1), the constraints become

$$\delta_\mu = \left(\hat{y}_\mu(\mathbf{x}) - T^*_\mu\right) T^*_\mu \quad \text{and} \quad \delta_\sigma = \left(\hat{y}_\sigma(\mathbf{x}) - T^*_\sigma\right) T^*_\sigma.$$

Thus,

$$\min\ \delta_\mu^2 + \delta_\sigma^2 = \min\ \left(\hat{y}_\mu(\mathbf{x}) - T^*_\mu\right)^2 (T^*_\mu)^2 + \left(\hat{y}_\sigma(\mathbf{x}) - T^*_\sigma\right)^2 (T^*_\sigma)^2.$$

The objective function becomes that of minimizing the weighted sum of squares of deviations from the targets, in which the weights are given by the squares of the respective target values. This results in a smaller deviation for the larger target value, due to its higher weight, and thus implies a preference for smaller deviation from the larger target value.
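As a purely illustrative comparison of these two settings (the numbers below are hypothetical and not taken from any example in this chapter), suppose $T^*_\mu = 50$, $T^*_\sigma = 2$, $\hat{y}_\mu(\mathbf{x}) = 52$, and $\hat{y}_\sigma(\mathbf{x}) = 2.5$. Then

$$\omega_i = T^*_i:\qquad \delta_\mu = \frac{52 - 50}{50} = 0.04, \qquad \delta_\sigma = \frac{2.5 - 2}{2} = 0.25,$$

$$\omega_i = 1/T^*_i:\qquad \delta_\mu = (52 - 50)\cdot 50 = 100, \qquad \delta_\sigma = (2.5 - 2)\cdot 2 = 1.$$

Under the first setting the deviations are compared on a percentage scale, so the miss on the standard deviation dominates the objective; under the second, the larger mean target carries the larger effective weight, so the optimizer works hardest to keep $\hat{y}_\mu$ close to $T^*_\mu$.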