Some Special Cases of Optimizing over
the Efficient Set of a Generalized Convex
Multiobjective Programming Problem
Tran Ngoc Thang
School of Applied Mathematics and Informatics
Hanoi University of Science and Technology
Email: thang.tranngoc@hust.edu.vn
Tran Thi Hue
Faculty of Management Information System
The Banking Academy of Vietnam
Email: huett@bav.edu.vn
Abstract—Optimizing over the efficient set is a hard and interesting problem in global optimization, in which local optima are in general different from global optima. At the same time, this problem has important applications in finance, economics, engineering, and other fields. In this article, we investigate some special cases of optimization problems over the efficient set of a generalized convex multiobjective programming problem. Preliminary computational experiments are reported and show that the proposed algorithms work well.
AMS Subject Classification: 90C29; 90C26
Keywords: Global optimization, Efficient set, Generalized convexity, Multiobjective programming problem
I. INTRODUCTION
The generalized convex multiobjective programming problem (GMOP) is given as follows:
$$\operatorname{Min}\ f(x) = (f_1(x), \dots, f_m(x))^T \quad \text{s.t.}\quad x \in X,$$
where $X \subset \mathbb{R}^n$ is a nonempty convex compact set and $f_i$, $i = 1, \dots, m$, are generalized convex functions on $X$. In the case $m = 2$, problem (GMOP) is called a generalized convex biobjective programming problem. The special cases of problem (GMOP) where $f_i$, $i = 1, \dots, m$, are linear (resp. convex), called a linear multiobjective programming problem (resp. a convex multiobjective programming problem), have received special attention in the literature (see the survey in [21] and references therein). However, to the best of our knowledge, there are few results on numerical methods for the nonconvex case, where $f_i$, $i = 1, \dots, m$, are nonconvex (see [4], [16]).
The main problem in this paper is formulated as
$$\min\ \Phi(x) \quad \text{s.t.}\quad x \in X_E, \qquad (P_X)$$
where $\Phi : X \to \mathbb{R}$ is a continuous function and $X_E$ is the efficient solution set of problem (GMOP), i.e.,
$$X_E = \{x^0 \in X \mid \nexists\, x \in X : f(x^0) \ge f(x),\ f(x^0) \ne f(x)\}.$$
As usual, the notation $y^1 \ge y^2$, where $y^1, y^2 \in \mathbb{R}^m$, is used to indicate $y^1_i \ge y^2_i$ for all $i = 1, \dots, m$.
It is well known that, in general, the set $X_E$ is nonconvex and is not given explicitly in the form of a standard mathematical programming problem, even in the case where $m = 2$, the objective functions $f_1, f_2$ are linear, and the feasible set $X$ is polyhedral. Hence, problem $(P_X)$ is a global optimization problem and belongs to the class of NP-hard problems. This problem has many applications in economics, finance, engineering, and other fields. Recently, it has received a great deal of attention from researchers (for instance, see [1], [5], [6], [7], [8], [12], [15], [16], [17], [19] and references therein). As for problem (GMOP), there are only a few numerical algorithms for solving problem $(P_X)$ in the nonconvex case (see [1], [16]). In this article, simple convex programming procedures are proposed for solving three special cases of problem $(P_X)$ in which $X_E$ is the efficient solution set of problem (GMOP) in the nonconvex case. These special-case procedures require quite little computational effort in comparison to that required by algorithms for the general problem $(P_X)$.
In Section 2, the theoretical preliminaries are presented to analyze three special cases of optimization over the efficient solution set of problem (GMOP). Section 3 proposes the algorithms for solving these cases, together with some computational experiments to illustrate the algorithms. Some conclusions are given in the last section.
II. THEORETICAL PRELIMINARIES
First, recall that a differentiable function $h : X \to \mathbb{R}$ is called pseudoconvex on $X$ if
$$\langle \nabla h(x^2), x^1 - x^2 \rangle \ge 0 \ \Rightarrow\ h(x^1) - h(x^2) \ge 0$$
for all $x^1, x^2 \in X$. For example, by Proposition 5.20 in [2], the fractional function $r(x)/l(x)$, where $r : \mathbb{R}^n \to \mathbb{R}$ is convex on $X$ and $l : \mathbb{R}^n \to \mathbb{R}$ is linear such that $l(x) > 0$ for all $x \in X$, is a pseudoconvex function.
By the definition in [10, p. 132], a function $h$ is called quasiconvex on $X$ if
$$h(x^1) - h(x^2) \le 0 \ \Rightarrow\ h(\lambda x^1 + (1 - \lambda) x^2) \le h(x^2)$$
for all $x^1, x^2 \in X$ and $0 \le \lambda \le 1$. If $h$ is quasiconvex, then $g := -h$ is quasiconcave.
In the case where $h$ is differentiable, if $h$ is quasiconvex on $X$, we have
$$h(x^1) - h(x^2) \le 0 \ \Rightarrow\ \langle \nabla h(x^2), x^1 - x^2 \rangle \le 0 \qquad (1)$$
for all $x^1, x^2 \in X$, where $\nabla h(x^2)$ is the gradient vector of $h$ at $x^2$ (see Theorem 9.1.4 in [10]).
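As a quick sanity check of these definitions, the sketch below numerically spot-checks the pseudoconvexity implication for a ratio $h = r/l$ of a convex function over a positive linear function, the situation covered by Proposition 5.20 in [2]. The particular $r$, $l$, and the sampling box are illustrative placeholders, not data from this paper.

```python
import numpy as np

# Illustrative data (not from the paper): r convex quadratic, l linear and
# positive on the sampled box, so h = r/l should be pseudoconvex there.
def r(x):
    return x[0] ** 2 + 0.5 * x[1] ** 2 + x[0]

def l(x):
    return x[1] - x[0] + 5.0   # stays positive on [-1, 1]^2

def h(x):
    return r(x) / l(x)

def grad(f, x, eps=1e-6):
    # Central finite-difference gradient.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
violations = 0
for _ in range(10000):
    x1, x2 = rng.uniform(-1.0, 1.0, size=(2, 2))
    # Check: <grad h(x2), x1 - x2> >= 0  must imply  h(x1) >= h(x2).
    if grad(h, x2) @ (x1 - x2) >= 0 and h(x1) - h(x2) < -1e-8:
        violations += 1
print("violations of the pseudoconvexity implication:", violations)  # expect 0
```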
A vector function $f(x) = (f_1(x), \dots, f_m(x))^T$ is called convex (resp. pseudoconvex, quasiconvex) on $X$ if its component functions $f_i(x)$, $i = 1, \dots, m$, are convex (resp. pseudoconvex, quasiconvex) functions on $X$. Recall that a vector function $f$ is called scalarly pseudoconvex on $X$ if $\sum_{i=1}^m \lambda_i f_i$ is pseudoconvex on $X$ for every $\lambda = (\lambda_1, \dots, \lambda_m) \ge 0$ (see [16]). By definition, if $f$ is convex then it is scalarly pseudoconvex, and if $f$ is scalarly pseudoconvex then it is pseudoconvex. Hence, the convex multiobjective programming problem is a special case of problem (GMOP).
Example II.1. Consider the vector function $f(x)$ over the set $X = \{x \in \mathbb{R}^2 \mid Ax \ge b,\ x \ge 0\}$, where
$$f(x) = \left( \frac{-x_1^2 - 0.6x_1 + 0.5x_2}{x_1 - x_2 - 2},\ \frac{x_2^2 + x_1}{x_2 - x_1 + 2} \right)^T$$
and
$$A = \begin{pmatrix} -2 & -1 \\ \vdots & \vdots \end{pmatrix}, \qquad b = \begin{pmatrix} 2 \\ 6 \\ -10 \end{pmatrix}.$$
We have
$$\lambda^T f(x) = \frac{\lambda_1 (x_1^2 + 0.6x_1 - 0.5x_2) + \lambda_2 (x_2^2 + x_1)}{x_2 - x_1 + 2}.$$
It is easily seen that
$$r(x) = \lambda_1 (x_1^2 + 0.6x_1 - 0.5x_2) + \lambda_2 (x_2^2 + x_1)$$
is convex because $\lambda_1, \lambda_2 \ge 0$, and $l(x) = x_2 - x_1 + 2 > 0$ for all $x \in X$. Therefore, by Proposition 5.20 in [2], the function $\lambda^T f(x) = r(x)/l(x)$ is pseudoconvex, i.e., $f(x)$ is scalarly pseudoconvex.
Let
$$Y = \{y \in \mathbb{R}^m \mid y = f(x) \text{ for some } x \in X\}.$$
As usual, the set $Y$ is said to be the outcome set for problem (GMOP).
Let $y^I_i = \min\{y_i \mid y \in Y\}$, $i = 1, \dots, m$. It is clear that $y^I_i$ is also the optimal value of the following programming problem:
$$\min\ f_i(x) \quad \text{s.t.}\quad x \in X. \qquad (P_i)$$
For each $i \in \{1, \dots, m\}$, if $f_i$ is pseudoconvex, we can apply convex programming algorithms to solve problem $(P_i)$ (Remark 2.3 in [3]).
The point $y^I = (y^I_1, \dots, y^I_m)$ is called the ideal point of the set $Y$. Notice that the ideal point $y^I$ need not belong to $Y$.
Fig. 1. The ideal point $y^I$.
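Computationally, the ideal point is obtained by solving the $m$ scalar problems $(P_i)$. The sketch below does this with SciPy's SLSQP solver on a hypothetical biobjective instance; the objectives, the constraint $g$, and the bounds are placeholders, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: two convex (hence pseudoconvex) objectives on a
# convex compact set X = {x : g(x) <= 0, 0 <= x <= 3}.
f = [lambda x: (x[0] - 1) ** 2 + x[1],
     lambda x: x[1] ** 2 + 0.5 * x[0]]
g = lambda x: x[0] + x[1] - 3.0                     # constraint g(x) <= 0
cons = [{"type": "ineq", "fun": lambda x: -g(x)}]   # SLSQP wants fun(x) >= 0
bnds = [(0.0, 3.0), (0.0, 3.0)]
x0 = np.array([1.0, 1.0])

# Solve each (P_i): y_i^I = min f_i(x) over X.
y_ideal = []
for fi in f:
    res = minimize(fi, x0, method="SLSQP", bounds=bnds, constraints=cons)
    y_ideal.append(res.fun)
print("ideal point y^I =", np.round(y_ideal, 4))
```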
Consider a generalized convex biobjective programming problem
$$\operatorname{Min}\ f(x) = (f_1(x), f_2(x))^T \quad \text{s.t.}\quad x \in X, \qquad (GBOP)$$
where $X := \{x \in \mathbb{R}^n \mid g_j(x) \le 0,\ j = 1, \dots, k\}$, the functions $g_j(x)$, $j = 1, \dots, k$, are differentiable quasiconvex on $\mathbb{R}^n$, and the objective function $f$ is scalarly pseudoconvex on $X$.
Now we describe the three sets of conditions associated with the three special cases of problem $(P_X)$ under consideration.
Case 1. The feasible set $X_E$ is the efficient solution set of problem (GMOP), the ideal point $y^I$ belongs to the outcome set $Y$, and the objective function $\Phi(x)$ of problem $(P_X)$ is pseudoconvex on $X$.
Case 2. The feasible set $X_E$ is the efficient solution set of problem (GBOP) and the objective function $\Phi(x)$ has the form $\Phi(x) = \varphi(f(x))$, where $\varphi : Y \to \mathbb{R}$ and
$$\varphi(y) = \langle \lambda, y \rangle \qquad (2)$$
with $\lambda = (\lambda_1, \lambda_2)^T \in \mathbb{R}^2$. This case can occur in certain common situations, for instance, when the objective function of problem $(P_X)$ represents a linear composition of the criteria $f_i(x)$, $i \in \{1, 2\}$, with weight coefficients $\lambda_i$, $i \in \{1, 2\}$.
Case 3. The feasible set $X_E$ is the efficient solution set of problem (GBOP) and the objective function $\Phi(x)$ has the form $\Phi(x) = \varphi(f(x))$, where $\varphi : Y \to \mathbb{R}$ is quasiconcave and monotonically decreasing.
Let $Q \subset \mathbb{R}^m$ be a nonempty set. A point $q^0 \in Q$ is called an efficient point of the set $Q$ if there is no $q \in Q$ such that $q^0 \ge q$ and $q^0 \ne q$. The set of all efficient points of $Q$ is denoted by $\operatorname{Min} Q$. It is clear that a point $q^0 \in \operatorname{Min} Q$ if $Q \cap (q^0 - \mathbb{R}^m_+) = \{q^0\}$, where $\mathbb{R}^m_+ = \{y \in \mathbb{R}^m \mid y_i \ge 0,\ i = 1, \dots, m\}$.
Since the functions $f_i$, $i = 1, \dots, m$, are continuous and $X \subset \mathbb{R}^n$ is a nonempty compact set, the outcome set $Y$ is also a compact set in $\mathbb{R}^m$. Therefore, the efficient set $\operatorname{Min} Y$ is nonempty [9]. Let
$$Y_E = \{y \in Y \mid y = f(x) \text{ for some } x \in X_E\}. \qquad (3)$$
The set $Y_E$ is called the efficient outcome set for problem (GMOP). By definition, it is easy to see that
$$Y_E = \operatorname{Min} Y. \qquad (4)$$
The relationship between the efficient solution set $X_E$ and the efficient set $\operatorname{Min} Y$ is described as follows.
Proposition II.1.
i) For any $y^0 \in \operatorname{Min} Y$, if $x^0 \in X$ satisfies $f(x^0) \le y^0$, then $x^0 \in X_E$.
ii) For any $x \in X_E$, if $y = f(x)$, then $y \in \operatorname{Min} Y$.
Proof: i) Since $y^0 \in \operatorname{Min} Y$, by definition, we have $(y^0 - \mathbb{R}^m_+) \cap Y = \{y^0\}$. Moreover, $f(x^0) \in (y^0 - \mathbb{R}^m_+)$ and $f(x^0) \in Y$ because $x^0 \in X$ and $f(x^0) \le y^0$. Therefore, $f(x^0) = y^0 \in \operatorname{Min} Y$. Combining this fact with (3) and (4), we obtain $x^0 \in X_E$.
ii) This fact follows immediately from (3) and (4).
Fig. 2. The efficient set $\operatorname{Min} Y$.
Let
$$Z = Y + \mathbb{R}^m_+ = \{z \in \mathbb{R}^m \mid z \ge y \text{ for some } y \in Y\}.$$
It is clear that $Z$ is a nonempty, full-dimensional closed set, but it is nonconvex in general.
Fig. 3. The set $Z$.
The following interesting property of $Z$ (see Theorem 3.2 in [20]) will be used in the sequel.
Proposition II.2. $\operatorname{Min} Z = \operatorname{Min} Y$.
Now we consider the first special case, where the ideal point $y^I$ belongs to the outcome set $Y$.
Proposition II.3. If $y^I \in Y$ then $\operatorname{Min} Y = \{y^I\}$.
Proof: Since $y^I \in Y$, there exists $x^I \in X$ such that $y^I = f(x^I)$, i.e., $y^I_i = f_i(x^I)$ for all $i = 1, 2, \dots, m$. For each $i \in \{1, 2, \dots, m\}$, since $y^I_i$ is the optimal value of problem $(P_i)$, $x^I$ is an optimal solution of problem $(P_i)$. Hence, $x^I \in \operatorname{Argmin}\{f_i(x) \mid x \in X\}$ for all $i = 1, \dots, m$. By definition, $x^I$ is an efficient solution of problem (GMOP), i.e., $x^I \in X_E$. From Proposition II.1(ii), it follows that $y^I \in \operatorname{Min} Y$. Since $y^I_i$ is the optimal value of problem $(P_i)$ for $i = 1, 2, \dots, m$, by the definition of efficient points, $y^I$ is the only efficient point of $Y$. Let
$$X_{id} = \{x \in X \mid f_i(x) \le y^I_i,\ i = 1, 2, \dots, m\}.$$
By [10, Theorem 9.3.5], for each $i = 1, 2, \dots, m$, if $f_i$ is a pseudoconvex function, then $f_i$ is quasiconvex. Therefore, $X_{id}$ is a convex set, because every lower level set of a continuous quasiconvex function is convex. The following assertion provides a property to detect whether $y^I$ belongs to $Y$.
Proposition II.4. If $X_{id}$ is not empty, then $y^I \in Y$ and $X_E = X_{id}$. Otherwise, $y^I$ does not belong to $Y$.
Proof: By definition, if $X_{id} = \emptyset$, then $y^I \notin Z$. Since $Y \subseteq Z$, $y^I \notin Y$. Otherwise, $X_{id}$ is not empty and $y^I \in Z$. Therefore, $y^I \in Y$ because $y^I_i$ is the optimal value of problem $(P_i)$ for $i = 1, 2, \dots, m$. By Proposition II.3 and Proposition II.1(i), we get $\operatorname{Min} Y = \{y^I\}$ and $X_E = \{x \in X \mid f(x) \le y^I\} = X_{id}$.
In the next two cases, we consider problem $(P_X)$, where $X_E$ is the efficient solution set of problem (GBOP) and $\Phi(x)$ has the form $\Phi(x) = \varphi(f(x))$ with $\varphi : Y \to \mathbb{R}$. Then the outcome-space reformulation of problem $(P_X)$ can be given by
$$\min\ \varphi(y) \quad \text{s.t.}\quad y \in Y_E. \qquad (P_Y)$$
Combining (4) and Proposition II.2, problem $(P_Y)$ can be rewritten as follows:
$$\min\ \varphi(y) \quad \text{s.t.}\quad y \in \operatorname{Min} Z. \qquad (P_Z)$$
Therefore, instead of solving problem $(P_Y)$, we solve problem $(P_Z)$.
Proposition II.5. If $f$ is scalarly pseudoconvex, then the set $Z = f(X) + \mathbb{R}^2_+$ is a convex set in $\mathbb{R}^2$.
Proof: Let $\bar{w}$ be an arbitrary point in the boundary $\partial Z$ of the set $Z$. By geometry, there exists $\bar{y} \in \operatorname{Min} Z$ such that $\bar{y} \le \bar{w}$. From Proposition II.2, we deduce that $\bar{y} \in \operatorname{Min} Y$. Hence, by Proposition II.1(ii), there exists $\bar{x} \in X$ such that $\bar{y} = f(\bar{x})$ and $\bar{x}$ is an efficient solution of problem (GMOP).
Let $J = \{j \in \{1, \dots, k\} \mid g_j(\bar{x}) = 0\}$ and $s = |J|$. For a vector $a \in \mathbb{R}^k$, we denote $a_J := (a_j,\ j \in J)$. Since $\bar{x} \in X_E$, by [11, Corollary 3.1.6], there exist a vector $\bar{\lambda} \in \mathbb{R}^2_+ \setminus \{0\}$ and a vector $\bar{\mu}_J \in \mathbb{R}^s_+$ such that $\bar{\lambda}^T \nabla f(\bar{x}) + \bar{\mu}_J^T \nabla g_J(\bar{x}) = 0$, which means
$$\bar{\lambda}^T \nabla f(\bar{x}) = -\bar{\mu}_J^T \nabla g_J(\bar{x}). \qquad (5)$$
Since $\bar{\mu}_J \ge 0$, $g_J(x) \le 0$ for all $x \in X$, and $g_J(\bar{x}) = 0$, we have $\bar{\mu}_J^T g_J(x) - \bar{\mu}_J^T g_J(\bar{x}) \le 0$ for all $x \in X$. Combining this fact with the condition that $g_j$, $j = 1, \dots, k$, are differentiable quasiconvex and with (1), we get $\langle \bar{\mu}_J^T \nabla g_J(\bar{x}),\ x - \bar{x} \rangle \le 0$ for all $x \in X$. Thus, by (5), one has $\langle \bar{\lambda}^T \nabla f(\bar{x}),\ x - \bar{x} \rangle \ge 0$, or
$$\langle \nabla (\bar{\lambda}^T f)(\bar{x}),\ x - \bar{x} \rangle \ge 0 \quad \forall x \in X. \qquad (6)$$
Moreover, $\bar{\lambda}^T f$ is pseudoconvex on $X$ because $f$ is scalarly pseudoconvex on $X$. Therefore, (6) implies that
$$\bar{\lambda}^T f(x) - \bar{\lambda}^T f(\bar{x}) \ge 0 \quad \forall x \in X,$$
i.e., $\langle \bar{\lambda}, y - \bar{y} \rangle \ge 0$ for all $y \in Y$. For $i = 1, 2$, set $\hat{\lambda}_i = \bar{\lambda}_i$ if $\bar{y}_i = \bar{w}_i$ and $\hat{\lambda}_i = 0$ if $\bar{y}_i \ne \bar{w}_i$. It is easy to check that
$$\langle \hat{\lambda}, \bar{w} - \bar{y} \rangle = 0 \quad \text{and} \quad \hat{\lambda} \ge 0,\ \hat{\lambda} \ne 0. \qquad (7)$$
Hence, $\langle \hat{\lambda}, y - \bar{w} \rangle \ge 0$ for all $y \in Z$. Set $H(\bar{w}) = \{y \in \mathbb{R}^2 \mid \langle \hat{\lambda}, y - \bar{w} \rangle \ge 0\}$. Then $Z \subset H(\bar{w})$ for all $\bar{w} \in \partial Z$. By [14, Theorem 6.20], we conclude that $Z$ is a convex set.
Since $Z$ is a nonempty convex subset of $\mathbb{R}^2$ by Proposition II.5, it is well known [13] that the efficient set $\operatorname{Min} Z$ is homeomorphic to a nonempty closed interval of $\mathbb{R}$. By geometry, it is easily seen that the problem
$$\min\{y_2 \mid y \in Z,\ y_1 = y^I_1\} \qquad (P_S)$$
has a unique optimal solution $y^S$, and the problem
$$\min\{y_1 \mid y \in Z,\ y_2 = y^I_2\} \qquad (P_E)$$
has a unique optimal solution $y^E$. Since $Z$ is convex, problems $(P_S)$ and $(P_E)$ are convex programming problems. If $y^I \in Y$ then, by Propositions II.3 and II.4, $y^I$ is the only optimal solution of problem $(P_Z)$ and $X_{id}$ is the optimal solution set of problem $(P_X)$.
Fig. 4. The efficient curve $\operatorname{Min} Z$.
If $y^I \notin Y$, then $y^S \ne y^E$ and the efficient set $\operatorname{Min} Z$ is a curve on the boundary of $Z$ with starting point $y^S$ and end point $y^E$ such that
$$y^E_1 > y^S_1 \quad \text{and} \quad y^S_2 > y^E_2. \qquad (8)$$
Note that we also obtain the efficient solutions $x^S, x^E \in X_E$ such that $y^S = f(x^S)$ and $y^E = f(x^E)$ while solving problems $(P_S)$ and $(P_E)$. For convenience, $x^S$ and $x^E$ are called the efficient solutions with respect to $y^S$ and $y^E$, respectively.
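Since $Z = f(X) + \mathbb{R}^2_+$, one way to realize $(P_S)$ and $(P_E)$ computationally is to work in decision space: minimize $f_2(x)$ over $x \in X$ subject to $f_1(x) \le y^I_1$ for $(P_S)$, and symmetrically for $(P_E)$. The sketch below follows this reading with SciPy's SLSQP on a hypothetical instance; all problem data are placeholders, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical biobjective instance (placeholders, not data from the paper).
f1 = lambda x: (x[0] - 1) ** 2 + x[1]
f2 = lambda x: x[1] ** 2 + 0.5 * x[0]
inX = [{"type": "ineq", "fun": lambda x: 3.0 - x[0] - x[1]}]   # g(x) <= 0
bnds, x0 = [(0.0, 3.0)] * 2, np.array([1.0, 1.0])

def solve(obj, extra_cons):
    return minimize(obj, x0, method="SLSQP", bounds=bnds,
                    constraints=inX + extra_cons)

# Step 1: the ideal point.
y1I = solve(f1, []).fun
y2I = solve(f2, []).fun

# (P_S): minimize f2 subject to f1(x) <= y1^I  ->  x^S and y^S.
resS = solve(f2, [{"type": "ineq", "fun": lambda x: y1I - f1(x)}])
xS, yS = resS.x, np.array([y1I, resS.fun])

# (P_E): minimize f1 subject to f2(x) <= y2^I  ->  x^E and y^E.
resE = solve(f1, [{"type": "ineq", "fun": lambda x: y2I - f2(x)}])
xE, yE = resE.x, np.array([resE.fun, y2I])

print("y^S =", np.round(yS, 4), " y^E =", np.round(yE, 4))
```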
In the second case, we consider problem $(P_Z)$ where $\varphi(y) = \langle \lambda, y \rangle$. Direct computation shows that the equation of the line through $y^S$ and $y^E$ is $\langle c, y \rangle = \alpha$, where
$$c = \left( \frac{1}{y^E_1 - y^S_1},\ \frac{1}{y^S_2 - y^E_2} \right), \qquad \alpha = \frac{y^E_1}{y^E_1 - y^S_1} + \frac{y^E_2}{y^S_2 - y^E_2}. \qquad (9)$$
From (8), it is easily seen that the vector $c$ is strictly positive. Now, let
$$\tilde{Z} = \{y \in Z \mid \langle c, y \rangle \le \alpha\}$$
and
$$\Gamma = \partial \tilde{Z} \setminus (y^S, y^E),$$
where $(y^S, y^E) = \{y = t y^S + (1 - t) y^E \mid 0 < t < 1\}$ and $\partial \tilde{Z}$ is the boundary of the set $\tilde{Z}$.
Fig. 5. The convex set $\tilde{Z}$.
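For concreteness, the following few lines compute $c$ and $\alpha$ from (9) and test membership in the cut that defines $\tilde{Z}$; the endpoint values $y^S$, $y^E$ are placeholders chosen only to satisfy (8).

```python
import numpy as np

# Placeholder endpoints satisfying (8): y1^E > y1^S and y2^S > y2^E.
yS = np.array([1.0, 2.0])
yE = np.array([2.0, 1.0])

# Equation (9): the line through y^S and y^E written as <c, y> = alpha.
c = np.array([1.0 / (yE[0] - yS[0]), 1.0 / (yS[1] - yE[1])])
alpha = yE[0] / (yE[0] - yS[0]) + yE[1] / (yS[1] - yE[1])

# Both endpoints satisfy <c, y> = alpha, and c is strictly positive.
assert np.isclose(c @ yS, alpha) and np.isclose(c @ yE, alpha) and np.all(c > 0)

# Membership test for the cut Z~ = {y in Z : <c, y> <= alpha}.
in_cut = lambda y: c @ np.asarray(y, dtype=float) <= alpha + 1e-9
print(c, alpha, in_cut(0.5 * (yS + yE)))   # the segment midpoint lies on the cut
```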
It is clear that $\tilde{Z}$ is a compact convex set because $Z$ is convex. By the definition and by geometry, we can see that $\Gamma$ contains the set of all extreme points of $\tilde{Z}$ and
$$\Gamma \subset \operatorname{Min} Z. \qquad (10)$$
Consider the following convex problem
$$\min\ \langle \lambda, y \rangle \quad \text{s.t.}\quad y \in \tilde{Z}, \qquad (CP_0)$$
which has the explicit reformulation
$$\begin{aligned} \min\ & \langle \lambda, y \rangle \\ \text{s.t.}\ & f(x) - y \le 0, \\ & x \in X, \\ & \langle c, y \rangle \le \alpha, \end{aligned} \qquad (CP_1)$$
where the vector $c \in \mathbb{R}^2$ and the real number $\alpha$ are determined by (9).
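Problem $(CP_1)$ is an ordinary convex program in the joint variables $(x, y)$, so any constrained solver applies. A minimal sketch with SciPy's SLSQP follows; the objectives, the set $X$, the weights $\lambda$, and the cut $(c, \alpha)$ are hypothetical placeholders rather than data from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs (placeholders): objectives, X, weights, and the cut.
f1 = lambda x: (x[0] - 1) ** 2 + x[1]
f2 = lambda x: x[1] ** 2 + 0.5 * x[0]
g  = lambda x: x[0] + x[1] - 3.0
lam = np.array([0.5, 0.5])
c, alpha = np.array([1.0, 1.0]), 3.0          # would come from (9)

# Joint variable z = (x1, x2, y1, y2).
obj  = lambda z: lam @ z[2:]
cons = [
    {"type": "ineq", "fun": lambda z: z[2] - f1(z[:2])},   # f1(x) <= y1
    {"type": "ineq", "fun": lambda z: z[3] - f2(z[:2])},   # f2(x) <= y2
    {"type": "ineq", "fun": lambda z: -g(z[:2])},          # x in X
    {"type": "ineq", "fun": lambda z: alpha - c @ z[2:]},  # <c, y> <= alpha
]
z0 = np.array([1.0, 1.0, 1.0, 1.0])
res = minimize(obj, z0, method="SLSQP", constraints=cons,
               bounds=[(0.0, 3.0)] * 2 + [(None, None)] * 2)
x_star, y_star = res.x[:2], res.x[2:]
print("x* =", np.round(x_star, 4), " y* =", np.round(y_star, 4))
```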
Proposition II.6. Suppose that $(x^*, y^*)$ is an optimal solution of problem $(CP_1)$. Then $x^*$ is an optimal solution of problem $(P_X)$.
Proof: It is well known that a convex programming problem with a linear objective function has an optimal solution which belongs to the extreme point set of the feasible solution set [18]. Therefore, problem $(CP_0)$ has an optimal solution $y^* \in \Gamma$. This fact and (10) imply that $y^* \in \operatorname{Min} Z$. Since $\operatorname{Min} Z \subset \tilde{Z}$, it follows that $y^*$ is an optimal solution of problem $(P_Z)$.
Since $\operatorname{Min} Z = Y_E = \operatorname{Min} Y$, by definition, we have $\langle \lambda, y^* \rangle \le \langle \lambda, y \rangle$ for all $y \in Y_E$ and $y^* \in \operatorname{Min} Y$. Then
$$\langle \lambda, y^* \rangle \le \langle \lambda, f(x) \rangle \quad \forall x \in X_E. \qquad (11)$$
Since $(x^*, y^*)$ is a feasible solution of problem $(CP_1)$, we have $f(x^*) \le y^*$. By Proposition II.1, $x^* \in X_E$. Furthermore, we have $f(x^*) \in Y$ and $y^* \in \operatorname{Min} Y$. The definition of efficient points implies that $y^* = f(x^*)$. Combining this fact and (11), we get $\Phi(x) \ge \Phi(x^*)$ for all $x \in X_E$, which means that $x^*$ is an optimal solution of problem $(P_X)$.
In the last case, we consider problem $(P_Z)$, where the function $\varphi : Y \to \mathbb{R}$ is quasiconcave and monotonically decreasing. The following assertion presents a special property of the optimal solution of problem $(P_Z)$.
Fig. 6. Illustration of Case 3.
Proposition II.7. If the function $\varphi$ is quasiconcave and monotonically decreasing, then the optimal solution of problem $(P_Z)$ is attained at either $y^S$ or $y^E$.
Proof: Let $Z_\triangle = \operatorname{conv}\{y^S, y^I, y^E\}$, where $\operatorname{conv}\{y^S, y^I, y^E\}$ stands for the convex hull of the points $\{y^S, y^I, y^E\}$. Since $\operatorname{Min} Z \subset Z_\triangle$, we have
$$\min\{\varphi(y) \mid y \in \operatorname{Min} Z\} \ge \min\{\varphi(y) \mid y \in Z_\triangle\}. \qquad (12)$$
It is obvious that the problem $\min\{\varphi(y) \mid y \in Z_\triangle\}$, whose objective function $\varphi$ is quasiconcave, attains its optimal value at an extreme point of $Z_\triangle$ [18], that is, at one of the points $y^S, y^I, y^E$. Moreover, since $\varphi$ is also decreasing, the optimal value is attained at $y^S$ or $y^E$. Since $y^S, y^E \in \operatorname{Min} Z$, this fact and (12) imply
$$\min\{\varphi(y) \mid y \in \operatorname{Min} Z\} = \min\{\varphi(y) \mid y \in Z_\triangle\}$$
and the optimal solution of problem $(P_Z)$ is attained at $y^S$ or $y^E$.
III. PROCEDURES AND COMPUTING EXPERIMENTS
Case 1. The feasible set $X_E$ is the efficient solution set of problem (GMOP), the ideal point $y^I$ belongs to the outcome set $Y$, and the objective function $\Phi(x)$ of problem $(P_X)$ is pseudoconvex on $X$.
By Proposition II.4, to detect whether the ideal point $y^I$ belongs to $Y$ and to solve problem $(P_X)$ in this case, we solve the following problem:
$$\min\ \Phi(x) \quad \text{s.t.}\quad x \in X_{id}. \qquad (CP_{id})$$
Since $\Phi(x)$ is pseudoconvex on $X$ and $X_{id}$ is convex, we can apply convex programming algorithms to solve problem $(CP_{id})$ (Remark 2.3 in [3]). The procedure for this case is described as follows.
Procedure 1.
Step 1. For each $i = 1, \dots, m$, find the optimal value $y^I_i$ of problem $(P_i)$.
Step 2. Solve problem $(CP_{id})$.
If problem $(CP_{id})$ is infeasible, then STOP (Case 1 does not apply).
Else, find an optimal solution $x^*$ of problem $(CP_{id})$ and STOP ($x^*$ is an optimal solution of problem $(P_X)$).
A small computational sketch of Step 2 is given below, followed by a numerical example illustrating Procedure 1.
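The sketch below implements Step 2 of Procedure 1 with SciPy's SLSQP, assuming the ideal point has already been computed in Step 1; $\Phi$, the objectives, and the feasible set are placeholders, not taken from the paper. A solver failure is interpreted as infeasibility of $(CP_{id})$, i.e., Case 1 does not apply.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholders (not from the paper): pseudoconvex Phi, objectives f1, f2,
# the set X, and an ideal point y^I computed beforehand from (P_1), (P_2).
Phi = lambda x: (x[0] + x[1] - 1) ** 2 + x[0]
f1  = lambda x: (x[0] - 1) ** 2 + x[1]
f2  = lambda x: x[1] ** 2 + 0.5 * x[0]
g   = lambda x: x[0] + x[1] - 3.0
yI  = np.array([0.0, 0.5])

# (CP_id): minimize Phi over X_id = {x in X : f_i(x) <= y_i^I, i = 1, 2}.
cons = [
    {"type": "ineq", "fun": lambda x: -g(x)},
    {"type": "ineq", "fun": lambda x: yI[0] - f1(x)},
    {"type": "ineq", "fun": lambda x: yI[1] - f2(x)},
]
res = minimize(Phi, np.array([1.0, 1.0]), method="SLSQP",
               bounds=[(0.0, 3.0)] * 2, constraints=cons)
if res.success:
    print("x* =", np.round(res.x, 4), " Phi(x*) =", round(res.fun, 4))
else:
    print("(CP_id) infeasible: Case 1 does not apply")
```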
Example III.1. Consider problem $(P_X)$, where $X_E$ is the efficient solution set of the following problem:
$$\operatorname{Min}\ (f_1(x), f_2(x)) = (0.5x_1^2 - x_1 + 0.3x_2,\ x_2^2 + x_1)$$
$$\text{s.t.}\quad x_1^2 + x_2^2 - 4x_1 - 4x_2 \le -6, \quad -x_1 + x_2 \ge \alpha, \quad x_1 + x_2 \ge 2, \quad x_1 \ge 1,$$
and $\Phi(x) = \min\{0.5x_1^2 + x_2 + 0.2;\ 2x_2 - 4.6x_1 + 5.8\}$.
• In the case $\alpha = 0$:
Step 1. Solving problems $(P_1)$ and $(P_2)$, we obtain the ideal point $y^I = (-0.2000, 2.0000)$.
Step 2. Solving problem $(CP_{id})$, we find an optimal solution $x^* = (1.0000, 1.0000)$. Then $x^*$ is an optimal solution of problem $(P_X)$ and $\Phi(x^*) = 1.7000$ is the optimal value of problem $(P_X)$.
• In the case $\alpha = -1$:
Step 1. Solving problems $(P_1)$ and $(P_2)$, we obtain the ideal point $y^I = (-0.2299, 1.8917)$.
Step 2. Solving problem $(CP_{id})$, we find that it is infeasible. This means that the ideal point $y^I$ does not belong to $Y$.
Case 2. The feasible set $X_E$ is the efficient solution set of problem (GBOP) and the objective function $\Phi(x)$ has the form (2).
In this case, the procedure for solving problem $(P_X)$ is based on Proposition II.4 and Proposition II.6. Recall that if $y^I \in Y$, then $X_{id} = \operatorname{Argmin}(P_X)$. Therefore, we can obtain an optimal solution of problem $(P_X)$ by solving the following convex programming problem:
$$\min\ \langle e, x \rangle \quad \text{s.t.}\quad x \in X_{id}, \qquad (CP^{id}_2)$$
where $e = (1, \dots, 1) \in \mathbb{R}^n$.
Procedure 2.
Step 1. For each $i = 1, 2$, find the optimal value $y^I_i$ of problem $(P_i)$.
Step 2. Solve problem $(CP^{id}_2)$.
If problem $(CP^{id}_2)$ is infeasible, then go to Step 3 ($y^I \notin Y$).
Else, find an optimal solution $x^*$ of problem $(CP^{id}_2)$ and STOP ($x^*$ is an optimal solution of problem $(P_X)$).
Step 3. Solve problems $(P_S)$ and $(P_E)$ to find the efficient points $y^S, y^E$ and the efficient solutions $x^S, x^E$ with respect to $y^S, y^E$, respectively.
Step 4. Solve problem $(CP_1)$ to find an optimal solution $(x^*, y^*)$. STOP ($x^*$ is an optimal solution of $(P_X)$).
Below are some examples illustrating Procedure 2.
Example III.2. Consider problem $(P_X)$, where $X_E$ is the efficient solution set of problem (GBOP) with
$$f_1(x) = x_1^2 + 1, \qquad f_2(x) = (x_2 - 3)^2 + 1,$$
$$X = \{x \in \mathbb{R}^2 \mid (x_1 - 1)^2 + (x_2 - 2)^2 \le 1,\ 2x_1 - x_2 \le 1\},$$
and $\Phi(x) = \lambda_1 f_1(x) + \lambda_2 f_2(x)$.
It is easily seen that the function $f$ is scalarly pseudoconvex on $X$ because $f_1, f_2$ are convex on $X$. Therefore, we can apply Procedure 2 to solve this problem.
Step 1. The optimal values of problems $(P_1)$ and $(P_2)$, respectively, are $y^I_1 = 1.0000$ and $y^I_2 = 1.0000$.
Step 2. Solving problem $(CP^{id}_2)$, we find that it is infeasible. Then go to Step 3.
Step 3. Solving problems $(P_S)$ and $(P_E)$, we obtain
$$y^S = (1.9412, 1.0000), \quad y^E = (1.0000, 1.9326)$$
and
$$x^S = (0.9612, 2.9991), \quad x^E = (0.0011, 2.0435)$$
with respect to $y^S, y^E$, respectively.
Step 4. For each $\lambda = (\lambda_1, \lambda_2) \in \mathbb{R}^2$, solve problem $(CP_1)$ to find an optimal solution $(x^*, y^*)$. Then $x^*$ is an optimal solution and $\Phi(x^*)$ is the optimal value of $(P_X)$. The computational results are shown in Table I.
TABLE I
COMPUTATIONAL RESULTS OF EXAMPLE III.2

λ = (λ1, λ2)   x*                 y*                 Φ(x*)
(0.0, 1.0)     (0.9612, 2.9991)   (1.9412, 1.0000)   1.0000
(0.2, 0.8)     (0.4460, 2.8325)   (1.1989, 1.0281)   1.0622
(0.5, 0.5)     (0.2929, 2.7071)   (1.0858, 1.0858)   1.0858
(0.8, 0.2)     (0.1675, 2.5540)   (1.0208, 1.1989)   1.0622
(1.0, 0.0)     (0.0011, 2.0435)   (1.0000, 1.9326)   1.0000
(−0.2, 0.8)    (0.9654, 2.9992)   (1.9412, 1.0000)   0.4118
(0.8, −0.2)    (0.0011, 2.0435)   (1.0000, 1.9326)   0.4146
Example III.3. Consider problem $(P_X)$, where $\Phi(x) = \lambda_1 f_1(x) + \lambda_2 f_2(x)$ and $X_E$ is the efficient solution set of problem (GBOP) with
$$f(x) = \left( \frac{-x_1^2 - 0.6x_1 + 0.5x_2}{x_1 - x_2 - 2},\ \frac{x_2^2 + x_1}{x_2 - x_1 + 2} \right)^T$$
and $X = \{x \in \mathbb{R}^2 \mid Ax \ge b,\ x \ge 0\}$, where $A$ and $b$ are as in Example II.1.
By Example II.1, $f$ is scalarly pseudoconvex on $X$. Therefore, we can apply Procedure 2 to solve this problem.
Step 1. The optimal values of problems $(P_1)$ and $(P_2)$, respectively, are $y^I_1 = -0.4167$ and $y^I_2 = 1.5400$.
Step 2. Solving problem $(CP^{id}_2)$, we find that it is infeasible. Then go to Step 3.
Step 3. Solving problems $(P_S)$ and $(P_E)$, we obtain
$$y^S = (-0.2000, 1.5400), \quad y^E = (-0.4167, 8.3332)$$
and
$$x^S = (0.4000, 2.4000), \quad x^E = (0.0000, 9.9997)$$
with respect to $y^S, y^E$, respectively.
Step 4. For each $\lambda = (\lambda_1, \lambda_2) \in \mathbb{R}^2$, solve problem $(CP_1)$ to find an optimal solution $(x^*, y^*)$. Then $x^*$ is an optimal solution and $\Phi(x^*)$ is the optimal value of $(P_X)$. The computational results are shown in Table II.
TABLE II
COMPUTATIONAL RESULTS OF EXAMPLE III.3

λ = (λ1, λ2)   x*                 y*                  Φ(x*)
(0.0, 1.0)     (0.4000, 2.4000)   (−0.2000, 1.5400)   1.5400
(0.2, 0.8)     (0.4000, 2.4000)   (−0.2000, 1.5400)   1.1920
(0.5, 0.5)     (0.4000, 2.4000)   (−0.2000, 1.5400)   0.6700
(0.8, 0.2)     (0.0900, 2.8650)   (−0.2870, 1.7378)   0.1180
(1.0, 0.0)     (0.0000, 9.9997)   (−0.4167, 8.3332)   −0.4167
(−0.2, 0.8)    (0.4000, 2.4000)   (−0.2000, 1.5400)   1.2720
(0.8, −0.2)    (0.0000, 9.9997)   (−0.4167, 8.3332)   −2.0000
Case 3. The feasible set $X_E$ is the efficient solution set of problem (GBOP) and the objective function $\Phi(x)$ has the form $\Phi(x) = \varphi(f(x))$, where $\varphi : Y \to \mathbb{R}$ is quasiconcave and monotonically decreasing.
In this case, the procedure for solving problem $(P_X)$ is based on Proposition II.4 and Proposition II.7. Let $x^{opt}$ be an optimal solution of problem $(P_X)$.
Procedure 3.
Step 1. For each $i = 1, 2$, find the optimal value $y^I_i$ of problem $(P_i)$.
Step 2. Solve problem $(CP^{id}_2)$.
If problem $(CP^{id}_2)$ is infeasible, then go to Step 3 ($y^I \notin Y$).
Else, find an optimal solution $x^*$ of problem $(CP^{id}_2)$ and STOP ($x^*$ is an optimal solution of problem $(P_X)$).
Step 3. Solve problems $(P_S)$ and $(P_E)$ to find the efficient points $y^S, y^E$ and the efficient solutions $x^S, x^E$ with respect to $y^S, y^E$, respectively.
Step 4. If $\varphi(y^S) > \varphi(y^E)$, then $x^{opt} = x^E$; else $x^{opt} = x^S$. STOP ($x^{opt}$ is an optimal solution of problem $(P_X)$). A tiny sketch of the Step 4 comparison is given below.
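Once Step 3 has produced $y^S, y^E$ and $x^S, x^E$, Step 4 is a single comparison. The lines below use the quasiconcave decreasing $\varphi$ and the endpoint data of Example III.4, which follows.

```python
# phi and the Step 3 data as in Example III.4 below.
phi = lambda y: -(y[0] ** 2 + y[1] ** 2)
yS, xS = (9.2894, -1.2623), (2.7848, 0.2406)
yE, xE = (2.2192, -0.4012), (1.1432, 0.2892)

# Proposition II.7: the optimum of (P_Z) is attained at y^S or y^E,
# so Step 4 of Procedure 3 reduces to one comparison.
x_opt = xE if phi(yS) > phi(yE) else xS
print("x_opt =", x_opt)   # here phi(y^S) < phi(y^E), so x_opt = x^S
```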
We give below an example to illustrate Procedure 3.
Example III.4. Consider problem $(P_X)$, where $X_E$ is the efficient solution set of problem (GBOP) with
$$f_1(x) = x_1^2 + 2x_1 x_2 + 3x_2^2, \qquad f_2(x) = x_2^2 - 0.5x_1 + 0.3x_2,$$
$$X = \{x \in \mathbb{R}^2 \mid g_1(x) \le 0,\ g_2(x) \le 0\},$$
$$g_1(x) = 9(x_1 - 2)^2 + 4(x_2 - 3)^2 - 36, \qquad g_2(x) = x_1 - 2x_2 - 3,$$
and $\Phi(x) = \varphi(f(x))$ with $\varphi(y) = -y_1^2 - y_2^2$.
It is easily verified that the vector function $f$ is scalarly pseudoconvex on $X$ because $f_1, f_2$ are convex on $X$. Moreover, the function $\varphi$ is quasiconcave and monotonically decreasing. Therefore, we can apply Procedure 3 to solve this problem.
Step 1. The optimal values of problems $(P_1)$ and $(P_2)$, respectively, are $y^I_1 = 2.2192$ and $y^I_2 = -1.2623$.
Step 2. Solving problem $(CP^{id}_2)$, we find that it is infeasible. Then go to Step 3.
Step 3. Solving problems $(P_S)$ and $(P_E)$, we obtain
$$y^S = (9.2894, -1.2623), \quad y^E = (2.2192, -0.4012)$$
and
$$x^S = (2.7848, 0.2406), \quad x^E = (1.1432, 0.2892)$$
with respect to $y^S, y^E$, respectively.
Step 4. Since $\varphi(y^S) < \varphi(y^E)$, the optimal solution of problem $(P_X)$ is $x^{opt} = x^S = (2.7874, 0.2406)$ and the optimal value of problem $(P_X)$ is $\Phi(x^{opt}) = -87.8812$.
IV. CONCLUSION
In this article, we have developed simple convex programming procedures for solving three special cases of the optimization problem over the efficient set $(P_X)$. These special-case procedures require quite little computational effort in comparison to that required to solve the general case, because only a few convex programming problems need to be solved. Therefore, they can be used as screening devices to detect and solve these special cases.
ACKNOWLEDGMENT
This research is funded by Hanoi University of Science and Technology under grant number T2016-TC-205.
REFERENCES
[1] L. T. H. An, P. D. Tao, N. C. Nam and L. D. Muu, "Method for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program", Optim., vol. 59, no. 1, pp. 77-93, 2010.
[2] M. Avriel, W. E. Diewert, S. Schaible and I. Zang, Generalized Concavity, Plenum Press, New York, 1998.
[3] H. P. Benson, "On the Global Optimization of Sums of Linear Fractional Functions over a Convex Set", J. Optim. Theory Appl., vol. 121, no. 1, pp. 19-39, 2004.
[4] H. P. Benson, "A global optimization approach for generating efficient points for multiobjective concave fractional programs", J. Multi-Criteria Decis. Anal., vol. 13, pp. 15-28, 2005.
[5] H. P. Benson, "An outcome space algorithm for optimization over the weakly efficient set of a multiple objective nonlinear programming problem", J. Glob. Optim., vol. 52, pp. 553-574, 2012.
[6] J. Fulop and L. D. Muu, "Branch-and-bound variant of an outcome-based algorithm for optimizing over the efficient set of a bicriteria linear programming problem", J. Optim. Theory Appl., vol. 105, pp. 37-54, 2000.
[7] R. Horst, N. V. Thoai, Y. Yamamoto and D. Zenke, "On optimization over the efficient set in linear multicriteria programming", J. Optim. Theory Appl., vol. 134, pp. 433-443, 2007.
[8] N. T. B. Kim and T. N. Thang, "Optimization over the Efficient Set of a Bicriteria Convex Programming Problem", Pacific J. Optim., vol. 9, pp. 103-115, 2013.
[9] D. T. Luc, Theory of Vector Optimization, Springer-Verlag, Berlin, Germany, 1989.
[10] O. L. Mangasarian, Nonlinear Programming, McGraw-Hill, New York, 1969.
[11] K. Miettinen, Nonlinear Multiobjective Optimization, Kluwer Academic Publishers, Boston, 1999.
[12] L. D. Muu and L. Q. Thuy, "Smooth optimization algorithms for optimizing over the Pareto efficient set and their application to minmax flow problems", Vietnam J. Math., vol. 39, no. 1, pp. 31-48, 2011.
[13] H. X. Phu, "On efficient sets in R^2", Vietnam J. Math., vol. 33, pp. 463-468, 2005.
[14] R. T. Rockafellar and R. B. Wets, Variational Analysis, Springer-Verlag, Berlin, Germany, 2010.
[15] T. N. Thang and N. T. B. Kim, "Outcome space algorithm for generalized multiplicative problems and optimization over the efficient set", J. Ind. Manag. Optim., vol. 12, no. 4, pp. 1417-1433, 2016.
[16] T. N. Thang, D. T. Luc and N. T. B. Kim, "Solving generalized convex multiobjective programming problems by a normal direction method", Optim., vol. 65, no. 12, pp. 2269-2292, 2016.
[17] N. V. Thoai, "Reverse convex programming approach in the space of extreme criteria for optimization over efficient sets", J. Optim. Theory Appl., vol. 147, pp. 263-277, 2010.
[18] H. Tuy, Convex Analysis and Global Optimization, Kluwer, 1998.
[19] Y. Yamamoto, "Optimization over the efficient set: overview", J. Global Optim., vol. 22, pp. 285-317, 2002.
[20] P. L. Yu, Multiple-Criteria Decision Making, Plenum Press, New York and London, 1985.
[21] M. M. Wiecek, M. Ehrgott and A. Engau, "Continuous multiobjective programming", in Multiple Criteria Decision Analysis: State of the Art Surveys, Oper. Res. Manag. Sci., vol. 233, Springer, New York, pp. 739-815, 2016.