To cite this article: D. Quoc Tran, M. Le Dung & Van Hien Nguyen (2008) Extragradient algorithms extended to equilibrium problems, Optimization: A Journal of Mathematical Programming and Operations Research, 57:6, 749-776, DOI: 10.1080/02331930601122876
To link to this article: http://dx.doi.org/10.1080/02331930601122876
Optimization, Vol. 57, No. 6, December 2008, 749–776
Extragradient algorithms extended to equilibrium problems†
D. Quoc Tran^a, M. Le Dung^b,* and Van Hien Nguyen^c
^a National University of Hanoi, Vietnam; ^b Hanoi Institute of Mathematics, Vietnam; ^c University of Namur, Belgium
(Received 5 September 2005; final version received 17 October 2006)
We make use of the auxiliary problem principle to develop iterative algorithms for solving equilibrium problems. The first one is an extension of the extragradient algorithm to equilibrium problems. In this algorithm the equilibrium bifunction is not required to satisfy any monotonicity property, but it must satisfy a certain Lipschitz-type condition. To avoid this requirement we propose linesearch procedures commonly used in variational inequalities to obtain projection-type algorithms for solving equilibrium problems. Applications to mixed variational inequalities are discussed. A special class of equilibrium problems is investigated and some preliminary computational results are reported.
Keywords: equilibrium problem; extragradient method; linesearch; auxiliary problem principle; variational inequality
Mathematics Subject Classifications 2000: 65K10; 90C25
1 Introduction and the problem statement
Let $K$ be a nonempty closed convex subset of the $n$-dimensional Euclidean space $\mathbb{R}^n$ and let
$f : K \times K \to \mathbb{R} \cup \{+\infty\}$. Consider the following equilibrium problem in the sense of Blum and Oettli [6]:
Find $x^* \in K$ such that $f(x^*, y) \ge 0$ for all $y \in K$, (PEP)
where $f(x, x) = 0$ for every $x \in K$. As usual, we call a bifunction satisfying this property an equilibrium bifunction on $K$.
Equilibrium problems have been considered by several authors (see, e.g., [6,12,13,21,22] and the references therein). It is well known (see, e.g., [13,21,23]) that various classes of mathematical programming problems, variational inequalities, fixed point problems, Nash equilibria in noncooperative game theory and minimax problems can be formulated in the form of (PEP).
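For instance, a variational inequality defined by a mapping $F : K \to \mathbb{R}^n$ fits this format through the standard bifunction choice recalled below (a routine reduction included here for illustration, not quoted from the text):
$f(x, y) := \langle F(x), y - x \rangle, \qquad f(x, x) = 0, \qquad x^* \text{ solves the variational inequality} \iff f(x^*, y) \ge 0 \ \ \forall y \in K,$
so every result stated for (PEP) applies in particular to this bifunction.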
*Corresponding author. Email: ldmuu@math.ac.vn
†This article is dedicated to the memory of W. Oettli.
The proximal point method was first introduced by Martinet in [16] for solving variational inequalities and then extended by Rockafellar [28] to the problem of finding a zero of a maximal monotone operator. Moudafi [20] further extended the proximal point method to monotone equilibrium problems. Konnov [14] used the proximal point method for solving Problem (PEP) with $f$ being a weakly monotone equilibrium bifunction. Another strategy is to use, as for variational inequality problems, a gap function in order to convert an equilibrium problem into an optimization problem [14,18]. In general, the transformed mathematical programming problem is not convex.
The auxiliary problem principle, first introduced by Cohen in [7] for solving optimization problems and then extended to variational inequalities in [8], has become a useful tool for analyzing and developing efficient algorithms for the solution of various classes of mathematical programming and variational inequality problems (see, e.g., [1,2,7–9,11,24,29] and the references cited therein). Recently, Mastroeni in [17] further extended the auxiliary problem principle to equilibrium problems involving strongly monotone equilibrium bifunctions satisfying a Lipschitz-type condition. Noor in [25] used the auxiliary problem principle to develop iterative methods for solving problems where the equilibrium bifunctions are supposed to be partially relaxed strongly monotone. As in the proximal point method, the subproblems to be solved in these methods are strongly monotone equilibrium problems. In a recent article, Nguyen et al. [31] developed a bundle method for solving problems where the equilibrium bifunctions satisfy a certain cocoercivity condition. A continuous extragradient method is proposed in [3] for solving equilibrium problems with skew bifunctions.
It is well known that algorithms based upon the auxiliary problem principle are, in general, not convergent for monotone variational inequalities, which are special cases of the monotone equilibrium problem (PEP). To overcome this drawback, the extragradient method, first introduced by Korpelevich [15] for finding saddle points, is used to solve monotone, even pseudomonotone, variational inequalities [9,23,24].
In this article, we use the auxiliary problem principle to extend the extragradient method to equilibrium problems. In this way, we obtain extragradient algorithms for solving Problem (PEP). Convergence of the proposed algorithms does not require $f$ to satisfy any type of monotonicity, but it must satisfy a certain Lipschitz-type condition as introduced in [17]. In order to avoid this requirement, we use a linesearch technique to obtain convergent algorithms for solving (PEP).
The rest of the article is organized as follows. In the next section, we give fixed-point formulations of Problem (PEP). We then use these formulations in the third section to describe an extragradient algorithm for (PEP). Section four is devoted to the presentation of linesearch algorithms and their convergence results, which avoid the aforementioned Lipschitz-type condition. In section five, we discuss applications of the proposed algorithms to mixed multivalued variational inequalities. The last section contains some preliminary computational results and experiments.
2 Fixed point formulations
First we recall some well-known definitions on monotonicity that we need in the sequel.
Definition 2.1 Let $M$ and $K$ be nonempty convex sets in $\mathbb{R}^n$ with $M \subseteq K$, and let $f : K \times K \to \mathbb{R} \cup \{+\infty\}$. The bifunction $f$ is said to be
(a) strongly monotone on $M$ with constant $\tau > 0$ if for each pair $x, y \in M$, we have $f(x, y) + f(y, x) \le -\tau \|x - y\|^2$;
(b) strictly monotone on $M$ if for all distinct $x, y \in M$, we have $f(x, y) + f(y, x) < 0$;
(c) monotone on $M$ if for each pair $x, y \in M$, we have $f(x, y) + f(y, x) \le 0$;
(d) pseudomonotone on $M$ if for each pair $x, y \in M$, $f(x, y) \ge 0$ implies $f(y, x) \le 0$.
Following [14], associated with (PEP) we consider the following dual problem of (PEP):
Find $x^* \in K$ such that $f(y, x^*) \le 0$ for all $y \in K$. (DEP)
For each $x \in K$, let
$L_f(x) := \{ y \in K : f(x, y) \le 0 \}.$
Clearly, $x^*$ is a solution to (DEP) if and only if $x^* \in \bigcap_{x \in K} L_f(x)$.
We will denote by $K^*$ and $K^d$ the solution sets of (PEP) and (DEP), respectively. Conditions under which (PEP) and (DEP) have solutions can be found, for example, in [6,12,13,30] and the references therein. Since $K^d = \bigcap_{x \in K} L_f(x)$, the solution set $K^d$ is closed and convex whenever $f(x, \cdot)$ is closed and convex on $K$. In general, $K^*$ may not be convex. However, if $f$ is closed and convex on $K$ with respect to the second variable and hemicontinuous with respect to the first variable, then $K^*$ is convex and $K^d \subseteq K^*$. Moreover, if $f$ is pseudomonotone on $K$, then $K^* = K^d$ (see [14,21]). In what follows, we suppose that $K^d \neq \emptyset$.
The following lemma gives a fixed-point formulation for (PEP).
LEMMA 2.1 ([17,23]) Let $f : K \times K \to \mathbb{R} \cup \{+\infty\}$ be an equilibrium bifunction.
Then the following statements are equivalent:
(i) $x^*$ is a solution to (PEP);
(ii) $x^*$ is a solution to the problem $\min_{y \in K} f(x^*, y)$.
Let $L : K \times K \to \mathbb{R}$ be a nonnegative differentiable convex bifunction on $K$ with respect to the second argument $y$ (for each fixed $x \in K$) such that
(i) $L(x, x) = 0$ for all $x \in K$,
(ii) $\nabla_2 L(x, x) = 0$ for all $x \in K$,
where, as usual, $\nabla_2 L(x, x)$ denotes the gradient of the function $L(x, \cdot)$ at $x$. An important example of such a function is $L(x, y) := \frac{1}{2}\|y - x\|^2$.
We consider the auxiliary equilibrium problem defined as
Find $x^* \in K$ such that $\rho f(x^*, y) + L(x^*, y) \ge 0$ for all $y \in K$, (AuPEP)
where $\rho > 0$ is a regularization parameter.
Applying Lemma 2.1 to the equilibrium bifunction $\rho f + L$, we see that $x^*$ is a solution to (AuPEP) if and only if $x^*$ is a minimizer of the convex program
$\min_{y \in K} \{ \rho f(x^*, y) + L(x^*, y) \}.$ $(C_{x^*})$
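With the common quadratic choice $L(x, y) = \frac{1}{2}\|y - x\|^2$ (used here purely as an illustration of the construction above), conditions (i)–(ii) are immediate, since $L(x, x) = 0$ and $\nabla_2 L(x, y) = y - x$ vanishes at $y = x$, and the convex program becomes
$\min_{y \in K} \big\{ \rho f(x^*, y) + \tfrac{1}{2}\|y - x^*\|^2 \big\},$
which, for the variational inequality bifunction $f(x, y) = \langle F(x), y - x \rangle$, is solved in closed form by the Euclidean projection $P_K(x^* - \rho F(x^*))$.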
Equivalence between (PEP) and (AuPEP) is stated in the following lemma.
LEMMA 2.2 ([17,23]) Let $f : K \times K \to \mathbb{R} \cup \{+\infty\}$ be an equilibrium bifunction, and let $x^* \in K$. Suppose that $f(x^*, \cdot) : K \to \mathbb{R}$ is convex and subdifferentiable on $K$. Let
$L : K \times K \to \mathbb{R}_+$ be a differentiable convex bifunction on $K$ with respect to the second argument $y$ such that
(i) $L(x, x) = 0$,
(ii) $\nabla_2 L(x, x) = 0$.
Then $x^* \in K$ is a solution to (PEP) if and only if $x^*$ is a solution to (AuPEP).
We omit the proof for this nondifferentiable case because it is similar to the one given in [17,23] for the differentiable case.
3 An extragradient algorithm for EP
As we have mentioned, if $f(x, \cdot)$ is closed and convex on $K$ and $f(\cdot, y)$ is upper hemicontinuous on $K$, then the solution set of (DEP) is contained in that of (PEP). In the following algorithm, as in [17], we use the auxiliary bifunction given by
$L(x, y) := G(y) - G(x) - \langle \nabla G(x), y - x \rangle,$
where $G$ is strongly convex and continuously differentiable on an open set containing $K$ (see Theorem 3.2 below).
Lemma 2.2 gives a fixed-point formulation for Problem (PEP) that suggests an iterative method for solving (PEP) by setting $x^{k+1} := s(x^k)$, where $s(x^k)$ is the unique solution of the strongly convex problem $(C_{x^k})$. Unfortunately, it is well known (see also [9]) that, for
monotone variational inequality problems, which are special cases of the monotone equilibrium problem (PEP), the sequence $\{x^k\}$ may not be convergent. This fact suggested the use of the extragradient method, introduced by Korpelevich in [15] first for finding saddle points, for monotone variational inequalities [9,23]. For the single-valued variational inequality problem given as
Find $x^* \in K$ such that $\langle F(x^*), x - x^* \rangle \ge 0$ for all $x \in K$, (VIP)
the extragradient (or double projection) method constructs two sequences $\{x^k\}$ and $\{y^k\}$ by setting
$y^k := P_K(x^k - \lambda F(x^k)) \qquad \text{and} \qquad x^{k+1} := P_K(x^k - \lambda F(y^k)),$
where $\lambda > 0$ and $P_K$ denotes the Euclidean projection onto $K$.
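As an aside not contained in the original text, the two-projection scheme above is easy to sketch in code when $K$ admits a closed-form projection; the following Python fragment uses the nonnegative orthant, and the operator $F$, the step size and the tolerance are placeholder choices.

import numpy as np

def project_nonneg(x):
    # Euclidean projection onto K = R^n_+ (componentwise max with 0)
    return np.maximum(x, 0.0)

def extragradient_vip(F, x0, lam=0.1, tol=1e-8, max_iter=1000):
    # Double projection method: y^k = P_K(x^k - lam F(x^k)), x^{k+1} = P_K(x^k - lam F(y^k))
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = project_nonneg(x - lam * F(x))      # predictor (extragradient) step
        x_new = project_nonneg(x - lam * F(y))  # corrector step
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example with an affine monotone operator F(x) = M x + q (hypothetical data).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
solution = extragradient_vip(lambda x: M @ x + q, x0=np.zeros(2))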
Now we further extend the extragradient method to the equilibrium problem (PEP). Throughout the rest of the article, we suppose that the function $f(x, \cdot)$ is closed, convex and subdifferentiable on $K$ for each $x \in K$. Under this assumption, the subproblems to be solved in the algorithms below are convex programs with strongly convex objective functions.
In Algorithm 1, which we are going to describe, the regularization parameter must satisfy a certain condition in order to obtain convergence (see Theorem 3.2).
Algorithm 1
Step 0. Take $x^0 \in K$, $\rho > 0$ and set $k := 0$.
Step 1. Solve the strongly convex program
$\min_{y \in K} \{ \rho f(x^k, y) + L(x^k, y) \}$
to obtain its unique optimal solution $y^k$.
If $y^k = x^k$, then stop: $x^k$ is a solution to (PEP). Otherwise, go to Step 2.
Step 2. Solve the strongly convex program
$\min_{y \in K} \{ \rho f(y^k, y) + L(x^k, y) \}$
to obtain its unique solution $x^{k+1}$.
Step 3. Set $k := k + 1$, and go back to Step 1.
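To make the iteration concrete, here is a small Python sketch of Algorithm 1 under several simplifying assumptions that are ours rather than the paper's: $L(x, y) = \frac{1}{2}\|y - x\|^2$, a box-shaped $K$ handled through bounds, a general-purpose solver (scipy) standing in for an exact solver of the strongly convex subproblems, and a hypothetical quadratic test bifunction.

import numpy as np
from scipy.optimize import minimize

def solve_subproblem(f, x_center, x_eval, rho, bounds):
    # min_y  rho * f(x_eval, y) + 0.5 * ||y - x_center||^2  over the box K
    obj = lambda y: rho * f(x_eval, y) + 0.5 * np.sum((y - x_center) ** 2)
    return minimize(obj, x0=x_center, bounds=bounds).x

def extragradient_ep(f, x0, bounds, rho=0.1, tol=1e-6, max_iter=200):
    # Algorithm 1 specialized to L(x, y) = 0.5 * ||y - x||^2
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = solve_subproblem(f, x, x, rho, bounds)   # Step 1
        if np.linalg.norm(y - x) < tol:              # stopping test y^k = x^k
            return x
        x = solve_subproblem(f, x, y, rho, bounds)   # Step 2
    return x

# Hypothetical bifunction f(x, y) = <P x + Q y + q, y - x> with Q positive semidefinite,
# so that f(x, .) is convex; the data below are made up for illustration only.
P = np.array([[3.0, 1.0], [1.0, 3.0]])
Q = np.eye(2)
q = np.array([-1.0, -2.0])
f = lambda x, y: (P @ x + Q @ y + q) @ (y - x)
x_approx = extragradient_ep(f, x0=[0.0, 0.0], bounds=[(0.0, 5.0), (0.0, 5.0)])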
The following lemma shows that, if Algorithm 1 terminates after a finite number of iterations, then a solution to (PEP) has already been found.
LEMMA 3.1 If the algorithm terminates at some iterate $x^k$, then $x^k$ is a solution of (PEP).
Proof If $y^k = x^k$, then, by the fact that $f(x, x) = 0$, we have
$\rho f(x^k, y) + L(x^k, y) \ge \rho f(x^k, x^k) + L(x^k, x^k) = 0 \quad \text{for all } y \in K.$
Thus, by Lemma 2.2, $x^k$ is a solution to (PEP). □
The following theorem establishes the convergence of the algorithm.
THEOREM 3.2 Suppose that
(i) $G$ is strongly convex with modulus $\beta > 0$ and continuously differentiable on an open set containing $K$;
(ii) there exist two constants $c_1 > 0$ and $c_2 > 0$ such that
$f(x, y) + f(y, z) \ge f(x, z) - c_1\|y - x\|^2 - c_2\|z - y\|^2 \quad \forall x, y, z \in K.$ (3.4)
Then
(a) For every $x^* \in K^d$ and each $y \in K$, it holds true that
(b) Suppose in addition that $f$ is lower semicontinuous on $K \times K$, $f(\cdot, y)$ is upper semicontinuous on $K$, and $0 < \rho < \min\{\beta/(2c_1), \beta/(2c_2)\}$; then the sequence $\{x^k\}$ is bounded, and every cluster point of $\{x^k\}$ is a solution to (DEP).
Moreover, if $K^d = K^*$ (in particular, if $f$ is pseudomonotone on $K$), then the whole sequence $\{x^k\}$ converges to a solution of (PEP).
Proof (a) Take any $x^* \in K^d$. By the definition of $l$ and since $x^k, x^{k+1} \in K$, we have
where $N_K(x)$ is the (outward) normal cone of $K$ at $x \in K$.
Thus, since $f(y^k, \cdot)$ is subdifferentiable and $G$ is strongly convex and differentiable on $K$, by the well-known Moreau–Rockafellar theorem [27], there exists $w \in \partial_2 f(y^k, x^{k+1})$ such that
By the definition of the subgradient we have, from the latter inequality, that
Since $G$ is strongly convex with modulus $\beta > 0$, for every $x$ and $y$ one has
$G(y) - G(x) - \langle \nabla G(x), y - x \rangle \ge \frac{\beta}{2}\|y - x\|^2,$
which proves (a).
Now we prove (b). By the assumption $0 < \rho < \min\{\beta/(2c_1), \beta/(2c_2)\}$, it follows that $\{l(x^k)\}_{k \ge 0}$ is a nonincreasing sequence. Since it is bounded below by 0, it converges to some $l^*$. Passing to the limit as $k \to \infty$, it is easy to see from (3.13) that
Since $f$ is lower semicontinuous on $K \times K$, $f(\cdot, y)$ is upper semicontinuous on $K$ and $f(x, x) = 0$, letting $i \to \infty$ along a subsequence $\{x^{k_i}\}$ converging to a cluster point $\bar{x}$, we obtain from the last inequality that
$\rho f(\bar{x}, y) + G(y) - G(\bar{x}) - \langle \nabla G(\bar{x}), y - \bar{x} \rangle \ge 0 \quad \forall y \in K,$
which shows that $\bar{x}$ is a solution of (AuPEP) corresponding to $L(x, y) = G(y) - G(x) - \langle \nabla G(x), y - x \rangle$.
Then, by Lemma 2.2, $\bar{x}$ is a solution to (PEP).
Suppose now $K^d = K^*$. We claim that the whole sequence $\{x^k\}_{k \ge 0}$ converges to $\bar{x}$. Indeed, using the definition of $l(x^k)$ with $x^* = \bar{x} \in K^d$, we have $l(\bar{x}) = 0$. Thus, as $G$ is $\beta$-strongly convex, we can conclude that $\{x^k\}$ converges to $\bar{x}$, which completes the proof. □
Note that if $f(x, y) := \varphi(y) - \varphi(x)$, then clearly (3.4) holds true for any $c_1 \ge 0$, $c_2 \ge 0$ and for any function $\varphi$.
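For comparison (again a standard calculation added for illustration, not drawn from the text), the Lipschitz-type condition (3.4) holds for the variational inequality bifunction $f(x, y) = \langle F(x), y - x \rangle$ whenever $F$ is Lipschitz continuous on $K$ with constant $L$, with $c_1 = c_2 = L/2$, since
$f(x, z) - f(x, y) - f(y, z) = \langle F(x) - F(y), z - y \rangle \le L\|y - x\|\,\|z - y\| \le \frac{L}{2}\|y - x\|^2 + \frac{L}{2}\|z - y\|^2.$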
4 Linesearch algorithms
Algorithm 1 requires that $f$ satisfies the Lipschitz-type condition (3.4), which in some cases is not known to hold. In order to avoid this requirement, in this section we modify Algorithm 1 by using a linesearch. The linesearch technique has been used widely in descent methods for mathematical programming problems as well as for variational inequalities [9,13].
First, we begin with the following definition.
Definition 4.1 ([13]) Let $K$ be a nonempty closed set in $\mathbb{R}^n$. A mapping $P : \mathbb{R}^n \to \mathbb{R}^n$ is said to be
(i) feasible with respect to $K$ if $P(x) \in K$ for all $x \in \mathbb{R}^n$;
(ii) quasi-nonexpansive with respect to $K$ if for every $x \in \mathbb{R}^n$ and every $y \in K$,
$\|P(x) - y\| \le \|x - y\|.$ (4.1)
Note that, if $P_K$ is the Euclidean projection onto $K$, then $P_K$ is a feasible quasi-nonexpansive mapping (indeed, $\|P_K(x) - y\| = \|P_K(x) - P_K(y)\| \le \|x - y\|$ for every $y \in K$). We denote by $\mathcal{F}(K)$ the class of feasible quasi-nonexpansive mappings with respect to $K$.
Next, we choose a sequence $\{\gamma_k\}_{k \ge 0}$ such that
$\gamma_k \in (0, 2) \quad \forall k = 0, 1, 2, \ldots, \qquad \text{and} \qquad \liminf_{k \to \infty} \gamma_k(2 - \gamma_k) > 0.$ (4.2)
The algorithm then can be described as follows.
Algorithm 2
Step 1. Solve the strongly convex program (4.3) to obtain its unique solution $y^k$.
If $y^k = x^k$, stop: $x^k$ is a solution to (PEP). Otherwise, go to Step 2.
Step 2.
Step 2.1. Find the smallest positive integer $m$ such that the linesearch condition (4.4) is satisfied.
Step 4. Set $k := k + 1$, and go back to Step 1.
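Since the displays (4.3)–(4.5) and Steps 2.2–3 are not reproduced above, the following Python sketch should be read only as one plausible realization of a linesearch/projection scheme of this general kind, with $L(x, y) = \frac{1}{2}\|y - x\|^2$, a box $K$, an Armijo-type backtracking test and a halfspace-projection update that are our own illustrative choices rather than the paper's exact rules.

import numpy as np
from scipy.optimize import minimize

def linesearch_ep(f, grad2_f, x0, bounds, rho=0.5, eta=0.5, alpha=0.5,
                  gamma=1.0, tol=1e-6, max_iter=200):
    # Illustrative linesearch / projection scheme for (PEP); not the paper's exact Algorithm 2.
    low = np.array([b[0] for b in bounds])
    high = np.array([b[1] for b in bounds])
    project = lambda v: np.minimum(np.maximum(v, low), high)  # P_K for a box K
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Step 1: auxiliary subproblem with L(x, y) = 0.5 * ||y - x||^2.
        obj = lambda y: rho * f(x, y) + 0.5 * np.sum((y - x) ** 2)
        y = minimize(obj, x0=x, bounds=bounds).x
        if np.linalg.norm(y - x) < tol:
            return x
        # Step 2: backtrack along the segment [x, y] until f(z, y) is sufficiently negative.
        t = eta
        z = (1 - t) * x + t * y
        while f(z, y) > -alpha / (2.0 * rho) * np.linalg.norm(y - x) ** 2:
            t *= eta
            z = (1 - t) * x + t * y
        # Step 3: project x onto a halfspace separating it from the solution set, then onto K.
        g = grad2_f(z, x)                        # (sub)gradient of f(z, .) at x
        sigma = f(z, x) / max(g @ g, 1e-16)
        x = project(x - gamma * sigma * g)
    return x

# Reusing the hypothetical quadratic bifunction from the previous sketch.
P = np.array([[3.0, 1.0], [1.0, 3.0]])
Q = np.eye(2)
q = np.array([-1.0, -2.0])
f = lambda x, y: (P @ x + Q @ y + q) @ (y - x)
grad2_f = lambda x, y: Q.T @ (y - x) + (P @ x + Q @ y + q)
x_approx = linesearch_ep(f, grad2_f, x0=[0.0, 0.0], bounds=[(0.0, 5.0), (0.0, 5.0)])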
The following lemma indicates that if Algorithm 2 terminates at Step 1 or Step 2.2, then indeed a solution of (PEP) has been found.
LEMMA 4.1 If Algorithm 2 terminates at Step 1 (resp. Step 2.2), then $x^k$ (resp. $z^k$) is a solution to (PEP).
Proof If the algorithm terminates at Step 1, then $x^k = y^k$. Since $y^k$ is the solution to the convex optimization problem (4.3), we have, as in the proof of Lemma 3.1, that $x^k$ is a solution to (PEP).
If the algorithm terminates at Step 2.2, then $0 \in \partial_2 f(z^k, z^k)$. Since $f(z^k, \cdot)$ is convex, it implies that $f(z^k, z^k) \le f(z^k, y)$ for all $y \in K$. On the other hand, since $f(z^k, z^k) = 0$, it shows that $f(z^k, y) \ge 0$ for all $y \in K$, i.e., $z^k$ is a solution to (PEP). □
The next lemma shows that there always exists a positive integer $m$ such that Condition (4.4) in Step 2.1 is satisfied.
LEMMA 4.2 Suppose that $f$ is upper semicontinuous on $K$ with respect to the first variable, and $y^k \neq x^k$. Then
(i) there exists an integer $m > 0$ such that the inequality in (4.4) holds.
The statement (ii) is immediate from the rule for the determination of $z^k$.
In order to prove the convergence of Algorithm 2, we give the following key property of the sequence $\{x^k\}$ generated by the algorithm.
LEMMA 4.3 If $f(x, \cdot)$ is convex and subdifferentiable on $K$, then the following statements hold true:
(i) For every solution $x^*$ of (DEP) one has
Proof First, we prove (i). Take any $x^* \in K^d$. By property (4.1) of $P_k$ and (4.5), setting
which, together with (4.9) and (4.10), implies
$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - \gamma_k(2 - \gamma_k)\|\sigma_k g^k\|^2 \quad \forall x^* \in K^d.$
To prove (ii) we apply the latter inequality for every $k$ from 0 to $m$ to obtain
$\sum_{k=0}^{m} \gamma_k(2 - \gamma_k)(\sigma_k \|g^k\|)^2 \le \|x^0 - x^*\|^2,$
and hence, letting $m \to \infty$,
$\sum_{k=0}^{\infty} \gamma_k(2 - \gamma_k)(\sigma_k \|g^k\|)^2 < \infty.$
We finally prove (iii). To prove that $\{g^k\}$ is bounded, we first observe that $\{y^k\}$ is bounded. Indeed, since $y^k$ is the unique solution of Problem (4.3), whose objective function is continuous and whose feasible set is constant, by the Maximum Theorem (Proposition 23 in [4], see also [5]), the mapping $x \mapsto s(x)$ is continuous.
We are now in a position to prove the following convergence theorem for Algorithm 2. As we have seen in Lemma 4.1, if Algorithm 2 terminates, then a solution to (PEP) has already been found. Otherwise, if the algorithm does not terminate, we have the following convergence results.
THEOREM 4.4 In addition to the assumptions of Lemmas 4.2 and 4.3, we assume that $f$ is continuous on $K \times K$. Then
(i) The sequence $\{x^k\}$ is bounded, and every cluster point of $\{x^k\}$ is a solution to (PEP).
(ii) If $K^* = K^d$ (in particular, if $f$ is pseudomonotone on $K$), then the whole sequence $\{x^k\}$ converges to a solution of (PEP). In addition, if $\gamma_k = \gamma \in (0, 2)$ for all $k \ge 0$, then