DOI 10.1007/s10589-010-9360-4
Iterative methods for solving monotone equilibrium
problems via dual gap functions
Tran Dinh Quoc · Le Dung Muu
Received: 27 March 2010 / Published online: 14 October 2010
© Springer Science+Business Media, LLC 2010
Abstract This paper proposes an iterative method for solving strongly monotone equilibrium problems by using gap functions combined with double projection-type mappings. Global convergence of the proposed algorithm is proved and its complexity is estimated. This algorithm is then coupled with the proximal point method to generate a new algorithm for solving monotone equilibrium problems. A class of linear equilibrium problems is investigated and numerical examples are implemented to verify our algorithms.
Keywords Gap function · Double projection-type method · Monotone equilibrium problem · Proximal point method · Global convergence · Complexity
1 Introduction
Let C be a nonempty closed convex subset of R^n, and f : C × C → R ∪ {+∞} be a bifunction such that f(x, x) = 0 for all x ∈ C. We are interested in the following problem:

Find x∗ ∈ C such that f(x∗, y) ≥ 0 for all y ∈ C. (PEP)
This paper is supported in part by NAFOSTED, Vietnam.
T.D. Quoc (✉)
Hanoi University of Science, Hanoi, Vietnam
e-mail: quoc.trandinh@esat.kuleuven.be
Present address:
T.D Quoc
Department of Electrical Engineering (ESAT/SCD) and OPTEC, K.U. Leuven, Leuven, Belgium

L.D. Muu
Institute of Mathematics, Hanoi, Vietnam
e-mail: ldmuu@math.ac.vn
Problem (PEP) is known as an equilibrium problem in the sense of Blum and Oettli [1] (see also [12]). This problem is also referred to as Ky Fan's inequality due to his results in this field [13]. Associated with the primal form (PEP), its dual form is defined as follows:
Find x∗ ∈ C such that f(y, x∗) ≤ 0 for all y ∈ C. (DEP)
Let us denote by S*_p and S*_d the solution sets of Problems (PEP) and (DEP), respectively. The nonemptiness of S*_p and S*_d, their structure, and the relations between these sets have been studied in the literature (see, e.g., [4]).
Problem (PEP) on the one hand covers many problems in optimization and nonlinear analysis such as optimization problems, variational inequalities, complementarity problems, fixed point problems and Nash equilibrium models [1,13,16,18,19]. On the other hand, it arises from many practical problems in economics, transportation, mechanics and engineering. Theory, methods and applications of equilibrium problems have been widely studied by many researchers.
In recent years, methods for solving Problems (PEP)–(DEP) have been studied extensively. The first solution approach is based on the auxiliary problem principle. This principle was first introduced for optimization problems by Cohen [2] and then extended to variational inequalities in [3]. Mastroeni [10] further applied the auxiliary problem principle to equilibrium problems of the form (PEP) involving strongly monotone bifunctions satisfying a certain Lipschitz-type condition. Noor [17] used this principle to develop iterative algorithms for solving (PEP), where the bifunction f was supposed to be partially relaxed strongly monotone.
One of the most popular methods is the proximal point method. This method was first introduced by Martinet [8] for variational inequalities and then extended by Rockafellar [21] for finding a zero point of a maximal monotone operator. Moudafi [11] and Konnov [4] further extended the proximal point method to Problem (PEP) with monotone and weakly monotone bifunctions, respectively.
Other solution methods well developed in mathematical programming and variational inequalities, such as gap function-based, extragradient and bundle methods [5,14–16], have recently been extended to equilibrium problems [13,19,22].
In this paper, we first extend the method proposed in [15] for strongly monotone variational inequalities to strongly monotone equilibrium problems that satisfy a certain Lipschitz-type condition. The global convergence of the proposed algorithm is investigated and its complexity is estimated. We prove that the rate of convergence of the algorithm is linear. It is noticeable that the global contraction rate is better than that of the projection method proposed in [9,13] when the condition number of (PEP) is greater than (2 + √5)^{1/2}, which is often the case in practice. However, as a compensation, in each iteration of the algorithm two convex programming problems need to be solved instead of one as in the projection method. The obtained algorithm is then coupled with the inexact proximal point method [11] to obtain a new variant of the proximal point algorithm for solving the monotone (not necessarily strongly monotone) problem (PEP). A class of linear equilibrium problems is also considered as a special case of (PEP) and two equilibrium models are implemented to verify the proposed algorithms.
The rest of this paper is organized as follows. In Sect. 2 we propose an algorithm (Algorithm 1) for solving strongly monotone equilibrium problems. The convergence of this algorithm is proved and its complexity is estimated. In Sect. 3 we present a combination of Algorithm 1 and the proximal point method [11] and show the convergence of the resulting algorithm. In the last section, a class of linear equilibrium problems is investigated and two numerical examples are implemented.
2 Algorithm for strongly monotone equilibrium problems
Recently, Nesterov and Scrimali [15] proposed an iterative method for solving strongly monotone variational inequalities that satisfy a Lipschitz condition. This method is known as an extrapolation of the extragradient algorithm or a double projection-type method. In this section, we extend the idea in [15] to strongly monotone equilibrium problems that satisfy a certain Lipschitz-type condition. We prove the global convergence of the proposed algorithm and estimate its complexity. Before presenting the algorithmic scheme, we recall the following well-known definitions that will be used in the sequel.
Definition 1 [4,16] Let X ⊆ R^n and f : X × X → R ∪ {+∞} be a bifunction. Then f is said to be

(i) strongly monotone on X with parameter τ > 0 if for all x and y in X, it holds that

f(x, y) + f(y, x) ≤ −τ‖y − x‖²;

(ii) monotone on X if for all x and y in X, we have

f(x, y) + f(y, x) ≤ 0;

(iii) pseudo-monotone on X if f(x, y) ≥ 0 implies f(y, x) ≤ 0 for all x, y in X;
(iv) Lipschitz-type continuous on X if there exists a constant L > 0 such that

f(x, y) + f(y, z) ≥ f(x, z) − L‖y − x‖ ‖z − y‖, ∀x, y, z ∈ C. (1)
It is obvious from Definition 1 that (i) ⇒ (ii) ⇒ (iii).
Remark 1 Note that the Lipschitz-type condition (1) implies the Lipschitz-type condition in the sense of Mastroeni [9]:

f(x, y) + f(y, z) ≥ f(x, z) − c₁‖y − x‖² − c₂‖z − y‖², ∀x, y, z ∈ C, (2)

where c₁, c₂ > 0 are two given constants. Indeed, applying the Cauchy–Schwarz inequality we have

L‖y − x‖ ‖z − y‖ ≤ (L/(2q))‖y − x‖² + (Lq/2)‖z − y‖²

for an arbitrary q > 0. Thus if we set c₁ := L/(2q) and c₂ := Lq/2, then condition (1) implies (2). Condition (1) can be considered as a variant of the Lipschitz-type condition (2).
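The splitting behind Remark 1 is easy to sanity-check numerically. The sketch below (the dimension, the constant L and the random sampling are our own illustrative choices, not from the paper) verifies that L‖y − x‖ ‖z − y‖ ≤ (L/(2q))‖y − x‖² + (Lq/2)‖z − y‖² for random points and random q > 0:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 3.0  # illustrative Lipschitz-type constant

for _ in range(1000):
    x, y, z = rng.standard_normal((3, 5))
    q = rng.uniform(0.1, 10.0)  # arbitrary positive splitting parameter
    lhs = L * np.linalg.norm(y - x) * np.linalg.norm(z - y)
    # Young-type inequality: L*a*b <= (L/(2q))*a^2 + (L*q/2)*b^2
    rhs = (L / (2 * q)) * np.linalg.norm(y - x) ** 2 \
        + (L * q / 2) * np.linalg.norm(z - y) ** 2
    assert lhs <= rhs + 1e-12
print("c1 = L/(2q), c2 = Lq/2 satisfy the bound for all samples")
```

The inequality holds for every q > 0 because it is equivalent to (a − qb)² ≥ 0 after scaling.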
If f(x, y) = F(x)ᵀ(y − x), which means that (PEP) collapses to a variational inequality problem, then f satisfies (1) if F is Lipschitz continuous with a Lipschitz constant L > 0. Indeed, since F is Lipschitz continuous, using the Cauchy–Schwarz inequality, we have

f(x, y) + f(y, z) − f(x, z) = (F(y) − F(x))ᵀ(z − y) ≥ −‖F(y) − F(x)‖ ‖z − y‖ ≥ −L‖y − x‖ ‖z − y‖.

If f(x, x) = 0 for all x ∈ C then by substituting z = x into (1) we obtain f(x, y) + f(y, x) ≥ −L‖y − x‖². This inequality implies that if f is strongly monotone with parameter τ > 0 then necessarily τ ≤ L.
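In the affine special case F(x) = Mx + b (our illustrative choice, not from the paper) the constants are explicit: τ is the smallest eigenvalue of the symmetric part (M + Mᵀ)/2 and L = ‖M‖₂ works in (1). A small numerical sketch checks both the strong monotonicity inequality and the relation τ ≤ L:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
M = A.T @ A + 0.5 * np.eye(n) + (B - B.T)  # positive definite symmetric part + skew part
b = rng.standard_normal(n)

def f(x, y):                               # f(x, y) = F(x)^T (y - x) with F(x) = M x + b
    return (M @ x + b) @ (y - x)

tau = np.linalg.eigvalsh((M + M.T) / 2).min()  # strong monotonicity parameter
L = np.linalg.norm(M, 2)                       # Lipschitz constant of F

# strong monotonicity: f(x, y) + f(y, x) = -(y - x)^T M (y - x) <= -tau ||y - x||^2
for _ in range(500):
    x, y = rng.standard_normal((2, n))
    assert f(x, y) + f(y, x) <= -tau * np.linalg.norm(y - x) ** 2 + 1e-9

assert tau <= L  # the necessary relation noted above
print(f"tau = {tau:.3f} <= L = {L:.3f}")
```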
Let us denote by dom g the domain of a convex function g and by ri(X) the set of relative interior points of a convex set X. Throughout this section, we assume that C ⊆ ri(dom f(x, ·)) for all x ∈ C, and that the following assumptions hold:

Assumption 1 The function f(·, y) is upper semicontinuous on C with respect to the first argument for all y ∈ C, and f(x, ·) is proper, closed and convex on C with respect to the second argument for all x ∈ C.
Assumption 2 The function f is strongly monotone on C with parameter τ > 0 (see Definition 1(i)).
Note that, by the assumption C ⊆ ri(dom f(x, ·)), f(x, ·) is subdifferentiable on C with respect to the second variable for all x in C (see [20, Theorem 2.34]).
Lemma 1 Under Assumptions 1–2, the problems (PEP) and (DEP) have the same unique solution.

Proof The nonemptiness and uniqueness of S*_p has been proved (see, e.g., [6]). Since f(x, ·) is convex and subdifferentiable on C, it implies that S*_d ⊆ S*_p according to [6, Proposition 8]. Moreover, since f is strongly monotone, it is pseudo-monotone on C; applying [6, Proposition 8] again, we have S*_p ⊆ S*_d. Hence S*_p = S*_d. □
Definition 2 [10] A function g : C → R is called a gap function of Problem (DEP) if

(i) g(x) ≥ 0 for all x ∈ C, and
(ii) g(x∗) = 0 if and only if x∗ solves (DEP).
Let us define the following function:

g(x) := sup{ f(y, x) + (τ/2)‖y − x‖² | y ∈ C }, (3)

where τ > 0 is the strong monotonicity parameter of f. We refer to this function as a dual gap function of (PEP).
We recall that a function h : X → R, where X is a convex set in R^n, is said to be strongly convex with parameter γ > 0 if h(·) − (γ/2)‖·‖² is convex on X. The function h is said to be strongly concave with parameter γ if −h is strongly convex with parameter γ. The next lemma shows that g is well-defined and is a gap function for (DEP) on ri C.
Lemma 2 Suppose that Assumptions 1–2 are satisfied. Then the function g given by (3) is well-defined and strongly convex with parameter τ. Moreover, it is a gap function for both (DEP) and (PEP).
Proof Since f is strongly monotone with parameter τ > 0, for all y ∈ C we have

f_x(y) := f(y, x) + (τ/2)‖y − x‖² ≤ −f(x, y) − τ‖y − x‖² + (τ/2)‖y − x‖²
        ≤ sup{ −f(x, y) − (τ/2)‖y − x‖² | y ∈ C } =: u(x). (4)

Since f(x, ·) is convex with respect to the second argument for all x ∈ C, the function f(x, ·) + (τ/2)‖· − x‖² is strongly convex with parameter τ > 0. Thus the supremum in the second line of (4) is attained, i.e., (i) the function u(x) defined in this line is well-defined. On the other hand, (ii) f_x(x) = 0 ≤ u(x). It follows from (i) and (ii) that the level set L_{u(x)}(f_x) := {y ∈ C | f_x(y) ≤ u(x)} of f_x is nonempty and bounded in R^n. Moreover, since f(·, y) is upper semicontinuous on C for all y ∈ C, g is well-defined. Note that the function f_y(x) := f(y, x) + (τ/2)‖x − y‖² is strongly convex with parameter τ. As a consequence, the function g(x) = sup{ f_y(x) | y ∈ C }, being the supremum of a family of strongly convex functions with parameter τ, is strongly convex with parameter τ [20].
Since f(x, x) = 0, we have g(x) = sup{ f(y, x) + (τ/2)‖y − x‖² | y ∈ C } ≥ f(x, x) + (τ/2)‖x − x‖² = 0. If x̄ ∈ C is such that g(x̄) = 0 then f(y, x̄) + (τ/2)‖y − x̄‖² ≤ g(x̄) = 0 for all y ∈ C. This inequality implies that f(y, x̄) ≤ −(τ/2)‖y − x̄‖² ≤ 0, i.e., x̄ is a solution to (DEP). Consequently, x̄ is also a solution to (PEP) by virtue of Lemma 1. □
Remark 2 Let x∗ be the solution to (PEP). Then, from the definition of g, we have g(x) ≥ g(x∗) = 0 for all x ∈ C. Since g is strongly convex with parameter τ, it implies that x∗ is the unique global solution to min_{x∈C} g(x). Moreover, one has

g(x) ≥ (τ/2)‖x − x∗‖², ∀x ∈ C. (5)
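As a one-dimensional illustration of (3) and (5) (the bifunction below is our own toy example, not taken from the paper): for f(x, y) = (mx + c)(y − x) on C = [0, 5], strong monotonicity holds with τ = m, the solution is x∗ = −c/m when it lies in C, and the supremum in (3) can be approximated on a dense grid:

```python
import numpy as np

m, c = 2.0, -4.0          # f(x, y) = (m*x + c) * (y - x); tau = m
tau = m
lo, hi = 0.0, 5.0         # C = [lo, hi]
x_star = np.clip(-c / m, lo, hi)   # solution of (PEP): here -c/m = 2.0 lies in C

ys = np.linspace(lo, hi, 200001)   # dense grid approximating the sup in (3)

def g(x):
    # g(x) = sup_{y in C} { f(y, x) + (tau/2) * (y - x)^2 }
    return np.max((m * ys + c) * (x - ys) + 0.5 * tau * (ys - x) ** 2)

for x in np.linspace(lo, hi, 21):
    assert g(x) >= 0.5 * tau * (x - x_star) ** 2 - 1e-6   # inequality (5)
assert abs(g(x_star)) < 1e-6                              # g vanishes at x*
```

For this particular toy problem one can check by hand that g(x) = (τ/2)(x − x∗)² exactly, so (5) holds with equality.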
Let {x^i}_{i≥0} be a sequence in C and {λ_i}_{i≥0} be a sequence in (0, +∞). Let us define

S_k := Σ_{i=0}^{k} λ_i,   x̄^k := (1/S_k) Σ_{i=0}^{k} λ_i x^i, (6)

Δ_k := max{ Σ_{i=0}^{k} λ_i [−f(x^i, y) − (τ/2)‖y − x^i‖²] | y ∈ C }. (7)
Lemma 3 Under Assumptions 1–2, the quantity Δ_k defined by (7) satisfies

g(x̄^k) ≤ (1/S_k) Δ_k. (8)
Proof From the definition of x̄^k, S_k and Δ_k, and using the strong monotonicity of f and the convexity of f(y, ·), we have

g(x̄^k) = sup{ f(y, x̄^k) + (τ/2)‖y − x̄^k‖² | y ∈ C }
       = sup{ f(y, (1/S_k) Σ_{i=0}^{k} λ_i x^i) + (τ/2)‖y − (1/S_k) Σ_{i=0}^{k} λ_i x^i‖² | y ∈ C }
       ≤ sup{ (1/S_k) Σ_{i=0}^{k} λ_i [f(y, x^i) + (τ/2)‖y − x^i‖²] | y ∈ C }
       ≤ (1/S_k) max{ Σ_{i=0}^{k} λ_i [−f(x^i, y) − (τ/2)‖y − x^i‖²] | y ∈ C }
       = (1/S_k) Δ_k. □
Based on Lemma 3, the algorithm will be designed to control the sequence {Δ_k}_{k≥0} so that its growth can be compared with the sum S_k.
For a given ρ > 0, we define the following functions:

ϕ_x^ρ(y) := −f(x, y) − (ρ/2)‖y − x‖², (9)

ψ_k(y) := Σ_{i=0}^{k} λ_i ϕ_{x^i}^τ(y). (10)

As usual, we say that a concave function h is subdifferentiable on C if −h is subdifferentiable on C. Note that, since f(x^k, ·) is convex and subdifferentiable on C, the function ϕ_x^ρ is strongly concave with parameter ρ and subdifferentiable on C. As a consequence, ψ_k is strongly concave with parameter τ S_k and subdifferentiable on C. For a given x^0 ∈ C, consider two sequences {u^k}_{k≥0} and {x^k}_{k≥0} generated by the following scheme:

u^k := argmax{ ψ_k(y) | y ∈ C }, (11)

x^{k+1} := argmax{ ϕ_{u^k}^ρ(y) | y ∈ C }. (12)
The next theorem provides a key property used to prove the convergence of the algorithm that will be described later.
Theorem 1 Suppose that Assumptions 1–2 are satisfied, and the sequence {(u^k, x^k)}_{k≥0} is generated by (11) and (12). Suppose further that f satisfies the Lipschitz-type condition (1) with a Lipschitz constant L > 0 and the parameter ρ in (12) is chosen such that ρ ≥ (1/2)(√(4L² + τ²) − τ). Then the sequence {Δ_k}_{k≥0} defined by (7) satisfies

Δ_{k+1} ≤ Δ_k − (1/2)(ρ − L²/(ρ + τ)) λ_{k+1} ‖x^{k+1} − u^k‖² ≤ Δ_k, (13)

provided that λ_{k+1} ≤ (τ/ρ) S_k.
Proof Note that ψ_k defined by (10) is strongly concave with parameter τ S_k and subdifferentiable on C. Using the optimality condition for (11) we get

ψ_k(y) ≤ ψ_k(u^k) − (τ S_k/2)‖y − u^k‖², ∀y ∈ C. (14)

On the other hand, it follows from the definition (10) that

ψ_{k+1}(y) = ψ_k(y) + λ_{k+1} ϕ_{x^{k+1}}^τ(y). (15)

Using the definition (7) of Δ_k, we have Δ_k = max_{y∈C} ψ_k(y). Combining this relation, (14) and (15) we obtain

Δ_{k+1} = max_{y∈C} ψ_{k+1}(y)
       = max{ ψ_k(y) + λ_{k+1} ϕ_{x^{k+1}}^τ(y) | y ∈ C }
       ≤ ψ_k(u^k) + max{ λ_{k+1} ϕ_{x^{k+1}}^τ(y) − (τ S_k/2)‖y − u^k‖² | y ∈ C }
       = Δ_k + max{ λ_{k+1} [−f(x^{k+1}, y) − (τ/2)‖y − x^{k+1}‖²] − (τ S_k/2)‖y − u^k‖² | y ∈ C }. (16)

Now, since x^{k+1} is a solution to (12), using the optimality condition for this problem, we have

(ξ^{k+1} + ρ(x^{k+1} − u^k))ᵀ (y − x^{k+1}) ≥ 0, ∀y ∈ C, (17)

where ξ^{k+1} ∈ ∂f(u^k, x^{k+1}). Since ϕ_{u^k}^ρ is strongly concave with parameter ρ > 0, it follows from (17) that

−ϕ_{u^k}^ρ(x^{k+1}) + (ρ/2)‖y − x^{k+1}‖² ≤ −ϕ_{u^k}^ρ(y), ∀y ∈ C.

This inequality is equivalent to

f(u^k, x^{k+1}) + (ρ/2)‖x^{k+1} − u^k‖² + (ρ/2)‖y − x^{k+1}‖² − f(u^k, y) − (ρ/2)‖y − u^k‖² ≤ 0, ∀y ∈ C. (18)
From (18) and using the Lipschitz-type condition (1) with x = u^k, y = x^{k+1} and z = y we obtain

f(x^{k+1}, y) + (τ/2)‖y − x^{k+1}‖²
  ≥ f(u^k, x^{k+1}) + f(x^{k+1}, y) − f(u^k, y) + ((τ + ρ)/2)‖y − x^{k+1}‖² + (ρ/2)‖x^{k+1} − u^k‖² − (ρ/2)‖y − u^k‖²
  ≥ (ρ/2)‖u^k − x^{k+1}‖² + ((τ + ρ)/2)‖y − x^{k+1}‖² − (ρ/2)‖y − u^k‖² − L‖u^k − x^{k+1}‖ ‖y − x^{k+1}‖
  = ((ρ + τ)/2) (‖y − x^{k+1}‖ − (L/(ρ + τ))‖u^k − x^{k+1}‖)² + (ρ/2 − L²/(2(ρ + τ)))‖x^{k+1} − u^k‖² − (ρ/2)‖y − u^k‖². (19)

Substituting (19) into (16), and noting that λ_{k+1} ≤ (τ/ρ) S_k, we get

Δ_{k+1} ≤ Δ_k + max{ −(λ_{k+1}(ρ + τ)/2)(‖y − x^{k+1}‖ − (L/(ρ + τ))‖x^{k+1} − u^k‖)² − λ_{k+1}(ρ/2 − L²/(2(ρ + τ)))‖x^{k+1} − u^k‖² + (λ_{k+1}ρ/2 − τ S_k/2)‖y − u^k‖² | y ∈ C }
       = Δ_k − λ_{k+1}(ρ/2 − L²/(2(ρ + τ)))‖x^{k+1} − u^k‖² + max{ −(λ_{k+1}(ρ + τ)/2)(‖y − x^{k+1}‖ − (L/(ρ + τ))‖x^{k+1} − u^k‖)² − (τ S_k/2 − λ_{k+1}ρ/2)‖y − u^k‖² | y ∈ C }
       ≤ Δ_k − λ_{k+1}(ρ/2 − L²/(2(ρ + τ)))‖x^{k+1} − u^k‖², (20)

which proves the first inequality of (13).
From the choice of ρ and since λ_{k+1} > 0, we have λ_{k+1}(ρ/2 − L²/(2(ρ + τ))) ≥ 0. Thus the second inequality Δ_{k+1} ≤ Δ_k of (13) also holds. □
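The lower bound on ρ in Theorem 1 is precisely the positive root of ρ² + τρ − L² = 0, which is what makes the coefficient ρ/2 − L²/(2(ρ + τ)) in (13) nonnegative. A quick numerical sanity check (the (τ, L) pairs below are illustrative):

```python
import math

for tau, L in [(0.5, 1.0), (1.0, 1.0), (0.1, 10.0)]:
    rho_min = 0.5 * (math.sqrt(4 * L**2 + tau**2) - tau)
    # at the threshold, rho*(rho + tau) = L^2, so the coefficient in (13) vanishes ...
    assert abs(rho_min * (rho_min + tau) - L**2) < 1e-9
    # ... and any larger rho makes it strictly positive
    for rho in (rho_min + 0.1, 2 * rho_min, 10 * rho_min):
        assert rho - L**2 / (rho + tau) > 0
```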
We now describe scheme (11)–(12) algorithmically. For simplicity of discussion, we assume that the constants τ and L are known in advance. Otherwise, global strategies such as line search procedures should be used to estimate ρ. From Theorem 1, the parameter ρ ≥ (1/2)(√(4L² + τ²) − τ) > 0 can be chosen in advance and λ_{k+1} ≤ (τ/ρ) S_k at each iteration. The algorithm is described in detail as follows:
Algorithm 1

Initialization: Given a tolerance ε > 0, choose λ_0 := 1, ρ ≥ (1/2)(√(4L² + τ²) − τ) and ω ∈ (0, 1]. Set α := ρ/τ and take an initial point x^0 ∈ C.

Iteration k (k = 0, 1, ..., k_ε): Perform the three steps below:

Step 1: Solve the first strongly convex programming problem

min{ Σ_{i=0}^{k} λ_i [f(x^i, y) + (τ/2)‖y − x^i‖²] | y ∈ C } (21)

to obtain the unique solution u^k.

Step 2: Solve the second strongly convex programming problem

min{ f(u^k, y) + (ρ/2)‖y − u^k‖² | y ∈ C } (22)

to obtain the unique solution x^{k+1}.

Step 3: Update λ_{k+1} := (ω/α) S_k and go back to Step 1.

Output: Compute the final output:

x̄^k := (1/S_k) Σ_{i=0}^{k} λ_i x^i. (23)
The main tasks of Algorithm 1 are to solve the two strongly convex programs (21) and (22). Note that the objective function q_k(y) := Σ_{i=0}^{k} λ_i [f(x^i, y) + (τ/2)‖y − x^i‖²] of (21) can be computed recursively, i.e., q_{k+1}(y) = q_k(y) + λ_{k+1}[f(x^{k+1}, y) + (τ/2)‖y − x^{k+1}‖²]. Thus the method for solving this problem can conveniently exploit the computational information from the previous steps at the current step. It remains to determine the maximum number of iterations k_ε in Algorithm 1; Remark 4 will provide an estimate for this constant.
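To make the scheme concrete, here is a compact sketch of Algorithm 1 for the variational-inequality special case f(x, y) = F(x)ᵀ(y − x) with affine F(x) = Mx + b on the box C = [−1, 1]^n. In this special case both (21) and (22) have isotropic quadratic parts in y, so Steps 1 and 2 reduce to projections (clipping), and the recursion for q_k amounts to keeping running sums of λ_i x^i and λ_i F(x^i). The problem data below are randomly generated for illustration and do not come from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
M = A.T @ A + np.eye(n) + (B - B.T)      # strongly monotone, not symmetric
b = rng.standard_normal(n)
F = lambda x: M @ x + b                  # f(x, y) = F(x)^T (y - x)
proj = lambda y: np.clip(y, -1.0, 1.0)   # projection onto C = [-1, 1]^n

tau = np.linalg.eigvalsh((M + M.T) / 2).min()
L = np.linalg.norm(M, 2)
rho = 0.5 * (np.sqrt(4 * L**2 + tau**2) - tau)   # threshold of Theorem 1
alpha, omega = rho / tau, 1.0

x = proj(rng.standard_normal(n))         # x^0
lam, S = 1.0, 0.0                        # lambda_0 = 1
sx, sF = np.zeros(n), np.zeros(n)        # running sums: sum lam_i x^i, sum lam_i F(x^i)
for k in range(1000):
    S += lam; sx += lam * x; sF += lam * F(x)
    u = proj(sx / S - sF / (tau * S))    # Step 1: closed form of (21) in this case
    x = proj(u - F(u) / rho)             # Step 2: closed form of (22)
    lam = (omega / alpha) * S            # Step 3: lambda_{k+1} = (omega/alpha) S_k
x_bar = sx / S                           # averaged output (23)

# x_bar should solve the VI: it is a fixed point of the projected map
residual = np.linalg.norm(x_bar - proj(x_bar - F(x_bar)))
print("residual:", residual)
```

The linear rate of Theorem 2 is visible here: the residual of the natural (projected) map shrinks geometrically with the iteration counter, at a speed governed by α = ρ/τ, i.e., essentially by the condition number L/τ.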
Theorem 2 Suppose that the assumptions of Theorem 1 hold and that the parameters ρ and λ_k are computed as in Algorithm 1. Let x∗ be the solution to (PEP). Then the final output sequence {x̄^k}_{k≥0} generated by Algorithm 1 satisfies

(τ/2)‖x̄^{k+1} − x∗‖² ≤ g(x̄^{k+1}) ≤ [g(x^0) + ((L² − τ²)/(2τ))‖x^0 − x∗‖²] exp(−ωk/(α + ω))
                     ≤ (L²/τ²) g(x^0) exp(−ωk/(α + ω)). (24)

As a consequence, this sequence converges linearly to the unique solution x∗ of (PEP).
Proof The first inequality of (24) follows immediately from (5) with x = x̄^{k+1} ∈ C. We prove the middle inequality. It is obvious that S_0 = λ_0 = 1. Applying the updating rule for λ_{k+1} at Step 3 of Algorithm 1 we have

S_{k+1} = S_k + λ_{k+1} = S_k + (ω/α) S_k = (ω/α + 1) S_k = (ω/α + 1)^{k+1} S_0 = (ω/α + 1)^{k+1}. (25)

From Lemma 3, Theorem 1 and (25) we have

g(x̄^{k+1}) ≤ Δ_{k+1}/S_{k+1} ≤ Δ_0/(ω/α + 1)^k = Δ_0 (1 − ω/(α + ω))^k ≤ Δ_0 exp(−ωk/(α + ω)). (26)
To estimate Δ_0, on one hand, we note that

Δ_0 = max_{y∈C} ψ_0(y) = max_{y∈C} ϕ_{x^0}^τ(y) = max{ −f(x^0, y) − (τ/2)‖y − x^0‖² | y ∈ C }. (27)
On the other hand, using the Lipschitz-type condition (1) and noting that x∗ is a solution to (PEP), i.e., f(x∗, y) ≥ 0 for all y ∈ C, we have

−f(x^0, y) − (τ/2)‖y − x^0‖²
  ≤ −f(x∗, y) + f(x∗, x^0) + L‖x^0 − x∗‖ ‖y − x^0‖ − (τ/2)‖y − x^0‖²
  ≤ f(x∗, x^0) + L‖x^0 − x∗‖ ‖y − x^0‖ − (τ/2)‖y − x^0‖²
  = f(x∗, x^0) + (τ/2)[(2L/τ)‖x^0 − x∗‖ ‖y − x^0‖ − ‖y − x^0‖² − (L²/τ²)‖x^0 − x∗‖²] + (L²/(2τ))‖x^0 − x∗‖²
  = [f(x∗, x^0) + (τ/2)‖x^0 − x∗‖²] − (τ/2)(‖y − x^0‖ − (L/τ)‖x^0 − x∗‖)² + (L²/(2τ) − τ/2)‖x^0 − x∗‖²
  ≤ g(x^0) − (τ/2)(‖y − x^0‖ − (L/τ)‖x^0 − x∗‖)² + (L²/(2τ) − τ/2)‖x^0 − x∗‖². (28)
Here, the last inequality in (28) follows from the fact that

g(x^0) = sup{ f(y, x^0) + (τ/2)‖y − x^0‖² | y ∈ C } ≥ f(x∗, x^0) + (τ/2)‖x^0 − x∗‖².