
DOI 10.1007/s10898-011-9693-2

Dual extragradient algorithms extended to equilibrium problems

Tran D. Quoc · Pham N. Anh · Le D. Muu

Received: 7 June 2010 / Accepted: 7 February 2011 / Published online: 19 February 2011

© Springer Science+Business Media, LLC 2011

Abstract In this paper we propose two iterative schemes for solving equilibrium problems, which are called dual extragradient algorithms. In contrast with the primal extragradient methods in Quoc et al. (Optimization 57(6):749–776, 2008), which require solving two general strongly convex programs at each iteration, the dual extragradient algorithms proposed in this paper only need to solve, at each iteration, one general strongly convex program, one projection problem and one subgradient calculation. Moreover, we provide the worst-case complexity bounds of these algorithms, which have not been established for the primal extragradient methods yet. An application to Nash-Cournot equilibrium models of electricity markets is presented and implemented to examine the performance of the proposed algorithms.

Keywords Dual extragradient algorithm · Equilibrium problem · Gap function · Complexity · Nash-Cournot equilibria

Department of Electrical Engineering (ESAT/SCD) and OPTEC, K.U. Leuven, Leuven, Belgium


1 Introduction

In recent years, equilibrium problems (EP) have become an attractive field for many researchers, both in theory and applications (see, e.g., [1,7–10,12,15,18,19,21,22,27–29,33,34] and the references quoted therein). It is well known that equilibrium problems include many important problems in nonlinear analysis and optimization such as the Nash equilibrium problem, variational inequalities, complementarity problems, (vector) optimization problems, fixed point problems, saddle point problems and game theory [1,3,9,10,26]. Furthermore, they provide a rather general and suitable format for the formulation and investigation of various complex problems arising in economics, physics, transportation and network models (see, e.g., [7,10]).

The typical form of equilibrium problems is formulated by means of Ky Fan's inequality and is given as [1]:

Find x* ∈ C such that f(x*, y) ≥ 0 for all y ∈ C,        (PEP)

where C is a nonempty closed convex subset of R^{n_x} and f : C × C → R is a bifunction such that f(x, x) = 0 for all x ∈ C. Problems of the form (PEP) are referred to as primal equilibrium problems. Associated with problem (PEP), the dual form is presented as:

Find y* ∈ C such that f(x, y*) ≤ 0 for all x ∈ C.        (DEP)

by Zhu and Marcotte [36]. Mastroeni [19] further exploited them for equilibrium problems. The second approach is based on the auxiliary problem principle: problem (PEP) is reformulated equivalently as an auxiliary problem, which is usually easier to solve than the original one. This principle was first introduced by Cohen [4] for optimization problems and then applied to variational inequalities in [5]. Mastroeni [18] further extended the auxiliary problem principle to equilibrium problems of the form (PEP) involving a strongly monotone bifunction and satisfying a certain Lipschitz-type condition. The third approach is the proximal point method. Proximal point methods were first investigated by Martinet [17] for solving variational inequalities and then deeply studied by Rockafellar [31] for finding a zero point of a maximal monotone operator. Recently, many researchers have exploited this method for equilibrium problems (see, e.g., [20,22]).

One of the methods for solving equilibrium problems based on the auxiliary problem principle was recently proposed in [8]; it is called the proximal-like method. The authors in [29] further extended and investigated the convergence of this method under different assumptions. The methods in [29] are also called extragradient methods due to the results of Korpelevich in [13].


The extragradient method for solving problem (PEP) generates two iterative sequences {x^k} and {y^k} as follows:



y^k := argmin{ ρ f(x^k, y) + G(x^k, y) : y ∈ C },
x^{k+1} := argmin{ ρ f(y^k, y) + G(x^k, y) : y ∈ C },        (1)

where x^0 ∈ C is given, ρ > 0 is a regularization parameter, and G(x, y) is a Bregman distance function (see, e.g., [7,25]). Under mild conditions, the sequences {x^k} and {y^k} generated by scheme (1) simultaneously converge to a solution of problem (PEP). This method is further investigated in [33] in combination with interior proximal point methods.
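For concreteness, here is a minimal sketch (not code from the paper) of scheme (1) specialized to the variational-inequality bifunction f(x, y) = F(x)ᵀ(y − x) with the Euclidean choice G(x, y) = ½‖y − x‖²; in this case both argmin steps reduce to projections and (1) becomes the classical Korpelevich extragradient method. The operator F, the box C and the step size ρ below are illustrative assumptions.

```python
import numpy as np

def primal_extragradient(F, proj_C, x0, rho=0.1, max_iter=500, tol=1e-8):
    """Scheme (1) with f(x, y) = F(x)^T (y - x) and G(x, y) = 0.5*||y - x||^2,
    for which both argmin subproblems reduce to Euclidean projections."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - rho * F(x))       # y^k     := argmin{rho f(x^k, y) + G(x^k, y) : y in C}
        x_next = proj_C(x - rho * F(y))  # x^{k+1} := argmin{rho f(y^k, y) + G(x^k, y) : y in C}
        if np.linalg.norm(x_next - x) <= tol:
            return x_next
        x = x_next
    return x

# Illustrative data: affine monotone operator F(x) = A x + b on the box C = [0, 1]^2.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])   # symmetric part is positive definite => F monotone
b = np.array([-1.0, 0.5])
F = lambda x: A @ x + b
proj_box = lambda x: np.clip(x, 0.0, 1.0)
print(primal_extragradient(F, proj_box, x0=np.zeros(2)))
```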

Recently, Nesterov [24] introduced a dual extrapolation method¹ for solving monotone variational inequalities. Instead of working in the primal space, this method performs the main step in the dual space.

¹ It is also called the dual extragradient method in [32].

Motivated by this work, and in comparison with the primal extragradient methods in [8,29], in this paper we extend the dual extrapolation method to convex monotone equilibrium problems. Note that, in the primal extragradient method (1), two general convex programs need to be solved at each iteration. In contrast to the primal methods, the dual extragradient algorithms developed here only require (i) solving one general convex program, (ii) computing one projection point onto a convex set and (iii) calculating one subgradient of a convex function. In practice, if the feasible set C is simple (e.g. a box, ball or polytope) then the projection problem (ii) is usually cheap to compute. Moreover, if the bifunction f is convex and differentiable with respect to the second argument then problem (iii) collapses to calculating the gradient vector ∇₂f(x, ·) of f(x, ·) at x.
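As an added illustration of point (iii), not taken from the paper: for a hypothetical smooth bifunction f(x, y) = (Ax + b)ᵀ(y − x) + (μ/2)‖y − x‖², the partial gradient is ∇₂f(x, y) = Ax + b + μ(y − x), so the subgradient computation at y = x reduces to a single matrix-vector product.

```python
import numpy as np

# Hypothetical smooth bifunction f(x, y) = (A x + b)^T (y - x) + 0.5*mu*||y - x||^2.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([-1.0, 0.5])
mu = 0.1

def grad2_f(x, y):
    # gradient of f(x, .) evaluated at y
    return A @ x + b + mu * (y - x)

x = np.array([0.2, 0.8])
w = -grad2_f(x, x)   # a subgradient w in -d2 f(x, x); the subdifferential is a singleton here
print(w)             # equals -(A x + b)
```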

The methods proposed in this paper look quite similar to the ergodic iteration scheme in [2,23], among many other averaging schemes in fixed point theory and nonlinear analysis. However, the methods developed here use the new information computed at the current iteration in a different way: the new information w^k enters the next iteration with the same weight, thanks to the guidance of the Lipschitz constants (see Algorithm 1 below). In the method of [2], by contrast, the new information w^k is not thoroughly exploited: the subgradient −w^k is used to compute the next iterate with a decreasing weight (t_k > 0 such that Σ_k t_k = ∞ and Σ_k t_k² ‖w^k‖² < ∞). Such averaging schemes are usually very sensitive to calculation errors and lead to slow convergence in practice.

The main contribution of this paper is twofold: algorithms and convergence theory. We provide two algorithms for solving (PEP)-(DEP) and prove their convergence. The worst-case complexity bounds of these algorithms are also estimated; this task has not been carried out for the primal extragradient methods yet. An application to Nash-Cournot oligopolistic equilibrium models of electricity markets is presented. This problem is not monotone, so the algorithms developed here cannot be applied to it directly. However, we will show that this problem can be reformulated equivalently as a monotone equilibrium problem by means of the auxiliary problem principle.

The rest of this paper is organized as follows. In Sect. 2 a restricted dual gap function of (DEP) is defined and its properties are considered. Then, a scheme to compute the dual extragradient step is provided and its properties are investigated. The dual extragradient algorithms are presented in detail in Sect. 3. The convergence of these algorithms is proved and the complexity bound is estimated in this section. An application to Nash-Cournot equilibrium models of electricity markets is presented and implemented in the last section.

Notation Throughout this paper, we use the notation ‖·‖ for the Euclidean norm. The notation ":=" means "is defined as". For a given real number x, [x] denotes the largest integer which is less than or equal to x. A function f : C ⊆ R^{n_x} → R is said to be strongly convex on C with parameter ρ > 0 if f(·) − (ρ/2)‖·‖² is convex on C. The notation ∂f denotes the classical subdifferential of a convex function f and ∂₂f is the subdifferential of f with respect to the second argument. If f is differentiable with respect to the second argument then ∇₂f(x, ·) denotes the gradient vector of f(x, ·).
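To connect this definition with the subproblems appearing in scheme (1), here is a short worked observation (an added note for the illustrative Euclidean choice G(x, y) = ½‖y − x‖²; the paper itself allows a general Bregman distance G):

```latex
% If f(x^k, .) is convex on C, the subproblem objective in scheme (1),
\[
  \varphi(y) := \rho f(x^k, y) + \frac{1}{2}\|y - x^k\|^2 ,
\]
% satisfies
\[
  \varphi(y) - \frac{1}{2}\|y\|^2
    = \rho f(x^k, y) - y^\top x^k + \frac{1}{2}\|x^k\|^2 ,
\]
% which is convex in y (a convex function plus an affine one). Hence
% \varphi is strongly convex on C with parameter 1 in the sense defined above,
% so each subproblem in (1) has a unique solution.
```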

2 Dual extragradient scheme

Let X be a subset of R^{n_x} and f : X × X → R ∪ {+∞} be a bifunction such that f(x, x) = 0 for all x ∈ X. We first recall the following well-known definitions that will be used in the sequel (see [1,19,29]).

Definition 1 A bifunction f is said to be
a) strongly monotone on X with a parameter ρ > 0 if f(x, y) + f(y, x) ≤ −ρ‖x − y‖² for all x and y in X;
b) monotone on X if f(x, y) + f(y, x) ≤ 0 for all x and y in X;
c) pseudomonotone on X if f(x, y) ≥ 0 implies f(y, x) ≤ 0 for all x and y in X.
It is obvious from these definitions that (a) ⇒ (b) ⇒ (c).
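As a standard illustration (an added example, not necessarily the one used in the paper), the bifunction associated with a variational inequality inherits these properties from its operator:

```latex
% Let F : X -> R^{n_x} and define f(x, y) := <F(x), y - x>. Then
\[
  f(x, y) + f(y, x)
    = \langle F(x), y - x \rangle + \langle F(y), x - y \rangle
    = -\langle F(x) - F(y),\; x - y \rangle .
\]
% Hence f is monotone on X whenever F is monotone
% (<F(x) - F(y), x - y> >= 0 for all x, y), and strongly monotone with
% parameter rho whenever F is strongly monotone with the same parameter.
```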

The following concept is familiar in nonlinear analysis. A multivalued mapping F : X ⊆ R^{n_x} → 2^{R^{n_x}} is said to be Lipschitz continuous on X with a Lipschitz constant L > 0 if

dist(F(x), F(y)) ≤ L ‖x − y‖ for all x, y ∈ X,

where dist(A, B) is the Hausdorff distance between two sets A and B. The multivalued mapping F is said to be uniformly bounded on X if there exists M > 0 such that

sup{ dist(0, F(x)) | x ∈ X } ≤ M.

Throughout this paper, we will use the following assumptions:

A.1 The set of interior points int(C) of C is nonempty.

A.2 f(·, y) is upper semi-continuous on C for all y in C, and f(x, ·) is proper, closed, convex and subdifferentiable on C for all x in C.

Now, let us recall the dual gap function of problem (PEP), defined as follows [19]:

g(x) := sup{ f(y, x) | y ∈ C }.        (3)

Under Assumption A.2, to compute one value of g, a general optimization problem needs to be solved. When f(·, x) is concave for all x ∈ C, it becomes a convex problem. The following lemma shows that (3) is indeed a gap function of (DEP); its proof can be found, for instance, in [11,19].

Lemma 1 The function g defined by (3) is a gap function of (DEP), i.e.:
a) g(x) ≥ 0 for all x ∈ C;
b) x* ∈ C and g(x*) = 0 if and only if x* is a solution of (DEP).
If f is pseudomonotone then x* is a solution of (DEP) if and only if it solves (PEP).

Under Assumptions A.1–A.3, the gap function g may not be well-defined, due to the fact that the problem sup{ f(y, x) | y ∈ C } may not be solvable. Instead of using the gap function g, we consider a restricted dual gap function g_R defined as follows.

Definition 2 Suppose that ¯x ∈ int(C) is fixed and R > 0 is given. The restricted dual gap function of problem (DEP) is defined as:

g_R(x) := sup{ f(y, x) | y ∈ C, ‖y − ¯x‖ ≤ R }.        (4)
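For a concrete feel of (4), the following sketch (an added illustration, not code from the paper) evaluates g_R numerically for a hypothetical bifunction f(u, v) = (Au + b)ᵀ(v − u) whose matrix A has a positive definite symmetric part, so that f(·, x) is concave and the supremum in (4) is a convex problem; the set C, the center ¯x and the radius R below are made-up data.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: f(u, v) = (A u + b)^T (v - u); the symmetric part of A
# is positive definite, so y -> f(y, x) is concave and (4) is a convex problem.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([-1.0, 0.5])
f = lambda u, v: (A @ u + b) @ (v - u)

def g_R(x, x_bar, R, bounds):
    """Evaluate (4): maximize f(y, x) over y in C (a box given by `bounds`)
    intersected with the ball ||y - x_bar|| <= R."""
    ball = {"type": "ineq", "fun": lambda y: R**2 - np.sum((y - x_bar) ** 2)}
    res = minimize(lambda y: -f(y, x), x0=x_bar, bounds=bounds,
                   constraints=[ball], method="SLSQP")
    return -res.fun

bounds = [(0.0, 1.0), (0.0, 1.0)]      # C = [0, 1]^2
x_bar = np.array([0.5, 0.5])           # a fixed interior point of C
print(g_R(np.array([0.2, 0.8]), x_bar, R=0.4, bounds=bounds))
```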

Let us denote by B_R(¯x) := { y ∈ R^{n_x} | ‖y − ¯x‖ ≤ R } the closed ball in R^{n_x} of radius R centered at ¯x, and by C_R(¯x) := C ∩ B_R(¯x). Then the characterizations of the restricted dual gap function g_R are indicated in the following lemma.

Lemma 2 Suppose that Assumptions A.1–A.3 hold. Then:
a) The function g_R defined by (4) is well-defined and convex on C.
b) If x* ∈ C_R(¯x) is a solution of (DEP) then g_R(x*) = 0.
c) If there exists x̃ ∈ C such that g_R(x̃) = 0, ‖x̃ − ¯x‖ < R and f is pseudomonotone, then x̃ is a solution of (DEP) (and, therefore, a solution of (PEP)).

Proof Since f(·, x) is upper semi-continuous on C for all x ∈ C and B_R(¯x) is bounded, the supremum in (4) is attained. Hence, g_R is well-defined. Moreover, since f(x, ·) is convex for all x ∈ C and g_R is the supremum of a family of convex functions depending on the parameter x, g_R is convex (see [30]). The statement a) is proved.

Now we prove b). Since f(x, x) = 0 for all x ∈ C, it immediately follows from the definition of g_R that g_R(x) ≥ 0 for all x ∈ C. Let x* ∈ B_R(¯x) be a solution of (DEP); then f(y, x*) ≤ 0 for all y ∈ C and, in particular, f(y, x*) ≤ 0 for all y ∈ C ∩ B_R(¯x) ≡ C_R(¯x). Hence, g_R(x*) = sup{ f(y, x*) | y ∈ C ∩ B_R(¯x) } ≤ 0. However, since g_R(x) ≥ 0 for all x ∈ C, we conclude that g_R(x*) = 0.

By the definition of g_R, it is obvious that g_R is a gap function of (DEP) restricted to C ∩ B_R(¯x). Therefore, if g_R(x̃) = 0 for some x̃ ∈ C with ‖x̃ − ¯x‖ < R, then x̃ is a solution of (DEP) restricted to C ∩ B_R(¯x). On the other hand, since f is pseudomonotone, x̃ is also a solution of (PEP) restricted to C ∩ B_R(¯x). Furthermore, since x̃ ∈ int(B_R(¯x)), for any y ∈ C we can choose t > 0 sufficiently small such that y_t := x̃ + t(y − x̃) ∈ B_R(¯x) and

0 ≤ f(x̃, y_t) = f(x̃, ty + (1 − t)x̃) ≤ t f(x̃, y) + (1 − t) f(x̃, x̃) = t f(x̃, y).

Here the middle inequality follows from the convexity of f(x̃, ·) and the last equality holds because f(x, x) = 0. Dividing this inequality by t > 0, we conclude that x̃ is a solution of (PEP) on C. Finally, since f is pseudomonotone, x̃ is also a solution of (DEP). □

For a given nonempty, closed, convex set C ⊆ R^{n_x} and an arbitrary point x ∈ R^{n_x}, let us denote by d_C(x) the Euclidean distance from x to C and by π_C(x) the point attaining this distance, i.e.

d_C(x) := min{ ‖y − x‖ : y ∈ C },   and   π_C(x) := argmin{ ‖y − x‖ : y ∈ C }.        (5)
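As a concrete illustration of (5) (an added sketch, not from the paper), the projection π_C and the distance d_C have closed forms for the simple sets mentioned in the introduction, such as a box or a Euclidean ball; the sets and the test point below are arbitrary choices.

```python
import numpy as np

def proj_box(x, lo, hi):
    # Euclidean projection onto the box C = [lo, hi]^n (componentwise clipping)
    return np.clip(x, lo, hi)

def proj_ball(x, center, radius):
    # Euclidean projection onto the ball C = {y : ||y - center|| <= radius}
    d = x - center
    norm_d = np.linalg.norm(d)
    return x if norm_d <= radius else center + radius * d / norm_d

def dist(x, proj):
    # d_C(x) = ||pi_C(x) - x|| by definition (5)
    return np.linalg.norm(proj - x)

x = np.array([1.7, -0.3])
p_box = proj_box(x, 0.0, 1.0)
p_ball = proj_ball(x, np.zeros(2), 1.0)
print(p_box, dist(x, p_box))
print(p_ball, dist(x, p_ball))
```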


It is well known that π_C is a nonexpansive and co-coercive operator on C [7]. For any x, y ∈ R^{n_x} and β > 0, we define the following function:

Consequently, the function Q_β defined by (6) possesses the following properties:

a) For x, y ∈ R^{n_x}, Q_β(x, y) ≤ ‖y‖². If x ∈ C then Q_β(x, y) ≥ β²‖x − π_C(x + (1/β)y)‖².

By the definition of d_C(·) and noting that d_C²(x) = ‖π_C(x) − x‖², taking the minimum with respect to v ∈ C on both sides of (10) we get

d_C²(x + y) ≥ d_C²(π_C(x) + y) + d_C²(x) − 2yᵀ[π_C(x) − x].

This inequality is indeed (8).

The inequality Q_β(x, y) ≤ ‖y‖² directly follows from the definition (6) of Q_β. Furthermore, if we denote by π_C^k … Since x ∈ C, applying (7) with π_C^k instead of π_C(x) and v = x, it follows from (11) that Q_β(x, y) ≥ β²‖x − π_C(x + (1/β)y)‖², which proves the second part of a).


Then, subtracting the identity ‖y + z‖² = ‖y‖² + ‖z‖² + 2yᵀz from the last inequality after multiplying by β/2 and using the definition of Q_β, we get (9).

Remark 1 For any x, y ∈ R^{n_x}, if we define q(β) := (1/(2β)) Q_β(x, y) then the function q(β) is nonincreasing with respect to β > 0, i.e. q(β₁) ≤ q(β₂) for all β₁ ≥ β₂ > 0.

Indeed, consider the function ψ(v, β) := (1/(2β))‖y‖² − (β/2)‖v − x − (1/β)y‖² = yᵀ(v − x) − (β/2)‖v − x‖², which is affine in β for each fixed v. Since q(β) = max{ ψ(v, β) : v ∈ C } is a pointwise maximum of such functions, it is convex (see [30]). On the other hand, q′(β) = −(1/2)‖π_C(x + (1/β)y) − x‖² ≤ 0. Thus q is nonincreasing.
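A small numerical sanity check of this monotonicity (an added sketch, not from the paper): it uses the closed form q(β) = ‖y‖²/(2β) − (β/2) d_C²(x + (1/β)y) obtained by maximizing ψ over C, with C taken to be the unit box purely for illustration.

```python
import numpy as np

proj_box = lambda z: np.clip(z, 0.0, 1.0)          # C = [0, 1]^2 (illustrative)
d_C = lambda z: np.linalg.norm(proj_box(z) - z)     # Euclidean distance to C

def q(beta, x, y):
    # q(beta) = max_{v in C} psi(v, beta)
    #         = ||y||^2 / (2*beta) - (beta/2) * d_C(x + y/beta)^2
    return np.dot(y, y) / (2.0 * beta) - 0.5 * beta * d_C(x + y / beta) ** 2

x = np.array([0.3, 0.9])                            # a point of C
y = np.array([1.0, -2.0])
betas = np.linspace(0.1, 5.0, 50)
values = np.array([q(b, x, y) for b in betas])
print(np.all(np.diff(values) <= 1e-12))             # True: q is nonincreasing in beta
```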

For a given integer number n ≥ 0, suppose that {x^k}_{k=0}^n is a finite sequence of arbitrary … Clearly, the point ¯x^n is a convex combination of {x^k}_{k=0}^n with given coefficients {λ_k/S_n}_{k=0}^n. Using the definition of g_R and the convexity of f(x, ·), it is easy to show that

g_R(¯x^n) = max{ f(y, ¯x^n) | y ∈ C_R(¯x) } = max{ f(…) … } ≤ … (1/(2β)) Q_β(¯x, s_n) + (β/2)R²,

where s_n := Σ_{k=0}^n λ_k w^k.


Proof Let us define L(y, ρ) := wᵀ(y − ¯x) + ρ(R² − ‖y − ¯x‖²) as the Lagrange function of the minimizing problem in (13). Using duality theory in convex optimization, for some x^k …, we have −f(x^k, y) ≤ (w^k)ᵀ … (1/(2β)) Q_β(¯x, s_n) + (β/2)R² …

For a given tolerance ε ≥ 0, we say that ¯x^n is an ε-solution of (PEP) if g_R(¯x^n) ≤ ε and ‖¯x^n − ¯x‖ < R. Note that if ε = 0 then an ε-solution ¯x^n is indeed a solution of (PEP), due to Lemma 2.


Let s^{−1} := 0. The dual extragradient step (u^k, x^k, s^k, w^k) at iteration k (k ≥ 0) is

where ρ_k > 0 and β > 0 are given parameters, and w^k ∈ −∂₂f(x^k, x^k).

Lemma 5 The sequence {(u^k, x^k, s^k, w^k)} generated by scheme (21) satisfies:

… ρ_k w^k and noting that ρ_k (w^k)ᵀ …

Since f(u^k, ·) is subdifferentiable, using the first-order necessary condition for optimality in convex optimization, it follows from the second line of (21) that


Now, applying again (9) with x = u^k, y = − …
