

VIETNAM NATIONAL UNIVERSITY-HO CHI MINH CITY

PhD Thesis in Mathematics

Ho Chi Minh City - 2014


VIETNAM NATIONAL UNIVERSITY-HO CHI MINH CITY

UNIVERSITY OF SCIENCES

PHAN TU VUONG

MATHEMATICAL METHODS FOR SOLVING EQUILIBRIUM, VARIATIONAL INEQUALITY

AND FIXED POINT PROBLEMS

Speciality: Mathematical Optimization

Code: 62 46 20 01

Reviewer 1: Prof. NGUYEN XUAN TAN

Reviewer 2: Assoc. Prof. NGUYEN DINH PHU

Reviewer 3: Assoc. Prof. LAM QUOC ANH

Anonymous Reviewer 1: Prof. NGUYEN XUAN TAN

Anonymous Reviewer 2: Dr. DUONG DANG XUAN THANH

Supervisor 1: Assoc. Prof. NGUYEN DINH
Supervisor 2: Prof. VAN HIEN NGUYEN

Ho Chi Minh City - 2014

I hereby declare that the work contained in this thesis has never previously been submitted for a degree, diploma or other qualifications in any University or Institution and that, to the best of my knowledge and belief, the thesis contains no material previously published or written by another person except when due reference is made in the thesis itself.

Ph.D. Student

Phan Tu Vuong

There have been many people who have helped me through my graduate studies. In the next few lines, I would like to point out a few of these people to whom I am especially indebted.

First and foremost, I would like to express my deepest appreciation to my supervisors, Associate Professor Nguyen Dinh (VNU-HCM, International University) and Professor Van Hien Nguyen (Institute for Computational Science and Technology (ICST) and University of Namur, Belgium).

The completion of the academic research work that led to the results published in this thesis would not have been possible without the constant encouragement, support, and advice received from Professor Van Hien Nguyen and Professor Jean Jacques Strodiot (both at ICST and University of Namur), and as such, my gratitude goes to them.

I would also like to thank all members of the dissertation committee and the two anonymous reviewers for their useful comments and suggestions.

This thesis presents the results of the research carried out at ICST during the period March 2011 - November 2013. This research work was funded by the Department of Science and Technology at Ho Chi Minh City. Support provided by the ICST is gratefully acknowledged.

I am grateful to my former advisor, Associate Professor Nguyen Bich Huy (University of Pedagogy), who introduced and helped me to start my graduate studies. I would also like to thank my friends at ICST and the University of Technical Education Ho Chi Minh City for their kind help.

Finally, to my family, I owe much more than a few words can capture. I thank them for all the love and support through all the time.

Ho Chi Minh City, May 2014
Phan Tu Vuong

Contents

1 Introduction

2 Preliminaries
2.1 Elements of convex analysis
2.2 The projection operator and useful lemmas
2.3 Fixed point problems
2.4 Variational inequalities
2.5 Equilibrium problems
2.5.1 Some particular equilibrium problems
2.5.2 Solution methods for solving equilibrium problems
2.6 Previous works
2.6.1 The hybrid projection method
2.6.2 The shrinking projection method
2.6.3 The viscosity approximation method
2.6.4 The extragradient method

3 Hybrid Projection Extragradient Methods
3.1 A hybrid extragradient algorithm
3.2 Extragradient algorithms with linesearches
3.3 Shrinking projection methods
3.4 The particular case of variational inequalities
3.5 Numerical illustrations

4 Extragradient Viscosity Methods
4.1 An extragradient viscosity algorithm
4.2 A linesearch extragradient viscosity algorithm
4.3 Applications to variational inequality problems
4.4 Numerical illustrations

5 A Mathematical Analysis of Subgradient Viscosity Methods
5.1 Previous works and motivation
5.2 A general algorithm
5.3 Two projected subgradient algorithms
5.4 Some interesting special cases

Basic Notation and Terminology

H: a real Hilbert space

⟨·, ·⟩: the inner product of the space

‖ · ‖: the norm of the space

dom f: the domain of a function f

∂f: the subdifferential of a convex function f

∂C: the boundary of a set C

Fix(S): the set of fixed points of an operator S

VI(F, C): the variational inequality problem whose objective operator is F and whose feasible set is C

SolVI(F, C): the solution set of VI(F, C)

EP(f, C): the equilibrium problem whose objective function is f and whose feasible set is C

SolEP(f, C): the solution set of EP(f, C)

Chapter 1

Introduction

Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ‖ · ‖, respectively. Consider a nonempty closed and convex set C ⊂ H and a bifunction f : C × C → R such that f(x, x) = 0 for all x ∈ C. The equilibrium problem, denoted by EP(f, C), consists of finding x∗ ∈ C such that

f(x∗, y) ≥ 0 for all y ∈ C.

The solution set of EP(f, C) will be denoted by SolEP(f, C). To the best of our knowledge, the term "equilibrium problem" was coined in [16] (see also [66]), but the problem itself was studied by Ky Fan in [35] (for historical comments see [36]).

Equilibrium problems have been extensively studied in recent years (see, for example, [13, 15, 16, 28, 32, 45, 51, 52, 57, 60, 61, 63, 65, 67, 74, 75, 81, 82, 85, 89, 97] and the references therein). It is well known that they include, as particular cases, scalar and vector optimization problems, saddle-point problems, variational inequalities (monotone or otherwise), Nash equilibrium problems, complementarity problems, fixed point problems, and other problems of interest in many applications (see, for instance, the recent books [51, 40]).

Let F : H → H be a given mapping. If f(x, y) := ⟨F x, y − x⟩ for all x, y ∈ C, then each solution x∗ ∈ C of the equilibrium problem EP(f, C) is a solution of the variational inequality

⟨F x∗, y − x∗⟩ ≥ 0 for all y ∈ C,

and vice versa.
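This correspondence is easy to exercise numerically. The following minimal sketch (an illustration, not taken from the thesis) uses a hypothetical affine operator F on R² and the unit box as C: the variational inequality is solved by a projected fixed-point iteration, and the equilibrium inequality f(x∗, y) ≥ 0 is then checked at random feasible points.

```python
import numpy as np

# Hypothetical affine operator F(x) = A x + b on R^2 (illustrative data only).
A = np.array([[2.0, 1.0], [1.0, 3.0]])  # symmetric positive definite => F strongly monotone
b = np.array([-1.0, 0.5])

def F(x):
    return A @ x + b

def f(x, y):
    # Bifunction f(x, y) = <F x, y - x> turning VI(F, C) into EP(f, C).
    return F(x) @ (y - x)

# C = unit box [0, 1]^2; its projection is a componentwise clip.
def proj_C(x):
    return np.clip(x, 0.0, 1.0)

# Solve VI(F, C) by the projected fixed-point iteration x = P_C(x - t F(x)).
x = np.zeros(2)
for _ in range(2000):
    x = proj_C(x - 0.1 * F(x))

# Check the EP inequality f(x*, y) >= 0 on random feasible points y.
ys = np.random.default_rng(0).uniform(0.0, 1.0, size=(1000, 2))
print("min_y f(x*, y) =", min(f(x, y) for y in ys))  # >= 0 up to tolerance
```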

Variational inequalities have been shown to be important mathematical models in the study of many real problems, in particular in network equilibrium models ranging from spatial price equilibrium problems and imperfect competitive oligopolistic market equilibrium problems to general financial or traffic equilibrium problems (see, for example, the recent monographs [70, 34]).

A point x∗ ∈ C is called a fixed point of a mapping S : H → H if Sx∗ = x∗. The set of fixed points of S is the set Fix(S) := {x ∈ H : Sx = x}. The computation of fixed points is important in the study of many problems, including inverse problems in science and engineering (see, for example, [11]). Construction of fixed points of nonexpansive mappings (i.e., ‖Sx − Sy‖ ≤ ‖x − y‖ for all x, y ∈ H) is an important subject in nonlinear operator theory and its applications [21]; in particular, in image recovery and signal processing [20].

In 2007, S. Takahashi and W. Takahashi [93] introduced an iterative scheme by the viscosity approximation method for finding a common element of the solution set of EP(f, C) and the set of fixed points of a nonexpansive mapping S in a real Hilbert space, and obtained, under certain appropriate conditions, a strong convergence theorem for such a scheme.

Motivated and inspired by the ongoing results on strong convergence theorems for the approximation of common elements of equilibrium problems and fixed point problems [80, 25, 94, 48, 49, 87, 31, 46, 3], we introduce, as a first contribution of our thesis, a new algorithm, different from the existing algorithms in the literature. Indeed, the method used in most papers for solving the equilibrium problem EP(f, C) is the proximal point method [22, 53, 54, 81, 86, 92]. This method consists in solving at each iteration a nonlinear variational inequality problem which seems not easy to solve [63, 67]. In this thesis, we propose instead to use an extragradient method with or without the incorporation of a linesearch. At each iteration, one or two convex minimization problems must be solved, depending on the presence or not of a linesearch. Working in a Hilbert space, these methods usually generate sequences of iterates that only converge weakly to a solution of the problem, while it is well known that strongly convergent algorithms are of fundamental importance for solving problems in infinite-dimensional spaces [10]. For obtaining the strong convergence from the weak convergence without additional assumptions on the data of the problem, we propose to use the hybrid projection method [47, 68, 69, 71]. In this method, the solution set of the problem is outer approximated by a sequence of polyhedral subsets and the sequence of iterates converges to the orthogonal projection of a given point onto the solution set. We report some preliminary numerical tests to show the behavior of the proposed algorithms. Chapter 3 will provide a detailed description of our first contribution to the thesis.

Chapter 4 and Chapter 5, which constitute the second and third contributions to our thesis work, will consider a class of 'hierarchical optimization' problems: a variational inequality problem constrained by a fixed point problem and/or an equilibrium problem. This class of problems (also known as 'bilevel problems') has been studied extensively in the literature (see, for example, [2, 32] and the references cited therein). Such hierarchical and equilibrium models are of interest in energy markets, and particularly in electricity and natural gas markets [40].

More precisely, our aim in the second contribution is to study new numerical algorithms for finding a solution of a variational inequality problem whose constraint set is the set of common elements of the set of fixed points of a mapping and the set of solutions of an equilibrium problem in a real Hilbert space. The strategy is to use the extragradient methods with or without linesearch instead of the proximal methods to solve equilibrium problems. To obtain the strong convergence of the iterates generated by these algorithms, a regularization procedure is added (the so-called viscosity approximation method; see, for example, [1, 55, 57, 64, 93]) after an extragradient method. Preliminary numerical tests are presented to show the behavior of the extragradient methods when a viscosity step is added. For more details, please see Chapter 4.

The third contribution of this thesis, Chapter 5, contains some numerical methods for finding a solution of a variational inequality problem over the solution set of an equilibrium problem defined on a subset C of a real Hilbert space. The strategy used in this chapter is to combine viscosity-type approximations with projected subgradient techniques to obtain the strong convergence of the iterates to a solution of the problem. First a general scheme is considered, and afterwards two practical realizations are studied depending on the characteristics of the feasible set C. When this set is simple, the projections onto C can be easily computed and all the iterates remain in C. On the other hand, when C is described by convex inequalities, the projections onto C are replaced by projections onto half-spaces containing C, with the consequence that most iterates are outside the feasible set C.

This strategy has been recently used in [56], and partly in [12, 13, 85], for finding a solution of a variational inequality problem over the solution set of another variational inequality problem defined on C. Here we develop a similar approach but for equilibrium constraints instead of variational inequality constraints. For more details, please see Chapter 5.

The results presented in this dissertation have been published in

• Journal of Optimization Theory and Applications (SCI) and Vietnam Journal of Mathematics for Chapter 3 ([98, 90]);

• Optimization (SCIE) for Chapter 4 ([99]);

• Journal of Global Optimization (SCI) for Chapter 5 ([100]).

Chapter 2

Preliminaries

In this chapter, we recall some definitions and fundamental results related to the theory of convex analysis and nonlinear mappings in Hilbert spaces. We focus mainly on the background material needed to approach our work, especially the existing results for fixed point problems, variational inequalities and equilibrium problems. The interested reader can find more comprehensive information on these fields, for example, in [33, 34, 51, 83, 109]. Throughout this work, H denotes a real Hilbert space equipped with the scalar product ⟨·, ·⟩ and the associated norm ‖ · ‖.

2.1 Elements of convex analysis

Let f : H → R ∪ {+∞} be a real function. The effective domain of f is the set

dom f = {x ∈ H : f(x) < +∞}.

The function f is said to be proper if its effective domain is nonempty. We say that f is convex if, for any x, y ∈ H and any λ ∈ [0, 1],

f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y).

If this inequality is strict whenever x and y are different, the function f is strictly convex. Moreover, f is said to be strongly convex on H if there exists a constant α > 0 such that, for any x, y ∈ H and any λ ∈ [0, 1],

f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y) − (α/2)λ(1 − λ)‖x − y‖².

Recall also that f is lower semi-continuous if, for every x ∈ H and every sequence {xn} converging strongly to x, we have f(x) ≤ lim inf_{n→∞} f(xn). When we consider the weak topology on H, the corresponding notion is weak lower semi-continuity. Obviously, any weakly lower semi-continuous function is lower semi-continuous. The converse is not true in general, but we have the following valuable property:

Proposition 2.1.1 ([33], Chap. I, Corollary 2.2)

A convex function f : H → R ∪ {+∞} is weakly lower semi-continuous if and only if it is lower semi-continuous.

We now recall the concept of Gâteaux-differentiability.

Definition 2.1.1 Let f : H → R ∪ {+∞} be a real function. The directional derivative of f at x in the direction d, denoted by f′(x, d), is the limit as λ → 0+, if it exists, of the quotient

(f(x + λd) − f(x)) / λ.

If there exists s ∈ H such that f′(x, d) = ⟨d, s⟩ for all d ∈ H, then we say that f is Gâteaux-differentiable (or G-differentiable) at x, we call s the Gâteaux-derivative (or G-derivative) of f at x, and we denote it by ∇f(x).

The uniqueness of the G-derivative follows directly. The concept of subdifferential, recalled next, generalizes the G-derivative.

Definition 2.1.2 Let f : H → R ∪ {+∞} be a real convex function. An element s ∈ H is called a subgradient of f at x ∈ H if f(x) ∈ R and

f(y) ≥ f(x) + ⟨s, y − x⟩ ∀y ∈ H.

The set of all subgradients of f at x is called the subdifferential of f at x and is denoted by ∂f(x). If no subgradient exists at x, we say that f is not subdifferentiable at x, and we set ∂f(x) = ∅.
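As a concrete illustration (not from the thesis), take f(x) = |x| on H = R: the subdifferential at 0 is the whole interval [−1, 1]. The short check below verifies the subgradient inequality numerically for a few candidate values of s.

```python
import numpy as np

# For f(x) = |x| on H = R, the subdifferential at 0 is [-1, 1]: every s with
# |s| <= 1 satisfies the subgradient inequality |y| >= |0| + s * (y - 0).
f = abs
ys = np.linspace(-2.0, 2.0, 401)

for s in (-1.0, -0.3, 0.0, 0.7, 1.0):      # candidate subgradients at x = 0
    ok = all(f(y) >= f(0.0) + s * (y - 0.0) for y in ys)
    print(f"s = {s:+.1f}: subgradient at 0? {ok}")

# A slope outside [-1, 1] violates the inequality for some y:
print("s = +1.5:", all(f(y) >= 1.5 * y for y in ys))   # False
```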

The following proposition gives basic properties of the subdifferential of a lower semi-continuous and convex function.

Proposition 2.1.2 ([8], Chap. 4, Sect. 3, Theorem 17)

Let f : H → R ∪ {+∞} be a proper, convex, lower semi-continuous function. Then f is subdifferentiable on int(dom f) and, for any x ∈ int(dom f), ∂f(x) is bounded, closed and convex.

The following proposition shows that the subdifferential generalizes the G-derivative.

Proposition 2.1.3 ([33], Chap. I, Proposition 5.3)

Let f : H → R ∪ {+∞} be a convex function. If f is G-differentiable at x ∈ H, then f is subdifferentiable at x and ∂f(x) = {∇f(x)}. Conversely, if at a point x ∈ H, f is continuous, finite and has only one subgradient, then f is G-differentiable at x and ∂f(x) = {∇f(x)}.

To end this section, we give some results that are generalizations, to infinite-dimensional spaces, of Theorem 10.8 and Corollary 10.8.1 in [83]. They will be used in the next chapters.

Proposition 2.1.4 Let H be a real Hilbert space. Consider a sequence {ϕn} of continuous convex functions from H into R, and a continuous convex function ϕ from H into R. If {ϕn} converges pointwise to ϕ on H, then there exists η > 0 such that {ϕn} converges uniformly to ϕ on ηB, where B denotes the closed unit ball in H.

Proof. Since, by assumption, the sequence {ϕn(x)} is bounded for every x ∈ H, it follows from Theorem 2.2.22 in [109] that the sequence {ϕn} is locally equi-Lipschitz on H, and thus also locally equi-bounded on H. So, there exist δ > 0 and M > 0 such that, for every x ∈ δB and n ∈ ℕ, we have ϕn(x) ≤ M.

Now, let S be the collection of singletons in H. Since the sequence {ϕn} converges pointwise to ϕ, it also S-converges to ϕ (see Lemma 1.4 in [17]). Then, the assumptions of Lemma 1.5 in [17] are satisfied with W = {0}, and consequently {ϕn} converges uniformly to ϕ on ηB for some 0 < η < δ.

Corollary 2.1.1 Let {ϕn} be a sequence of continuous convex functions from H into R, and let ϕ be a continuous convex function from H into R such that {ϕn} converges pointwise to ϕ on H. Then there exists η > 0 such that, for every ε > 0, there exists n0 ∈ ℕ with ϕn(x) < ϕ(x) + ε for all n ≥ n0 and all x ∈ ηB.

Proof. Let ψn = max{ϕn, ϕ}. Then, the sequence {ψn} of convex continuous functions converges pointwise to ϕ on H. So, by Proposition 2.1.4, there exists η > 0 such that {ψn} converges uniformly to ϕ on ηB. Hence, for every ε > 0, there exists n0 ∈ ℕ such that

|ψn(x) − ϕ(x)| < ε ∀n ≥ n0, ∀x ∈ ηB.

Since ϕn ≤ ψn and ψn(x) − ϕ(x) ≥ 0, we obtain the desired result:

ϕn(x) ≤ ψn(x) < ϕ(x) + ε ∀n ≥ n0, ∀x ∈ ηB.

2.2 The projection operator and useful lemmas

Let C be a nonempty closed convex subset of a Hilbert space H. For each x ∈ H, there exists a unique point in C [50], denoted by P_C x, such that ‖x − P_C x‖ ≤ ‖x − y‖ for all y ∈ C. The operator P_C is called the (orthogonal) projection onto C. We will use the following characterization:

(b) For any x ∈ H and y ∈ C, it holds that ⟨x − P_C x, y − P_C x⟩ ≤ 0. Conversely, if u ∈ C and ⟨x − u, y − u⟩ ≤ 0 for all y ∈ C, then u = P_C x.
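A quick numerical illustration of characterization (b), assuming C is the closed Euclidean unit ball (so the projection has a closed form); this is a sketch, not part of the thesis.

```python
import numpy as np

def proj_ball(x):
    # Projection onto the closed unit ball C = {y : ||y|| <= 1}.
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

rng = np.random.default_rng(1)
x = rng.normal(size=3) * 3.0          # a point (almost surely) outside C
px = proj_ball(x)

# Characterization (b): <x - P_C x, y - P_C x> <= 0 for every y in C.
ys = rng.normal(size=(1000, 3))
ys = ys / np.maximum(1.0, np.linalg.norm(ys, axis=1, keepdims=True))  # push into C
print(max(float((x - px) @ (y - px)) for y in ys))  # <= 0 up to rounding
```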

Definition 2.2.1 A sequence {xn} ⊂ H is said to be:

(i) strongly convergent to x ∈ H if and only if lim_{n→∞} ‖xn − x‖ = 0;

(ii) weakly convergent to x ∈ H (denoted xn ⇀ x) if and only if lim_{n→∞} ⟨xn − x, y⟩ = 0 for every y ∈ H.

Lemma 2.2.1 ([47], Lemma 2.5) Let K be a nonempty closed convex subset of H. Let u ∈ H and let {xn} be a sequence in H. If any weak limit point of {xn} belongs to K and if ‖xn − u‖ ≤ ‖u − P_K u‖ for all n ∈ ℕ, then xn → P_K u.

Lemma 2.2.2 ([47], Lemma 2.2) For any t ∈ [0, 1] and for any x, y ∈ H, the following equality holds:

‖tx + (1 − t)y‖² = t‖x‖² + (1 − t)‖y‖² − t(1 − t)‖x − y‖².

Lemma 2.2.3 ([27], Lemma 3.1) Let {an} and {bn} be two nonnegative sequences satisfying a_{n+1} ≤ a_n + b_n for all n ≥ n0 and Σ_{n=0}^∞ b_n < +∞, where n0 is some nonnegative integer. Then lim_{n→∞} a_n exists.

Lemma 2.2.4 ([55], Lemma 2.1) Let {an} and {bn} be two nonnegative sequences satisfying the following conditions. Then the following two results hold:

(i) There exists a subsequence {b_{n_k}} of {bn} such that lim_{k→∞} b_{n_k} = 0.

(ii) If {an} and {bn} are also such that b_{n+1} − b_n ≤ θ a_n (for some θ > 0), then lim_{n→∞} b_n = 0.

Lemma 2.2.5 ([55], Lemma 3.1) Let {bn} be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence {b_{n_j}} of {bn} such that b_{n_j} < b_{n_j+1} for all j ∈ ℕ. Define, for every n large enough, the integer τ(n) = max{k ≤ n : b_k < b_{k+1}}. Then {τ(n)} is nondecreasing, τ(n) → ∞ as n → ∞, and, for all n large enough,

b_{τ(n)} ≤ b_{τ(n)+1} and b_n ≤ b_{τ(n)+1}.

2.3 Fixed point problems

Let S : H → H be a mapping. The fixed point problem associated with S is to find a point x∗ ∈ H such that x∗ = Sx∗. The fixed point set of S is denoted by Fix(S).

Definition 2.3.1 Let C be a subset of H. The mapping S is said to be

(i) nonexpansive on C if

‖Sx − Sy‖ ≤ ‖x − y‖ ∀x, y ∈ C;

(ii) quasi-nonexpansive on C if Fix(S) ≠ ∅ and

‖Sx − x∗‖ ≤ ‖x − x∗‖ for all (x, x∗) ∈ C × Fix(S);

(iii) ξ-strict pseudo-contractive on C if there exists ξ ∈ [0, 1) such that

‖Sx − Sy‖² ≤ ‖x − y‖² + ξ‖(I − S)x − (I − S)y‖² for all x, y ∈ C,

where I denotes the identity mapping;

(iv) β-demicontractive (or β-quasi-strict pseudo-contractive) on C if Fix(S) ≠ ∅ and there exists β ∈ [0, 1) such that

‖Sx − x∗‖² ≤ ‖x − x∗‖² + β‖x − Sx‖² for all (x, x∗) ∈ C × Fix(S);

(v) demiclosed at zero on C if for every sequence {xn} contained in C,

xn ⇀ x and Sxn − xn → 0 ⇒ Sx = x.

It is well known that if C is a nonempty bounded closed convex subset of H and if S is nonexpansive, then Fix(S) is a nonempty closed convex subset of C. Moreover, we have

Proposition 2.3.1 [47] Let C be a nonempty closed convex subset of H and let S : C → C be a ξ-strict pseudo-contractive mapping. Then Fix(S) is closed and convex, and the mapping I − S is demiclosed at zero.

Lemma 2.3.1 [55, Remark 4.2] Let β ∈ [0, 1), let C be a nonempty closed convex subset of H, and let S : C → C be a β-demicontractive mapping such that Fix(S) ≠ ∅. Then S_w = (1 − w)I + wS is a quasi-nonexpansive mapping over C for every w ∈ [0, 1 − β]. Furthermore,

‖S_w x − x∗‖² ≤ ‖x − x∗‖² − w(1 − β − w)‖Sx − x‖² for all (x, x∗) ∈ H × Fix(S).

Many methods used in the literature for solving the fixed point problem are derived from Mann's iterative algorithm:

Given x0 ∈ C, compute, for all n ∈ ℕ,

xn+1 = αn xn + (1 − αn)Sxn, (2.2)

where the sequence {αn} is in the interval [0, 1]. This method has been extensively investigated for nonexpansive mappings. In particular, it was proven that if the sequence {αn} is chosen such that Σ_{n=0}^∞ αn(1 − αn) = +∞, then the sequence {xn} generated by (2.2) converges weakly to a point of Fix(S). The main drawback of Mann's iteration is that it generates a weakly convergent iterative sequence.
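For illustration (a toy example, not from the thesis), take S a rotation of the plane: it is nonexpansive with Fix(S) = {0}, and the Mann iteration (2.2) with αn = 1/2 drives the iterates to the fixed point.

```python
import numpy as np

theta = 1.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def S(x):
    # Rotation about the origin: nonexpansive, Fix(S) = {0}.
    return R @ x

# Mann iteration x_{n+1} = a_n x_n + (1 - a_n) S x_n with a_n = 1/2,
# so that sum a_n (1 - a_n) = +infinity.
x = np.array([4.0, -3.0])
for n in range(200):
    x = 0.5 * x + 0.5 * S(x)

print(np.linalg.norm(x))  # tends to 0, the unique fixed point
```

By contrast, the plain Picard iteration xn+1 = Sxn merely rotates x0 forever here; the averaging in (2.2) is what produces convergence.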

Recently, several modifications of Mann's iteration have been proposed to obtain the strong convergence of the iterates. A first one is the CQ projection method introduced by Nakajo and Takahashi [71] and defined as follows:

Given x0 ∈ C, compute for all n ∈ ℕ,

yn = αn xn + (1 − αn)Sxn,
Cn = {z ∈ C : ‖yn − z‖ ≤ ‖xn − z‖},
Qn = {z ∈ C : ⟨xn − z, x0 − xn⟩ ≥ 0},
xn+1 = P_{Cn∩Qn} x0.

A second modification is the shrinking projection method, which constructs a closed convex set which is reduced after each iteration and which contains Fix(S). More precisely, the iteration in the shrinking projection method is the following:

Given x0 ∈ C0 := C, compute for all n ∈ ℕ,

yn = αn xn + (1 − αn)Sxn,
Cn+1 = {z ∈ Cn : ‖yn − z‖ ≤ ‖xn − z‖},
xn+1 = P_{Cn+1} x0.

Another modification is the viscosity approximation method, in which a contraction g : C → C is incorporated into the iteration:

xn+1 = αn g(xn) + (1 − αn)Sxn,

where {αn} ⊂ (0, 1) is a slowly vanishing sequence, i.e., αn → 0 and Σ_{n=0}^∞ αn = ∞. The sequence {xn} converges strongly to a fixed point of S. Furthermore, when g(x) = u for every x ∈ C, the sequence {xn} converges strongly to the projection of u onto the fixed point set Fix(S).
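A small sketch of this last iteration with the constant choice g(x) = u, assuming S is the orthogonal projection onto a line M (nonexpansive, with Fix(S) = M), so the expected limit P_{Fix(S)} u can be computed by hand; the data are illustrative only.

```python
import numpy as np

# S = orthogonal projection onto the line M = span{(1, 1)}: nonexpansive, Fix(S) = M.
d = np.array([1.0, 1.0]) / np.sqrt(2.0)
def S(x):
    return (d @ x) * d

u = np.array([2.0, 0.0])                 # anchor point: g(x) = u for all x
x = np.array([-5.0, 7.0])
for n in range(1, 5001):
    alpha = 1.0 / (n + 1)                # slowly vanishing: sum alpha_n = +infinity
    x = alpha * u + (1.0 - alpha) * S(x)

print(x, "vs P_Fix(S)(u) =", S(u))       # iterates approach the projection of u onto M
```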

2.4 Variational inequalities

Let C be a nonempty closed convex subset of H and let F : H → H be a mapping. The problem of finding x∗ ∈ C such that

⟨F x∗, y − x∗⟩ ≥ 0 for all y ∈ C (2.5)

is called a variational inequality (VI, for short). We denote the problem (2.5) by VI(F, C) and the corresponding solution set by SolVI(F, C). One often considers VIs with some additional properties imposed on the mapping F, such as continuity, strong monotonicity, monotonicity or pseudomonotonicity of F. Let us recall some well-known definitions.

Definition 2.4.1 The mapping F : H → H is said to be

(a) strongly monotone on C if there exists γ > 0 such that

⟨F x − F y, x − y⟩ ≥ γ‖x − y‖² ∀x, y ∈ C;

(b) monotone on C if

⟨F x − F y, x − y⟩ ≥ 0 ∀x, y ∈ C;

(c) pseudomonotone on C if

⟨F x, y − x⟩ ≥ 0 =⇒ ⟨F y, y − x⟩ ≥ 0 for all x, y ∈ C;

(d) L-Lipschitz continuous on C (for some L > 0) if

‖F x − F y‖ ≤ L‖x − y‖ for all x, y ∈ C.

The implications (a) ⇒ (b) and (b) ⇒ (c) are obvious.

Proposition 2.4.1 [42] Let F : H → H be an L-Lipschitz continuous and γ-strongly monotone mapping and let C be a nonempty closed convex subset of H. Then the variational inequality problem VI(F, C) has a unique solution.

2.5 Equilibrium problems

Let C be a nonempty closed convex subset of H and let f be a function from H × H to R such that f(x, x) = 0 for all x ∈ C. The equilibrium problem (EP, for short) associated with f and C, in the sense of [35] or [16] (see also [66]), is denoted EP(f, C), and consists in finding a point x∗ ∈ C such that

f(x∗, y) ≥ 0 for every y ∈ C.

The set of solutions of EP(f, C) is denoted SolEP(f, C). For an excellent survey on the existence of solutions and methods for solving equilibrium problems, we refer the readers to [15].

Definition 2.5.1 Let C be a subset of a Hilbert space H. A mapping f : H × H → R is said to be

(i) strongly monotone on C if there exists γ > 0 such that

f(x, y) + f(y, x) ≤ −γ‖x − y‖² ∀x, y ∈ C;

(ii) strictly monotone on C if

f(x, y) + f(y, x) < 0 ∀x, y ∈ C, x ≠ y;

(iii) monotone on C if

f(x, y) + f(y, x) ≤ 0 ∀x, y ∈ C;

(iv) pseudomonotone on C if

f(x, y) ≥ 0 ⇒ f(y, x) ≤ 0 ∀x, y ∈ C.

It is obvious that (i) ⇒ (ii) ⇒ (iii) ⇒ (iv).

Let ε > 0 and let f : H × H → R be a bifunction satisfying the two properties: f(x, x) = 0 and f(x, ·) is convex for every x ∈ H. Let also x ∈ H. The ε-subdifferential of f(x, ·) at x is denoted ∂₂^ε f(x, x) and defined by

∂₂^ε f(x, x) = {u ∈ H : f(x, y) − f(x, x) ≥ ⟨u, y − x⟩ − ε ∀y ∈ H}
            = {u ∈ H : f(x, y) ≥ ⟨u, y − x⟩ − ε ∀y ∈ H}. (2.6)

Let us mention that ∂₂^ε f(x, x), with ε > 0 and x ∈ H, is an extension or enlargement of ∂₂f(x, x) (the Fenchel subdifferential of f(x, ·) at x with respect to the second variable) in the sense that ∂₂f(x, x) = ∂₂^0 f(x, x) ⊂ ∂₂^ε f(x, x). The use of elements of ∂₂^ε f allows an extra degree of freedom, compared with those of ∂₂f.

The following two properties of ∂₂f will be used in the next chapters.

Lemma 2.5.1 Assume that f(x, x) = 0 and f(x, ·) is convex on H for every x ∈ H. Then:

(i) For every x∗ ∈ SolEP(f, C), there exists ḡ ∈ ∂₂f(x∗, x∗) such that ⟨ḡ, z − x∗⟩ ≥ 0 for every z ∈ C.

(ii) If, moreover, f is monotone, then for every x, y ∈ H, every g ∈ ∂₂^ε f(x, x) and every ĝ ∈ ∂₂f(y, y), one has ⟨g − ĝ, y − x⟩ ≤ ε.

Proof. (i) Let x∗ ∈ SolEP(f, C). Then x∗ ∈ C and f(x∗, y) ≥ 0 for every y ∈ C. Hence x∗ is a minimum of the convex function f(x∗, ·) over C and thus, by the optimality condition,

0 ∈ ∂₂f(x∗, x∗) + N_C(x∗),

where N_C(x∗) denotes the normal cone to C at x∗. Consequently, using the definition of the normal cone, we obtain that there exists ḡ ∈ ∂₂f(x∗, x∗) such that ⟨ḡ, z − x∗⟩ ≥ 0 for every z ∈ C.

(ii) Let x, y ∈ H. Since g ∈ ∂₂^ε f(x, x) and ĝ ∈ ∂₂f(y, y), we have successively

f(x, y) ≥ f(x, x) + ⟨g, y − x⟩ − ε,
f(y, x) ≥ f(y, y) + ⟨ĝ, x − y⟩.

Adding these two inequalities and noting that by assumption f(x, x) = f(y, y) = 0, we get

⟨g − ĝ, y − x⟩ ≤ f(x, y) + f(y, x) + ε ≤ ε,

where we have used the monotonicity of f to conclude.

2.5.1 Some particular equilibrium problems

In this subsection, we briefly show how some of the main mathematical models can be formulated as an equilibrium problem.

(a) Optimization problems

Let C be a nonempty closed convex subset of H and let φ : C → R be a convex mapping. The optimization problem is defined as

min φ(x), subject to x ∈ C.

Let the function f be defined by f(x, y) = φ(y) − φ(x) for every x, y ∈ C. Then the optimization problem can be rewritten as an equilibrium problem EP(f, C) (a numerical check of this construction appears after this list).

(b) Pareto optimization problems

Given m real-valued functions φi : R^n → R, a weak Pareto global minimum of the vector function φ = (φ1, …, φm) over a nonempty closed convex set C ⊂ R^n is any x∗ ∈ C such that, for any y ∈ C, there exists an index i such that φi(y) − φi(x∗) ≥ 0. Finding a weak Pareto global minimum amounts to solving EP(f, C) with

f(x, y) = max_{i=1,…,m} {φi(y) − φi(x)}.

(c) Saddle point problems

Given two closed sets C1 ⊂ R^{n1} and C2 ⊂ R^{n2}, a saddle point of a function L : C1 × C2 → R is any x∗ = (x∗1, x∗2) ∈ C1 × C2 such that

L(x∗1, y2) ≤ L(x∗1, x∗2) ≤ L(y1, x∗2)

holds for any y = (y1, y2) ∈ C1 × C2. Finding a saddle point of L(·, ·) amounts to solving EP(f, C) with C = C1 × C2 and

f((x1, x2), (y1, y2)) = L(y1, x2) − L(x1, y2).

(d) Complementarity problems and systems of equations

Given a nonempty closed convex cone C ⊂ R^n and a mapping F : R^n → R^n, the complementarity problem asks to determine a point x∗ ∈ C such that x∗ ⊥ F x∗ and ⟨F x∗, y⟩ ≥ 0 for any y ∈ C, i.e., F x∗ ∈ C∗, where C∗ denotes the dual cone of C. The system of equations F x = 0 is a special complementarity problem with C = R^n. Solving the complementarity problem amounts to solving EP(f, C) with

f(x, y) = ⟨F x, y − x⟩.

(e) Variational inequality problems

Given a nonempty closed convex set C ⊂ R^n and a mapping F : R^n → R^n, the Stampacchia variational inequality problem asks to determine a point x∗ ∈ C such that ⟨F x∗, y − x∗⟩ ≥ 0 for any y ∈ C. Solving this problem amounts to solving EP(f, C) with

f(x, y) = ⟨F x, y − x⟩.

If F : R^n ⇒ R^n is a set-valued mapping with compact values, then finding x∗ ∈ C and u∗ ∈ F x∗ such that ⟨u∗, y − x∗⟩ ≥ 0 for any y ∈ C amounts to solving EP(f, C) with

f(x, y) = max_{u∈Fx} ⟨u, y − x⟩.

Given two mappings F, g : R^n → R^n and a function h : R^n → (−∞, +∞], another kind of generalized variational inequality problem asks to find a point x∗ ∈ R^n such that

⟨F x∗, y − g(x∗)⟩ + h(y) − h(g(x∗)) ≥ 0

for every y ∈ R^n. Solving this problem amounts to solving EP(f, C) with C = R^n and

f(x, y) = ⟨F x, y − g(x)⟩ + h(y) − h(g(x)).

(f) Fixed point problems

Given a closed set C ⊂ H, a fixed point of a mapping S : C → C is any x∗ such that Sx∗ = x∗. Finding a fixed point amounts to solving EP(f, C) with

f(x, y) = ⟨x − Sx, y − x⟩.

If S : C ⇒ C is a set-valued mapping with compact values, then finding x∗ ∈ C such that x∗ ∈ Sx∗ amounts to solving EP(f, C) with

f(x, y) = max_{u∈Sx} ⟨x − u, y − x⟩.

(g) Nash equilibrium problems

Assume there are N players, each controlling the variables xi ∈ R^{ni}. Each player i has a set of possible strategies Ki ⊂ R^{ni}. Denote by x the overall vector of all variables: x = (x1, …, xN), and let K = K1 × · · · × KN. The aim of player i, given the other players' strategies, is to choose an xi ∈ Ki that minimizes the loss function fi : K → R. A solution of the Nash equilibrium problem (NEP) is a feasible point x∗ ∈ K such that, for all i,

fi(x∗) ≤ fi(x∗(yi)) ∀yi ∈ Ki,

where x∗(yi) denotes the vector obtained from x∗ by replacing x∗i with yi. Finding a Nash equilibrium amounts to solving EP(f, K) with the (Nikaido-Isoda) function f defined as

f(x, y) = Σ_{i=1}^N [fi(x(yi)) − fi(x)].

2.5.2 Solution methods for solving equilibrium problems

For our purposes in the next chapters, we recall some well-known solution methods for equilibrium problems in this subsection. Other interesting solution methods for equilibrium problems can be found in [15]. To begin with, let us recall the two basic assumptions on the function f associated with the equilibrium problem EP(f, C):

(i) f(x, x) = 0 for all x ∈ C;

(ii) f(x, ·) is convex, subdifferentiable and lower semicontinuous for all x ∈ C.

(a) Fixed point method

This method has been introduced by Mastroeni [61] for solving strongly monotone equilibrium problems. It can be expressed as follows:

Given xn ∈ C, find xn+1 ∈ C as the solution of the strongly convex optimization problem

xn+1 = argmin_{y∈C} {λn f(xn, y) + (1/2)‖y − xn‖²}.

It has been proven that this method converges provided that the Lipschitz-type condition

f(x, y) + f(y, z) ≥ f(x, z) − c1‖y − x‖² − c2‖z − y‖² (2.7)

holds for every x, y, z ∈ C and that λn = λ ∈ (0, min{1/(2c1), 1/(2c2)}). The rate of convergence of this method is linear [67], i.e., there exists q ∈ (0, 1) such that

‖xn+1 − x∗‖ ≤ q‖xn − x∗‖ ∀n ∈ ℕ,

where x∗ is the unique solution of EP(f, C).

It is worth noting that the uniqueness of the solution follows from the strong monotonicity assumption, which is rather restrictive. Actually, convergence can also be achieved if f is pseudomonotone and f(x, ·) is Lipschitz continuous on C uniformly in x, i.e., there exists L > 0 such that

|f(x, y) − f(x, z)| ≤ L‖y − z‖ ∀x, y, z ∈ C,

and the sequence {λn} is chosen such that the series Σ λn² < +∞; see [75].

(b) Extragradient methods

Introduced by Korpelevich [52] for finding saddle points, the extragradient method has been extended to solving variational inequalities. In the corresponding method, two projections are computed per iteration:

Given xn ∈ C, compute

yn = P_C(xn − λn F(xn)) and xn+1 = P_C(xn − λn F(yn)), (2.8)

where P_C denotes the orthogonal projection onto C. Let us recall that this method is convergent when F is pseudomonotone and Lipschitz continuous. Recently, the extragradient method has been generalized in [97] for solving equilibrium problems in R^n. In this case, the two steps (2.8) become:

Given xn ∈ C, find successively yn and xn+1 as follows:

yn = argmin_{y∈C} {λn f(xn, y) + (1/2)‖y − xn‖²},
xn+1 = argmin_{y∈C} {λn f(yn, y) + (1/2)‖y − xn‖²},

where {λn} ⊂ (0, 1]. This method has been proven convergent to some x∗ ∈ SolEP(f, C) when f is pseudomonotone and satisfies the Lipschitz-type property (2.7). However, this latter condition is strong and difficult to check. So, in [97], the authors replaced the computation of xn+1 by an Armijo backtracking linesearch, followed by a projection onto a hyperplane. More precisely, given xn ∈ C, the iterates yn, zn, and xn+1 are calculated as follows:

yn = argmin_{y∈C} {λn f(xn, y) + (1/2)‖y − xn‖²};
zn = (1 − γ^m)xn + γ^m yn, where m is the smallest nonnegative integer such that f(zn, xn) − f(zn, yn) ≥ (α/(2λn))‖xn − yn‖²;

and xn+1 is then obtained by projecting onto a hyperplane constructed from a subgradient gn ∈ ∂₂f(zn, xn) (see Section 3.2 for the details of this construction).
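A runnable sketch of the extragradient iteration (2.8) in the variational inequality case f(x, y) = ⟨F x, y − x⟩, where both argmin subproblems reduce to projections. The affine operator below is hypothetical: it is skew-symmetric, hence monotone (⟨Ax, x⟩ = 0) but not strongly monotone, a setting where the plain projected-gradient iteration may fail while the extragradient correction step still converges.

```python
import numpy as np

# Hypothetical data: F(x) = A x + b with A skew-symmetric (monotone, 1-Lipschitz).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b
proj_C = lambda x: np.clip(x, -1.0, 1.0)   # C = [-1, 1]^2

lam = 0.2                                   # step size lam < 1/L, here L = ||A|| = 1
x = np.array([1.0, 1.0])
for _ in range(2000):
    y = proj_C(x - lam * F(x))              # extrapolation step
    x = proj_C(x - lam * F(y))              # correction step evaluates F at y, not x

print(x)                                    # approaches the solution (-1, -1)
print(np.linalg.norm(x - proj_C(x - lam * F(x))))  # fixed-point residual ~ 0
```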

(c) Proximal point method

The basic idea of the proximal point method for EP(f, C) comes from the proximal point method for optimization problems. It consists in finding the next iterate xn+1 ∈ C such that

f(xn+1, y) + (1/rn)⟨y − xn+1, xn+1 − xn⟩ ≥ 0 for every y ∈ C, (2.9)

where the sequence {rn} is positive. Under some conditions on f, it is proven that the sequence {xn} is well defined and converges to a solution of EP(f, C) [65]. However, we mention that it is not an easy task to find xn+1 satisfying (2.9). Recently, Mordukhovich et al. [63] suggested solving this inequality by using descent methods based on gap functions, and in [67] another subproblem must be solved to obtain xn+1 satisfying (2.9).
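To make this difficulty concrete, the sketch below (hypothetical data, same operator as in the previous sketch) performs one proximal step (2.9) in the VI case f(x, y) = ⟨F x, y − x⟩: the subproblem is itself a variational inequality in z, for the strongly monotone operator G(z) = F z + (z − xn)/rn, which has to be solved by an inner iterative loop.

```python
import numpy as np

# F(x) = A x + b with A skew-symmetric; C = [-1, 1]^2 (illustrative data).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b
proj_C = lambda x: np.clip(x, -1.0, 1.0)

def prox_step(x, r, t=0.1, inner=300):
    # Solve f(z, y) + (1/r) <y - z, z - x> >= 0 for all y in C, i.e. the
    # fixed point z = P_C(z - t G(z)) with G(z) = F(z) + (z - x)/r.
    z = x.copy()
    for _ in range(inner):
        z = proj_C(z - t * (F(z) + (z - x) / r))
    return z

# Proximal point method for EP(f, C): every outer step needs the inner solve.
x = np.array([1.0, 1.0])
for _ in range(60):
    x = prox_step(x, r=1.0)
print(x)  # approaches the solution (-1, -1)
```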

2.6 Previous works

For finding a common solution of an equilibrium problem and a fixed point problem, the strategy is to combine a method for solving equilibrium problems with a method for solving fixed point problems. Most of the methods for solving equilibrium problems in the literature are based on the proximal point method. These methods require that the function f satisfies the following conditions:

(E1) f(x, x) = 0 for all x ∈ C;

(E2) f is monotone, i.e., f(x, y) + f(y, x) ≤ 0 for all x, y ∈ C;

(E3) lim sup_{t→0+} f(tz + (1 − t)x, y) ≤ f(x, y) for all x, y, z ∈ C;

(E4) f(x, ·) is convex, subdifferentiable and lower semicontinuous for all x ∈ C.

Under these assumptions, for each r > 0 and x ∈ H, there exists a unique element z ∈ C such that

f(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for every y ∈ C (2.10)

(see, for example, [28]).

2.6.1 The hybrid projection method

The hybrid projection method consists in combining the proximal point method for solving equilibrium problems with the CQ projection method for solving the fixed point problem. Here the sequence {xn} is generated as follows:

Given x0 ∈ C, compute for all n ∈ ℕ, un ∈ C satisfying (2.10) with r = rn and x = xn, and then

yn = αn xn + (1 − αn)S un,
Cn = {z ∈ C : ‖yn − z‖ ≤ ‖xn − z‖},
Qn = {z ∈ C : ⟨xn − z, x0 − xn⟩ ≥ 0},
xn+1 = P_{Cn∩Qn} x0.

2.6.2 The shrinking projection method

The shrinking projection method [47, 53, 71] constructs step by step a decreasing sequence of subsets containing the solution set. The sequence {xn} is defined as follows:

Given x0 ∈ C, set C1 = C and x1 = x0, and compute un ∈ C satisfying (2.10) with r = rn and x = xn, and then

yn = αn xn + (1 − αn)S un,
Cn+1 = {z ∈ Cn : ‖yn − z‖ ≤ ‖xn − z‖},
xn+1 = P_{Cn+1} x0,

for every integer n ≥ 1, where {αn} ⊂ [a, b] for some a, b ∈ (0, 1), and {rn} ⊂ (0, ∞) satisfies the condition lim inf_{n→∞} rn > 0. In [47], Jaiboon and Kumam have proven that the sequence {xn} generated by the shrinking projection method converges strongly to the projection of x0 onto Fix(S) ∩ SolEP(f, C) provided that the function f satisfies conditions (E1)-(E4), the mapping S is ξ-strict pseudo-contractive, and {αn} ⊂ [a, b] for some a, b ∈ (ξ, 1).

2.6.3 The viscosity approximation method

This method consists in combining the viscosity approximation method with a proximal method adapted for solving the equilibrium problem. This gives rise to the following algorithm [93]:

Given x0 ∈ H, compute, for all n ∈ ℕ, un ∈ C satisfying (2.10) with r = rn and x = xn, and set

xn+1 = αn g(xn) + (1 − αn)S un,

where g is a contraction and the sequences {αn} ⊂ (0, 1) and {rn} ⊂ (0, ∞) satisfy αn → 0, Σ_{n=1}^∞ αn = ∞, lim inf_{n→∞} rn > 0, and Σ_{n=1}^∞ |rn+1 − rn| < ∞.

Besides the strong convergence property of this sequence, this method also allows us to select a particular fixed point of S. For example, if g(x) = x0 for every x ∈ C, then the sequence {xn} converges strongly to the projection of x0 onto the fixed point set Fix(S). Related papers on the viscosity approximation method can be found in [1, 18, 55, 57, 64, 78, 105].

2.6.4 The extragradient method

When the problem is to find a common solution to a variational inequality problem and a fixed point problem, many papers combine an extragradient iteration with a fixed point iteration (see, for example, [23, 26, 55, 69]). The extragradient method has been extended to equilibrium problems in [97], and in [1, 4, 76] to the problem of finding a common solution x∗ ∈ Fix(S) ∩ SolEP(f, C). The advantage of the extragradient iteration is that two strongly convex minimization problems are solved at each iteration, which seems numerically easier than solving the nonlinear inequality in the proximal method. In other words, given xn ∈ C, the proximal step computing zn is replaced by the following two steps:

yn = argmin_{y∈C} {λn f(xn, y) + (1/2)‖y − xn‖²},
zn = argmin_{y∈C} {λn f(yn, y) + (1/2)‖y − xn‖²}

(in [4], these steps are followed by a fixed point iteration involving mappings Si, i = 1, …, p, which are strict pseudo-contractive).

Under appropriate conditions on {λn}, {λn,i} and {αn}, the authors of [4] prove that the sequence {xn} converges strongly to the projection of x0 onto the set ∩_{i=1}^p Fix(Si) ∩ SolEP(f, C), provided that f satisfies the Lipschitz-type condition (2.7). In the case f(x, y) = ⟨F x, y − x⟩ for every x, y ∈ C, it is easy to see that this condition holds when the map F : C → H is Lipschitz continuous. However, the Lipschitz-type condition on f is rather strong and not necessary for proving the convergence of the method when the proximal point iteration is used. Here, to avoid the introduction of the Lipschitz property on f, an Armijo backtracking linesearch procedure will be introduced in the extragradient iteration, see [97]. Furthermore, the shrinking projection method or the viscosity method can be combined with the extragradient method to obtain the strong convergence of the sequence {xn}.

Chapter 3

Hybrid Projection Extragradient Methods

Let C be a nonempty closed convex subset of a real Hilbert space H, let S : C → C be a mapping, and let f : C × C → R be an equilibrium bifunction. The problem considered in this chapter is:

Find x∗ ∈ Fix(S) ∩ SolEP(f, C). (3.1)

To solve this problem, the strategy is to replace the proximal point iteration used in most papers by an extragradient procedure with or without an Armijo backtracking linesearch. The strong convergence of the iterates generated by each method is obtained thanks to a hybrid projection method, under the assumptions that the fixed point mapping is a ξ-strict pseudo-contractive mapping, and the function associated with the equilibrium problem is pseudomonotone and weakly continuous. A Lipschitz-type condition is assumed to hold on this function when the basic iteration comes from the extragradient method. This assumption is unnecessary when an Armijo backtracking linesearch is incorporated in the extragradient method. Some preliminary numerical illustrations are reported to show the efficiency of the proposed methods and to compare their behavior.

3.1 A hybrid extragradient algorithm

In this section, we first combine the extragradient method for solving EP(f, C) with a fixed point method. More precisely, we consider the sequences {xn}, {yn}, {zn}, and {tn} generated by the following scheme:

yn = argmin_{y∈C} {λn f(xn, y) + (1/2)‖y − xn‖²},
zn = argmin_{y∈C} {λn f(yn, y) + (1/2)‖y − xn‖²},
tn = αn xn + (1 − αn)[βn zn + (1 − βn)Szn],
xn+1 = tn,

for every n ∈ ℕ, where {αn} ⊂ [0, 1), {βn} ⊂ (0, 1), and {λn} ⊂ (0, 1].
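A minimal sketch of this scheme for the VI special case f(x, y) = ⟨F x, y − x⟩, in which the two argmin steps reduce to projections. The data are hypothetical and chosen so that Fix(S) ∩ SolEP(f, C) is a known point; S is nonexpansive, hence ξ-strict pseudo-contractive with ξ = 0.

```python
import numpy as np

# Hypothetical data: F(x) = x - p on C = [-1, 1]^2, and S the nonexpansive map
# fixing the line x2 = 0.5, so Fix(S) ∩ SolEP(f, C) = {(1, 0.5)}.
p = np.array([2.0, 0.5])
F = lambda x: x - p
proj_C = lambda x: np.clip(x, -1.0, 1.0)
S = lambda x: np.array([x[0], 0.5])

lam, alpha, beta = 0.2, 0.5, 0.5
x = np.array([-1.0, -1.0])
for n in range(300):
    y = proj_C(x - lam * F(x))   # y_n = argmin lam f(x_n, .) + (1/2)||. - x_n||^2
    z = proj_C(x - lam * F(y))   # z_n = argmin lam f(y_n, .) + (1/2)||. - x_n||^2
    x = alpha * x + (1 - alpha) * (beta * z + (1 - beta) * S(z))   # x_{n+1} = t_n

print(x)  # approaches (1.0, 0.5), a point of Fix(S) ∩ SolEP(f, C)
```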

Here, we assume that the following conditions are satisfied by the bifunction f:

(A1) f(x, x) = 0 for all x ∈ C.

(A2) f is pseudomonotone on C, i.e., f(x, y) ≥ 0 ⇒ f(y, x) ≤ 0 for all x, y ∈ C.

(A3) f is jointly weakly continuous on C × C in the sense that, if x, y ∈ C and {xn} and {yn} are two sequences in C converging weakly to x and y, respectively, then f(xn, yn) → f(x, y).

(A4) f(x, ·) is convex and subdifferentiable on C for all x ∈ C.

(A5) f satisfies the Lipschitz-type condition: there exist c1 > 0 and c2 > 0 such that, for every x, y, z ∈ C,

f(x, y) + f(y, z) ≥ f(x, z) − c1‖y − x‖² − c2‖z − y‖².

It is well known that if f satisfies the properties (A1)-(A4), then the set SolEP(f, C) of solutions of the equilibrium problem is closed and convex (see, for example, [97]).

Remark 3.1.1 A first example of a function f satisfying assumption (A5) is given by

f(x, y) = ⟨F x, y − x⟩ for every x, y ∈ C,

where F : C → H is Lipschitz continuous on C (with constant L > 0) [61]. In that example, c1 = c2 = L/2. Another example, related to the Cournot-Nash equilibrium model, is described in [97, p. 768]. The function f : C × C → R is defined, for every x, y ∈ C, by

f(x, y) = ⟨F x + Qy + q, y − x⟩,

with C = {x ∈ R^n : Ax ≤ b}, F : C → R^n, Q ∈ R^{n×n} a symmetric positive semidefinite matrix, and q ∈ R^n. If F is Lipschitz continuous on C (with constant L > 0), then f satisfies (A5) with c1 > 0, c2 > 0 such that 2√(c1c2) ≥ L + ‖Q‖.

Furthermore, we also suppose that the mapping S satisfies the condition:

(S1) S is a ξ-strict pseudo-contractive mapping for some ξ ∈ [0, 1).

In this section, we also suppose that the sequences of parameters {αn}, {βn}, and {λn} satisfy the conditions:

(P1) {αn} ⊂ [0, c] for some c ∈ (0, 1); {βn} ⊂ [d, b] for some b, d with ξ < d ≤ b < 1; and {λn} ⊂ [λmin, λmax] for some 0 < λmin ≤ λmax < min{1/(2c1), 1/(2c2)}.

Now, let {xn}, {yn}, {zn}, and {tn} be the sequences generated by the combination of the extragradient method and the fixed point method described at the beginning of this section. These sequences satisfy the following properties:

Proposition 3.1.1 [1, Lemma 3.1] For every x∗ ∈ SolEP(f, C) and every n ∈ ℕ, one has

(i) ⟨xn − yn, y − yn⟩ ≤ λn f(xn, y) − λn f(xn, yn) for every y ∈ C;

(ii) ‖zn − x∗‖² ≤ ‖xn − x∗‖² − (1 − 2λn c1)‖yn − xn‖² − (1 − 2λn c2)‖zn − yn‖².

Proposition 3.1.2 For every x∗ ∈ SolEP(f, C) ∩ Fix(S) and every n ∈ ℕ, one has

‖tn − x∗‖² ≤ ‖xn − x∗‖² − (1 − αn)(1 − 2λn c1)‖yn − xn‖² − (1 − αn)(1 − 2λn c2)‖zn − yn‖² − (1 − αn)(1 − βn)(βn − ξ)‖Szn − zn‖².

Proof. Using the equality

‖tx + (1 − t)y‖² = t‖x‖² + (1 − t)‖y‖² − t(1 − t)‖x − y‖²,

valid for any t ∈ [0, 1] and for any x, y ∈ H (Lemma 2.2.2), we obtain successively

where we have used the ξ-strict pseudo-contraction property of the mapping S to get the last inequality. Finally, it remains to apply Proposition 3.1.1 (ii) to obtain the announced result.

In order to obtain the strong convergence of the sequence {xn} generated by the combination of the extragradient method and the fixed point method, we can use Lemma 2.2.1 by setting K = SolEP(f, C) ∩ Fix(S) and u = x0. Furthermore, we impose that the sequence {xn} generated by our algorithms satisfies, for all n ∈ ℕ, the inequality

‖xn − x0‖ ≤ ‖xn+1 − x0‖ ≤ ‖x̃0 − x0‖, (3.3)

where x̃0 = P_{SolEP(f,C)∩Fix(S)} x0. In that case, the sequence {‖xn − x0‖} is convergent and the sequence {xn} is bounded. These properties will be useful to prove that any weak limit point of {xn} belongs to SolEP(f, C) ∩ Fix(S).

In order to construct a sequence {xn} satisfying (3.3), we consider an outer approximation method; that is, we construct a sequence {Ωn} of subsets of C such that

Ωn ⊃ SolEP(f, C) ∩ Fix(S) ∀n ∈ ℕ and P_{Ωn} x0 → x̃0 as n → ∞,

and we set xn+1 = P_{Ωn} x0. For that purpose, starting from the equality ‖tn − z‖² − ‖xn − z‖² = ‖tn − xn‖² + 2⟨tn − xn, xn − z⟩, we define, for every n ∈ ℕ, the sets

Cn = {z ∈ C : ‖tn − z‖ ≤ ‖xn − z‖} and Dn = {z ∈ C : ⟨xn − z, x0 − xn⟩ ≥ 0}. (3.4)

Proposition 3.1.3 For every n ∈ ℕ, the sets Cn and Dn defined above are closed and convex. Furthermore, when SolEP(f, C) ∩ Fix(S) ≠ ∅ and xn+1 = P_{Cn∩Dn} x0, one has

SolEP(f, C) ∩ Fix(S) ⊂ Cn ∩ Dn.

Proof. Obviously, Cn and Dn are closed and Dn is convex for every n ∈ ℕ. Since we can write Cn in the form

Cn = {z ∈ C : ‖tn − xn‖² + 2⟨tn − xn, xn − z⟩ ≤ 0},

we see immediately that Cn is also convex for every n ∈ ℕ. Furthermore, by Proposition 3.1.2, we have that SolEP(f, C) ∩ Fix(S) ⊂ Cn for every n ∈ ℕ.

Next, we prove, by induction on n, that SolEP(f, C) ∩ Fix(S) ⊂ Cn ∩ Dn. For n = 0, we have D0 = C and SolEP(f, C) ∩ Fix(S) ⊂ C0 ∩ D0. Now, suppose that for some n ∈ ℕ, SolEP(f, C) ∩ Fix(S) ⊂ Cn ∩ Dn. Since SolEP(f, C) ∩ Fix(S) is nonempty, Cn ∩ Dn is a nonempty closed convex subset of C. So, xn+1 = P_{Cn∩Dn} x0 is well defined, and ⟨xn+1 − z, x0 − xn+1⟩ ≥ 0 holds for every z ∈ Cn ∩ Dn. Since SolEP(f, C) ∩ Fix(S) ⊂ Cn ∩ Dn, we have, in particular, ⟨xn+1 − z, x0 − xn+1⟩ ≥ 0 for every z ∈ SolEP(f, C) ∩ Fix(S), and consequently, SolEP(f, C) ∩ Fix(S) ⊂ Dn+1. Therefore, we obtain SolEP(f, C) ∩ Fix(S) ⊂ Cn+1 ∩ Dn+1.

The inequalities (3.3) being satisfied, the sequence {xn} defined by xn+1 = P_{Cn∩Dn} x0, for every n ∈ ℕ, is also bounded, and to obtain the strong convergence of that sequence to the projection of x0 onto SolEP(f, C) ∩ Fix(S), it is sufficient, thanks to Proposition 3.1.3, to prove that every weak limit point of {xn} belongs to SolEP(f, C) ∩ Fix(S). This method is known in the literature as the hybrid projection method [47]. It is this method that we will use in each of our algorithms to get the strong convergence of the iterates.

Combining the hybrid projection method described above with the extragradient method, we obtain the following basic algorithm:

Algorithm 3.1.1

Step 0. Choose the sequences {αn} ⊂ [0, 1), {βn} ⊂ (0, 1), and {λn} ⊂ (0, 1] satisfying (P1).

Step 1. Let x0 ∈ C. Set n = 0.

Step 2. Solve successively the strongly convex programs

yn = argmin_{y∈C} {λn f(xn, y) + (1/2)‖y − xn‖²},
zn = argmin_{y∈C} {λn f(yn, y) + (1/2)‖y − xn‖²}.

Step 3. Compute tn = αn xn + (1 − αn)[βn zn + (1 − βn)Szn]. If yn = xn and tn = xn, then stop: xn ∈ SolEP(f, C) ∩ Fix(S).

Step 4. Compute xn+1 = P_{Cn∩Dn} x0, with Cn and Dn defined in (3.4).

Step 5. Set n := n + 1, and go to Step 2.

Before proving the strong convergence of the iterates generated by Algorithm 3.1.1, we justify the stopping criterion in the next proposition.

Proposition 3.1.4 If yn = xn, then xn ∈ SolEP(f, C). If yn = xn and tn = xn, then xn ∈ SolEP(f, C) ∩ Fix(S).

Proof. When yn = xn, it follows from Proposition 3.1.1 (i) that

0 ≤ λn f(xn, y) − λn f(xn, xn) = λn f(xn, y)

holds for every y ∈ C. But this means that xn ∈ SolEP(f, C).

On the other hand, when yn = xn and tn = xn, we have that zn = xn, and thus

xn = αn xn + (1 − αn)[βn xn + (1 − βn)Sxn].

Since 1 − αn > 0 and 1 − βn > 0, it follows that Sxn = xn, i.e., xn ∈ Fix(S).

We can now prove the strong convergence of the sequence {xn} generated by Algorithm 3.1.1 to the projection of x0 onto SolEP(f, C) ∩ Fix(S).

Theorem 3.1.1 Let C be a nonempty closed convex subset of H. Let f be a function from C × C into R satisfying conditions (A1)-(A5), and let S be a mapping from C to C satisfying condition (S1) and such that SolEP(f, C) ∩ Fix(S) ≠ ∅. Suppose that the sequences {αn}, {βn}, and {λn} satisfy the conditions (P1). Then the sequence {xn} generated by Algorithm 3.1.1 converges strongly to the projection of x0 onto the set SolEP(f, C) ∩ Fix(S).

Proof. Let {xn} be the infinite sequence generated by Algorithm 3.1.1. Since we use the hybrid projection method, it follows from our previous discussion that the sequence {xn} generated by Algorithm 3.1.1 satisfies the inequalities (3.3) and thus is bounded. Consequently, from Lemma 2.2.1, the strong convergence of the sequence {xn} to the projection of x0 onto SolEP(f, C) ∩ Fix(S) holds, provided that every weak limit point of {xn} is an element of SolEP(f, C) ∩ Fix(S). So, let x̄ be a weak limit point of {xn}, and suppose that x_{n_i} ⇀ x̄. Since C is closed and convex, it is also weakly closed, and thus x̄ ∈ C. The proof of x̄ ∈ SolEP(f, C) ∩ Fix(S) is done in several steps.

Step 1: ‖xn+1 − xn‖ → 0, ‖xn − tn‖ → 0, ‖xn − yn‖ → 0, ‖xn − zn‖ → 0, ‖Szn − zn‖ → 0.

The sequence {‖xn − x0‖}, being nondecreasing and bounded thanks to (3.3), is convergent to some a ≥ 0. But then, since xn+1 ∈ Dn, it follows from (3.4) that, for every n ∈ ℕ,

‖xn+1 − xn‖² ≤ ‖xn+1 − x0‖² − ‖xn − x0‖², (3.5)

and thus ‖xn+1 − xn‖ → 0, because the right-hand side of (3.5) tends to zero. Now, ‖tn − xn+1‖ ≤ ‖xn − xn+1‖ because xn+1 ∈ Cn. Hence

‖xn − tn‖ ≤ ‖xn − xn+1‖ + ‖xn+1 − tn‖ ≤ 2‖xn − xn+1‖.

Since ‖xn − xn+1‖ → 0, we deduce that ‖xn − tn‖ → 0.

Next, let x∗ ∈ SolEP(f, C) ∩ Fix(S). Then, using Proposition 3.1.2, we can write, for every n ∈ ℕ, that

(1 − αn)(1 − 2λn c1)‖yn − xn‖² ≤ (‖xn − x∗‖ + ‖tn − x∗‖)‖xn − tn‖,
(1 − αn)(1 − 2λn c2)‖zn − yn‖² ≤ (‖xn − x∗‖ + ‖tn − x∗‖)‖xn − tn‖,
(1 − αn)(1 − βn)(βn − ξ)‖Szn − zn‖² ≤ (‖xn − x∗‖ + ‖tn − x∗‖)‖xn − tn‖,

because ‖xn − x∗‖ − ‖tn − x∗‖ ≤ ‖xn − tn‖.

Since, for every n ∈ ℕ,

1 − αn ≥ 1 − c > 0, 1 − 2λn c1 ≥ 1 − 2λmax c1 > 0, 1 − 2λn c2 ≥ 1 − 2λmax c2 > 0, 1 − βn ≥ 1 − b > 0, βn − ξ ≥ d − ξ > 0,

and since the sequences {xn} and {tn} are bounded and ‖xn − tn‖ → 0, it follows that

‖yn − xn‖ → 0, ‖zn − yn‖ → 0, ‖zn − xn‖ → 0, and ‖Szn − zn‖ → 0.

Step 2: x̄ ∈ SolEP(f, C).

Since x_{n_i} ⇀ x̄ and ‖xn − yn‖ → 0, we have that y_{n_i} ⇀ x̄. On the other hand, using Proposition 3.1.1 (i), we have, for every y ∈ C and every i ∈ ℕ, that

⟨x_{n_i} − y_{n_i}, y − y_{n_i}⟩ ≤ λ_{n_i} f(x_{n_i}, y) − λ_{n_i} f(x_{n_i}, y_{n_i}). (3.6)

Since ‖x_{n_i} − y_{n_i}‖ → 0 and y − y_{n_i} ⇀ y − x̄ as i → ∞, and since, for every i ∈ ℕ, 0 < λmin ≤ λ_{n_i} ≤ λmax, we obtain, after taking the limit in (3.6), that

f(x̄, y) ≥ 0 for every y ∈ C,

i.e., x̄ ∈ SolEP(f, C).

Step 3: x̄ ∈ Fix(S).

Since x_{n_i} ⇀ x̄ and ‖xn − zn‖ → 0, we have that z_{n_i} ⇀ x̄. On the other hand, we know that ‖Sz_{n_i} − z_{n_i}‖ → 0. Consequently, the mapping I − S being demiclosed at zero, it follows immediately that Sx̄ − x̄ = 0, i.e., x̄ ∈ Fix(S).

3.2 Extragradient algorithms with linesearches

The extragradient-type algorithm considered in the previous section has been proven to be strongly convergent under the Lipschitz-type condition (A5). This condition depends on two positive parameters c1 and c2 which, in some cases, are unknown or difficult to approximate. In this section, we modify the second step of the extragradient iteration, i.e., the computation of zn, by including a linesearch and a projection onto a hyperplane. Considering two different projections, we obtain two algorithms whose strong convergence can be established without assuming the Lipschitz-type condition (A5). However, in compensation, we have to slightly reinforce assumption (A3). More precisely, in this section, we suppose that the function f satisfies conditions (A1), (A2), (A4), as well as the following condition:

(A3bis) f is jointly weakly continuous on the product ∆ × ∆, where ∆ is an open convex set containing C, in the sense that if x, y ∈ ∆ and {xn} and {yn} are two sequences in ∆ converging weakly to x and y, respectively, then f(xn, yn) → f(x, y).

Furthermore, we also suppose that the mapping S satisfies condition (S1) and that the sequences of parameters {αn}, {βn}, and {λn} satisfy suitable conditions analogous to (P1).

Under these assumptions, we will obtain an algorithm strongly converging to a point x∗ ∈ SolEP(f, C) ∩ Fix(S) under the same assumptions as the ones used by the proximal-type methods [22, 53, 54, 81, 86, 92]. However, our subproblems, being strongly convex minimization problems, seem easier to solve than the proximal subproblem (2.9).

Our first extragradient-type algorithm with a linesearch can be expressed as follows:

Algorithm 3.2.1

Step 0. Choose α ∈ (0, 2), γ ∈ (0, 1) and the sequences {αn} ⊂ [0, 1), {βn} ⊂ (0, 1) and {λn} ⊂ (0, 1).

Step 1. Let x0 ∈ C. Set n = 0.

Step 2. Solve the strongly convex program

min_{y∈C} {λn f(xn, y) + (1/2)‖y − xn‖²}

to obtain its unique optimal solution yn.

Step 3. If yn = xn, then set zn = xn. Otherwise:

Step 3.1. Find the smallest nonnegative integer m such that

f(z_{n,m}, xn) − f(z_{n,m}, yn) ≥ (α/(2λn))‖xn − yn‖², where z_{n,m} = (1 − γ^m)xn + γ^m yn,

and set zn = z_{n,m}.

Step 4. If yn = xn, set σn = 0 and wn = xn. Otherwise, take gn ∈ ∂₂f(zn, xn), set σn = f(zn, xn)/‖gn‖², and compute wn = P_C(xn − σn gn).

Step 5. Compute tn = αn xn + (1 − αn)[βn wn + (1 − βn)Swn]. If yn = xn and tn = xn, then stop: xn ∈ SolEP(f, C) ∩ Fix(S).

Step 6. Compute xn+1 = P_{Cn∩Dn} x0, where Cn = {z ∈ C : ‖tn − z‖ ≤ ‖xn − z‖} and Dn = {z ∈ C : ⟨xn − z, x0 − xn⟩ ≥ 0}.

Step 7. Set n := n + 1, and go to Step 2.
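The backtracking step 3.1 is easy to isolate. The following sketch (assuming f, xn, yn and λn are given; parameter names and default values are illustrative) implements it and checks it on a bifunction of the form f(x, y) = ⟨F x, y − x⟩ used earlier.

```python
import numpy as np

# A sketch of the Armijo backtracking step (Step 3.1): gamma in (0, 1) and
# alpha in (0, 2) as in Step 0; m_max is a safety cap for the loop.
def armijo_linesearch(f, x, y, lam, alpha=1.0, gamma=0.5, m_max=60):
    """Return z = (1 - gamma^m) x + gamma^m y for the smallest m >= 0 with
    f(z, x) - f(z, y) >= (alpha / (2 lam)) ||x - y||^2."""
    target = alpha / (2.0 * lam) * float(np.dot(x - y, x - y))
    for m in range(m_max):
        z = (1.0 - gamma**m) * x + gamma**m * y
        if f(z, x) - f(z, y) >= target:
            return z
    raise RuntimeError("linesearch did not terminate (check assumptions on f)")

# Toy check with f(x, y) = <F x, y - x>, F affine with skew-symmetric part:
F = lambda x: np.array([[0.0, 1.0], [-1.0, 0.0]]) @ x + np.array([1.0, -1.0])
f = lambda x, y: float(F(x) @ (y - x))
x = np.array([1.0, 1.0]); lam = 0.5
y = np.clip(x - lam * F(x), -1.0, 1.0)      # the extragradient point y_n
print(armijo_linesearch(f, x, y, lam))
```

Proposition 3.2.3 below guarantees that, under the assumptions of this section, the search terminates whenever yn ≠ xn.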

The next proposition will be used in the proof of the convergence theorem. It is an infinite-dimensional version of Theorem 24.5 in [83].

Proposition 3.2.1 Let f : ∆ × ∆ → R be a function satisfying conditions (A3bis) and (A4). Let x̄, z̄ ∈ ∆ and let {xn} and {zn} be two sequences in ∆ converging weakly to x̄ and z̄, respectively. Then, for any ε > 0, there exist η > 0 and nε ∈ ℕ such that

∂₂f(zn, xn) ⊂ ∂₂f(z̄, x̄) + (ε/η)B

for every n ≥ nε, where B denotes the closed unit ball in H.

Proof. For every n ∈ ℕ, consider the convex functions hn := f(zn, ·) and h := f(z̄, ·). Let also d ∈ H and µ > h′(x̄; d). Then, by definition of the directional derivative, there exists λ > 0 such that

(h(x̄ + λd) − h(x̄)) / λ < µ.

Since f is jointly weakly continuous on ∆ × ∆ and since xn + λd ⇀ x̄ + λd and xn ⇀ x̄, we obtain that hn(xn + λd) → h(x̄ + λd) and hn(xn) → h(x̄). Hence, for n sufficiently large, we have

The next proposition will play the same role as Propositions 3.1.1 and 3.1.2, but for Algorithm 3.2.1.

Proposition 3.2.2 For every x∗ ∈ SolEP(f, C) ∩ Fix(S) and all n ∈ ℕ, one has

(i) ‖wn − x∗‖² ≤ ‖xn − x∗‖² − σn²‖gn‖²;

(ii) ‖tn − x∗‖² ≤ ‖xn − x∗‖² − (1 − αn)σn²‖gn‖² − (1 − αn)(1 − βn)(βn − ξ)‖Swn − wn‖².

Proof. (i) When yn = xn, we have zn = xn and σn = 0, so the inequality is trivial. Otherwise, combining (3.8) and (3.9), we easily deduce (i).

(ii) Let x∗ ∈ SolEP(f, C) ∩ Fix(S). Then, the inequality (3.2) can be easily obtained with wn instead of zn:

‖tn − x∗‖² ≤ αn‖xn − x∗‖² + (1 − αn)‖wn − x∗‖² − (1 − αn)(1 − βn)(βn − ξ)‖Swn − wn‖².

Combining this inequality and (i), we directly obtain (ii).

Proposition 3.2.3 Suppose that yn ≠ xn for some n ∈ ℕ. Then:

(i) the linesearch corresponding to xn and yn (Step 3.1) is well defined;

(ii) f(zn, xn) > 0;

(iii) 0 ∉ ∂₂f(zn, xn).

Proof. We only prove the first statement; the proof of the other ones can be found in [97, Lemma 4.5]. Let n ∈ ℕ and suppose, to get a contradiction, that the following inequality holds for every integer m ≥ 0:

f(z_{n,m}, xn) − f(z_{n,m}, yn) < (α/(2λn))‖xn − yn‖², (3.10)

where z_{n,m} = (1 − γ^m)xn + γ^m yn and γ ∈ (0, 1). Then {z_{n,m}}m converges strongly to xn as m → ∞, and thus also weakly. Since f(·, x) is weakly continuous on an open set ∆ ⊃ C for every x ∈ ∆, it follows that f(z_{n,m}, xn) → f(xn, xn) = 0 and that f(z_{n,m}, yn) → f(xn, yn). So, taking the limit on m in (3.10) yields

−f(xn, yn) ≤ (α/(2λn))‖yn − xn‖². (3.11)

Now, from Proposition 3.1.1 (i) with y = xn, we can write that

‖yn − xn‖² ≤ −λn f(xn, yn).

Combining this inequality with (3.11), we obtain that (1 − α/2)‖yn − xn‖² ≤ 0. Since α ∈ (0, 2), we deduce that yn = xn, which gives rise to a contradiction, because we have supposed that yn ≠ xn. Consequently, the linesearch is well defined.

As a consequence of this proposition, σn is well defined and positive when yn ≠ xn (see Step 4 of Algorithm 3.2.1). In the next proposition, we justify the stopping criterion.

Proposition 3.2.4 If yn = xn, then xn ∈ SolEP(f, C). If yn = xn and tn = xn, then wn = xn and xn ∈ SolEP(f, C) ∩ Fix(S).

Proof. When yn = xn, it follows from Proposition 3.1.1 (i) that

0 ≤ λn f(xn, y) − λn f(xn, xn) = λn f(xn, y)

holds for every y ∈ C. But this means that xn ∈ SolEP(f, C).

On the other hand, when yn = xn and tn = xn, we have that wn = xn, because σn = 0 and xn ∈ C. Hence

xn = αn xn + (1 − αn)[βn xn + (1 − βn)Sxn].

Since, by assumption, 1 − αn > 0 and 1 − βn > 0, it follows that Sxn = xn, i.e., xn ∈ Fix(S).
