DSpace at VNU: One step from DC optimization to DC mixed variational inequalities



Publisher: Taylor & Francis


Optimization: A Journal of Mathematical Programming and Operations Research

Publication details, including instructions for authors and subscription information:

http://www.tandfonline.com/loi/gopt20

One step from DC optimization to DC mixed variational inequalities

Le Dung Muu (a) & Tran Dinh Quoc (b)

(a) Hanoi Institute of Mathematics, VAST, 18 Hoang Quoc Viet road, Cau Giay district, Hanoi, Vietnam

(b) Hanoi University of Science, 334 Nguyen Trai road, Thanh Xuan district, Hanoi, Vietnam

Published online: 12 Feb 2010

To cite this article: Le Dung Muu & Tran Dinh Quoc (2010) One step from DC optimization to DC mixed variational inequalities, Optimization: A Journal of Mathematical Programming and Operations Research, 59:1, 63-76, DOI: 10.1080/02331930903500282

To link to this article: http://dx.doi.org/10.1080/02331930903500282


Vol. 59, No. 1, January 2010, 63–76

One step from DC optimization to DC mixed variational inequalities

Le Dung Muu (a)* and Tran Dinh Quoc (b)

(a) Hanoi Institute of Mathematics, VAST, 18 Hoang Quoc Viet road, Cau Giay district, Hanoi, Vietnam; (b) Hanoi University of Science, 334 Nguyen Trai road, Thanh Xuan district, Hanoi, Vietnam

(Received 10 April 2008; final version received 15 March 2009)

We apply the proximal point method to mixed variational inequalities by using DC decompositions of the cost function. An estimation for the iterative sequence is given and then applied to prove the convergence of the obtained sequence to a stationary point. A linear convergence rate is achieved when the cost function is strongly convex. For the nonconvex case, global algorithms are proposed to search for a global equilibrium point. A Cournot–Nash oligopolistic market model with concave cost function, which motivates our consideration, is presented.

Keywords: mixed variational inequality; splitting proximal point method; DC decomposition; local and global equilibria; Cournot–Nash model

1 Introduction

Let ∅ ≠ C ⊆ R^n be a closed convex subset, let F be a mapping from R^n to C and let φ be a real-valued (not necessarily convex) function defined on R^n. We consider the following mixed variational inequality problem (MVIP):

Find x* ∈ C such that F(x*)^T (y − x*) + φ(y) − φ(x*) ≥ 0 for all y ∈ C. (1)

We call such a point x* a global solution, in contrast to a local solution, defined as a point x* ∈ C that satisfies

F(x*)^T (y − x*) + φ(y) − φ(x*) ≥ 0 for all y ∈ C ∩ U, (2)

where U is an open neighbourhood of x*. These points are sometimes referred to as local and global equilibrium points, respectively. Note that when φ is convex on C, a local solution is a global one; when φ is not convex, a local solution may fail to be global. MVIPs of the form (1) are extensively studied in the literature. Results on existence, stability and solution approaches when φ is convex have been obtained in many research papers (see, e.g. [2,5,7,8,10,12,19] and the references quoted therein). However, when φ is nonconvex, these results may no longer hold.
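To make the local/global distinction concrete, the following sketch (a toy instance of our own, not taken from the paper) exhibits a point that satisfies the local condition (2) on a small neighbourhood U but violates the global condition (1); we take F ≡ 0, so solutions of (1) are exactly the minimizers of φ over C:

```python
import numpy as np

# Toy instance (illustrative): C = [-2, 2], F == 0 and the DC cost
# phi(x) = x^4/4 - x^2/2 + 0.1*x. With F == 0, problem (1) reduces to
# minimizing phi over C, so local/global solutions are local/global minima.
phi = lambda x: x**4 / 4 - x**2 / 2 + 0.1 * x

def gap(x, radius):
    # Minimum over y in (C intersect U) of F(x)*(y - x) + phi(y) - phi(x),
    # with U = (x - radius, x + radius); here the F-term vanishes.
    y = np.linspace(max(-2.0, x - radius), min(2.0, x + radius), 4001)
    return (phi(y) - phi(x)).min()

x_loc = 0.945   # approximate root of phi'(x) = x^3 - x + 0.1 (local minimum)
print(gap(x_loc, 0.3))  # approximately 0: x_loc is a local solution of (1)
print(gap(x_loc, 4.0))  # strictly negative: x_loc is not a global solution
```

The other well of φ, near x ≈ −1.04, has a lower cost, so the local equilibrium at x_loc ≈ 0.945 is not a global one.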

*Corresponding author. Email: ldmuu@math.ac.vn

ISSN 0233–1934 print/ISSN 1029–4945 online

© 2010 Taylor & Francis

DOI: 10.1080/02331930903500282

The proximal point method was first introduced by Martinet [9] for variational inequalities and then extended by Rockafellar [18] to finding a zero point of a maximal monotone operator. Sun et al. [20] applied the proximal point method to DC optimization. It is observed that, with a suitable DC decomposition, the DC algorithm introduced by Pham [13] for DC optimization problems becomes a proximal point algorithm. Recently, DC optimization has been successfully applied to many practical problems (see [1,14–16] and the references therein).

This work is motivated by the well-known Cournot–Nash oligopolistic market model, together with the observation that the cost function is not always linear or convex, but can be concave as the amount of production increases. In this article, we further apply the proximal point method to the mixed variational inequality (1) by using a DC decomposition of the cost function φ. The DC decomposition φ = g − h allows us to develop a splitting proximal point algorithm to find a stationary point of (1). This splitting algorithm is useful when g is a convex function for which the resulting convex subproblem is easy to minimize, since the resolvent is defined by using only the subgradient of g.

The rest of the article is organized as follows. In Section 2, a splitting proximal point method for mixed variational inequalities with DC cost functions is proposed. An estimation for the iterative sequence to a stationary point is given. It is then used to prove convergence to a stationary point when the cost function happens to be convex. A linear convergence rate is achieved when the cost function is strongly convex. In order to handle nonconvex cases, global algorithms are presented in Section 3 to search for a global solution. We close the article with the Cournot–Nash oligopolistic market model, which gives evidence for our consideration.

2 The proximal point method to DC mixed variational inequality

In this section, we first investigate properties of local and global solutions to MVIP (1) where φ is a DC function. Next, we extend the proximal point method to find a stationary point of this problem. Finally, we prove convergence results when φ happens to be convex and F is strongly monotone.

2.1 Condition for equilibrium

As usual, for problem (1), we call C the feasible domain, F the cost operator and φ the cost function. Since C is closed and convex, it is easy to see that any local solution to (1) is a global one provided that φ is convex on C. Motivated by this fact, we call problem (1) a convex mixed variational inequality when φ is convex, in contrast to nonconvex mixed variational inequalities, where the cost function is not convex. Let us denote

N_C := {(x, U): x ∈ C, U is a neighbourhood of x},

and define the mapping S : N_C → 2^C and the function m : N_C → R by taking

S(x, U) := argmin{ F(x)^T (y − x) + φ(y) : y ∈ C ∩ U },

m(x, U) := min{ F(x)^T (y − x) + φ(y) − φ(x) : y ∈ C ∩ U },

respectively. As usual, we refer to m(x, U) as a local gap function for problem (1). The following proposition gives necessary and sufficient conditions for a point to be a solution (local or global) to (1).

PROPOSITION 2.1 Suppose that S(x, U) ≠ ∅ for every (x, U) ∈ N_C. Then the following statements are equivalent:

(a) x* is a local solution to (1);
(b) x* ∈ C and x* ∈ S(x*, U);
(c) x* ∈ C and m(x*, U) = 0.

Proof We first prove that (a) is equivalent to (b). Suppose x* ∈ C and x* ∈ S(x*, U). Then

0 = F(x*)^T (x* − x*) + φ(x*) − φ(x*) ≤ F(x*)^T (y − x*) + φ(y) − φ(x*) for all y ∈ C ∩ U.

Hence x* is a local solution to problem (1). Conversely, if

F(x*)^T (y − x*) + φ(y) − φ(x*) ≥ 0 for all y ∈ C ∩ U, (5)

then it is clear that x* ∈ S(x*, U).

Observe that m(x, U) ≤ 0 for every x ∈ C ∩ U. Thus x* ∈ C ∩ U and m(x*, U) = 0 if and only if F(x*)^T (y − x*) + φ(y) − φ(x*) ≥ 0 for all y ∈ C ∩ U, which proves that (a) is equivalent to (c). ∎

Clearly, if, in Proposition 2.1, U contains C, then x* is a global solution to (1). Unlike convex mixed variational inequalities (including convex optimization and variational inequality problems), a DC mixed variational inequality may fail to have a solution even when all the functions involved are continuous and the feasible domain is compact. For example, if we take C := [−1, 1] ⊂ R, F(x) = x and φ(x) = −x^2 (a concave function), then problem (1) has no solution. Conditions for the existence of solutions of MVIPs lacking convexity have been considered in some recent papers (see, e.g. [7,12]). In this article, however, we focus only on solution approaches to (1).
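The non-existence claimed for this example can be checked numerically. The sketch below (our own illustration) evaluates, for each candidate x on a grid of C = [−1, 1], the worst value of the left-hand side of (1) over y ∈ C; by Proposition 2.1(c), a global solution would make this worst value equal to 0:

```python
import numpy as np

# Example from the text: C = [-1, 1], F(x) = x, phi(x) = -x^2 (concave).
C = np.linspace(-1.0, 1.0, 2001)

def worst_gap(x):
    # min over y in C of F(x)*(y - x) + phi(y) - phi(x); a global solution
    # x* must make this minimum equal to 0 (Proposition 2.1(c)).
    vals = x * (C - x) - C**2 + x**2
    return vals.min()

gaps = np.array([worst_gap(x) for x in C])
print(gaps.max())  # about -1: every candidate violates (1), so no solution
```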

We denote by Γ(C) the class of proper, lower semicontinuous, subdifferentiable convex functions on C, and suppose that both convex functions g and h belong to Γ(C). Moreover, since φ(x) = (g(x) + g_1(x)) − (h(x) + g_1(x)) for an arbitrary function g_1 ∈ Γ(C), we may assume that both g and h are strongly convex on C.

By Proposition 2.1, x* is a solution to (1) if and only if x* solves the following optimization problem:

min{ F(x*)^T (y − x*) + g(y) − h(y) : y ∈ C }.

Motivated by this fact, we can borrow the concept of a stationary point from optimization for MVIP (1).

Definition 2.2 A point x ∈ C is called a stationary point of problem (1) if

0 ∈ F(x) + ∂g(x) − ∂h(x) + N_C(x), (6)

where

N_C(x) := { w : w^T (y − x) ≤ 0 for all y ∈ C }

denotes the (outward) normal cone of C at x ∈ C, and ∂g(x) and ∂h(x) are the subdifferentials of g and h at x, respectively.

Since N_C(x) is a cone, for every c > 0, inclusion (6) is equivalent to

0 ∈ c( F(x) + ∂g(x) − ∂h(x) ) + N_C(x). (7)

Let us define g_1(x) := g(x) + ι_C(x), where ι_C is the indicator function of C. Then, applying the well-known Moreau–Rockafellar theorem, we have ∂g_1(x) = ∂g(x) + ∂ι_C(x). Thus, by definition, x is a stationary point if and only if

0 ∈ c( F(x) + ∂g(x) − ∂h(x) ) + ∂ι_C(x),

where c > 0 is referred to as a regularization parameter in the algorithm to be described below.

From Proposition 2.1 it follows, as in optimization, that every local solution to problem (1) is a stationary point. Since both g and ι_C are proper, convex and closed, so is the function g_1. Thus ∂g_1(x) = ∂g(x) + N_C(x) for every x ∈ C.

PROPOSITION 2.3 A necessary and sufficient condition for x to be a stationary point of problem (1) is that

x ∈ (I + c ∂g_1)^{-1} ( x − c F(x) + c ∂h(x) ), (8)

where c > 0 and I stands for the identity mapping.

Proof Since g_1 is proper, closed and convex, (I + c ∂g_1)^{-1} is single-valued and defined everywhere [18]. Hence x satisfies (8) if and only if x − c F(x) + c v(x) ∈ (I + c ∂g_1)(x) for some v(x) ∈ ∂h(x). Since N_C(x) is a cone and ∂g_1(x) = ∂g(x) + ∂ι_C(x) = ∂g(x) + N_C(x), the inclusion x − c F(x) + c v(x) ∈ (I + c ∂g_1)(x) is equivalent to

0 ∈ F(x) + ∂g(x) − ∂h(x) + N_C(x),

which proves (8). ∎

2.2 The algorithm and its convergence

If we denote the right-hand side of (8) by Z(x), then inclusion (8) becomes x ∈ Z(x). Proposition 2.3 thus suggests that finding a stationary point of (1) amounts to finding a fixed point of the splitting proximal point mapping Z. Following the framework of proximal point methods, we construct an iterative sequence as follows:

Take an arbitrary x^0 ∈ C and set k := 0.
For each k = 0, 1, ..., given x^k, compute x^{k+1} by taking

x^{k+1} = (I + c_k ∂g_1)^{-1} ( x^k − c_k F(x^k) + c_k v(x^k) ), (9)

where v(x^k) ∈ ∂h(x^k).

If we denote y^k := x^k − c_k F(x^k) + c_k v(x^k), then finding x^{k+1} reduces to solving the strongly convex programming problem

min{ g(x) + (1/(2c_k)) ||x − y^k||^2 : x ∈ C }. (10)

Indeed, by the well-known optimality condition for convex programming, x^{k+1} is the optimal solution to the convex problem (10) if and only if

0 ∈ ∂g(x^{k+1}) + (1/c_k)(x^{k+1} − y^k) + N_C(x^{k+1}).

Note that when F ≡ 0 and h ≡ 0, the process (9) becomes the well-known proximal point algorithm for convex programming problems, whereas when F ≡ 0 it is the proximal method for DC optimization [14,18,20].
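For intuition, scheme (9) with subproblem (10) can be sketched in a few lines when the convex subproblem has a closed form. The toy instance below is our own illustration (all numbers are assumptions, not from the paper): C is a box, g(x) = (β/2)||x||^2 and h(x) = (L/2)||x||^2, so the subproblem min{g(x) + (1/(2c_k))||x − y^k||^2 : x ∈ C} is separable and solved exactly by clipping y^k/(1 + c_k β) onto the box:

```python
import numpy as np

# Illustrative data: C = [-1, 1]^2, F(x) = A x + b with A positive definite
# (cocoercive with delta = 1/lambda_max(A) = 1/3), g(x) = (beta/2)||x||^2,
# h(x) = (L/2)||x||^2, so phi = g - h is a DC function and beta >= L.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -0.5])
beta, L, c = 2.0, 1.0, 0.2        # fixed step c in (0, 2*delta)

F = lambda x: A @ x + b
grad_h = lambda x: L * x          # v^k = grad h(x^k)

def prox_step(x):
    # y^k = x^k - c*F(x^k) + c*grad_h(x^k), then solve subproblem (10):
    # min (beta/2)||z||^2 + ||z - y^k||^2/(2c) over the box; this separable
    # quadratic is minimized by clipping the unconstrained solution.
    y = x - c * F(x) + c * grad_h(x)
    return np.clip(y / (1.0 + c * beta), -1.0, 1.0)

x = np.zeros(2)
for _ in range(200):              # iteration (9)
    x = prox_step(x)

residual = np.linalg.norm(x - prox_step(x))  # fixed point of Z <=> stationary
print(x, residual)
```

Here the limit solves (I + A)x = −b, i.e. 0 ∈ F(x) + ∂g(x) − ∇h(x) + N_C(x) with the box constraint inactive.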

In order to prove the convergence of the proximal point method defined by (9), we recall the following well-known definitions (see, e.g. [7,17,21]). For a mapping Φ : C → 2^{R^n}:

Φ is said to be monotone on C if

(u − v)^T (x − y) ≥ 0 for all x, y ∈ C, u ∈ Φ(x), v ∈ Φ(y);

Φ is said to be maximal monotone if its graph is not properly contained in the graph of another monotone mapping;

Φ is said to be strongly monotone with modulus γ > 0 (shortly, γ-strongly monotone) on C if

(u − v)^T (x − y) ≥ γ ||x − y||^2 for all x, y ∈ C, u ∈ Φ(x), v ∈ Φ(y);

Φ is said to be cocoercive with modulus δ > 0 (shortly, δ-cocoercive) on C if

(u − v)^T (x − y) ≥ δ ||u − v||^2 for all x, y ∈ C, u ∈ Φ(x), v ∈ Φ(y). (11)

Clearly, if Φ is single-valued and δ-cocoercive, then it is 1/δ-Lipschitz continuous.

PROPOSITION 2.4 Suppose that the set S* of stationary points of problem (1) is nonempty, that F is δ-cocoercive, that g is strongly convex on C with modulus β > 0, and that h is L-Lipschitz differentiable on C. Then, for each x* ∈ S*, we have

λ_k ||x^k − x*||^2 − μ_k ||x^{k+1} − x*||^2 ≥ c_k (2δ − c_k) ||F(x^k) − F(x*)||^2, (12)

where λ_k = 1 + c_k L t, μ_k = 1 + 2 c_k β − c_k L / t and t > 0 is arbitrary.

Proof First we note that, since g is proper, closed and convex and C is nonempty, closed and convex, the mapping (I + c_k ∂g_1)^{-1} is single-valued and defined everywhere for every c_k > 0 [18]. Thus the sequence {x^k} constructed by (9) is well defined. It follows from (9) that

x^{k+1} = x^k − c_k F(x^k) + c_k v^k − c_k z^{k+1}, (13)

where v^k = ∇h(x^k) and z^{k+1} ∈ ∂g_1(x^{k+1}) = ∂g(x^{k+1}) + N_C(x^{k+1}). For simplicity of notation, in the remaining expressions we write F^k for F(x^k) and F* for F(x*). By definition, if x* is a stationary point of MVIP (1), then 0 = z* + F(x*) − v*, where z* ∈ ∂g_1(x*) and v* = ∇h(x*). The cocoercivity of F on C implies that

0 ≤ (F^k − F*)^T (x^k − x*) − δ ||F^k − F*||^2 = (F^k − F*)^T (x^k − x* − c_k F^k + c_k F*) − D_k,

where D_k = (δ − c_k) ||F^k − F*||^2. Since F* = v* − z* and x^{k+1} = x^k − c_k F^k + c_k v^k − c_k z^{k+1}, we obtain from the last inequality that

0 ≤ (F^k − F*)^T (x^k − c_k F^k + c_k v^k − c_k z^{k+1} − x*) − c_k (F^k − F*)^T (v^k − v* − z^{k+1} + z*) − D_k
  = (F^k − F*)^T (x^{k+1} − x*) − c_k (F^k − F*)^T (v^k − v* − z^{k+1} + z*) − D_k. (14)

On the other hand, since g is strongly convex with modulus β > 0, the subdifferential ∂g is strongly monotone with modulus β, which implies that the mapping ∂g_1 := ∂g + N_C is also strongly monotone with modulus β. Thus, from z^{k+1} ∈ ∂g_1(x^{k+1}), we can write

(x^{k+1} − x*)^T (z^{k+1} − z*) − β ||x^{k+1} − x*||^2 ≥ 0. (15)

Adding (14) to (15), and then using z* + F(x*) = v* and (13), we get

0 ≤ (F^k − F* + z^{k+1} − z*)^T (x^{k+1} − x*) − c_k (F^k − F*)^T (v^k − v* − z^{k+1} + z*) − β ||x^{k+1} − x*||^2 − D_k
  = (x^{k+1} − x*)^T [ (v^k − v*) + (1/c_k)((x^k − x*) − (x^{k+1} − x*)) ] − c_k (F^k − F*)^T (v^k − v* − z^{k+1} + z*) − β ||x^{k+1} − x*||^2 − D_k.

Now denote x̂^k := x^k − x*, x̂^{k+1} := x^{k+1} − x*, v̂^k := v^k − v*, ẑ^{k+1} := z^{k+1} − z* and F̂^k := F^k − F*. Multiplying the last inequality by 2c_k, we can write it as

2 c_k (x̂^{k+1})^T v̂^k − 2 (x̂^{k+1})^T (x̂^{k+1} − x̂^k) − 2 c_k^2 (F̂^k)^T (v̂^k − ẑ^{k+1}) − 2 c_k β ||x̂^{k+1}||^2 − 2 c_k D_k ≥ 0. (16)

From (13), we have

||x̂^{k+1} − x̂^k||^2 = c_k^2 ||F̂^k||^2 + c_k^2 ||v̂^k − ẑ^{k+1}||^2 − 2 c_k^2 (F̂^k)^T (v̂^k − ẑ^{k+1}).

Then, using the identity

2 (x̂^{k+1})^T (x̂^{k+1} − x̂^k) = ||x̂^{k+1} − x̂^k||^2 + ||x̂^{k+1}||^2 − ||x̂^k||^2,

we obtain from (16) that

2 c_k (x̂^{k+1})^T v̂^k − (1 + 2 c_k β) ||x̂^{k+1}||^2 + ||x̂^k||^2 − c_k^2 ||F̂^k||^2 − c_k^2 ||v̂^k − ẑ^{k+1}||^2 − 2 c_k D_k ≥ 0. (17)

Since ∇h is L-Lipschitz continuous, we have ||v̂^k|| ≤ L ||x̂^k||. By the Cauchy–Schwarz inequality and the elementary bound 2ab ≤ t a^2 + b^2/t, it follows that

2 (x̂^{k+1})^T v̂^k ≤ 2 ||x̂^{k+1}|| ||v̂^k|| ≤ 2 L ||x̂^k|| ||x̂^{k+1}|| ≤ L ( t ||x̂^k||^2 + ||x̂^{k+1}||^2 / t ) for all t > 0.

Thus we can replace 2 c_k (x̂^{k+1})^T v̂^k by c_k L ( t ||x̂^k||^2 + ||x̂^{k+1}||^2 / t ) in (17) and use the definition of D_k to obtain

λ_k ||x̂^k||^2 − μ_k ||x̂^{k+1}||^2 ≥ c_k (2δ − c_k) ||F̂^k||^2 + c_k^2 ||v̂^k − ẑ^{k+1}||^2, (18)

where λ_k = 1 + c_k L t, μ_k = 1 + 2 c_k β − c_k L / t and t > 0. The proposition is proved, since (12) follows from (18) by dropping the last (nonnegative) term. ∎

The following corollary proves the convergence of the proximal point sequence {x^k} by using estimation (12).

COROLLARY 2.5 Under the assumptions of Proposition 2.4, suppose further that β ≥ L. Then the sequence {x^k} generated by (9) converges to a stationary point of problem (1). Moreover, if either β > L or F is γ-strongly monotone, then the sequence {x^k} converges linearly to a stationary point of (1).

Proof Suppose β ≥ L. Let m and M be two real numbers such that 0 < m ≤ c_k ≤ M < 2δ. Choosing t = 1, it follows from (18) that

||x^k − x*||^2 − ||x^{k+1} − x*||^2 ≥ ( m (2δ − M) / (1 + ML) ) ||F(x^k) − F(x*)||^2 ≥ 0.

Hence {x^k} is bounded and the sequence {||x^k − x*||^2} is convergent, since it is nonincreasing and bounded from below by 0. Moreover, this inequality implies that lim_{k→∞} F(x^k) = F(x*). Note that, by (13), we have

0 = lim_{k→∞} (x^{k+1} − x^k)/c_k = lim_{k→∞} (v^k − z^{k+1} − F^k) = lim_{k→∞} (v^k − z^{k+1}) − F*,

which, by the fact F* = v* − z*, yields lim_{k→∞} (v^k − v* − z^{k+1} + z*) = 0. Let x^∞ be a limit point of the bounded sequence {x^k} and let {x^k : k ∈ K} be a subsequence converging to x^∞. Since F is cocoercive, it is continuous; thus lim_{k→∞} F(x^k) = F(x*) implies F(x^∞) = F(x*). By the assumption that ∇h is L-Lipschitz, we have ||v^k − v*|| ≤ L ||x^k − x*||, so {v^k} is bounded too. By taking again a subsequence, if necessary, we may assume that {v^k : k ∈ K} converges to v^∞. Using the continuity of ∇h, we have v^∞ = ∇h(x^∞). Now we show that v^∞ − F(x^∞) ∈ ∂g_1(x^∞). To this end, let z ∈ ∂g_1(x). Then it follows from the strong monotonicity of ∂g_1 that

0 ≤ β ||x^k − x||^2 ≤ (z^k − z)^T (x^k − x)
  = (z^k − z)^T (x^k − x^∞) + (z^k − z)^T (x^∞ − x)
  = (z^k − z)^T (x^k − x^∞) + (z^k − v^{k−1} − z)^T (x^∞ − x) + (v^{k−1})^T (x^∞ − x).

Note that, from lim_{k→∞} (v^k − v* − z^{k+1} + z*) = 0, we have

z^{k+1} − v^k → z* − v* = −F(x*).

Then, taking the limit over the subsequence in the last inequality, we obtain

(v^∞ − F(x*) − z)^T (x^∞ − x) ≥ 0,

which, by the maximal monotonicity of ∂g_1 [17], implies that v^∞ − F(x*) ∈ ∂g_1(x^∞). Since F(x^∞) = F(x*), we have v^∞ − F(x^∞) ∈ ∂g_1(x^∞). Noting that v^∞ = ∇h(x^∞), we obtain 0 ∈ ∂g_1(x^∞) + F(x^∞) − ∇h(x^∞), which means that x^∞ is a stationary point of problem (1). Replacing x* by x^∞ in (12) and observing that {||x^k − x^∞||} is convergent, we conclude that the whole sequence {x^k} converges to x^∞, because it has a subsequence converging to x^∞.

Next, it follows from (12) that λ_k ||x^k − x*||^2 ≥ μ_k ||x^{k+1} − x*||^2. Thus, if L < β, then

0 < r_k := (λ_k / μ_k)^{1/2} < 1 for every k ≥ 0.

Hence ||x^{k+1} − x*|| ≤ r_k ||x^k − x*||, and since m ≤ c_k ≤ M keeps r_k bounded away from 1, the sequence {x^k} converges linearly to x*.

If F is strongly monotone with modulus γ > 0, then

||F(x^k) − F(x*)|| ||x^k − x*|| ≥ (F(x^k) − F(x*))^T (x^k − x*) ≥ γ ||x^k − x*||^2.

Consequently,

||F(x^k) − F(x*)||^2 ≥ γ^2 ||x^k − x*||^2.

Substituting this inequality into (12) and rearranging, we get

[ λ_k − γ^2 c_k (2δ − c_k) ] ||x^k − x*||^2 ≥ μ_k ||x^{k+1} − x*||^2.

Using the assumption 0 < m ≤ c_k ≤ M < 2δ and taking t = 1, we obtain from the last inequality that

[ 1 + c_k L − γ^2 c_k (2δ − c_k) ] ||x^k − x*||^2 ≥ (1 + 2 c_k β − c_k L) ||x^{k+1} − x*||^2.

Since β + γ^2 (δ − M/2) > L, we have 1 + c_k L − γ^2 c_k (2δ − c_k) < 1 + 2 c_k β − c_k L, and the linear convergence follows as before. ∎
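The linear rate in Corollary 2.5 can be observed numerically. In the one-dimensional sketch below (our own numbers, chosen so that β ≥ L and F is strongly monotone; not from the paper), the measured contraction factors stay below the bound r = sqrt(λ_k/μ_k) obtained from (12) with t = 1:

```python
import numpy as np

# Illustrative 1-D data: C = [-1, 1], F(x) = 2x + 1 (strongly monotone),
# g(x) = x^2 (beta = 2), h(x) = x^2/2 (L = 1), constant step c.
beta, L, c = 2.0, 1.0, 0.2
F = lambda x: 2.0 * x + 1.0

def step(x):
    y = x - c * F(x) + c * x              # grad h(x) = x
    return np.clip(y / (1.0 + c * beta), -1.0, 1.0)

x, xs = 0.9, []
for _ in range(40):
    x = step(x)
    xs.append(x)

x_star = -1.0 / 3.0                       # solves F(x) + g'(x) - h'(x) = 0
errs = np.abs(np.array(xs) - x_star)
ratios = errs[1:] / errs[:-1]             # observed contraction factors

lam, mu = 1.0 + c * L, 1.0 + 2.0 * c * beta - c * L
r = np.sqrt(lam / mu)                     # bound from (12) with t = 1
print(ratios.max(), r)                    # observed rate stays below the bound
```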

3 Global solution methods

In this section, we propose solution methods for finding a global solution of MVIP (1), where the cost function φ may be not convex but a DC function. The first method uses the convex envelope of the cost function to convert a nonconvex MVIP into a convex one. The second method is devoted to the case when the cost function φ is concave. In this case, a global solution is attained at an extreme point of the feasible domain. This fact suggests that outer approximation techniques, which have been widely used in global optimization, can be applied to nonconvex mixed variational inequalities. First, we recall that the convex envelope of a function ψ on a convex set C is a convex function conv ψ on C that satisfies the following conditions:

(i) conv ψ(x) ≤ ψ(x) for every x ∈ C;
(ii) if l is convex on C and l(x) ≤ ψ(x) for all x ∈ C, then l(x) ≤ conv ψ(x) for all x ∈ C.

We need the following lemma.

LEMMA 3.1 [6] Let ψ := l + ψ_1 with l an affine function. Suppose that C is a polyhedral convex set. Then the convex envelope conv ψ of ψ on C is l + conv ψ_1, where conv ψ_1 denotes the convex envelope of ψ_1 on C.
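Lemma 3.1 can be checked numerically on a sampled one-dimensional example. The sketch below (our own illustration; the helper convex_envelope is a hypothetical utility, not from the paper) computes the convex envelope of a sampled function as the lower convex hull of its graph and verifies that conv(l + ψ_1) = l + conv ψ_1 for l(x) = 2x and the concave ψ_1(x) = −x^2 on C = [−1, 1]:

```python
import numpy as np

def convex_envelope(x, f):
    # Lower convex hull of the sampled graph (monotone-chain scan over
    # increasing x), then piecewise-linear interpolation back onto the grid.
    hull = []
    for p in zip(x, f):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it is not strictly below segment hull[-2] -> p
            if (y2 - y1) * (p[0] - x1) >= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    return np.interp(x, hx, hy)

x = np.linspace(-1.0, 1.0, 401)
l = 2.0 * x        # affine part
psi1 = -x**2       # concave part: conv psi1 on [-1, 1] is the chord y = -1

lhs = convex_envelope(x, l + psi1)    # conv(l + psi1)
rhs = l + convex_envelope(x, psi1)    # l + conv psi1
print(np.max(np.abs(lhs - rhs)))      # ~0, as Lemma 3.1 predicts
```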

Using Lemma 3.1, we can prove the following proposition, which states that problem (1) is equivalent to a convex MVIP whenever it admits a solution.
