Subdifferentials of optimal value functions in parametric convex optimization problems



VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY

INSTITUTE OF MATHEMATICS

DUONG THI VIET AN

SUBDIFFERENTIALS OF OPTIMAL VALUE FUNCTIONS IN PARAMETRIC CONVEX

OPTIMIZATION PROBLEMS

DISSERTATION

SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS

FOR THE DEGREE OF DOCTOR OF PHILOSOPHY IN MATHEMATICS

HANOI - 2018


VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY

INSTITUTE OF MATHEMATICS

DUONG THI VIET AN

SUBDIFFERENTIALS OF OPTIMAL VALUE FUNCTIONS IN PARAMETRIC CONVEX

OPTIMIZATION PROBLEMS

Speciality: Applied Mathematics

Speciality code: 9 46 01 12

DISSERTATION

SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS

FOR THE DEGREE OF DOCTOR OF PHILOSOPHY IN MATHEMATICS

Supervisor: Prof. Dr.Sc. NGUYEN DONG YEN

HANOI - 2018


This dissertation was written on the basis of my research works carried out at the Institute of Mathematics, Vietnam Academy of Science and Technology, under the supervision of Prof. Nguyen Dong Yen. All the presented results have never been published by others.

May 27, 2018
The author

Duong Thi Viet An

Acknowledgments

I first learned about Variational Analysis and Optimization in 2011 when I met Prof. Nguyen Dong Yen, who was the scientific adviser of my master thesis. I have been studying under his guidance since then. I am deeply indebted to him not only for his supervision, encouragement, and support in my research, but also for his precious advice in life.

I am sincerely grateful to Assoc. Prof. Nguyen Thi Thu Thuy, who supervised my University Diploma Thesis and helped me to start my research career.

The wonderful research environment of the Institute of Mathematics, Vietnam Academy of Science and Technology, and the excellence of its staff have helped me to complete this work within the schedule. I would like to express my special appreciation to Prof. Hoang Xuan Phu, Assoc. Prof. Ta Duy Phuong, Assoc. Prof. Phan Thanh An, and other members of the weekly seminar at the Department of Numerical Analysis and Scientific Computing, Institute of Mathematics, as well as all the members of Prof. Nguyen Dong Yen's research group, for their valuable comments and suggestions on my research results. In particular, I would like to express my sincere thanks to Prof. Le Dung Muu, Dr. Pham Duy Khanh, and MSc. Vu Xuan Truong for their significant comments and suggestions concerning the research related to Chapters 2 and 3 of this dissertation.

Financial support from the Vietnam National Foundation for Science and Technology Development (NAFOSTED), the Vietnam Institute for Advanced Study in Mathematics (VIASM), and Thai Nguyen University of Sciences is gratefully acknowledged.

I am sincerely grateful to Prof. Jen-Chih Yao from National Sun Yat-sen University, Taiwan, for granting several short-term scholarships for my doctorate studies. I would like to thank MSc. Nguyen Tuan Duong (Department of Business Management, National Sun Yat-sen University, Taiwan) for his kind help in my English study.

I am indebted to the members of the Thesis Evaluation Committee at the Department Level and the two anonymous referees for their helpful suggestions, which have helped me a lot in improving the presentation of my dissertation.

Furthermore, I am grateful to the leaders of Thai Nguyen University of Sciences, and all my colleagues at the Department of Mathematics and Informatics, for their encouragement and constant support during the long period of my master and PhD studies.

My enormous gratitude goes to my husband for his love, encouragement, and especially for his patience in these years. Finally, I would like to express my love and thanks to the other members of my family for their strong encouragement and support.

Contents

Chapter 1 Preliminaries
  1.1 Subdifferentials
  1.2 Coderivatives
  1.3 Optimal Value Function
  1.4 Problems under the Convexity
  1.5 Some Facts from Functional Analysis and Convex Analysis
  1.6 Conclusions

Chapter 2 Differential Stability in Parametric Convex Programming Problems
  2.1 Differential Stability of Convex Optimization Problems under Inclusion Constraints
  2.2 Convex Programming Problems under Functional Constraints
  2.3 Conclusions

Chapter 3 Stability Analysis using Aubin's Regularity Condition
  3.1 Differential Stability under Aubin's Regularity Condition
  3.2 An Analysis of the Regularity Conditions
  3.3 Conclusions

Chapter 4 Subdifferential Formulas Based on Multiplier Sets
  4.1 Optimality Conditions for Convex Optimization
  4.2 Subdifferential Estimates via Multiplier Sets
  4.3 Computation of the Singular Subdifferential
  4.4 Conclusions

Chapter 5 Stability Analysis of Convex Discrete Optimal Control Problems
  5.1 Control Problem
  5.2 Differential Stability of the Parametric Mathematical Programming Problem
  5.3 Differential Stability of the Control Problem
  5.4 Applications
  5.5 Conclusions

Chapter 6 Stability Analysis of Convex Continuous Optimal Control Problems
  6.1 Problem Setting and Auxiliary Results
  6.2 Differential Stability of the Control Problem
  6.3 Illustrative Examples
  6.4 Conclusions


Table of Notations

N(x) : the set of all neighborhoods of x
cl* A : the closure of a set A in the weak* topology
L^p([0,1], ℝⁿ) : the Banach space of Lebesgue measurable functions x : [0,1] → ℝⁿ for which ∫₀¹ ‖x(t)‖^p dt is finite
W^{1,p}([0,1], ℝⁿ) : the Sobolev space of absolutely continuous functions x : [0,1] → ℝⁿ such that ẋ ∈ L^p([0,1], ℝⁿ)
M_{n,n}(ℝ) : the set of functions mapping ℝ to the linear space of n × n real matrices
inf_{x∈K} f(x) : the infimum of the set {f(x) | x ∈ K}
dom f : the effective domain of a function f
∂^∞f(x) : the singular subdifferential of f at x
∇f(x) : the Fréchet derivative of f at x
∂_x φ(x̄, ȳ) : the partial subdifferential of φ in x at (x̄, ȳ)
N(x̄; Ω) : the normal cone of Ω at x̄
F : X ⇒ Y : a set-valued map between X and Y
D*F(x̄, ȳ)(·) : the coderivative of F at (x̄, ȳ)
M* : Y* → X* : the adjoint operator of M
span{(x*_j, y*_j) | j = 1, …, m} : the linear subspace generated by the vectors (x*_j, y*_j), j = 1, …, m

Introduction

If a mathematical programming problem depends on a parameter, that is, the objective function and the constraints depend on a certain parameter, then the optimal value is a function of the parameter, and the solution map is a set-valued map on the parameter of the problem. In general, the optimal value function is a fairly complicated function of the parameter; it is often nondifferentiable in the parameter, even if the functions defining the problem in question are smooth w.r.t. all the programming variables and the parameter. This is the reason for the great interest in having formulas for computing generalized directional derivatives (the Dini directional derivative, the Dini-Hadamard directional derivative, the Clarke generalized directional derivative, ...) and formulas for evaluating subdifferentials (the subdifferential in the sense of convex analysis, the Clarke subdifferential, the Fréchet subdifferential, the limiting subdifferential, also called the Mordukhovich subdifferential, ...) of the optimal value function.
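For a first orientation, here is a minimal worked example of this phenomenon (our illustration, not taken from the dissertation): even infinitely smooth data can produce a nonsmooth optimal value function.

```latex
% Our illustration: smooth data, nonsmooth value function.
% Take \varphi(x,y) = xy (a C^\infty function) and the constant
% constraint map G(x) = [-1, 1].  Then
\[
  \mu(x) \;=\; \inf_{y \in [-1,1]} xy \;=\; -\,|x|,
\]
% which is not differentiable at x = 0, although \varphi is smooth in (x,y)
% and the constraint set does not depend on the parameter x at all.
```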

Studies on differentiability properties of the optimal value function and of the solution map in parametric mathematical programming are usually classified as studies on differential stability of optimization problems. Some results in this direction can be found in [2, 4, 6, 16, 18, 27] and the references therein.

For differentiable nonconvex programs, pioneering works are due to Gauvin and Tolle [19] and Gauvin and Dubeau [17]. The authors obtained formulas for computing and estimating Dini directional derivatives and Clarke generalized gradients of the optimal value function when the problem data undergoes smooth perturbations. Auslender [8], Rockafellar [36], Gollan [20], Thibault [42], Ioffe and Penot [21], and many other authors have shown that similar results can be obtained for nondifferentiable nonconvex programs. In particular, the connections between the subdifferentials of the optimal value function in the Dini-Hadamard sense and in the Fréchet sense and the corresponding subdifferentials of the objective function were pointed out in [21]. For optimization problems with inclusion constraints on Banach spaces, differentiability properties of the optimal value function have been established via the dual-space approach by Mordukhovich et al. in [29], where it is shown that the new general results imply several fundamental results which were obtained by the primal-space approach.

Differential stability for convex programs has been studied intensively in the last five decades. A formula for computing the subdifferential of the optimal value function of a standard convex mathematical programming problem with right-hand-side perturbations, called the perturbation function, via the set of Kuhn-Tucker vectors (i.e., the vectors of Kuhn-Tucker coefficients; see [35, p. 274]) was given by Rockafellar [35, Theorem 29.1]. Until now, many analogues and extensions of this classical result have been given in the literature (see, e.g., [33, Theorem 3.85]).
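To fix ideas, here is a hedged one-dimensional instance of the right-hand-side perturbation setting just mentioned (our illustration, not from the dissertation):

```latex
% Our illustration of the perturbation function
%   \mu(b) = \inf\{ f(x) \mid g(x) \le b \},  with  f(x) = x^2,  g(x) = -x.
% For b \ge 0 the unconstrained minimizer x = 0 is feasible, so \mu(b) = 0.
% For b < 0 the constraint x \ge -b is active, so \mu(b) = b^2, and the
% Kuhn-Tucker coefficient of the unique multiplier is \lambda(b) = -2b > 0.
\[
  \mu'(b) = 2b = -\lambda(b) \qquad (b < 0),
\]
% so the derivative of \mu is read off from the Kuhn-Tucker vector, in the
% spirit of Rockafellar's formula.
```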

Besides the investigations on differential stability of parametric mathematical programming problems, the study of differential stability of optimal control problems is also an issue of importance (see, e.g., [13–15, 23, 32, 37–39, 41, 43–46] and the references therein).

According to Bryson [12, p. 27, p. 32], optimal control had its origins in the calculus of variations in the 17th century. The calculus of variations was developed further in the 18th century by L. Euler and J.L. Lagrange and in the 19th century by A.M. Legendre, C.G.J. Jacobi, W.R. Hamilton, and K.T.W. Weierstrass. In 1957, R.E. Bellman gave a new view of Hamilton-Jacobi theory which he called dynamic programming, essentially a nonlinear feedback control scheme. McShane [26] and Pontryagin et al. [34] extended the calculus of variations to handle control variable inequality constraints. The Maximum Principle was enunciated by Pontryagin.

As noted by Tu [47, p. 110], although much pioneering work had been carried out by other authors, Pontryagin and his associates were the first to develop and present the Maximum Principle in a unified manner. Their work attracted great attention among mathematicians, engineers, and economists, and spurred wide research activities in the area (see [28, Chapter 6], [47, 48], and the references therein).

Motivated by the recent work of Mordukhovich et al. [29] on the optimal value function in parametric programming under inclusion constraints, this dissertation focuses on differential stability of convex optimization problems. In other words, we study differential properties of the optimal value function. Namely, we obtain some formulas for computing the subdifferential and the singular subdifferential of the optimal value function of infinite-dimensional convex optimization problems under inclusion constraints and of infinite-dimensional convex optimization problems under geometrical and functional constraints. Our main tools are the Moreau-Rockafellar Theorem (see, e.g., [22, p. 48]) and appropriate regularity conditions. By virtue of the convexity, several assumptions used in the above paper by Mordukhovich et al., like the nonemptiness of the Fréchet upper subdifferential of the objective function, the existence of a local upper Lipschitzian selection of the solution map, as well as the µ-inner semicontinuity and the µ-inner semicompactness of the solution map, are no longer needed. We also discuss the connection between the subdifferentials of the optimal value function and certain multiplier sets. Applied to parametric optimal control problems with convex objective functions and linear dynamical systems, either discrete or continuous, our results lead to some rules for computing the subdifferential and the singular subdifferential of the optimal value function via the data of the given problem.

The dissertation has six chapters, a list of the related papers of the author, a section of general conclusions, and a list of references. The first four chapters, where some preliminaries and a series of new results on sensitivity analysis of parametric convex programming problems under inclusion constraints are given, constitute the first part of the dissertation. The second part is formed by the last two chapters, where applications of the just mentioned results to parametric convex control problems under linear constraints are carried out.

Chapter 1 collects some basic concepts from convex analysis, variational analysis, and functional analysis needed for the subsequent chapters.

Chapter 2 presents some new results on differential stability of convex optimization problems under inclusion constraints in Hausdorff locally convex topological vector spaces. The main tools are the Moreau-Rockafellar Theorem, a well-known result of convex analysis, and some appropriate regularity conditions. The results obtained here lead to new facts on differential stability of convex optimization problems under geometrical and functional constraints.

In Chapter 3 we first establish formulas for computing the subdifferentials of the optimal value function for parametric convex programs under three assumptions: the objective function is closed, the constraint multifunction has closed graph, and Aubin's regularity condition is satisfied. Then, we derive relationships between the regularity conditions. Our investigations have revealed that one cannot use Aubin's regularity assumption in a Hausdorff locally convex topological vector space setting, because the related sum rule is established via the Banach open mapping theorem.

Chapter 4 discusses differential stability of convex programming problems in Hausdorff locally convex topological vector spaces. Optimality conditions for convex optimization problems under inclusion constraints and for convex optimization problems under geometrical and functional constraints are formulated here too. After establishing an upper estimate for the subdifferentials via the Lagrange multiplier sets, we give an example showing that the upper estimate can be strict. Then, by defining a satisfactory multiplier set, we obtain formulas for computing the subdifferential and the singular subdifferential of the optimal value function.

In Chapter 5 we first derive an upper estimate for the subdifferential of the optimal value function of convex discrete optimal control problems in Banach spaces. Then we present new calculus rules for computing the subdifferential in the case where the objective function is differentiable. The main tools of our analysis are the formulas for computing subdifferentials of the optimal value function from Chapter 2. We also show that the singular subdifferential of the just mentioned optimal value function always consists of the origin of the dual space.

Finally, in Chapter 6, we focus on differential stability of convex continuous optimal control problems. Namely, based on the results of Chapter 5 about differential stability of parametric convex mathematical programming problems, we get new formulas for computing the subdifferential and the singular subdifferential of the optimal value function. Moreover, we also describe in detail the process of finding vectors belonging to the subdifferential (resp., the singular subdifferential) of the optimal value function. Meaningful examples, which have their origin in [34, Example 1, p. 23], are designed to illustrate our results.

The dissertation is written on the basis of five published papers: An and Yao [2] in the Journal of Optimization Theory and Applications; An, Yao, and Yen [3] in Applied Mathematics and Optimization (First Online); An and Yen [4] in Applicable Analysis; An and Toan [1] in Acta Mathematica Vietnamica; and An and Yen [5] in the Vietnam Journal of Mathematics.


The results of this dissertation have been presented at:

- the weekly seminar of the Department of Numerical Analysis and Scientific Computing, Institute of Mathematics, Vietnam Academy of Science and Technology;
- the 10th Workshop on "Optimization and Scientific Computing" (April 18–21, 2012, Ba Vi, Hanoi);
- the "Taiwan-Vietnam 2015 Winter Mini-Workshop on Optimization" (November 17, 2015, National Cheng Kung University, Tainan, Taiwan);
- the 14th Workshop on "Optimization and Scientific Computing" (April 21–23, 2016, Ba Vi, Hanoi);
- the International Conference "New Trends in Optimization and Variational Analysis for Applications" (December 7–10, 2016, Quy Nhon, Vietnam);
- the "Vietnam-Korea Workshop on Selected Topics in Mathematics" (February 20–24, 2017, Danang, Vietnam);
- the "International Conference on Analysis and its Application" (December 20–22, 2017, Aligarh Muslim University, Aligarh, India);
- the International Workshop "Mathematical Optimization Theory and Applications" (January 18–20, 2018, Vietnam Institute for Advanced Study in Mathematics, Hanoi, Vietnam);
- the 7th International Conference "High Performance Scientific Computing" (March 19–23, 2018, Hanoi, Vietnam);
- the 16th Workshop on "Optimization and Scientific Computing" (April 19–21, 2018, Ba Vi, Hanoi).


Chapter 1

Preliminaries

Several concepts and results from convex analysis, variational analysis, and functional analysis are recalled in this chapter. Two types of parametric optimization problems, to be considered in the subsequent three chapters, are also presented here.

The present chapter is written on the basis of the books of Bonnans and Shapiro [11], Ioffe and Tihomirov [22], and the paper by Mordukhovich, Nam, and Yen [29].

1.1 Subdifferentials

Let X be a Hausdorff locally convex topological vector space with the dual space X*. The epigraph of a function f : X → ℝ is the set
epi f := {(x, α) ∈ X × ℝ | α ≥ f(x)}.
If epi f is a convex set, then f is said to be a convex function.

Definition 1.2. Let f : X → ℝ be a convex function. Suppose that x̄ ∈ X and |f(x̄)| < ∞.


(i) The set
∂f(x̄) = {x* ∈ X* | ⟨x*, x − x̄⟩ ≤ f(x) − f(x̄), ∀x ∈ X}
is called the subdifferential of f at x̄.

(ii) The set
∂^∞f(x̄) = {x* ∈ X* | (x*, 0) ∈ N((x̄, f(x̄)); epi f)}   (1.1)
is called the singular subdifferential of f at x̄.

In the case where |f(x̄)| = ∞, one lets ∂f(x̄) and ∂^∞f(x̄) be empty sets.

Given a convex subset Ω ⊂ X, one defines the indicator function δ(·; Ω) : X → ℝ of Ω by setting δ(x; Ω) = 0 for x ∈ Ω and δ(x; Ω) = +∞ for x ∉ Ω. For any x̄ ∈ Ω, it is easy to see that ∂δ(x̄; Ω) = N(x̄; Ω). Moreover, for any x ∈ dom f,

x* ∈ ∂^∞f(x) ⇔ (x*, 0) ∈ N((x, f(x)); epi f)
⇔ ⟨(x*, 0), (u, µ) − (x, f(x))⟩ ≤ 0, ∀(u, µ) ∈ epi f
⇔ ⟨(x*, 0), (u − x, µ − f(x))⟩ ≤ 0, ∀(u, µ) ∈ epi f
⇔ ⟨x*, u − x⟩ ≤ 0, ∀u ∈ dom f
⇔ x* ∈ N(x; dom f).

In a Banach space setting, the singular subdifferential is useful for the study of non-Lipschitzian functions, because if the function f is Lipschitz continuous around x̄, then ∂^∞f(x̄) = {0}; see, e.g., [30, Theorem 3.1(ii)].
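To make the two notions of Definition 1.2 concrete, here is a small worked example (ours, not part of the original text):

```latex
% Worked example (ours): f(x) = |x| on X = \mathbb{R}.
% By Definition 1.2(i), x^* \in \partial f(0) iff x^* x \le |x| for all x,
% i.e. iff |x^*| \le 1, so
\[
  \partial f(0) = [-1, 1], \qquad \partial^{\infty} f(0) = \{0\},
\]
% the second equality because f is Lipschitz around 0 (see the remark above);
% equivalently, \mathrm{dom}\, f = \mathbb{R} gives N(0; \mathrm{dom}\, f) = \{0\}.
% By contrast, for g(x) = -\sqrt{x} on [0, +\infty) (with g = +\infty elsewhere),
\[
  \partial g(0) = \emptyset, \qquad
  \partial^{\infty} g(0) = N(0; [0, +\infty)) = (-\infty, 0],
\]
% so the singular subdifferential detects the non-Lipschitzian behavior at 0.
```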


1.2 Coderivatives

Let F : X ⇒ Y be a convex set-valued map. The graph and the domain of F are given, respectively, by the formulas
gph F := {(x, y) ∈ X × Y | y ∈ F(x)},  dom F := {x ∈ X | F(x) ≠ ∅}.

Equipping the product space X × Y with the norm ‖(x, y)‖ := ‖x‖ + ‖y‖, and using the above notion of normal cone, one can define the concept of the coderivative of a convex set-valued map as follows.

Definition 1.3. The coderivative of F at (x̄, ȳ) ∈ gph F is the multifunction D*F(x̄, ȳ) : Y* ⇒ X* defined by
D*F(x̄, ȳ)(y*) := {x* ∈ X* | (x*, −y*) ∈ N((x̄, ȳ); gph F)},  ∀y* ∈ Y*.

If (x̄, ȳ) ∉ gph F, then we accept the convention that the set D*F(x̄, ȳ)(y*) is empty for any y* ∈ Y*.
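As a quick illustration of Definition 1.3 (ours, not from the text), take a map whose graph is a half-plane:

```latex
% Illustration (ours): F : \mathbb{R} \rightrightarrows \mathbb{R},
% F(x) = [x, +\infty), so gph F = \{(x, y) \mid x - y \le 0\}.
% At (\bar x, \bar y) = (0, 0) the normal cone to this half-plane is
% N((0,0); \mathrm{gph}\, F) = \{\lambda (1, -1) \mid \lambda \ge 0\}.
% The condition (x^*, -y^*) = \lambda (1, -1) forces x^* = y^* = \lambda, so
\[
  D^*F(0,0)(y^*) =
  \begin{cases}
    \{y^*\}, & y^* \ge 0,\\[2pt]
    \emptyset, & y^* < 0.
  \end{cases}
\]
```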

Note that, in a Banach space setting, the coderivative of a convex set-valued map has been defined in [7, Definition 1, p. 178] under the name codifferential.

1.3 Optimal Value Function

Consider a set-valued map G : X ⇒ Y between Banach spaces and a function φ : X × Y → ℝ. The optimal value function (or the marginal function) of the parametric optimization problem under an inclusion constraint, defined by G and φ, is the function µ : X → ℝ, with
µ(x) := inf {φ(x, y) | y ∈ G(x)}.   (1.2)
By the convention inf ∅ = +∞, we have µ(x) = +∞ for any x ∉ dom G. The set-valued map G (resp., the function φ) is called the map describing the constraint set (resp., the objective function) of the optimization problem on the right-hand side of (1.2).

Corresponding to each data pair {G, φ} we have one optimization problem depending on a parameter x:
min {φ(x, y) | y ∈ G(x)}.   (1.3)


Formulas for computing or estimating the subdifferentials (the Fréchet subdifferential, the Mordukhovich subdifferential, the singular subdifferential, and the subdifferential in the sense of convex analysis) of the optimal value function µ(·) are tightly connected with the solution map of (1.3). The just mentioned solution map, denoted by M : dom G ⇒ Y, is given by
M(x) := {y ∈ G(x) | µ(x) = φ(x, y)}  (∀x ∈ dom G).   (1.4)

Namely, in [29] the authors have obtained an upper estimate for the Fréchet subdifferential of the optimal value function in formula (1.2) at a given parameter x̄. This estimate is established via the Fréchet coderivative of the map G describing the constraint set and the Fréchet upper subdifferential of the objective function φ. In addition, if φ is Fréchet differentiable at (x̄, ȳ) and the solution map M given in (1.4) has a local upper Lipschitzian selection at (x̄, ȳ), then the obtained upper estimate becomes an equality (see [29, Theorems 1 and 2] for details).

The assumption about the nonemptiness of the Fréchet upper subdifferential of φ, i.e., ∂̂⁺φ(x̄, ȳ) ≠ ∅, in [29, Theorem 1] is rather strict. For instance, it excludes from our consideration Lipschitzian convex functions of the type φ(x, y) = |x| + y, (x, y) ∈ ℝ × ℝ, or φ(x, y) = ‖x‖ + g(y), (x, y) ∈ X × Y, where g : Y → ℝ is a given function and X, Y are Banach spaces with dim X ≥ 1. Indeed, for the first example, choosing (x̄, ȳ) = (0, 0) we have ∂̂⁺φ(x̄, ȳ) = ∅. For the second example, we have ∂̂⁺φ(x̄, ȳ) = ∅ for any (x̄, ȳ) = (0, v) ∈ X × Y.

Moreover, to obtain formulas for computing the Mordukhovich subdifferential of µ(·) in (1.2), Mordukhovich et al. need some assumptions on the sequentially normally compact property of φ, the existence of a local upper Lipschitzian selection of the solution map M, as well as the µ-inner semicontinuity or the µ-inner semicompactness of the solution map M (see [29, Theorem 7] for details).

By imposing the convexity requirement on (1.3), in the next Chapters 2 and 3 we need not rely on the assumption ∂̂⁺φ(x̄, ȳ) ≠ ∅ of [29, Theorem 1], the condition that the solution map M : dom G ⇒ Y has a local upper Lipschitzian selection at (x̄, ȳ) of [29, Theorem 2], the sequentially normally compact property of φ, or the µ-inner semicontinuity and µ-inner semicompactness conditions on the solution map M(·) of [29, Theorem 7].


1.4 Problems under the Convexity

Let X and Y be Hausdorff locally convex topological vector spaces. Let φ : X × Y → ℝ be a proper convex extended-real-valued function. Given a convex set-valued map G : X ⇒ Y, we consider the parametric convex optimization problem under an inclusion constraint
min {φ(x, y) | y ∈ G(x)},   (1.5)
depending on the parameter x. The optimal value function of problem (1.5) is the function µ : X → ℝ, with
µ(x) := inf {φ(x, y) | y ∈ G(x)}.   (1.6)
The solution map M : dom G ⇒ Y of that problem is defined by
M(x) := {y ∈ G(x) | µ(x) = φ(x, y)}  (∀x ∈ dom G).

Proposition 1.2. Let G : X ⇒ Y be a convex set-valued map and φ : X × Y → ℝ a convex function. Then the function µ defined by (1.6) is convex.

Proof. We will prove that epi µ = {(x, α) ∈ X × ℝ | µ(x) ≤ α} is a convex subset of X × ℝ. Taking any (x, α), (x′, β) ∈ epi µ and λ ∈ (0, 1), we need to show that
λ(x, α) + (1 − λ)(x′, β) ∈ epi µ.
This is equivalent to
inf{φ(λx + (1 − λ)x′, z) | z ∈ G(λx + (1 − λ)x′)} ≤ λα + (1 − λ)β.
For any ε > 0, since (x, α) ∈ epi µ, one has


Similarly, we consider the case (x′, β) ∈ epi µ. If inf{φ(x′, y′) | y′ ∈ G(x′)} = −∞, then for any β ∈ ℝ there exists y′ ∈ G(x′) with φ(x′, y′) ≤ β. Letting ε → 0⁺, we obtain the convexity of the optimal value function µ. □
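The convexity asserted by Proposition 1.2 is easy to probe numerically on a toy instance; the following sketch (ours, with an arbitrarily chosen φ and G) approximates µ in (1.6) by a grid search:

```python
import numpy as np

# Toy instance (ours, not from the text): phi(x, y) = |x - y| and
# G(x) = [-1, 1] for every x.  phi is jointly convex and gph G is convex,
# so Proposition 1.2 predicts that mu is convex.  Here, in fact,
# mu(x) = dist(x, [-1, 1]) = max(|x| - 1, 0): convex, but not
# differentiable at x = -1 and x = 1.

GRID = np.linspace(-1.0, 1.0, 2001)  # discretization of G(x)

def mu(x: float) -> float:
    """Approximate mu(x) = inf{phi(x, y) : y in G(x)} by a grid search."""
    return float(np.min(np.abs(x - GRID)))

xs = np.linspace(-2.0, 2.0, 401)
vals = np.array([mu(x) for x in xs])

# Midpoint convexity on the sample points, up to discretization error:
assert np.all(vals[1:-1] <= 0.5 * (vals[:-2] + vals[2:]) + 1e-2)
print("mu(0.5) =", mu(0.5), " mu(1.5) =", mu(1.5))  # ~0.0 and ~0.5
```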

In the next two chapters, to obtain formulas for computing/estimating the subdifferential of the optimal value function µ via the subdifferential of φ and the coderivative of G, we will apply the following scheme, which was formulated clearly by Professor Truong Xuan Duc Ha in her review of this dissertation.

Step 1. Consider the unconstrained optimization problem
µ(x) = inf_{y∈Y} [φ(x, y) + δ((x, y); gph G)],
where δ(·; gph G) is the indicator function of gph G.

Step 2. Apply some known results to show that x̄* ∈ ∂µ(x̄) if and only if (x̄*, 0) ∈ ∂(φ + δ(·; gph G))(x̄, ȳ), where ȳ ∈ M(x̄).

Step 3. Employ the sum rule for subdifferentials to get
∂(φ + δ(·; gph G))(x̄, ȳ) = ∂φ(x̄, ȳ) + N((x̄, ȳ); gph G)
by direct proofs; see, e.g., [29, Theorems 1 and 2].

Thanks to some regularity conditions on the function φ and the mapping G, the result of Steps 2 and 3 is an upper estimate for ∂µ(x̄). In the sequel, we will see that the inner estimate (that is, the reverse inclusion of the upper estimate) is valid for convex optimization problems without any regularity condition.
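Unwinding Step 3 with the definition of the coderivative already gives the shape of the upper estimate (a sketch in the notation above; the precise statement, with equality, is Theorem 2.1 below):

```latex
% If (\bar x^*, 0) \in \partial\varphi(\bar x, \bar y) + N((\bar x, \bar y); \mathrm{gph}\, G),
% pick (x^*, y^*) \in \partial\varphi(\bar x, \bar y) with
% (\bar x^* - x^*, -y^*) \in N((\bar x, \bar y); \mathrm{gph}\, G),
% i.e. \bar x^* - x^* \in D^*G(\bar x, \bar y)(y^*).  Collecting all such pairs:
\[
  \partial\mu(\bar x) \;\subset\;
  \bigcup_{(x^*, y^*) \in \partial\varphi(\bar x, \bar y)}
  \bigl[\, x^* + D^*G(\bar x, \bar y)(y^*) \,\bigr].
\]
```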

1.5 Some Facts from Functional Analysis and Convex Analysis

First, we recall a result related to continuous linear operators. Consider a continuous linear operator A : X → Y from a Banach space X to another Banach space Y with the adjoint A* : Y* → X*. The null space and the range of A are defined, respectively, by ker A = {x ∈ X | Ax = 0} and rge A = {Ax | x ∈ X}.

Proposition 1.3. (i) One has rge(A*) ⊂ (ker A)⊥, where (ker A)⊥ stands for the orthogonal complement of the set ker A.
(ii) If rge A is closed, then (ker A)⊥ = rge(A*), and there is c > 0 such that for every x* ∈ rge(A*) there exists y* ∈ Y* with ‖y*‖ ≤ c‖x*‖ and x* = A*y*.
(iii) If, in addition, rge A = Y, i.e., A is onto, then A* is one-to-one and there exists c > 0 such that ‖y*‖ ≤ c‖A*y*‖ for all y* ∈ Y*.
(iv) (ker A*)⊥ = cl(rge A).


We now recall some results from functional analysis related to Banach spaces, which can be found in [22, pp. 20–22].

For every p ∈ [1, ∞), the symbol L^p([0,1], ℝⁿ) denotes the Banach space of Lebesgue measurable functions x from [0,1] to ℝⁿ for which the integral ∫₀¹ ‖x(t)‖^p dt is finite, equipped with the norm
‖x‖_p = ( ∫₀¹ ‖x(t)‖^p dt )^{1/p}.

The dual space of L^p([0,1], ℝⁿ) is L^q([0,1], ℝⁿ), where 1/p + 1/q = 1. In other words, for every continuous linear functional φ on the space L^p([0,1], ℝⁿ) there exists a unique element x* ∈ L^q([0,1], ℝⁿ) such that
φ(x) = ⟨φ, x⟩ = ∫₀¹ x*(t) x(t) dt,  ∀x ∈ L^p([0,1], ℝⁿ).
Moreover, one has ‖φ‖ = ‖x*‖_q.

The Sobolev space W^{1,p}([0,1], ℝⁿ), consisting of absolutely continuous functions x : [0,1] → ℝⁿ such that ẋ ∈ L^p([0,1], ℝⁿ), is equipped with the norm ‖x‖ = ‖x(0)‖ + ‖ẋ‖_p. Every continuous linear functional φ on W^{1,2}([0,1], ℝⁿ) can be represented by a vector a ∈ ℝⁿ and a function y ∈ W^{1,2}([0,1], ℝⁿ) in the form
⟨φ, x⟩ = ⟨a, x(0)⟩ + ∫₀¹ ẋ(t) ẏ(t) dt,  ∀x ∈ W^{1,2}([0,1], ℝⁿ).
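The L^p–L^q pairing just described can be illustrated numerically (our sketch, with arbitrarily chosen test functions):

```python
import numpy as np

# Numerical illustration (ours) of the L^p / L^q duality on [0, 1]:
# for x in L^p and x* in L^q with 1/p + 1/q = 1, Hoelder's inequality
# bounds the pairing: |<x*, x>| <= ||x*||_q * ||x||_p.
p, q = 3.0, 1.5                       # 1/3 + 1/1.5 = 1
t = np.linspace(0.0, 1.0, 10001)
x = np.sin(2.0 * np.pi * t)           # a test element "of L^p"
xstar = t ** 2                        # a test element "of L^q"

pairing = np.trapz(xstar * x, t)                       # <x*, x>
norm_p = np.trapz(np.abs(x) ** p, t) ** (1.0 / p)      # ||x||_p
norm_q = np.trapz(np.abs(xstar) ** q, t) ** (1.0 / q)  # ||x*||_q
assert abs(pairing) <= norm_q * norm_p + 1e-9
print(f"|<x*, x>| = {abs(pairing):.4f} <= {norm_q * norm_p:.4f}")
```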

Next, we recall two results on normal cones to convex sets. Suppose that A₀, A₁, …, Aₙ are convex subsets of a Hausdorff locally convex topological vector space X and A = A₀ ∩ A₁ ∩ ⋯ ∩ Aₙ. By int Aᵢ, for i = 1, …, n, we denote the interior of Aᵢ.


Proposition 1.4 (See [22, Proposition 1, p. 205]). If one has
A₀ ∩ (int A₁) ∩ ⋯ ∩ (int Aₙ) ≠ ∅,   (1.12)
then
N(x; A) = N(x; A₀) + N(x; A₁) + ⋯ + N(x; Aₙ)
for any point x ∈ A. In other words, if the regularity condition (1.12) is satisfied, then the normal cone to the intersection of the sets is equal to the sum of the normal cones to these sets.

Proposition 1.5 (See [22, Proposition 3, p. 206]). If one has int Aᵢ ≠ ∅ for i = 1, 2, …, n, then, for any x₀ ∈ A, the following statements are equivalent:
(a) A₀ ∩ (int A₁) ∩ ⋯ ∩ (int Aₙ) = ∅;
(b) there exist x*ᵢ ∈ N(x₀; Aᵢ) for i = 0, 1, …, n, not all zero, such that
x*₀ + x*₁ + ⋯ + x*ₙ = 0.

In the sequel, we will need the following fundamental calculus rule of convex analysis.

Theorem 1.1 (The Moreau-Rockafellar Theorem; see [22, Theorem 0.3.3 on pp. 47–50, Theorem 1 on p. 200]). Let f₁, …, f_m be proper convex functions on X. Then
∂(f₁ + ⋯ + f_m)(x) ⊃ ∂f₁(x) + ⋯ + ∂f_m(x)
for all x ∈ X. If, at a point x₀ ∈ dom f₁ ∩ ⋯ ∩ dom f_m, all the functions f₁, …, f_m, except possibly one, are continuous, then
∂(f₁ + ⋯ + f_m)(x) = ∂f₁(x) + ⋯ + ∂f_m(x)
for all x ∈ X.
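The continuity assumption in Theorem 1.1 cannot simply be dropped; a standard one-dimensional example (ours, not from the text) is:

```latex
% On X = \mathbb{R}, take
%   f_1(x) = -\sqrt{x} for x \ge 0 and f_1(x) = +\infty otherwise,
%   f_2 = \delta(\cdot\,; (-\infty, 0]).
% Neither function is continuous at x_0 = 0, the only common point of
% their domains.  Since f_1 + f_2 = \delta(\cdot\,; \{0\}),
\[
  \partial (f_1 + f_2)(0) = \mathbb{R},
  \qquad
  \partial f_1(0) + \partial f_2(0) = \emptyset + [0, +\infty) = \emptyset,
\]
% because \partial f_1(0) = \emptyset (the slope of -\sqrt{x} blows up at 0).
% Only the inclusion \supset of Theorem 1.1 survives in general.
```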

The forthcoming theorem characterizes the continuity of extended-real-valued convex functions defined on Hausdorff locally convex topological vector spaces.

Theorem 1.2 (See [22, Theorem 1, p. 170]). Let f be a proper convex function on a Hausdorff locally convex topological vector space X. Then the following assertions are equivalent:
(i) f is bounded from above on a neighborhood of a point x ∈ X;
(ii) f is continuous at a point x ∈ X;
(iii) int(epi f) ≠ ∅;
(iv) int(dom f) ≠ ∅ and f is continuous on int(dom f).
Moreover,
int(epi f) = {(α, x) ∈ ℝ × X | x ∈ int(dom f), α > f(x)}.


1.6 Conclusions

This chapter presents several basic results from convex analysis, two types of general parametric optimization problems, and some facts from functional analysis which will be used repeatedly in the subsequent chapters. Moreover, Theorems 1, 2 and 7 of [29], which motivated the research leading to our results in the next two chapters, are also briefly analyzed in this chapter.


Chapter 2

Differential Stability in Parametric Convex Programming Problems

Motivated by the work of Mordukhovich, Nam, and Yen [29] on the optimal value function in parametric programming under inclusion constraints, this chapter establishes some new results on differential stability of convex optimization problems under inclusion constraints and functional constraints. By using a version of the Moreau-Rockafellar Theorem, which has been recalled in Theorem 1.1, and appropriate regularity conditions, we obtain formulas for computing the subdifferential and the singular subdifferential of the optimal value function.

The chapter is written on the basis of [4].

2.1 Differential Stability of Convex Optimization Problems under Inclusion Constraints

The following theorem provides us with formulas for computing the subdifferential and the singular subdifferential of µ given in (1.6).

Theorem 2.1. Suppose that G : X ⇒ Y is a convex set-valued mapping and φ : X × Y → ℝ is a proper convex function. If at least one of the following regularity conditions is satisfied:
(a) int(gph G) ∩ dom φ ≠ ∅,
(b) φ is continuous at a point (x₀, y₀) ∈ gph G,
then for any x̄ ∈ dom µ, with µ(x̄) ≠ −∞, and for any ȳ ∈ M(x̄) we have
∂µ(x̄) = ⋃_{(x*, y*) ∈ ∂φ(x̄, ȳ)} [x* + D*G(x̄, ȳ)(y*)]   (2.1)
and
∂^∞µ(x̄) = ⋃_{(x*, y*) ∈ ∂^∞φ(x̄, ȳ)} [x* + D*G(x̄, ȳ)(y*)].   (2.2)

Proof. Let x̄ ∈ dom µ and ȳ ∈ M(x̄). To prove the inclusion "⊂" in (2.1), take an arbitrary element x̄* ∈ ∂µ(x̄). Since the optimal value function µ is convex, we have
µ(x) − µ(x̄) ≥ ⟨x̄*, x − x̄⟩, ∀x ∈ X.   (2.3)
As φ(x, y) + δ((x, y); gph G) ≥ µ(x) for all (x, y) ∈ X × Y and φ(x̄, ȳ) + δ((x̄, ȳ); gph G) = µ(x̄), this yields
(x̄*, 0) ∈ ∂(φ + δ(·; gph G))(x̄, ȳ).   (2.4)
Since gph G is convex, δ(·; gph G) : X × Y → ℝ is convex. Obviously, δ(·; gph G) is continuous at every point belonging to int(gph G).

Consequently, if the regularity condition (a) is satisfied, then δ(·; gph G) is continuous at a point in dom φ. By Theorem 1.1, from (2.4) we have
(x̄*, 0) ∈ ∂φ(x̄, ȳ) + ∂δ(·; gph G)(x̄, ȳ) = ∂φ(x̄, ȳ) + N((x̄, ȳ); gph G).   (2.5)


Thus, there exists (x*, y*) ∈ ∂φ(x̄, ȳ) such that
(x̄*, 0) ∈ (x*, y*) + N((x̄, ȳ); gph G),
or
(x̄* − x*, −y*) ∈ N((x̄, ȳ); gph G),   (2.6)
i.e., x̄* − x* ∈ D*G(x̄, ȳ)(y*).

Now suppose that the regularity condition (b) is satisfied. Since
dom δ(·; gph G) = gph G,
from (b) it follows that φ is continuous at a point in dom δ(·; gph G). Therefore, by Theorem 1.1, from (2.4) we also have (2.5). Thus, there exists (x*, y*) ∈ ∂φ(x̄, ȳ) such that (2.6) is satisfied.

In both cases, since x̄* ∈ ∂µ(x̄) can be taken arbitrarily, by (2.6) we can deduce that
∂µ(x̄) ⊂ ⋃_{(x*, y*) ∈ ∂φ(x̄, ȳ)} [x* + D*G(x̄, ȳ)(y*)].

To prove the reverse inclusion, fix any (x*, y*) ∈ ∂φ(x̄, ȳ). Taking an arbitrary vector u* ∈ x* + D*G(x̄, ȳ)(y*), we have to show that u* ∈ ∂µ(x̄). The inclusion u* ∈ x* + D*G(x̄, ȳ)(y*) yields
u* − x* ∈ D*G(x̄, ȳ)(y*).   (2.7)
Clearly, condition (2.7) can be written equivalently as
(u* − x*, −y*) ∈ N((x̄, ȳ); gph G).


Without any regularity condition, the last inclusion implies that
(u*, 0) ∈ ∂(φ + δ(·; gph G))(x̄, ȳ).
Hence
φ(x, y) − φ(x̄, ȳ) ≥ ⟨u*, x − x̄⟩ + ⟨0, y − ȳ⟩, ∀(x, y) ∈ gph G.   (2.8)
For each fixed element x ∈ dom G, taking the infimum on both sides of (2.8) over y ∈ G(x) and remembering that µ(x̄) = φ(x̄, ȳ), we obtain µ(x) − µ(x̄) ≥ ⟨u*, x − x̄⟩. Since µ(x) = +∞ for x ∉ dom G, the latter inequality holds for all x ∈ X; hence u* ∈ ∂µ(x̄). This completes the proof of (2.1).

Formula (2.2) can be obtained by applying (2.1) to suitable indicator functions. Indeed, since dom δ(·; dom φ) = dom φ, if the regularity requirement in (a) is satisfied then int(gph G) ∩ dom δ(·; dom φ) ≠ ∅. Next, if the condition (b) is fulfilled then (x₀, y₀) ∈ int(dom φ); so δ(·; dom φ) is continuous at (x₀, y₀) ∈ gph G. Now, consider the optimization problem (1.5) with φ(x, y) replaced by δ((x, y); dom φ). By (2.9), the corresponding optimal value function µ(x) coincides with δ(x; dom µ). Therefore, in accordance with (2.1), we have
∂δ(·; dom µ)(x̄) = N(x̄; dom µ) = ∂^∞µ(x̄)
and
∂δ(·; dom φ)(x̄, ȳ) = N((x̄, ȳ); dom φ) = ∂^∞φ(x̄, ȳ).

Here are two simple examples designed to illustrate Theorem 2.1.


Example 2.1. Let X = Y = ℝ and x̄ = 0. Consider the optimal value function µ(x) in (1.6) with φ(x, y) = |y| and G(x) = {y | y ≥ |x|/2} for all x ∈ ℝ. Then we have µ(x) = |x|/2 for all x ∈ ℝ. So ∂µ(x̄) = [−1/2, 1/2], ∂^∞µ(x̄) = {0}, and M(x̄) = {0}. For ȳ := 0 ∈ M(x̄), ∂φ(x̄, ȳ) = {0} × [−1, 1] and ∂^∞φ(x̄, ȳ) = {(0, 0)}. Since G is a convex set-valued mapping, we have
N((x̄, ȳ); gph G) = {(x*, y*) ∈ ℝ² | y* ≤ 0, |x*| ≤ −y*/2};
hence D*G(x̄, ȳ)(y*) = [−y*/2, y*/2] for y* ≥ 0 and D*G(x̄, ȳ)(y*) = ∅ for y* < 0. The right-hand sides of (2.1) and (2.2) are therefore [−1/2, 1/2] and {0}, respectively, so the equalities (2.1) and (2.2) hold.

Example 2.2. Let X = Y = ℝ and x̄ = 0. Consider the optimal value function µ(x) in (1.6) with φ(x, y) = |x| + y and G(x) = {y | y ≥ −√x} for x ≥ 0, G(x) = ∅ for x < 0. Then we have µ(x) = x − √x for all x ≥ 0, µ(x) = +∞ for all x < 0, and M(x̄) = {0}. Hence ∂µ(x̄) = ∅ and ∂^∞µ(x̄) = (−∞, 0]. For ȳ := 0 ∈ M(x̄), ∂φ(x̄, ȳ) = [−1, 1] × {1} and ∂^∞φ(x̄, ȳ) = {(0, 0)}. By the convexity of G we have
N((x̄, ȳ); gph G) = {(x*, y*) ∈ ℝ² | ⟨(x*, y*), (x, y) − (x̄, ȳ)⟩ ≤ 0, ∀(x, y) ∈ gph G} = (−∞, 0] × {0};
so D*G(x̄, ȳ)(0) = (−∞, 0] and D*G(x̄, ȳ)(y*) = ∅ for every nonzero y*. Then we can calculate the right-hand sides of (2.1) and (2.2) as follows: every (x*, y*) ∈ ∂φ(x̄, ȳ) has y* = 1 ≠ 0, so the right-hand side of (2.1) is empty; since ∂^∞φ(x̄, ȳ) = {(0, 0)}, the right-hand side of (2.2) is 0 + D*G(x̄, ȳ)(0) = (−∞, 0]. As ∂µ(x̄) = ∅ and ∂^∞µ(x̄) = (−∞, 0], the equalities (2.1) and (2.2) are valid.

2.2 Convex Programming Problems under Functional Constraints

We now apply the above general results to convex optimization problems under geometrical and functional constraints. As in the preceding section, X and Y are Hausdorff locally convex topological vector spaces. Consider the problem
min {φ(x, y) | (x, y) ∈ C, gᵢ(x, y) ≤ 0, i ∈ I, hⱼ(x, y) = 0, j ∈ J},   (2.10)
in which φ : X × Y → ℝ is a convex function, C ⊂ X × Y is a convex set, I = {1, …, m}, J = {1, …, k}, gᵢ : X × Y → ℝ (i ∈ I) are continuous convex functions, and hⱼ : X × Y → ℝ (j ∈ J) are continuous affine functions. For each x ∈ X, we put
G(x) := {y ∈ Y | (x, y) ∈ C, gᵢ(x, y) ≤ 0, i ∈ I, hⱼ(x, y) = 0, j ∈ J}.   (2.11)
Then
gph G = C ∩ Ω₁ ∩ ⋯ ∩ Ω_m ∩ Q₁ ∩ ⋯ ∩ Q_k,   (2.12)
where
Ωᵢ := {(x, y) | gᵢ(x, y) ≤ 0}  (i ∈ I)  and  Qⱼ := {(x, y) | hⱼ(x, y) = 0}  (j ∈ J)
are convex sets.

The following infinite-dimensional version of the Farkas lemma [35, p. 200] has been obtained by Bartl [9].


Lemma 2.1 (See [9, Lemma 1]). Let W be a vector space over ℝ, A : W → ℝ^m a linear mapping, and γ : W → ℝ a linear functional. Suppose that A is represented in the form A = (αᵢ)ᵢ₌₁^m, where each αᵢ : W → ℝ is a linear functional (i.e., for each x ∈ W, A(x) is a column vector whose i-th component is αᵢ(x), for i = 1, …, m). Then the inequality γ(x) ≤ 0 is a consequence of the inequality system
α₁(x) ≤ 0, α₂(x) ≤ 0, …, α_m(x) ≤ 0
if and only if there exist nonnegative real numbers λ₁, λ₂, …, λ_m such that
γ = λ₁α₁ + ⋯ + λ_mα_m.

The following lemma describes the normal cone of the intersection of finitely many affine hyperplanes.

Lemma 2.2. Let X, Y be Hausdorff locally convex topological vector spaces. Let (x*_j, y*_j) ∈ X* × Y* and αⱼ ∈ ℝ, j = 1, …, m, be given. Set
Q := {(x, y) ∈ X × Y | ⟨(x*_j, y*_j), (x, y)⟩ = αⱼ, j = 1, …, m}.
Then, for any (x̄, ȳ) ∈ Q,
N((x̄, ȳ); Q) = span{(x*_j, y*_j) | j = 1, …, m}.   (2.13)

Proof. Observe first that (x, y) ∈ Q iff (u, v) := (x, y) − (x̄, ȳ) belongs to the set Q′ := Q′₁ ∩ ⋯ ∩ Q′_m, where
Q′ⱼ := {(u, v) | ⟨(x*_j, y*_j), (u, v)⟩ ≤ 0, ⟨−(x*_j, y*_j), (u, v)⟩ ≤ 0},  j = 1, …, m.


Indeed, the inclusion (x, y) ∈ Q implies
⟨(x*_j, y*_j), (x, y)⟩ ≤ αⱼ and ⟨−(x*_j, y*_j), (x, y)⟩ ≤ −αⱼ, j = 1, …, m.   (2.16)
Moreover, the condition (x̄, ȳ) ∈ Q assures that
⟨(x*_j, y*_j), (x̄, ȳ)⟩ = αⱼ, j = 1, …, m.   (2.17)
Combining (2.16) and (2.17) yields
⟨(x*_j, y*_j), (x, y) − (x̄, ȳ)⟩ ≤ 0 and ⟨−(x*_j, y*_j), (x, y) − (x̄, ȳ)⟩ ≤ 0, j = 1, …, m,   (2.18)
that is, (u, v) ∈ Q′. Conversely, let (u, v) = (x, y) − (x̄, ȳ) satisfy (2.18). Since (x̄, ȳ) ∈ Q, (2.17) holds. So, from (2.18) one has
⟨(x*_j, y*_j), (x, y)⟩ ≤ αⱼ and ⟨−(x*_j, y*_j), (x, y)⟩ ≤ −αⱼ, j = 1, …, m,
which obviously implies that (x, y) ∈ Q.

Next, the above equivalence shows that (x*, y*) ∈ N((x̄, ȳ); Q) if and only if the inequality ⟨(x*, y*), (u, v)⟩ ≤ 0 is a consequence of the inequality system
⟨(x*_j, y*_j), (u, v)⟩ ≤ 0, ⟨−(x*_j, y*_j), (u, v)⟩ ≤ 0, j = 1, …, m.
By Lemma 2.1, the latter holds if and only if there exist λⱼ ≥ 0 and µⱼ ≥ 0 (j = 1, …, m) with (x*, y*) = Σ_{j=1}^m (λⱼ − µⱼ)(x*_j, y*_j).


The latter means that (x*, y*) ∈ span{(x*_j, y*_j) | j = 1, …, m}. Formula (2.13) is thereby proved. □

The next lemma from [22], which has a very brief proof there, describes the normal cone of a sublevel set of a convex function. Due to the importance of this result, here we give a detailed proof.

Lemma 2.3 (See [22, p. 206]). Let f be a proper convex function on X which is continuous at a point x₀ ∈ X. Assume that f(x₁) < f(x₀) = α₀ for some x₁ ∈ X. Then
N(x₀; L_{α₀}f) = K_{∂f(x₀)},
where L_{α₀}f := {x | f(x) ≤ α₀} is a sublevel set of f and
K_{∂f(x₀)} := {u* ∈ X* | u* = λx*, λ ≥ 0, x* ∈ ∂f(x₀)}
is the cone generated by the subdifferential of f at x₀.

Proof. Put A = L_{α₀}f. Since f is convex, A is a convex set. It is clear that x₀ ∈ A. We need to prove that N(x₀; A) = K_{∂f(x₀)}.

First, let us prove that K_{∂f(x₀)} ⊂ N(x₀; A). Take an arbitrary element u* ∈ K_{∂f(x₀)}. Then u* = λx*, with x* ∈ ∂f(x₀) and λ ≥ 0. As x* ∈ ∂f(x₀),
⟨x*, x − x₀⟩ ≤ f(x) − f(x₀), ∀x ∈ X.
Therefore, for every x in A, since f(x) ≤ f(x₀), we have ⟨x*, x − x₀⟩ ≤ 0. This shows that x* ∈ N(x₀; A). Hence u* = λx* ∈ N(x₀; A). Thus K_{∂f(x₀)} ⊂ N(x₀; A).

Next, we will prove that N(x₀; A) ⊂ K_{∂f(x₀)}. Take an arbitrary vector x* ∈ N(x₀; A). If x* = 0, then the inclusion x* ∈ K_{∂f(x₀)} is obvious. Consider the case where x* ≠ 0. Note that
H := {(α, x) ∈ ℝ × X | α = f(x₀), ⟨x*, x − x₀⟩ = 0}
is an affine set. As f is convex, epi f is a convex set. By the assumption that f is continuous at x₀, f is bounded above on a neighborhood of x₀. Invoking Theorem 1.2, we have int(epi f) ≠ ∅. Besides, by the same theorem, we also have int(dom f) ≠ ∅, f is continuous on int(dom f), and we can determine the set int(epi f) by the formula given in property (iv) of Theorem 1.2.

We will show that H ∩ int(epi f) = ∅. Suppose on the contrary that there exists (ᾱ, x̄) ∈ H ∩ int(epi f). The last property means that
ᾱ = f(x₀), ⟨x*, x̄ − x₀⟩ = 0, ᾱ > f(x̄), x̄ ∈ int(dom f).
Since α₀ = f(x₀) = ᾱ > f(x̄) and f is continuous at x̄, there exists a neighborhood U ∈ N(0) such that
f(x̄ + v) < α₀, ∀v ∈ U.
Then x̄ + v ∈ A for every v ∈ U; so, by x* ∈ N(x₀; A) and ⟨x*, x̄ − x₀⟩ = 0, we get ⟨x*, v⟩ = ⟨x*, (x̄ + v) − x₀⟩ ≤ 0 for all v ∈ U. Thus ⟨x*, v⟩ = 0 for all v ∈ U ∩ (−U); hence x* = 0, a contradiction.

In conclusion, H ∩ int(epi f) = ∅. By the separation theorem for convex sets [40, Theorem 3.4(a)], there exists (α*, y*) ∈ (ℝ × X*) \ {(0, 0)} satisfying
⟨(α*, y*), (α, x)⟩ ≤ ⟨(α*, y*), (α′, x′)⟩, ∀(α, x) ∈ H, ∀(α′, x′) ∈ epi f.   (2.19)

If α* < 0 then, by substituting (α, x) = (α₀, x₀) into the left-hand side and
(α′, x′) = (α₀ + µ, x₀) = (f(x₀) + µ, x₀),
with µ ≥ 0, into the right-hand side of (2.19), and letting µ → +∞, we get a contradiction. So we can assume that α* ≥ 0. If α* = 0 then, choosing U₁ ∈ N(0) with x₀ + U₁ ⊂ dom f (which is possible because x₀ ∈ int(dom f)) and taking (α, x) = (α₀, x₀) and (α′, x′) = (f(x₀ + u), x₀ + u) with u ∈ U₁, (2.19) implies
⟨y*, u⟩ ≥ 0, ∀u ∈ U₁.
Hence y* = 0; this is a contradiction, because (α*, y*) ≠ (0, 0). Thus the case α* = 0 cannot happen.

Consider the case where α* > 0. If we choose (α′, x′) = (α₀, x₀) ∈ epi f, then by formula (2.19) we have
⟨(α*, y*), (α, x)⟩ ≤ ⟨(α*, y*), (α₀, x₀)⟩, ∀(α, x) ∈ H.


Hence
⟨(α*, y*), (α, x) − (α₀, x₀)⟩ ≤ 0, ∀(α, x) ∈ H.   (2.20)
Since H is an affine set and (α₀, x₀) ∈ H, M := H − (α₀, x₀) is a linear subspace parallel to H. According to (2.20), we have
⟨(α*, y*), (β, u)⟩ ≤ 0, ∀(β, u) ∈ M;
hence, M being a linear subspace,
⟨(α*, y*), (β, u)⟩ = 0, ∀(β, u) ∈ M.   (2.21)
Thus H is contained in the set Π := {(α, x) ∈ ℝ × X | ⟨(α*, y*), (α, x)⟩ = α*α₀ + ⟨y*, x₀⟩}. For (α, x) ∈ Π,
⟨(α*, y*), (α, x)⟩ = α*α₀ + ⟨y*, x₀⟩
⇔ α*α + ⟨y*, x⟩ = α*f(x₀) + ⟨y*, x₀⟩
⇔ α + ⟨(α*)⁻¹y*, x − x₀⟩ = f(x₀),
and this is equivalent to
α = ⟨−(α*)⁻¹y*, x − x₀⟩ + f(x₀).   (2.22)
Setting ỹ* = −(α*)⁻¹y*, from (2.22) we get
α = ⟨ỹ*, x − x₀⟩ + f(x₀), ∀(α, x) ∈ Π.   (2.23)
The inclusion H ⊂ Π yields ỹ* = γx* with γ ∈ ℝ. Indeed, as H ⊂ Π, for every x satisfying ⟨x*, x − x₀⟩ = 0 one has (f(x₀), x) ∈ H ⊂ Π, so (2.23) gives ⟨ỹ*, x − x₀⟩ = 0. Hence the inequality ⟨ỹ*, u⟩ ≤ 0 is a consequence of the system ⟨x*, u⟩ ≤ 0, ⟨−x*, u⟩ ≤ 0.


By Lemma 2.1, there exist α₁ ≥ 0, α₂ ≥ 0 satisfying
ỹ* = α₁x* + α₂(−x*) = (α₁ − α₂)x* = γx*,
with γ := α₁ − α₂ ∈ ℝ.

We will show that γ < 0. Indeed, since α* > 0, we can replace the pair (α*, y*) in (2.19) by (1, y*/α*); i.e., we can suppose that α* = 1. By the continuity of f at x₀ and by (2.19), there exists U₂ ∈ N(0) such that
⟨(1, ỹ*), (α₀, x₀)⟩ ≤ ⟨(1, ỹ*), (f(x), x)⟩, ∀x ∈ x₀ + U₂.
This is equivalent to
α₀ ≤ f(x) + ⟨ỹ*, x − x₀⟩, ∀x ∈ x₀ + U₂.   (2.24)
Substituting x = (1 − t)x₀ + tx₁, with t ∈ (0, 1) chosen so small that x ∈ x₀ + U₂, into the last inequality and using the convexity of f, we obtain
⟨ỹ*, x₁ − x₀⟩ ≥ f(x₀) − f(x₁).   (2.25)
Now, suppose on the contrary that γ ≥ 0. Since x₁ ∈ A and x* ∈ N(x₀; A), we have ⟨x*, x₁ − x₀⟩ ≤ 0, whence
⟨ỹ*, x₁ − x₀⟩ = γ⟨x*, x₁ − x₀⟩ ≤ 0.
On the other hand, by (2.25) we obtain
⟨ỹ*, x₁ − x₀⟩ ≥ f(x₀) − f(x₁) > 0.
We have arrived at a contradiction. Thus γ < 0.

From formula (2.24) we get
f(x₀) ≤ f(x) + ⟨ỹ*, x − x₀⟩, ∀x ∈ x₀ + U₂.
This shows that the convex function x ↦ f(x) + ⟨ỹ*, x − x₀⟩ attains a local minimum at x₀. Then, by the convexity of f, we have
⟨−ỹ*, x − x₀⟩ ≤ f(x) − f(x₀), ∀x ∈ X,
i.e., −ỹ* ∈ ∂f(x₀), or −γx* ∈ ∂f(x₀). As γ < 0, it follows that
x* ∈ (−γ)⁻¹∂f(x₀) ⊂ K_{∂f(x₀)}.
This is exactly what we had to prove. □

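Before moving on, here is a one-dimensional sanity check of Lemma 2.3 (our example, not from the text):

```latex
% Check of Lemma 2.3 (ours): f(x) = x^2 - 1 on \mathbb{R}, x_0 = 1,
% so \alpha_0 = f(x_0) = 0 and x_1 = 0 satisfies f(x_1) = -1 < \alpha_0.
% The sublevel set is L_{\alpha_0} f = [-1, 1], and
\[
  N(1; [-1, 1]) = [0, +\infty), \qquad
  \partial f(1) = \{2\}, \qquad
  K_{\partial f(1)} = \{ 2\lambda \mid \lambda \ge 0 \} = [0, +\infty),
\]
% so indeed N(x_0; L_{\alpha_0} f) = K_{\partial f(x_0)}.
```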

Let us go back to considering the parametric convex programming problem (2.10). Our first result in this section can be formulated as follows.

Theorem 2.2. Suppose that the equality constraints hⱼ(x, y) = 0 (j ∈ J) are absent in (2.10). If at least one of the following regularity conditions is satisfied:

(a1) there exists a point (u₀, v₀) ∈ dom φ such that (u₀, v₀) ∈ int C and gᵢ(u₀, v₀) < 0 for all i ∈ I;
(b1) φ is continuous at a point (x₀, y₀) ∈ gph G;
then for any x̄ ∈ dom µ, with µ(x̄) ≠ −∞, and for any ȳ ∈ M(x̄) we have
∂µ(x̄) = ⋃_{(x*, y*) ∈ ∂φ(x̄, ȳ)} [x* + Q*(y*)]   (2.26)
and
∂^∞µ(x̄) = ⋃_{(x*, y*) ∈ ∂^∞φ(x̄, ȳ)} [x* + Q*(y*)],   (2.27)
where
Q*(y*) := {u* ∈ X* | (u*, −y*) ∈ N((x̄, ȳ); C) + Σ_{i ∈ I(x̄,ȳ)} cone ∂gᵢ(x̄, ȳ)}   (2.28)
and I(x̄, ȳ) := {i ∈ I | gᵢ(x̄, ȳ) = 0} is the set of active constraint indices at (x̄, ȳ).

Proof. If (a1) is satisfied, then it is clear that (u₀, v₀) ∈ int(gph G); hence the condition (a) in Theorem 2.1 is fulfilled. If (b1) is satisfied, then φ is continuous at the point (x₀, y₀), which belongs to gph G; so the condition (b) in Theorem 2.1 is satisfied. Therefore, our assumptions guarantee that (2.1) and (2.2) hold.

By the definition of the coderivative,
D*G(x̄, ȳ)(y*) = {u* ∈ X* | (u*, −y*) ∈ N((x̄, ȳ); gph G)}.   (2.29)
Since the constraints hⱼ(x, y) = 0 (j ∈ J) are absent in (2.10), formula (2.12) becomes
gph G = C ∩ Ω₁ ∩ ⋯ ∩ Ω_m.   (2.30)


If (a1) is satisfied, then (u₀, v₀) ∈ (int C) ∩ ⋂_{i∈I} int Ωᵢ, so the regularity condition (1.12) holds for the intersection (2.30). If (b1) is valid, then (x₀, y₀) ∈ C ∩ ⋂_{i∈I} Ωᵢ and one argues similarly. Hence, by Proposition 1.4,
N((x̄, ȳ); gph G) = N((x̄, ȳ); C) + Σ_{i∈I} N((x̄, ȳ); Ωᵢ).
Since N((x̄, ȳ); Ωᵢ) = {(0, 0)} for every i ∉ I(x̄, ȳ), this formula can be written in the equivalent form
N((x̄, ȳ); gph G) = N((x̄, ȳ); C) + Σ_{i∈I(x̄,ȳ)} N((x̄, ȳ); Ωᵢ).   (2.31)
By Lemma 2.3, for every i ∈ I(x̄, ȳ) we have
N((x̄, ȳ); Ωᵢ) = K_{∂gᵢ(x̄,ȳ)} = cone ∂gᵢ(x̄, ȳ).
Combining this with (2.28), (2.29), and (2.31), we get (2.26) from (2.1) and (2.27) from (2.2). □

Let us consider the following illustrative example.

Example 2.3. Let X = Y = ℝ, C = X × Y, φ(x, y) = |x + y|, m = 1, k = 0 (no equality functional constraint), and g₁(x, y) = y for all (x, y) ∈ X × Y. Choosing x̄ = 0, we note that M(x̄) = {ȳ}, with ȳ = 0. Since
φ(x, y) = |x + y| = max{x + y, −x − y},
by applying a well-known formula for computing the subdifferential of the maximum function [22, Theorem 3, pp. 201–202] we get
∂φ(x̄, ȳ) = co{(1, 1)ᵀ, (−1, −1)ᵀ},
where co Ω denotes the convex hull of Ω. On one hand, µ(x) = inf{|x + y| | y ≤ 0} = max{−x, 0}; so we find ∂µ(x̄) = [−1, 0]. On the other hand, I(x̄, ȳ) = {1}, N((x̄, ȳ); C) = {(0, 0)}, and cone ∂g₁(x̄, ȳ) = {0} × [0, +∞); hence, for (x*, y*) = (t, t) ∈ ∂φ(x̄, ȳ) with t ∈ [−1, 1], the set Q*(y*) in (2.28) equals {0} when t ≤ 0 and is empty when t > 0, so the right-hand side of (2.26) is [−1, 0] = ∂µ(x̄). Moreover, since the function φ is Lipschitz continuous around (x̄, ȳ), we have ∂^∞φ(x̄, ȳ) = {(0, 0)}. It is easy to show that ∂^∞µ(x̄) = {0}, while the right-hand side of (2.27) is 0 + Q*(0) = {0} as well. Therefore, (2.26) and (2.27) are valid.
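The value function of Example 2.3 can likewise be checked numerically (our sketch):

```python
import numpy as np

# Numerical check of Example 2.3 (our sketch):
# mu(x) = inf{ |x + y| : y <= 0 } = max(-x, 0).
Y_GRID = np.linspace(-10.0, 0.0, 100001)  # truncated discretization of {y <= 0}

def mu(x: float) -> float:
    return float(np.min(np.abs(x + Y_GRID)))

for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert abs(mu(x) - max(-x, 0.0)) < 1e-3, x

# One-sided slopes at 0 recover partial mu(0) = [-1, 0]:
h = 1e-2
print((mu(0.0) - mu(-h)) / h, (mu(h) - mu(0.0)) / h)  # approx -1.0 and 0.0
```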

We now consider the case where the affine constraints hⱼ(x, y) = 0 (j ∈ J) are present in (2.10). The second result of this section reads as follows.

Theorem 2.3. For every j ∈ J, suppose that
hⱼ(x, y) = ⟨(x*_j, y*_j), (x, y)⟩ − αⱼ, with (x*_j, y*_j) ∈ X* × Y* and αⱼ ∈ ℝ.
If φ is continuous at a point (x₀, y₀) with (x₀, y₀) ∈ int C, gᵢ(x₀, y₀) < 0 for all i ∈ I, and hⱼ(x₀, y₀) = 0 for all j ∈ J, then for any x̄ ∈ dom µ, with µ(x̄) ≠ −∞, and for any ȳ ∈ M(x̄) we have
∂µ(x̄) = ⋃_{(x*, y*) ∈ ∂φ(x̄, ȳ)} {x* + u* | (u*, −y*) ∈ N((x̄, ȳ); C) + A}
and
∂^∞µ(x̄) = ⋃_{(x*, y*) ∈ ∂^∞φ(x̄, ȳ)} {x* + u* | (u*, −y*) ∈ N((x̄, ȳ); C) + A},
where
A := Σ_{i∈I(x̄,ȳ)} cone ∂gᵢ(x̄, ȳ) + span{(x*_j, y*_j) | j ∈ J}.   (2.35)


Proof. (This proof follows the same scheme as the proof of Theorem 2.2.) For the set-valued map G(·) defined by (2.11), we have (x₀, y₀) ∈ gph G. Hence the condition (b) in Theorem 2.1 is satisfied, and we know that (2.1) and (2.2) hold. By our assumptions,

2.3 Conclusions

By the convexity of our problems, the results obtained in this chapter do not require the assumption ∂̂⁺φ(x̄, ȳ) ≠ ∅ used in [29, Theorem 1], the requirement that the solution map M : dom G ⇒ Y has a local upper Lipschitzian selection at (x̄, ȳ) used in [29, Theorem 2], the sequentially normally compact property of φ, or the µ-inner semicontinuity and the µ-inner semicompactness conditions on the solution map M(·) used in [29, Theorem 7].


References

[2] D.T.V. An and J.-C. Yao, Further results on differential stability of convex optimization problems, J. Optim. Theory Appl. 170 (2016), 28–42.
[3] D.T.V. An, J.-C. Yao, and N.D. Yen, Differential stability of a class of convex optimal control problems, Appl. Math. Optim. (2017), DOI 10.1007/s00245-017-9475-4.
[6] J.-P. Aubin, Optima and Equilibria. An Introduction to Nonlinear Analysis, 2nd ed., Springer-Verlag, Berlin, 1998.
[7] J.-P. Aubin and I. Ekeland, Applied Nonlinear Analysis, A Wiley-Interscience Publication, John Wiley and Sons, Inc., New York, 1984.
[10] D.P. Bertsekas, Dynamic Programming and Optimal Control, Volume I, Athena Scientific, Belmont, Massachusetts, 2005.
[12] A.E. Bryson, Optimal control–1950 to 1985, IEEE Control Systems 16 (1996), 26–33.
[13] A. Cernea and H. Frankowska, A connection between the maximum principle and dynamic programming for constrained control problems, SIAM J. Control Optim. 44 (2005), 673–703.
[14] N.H. Chieu, B.T. Kien, and N.T. Toan, Further results on subgradients of the value function to a parametric optimal control problem, J.
[15] N.H. Chieu and J.-C. Yao, Subgradients of the optimal value function in a parametric discrete optimal control problem, J. Ind. Manag. Optim. 6 (2010), 401–410.
[17] J. Gauvin and F. Dubeau, Differential properties of the marginal function in mathematical programming, Math. Programming Stud. 19 (1982), 101–119.
[19] J. Gauvin and W.J. Tolle, Differential stability in nonlinear programming, SIAM J. Control Optimization 15 (1977), 294–311.
[20] B. Gollan, On the marginal function in nonlinear programming, Math.
[21] A.D. Ioffe and J.-P. Penot, Subdifferentials of performance functions and calculus of coderivatives of set-valued mappings, Serdica Math. J. 22 (1996), 359–384.
[22] A.D. Ioffe and V.M. Tihomirov, Theory of Extremal Problems, North-Holland Publishing Company, Amsterdam-New York, 1979.
[24] A.N. Kolmogorov and S.V. Fomin, Introductory Real Analysis, Dover Publications, Inc., New York, 1975.
[26] E.J. McShane, On multipliers for Lagrange problems, Amer. J. Math. 61 (1939), 809–819.
[27] B.S. Mordukhovich, Variational Analysis and Generalized Differentiation, Volume I: Basic Theory, Springer-Verlag, Berlin, 2006.
[30] B.S. Mordukhovich and Y.H. Shao, On nonconvex subdifferential calculus in Banach spaces, J. Convex Anal. 2 (1995), 211–227.
[32] M. Moussaoui and A. Seeger, Sensitivity analysis of optimal value functions of convex parametric programs with possibly empty solution sets, SIAM J. Optim. 4 (1994), 659–675.
[33] J.-P. Penot, Calculus Without Derivatives, Graduate Texts in Mathematics, Springer, New York, 2013.
