VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY
INSTITUTE OF MATHEMATICS
DUONG THI VIET AN
SUBDIFFERENTIALS OF OPTIMAL VALUE FUNCTIONS IN PARAMETRIC CONVEX
OPTIMIZATION PROBLEMS
Speciality: Applied Mathematics
Speciality code: 9 46 01 12
SUMMARY OF DOCTORAL DISSERTATION IN MATHEMATICS
HANOI - 2018
The dissertation was written on the basis of the author's research works carried out at the Institute of Mathematics, Vietnam Academy of Science and Technology.
Supervisor: Prof. Dr.Sc. Nguyen Dong Yen
First referee:
Second referee:
Third referee:
To be defended at the Jury of Institute of Mathematics, Vietnam Academy of Science and Technology:
on , at o’clock
The dissertation is publicly available at:
• The National Library of Vietnam
• The Library of Institute of Mathematics
If a mathematical programming problem depends on a parameter, that is, the objective function and the constraints depend on a certain parameter, then the optimal value is a function of the parameter, and the solution map is a set-valued map on the parameter of the problem. In general, the optimal value function is a fairly complicated function of the parameter; it is often nondifferentiable in the parameter, even if the functions defining the problem in question are smooth with respect to all the programming variables and the parameter. This is the reason for the great interest in having formulas for computing generalized directional derivatives (the Dini directional derivative, the Dini-Hadamard directional derivative, the Clarke generalized directional derivative, ...) and formulas for evaluating subdifferentials (the subdifferential in the sense of convex analysis, the Clarke subdifferential, the Fréchet subdifferential, the limiting subdifferential, also called the Mordukhovich subdifferential, ...) of the optimal value function.
Studies on differentiability properties of the optimal value function and of the solution map in parametric mathematical programming are usually classified as studies on differential stability of optimization problems.
For differentiable nonconvex programs, pioneering works are due to J. Gauvin and J.W. Tolle (1977) and J. Gauvin and F. Dubeau (1982). These authors obtained formulas for computing and estimating Dini directional derivatives and Clarke generalized gradients of the optimal value function when the problem data undergoes smooth perturbations. A. Auslender (1979), R.T. Rockafellar (1982), B. Gollan (1984), L. Thibault (1991), and many other authors have shown that similar results can be obtained for nondifferentiable nonconvex programs. For optimization problems with inclusion constraints on Banach spaces, differentiability properties of the optimal value function have been established via the dual-space approach by B.S. Mordukhovich, N.M. Nam, and N.D. Yen (2009), where it is shown that the new general results imply several fundamental results which had been obtained by the primal-space approach.

Differential stability for convex programs has been studied intensively in the last five decades. A formula for computing the subdifferential of the optimal value function of a standard convex mathematical programming problem with right-hand-side perturbations, called the perturbation function, via the set of Kuhn-Tucker vectors (i.e., the vectors of Kuhn-Tucker coefficients) was given by R.T. Rockafellar (1970). Until now, many analogues and extensions of this classical result have been given in the literature.
Besides the investigations on differential stability of parametric mathematical programming problems, the study of differential stability of optimal control problems is also an issue of importance.

According to A.E. Bryson (1996), optimal control had its origins in the calculus of variations in the 17th century. The calculus of variations was developed further in the 18th century by L. Euler and J.L. Lagrange and in the 19th century by A.M. Legendre, C.G.J. Jacobi, W.R. Hamilton, and K.T.W. Weierstrass.
In 1957, R.E. Bellman gave a new view of Hamilton-Jacobi theory which he called dynamic programming, essentially a nonlinear feedback control scheme. E.J. McShane (1939) and L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, and E.F. Mishchenko (1962) extended the calculus of variations to handle control variable inequality constraints. The Maximum Principle was enunciated by Pontryagin.
As noted by P.N.V. Tu (1984), although much pioneering work had been carried out by other authors, Pontryagin and his associates were the first to develop and present the Maximum Principle in a unified manner. Their work attracted great attention among mathematicians, engineers, and economists, and spurred wide research activities in the area.
Motivated by the recent work of B.S. Mordukhovich, N.M. Nam, and N.D. Yen (Math. Program., 2009) on the optimal value function in parametric programming under inclusion constraints, this dissertation focuses on differential stability of convex optimization problems. In other words, we study differential properties of the optimal value function. Namely, we obtain some formulas for computing the subdifferential and the singular subdifferential of the optimal value function of infinite-dimensional convex optimization problems under inclusion constraints and of infinite-dimensional convex optimization problems under geometrical and functional constraints. Our main tools are the Moreau-Rockafellar Theorem and appropriate regularity conditions. By virtue of the convexity, several assumptions used in the just cited work, like the nonemptiness of the Fréchet upper subdifferential of the objective function, the existence of a local upper Lipschitzian selection of the solution map, as well as the $\mu$-inner semicontinuity or the $\mu$-inner semicompactness of the solution map, are no longer needed. We also discuss the connection between the subdifferentials of the optimal value function and certain multiplier sets. Applied to parametric optimal control problems with convex objective functions and linear dynamical systems, either discrete or continuous, our results can lead to some rules for computing the subdifferential and the singular subdifferential of the optimal value function via the data of the given problem.

The dissertation has six chapters, a list of the related papers of the author, a section of general conclusions, and a list of references. The first four chapters, where some preliminaries and a series of new results on sensitivity analysis of parametric convex programming problems under inclusion constraints are given, constitute the first part of the dissertation. The second part is formed by the last two chapters, where applications of the just mentioned results to parametric convex control problems under linear constraints are carried out.

Chapter 1 collects some basic concepts from convex analysis, variational analysis, and functional analysis needed for subsequent chapters.
Chapter 2 presents some new results on differential stability of convex optimization problems under inclusion constraints in Hausdorff locally convex topological vector spaces. The main tools are the Moreau-Rockafellar Theorem, a well-known result of convex analysis, and some appropriate regularity conditions. The results obtained here lead to new facts on differential stability of convex optimization problems under geometrical and functional constraints.

In Chapter 3 we first establish formulas for computing the subdifferentials of the optimal value function for parametric convex programs under three assumptions: the objective function is closed, the constraint multifunction has closed graph, and Aubin's regularity condition is satisfied. Then, we derive relationships between the regularity conditions. Our investigations have revealed that one cannot use Aubin's regularity assumption in a Hausdorff locally convex topological vector space setting, because the related sum rule is established via the Banach open mapping theorem.

Chapter 4 discusses differential stability of convex programming problems in Hausdorff locally convex topological vector spaces. Optimality conditions for convex optimization problems under inclusion constraints and for convex optimization problems under geometrical and functional constraints are formulated here too. After establishing an upper estimate for the subdifferentials via the Lagrange multiplier sets, we give an example to show that the upper estimate can be strict. Then, by defining a satisfactory multiplier set, we obtain formulas for computing the subdifferential and the singular subdifferential of the optimal value function.
In Chapter 5 we first derive an upper estimate for the subdifferential of the optimal value function of convex discrete optimal control problems in Banach spaces. Then we present new calculus rules for computing the subdifferential in the case where the objective function is differentiable. The main tools of our analysis are the formulas for computing subdifferentials of the optimal value function from Chapter 2. We also show that the singular subdifferential of the just mentioned optimal value function always consists solely of the origin of the dual space.
Finally, in Chapter 6, we focus on differential stability of convex continuous optimal control problems. Namely, based on the results of Chapter 5 about differential stability of parametric convex mathematical programming problems, we get new formulas for computing the subdifferential and the singular subdifferential of the optimal value function. Moreover, we also describe in detail the process of finding vectors belonging to the subdifferential (resp., the singular subdifferential) of the optimal value function. Meaningful examples, which have their origin in the book of Pontryagin et al. (1962), are designed to illustrate our results.
Chapter 1
Preliminaries
Several concepts and results from convex analysis, variational analysis, and functional analysis are recalled in this chapter. Two types of parametric optimization problems to be considered in the subsequent three chapters are also presented.
The epigraph of an extended-real-valued function $f\colon X\to\overline{\mathbb{R}}$ is defined by $\operatorname{epi}f:=\{(x,\alpha)\in X\times\mathbb{R}\mid\alpha\ge f(x)\}$. If $\operatorname{epi}f$ is a convex set, then $f$ is said to be a convex function.
Definition 1.2. Let $f\colon X\to\overline{\mathbb{R}}$ be a convex function. Suppose that $\bar x\in X$ and $|f(\bar x)|<\infty$.

(i) The set
\[ \partial f(\bar x)=\{x^{*}\in X^{*}\mid \langle x^{*},x-\bar x\rangle\le f(x)-f(\bar x),\ \forall x\in X\} \]
is called the subdifferential of $f$ at $\bar x$.

(ii) The set
\[ \partial^{\infty}f(\bar x)=\{x^{*}\in X^{*}\mid (x^{*},0)\in N((\bar x,f(\bar x));\operatorname{epi}f)\} \]
is called the singular subdifferential of $f$ at $\bar x$.

In the case where $|f(\bar x)|=\infty$, one lets $\partial f(\bar x)$ and $\partial^{\infty}f(\bar x)$ be empty sets.
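For illustration, consider two standard one-dimensional examples (added here to fix ideas; they are not part of the dissertation's results). For $f(x)=|x|$ on $X=\mathbb{R}$ and $\bar x=0$, the inequality in Definition 1.2(i) reads $x^{*}x\le|x|$ for all $x\in\mathbb{R}$, whence
\[ \partial f(0)=[-1,1],\qquad \partial^{\infty}f(0)=\{0\}, \]
the second equality holding because $N((0,0);\operatorname{epi}|\cdot|)=\{(x^{*},\lambda^{*})\mid \lambda^{*}\le 0,\ |x^{*}|\le-\lambda^{*}\}$ contains a pair of the form $(x^{*},0)$ only for $x^{*}=0$. In contrast, for the indicator function $f=\delta_{(-\infty,0]}$ one has $\operatorname{epi}f=(-\infty,0]\times[0,+\infty)$ and
\[ \partial f(0)=\partial^{\infty}f(0)=[0,+\infty), \]
so the singular subdifferential can be a nontrivial cone when $f$ is not locally Lipschitz.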
1.2 Coderivatives
Let $F\colon X\rightrightarrows Y$ be a convex set-valued map. The graph and the domain of $F$ are given, respectively, by the formulas
\[ \operatorname{gph}F:=\{(x,y)\in X\times Y\mid y\in F(x)\},\qquad \operatorname{dom}F:=\{x\in X\mid F(x)\neq\emptyset\}. \]
Definition 1.3. The coderivative of $F$ at $(\bar x,\bar y)\in\operatorname{gph}F$ is the multifunction $D^{*}F(\bar x,\bar y)\colon Y^{*}\rightrightarrows X^{*}$ defined by
\[ D^{*}F(\bar x,\bar y)(y^{*}):=\{x^{*}\in X^{*}\mid (x^{*},-y^{*})\in N((\bar x,\bar y);\operatorname{gph}F)\},\quad\forall y^{*}\in Y^{*}. \]
If $(\bar x,\bar y)\notin\operatorname{gph}F$, then we accept the convention that the set $D^{*}F(\bar x,\bar y)(y^{*})$ is empty for any $y^{*}\in Y^{*}$.
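As an orienting computation (an illustration supplied by the editor), let $X=Y=\mathbb{R}$ and $F(x)=[x,+\infty)$, so that $\operatorname{gph}F=\{(x,y)\mid y\ge x\}$ and $N((0,0);\operatorname{gph}F)=\{\lambda(1,-1)\mid\lambda\ge0\}$. Then
\[ D^{*}F(0,0)(y^{*})=\begin{cases}\{y^{*}\} & \text{if } y^{*}\ge 0,\\ \emptyset & \text{if } y^{*}<0,\end{cases} \]
since the requirement $(x^{*},-y^{*})=\lambda(1,-1)$ forces $x^{*}=\lambda=y^{*}\ge 0$.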
Consider a function $\varphi\colon X\times Y\to\overline{\mathbb{R}}$ and a set-valued map $G\colon X\rightrightarrows Y$ between Banach spaces. The optimal value function (or the marginal function) of the parametric optimization problem under an inclusion constraint, defined by $G$ and $\varphi$, is the function $\mu\colon X\to\overline{\mathbb{R}}$ with
\[ \mu(x):=\inf\{\varphi(x,y)\mid y\in G(x)\}. \tag{1.1} \]
By the convention $\inf\emptyset=+\infty$, we have $\mu(x)=+\infty$ for any $x\notin\operatorname{dom}G$. The set-valued map $G$ (resp., the function $\varphi$) is called the map describing the constraint set (resp., the objective function) of the optimization problem. The solution map $M\colon\operatorname{dom}G\rightrightarrows Y$ of the problem is defined by
\[ M(x):=\{y\in G(x)\mid \mu(x)=\varphi(x,y)\}\qquad(\forall x\in\operatorname{dom}G). \]
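A minimal example (added for illustration) shows how nonsmoothness of $\mu$ arises even from smooth data, in accordance with the remarks made in the introduction: for $X=Y=\mathbb{R}$, $\varphi(x,y)=y$, and $G(x)=[\,|x|,+\infty)$, which is a convex set-valued map because $\operatorname{gph}G=\{(x,y)\mid y\ge|x|\}$ is convex, one finds
\[ \mu(x)=\inf\{y\mid y\ge|x|\}=|x|,\qquad M(x)=\{|x|\}, \]
so $\mu$ is nondifferentiable at $x=0$ although $\varphi$ is linear.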
By imposing the convexity requirement on (1.2), in the next Chapters 2 and 3 we need not rely on the assumption $\widehat{\partial}^{+}\varphi(\bar x,\bar y)\neq\emptyset$ in Theorem 1 of the paper by B.S. Mordukhovich, N.M. Nam, and N.D. Yen (Math. Program., 2009), the condition saying that the solution map $M\colon\operatorname{dom}G\rightrightarrows Y$ has a local upper Lipschitzian selection at $(\bar x,\bar y)$ in Theorem 2 of the just cited paper, as well as the sequentially normally compact property of $\varphi$ and the $\mu$-inner semicontinuity or the $\mu$-inner semicompactness conditions on the solution map $M(\cdot)$ in Theorem 7 of the same article.
Let $X$ and $Y$ be Hausdorff locally convex topological vector spaces, let $\varphi\colon X\times Y\to\overline{\mathbb{R}}$ be a proper convex extended-real-valued function, and let $G\colon X\rightrightarrows Y$ be a convex set-valued map. We consider the parametric convex optimization problem under an inclusion constraint
\[ \min\{\varphi(x,y)\mid y\in G(x)\}, \tag{1.3} \]
depending on the parameter $x$. The optimal value function of problem (1.3) is the function $\mu\colon X\to\overline{\mathbb{R}}$ with
\[ \mu(x):=\inf\{\varphi(x,y)\mid y\in G(x)\}. \tag{1.4} \]
The solution map $M\colon\operatorname{dom}G\rightrightarrows Y$ of that problem is defined by
\[ M(x):=\{y\in G(x)\mid \mu(x)=\varphi(x,y)\}\qquad(\forall x\in\operatorname{dom}G). \]
Proposition 1.1. Let $G\colon X\rightrightarrows Y$ be a convex set-valued map and $\varphi\colon X\times Y\to\overline{\mathbb{R}}$ a convex function. Then the function $\mu(\cdot)$ defined by (1.4) is convex.
In the next two chapters, to obtain formulas for computing or estimating the subdifferential of the optimal value function $\mu$ via the subdifferential of $\varphi$ and the coderivative of $G$, we will apply the following scheme, which has been formulated clearly by Professor Truong Xuan Duc Ha in her review of this dissertation.

Step 1. Consider the unconstrained optimization problem
\[ \mu(x)=\inf_{y\in Y}\bigl\{\varphi(x,y)+\delta((x,y);\operatorname{gph}G)\bigr\}, \]
where $\delta(\cdot;\operatorname{gph}G)$ is the indicator function of $\operatorname{gph}G$.

Step 2. Apply some known results to show that $x^{*}\in\partial\mu(\bar x)$ if and only if
\[ (x^{*},0)\in\partial\bigl(\varphi+\delta(\cdot;\operatorname{gph}G)\bigr)(\bar x,\bar y). \]

Step 3. Employ the sum rule for subdifferentials to get
\[ (x^{*},0)\in\partial\varphi(\bar x,\bar y)+\partial\delta((\bar x,\bar y);\operatorname{gph}G). \]

Step 4. Use the relationship between $\partial\delta((\bar x,\bar y);\operatorname{gph}G)$ and $N((\bar x,\bar y);\operatorname{gph}G)$, together with the definition of the coderivative in question.
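To see the scheme at work in the simplest setting (a worked instance added by the editor, under the standing convexity assumptions), take $X=Y=\mathbb{R}$, $\varphi(x,y)=|y|$, and $G(x)=[x,+\infty)$, so that $\mu(x)=\max\{x,0\}$, and let $(\bar x,\bar y)=(0,0)$ with $\bar y\in M(\bar x)$. Step 2 gives $x^{*}\in\partial\mu(0)$ if and only if $(x^{*},0)\in\partial(\varphi+\delta(\cdot;\operatorname{gph}G))(0,0)$; since $\varphi$ is continuous, the sum rule of Step 3 applies and yields
\[ (x^{*},0)\in\bigl(\{0\}\times[-1,1]\bigr)+\{\lambda(1,-1)\mid\lambda\ge0\}. \]
Hence $x^{*}=\lambda$ and $\lambda=v$ for some $v\in[-1,1]$ and $\lambda\ge0$, that is, $\partial\mu(0)=[0,1]$, which agrees with the direct computation for $\mu(x)=\max\{x,0\}$.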
1.5 Some Facts from Functional Analysis and Convex Analysis
Consider a continuous linear operator $A\colon X\to Y$ from a Banach space $X$ to another Banach space $Y$, with adjoint $A^{*}\colon Y^{*}\to X^{*}$. The null space and the range of $A$ are defined, respectively, by $\ker A=\{x\in X\mid Ax=0\}$ and $\operatorname{rge}A=\{y\in Y\mid y=Ax,\ x\in X\}$.
Proposition 1.2 (See J.F. Bonnans and A. Shapiro (2000)). The next properties are valid:

(i) $(\ker A)^{\perp}=\operatorname{cl}^{*}(\operatorname{rge}A^{*})$, where $\operatorname{cl}^{*}(\operatorname{rge}A^{*})$ denotes the closure of the set $\operatorname{rge}A^{*}$ in the weak* topology of $X^{*}$, and
\[ (\ker A)^{\perp}=\{x^{*}\in X^{*}\mid \langle x^{*},x\rangle=0\ \ \forall x\in\ker A\} \]
stands for the orthogonal complement of the set $\ker A$.

(ii) If $\operatorname{rge}A$ is closed, then $(\ker A)^{\perp}=\operatorname{rge}A^{*}$, and there is $c>0$ such that for every $x^{*}\in\operatorname{rge}A^{*}$ there exists $y^{*}\in Y^{*}$ with $\|y^{*}\|\le c\|x^{*}\|$ and $x^{*}=A^{*}y^{*}$.

(iii) If, in addition, $\operatorname{rge}A=Y$, i.e., $A$ is onto, then $A^{*}$ is one-to-one and there exists $c>0$ such that $\|y^{*}\|\le c\|A^{*}y^{*}\|$ for all $y^{*}\in Y^{*}$.

(iv) $(\ker A^{*})^{\perp}=\operatorname{cl}(\operatorname{rge}A)$.
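The weak* closure in assertion (i) cannot be omitted in general; a standard example (added here for illustration) is the diagonal operator $A\colon\ell^{2}\to\ell^{2}$ defined by $A(x_1,x_2,x_3,\dots)=(x_1,x_2/2,x_3/3,\dots)$, for which $A^{*}=A$ under the usual identification $(\ell^{2})^{*}=\ell^{2}$. Here $\ker A=\{0\}$, so $(\ker A)^{\perp}=\ell^{2}$, while $\operatorname{rge}A^{*}$ is a proper dense subspace of $\ell^{2}$: for instance, $(1/n)_{n\ge1}$ belongs to $\operatorname{cl}(\operatorname{rge}A)\setminus\operatorname{rge}A$, since its preimage would be the sequence $(1,1,1,\dots)\notin\ell^{2}$.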
Suppose that $A_0,A_1,\dots,A_n$ are convex subsets of a Hausdorff locally convex topological vector space $X$ and $A=A_0\cap A_1\cap\dots\cap A_n$. By $\operatorname{int}A_i$, for $i=1,\dots,n$, we denote the interior of $A_i$. The following two propositions and one theorem can be found in the book "Theory of Extremal Problems" by A.D. Ioffe and V.M. Tihomirov (1979).
Proposition 1.3. If one has
\[ A_0\cap(\operatorname{int}A_1)\cap\dots\cap(\operatorname{int}A_n)\neq\emptyset, \]
then $N(x;A)=N(x;A_0)+N(x;A_1)+\dots+N(x;A_n)$ for any point $x\in A$.

Proposition 1.4. If one has $\operatorname{int}A_i\neq\emptyset$ for $i=1,2,\dots,n$, then, for any $x_0\in A$, the following statements are equivalent:

(a) $A_0\cap(\operatorname{int}A_1)\cap\dots\cap(\operatorname{int}A_n)=\emptyset$;

(b) there exist $x_i^{*}\in N(x_0;A_i)$ for $i=0,1,\dots,n$, not all zero, such that
\[ x_0^{*}+x_1^{*}+\dots+x_n^{*}=0. \]
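A two-dimensional illustration of Proposition 1.3 (supplied here for the reader): let $A_0=\mathbb{R}\times\{0\}$ and $A_1=(-\infty,0]\times\mathbb{R}$ in $X=\mathbb{R}^{2}$, so that $A_0\cap(\operatorname{int}A_1)\neq\emptyset$ and $A=(-\infty,0]\times\{0\}$. At $x=(0,0)$,
\[ N(x;A)=[0,+\infty)\times\mathbb{R}=\bigl(\{0\}\times\mathbb{R}\bigr)+\bigl([0,+\infty)\times\{0\}\bigr)=N(x;A_0)+N(x;A_1). \]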
Theorem 1.1 (The Moreau-Rockafellar Theorem). Let $f_1,\dots,f_m$ be proper convex functions on $X$. Then
\[ \partial(f_1+\dots+f_m)(x)\supset\partial f_1(x)+\dots+\partial f_m(x) \]
for all $x\in X$. If, at a point $x_0\in\operatorname{dom}f_1\cap\dots\cap\operatorname{dom}f_m$, all the functions $f_1,\dots,f_m$, except possibly one, are continuous, then
\[ \partial(f_1+\dots+f_m)(x)=\partial f_1(x)+\dots+\partial f_m(x) \]
for all $x\in X$.
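Without the continuity assumption, the inclusion in Theorem 1.1 can be strict; a classical example (added for illustration) runs as follows. In $X=\mathbb{R}^{2}$, let $f_1=\delta_{C_1}$ and $f_2=\delta_{C_2}$ with $C_1=\{(x,y)\mid y\ge x^{2}\}$ and $C_2=\{(x,y)\mid y\le-x^{2}\}$. Then $C_1\cap C_2=\{(0,0)\}$, neither indicator function is continuous at $(0,0)$, and
\[ \partial(f_1+f_2)(0,0)=N\bigl((0,0);\{(0,0)\}\bigr)=\mathbb{R}^{2}\supsetneq\{0\}\times\mathbb{R}=\partial f_1(0,0)+\partial f_2(0,0). \]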
Chapter 2

Differential Stability in Parametric Convex Programming Problems
This chapter establishes some new results on differential stability of convex optimization problems under inclusion constraints and functional constraints. By using a version of the Moreau-Rockafellar Theorem, which has been recalled in Theorem 1.1, and appropriate regularity conditions, we obtain formulas for computing the subdifferential and the singular subdifferential of the optimal value function.
2.1 Differential Stability of Convex Optimization Problems under Inclusion Constraints
The next theorem provides us with formulas for computing the subdifferential and the singular subdifferential of $\mu$ given in (1.4).

Theorem 2.1. Let $G\colon X\rightrightarrows Y$ be a convex set-valued mapping and $\varphi\colon X\times Y\to\overline{\mathbb{R}}$ a proper convex function. If at least one of the following regularity conditions is satisfied:

(a) $\operatorname{int}(\operatorname{gph}G)\cap\operatorname{dom}\varphi\neq\emptyset$,

(b) $\varphi$ is continuous at a point $(x_0,y_0)\in\operatorname{gph}G$,

then, for any $\bar x\in\operatorname{dom}\mu$ with $\mu(\bar x)\neq-\infty$ and for any $\bar y\in M(\bar x)$, one has
\[ \partial\mu(\bar x)=\bigcup_{(x^{*},y^{*})\in\partial\varphi(\bar x,\bar y)}\bigl[x^{*}+D^{*}G(\bar x,\bar y)(y^{*})\bigr], \]
\[ \partial^{\infty}\mu(\bar x)=\bigcup_{(x^{*},y^{*})\in\partial^{\infty}\varphi(\bar x,\bar y)}\bigl[x^{*}+D^{*}G(\bar x,\bar y)(y^{*})\bigr]. \]
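As a simple consistency check (an illustration added here; condition (b) is obviously satisfied), take $X=Y=\mathbb{R}$, $\varphi(x,y)=y$, and $G(x)=[x,+\infty)$, so that $\mu(x)=x$ and $M(x)=\{x\}$. At $(\bar x,\bar y)=(0,0)$ one has $\partial\varphi(0,0)=\{(0,1)\}$ and
\[ D^{*}G(0,0)(1)=\bigl\{x^{*}\mid (x^{*},-1)\in\{\lambda(1,-1)\mid\lambda\ge0\}\bigr\}=\{1\}, \]
so the formula of Theorem 2.1 returns $\partial\mu(0)=0+\{1\}=\{1\}=\{\mu'(0)\}$.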
2.2 Convex Programming Problems under Functional Constraints
Consider the problem
\[ \min\{\varphi(x,y)\mid (x,y)\in C,\ g_i(x,y)\le 0,\ i\in I,\ h_j(x,y)=0,\ j\in J\}, \tag{2.1} \]
in which $\varphi\colon X\times Y\to\overline{\mathbb{R}}$ is a convex function, $C\subset X\times Y$ is a convex set, $I=\{1,\dots,m\}$, $J=\{1,\dots,k\}$, $g_i\colon X\times Y\to\mathbb{R}$ $(i\in I)$ are continuous convex functions, and $h_j\colon X\times Y\to\mathbb{R}$ $(j\in J)$ are continuous affine functions. For each $x\in X$, we put
\[ G(x):=\{y\in Y\mid (x,y)\in C,\ g_i(x,y)\le 0\ (i\in I),\ h_j(x,y)=0\ (j\in J)\}, \]
so that (2.1) takes the form (1.3); the sets $G(x)$ $(x\in X)$ are convex sets.
Theorem 2.2. Suppose that the equality constraints $h_j(x,y)=0$ $(j\in J)$ are absent in (2.1). If at least one of the following regularity conditions

(a1) there exists a point $(u_0,v_0)\in\operatorname{dom}\varphi$ such that $(u_0,v_0)\in\operatorname{int}C$ and $g_i(u_0,v_0)<0$ for all $i\in I$,

(b1) $\varphi$ is continuous at a point $(x_0,y_0)\in C$ where $g_i(x_0,y_0)<0$ for all $i\in I$,

is satisfied, then, for any $\bar x\in\operatorname{dom}\mu$ with $\mu(\bar x)\neq-\infty$ and for any $\bar y\in M(\bar x)$, the subdifferential $\partial\mu(\bar x)$ and the singular subdifferential $\partial^{\infty}\mu(\bar x)$ are described explicitly via the subdifferentials of $\varphi$ and $g_i$ $(i\in I)$ and the normal cone to $C$ at $(\bar x,\bar y)$.

If $\varphi$ is continuous at a point $(x_0,y_0)$ with $(x_0,y_0)\in\operatorname{int}C$, $g_i(x_0,y_0)<0$ for all $i\in I$, and $h_j(x_0,y_0)=0$ for all $j\in J$, then analogous formulas, which also involve the affine functions $h_j$ $(j\in J)$, hold for any $\bar x\in\operatorname{dom}\mu$ with $\mu(\bar x)\neq-\infty$ and for any $\bar y\in M(\bar x)$.
Chapter 3

Differential Stability under the Aubin Regularity Condition
Let $G\colon X\rightrightarrows Y$ be a convex multifunction between Banach spaces whose graph is closed, and let $\varphi\colon X\times Y\to\overline{\mathbb{R}}$ be a proper, closed, convex function. Consider the parametric optimization problem under an inclusion constraint
\[ \min\{\varphi(x,y)\mid y\in G(x)\}, \tag{3.1} \]
depending on the parameter $x$. Using the regularity condition
\[ (0,0)\in\operatorname{int}(\operatorname{dom}\varphi-\operatorname{gph}G), \tag{3.2} \]
we will derive formulas for computing the subdifferential and the singular subdifferential of the optimal value function $\mu\colon X\to\overline{\mathbb{R}}$ of (3.1), which is given by
\[ \mu(x):=\inf\{\varphi(x,y)\mid y\in G(x)\}. \tag{3.3} \]
We now consider an example in which Aubin's regularity condition (3.2) is satisfied and the conclusion of Theorem 3.1 holds true, while both regularity conditions (a) and (b) of Theorem 2.1 fail to be fulfilled.
Example 3.1. Let $X=Y=\mathbb{R}^{2}$ and $(\bar x,\bar y)=(0,0)$. Consider the optimal value function $\mu(x)$ defined by (3.3) with $\varphi_0(y)=0$ if $y_1=0$ and $\varphi_0(y)=+\infty$ if $y_1\neq0$, for every $y=(y_1,y_2)\in Y$, and
\[ G(x)=\begin{cases}\mathbb{R}\times\{0\} & \text{if } x=0,\\ \emptyset & \text{if } x\neq0,\end{cases} \]
for every $x=(x_1,x_2)\in X$. Clearly, $\varphi_0$ is a proper, closed, convex function with $\operatorname{dom}\varphi_0$ being closed. In addition, $G$ is a convex multifunction with closed graph. Setting $\varphi(x,y)=\varphi_0(y)$ for all $(x,y)\in X\times Y$, we have $\operatorname{gph}G=\{0_{\mathbb{R}^{2}}\}\times\mathbb{R}\times\{0\}$ and $\operatorname{dom}\varphi=\mathbb{R}^{2}\times\{0\}\times\mathbb{R}$. Since $\operatorname{int}(\operatorname{gph}G)=\emptyset$, the regularity condition $\operatorname{int}(\operatorname{gph}G)\cap\operatorname{dom}\varphi\neq\emptyset$ fails to hold. Obviously, $\varphi$ is discontinuous at every point $(x_0,y_0)\in\operatorname{gph}G$. Meanwhile, $\operatorname{dom}\varphi-\operatorname{gph}G=X\times Y$, so (3.2) is satisfied. It is easy to see that
\[ \mu(x)=\inf\{\varphi_0(y)\mid y\in G(x)\}=\begin{cases}0 & \text{if } x=0,\\ +\infty & \text{if } x\neq0.\end{cases} \]
A simple calculation shows that $\partial\mu(\bar x)=\mathbb{R}^{2}$ and $\partial\varphi(\bar x,\bar y)=\{0_{\mathbb{R}^{2}}\}\times\mathbb{R}\times\{0\}$. For any $y^{*}=(y_1^{*},0)\in\mathbb{R}\times\{0\}$, we have $(x^{*},-y^{*})\in N((\bar x,\bar y);\operatorname{gph}G)=\mathbb{R}^{2}\times(\{0\}\times\mathbb{R})$ if and only if $y_1^{*}=0$; hence $D^{*}G(\bar x,\bar y)(y^{*})=\mathbb{R}^{2}$ when $y_1^{*}=0$ and $D^{*}G(\bar x,\bar y)(y^{*})=\emptyset$ otherwise. Consequently,
\[ \bigcup_{(x^{*},y^{*})\in\partial\varphi(\bar x,\bar y)}\bigl[x^{*}+D^{*}G(\bar x,\bar y)(y^{*})\bigr]=\mathbb{R}^{2}=\partial\mu(\bar x), \]
in accordance with the conclusion of Theorem 3.1.