On Higher-Order Sensitivity Analysis in Nonsmooth Vector Optimization
H. T. H. Diem · P. Q. Khanh · L. T. Tung
Abstract We propose the notion of a higher-order radial-contingent derivative of a set-valued map, develop some calculus rules, and apply them directly to obtain optimality conditions for several particular optimization problems. Then, we employ this derivative together with contingent-type derivatives to analyze sensitivity for nonsmooth vector optimization. Properties of higher-order contingent-type derivatives of the perturbation and weak perturbation maps of a parameterized optimization problem are obtained.
Keywords Sensitivity · Higher-order radial-contingent derivative · Higher-order contingent-type derivative · Set-valued vector optimization · Perturbation map · Weak perturbation map
2010 Mathematics Subject Classifications: 90C31, 49J52, 49J53
P Q Khanh (corresponding author)
Department of Mathematics, International University of Hochiminh City, Linh Trung, Thu Duc, Hochiminh City, Vietnam
e-mail: pqkhanh@hcmiu.edu.vn
L T Tung
Department of Mathematics, College of Sciences, Cantho University, Cantho, Vietnam
e-mail: lttung@ctu.edu.vn
For sensitivity analysis in smooth optimization problems, the reader is referred to the book [1] by Fiacco. For nonsmooth optimization, the first related works are [2, 3], where Tanino studied the behavior of solution maps, called perturbation or weak perturbation maps, in terms of contingent derivatives. The TP-derivative was proposed in [4] and used to weaken some assumptions in [2]. Behaviors of many kinds of efficient points were investigated in [5]. The papers [3, 6, 7] studied the behavior of perturbation maps in nonsmooth convex problems. Important results on sensitivity analysis were obtained by Levy and Rockafellar for generalized equations, a general model including optimization/minimization problems, in [8, 9], using the proto-derivative notion introduced by Rockafellar in [10]. (Recall that the proto-derivative of a map is the contingent derivative which coincides with the adjacent derivative.) Some developments were obtained in [11, 12]. Levy and Mordukhovich investigated sensitivity in terms of coderivatives in [13, 14], while the generalized Clarke epiderivative was the tool for analyzing sensitivity in [15]. All the above-mentioned works dealt with only first-order sensitivity analysis. For higher-order considerations we observe only references [16, 17]. In [16], the (higher-order) lower Studniarski derivative (defined in [18]) of perturbation maps in vector optimization was considered. In [17], variational sets, introduced recently in [19, 20, 21] together with calculus rules and applications in establishing higher-order optimality conditions, were employed to deal with sensitivity of perturbation and weak perturbation maps of vector optimization.
Since higher-order considerations for sensitivity, like those for optimality conditions and many other topics in optimization, are of great importance, we aim to deal with this subject in the present paper. Our tools of generalized derivatives are different from those in [16, 17]. First, we propose the notion of a higher-order radial-contingent derivative and develop some calculus rules. This kind of derivative of set-valued maps combines the ideas of the well-known (higher-order) contingent derivative and the radial derivatives, which were developed and successfully used recently in establishing optimality conditions in [22, 23]. This combination makes the radial-contingent derivative bigger than the contingent-type derivative (as set-valued maps) and hence leads to better results in research on optimality conditions and sensitivity analysis. Furthermore, unlike the radial derivative, which captures global properties of a map, the radial-contingent derivative reflects the local nature of a map and is more suitable in such research. We apply this kind of derivative in a way similar to the TP-derivative employed in [4], but now for higher-order considerations. While the radial-contingent derivative appears mainly in our assumptions, the conclusions of our results are in terms of contingent-type derivatives. This derivative is different from the well-known (higher-order) contingent derivative and has appeared in the literature also under the name "upper Studniarski derivative".
The plan of this paper is as follows. In Sect. 2, some definitions and preliminary facts are collected for our use in the sequel. We define the higher-order radial-contingent derivative, develop its calculus rules, and apply them directly to establishing optimality conditions for various kinds of solutions to some particular vector optimization problems, for illustrative purposes, in Sect. 3. Section 4 consists of relations between contingent-type derivatives of a set-valued map and its profile map (defined at the beginning of Sect. 2), and also relations between sets of various kinds of efficient points of these derivatives. In Sect. 5, we discuss relations between contingent-type derivatives of the perturbation and weak perturbation maps and the feasible-set map in a general vector optimization problem. The short Sect. 6 contains some concluding remarks.
In this paper, if not otherwise stated, let X, Y, and Z be normed spaces, and C ⊆ Y a closed convex cone. U(x0) is used for the set of neighborhoods of x0. R, R+, and N stand for the set of real numbers, nonnegative real numbers, and natural numbers, respectively (shortly, resp.). For M ⊆ X, intM, clM, and bdM denote its interior, closure, and boundary, resp. A convex set B ⊆ Y is called a base of C iff 0 ∉ clB and C = {tb | t ∈ R+, b ∈ B}. Clearly, C has a compact base B if and only if C ∩ bdB is compact. For H : X → 2^Y, the domain, graph, and epigraph of H are defined by, resp.,

domH := {x ∈ X | H(x) ≠ ∅}, grH := {(x, y) ∈ X × Y | y ∈ H(x)}, epiH := {(x, y) ∈ X × Y | y ∈ H(x) + C}.
(iii) Assuming that C is pointed, a0 is termed a Henig-proper minimal/efficient point of A, denoted by a0 ∈ HeC A, iff there exists a convex cone K ⊊ Y with C \ {0} ⊆ intK and
Recall now the two kinds of higher-order derivatives which we are most concerned with in the sequel. Let F : X → 2^Y, u ∈ X, m ∈ N, and (x0, y0) ∈ grF.
(i) ([18]) The mth-order contingent-type derivative of F at (x0, y0) is defined by

D^mF(x0, y0)(u) := {v ∈ Y | ∃tn ↓ 0, ∃(un, vn) → (u, v), y0 + tn^m vn ∈ F(x0 + tn un)}.

Setting (xn, yn) := (x0 + tn un, y0 + tn^m vn) and γn := tn^{-1}, we have

D^mF(x0, y0)(u) = {v ∈ Y | ∃γn > 0, ∃(xn, yn) ∈ grF : (xn, yn) → (x0, y0), (γn(xn − x0), γn^m(yn − y0)) → (u, v)}.
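As a quick check of this definition (an illustrative computation of ours, not part of the original text), consider the single-valued map f(x) = x^2 at (x0, y0) = (0, 0) with m = 2:

```latex
% Worked example: f(x) = x^2, (x_0, y_0) = (0, 0), m = 2.
% v \in D^2 f(0,0)(u) requires t_n \downarrow 0 and (u_n, v_n) \to (u, v) with
%   t_n^2 v_n = f(t_n u_n) = t_n^2 u_n^2,
% so v_n = u_n^2 \to u^2, forcing v = u^2; conversely, u_n := u, v_n := u^2 work. Hence
D^2 f(0,0)(u) = \{u^2\} \quad \text{for all } u \in \mathbb{R}.
```

This agrees with Proposition 3.2 below, since the modified second-order derivative there satisfies d^2f(0)(u, u) = u^2.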
(ii) ([25]) The mth-order radial derivative of F at (x0, y0) is DR^mF(x0, y0), defined by

DR^mF(x0, y0)(u) := {v ∈ Y | ∃tn > 0, ∃(un, vn) → (u, v), y0 + tn^m vn ∈ F(x0 + tn un)}.
We use the term "contingent-type" to reflect the similarity to the well-known mth-order contingent derivative of F at (x0, y0) wrt (u1, v1), ..., (um−1, vm−1) ∈ X × Y, defined as (see [29], Chapter 5)

D^mF(x0, y0, u1, v1, ..., um−1, vm−1)(u) := {v ∈ Y | ∃tn → 0+, ∃(un, vn) → (u, v), y0 + tn v1 + ... + tn^{m−1} vm−1 + tn^m vn ∈ F(x0 + tn u1 + ... + tn^{m−1} um−1 + tn^m un)}.
DR^mF(x0, y0) is called the mth-order outer radial derivative. There are the corresponding lower/inner objects obtained by replacing "∃tn, ∃un" by "∀tn, ∀un". Since we consider only the upper/outer objects, we omit these adjectives.
3 Higher-order radial-contingent derivatives
Now, we propose an object, intermediate between D^mF(x0, y0) and DR^mF(x0, y0), as follows.
Definition 3.1 The mth-order radial-contingent derivative of F at (x0, y0) is DS^mF(x0, y0), defined by

DS^mF(x0, y0)(u) := {v ∈ Y | ∃γn > 0, ∃(xn, yn) ∈ grF : xn → x0, (γn(xn − x0), γn^m(yn − y0)) → (u, v)}.
Note that DS^1F(x0, y0) was introduced in [4] and called the TP-derivative. To have some comparisons, we propose a higher-order derivative corresponding to the adjacent derivative (see [29], Chapter 5) in the same way as D^mF(x0, y0) corresponds to D^1F(x0, y0), as follows.
The mth-order adjacent-type derivative of F at (x0, y0) is Db^mF(x0, y0), defined by

Db^mF(x0, y0)(u) := {v ∈ Y | ∀tn ↓ 0, ∃(un, vn) → (u, v), y0 + tn^m vn ∈ F(x0 + tn un)}.
Clearly, Db^mF(x0, y0)(u) ⊆ D^mF(x0, y0)(u). This inclusion may be strict for a suitable F : R → 2^R.
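A map for which this inclusion is strict at order m = 1 can be built by letting the graph approach the origin only along a geometric sequence; the following construction is ours, offered as an illustration (Db^1 is written \widehat{D}^1 below):

```latex
% Illustrative map (m = 1): K := \{2^{-k} : k \in \mathbb{N}\} \cup \{0\},
% F(x) := \{0\} if x \in K, and F(x) := \emptyset otherwise; (x_0, y_0) = (0, 0).
% Contingent-type: taking t_n = 2^{-n}, u_n = 1, v_n = 0 gives 0 \in D^1 F(0,0)(1).
% Adjacent-type: for t_n = 3 \cdot 2^{-n}, any u_n with t_n u_n \in K satisfies
%   u_n = 2^{n-k}/3 \in \{2^{j}/3 : j \in \mathbb{Z}\}, and no such value tends to 1;
% hence no sequence u_n \to 1 works for this t_n, and
\widehat{D}^1 F(0,0)(1) = \emptyset \subsetneq \{0\} = D^1 F(0,0)(1).
```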
Clearly, Dl^mF(x0, y0)(u) ⊆ Db^mF(x0, y0)(u). Similarly to the preceding strict inclusion, this inclusion may also be strict.
The proofs of the following properties are immediate.
Proposition 3.1 Let F : X → 2^Y, u ∈ X, m ∈ N, and (x0, y0) ∈ grF. If u is nonzero, then DS^mF(x0, y0)(u) = D^mF(x0, y0)(u).
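For nonzero u, the equality can be seen directly from the definitions; we sketch the one-line argument (a step we add for completeness):

```latex
% If u \ne 0 and (\gamma_n(x_n - x_0), \gamma_n^m(y_n - y_0)) \to (u, v) with x_n \to x_0,
% then \|\gamma_n(x_n - x_0)\| \to \|u\| > 0 while \|x_n - x_0\| \to 0, so \gamma_n \to +\infty.
% Writing t_n := \gamma_n^{-1} \downarrow 0, u_n := \gamma_n(x_n - x_0), v_n := \gamma_n^m(y_n - y_0),
% we recover exactly the sequences in the definition of D^m F(x_0, y_0)(u). Hence
D_S^m F(x_0, y_0)(u) = D^m F(x_0, y_0)(u) \quad (u \ne 0).
```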
The first two inclusions in Proposition 3.1 (iii) are shown above to be possibly strict. The following example shows that so are the other two.
Example 3.1 Let X = Y = R, (x0, y0) = (0, 0), and

F(x) = {0} if x ≤ 0, and F(x) = {1, −x^2} if x > 0.

Then, we have

DR^2F(x0, y0)(u) = {0} if u < 0, and DR^2F(x0, y0)(u) = R+ ∪ {−u^2} if u ≥ 0.
D^2F(x0, y0)(0) ⊊ DS^2F(x0, y0)(0),

DS^2F(x0, y0)(u) ⊊ DR^2F(x0, y0)(u) for all u > 0.
Now we discuss the possibility of having equalities in Proposition 3.1 (iii), except for the last derivative DR^mF(x0, y0)(u), which has a global character (unlike the local character of the others). Consider the special case where F = f, a single-valued map. Since the above-mentioned higher-order derivatives (except D^mF(x0, y0, u1, v1, ..., um−1, vm−1)) do not include the intermediate powers from 2 to m − 1, they cannot be compared with the Fréchet derivative. We state the corresponding modification of this classical object as follows: for f : X → Y and x0 ∈ X, d^mf(x0) is a map from X to L(X, L(X, ..., L(X, Y )...)) (m times of L) such that the following holds, where L(X, Y) denotes the space of bounded linear maps from X to Y and d^mf(x0)(x, x, ..., x) = (...((d^mf(x0)x)x)...)x (m times of x):

lim_{x → x0} (f(x) − f(x0) − d^mf(x0)(x − x0, ..., x − x0)) / ‖x − x0‖^m = 0.
Proposition 3.2 For f : X → Y and x0, u ∈ X, if there exists d^mf(x0), then

{d^mf(x0)(u, u, ..., u)} = Dl^mf(x0, f(x0))(u) = Db^mf(x0, f(x0))(u) = D^mf(x0, f(x0))(u) = DS^mf(x0, f(x0))(u).
Proof By similarity, we consider only the case m = 2. Assume that d^2f(x0) exists. Then, for all u ∈ X, we have the following characterizations.
It remains to show that DS^2f(x0, f(x0))(u) ⊆ {d^2f(x0)(u, u)}. Let v ∈ DS^2f(x0, f(x0))(u). Then there exist tn > 0 and (un, vn) → (u, v) such that tn un → 0 and f(x0) + tn^2 vn = f(x0 + tn un). Setting hn = tn un, one has
We will see later that, though DS^mF(x0, y0) differs from D^mF(x0, y0) only at the origin, it plays a significant role in addressing optimality conditions and sensitivity analysis. Furthermore, among the above-mentioned generalized derivatives, only DR^mF has a global character. All the others have a local character, since tn ↓ 0 or tn un → 0 appears in their definitions.
For some calculus rules of the derivative DS^mF, we need the following notion.
Definition 3.2 Let F : X → 2^Y, (x0, y0) ∈ grF, u ∈ X, and m ∈ N. If

DS^mF(x0, y0)(u) = {v ∈ Y | ∀tn > 0, ∀un → u : tn un → 0, ∃vn → v, y0 + tn^m vn ∈ F(x0 + tn un)},

and the set on the right-hand side is nonempty, then DS^mF(x0, y0) is called an mth-order radial-semi-derivative of F at (x0, y0) in direction u.

We choose the term "semi-derivative" following the idea of Penot for semi-differentiability (of order one) in [30]. Note further that in this paper we need to assume this property only when we are concerned with some calculus rules (we do not need it when we apply DS^mF without using these rules). This property clearly holds if the left-hand side of the equality in Definition 3.2 is a singleton.
Proposition 3.3 Let F1, F2 : X → 2^Y, x0 ∈ int(domF1) ∩ domF2, u ∈ X, and yi ∈ Fi(x0) for i = 1, 2. Suppose F1 has an mth-order radial-semi-derivative DS^mF1(x0, y1) in direction u. Then

DS^mF1(x0, y1)(u) + DS^mF2(x0, y2)(u) ⊆ DS^m(F1 + F2)(x0, y1 + y2)(u).
Example 3.2 Let X = Y = R, x0 = y1 = y2 = 0, and
(i) Suppose G has an mth-order radial-semi-derivative DS^mG(y0, z0) in any direction in DS^1F(x0, y0)(u). Then,

DS^mG(y0, z0)(DS^1F(x0, y0)(u)) ⊆ DS^m(G ◦ F)(x0, z0)(u).
(ii) Suppose G has a radial-semi-derivative DS^1G(y0, z0) in any direction in DS^mF(x0, y0)(u). Then,

DS^1G(y0, z0)(DS^mF(x0, y0)(u)) ⊆ DS^m(G ◦ F)(x0, z0)(u).
Proof By similarity, we prove only (i). Let u ∈ X, v1 ∈ DS^1F(x0, y0)(u), and v2 ∈ DS^mG(y0, z0)(v1). There exist tn > 0, un → u, and vn^1 → v1 such that tn un → 0 and, for all n, y0 + tn vn^1 ∈ F(x0 + tn un). Since DS^mG(y0, z0) is an mth-order radial-semi-derivative in direction v1, with the tn and vn^1 above, there exists vn^2 → v2 such that z0 + tn^m vn^2 ∈ G(y0 + tn vn^1). So, z0 + tn^m vn^2 ∈ (G ◦ F)(x0 + tn un), i.e., v2 ∈ DS^m(G ◦ F)(x0, z0)(u).
The following properties are immediate from the definitions.
Proposition 3.5 Let F : X → 2^Y, (x0, y0) ∈ grF, λ > 0, and β ∈ R. Then, for all u ∈ X,

(i) DS^m(βF)(x0, βy0)(u) = βDS^mF(x0, y0)(u);

(ii) DS^mF(x0, y0)(λu) = λ^m DS^mF(x0, y0)(u).
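For instance, item (ii) follows by rescaling the sequences in Definition 3.1; we include the short verification (added for completeness):

```latex
% Let v \in D_S^m F(x_0, y_0)(\lambda u): there are \gamma_n > 0 and (x_n, y_n) \in \mathrm{gr}\,F with
%   x_n \to x_0, \quad \gamma_n(x_n - x_0) \to \lambda u, \quad \gamma_n^m(y_n - y_0) \to v.
% Setting \gamma_n' := \gamma_n / \lambda (valid since \lambda > 0) gives
%   \gamma_n'(x_n - x_0) \to u, \quad (\gamma_n')^m(y_n - y_0) \to v / \lambda^m,
% i.e., v/\lambda^m \in D_S^m F(x_0, y_0)(u). The reverse inclusion is symmetric, so
D_S^m F(x_0, y_0)(\lambda u) = \lambda^m D_S^m F(x_0, y_0)(u).
```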
Now we apply mth-order radial-contingent derivatives to establish necessary optimality conditions for Q-minimal solutions of some particular optimization problems. Consider first the following unconstrained problem, for F : X → 2^Y:

(P) min F(x), x ∈ X.
For a vector optimization problem, from the concepts of optimality/efficiency recalled at the end of Sect. 1, we define in the usual and natural way the corresponding solution notions. For instance, (x0, y0) ∈ grF is called a local Q-minimal solution of (P) iff there exists U ∈ U(x0) such that (F(U) − y0) ∩ (−Q) = ∅.
Proposition 3.6 Let X, Y, and Q be as before, F : X → 2^Y, and (x0, y0) ∈ grF. If (x0, y0) is a local Q-minimal solution of (P), then, for all m ∈ N, DS^mF(x0, y0)(X) ∩ (−Q) = ∅.
Proof Suppose to the contrary that there exist u ∈ X and v ∈ DS^mF(x0, y0)(u) ∩ (−Q). Then, there exist sequences tn > 0 and (un, vn) → (u, v) such that tn un → 0 and y0 + tn^m vn ∈ F(x0 + tn un). Since the cone Q is open, tn^m vn ∈ −Q for large n. Therefore, for such n, tn^m vn ∈ (F(x0 + tn un) − y0) ∩ (−Q), a contradiction.

Proposition 3.6 is applicable, while some recent existing results are not, in the following example.
Example 3.3 Let X = Y = R, (x0, y0) = (0, 0), Q = intR+, C = R+, and
Since D^1F(x0, y0)(u) ∩ (−intC) = ∅ for all u ∈ X, we cannot use Theorem 2.1 in [22] to reject the candidate (x0, y0) for a weak solution. As D^2F(x0, y0)(u) ∩ (−intC) = ∅ for all u ∈ X, the second-order contingent-type derivative cannot be used either. But DS^2F(x0, y0)(0) ∩ (−intC) ≠ ∅. So, Proposition 3.6 rejects (x0, y0).
The next example indicates the necessity of higher-order considerations.
Example 3.4 Let X = Y = R, Q = intR+, C = R+, (x0, y0) = (0, 0), u = 0, and
= ∅, Theorem 3.1 in [31] with a first-order condition gives nothing. Since DS^2F(x0, y0)(u) ∩ (−intC) ≠ ∅, (x0, y0) is not a weak solution, due to Proposition 3.6.
Proposition 3.7 Assume that X is finite-dimensional, C has a compact base B, F : X → 2^Y, and (x0, y0) ∈ grF. If, for at least one m ≥ 1,

(i) DS^mF(x0, y0)(0) ∩ (−C) = {0};

(ii) DS^mF(x0, y0)(u) ∩ (−C) = ∅ for all nonzero u,

then (x0, y0) is a local minimal solution of (P).
Proof Suppose to the contrary that there exist xn → x0 and yn ∈ F(xn) such that yn − y0 ∈ −C \ {0}. There are rn > 0 and bn ∈ B such that yn − y0 = −rn bn and, B being compact, bn → b for some b ∈ B (taking a subsequence). Set sn := ‖xn − x0‖ and tn := rn^{1/m}. Since X is finite-dimensional, there exists u ∈ X \ {0} such that (xn − x0)/sn → u. If sn/tn → ∞, then, with γn := sn^{-1}, one has γn(xn − x0) → u and γn^m(yn − y0) = −(tn/sn)^m bn → 0. Hence, 0 ∈ DS^mF(x0, y0)(u), contradicting (ii). If {sn/tn} has a convergent subsequence, say sn/tn → α ≥ 0, then y0 + tn^m(−bn) ∈ F(xn) = F(x0 + tn[((xn − x0)/sn)(sn/tn)]). Since ((xn − x0)/sn)(sn/tn) → αu, we get −b ∈ DS^mF(x0, y0)(αu), which contradicts (i) if α = 0 or (ii) if α ≠ 0.
The next two examples explain the advantages of Proposition 3.7 over recent existing results.
Example 3.5 Let X = Y = R, C = R+, (x0, y0) = (0, 1), and
Then, for all u ∈ X,
Example 3.6 Let X, Y, C, and (x0, y0) be as in Example 3.5. Let
Applying the above chain rule for mth-order radial-contingent derivatives, we easily establish necessary optimality conditions for local Q-minimal solutions of the following problem:

(P1) min F(x′) subject to x ∈ X and x′ ∈ G(x),

where F : X → 2^Y and G : X → 2^X. This problem can be restated as the unconstrained problem min(F ◦ G)(x) s.t. x ∈ X.
Proposition 3.8 Let ImG ⊆ domF, (x0, z0) ∈ grG, and (z0, y0) ∈ grF. Assume that (x0, y0) is a local Q-minimal solution of (P1) and u is any point in X.
(i) If F has an mth-order radial-semi-derivative DS^mF(z0, y0) in any direction in DS^1G(x0, z0)(u), then DS^mF(z0, y0)(DS^1G(x0, z0)(u)) ∩ (−Q) = ∅.
Proof By similarity, we prove only (i). From Proposition 3.6, for u ∈ X we have DS^m(F ◦ G)(x0, y0)(u) ∩ (−Q) = ∅. Proposition 3.4 (i) implies that
A map EF(x0, y0) : X → Y satisfying epi(EF(x0, y0)) = T_epiF(x0, y0) is said to be the contingent epiderivative of F at (x0, y0) ∈ grF.
Example 3.7 Let X = Y = R, Q = intR+, C = R+, G(x) = {−|x|}, and
Since F has an mth-order radial-semi-derivative at (G(0), 0) in all directions in DS^1G(0, G(0))(0) = {0}, and DS^mF(G(0), 0)[DS^1G(0, G(0))(0)] = R−, which meets −intC, Proposition 3.8 above rejects the candidate (0, 0).
To illustrate the sum rule, we consider the following problem.
(PC) has also been studied independently of (P2). Optimality conditions for this general problem (PC) were obtained in [32] by using sum rules and scalar product rules for contingent epiderivatives. In [21], problem (PC) was investigated by using variational sets. Now, we apply Propositions 3.3, 3.5, and 3.6 for mth-order radial-contingent derivatives to get the following necessary condition for local Q-minimal solutions of (PC). Here, s can be any positive number.
Proposition 3.9 Let domF ⊆ domG, x0 ∈ M, y0 ∈ F(x0), u ∈ X, and let either F or G have an mth-order radial-semi-derivative at (x0, y0) or (x0, 0), resp., in direction u. If (x0, y0) is a local Q-minimal solution of (PC), then

(DS^mF(x0, y0)(u) + sDS^mG(x0, 0)(u)) ∩ (−Q) = ∅.
Proof By Proposition 3.6, one gets DS^m(F + sG)(x0, y0)(u) ∩ (−Q) = ∅. According to Proposition 3.5, sDS^mG(x0, 0)(u) = DS^m(sG)(x0, 0)(u). Then, Proposition 3.3 completes the proof:

DS^mF(x0, y0)(u) + sDS^mG(x0, 0)(u) ⊆ DS^m(F + sG)(x0, y0 + 0)(u).
The next example indicates a case where Proposition 3.9 is more advantageous than earlier existing results.
Example 3.8 Let X = Y = R, Q = intR+, C = R+, g(x) = x^4 − 2x^3, and
Since G has a radial-semi-derivative of order 1 at (0, 0) in any direction, DS^1F(0, 0)(0) = R−, and {0} ⊆ DS^1G(0, 0)(0) ⊆ R+. So, (DS^1F(0, 0)(0) + sDS^1G(0, 0)(0)) ∩ (−intC) ≠ ∅. In view of Proposition 3.9, (x0, y0) is not a local weak solution of (PC). This fact can also be checked directly.
4 Higher-order contingent-type derivatives
In this section, we discuss relations between higher-order contingent-type derivatives of a set-valued map and those of its profile map. Such relations for various kinds of efficient