
On Higher-Order Sensitivity Analysis in Nonsmooth Vector Optimization

H. T. H. Diem · P. Q. Khanh · L. T. Tung

Abstract We propose the notion of the higher-order radial-contingent derivative of a set-valued map, develop some calculus rules and apply them directly to obtain optimality conditions for several particular optimization problems. Then, we employ this derivative together with contingent-type derivatives to analyze sensitivity for nonsmooth vector optimization. Properties of higher-order contingent-type derivatives of the perturbation and weak perturbation maps of a parameterized optimization problem are obtained.

Keywords Sensitivity · Higher-order radial-contingent derivative · Higher-order contingent-type derivative · Set-valued vector optimization · Perturbation map · Weak perturbation map

2010 Mathematics Subject Classifications: 90C31, 49J52, 49J53

P. Q. Khanh (corresponding author)

Department of Mathematics, International University of Hochiminh City, Linh Trung, Thu Duc, Hochiminh City, Vietnam

e-mail: pqkhanh@hcmiu.edu.vn

L. T. Tung

Department of Mathematics, College of Sciences, Cantho University, Cantho, Vietnam

e-mail: lttung@ctu.edu.vn


problems, the reader is referred to the book [1] by Fiacco. For nonsmooth optimization, the first related works are [2, 3], where Tanino studied the behavior of solution maps, called perturbation or weak perturbation maps, in terms of contingent derivatives. The TP-derivative was proposed in [4] and used to weaken some assumptions in [2]. Behaviors of many kinds of efficient points were investigated in [5]. The papers [3, 6, 7] studied the behavior of perturbation maps in nonsmooth convex problems. Important results on sensitivity analysis were obtained by Levy and Rockafellar for generalized equations, a general model including optimization/minimization problems, in [8, 9], using the proto-derivative notion introduced by Rockafellar in [10]. (Recall that the proto-derivative of a map is the contingent derivative which coincides with the adjacent derivative.) Some developments were obtained in [11, 12]. Levy and Mordukhovich investigated sensitivity in terms of coderivatives in [13, 14], while the generalized Clarke epiderivative was the tool for analyzing sensitivity in [15]. All the above-mentioned works dealt with only first-order sensitivity analysis. For higher-order considerations we observe only references [16, 17]. In [16], the (higher-order) lower Studniarski derivative (defined in [18]) of perturbation maps in vector optimization was considered. In [17], variational sets, introduced recently in [19, 20, 21] together with calculus rules and applications in establishing higher-order optimality conditions, were employed to deal with sensitivity of perturbation and weak perturbation maps of vector optimization.

Since higher-order considerations for sensitivity, like for optimality conditions and many other topics in optimization, are of great importance, we aim to deal with this subject in the present paper. Our tools of generalized derivatives are different from [16, 17]. First we propose the notion of higher-order radial-contingent derivative and develop some calculus rules. This kind of derivative of set-valued maps combines the ideas of the well-known (higher-order) contingent derivative and the radial derivatives, which were developed and successfully used recently in establishing optimality conditions in [22, 23]. This combination makes the radial-contingent derivative bigger than the contingent-type derivative (as set-valued maps) and hence leads to better results in researches on optimality conditions and sensitivity analysis. Furthermore, unlike the radial derivative, which captures global properties of a map, the radial-contingent derivative reflects local natures of a map, and is more suitable in such researches. We apply this kind of derivative with some similarity to the TP-derivative employed in [4], but now for higher-order considerations. While the radial-contingent derivative appears mainly in our assumptions, the conclusions of our results are in terms of contingent-type derivatives. This derivative is different from the well-known (higher-order) contingent derivative and has appeared in the literature also under the name “upper Studniarski derivative”.

The plan of this paper is as follows. In Sect 2, some definitions and preliminary facts are collected for our use in the sequel. We define the higher-order radial-contingent derivative, develop its calculus rules and apply them directly to establishing optimality conditions for various kinds of solutions to some particular vector optimization problems, for the illustrative purpose, in Sect 3. Section 4 consists of relations between contingent-type derivatives of a set-valued map and its profile map (defined at the beginning of Sect 2), and also relations between sets of various kinds of efficient points of these derivatives. In Sect 5, we discuss relations between contingent-type derivatives of the perturbation, weak perturbation maps and the feasible-set map in a general vector optimization problem. The short Sect 6 contains some concluding remarks.

In this paper, if not otherwise stated, let X, Y and Z be normed spaces, and C ⊆ Y a closed convex cone. U(x_0) is used for the set of the neighborhoods of x_0. R, R_+, and N stand for the set of the real numbers, nonnegative real numbers, and natural numbers, respectively (shortly, resp.). For M ⊆ X, int M, cl M, bd M denote its interior, closure and boundary, resp. A convex set B ⊆ Y is called a base of C iff 0 ∉ cl B and C = {tb | t ∈ R_+, b ∈ B}. Clearly C has a compact base B if and only if C ∩ bd B is compact. For H : X → 2^Y, the domain, graph, and epigraph of H are defined by, resp.,

dom H := {x ∈ X | H(x) ≠ ∅},  gr H := {(x, y) ∈ X × Y | y ∈ H(x)},  epi H := {(x, y) ∈ X × Y | y ∈ H(x) + C}.
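For instance (an illustration we add, not taken from the paper): for C = R_+^2 ⊆ R^2, the segment B = {(t, 1 − t) | t ∈ [0, 1]} is a compact convex base of C, since 0 ∉ cl B and every c = (c_1, c_2) ∈ C \ {0} can be written as c = (c_1 + c_2)b with b = c/(c_1 + c_2) ∈ B, while 0 = 0·b for any b ∈ B.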


(iii) Assuming that C is pointed, a_0 is termed a Henig-proper minimal/efficient point of A, denoted by a_0 ∈ He_C A, iff there exists a convex cone K ⊊ Y with C \ {0} ⊆ int K and

Recall now the two kinds of higher-order derivatives which we are most concerned with in the sequel. Let F : X → 2^Y, u ∈ X, m ∈ N, and (x_0, y_0) ∈ gr F.

(i) ([18]) The mth-order contingent-type derivative of F at (x_0, y_0) is defined by

D^m F(x_0, y_0)(u) := {v ∈ Y | ∃ t_n ↓ 0, ∃ (u_n, v_n) → (u, v), y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n)}.

Setting (x_n, y_n) := (x_0 + t_n u_n, y_0 + t_n^m v_n) and γ_n = t_n^{-1}, we have

D^m F(x_0, y_0)(u) = {v ∈ Y | ∃ γ_n > 0, ∃ (x_n, y_n) ∈ gr F : (x_n, y_n) → (x_0, y_0),

(γ_n(x_n − x_0), γ_n^m(y_n − y_0)) → (u, v)}.
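As a quick illustration (our own computation, not part of the paper): take F : R → 2^R with F(x) = {x^2}, (x_0, y_0) = (0, 0) and m = 2. For any u ∈ R, any t_n ↓ 0 and u_n → u, the inclusion y_0 + t_n^2 v_n ∈ F(x_0 + t_n u_n) forces v_n = u_n^2 → u^2, so D^2 F(0, 0)(u) = {u^2}.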


(ii) ([25]) The mth-order radial derivative of F at (x_0, y_0) is D^m_R F(x_0, y_0), defined by

D^m_R F(x_0, y_0)(u) := {v ∈ Y | ∃ t_n > 0, ∃ (u_n, v_n) → (u, v), y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n)}.
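To see the difference in character between the two derivatives (an example we supply under the above definitions), let F(x) = {0, 1} for all x ∈ R and (x_0, y_0) = (0, 0). Since t_n ↓ 0 forces v_n ∈ {0, 1/t_n^m} with 1/t_n^m → ∞, we get D^m F(0, 0)(u) = {0} for every u; in the radial derivative t_n need not tend to 0, so taking t_n ≡ v^{−1/m} for any v > 0 gives v_n = 1/t_n^m = v, whence D^m_R F(0, 0)(u) = R_+ for every u. The radial derivative thus “sees” the point 1 ∈ F(x) far from y_0 = 0, reflecting its global character.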

We use the term “contingent-type” to reflect the similarity to the well-known mth-order contingent derivative of F at (x_0, y_0) with respect to (u_1, v_1), …, (u_{m−1}, v_{m−1}) ∈ X × Y, defined as (see [29], Chapter 5)

D^m F(x_0, y_0, u_1, v_1, …, u_{m−1}, v_{m−1})(u) := {v ∈ Y | ∃ t_n → 0^+, ∃ (u_n, v_n) → (u, v),

y_0 + t_n v_1 + … + t_n^{m−1} v_{m−1} + t_n^m v_n ∈ F(x_0 + t_n u_1 + … + t_n^{m−1} u_{m−1} + t_n^m u_n)}.

D^m_R F(x_0, y_0) is called the mth-order outer radial derivative. There are the corresponding lower/inner objects, obtained by replacing “∃ t_n, ∃ u_n” by “∀ t_n, ∀ u_n”. Since we consider only the upper/outer objects, we omit these adjectives.


3 Higher-order radial-contingent derivatives

Now, we propose an object, intermediate between D^m F(x_0, y_0) and D^m_R F(x_0, y_0), as follows.

Definition 3.1 The mth-order radial-contingent derivative of F at (x_0, y_0) is D^m_S F(x_0, y_0), defined by

D^m_S F(x_0, y_0)(u) := {v ∈ Y | ∃ γ_n > 0, ∃ (x_n, y_n) ∈ gr F : x_n → x_0, (γ_n(x_n − x_0), γ_n^m(y_n − y_0)) → (u, v)}.

Note that D^1_S F(x_0, y_0) was introduced in [4] and called the TP-derivative. To have some comparisons, we propose a higher-order derivative corresponding to the adjacent derivative (see [29], Chapter 5), in the same way as D^m F(x_0, y_0) corresponds to D^1 F(x_0, y_0), as follows.

The mth-order adjacent-type derivative of F at (x_0, y_0) is D♭^m F(x_0, y_0), defined by

D♭^m F(x_0, y_0)(u) := {v ∈ Y | ∀ t_n ↓ 0, ∃ (u_n, v_n) → (u, v), y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n)}.

Clearly, D♭^m F(x_0, y_0)(u) ⊆ D^m F(x_0, y_0)(u). This inclusion may be strict, as for F : R → 2^R


Clearly, D_l^m F(x_0, y_0)(u) ⊆ D♭^m F(x_0, y_0)(u). As for the preceding strict inclusion, this inclusion may be strict.

The proof of the following properties is immediate.

Proposition 3.1 Let F : X → 2^Y, u ∈ X, m ∈ N, and (x_0, y_0) ∈ gr F.

If u is nonzero, then D^m_S F(x_0, y_0)(u) = D^m F(x_0, y_0)(u).

The first two inclusions in Proposition 3.1 (iii) are shown above to be possibly strict. The following example shows that so are the other two.

Example 3.1 Let X = Y = R, (x_0, y_0) = (0, 0), and

F(x) = {0} if x ≤ 0,  F(x) = {1, −x^2} if x > 0.

Then, we have

D^2_R F(x_0, y_0)(u) = {0} if u < 0,  D^2_R F(x_0, y_0)(u) = R_+ ∪ {−u^2} if u ≥ 0.


D^2 F(x_0, y_0)(0) ⊊ D^2_S F(x_0, y_0)(0),

D^2_S F(x_0, y_0)(u) ⊊ D^2_R F(x_0, y_0)(u), ∀ u > 0.
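A short verification of the first strict inclusion (ours; the computation is not shown above): for u = 0 and m = 2, t_n ↓ 0 rules out the value 1 ∈ F(x_0 + t_n u_n) (it would force v_n = 1/t_n^2 → ∞), while the values 0 and −x^2 only produce v = 0, so D^2 F(0, 0)(0) = {0}; on the other hand, taking x_n = 1/n, y_n = 1 ∈ F(x_n) and γ_n = √v for an arbitrary v > 0 gives (γ_n x_n, γ_n^2 y_n) → (0, v), so D^2_S F(0, 0)(0) ⊇ R_+ ⊋ {0}.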

Now we discuss the possibility of having equalities in Proposition 3.1 (iii), except for the last derivative D^m_R F(x_0, y_0)(u), which has a global character (unlike the local character of the others). Consider the special case where F = f, a single-valued map. Since the above-mentioned higher-order derivatives (except D^m F(x_0, y_0, u_1, v_1, …, u_{m−1}, v_{m−1})) do not include the intermediate powers from 2 to m − 1, they are not compared with the Fréchet derivative. We state the corresponding modification of this classical object as follows: for f : X → Y and x_0 ∈ X, d^m f(x_0) is a map from X to L(X, L(X, …, L(X, Y) …)) (m times of L) such that the following holds, where L(X, Y) denotes the space of the bounded linear maps from X to Y and d^m f(x_0)(x, x, …, x) = (…((d^m f(x_0)x)x)…)x (m times of x),

Proposition 3.2 For f : X → Y and x_0, u ∈ X, if there exists d^m f(x_0), then

{d^m f(x_0)(u, u, …, u)} = D_l^m f(x_0, f(x_0))(u) = D♭^m f(x_0, f(x_0))(u)

= D^m f(x_0, f(x_0))(u) = D^m_S f(x_0, f(x_0))(u).


Proof By the similarity, we consider only the case m = 2. Assume that d^2 f(x_0) exists. Then, for all u ∈ X, we have the following characterizations

It remains to show that D^2_S f(x_0, f(x_0))(u) ⊆ {d^2 f(x_0)(u, u)}. Let v ∈ D^2_S f(x_0, f(x_0))(u). Then

∃ t_n > 0, ∃ (u_n, v_n) → (u, v): t_n u_n → 0, f(x_0) + t_n^2 v_n = f(x_0 + t_n u_n). Setting h_n = t_n u_n, one has

We will see later that, though D^m_S F(x_0, y_0) is different from D^m F(x_0, y_0) only at the origin, it plays a significant role in addressing optimality conditions and sensitivity analysis. Furthermore, among the above-mentioned generalized derivatives, only D^m_R F has a global character. All the others have a local character, since t_n ↓ 0 or t_n u_n → 0 appears in the definitions.

For some calculus rules of the derivative D^m_S F, we need the following notion.

Definition 3.2 Let F : X → 2^Y, (x_0, y_0) ∈ gr F, u ∈ X, and m ∈ N. If

D^m_S F(x_0, y_0)(u) = {v ∈ Y | ∀ t_n > 0, ∀ u_n → u : t_n u_n → 0, ∃ v_n → v, y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n)},

and the set on the right side is nonempty, then D^m_S F(x_0, y_0) is called an mth-order radial-semi-derivative of F at (x_0, y_0) in direction u.

We choose the term “semi-derivative” following the idea of Penot for semi-differentiability (of order one) in [30]. Note further that in this paper we need to assume this property only when we are concerned with some calculus rules (we do not need it when we apply D^m_S F without using these rules). This property clearly holds if the left side of the equality in Definition 3.2 is a singleton.
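As a simple illustration of Definition 3.2 (our own example, not from the paper): let F : R → 2^R, F(x) = {x, 2x}, (x_0, y_0) = (0, 0) and m = 1. Then D^1_S F(0, 0)(u) = {u, 2u} for every u and, for arbitrary t_n > 0 and u_n → u with t_n u_n → 0, one may take v_n = u_n (resp. v_n = 2u_n) to get y_0 + t_n v_n ∈ F(x_0 + t_n u_n); hence the two sets in Definition 3.2 coincide and F has a first-order radial-semi-derivative at (0, 0) in every direction.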

Proposition 3.3 Let F_1, F_2 : X → 2^Y, x_0 ∈ int(dom F_1) ∩ dom F_2, u ∈ X, and y_i ∈ F_i(x_0) for i = 1, 2. Suppose F_1 has an mth-order radial-semi-derivative D^m_S


Example 3.2 Let X = Y = R, x_0 = y_1 = y_2 = 0, and

D^m_S G(y_0, z_0)(D^1_S F(x_0, y_0)(u)) ⊆ D^m_S(G ◦ F)(x_0, z_0)(u).

(ii) Suppose G has a radial-semi-derivative D^1_S G(y_0, z_0) in any direction in D^m_S F(x_0, y_0)(u). Then,

D^1_S G(y_0, z_0)(D^m_S F(x_0, y_0)(u)) ⊆ D^m_S(G ◦ F)(x_0, z_0)(u).

Proof By the similarity, we prove only (i). Let u ∈ X, v^1 ∈ D^1_S F(x_0, y_0)(u) and v^2 ∈ D^m_S G(y_0, z_0)(v^1). There exist t_n > 0, u_n → u, v^1_n → v^1 such that t_n u_n → 0 and, for all n, y_0 + t_n v^1_n ∈ F(x_0 + t_n u_n). Since D^m_S G(y_0, z_0) is an mth-order radial-semi-derivative in direction v^1, with t_n, v^1_n above, there exists v^2_n → v^2 such that z_0 + t_n^m v^2_n ∈ G(y_0 + t_n v^1_n). So,

The following properties are immediate from the definition.

Proposition 3.5 Let F : X → 2^Y, (x_0, y_0) ∈ gr F, λ > 0 and β ∈ R. Then, for all u ∈ X,

(i) D^m_S(βF)(x_0, βy_0)(u) = βD^m_S F(x_0, y_0)(u);

(ii) D^m_S F(x_0, y_0)(λu) = λ^m D^m_S F(x_0, y_0)(u).
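Since the proof is omitted as immediate, we sketch a verification of (ii) using only the sequence characterization in Definition 3.1: if v ∈ D^m_S F(x_0, y_0)(λu) with γ_n > 0, (x_n, y_n) ∈ gr F, x_n → x_0 and (γ_n(x_n − x_0), γ_n^m(y_n − y_0)) → (λu, v), then the quotients γ_n/λ > 0 together with the same (x_n, y_n) give ((γ_n/λ)(x_n − x_0), (γ_n/λ)^m(y_n − y_0)) → (u, v/λ^m), so v ∈ λ^m D^m_S F(x_0, y_0)(u); the reverse inclusion follows by the same argument with 1/λ and λu in place of λ and u.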

Now we apply mth-order radial-contingent derivatives to establish necessary optimality conditions for Q-minimal solutions of some particular optimization problems. Consider first the following unconstrained problem, for F : X → 2^Y,

(P) min F(x), x ∈ X.

For a vector optimization problem, from the concepts of optimality/efficiency recalled at the end of Sect 1, we define in the usual and natural way the corresponding solution notions. For instance, (x_0, y_0) ∈ gr F is called a local Q-minimal solution of (P) iff there exists U ∈ U(x_0) such that (F(U) − y_0) ∩ (−Q) = ∅.

Proposition 3.6 Let X, Y, and Q be as before, F : X → 2^Y, and (x_0, y_0) ∈ gr F. If (x_0, y_0) is a local Q-minimal solution of (P), then, for all m ∈ N, D^m_S F(x_0, y_0)(X) ∩ (−Q) = ∅.

Proof Suppose to the contrary there exist u ∈ X and v ∈ D^m_S F(x_0, y_0)(u) ∩ (−Q). Then, there exist sequences t_n > 0 and (u_n, v_n) → (u, v) such that t_n u_n → 0 and y_0 + t_n^m v_n ∈ F(x_0 + t_n u_n). Since the cone Q is open, t_n^m v_n ∈ −Q for large n. Therefore, for such n, t_n^m v_n ∈ (F(x_0 + t_n u_n) − y_0) ∩ (−Q), a contradiction.

Proposition 3.6 is applicable, while some recent existing results are not, in the following

Example 3.3 Let X = Y = R, (x_0, y_0) = (0, 0), Q = int R_+, C = R_+, and

Since D^1 F(x_0, y_0)(u) ∩ (−int C) = ∅ for all u ∈ X, we cannot use Theorem 2.1 in [22] to reject the candidate (x_0, y_0) for a weak solution. As D^2 F(x_0, y_0)(u) ∩ (−int C) = ∅ for all u ∈ X, the second contingent-type derivative cannot be used either. But D^2_S F(x_0, y_0)(0) ∩ (−int C) ≠ ∅. So, Proposition 3.6 rejects (x_0, y_0).

The next example indicates the necessity of higher-order considerations.

Example 3.4 Let X = Y = R, Q = int R_+, C = R_+, (x_0, y_0) = (0, 0), u = 0, and

= ∅, Theorem 3.1 in [31] with a first-order condition gives nothing. Since D^2_S F(x_0, y_0)(u) ∩ (−int C) ≠ ∅, (x_0, y_0) is not a weak solution due to Proposition 3.6.


Proposition 3.7 Assume that X is finite dimensional, C has a compact base B, F : X → 2^Y, and (x_0, y_0) ∈ gr F. If, for at least one m ≥ 1,

(i) D^m_S F(x_0, y_0)(0) ∩ (−C) = {0};

(ii) D^m_S F(x_0, y_0)(u) ∩ (−C) = ∅ for all nonzero u,

then (x_0, y_0) is a local minimal solution of (P).

Proof Suppose to the contrary there exist x_n → x_0 and y_n ∈ F(x_n) such that y_n − y_0 ∈ −C \ {0}. There are r_n > 0 and b_n ∈ B such that y_n − y_0 = −r_n b_n and b_n → b for some

Since X is finite dimensional, there exists u ∈ X \ {0} such that (x_n − x_0)/s_n → u. Hence, 0 ∈ D^m_S F(x_0, y_0)(u), contradicting (ii). If {s_n/t_n} has a convergent subsequence, say, s_n/t_n → α ≥ 0, then y_0 + t_n^m(−b_n) ∈ F(x_n) = F(x_0 + t_n[((x_n − x_0)/s_n)(s_n/t_n)]). Since ((x_n − x_0)/s_n)(s_n/t_n) → αu, −b ∈ D^m_S F(x_0, y_0)(αu), which contradicts (i) if α = 0 or (ii) if α ≠ 0.
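A simple sanity check of Proposition 3.7 (our own example, not from the paper): let X = Y = R, C = R_+ with compact base B = {1}, F(x) = {|x|}, (x_0, y_0) = (0, 0) and m = 1. Then D^1_S F(0, 0)(0) = {0} and D^1_S F(0, 0)(u) = {|u|} for u ≠ 0, so (i) and (ii) hold, and Proposition 3.7 confirms that (0, 0) is a local minimal solution of (P), as is also clear directly.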

The next two examples explain advantages of Proposition 3.7 over recent existing results.

Example 3.5 Let X = Y = R, C = R_+, (x_0, y_0) = (0, 1), and


Then, for all u ∈ X,

Example 3.6 Let X, Y, C, and (x_0, y_0) be as in Example 3.5. Let

Applying the above chain rule of mth-order radial-contingent derivatives, we easily establish necessary optimality conditions for local Q-minimal solutions of the following problem

(P1) min F(x′) subject to x ∈ X and x′ ∈ G(x),

where F : X → 2^Y and G : X → 2^X. This problem can be restated as the unconstrained problem min (F ◦ G)(x) s.t. x ∈ X.


Proposition 3.8 Let Im G ⊆ dom F, (x_0, z_0) ∈ gr G, and (z_0, y_0) ∈ gr F. Assume that (x_0, y_0) is a local Q-minimal solution of (P1) and u is any point in X.

(i) If F has an mth-order radial-semi-derivative D^m_S

Proof By the similarity, we prove only (i). From Proposition 3.6, for u ∈ X we have D^m_S(F ◦ G)(x_0, y_0)(u) ∩ (−Q) = ∅. Proposition 3.4 (i) implies that

= T_{epi F}(x_0, y_0) is said to be the contingent epiderivative of F at (x_0, y_0) ∈ gr F.

Example 3.7 Let X = Y = R, Q = int R_+, C = R_+, G(x) = {−|x|}, and


derivative at (G(0), 0) in all directions in D^1_S G(0, G(0))(0) = {0}, D^m_S F(G(0), 0)[D^1_S G(0, G(0))(0)] = R_−, which meets −int C. Therefore, Proposition 3.8 above rejects the candidate (0, 0).

To illustrate the sum rule, we consider the following problem

(PC) has also been studied independently from (P2). Optimality conditions for this general problem (PC) were obtained in [32] by using sum rules and scalar product rules for contingent epiderivatives. In [21], problem (PC) was investigated by using variational sets. Now, we apply Propositions 3.3, 3.5, and 3.6 for mth-order radial-contingent derivatives to get the following necessary condition for local Q-minimal solutions of (PC). Here, s can be any positive number.

Proposition 3.9 Let dom F ⊆ dom G, x_0 ∈ M, y_0 ∈ F(x_0), u ∈ X, and either F or G have an mth-order radial-semi-derivative at (x_0, y_0) or (x_0, 0), resp., in direction u. If (x_0, y_0) is a local Q-minimal solution of (PC), then

(D^m_S F(x_0, y_0)(u) + sD^m_S G(x_0, 0)(u)) ∩ (−Q) = ∅.

Proof By Proposition 3.6, one gets D^m_S(F + sG)(x_0, y_0)(u) ∩ (−Q) = ∅. According to Proposition 3.5, sD^m_S G(x_0, 0)(u) = D^m_S(sG)(x_0, 0)(u). Then, Proposition 3.3 completes the proof:

D^m_S F(x_0, y_0)(u) + sD^m_S G(x_0, 0)(u) ⊆ D^m_S(F + sG)(x_0, y_0 + 0)(u).

The next example indicates a case where Proposition 3.9 is more advantageous than earlier existing results.

Example 3.8 Let X = Y = R, Q = int R_+, C = R_+, g(x) = x^4 − 2x^3, and

derivative of order 1 at (0, 0) in any direction, D^1_S F(0, 0)(0) = R_−, and {0} ⊆ D^1_S G(0, 0)(0) ⊆ R_+. So, (D^1_S F(0, 0)(0) + sD^1_S G(0, 0)(0)) ∩ (−int C) ≠ ∅. In view of Proposition 3.9, (x_0, y_0) is not a local weak solution of (PC). This fact can be checked directly too.

4 Higher-order contingent-type derivatives

In this section, we discuss relations between higher-order contingent-type derivatives of a set-valued map and those of its profile map. Such relations for various kinds of efficient


References

1. Fiacco, A.V.: Introduction to Sensitivity and Stability Analysis in Nonlinear Programming. Academic Press, New York (1983)
2. Tanino, T.: Sensitivity analysis in multiobjective optimization. J. Optim. Theory Appl. 56, 479-499 (1988)
3. Tanino, T.: Stability and sensitivity analysis in convex vector optimization. SIAM J. Control Optim. 26, 521-536 (1988)
4. Shi, D.S.: Contingent derivative of the perturbation map in multiobjective optimization. J. Optim. Theory Appl. 70, 385-396 (1991)
10. Rockafellar, R.T.: Proto-differentiability of set-valued mappings and its applications in optimization. Ann. Inst. H. Poincaré Anal. Non Linéaire 6, 449-482 (1989)
11. Huy, N.Q., Lee, G.M.: Sensitivity of solutions to a parametric generalized equation. Set-Valued Anal. 16, 805-820 (2008)
13. Mordukhovich, B.S.: Coderivative analysis of variational systems. J. Global Optim. 28, 347-362 (2004)
14. Levy, A.B., Mordukhovich, B.S.: Coderivatives in parametric optimization. Math. Program. Ser. A 99, 311-327 (2004)
15. Chuong, T.D., Yao, J.C.: Generalized Clarke epiderivatives of parametric vector optimization problems. J. Optim. Theory Appl. 146, 77-94 (2010)
17. Anh, N.L.H., Khanh, P.Q.: Variational sets of perturbation maps and applications to sensitivity analysis for constrained vector optimization. J. Optim. Theory Appl., Online first, DOI 10.1007/s10957-012-0257-5
19. Khanh, P.Q., Tuan, N.D.: Variational sets of multivalued mappings and a unified study of optimality conditions. J. Optim. Theory Appl. 139, 45-67 (2008)
22. Luc, D.T.: Contingent derivatives of set-valued maps and applications to vector optimization. Math. Program. 50, 99-111 (1991)
23. Anh, N.L.H., Khanh, P.Q.: Higher-order optimality conditions in set-valued optimization using radial sets and radial derivatives. J. Global Optim., Online first, DOI 10.1007/s10898-012-9861-z
24. Ha, T.D.X.: Optimality conditions for several types of efficient solutions of set-valued optimization problems. In: Pardalos, P., Rassias, Th.M., Khan, A.A. (eds.) Nonlinear Analysis and Variational Problems, pp. 305-324. Springer, Berlin (2009)
25. Anh, N.L.H., Khanh, P.Q., Tung, L.T.: Higher-order radial derivatives and optimality conditions in nonsmooth vector optimization. Nonlinear Anal. TMA 74, 7365-7379 (2011)
27. Guerraggio, A., Molho, E., Zaffaroni, A.: On the notion of proper efficiency in vector optimization. J. Optim. Theory Appl. 82, 1-21 (1994)
28. Makarov, E.K., Rachkovski, N.N.: Unified representation of proper efficiency by means of dilating cones. J. Optim. Theory Appl. 101, 141-165 (1999)
30. Penot, J.-P.: Differentiability of relations and differential stability of perturbed optimization problems. SIAM J. Control Optim. 22, 529-551 (1984)
31. Taa, A.: Necessary and sufficient conditions for multiobjective optimization problems. Optimization 36, 97-104 (1996)
32. Jahn, J., Khan, A.A.: Some calculus rules for contingent epiderivatives. Optimization 52, 113-125 (2003)
