
HANOI PEDAGOGICAL UNIVERSITY 2 DEPARTMENT OF MATHEMATICS

——————–o0o———————

Dao Ngoc Cao

Approximate Karush-Kuhn-Tucker optimality conditions in multiobjective optimization

BACHELOR THESIS

Major: Analysis

Hanoi, May 2019


Thesis acknowledgment

I would like to express my gratitude to the teachers of the Department of Mathematics, Hanoi Pedagogical University 2, the teachers in the analysis group, as well as the other teachers involved. The lecturers have imparted valuable knowledge and facilitated my completion of the course and the thesis.

In particular, I would like to express my deep respect and gratitude to Dr. Nguyen Van Tuyen, who directly guided and helped me complete this thesis.

Since time, capacity and conditions were limited, the thesis cannot avoid errors. I look forward to receiving valuable comments from teachers and friends.

Hanoi, 02 May, 2019

Student

Dao Ngoc Cao


Thesis assurance

I assure that the data and the results of this thesis are true and not identical to those of other works. I also assure that all the help for this thesis has been acknowledged and that the results presented in the thesis have been identified clearly.

Hanoi, 02 May, 2019

Student

Dao Ngoc Cao


Contents

Preface

1 Preliminaries
1.1 Convex functions
1.2 Clarke subdifferential
1.2.1 Generalization of Derivatives
1.2.2 Subdifferential Calculus

2 Approximate Karush-Kuhn-Tucker optimality conditions in multiobjective optimization
2.1 Approximate KKT Condition for Multiobjective Optimization Problems
2.2 Relations of the AKKT Condition with Other Optimality Conditions

Bibliography


Karush–Kuhn–Tucker (KKT) optimality conditions play an important role in optimization theory, both for scalar optimization and for multiobjective optimization. However, KKT optimality conditions need not be fulfilled at local minimum points unless some constraint qualifications are satisfied. In [4], Andreani, Martínez and Svaiter introduced the so-called complementary approximate Karush–Kuhn–Tucker (CAKKT) condition for scalar optimization problems with smooth data. Then, the authors proved that this condition is necessary for a point to be a local minimizer without any constraint qualification. Moreover, they also showed that the augmented Lagrangian method with lower-level constraints introduced in [2] generates sequences converging to CAKKT points under certain conditions. Optimality conditions of CAKKT-type have been recognized to be useful in designing algorithms for finding approximate solutions of optimization problems.

In this thesis, based on the recent work by Giorgi, Jiménez and Novo [9], we study the approximate Karush–Kuhn–Tucker (AKKT) condition for multiobjective optimization problems. We show that the AKKT condition holds at local weak efficient solutions without any additional requirement. Under the convexity of the related functions, an AKKT-type sufficient condition for global weak efficient solutions is established. Some enhanced KKT-conditions are also examined.

The thesis is organized as follows. In Chapter 1, we recall some basic definitions and preliminaries from nonsmooth analysis, which are widely used in the sequel. In Chapter 2, we introduce the approximate KKT condition for a continuously differentiable multiobjective problem in finite-dimensional spaces, whose feasible set is defined by inequality and equality constraints. We show that, without any constraint qualification, the AKKT condition is necessary for a local weak efficient solution of the considered problem. For convex problems, we prove that the AKKT condition is a necessary and sufficient optimality condition for a global weak efficient solution. We also prove that, under some suitable additional conditions, an AKKT condition is also a KKT one. We also introduce the notion of the enhanced KKT-condition and study its relations with the above concepts.


Definition 1.4 A function f is called convex if epi f is a convex set.

Theorem 1.5 A function f is convex if and only if for all x1, x2 and for all α ∈ [0; 1] we have

f(αx1 + (1 − α)x2) ≤ αf(x1) + (1 − α)f(x2).
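As a quick check of this criterion (an illustration of ours, not from the thesis), for f(x) = x² on R the inequality reduces to an algebraic identity:

\[
\alpha x_1^{2} + (1-\alpha)x_2^{2} - \bigl(\alpha x_1 + (1-\alpha)x_2\bigr)^{2}
= \alpha(1-\alpha)(x_1 - x_2)^{2} \;\ge\; 0,
\]

so f(x) = x² satisfies the inequality, hence is convex, for every α ∈ [0; 1].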


Corollary 1.8 If f : Rn → R is locally Lipschitz continuous at x, then the function d ↦ f◦(x; d) is convex, its epigraph epi f◦(x; ·) is a convex cone and we have

Definition 1.10 Let f : Rn → R be a locally Lipschitz continuous function at a point x ∈ Rn. Then the subdifferential of f at x is the set ∂f(x) of vectors ξ ∈ Rn defined by

∂f(x) = {ξ ∈ Rn | f◦(x; d) ≥ ξᵀd for all d ∈ Rn}.


Each vector ξ ∈ ∂f(x) is called a subgradient of f at x. The subdifferential has the same basic properties as in the convex case.

Theorem 1.11 Let f : Rn → R be a locally Lipschitz continuous function at x ∈ Rn with a Lipschitz constant K. Then the subdifferential ∂f(x) is a nonempty, convex and compact set such that

∂f(x) ⊆ B(0; K).

Theorem 1.12 Let f : Rn → R be a locally Lipschitz continuous function at x ∈ Rn. Then

f◦(x; d) = max{ξᵀd | ξ ∈ ∂f(x)} for all d ∈ Rn.

Theorem 1.13 Let f be locally Lipschitz continuous and differentiable at x. Then

∇f(x) ∈ ∂f(x).

Theorem 1.14 If f is continuously differentiable at x, then

∂f(x) = {∇f(x)}.

Theorem 1.15 If the function f : Rn → R is convex, then

(i) f′(x; d) = f◦(x; d) for all d ∈ Rn, and

(ii) ∂cf(x) = ∂f(x), where ∂cf(x) denotes the subdifferential of convex analysis.

Theorem 1.16 Let f : Rn → R be locally Lipschitz continuous at x ∈ Rn. Then

∂f(x) = conv{ξ ∈ Rn | ∃(xi) ⊂ Rn \ Ωf such that xi → x and ∇f(xi) → ξ},

where Ωf is the set of points at which f fails to be differentiable.
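For instance (a quick illustration of ours, not taken from the thesis), for f(x) = |x| on R we have Ωf = {0} and ∇f(x) = sign(x) for x ≠ 0, so every sequence xi → 0 with xi ≠ 0 has ∇f(xi) ∈ {−1, 1}, and Theorem 1.16 yields

\[
\partial f(0) = \operatorname{conv}\{-1, 1\} = [-1, 1].
\]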

1.2.2 Subdifferential Calculus

In order to maintain equalities instead of inclusions in the subdifferential calculus rules, we need the following regularity property.

Definition 1.17 The function f : Rn → R is said to be subdifferentially regular at x ∈ Rn if it is locally Lipschitz continuous at x and for all d ∈ Rn the classical directional derivative f′(x; d) exists and we have

f′(x; d) = f◦(x; d). (1.1)


Note that the equality (1.1) is not necessarily valid in general, even if f′(x; d) exists. This is the case, for instance, with concave nonsmooth functions. For example, the function f(x) = −|x| has the directional derivative f′(0; 1) = −1, but the generalized directional derivative is f◦(0; 1) = 1.
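To see where the value f◦(0; 1) = 1 comes from, one can compute directly from the standard definition of the generalized directional derivative recalled earlier in this chapter (the computation is ours):

\[
f^{\circ}(0;1) \;=\; \limsup_{y \to 0,\; t \downarrow 0} \frac{f(y+t) - f(y)}{t}
\;=\; \limsup_{y \to 0,\; t \downarrow 0} \frac{|y| - |y+t|}{t} \;=\; 1,
\]

the upper limit being attained along y = −t, t ↓ 0, while |y| − |y + t| ≤ t always holds by the triangle inequality.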

We now note some sufficient conditions for f to be subdifferentially regular.

Theorem 1.18 The function f : Rn → R is subdifferentially regular at x if

(i) f is continuously differentiable at x,

(ii) f is convex, or,


In addition, if fi is subdifferentially regular at x and λi ≥ 0 for all i = 1, ..., m, then f is also subdifferentially regular at x and equality holds in (1.4).

The following result is one of the most important results in optimization theory.

Theorem 1.23 If the function f : Rn → R is locally Lipschitz continuous and attains its extremum at x, then

0 ∈ ∂f(x).
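For instance (a check of ours, not from the thesis), f(x) = |x| attains its minimum at x = 0 and indeed 0 ∈ ∂f(0) = [−1, 1], as computed after Theorem 1.16.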

Theorem 1.26 Suppose that the assumptions of Theorem 1.25 are valid.

(i) If the function g is subdifferentially regular at h(x), each hi is subdifferentially regular at x, and for any α ∈ ∂g(h(x)) we have αi ≥ 0 for all i = 1, ..., m, then f is also subdifferentially regular at x and we have

∂f(x) = conv{∂h(x)ᵀ∂g(h(x))}.

(ii) If the function g is subdifferentially regular at h(x) and hi is continuously differentiable at x for all i = 1, ..., m, then

If, in addition, f1(x) ≥ 0, f2(x) > 0 and f1, f2 are both subdifferentially regular at x, then the function f1/f2 is subdifferentially regular at x and equality holds in (1.8).

Theorem 1.29 (max-function) Let fi : Rn → R be locally Lipschitz continuous at x for all i = 1, ..., m. Then the function

f(x) := max{fi(x) | i = 1, ..., m}

is locally Lipschitz continuous at x and

∂f(x) ⊆ conv{∂fi(x) | i ∈ I(x)}, (1.9)

where

I(x) := {i ∈ {1, ..., m} | fi(x) = f(x)}.

In addition, if fi is subdifferentially regular at x for all i = 1, ..., m, then f is also subdifferentially regular at x and equality holds in (1.9).
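As a small illustration of Theorem 1.29 (ours, not from the thesis), take f(x) = max{x, x²} at x = 1, where both component functions are active and continuously differentiable, hence subdifferentially regular:

\[
I(1) = \{1, 2\}, \qquad
\partial f(1) = \operatorname{conv}\{\nabla f_1(1), \nabla f_2(1)\} = \operatorname{conv}\{1, 2\} = [1, 2],
\]

with equality in (1.9) by the regularity of f1 and f2.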


Chapter 2

Approximate Karush-Kuhn-Tucker optimality conditions in multiobjective optimization

2.1 Approximate KKT Condition for Multiobjective Optimization Problems

For a, b ∈ Rp, by a ≦ b we mean al ≤ bl for all l = 1, ..., p, and by a < b we mean al < bl for all l = 1, ..., p. We consider the multiobjective optimization problem

Min f(x) := (f1(x), ..., fp(x)) subject to x ∈ S, (MOP)

where the feasible set S ⊂ Rn is given by the inequality constraints gj(x) ≤ 0, j = 1, ..., m, and the equality constraints hi(x) = 0, i = 1, ..., r, all functions being continuously differentiable.

Definition 2.1 Let x0 ∈ S. Then:


(i) x0 is a (global) weak efficient solution of (MOP) iff there is no x ∈ S satisfying f(x) < f(x0);

(ii) x0 is a local weak efficient solution of (MOP) iff there is a neighborhood U of x0 such that x0 is a weak efficient solution on U ∩ S.
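For intuition, here is a toy instance of ours (not from the thesis): take p = 2, S = [0, 1] ⊂ R and f(x) = (x, −x). Then every x0 ∈ S is a weak efficient solution, since f(x) < f(x0) would require x < x0 and −x < −x0 simultaneously, which is impossible.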

We now recall the concept of the approximate Karush-Kuhn-Tucker condition for the multiobjective problem (MOP) from [9].

Definition 2.2 The approximate Karush–Kuhn–Tucker condition (AKKT) is satisfied for (MOP) at a feasible point x0 ∈ S iff there exist sequences (xk) ⊂ Rn and (λk, µk, τk) ⊂ Rp+ × Rm+ × Rr such that

(A0) xk → x0,

(A1) Σ_{l=1}^p λ_l^k ∇fl(xk) + Σ_{j=1}^m µ_j^k ∇gj(xk) + Σ_{i=1}^r τ_i^k ∇hi(xk) → 0,

(A2) Σ_{l=1}^p λ_l^k = 1,

(A3) gj(x0) < 0 ⇒ µ_j^k = 0 for sufficiently large k, j = 1, ..., m.

Points satisfying the AKKT condition will be called AKKT points. Let us note that the sequence of points (xk) is not required to be feasible.
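A minimal scalar illustration (ours, with p = 1, not an example from the thesis) shows why such sequences can exist even when exact KKT multipliers do not. For the problem min x subject to g(x) = x² ≤ 0, the only feasible point is x0 = 0, and no µ ≥ 0 satisfies 1 + µ∇g(0) = 0 since ∇g(0) = 0; yet AKKT holds at x0 with, for instance,

\[
x^{k} = -\tfrac{1}{k}, \qquad \lambda^{k} = 1, \qquad \mu^{k} = \tfrac{k}{2},
\qquad \lambda^{k}\cdot 1 + \mu^{k}\,\nabla g(x^{k}) = 1 + \tfrac{k}{2}\Bigl(-\tfrac{2}{k}\Bigr) = 0 \ \text{ for all } k.
\]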

Remark 2.3 Assuming µk ∈ Rm+, condition (A3) is clearly equivalent to

µ_j^k gj(xk) ≥ 0 for sufficiently large k, ∀j ∉ J(x0), (2.1)

where J(x0) is the set of active indices at x0. Each of these conditions implies the condition (2.2).

In order to establish necessary optimality conditions for problem (MOP), we are going to scalarize it. To this aim, we consider the nonsmooth function φ : Rp → R defined by

φ(y) := max_{1≤i≤p} yi.

It is obvious that φ(y) ≤ 0 ⇔ y ≦ 0, and φ(y) < 0 ⇔ y < 0.
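Since φ is the maximum of the linear functions y ↦ yi, which are continuously differentiable and hence subdifferentially regular, Theorem 1.29 applies with equality; recording the resulting subdifferential (a computation of ours) is useful for the proofs below:

\[
\partial\varphi(y) = \operatorname{conv}\{\, e_i \mid 1 \le i \le p,\ y_i = \varphi(y) \,\},
\]

where ei is the i-th coordinate unit vector of Rp.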

The following result is well known.

Lemma 2.4 If x0 is a weak efficient solution of (MOP), then x0 is also a minimizer of the function φ(f(·) − f(x0)) on S.


Theorem 2.5 If x0 ∈ S is a local weak efficient solution of problem (MOP), then x0 satisfies the AKKT condition with sequences (xk) and (λk, µk, τk). Furthermore, for these sequences we obtain that

(E1) µk = bk g(xk)+ and τk = ck h(xk), in which bk, ck > 0 for all k,

(E2) fl(xk) − fl(x0) + Σ_{j=1}^m µ_j^k gj(xk) + Σ_{i=1}^r τ_i^k hi(xk) ≤ 0 for all l = 1, ..., p and all k.

Proof By assumption, there exists δ > 0 such that x0 is a weak efficient solution of f on S ∩ B(x0, δ), and by Lemma 2.4, we deduce that x0 is a minimizer of the function φ(f(·) − f(x0)) on S ∩ B(x0, δ). Choosing a smaller δ if necessary, we may suppose that x0 is the unique solution of problem (2.3).

Observe that xk exists because ψk is continuous and B(x0, δ) is compact. Let z be an accumulation point of (xk). We may assume that xk → z (choosing a subsequence if necessary). On the one hand, we have


Let us claim that z is a feasible point of problem (2.3).

Indeed, first, as ‖xk − x0‖ ≤ δ, it follows that ‖z − x0‖ ≤ δ. Second, passing to the limit in the penalty terms shows that

Σ_{j=1}^m (gj(z)+)² + Σ_{i=1}^r hi(z)² = 0,

and this implies that z ∈ S.

From (2.5) one has

We claim that z = x0, as x0 is the unique solution of problem (2.3) with value 0. Hence, xk → x0 and ‖xk − x0‖ < δ for all k sufficiently large.

Now, as xk is a solution of the nonsmooth problem (2.4) and it is an interior point of the feasible set for k large enough, it follows by Theorem 1.23 that 0 ∈ ∂ψk(xk). We have

It is easy to see that conditions (A1), (A2) and (E1) are satisfied. If gj(x0) < 0, then gj(xk) < 0 for all k large enough, and so µ_j^k = bk gj(xk)+ = 0, which gives (A3). From here, condition (E2) follows and the proof is finished.

Observe that Theorem 2.5 does not require any constraint qualification. That means that these necessary optimality conditions hold at any local weak efficient solution without additional requirements. In particular, condition (E1) shows that the multipliers µk and τk are proportional, respectively, to g(xk)+ and h(xk). Moreover, condition (E1) is also satisfied, in particular, when the external penalty method is applied [11].

Next we illustrate Theorem 2.5 with an example.


Example 2.6 Consider the following multiobjective problem:

Min f(x1, x2) = (f1(x1, x2), f2(x1, x2)) subject to g(x1, x2) = x2² − x1 ≤ 0,

where

f1(x1, x2) = 4x1 − x2², f2(x1, x2) = −2x1 − x2.

Let us note that it is a nonconvex problem. The point x0 = (1, 1) is a weak efficient solution, as can be checked, and so Theorem 2.5 can be applied. First, let us solve the equation

λ1∇f1(x1, x2) + λ2∇f2(x1, x2) + µ∇g(x1, x2) = (0, 0), (2.8)

with λ1, λ2, µ ≥ 0, λ1 + λ2 = 1, to find sequences satisfying (A0)-(A3), (E1) and (E2). We get, for x2 < −1 or x2 ≥ 0,

λ1 = (4x2 + 1)/(10x2 + 1), λ2 = 6x2/(10x2 + 1), µ = (4x2 + 4)/(10x2 + 1).

we have λ1^ε∇f1(xε) + λ2^ε∇f2(xε) + (µε + µ̃ε)∇g(xε) = 0 for all ε > 0. From here,

λ1^ε∇f1(xε) + λ2^ε∇f2(xε) + µε∇g(xε) = −µ̃ε∇g(xε) → 0 as ε ↓ 0.

The statements (A0)-(A3) are obviously satisfied by (xε, λ1^ε, λ2^ε, µε) (we can turn these points into sequences by selecting ε = 1/k if necessary). Moreover, (E1) is fulfilled by selecting bε = µε/g(xε)+ > 0. Condition (E2) is also satisfied since (after some calculations)

f1(xε) − f1(x0) + µε g(xε) = −(2160ε + 588)ε²/(11 + 60ε) < 0 for all ε > 0,

f2(xε) − f2(x0) + µε g(xε) = −192ε²/(11 + 60ε) < 0 for all ε > 0.
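The multiplier formulas found above can be sanity-checked numerically. The following short script (ours, not part of the thesis; it assumes numpy is available) solves the linear system given by (2.8) together with λ1 + λ2 = 1 at the point x0 = (1, 1) and confirms the values λ1 = 5/11, λ2 = 6/11, µ = 8/11, all nonnegative:

    import numpy as np

    # Gradients at x0 = (1, 1) for Example 2.6:
    # f1 = 4*x1 - x2**2, f2 = -2*x1 - x2, g = x2**2 - x1.
    x2 = 1.0
    grad_f1 = np.array([4.0, -2.0 * x2])  # (4, -2)
    grad_f2 = np.array([-2.0, -1.0])      # (-2, -1)
    grad_g = np.array([-1.0, 2.0 * x2])   # (-1, 2)

    # Unknowns (lam1, lam2, mu): two equations from (2.8), one from lam1 + lam2 = 1.
    A = np.vstack([np.column_stack([grad_f1, grad_f2, grad_g]),
                   [1.0, 1.0, 0.0]])
    b = np.array([0.0, 0.0, 1.0])
    lam1, lam2, mu = np.linalg.solve(A, b)

    print(lam1, lam2, mu)  # 0.4545..., 0.5454..., 0.7272...
    assert np.allclose([lam1, lam2, mu], [5/11, 6/11, 8/11])
    assert min(lam1, lam2, mu) >= 0  # valid multipliers with lam1 + lam2 = 1

This agrees with the closed-form expressions above evaluated at x2 = 1.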

Conditions (E1) and (E2) are good properties, as is shown in Remark 2.8 and in Theorem 2.16. Nevertheless, it is hard to find a sequence with such properties. However, sequences satisfying (A0)-(A3) are easily obtained. For instance, if x0 is a KKT-point (see Definition 2.24) with multipliers (λ, µ, τ) such that Σ_{l=1}^p λl = 1, then every sequence

(xk, λk, µk, τk) ⊂ Rn × Rp+ × Rm+ × Rr

converging to (x0, λ, µ, τ) satisfies (A0)-(A3), whenever µ_j^k = 0 for sufficiently large k if gj(x0) < 0.

The converse of Theorem 2.5 is not true, as the following example shows.

Example 2.7 Consider problem (MOP) with the following data:

λ1^k = 1, λ2^k = 0, µk = 1

Remark 2.8 The following statements are true:

(i) Condition (E1) implies the following condition (sign condition, SGN in short): for every k one has

µ_j^k gj(xk) ≥ 0 for all j = 1, ..., m, and τ_i^k hi(xk) ≥ 0 for all i = 1, ..., r.

(ii) Conditions (SGN) and (E2) imply the following condition (convergence condition, CVG in short):

µ_j^k gj(xk) → 0 for all j = 1, ..., m, and τ_i^k hi(xk) → 0 for all i = 1, ..., r.

(iii) (CVG) implies the following condition, which we call the sum tending to zero condition (SCZ in short):

Σ_{j=1}^m µ_j^k gj(xk) + Σ_{i=1}^r τ_i^k hi(xk) → 0.

(iv) (SGN) and (SCZ) imply (CVG).

Indeed, part (i) is immediate since, by (E1), µ_j^k gj(xk) = bk (gj(xk))+ gj(xk) = bk ((gj(xk))+)² ≥ 0 and τ_i^k hi(xk) = ck hi(xk)² ≥ 0.

For part (ii), let sk := Σ_{j=1}^m µ_j^k gj(xk) + Σ_{i=1}^r τ_i^k hi(xk). Using (SGN) and (E2), we have 0 ≤ sk ≤ fl(x0) − fl(xk) for all k and l. From here, sk → 0 follows by the continuity of fl and property (A0); since, by (SGN), each summand of sk lies in [0, sk], each of them tends to zero as well.

Parts (iii)-(iv) are obvious.

By taking into account Remark 2.8(i), (E2) shows that fl(xk) ≤ fl(x0), l = 1, ..., p, i.e., the points xk are better than x0. This condition is equivalent to the property given by (2.2), which is used further on to define a strong EFJ-point. As a consequence, if we wish a sequence (xk) satisfying (E1)-(E2), then we have to look for it in the set defined by the system fl(x) ≤ fl(x0), l = 1, ..., p.

Remark 2.9 In some works, in order to define the AKKT condition for scalar optimization problems, several variants of condition (A3) or of some of the above conditions are used. For instance, in [4] the authors use the CAKKT condition, where (A3) is replaced with a condition of convergence type,

which is clearly similar to (CVG).

In the next remark we prove that the converse implications of Remark 2.8 are not true, and show their invalidity if any assumption is not satisfied.

Remark 2.10 (i) (SGN) does not imply (E1). Consider Example 2.6, with x0 = (1, 1), xk =


but (E2) is not, since

so (SCZ) holds but (CVG) does not.

Theorem 2.5 extends to multiobjective optimization (and improves in some cases) Theorem 3.3 in Andreani, Martínez and Svaiter [4], Theorem 2.1 in Haeser and Schuverdt [11] and Theorem 2.1 (with I = ∅) in Andreani, Haeser and Martínez [3]. Next, we show that the converse of Theorem 2.5 holds for convex programs.

Theorem 2.11 Assume that fl (l = 1, ..., p) and gj (j = 1, ..., m) are convex and hi (i = 1, ..., r) are affine. If x0 ∈ S satisfies the AKKT condition and (SCZ) is fulfilled, then x0 is a (global) weak efficient solution of (MOP).

Proof Suppose x0 is not a weak efficient solution. Then, there is x̂ ∈ S satisfying
