
Variational Analysis and Some Special Optimization Problems



VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY

INSTITUTE OF MATHEMATICS

NGUYEN THAI AN

Speciality: Applied Mathematics
Speciality code: 62 46 01 12

SUMMARY OF DOCTORAL DISSERTATION IN MATHEMATICS

HANOI - 2016


The dissertation was written on the basis of the author’s research works carried out at the Institute of Mathematics, Vietnam Academy of Science and Technology.

Supervisors:

1. Prof. Dr. Hab. Nguyen Dong Yen

2. Assoc. Prof. Nguyen Mau Nam

First referee:

Second referee:

Third referee:

To be defended at the Jury of the Institute of Mathematics, Vietnam Academy of Science and Technology:

on …… 2016, at …… o’clock

The dissertation is publicly available at:

• The National Library of Vietnam

• The Library of Institute of Mathematics


Depending on specific applications, location models differ greatly in their objective functions, the distance metric applied, and the number and size of the facilities to locate; see, e.g., Z. Drezner and H. Hamacher, Facility Location: Applications and Theory (Springer, Berlin, 2002), R. Z. Farahani and M. Hekmatfar, Facility Location: Concepts, Models, Algorithms and Case Studies (Physica-Verlag, Heidelberg, 2009), and the references therein.

The origin of location theory can be traced back as far as the 17th century, when P. de Fermat (1601–1665) formulated the problem of finding a fourth point such that the sum of its distances to three given points in the plane is minimal. This celebrated problem was then solved by E. Torricelli (1608–1647). At the beginning of the 20th century, A. Weber incorporated weights and was able to treat facility location problems with more than 3 points as follows:

min{ Σ_{i=1}^m αi ‖x − ai‖ : x ∈ IRn },  (1)

where αi > 0 for i = 1, …, m are given weights and the vectors ai ∈ IRn for i = 1, …, m are given demand points.

The first numerical algorithm for solving the Fermat-Torricelli problem was introduced by E. Weiszfeld (1937). As pointed out by H. W. Kuhn (1973), the Weiszfeld algorithm may fail to converge when the iterative sequence enters the set of demand points. The assumptions guaranteeing the convergence of the Weiszfeld algorithm, along with a proof of the convergence theorem, were given by Kuhn. Generalized versions of the Fermat-Torricelli problem and several new algorithms have been introduced to solve generalized Fermat-Torricelli problems as well as to improve the Weiszfeld algorithm. The Fermat-Torricelli problem has also been revisited several times from different viewpoints.
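The Weiszfeld iteration just discussed is simple enough to sketch. The following is a minimal unweighted illustration with made-up demand points (our own sketch, not the MATLAB code from the appendix):

```python
import math

def weiszfeld(points, x0, iters=1000, eps=1e-12):
    """Weiszfeld iteration for the unweighted Fermat-Torricelli problem:
    x_{k+1} = (sum_i a_i / ||x_k - a_i||) / (sum_i 1 / ||x_k - a_i||)."""
    x = list(x0)
    for _ in range(iters):
        num = [0.0, 0.0]
        den = 0.0
        for a in points:
            d = math.hypot(x[0] - a[0], x[1] - a[1])
            if d < eps:  # iterate hit a demand point: Kuhn's problematic case
                return list(a)
            num[0] += a[0] / d
            num[1] += a[1] / d
            den += 1.0 / d
        x = [num[0] / den, num[1] / den]
    return x

# For an equilateral triangle the Fermat-Torricelli point is the centroid.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
x = weiszfeld(pts, (0.2, 0.1))
print(x)  # close to (0.5, sqrt(3)/6)
```

Note the guard for the case Kuhn identified: if an iterate lands exactly on a demand point, the update is undefined and the sketch simply stops there.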

The Fermat-Torricelli/Weber problem on the plane with some negative weights was first introduced and solved in the triangle case by L.-N. Tellier (1985) and then generalized by Z. Drezner and G. O. Wesolowsky (1990) with the following formulation in IR2:

min{ Σ_{i=1}^m αi ‖x − ai‖ : x ∈ IR2 },  where the weights αi ∈ IR may be negative.

The weights can be interpreted as forces attracting or repelling the facility, and the optimal location as the one that balances the forces. Since the problem is nonconvex in general, traditional solution methods of convex optimization, widely used for the previous convex versions of the Fermat-Torricelli problem, are no longer applicable to this case. The first numerical algorithm for solving this nonconvex problem, based on the outer-approximation procedure from global optimization, was given by P.-C. Chen, P. Hansen, B. Jaumard, and H. Tuy (1992).

The smallest enclosing circle problem can be stated as follows: Given a finite set of points in the plane, find the circle of smallest radius that encloses all of the points. It was introduced in the 19th century by the English mathematician J. J. Sylvester (1814–1897). The mathematical model of the problem in high dimensions can be formulated as follows:

min{ max_{1≤i≤m} ‖x − ai‖ : x ∈ IRn },  (2)

where ai ∈ IRn for i = 1, …, m are given points. Problem (2) is both a facility location problem and a major problem in computational geometry. The Sylvester problem and its versions in higher dimensions are also known under other names such as the smallest enclosing ball problem, the minimum ball problem, or the bomb problem. Over a century later, research on the smallest enclosing circle problem remains very active due to its important applications to clustering, nearest neighbor search, data classification, facility location, collision detection, computer graphics, and military operations. The problem has been widely treated in the literature from both theoretical and numerical standpoints.
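Problem (2) also admits a very simple approximation scheme, the core-set iteration of Badoiu and Clarkson, which repeatedly steps part of the way toward the current farthest point. This is shown only for intuition; it is not one of the algorithms studied in this dissertation:

```python
import math

def approx_enclosing_center(points, iters=1000):
    """Badoiu-Clarkson iteration for min_x max_i ||x - a_i||:
    at step k, move 1/(k+1) of the way toward the farthest point."""
    x = list(points[0])
    for k in range(1, iters + 1):
        far = max(points, key=lambda a: math.hypot(x[0] - a[0], x[1] - a[1]))
        x[0] += (far[0] - x[0]) / (k + 1)
        x[1] += (far[1] - x[1]) / (k + 1)
    return x

# The corners of a square have the square's center as circumcenter.
corners = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]
c = approx_enclosing_center(corners)
r = max(math.hypot(c[0] - a[0], c[1] - a[1]) for a in corners)
print(c, r)  # center near (0, 0), radius near sqrt(2)
```

The diminishing step 1/(k+1) makes the iterates settle down even though the farthest point keeps switching between ties.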

In this dissertation, we use tools from nonsmooth analysis and optimization theory to study some complex facility location problems involving distances to sets in a finite-dimensional space. In contrast to the existing facility location models, where the locations are of negligible size and represented by points, the approach adopted in this dissertation allows us to deal with facility location problems where the locations are of non-negligible size and represented by sets. Our efforts focus not only on studying theoretical aspects but also on developing effective solution methods for these problems.

The dissertation has five chapters, a list of references, and an appendix containing MATLAB codes for some numerical examples.

Chapter 1 collects several concepts and results from convex analysis and DC programming that are useful for subsequent studies. We also briefly describe the majorization-minimization principle, Nesterov's accelerated gradient method and smoothing technique, as well as P. D. Tao and L. T. H. An's DC algorithm.

Chapter 2 is devoted to numerically solving a number of new models of facility location which generalize the classical Fermat-Torricelli problem. Convergence of the proposed algorithms is proved and numerical tests are presented.

Chapter 3 studies a generalized version of problem (2) from both theoretical and numerical viewpoints. Sufficient conditions guaranteeing the existence and uniqueness of solutions, optimality conditions, and constructions of the solutions in special cases are addressed. We also propose an algorithm based on the log-exponential smoothing technique and Nesterov's accelerated gradient method for solving the problem under consideration.

Chapter 4 is dedicated to studying a nonconvex facility location problem that is a generalization of problem (1). After establishing some theoretical properties, we propose an algorithm combining the DC algorithm and the Weiszfeld algorithm for solving the problem.

Chapter 5 is totally different from the preceding parts of the dissertation. Motivated by some recently developed methods, we introduce a generalized proximal point algorithm for solving optimization problems in which the objective functions can be represented as differences of nonconvex and convex functions. Convergence of this algorithm is established under the main assumption that the objective function satisfies the Kurdyka-Łojasiewicz property.


Chapter 1

Preliminaries

Several concepts and results from convex analysis and DC programming are recalled in this chapter. As a preparation for the investigations in Chapters 2–5, we also describe the majorization-minimization principle, Nesterov's accelerated gradient method and smoothing technique, as well as the DC algorithm.

We use IRn to denote the n-dimensional Euclidean space, ⟨·, ·⟩ to denote the inner product, and ‖ · ‖ to denote the associated Euclidean norm. The subdifferential in the sense of convex analysis of a convex function f : IRn → IR ∪ {+∞} at x̄ ∈ dom f := {x ∈ IRn : f(x) < +∞} is defined by

∂f(x̄) := {v ∈ IRn : ⟨v, x − x̄⟩ ≤ f(x) − f(x̄) for all x ∈ IRn}.

For a nonempty closed convex subset Ω of IRn and a point x̄ ∈ Ω, the normal cone to Ω at x̄ is the set N(x̄; Ω) := {v ∈ IRn : ⟨v, x − x̄⟩ ≤ 0 for all x ∈ Ω}. This normal cone is the subdifferential of the indicator function

δ(x; Ω) := 0 if x ∈ Ω,  +∞ if x ∉ Ω,

at x̄, i.e., N(x̄; Ω) = ∂δ(x̄; Ω). The distance function to Ω is defined by

d(x; Ω) := inf{‖x − ω‖ : ω ∈ Ω}, x ∈ IRn.  (1.1)

The notation P(x̄; Ω) := {w̄ ∈ Ω : d(x̄; Ω) = ‖x̄ − w̄‖} stands for the Euclidean projection of x̄ onto Ω. The subdifferential of the distance function (1.1) at x̄ can be computed by the formula

∂d(x̄; Ω) = N(x̄; Ω) ∩ IB if x̄ ∈ Ω,  and  ∂d(x̄; Ω) = { (x̄ − P(x̄; Ω)) / d(x̄; Ω) } if x̄ ∉ Ω,

where IB denotes the Euclidean closed unit ball of IRn.
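Outside Ω the projection formula yields an explicit gradient, ∇d(x̄; Ω) = (x̄ − P(x̄; Ω))/d(x̄; Ω). A quick sketch with Ω taken to be a box, an illustrative choice for which the Euclidean projection is a coordinate-wise clamp:

```python
import math

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo1, hi1] x [lo2, hi2]: clamp each coordinate."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def dist_and_grad(x, lo, hi):
    """d(x; Omega) and, for x outside Omega, the gradient (x - P(x; Omega)) / d(x; Omega)."""
    p = project_box(x, lo, hi)
    d = math.hypot(x[0] - p[0], x[1] - p[1])
    if d == 0.0:
        return 0.0, None  # x lies in Omega; here d(.) need not be differentiable
    return d, [(x[0] - p[0]) / d, (x[1] - p[1]) / d]

d, g = dist_and_grad([3.0, 2.0], [0.0, 0.0], [1.0, 1.0])
print(d, g)  # d = sqrt(5); g is a unit vector pointing away from the box
```

Since the gradient is the unit vector from the projection to the point, its Euclidean norm is always 1 outside Ω, matching the subdifferential formula above.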

Trang 7

1.2 Majorization-Minimization Principle

The basic idea of the majorization-minimization (MM) principle is to convert a hard optimization problem (for example, a nondifferentiable problem) into a sequence of simpler ones (for example, smooth problems). The objective function f : IRn → IR is said to be majorized by a surrogate function M : IRn × IRn → IR on Ω if f(x) ≤ M(x, y) and f(y) = M(y, y) for all x, y ∈ Ω. Given x0 ∈ Ω, the iterates of the associated MM algorithm for minimizing f on Ω are defined by

xk+1 ∈ argmin{M(x, xk) : x ∈ Ω}.

Because f(xk+1) ≤ M(xk+1, xk) ≤ M(xk, xk) = f(xk), the MM iterates generate a descent algorithm driving the objective function downhill.
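As a concrete instance of this descent property (our illustration, with an arbitrary smooth convex objective): majorizing an L-smooth function by its quadratic upper bound M(x, y) = f(y) + f'(y)(x − y) + (L/2)(x − y)² turns the MM update into a gradient step with step size 1/L.

```python
import math

def f(x):
    # smooth convex objective (illustrative choice): log(1 + e^x) + x^2 / 2
    return math.log(1 + math.exp(x)) + 0.5 * x * x

def grad(x):
    return 1 / (1 + math.exp(-x)) + x

L = 1.25  # Lipschitz constant of grad: sigmoid' <= 1/4, plus 1 from x^2/2

def majorant(x, y):
    # quadratic surrogate: satisfies f(x) <= M(x, y) and f(y) = M(y, y)
    return f(y) + grad(y) * (x - y) + 0.5 * L * (x - y) ** 2

x = 2.0
values = [f(x)]
for _ in range(200):
    # minimizing M(., x) in closed form gives the MM update x - grad(x) / L
    x = x - grad(x) / L
    values.append(f(x))

# descent property: f(x_{k+1}) <= M(x_{k+1}, x_k) <= M(x_k, x_k) = f(x_k)
print(x, all(b <= a + 1e-12 for a, b in zip(values, values[1:])))
```

The recorded values are monotonically nonincreasing, exactly as the chain of inequalities above predicts.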

Let f : IRn → IR be a convex function with Lipschitz gradient; that is, there exists ℓ ≥ 0 such that ‖∇f(x) − ∇f(y)‖ ≤ ℓ‖x − y‖ for all x, y ∈ IRn. Let Ω be a nonempty closed convex set. Yu. Nesterov (1983, 2005) considered the optimization problem

min{f(x) : x ∈ Ω}.  (1.2)

Let d be a prox-function of Ω with modulus σ > 0 and let x0 = argmin{d(x) : x ∈ Ω}. We can assume that d(x0) = 0. Then Nesterov's accelerated gradient algorithm for solving (1.2) is outlined as follows.

INPUT: f, ℓ, x0 ∈ Ω. Set k = 0.
Repeat:
find yk := ΨΩ(xk);
find zk := argmin{ (ℓ/σ) d(x) + Σ_{i=0}^k ((i+1)/2) [f(xi) + ⟨∇f(xi), x − xi⟩] : x ∈ Ω };
set xk+1 := (2/(k+3)) zk + ((k+1)/(k+3)) yk;
set k := k + 1;
until a stopping criterion is satisfied.
OUTPUT: yk.
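The scheme above can be sketched for the unconstrained case Ω = IRn with d(x) = ‖x − x0‖²/2 (so σ = 1), in which both subproblems have closed-form solutions. We take the step yk := ΨΩ(xk) to be a plain gradient step, an assumption on our part since ΨΩ is defined in a part of the dissertation not reproduced here:

```python
def nesterov(grad, x0, ell, iters):
    """Accelerated scheme with d(x) = ||x - x0||^2 / 2 on Omega = IR^n:
    y_k  = x_k - grad f(x_k) / ell                (gradient step)
    z_k  = x0 - (1/ell) * sum_i (i+1)/2 grad f(x_i)
    x_{k+1} = 2/(k+3) z_k + (k+1)/(k+3) y_k."""
    n = len(x0)
    x = list(x0)
    gsum = [0.0] * n  # accumulates sum_i (i+1)/2 * grad f(x_i)
    y = x
    for k in range(iters):
        g = grad(x)
        y = [x[j] - g[j] / ell for j in range(n)]
        gsum = [gsum[j] + (k + 1) / 2 * g[j] for j in range(n)]
        z = [x0[j] - gsum[j] / ell for j in range(n)]
        x = [2 / (k + 3) * z[j] + (k + 1) / (k + 3) * y[j] for j in range(n)]
    return y

# ill-conditioned quadratic f(x) = (x1^2 + 25 x2^2) / 2, so ell = 25
f = lambda x: 0.5 * (x[0] ** 2 + 25 * x[1] ** 2)
grad = lambda x: [x[0], 25 * x[1]]
y = nesterov(grad, [5.0, 3.0], 25.0, 300)
print(f(y))  # decays at the accelerated O(ell / k^2) rate
```

After 300 iterations the accelerated rate bound already forces the objective below 0.02, far beyond what 300 plain gradient steps achieve on this conditioning.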

Let Ω be a nonempty closed convex subset of IRn and let Q be a nonempty compact convex subset of IRm. Consider the constrained optimization problem (1.2) in which f : IRn → IR is a convex function of the type

f(x) := max{⟨Ax, u⟩ − φ(u) : u ∈ Q}, x ∈ IRn,

where A is an m × n matrix and φ is a continuous convex function on Q. Let d1 be a prox-function of Q with modulus σ1 > 0 and let ū := argmin{d1(u) : u ∈ Q} be the unique minimizer of d1 on Q. Assume that d1(ū) = 0. We work mainly with d1(u) = (1/2)‖u − ū‖², where ū ∈ Q. Let μ be a positive number called a smoothing parameter. Define

fμ(x) := max{⟨Ax, u⟩ − φ(u) − μ d1(u) : u ∈ Q}.  (1.3)

Theorem 1.1. The function fμ in (1.3) is well defined and continuously differentiable on IRn. The gradient of the function is ∇fμ(x) = Aᵀuμ(x), where uμ(x) is the unique element of Q at which the maximum in (1.3) is attained. Moreover, ∇fμ is a Lipschitz function with the Lipschitz constant

ℓμ = (1/(μσ1)) ‖A‖².

Let D1 := max{d1(u) : u ∈ Q}. Then fμ(x) ≤ f(x) ≤ fμ(x) + μD1 for all x ∈ IRn.
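A one-dimensional illustration of Theorem 1.1 (ours, not from the text): with A = 1, φ = 0, Q = [−1, 1], ū = 0, and d1(u) = u²/2, we get f(x) = |x| and fμ equal to the Huber function, with ∇fμ(x) = clip(x/μ, −1, 1) as the maximizer uμ(x):

```python
mu = 0.1

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def f(x):
    # f(x) = max{x*u : u in [-1, 1]} = |x|
    return abs(x)

def f_mu(x):
    # f_mu(x) = max{x*u - mu*u^2/2 : u in [-1, 1]}, i.e. the Huber function
    return x * x / (2 * mu) if abs(x) <= mu else abs(x) - mu / 2

def grad_f_mu(x):
    # gradient = the maximizer u_mu(x) = clip(x / mu, -1, 1), as in Theorem 1.1
    return clip(x / mu, -1.0, 1.0)

# check the sandwich f_mu <= f <= f_mu + mu*D1, where D1 = max{u^2/2 : u in [-1,1]} = 1/2
xs = [i / 100 for i in range(-300, 301)]
ok = all(f_mu(x) <= f(x) <= f_mu(x) + mu * 0.5 + 1e-12 for x in xs)
print(ok)  # True
```

The Lipschitz constant ℓμ = 1/μ is visible here: the gradient climbs from −1 to 1 over an interval of width 2μ.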

Let g : IRn → IR ∪ {+∞} and h : IRn → IR be convex functions. Here we assume that g is proper and lower semicontinuous. Consider the DC programming problem

min{f(x) := g(x) − h(x) : x ∈ IRn}.  (1.4)

Proposition 1.1. If x̄ ∈ dom f is a local minimizer of (1.4), then ∂h(x̄) ⊆ ∂g(x̄). Moreover, by Toland's duality,

inf{g(x) − h(x) : x ∈ IRn} = inf{h∗(y) − g∗(y) : y ∈ IRn}.

The DCA for solving (1.4) is summarized as follows:

Step 1. Choose x0 ∈ dom g.

Step 2. For k ≥ 0, use xk to find yk ∈ ∂h(xk). Then use yk to find xk+1 ∈ ∂g∗(yk).

Step 3. Increase k by 1 and go back to Step 2.
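The three steps can be traced on a toy DC decomposition (our choice, not from the text): g(x) = x², h(x) = |x|, so that ∂h(xk) = sign(xk) away from 0, g∗(y) = y²/4, and ∂g∗(yk) = yk/2. The objective x² − |x| is nonconvex with global minimizers at ±1/2.

```python
# DCA on f(x) = g(x) - h(x) with g(x) = x^2 and h(x) = |x|.
# Step 2 becomes: y_k = sign(x_k) in dh(x_k), then x_{k+1} = y_k / 2 in dg*(y_k).

def sign(x):
    return (x > 0) - (x < 0)

def dca(x0, iters=50):
    x = x0
    for _ in range(iters):
        y = sign(x)   # y_k in dh(x_k)
        x = y / 2     # x_{k+1} in dg*(y_k), since g*(y) = y^2 / 4
    return x

x = dca(2.0)
print(x)  # 0.5, a global minimizer of x^2 - |x|
```

Starting from any positive point the scheme lands on 1/2 after one step and stays there, illustrating that DCA converges to a critical point rather than performing a line search.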


Chapter 2

Effective Algorithms for Solving Generalized Fermat-Torricelli Problems

In this chapter, we present algorithms for solving a number of new models of facility location which generalize the classical Fermat-Torricelli problem. The chapter is written on the basis of the paper [2] in the list of the author's related papers.

B. S. Mordukhovich, N. M. Nam and J. Salinas (2012) proposed the following generalized model of the Fermat-Torricelli problem, built on the generalized distance function

dF(x; Θ) := inf{σF(x − w) : w ∈ Θ},  (2.3)

where F is a nonempty compact convex set of IRn that contains the origin as an interior point. If F is the closed unit Euclidean ball of IRn, the function (2.3) becomes the familiar distance function (2.2). We focus on developing


algorithms for solving the following generalized version of (2.1):

min{ T(x) := Σ_{i=1}^m dF(x; Ωi) : x ∈ Ω },  (2.4)

where Ωi for i = 1, …, m and Ω are nonempty closed convex sets. The sets Ωi for i = 1, …, m are called the target sets and the set Ω is called the constraint set. When all the target sets are singletons, i.e., Ωi = {ai} for i = 1, …, m, problem (2.4) reduces to a point-based generalized Fermat-Torricelli problem.

A Simplified Form of the MM Principle

We now present a simplified version of Theorem 1.1 in which the gradient of fμ has an explicit representation.

Theorem 2.1. Let A be an m × n matrix and let Q be a nonempty compact and convex subset of IRm. Consider the function f(x) := max{⟨Ax, u⟩ − ⟨b, u⟩ : u ∈ Q} with b ∈ IRm, ū ∈ Q, and μ > 0. Then the smooth approximation

fμ(x) = ⟨Ax − b, ū⟩ + (1/(2μ)) ‖Ax − b‖² − (μ/2) [ d(ū + (Ax − b)/μ ; Q) ]²

is well defined and continuously differentiable on IRn with its gradient given by

∇fμ(x) = Aᵀ P(ū + (Ax − b)/μ ; Q).

The gradient ∇fμ is a Lipschitz function with constant ℓμ = (1/μ)‖A‖². Moreover, fμ(x) ≤ f(x) ≤ fμ(x) + (μ/2)[D(ū; Q)]² for all x ∈ IRn, with D(ū; Q) := sup{‖ū − u‖ : u ∈ Q}.
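The gradient formula of Theorem 2.1 is easy to sanity-check numerically. Below Q is the unit box (so that P(·; Q) is a coordinate-wise clamp), and A, b, ū, and the test point are arbitrary illustrative choices of ours:

```python
A = [[2.0, -1.0], [0.5, 3.0]]
b = [1.0, -2.0]
u_bar = [0.5, 0.5]  # a point of Q
mu = 0.3

def proj_Q(u):
    # Euclidean projection onto Q = [0, 1]^2: clamp each coordinate
    return [min(max(ui, 0.0), 1.0) for ui in u]

def f_mu(x):
    c = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]  # Ax - b
    u = proj_Q([u_bar[i] + c[i] / mu for i in range(2)])                  # u_mu(x)
    return sum(c[i] * u[i] for i in range(2)) - mu / 2 * sum(
        (u[i] - u_bar[i]) ** 2 for i in range(2))

def grad_f_mu(x):
    # Theorem 2.1: grad f_mu(x) = A^T P(u_bar + (Ax - b)/mu ; Q)
    c = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
    u = proj_Q([u_bar[i] + c[i] / mu for i in range(2)])
    return [sum(A[i][j] * u[i] for i in range(2)) for j in range(2)]

x = [0.2, -0.4]
g = grad_f_mu(x)
h = 1e-6
fd = [(f_mu([x[0] + h, x[1]]) - f_mu([x[0] - h, x[1]])) / (2 * h),
      (f_mu([x[0], x[1] + h]) - f_mu([x[0], x[1] - h])) / (2 * h)]
print(g, fd)  # the analytic and finite-difference gradients agree
```

Because the maximizing u is the projection of ū + (Ax − b)/μ onto Q, the analytic gradient matches a central finite difference even when the projection is active on the boundary of Q.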


We continue with a more general version of the MM principle. Let f : IRn → IR be a convex function and let Ω be a nonempty closed convex subset of IRn. Consider the optimization problem

min{f(x) : x ∈ Ω}.  (2.6)

Let M : IRn × IRp → IR and let F : IRn ⇒ IRp be a set-valued mapping with nonempty values such that the following properties hold for all x, y ∈ IRn:

f(x) ≤ M(x, z) for all z ∈ F(y), and f(x) = M(x, z) for all z ∈ F(x).

Given x0 ∈ Ω, the MM algorithm to solve (2.6) is given by:

choose zk ∈ F(xk) and find xk+1 ∈ argmin{M(x, zk) : x ∈ Ω}.

We say that F is normally smooth if for every boundary point x of F there exists ax ∈ IRn such that N(x; F) is the cone generated by ax. Let

IB∗F := {u ∈ IRn : σF(u) ≤ 1}.

Proposition 2.1. F is normally smooth if and only if IB∗F is strictly convex.

Proposition 2.2. Suppose that F is normally smooth. If for any x, y ∈ Ω with x ≠ y the line connecting x and y, L(x, y), does not contain at least one of the points ai for i = 1, …, m, then problem (2.5) has a unique optimal solution.

Given any ū ∈ F, consider the smooth approximation function Hμ. The associated algorithm is outlined as follows.

INPUT: ai for i = 1, …, m, and μ.
INITIALIZE: Choose x0 ∈ Ω and set ℓ = m/μ. Set k = 0.
Repeat: compute ∇Hμ(xk) = Σ_{i=1}^m …
until a stopping criterion is satisfied.
OUTPUT: yk.

The generalized projection from a point x ∈ IRn to a set Θ is defined by πF(x; Θ) := {w ∈ Θ : σF(x − w) = dF(x; Θ)}. A convex set F is said to be normally round if N(x; F) ≠ N(y; F) for any distinct boundary points x, y of F.

Proposition 2.4. Given a nonempty closed convex set Θ, consider the generalized distance function (2.3). Then the following properties hold:

(i) |dF(x; Θ) − dF(y; Θ)| ≤ ‖F‖ ‖x − y‖ for all x, y ∈ IRn.

(ii) The function dF(·; Θ) is convex, and ∂dF(x̄; Θ) = ∂σF(x̄ − w̄) ∩ N(w̄; Θ) for any x̄ ∈ IRn, where w̄ ∈ πF(x̄; Θ); this representation does not depend on the choice of w̄.

(iii) If F is normally smooth and round, then σF(·) is differentiable at any nonzero point, and dF(·; Θ) is continuously differentiable on the complement of Θ with ∇dF(x̄; Θ) = ∇σF(x̄ − w̄), where x̄ ∉ Θ and w̄ := πF(x̄; Θ).

Proposition 2.5. Suppose that F is normally smooth and the target sets Ωi for i = 1, …, m are strictly convex with at least one of them being bounded. If for any x, y ∈ Ω with x ≠ y there exists an index i ∈ {1, …, m} such that πF(x; Ωi) ∉ L(x, y), then problem (2.4) has a unique optimal solution.

Let us apply the MM principle to the generalized Fermat-Torricelli problem. We rely on the following properties, which hold for all x, y ∈ IRn:

(i) dF(x; Θ) = σF(x − w) for all w ∈ πF(x; Θ);

(ii) dF(x; Θ) ≤ σF(x − w) for all w ∈ πF(y; Θ).

Consider the set-valued mapping F(x) := Π_{i=1}^m πF(x; Ωi). Then the cost function T(x) is majorized by

M(x, w) := Σ_{i=1}^m σF(x − wi) for w = (w1, …, wm).


Moreover, T(x) = M(x, w) whenever w ∈ F(x). Thus, given x0 ∈ Ω, the MM iteration is given by

xk+1 ∈ argmin{M(x, wk) : x ∈ Ω} with wk ∈ F(xk).

This algorithm can be written more explicitly as follows.

INPUT: Ω and m target sets Ωi, i = 1, …, m.
INITIALIZE: x0 ∈ Ω. Set k = 0.
Repeat: find yk,i ∈ πF(xk; Ωi); solve min_{x∈Ω} Σ_{i=1}^m σF(x − yk,i) with a stopping criterion and denote its solution by xk+1;
until a stopping criterion is satisfied.
OUTPUT: xk.
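This MM scheme can be sketched for the Euclidean case, assuming σF = ‖·‖ (F the unit ball), Ω = IR², and disk-shaped target sets; the inner subproblem min_x Σi ‖x − yk,i‖ is handled by Weiszfeld iterations, which is our choice of subproblem solver for this illustration:

```python
import math

def proj_disk(x, c, r):
    """Euclidean projection onto the disk of center c and radius r."""
    d = math.hypot(x[0] - c[0], x[1] - c[1])
    if d <= r:
        return list(x)
    return [c[0] + r * (x[0] - c[0]) / d, c[1] + r * (x[1] - c[1]) / d]

def weiszfeld_step(x, pts, eps=1e-12):
    num, den = [0.0, 0.0], 0.0
    for p in pts:
        d = math.hypot(x[0] - p[0], x[1] - p[1])
        if d < eps:
            return list(p)
        num[0] += p[0] / d
        num[1] += p[1] / d
        den += 1.0 / d
    return [num[0] / den, num[1] / den]

def mm_location(disks, x0, outer=200, inner=50):
    """MM for min_x sum_i d(x; Omega_i): project onto each target set,
    then solve the resulting point-based subproblem by Weiszfeld iterations."""
    x = list(x0)
    for _ in range(outer):
        ys = [proj_disk(x, c, r) for c, r in disks]  # y_{k,i} = projections
        for _ in range(inner):                       # min_x sum_i ||x - y_{k,i}||
            x = weiszfeld_step(x, ys)
    return x

side = 4.0
centers = [(0.0, 0.0), (side, 0.0), (side / 2, side * math.sqrt(3) / 2)]
disks = [(c, 0.2) for c in centers]
x = mm_location(disks, (3.0, 3.0))
print(x)  # near the centroid (2, 2*sqrt(3)/3), the Fermat point of the centers
```

For equal disks at the vertices of an equilateral triangle the optimal location coincides with the Fermat point of the centers, which gives a convenient check.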

Proposition 2.6. Consider the generalized Fermat-Torricelli problem (2.4) in which F is normally smooth and round. Let {xk} be the sequence in the MM algorithm defined by xk+1 ∈ argmin{Σ_{i=1}^m σF(x − πF(xk; Ωi)) : x ∈ Ω}. Suppose that {xk} converges to x̄ that does not belong to Ωi for i = 1, …, m. Then x̄ is an optimal solution of problem (2.4).

Lemma 2.1. Consider the generalized Fermat-Torricelli problem (2.4) in which at least one of the target sets Ωi for i = 1, …, m is bounded and F is normally smooth and round. Suppose that the constraint set Ω does not intersect any of the target sets Ωi for i = 1, …, m, and that for any x, y ∈ Ω with x ≠ y the line connecting x and y, L(x, y), does not intersect at least one of the target sets. For any x ∈ Ω, consider the mapping ψ : Ω → Ω defined by

ψ(x) := argmin{ Σ_{i=1}^m σF(z − πF(x; Ωi)) : z ∈ Ω }.

Theorem 2.2. Consider problem (2.4) in the setting of Lemma 2.1. Let {xk} be a sequence generated by the MM algorithm, i.e., xk+1 = ψ(xk) with a given x0 ∈ Ω. Then any cluster point of the sequence {xk} is an optimal solution of problem (2.4). If we assume additionally that the Ωi for i = 1, …, m are strictly convex, then {xk} converges to the unique optimal solution of the problem.

It is important to note that the algorithm may not converge in general. Our examples (given in the dissertation) partially answer a question raised by E. Chi, H. Zhou, and K. Lange (2013).
