A sequential convex programming algorithm for minimizing a sum of Euclidean norms with non-convex constraints
Le Hong Trang∗1,2, Attila Kozma1, Phan Thanh An2,3, and Moritz Diehl1,4
1Electrical Engineering Department (ESAT-STADIUS) / OPTEC, KU Leuven, Kasteelpark Arenberg 10, 3001
Leuven-Heverlee, Belgium
2Center for Mathematics and its Applications (CEMAT), Instituto Superior Técnico, Universidade de Lisboa,
Av. Rovisco Pais, 1049-001 Lisboa, Portugal
3Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Hanoi, Vietnam
4Institute of Microsystems Engineering (IMTEK), University of Freiburg, Georges-Koehler-Allee 102, 79110 Freiburg,
Germany
Abstract
Given $p$, $q$ and a finite set of convex polygons $\langle P_1, \dots, P_N \rangle$ in $\mathbb{R}^3$, we propose an approximate algorithm to find a Euclidean shortest path starting at $p$, visiting the relative boundaries of the convex polygons in a given order, and ending at $q$. The problem can be rewritten as a variant of the problem of minimizing a sum of Euclidean norms:
$$\min_{p_1, \dots, p_N} \sum_{i=0}^{N} \|p_i - p_{i+1}\|,$$
where $p_0 = p$ and $p_{N+1} = q$, subject to $p_i$ lying on the relative boundary of $P_i$, for $i = 1, \dots, N$. The objective function is convex but not everywhere differentiable, and the constraint set of the problem is not convex. By using a smooth inner approximation of $P_i$ with parameter $t$, a relaxed form of the problem is constructed such that its solution, denoted by $p_i(t)$, is inside $P_i$ but outside the inner approximation. The relaxed problem is then solved iteratively using sequential convex programming. The obtained solution $p_i(t)$, however, is actually not on the relative boundary of $P_i$. A so-called refinement of $p_i(t)$ is therefore finally required to determine a solution lying on the relative boundary of $P_i$, for $i = 1, \dots, N$.
It is shown that the solution of the relaxed problem tends to its refined one as $t \to 0$. The algorithm is implemented in Matlab using the CVX package. Numerical tests indicate that the solution obtained by the algorithm is very close to a global one.
Keywords: sequential convex programming, shortest path, minimizing a sum of Euclidean norms, non-convex constraint, relaxation
The problem of minimizing a sum of Euclidean norms arises in many applications, including the facilities location problem [20] and the VLSI (very-large-scale integration) layout problem [1]. Many numerical algorithms for solving the problem have been introduced (see [11, 13, 14, 20], etc.). To solve
∗ email: le.hongtrang@esat.kuleuven.be
the unconstrained problem of minimizing a sum of Euclidean norms, Overton [11] gave an algorithm which has quadratic convergence under some given conditions. Based on polynomial-time interior-point methods, Xue and Ye [20] in 1997 introduced an efficient algorithm which computes an $\epsilon$-approximate solution of the problem. Some applications to the Euclidean single-facility location problem, the Euclidean multifacility location problem, and the shortest network under a given tree topology were also presented. Qi et al. [13, 14] proposed two methods for solving the problem. A smoothing Newton method was introduced in 2000; the algorithm is globally and quadratically convergent. In 2002, by transforming the problem and its dual into a system of strongly semi-smooth equations, they presented a primal-dual algorithm for the problem by solving this system. For solving the problem of minimizing a sum of Euclidean norms with linear constraints, Andersen and Christiansen [2] proposed a Newton barrier method in which the linear constraints are handled by an exact $L_1$ penalty. A globally and quadratically convergent method was recently introduced by Zhou [21] to solve this problem. All these problems are convex.
In this paper we consider the problem of finding the shortest path starting at a point $p$, then visiting the relative boundaries of convex polygons in 3D, denoted by $\langle P_1, \dots, P_N \rangle$, in a given order, and ending at $q$. The problem can be rewritten as a variant of the problem of minimizing a sum of Euclidean norms:
$$\min_{p_1, \dots, p_N} \sum_{i=0}^{N} \|p_i - p_{i+1}\|,$$
where $p_0 = p$ and $p_{N+1} = q$ are fixed, and $p_i$ is on the relative boundary of the convex polygon $P_i$, for $i = 1, \dots, N$. This problem is non-convex. Based on the sequential convex programming approach [15], we introduce an approximate algorithm for solving the problem. By using a smooth inner approximation of $P_i$ with parameter $t$, a relaxed form of the problem is constructed such that its solution, denoted by $p_i(t)$, is inside $P_i$ but outside the inner approximation. The relaxed problem is then solved iteratively using sequential convex programming. The obtained solution $p_i(t)$, however, is actually not on the relative boundary of $P_i$. A so-called refinement of $p_i(t)$ is therefore finally required to determine a solution lying on the relative boundary of $P_i$, for $i = 1, \dots, N$. It is also shown that the solution of the relaxed problem tends to its refined one as $t \to 0$. The algorithm is implemented in Matlab using the CVX package. Numerical tests indicate that the solution obtained by the algorithm is close to a global one.
The rest of the paper is organized as follows. In Section 2, we briefly recall the general framework of sequential convex programming, some notation, and a method for approximating convex polygons. Section 3 presents the formulations of the problem and then introduces our new algorithm. Section 4 gives an analysis of the proposed algorithm. In Section 5, some numerical tests are given. The conclusion is given in Section 6.
We now recall a framework of SCP [15]. Consider the problem
$$\min\ c^T x \quad \text{s.t.} \quad g(x) \le 0,\ x \in \Omega, \qquad (1)$$
where $g : \mathbb{R}^n \to \mathbb{R}^m$ is a nonlinear and smooth function on its domain, and $\Omega$ is a nonempty closed convex subset of $\mathbb{R}^n$.
The main challenge of problem (1) is concentrated in the nonlinearity of $g(x)$. This can be overcome by linearizing it at the current iteration point while maintaining the remaining convexity of the original problem. Let us assume that $g(x)$ is twice continuously differentiable on its domain, and denote by $\lambda$ the Lagrange multiplier of the Lagrange function of (1). The full-step sequential convex programming algorithm for solving (1) is given as follows.
A general SCP framework
1. Choose an initial point $x^0 \in \Omega$ and $\lambda^0 \in \mathbb{R}^m$. Let $k = 0$.
2. Solve the following convex subproblem
$$\min\ c^T x \quad \text{s.t.} \quad g(x^k) + \nabla g(x^k)^T (x - x^k) \le 0,\ x \in \Omega,$$
to obtain a solution $z^k_+ := (x^k_+, \lambda^k_+)$. If $\|z^k_+ - z^k\| \le \varepsilon$ holds for a given $\varepsilon > 0$, then stop. Otherwise, set $z^{k+1} := z^k_+$ and $k := k + 1$, and repeat step 2.
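As a concrete illustration of the full-step scheme (a minimal one-dimensional sketch with made-up data, not an example from the paper), consider minimizing $x$ over $\Omega = [0, 5]$ subject to the non-convex constraint $g(x) = 1 - x^2 \le 0$. Since $g$ is concave, each linearized subproblem is a one-dimensional linear program solvable in closed form:

```python
# Full-step SCP on a 1-D toy problem (illustrative only, not from the paper):
#   minimize x  subject to  g(x) = 1 - x^2 <= 0,  x in Omega = [0, 5].
# The feasible set within Omega is x >= 1, and the constraint is non-convex.

def scp_toy(x0=3.0, eps=1e-10, max_iter=100):
    xk = x0
    for _ in range(max_iter):
        # Linearized constraint: g(xk) + g'(xk)*(x - xk) <= 0, i.e.
        # 1 - xk**2 - 2*xk*(x - xk) <= 0  =>  x >= (1 + xk**2) / (2*xk)
        lower = (1.0 + xk * xk) / (2.0 * xk)
        x_next = min(max(lower, 0.0), 5.0)  # minimize x over Omega and the half-line
        if abs(x_next - xk) <= eps:         # stopping test ||z_+ - z|| <= eps
            return x_next
        xk = x_next
    return xk

print(scp_toy())  # converges to 1.0, the boundary of the non-convex feasible set
```

Each iterate solves the subproblem exactly, and the sequence converges to the boundary point $x = 1$ of the non-convex feasible set, mirroring the behavior of the general framework above.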
We denote the Euclidean norm in $\mathbb{R}^n$ ($n \ge 2$) by $\|\cdot\|$. Let $a \in \mathbb{R}^3$ and $A \subset \mathbb{R}^3$; the distance from $a$ to $A$ is given by
$$d(a, A) = \inf_{a' \in A} \|a - a'\|.$$
Let $A, B \subset \mathbb{R}^3$; the directed Hausdorff distance from $A$ to $B$ is defined by (see [7])
$$d_H(A, B) = \sup_{a \in A} d(a, B).$$
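For finite point sets these two quantities are straightforward to compute; the sketch below (with arbitrary sample sets, chosen only for illustration) also shows that the directed Hausdorff distance is not symmetric:

```python
import math

def dist_to_set(a, A):
    # d(a, A) = inf over a' in A of ||a - a'||  (a min, since A is finite here)
    return min(math.dist(a, ap) for ap in A)

def directed_hausdorff(A, B):
    # d_H(A, B) = sup over a in A of d(a, B)
    return max(dist_to_set(a, B) for a in A)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (3.0, 0.0)]
print(directed_hausdorff(A, B))  # 1.0
print(directed_hausdorff(B, A))  # 2.0 -- the directed distance is asymmetric
```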
Given a convex polygon $P$ in $\mathbb{R}^3$, we also recall the relative interior [18], denoted by $\mathrm{ri}\,P$, as follows:
$$\mathrm{ri}\,P = \{p \in P : \forall q \in P,\ \exists \lambda > 1,\ \lambda p + (1 - \lambda) q \in P\}.$$
Then the set $\partial P := P \setminus \mathrm{ri}\,P$ is said to be the relative boundary of the polygon $P$.
Let $\langle P_1, \dots, P_N \rangle$ be a sequence of convex polygons in $\mathbb{R}^3$ and $\langle p_1, \dots, p_N \rangle$ be a sequence of points where $p_i \in P_i$, for $i = 1, \dots, N$. We define a refinement of the sequence $\langle p_1, \dots, p_N \rangle$ onto $\partial P_i$ as follows.
Definition 1. The sequence $\langle \bar p_1, \dots, \bar p_N \rangle$ obtained by
$$\bar p_i = \arg\min_{p'_i \in \partial P_i} \{\|p'_i - p_i\|\}, \quad \text{for } i = 1, \dots, N, \qquad (2)$$
is called the refined sequence of $\langle p_1, \dots, p_N \rangle$ on $\partial P_i$.
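In the plane, the refinement (2) amounts to projecting each point onto the nearest boundary edge of its polygon. A minimal sketch (assuming, for simplicity, a 2-D polygon given by its vertex list; the paper works with polygons embedded in $\mathbb{R}^3$, where the same edge-by-edge projection applies within the polygon's plane):

```python
import math

def closest_on_segment(p, a, b):
    # Orthogonal projection of p onto segment ab, clamped to the endpoints.
    abx, aby = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * abx + (p[1] - a[1]) * aby) / (abx * abx + aby * aby)
    t = max(0.0, min(1.0, t))
    return (a[0] + t * abx, a[1] + t * aby)

def refine(p, vertices):
    # Realizes (2): the point of the polygon's boundary nearest to p.
    candidates = (closest_on_segment(p, vertices[i], vertices[(i + 1) % len(vertices)])
                  for i in range(len(vertices)))
    return min(candidates, key=lambda q: math.dist(p, q))

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(refine((1.0, 0.25), square))  # (1.0, 0.0): the nearest boundary point
```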
Given a convex polygon $P$ in $\mathbb{R}^3$, a parameter $t > 0$, and $p \in P$, a function denoted by $\Phi(p, t)$ is said to be an inner approximation of $\partial P$ if it satisfies the following conditions:
$$\Phi(p, t) \text{ is a concave differentiable function } \forall p \in P, \quad \text{and} \quad S_{P_{\Phi(p,t)}} \to S_P \text{ as } t \to 0 \text{ monotonically}, \qquad (3)$$
where $S_{P_{\Phi(p,t)}}$ and $S_P$ denote the areas of the closed regions bounded by $P_{\Phi(p,t)}$ and $P$, respectively. Such an approximation can be obtained by several methods, for example using the KS function [8] (described in the next subsection), convex function approximation [6], or soft-max [4].
The KS function was first introduced by Kreisselmeier and Steinhauser [8]. The function aims to provide a single measure of all constraints in an optimization problem. In particular, consider an optimization problem containing $m$ inequality constraints $g(x) \le 0$; the KS function overestimates the set of inequalities of the form $y = g_j(x)$, $j = 1, \dots, m$. A composite function is defined as
$$f_{KS}(x, \rho) = \frac{1}{\rho} \ln \sum_{j=1}^{m} e^{\rho\, g_j(x)},$$
where $\rho$ is an approximation parameter. Raspanti et al. [17] have shown that for $\rho > 0$, $f_{KS}(x, \rho) \ge \max_j\{g_j(x)\}$. Furthermore, for $\rho_1 \ge \rho_2$, $f_{KS}(x, \rho_1) \le f_{KS}(x, \rho_2)$. This implies that $f_{KS}(x, \rho_1)$ gives a better estimation of the feasible region of the optimization problem than $f_{KS}(x, \rho_2)$. In the following example, the application of the KS function to some single-variable functions is visualized. Consider two simple convex inequality constraints
$$g_1(x, y) = (x - 5)^2 - y \le 0, \qquad g_2(x, y) = x - y - 1 \le 0.$$
The KS function of the constraints is shown in Fig. 1. Because the KS function inherits the convexity of the original constraints, it is convex. Furthermore, $f_{KS}(x, \rho)$ tends to $\max\{g_1(x), g_2(x)\}$ as $\rho \to \infty$ [17]. Intuitively, this means that $f_{KS}(x, \rho)$ approaches the boundary of the feasible region.
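The two properties above are easy to check numerically. A small sketch evaluating the KS aggregate of the two example constraints at one sample point (the point $(4, 2)$ is arbitrary, chosen only for illustration):

```python
import math

def f_ks(gs, rho):
    # KS aggregate of the constraint values gs = [g_1(x), ..., g_m(x)]
    return math.log(sum(math.exp(rho * g) for g in gs)) / rho

x, y = 4.0, 2.0
gs = [(x - 5.0) ** 2 - y, x - y - 1.0]   # g_1 and g_2 from the example
m = max(gs)
print(m, f_ks(gs, 5.0), f_ks(gs, 50.0))
# f_KS always over-estimates the max of the constraints, and the
# over-estimate tightens as rho grows:
assert f_ks(gs, 5.0) >= m and f_ks(gs, 50.0) >= m
assert f_ks(gs, 50.0) <= f_ks(gs, 5.0)
```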
Finding the shortest path visiting the relative boundaries of convex polygons
Given $p$, $q$ and a sequence of convex polygons $\langle P_1, P_2, \dots, P_N \rangle$ in $\mathbb{R}^3$, a formulation of the problem of finding the shortest path visiting the relative boundaries of the convex polygons can be given by
$$\min_{p_i \in \partial P_i} \sum_{i=0}^{N} \|p_i - p_{i+1}\|, \qquad (P_{MSN})$$
where $p_0 = p$ and $p_{N+1} = q$.

Figure 1: The KS function for the simple constraints, plotting $g_1(x) = (x-5)^2$ and $g_2(x) = x - 1$ together with the KS approximations for $t = 0.1$, $t = 0.2$, and $t = 0.5$.

There are two challenges in solving the problem. The first one is that
$p_i$ is constrained to be on $\partial P_i$, and this constraint is not convex. Second, the function representing the relative boundary is non-smooth at the vertices of the polygons; hence it is not differentiable at the vertices of the convex polygons. These difficulties can be overcome by using a relaxation of the constraints of problem ($P_{MSN}$).
Our method first computes an inner approximation of each $\partial P_i$ using a function $\Phi_i(p_i, t)$ satisfying (3). An intermediate solution, denoted by $p_i(t)$, can then be obtained by using sequential convex programming, described later, such that both of the following conditions are satisfied: $\Phi_i(p_i(t), t) \ge 0$ and $p_i(t) \in P_i$, for $i = 1, 2, \dots, N$. We then decrease the value of $t$ to get a better approximation of $P_i$; this aims to push $p_i(t)$ to $\partial P_i$. In particular, $p_i(t)$ is generated such that $p_i(t) \to \bar p_i(t)$ as $t \to 0$, where $\bar p_i(t) \in \partial P_i$ is obtained by using (2), for $i = 1, 2, \dots, N$ (see Proposition 6 later).
A geometrical interpretation of the proposed method is shown in Fig. 2. Given two points $p$, $q$ and convex polygons $P_1$, $P_2$, we seek a point $p_i \in \partial P_i$, for $i = 1, 2$, such that $\|p - p_1\| + \|p_1 - p_2\| + \|p_2 - q\|$ is minimal. Let us initialize $t = t_0$; an approximation $\Phi_i(p_i, t_0)$ of $\partial P_i$ is computed. We then determine $p_i(t_0)$ such that $\Phi_i(p_i(t_0), t_0) \ge 0$ and $p_i(t_0) \in P_i$ by using SCP. We reduce $t = t_j$ and compute $p_i(t_j)$ iteratively, for $j = 1, 2, \dots$, until $t$ is small enough. This means that $p_i(t_j)$ will be moved closer to $\partial P_i$ step by step. A sequence of intermediate solutions $p_i(t_j)$ ($j \ge 0$) is determined such that $\Phi_i(p_i(t_j), t_j) \ge 0$ and $p_i(t_j) \in P_i$, for $i = 1, 2$.
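The "pushing" effect can be observed numerically. Using a KS-type approximation of a unit square (a hypothetical 2-D stand-in for a polygon $P_i$, and adopting the convention above that $\Phi_i(p, t) \ge 0$ marks points of $P_i$ outside the inner approximation), the band of admissible intermediate points collapses onto the boundary as $t$ decreases:

```python
import math

def phi_square(p, t):
    # KS-type function of the unit square {0 <= x <= 1, 0 <= y <= 1};
    # the rows A_j p + b_j <= 0 are: -x, x-1, -y, y-1.
    vals = [-p[0], p[0] - 1.0, -p[1], p[1] - 1.0]
    m = max(vals)
    return m + t * math.log(sum(math.exp((v - m) / t) for v in vals))

def band_depth(t, steps=10000):
    # How far from the left edge, along the ray (d, 0.5), does Phi stay >= 0?
    depth = 0.0
    for k in range(steps):
        d = 0.5 * k / steps
        if phi_square((d, 0.5), t) >= 0.0:
            depth = d
    return depth

for t in (0.2, 0.1, 0.05):
    print(t, band_depth(t))  # the admissible band shrinks toward the boundary
```

The printed depths decrease rapidly with $t$, which is exactly the mechanism that drives $p_i(t)$ toward $\partial P_i$.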
Instead of solving ($P_{MSN}$) directly, we solve a sequence of sub-problems as follows:
$$\begin{aligned} \min_{p_1, \dots, p_N}\ & \sum_{i=0}^{N} \|p_i - p_{i+1}\| \\ \text{s.t.}\ & \Phi_i(p_i, t) \le 0, \quad i = 1, \dots, N, \\ & p_i \in P_i, \quad i = 1, \dots, N, \end{aligned} \qquad (P_{RLX}(p, t))$$
where $p_0 = p$ and $p_{N+1} = q$. We now formulate ($P_{RLX}(p, t)$) for a certain value of $t$. With given
Figure 2: An example illustrating the idea of our method: the intermediate points $p_i(t_0)$, $p_i(t_1)$, $p_i(t_2)$ in the polygons $P_1$ and $P_2$ move toward the relative boundaries as the regions $\Phi_i(p_i, t_j) \le 0$ change with decreasing $t$.
convex polygons $P_i \subset \mathbb{R}^3$ and $p_i \in P_i$, for $i = 1, \dots, N$, this can be expressed by
$$A_i p_i + b_i \le 0, \qquad c_i^T p_i + d_i = 0, \qquad i = 1, \dots, N,$$
where $A_i$ and $b_i$ are a parameter matrix and vector, respectively. For each convex polygon $P_i$, a row of $A_i$, together with the corresponding element of $b_i$ and the equation $c_i^T p_i + d_i = 0$, specifies a line containing a boundary edge of $P_i$. A numerical formulation of the nonlinear optimization problem ($P_{RLX}(p, t)$) is thus given by
$$\begin{aligned} \min_{p_1, \dots, p_N}\ & \sum_{i=0}^{N} \|p_i - p_{i+1}\| \\ \text{s.t.}\ & \Phi_i(p_i, t) \le 0, \quad i = 1, \dots, N, \\ & A_i p_i + b_i \le 0, \quad i = 1, \dots, N, \\ & c_i^T p_i + d_i = 0, \quad i = 1, \dots, N, \end{aligned} \qquad (P_{NLP}(p, t))$$
where $p_0 = p$ and $p_{N+1} = q$.
Remark 1. Let $f(p_1, p_2, \dots, p_N) = \sum_{i=0}^{N} \|p_i - p_{i+1}\|$, where $p_0 = p$, $p_{N+1} = q$, and $p_i \in P_i$ for $i = 1, \dots, N$. Then $f$ is a convex function.
In problem ($P_{NLP}(p, t)$), the first constraint is concave. Following the SCP approach described in subsection 2.1, we take the linearization of $\Phi_i(p_i, t)$ at $\bar p_i$:
$$\Phi_i(p_i, t) \simeq \Phi_i(\bar p_i, t) + \nabla \Phi_i(\bar p_i, t)^T (p_i - \bar p_i). \qquad (6)$$
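The reason this linearization is safe for a concave constraint is that a concave function lies below each of its tangent planes, so imposing the linearized constraint is at least as restrictive as the original one; this is the argument used in Proposition 1. A quick numerical check with a sample concave function standing in for $\Phi_i$ (the function and the test points are arbitrary choices for illustration):

```python
def phi(p):
    # a sample concave function standing in for Phi_i(., t)
    return 1.0 - p[0] ** 2 - p[1] ** 2

def phi_lin(p, pbar):
    # linearization (6) of phi at pbar: phi(pbar) + grad(pbar)^T (p - pbar)
    gx, gy = -2.0 * pbar[0], -2.0 * pbar[1]
    return phi(pbar) + gx * (p[0] - pbar[0]) + gy * (p[1] - pbar[1])

pbar = (0.6, 0.2)
for p in [(0.0, 0.0), (1.0, 1.0), (-0.5, 0.7), (0.6, 0.2)]:
    # a concave function never exceeds its tangent plane, so
    # phi_lin(p) <= 0 implies phi(p) <= 0
    assert phi(p) <= phi_lin(p, pbar) + 1e-12
print("the tangent plane over-estimates phi everywhere")
```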
The solution of the problem ($P_{NLP}(p, t)$) can then be obtained approximately by solving a sequence of the following subproblems:
$$\begin{aligned} \min_{p_1, \dots, p_N}\ & \sum_{i=0}^{N} \|p_i - p_{i+1}\| \\ \text{s.t.}\ & \Phi_i(p_i^k, t) + \nabla \Phi_i(p_i^k, t)^T (p_i - p_i^k) \le 0, \quad i = 1, \dots, N,\ k = 0, 1, \dots, \\ & A_i p_i + b_i \le 0, \quad i = 1, \dots, N, \\ & c_i^T p_i + d_i = 0, \quad i = 1, \dots, N, \end{aligned} \qquad (P_{SCP}(p^k, t))$$
where $p_0 = p$, $p_{N+1} = q$, and $p_i^0$ is an initial feasible value.
We can approximate the convex polygons in problem ($P_{NLP}(p, t)$) using the KS function described in subsection 2.3. Let
$$A_i := (A_{i1}, \dots, A_{i m_i})^T, \qquad b_i := (b_{i1}, \dots, b_{i m_i})^T,$$
where $m_i$ is the number of edges of $P_i$, for $i = 1, \dots, N$. In order to use the KS function, we first set proper matrices $A_i$ and $b_i$, and then take
$$\Phi_i(p_i, t) := t \ln \sum_{j=1}^{m_i} e^{(A_{ij} p_i + b_{ij})/t}, \qquad (7)$$
where $t$ is the approximation parameter, for $i = 1, \dots, N$. The problem ($P_{NLP}(p, t)$) is then rewritten
as follows:
$$\begin{aligned} \min_{p_1, \dots, p_N}\ & \sum_{i=0}^{N} \|p_i - p_{i+1}\| \\ \text{s.t.}\ & \Phi_i(p_i, t) \le 0, \quad i = 1, \dots, N, \\ & A_i p_i + b_i \le 0, \quad i = 1, \dots, N, \\ & c_i^T p_i + d_i = 0, \quad i = 1, \dots, N, \end{aligned} \qquad (P_{NLP-KS}(p, t))$$
where $p_0 = p$ and $p_{N+1} = q$. Taking the linearization of $\Phi_i(p_i, t)$ gives the following problem in the SCP approach:
$$\begin{aligned} \min_{p_1, \dots, p_N}\ & \sum_{i=0}^{N} \|p_i - p_{i+1}\| \\ \text{s.t.}\ & \Phi_i(p_i^k, t) + \nabla \Phi_i(p_i^k, t)^T (p_i - p_i^k) \le 0, \quad i = 1, \dots, N,\ k = 0, 1, \dots, \\ & A_i p_i + b_i \le 0, \quad i = 1, \dots, N, \\ & c_i^T p_i + d_i = 0, \quad i = 1, \dots, N, \end{aligned} \qquad (P_{SCP-KS}(p^k, t))$$
where $p_0 = p$, $p_{N+1} = q$, and $p_i^0$ is an initial feasible value.
Given $p$, $q$ and a sequence of convex polygons $\langle P_1, P_2, \dots, P_N \rangle$ in $\mathbb{R}^3$, Algorithm 1 iteratively solves ($P_{NLP-KS}(p, t)$) to obtain a refined local approximate shortest path passing through the $\partial P_i$ by means of a numerical solution of the corresponding nonlinear programming problem, for $i = 1, 2, \dots, N$. Namely, at each step of the algorithm a local shortest path can be found for a certain value of $t$. The path, however, passes through the relative interiors of the convex polygons, so the refinement (2) is performed to ensure that it passes through the relative boundaries of the polygons. By Definition 1 this is a refinement of the resulting shortest path. Proposition 6 indicates that the shortest path tends to its refinement as $t \to 0$. Let us denote by $\langle p_1(t), \dots, p_N(t) \rangle$ and $\langle \bar p_1(t), \dots, \bar p_N(t) \rangle$ the SCP solution of ($P_{NLP-KS}(p, t)$) and its refinement, respectively, for a certain value of $t$.
Algorithm 1 Finding the refined shortest path visiting the relative boundaries of convex polygons
Input: $p$, $q$ and $\langle P_1, P_2, \dots, P_N \rangle$ in $\mathbb{R}^3$, a solution error $\varepsilon > 0$, a step length $0 < \eta < 1$, and a tolerance $\mu > 0$ of the SCP solver.
Output: $\pi(p, q) = \langle p, \bar p_1(t), \dots, \bar p_N(t), q \rangle$ where $\bar p_i(t) \in \partial P_i$, for $i = 1, \dots, N$.
1: Initialize $t \leftarrow t_0 > 0$, and $p_i^0 \in P_i$, for $i = 1, \dots, N$.
2: repeat
3:   Approximate $\partial P_i$ by $\Phi_i(p_i, t)$ using (7), $i = 1, \dots, N$.
4:   $p(t) := (p_1(t), \dots, p_N(t))^T$ and $\Phi(p, t) := (\Phi_1(p_1, t), \dots, \Phi_N(p_N, t))^T$.
5:   Call SCP_SOLVER($\Phi(p, t)$, $p$) to solve ($P_{NLP-KS}(p, t)$), which gives $\langle p_1(t), \dots, p_N(t) \rangle$.
6:   Refine $\langle p_1(t), \dots, p_N(t) \rangle$ using (2) to obtain $\langle \bar p_1(t), \dots, \bar p_N(t) \rangle$ on $\partial P_i$.
7:   $\bar p(t) := (\bar p_1(t), \dots, \bar p_N(t))^T$.
8:   $t := \eta\, t$.
9: until $\|p(t) - \bar p(t)\| \le \varepsilon$
10: return $\pi(p, q) := \langle p, \bar p_1(t), \dots, \bar p_N(t), q \rangle$
11: procedure SCP_SOLVER($\Phi$, $p^0$)
12:   $k := 0$
13:   repeat
14:     Compute the linearization of $\Phi_i(p_i, t)$ at $p_i^k$.
15:     Solve the convex subproblem ($P_{SCP-KS}(p^k, t)$) to obtain $p_i^{k+1}$.
16:     $p^{k+1} := (p_1^{k+1}, p_2^{k+1}, \dots, p_N^{k+1})^T$
17:     $k := k + 1$
18:   until $\|p^k - p^{k-1}\| \le \mu$
19:   return $\langle p_1^k, p_2^k, \dots, p_N^k \rangle$
20: end procedure
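For small instances, the output of Algorithm 1 can be sanity-checked against dense sampling of the boundary. The sketch below (a hypothetical single-polygon instance in 2-D, not one of the paper's test cases) finds the best boundary point by brute force:

```python
import math

def path_length(points):
    return sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))

def brute_force_boundary(p, q, vertices, samples=2000):
    # Reference solution for one polygon: sample the boundary densely and keep
    # the point x minimizing ||p - x|| + ||x - q||. For checking only; this
    # grid search scales poorly with the number of polygons and is not part
    # of Algorithm 1 itself.
    best, best_len = None, float("inf")
    n = len(vertices)
    for i in range(n):
        a, b = vertices[i], vertices[(i + 1) % n]
        for k in range(samples):
            s = k / samples
            x = (a[0] + s * (b[0] - a[0]), a[1] + s * (b[1] - a[1]))
            length = path_length([p, x, q])
            if length < best_len:
                best, best_len = x, length
    return best, best_len

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
x, length = brute_force_boundary((-1.0, 1.0), (3.0, 1.0), square)
print(x, length)  # the segment pq crosses the square, so the optimal length is 4
```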
Let $p := (p_1, p_2, \dots, p_N)^T$. We now analyze the algorithm. The following property shows the feasibility of the full SCP step of Algorithm 1.
Proposition 1. For a certain value of $t$, if $p'(t)$ is a feasible point of the problem ($P_{NLP-KS}(p, t)$), then a solution $p^*(t)$ of the problem ($P_{SCP-KS}(p^k, t)$) corresponding to $p'(t)$ exists and is feasible for the problem ($P_{NLP-KS}(p, t)$).
Proof. Since $p^*(t)$ is a solution of ($P_{SCP-KS}(p^k, t)$), $A_i p_i^*(t) + b_i \le 0$ and $c_i^T p_i^*(t) + d_i = 0$, for $i = 1, \dots, N$. We only need to show that $\Phi_i(p_i^*(t), t) \le 0$, for $i = 1, \dots, N$. Indeed, since $\Phi_i(p_i(t), t)$ is concave, we have
$$\Phi_i(p_i^*(t), t) \le \Phi_i(p_i'(t), t) + \nabla \Phi_i(p_i'(t), t)^T (p_i^*(t) - p_i'(t)),$$
for $i = 1, \dots, N$. Furthermore, $p^*(t)$ is a solution of ($P_{SCP-KS}(p^k, t)$), so
$$\Phi_i(p_i'(t), t) + \nabla \Phi_i(p_i'(t), t)^T (p_i^*(t) - p_i'(t)) \le 0.$$
It follows that $\Phi_i(p_i^*(t), t) \le 0$. The proof is complete.
Proposition 2. Procedure SCP_SOLVER gives a local solution of the problem ($P_{NLP-KS}(p, t)$).
Proof. In order to solve problem ($P_{NLP-KS}(p, t)$), procedure SCP_SOLVER first converts the problem using extra variables $y_0, y_1, \dots, y_N$, which gives the following equivalent problem:
$$\begin{aligned} \min_{p_1, \dots, p_N, y_0, \dots, y_N}\ & \sum_{i=0}^{N} y_i \\ \text{s.t.}\ & \|p_i - p_{i+1}\| - y_i \le 0, \quad i = 0, \dots, N, \\ & \Phi_i(p_i, t) \le 0, \quad i = 1, \dots, N, \\ & A_i p_i + b_i \le 0, \quad i = 1, \dots, N, \\ & c_i^T p_i + d_i = 0, \quad i = 1, \dots, N. \end{aligned} \qquad (8)$$
Once $\Phi_i$ is linearized, this is a second-order cone program. By [15], the SCP framework in procedure SCP_SOLVER gives a local solution of the nonlinear problem (8).
In vector form, let
$$h_i(p_i, t) := \begin{pmatrix} \Phi_i(p_i, t) \\ c_i^T p_i + d_i \end{pmatrix}, \qquad g_i(p_i) := \begin{pmatrix} \max_j (A_{ij} p_i + b_{ij}) \\ c_i^T p_i + d_i \end{pmatrix},$$
for $i = 1, \dots, N$, $j = 1, \dots, m_i$, where $m_i$ is the number of edges of $P_i$.
Proposition 3. $\lim_{t \to 0} \|h_i(p_i, t) - g_i(p_i)\| = 0$, for $i = 1, \dots, N$.
Proof. The following is proven in [17]:
$$\lim_{t \to 0} \Phi_i(p_i, t) = \max_{j \in \{1, \dots, m_i\}} \{A_{ij} p_i + b_{ij}\}, \quad \text{for } i = 1, \dots, N.$$
Hence,
$$\|h_i(p_i, t) - g_i(p_i)\| = \Big|\Phi_i(p_i, t) - \max_{j \in \{1, \dots, m_i\}} \{A_{ij} p_i + b_{ij}\}\Big|.$$
It follows that
$$\lim_{t \to 0} \|h_i(p_i, t) - g_i(p_i)\| = 0, \quad \text{for } i = 1, \dots, N.$$
The proof is complete.
This means that $h_i(p_i, t)$ converges to $g_i(p_i)$ as $t \to 0$, for $i = 1, \dots, N$. The following states that the convergence is in fact uniform.
Proposition 4. $h_i(p_i, t)$ converges uniformly to $g_i(p_i)$ as $t \to 0$, for $i = 1, \dots, N$.
Proof. We have that $P_i$ is compact. On the one hand, $\{\Phi_i(p_i, t)\}$ is a sequence of continuous functions converging to $\max_{j \in \{1, \dots, m_i\}} \{A_{ij} p_i + b_{ij}\}$ on $P_i$. On the other hand, by Property 3 in [17] we have
$$\Phi_i(p_i, t_1) \ge \Phi_i(p_i, t_2), \quad \forall p_i, \text{ whenever } t_1 \ge t_2.$$
By Theorem 7.13 in [19], we conclude that $\Phi_i(p_i, t)$ converges uniformly to $\max_{j \in \{1, \dots, m_i\}} \{A_{ij} p_i + b_{ij}\}$, for $i = 1, \dots, N$, i.e., for every $\epsilon > 0$ there exists $T > 0$ such that for every $p_i \in P_i$ and $t < T$,
$$\Big|\Phi_i(p_i, t) - \max_{j \in \{1, \dots, m_i\}} \{A_{ij} p_i + b_{ij}\}\Big| < \epsilon.$$
Since
$$\|h_i(p_i, t) - g_i(p_i)\| = \Big|\Phi_i(p_i, t) - \max_{j \in \{1, \dots, m_i\}} \{A_{ij} p_i + b_{ij}\}\Big|,$$
we get $\|h_i(p_i, t) - g_i(p_i)\| < \epsilon$. It follows that $h_i(p_i, t)$ converges uniformly to $g_i(p_i)$ as $t \to 0$, for $i = 1, \dots, N$.
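For the KS-type function (7) the uniformity can also be made quantitative: one of the exponentials in the sum equals $e^{\max_j (A_{ij} p_i + b_{ij})/t}$ and there are $m_i$ terms, so $0 \le \Phi_i(p_i, t) - \max_j (A_{ij} p_i + b_{ij}) \le t \ln m_i$ for every $p_i$, a sandwich bound independent of $p_i$. This explicit bound is not stated in the paper, but it follows directly from (7); a quick numerical confirmation on a hypothetical unit square:

```python
import math

def phi(vals, t):
    # KS-type value t * ln sum_j exp(v_j / t), computed stably
    m = max(vals)
    return m + t * math.log(sum(math.exp((v - m) / t) for v in vals))

def gap(p, t):
    # Phi minus the max, for the rows A_j p + b_j of the unit square
    vals = [-p[0], p[0] - 1.0, -p[1], p[1] - 1.0]
    return phi(vals, t) - max(vals)

grid = [(i / 10.0, j / 10.0) for i in range(11) for j in range(11)]
for t in (0.5, 0.1, 0.01):
    worst = max(gap(p, t) for p in grid)
    bound = t * math.log(4)          # m_i = 4 edges here
    print(t, worst, bound)
    # 0 <= gap <= t * ln(m_i) at every sampled point, uniformly in p
    assert 0.0 <= worst <= bound + 1e-12
```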
Combining this proposition with Theorem 7.9 in [19], we have the following corollary.
Corollary 1.
$$\lim_{t \to 0} \sup_{p_i \in P_i} \{\|h_i(p_i, t) - g_i(p_i)\|\} = 0.$$
Furthermore, setting $u_i := (p_{ix}, p_{iy}, \Phi_i(p_i, t))^T$ and $v_i := (p_{ix}, p_{iy}, \max_{j \in \{1, \dots, m_i\}} \{A_{ij} p_i + b_{ij}\})^T$, then
$$\lim_{t \to 0} \sup_{p_i \in P_i} \{\|u_i - v_i\|\} = 0, \quad \text{for } i = 1, \dots, N.$$
Under the uniform convergence of $h_i(p_i, t)$, the following indicates that there is a corresponding convergence of their graphs with respect to the directed Hausdorff distance.
Proposition 5. $\lim_{t \to 0} d_H(\partial P_{\Phi_i(p_i, t)}, \partial P_i) = 0$, for $i = 1, \dots, N$.
Proof. For $p_i \in P_i$, taking $u_i := (p_{ix}, p_{iy}, \Phi_i(p_i, t))^T \in \partial P_{\Phi_i(p_i, t)}$, for $i = 1, \dots, N$, we have that
$$d_H(\partial P_{\Phi_i(p_i, t)}, \partial P_i) = \sup_{u_i \in \partial P_{\Phi_i(p_i, t)}} d(u_i, \partial P_i) = \sup_{u_i \in \partial P_{\Phi_i(p_i, t)}} \inf_{v_i' \in \partial P_i} \|u_i - v_i'\|.$$
Set $v_i := (p_{ix}, p_{iy}, \max_{j \in \{1, \dots, m_i\}} \{A_{ij} p_i + b_{ij}\})^T \in \partial P_i$, for $i = 1, \dots, N$. By Corollary 1,
$$\lim_{t \to 0} \sup_{p_i \in P_i} \{\|u_i - v_i\|\} = 0.$$