
Volume 2007, Article ID 19323, 12 pages

doi:10.1155/2007/19323

Research Article

Generalized Augmented Lagrangian Problem and Approximate Optimal Solutions in Nonlinear Programming

Zhe Chen, Kequan Zhao, and Yuke Chen

Received 19 March 2007; Accepted 29 August 2007

Recommended by Yeol Je Cho

We introduce some approximate optimal solutions and a generalized augmented Lagrangian in nonlinear programming, establish the dual function and dual problem based on the generalized augmented Lagrangian, obtain an approximate KKT necessary optimality condition of the generalized augmented Lagrangian dual problem, and prove that the approximate stationary points of the generalized augmented Lagrangian problem converge to those of the original problem. Our results improve and generalize some known results.

Copyright © 2007 Zhe Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

It is well known that the dual method and the penalty function method are popular methods for solving nonlinear optimization problems. Many constrained optimization problems can be reformulated as unconstrained optimization problems by the dual method or the penalty function method. Recently, a general class of nonconvex constrained optimization problems has been reformulated as unconstrained optimization problems via the augmented Lagrangian [1].

In [1], Rockafellar and Wets introduced an augmented Lagrangian for minimizing an extended real-valued function. Based on the augmented Lagrangian, a strong duality result without any convexity requirement on the primal problem was obtained under mild conditions. A necessary and sufficient condition for exact penalization based on the augmented Lagrangian function was also given in [1]. Chen et al. [2] and Huang and Yang [3] used augmented Lagrangian functions to construct set-valued dual functions and corresponding dual problems and obtained weak and strong duality results for multiobjective optimization problems. More recently, a generalized augmented Lagrangian was introduced in [4] by Huang and Yang, who relaxed the convexity of the augmented function. Many papers in the literature are devoted to investigating augmented Lagrangian problems: necessary and sufficient optimality conditions, duality theory, and saddle point theory, as well as exact penalization results relating the original constrained optimization problem to its unconstrained augmented Lagrangian problem, have been established under mild conditions (see, e.g., [5-9]). It is worth noting that most of these results are established under the assumption that the set of optimal solutions of the primal constrained optimization problem is nonempty.

However, many mathematical programming problems do not have an optimal solution; moreover, sometimes we do not need to find an exact optimal solution, since it is often very hard to find one even if it does exist. As a matter of fact, many numerical methods only yield approximate optimal solutions, so we have to resort to approximate solutions of nonlinear programming (see [10-14]). In [10], Liu used an exact penalty function to transform a multiobjective programming problem with inequality constraints into an unconstrained problem and derived the Kuhn-Tucker conditions for ε-Pareto optimality of the primal problem. In [14], Huang and Yang investigated the relationship between approximate optimal values of the nonlinear Lagrangian problem and those of the primal problem. As is well known, Ekeland's variational principle and penalty function methods are effective tools for studying approximate solutions of constrained optimization problems, and augmented Lagrangian functions have properties similar to those of penalty functions. Thus it is possible to apply them to the study of approximate solutions of constrained optimization problems.

In this paper, based on the results in [4, 10, 14], we investigate the possibility of obtaining various versions of approximate solutions to a constrained optimization problem by solving an unconstrained programming problem formulated by means of a generalized augmented Lagrangian function. As an application, an approximate KKT optimality condition is obtained for a class of approximate solutions to the generalized augmented Lagrangian problem. We prove that the approximate stationary points of the generalized augmented Lagrangian problem converge to those of the original problem. Our results generalize the corresponding results of Huang and Yang in [4, 6, 9] to the approximate case, which is more practical from a computational viewpoint.

The paper is organized as follows. In Section 2, we present some concepts, basic assumptions, and preliminary results. In Section 3, we obtain an approximate KKT optimality condition for the generalized augmented Lagrangian problem and prove that the approximate stationary points of the generalized augmented Lagrangian problem converge to those of the original problem.

2. Preliminaries

In this section, we present some definitions and Ekeland's variational principle. Consider the following constrained optimization problem:

$$\inf f(x) \quad \text{s.t.}\ x \in X,\quad g_j(x) = 0,\ j = 1,\dots,m, \tag{P}$$


where $X \subset \mathbb{R}^n$ is a nonempty closed set and $f : X \to \mathbb{R}$, $g_j : X \to \mathbb{R}$ are continuously differentiable functions. Let $S = \{x \in X : g_j(x) = 0,\ j = 1,\dots,m\}$; clearly $S$ is the set of feasible solutions. For any $\varepsilon > 0$, we denote by $S_\varepsilon$ the set of $\varepsilon$-feasible solutions, that is,

$$S_\varepsilon = \big\{x \in X : |g_j(x)| \le \varepsilon,\ j = 1,\dots,m\big\}, \tag{2.1}$$

and by $M_P$ the optimal value of problem (P).
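For orientation, here is a toy instance of (P) (our assumption, not from the paper) together with a membership test for $S_\varepsilon$; the quadratic objective, the single affine constraint, and the choice $X = \mathbb{R}^2$ are made only for illustration:

```python
import numpy as np

# Toy instance (our assumption, not from the paper):
# f(x) = x1^2 + x2^2, one equality constraint g1(x) = x1 + x2 + 1 = 0, X = R^2.
def f(x):
    return x[0]**2 + x[1]**2

def g(x):
    return np.array([x[0] + x[1] + 1.0])

def in_S_eps(x, eps):
    # Membership test for S_eps = {x in X : |g_j(x)| <= eps for all j}.
    return bool(np.all(np.abs(g(x)) <= eps))

x = np.array([-0.5, -0.49])                   # g(x) = 0.01: infeasible for (P)
print(in_S_eps(x, 0.0), in_S_eps(x, 0.05))    # False True: eps-feasible for eps = 0.05
```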

Let $u \in \mathbb{R}$; we define a function $F : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R} \cup \{+\infty\}$ by

$$F(x,u) = \begin{cases} f(x), & \text{if } |g_j(x)| \le u,\ j = 1,\dots,m,\\ +\infty, & \text{otherwise.} \end{cases} \tag{2.2}$$

So we have the perturbed problem

$$\inf F(x,u) \quad \text{s.t.}\ x \in \mathbb{R}^n. \tag{P*}$$

Define the optimal value function by $p(u) = \inf_{x \in \mathbb{R}^n} F(x,u)$; obviously $p(0)$ is the optimal value of problem (P).

Definition 2.1 [1]. (i) A function $g : \mathbb{R}^n \to \mathbb{R} \cup \{-\infty,+\infty\}$ is called level-bounded if, for any $\alpha \in \mathbb{R}$, the set $\{x \in \mathbb{R}^n : g(x) \le \alpha\}$ is bounded. (ii) A function $h : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R} \cup \{-\infty,+\infty\}$ with values $h(x,u)$ is called level-bounded in $x$ locally uniformly in $u$ if, for each $u \in \mathbb{R}^m$ and $\alpha \in \mathbb{R}$, there exist a neighborhood $V_u$ of $u$ and a bounded set $D \subset \mathbb{R}^n$ such that $\{x \in \mathbb{R}^n : h(x,v) \le \alpha\} \subset D$ for all $v \in V_u$.

Definition 2.2 [4]. A function $\sigma : \mathbb{R}^m \to \mathbb{R}_+ \cup \{+\infty\}$ is called a generalized augmented function if it is proper, lower semicontinuous (lsc), and level-bounded on $\mathbb{R}^m$, with $\operatorname{argmin}_y \sigma(y) = \{0\}$ and $\sigma(0) = 0$.

Define the dualizing parameterization function

$$f_p(x,u) = f(x) + \delta_{\{0_{\mathbb{R}^m}\}}\big(G(x)+u\big) + \delta_X(x), \quad x \in \mathbb{R}^n,\ u \in \mathbb{R}^m, \tag{2.3}$$

where $G(x) = (g_1(x),\dots,g_m(x))$ and $\delta_D$ is the indicator function of the set $D$, that is,

$$\delta_D(z) = \begin{cases} 0, & \text{if } z \in D,\\ +\infty, & \text{otherwise.} \end{cases} \tag{2.4}$$

So a class of generalized augmented Lagrangians of (P) with the dualizing parameterization function $f_p(x,u)$ defined by (2.3) can be expressed as

$$l_p(x,y,r) = \inf\big\{ f_p(x,u) - \langle y,u\rangle + r\sigma(u) : u \in \mathbb{R}^m \big\}, \quad x \in \mathbb{R}^n,\ y \in \mathbb{R}^m,\ r \ge 0. \tag{2.5}$$

When $\sigma(u) = \big[\sum_{j=1}^m |u_j|\big]^\alpha$ with $\alpha > 0$, the above abstract generalized augmented Lagrangian takes the form

$$l_p(x,y,r) = f(x) + \sum_{j=1}^m y_j g_j(x) + r\Big[\sum_{j=1}^m \big|g_j(x)\big|\Big]^\alpha. \tag{2.6}$$


In this paper, we will focus on problems involving the above generalized augmented Lagrangian.

The generalized augmented Lagrangian problem (Q) corresponding to $l_p$ is defined by

$$\psi_p(y,r) = \inf\big\{ l_p(x,y,r) : x \in \mathbb{R}^n \big\}, \quad y \in \mathbb{R}^m,\ r \ge 0. \tag{2.7}$$
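To make (2.6) and (2.7) concrete, the sketch below evaluates $l_p$ and estimates $\psi_p(y,r)$ for the toy instance above by calling a local solver. The use of `scipy.optimize.minimize`, the choice $\alpha = 2$ (which keeps $l_p$ smooth), and the toy $f$, $g$ are our assumptions, so the computed value is only an estimate of the infimum in (2.7):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2              # toy objective (assumption)
g = lambda x: np.array([x[0] + x[1] + 1.0])  # toy equality constraint (assumption)

def l_p(x, y, r, alpha=2.0):
    # Generalized augmented Lagrangian (2.6):
    # l_p(x,y,r) = f(x) + sum_j y_j g_j(x) + r * (sum_j |g_j(x)|)**alpha.
    gx = g(x)
    return f(x) + y @ gx + r * np.sum(np.abs(gx))**alpha

def psi_p(y, r, x0=np.zeros(2)):
    # Dual function (2.7): psi_p(y,r) = inf_x l_p(x,y,r).  A local solver only
    # approximates the infimum, so this is an estimate, not the exact value.
    res = minimize(lambda x: l_p(x, y, r), x0)
    return res.fun, res.x

val, xmin = psi_p(np.array([0.0]), r=10.0)
print(val, xmin)   # ~0.476 at ~(-0.476, -0.476)
```

In this run, weak duality is visible: the computed $\psi_p(0, 10) \approx 0.476$ stays below the toy problem's optimal value $M_P = 0.5$, attained at $(-0.5, -0.5)$.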

The following definitions of various approximate solutions are taken from Loridan [11].

Definition 2.3. Let $\varepsilon > 0$. The point $x^* \in S$ is said to be an $\varepsilon$-solution of (P) if

$$f(x^*) \le f(x) + \varepsilon \quad \forall x \in S. \tag{2.8}$$

Definition 2.4. Let $\varepsilon > 0$. The point $x^* \in S$ is said to be an $\varepsilon$-quasi solution of (P) if

$$f(x^*) \le f(x) + \varepsilon\|x - x^*\| \quad \forall x \in S. \tag{2.9}$$

Definition 2.5. Let $\varepsilon > 0$. The point $x^* \in S$ is said to be a regular $\varepsilon$-solution of (P) if it is both an $\varepsilon$-solution and an $\varepsilon$-quasi solution of (P).

Definition 2.6. Let $\varepsilon > 0$. The point $x^* \in S_\varepsilon$ is said to be an almost $\varepsilon$-solution of (P) if

$$f(x^*) \le f(x) + \varepsilon \quad \forall x \in S. \tag{2.10}$$

Definition 2.7. The point $x^* \in S_\varepsilon$ is said to be an almost regular $\varepsilon$-solution of (P) if it is both an almost $\varepsilon$-solution and a regular $\varepsilon$-solution of (P).

Proposition 2.8 (Ekeland's variational principle [13]). Let $f : \mathbb{R}^n \to \mathbb{R}$ be a proper lower semicontinuous function which is bounded below. Then for any $\varepsilon > 0$ there exists $x^* \in S$ such that

$$f(x^*) \le f(x) + \varepsilon \quad \forall x \in S,$$
$$f(x^*) < f(x) + \varepsilon\|x - x^*\| \quad \forall x \in S \setminus \{x^*\}. \tag{2.11}$$

3. Main results

In this section, we discuss some approximate optimality conditions for constrained optimization problems, obtain a necessary condition for an approximate solution of the generalized augmented Lagrangian problem (Q), and prove that the approximate stationary points of (Q) converge to those of the primal problem (P). We say that the linear independence constraint qualification (LICQ for short) for (P) holds at $x$ if $\{\nabla g_j(x) : j \in J_1(x)\}$ is linearly independent, where $J_1(x)$ denotes the active index set $\{j : g_j(x) = 0\}$. Suppose that $x \in \mathbb{R}^n$ is a local optimal solution of (P) and the LICQ for (P) holds at $x$. Then the first-order necessary optimality condition is that there exist $\mu_j \ge 0$, $j = 1,\dots,m$, such that

$$\nabla f(x) + \sum_{j=1}^m \mu_j \nabla g_j(x) = 0. \tag{3.1}$$


Proposition 3.1. Suppose $x_\varepsilon \in \mathbb{R}^n$ is an $\varepsilon$-quasi solution of (P) and the LICQ for (P) holds at $x_\varepsilon$. Then the first-order approximate necessary condition holds: there exist real numbers $\mu_j(\varepsilon) \ge 0$, $j = 1,\dots,m$, such that

$$\Big\|\nabla f(x_\varepsilon) + \sum_{j=1}^m \mu_j(\varepsilon)\nabla g_j(x_\varepsilon)\Big\| \le \varepsilon. \tag{3.2}$$
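Before the proof, note that condition (3.2) is straightforward to test numerically: for a given point, the best multipliers $\mu_j(\varepsilon) \ge 0$ solve a nonnegative least-squares problem. A minimal sketch, where the toy data are our assumptions and `scipy.optimize.nnls` is the assumed solver:

```python
import numpy as np
from scipy.optimize import nnls

def kkt_residual(grad_f, jac_g):
    # Smallest value of || grad_f + sum_j mu_j * grad_g_j || over mu >= 0,
    # posed as nonnegative least squares: min_{mu>=0} || jac_g.T @ mu + grad_f ||.
    # A point satisfies (3.2) exactly when this residual is <= eps.
    mu, res = nnls(jac_g.T, -grad_f)
    return res, mu

# Toy instance (assumption): f(x) = x1^2 + x2^2, g1(x) = x1 + x2 + 1.
x = np.array([-0.6, -0.4])
grad_f = 2.0 * x                     # gradient of f at x
jac_g = np.array([[1.0, 1.0]])       # one row per constraint gradient
res, mu = kkt_residual(grad_f, jac_g)
print(res, mu)   # ~0.283 with mu ~ 1.0: x satisfies (3.2) only for eps >= ~0.283
```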

Proof. From the definition of an $\varepsilon$-quasi solution, we have

$$f(x_\varepsilon) \le f(x) + \varepsilon\|x - x_\varepsilon\| \quad \forall x \in S. \tag{3.3}$$

We conclude that $x_\varepsilon$ is a local optimal solution of the constrained optimization problem

$$\inf\ f(x) + \varepsilon\|x - x_\varepsilon\| \quad \text{s.t.}\ x \in S. \tag{P$'$}$$

The objective function $f(x) + \varepsilon\|x - x_\varepsilon\|$ is only locally Lipschitz; thus we apply Proposition 2.8 and obtain the KKT necessary condition of (P$'$):

$$\nabla f(x_\varepsilon) + \varepsilon\xi + \sum_{j=1}^m \mu_j(\varepsilon)\nabla g_j(x_\varepsilon) = 0, \quad \|\xi\| \le 1. \tag{3.4}$$

It follows that

$$\Big\|\nabla f(x_\varepsilon) + \sum_{j=1}^m \mu_j(\varepsilon)\nabla g_j(x_\varepsilon)\Big\| \le \varepsilon. \tag{3.5}$$

□

It is easy to see that the generalized augmented Lagrangian function is nonsmooth; moreover, it is not locally Lipschitz when $0 < \alpha < 1$. Thus it is necessary to divide the generalized augmented Lagrangian problems into two cases:

$$(1)\ \alpha \ge 1; \qquad (2)\ 0 < \alpha < 1. \tag{3.6}$$

First let us consider case (1), in which the generalized augmented Lagrangian function is nonsmooth but locally Lipschitz. We have the following conclusion.


Proposition 3.2. For any $\varepsilon > 0$, suppose $x_\varepsilon \in \mathbb{R}^n$ is an $\varepsilon$-quasi solution of the generalized augmented Lagrangian problem (Q). Then

$$\Big\|\nabla f(x_\varepsilon) + \sum_{j=1}^m \nabla g_j(x_\varepsilon)\Big\{ y_j + \theta r \alpha\Big[\sum_{j=1}^m \big|g_j(x_\varepsilon)\big|\Big]^{\alpha-1}\Big\}\Big\| \le \varepsilon, \tag{3.7}$$

where $\theta \in [-1,1]$.

Proof. Since $x_\varepsilon \in \mathbb{R}^n$ is an $\varepsilon$-quasi solution of the generalized augmented Lagrangian problem (Q), we see that

$$f(x_\varepsilon) + \sum_{j=1}^m y_j g_j(x_\varepsilon) + r\Big[\sum_{j=1}^m \big|g_j(x_\varepsilon)\big|\Big]^\alpha \le f(x) + \sum_{j=1}^m y_j g_j(x) + r\Big[\sum_{j=1}^m \big|g_j(x)\big|\Big]^\alpha + \varepsilon\|x - x_\varepsilon\|; \tag{3.8}$$

thus $x_\varepsilon$ is a local optimal solution of the following optimization problem (P**):

$$\inf\Big\{ f(x) + \sum_{j=1}^m y_j g_j(x) + r\Big[\sum_{j=1}^m \big|g_j(x)\big|\Big]^\alpha + \varepsilon\|x - x_\varepsilon\| : x \in \mathbb{R}^n \Big\}. \tag{3.9}$$

The objective function of (P**) is only locally Lipschitz; thus we apply the corollary of Proposition 2.4.3 in [15] and Example 2.1.2 in [15] and obtain the approximate KKT necessary condition of (P**), which is exactly (3.7). □

Theorem 3.3 (convergence analysis). Suppose $\{y^k\} \subset \mathbb{R}^m$ is bounded, $0 < r_k \to +\infty$ as $k \to +\infty$, and $x^k_\varepsilon \in \mathbb{R}^n$ is generated by some method for solving the following problem (Q$_k$):

$$\inf\big\{ l_p(x, y^k, r_k) : x \in \mathbb{R}^n \big\}, \quad y^k \in \mathbb{R}^m,\ r_k \ge 0. \tag{3.10}$$

Assume that there exist $n, N \in \mathbb{R}$ such that $f(x^k_\varepsilon) \ge n$ and $l_p(x^k_\varepsilon, y^k, r_k) \le N$ for any $k$. Then every limit point $x^*_\varepsilon$ of $\{x^k_\varepsilon\}$ is feasible for the primal problem (P). Assume further that each $x^k_\varepsilon$ satisfies the approximate first-order necessary optimality condition stated in Proposition 3.2 and that the LICQ of (P) holds at $x^*_\varepsilon$. Then $x^*_\varepsilon$ satisfies the approximate first-order necessary optimality condition of (P).

Proof. Without loss of generality, suppose that $x^k_\varepsilon \to x^*_\varepsilon$. Since $l_p(x^k_\varepsilon, y^k, r_k) \le N$ for any $k$, we see that

$$f(x^k_\varepsilon) + \sum_{j=1}^m y^k_j g_j(x^k_\varepsilon) + r_k\Big[\sum_{j=1}^m \big|g_j(x^k_\varepsilon)\big|\Big]^\alpha \le N. \tag{3.11}$$

Moreover, since $f(x^k_\varepsilon) \ge n$ and $\{y^k\} \subset \mathbb{R}^m$ is bounded, there exists $N_1 \in \mathbb{R}$ such that

$$r_k\Big[\sum_{j=1}^m \big|g_j(x^k_\varepsilon)\big|\Big]^\alpha \le N_1, \qquad \Big[\sum_{j=1}^m \big|g_j(x^k_\varepsilon)\big|\Big]^\alpha \le \frac{N_1}{r_k}. \tag{3.12}$$

Since $r_k \to +\infty$, it follows that $g_j(x^*_\varepsilon) = 0$, $j = 1,\dots,m$; therefore $x^*_\varepsilon$ is a feasible solution of (P).

Letting $\nu^k_j = y^k_j + \theta r_k \alpha\big[\sum_{j=1}^m |g_j(x^k_\varepsilon)|\big]^{\alpha-1}$, $j = 1,\dots,m$, with $\theta \in [-1,1]$, inequality (3.7) can be written as

$$\Big\|\nabla f(x^k_\varepsilon) + \sum_{j=1}^m \nu^k_j \nabla g_j(x^k_\varepsilon)\Big\| \le \varepsilon. \tag{3.13}$$

Now we prove by contradiction that the sequence $\sum_{j=1}^m |\nu^k_j|$ is bounded as $k \to +\infty$. Otherwise, without loss of generality, we may assume that $\sum_{j=1}^m |\nu^k_j| \to +\infty$ and that

$$\lim_{k \to +\infty} \frac{\nu^k_j}{\sum_{j=1}^m |\nu^k_j|} = \nu^*_j, \quad j = 1,\dots,m. \tag{3.14}$$

Dividing (3.13) by $\sum_{j=1}^m |\nu^k_j|$ and passing to the limit, we derive that

$$\sum_{j=1}^m \nu^*_j \nabla g_j(x^*_\varepsilon) = 0. \tag{3.15}$$

Since $\sum_{j=1}^m |\nu^*_j| = 1$, not all $\nu^*_j$ vanish, which contradicts the LICQ of (P) at $x^*_\varepsilon$. Hence $\sum_{j=1}^m |\nu^k_j|$ is bounded, and without loss of generality we may assume that

$$\nu^k_j \longrightarrow \nu_j, \quad j = 1,\dots,m. \tag{3.16}$$

Thus, taking the limit in (3.13) and applying (3.16), we obtain the approximate first-order necessary condition of (P). □
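A minimal numerical sketch of the scheme in Theorem 3.3 (the toy instance and the choice of local solver are our assumptions, not the paper's): bounded $y^k$, penalty parameters $r_k \to +\infty$, and each $x^k_\varepsilon$ obtained by warm-started local minimization of $l_p(\cdot, y^k, r_k)$:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2              # toy objective (assumption)
g = lambda x: np.array([x[0] + x[1] + 1.0])  # toy equality constraint (assumption)

def l_p(x, y, r, alpha=2.0):
    # generalized augmented Lagrangian (2.6); alpha = 2 keeps it smooth
    gx = g(x)
    return f(x) + y @ gx + r * np.sum(np.abs(gx))**alpha

# Scheme of Theorem 3.3: bounded y^k, r_k -> +inf; each x^k approximately
# solves (Q_k), here by a warm-started local solver (our choice of method).
x, y = np.zeros(2), np.array([0.0])
for k in range(5):
    r_k = 10.0**k
    x = minimize(lambda z: l_p(z, y, r_k), x).x
    print(k, r_k, x, g(x))   # constraint violation g(x^k) shrinks as r_k grows
```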

Next let us consider the case $0 < \alpha < 1$. It is clear that the generalized augmented Lagrangian function $l_p(x,y,r)$ is then a nonsmooth function that is not locally Lipschitz. However, we have not found a nonsmooth-analysis tool suitable for our convergence analysis in this second case. Fortunately, we may smooth $l_p(x,y,r)$ by approximation.

Definition 3.4. For any $0 < \varepsilon_k \to 0$ as $k \to +\infty$, the following function is called an approximate generalized augmented Lagrangian:

$$l_p(x,y,r,\varepsilon_k) = f(x) + \sum_{j=1}^m y_j g_j(x) + r\Big[\sum_{j=1}^m \sqrt{g_j(x)^2 + \varepsilon_k^2}\Big]^\alpha. \tag{3.17}$$

It is clear that the approximate generalized augmented Lagrangian is a smooth function.
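A small sketch of the smoothing in (3.17), where the toy data and the choice $\alpha = 1/2$ are our assumptions; it also checks numerically the error bound established in Lemma 3.5 below:

```python
import numpy as np

# Toy data (assumption) for the smoothing in (3.17) with 0 < alpha < 1.
f = lambda x: x[0]**2 + x[1]**2
g = lambda x: np.array([x[0] + x[1] + 1.0])
alpha, r, m = 0.5, 2.0, 1
x, y = np.array([0.2, 0.1]), np.array([0.5])

def l_p_smooth(x, y, r, eps_k):
    # (3.17): |g_j| is replaced by sqrt(g_j^2 + eps_k^2), removing the kink at 0.
    gx = g(x)
    return f(x) + y @ gx + r * np.sum(np.sqrt(gx**2 + eps_k**2))**alpha

l_exact = f(x) + y @ g(x) + r * np.sum(np.abs(g(x)))**alpha   # l_p from (2.6)
for eps_k in [1e-1, 1e-2, 1e-3]:
    gap = abs(l_p_smooth(x, y, r, eps_k) - l_exact)
    print(eps_k, gap, gap <= r * m * eps_k)   # bound (3.19) holds in this run
```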


So the corresponding approximate generalized augmented Lagrangian problem (Q$_\varepsilon$) can be expressed as

$$\inf\big\{ l_p(x,y,r,\varepsilon_k) : x \in \mathbb{R}^n \big\}, \quad y \in \mathbb{R}^m,\ r \ge 0. \tag{3.18}$$

For this approximate generalized augmented Lagrangian function, it is necessary to estimate the error between the generalized augmented Lagrangian function and its approximation. The following lemma provides this error estimate.

Lemma 3.5. For the generalized augmented Lagrangian function and the approximate generalized augmented Lagrangian function, the following estimate holds:

$$\big| l_p(x,y,r,\varepsilon_k) - l_p(x,y,r) \big| \le r m \varepsilon_k, \tag{3.19}$$

where $\varepsilon_k \to 0$ as $k \to +\infty$.

Proof. From (2.6) and (3.17), we see that

$$l_p(x,y,r,\varepsilon_k) - l_p(x,y,r) = r\Big\{\Big[\sum_{j=1}^m \sqrt{g_j(x)^2+\varepsilon_k^2}\Big]^\alpha - \Big[\sum_{j=1}^m \sqrt{g_j(x)^2}\Big]^\alpha\Big\}. \tag{3.20}$$

Since $\sqrt{g_j(x)^2+\varepsilon_k^2} - \sqrt{g_j(x)^2} \le \varepsilon_k$, we have

$$\sum_{j=1}^m \sqrt{g_j(x)^2+\varepsilon_k^2} - \sum_{j=1}^m \sqrt{g_j(x)^2} \le m\varepsilon_k. \tag{3.21}$$

Letting $M = \sum_{j=1}^m \sqrt{g_j(x)^2}$, we can derive that

$$\Big[\sum_{j=1}^m \sqrt{g_j(x)^2+\varepsilon_k^2}\Big]^\alpha - \Big[\sum_{j=1}^m \sqrt{g_j(x)^2}\Big]^\alpha \le \big(M + m\varepsilon_k\big)^\alpha - M^\alpha. \tag{3.22}$$

Since $0 < \alpha < 1$, when $M + m\varepsilon_k \ge 1$ we see that

$$\big(M + m\varepsilon_k\big)^\alpha - M^\alpha \le M + m\varepsilon_k - M = m\varepsilon_k, \tag{3.23}$$

and when $M + m\varepsilon_k < 1$ we have

$$\big(M + m\varepsilon_k\big)^\alpha - M^\alpha \le \xi_k, \quad \xi_k \in (0,1). \tag{3.24}$$

Since $\varepsilon_k \to 0$ as $k \to +\infty$, we have $\xi_k \to 0$; without loss of generality, we may take $\xi_k = m\varepsilon_k$ for $k$ sufficiently large. Thus we obtain

$$\big| l_p(x,y,r,\varepsilon_k) - l_p(x,y,r) \big| \le r m \varepsilon_k. \tag{3.25}$$

□

Next we discuss approximate optimality conditions for the approximate generalized augmented Lagrangian problem (Q$_\varepsilon$).

Proposition 3.6 (approximate optimality condition). Assume that $x_\varepsilon \in \mathbb{R}^n$ is an $\varepsilon$-quasi solution of (Q$_\varepsilon$). Then

$$\Big\|\nabla f(x_\varepsilon) + \sum_{j=1}^m \Big\{ y_j + r\alpha\Big[\sum_{j=1}^m \sqrt{g_j(x_\varepsilon)^2+\varepsilon_k^2}\Big]^{\alpha-1} \frac{g_j(x_\varepsilon)}{\sqrt{g_j(x_\varepsilon)^2+\varepsilon_k^2}}\Big\}\nabla g_j(x_\varepsilon)\Big\| \le \varepsilon, \tag{3.26}$$

where $\varepsilon_k \to 0$ as $k \to +\infty$.

Proof. From the definition of an $\varepsilon$-quasi solution, we have

$$l_p(x_\varepsilon, y, r, \varepsilon_k) \le l_p(x, y, r, \varepsilon_k) + \varepsilon\|x - x_\varepsilon\|. \tag{3.27}$$

From (3.17), we see that

$$f(x_\varepsilon) + \sum_{j=1}^m y_j g_j(x_\varepsilon) + r\Big[\sum_{j=1}^m \sqrt{g_j(x_\varepsilon)^2+\varepsilon_k^2}\Big]^\alpha \le f(x) + \sum_{j=1}^m y_j g_j(x) + r\Big[\sum_{j=1}^m \sqrt{g_j(x)^2+\varepsilon_k^2}\Big]^\alpha + \varepsilon\|x - x_\varepsilon\|; \tag{3.28}$$

it is clear that $x_\varepsilon$ is a local optimal solution of the following optimization problem:

$$\inf_{x \in \mathbb{R}^n}\Big\{ f(x) + \sum_{j=1}^m y_j g_j(x) + r\Big[\sum_{j=1}^m \sqrt{g_j(x)^2+\varepsilon_k^2}\Big]^\alpha + \varepsilon\|x - x_\varepsilon\| \Big\}. \tag{3.29}$$

The objective function of this problem is locally Lipschitz; thus we apply the corollary of Proposition 2.4.3 in [15] and Example 2.1.3 in [15] and obtain the KKT necessary condition

$$\nabla f(x_\varepsilon) + \sum_{j=1}^m \Big\{ y_j + r\alpha\Big[\sum_{j=1}^m \sqrt{g_j(x_\varepsilon)^2+\varepsilon_k^2}\Big]^{\alpha-1} \frac{g_j(x_\varepsilon)}{\sqrt{g_j(x_\varepsilon)^2+\varepsilon_k^2}}\Big\}\nabla g_j(x_\varepsilon) + \varepsilon\xi = 0, \tag{3.30}$$

where $\|\xi\| \le 1$; thus we have

$$\Big\|\nabla f(x_\varepsilon) + \sum_{j=1}^m \Big\{ y_j + r\alpha\Big[\sum_{j=1}^m \sqrt{g_j(x_\varepsilon)^2+\varepsilon_k^2}\Big]^{\alpha-1} \frac{g_j(x_\varepsilon)}{\sqrt{g_j(x_\varepsilon)^2+\varepsilon_k^2}}\Big\}\nabla g_j(x_\varepsilon)\Big\| \le \varepsilon. \tag{3.31}$$

□


Theorem 3.7 (convergence analysis). Assume that $\{y^k\} \subset \mathbb{R}^m$ is bounded, $0 < r_k \to +\infty$ as $k \to +\infty$, and $x^k_\varepsilon \in \mathbb{R}^n$ is generated by some method for solving the following problem (Q$_k$):

$$\inf\big\{ l_p(x, y^k, r_k, \varepsilon_k) : x \in \mathbb{R}^n \big\}, \quad y^k \in \mathbb{R}^m,\ r_k \ge 0. \tag{3.32}$$

Suppose that there exist $n, N \in \mathbb{R}$ such that $f(x^k_\varepsilon) \ge n$ and $l_p(x^k_\varepsilon, y^k, r_k, \varepsilon_k) \le N$ for any $k$. Then every limit point $x_\varepsilon$ of $\{x^k_\varepsilon\}$ is feasible for the primal problem (P). Assume further that each $x^k_\varepsilon$ satisfies the approximate first-order necessary optimality condition stated in Proposition 3.6 and that the LICQ of (P) holds at $x_\varepsilon$. Then $x_\varepsilon$ satisfies the approximate first-order necessary optimality condition of (P).

Proof. Without loss of generality, assume that $x^k_\varepsilon \to x_\varepsilon$. From $l_p(x^k_\varepsilon, y^k, r_k, \varepsilon_k) \le N$, we have

$$f(x^k_\varepsilon) + \sum_{j=1}^m y^k_j g_j(x^k_\varepsilon) + r_k\Big[\sum_{j=1}^m \sqrt{g_j(x^k_\varepsilon)^2+\varepsilon_k^2}\Big]^\alpha \le N. \tag{3.33}$$

Since $f(x^k_\varepsilon) \ge n$ and $\{y^k\} \subset \mathbb{R}^m$ is bounded, there exists $N_1 \in \mathbb{R}$ such that

$$r_k\Big[\sum_{j=1}^m \sqrt{g_j(x^k_\varepsilon)^2+\varepsilon_k^2}\Big]^\alpha \le N_1. \tag{3.34}$$

Letting $k \to +\infty$, we obtain $g_j(x_\varepsilon) = 0$, $j = 1,\dots,m$, so $x_\varepsilon$ is a feasible solution of (P).

Since each $x^k_\varepsilon$ satisfies the approximate optimality condition stated in Proposition 3.6, let

$$\mu^k_j = y^k_j + r_k\alpha\Big[\sum_{j=1}^m \sqrt{g_j(x^k_\varepsilon)^2+\varepsilon_k^2}\Big]^{\alpha-1} \frac{g_j(x^k_\varepsilon)}{\sqrt{g_j(x^k_\varepsilon)^2+\varepsilon_k^2}}. \tag{3.35}$$

From (3.26) we have

$$\Big\|\nabla f(x^k_\varepsilon) + \sum_{j=1}^m \mu^k_j \nabla g_j(x^k_\varepsilon)\Big\| \le \varepsilon. \tag{3.36}$$

Now we prove by contradiction that the sequence $\sum_{j=1}^m |\mu^k_j|$ is bounded as $k \to +\infty$. Otherwise, without loss of generality, we may assume that $\sum_{j=1}^m |\mu^k_j| \to +\infty$ and that

$$\lim_{k \to +\infty} \frac{\mu^k_j}{\sum_{j=1}^m |\mu^k_j|} = \mu^*_j, \quad j = 1,\dots,m. \tag{3.37}$$

Dividing (3.36) by $\sum_{j=1}^m |\mu^k_j|$ and letting $k \to +\infty$, we obtain

$$\sum_{j=1}^m \mu^*_j \nabla g_j(x_\varepsilon) = 0. \tag{3.38}$$

Since $\sum_{j=1}^m |\mu^*_j| = 1$, not all $\mu^*_j$ vanish, which contradicts the LICQ of (P) at $x_\varepsilon$. Hence $\sum_{j=1}^m |\mu^k_j|$ is bounded, and without loss of generality we may assume that $\mu^k_j \to \mu_j$, $j = 1,\dots,m$. Taking the limit in (3.36), we obtain the approximate first-order necessary condition of (P). □
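A sketch of the scheme in Theorem 3.7 for the case $0 < \alpha < 1$ (the toy instance and the local solver are our assumptions): $r_k \to +\infty$ and $\varepsilon_k \to 0$ jointly, with each $x^k_\varepsilon$ computed from the smoothed problem (3.32):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2              # toy objective (assumption)
g = lambda x: np.array([x[0] + x[1] + 1.0])  # toy equality constraint (assumption)

def l_p_smooth(x, y, r, eps_k, alpha=0.5):
    # smoothed generalized augmented Lagrangian (3.17), case 0 < alpha < 1
    gx = g(x)
    return f(x) + y @ gx + r * np.sum(np.sqrt(gx**2 + eps_k**2))**alpha

# Scheme of Theorem 3.7: r_k -> +inf and eps_k -> 0 jointly; each x^k comes
# from a warm-started local solve of the smoothed problem (3.32).
x, y = np.zeros(2), np.array([0.0])
for k in range(1, 6):
    r_k, eps_k = 10.0**k, 10.0**(-k)
    x = minimize(lambda z: l_p_smooth(z, y, r_k, eps_k), x).x
    print(k, x, g(x))   # iterates approach the feasible minimizer (-0.5, -0.5)
```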
