David G. Luenberger, Yinyu Ye - Linear and Nonlinear Programming


8.7 Applications of the Theory

Application 2 (Penalty methods). Let us briefly consider a problem with a single constraint:

minimize f(x)
subject to h(x) = 0.

One method for approaching this problem is to convert it (at least approximately) to the unconstrained problem

minimize f(x) + (1/2)μ h(x)^2,    (42)

where μ is a (large) penalty coefficient. Because of the penalty, the solution to (42) will tend to have a small h(x). Problem (42) can be solved as an unconstrained problem by the method of steepest descent. How will this behave?

For simplicity let us consider the case where f is quadratic and h is linear. Specifically, we consider the problem

minimize (1/2)x^T Qx − b^T x
subject to c^T x = 0.

The objective of the associated penalty problem is (1/2)x^T Qx + (1/2)μ x^T cc^T x − b^T x.

The quadratic form associated with this objective is defined by the matrix Q + μcc^T and, accordingly, the convergence rate of steepest descent will be governed by the condition number of this matrix. This matrix is the original matrix Q with a large rank-one matrix added. It should be fairly clear† that this addition will cause one eigenvalue of the matrix to be large (on the order of μ). Thus the condition number is roughly proportional to μ. Therefore, as one increases μ in order to get an accurate solution to the original constrained problem, the rate of convergence becomes extremely poor. We conclude that the penalty function method used in this simplistic way with steepest descent will not be very effective. (Penalty functions, and how to minimize them more rapidly, are considered in detail in Chapter 11.)
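The growth of the condition number with μ is easy to check numerically. The following sketch (with hypothetical Q and c, not data from the text) forms the penalty Hessian Q + μcc^T for increasing μ and prints its condition number, which grows roughly in proportion to μ:

```python
import numpy as np

# Hypothetical problem data: a well-conditioned Q and one linear constraint c^T x = 0.
Q = np.diag([1.0, 2.0, 4.0])
c = np.array([1.0, 1.0, 1.0])

for mu in [1e2, 1e4, 1e6]:
    H = Q + mu * np.outer(c, c)   # Hessian of the penalty objective
    print(f"mu = {mu:.0e}   cond = {np.linalg.cond(H):.2e}")
```

Since the steepest descent convergence ratio is ((A − a)/(A + a))^2 in terms of the extreme eigenvalues a and A, a condition number proportional to μ drives this ratio toward 1.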

Scaling

The performance of the method of steepest descent is dependent on the particular choice of variables x used to define the problem. A new choice may substantially alter the convergence characteristics.

Suppose that T is an invertible n × n matrix. We can then represent points in E^n either by the standard vector x or by y, where Ty = x. The problem of finding

†See the Interlocking Eigenvalues Lemma in Section 10.6 for a proof that only one eigenvalue becomes large.


x to minimize f(x) is equivalent to that of finding y to minimize h(y) = f(Ty). Using y as the underlying set of variables, we then have

∇h(y) = ∇f(Ty)T,

where ∇f is the gradient of f with respect to x. Thus, using steepest descent, the direction of search will be

Δy = −T^T ∇f(Ty)^T,

which in the original variables is

Δx = −TT^T ∇f(x)^T.    (41)

Thus we see that the change of variables changes the direction of search.

The rate of convergence of steepest descent with respect to y will be determined by the eigenvalues of the Hessian of the objective, taken with respect to y. That Hessian is

∇^2 h(y) ≡ H(y) = T^T F(Ty)T.

Thus, if x* = Ty* is the solution point, the rate of convergence is governed by the matrix

H(y*) = T^T F(x*)T.

Very little can be said in comparison of the convergence ratio associated with H and that of F. If T is an orthonormal matrix, corresponding to y being defined from x by a simple rotation of coordinates, then T^T T = I, and we see from (41) that the directions remain unchanged and the eigenvalues of H are the same as those of F.

In general, before attacking a problem with steepest descent, it is desirable, if it is feasible, to introduce a change of variables that leads to a more favorable eigenvalue structure. Usually the only kind of transformation that is at all practical is one having T equal to a diagonal matrix, corresponding to the introduction of scale factors on each of the variables. One should strive, in doing this, to make the second derivatives with respect to each variable roughly the same. Although appropriate scaling can potentially lead to substantial payoff in terms of enhanced convergence rate, we largely ignore this possibility in our discussions of steepest descent. However, see the next application for a situation that frequently occurs.
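As a sketch of this advice (the badly scaled quadratic below is hypothetical, not an example from the text), one can compare steepest descent on the original variables against the same method after the diagonal substitution Ty = x, with T chosen so that the second derivatives with respect to y are equal:

```python
import numpy as np

def steepest_descent(Q, b, x0, iters=100):
    """Exact-line-search steepest descent on f(x) = 0.5 x'Qx - b'x."""
    x = x0.copy()
    for _ in range(iters):
        g = Q @ x - b
        if g @ g == 0:
            break
        alpha = (g @ g) / (g @ Q @ g)   # exact minimizing step length
        x = x - alpha * g
    return x

# Hypothetical badly scaled problem: second derivatives 1 and 100.
Q = np.diag([1.0, 100.0])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(Q, b)

x_raw = steepest_descent(Q, b, np.zeros(2), iters=20)

# Diagonal scaling T = diag(Q)^(-1/2): in the y-variables the Hessian T'QT = I.
T = np.diag(1.0 / np.sqrt(np.diag(Q)))
y = steepest_descent(T @ Q @ T, T @ b, np.zeros(2), iters=20)
x_scaled = T @ y

print(np.linalg.norm(x_raw - x_star), np.linalg.norm(x_scaled - x_star))
```

Because Q is diagonal here, the scaled Hessian is exactly the identity and the scaled iteration converges in one step, while the unscaled iteration is still far from the solution after 20 steps.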

Application 3 (Program design). In applied work it is extremely rare that one solves just a single optimization problem of a given type. It is far more usual that once a problem is coded for computer solution, it will be solved repeatedly for various parameter values. Thus, for example, if one is seeking to find the optimal



production plan (as in Example 1 of Section 7.2), the problem will be solved for the different values of the input prices. Similarly, other optimization problems will be solved under various assumptions and constraint values. It is for this reason that speed of convergence and convergence analysis is so important. One wants a program that can be used efficiently. In many such situations, the effort devoted to proper scaling repays itself, not with the first execution, but in the long run.

As a simple illustration consider the problem of minimizing the function

Table 8.2 Solution to Scaling Application


The reason for this poor performance is revealed by examining the Hessian matrix of the objective.

8.8 Newton’s Method

Newton’s method is based on minimizing, at each step, the quadratic approximation of f; it takes the form of the iteration

x_{k+1} = x_k − F(x_k)^{-1} ∇f(x_k)^T.

In view of the second-order sufficiency conditions for a minimum point, we assume that at a relative minimum point, x*, the Hessian matrix, F(x*), is positive definite. We can then argue that if f has continuous second partial derivatives, F(x) is positive definite near x* and hence the method is well defined near the solution.

Order Two Convergence

Newton’s method has very desirable properties if started sufficiently close to the solution point. Its order of convergence is two.

Theorem. (Newton’s method) Let f ∈ C^3 on E^n, and assume that at the local minimum point x*, the Hessian F(x*) is positive definite. Then if started sufficiently close to x*, the points generated by Newton’s method converge to x*. The order of convergence is at least two.



Proof. There are ρ > 0, β1 > 0, β2 > 0 such that for all x with |x − x*| < ρ, there holds |F(x)^{-1}| < β1 (see Appendix A for the definition of the norm of a matrix) and |∇f(x*)^T − ∇f(x)^T − F(x)(x* − x)| ≤ β2|x* − x|^2. Now suppose x_k is selected with β1β2|x_k − x*| < 1/2 and |x_k − x*| < ρ. Then, since ∇f(x*)^T = 0,

|x_{k+1} − x*| = |x_k − x* − F(x_k)^{-1}∇f(x_k)^T|
= |F(x_k)^{-1}[∇f(x*)^T − ∇f(x_k)^T − F(x_k)(x* − x_k)]|
≤ β1β2|x_k − x*|^2 ≤ (1/2)|x_k − x*| < |x_k − x*|.

The final inequality shows that the new point is closer to x* than the old point, and hence all conditions apply again to x_{k+1}. The previous inequality establishes that the convergence is second order.
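The quadratic error reduction asserted by the theorem can be illustrated numerically. A minimal sketch, using the hypothetical test function f(x) = Σ(exp(x_i) − x_i), whose minimizer is x* = 0 with positive definite Hessian diag(exp(x_i)):

```python
import numpy as np

def newton(x, iters=6):
    """Pure Newton iteration on f(x) = sum(exp(x_i) - x_i); records |x_k - x*|."""
    errs = []
    for _ in range(iters):
        g = np.exp(x) - 1.0             # gradient
        H = np.diag(np.exp(x))          # Hessian (positive definite)
        x = x - np.linalg.solve(H, g)   # full Newton step
        errs.append(np.linalg.norm(x))  # x* = 0, so the error is just |x|
    return x, errs

x, errs = newton(np.array([0.5, -0.3]))
for e in errs:
    print(f"{e:.3e}")   # errors shrink roughly like e_{k+1} = O(e_k^2)
```

Starting reasonably close to x*, the printed errors drop from about 1e-1 to machine precision within a handful of iterations, the hallmark of order-two convergence.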

Modifications

Although Newton’s method is very attractive in terms of its convergence properties near the solution, it requires modification before it can be used at points that are remote from the solution. The general nature of these modifications is discussed in the remainder of this section.

1. Damping. The first modification is that usually a search parameter α is introduced so that the method takes the form

x_{k+1} = x_k − α_k F(x_k)^{-1} ∇f(x_k)^T,

where α_k is selected to minimize f. Near the solution we expect, on the basis of how Newton’s method was derived, that α_k ≃ 1. Introducing the parameter for general points, however, guards against the possibility that the objective might increase with α_k = 1, due to nonquadratic terms in the objective function.

2. Positive definiteness. A basic consideration for Newton’s method can be seen most clearly by a brief examination of the general class of algorithms

x_{k+1} = x_k − αM_k g_k,    (44)

where M_k is an n × n matrix, α is a positive search parameter, and g_k = ∇f(x_k)^T. We note that both steepest descent (M_k = I) and Newton’s method (M_k = F(x_k)^{-1}) belong to this class. The direction vector d_k = −M_k g_k obtained in this way is a direction of descent if for small α the value of f decreases as α increases from zero. For small α we can say

f(x_{k+1}) = f(x_k) + ∇f(x_k)(x_{k+1} − x_k) + O(|x_{k+1} − x_k|^2).


Employing (44) this can be written as

f(x_{k+1}) = f(x_k) − αg_k^T M_k g_k + O(α^2).

As α → 0, the second term on the right dominates the third. Hence if one is to guarantee a decrease in f for small α, we must have g_k^T M_k g_k > 0. The simplest way to insure this is to require that M_k be positive definite.

The best circumstance is that where F(x) is itself positive definite throughout the search region. The objective functions of many important optimization problems have this property, including for example interior-point approaches to linear programming using the logarithm as a barrier function. Indeed, it can be argued that convexity is an inherent property of the majority of well-formulated optimization problems.

Therefore, assume that the Hessian matrix F(x) is positive definite throughout the search region and that f has continuous third derivatives. At a given x_k define the symmetric matrix T = F(x_k)^{-1/2}. As in Section 8.7 introduce the change of variable Ty = x. Then according to (41) a steepest descent direction with respect to y is equivalent to a direction with respect to x of d = −TT^T g(x_k), where g(x_k) is the gradient of f with respect to x at x_k. Thus, d = −F(x_k)^{-1} g(x_k). In other words, a steepest descent direction in y is equivalent to a Newton direction in x.

We can turn this relation around to analyze Newton steps in x as equivalent to gradient steps in y. We know that convergence properties in y depend on the bounds on the Hessian matrix given by (42): that is, on lower and upper bounds on the eigenvalues of F(x_0)^{-1/2} F(x_1) F(x_0)^{-1/2}, where x_0 and x_1 are arbitrary points in the local search region. These bounds depend, in turn, on the bounds of the third-order derivatives of f. It is clear, however, by continuity of F(x) and its derivatives, that the rate becomes very fast near the solution, becoming superlinear, and in fact, as we know, quadratic.

3. Backtracking. The backtracking method of line search, using α = 1 as the initial guess, is an attractive procedure for use with Newton’s method. Using this method the overall progress of Newton’s method divides naturally into two phases: first a damping phase where backtracking may require α < 1, and second a quadratic phase where α = 1 satisfies the backtracking criterion at every step. The damping phase was discussed above.

Let us now examine the situation when close to the solution. We assume that all derivatives of f through the third are continuous and uniformly bounded. We also



assume that in the region close to the solution, F(x) is positive definite with a > 0 and A > 0 being, respectively, uniform lower and upper bounds on the eigenvalues of F(x). Then for the Newton direction d_k = −F(x_k)^{-1}∇f(x_k)^T one has

f(x_k + d_k) = f(x_k) + ∇f(x_k)d_k + (1/2)d_k^T F(x_k)d_k + o(|d_k|^2),

where the o(·) bound is uniform for all x_k. Since x_k → x*, it follows that once x_k is sufficiently close to x*, then f(x_k + d_k) < f(x_k) + ε∇f(x_k)d_k, and hence the backtracking test (the first part of Armijo’s rule) is satisfied. This means that α = 1 will be used throughout the final phase.
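The two phases can be seen in a minimal backtracking Newton sketch. The Armijo parameter, the halving factor, and the test function f(x) = Σ(exp(x_i) − x_i) below are all illustrative choices, not prescriptions from the text:

```python
import numpy as np

def backtracking_newton_step(f, grad, hess, x, eps=0.1, beta=0.5):
    """One Newton step with backtracking: try alpha = 1 first, then halve
    until the Armijo condition f(x + a*d) <= f(x) + eps*a*g'd holds."""
    g = grad(x)
    d = np.linalg.solve(hess(x), -g)   # Newton direction
    alpha = 1.0
    while f(x + alpha * d) > f(x) + eps * alpha * g @ d:
        alpha *= beta                  # damping phase: alpha < 1
    return x + alpha * d, alpha

# Hypothetical test function f(x) = sum(exp(x_i) - x_i), minimized at x* = 0.
f = lambda x: np.sum(np.exp(x) - x)
grad = lambda x: np.exp(x) - 1.0
hess = lambda x: np.diag(np.exp(x))

x = np.array([4.0, -2.0])              # far from the solution: expect alpha < 1 at first
for _ in range(15):
    x, alpha = backtracking_newton_step(f, grad, hess, x)
print(x, alpha)                        # near 0, and alpha settles at 1 (quadratic phase)
```

Run from this remote starting point, the first step is damped (alpha < 1), while every step in the final phase accepts the full Newton step alpha = 1, as the discussion above predicts.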

4. General Problems. In practice, Newton’s method must be modified to accommodate the possible nonpositive definiteness of the Hessian at points that are remote from the solution. A common approach is to take M_k = [ε_k I + F(x_k)]^{-1} for some non-negative ε_k. This can be regarded as a kind of compromise between steepest descent (ε_k very large) and Newton’s method (ε_k = 0); ε_k can always be chosen large enough to make M_k positive definite. We shall present one modification of this type.

Let F_k ≡ F(x_k). Fix a constant δ > 0. Given x_k, calculate the eigenvalues of F_k and let ε_k be the smallest non-negative constant for which the matrix ε_k I + F_k has eigenvalues greater than or equal to δ. Then define the direction

d_k = −[ε_k I + F_k]^{-1} g_k,

and set x_{k+1} = x_k + α_k d_k, where α_k minimizes f(x_k + αd_k), α ≥ 0. The direction d_k is then a continuous function of x_k, and hence the mapping D: E^n → E^{2n} defined by D(x_k) = (x_k, d_k) is continuous. Thus the algorithm A = SD is closed at points outside the solution set Γ = {x : ∇f(x) = 0}. Since ε_k I + F_k is positive definite, d_k is a descent direction, and thus Z(x) ≡ f(x) is a continuous descent function for A. Therefore, assuming the generated sequence is bounded, the Global Convergence Theorem applies. Furthermore, if δ > 0 is smaller than the smallest eigenvalue of F(x*), then for x_k sufficiently close to x*, ε_k = 0, and the method reduces to Newton’s method. Thus this revised method also has order of convergence equal to two.

The selection of an appropriate δ is somewhat of an art. A small δ means that nearly singular matrices must be inverted, while a large δ means that the order two convergence may be lost. Experimentation and familiarity with a given class of problems are often required to find the best δ.


The utility of the above algorithm is hampered by the necessity to calculate the eigenvalues of F(x_k), and in practice an alternate procedure is used. In one popular method, given x_k, a Cholesky factorization

ε_k I + F(x_k) = GG^T

(see Exercise 6 of Chapter 7) is employed to check for positive definiteness. If the factorization breaks down, ε_k is increased. The factorization then also provides the direction vector through solution of the equations GG^T d_k = −g_k, which are easily solved, since G is triangular. Then the value f(x_k + d_k) is examined. If it is sufficiently below f(x_k), then the point x_{k+1} = x_k + d_k is accepted; otherwise ε_k is increased and the process repeated. There is a good deal of flexibility in these methods. It should be clear from this discussion that the simplicity that Newton’s method first seemed to promise is not fully realized in practice.
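A minimal sketch of this factorization-based safeguard follows. The retry schedule for ε and the test Hessian are illustrative choices, not the book's prescription, and a library routine stands in for the hand-coded triangular solves:

```python
import numpy as np

def modified_newton_direction(F, g, eps0=0.0, delta=1e-3):
    """Attempt to factor eps*I + F = G G^T; if the Cholesky factorization
    fails (matrix not positive definite), increase eps and retry."""
    n = len(g)
    eps = eps0
    while True:
        try:
            G = np.linalg.cholesky(eps * np.eye(n) + F)
            break
        except np.linalg.LinAlgError:
            eps = max(2 * eps, delta)   # illustrative increase schedule
    # Solve (G G^T) d = -g; in practice two triangular solves, here a
    # general solver is used for brevity.
    y = np.linalg.solve(G, -g)
    d = np.linalg.solve(G.T, y)
    return d, eps

# Hypothetical indefinite Hessian, as might occur far from the solution.
F = np.array([[1.0, 0.0], [0.0, -2.0]])
g = np.array([1.0, 1.0])
d, eps = modified_newton_direction(F, g)
print(eps, g @ d)   # eps > 0 was required, and d is a descent direction (g'd < 0)
```

Because the accepted matrix ε I + F is positive definite, the resulting d satisfies g^T d < 0 and is therefore a descent direction, as the discussion above requires.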

Newton’s Method and Logarithms

Interior point methods of linear and nonlinear programming use barrier functions, which usually are based on the logarithm. For linear programming especially, this means that the only nonlinear terms are logarithms. Newton’s method enjoys some special properties in this case.

To illustrate, let us apply Newton’s method to the one-dimensional problem of minimizing f(x) = tx − ln x for some t > 0, whose solution is x* = 1/t. The Newton iterate is

x_{k+1} = x_k − f″(x_k)^{-1} f′(x_k) = x_k − x_k^2 (t − 1/x_k) = 2x_k − t x_k^2.

The error therefore obeys

1 − t x_{k+1} = (1 − t x_k)^2,    (54)

which is directly evident and exact. Expression (54) represents a reduction in the error magnitude only if |1 − t x_k| < 1, that is, only if 0 < x_k < 2/t. If the starting point is not in this region, then Newton’s method must be used with damping until the region 0 < x < 2/t is reached. From then on, a step size of 1 will exhibit pure quadratic error reduction.
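The exact error recursion (54) is easy to verify numerically. The values of t and the starting point below are arbitrary choices inside the region 0 < x < 2/t:

```python
# Newton's method on f(x) = t*x - ln(x): the iterate is x_{k+1} = 2x - t*x^2,
# and the error obeys 1 - t*x_{k+1} = (1 - t*x_k)^2 exactly.
t = 4.0
x = 0.4                          # inside 0 < x < 2/t = 0.5, so no damping needed
for _ in range(5):
    x_next = 2 * x - t * x * x   # full Newton step
    # verify the error identity (54) at every step
    assert abs((1 - t * x_next) - (1 - t * x) ** 2) < 1e-12
    x = x_next
print(x)                         # approaches the solution 1/t = 0.25
```

After five steps the residual 1 − tx has been squared five times, so the iterate agrees with 1/t to about eight digits: pure quadratic error reduction, with no third-derivative bounds involved.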


Fig. 8.11 Newton’s method applied to minimization of tx − ln x

The situation is shown in Fig. 8.11. The graph is that of f′(x) = t − 1/x. The root-finding form of Newton’s method (Section 8.2) is then applied to this function. At each point, the tangent line is followed to the x axis to find the new point. The starting value marked x_1 is far from the solution 1/t, and hence following the tangent would lead to a new point that was negative. Damping must be applied at that starting point. Once a point x is reached with 0 < x < 1/t, all further points will remain to the left of 1/t and move toward it quadratically.

In interior point methods for linear programming, a logarithmic barrier function is applied separately to the variables that must remain positive. The convergence analysis in these situations is an extension of that for the simple case given here, allowing for estimates of the rate of convergence that do not require knowledge of bounds of third-order derivatives.

Self-Concordant Functions

The special properties exhibited above for the logarithm have been extended to the general class of self-concordant functions, of which the logarithm is the primary example. A function f defined on the real line is self-concordant if it satisfies

|f‴(x)| ≤ 2 f″(x)^{3/2}

throughout its domain. A function defined on E^n is said to be self-concordant if it is self-concordant in every direction: that is, if f(x + αd) is self-concordant with respect to α for every d throughout the domain of f.
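For the primary example f(x) = −ln x, the defining inequality holds with equality, since f″(x) = 1/x^2 and f‴(x) = −2/x^3. A quick numerical check:

```python
# Check the self-concordance inequality |f'''(x)| <= 2 f''(x)^(3/2)
# for f(x) = -ln(x), where f''(x) = 1/x^2 and f'''(x) = -2/x^3:
# both sides equal 2/x^3, so the bound is tight for every x > 0.
for x in [0.1, 1.0, 10.0]:
    f2 = 1.0 / x**2
    f3 = -2.0 / x**3
    assert abs(abs(f3) - 2.0 * f2**1.5) < 1e-9 * abs(f3)
print("self-concordance holds with equality for -ln(x)")
```

The exponent 3/2 is what makes the definition scale-invariant: replacing f(x) by f(sx) multiplies f″^{3/2} and |f‴| by the same factor s^3, so self-concordance is unaffected by the scaling of x.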


Self-concordant functions can be combined by addition and even by composition with affine functions to yield other self-concordant functions (see Exercise 29). For analyzing Newton’s method on a self-concordant function, define the quantity

λ(x) = [∇f(x) F(x)^{-1} ∇f(x)^T]^{1/2},

where as usual F(x) is the Hessian matrix of f at x. Then it can be shown that close to the solution

λ(x_{k+1}) ≤ λ(x_k)^2.

Furthermore, in a backtracking procedure, estimates of both the stepwise progress in the damping phase and the point at which the quadratic phase begins can be expressed in terms of parameters that depend only on the backtracking parameters. Although this knowledge does not generally influence practice, it is theoretically quite interesting.

Example 1. (The logarithmic case) Consider the earlier example of f(x) = tx − ln x. Recall that one way to analyze Newton’s method is to change variables from x to y according to ỹ = F(x)^{1/2} x̃, where here x is a reference point and x̃ is variable. The gradient with respect to y at ỹ is then F(x)^{-1/2} ∇f(x̃)^T, and hence the norm of the gradient at ỹ is

[∇f(x̃) F(x)^{-1} ∇f(x̃)^T]^{1/2} ≡ λ(x̃).

Hence it is perhaps not surprising that λ(x) plays a role analogous to the role played by the norm of the gradient in the analysis of steepest descent.


8.9 Coordinate Descent Methods

The algorithms discussed in this section are sometimes attractive because of their easy implementation. Generally, however, their convergence properties are poorer than those of steepest descent.

Let f be a function on E^n having continuous first partial derivatives. Given a point x = (x_1, x_2, …, x_n), descent with respect to the coordinate x_i (i fixed) means that one solves

minimize over x_i:  f(x_1, x_2, …, x_n).

Thus only changes in the single component x_i are allowed in seeking a new and better vector x. In our general terminology, each such descent can be regarded as a descent in the direction e_i (or −e_i), where e_i is the ith unit vector. By sequentially minimizing with respect to different components, a relative minimum of f might ultimately be determined.

There are a number of ways that this concept can be developed into a full algorithm. The cyclic coordinate descent algorithm minimizes f cyclically with respect to the coordinate variables. Thus x_1 is changed first, then x_2, and so forth through x_n. The process is then repeated starting with x_1 again. A variation of this is the Aitken double sweep method. In this procedure one searches over x_1, x_2, …, x_n, in that order, and then comes back in the order x_{n−1}, x_{n−2}, …, x_1. These cyclic methods have the advantage of not requiring any information about ∇f to determine the descent directions.

If the gradient of f is available, then it is possible to select the order of descent coordinates on the basis of the gradient. A popular technique is the Gauss–Southwell method, where at each stage the coordinate corresponding to the largest (in absolute value) component of the gradient vector is selected for descent.
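A minimal sketch of the Gauss–Southwell rule on a quadratic (the Q and b below are hypothetical test data). For f(x) = (1/2)x^T Qx − b^T x, the exact minimizing step along e_i is −g_i/Q_ii:

```python
import numpy as np

def gauss_southwell(Q, b, x0, iters=200):
    """Gauss-Southwell coordinate descent on f(x) = 0.5 x'Qx - b'x:
    at each step, minimize exactly along the coordinate whose gradient
    component is largest in absolute value."""
    x = x0.copy()
    for _ in range(iters):
        g = Q @ x - b                   # gradient
        i = int(np.argmax(np.abs(g)))   # coordinate with largest |g_i|
        x[i] -= g[i] / Q[i, i]          # exact line minimization along e_i
    return x

# Hypothetical positive definite test problem.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gauss_southwell(Q, b, np.zeros(2))
print(x, np.linalg.solve(Q, b))         # the two should agree closely
```

Only one gradient component is acted on per step, which is why, as discussed below, roughly n such line searches are needed to match one full steepest descent step.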

Global Convergence

It is simple to prove global convergence for cyclic coordinate descent. The algorithmic map A is the composition of 2n maps

A = SC_n SC_{n−1} ⋯ SC_1,

where C_i(x) = (x, e_i) with e_i equal to the ith unit vector, and S is the usual line search algorithm, but over the doubly infinite line rather than the semi-infinite line. The map C_i is obviously continuous and S is closed. If we assume that points are restricted to a compact set, then A is closed by Corollary 1, Section 7.7. We define the solution set Γ = {x : ∇f(x) = 0}. If we assume that a search along any coordinate direction yields a unique minimum point, then the function Z(x) ≡ f(x) serves as a continuous descent function for A with respect to Γ. This is because a search along any coordinate direction either must yield a decrease or, by the uniqueness assumption, it cannot change position. Therefore,


if at a point x we have ∇f(x) ≠ 0, then at least one component of ∇f(x) does not vanish and a search along the corresponding coordinate direction must yield a decrease.

Local Convergence Rate

It is difficult to compare the rates of convergence of these algorithms with the rates of others that we analyze. This is partly because coordinate descent algorithms are from an entirely different general class of algorithms than, for example, steepest descent and Newton’s method, since coordinate descent algorithms are unaffected by (diagonal) scale factor changes but are affected by rotation of coordinates, the opposite being true for steepest descent. Nevertheless, some comparison is possible.

It can be shown (see Exercise 20) that for the same quadratic problem as treated in Section 8.6, there holds for the Gauss–Southwell method

E(x_{k+1}) ≤ [1 − a/(A(n − 1))] E(x_k),

where a and A are again the smallest and largest eigenvalues of Q. Since a single coordinate search costs far less than a full gradient step, this bound suggests that the Gauss–Southwell method can be expected to require about n line searches to equal the effect of one step of steepest descent. The above discussion again illustrates the general objective that we seek in convergence analysis. By comparing the formula giving the rate of convergence for steepest descent with a bound for coordinate descent, we are able to draw some general conclusions on the relative performance of the two methods that are not dependent on specific values of a and A. Our analyses of local convergence properties, which usually involve specific formulae, are always guided by this objective of obtaining general qualitative comparisons.

Example. The quadratic problem considered in Section 8.6 with
