
Page 1

LECTURE SLIDES ON NONLINEAR

PROGRAMMING BASED ON

LECTURES GIVEN AT THE

MASSACHUSETTS INSTITUTE OF TECHNOLOGY CAMBRIDGE, MASS

DIMITRI P. BERTSEKAS

Page 2

LECTURE SLIDES ON NONLINEAR PROGRAMMING

BASED ON LECTURES GIVEN AT THE MASSACHUSETTS INSTITUTE OF TECHNOLOGY

CAMBRIDGE, MASS.

DIMITRI P. BERTSEKAS

These lecture slides are based on the book:

“Nonlinear Programming,” Athena Scientific,

by Dimitri P. Bertsekas; see

http://www.athenasc.com/nonlinbook.html

for errata, selected problem solutions, and other support material.

The slides are copyrighted but may be freely

reproduced and distributed for any

noncommercial purpose.

LAST REVISED: Feb 3, 2005

Page 4

NONLINEAR PROGRAMMING

min_{x ∈ X} f(x),

where

• f : ℜ^n → ℜ is a continuous (and usually differentiable) function of n variables

• X = ℜ^n or X is a subset of ℜ^n with a “continuous” character

• If X = ℜ^n, the problem is called unconstrained

• If f is linear and X is polyhedral, the problem is a linear programming problem. Otherwise it is a nonlinear programming problem

• Linear and nonlinear programming have traditionally been treated separately. Their methodologies have gradually come closer

Page 5

TWO MAIN ISSUES

Page 6

APPLICATIONS OF NONLINEAR PROGRAMMING

• Data networks – Routing

• Production planning

• Resource allocation

• Computer-aided design

• Solution of equilibrium models

• Data analysis and least squares formulations

• Modeling human or organizational behavior

Page 7

− Zero 1st order variation along all directions

on the constraint surface

− Lagrange multiplier theory

• Sensitivity

Page 8

COMPUTATION PROBLEM

• Iterative descent

• Approximation

• Role of convergence analysis

• Role of rate of convergence analysis

• Using an existing package to solve a nonlinear

programming problem
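As an illustration of this last point, a minimal sketch using scipy as the existing package (an assumed choice; the objective below is a generic smooth test function, not one taken from the slides):

import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth objective (Rosenbrock-like); the gradient is left to
# scipy's internal numerical differencing for brevity.
def f(x):
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2

result = minimize(f, x0=np.zeros(2), method="BFGS")
print(result.x, result.fun)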

Page 9

POST-OPTIMAL ANALYSIS

• Sensitivity

• Role of Lagrange multipliers as prices

Page 10

Figure: illustration of the min common point and the max intercept point.

Page 11

6.252 NONLINEAR PROGRAMMING

LECTURE 2: UNCONSTRAINED OPTIMIZATION - OPTIMALITY CONDITIONS

LECTURE OUTLINE

• Unconstrained Optimization

• Local Minima

• Necessary Conditions for Local Minima

• Sufficient Conditions for Local Minima

• The Role of Convexity

Page 12

MATHEMATICAL BACKGROUND

• Vectors and matrices in ℜ^n

• Transpose, inner product, norm

• Eigenvalues of symmetric matrices

• Positive definite and semidefinite matrices

• Convergent sequences and subsequences

• Open, closed, and compact sets

• Continuity of functions

• 1st and 2nd order differentiability of functions

• Taylor series expansions

• Mean value theorems

Page 13

LOCAL AND GLOBAL MINIMA

Figure: graph of a function f(x) of a scalar x, marking local minima (including a strict local minimum) and the global minimum.

Page 14

NECESSARY CONDITIONS FOR A LOCAL MIN

• 1st order condition: Zero slope at a local minimum, ∇f(x∗) = 0

• 2nd order condition: Nonnegative curvature, ∇²f(x∗) positive semidefinite

• There may exist points that satisfy the 1st and 2nd order conditions but are not local minima

Page 15

PROOFS OF NECESSARY CONDITIONS

• 1st order condition ∇f(x∗) = 0: Fix d ∈ ℜ^n. Then (since x∗ is a local min), from a 1st order Taylor expansion,

0 ≤ f(x∗ + αd) − f(x∗) = α∇f(x∗)'d + o(α)   for all small α > 0,

so ∇f(x∗)'d ≥ 0 for every d, which implies ∇f(x∗) = 0

• 2nd order condition: Since ∇f(x∗) = 0 and x∗ is a local min, there is a sufficiently small ε > 0 such that for all α ∈ (0, ε),

0 ≤ f(x∗ + αd) − f(x∗) = (α²/2) d'∇²f(x∗)d + o(α²),

so d'∇²f(x∗)d ≥ 0 for every d

Page 16

SUFFICIENT CONDITIONS FOR A LOCAL MIN

• 1st order condition: Zero slope

∇f(x∗) = 0

• 2nd order condition: Positive curvature

∇²f(x∗): Positive Definite

• Proof: Let λ > 0 be the smallest eigenvalue of ∇²f(x∗). Using a second order Taylor expansion, we have for all d

f(x∗ + d) − f(x∗) = ∇f(x∗)'d + (1/2) d'∇²f(x∗)d + o(‖d‖²)

Since ∇f(x∗) = 0, this gives f(x∗ + d) − f(x∗) ≥ (λ/2)‖d‖² + o(‖d‖²) > 0 for all sufficiently small d ≠ 0
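A small numerical check of these conditions, as a sketch (the function below is a hypothetical example, not taken from the slides; numpy is assumed):

import numpy as np

# Hypothetical example: f(x) = x1^2 + 2*x2^2, with gradient and Hessian
# written out by hand; x_star = 0 is its strict local (and global) minimum.
def grad_f(x):
    return np.array([2.0 * x[0], 4.0 * x[1]])

def hess_f(x):
    return np.array([[2.0, 0.0], [0.0, 4.0]])

x_star = np.array([0.0, 0.0])
print("1st order condition (zero gradient):", np.allclose(grad_f(x_star), 0.0))

# 2nd order sufficient condition: smallest eigenvalue of the Hessian > 0.
lam = np.linalg.eigvalsh(hess_f(x_star)).min()
print("smallest Hessian eigenvalue:", lam)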

Page 17

Figure: a convex function; linear interpolation between two points overestimates the function.

Page 18

MINIMA AND CONVEXITY

• Local minima are also global under convexity

Illustration of why local minima of convex functions are also global. Suppose that f is convex and that x∗ is a local minimum of f. Let x be such that f(x) < f(x∗). By convexity, for all α ∈ (0, 1),

f(αx∗ + (1 − α)x) ≤ αf(x∗) + (1 − α)f(x) < f(x∗).

Thus, f takes values strictly lower than f(x∗) on the line segment connecting x∗ with x, and x∗ cannot be a local minimum that is not global.

Page 19

OTHER PROPERTIES OF CONVEX FUNCTIONS

• f is convex if and only if the linear approximation at a point x based on the gradient underestimates f:

f(z) ≥ f(x) + ∇f(x)'(z − x)   for all z

• f is convex if and only if ∇²f(x) is positive semidefinite for all x

Page 20

6.252 NONLINEAR PROGRAMMING LECTURE 3: GRADIENT METHODS

LECTURE OUTLINE

• Quadratic Unconstrained Problems

• Existence of Optimal Solutions

• Iterative Computational Methods

• Gradient Methods - Motivation

• Principal Gradient Methods

• Gradient Methods - Choices of Direction

Page 21

QUADRATIC UNCONSTRAINED PROBLEMS

• Minimize f(x) = (1/2)x'Qx − b'x over x ∈ ℜ^n, where Q is a symmetric n × n matrix

• Q ≥ 0 ⇒ f convex, necessary conditions are also sufficient, and local minima are also global

• Conclusions:

− Q not ≥ 0 ⇒ f has no local minima

− If Q > 0 (and hence invertible), x∗ = Q^{−1}b is the unique global minimum

− If Q ≥ 0 but not invertible, either no solution or an ∞ number of solutions

Page 22

− α > 0, β = 0: {(1/α, ξ) | ξ real} is the set of global minima

− α = 0: there is no global minimum

− α > 0, β < 0: there is no global minimum

Figure: illustration of the isocost surfaces of the quadratic cost for these cases.

Page 23

EXISTENCE OF OPTIMAL SOLUTIONS

Consider the problem min_{x ∈ X} f(x)

− A global minimum exists if f is continuous and X is compact (Weierstrass theorem)

− A global minimum exists if X is closed, and f is continuous and coercive, that is, f(x) → ∞ when ‖x‖ → ∞

Page 24

GRADIENT METHODS - MOTIVATION

If d makes an angle with ∇f(x) that is greater than 90 degrees, i.e., ∇f(x)'d < 0, then d is a descent direction: f(x + αd) < f(x) for all sufficiently small α > 0

Page 25

PRINCIPAL GRADIENT METHODS

• Simplest method: Steepest descent, x^{k+1} = x^k − α^k∇f(x^k)
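A minimal sketch of the iteration, assuming a constant stepsize and a user-supplied gradient (the stepsize rules themselves are the subject of the next lectures; names and defaults are illustrative):

import numpy as np

def steepest_descent(grad_f, x0, alpha=0.1, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:    # approximate stationarity test
            break
        x = x - alpha * g              # x^{k+1} = x^k - alpha * grad f(x^k)
    return x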

Page 26

STEEPEST DESCENT AND NEWTON’S METHOD

Figure: slow convergence of steepest descent from x0 vs. fast convergence of Newton's method w/ α^k = 1; the quadratic approximations of f at x0 and x1 are shown.

Given x^k, the method obtains x^{k+1} as the minimum of a quadratic approximation of f based on a second order Taylor expansion around x^k.

Page 27

OTHER CHOICES OF DIRECTION

• Diagonally Scaled Steepest Descent

Page 28

6.252 NONLINEAR PROGRAMMING

LECTURE 4: CONVERGENCE ANALYSIS OF GRADIENT METHODS

LECTURE OUTLINE

• Gradient Methods - Choice of Stepsize

• Gradient Methods - Convergence Issues

Page 29

Figure: the Armijo rule, plotting f(x^k + αd^k) − f(x^k) against the stepsize α; unsuccessful stepsize trials s, βs, β²s, ... are reduced until the accepted stepsize α^k satisfies the test below.

Start with s and continue with βs, β²s, ..., until β^m s falls within the set of α with

f(x^k) − f(x^k + αd^k) ≥ −σα∇f(x^k)'d^k
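A sketch of the rule as stated above, assuming f and its gradient are available as Python callables (parameter names and defaults are illustrative):

import numpy as np

def armijo_stepsize(f, grad_f, x, d, s=1.0, beta=0.5, sigma=0.1):
    """Backtrack over s, beta*s, beta^2*s, ... until the test above holds."""
    slope = float(grad_f(x) @ d)   # negative for a descent direction d
    alpha = s
    while f(x) - f(x + alpha * d) < -sigma * alpha * slope:
        alpha *= beta
    return alpha

# Example use on a simple quadratic with d = -grad f(x).
f = lambda x: 0.5 * float(x @ x)
g = lambda x: x
x0 = np.array([1.0, -2.0])
print(armijo_stepsize(f, g, x0, -g(x0)))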

Page 31

GRADIENT METHODS WITH ERRORS

x^{k+1} = x^k − α^k(∇f(x^k) + e^k)

where e^k is an uncontrollable error vector

• Several special cases:

− e^k small relative to the gradient; i.e., for all k, ‖e^k‖ < ‖∇f(x^k)‖

− {e^k} is bounded, i.e., for all k, ‖e^k‖ ≤ δ, where δ is some scalar.

− {e^k} is proportional to the stepsize, i.e., for all k, ‖e^k‖ ≤ qα^k, where q is some scalar.

− {e^k} are independent zero mean random vectors

Page 32

CONVERGENCE ISSUES

• Only convergence to stationary points can be

guaranteed

• Even convergence to a single limit may be hard

to guarantee (capture theorem)

• Danger of nonconvergence if directions d k tend

to be orthogonal to ∇f(x k)

• Gradient related condition:

For any subsequence {x^k}_{k∈K} that converges to a nonstationary point, the corresponding subsequence {d^k}_{k∈K} is bounded and satisfies

lim sup_{k→∞, k∈K} ∇f(x^k)'d^k < 0.

• Satisfied if d^k = −D^k∇f(x^k) and the eigenvalues of D^k are bounded above and bounded away from zero

Page 33

CONVERGENCE RESULTS: CONSTANT AND DIMINISHING STEPSIZES

Let {x^k} be a sequence generated by a gradient method x^{k+1} = x^k + α^kd^k, where {d^k} is gradient related. Assume that for some constant L > 0, we have

‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖,   ∀ x, y ∈ ℜ^n.

Assume that either

(1) there exists a scalar ε such that for all k

0 < ε ≤ α^k ≤ (2 − ε)|∇f(x^k)'d^k| / (L‖d^k‖²)

or

(2) α^k → 0 and Σ_{k=0}^∞ α^k = ∞.

Then either f(x^k) → −∞ or else {f(x^k)} converges to a finite value and ∇f(x^k) → 0.

Page 34

MAIN PROOF IDEA

Figure: the idea of the convergence proof for a constant stepsize, plotting f(x^k + αd^k) − f(x^k) against α.

Given x^k and the descent direction d^k, the cost difference f(x^k + αd^k) − f(x^k) is majorized by α∇f(x^k)'d^k + (1/2)α²L‖d^k‖² (based on the Lipschitz assumption; see next slide). Minimization of this upper bound over α yields the stepsize α = |∇f(x^k)'d^k| / (L‖d^k‖²).
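For reference, a minimal derivation of this majorization from the Lipschitz condition on ∇f (a standard argument, added here in LaTeX; it is not part of the surviving slides):

f(x^k+\alpha d^k)-f(x^k)
  =\int_0^1 \alpha\,\nabla f(x^k+t\alpha d^k)'d^k\,dt
  =\alpha\nabla f(x^k)'d^k
    +\int_0^1 \alpha\bigl(\nabla f(x^k+t\alpha d^k)-\nabla f(x^k)\bigr)'d^k\,dt
  \le \alpha\nabla f(x^k)'d^k+\alpha\int_0^1 L\,t\,\alpha\,\|d^k\|^2\,dt
  = \alpha\nabla f(x^k)'d^k+\tfrac{1}{2}\alpha^2 L\|d^k\|^2 .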

Page 36

CONVERGENCE RESULT – ARMIJO RULE

Let {x^k} be generated by x^{k+1} = x^k + α^kd^k, where {d^k} is gradient related and α^k is chosen by the Armijo rule. Then every limit point of {x^k} is stationary.

Proof Outline: Assume x is a nonstationary limit point. Then f(x^k) → f(x), so α^k∇f(x^k)'d^k → 0.

• If {x^k}_K → x, lim sup_{k→∞, k∈K} ∇f(x^k)'d^k < 0 by gradient relatedness, so that {α^k}_K → 0.

• By the Armijo rule, for large k ∈ K the stepsize α^k/β is tried and rejected, i.e.,

f(x^k) − f(x^k + (α^k/β)d^k) < −σ(α^k/β)∇f(x^k)'d^k

Use the Mean Value Theorem and let k → ∞. We get −∇f(x)'p ≤ −σ∇f(x)'p, where p is a limit point of p^k = d^k/‖d^k‖ – a contradiction since ∇f(x)'p < 0.

Page 37

6.252 NONLINEAR PROGRAMMING

LECTURE 5: RATE OF CONVERGENCE

LECTURE OUTLINE

• Approaches for Rate of Convergence Analysis

• The Local Analysis Method

• Quadratic Model Analysis

• The Role of the Condition Number

• Scaling

• Diagonal Scaling

• Extension to Nonquadratic Problems

• Singular and Difficult Problems

Page 38

APPROACHES FOR RATE OF CONVERGENCE ANALYSIS

• Computational complexity approach

• Informational complexity approach

• Local analysis

• Why we will focus on the local analysis method

Page 39

THE LOCAL ANALYSIS APPROACH

• Restrict attention to sequences {x^k} converging to a local minimum x∗, and measure progress with an error function e(x), e.g., e(x) = ‖x − x∗‖ or e(x) = f(x) − f(x∗)

• Geometric or linear convergence [if e(x^k) ≤ qβ^k for some q > 0 and β ∈ [0, 1), and for all k]. Holds typically for gradient methods

• Superlinear convergence [if e(x^k) ≤ q · β^(p^k) for some q > 0, p > 1 and β ∈ [0, 1), and for all k].

• Sublinear convergence
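A quick numerical illustration of the difference between the first two definitions (synthetic error sequences, not data from the slides):

import numpy as np

k = np.arange(8)
linear = 0.5 ** k                  # e(x^k) <= q*beta^k with q = 1, beta = 0.5
superlinear = 0.5 ** (2.0 ** k)    # e(x^k) <= q*beta^(p^k) with p = 2
print(linear)
print(superlinear)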

Page 40

QUADRATIC MODEL ANALYSIS

• Focus on the quadratic function f(x) = (1/2)x'Qx, with Q > 0.

• Analysis also applies to nonquadratic problems in the neighborhood of a nonsingular local min

• Consider steepest descent

x^{k+1} = x^k − α^k∇f(x^k) = (I − α^kQ)x^k

‖x^{k+1}‖² = x^k'(I − α^kQ)²x^k ≤ (max eig(I − α^kQ)²) ‖x^k‖²

The eigenvalues of (I − α^kQ)² are equal to (1 − α^kλ_i)², where λ_i are the eigenvalues of Q, so

max eig of (I − α^kQ)² = max{(1 − α^km)², (1 − α^kM)²}

where m and M are the smallest and largest eigenvalues of Q

Page 41

OPTIMAL CONVERGENCE RATE

• The value of α^k that minimizes the bound is α∗ = 2/(M + m), for which max eig(I − α∗Q)² = ((M − m)/(M + m))²

• The ratio M/m is called the condition number of Q, and problems with M/m large are called ill-conditioned
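A small sketch illustrating the effect of the condition number on steepest descent for f(x) = (1/2)x'Qx, using the constant stepsize 2/(M + m) discussed above (the matrices, starting point, and tolerance are illustrative):

import numpy as np

for M in (2.0, 100.0):                     # m = 1 in both cases
    Q = np.diag([1.0, M])
    x = np.array([1.0, 1.0])
    alpha = 2.0 / (M + 1.0)                # stepsize 2/(M + m)
    for k in range(2000):
        x = x - alpha * (Q @ x)            # grad f(x) = Q x
        if np.linalg.norm(x) < 1e-6:
            break
    print(f"M/m = {M:6.1f}: converged in {k + 1} iterations")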

Page 42

SCALING AND STEEPEST DESCENT

• View the more general method

x^{k+1} = x^k − α^kD^k∇f(x^k)

as a scaled version of steepest descent

• Consider a change of variables x = Sy with S = (D^k)^{1/2}. In the space of y, the problem is

minimize h(y) ≡ f(Sy)
subject to y ∈ ℜ^n

• Apply steepest descent to this problem, multiply with S, and pass back to the space of x, using ∇h(y^k) = S∇f(x^k):

y^{k+1} = y^k − α^k∇h(y^k)

Sy^{k+1} = Sy^k − α^kS∇h(y^k)

x^{k+1} = x^k − α^kD^k∇f(x^k)

Page 43

DIAGONAL SCALING

• Apply the results for steepest descent to the scaled iteration y^{k+1} = y^k − α^k∇h(y^k):

‖y^{k+1}‖² ≤ (max{(1 − α^km^k)², (1 − α^kM^k)²}) ‖y^k‖²

where m^k and M^k are the smallest and largest eigenvalues of the Hessian of h, which is

∇²h(y) = S∇²f(x)S = (D^k)^{1/2}Q(D^k)^{1/2}

• It is desirable to choose D^k as close as possible to Q^{−1}. Also if D^k is so chosen, the stepsize α = 1 is near the optimal 2/(M^k + m^k)

• Using as D^k a diagonal approximation to Q^{−1} is common and often very effective. Corrects for poor choice of units expressing the variables
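A one-line sketch of the diagonally scaled step, with D^k taken as the inverse of the Hessian diagonal (a common heuristic consistent with the bullet above; names are illustrative):

import numpy as np

def diagonally_scaled_step(x, grad, hess_diag, alpha=1.0):
    # D^k is diagonal with entries 1/hess_diag; stored here as a vector.
    return x - alpha * grad / hess_diag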

Page 44

NONQUADRATIC PROBLEMS

• Rate of convergence to a nonsingular local minimum of a nonquadratic function is very similar to the quadratic case (linear convergence is typical)

• If D^k → (∇²f(x∗))^{−1}, we asymptotically obtain optimal scaling and superlinear convergence

• More generally, if the direction d^k = −D^k∇f(x^k) asymptotically approaches the Newton direction, the convergence rate is superlinear

• Convergence rate to a singular local min is typically sublinear (in effect, condition number = ∞)

Page 45

6.252 NONLINEAR PROGRAMMING

LECTURE 6: NEWTON AND GAUSS-NEWTON METHODS

LECTURE OUTLINE

• Newton’s Method

• Convergence Rate of the Pure Form

• Global Convergence

• Variants of Newton’s Method

• Least Squares Problems

• The Gauss-Newton Method

Page 46

− Very fast when it converges (how fast?)

− May not converge (or worse, it may not be defined) when started far from a nonsingular local min

− Issue: How to modify the method so that it converges globally, while maintaining the fast convergence rate

Page 47

CONVERGENCE RATE OF PURE FORM

• Consider solution of the nonlinear system g(x) = 0 where g : ℜ^n → ℜ^n, with the method

x^{k+1} = x^k − (∇g(x^k))^{−1}g(x^k)

− If g(x) = ∇f(x), we get the pure form of Newton's method

• Quick derivation: Suppose x^k → x∗ with g(x∗) = 0 and ∇g(x∗) invertible. By a Taylor expansion,

0 = g(x∗) = g(x^k) + ∇g(x^k)(x∗ − x^k) + o(‖x^k − x∗‖),

so x^{k+1} − x∗ = x^k − x∗ − (∇g(x^k))^{−1}g(x^k) = o(‖x^k − x∗‖), i.e., superlinear convergence

Page 48

CONVERGENCE BEHAVIOR OF PURE FORM

Page 49

MODIFICATIONS FOR GLOBAL CONVERGENCE

• Use a stepsize

• Modify the Newton direction when:

− the Hessian is not positive definite

− the Hessian is nearly singular (needed to improve performance)

• Use

d^k = −(∇²f(x^k) + Δ^k)^{−1}∇f(x^k),

whenever the Newton direction does not exist or is not a descent direction. Here Δ^k is a diagonal matrix such that

∇²f(x^k) + Δ^k > 0

− Modified Cholesky factorization

− Trust region methods
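A sketch of the modified direction above, using a growing diagonal shift Δ^k = δI as a simple stand-in for a modified Cholesky factorization (δ and its update rule are illustrative choices):

import numpy as np

def modified_newton_direction(grad, hess, delta0=1e-3):
    n = grad.shape[0]
    delta = 0.0
    while True:
        try:
            np.linalg.cholesky(hess + delta * np.eye(n))   # succeeds iff the matrix is > 0
            break
        except np.linalg.LinAlgError:
            delta = delta0 if delta == 0.0 else 10.0 * delta
    # With delta = 0 this is the pure Newton direction.
    return np.linalg.solve(hess + delta * np.eye(n), -grad)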

Page 51

PURE FORM OF THE GAUSS-NEWTON METHOD

• Idea: Linearize around the current point x k
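A sketch of one pure Gauss-Newton step, assuming the standard least squares setting min (1/2)‖g(x)‖² and linearizing g around x^k as above (function names are illustrative):

import numpy as np

def gauss_newton_step(x, g, jacobian):
    r = g(x)                     # residuals g(x^k)
    J = jacobian(x)              # Jacobian of g at x^k
    # Minimize ||g(x^k) + J (x - x^k)||^2 over x, i.e. solve a linear least squares problem.
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return x + step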

Page 52

MODIFICATIONS OF THE GAUSS-NEWTON

• Similar to those for Newton’s method:

− Start a cycle with ψ0 (an estimate of x)

− Update ψ using a single component of g

Page 53

MODEL CONSTRUCTION

• Given a set of m input-output data pairs (y_i, z_i), i = 1, . . . , m, from the physical system

• Hypothesize an input/output relation z = h(x, y), where x is a vector of unknown parameters, and h is known

• Find x that matches best the data in the sense that it minimizes the sum of squared errors

(1/2) Σ_{i=1}^m ‖z_i − h(x, y_i)‖²

• Example of a linear model: Fit the data pairs by a cubic polynomial approximation. Take

h(x, y) = x_3y³ + x_2y² + x_1y + x_0,

where x = (x_0, x_1, x_2, x_3) is the vector of unknown coefficients of the cubic polynomial
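A sketch of this example in code: since h is linear in x, the fit is a linear least squares problem (the data below are synthetic and purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
y = np.linspace(-1.0, 2.0, 20)                                  # synthetic inputs
z = 0.5 * y**3 - y + 1.0 + 0.1 * rng.standard_normal(y.size)    # synthetic outputs

A = np.vander(y, N=4)                      # columns: y^3, y^2, y, 1
x3, x2, x1, x0 = np.linalg.lstsq(A, z, rcond=None)[0]
print("fitted coefficients (x3, x2, x1, x0):", x3, x2, x1, x0)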

Page 54

NEURAL NETS

• Nonlinear model construction with multilayer

perceptrons

• x is the vector of weights

• Universal approximation property

Page 55

PATTERN CLASSIFICATION

• Objects are presented to us, and we wish to classify them in one of s categories 1, . . . , s, based on a vector y of their features.

• Classical maximum posterior probability approach: Assume we know

p(j|y) = P(object w/ feature vector y is of category j)

Assign object with feature vector y to category

j∗(y) = arg max_{j=1,...,s} p(j|y).

• If p(j|y) are unknown, we can estimate them using functions h_j(x_j, y) parameterized by vectors x_j. Obtain x_j by minimizing

(1/2) Σ_{i=1}^m (z_i^j − h_j(x_j, y_i))²

where z_i^j = 1 if the i-th sample object is of category j, and z_i^j = 0 otherwise
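A sketch of the resulting classifier, with the h_j given as Python callables returning the estimated p(j|y) (names and interface are illustrative):

import numpy as np

def classify(y, h_list):
    """h_list[j-1](y) is the estimated p(j|y); categories are numbered 1..s."""
    scores = [h(y) for h in h_list]
    return int(np.argmax(scores)) + 1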

Page 56

• Conjugate Direction Methods

• The Conjugate Gradient Method

• Quasi-Newton Methods

• Coordinate Descent Methods

• Recall the least-squares problem: minimize (1/2) Σ_{i=1}^m ‖z_i − h(x, y_i)‖²

Page 57

INCREMENTAL GRADIENT METHODS

• Steepest descent method

Figure: advantage of incrementalism; the axis marks the ratios min_i b_i/a_i and max_i b_i/a_i, the region R between them, and the minimizer x∗ inside R.

Page 58

VIEW AS GRADIENT METHOD W/ ERRORS

• Can write the incremental gradient method as a gradient method with errors

• Error term is proportional to the stepsize α^k

• Convergence (generically) for a diminishing stepsize (under a Lipschitz condition on ∇g_i g_i)

• Convergence to a “neighborhood” for a constant stepsize
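A sketch of the incremental gradient method discussed above, with a diminishing stepsize (the update schedule and names are illustrative):

def incremental_gradient(x, grad_g_list, num_passes=100):
    """One component gradient per update for f(x) = sum_i g_i(x)."""
    k = 0
    for _ in range(num_passes):
        for grad_gi in grad_g_list:
            alpha = 1.0 / (k + 1)          # diminishing stepsize with divergent sum
            x = x - alpha * grad_gi(x)
            k += 1
    return x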

Page 59

CONJUGATE DIRECTION METHODS

• Aim to improve convergence rate of steepest descent, without the overhead of Newton's method

• Analyzed for a quadratic model. They require n iterations to minimize f(x) = (1/2)x'Qx − b'x with Q an n × n positive definite matrix Q > 0.

• Analysis also applies to nonquadratic problems in the neighborhood of a nonsingular local min

• The directions d_1, . . . , d_k are Q-conjugate if d_i'Qd_j = 0 for all i ≠ j

Page 60

GENERATING Q-CONJUGATE DIRECTIONS

• Given a set of linearly independent vectors ξ_0, . . . , ξ_k, we can construct a set of Q-conjugate directions d_0, . . . , d_k s.t. Span(d_0, . . . , d_i) = Span(ξ_0, . . . , ξ_i)

• Gram-Schmidt procedure. Start with d_0 = ξ_0. If for some i < k, d_0, . . . , d_i are Q-conjugate and the above property holds, take

d_{i+1} = ξ_{i+1} + Σ_{j=0}^{i} c^{(i+1)j} d_j,

with the coefficients c^{(i+1)j} chosen so that d_{i+1} is Q-conjugate to d_0, . . . , d_i. For example,

d_1 = ξ_1 + c^{10}d_0

d_2 = ξ_2 + c^{20}d_0 + c^{21}d_1
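A sketch of the Gram-Schmidt Q-conjugation procedure above (numpy assumed; each coefficient is chosen exactly to cancel the corresponding Q-inner product):

import numpy as np

def q_conjugate_directions(xis, Q):
    ds = []
    for xi in xis:
        d = np.array(xi, dtype=float)
        for dj in ds:
            c = -(xi @ Q @ dj) / (dj @ Q @ dj)   # enforces d' Q dj = 0
            d = d + c * dj
        ds.append(d)
    return ds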
