
Handbook of Mathematics for Engineers and Scientists, part 152



where aij is the payoff (positive or negative) of player A against player B if player A uses the pure strategy Ai and player B uses the pure strategy Bj.

Remark. The sum of the payoffs of both players is zero on each move. (That is why the game is called a zero-sum game.)

Let αi = minj {aij} be the minimum possible payoff of player A if he uses the pure strategy Ai. If player A acts reasonably, he must choose a strategy Ai for which αi is maximal,

α = maxi {αi} = maxi minj {aij}. (19.2.1.28)

The number α is called the lower price of the game. Let βj = maxi {aij} be the maximum possible loss of player B if he uses the pure strategy Bj. If player B acts reasonably, he must choose a strategy Bj for which βj is minimal,

β = minj {βj} = minj maxi {aij}. (19.2.1.29)

The number β is called the upper price of the game.

Remark. The principle for constructing the strategies of player A (the first player) based on the maximization of minimal payoffs is called the maximin principle. The principle for constructing the strategies of player B (the second player) based on the minimization of maximal losses is called the minimax principle.

The lower price of the game is the guaranteed minimal payoff of player A if he follows the maximin principle. The upper price of the game is the guaranteed maximal loss of player B if he follows the minimax principle.

THEOREM. In a two-person zero-sum game, the lower price α and the upper price β satisfy the inequality

α ≤ β. (19.2.1.30)

If α = β, then the game is called a game with a saddle point, and a pair (Ai,opt, Bj,opt) of optimal strategies is called a saddle point of the payoff matrix. The entry v = aij corresponding to a saddle point (Ai,opt, Bj,opt) is called the game value. If a game has a saddle point, then one says that the game can be solved in pure strategies.

Remark. There can be several saddle points, but they all have the same value.
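These definitions translate directly into code. The following Python sketch (the payoff matrix is an invented example, not from the text) computes the lower price α, the upper price β, and any saddle points of a payoff matrix:

```python
def game_prices(a):
    """Return (alpha, beta) for a payoff matrix given as a list of rows."""
    alpha = max(min(row) for row in a)           # max_i min_j aij
    beta = min(max(col) for col in zip(*a))      # min_j max_i aij
    return alpha, beta

def saddle_points(a):
    """All (i, j) such that a[i][j] is minimal in its row and maximal in its column."""
    cols = list(zip(*a))
    return [(i, j) for i, row in enumerate(a) for j, v in enumerate(row)
            if v == min(row) and v == max(cols[j])]

# Row minima are (1, 2) and column maxima are (4, 2, 5), so alpha = beta = 2
# and this game is solvable in pure strategies with value v = 2.
A = [[4, 1, 3],
     [2, 2, 5]]
```

For the matrix A above, game_prices(A) returns (2, 2) and the only saddle point is (1, 1) (0-based indices).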

If the payoff matrix has no saddle points, i.e., the strict inequality α < β holds, then the search for a solution of the game leads to complex strategies in which a player randomly uses two or more pure strategies with certain frequencies. Such complex strategies are said to be mixed.

The strategies of player A are determined by the set x = (x1, ..., xm) of probabilities that the player uses the respective pure strategies A1, ..., Am. For player B, the strategies are determined by the set y = (y1, ..., yn) of probabilities that the player uses the respective pure strategies B1, ..., Bn. These sets of probabilities must satisfy the identity

x1 + x2 + · · · + xm = y1 + y2 + · · · + yn = 1.

The expectation of the payoff of player A is given by the function

H(x, y) = Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj. (19.2.1.31)


1026 CALCULUS OF VARIATIONS AND OPTIMIZATION

THE VON NEUMANN MINIMAX THEOREM. There exist optimal mixed strategies x* and y*, i.e., strategies such that

H(x, y*) ≤ H(x*, y*) ≤ H(x*, y) (19.2.1.32)

for any probabilities x and y.

The number v = H(x*, y*) is called the game price in mixed strategies.

MINIMAX THEOREM FOR ANTAGONISTIC TWO-PERSON ZERO-SUM GAMES.

For any payoff matrix (19.2.1.27),

v = max_{x1,...,xm} min_{y1,...,yn} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj = min_{y1,...,yn} max_{x1,...,xm} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj. (19.2.1.33)
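For 2×2 games without a saddle point there is a well-known closed-form mixed solution (standard material, though not derived in this excerpt). A Python sketch, with an invented payoff matrix:

```python
from fractions import Fraction

def solve_2x2(a):
    """Closed-form mixed strategies and value for a 2x2 game with no saddle point."""
    (a11, a12), (a21, a22) = a
    d = a11 + a22 - a12 - a21                 # nonzero when there is no saddle point
    x1 = Fraction(a22 - a21, d)               # probability that A plays A1
    y1 = Fraction(a22 - a12, d)               # probability that B plays B1
    v = Fraction(a11 * a22 - a12 * a21, d)    # game value
    return (x1, 1 - x1), (y1, 1 - y1), v

# alpha = 2 < beta = 3 for this matrix, so the solution is in mixed strategies.
x, y, v = solve_2x2([[3, 1], [2, 4]])
```

Here x = (1/2, 1/2), y = (3/4, 1/4), and v = 5/2; against x, every pure strategy of B yields the same expected payoff v, in agreement with (19.2.1.32).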

19.2.1-8 Relationship between game theory and linear programming.

Without loss of generality, we can assume that v > 0. This can be ensured by adding the same positive constant a > 0 to all entries aij of the payoff matrix (19.2.1.27); in this case, only the game price varies (it increases by a), while the optimal solution remains the same.

An antagonistic two-person zero-sum game can be reduced to a linear programming problem by the change of variables

v = 1/Zmin = 1/Wmax,   xi = v Xi (i = 1, 2, ..., m),   yj = v Yj (j = 1, 2, ..., n). (19.2.1.34)

The quantities Zmin, Wmax, Xi, and Yj form a solution of the following pair of dual problems:

Z = X1 + X2 + · · · + Xm → min,
a11 X1 + a21 X2 + · · · + am1 Xm ≥ 1,
a12 X1 + a22 X2 + · · · + am2 Xm ≥ 1,
. . . . . . . . . . . . . . . . . . . . . . .
a1n X1 + a2n X2 + · · · + amn Xm ≥ 1,
Xi ≥ 0 (i = 1, 2, ..., m); (19.2.1.35)

W = Y1 + Y2 + · · · + Yn → max,
a11 Y1 + a12 Y2 + · · · + a1n Yn ≤ 1,
a21 Y1 + a22 Y2 + · · · + a2n Yn ≤ 1,
. . . . . . . . . . . . . . . . . . . . . . .
am1 Y1 + am2 Y2 + · · · + amn Yn ≤ 1,
Yj ≥ 0 (j = 1, 2, ..., n). (19.2.1.36)
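For a small invented 2×2 matrix, the reduction can be carried out by hand. In a fully mixed 2×2 game both constraints of (19.2.1.35) are active at the optimum (a property of this particular example, not of games in general), so the linear program collapses to a 2×2 linear system, solved here by Cramer's rule:

```python
from fractions import Fraction

# Both constraints of (19.2.1.35) tight: a11*X1 + a21*X2 = 1, a12*X1 + a22*X2 = 1.
a11, a12, a21, a22 = 3, 1, 2, 4
det = a11 * a22 - a21 * a12          # determinant of the constraint matrix
X1 = Fraction(a22 - a21, det)        # Cramer's rule -> 1/5
X2 = Fraction(a11 - a12, det)        # Cramer's rule -> 1/5
Zmin = X1 + X2                       # 2/5
v = 1 / Zmin                         # game value 5/2, cf. (19.2.1.34)
x = (v * X1, v * X2)                 # optimal mixed strategy (1/2, 1/2)
```

The change of variables (19.2.1.34) thus recovers both the game value and the optimal mixed strategy from the linear programming solution.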


19.2.2 Nonlinear Programming

19.2.2-1 General statement of the nonlinear programming problem.

The nonlinear programming problem is the problem of finding n variables x = (x1, ..., xn) that provide an extremum of the objective function

Z(x) = f(x) → extremum (19.2.2.1)

and satisfy the system of constraints

ϕi(x) = 0 for i = 1, 2, ..., k,
ϕi(x) ≤ 0 for i = k + 1, k + 2, ..., l,
ϕi(x) ≥ 0 for i = l + 1, l + 2, ..., m. (19.2.2.2)

Here the objective function (19.2.2.1) and/or at least one of the functions ϕi(x) (i = 1, 2, ..., m) is nonlinear.

Depending on the properties of the functions f(x) and ϕi(x), the following types of problems are distinguished:

1. Convex programming.
2. Quadratic programming.
3. Geometric programming.

A necessary condition for a maximum of the function

Z(x) = f(x) → max (19.2.2.3)

under the inequality constraints

ϕi(x) ≤ 0 (i = 1, 2, ..., m)

is that there exist m + 1 nonnegative Lagrange multipliers λ0, λ1, ..., λm that are not simultaneously zero and satisfy the conditions

λi ≥ 0 (i = 0, 1, ..., m),
λi ϕi(x) = 0 (i = 1, 2, ..., m),
λ0 fxj + Σ_{i=1}^{m} λi (ϕi)xj = 0, (19.2.2.4)

where the derivatives fxj and (ϕi)xj are evaluated at x.

One of the most widely used methods of nonlinear programming is the penalty function method. This method approximates a problem with constraints by a problem without constraints whose objective function penalizes infeasibility. The higher the penalties, the closer the problem of maximizing the penalty function is to the original problem.
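A toy illustration of the penalty idea (the objective and constraint are invented for this sketch): maximize f(x) = −x² subject to 1 − x ≤ 0, whose exact solution is x* = 1. A quadratic penalty term is added, and the penalized function is maximized by a crude grid search; as the penalty coefficient μ grows, the unconstrained maximizer μ/(1 + μ) approaches x* = 1:

```python
def penalized(x, mu):
    """Objective f(x) = -x**2 with a quadratic penalty for violating x >= 1."""
    violation = max(0.0, 1.0 - x)
    return -x * x - mu * violation ** 2

def maximize(mu, lo=-2.0, hi=2.0, steps=40_000):
    """Maximize the penalized function by grid search (enough to show the trend)."""
    grid = (lo + (hi - lo) * k / steps for k in range(steps + 1))
    return max(grid, key=lambda x: penalized(x, mu))
```

maximize(1.0) is about 0.5, while maximize(100.0) is about 0.99: the penalized maximizer moves toward the feasible optimum as the penalty grows, as the text describes.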

19.2.2-2 Dynamic programming.

Dynamic programming is the branch of mathematical programming dealing with multistage optimal decision-making problems.

The general outline of a multistage optimal decision-making process is as follows.

Consider a controlled system S taken by the control from an initial state s0 to a state ŝ. Let



xk (k = 1, 2, ..., n) be the control at the kth stage, let x = (x1, ..., xn) be the control taking the system S from the state s0 to the state ŝ, and let sk be the state of the system after the kth control step. The efficiency of the control is characterized by an objective function that depends on the initial state and the control,

Z = F(s0, x). (19.2.2.5)

We assume that:

1. The state sk depends only on the preceding state sk–1 and the control xk at the kth step,

sk = ϕk(sk–1, xk) (k = 1, 2, ..., n). (19.2.2.6)

2. The objective function (19.2.2.5) is an additive function of the performance factor at each step. If the performance factor at the kth step is

Zk = fk(sk–1, xk) (k = 1, 2, ..., n), (19.2.2.7)

then the objective function (19.2.2.5) can be written as

Z = Σ_{k=1}^{n} fk(sk–1, xk). (19.2.2.8)

The dynamic programming problem. Find an admissible control x taking the system S from the state s0 to the state ŝ and maximizing (or minimizing) the objective function (19.2.2.8).

THEOREM (BELLMAN’S OPTIMALITY PRINCIPLE). For any state s of the system after any number of steps, one should choose the control at the current step so that this control, together with the optimal control at all subsequent steps, leads to the optimal payoff over all remaining steps, including the current one.

Let Z*k(sk–1) be the conditional maximum of the objective function under the optimal control over the n – k + 1 steps from the kth step to the end, under the assumption that the system is in the state sk–1 at the beginning of the kth step. The equations

Z*n(sn–1) = max_{xn} {fn(sn–1, xn)},
Z*k(sk–1) = max_{xk} {fk(sk–1, xk) + Z*k+1(sk)}   (k = n – 1, n – 2, ..., 1)

are called the Bellman equations. For any n and s0, the Bellman equations permit finding a solution of the dynamic programming problem, which is given by

Zmax = Z*1(s0).
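A minimal instance of the backward recursion (the reward table and budget are hypothetical): the state s is the budget still available, the control xk is the amount spent at stage k, the transition is sk = sk–1 – xk as in (19.2.2.6), and the objective is the additive sum (19.2.2.8):

```python
# Hypothetical rewards r[k][x] for spending x units at stage k + 1.
r = [[0, 5, 8, 9],
     [0, 4, 7, 11],
     [0, 6, 9, 10]]
B = 3  # initial state s0: the total budget

def bellman(r, budget):
    """Backward Bellman sweep; returns Zmax = Z*_1(s0)."""
    n = len(r)
    Z = [0] * (budget + 1)            # beyond the last stage the payoff is zero
    for k in reversed(range(n)):      # k = n, n - 1, ..., 1 (0-based here)
        Z = [max(r[k][x] + Z[s - x] for x in range(s + 1))
             for s in range(budget + 1)]
    return Z[budget]
```

For this table, bellman(r, B) returns 15, attained by spending one unit at every stage; brute-force enumeration of all feasible allocations gives the same value.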

References for Chapter 19

Akhiezer, N. I., Calculus of Variations, Taylor & Francis, London, New York, 1988.
Avriel, M., Nonlinear Programming: Analysis and Methods, Dover Publications, New York, 2003.
Bazaraa, M. S., Sherali, H. D., and Shetty, C. M., Nonlinear Programming: Theory and Algorithms, Wiley, New York, 2006.
Belegundu, A. D. and Chandrupatla, T. R., Optimization Concepts and Applications in Engineering, Bk&CD-ROM Edition, Prentice Hall, Englewood Cliffs, New Jersey, 1999.
Bellman, R., Adaptive Control Processes: A Guided Tour, Princeton University Press, Princeton, New Jersey, 1961.
Bellman, R., Dynamic Programming, Dover Edition, Dover Publications, New York, 2003.
Bellman, R. and Dreyfus, S. E., Applied Dynamic Programming, Princeton University Press, Princeton, New Jersey, 1962.
Bertsekas, D. P., Nonlinear Programming, 2nd Edition, Athena Scientific, Belmont, Massachusetts, 1999.
Bertsimas, D. and Tsitsiklis, J. N., Introduction to Linear Optimization (Athena Scientific Series in Optimization and Neural Computation, Vol. 6), Athena Scientific, Belmont, Massachusetts, 1997.
Bolza, O., Lectures on the Calculus of Variations, 3rd Edition, American Mathematical Society, Providence, Rhode Island, 2000.
Bonnans, J. F., Gilbert, J. C., Lemarechal, C., and Sagastizabal, C. A., Numerical Optimization, Springer, New York, 2003.
Boyd, S. and Vandenberghe, L., Convex Optimization, Cambridge University Press, Cambridge, 2004.
Brechtken-Manderscheid, U., Introduction to the Calculus of Variations, Chapman & Hall/CRC Press, Boca Raton, 1991.
Brinkhuis, J. and Tikhomirov, V., Optimization: Insights and Applications, Princeton University Press, Princeton, New Jersey, 2005.
Bronshtein, I. N., Semendyayev, K. A., Musiol, G., and Mühlig, H., Handbook of Mathematics, 4th Edition, Springer, New York, 2004.
Bronson, R. and Naadimuthu, G., Schaum’s Outline of Operations Research, 2nd Edition, McGraw-Hill, New York, 1997.

van Brunt, B., The Calculus of Variations, Springer, New York, 2003.
Calvert, J. E., Linear Programming, Brooks Cole, Stamford, 1989.
Chong, E. K. P. and Žak, S. H., An Introduction to Optimization, 2nd Edition, Wiley, New York, 2001.
Chvatal, V., Linear Programming (Series of Books in the Mathematical Sciences), W. H. Freeman, New York, 1983.
Cooper, L., Applied Nonlinear Programming for Engineers and Scientists, Aloray, Goshen, 1974.
Dacorogna, B., Introduction to the Calculus of Variations, Imperial College Press, London, 2004.
Darst, R. B., Introduction to Linear Programming (Pure and Applied Mathematics (Marcel Dekker)), CRC Press, Boca Raton, 1990.
Denardo, E. V., Dynamic Programming: Models and Applications, Dover Publications, New York, 2003.
Dreyfus, S. E., Dynamic Programming and the Calculus of Variations, Academic Press, New York, 1965.
Elsgolts, L., Differential Equations and the Calculus of Variations, University Press of the Pacific, Honolulu, Hawaii, 2003.
Ewing, G. M., Calculus of Variations with Applications (Mathematics Series), Dover Publications, New York, 1985.
Fletcher, R., Practical Methods of Optimization, 2nd Edition, Wiley, New York, 2000.
Fomin, S. V. and Gelfand, I. M., Calculus of Variations, Dover Publications, New York, 2000.
Fox, C., An Introduction to the Calculus of Variations, Dover Publications, New York, 1987.
Galeev, E. M. and Tikhomirov, V. M., Optimization: Theory, Examples, and Problems [in Russian], Editorial URSS, Moscow, 2000.
Gass, S. I., An Illustrated Guide to Linear Programming, Rep. Edition, Dover Publications, New York, 1990.
Gass, S. I., Linear Programming: Methods and Applications, 5th Edition, Dover Publications, New York, 2003.
Gill, Ph. E., Murray, W., and Wright, M. H., Practical Optimization, Rep. Edition, Academic Press, New York, 1982.
Giusti, E., Direct Methods in the Calculus of Variations, World Scientific Publishing Co., Hackensack, New Jersey, 2003.
Glicksman, A. M., Introduction to Linear Programming and the Theory of Games, Dover Publications, New York, 2001.
Hillier, F. S. and Lieberman, G. J., Introduction to Operations Research, McGraw-Hill, New York, 2002.
Horst, R. and Pardalos, P. M. (Editors), Handbook of Global Optimization, Kluwer Academic, Dordrecht, 1995.
Jensen, P. A. and Bard, J. F., Operations Research Models and Methods, Bk&CD-Rom Edition, Wiley, New York, 2002.
Jost, J. and Li-Jost, X., Calculus of Variations (Cambridge Studies in Advanced Mathematics), Cambridge University Press, Cambridge, 1999.
Kolman, B. and Beck, R. E., Elementary Linear Programming with Applications, 2nd Edition (Computer Science and Scientific Computing), Academic Press, New York, 1995.
Korn, G. A. and Korn, T. M., Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review, 2nd Rev. Edition, Dover Publications, New York, 2000.


Krasnov, M. L., Makarenko, G. K., and Kiselev, A. I., Problems and Exercises in the Calculus of Variations, Imported Publications, Inc., New York, 1985.
Lebedev, L. P. and Cloud, M. J., The Calculus of Variations and Functional Analysis with Optimal Control and Applications in Mechanics (Series on Stability, Vibration and Control of Systems, Series A, Vol. 12), World Scientific Publishing Co., Hackensack, New Jersey, 2003.
Liberti, L. and Maculan, N. (Editors), Global Optimization: From Theory to Implementation (Nonconvex Optimization and Its Applications), Springer, New York, 2006.
Luenberger, D. G., Linear and Nonlinear Programming, 2nd Edition, Springer, New York, 2003.
MacCluer, C. R., Calculus of Variations: Mechanics, Control, and Other Applications, Prentice Hall, Englewood Cliffs, New Jersey, 2004.
Mangasarian, O. L., Nonlinear Programming (Classics in Applied Mathematics, Vol. 10), Society for Industrial & Applied Mathematics, University City Science Center, Philadelphia, 1994.
Marlow, W. H., Mathematics for Operations Research, Dover Publications, New York, 1993.
Morse, Ph. M. and Kimball, G. E., Methods of Operations Research, Dover Publications, New York, 2003.
Moser, J., Selected Chapters in the Calculus of Variations: Lecture Notes by Oliver Knill (Lectures in Mathematics ETH Zurich), Birkhäuser Verlag, Basel, Stuttgart, 2003.
Murty, K. G., Linear Programming, Rev. Edition, Wiley, New York, 1983.
Nash, S. G. and Sofer, A., Linear and Nonlinear Programming, McGraw-Hill, New York, 1995.
Nocedal, J. and Wright, S., Numerical Optimization, Springer, New York, 2000.
Padberg, M., Linear Optimization and Extensions (Algorithms and Combinatorics), 2nd Edition, Springer, New York, 1999.
Pannell, D. J., Introduction to Practical Linear Programming, Bk&Disk Edition, Wiley, New York, 1996.
Pardalos, P. M. and Resende, M. G. C. (Editors), Handbook of Applied Optimization, Oxford University Press, Oxford, 2002.
Pardalos, P. M. and Romeijn, H. E. (Editors), Handbook of Global Optimization, Vol. 2 (Nonconvex Optimization and Its Applications), Springer, New York, 2002.
Pierre, D. A., Optimization Theory with Applications, Dover Publications, New York, 1987.
Rao, S. S., Engineering Optimization: Theory and Practice, 3rd Edition, Wiley, New York, 1996.
Rardin, R. L., Optimization in Operations Research, Prentice Hall, Englewood Cliffs, New Jersey, 1997.
Ross, S. M., Applied Probability Models with Optimization Applications, Rep. Edition (Dover Books on Mathematics), Dover Publications, New York, 1992.
Ruszczynski, A., Nonlinear Optimization, Princeton University Press, Princeton, New Jersey, 2006.
Sagan, H., Introduction to the Calculus of Variations, Rep. Edition, Dover Publications, New York, 1992.
Shenoy, G. V., Linear Programming: Methods and Applications, Halsted Press, New York, 1989.
Simon, C. P. and Blume, L., Mathematics for Economists, W. W. Norton & Company, New York, 1994.
Smith, D. R., Variational Methods in Optimization (Dover Books on Mathematics), Dover Publications, New York, 1998.
Strayer, J. K., Linear Programming and Its Applications (Undergraduate Texts in Mathematics), Springer-Verlag, Berlin, 1989.
Sundaram, R. K., A First Course in Optimization Theory, Cambridge University Press, Cambridge, 1996.
Taha, H. A., Operations Research: An Introduction, 7th Edition, Prentice Hall, Englewood Cliffs, New Jersey, 2002.
Tslaf, L. Ya., Calculus of Variations and Integral Equations, 3rd Edition [in Russian], Lan, Moscow, 2005.
Tuckey, C., Nonstandard Methods in the Calculus of Variations, Chapman & Hall/CRC Press, Boca Raton, 1993.
Vasilyev, F. P. and Ivanitskiy, A. Y., In-Depth Analysis of Linear Programming, Springer, New York, 2001.
Venkataraman, P., Applied Optimization with MATLAB Programming, Wiley, New York, 2001.
Wan, F., Introduction to the Calculus of Variations and Its Applications, 2nd Edition, Chapman & Hall/CRC Press, Boca Raton, 1995.
Weinstock, R., Calculus of Variations, Dover Publications, New York, 1974.
Winston, W. L., Operations Research: Applications and Algorithms (with CD-ROM and InfoTrac), 4th Edition, Duxbury Press, Boston, 2003.
Young, L. C., Lectures on the Calculus of Variations and Optimal Control Theory, American Mathematical Society, Providence, Rhode Island, 2000.


Probability Theory

20.1 Simplest Probabilistic Models

20.1.1 Probabilities of Random Events

20.1.1-1 Random events. Basic definitions.

The simplest indivisible mutually exclusive outcomes of an experiment are called elementary events ω. The set of all elementary outcomes, which we denote by the symbol Ω, is called the space of elementary events or the sample space. Any subset of Ω is called a random event A (or simply an event A). Elementary events that belong to A are said to favor A. In any probabilistic model, a certain condition set Σ is assumed to be fixed.

An event A implies an event B (A ⊆ B) if B occurs in each realization of Σ for which A occurs. Events A and B are said to be equivalent (A = B) if A implies B and B implies A, i.e., if, for each realization of Σ, both events A and B either occur or do not occur simultaneously.

The intersection C = A ∩ B = AB of events A and B is the event that both A and B occur. The elementary outcomes of the intersection AB are the elementary outcomes that simultaneously belong to A and B.

The union C = A ∪ B = A + B of events A and B is the event that at least one of the events A or B occurs. The elementary outcomes of the union A + B are the elementary outcomes that belong to at least one of the events A and B.

The difference C = A \ B = A – B of events A and B is the event that A occurs and B does not occur. The elementary outcomes of the difference A \ B are the elementary outcomes of A that do not belong to B.

The event that A does not occur is called the complement of A, or the complementary event, and is denoted by Ā. The elementary outcomes of Ā are the elementary outcomes that do not belong to the event A.

An event is said to be sure if it necessarily occurs for each realization of the condition set Σ. Obviously, the sure event is equivalent to the space of elementary events, and hence the sure event is denoted by the symbol Ω.

An event is said to be impossible if it cannot occur for any realization of the condition set Σ. Obviously, the impossible event does not contain any elementary outcome and hence is denoted by the symbol ∅.

Two events A and Ā are said to be opposite if they simultaneously satisfy the following two conditions:

A ∪ Ā = Ω,   A ∩ Ā = ∅.

Events A and B are said to be incompatible, or mutually exclusive, if their simultaneous realization is impossible, i.e., if A ∩ B = ∅.

Events H1, ..., Hn are said to form a complete group of events, or to be collectively exhaustive, if at least one of them necessarily occurs for each realization of the condition set Σ, i.e., if

H1 ∪ · · · ∪ Hn = Ω.
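The event operations above map directly onto set operations; a small Python sketch with a die-roll sample space (an invented example, not from the text):

```python
# One roll of a die: the sample space Omega and two events as subsets of it.
Omega = frozenset(range(1, 7))
A = frozenset({2, 4, 6})       # "the outcome is even"
B = frozenset({4, 5, 6})       # "the outcome exceeds 3"

A_and_B = A & B                # intersection: both occur -> {4, 6}
A_or_B = A | B                 # union: at least one occurs -> {2, 4, 5, 6}
A_minus_B = A - B              # difference: A but not B -> {2}
not_A = Omega - A              # complement of A -> {1, 3, 5}

# A and its complement are opposite events:
assert A | not_A == Omega and A & not_A == frozenset()
```

The two asserted conditions are exactly the defining pair for opposite events, and A together with not_A also forms a complete group.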

