Engineering Optimization: Theory and Practice
Fourth Edition
Singiresu S. Rao
John Wiley & Sons, Inc.
Copyright © 2009 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: While the publisher and the author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor the author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information about our other products and services, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
1.5.1 Classification Based on the Existence of Constraints 14
1.5.2 Classification Based on the Nature of the Design Variables 15
1.5.3 Classification Based on the Physical Structure of the Problem 16
1.5.4 Classification Based on the Nature of the Equations Involved 19
1.5.5 Classification Based on the Permissible Values of the Design Variables 28
1.5.6 Classification Based on the Deterministic Nature of the Variables 29
1.5.7 Classification Based on the Separability of the Functions 30
1.5.8 Classification Based on the Number of Objective Functions 32
2.4 Multivariable Optimization with Equality Constraints 75
2.4.2 Solution by the Method of Constrained Variation 77
2.5 Multivariable Optimization with Inequality Constraints 93
4.3.3 Primal–Dual Relations When the Primal Is in Standard Form 193
4.5.1 Changes in the Right-Hand-Side Constants bi 208
4.5.2 Changes in the Cost Coefficients cj 212
4.5.4 Changes in the Constraint Coefficients aij 215
4.7 Karmarkar's Interior Method 222
5.13.2 Implementation in Multivariable Optimization Problems 293
6.8.2 Rate of Change of a Function along a Direction 338
7 Nonlinear Programming III: Constrained Optimization Techniques 380
7.16 Extrapolation Techniques in the Interior Penalty Function Method 447
7.16.1 Extrapolation of the Design Vector X 448
7.16.2 Extrapolation of the Function f 450
7.18 Penalty Function Method for Problems with Mixed Equality and Inequality Constraints
7.18.1 Interior Penalty Function Method 454
7.19 Penalty Function Method for Parametric Constraints 456
7.20.3 Mixed Equality–Inequality-Constrained Problems 463
7.21.2 Testing the Kuhn–Tucker Conditions 465
8.4 Solution of an Unconstrained Geometric Programming Program Using Differential Calculus
8.9 Primal and Dual Programs in the Case of Less-Than Inequalities 510
9.2.2 Representation of a Multistage Decision Process 546
9.2.3 Conversion of a Nonserial System to a Serial System 548
9.3 Concept of Suboptimization and Principle of Optimality 549
9.5 Example Illustrating the Calculus Method of Solution 555
9.6 Example Illustrating the Tabular Method of Solution 560
9.7 Conversion of a Final Value Problem into an Initial Value Problem 566
10.5.1 Representation of an Integer Variable by an Equivalent System of Binary Variables
10.5.2 Conversion of a Zero–One Polynomial Programming Problem into a Zero–One LP Problem
11.2.2 Random Variables and Probability Density Functions 633
11.2.5 Jointly Distributed Random Variables 639
11.2.8 Probability Distributions 643
12.2.3 Lagrange Multipliers and Constraints 675
12.3.1 Necessary Conditions for Optimal Control 679
12.4.1 Optimality Criteria with a Single Displacement Constraint 683
12.4.2 Optimality Criteria with Multiple Displacement Constraints 684
13.4.4 Solution of the Constrained Optimization Problem 711
14.4 Derivatives of Static Displacements and Stresses 745
14.5 Derivatives of Eigenvalues and Eigenvectors 747
14.7 Sensitivity of Optimum Solution to Problem Parameters 751
14.7.1 Sensitivity Equations Using Kuhn–Tucker Conditions 752
14.10.2 Inverted Utility Function Method 764
Preface

The ever-increasing demand on engineers to lower production costs to withstand global competition has prompted engineers to look for rigorous methods of decision making, such as optimization methods, to design and produce products and systems both economically and efficiently. Optimization techniques, having reached a degree of maturity in recent years, are being used in a wide spectrum of industries, including aerospace, automotive, chemical, electrical, construction, and manufacturing industries. With rapidly advancing computer technology, computers are becoming more powerful, and correspondingly, the size and the complexity of the problems that can be solved using optimization techniques are also increasing. Optimization methods, coupled with modern tools of computer-aided design, are also being used to enhance the creative process of conceptual and detailed design of engineering systems.
The purpose of this textbook is to present the techniques and applications of engineering optimization in a comprehensive manner. The style of the prior editions has been retained, with the theory, computational aspects, and applications of engineering optimization presented with detailed explanations. As in previous editions, essential proofs and developments of the various techniques are given in a simple manner without sacrificing accuracy. New concepts are illustrated with the help of numerical examples. Although most engineering design problems can be solved using nonlinear programming techniques, there are a variety of engineering applications for which other optimization methods, such as linear, geometric, dynamic, integer, and stochastic programming techniques, are most suitable. The theory and applications of all these techniques are also presented in the book. Some of the recently developed methods of optimization, such as genetic algorithms, simulated annealing, particle swarm optimization, ant colony optimization, neural-network-based methods, and fuzzy optimization, are also discussed. Favorable reactions and encouragement from professors, students, and other users of the book have provided me with the impetus to prepare this fourth edition of the book. The following changes have been made from the previous edition:

• Some less-important sections were condensed or deleted.
• Some sections were rewritten for better clarity.
• Some sections were expanded.
• A new chapter on modern methods of optimization is added.
• Several examples to illustrate the use of Matlab for the solution of different types of optimization problems are given.
Features
Each topic in Engineering Optimization: Theory and Practice is self-contained, with all concepts explained fully and the derivations presented with complete details. The computational aspects are emphasized throughout with design examples and problems taken from several fields of engineering to make the subject appealing to all branches of engineering. A large number of solved examples, review questions, problems, project-type problems, figures, and references are included to enhance the presentation of the material.
Specific features of the book include:

• More than 130 illustrative examples accompanying most topics.
• More than 480 references to the literature of engineering optimization theory and applications.
• More than 460 review questions to help students in reviewing and testing their understanding of the text material.
• More than 510 problems, with solutions to most problems in the instructor's manual.
• More than 10 examples to illustrate the use of Matlab for the numerical solution of optimization problems.
• Answers to review questions at the web site of the book, www.wiley.com/rao.
I used different parts of the book to teach optimum design and engineering optimization courses at the junior/senior level as well as first-year-graduate level at Indian Institute of Technology, Kanpur, India; Purdue University, West Lafayette, Indiana; and University of Miami, Coral Gables, Florida. At University of Miami, I cover Chapters 1, 2, 3, 5, 6, and 7 and parts of Chapters 8, 10, 12, and 13 in a dual-level course entitled Mechanical System Optimization. In this course, a design project is also assigned to each student in which the student identifies, formulates, and solves a practical engineering problem of his/her interest by applying or modifying an optimization technique. This design project gives the student a feeling for ways that optimization methods work in practice. The book can also be used, with some supplementary material, for a second course on engineering optimization or optimum design or structural optimization. The relative simplicity with which the various topics are presented makes the book useful both to students and to practicing engineers for purposes of self-study. The book also serves as a reference source for different engineering optimization applications. Although the emphasis of the book is on engineering applications, it would also be useful to other areas, such as operations research and economics. A knowledge of matrix theory and differential calculus is assumed on the part of the reader.

Contents
The book consists of fourteen chapters and three appendixes. Chapter 1 provides an introduction to engineering optimization and optimum design and an overview of optimization methods. The concepts of design space, constraint surfaces, and contours of objective function are introduced here. In addition, the formulation of various types of optimization problems is illustrated through a variety of examples taken from various fields of engineering. Chapter 2 reviews the essentials of differential calculus useful in finding the maxima and minima of functions of several variables. The methods of constrained variation and Lagrange multipliers are presented for solving problems with equality constraints. The Kuhn–Tucker conditions for inequality-constrained problems are given along with a discussion of convex programming problems.
Chapters 3 and 4 deal with the solution of linear programming problems. The characteristics of a general linear programming problem and the development of the simplex method of solution are given in Chapter 3. Some advanced topics in linear programming, such as the revised simplex method, duality theory, the decomposition principle, and post-optimality analysis, are discussed in Chapter 4. The extension of linear programming to solve quadratic programming problems is also considered in Chapter 4.
Chapters 5–7 deal with the solution of nonlinear programming problems. In Chapter 5, numerical methods of finding the optimum solution of a function of a single variable are given. Chapter 6 deals with the methods of unconstrained optimization. The algorithms for various zeroth-, first-, and second-order techniques are discussed along with their computational aspects. Chapter 7 is concerned with the solution of nonlinear optimization problems in the presence of inequality and equality constraints. Both the direct and indirect methods of optimization are discussed. The methods presented in this chapter can be treated as the most general techniques for the solution of any optimization problem.
Chapter 8 presents the techniques of geometric programming. The solution techniques for problems of mixed inequality constraints and complementary geometric programming are also considered. In Chapter 9, computational procedures for solving discrete and continuous dynamic programming problems are presented. The problem of dimensionality is also discussed. Chapter 10 introduces integer programming and gives several algorithms for solving integer and discrete linear and nonlinear optimization problems. Chapter 11 reviews the basic probability theory and presents techniques of stochastic linear, nonlinear, and geometric programming. The theory and applications of calculus of variations, optimal control theory, and optimality criteria methods are discussed briefly in Chapter 12. Chapter 13 presents several modern methods of optimization, including genetic algorithms, simulated annealing, particle swarm optimization, ant colony optimization, neural-network-based methods, and fuzzy system optimization. Several of the approximation techniques used to speed up the convergence of practical mechanical and structural optimization problems, as well as parallel computation and multiobjective optimization techniques, are outlined in Chapter 14. Appendix A presents the definitions and properties of convex and concave functions. A brief discussion of the computational aspects and some of the commercial optimization programs is given in Appendix B. Finally, Appendix C presents a brief introduction to Matlab, the optimization toolbox, and the use of Matlab programs for the solution of optimization problems.
1 Introduction to Optimization

1.1 INTRODUCTION

Since the effort required or the benefit desired in any practical situation can be expressed as a function of certain decision variables, optimization can be defined as the process of finding the conditions that give the maximum or minimum value of a function. It can be seen from Fig. 1.1 that if a point x∗ corresponds to the minimum value of a function f(x), the same point also corresponds to the maximum value of the negative of the function, −f(x). Thus, without loss of generality, optimization can be taken to mean minimization, since the maximum of a function can be found by seeking the minimum of the negative of the same function.
In addition, the following operations on the objective function will not change the optimum solution x∗ (see Fig. 1.2):

1. Multiplication (or division) of f(x) by a positive constant c.
2. Addition (or subtraction) of a positive constant c to (or from) f(x).
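These invariances are easy to verify numerically. A minimal sketch, using an arbitrary quadratic f(x) = (x − 3)² + 1 and the constant c = 5 (both chosen only for illustration):

```python
import numpy as np

# f(x) = (x - 3)^2 + 1, minimized at x* = 3
f = lambda x: (x - 3.0) ** 2 + 1.0

x = np.linspace(0.0, 6.0, 601)  # grid containing x* = 3

x_min_f = x[np.argmin(f(x))]          # minimizer of f(x)
x_max_negf = x[np.argmax(-f(x))]      # maximizer of -f(x)
x_min_cf = x[np.argmin(5.0 * f(x))]   # minimizer of c*f(x), c > 0
x_min_fc = x[np.argmin(f(x) + 5.0)]   # minimizer of f(x) + c

# All four operations leave the optimizer x* unchanged.
assert x_min_f == x_max_negf == x_min_cf == x_min_fc
```

Negating, positively scaling, or shifting the objective moves its value but not the location of the optimum, which is exactly the statement illustrated by Figs. 1.1 and 1.2.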
There is no single method available for solving all optimization problems efficiently. Hence a number of optimization methods have been developed for solving different types of optimization problems. The optimum seeking methods are also known as mathematical programming techniques and are generally studied as a part of operations research. Operations research is a branch of mathematics concerned with the application of scientific methods and techniques to decision making problems and with establishing the best or optimal solutions. The beginnings of the subject of operations research can be traced to the early period of World War II. During the war, the British military faced the problem of allocating very scarce and limited resources (such as fighter airplanes, radars, and submarines) to several activities (deployment to numerous targets and destinations). Because there were no systematic methods available to solve resource allocation problems, the military called upon a team of mathematicians to develop methods for solving the problem in a scientific manner. The methods developed by the team were instrumental in the winning of the Air Battle by Britain. These methods, such as linear programming, which were developed as a result of research on (military) operations, subsequently became known as the methods of operations research.
Figure 1.1 Minimum of f(x) is the same as the maximum of −f(x).

Figure 1.2 Optimum solution of cf(x) or c + f(x) is the same as that of f(x).
Table 1.1 lists various mathematical programming techniques together with other well-defined areas of operations research. The classification given in Table 1.1 is not unique; it is given mainly for convenience.

Mathematical programming techniques are useful in finding the minimum of a function of several variables under a prescribed set of constraints. Stochastic process techniques can be used to analyze problems described by a set of random variables having known probability distributions. Statistical methods enable one to analyze the experimental data and build empirical models to obtain the most accurate representation of the physical situation. This book deals with the theory and application of mathematical programming techniques suitable for the solution of engineering design problems.
Table 1.1 Methods of Operations Research

Mathematical programming or optimization techniques:
Dynamic programming
Integer programming
Stochastic programming
Separable programming
Multiobjective programming
Network methods: CPM and PERT
Game theory

Modern or nontraditional optimization techniques:
Genetic algorithms
Simulated annealing
Ant colony optimization
Particle swarm optimization
Neural networks
Fuzzy optimization
1.2 HISTORICAL DEVELOPMENT
The existence of optimization methods can be traced to the days of Newton, Lagrange, and Cauchy. The development of differential calculus methods of optimization was possible because of the contributions of Newton and Leibnitz to calculus. The foundations of calculus of variations, which deals with the minimization of functionals, were laid by Bernoulli, Euler, Lagrange, and Weierstrass. The method of optimization for constrained problems, which involves the addition of unknown multipliers, became known by the name of its inventor, Lagrange. Cauchy made the first application of the steepest descent method to solve unconstrained minimization problems. Despite these early contributions, very little progress was made until the middle of the twentieth century, when high-speed digital computers made implementation of the optimization procedures possible and stimulated further research on new methods. Spectacular advances followed, producing a massive literature on optimization techniques. This advancement also resulted in the emergence of several well-defined new areas in optimization theory.

It is interesting to note that the major developments in the area of numerical methods of unconstrained optimization have been made in the United Kingdom only in the 1960s. The development of the simplex method by Dantzig in 1947 for linear programming problems and the annunciation of the principle of optimality in 1957 by Bellman for dynamic programming problems paved the way for development of the methods of constrained optimization. Work by Kuhn and Tucker in 1951 on the necessary and sufficiency conditions for the optimal solution of programming problems laid the foundations for a great deal of later research in nonlinear programming. The contributions of Zoutendijk and Rosen to nonlinear programming during the early 1960s have been significant. Although no single technique has been found to be universally applicable for nonlinear programming problems, the work of Carroll and of Fiacco and McCormick allowed many difficult problems to be solved by using the well-known techniques of unconstrained optimization. Geometric programming was developed in the 1960s by Duffin, Zener, and Peterson. Gomory did pioneering work in integer programming, one of the most exciting and rapidly developing areas of optimization. The reason for this is that most real-world applications fall under this category of problems. Dantzig and Charnes and Cooper developed stochastic programming techniques and solved problems by assuming design parameters to be independent and normally distributed. The desire to optimize more than one objective or goal while satisfying the physical limitations led to the development of multiobjective programming methods. Goal programming is a well-known technique for solving specific types of multiobjective optimization problems. Goal programming was originally proposed for linear problems by Charnes and Cooper in 1961. The foundations of game theory were laid by von Neumann in 1928, and since then the technique has been applied to solve several mathematical economics and military problems. Only during the last few years has game theory been applied to solve engineering design problems.
Modern Methods of Optimization. The modern optimization methods, also sometimes called nontraditional optimization methods, have emerged as powerful and popular methods for solving complex engineering optimization problems in recent years. These methods include genetic algorithms, simulated annealing, particle swarm optimization, ant colony optimization, neural-network-based optimization, and fuzzy optimization. The genetic algorithms are computerized search and optimization algorithms based on the mechanics of natural genetics and natural selection. The genetic algorithms were originally proposed by John Holland in 1975. The simulated annealing method is based on the mechanics of the cooling process of molten metals through annealing. The method was originally developed by Kirkpatrick, Gelatt, and Vecchi.

The particle swarm optimization algorithm mimics the behavior of social organisms such as a colony or swarm of insects (for example, ants, termites, bees, and wasps), a flock of birds, and a school of fish. The algorithm was originally proposed by Kennedy and Eberhart in 1995. The ant colony optimization is based on the cooperative behavior of ant colonies, which are able to find the shortest path from their nest to a food source. The method was first developed by Marco Dorigo in 1992. The neural network methods are based on the immense computational power of the nervous system to solve perceptional problems in the presence of a massive amount of sensory data through its parallel processing capability. The method was originally used for optimization by Hopfield and Tank in 1985. The fuzzy optimization methods were developed to solve optimization problems involving design data, objective function, and constraints stated in imprecise form involving vague and linguistic descriptions. The fuzzy approaches for single and multiobjective optimization in engineering design were first presented by Rao in 1986.
1.3 ENGINEERING APPLICATIONS OF OPTIMIZATION
Optimization, in its broadest sense, can be applied to solve any engineering problem. Some typical applications from different engineering disciplines indicate the wide scope of the subject:
1. Design of aircraft and aerospace structures for minimum weight
2. Finding the optimal trajectories of space vehicles
3. Design of civil engineering structures such as frames, foundations, bridges, towers, chimneys, and dams for minimum cost
4. Minimum-weight design of structures for earthquake, wind, and other types of random loading
5. Design of water resources systems for maximum benefit
6. Optimal plastic design of structures
7. Optimum design of linkages, cams, gears, machine tools, and other mechanical components
8. Selection of machining conditions in metal-cutting processes for minimum production cost
9. Design of material handling equipment, such as conveyors, trucks, and cranes, for minimum cost
10. Design of pumps, turbines, and heat transfer equipment for maximum efficiency
11. Optimum design of electrical machinery such as motors, generators, and transformers
12. Optimum design of electrical networks
13. Shortest route taken by a salesperson visiting various cities during one tour
14. Optimal production planning, controlling, and scheduling
15. Analysis of statistical data and building empirical models from experimental results to obtain the most accurate representation of the physical phenomenon
16. Optimum design of chemical processing equipment and plants
17. Design of optimum pipeline networks for process industries
18. Selection of a site for an industry
19. Planning of maintenance and replacement of equipment to reduce operating costs
1.4 STATEMENT OF AN OPTIMIZATION PROBLEM
An optimization or a mathematical programming problem can be stated as follows:

Find X = {x1, x2, . . . , xn}T which minimizes f(X)
subject to the constraints
gj(X) ≤ 0, j = 1, 2, . . . , m
lj(X) = 0, j = 1, 2, . . . , p          (1.1)

where X is an n-dimensional vector called the design vector, f(X) is termed the objective function, and gj(X) and lj(X) are known as inequality and equality constraints, respectively. The number of variables n and the number of constraints m and/or p need not be related in any way. The problem stated in Eq. (1.1) is called a constrained optimization problem.† Some optimization problems do not involve any constraints and can be stated as

Find X = {x1, x2, . . . , xn}T which minimizes f(X)          (1.2)

Such problems are called unconstrained optimization problems.
1.4.1 Design Vector

Any engineering system or component is defined by a set of quantities, some of which are viewed as variables during the design process. Certain quantities are usually fixed at the outset and these are called preassigned parameters. All the other quantities are treated as variables in the design process and are called design or decision variables xi, i = 1, 2, . . . , n. The design variables are collectively represented as a design vector X = {x1, x2, . . . , xn}T. As an example, consider the design of the gear pair shown in Fig. 1.3, characterized by its face width b, number of teeth T1 and T2, center distance d, pressure angle ψ, tooth profile, and material. If the center distance d, pressure angle ψ, tooth profile, and material of the gears are fixed in advance, these quantities can be called preassigned parameters. The remaining quantities can be collectively represented by a design vector X = {x1, x2, x3}T = {b, T1, T2}T. If there are no restrictions on the choice of b, T1, and T2, any set of three numbers will constitute a design for the gear pair. If an n-dimensional Cartesian space with each coordinate axis representing a design variable xi (i = 1, 2, . . . , n) is considered, the space is called

† In the mathematical programming literature, the equality constraints lj(X) = 0, j = 1, 2, . . . , p are often neglected, for simplicity, in the statement of a constrained optimization problem, although several methods are available for handling problems with equality constraints.
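A problem of the form stated in Eq. (1.1) can be passed to a general-purpose solver. The sketch below uses SciPy's `minimize` with a hypothetical objective and constraints chosen only for illustration; note that SciPy expects inequality constraints as g(X) ≥ 0, the opposite sign convention of Eq. (1.1):

```python
from scipy.optimize import minimize

# Hypothetical instance of problem (1.1), chosen only for illustration:
#   minimize   f(X)  = (x1 - 2)^2 + (x2 - 1)^2
#   subject to g1(X) = x1 + x2 - 2 <= 0   (inequality constraint)
#              l1(X) = x1 - x2     =  0   (equality constraint)
f = lambda X: (X[0] - 2.0) ** 2 + (X[1] - 1.0) ** 2

cons = [
    # SciPy wants g(X) >= 0, so the sign of g1 is flipped here.
    {"type": "ineq", "fun": lambda X: -(X[0] + X[1] - 2.0)},
    {"type": "eq",   "fun": lambda X: X[0] - X[1]},
]

res = minimize(f, x0=[0.0, 0.0], constraints=cons, method="SLSQP")
print(res.x)  # optimum design vector X*
```

For this convex instance the equality constraint forces x1 = x2 and the inequality caps their sum at 2, so the solver should settle near X∗ = {1, 1}T.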
Figure 1.3 Gear pair in mesh.
the design variable space or simply design space. Each point in the n-dimensional design space is called a design point and represents either a possible or an impossible solution to the design problem. In the case of the design of a gear pair, the design point {1.0, 20, 40}T, for example, represents a possible solution, whereas the design point {1.0, −20, 40.5}T represents an impossible solution, since it is not possible to have either a negative value or a fractional value for the number of teeth.
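The possible/impossible distinction can be phrased as a quick validity check. A sketch under the rules stated above (positive face width, positive integer tooth numbers); the function name is a hypothetical helper, not from the text:

```python
def is_possible_design(b, T1, T2):
    """A gear design point {b, T1, T2} is possible only if the face width is
    positive and both tooth numbers are positive integers."""
    teeth_ok = all(float(T).is_integer() for T in (T1, T2))
    return b > 0 and T1 > 0 and T2 > 0 and teeth_ok

print(is_possible_design(1.0, 20, 40))      # the possible point from the text
print(is_possible_design(1.0, -20, 40.5))   # the impossible point from the text
```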
1.4.2 Design Constraints

In many practical problems, the design variables cannot be chosen arbitrarily; rather, they have to satisfy certain specified functional and other requirements. The restrictions that must be satisfied to produce an acceptable design are collectively called design constraints. Constraints that represent limitations on the behavior or performance of the system are termed behavior or functional constraints. Constraints that represent physical limitations on design variables, such as availability, fabricability, and transportability, are known as geometric or side constraints. For example, for the gear pair shown in Fig. 1.3, the face width b cannot be taken smaller than a certain value, due to strength requirements. Similarly, the ratio of the numbers of teeth, T1/T2, is dictated by the speeds of the input and output shafts, N1 and N2. Since these constraints depend on the performance of the gear pair, they are called behavior constraints. The values of T1 and T2 cannot be any real numbers but can only be integers. Further, there can be upper and lower bounds on T1 and T2 due to manufacturing limitations. Since these constraints depend on the physical limitations, they are called side constraints.
1.4.3 Constraint Surface
For illustration, consider an optimization problem with only inequality constraints gj(X) ≤ 0. The set of values of X that satisfy the equation gj(X) = 0 forms a hypersurface in the design space and is called a constraint surface. Note that this is an (n − 1)-dimensional subspace, where n is the number of design variables. The constraint surface divides the design space into two regions: one in which gj(X) < 0 and the other in which gj(X) > 0. Thus the points lying on the hypersurface will satisfy the constraint gj(X) critically, whereas the points lying in the region where gj(X) > 0 are infeasible or unacceptable, and the points lying in the region where gj(X) < 0 are feasible or acceptable. The collection of all the constraint surfaces gj(X) = 0, j = 1, 2, . . . , m, which separates the acceptable region, is called the composite constraint surface.
Figure 1.4 shows a hypothetical two-dimensional design space where the infeasible region is indicated by hatched lines. A design point that lies on one or more than one constraint surface is called a bound point, and the associated constraint is called an active constraint. Design points that do not lie on any constraint surface are known as free points. Depending on whether a particular design point belongs to the acceptable or unacceptable region, it can be identified as one of the following four types:

1. Free and acceptable point
2. Free and unacceptable point
3. Bound and acceptable point
4. Bound and unacceptable point

All four types of points are shown in Fig. 1.4.
Figure 1.4 Constraint surfaces in a hypothetical two-dimensional design space.
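The four point types can be identified programmatically. A minimal sketch with three hypothetical constraints gj(X) ≤ 0 chosen only for illustration:

```python
# Hypothetical 2-D constraints g_j(X) <= 0, chosen only for illustration:
#   g1 = x1 + x2 - 4,   g2 = -x1,   g3 = -x2
constraints = [
    lambda x: x[0] + x[1] - 4.0,
    lambda x: -x[0],
    lambda x: -x[1],
]

def classify(x, tol=1e-9):
    """Label a design point as free/bound and acceptable/unacceptable."""
    g = [gj(x) for gj in constraints]
    bound = any(abs(gj) <= tol for gj in g)   # lies on some constraint surface
    acceptable = all(gj <= tol for gj in g)   # violates no constraint
    return ("bound" if bound else "free") + " and " + \
           ("acceptable" if acceptable else "unacceptable")

print(classify((1.0, 1.0)))   # interior point
print(classify((2.0, 2.0)))   # lies on g1 = 0
print(classify((5.0, 1.0)))   # violates g1
```

The interior point is free and acceptable, the point on g1 = 0 is bound and acceptable, and the point violating g1 is free and unacceptable.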
1.4.4 Objective Function
The conventional design procedures aim at finding an acceptable or adequate design that merely satisfies the functional and other requirements of the problem. In general, there will be more than one acceptable design, and the purpose of optimization is to choose the best one of the many acceptable designs available. Thus a criterion has to be chosen for comparing the different alternative acceptable designs and for selecting the best one. The criterion with respect to which the design is optimized, when expressed as a function of the design variables, is known as the criterion or merit or objective function. The choice of objective function is governed by the nature of the problem. The objective function for minimization is generally taken as weight in aircraft and aerospace structural design problems. In civil engineering structural designs, the objective is usually taken as the minimization of cost. The maximization of mechanical efficiency is the obvious choice of an objective in mechanical engineering systems design. Thus the choice of the objective function appears to be straightforward in most design problems. However, there may be cases where the optimization with respect to a particular criterion may lead to results that may not be satisfactory with respect to another criterion. For example, in mechanical design, a gearbox transmitting the maximum power may not have the minimum weight. Similarly, in structural design, the minimum weight design may not correspond to the minimum stress design, and the minimum stress design, again, may not correspond to the maximum frequency design. Thus the selection of the objective function can be one of the most important decisions in the whole optimum design process.
In some situations, there may be more than one criterion to be satisfied simultaneously. For example, a gear pair may have to be designed for minimum weight and maximum efficiency while transmitting a specified horsepower. An optimization problem involving multiple objective functions is known as a multiobjective programming problem. With multiple objectives there arises a possibility of conflict, and one simple way to handle the problem is to construct an overall objective function as a linear combination of the conflicting multiple objective functions. Thus if f1(X) and f2(X) denote two objective functions, construct a new (overall) objective function for optimization as

f(X) = α1 f1(X) + α2 f2(X)   (1.3)

where α1 and α2 are constants whose values indicate the relative importance of one objective function relative to the other.
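The weighted-sum idea of Eq. (1.3) can be sketched in code; the two quadratic objectives, the weights, and the search grid below are illustrative assumptions, not values from the text:

```python
# Weighted-sum scalarization of two objectives, as in Eq. (1.3).
# f1, f2 and the weights alpha1, alpha2 are hypothetical illustrations.

def f1(x):
    return (x - 1.0) ** 2     # first objective, minimized at x = 1

def f2(x):
    return (x - 3.0) ** 2     # second objective, minimized at x = 3

def overall(x, alpha1=0.5, alpha2=0.5):
    # overall objective of Eq. (1.3): alpha1*f1 + alpha2*f2
    return alpha1 * f1(x) + alpha2 * f2(x)

# crude scan of candidate points in [0, 5)
xs = [i * 0.01 for i in range(500)]
x_best = min(xs, key=overall)
```

With equal weights the combined minimizer falls midway between the two individual minima (x ≈ 2); shifting α1 and α2 drags it toward whichever objective is weighted more heavily.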
The locus of all points satisfying f(X) = C = constant forms a hypersurface in the design space, and each value of C corresponds to a different member of a family of surfaces. These surfaces, called objective function surfaces, are shown in a hypothetical two-dimensional design space in Fig. 1.5.
Figure 1.5 Contours of the objective function.

Once the objective function surfaces are drawn along with the constraint surfaces, the optimum point can be determined without much difficulty. But the main problem is that as the number of design variables exceeds two or three, the constraint and objective function surfaces become too complex even to visualize, and the problem has to be solved purely as a mathematical problem. The following example illustrates the graphical optimization procedure.
Example 1.1 Design a uniform column of tubular section, with hinge joints at both ends (Fig. 1.6), to carry a compressive load P = 2500 kgf for minimum cost. The column is made of a material that has a yield stress (σy) of 500 kgf/cm^2, modulus of elasticity (E) of 0.85 × 10^6 kgf/cm^2, and weight density (ρ) of 0.0025 kgf/cm^3. The length of the column is 250 cm. The stress induced in the column should be less than the buckling stress as well as the yield stress. The mean diameter of the column is restricted to lie between 2 and 14 cm, and columns with thicknesses outside the range 0.2 to 0.8 cm are not available in the market. The cost of the column includes material and construction costs and can be taken as 5W + 2d, where W is the weight in kilograms force and d is the mean diameter of the column in centimeters.
SOLUTION The design variables are the mean diameter (d) and the tube thickness (t):

X = {x1, x2} = {d, t}   (E1)

The objective function to be minimized is given by

f(X) = 5W + 2d = 5ρl(π dt) + 2d = 9.82 x1 x2 + 2 x1   (E2)
Figure 1.6 Tubular column under compression.
The behavior constraints can be expressed as

stress induced ≤ yield stress
stress induced ≤ buckling stress

The induced stress is given by

induced stress = σi = P/(π dt) = 2500/(π x1 x2)   (E3)

The buckling stress for a pin-connected column is given by

buckling stress = σb = (Euler buckling load)/(cross-sectional area) = (π^2 EI/l^2)/(π dt)   (E4)

where I is the second moment of area of the cross section of the column:

I = (π/64)(do^4 − di^4) = (π/64)[(d + t)^4 − (d − t)^4] = (π/8) dt (d^2 + t^2) = (π/8) x1 x2 (x1^2 + x2^2)
Thus the behavior constraints can be restated as

g1(X) = 2500/(π x1 x2) − 500 ≤ 0

that is,

x1 x2 ≥ 1.593

Thus the curve x1 x2 = 1.593 represents the constraint surface g1(X) = 0. This curve can be plotted by finding several points on it: the points can be found by giving a series of values to x1 and finding the corresponding values of x2 that satisfy the relation x1 x2 = 1.593.
These points are plotted, and a curve P1Q1 passing through all of them is drawn as shown in Fig. 1.7; the infeasible region, represented by g1(X) > 0 or x1 x2 < 1.593, is shown by hatched lines.† Similarly, the second constraint g2(X) ≤ 0 can be expressed as x1 x2 (x1^2 + x2^2) ≥ 47.3, and the points lying on the constraint surface g2(X) = 0 can be obtained by solving x1 x2 (x1^2 + x2^2) = 47.3 for x2 at a series of values of x1.
† The infeasible region can be identified by testing whether the origin lies in the feasible or infeasible region.
Figure 1.7 Graphical optimization of Example 1.1.
These points are plotted as curve P2Q2, the feasible region is identified, and the infeasible region is shown by hatched lines as in Fig. 1.7. The plotting of the side constraints is very simple since they represent straight lines. After plotting all six constraints, the feasible region can be seen to be given by the bounded area ABCDEA.
Next, the contours of the objective function are to be plotted before finding the optimum point. For this, we plot the curves given by

f(X) = 9.82 x1 x2 + 2 x1 = c = constant

for a series of values of c. By giving different values to c, the contours of f can be plotted with the help of a few computed points. The optimum point corresponds to the contour of lowest value of c that still touches the feasible region; this gives the solution

d* = x1* = 5.44 cm and t* = x2* = 0.293 cm with fmin = 26.53
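The graphical optimum can be cross-checked numerically. The sketch below runs a brute-force grid search using the cost expression (E2), the two restated behavior constraints, and the side-constraint bounds; the grid resolution is an arbitrary choice:

```python
# Brute-force cross-check of Example 1.1:
# minimize f = 9.82*x1*x2 + 2*x1 over 2 <= x1 <= 14, 0.2 <= x2 <= 0.8
# subject to the two restated behavior constraints.

def cost(x1, x2):
    return 9.82 * x1 * x2 + 2.0 * x1

def feasible(x1, x2):
    return (x1 * x2 >= 1.593 and                 # g1: yield-stress constraint
            x1 * x2 * (x1**2 + x2**2) >= 47.3)   # g2: buckling constraint

best = None
n = 600                                          # grid resolution (arbitrary)
for i in range(n + 1):
    x1 = 2.0 + 12.0 * i / n
    for j in range(n + 1):
        x2 = 0.2 + 0.6 * j / n
        if feasible(x1, x2) and (best is None or cost(x1, x2) < best[0]):
            best = (cost(x1, x2), x1, x2)

f_min, d_opt, t_opt = best
```

The search lands on d ≈ 5.44 cm, t ≈ 0.293 cm, f ≈ 26.5, agreeing with the graphical solution of Fig. 1.7.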
1.5 CLASSIFICATION OF OPTIMIZATION PROBLEMS
Optimization problems can be classified in several ways, as described below.
1.5.1 Classification Based on the Existence of Constraints
As indicated earlier, any optimization problem can be classified as constrained or unconstrained, depending on whether constraints exist in the problem.

1.5.2 Classification Based on the Nature of the Design Variables
Based on the nature of design variables encountered, optimization problems can be classified into two broad categories. In the first category, the problem is to find values for a set of design parameters that make some prescribed function of these parameters minimum subject to certain constraints. For example, the problem of minimum-weight design of the prismatic beam shown in Fig. 1.8a, subject to a limitation on the maximum deflection, can be stated as follows:

Find X = {b, d} which minimizes

f(X) = ρlbd

subject to the constraints

δtip(X) ≤ δmax
b ≥ 0
d ≥ 0

where ρ is the density and δtip is the tip deflection of the beam.
Such problems are called parameter or static optimization problems. In the second category of problems, the objective is to find a set of design parameters, which are all continuous functions of some other parameter, that minimizes an objective function subject to a set of constraints. If the cross-sectional dimensions of the rectangular beam are allowed to vary along its length as shown in Fig. 1.8b, the optimization problem can be stated as follows:

Find X(t) = {b(t), d(t)} which minimizes

f[X(t)] = ρ ∫ (0 to l) b(t) d(t) dt

subject to the constraints

δtip[X(t)] ≤ δmax, 0 ≤ t ≤ l
b(t) ≥ 0, 0 ≤ t ≤ l
d(t) ≥ 0, 0 ≤ t ≤ l
Figure 1.8 Cantilever beam under concentrated load.
Here the design variables are functions of the length parameter t. This type of problem, where each design variable is a function of one or more parameters, is known as a trajectory or dynamic optimization problem [1.55].
1.5.3 Classification Based on the Physical Structure of the Problem
Depending on the physical structure of the problem, optimization problems can be classified as optimal control and nonoptimal control problems.
Optimal Control Problem. An optimal control (OC) problem is a mathematical programming problem involving a number of stages, where each stage evolves from the preceding stage in a prescribed manner. It is usually described by two types of variables: the control (design) variables and the state variables. The control variables define the system and govern the evolution of the system from one stage to the next, and the state variables describe the behavior or status of the system in any stage. The problem is to find a set of control or design variables such that the total objective function (also known as the performance index, PI) over all the stages is minimized subject to a set of constraints on the control and state variables. An OC problem can be stated as follows [1.55]:
Find X which minimizes

f(X) = Σ (i = 1 to l) fi(xi, yi)

subject to the constraints

qi(xi, yi) + yi = yi+1,  i = 1, 2, . . . , l
gj(xj) ≤ 0,  j = 1, 2, . . . , l
hk(yk) ≤ 0,  k = 1, 2, . . . , l

where xi is the ith control variable, yi is the ith state variable, and fi is the contribution of the ith stage to the total objective function; gj, hk, and qi are functions of xj, yk, and xi and yi, respectively, and l is the total number of stages. The control and state variables xi and yi can be vectors in some cases. The following example serves to illustrate the nature of an optimal control problem.
Example 1.2 A rocket is designed to travel a distance of 12s in a vertically upward direction [1.39]. The thrust of the rocket can be changed only at the discrete points located at distances of 0, s, 2s, 3s, . . . , 12s. If the maximum thrust that can be developed at point i either in the positive or negative direction is restricted to a value of Fi, formulate the problem of minimizing the total time of travel under the following assumptions:
1. The rocket travels against the gravitational force.
2. The mass of the rocket reduces in proportion to the distance traveled.
3. The air resistance is proportional to the velocity of the rocket.
Figure 1.9 Control points in the path of the rocket.
SOLUTION Let the points (or control points) on the path at which the thrusts of the rocket are changed be numbered 1, 2, 3, . . . , 13 (Fig. 1.9). Denoting xi as the thrust, vi the velocity, ai the acceleration, and mi the mass of the rocket at point i, Newton's second law of motion can be applied as

net force on the rocket = mass × acceleration

This can be written as

thrust − gravitational force − air resistance = mass × acceleration
or

xi − mi g − k1 vi = mi ai   (E1)

where the mass mi can be expressed as

mi = m1 − k2 s (i − 1),  i = 2, 3, . . . , 13   (E2)

in which k1 and k2 are constants of proportionality. Equation (E1) gives the acceleration at point i as

ai = xi/mi − g − k1 vi/mi   (E3)

If ti denotes the time taken by the rocket to travel from point i to point i + 1, the distance traveled between the two points satisfies

(1/2) ai ti^2 + ti vi − s = 0   (E4)

from which ti can be determined as

ti = [−vi + (vi^2 + 2 ai s)^(1/2)] / ai   (E5)

so that the velocity at point i + 1 becomes

vi+1 = vi + ai ti = (vi^2 + 2 ai s)^(1/2)   (E6)
From an analysis of the problem, the control variables can be identified as the thrusts, xi, and the state variables as the velocities, vi. Since the rocket starts at point 1 and stops at point 13,

v1 = v13 = 0   (E7)
Thus the problem can be stated as an OC problem as:

Find X = {x1, x2, . . . , x12} which minimizes

f(X) = Σ (i = 1 to 12) ti = Σ (i = 1 to 12) [−vi + (vi^2 + 2 ai s)^(1/2)] / ai

subject to

mi+1 = mi − k2 s,  i = 1, 2, . . . , 12
vi+1 = (vi^2 + 2 ai s)^(1/2),  i = 1, 2, . . . , 12
|xi| ≤ Fi,  i = 1, 2, . . . , 12
v1 = v13 = 0
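The stage recursion above — update the mass, compute the acceleration, solve the quadratic for the stage time, update the velocity — lends itself to a forward simulation for a given thrust profile. The sketch below is illustrative only: the parameter values and the constant-thrust profile are assumptions, and it does not enforce the terminal condition v1 = v13 = 0 that the actual OC problem imposes:

```python
import math

# Forward simulation of the rocket example for a fixed thrust profile.
# All numerical values below are illustrative assumptions.
g, s = 9.81, 100.0        # gravity (m/s^2) and control-point spacing (m)
m1, k2 = 1000.0, 0.05     # initial mass (kg), mass loss per unit distance
k1 = 2.0                  # air-resistance coefficient

def total_time(thrusts):
    """Total time of travel over the 12 stages for thrusts x1, ..., x12."""
    v, t_total = 0.0, 0.0
    for i, x in enumerate(thrusts):
        m = m1 - k2 * s * i                  # mass at the start of the stage
        a = x / m - g - k1 * v / m           # acceleration; assumed positive here
        # stage time from (1/2)*a*t^2 + v*t - s = 0
        t_total += (-v + math.sqrt(v * v + 2.0 * a * s)) / a
        v = math.sqrt(v * v + 2.0 * a * s)   # velocity at the next control point
    return t_total

# constant thrust well above the initial hover thrust m1*g ≈ 9810 N
T = total_time([20000.0] * 12)
```

Raising the thrusts shortens every stage, which is why the thrust bounds |xi| ≤ Fi are what make the minimum-time problem nontrivial.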
1.5.4 Classification Based on the Nature of the Equations Involved
Another important classification of optimization problems is based on the nature of the expressions for the objective function and the constraints. According to this classification, optimization problems can be classified as linear, nonlinear, geometric, and quadratic programming problems. This classification is extremely useful from the computational point of view since there are many special methods available for the efficient solution of a particular class of problems. Thus the first task of a designer would be to investigate the class of problem encountered. This will, in many cases, dictate the types of solution procedures to be adopted in solving the problem.
Nonlinear Programming Problem. If any of the functions among the objective and constraint functions in Eq. (1.1) is nonlinear, the problem is called a nonlinear programming (NLP) problem. This is the most general programming problem, and all other problems can be considered as special cases of the NLP problem.
Example 1.3 The step-cone pulley shown in Fig. 1.10 is to be designed for transmitting a power of at least 0.75 hp. The speed of the input shaft is 350 rpm and the output speed requirements are 750, 450, 250, and 150 rpm for a fixed center distance of a between the input and output shafts. The tension on the tight side of the belt is to be kept more than twice that on the slack side. The thickness of the belt is t and the coefficient of friction between the belt and the pulleys is µ. The stress induced in the belt due to tension on the tight side is s. Formulate the problem of finding the width and diameters of the steps for minimum weight.
Figure 1.10 Step-cone pulley.
SOLUTION The design vector can be taken as

X = {d1, d2, d3, d4, w}

where di is the diameter of the ith step on the output pulley and w is the face width of the steps (and of the belt). Since the belt speed is the same on both pulleys of each pair, the diameter of the ith step on the input pulley is di′ = (Ni/350) di, where Ni is the ith output speed. The objective function, the weight of the step-cone pulley system, then becomes

f(X) = ρw (π/4) (d1^2 + d1′^2 + d2^2 + d2′^2 + d3^2 + d3′^2 + d4^2 + d4′^2)
     = ρw (π/4) { d1^2 [1 + (750/350)^2] + d2^2 [1 + (450/350)^2] + d3^2 [1 + (250/350)^2] + d4^2 [1 + (150/350)^2] }   (E1)

where ρ is the density of the pulleys and di′ is the diameter of the ith step on the input pulley.
To have the belt equally tight on each pair of opposite steps, the total length of the belt must be kept constant for all the output speeds. This can be ensured by satisfying the following equality constraints:

C1 − C2 = 0   (E2)
C1 − C3 = 0   (E3)
C1 − C4 = 0   (E4)

where Ci denotes the length of belt needed to obtain the output speed Ni:

Ci = (π di/2)(1 + Ni/350) + (Ni/350 − 1)^2 di^2/(4a) + 2a,  i = 1, 2, 3, 4   (E5)
If θi denotes the wrap angle of the belt on the ith step, the tight-side and slack-side tensions are related by T1i/T2i = e^(µθi), and the maximum tension is limited by the allowable stress in the belt:

T1i = stw,  i = 1, 2, 3, 4   (E6)

where s is the maximum allowable stress in the belt and t is the thickness of the belt. The constraint on the power transmitted can be stated as (using lbf for force and ft for linear dimensions)

(T1i − T2i) π di′ (350/33,000) ≥ 0.75

which can be rewritten, using T1i = stw from Eq. (E6), as

stw [1 − e^(−µθi)] π di′ (350/33,000) ≥ 0.75,  i = 1, 2, 3, 4   (E7)
Finally, the lower bounds on the design variables can be taken as

w ≥ 0   (E8)
di ≥ 0,  i = 1, 2, 3, 4   (E9)

As the objective function, (E1), and most of the constraints, (E2) to (E9), are nonlinear functions of the design variables d1, d2, d3, d4, and w, this problem is a nonlinear programming problem.
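As a quick sanity check on the formulation, the weight expression (E1) can be evaluated for trial step diameters; the density, diameters, and width used below are illustrative assumptions:

```python
import math

# Evaluation of the step-cone pulley weight objective (E1) for trial values.
# RHO, the diameters, and the width are illustrative assumptions.
RHO = 0.283                               # assumed density (lbf/in^3)
SPEEDS = [750.0, 450.0, 250.0, 150.0]     # required output speeds (rpm)

def pulley_weight(d, w):
    """Weight per (E1): each step pair contributes d_i^2 * (1 + (Ni/350)^2)."""
    total = sum(di**2 * (1.0 + (Ni / 350.0) ** 2)
                for di, Ni in zip(d, SPEEDS))
    return RHO * w * (math.pi / 4.0) * total

W = pulley_weight([6.0, 8.0, 10.0, 12.0], 2.0)
```

Because every term grows with di and with w, the weight objective pushes all five design variables down until constraints such as (E2) to (E9) become active.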
Geometric Programming Problem.
Definition A function h(X) is called a posynomial if h can be expressed as the sum of power terms, each of the form

ci x1^(ai1) x2^(ai2) · · · xn^(ain)

where ci and aij are constants with ci > 0 and xj > 0. Thus a posynomial with N terms can be expressed as

h(X) = c1 x1^(a11) x2^(a12) · · · xn^(a1n) + · · · + cN x1^(aN1) x2^(aN2) · · · xn^(aNn)   (1.7)
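The posynomial of Eq. (1.7) translates directly into code; the coefficients and exponents in the example below are arbitrary illustrative values:

```python
import math

# Evaluate a posynomial h(X) = sum_i c_i * prod_j x_j**a_ij, as in Eq. (1.7).
# c holds the positive coefficients; a[i][j] is the exponent a_ij.

def posynomial(c, a, x):
    assert all(ci > 0 for ci in c), "coefficients must be positive"
    assert all(xj > 0 for xj in x), "variables must be positive"
    return sum(ci * math.prod(xj**aij for xj, aij in zip(x, ai))
               for ci, ai in zip(c, a))

# h(X) = 2*x1*x2**(-1) + 3*x1**0.5*x2 evaluated at x = (4, 2)
h = posynomial([2.0, 3.0], [[1.0, -1.0], [0.5, 1.0]], [4.0, 2.0])  # → 16.0
```

Unlike a polynomial, a posynomial allows negative and fractional exponents; only the coefficients ci and the variables xj must be positive.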
A geometric programming (GMP) problem is one in which the objective function and constraints are expressed as posynomials in X. For example, for a helical spring of wire diameter d, mean coil diameter D, and N active turns, the stiffness, the induced shear stress, and the natural frequency of vibration are given by

k = d^4 G / (8D^3 N)

τ = Ks (8FD)/(π d^3)

fn = (1/2) √(kg/w) = (1/2) √[ (d^4 G/(8D^3 N)) g / (ρ(π d^2/4) π DN) ] = √(Gg) d / (2√(2ρ) π D^2 N)

where G is the shear modulus, F the axial load, Ks the shear stress correction factor, ρ the weight density of the spring material, g the acceleration due to gravity, and w = ρ(π d^2/4)(π DN) the weight of the spring. Each of these expressions is a single-term posynomial in the design variables d, D, and N.