1.1  Brief History of Algorithms for Solving Linear Equations, Linear Inequalities, and LPs
1.2  Applicability of the LP Model: Classical Examples of Direct Applications
     Product Mix Problems • Blending Problems • The Diet Problem • The Transportation Model • Multiperiod Production Planning, Storage, Distribution Problems
1.3  LP Models Involving Transformations of Variables
     Min–Max, Max–Min Problems • Minimizing Positive Linear Combinations of Absolute Values of Affine Functions
1.4  Intelligent Modeling Essential to Get Good Results, an Example from Container Shipping
1.5  Planning Uses of LP Models
     Finding the Optimum Solutions • Infeasibility Analysis • Values of Slack Variables at an Optimum Solution • Marginal Values, Dual Variables, and the Dual Problem, and Their Planning Uses • Evaluating the Profitability of New Products
1.6  Brief Introduction to Algorithms for Solving LP Models
     The Simplex Method • Interior Point Methods for LP
1.7  Software Systems Available for Solving LP Models
1.8  Multiobjective LP Models
References
1.1 Brief History of Algorithms for Solving Linear Equations, Linear Inequalities, and LPs
The study of mathematics originated with the construction of linear equation models for real-world problems several thousand years ago. As an example, we discuss an application from Murty (2004) that leads to a model involving a system of simultaneous linear equations.
Example 1.1: Scrap Metal Blending Problem
A steel company has four different types of scrap metal (SM-1 to SM-4) with the following compositions (Table 1.1).
The company needs to blend these four scrap metals into a mixture for which the composition by weight is: Al—4.43%, Si—3.22%, C—3.89%, and Fe—88.46%. How should they prepare this mixture? To answer this question, we need to determine the proportions of the four scrap metals SM-1 to SM-4 in the blend to be prepared.
The most fundamental idea in mathematics, discovered more than 5000 years ago by the Chinese, Indians, Iranians, Babylonians, and Greeks, is to represent the quantities that we wish to determine by symbols, usually letters of the alphabet like x, y, z; then to express the relationships between the quantities represented by these symbols in the form of equations; and finally to use these equations as tools to find out the true values represented by the symbols. The symbols representing the unknown quantities to be determined are nowadays called unknowns or variables or decision variables. The process of representing the relationships between the variables through equations or other functional relationships is called modeling or mathematical modeling.
This process gradually evolved into algebra, one of the chief branches of mathematics.
Even though the subject originated more than 5000 years ago, the name algebra itself came much later; it is derived from the title of an Arabic book Al-Maqala fi Hisab al-jabr w’almuqabalah written by Al-Khawarizmi around 825 AD. The term “al-jabr” in Arabic means “restoring” in the sense of solving an equation. In Latin translation the title of this book became Ludus Algebrae, the second word in this title surviving as the modern word
“algebra” for the subject, and Al-Khawarizmi is regarded as the father of algebra. The earliest algebraic systems constructed were systems of linear equations.
In the scrap metal blending problem, the decision variables are: x_j = proportion of SM-j by weight in the mixture, for j = 1 to 4. Then the percentage by weight of the element Al in the mixture will be 5x_1 + 7x_2 + 2x_3 + x_4, which is required to be 4.43. Arguing the same way for the elements Si, C, and Fe, we find that the decision variables x_1 to x_4 must satisfy each equation in the following system of linear equations to lead to the desired mixture:
5x_1 + 7x_2 + 2x_3 + x_4 = 4.43
3x_1 + 6x_2 + x_3 + 2x_4 = 3.22
4x_1 + 5x_2 + 3x_3 + x_4 = 3.89
88x_1 + 82x_2 + 94x_3 + 96x_4 = 88.46
x_1 + x_2 + x_3 + x_4 = 1
The last equation in the system stems from the fact that the sum of the proportions of the various ingredients in a blend must always be equal to 1. This system of equations is the mathematical model for our scrap metal blending problem; it consists of five equations in four variables.
TABLE 1.1 Scrap Metal Composition Data
% of Element by Weight, in Type
Type Al Si C Fe
SM-1 5 3 4 88
SM-2 7 6 5 82
SM-3 2 1 3 94
SM-4 1 2 1 96
It is clear that a solution to this system of equations makes sense for the blending application only if all the variables have nonnegative values in it. The nonnegativity restrictions on the variables are linear inequality constraints. They cannot be expressed in the form of linear equations, and as nobody knew how to handle linear inequalities at that time, they were simply ignored.
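For this particular model one can simply solve the equation system and check nonnegativity afterward. Below is a minimal sketch using NumPy (an illustration added here, not part of the original text): since there are five equations in four unknowns, it computes a least-squares solution, verifies that all five equations are satisfied, and confirms that the resulting proportions are nonnegative.

```python
import numpy as np

# Coefficient matrix and right-hand side of the blending model (data from Table 1.1)
A = np.array([
    [ 5,  7,  2,  1],   # Al
    [ 3,  6,  1,  2],   # Si
    [ 4,  5,  3,  1],   # C
    [88, 82, 94, 96],   # Fe
    [ 1,  1,  1,  1],   # proportions must sum to 1
], dtype=float)
b = np.array([4.43, 3.22, 3.89, 88.46, 1.0])

# Five equations in four unknowns: solve in the least-squares sense and verify consistency
x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ x, b), "the five equations are mutually consistent"
assert np.all(x >= -1e-12), "all proportions are nonnegative"
print(x)   # proportions of SM-1 ... SM-4, approximately (0.25, 0.34, 0.39, 0.02)
```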
Linear algebra, dealing with methods for solving systems of linear equations, is the classical subject that initiated the study of mathematics long ago. The most effective method for solving systems of linear equations, called the elimination method, was discovered by the Chinese and the Indians over 2500 years ago, and it is still the leading method in use today. This elimination method was unknown in Europe until the nineteenth century, when the German mathematician Gauss rediscovered it while calculating the orbit of the asteroid Ceres from recorded observations tracking it. The asteroid had been lost from view when the Sicilian astronomer Piazzi, who was tracking it, fell ill. Gauss used the method of least squares to estimate the values of the parameters in the formula for the orbit. This led to a system of 17 linear equations in 17 unknowns that he had to solve, which is quite a large system for mental computation. Gauss's accurate computations helped in relocating the asteroid in the skies within a few months, and his reputation as a mathematician soared.
Another German, Wilhelm Jordan, popularized the algorithm in a late nineteenth-century book that he wrote. From that time, the method has been popularly known as the Gauss–Jordan elimination method. Another version of this method, called the Gaussian elimination method, is the most popular method for solving systems of linear equations today.
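For readers who want to see the mechanics, here is a compact sketch of the Gaussian elimination idea in Python/NumPy (an illustration added here, not taken from the text); it assumes a square nonsingular system, and in practice one would simply call a library routine such as numpy.linalg.solve.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Sketch of Gaussian elimination with partial pivoting for a square,
    nonsingular system A x = b: reduce A to upper-triangular form by row
    operations, then recover x by back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # partial pivoting: largest pivot in column k
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]  # swap rows k and p
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                    # multiplier that zeroes entry (i, k)
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                   # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```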
Even though systems of linear equations were being solved thousands of years ago, systems of linear inequalities remained unsolved until the middle of the twentieth century. The following theorem (Murty, 2006) relates systems of linear inequalities to systems of linear equations.
THEOREM 1.1  Consider the system of linear inequalities in variables x

A_i. x ≥ b_i,   i = 1, ..., m     (1.1)

where A_i. is the coefficient vector of the i-th constraint. If this system has a feasible solution, then there exists a subset P = {p_1, ..., p_s} ⊂ {1, ..., m} such that every solution of the system of equations A_i. x = b_i, i ∈ P, is also a feasible solution of the original system of linear inequalities (Equation 1.1).
This theorem can be used to generate a finite enumerative algorithm to find a feasible solution to a system of linear constraints containing inequalities, based on solving subsystems in each of which a subset of the inequalities is converted into equations and the other inequality constraints are eliminated. However, if the original system has m inequality constraints, in the worst case this enumerative algorithm may have to solve 2^m systems of linear equations before it either finds a feasible solution of the original system or concludes that it is infeasible. The effort required grows exponentially with the number of inequalities in the system in the worst case.
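To make the enumerative algorithm concrete, here is a brute-force Python/NumPy sketch (an illustration, not from the text): for each nonempty subset P of the constraint indices it solves the equality subsystem A_P x = b_P and returns the first solution that satisfies the whole inequality system Ax ≥ b. It examines up to 2^m subsystems and is meant only to illustrate Theorem 1.1, not to be used in practice.

```python
import itertools
import numpy as np

def feasible_point_by_enumeration(A, b, tol=1e-9):
    """Try every nonempty subset P of constraint indices, solve A_P x = b_P
    (least squares handles non-square subsystems), and return the first
    solution that satisfies the full inequality system A x >= b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m = A.shape[0]
    for size in range(1, m + 1):
        for P in itertools.combinations(range(m), size):
            rows = list(P)
            x, *_ = np.linalg.lstsq(A[rows], b[rows], rcond=None)
            # accept x only if it actually solves the chosen equality subsystem
            if not np.allclose(A[rows] @ x, b[rows], atol=tol):
                continue
            if np.all(A @ x >= b - tol):
                return x            # feasible point for the original inequalities
    return None                     # by Theorem 1.1, the system has no feasible solution
```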
In the nineteenth century, Fourier generalized the classical elimination method for solving linear equations into an elimination method for solving systems of linear inequalities. The method called Fourier elimination, or the Fourier–Motzkin elimination method, is very elegant theoretically. However, the elimination of each variable adds new inequalities to the remaining system, and the number of these new inequalities grows exponentially as more and more variables are eliminated. So this method is also not practically viable for large problems.
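A minimal sketch of a single Fourier–Motzkin elimination step for a system written as Ax ≥ b follows (again an added illustration, not the text's): rows with a positive coefficient on x_k bound it from below, rows with a negative coefficient bound it from above, and each lower/upper pair produces one new inequality, which is why the system grows as more variables are eliminated.

```python
import numpy as np

def fourier_motzkin_eliminate(A, b, k):
    """Eliminate variable x_k from A x >= b, returning an equivalent system in
    the remaining variables (column k is kept in place but becomes zero)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    lower = [i for i in range(A.shape[0]) if A[i, k] > 0]   # rows giving lower bounds on x_k
    upper = [i for i in range(A.shape[0]) if A[i, k] < 0]   # rows giving upper bounds on x_k
    keep  = [i for i in range(A.shape[0]) if A[i, k] == 0]  # rows not involving x_k
    new_rows = [A[i] for i in keep]
    new_rhs  = [b[i] for i in keep]
    for i in lower:
        for l in upper:
            # scale so x_k has coefficients +1 and -1, then add the two inequalities
            new_rows.append(A[i] / A[i, k] + A[l] / (-A[l, k]))
            new_rhs.append(b[i] / A[i, k] + b[l] / (-A[l, k]))
    return np.array(new_rows).reshape(-1, A.shape[1]), np.array(new_rhs)
```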
The simplex method for linear programming, developed by Dantzig (1914–2005) in the mid-twentieth century (Dantzig, 1963), is the first practically and computationally viable method for solving systems of linear inequalities. This has led to the development of linear programming (LP), a branch of mathematics developed in the twentieth century as an extension of linear algebra to solve systems of linear inequalities. The development of LP is a landmark event in the history of mathematics and its applications that brought our ability to solve general systems of linear constraints (including linear equations and inequalities) to a state of completion.
A general system of linear constraints in decision variables x = (x_1, ..., x_n)^T is of the form

Ax ≥ b,   Dx = d

where the coefficient matrices A, D are given matrices of orders m × n and p × n, respectively. The inequality constraints in this system may include sign restrictions or bounds on individual variables.
A general LP is the problem of minimizing (or maximizing) a given linear objective function, say z = cx, subject to a system of linear constraints.
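As a small illustration with made-up data (not from the text), an LP in this general form can be passed to SciPy's linprog, which expects inequalities as A_ub x ≤ b_ub, so Ax ≥ b is supplied as −Ax ≤ −b:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: minimize z = c x  subject to  A x >= b,  D x = d,  x >= 0
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0]]);  b = np.array([4.0])     # x1 + x2 >= 4
D = np.array([[1.0, -1.0]]); d = np.array([1.0])     # x1 - x2  = 1
res = linprog(c, A_ub=-A, b_ub=-b, A_eq=D, b_eq=d,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, res.fun)   # expect roughly x = (2.5, 1.5) with objective value 10.5
```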
Suppose there is no objective function to optimize, and only a feasible solution of a system of linear constraints is to be found. When there are inequality constraints in the system, the only practical method for finding even a feasible solution is to solve a Phase I linear programming formulation of the problem. Dantzig introduced this Phase I formulation as part of the simplex method that he developed in the mid-twentieth century.
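A minimal Phase I sketch, restricted for simplicity to an inequality system Ax ≥ b with x unrestricted in sign (this is an added illustration, not the text's exact formulation): introduce one artificial variable t ≥ 0, relax every constraint to Ax + t ≥ b, and minimize t. The relaxed LP is always feasible, and the original system has a feasible solution exactly when the optimal value of t is zero.

```python
import numpy as np
from scipy.optimize import linprog

def phase_one_feasible_point(A, b, tol=1e-9):
    """Phase I sketch for A x >= b with x free: add a single artificial
    variable t >= 0, relax to A x + t >= b, and minimize t.  The original
    system is feasible exactly when the optimal t equals zero."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    # linprog uses A_ub @ z <= b_ub with z = (x, t);  A x + t >= b  ->  -A x - t <= -b
    A_ub = np.hstack([-A, -np.ones((m, 1))])
    b_ub = -b
    c = np.zeros(n + 1); c[-1] = 1.0                 # objective: minimize t
    bounds = [(None, None)] * n + [(0, None)]        # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if res.success and res.fun < tol:
        return res.x[:n]                             # feasible point for A x >= b
    return None                                      # original system is infeasible
```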