Until now we have not shown how the trend and seasonal coefficients can be initialized using the information given by the sales volume of the first three weeks. The next subsection demonstrates how the data basis can be improved if additional information is considered. Sections 26.2.2 and 26.2.3 finally present the initialization of the seasonal coefficients ĉ_t and the trend parameters â and b̂.
26.2.1 Consideration of Further Information
When looking at the data of the first three weeks (see Fig. 26.1), two phenomena seem to contradict the assumption of a linear trend with seasonality:
1. Sales on Monday -13 are unexpectedly low. In weeks 1 and 3 sales on Mondays are clearly higher than sales on Tuesdays.
2. While the trend of weekly increasing sales is obvious, sales on Sunday 0 are much lower than sales on the respective Sundays of the first two weeks (days -14 and -7).
We want to know whether these inconsistencies are purely random or due to an identifiable cause, and we obtain the following information:
1. In some parts of Germany Monday -13 was a holiday. Therefore, 58 % of the stores of the shoe retailer were closed on this day.
2. Usually, shoe stores in Germany have to be closed on Sundays. A few cities, however, granted a special authorization for Sunday sales. Starting with the third week, 93 1/3 % of these cities do not grant such an authorization any more.
We can now improve our data basis by exploiting this information about special influences in our further investigations. Therefore, the sales volume of day -13 is increased by 138.1 % (x_{-13} = 2600 · 100/(100 − 58) = 6190.476) and the sales on Sundays -14 (410 SKU) and -7 (457 SKU) are decreased by 93 1/3 %, so that x_{-14} = 27.3̄ and x_{-7} = 30.46̄. In the next two subsections the original sales are replaced by these corrected sales.
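These corrections can be sketched in a few lines of plain Python (the figures are taken from the text; the variable names are my own):

```python
# Correcting the observed sales for the two special influences.
# Monday -13: 58 % of the stores were closed, so the observed 2600 SKU
# are scaled up to the level of a day with all stores open.
# Sundays -14 and -7: sales are cut by 93 1/3 % to match the new
# Sunday regime that starts in week 3.

x_obs_monday = 2600
open_share = (100 - 58) / 100            # only 42 % of stores were open
x_m13 = x_obs_monday / open_share        # ~6190.476 SKU

sunday_keep = 1 - (93 + 1 / 3) / 100     # 1/15 of the old Sunday sales remain
x_m14 = 410 * sunday_keep                # ~27.33 SKU
x_m7 = 457 * sunday_keep                 # ~30.47 SKU

print(round(x_m13, 3), round(x_m14, 3), round(x_m7, 3))
```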
26.2.2 Determination of Seasonal Coefficients by the Ratio-to-Moving Averages Decomposition
The ratio-to-moving averages decomposition (see e. g. Makridakis et al. (1998, pp. 109)) is used as an example to determine the initial seasonal coefficients of Winters’ method. In Sect. 26.1.3 we already applied the equation:
observed sales in t = (deseasonalized sales in t) · (seasonal coefficient of t).
468 Herbert Meyr
In other words, if we want to isolate seasonal coefficients, we have to compute

seasonal coefficient of period t = (observed sales in t) / (deseasonalized sales in t),   (26.4)

where the deseasonalized sales in period t is a sales volume that does not contain any seasonal influences. But how can such a value be determined?
Considering our working example, the sales volume of a full week is apparently not influenced by daily sales peaks. So the most intuitive way to obtain sales data without seasonal influences is to compute the daily sales averaged over a full week. This leads to average daily sales of (4419 + ... + 27.3̄)/7 = 3544.6, 4951.6 and 5122.4 SKU for weeks 1 to 3 (see Table 26.3). Thereby, the Thursday lies in the middle of each week.
But we can employ the same procedure for every other time period of seven days, e. g. days -19, . . . , -13, and assign the average daily sales of 3797.7 to the middle day, Friday -16. By doing so we compute moving averages over a full seasonal cycle of 7 days for each day -17, . . . , -3, which represent deseasonalized daily sales volumes. Table 26.3 illustrates the whole procedure.
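As a sketch, the centered 7-day moving averages can be computed like this (plain Python; the corrected sales are those of Table 26.3, and the dictionary layout is my own):

```python
# Corrected daily sales x_t for days t = -20 .. 0 (see Table 26.3);
# the two Sundays and Monday -13 carry the corrected values.
sales = {
    -20: 4419, -19: 3821, -18: 3754, -17: 3910, -16: 4363, -15: 4518,
    -14: 410 / 15, -13: 2600 * 100 / 42, -12: 5755, -11: 5352,
    -10: 5540, -9: 5650, -8: 6143, -7: 457 / 15, -6: 5158, -5: 4779,
    -4: 5464, -3: 5828, -2: 6714, -1: 7872, 0: 42,
}

# Centered moving average over a full 7-day cycle: the mean of days
# t-3 .. t+3 is assigned to the middle day t, so it exists only for
# t = -17 .. -3 and is free of any weekday seasonality.
moving_avg = {
    t: sum(sales[t + k] for k in range(-3, 4)) / 7
    for t in range(-17, -2)
}

print(round(moving_avg[-17], 1))   # Thursday of week 1, ~3544.6
print(round(moving_avg[-16], 1))   # Friday -16, ~3797.7
```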
In a next step we apply (26.4), thus setting the observed sales x_t in ratio to the deseasonalized moving averages (remember the name of the algorithm).
The result is multiple observations of seasonal coefficients o^week_weekday(t) for each day of the week (three for Thursday and two for each other weekday), which still contain the random noise u_t.
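The ratio step itself is a one-liner; the following sketch recomputes the three Thursday observations (sales and moving averages taken from Table 26.3):

```python
# Seasonal-coefficient observations o^week_weekday(t) = x_t / ma_t
# for the three Thursdays; ma_t are the centered moving averages
# printed in Table 26.3.
x = {-17: 3910, -10: 5540, -3: 5828}       # corrected Thursday sales
ma = {-17: 3544.6, -10: 4951.6, -3: 5122.4}

o_week = {t: x[t] / ma[t] for t in x}
print([round(o_week[t], 4) for t in (-17, -10, -3)])  # ~[1.1031, 1.1188, 1.1377]
```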
In order to reduce this randomness, we now compute the average seasonal coefficient o^aver_weekday of each weekday (Table 26.4). For example, for the Thursday we get
o^aver_Thursday = (o^week_Thursday(−17) + o^week_Thursday(−10) + o^week_Thursday(−3)) / (number of weeks)
               = (1.1031 + 1.1188 + 1.1377) / 3 = 1.1199.
If a pure trend without any seasonal influence is given, one would expect all seasonal coefficients to equal 1 (see Sect. 26.1.2), thus summing up to 7 for a weekly seasonal cycle. As we can see in Table 26.4, the sum of our average seasonal coefficients, o^total = Σ_{day=Monday}^{Sunday} o^aver_day = 6.9045, falls short of 7.
To reflect the trend correctly, we have to normalize our o^aver_weekday by multiplying them with the constant 7/o^total. The resulting final seasonal coefficients for Monday, . . . , Sunday are already known as ĉ_{-6}, . . . , ĉ_0 from Table 26.2.
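Averaging and normalization can be sketched as follows (plain Python; the per-week observations are those of Table 26.4):

```python
# Per-weekday observations o^week from Table 26.4 (the moving averages
# start on Thursday of week 1 and end on Thursday of week 3, hence the
# unequal list lengths).
obs = {
    "Mon": [1.3650, 1.0924], "Tue": [1.2195, 0.9806],
    "Wed": [1.0810, 1.0670], "Thu": [1.1031, 1.1188, 1.1377],
    "Fri": [1.1489, 1.1761], "Sat": [1.1090, 1.3169],
    "Sun": [0.0064, 0.0065],
}

# Average per weekday, then normalize so the coefficients sum to the
# cycle length 7 (a pure trend would have all coefficients equal to 1).
o_aver = {d: sum(v) / len(v) for d, v in obs.items()}
o_total = sum(o_aver.values())                 # ~6.9045, short of 7
c_hat = {d: o * 7 / o_total for d, o in o_aver.items()}

print(round(o_total, 4))                       # ~6.9045
print(round(c_hat["Thu"], 4))                  # ~1.1354
```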
26.2.3 Determining the Trend by Linear Regression
Finally it will be shown how the trend parameters a and b can be determined.
When “deseasonalizing” the observed sales by dividing them by c_t, one can see from (26.5) that the trend a + b·t results, distorted by some random noise u_t/c_t:

d_t = x_t / c_t = ((a + b·t)·c_t + u_t) / c_t = a + b·t + u_t/c_t.   (26.5)
26 Forecast Methods 469

Table 26.3. Ratio-to-moving averages decomposition

week  day t  weekday  (corr.) x_t  moving aver. (ma_t)  o^week_weekday(t) = x_t / ma_t
1 -20 Monday 4419
1 -19 Tuesday 3821
1 -18 Wednesday 3754
1 -17 Thursday 3910 3544.6 1.1031
1 -16 Friday 4363 3797.7 1.1489
1 -15 Saturday 4518 4074.0 1.1090
1 -14 Sunday (27.3̄) 4302.3 0.0064
2 -13 Monday (6190.4761) 4535.1 1.3650
2 -12 Tuesday 5755 4719.0 1.2195
2 -11 Wednesday 5352 4951.1 1.0810
2 -10 Thursday 5540 4951.6 1.1188
2 -9 Friday 5650 4804.1 1.1761
2 -8 Saturday 6143 4664.6 1.3169
2 -7 Sunday (30.46̄) 4680.6 0.0065
3 -6 Monday 5158 4721.8 1.0924
3 -5 Tuesday 4779 4873.8 0.9806
3 -4 Wednesday 5464 5120.8 1.0670
3 -3 Thursday 5828 5122.4 1.1377
3 -2 Friday 6714
3 -1 Saturday 7872
3 0 Sunday 42
Table 26.4. Reducing randomness of seasonal coefficients

week     Mo      Tu      We      Th      Fr      Sa      Su
1        -       -       -       1.1031  1.1489  1.1090  0.0064
2        1.3650  1.2195  1.0810  1.1188  1.1761  1.3169  0.0065
3        1.0924  0.9806  1.0670  1.1377  -       -       -       o^total:
o^aver   1.2287  1.1000  1.0740  1.1199  1.1625  1.2130  0.0064  6.9045
ĉ        1.2457  1.1153  1.0889  1.1354  1.1786  1.2297  0.0065  7.00
The parameters a and b can be estimated by means of linear regression (see Wood and Field (1976, pp. 76)). As Fig. 26.4 shows, appropriate estimators â and b̂ are computed by minimizing the (squared) vertical distances between the deseasonalized sales d_t = x_t/ĉ_t and the trend line â + b̂·t. This useful way of eliminating the random noise is also applied in causal forecasts and has already been introduced in Sect. 7.2.2.
Fig. 26.4. Visualization of linear regression: the deseasonalized sales (in SKU, over days t) and the fitted trend line 123.3·t + 5849; the squared vertical distances between both are minimized
Table 26.5 and Equations (26.6) and (26.7) illustrate how the trend parameters â_0 and b̂_0 have been calculated by linear regression to initialize Winters’ method in Sect. 26.1.3:
b̂_0 = Σ_t (t − t̄)(d_t − d̄) / Σ_t (t − t̄)² = 94943 / 770 = 123.3   (26.6)

â_0 = d̄ − b̂_0 · t̄ = 4616 − 123.3 · (−10) = 5849.   (26.7)

Here t̄ = (1/21) · Σ_t t = −210/21 = −10 and d̄ = (1/21) · Σ_t d_t = 96936/21 = 4616 represent the average values of t and d_t over the first three weeks of our working example.
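The same estimates can be reproduced with a short least-squares sketch (plain Python; the d_t column is taken from Table 26.5, rounded as printed there, so the results match (26.6) and (26.7) only up to rounding):

```python
# Deseasonalized sales d_t = x_t / c_hat_t for t = -20 .. 0
# (rounded values from Table 26.5).
d = [3547, 3426, 3448, 3444, 3702, 3674, 4192, 4970, 5160, 4915,
     4879, 4794, 4995, 4673, 4141, 4285, 5018, 5133, 5697, 6401, 6442]
ts = list(range(-20, 1))

t_bar = sum(ts) / len(ts)                      # -210/21 = -10
d_bar = sum(d) / len(d)                        # 96936/21 = 4616

# Ordinary least squares for the trend line a + b*t, cf. (26.6)/(26.7).
b_hat = sum((t - t_bar) * (dt - d_bar) for t, dt in zip(ts, d)) \
    / sum((t - t_bar) ** 2 for t in ts)
a_hat = d_bar - b_hat * t_bar

print(round(b_hat, 1), round(a_hat))           # ~123.3 and ~5849
```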
Please note that similar deseasonalized sales have been obtained by the moving averages computation in the last subsection. These could also be used to estimate â and b̂ by linear regression. In this case, however, only 15
Table 26.5. Calculation of linear regression
week  day t  (corr.) x_t  ĉ_t  d_t = x_t/ĉ_t  (t−t̄)²  (t−t̄)(d_t−d̄)
1 -20 4419 1.2457 3547 100 10686
1 -19 3821 1.1153 3426 81 10709
1 -18 3754 1.0889 3448 64 9347
1 -17 3910 1.1354 3444 49 8206
1 -16 4363 1.1786 3702 36 5484
1 -15 4518 1.2297 3674 25 4710
1 -14 (27.3̄) 0.0065 4192 16 1695
2 -13 (6190.4761) 1.2457 4970 9 -1060
2 -12 5755 1.1153 5160 4 -1088
2 -11 5352 1.0889 4915 1 -299
2 -10 5540 1.1354 4879 0 0
2 -9 5650 1.1786 4794 1 178
2 -8 6143 1.2297 4995 4 759
2 -7 (30.46̄) 0.0065 4673 9 170
3 -6 5158 1.2457 4141 16 -1901
3 -5 4779 1.1153 4285 25 -1655
3 -4 5464 1.0889 5018 36 2413
3 -3 5828 1.1354 5133 49 3620
3 -2 6714 1.1786 5697 64 8646
3 -1 7872 1.2297 6401 81 16068
3 0 42 0.0065 6442 100 18256
sums:  -210  96936  770  94943
instead of 21 observations of deseasonalized sales would have been available, thus providing a noticeably smaller sample for smoothing out randomness.
References
Hanke, J. E.; Wichern, D. E.; Reitsch, A. G. (2001) Business forecasting, 7th ed., New Jersey
Makridakis, S.; Wheelwright, S. C.; Hyndman, R. J. (1998) Forecasting: Methods and applications, 3rd ed., New York et al.
Silver, E. A.; Pyke, D. F.; Peterson, R. (1998) Inventory management and production planning and scheduling, 3rd ed., New York et al.
Waters, C. D. J. (1992) Inventory control and management, Chichester et al.
Winters, P. (1960) Forecasting sales by exponentially weighted moving averages, Management Science, Vol. 6, No. 3, 324–342
Wood, D.; Field, R. (1976) Forecasting for business: Methods and applications, London et al.
27 Linear and Mixed Integer Programming
Hartmut Stadtler
Darmstadt University of Technology, Department of Production & Supply Chain Management, Hochschulstraße 1, 64289 Darmstadt, Germany
Linear Programming (LP) is one of the most famous optimization techniques, introduced independently by Kantarowitsch in 1939 and by Dantzig in 1949 (Krekó, 1973). LP is applicable in decision situations where quantities (variables) can take any real values, restricted only by linear (in-)equalities, e. g.
for representing capacity constraints. Still, LP has turned out to be very useful for many companies so far. LP is used in APS, e. g. in Master Planning as well as in Distribution and Transport Planning. Very powerful solution algorithms (called solvers) have been developed, solving LP models with thousands of variables and constraints within a few minutes on a personal computer.
In case some decisions can only be expressed by integer values, e. g. the number of additional shifts for a given week, LP usually will not provide a feasible solution. Similarly, logical implications might be modelled by binary variables. As an example, consider the decision whether to set up a flow line for a certain product or not: a value of “0” will be attributed to the decision “no” and a value of “1” to “yes”. Still, the corresponding model may be described by linear (in-)equalities. In case the model solely consists of integer variables, it is called a pure Integer Programming (IP) model. If the model contains both real and integer variables, a Mixed Integer Programming (MIP) model is given.
Thus, both LP and MIP comprise special model types and associated solution algorithms. Numerous articles and textbooks have been written on LP and MIP (e. g. Martin (1999), Winston (1994) and Wolsey (1998)), representing a high level of knowledge which cannot be reviewed here. In order to give an understanding of LP and MIP, only the basic ideas will be provided in the following by means of an example.
First, an LP model is presented and solved graphically (Sect. 27.1). This model is then converted into an IP model and solved by Branch and Bound (Sect. 27.2), where for each submodel an LP model is solved graphically. Finally, a few remarks and recommendations regarding the effective use of LP and MIP complement this chapter (Sect. 27.3).