to get the line that best 'fits' the data. The researcher would then be seeking to find the values of the parameters or coefficients, α and β, that would place the line as close as possible to all the data points taken together.
This equation (y = α + βx) is an exact one, however. Assuming that this equation is appropriate, if the values of α and β had been calculated, then, given a value of x, it would be possible to determine with certainty what the value of y would be. Imagine – a model that says with complete certainty what the value of one variable will be given any value of the other!
Clearly this model is not realistic. Statistically, it would correspond to the case in which the model fitted the data perfectly – that is, all the data points lay exactly on a straight line. To make the model more realistic, a random disturbance term, denoted by u, is added to the equation, thus:

y_t = α + βx_t + u_t

where the subscript t (= 1, 2, 3, ...) denotes the observation number.
The disturbance term can capture a number of features (see box 4.2).
● Even in the general case when there is more than one explanatory variable, some determinants of y_t will always in practice be omitted from the model. This might, for example, arise because the number of influences on y is too large to place in a single model, or because some determinants of y are unobservable or not measurable.
● There may be errors in the way that y is measured that cannot be modelled.
● There are bound to be random outside influences on y that, again, cannot be modelled. For example, natural disasters could affect real estate performance in a way that cannot be captured in a model and cannot be forecast reliably. Similarly, many researchers would argue that human behaviour has an inherent randomness and unpredictability!
How, then, are the appropriate values of α and β determined? α and β are chosen so that the (vertical) distances from the data points to the fitted line are minimised, so that the line fits the data as closely as possible; the parameters are thus chosen to minimise these distances collectively. This could be done by 'eyeballing' the data: for each set of variables y and x, one could form a scatter plot and draw on, by hand, a line that looks as if it fits the data well, as in figure 4.2.
Note that it is the vertical distances that are usually minimised, rather than the horizontal distances or those taken perpendicular to the line. This arises as a result of the assumption that x is fixed in repeated samples, so that the problem becomes one of determining the appropriate model for y given (or conditional upon) the observed values of x.
This procedure may be acceptable if only indicative results are required, but of course this method, as well as being tedious, is likely to be imprecise. The most common method used to fit a line to the data is known as ordinary least squares (OLS). This approach forms the workhorse of econometric model estimation, and is discussed in detail in this and subsequent chapters.
Figure 4.2: Scatter plot of two variables with a line of best fit chosen by eye
Two alternative estimation methods (for determining the appropriate values of the coefficients α and β) are the method of moments and the method of maximum likelihood. A generalised version of the method of moments, due to Hansen (1982), is popular, although the method of maximum likelihood is also widely employed.1
Suppose now, for ease of exposition, that the sample of data contains only five observations. The method of OLS entails taking each vertical distance from the point to the line, squaring it and then minimising the total sum of the squares (hence 'least squares'), as shown in figure 4.3. This can be viewed as equivalent to minimising the sum of the areas of the squares drawn from the points to the line.
Tightening up the notation, let y_t denote the actual data point for observation t, ŷ_t denote the fitted value from the regression line (in other words, for the given value of x of this observation t, ŷ_t is the value for y which the model would have predicted; note that a hat [ˆ] over a variable or parameter is used to denote a value estimated by a model) and û_t denote the residual, which is the difference between the actual value of y and the value fitted by the model – i.e. (y_t − ŷ_t). This is shown for just one observation t in figure 4.4.
What is done is to minimise the sum of the û_t². The reason that the sum of the squared distances is minimised rather than, for example, finding the sum of û_t that is as close to zero as possible is that, in the latter case, some points will lie above the line while others lie below it. Then, when the sum to be made as close to zero as possible is formed, the points above the line would count as positive values, while those below would count as negatives. These distances will therefore in large part cancel each other out, which would mean that one could fit virtually any line to the data, so long as the sum of the distances of the points above the line and the sum of the distances of the points below the line were the same. In that case, there would not be
1 Both methods are beyond the scope of this book, but see Brooks (2008, ch. 8) for a detailed discussion of the latter.
Figure 4.4: Plot of a single observation, together with the line of best fit, the residual and the fitted value
a unique solution for the estimated coefficients. In fact, any fitted line that goes through the mean of the observations (i.e. x̄, ȳ) would set the sum of the û_t to zero. On the other hand, taking the squared distances ensures that all deviations that enter the calculation are positive and therefore do not cancel out.
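The cancellation argument can be checked numerically: any line passing through the point of means (x̄, ȳ) makes the plain, unsquared residuals sum to exactly zero, however badly the line fits. A minimal sketch (the data values are invented for illustration):

```python
# Illustration: the plain (unsquared) residuals about ANY line through the
# point of means (x_bar, y_bar) sum to zero, however bad the fit is.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 1.0, 5.0, 4.0, 8.0]

x_bar = sum(xs) / len(xs)
y_bar = sum(ys) / len(ys)

def residual_sum(slope):
    """Sum of residuals for the line with the given slope through (x_bar, y_bar)."""
    intercept = y_bar - slope * x_bar
    return sum(y - (intercept + slope * x) for x, y in zip(xs, ys))

# Wildly different slopes, yet the residuals always cancel out:
sums = [residual_sum(b) for b in (-10.0, 0.0, 1.4, 25.0)]
```

This is precisely why the squared distances, which cannot cancel, are minimised instead.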
Minimising the sum of squared distances is given by minimising

(û₁² + û₂² + û₃² + û₄² + û₅²)

This sum is known as the residual sum of squares (RSS) or the sum of squared residuals. What is û_t, though? Again, it is the difference between the actual point and the line, y_t − ŷ_t. So minimising ∑ û_t² is equivalent to minimising ∑ (y_t − ŷ_t)².
Letting α̂ and β̂ denote the values of α and β selected by minimising the RSS, respectively, the equation for the fitted line is given by ŷ_t = α̂ + β̂x_t.
Now let L denote the RSS, which is also known as a loss function. Taking the summation over all the observations – i.e. from t = 1 to T, where T is the number of observations – gives

L = ∑ (y_t − ŷ_t)² = ∑ (y_t − α̂ − β̂x_t)²

L is minimised with respect to (w.r.t.) α̂ and β̂, to find the values of α and β that minimise the residual sum of squares and so give the line that is closest to the data. So L is differentiated w.r.t. α̂ and β̂, setting the first derivatives to zero. A derivation of the ordinary least squares estimator is given in the appendix to this chapter. The coefficient estimators for the slope and the intercept are given by

β̂ = ∑ (x_t − x̄)(y_t − ȳ) / ∑ (x_t − x̄)²   (4.4)

α̂ = ȳ − β̂x̄   (4.5)

These are the values of the two parameters, α̂ and β̂, that best fit the set of data. To reiterate, this method of finding the optimum is known as OLS. It is also worth noting that it is obvious from the equation for α̂ that the regression line will go through the mean of the observations – i.e. that the point (x̄, ȳ) lies on the regression line.
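The closed-form expressions can be computed directly. A short sketch in Python (the data are invented for illustration; no library routines are assumed):

```python
# OLS estimates computed directly from the closed-form expressions:
# beta_hat = sum((x_t - x_bar)(y_t - y_bar)) / sum((x_t - x_bar)^2)   -- (4.4)
# alpha_hat = y_bar - beta_hat * x_bar                                -- (4.5)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # invented data, for illustration only
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

T = len(xs)
x_bar = sum(xs) / T
y_bar = sum(ys) / T

beta_hat = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
            / sum((x - x_bar) ** 2 for x in xs))
alpha_hat = y_bar - beta_hat * x_bar

# Because alpha_hat = y_bar - beta_hat * x_bar, the fitted line necessarily
# passes through the point of means (x_bar, y_bar).
fitted_at_mean = alpha_hat + beta_hat * x_bar
```

Note how the final line confirms the remark above: the fitted value at x̄ is exactly ȳ.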
4.5 Some further terminology
4.5.1 The data-generating process, the population regression function and the sample regression function
The population regression function (PRF) is a description of the model that is thought to be generating the actual data, and it represents the true relationship between the variables. The population regression function is also known as the data-generating process (DGP). The PRF embodies the true values of α and β, and is expressed as

y_t = α + βx_t + u_t   (4.6)
Note that there is a disturbance term in this equation, so that, even if one had at one's disposal the entire population of observations on x and y, it would still in general not be possible to obtain a perfect fit of the line to the data. In some textbooks, a distinction is drawn between the PRF (the underlying true relationship between y and x) and the DGP (the process describing the way that the actual observations on y come about), but, in this book, the two terms are used synonymously.
The sample regression function (SRF) is the relationship that has been estimated using the sample observations, and is often written as

ŷ_t = α̂ + β̂x_t   (4.7)

Notice that there is no error or residual term in (4.7); all this equation states is that, given a particular value of x, multiplying it by β̂ and adding α̂ will
give the model fitted or expected value for y, denoted ŷ. It is also possible to write

y_t = ŷ_t + û_t   (4.8)

Equation (4.8) splits the observed value of y into two components: the fitted value from the model, and a residual term.
The SRF is used to infer likely values of the PRF. That is, the estimates α̂ and β̂ are constructed, for the sample of data at hand, but what is really of interest is the true relationship between x and y – in other words, the PRF is what is really wanted, but all that is ever available is the SRF! What can be done, however, is to say how likely it is, given the figures calculated for α̂ and β̂, that the corresponding population parameters take on certain values.
4.5.2 Estimator or estimate?
Estimators are the formulae used to calculate the coefficients – for example, the expressions given in (4.4) and (4.5) above – while the estimates, on the other hand, are the actual numerical values for the coefficients that are obtained from the sample.
Example 4.1
This example uses office rent and employment data of annual frequency. These are national series for the United Kingdom, and they are expressed as growth rates – that is, the year-on-year (yoy) percentage change. The rent series is expressed in real terms – that is, the impact of inflation has been extracted. The sample period starts in 1979 and the end value is for 2005, giving twenty-seven annual observations. The national office data provide an 'average' picture of the growth in real rents in the United Kingdom; it is expected that regions and individual markets have performed around this growth path. The rent series is constructed by the authors using UK office rent series from a number of real estate consultancies. The employment series is that for finance and business services published by the Office for National Statistics (ONS).
Assume that the analyst has some intuition that employment (in particular, employment growth) drives growth in real office rents. After all, in the existing literature, employment series (service sector employment or financial and business services employment) receive empirical support as a direct or indirect driver of office rents (see Giussani, Hsia and Tsolacos, 1993; D'Arcy, McGough and Tsolacos, 1997; and Hendershott, MacGregor and White, 2002). Employment in business and finance is a proxy for business conditions among firms occupying office space and their demand for office space.
A starting point to study the relationship between employment and real rent growth is a process of familiarisation with the path of the series through time (and possibly an examination of their statistical properties, although we do not do so in this example), and the two series are plotted in figure 4.5.
The growth rate of office rents fluctuated between nearly −25 per cent and 20 per cent during the sample period. This magnitude of variation in the growth rate is attributable to the severe cycle of the late 1980s/early 1990s in the United Kingdom, which also characterised office markets in other countries. The amplitude of the rent cycle in more recent years has lessened. Employment growth in financial and business services has been mostly positive in the United Kingdom, the exception being three years (1981, 1991 and 1992) when it was negative; the UK economy experienced a prolonged recession in the early 1990s. We observe greater volatility in employment growth in the early part of the sample than later. Panels (a) and (b) of figure 4.5 indicate that the two series have a general tendency to move together over time, so that they follow roughly the same cyclical pattern. The scatter plot of employment and real rent growth, shown in figure 4.6, reveals a positive relationship that conforms with our expectations.
The model to be estimated is

RRg_t = α + βEFBSg_t + u_t   (4.9)

where RRg_t is the growth in real rents at time t and EFBSg_t is the growth in employment in financial and business services at time t. Equation (4.9) embodies the true values of α and β, and u_t is the disturbance term. Estimating equation (4.9) over the sample period 1979 to 2005, we obtain the sample regression equation

R̂Rg_t = α̂ + β̂EFBSg_t = −9.62 + 3.27EFBSg_t   (4.10)

The coefficients α̂ and β̂ are computed based on the formulae (4.4) and (4.5) – that is,
applying (4.5), α̂ = 0.08 − 3.27 × 2.97 = −9.62, where 0.08 and 2.97 are the sample means of real rent growth and employment growth respectively. The sign of the coefficient estimate for β (3.27) is positive: when employment growth is positive, real rent growth is also expected to be positive. If we examine the data, however, we observe periods of positive employment growth associated with negative real rent growth (e.g. 1980, 1993, 1994, 2004). Such inconsistencies describe a minority of data points in the sample; otherwise, the sign on the employment coefficient would not have been positive. Thus it is worth noting that the regression estimate indicates that the relationship will be positive on average (loosely speaking, 'most of the time'), but not necessarily positive during every period.
The coefficient estimate of 3.27 is interpreted as saying that, if employment growth changes by one percentage point (from, say, 1.4 per cent to 2.4 per cent – i.e. employment growth accelerates by one percentage point), real rent growth will tend to change by 3.27 percentage points (from, say, 2 per cent to 5.27 per cent). The computed value of 3.27 is an average estimate over the sample period. In reality, when employment increases by 1 per cent, real rent growth will increase by over 3.27 per cent in some periods but less than 3.27 per cent in others. This is because all the other factors that affect rent growth do not remain constant from one period to the next. It is important to remember that, in our model, real rent growth depends on employment growth but also on the error term u_t, which embodies other influences on rents. The intercept term implies that employment growth of zero will tend on average to result in a fall in real rent growth of 9.62 per cent.
A word of caution is in order, however, concerning the reliability of estimates of the coefficient on the constant term. Although the strict interpretation of the intercept is indeed as stated above, in practice it is often
the case that there are no values of x (employment growth, in our example) close to zero in the sample. In such instances, estimates of the value of the intercept will be unreliable. For example, consider figure 4.7, which demonstrates a situation in which no points are close to the y-axis. In such cases, one could not expect to obtain robust estimates of the value of y when x is zero, as all the information in the sample pertains to the case in which x is considerably larger than zero.
Figure: Actual and fitted values (yoy %) and residuals for the RR regression
Similar caution should be exercised when producing predictions for y using values of x that are a long way outside the range of values in the sample. In example 4.1, employment growth takes values between −1.98 per cent and 6.74 per cent, only twice taking a value over 6 per cent. As a result, it would not be advisable to use this model to determine real rent growth if employment were to shrink by 4 per cent, for instance, or to increase by 8 per cent.
On the basis of the coefficient estimates of equation (4.10), we can generate the fitted values and examine how successfully the model replicates the actual real rent growth series. We calculate the fitted values for real rent growth as follows:

R̂Rg_t = −9.62 + 3.27EFBSg_t

The fitted values series replicates most of the important features of the actual values series. In particular years we observe a larger divergence – a finding that should be expected, as the environment (economic, real estate market) within which the relationship between rent growth and employment growth is studied is changing. The difference between the actual and fitted values produces the estimated residuals. The properties of the residuals are of great significance in evaluating a model; key misspecification tests are performed on these residuals. We study the properties of the residuals in detail in the following two chapters.
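The fitted-value calculation can be sketched directly from the estimated coefficients in (4.10). The employment-growth and rent-growth observations below are invented placeholders, not the chapter's actual series:

```python
# Fitted values and residuals from the estimated equation (4.10):
#   fitted RRg_t = -9.62 + 3.27 * EFBSg_t
alpha_hat, beta_hat = -9.62, 3.27

# Placeholder observations (yoy %), invented for illustration only.
efbsg = [1.0, 3.5, -1.5, 4.2]         # employment growth
rrg_actual = [-5.0, 2.0, -16.0, 5.5]  # real rent growth

rrg_fitted = [alpha_hat + beta_hat * x for x in efbsg]
# Residual = actual minus fitted, so actual = fitted + residual, as in (4.8).
residuals = [y - f for y, f in zip(rrg_actual, rrg_fitted)]
```

The residuals computed this way are exactly the quantities whose properties are examined by the misspecification tests mentioned above.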
4.6 Linearity and possible forms for the regression function
In order to use OLS, a model that is linear is required. This means that, in the simple bivariate case, the relationship between x and y must be capable of being expressed diagrammatically using a straight line. More specifically, the model must be linear in the parameters (α and β), but it does not necessarily have to be linear in the variables (y and x). By 'linear in the parameters', it is meant that the parameters are not multiplied together, divided, squared or cubed, etc.
Models that are not linear in the variables can often be made to take a linear form by applying a suitable transformation or manipulation. For example, consider an exponential regression model of the form

y_t = A x_t^β e^(u_t)

Taking logarithms of both sides turns this into a model that is linear in the parameters,

ln y_t = α + β ln x_t + u_t

where α = ln A, and the parameters can be estimated by OLS by regressing ln y on a constant and ln x. When, instead, the dependent variable enters in levels (as in equation (4.6)), the coefficients denote unit changes as described above.
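The logarithmic transformation can be verified with a small sketch: data are generated here, noise-free and purely for illustration, from y = A·x^β, and the usual OLS formulae applied to the logs recover β and ln A:

```python
import math

# A model non-linear in the variables, y = A * x**beta * exp(u), becomes
# linear in the parameters after taking logs: ln y = alpha + beta*ln x + u,
# with alpha = ln A. Noise-free illustrative data with A = 2, beta = 0.5.
A_true, beta_true = 2.0, 0.5
xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [A_true * x ** beta_true for x in xs]

lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
lx_bar = sum(lx) / len(lx)
ly_bar = sum(ly) / len(ly)

# The usual OLS formulae, applied to (ln x, ln y) instead of (x, y).
beta_hat = (sum((u - lx_bar) * (v - ly_bar) for u, v in zip(lx, ly))
            / sum((u - lx_bar) ** 2 for u in lx))
alpha_hat = ly_bar - beta_hat * lx_bar   # estimate of ln A
```

With no disturbance in the generated data, the estimates match the true values up to rounding error.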
Similarly, if theory suggests that x should be inversely related to y according to a model of the form

y_t = α + β(1/x_t) + u_t

the relationship can still be estimated using OLS by defining z_t = 1/x_t and regressing y on a constant and z. Clearly, then, a surprisingly varied array of models can be estimated using OLS by making suitable transformations to the variables. On the other hand, some models are intrinsically non-linear – e.g.

y_t = α + βx_t^γ + u_t
Such models cannot be estimated using OLS, but might be estimable using a non-linear estimation method.2

4.7 The assumptions underlying the classical linear regression model
The model y_t = α + βx_t + u_t that has been derived above, together with the assumptions listed below, is known as the classical linear regression model (CLRM). Data for x_t are observable, but, since y_t also depends on u_t, it is necessary to be specific about how the u_t are generated. The set of assumptions shown in box 4.3 is usually made concerning the u_t, the unobservable error or disturbance terms.
Technical notation — Interpretation
(1) E(u_t) = 0 — The errors have zero mean.
(2) var(u_t) = σ² < ∞ — The variance of the errors is constant and finite over all values of x_t.
(3) cov(u_i, u_j) = 0 — The errors are statistically independent of one another.
(4) cov(u_t, x_t) = 0 — There is no relationship between the error and the corresponding x variable.
Note that no assumptions are made concerning their observable counterparts, the estimated model's residuals. As long as assumption (1) holds, assumption (4) can be equivalently written E(x_t u_t) = 0. Both formulations imply that the regressor is orthogonal to (i.e. unrelated to) the error term. An alternative assumption to (4), which is slightly stronger, is that the x_t are non-stochastic or fixed in repeated samples. This means that there is no sampling variation in x_t, and that its value is determined outside the model.
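It is worth seeing that, while the assumptions concern the unobservable errors, the OLS first-order conditions force the residuals to satisfy the corresponding sample conditions exactly: ∑û_t = 0 and ∑x_t·û_t = 0. A quick numerical check (the data are invented for illustration):

```python
# The OLS first-order conditions make the residuals satisfy the sample
# analogues of assumptions (1) and (4): sum(u_hat) = 0 and sum(x * u_hat) = 0.
# (The assumptions themselves concern the unobservable errors u_t.)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # invented data
ys = [1.2, 2.1, 2.7, 4.4, 4.6]

T = len(xs)
x_bar, y_bar = sum(xs) / T, sum(ys) / T
beta_hat = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
            / sum((x - x_bar) ** 2 for x in xs))
alpha_hat = y_bar - beta_hat * x_bar

resid = [y - (alpha_hat + beta_hat * x) for x, y in zip(xs, ys)]
sum_resid = sum(resid)                                # zero up to rounding
sum_x_resid = sum(x * u for x, u in zip(xs, resid))   # zero up to rounding
```

This is why the residuals themselves can never be used to test assumptions (1) and (4) directly: they satisfy the sample versions by construction.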
A fifth assumption is required to make valid inferences about the population parameters (the actual α and β) from the sample parameters (α̂ and β̂) estimated using a finite amount of data:

(5) u_t ∼ N(0, σ²) — u_t is normally distributed.
2 See chapter 8 of Brooks (2008) for a discussion of one such method, maximum likelihood estimation.
4.8 Properties of the OLS estimator

If assumptions (1) to (4) hold, then the estimators α̂ and β̂ determined by OLS will have a number of desirable properties; such an estimator is known as a best linear unbiased estimator (BLUE). What does this acronym represent?
● ‘Estimator’ means that ˆα and ˆβ are estimators of the true value of α and β.
● ‘Linear’ means that ˆαand ˆβare linear estimators, meaning that the mulae for ˆαand ˆβ are linear combinations of the random variables (in
for-this case, y).
● ‘Unbiased’ means that, on average, the actual values of ˆαand ˆβ will beequal to their true values
● ‘Best’ means that the OLS estimator ˆβhas minimum variance among theclass of linear unbiased estimators; the Gauss–Markov theorem provesthat the OLS estimator is best by examining an arbitrary alternative linearunbiased estimator and showing in all cases that it must have a variance
no smaller than the OLS estimator
Under assumptions (1) to (4) listed above, the OLS estimator can be shown to have the desirable properties that it is consistent, unbiased and efficient. This is, essentially, another way of stating that the estimator is BLUE. These three properties will now be discussed in turn.

4.8.1 Consistency

Consistency means that the probability of the estimator being more than some arbitrary fixed distance δ away from its true value tends to zero as the sample size tends to infinity, for all positive values of δ. In the limit (i.e. for an infinite number of observations), the probability of the estimator being different from the true value is zero – that is, the estimates will converge to their true values as the sample size increases to infinity. Consistency is thus a large-sample, or asymptotic, property. The assumptions that E(x_t u_t) = 0 and var(u_t) = σ² < ∞ are sufficient to derive the consistency of the OLS estimator.
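Consistency can be illustrated (though of course not proved) by simulation: when the data-generating process is known, the slope estimate from a very large sample sits far closer to the truth than small-sample estimates typically do. A sketch, with a made-up DGP y_t = 1 + 2x_t + u_t:

```python
import random

# Simulation sketch of consistency. The DGP is invented for illustration:
# y_t = 1 + 2 * x_t + u_t, with u_t ~ N(0, 1) and x_t uniform on [0, 10].
random.seed(0)

def ols_slope(T):
    """Draw a sample of size T from the DGP and return the OLS slope estimate."""
    xs = [random.uniform(0.0, 10.0) for _ in range(T)]
    ys = [1.0 + 2.0 * x + random.gauss(0.0, 1.0) for x in xs]
    x_bar = sum(xs) / T
    y_bar = sum(ys) / T
    return (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
            / sum((x - x_bar) ** 2 for x in xs))

small_sample = ols_slope(20)       # a noisy estimate of the true slope (2.0)
large_sample = ols_slope(20_000)   # concentrates tightly around 2.0
```

Repeating the exercise with ever larger T shows the estimates clustering ever more tightly around the true slope, which is exactly the asymptotic property described above.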
Trang 14that E(u t) = 0 Clearly, unbiasedness is a stronger condition than tency, since it holds for small as well as large samples (i.e for all samplesizes).
consis-4.8.3 Efficiency
An estimator β̂ of a parameter β is said to be efficient if no other estimator has a smaller variance. Broadly speaking, if the estimator is efficient, it will be minimising the probability that it is a long way off from the true value of β. In other words, if the estimator is 'best', the uncertainty associated with estimation will be minimised for the class of linear unbiased estimators. A technical way to state this would be to say that an efficient estimator would have a probability distribution that is narrowly dispersed around the true value.
4.9 Precision and standard errors
Any set of regression estimates α̂ and β̂ is specific to the sample used in their estimation. In other words, if a different sample of data was selected from within the population, the data points (the x_t and y_t) would be different, leading to different values of the OLS estimates.
Recall that the OLS estimators (α̂ and β̂) are given by (4.4) and (4.5). It would be desirable to have an idea of how 'good' these estimates of α and β are, in the sense of having some measure of the reliability or precision of the estimators (α̂ and β̂). It is therefore useful to know whether one can have confidence in the estimates, and whether they are likely to vary much from one sample to another sample within the given population. An idea of the sampling variability, and hence of the precision of the estimates, can be calculated using only the sample of data available. This estimate of the
precision of a coefficient is given by its standard error. Given assumptions (1) to (4) above, valid estimators of the standard errors can be shown to be given by

SE(α̂) = s √( ∑x_t² / (T ∑(x_t − x̄)²) )   (4.20)

SE(β̂) = s √( 1 / ∑(x_t − x̄)² )   (4.21)

where s is the estimated standard deviation of the residuals (see below). These formulae are derived in the appendix to this chapter.
It is worth noting that the standard errors give only a general indication of the likely accuracy of the regression parameters. They do not show how accurate a particular set of coefficient estimates is; if the standard errors are small, it shows that the coefficients are likely to be precise on average, not how precise they are for this particular sample. Thus standard errors give a measure of the degree of uncertainty in the estimated values for the coefficients. It can be seen that they are a function of the actual observations on the explanatory variable, x, the sample size, T, and another term, s. The last of these is an estimate of the standard deviation of the disturbance term. The actual variance of the disturbance term is usually denoted by σ². How can an estimate of σ² be obtained?
4.9.1 Estimating the variance of the error term (σ²)

From elementary statistics, the variance of a random variable u_t is given by

var(u_t) = E[(u_t − E(u_t))²]   (4.22)

Assumption (1) of the CLRM was that the expected or average value of the errors is zero. Under this assumption, (4.22) above reduces to

var(u_t) = E[u_t²]   (4.23)

What is required, therefore, is an estimate of the average value of u_t², which could be calculated as

s² = (1/T) ∑ u_t²   (4.24)

Unfortunately, (4.24) is not workable, since u_t is a series of population disturbances, which is not observable. Thus the sample counterpart to u_t, which is û_t, is used:

s² = (1/T) ∑ û_t²   (4.25)

s is also known as the standard error of the regression or the standard error of the estimate. It is sometimes used as a broad measure of the fit of the regression equation. Everything else being equal, the smaller this quantity is, the closer the fit of the line is to the actual data.
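The role of s as a broad fit measure can be illustrated with two invented series around the same upward trend, one hugging a straight line and one much noisier (the T − 2 divisor below is the conventional degrees-of-freedom choice, assumed here):

```python
import math

# Two invented series on the same x values: the tighter fit yields smaller s.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
y_close = [2.0, 4.0, 6.1, 8.0, 9.9]    # nearly on a straight line
y_loose = [2.0, 7.0, 3.0, 11.0, 8.0]   # same rough trend, far noisier

def std_error_of_regression(xs, ys):
    """Fit by OLS and return s (T - 2 divisor assumed, as is conventional)."""
    T = len(xs)
    x_bar, y_bar = sum(xs) / T, sum(ys) / T
    beta = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
            / sum((x - x_bar) ** 2 for x in xs))
    alpha = y_bar - beta * x_bar
    rss = sum((y - (alpha + beta * x)) ** 2 for x, y in zip(xs, ys))
    return math.sqrt(rss / (T - 2))

s_close = std_error_of_regression(xs, y_close)
s_loose = std_error_of_regression(xs, y_loose)
```

Everything else being equal, the series whose points sit closer to its fitted line produces the smaller s, exactly as described above.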
4.9.2 Some comments on the standard error estimators
It is possible, of course, to derive the formulae for the standard errors of thecoefficient estimates from first principles using some algebra, and this is left
to the appendix to this chapter Some general intuition is now given as towhy the formulae for the standard errors given by (4.20) and (4.21) containthe terms that they do and in the form that they do The presentation offered
in box 4.4 loosely follows that of Hill, Griffiths and Judge (1997), which isvery clear
(1) The larger the sample size, T, the smaller the coefficient standard errors will be. T appears explicitly in SE(α̂) and implicitly in SE(β̂): implicitly, because the sum ∑(x_t − x̄)² runs from t = 1 to T. The reason for this is simply that, at least for now, it is assumed that every observation on a series represents a piece of useful information that can be used to help determine the coefficient estimates. Therefore, the larger the size of the sample, the more information will have been used in the estimation of the parameters, and hence the more confidence will be placed in those estimates.
(2) Both SE(α̂) and SE(β̂) depend on s² (or s). Recall from above that s² is the estimate of the error variance. The larger this quantity is, the more dispersed the residuals are, and so the greater the uncertainty is in the model. If s² is large, the data points are, collectively, a long way away from the line.