3.1.3 Panel data
Panel data have the dimensions of both time series and cross-sections – e.g. the monthly prices of a number of REITs in the United Kingdom, France and the Netherlands over two years. The estimation of panel regressions is an interesting and developing area, but will not be considered further in this text. Interested readers are directed to chapter 10 of Brooks (2008) and the references therein.
Fortunately, virtually all the standard techniques and analysis in econometrics are equally valid for time series and cross-sectional data. This book concentrates mainly on time series data and applications, however, since these are more prevalent in real estate. For time series data, it is usual to denote the individual observation numbers using the index t and the total number of observations available for analysis by T. For cross-sectional data, the individual observation numbers are indicated using the index i and the total number of observations available for analysis by N. Note that there is, in contrast to the time series case, no natural ordering of the observations in a cross-sectional sample. For example, the observations i might be on city office yields at a particular point in time, ordered alphabetically by city name. So, in the case of cross-sectional data, there is unlikely to be any useful information contained in the fact that Los Angeles follows London in a sample of city yields, since it is purely by chance that their names both begin with the letter ‘L’. On the other hand, in a time series context, the ordering of the data is relevant, as the data are usually ordered chronologically. In this book, where the context is not specific to only one type of data or the other, the two types of notation (i and N, or t and T) are used interchangeably.
3.1.4 Continuous and discrete data
As well as classifying data as being of the time series or cross-sectional type, we can also distinguish them as being either continuous or discrete, exactly as their labels would suggest. Continuous data can take on any value and are not confined to take specific numbers; their values are limited only by precision. For example, the initial yield on a real estate asset could be 6.2 per cent, 6.24 per cent, or 6.238 per cent, and so on. On the other hand, discrete data can take on only certain values, which are usually integers1 (whole numbers), and are often defined to be count numbers – for instance, the number of people working in offices, or the number of industrial units transacted in the last quarter. In these cases, having 2,013.5 workers or 6.7 units traded would not make sense.

1 Discretely measured data do not necessarily have to be integers. For example, until they became ‘decimalised’, many financial asset prices were quoted to the nearest 1/16th or 1/32nd of a dollar.
3.1.5 Cardinal, ordinal and nominal numbers
Another way in which we can classify numbers is according to whether they are cardinal, ordinal or nominal. This distinction is drawn in box 3.2.
Box 3.2 Cardinal, ordinal and nominal numbers
● Cardinal numbers are those for which the actual numerical values that a particular variable takes have meaning, and for which there is an equal distance between the numerical values.
● On the other hand, ordinal numbers can be interpreted only as providing a position or an ordering. Thus, for cardinal numbers, a figure of twelve implies a measure that is ‘twice as good’ as a figure of six. Examples of cardinal numbers would be the price of a REIT or of a building, and the number of houses in a street. On the other hand, for an ordinal scale, a figure of twelve may be viewed as ‘better’ than a figure of six, but could not be considered twice as good. Examples include the rankings of global office markets that real estate research firms may produce. Based on measures of liquidity, transparency, risk and other factors, a score is produced. Usually, in this scoring, an office centre ranking second in transparency cannot be said to be twice as transparent as the office market that ranks fourth.
● The final type of data that can be encountered would be when there is no natural ordering of the values at all, so a figure of twelve is simply different from a figure of six, but could not be considered to be better or worse in any sense. Such data often arise when numerical values are arbitrarily assigned, such as telephone numbers, or when codings are assigned to qualitative data (e.g., when describing the use of space, ‘1’ might be used to denote offices, ‘2’ to denote retail, ‘3’ to denote industrial, and so on). Sometimes such variables are called nominal variables.
● Cardinal, ordinal and nominal variables may require different modelling approaches or, at least, different treatments.
3.2 Descriptive statistics
When analysing a series containing many observations, it is useful to be able to describe the most important characteristics of the series using a small number of summary measures. This section discusses the quantities that are most commonly used to describe real estate and other series, which are known as summary statistics or descriptive statistics. Descriptive statistics are calculated from a sample of data rather than being assigned on the basis of theory. Before describing the most important summary statistics used in work with real estate data, we define the terms population and sample, which have precise meanings in statistics.
3.2.1 The population and the sample
The population is the total collection of all objects to be studied. For example, in the context of determining the relationship between risk and return for UK REITs, the population of interest would be all time series observations on all REIT stocks traded on the London Stock Exchange (LSE).
The population may be either finite or infinite, while a sample is a selection of just some items from the population. A population is finite if it contains a fixed number of elements. In general, either all the observations for the entire population will not be available, or they may be so many in number that it is infeasible to work with them, in which case a sample of data is taken for analysis. The sample is usually random, and it should be representative of the population of interest. A random sample is one in which each individual item in the population is equally likely to be drawn. A stratified sample is obtained when the population is split into layers or strata and the number of observations in each layer of the sample is set to try to match the corresponding number of elements in those layers of the population. The size of the sample is the number of observations that are available, or that the researcher decides to use, in estimating the parameters of the model.
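To make the distinction concrete, the short sketch below draws a simple random sample and a stratified sample from a small, entirely invented population of property yields; the strata, values and sample sizes are illustrative assumptions rather than anything taken from the text.

```python
import random

# Hypothetical population: yields (%) for properties in three invented strata.
population = {
    "office":     [5.2, 5.8, 6.1, 6.4, 5.9, 6.0],
    "retail":     [6.8, 7.1, 7.4, 6.9],
    "industrial": [7.9, 8.2, 8.0, 8.5, 8.3],
}
all_items = [y for yields in population.values() for y in yields]

random.seed(1)

# Simple random sample: every item in the population is equally likely to be drawn.
random_sample = random.sample(all_items, k=5)

# Stratified sample: draw from each stratum roughly in proportion to its size.
total = len(all_items)
stratified_sample = []
for stratum, yields in population.items():
    k = max(1, round(5 * len(yields) / total))  # sample size for this stratum
    stratified_sample.extend(random.sample(yields, k=k))

print("Random sample:    ", random_sample)
print("Stratified sample:", stratified_sample)
```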
3.2.2 Measures of central tendency
The average value of a series is sometimes known as its measure of location or measure of central tendency. The average value is usually thought to measure the ‘typical’ value of a series. There are a number of methods that can be used for calculating averages. The most well known of these is the arithmetic mean (usually just termed ‘the mean’), which is simply calculated as the sum of all values in the series divided by the number of values.
The two other methods for calculating the average of a series are the mode and the median. The mode measures the most frequently occurring value in a series, which is sometimes regarded as a more representative measure of the average than the arithmetic mean. Finally, the median is the middle value in a series when the elements are arranged in ascending order. For a symmetric distribution, the mean, mode and median will be coincident. For any non-symmetric distribution of points, however, the three summary measures will in general be different.
Each of these measures of average has its relative merits and demerits. The mean is the most familiar method to most researchers, but it can be unduly affected by extreme values, and, in such cases, it may not be representative of most of the data. The mode is, arguably, the easiest to obtain, but it is not suitable for continuous, non-integer data (e.g. returns or yields) or for distributions that incorporate two or more peaks (known as bimodal and multimodal distributions, respectively). The median is often considered to be a useful representation of the ‘typical’ value of a series, but it has the drawback that its calculation is based essentially on one observation. Thus if, for example, we had a series containing ten observations and we were to double the values of the top three data points, the median would be unchanged.
The geometric mean
There exists another method that can be used to estimate the average of a
series, known as the geometric mean It involves calculating the N th root of the product of N numbers In other words, if we want to find the geometric
mean of six numbers, we multiply them together and take the sixth root(i.e raise the product to the power of 1/6th)
In real estate investment, we usually deal with returns or percentagechanges rather than actual values, and the method for calculating the geo-metric mean just described cannot handle negative numbers Therefore weuse a slightly different approach in such cases To calculate the geometric
mean of a set of N returns, we express them as proportions (i.e on a (−1, 1)
scale) rather than percentages (on a (−100, 100) scale), and we would usethe formula
R G = [(1 + r1)(1+ r2) (1 + r N)]1/N− 1 (3.1)
where r1, r2, , r N are the returns and R G is the calculated value of thegeometric mean Hence, what we would do would be to add one to eachreturn, multiply the resulting expressions together, raise this product to
the power 1/N and then subtract one right at the end.
Which method for calculating the mean should we use, therefore? The answer is, as usual, ‘It depends.’ Geometric returns give the fixed return on the asset or portfolio that would have been required to match the actual performance, which is not the case for the arithmetic mean. Thus, if you assumed that the arithmetic mean return had been earned on the asset every year, you would not reach the correct value of the asset or portfolio at the end! It can be shown that the geometric return is always less than or equal to the arithmetic return, however, and so the geometric return is a downward-biased predictor of future performance. Hence, if the objective is to forecast future returns, the arithmetic mean is the one to use. Finally, it is worth noting that the geometric mean is evidently less intuitive and less commonly used than the arithmetic mean, but it is less affected by extreme outliers than the latter. There is an approximate relationship that holds between the arithmetic and geometric means, calculated using the same set of returns:
$R_G \approx R_A - \frac{1}{2}\sigma^2$    (3.2)

where $R_G$ and $R_A$ are the geometric and arithmetic means, respectively, and $\sigma^2$ is the variance of the returns.
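The calculation in equation (3.1), together with the approximate link between the geometric and arithmetic means, can be sketched as follows; the returns are invented and expressed as proportions.

```python
import statistics

# Invented annual returns, expressed as proportions (not percentages).
r = [0.08, -0.03, 0.12, 0.05, -0.02]
N = len(r)

arithmetic_mean = sum(r) / N

# Geometric mean, equation (3.1): add one to each return, take the product,
# raise it to the power 1/N and subtract one at the end.
product = 1.0
for ri in r:
    product *= (1 + ri)
geometric_mean = product ** (1 / N) - 1

# Approximate relationship: R_G is roughly R_A minus half the variance of the returns.
approx = arithmetic_mean - 0.5 * statistics.variance(r)

print(arithmetic_mean, geometric_mean, approx)
```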
3.2.3 Measures of spread
Usually, the average value of a series will be insufficient to characterise a data series adequately, since two series may have the same average but very different profiles, because the observations on one of the series may be much more widely spread about the mean than the other. Hence another important feature of a series is how dispersed its values are. In finance theory, for example, the more widely spread returns are around their mean value, the more risky the asset is usually considered to be, and the same principle applies in real estate. The simplest measure of spread is arguably the range, which is calculated by subtracting the smallest observation from the largest. While the range has some uses, it is fatally flawed as a measure of dispersion by its extreme sensitivity to an outlying observation.
A more reliable measure of spread, although it is not widely employed by quantitative analysts, is the semi-interquartile range, also sometimes known as the quartile deviation. Calculating this measure involves first ordering the data and then splitting the sample into four parts (quartiles) with equal numbers of observations. The second quartile will be exactly at the halfway point, and is known as the median, as described above. The semi-interquartile range focuses on the first and third quartiles, however, which will be at the quarter and three-quarter points in the ordered series, and which can be calculated respectively by the following:
$Q_1 = \frac{N+1}{4}\text{th ordered observation}$    (3.3)

$Q_3 = \frac{3(N+1)}{4}\text{th ordered observation}$    (3.4)

The semi-interquartile range is then half the difference between the two, $(Q_3 - Q_1)/2$.
This measure of spread is usually considered superior to the range, as it is not so heavily influenced by one or two extreme outliers that, by definition, would be right at the end of an ordered series and so would affect the range. The semi-interquartile range still only incorporates two of the observations in the entire sample, however, and thus another, more familiar measure of spread, the variance, is very widely used. It is interpreted as the average squared deviation of each data point about its mean value, and is calculated using the usual formula for the variance of a sample:
$\sigma^2 = \frac{\sum_{i=1}^{N}(y_i - \bar{y})^2}{N-1}$    (3.6)
Another measure of spread, the standard deviation, is calculated by taking the square root of equation (3.6):

$\sigma = \sqrt{\frac{\sum_{i=1}^{N}(y_i - \bar{y})^2}{N-1}}$    (3.7)

While there is little to choose between the variance and the standard deviation, the latter is sometimes preferred since it will have the same units
as the variable whose spread is being measured, whereas the variance will have units of the square of the variable. Both measures share the advantage that they encapsulate information from all the available data points, unlike the range and the quartile deviation, although they can also be heavily influenced by outliers, as for the range. The quartile deviation is an appropriate measure of spread if the median is used to define the average value of the series, while the variance or standard deviation will be appropriate if the arithmetic mean constitutes the adopted measure of central tendency.
Before moving on, it is worth discussing why the denominator in the formulae for the variance and standard deviation includes N − 1 rather than N, the sample size. Subtracting one from the number of available data points is known as a degrees of freedom correction, and this is necessary as the spread is being calculated about the mean of the series, and this mean has had to be estimated as well. Thus the spread measures described above are known as the sample variance and the sample standard deviation. Had we been observing the entire population of data rather than a mere sample from it, then the formulae would not need a degrees of freedom correction and we would divide by N rather than N − 1.
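A rough sketch of the spread measures discussed above on an invented series; the quartile positions follow equations (3.3) and (3.4), rounded to whole positions purely for illustration, and the sample variance uses the N − 1 divisor.

```python
# Invented series of returns (per cent).
y = [2.1, 3.4, -1.2, 5.6, 4.0, 0.8, 7.3, 2.9, 3.1, 6.2]
N = len(y)
ybar = sum(y) / N

# Range: largest minus smallest observation.
data_range = max(y) - min(y)

# Quartiles at the (N+1)/4 and 3(N+1)/4 positions of the ordered series,
# rounded to the nearest whole position here purely for illustration.
ordered = sorted(y)
q1 = ordered[round((N + 1) / 4) - 1]
q3 = ordered[round(3 * (N + 1) / 4) - 1]
semi_interquartile_range = (q3 - q1) / 2

# Sample variance and standard deviation, with the degrees of freedom correction.
sample_variance = sum((yi - ybar) ** 2 for yi in y) / (N - 1)
sample_std = sample_variance ** 0.5

print(data_range, semi_interquartile_range, sample_variance, sample_std)
```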
A further measure of dispersion is the negative semi-variance, which also gives rise to the negative semi-standard deviation. These measures use identical formulae to those described above for the variance and standard deviation, but, when calculating their values, only those observations for which $y_i < \bar{y}$ are used in the sum, and N now denotes the number of such observations. This measure is sometimes useful if the observations are not symmetric about their mean value (i.e. if the distribution is skewed; see the next section).3
A final statistic that has some uses for measuring dispersion is the coefficient of variation, CV. This is obtained by dividing the standard deviation by the arithmetic mean of the series:

$CV = \frac{\sigma}{\bar{y}}$
CV is useful when we want to make comparisons between series. Since the standard deviation has units of the series under investigation, it will scale with that series. Thus, if we wanted to compare the spread of monthly apartment rental values in Manhattan with those in Houston, using the standard deviation would be misleading, as the average rental value in Manhattan will be much bigger. By normalising the standard deviation, the coefficient of variation is a unit-free (dimensionless) measure of spread, and so could be used more appropriately to compare the rental values.
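A brief sketch of the coefficient of variation used to compare two series measured on very different scales; the rent figures are invented and merely stand in for the Manhattan/Houston comparison described above.

```python
def coefficient_of_variation(series):
    """Standard deviation divided by the arithmetic mean: a unit-free measure of spread."""
    n = len(series)
    mean = sum(series) / n
    variance = sum((x - mean) ** 2 for x in series) / (n - 1)
    return (variance ** 0.5) / mean

# Invented monthly apartment rents (in dollars) for two markets.
manhattan = [4200, 4500, 4800, 5100, 4400, 4700]
houston   = [1300, 1450, 1250, 1500, 1400, 1350]

# The standard deviations are not directly comparable, but the CVs are.
print(coefficient_of_variation(manhattan), coefficient_of_variation(houston))
```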
Example 3.1
We calculate the measures of spread described above for the annual office total return series in Frankfurt and Munich, which are presented in table 3.1. Annual total returns have ranged from −3.7 per cent to 11.3 per cent in Frankfurt and from −2.0 per cent to 13.3 per cent in Munich. Applying equation (3.3), the Q1 observation is the fourth observation – hence 0.8 and 2.1 for Frankfurt and Munich, respectively. The third quartile value is the thirteenth observation – that is, 9.9 and 9.5. We observe that Frankfurt returns have a lower mean and higher standard deviation than those for Munich. On both the variance and standard deviation measures, Frankfurt exhibits more volatility than Munich. This is confirmed by the coefficient of variation. The higher value for Frankfurt indicates a more volatile market (the standard deviation is nearly as large as the mean return), whereas, for Munich, the standard deviation is only 0.7 times the mean return. Note that if the mean return in Frankfurt had been much higher (say 7 per cent), all other metrics being equal, the coefficient of variation would have been lower than Munich’s.
3 Of course, we could also define the positive semi-variance, where only observations such that $y_i > \bar{y}$ are included in the sum.
Table 3.1 Summary statistics for Frankfurt and Munich returns
3.2.4 Higher moments

The mean and the variance alone cannot describe every feature of a distribution, however, and therefore we also need what are known as the higher moments of a series to characterise it fully. The mean and the variance are the first and second moments of a distribution, respectively, and the (standardised) third and fourth moments are known as the skewness and kurtosis, respectively. Skewness defines the shape of the distribution, and measures the extent to which it is not symmetric about its mean value. When the distribution of data is symmetric, the three methods for calculating the average (mean, mode and median) of the sample will be equal. If the distribution is positively skewed (when there is a long right-hand tail and most of the data are bunched over to the left), the ordering will be mean > median > mode, whereas, if the distribution is negatively skewed (a long left-hand tail and most of the data bunched on the right), the ordering will be the opposite. A normally distributed series has zero skewness (i.e. it is symmetric).
Kurtosis measures the fatness of the tails of the distribution and how peaked at the mean the series is. A normal distribution is defined to have a coefficient of kurtosis of three. It is possible to define a coefficient of excess kurtosis, equal to the coefficient of kurtosis minus three; a normal distribution will thus have a coefficient of excess kurtosis of zero. A normal distribution is said to be mesokurtic. Denoting the observations on a series by $y_i$ and their variance by $\sigma^2$, it can be shown that the coefficients of skewness and kurtosis can be calculated respectively as4

$\text{skew} = \frac{\frac{1}{N-1}\sum_{i=1}^{N}(y_i - \bar{y})^3}{(\sigma^2)^{3/2}}$    (3.9)

$\text{kurt} = \frac{\frac{1}{N-1}\sum_{i=1}^{N}(y_i - \bar{y})^4}{(\sigma^2)^{2}}$    (3.10)

A leptokurtic distribution is more likely to characterise real estate (and economic) time series, and to characterise the residuals from a time series model. In figure 3.2, the leptokurtic distribution is shown by the bold line, with the normal by the dotted line. There is a formal test for normality, and this is described and discussed in chapter 6.

4 Some statistical packages and texts apply a degrees of freedom correction in these formulae while others do not, so that the divisor in such cases would be N rather than N − 1 in the equations.
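A sketch of the skewness and kurtosis coefficients in the form given in equations (3.9) and (3.10), with an N − 1 divisor (see footnote 4); the return series is invented.

```python
def skewness_and_kurtosis(y):
    """Coefficients of skewness and kurtosis as in equations (3.9) and (3.10)."""
    n = len(y)
    ybar = sum(y) / n
    variance = sum((yi - ybar) ** 2 for yi in y) / (n - 1)
    skew = (sum((yi - ybar) ** 3 for yi in y) / (n - 1)) / variance ** 1.5
    kurt = (sum((yi - ybar) ** 4 for yi in y) / (n - 1)) / variance ** 2
    return skew, kurt

# Invented annual returns (per cent).
returns = [2.1, -0.5, 3.4, 1.2, 4.8, -1.1, 2.9, 3.6, 0.4, 5.2]

skew, kurt = skewness_and_kurtosis(returns)
excess_kurtosis = kurt - 3  # a normal distribution has a coefficient of kurtosis of three
print(skew, kurt, excess_kurtosis)
```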
Table 3.2 Skewness and kurtosis for Frankfurt and Munich

            Skewness    Kurtosis
Frankfurt     −0.2        1.9
Munich         0.0        2.2

We now apply equations (3.9) and (3.10) to estimate the skewness and kurtosis for the Frankfurt and Munich office returns given in table 3.1 (see table 3.2). Munich returns show no skewness and Frankfurt slightly negative skewness. Therefore returns in Munich are symmetric about their mean; in Frankfurt, however, the tail tends to be a bit longer in the negative direction. Both series have a flatter peak around their mean and thinner tails than a normal distribution – i.e. they are platykurtic. The flatness results from the data being less concentrated around their mean. Office returns in both cities are less concentrated around their means, and this is due to more volatility than usual. The values of 1.9 and 2.2 for the coefficient of kurtosis suggest that extreme values will not be highly likely, however.
3.2.5 Measures of association
There are two key descriptive statistics that are used for measuring the relationships between series: the covariance and the correlation.
Covariance
The covariance is a measure of linear association between two variables and represents the simplest and most common way to enumerate the relationship between them. It measures whether, on average, they move in the same direction (positive covariance) or in opposite directions (negative covariance), or have no association (zero covariance). The formula for calculating the covariance, $\sigma_{x,y}$, between two series, $x_i$ and $y_i$, is given by

$\sigma_{x,y} = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{N-1}$    (3.11)
A fundamental weakness of the covariance as a measure of association is that it scales with the two variances, so it has units of x × y. Thus, for example, multiplying all the values of series y by ten will increase the covariance tenfold, but it will not really increase the true association between the series, since they will be no more strongly related than they were before the rescaling. The implication is that the particular numerical value that the covariance takes has no useful interpretation on its own and hence is not particularly useful. The correlation, therefore, takes the covariance and standardises or normalises it so that it is unit-free. The result of this standardisation is that the correlation is bounded to lie on the (−1, 1) interval. A correlation of 1 (−1) indicates a perfect positive (negative) association between the series. The correlation measure, usually known as the correlation coefficient, is often denoted $\rho_{x,y}$, and is calculated as

$\rho_{x,y} = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{(N-1)\,\sigma_x \sigma_y} = \frac{\sigma_{x,y}}{\sigma_x \sigma_y}$    (3.12)

where $\sigma_x$ and $\sigma_y$ are the standard deviations of x and y, respectively. This measure is more strictly known as Pearson’s product moment correlation.
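A sketch of the sample covariance and Pearson correlation defined in equations (3.11) and (3.12), on two short invented series; the second print line illustrates the rescaling point made above.

```python
def covariance(x, y):
    """Sample covariance between two series of equal length, equation (3.11)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    return sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)

def correlation(x, y):
    """Pearson correlation: covariance scaled by the two standard deviations, equation (3.12)."""
    return covariance(x, y) / (covariance(x, x) ** 0.5 * covariance(y, y) ** 0.5)

# Invented yield series for two cities.
x = [5.1, 5.4, 5.9, 6.2, 6.0, 6.5]
y = [4.8, 5.0, 5.6, 6.1, 5.9, 6.3]

print(covariance(x, y), correlation(x, y))

# Rescaling y by ten multiplies the covariance by ten but leaves the correlation unchanged.
y_times_ten = [10 * yi for yi in y]
print(covariance(x, y_times_ten), correlation(x, y_times_ten))
```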
3.3 Probability and characteristics of probability distributions
The formulae presented above demonstrate how to calculate the mean and the variance of a given set of actual data. It is also useful to know how to work with the theoretical expressions for the mean and variance of a random variable, however. A random variable is one that can take on any value from a given set.
The mean of a random variable y is also known as its expected value, often written E(y). The properties of expected values are used widely in econometrics, and are listed below, referring to a random variable y.
writ-● The expected value of a constant (or a variable that is non-stochastic) is
the constant, e.g E(c) = c.
● The expected value of a constant multiplied by a random variable is equal
to the constant multiplied by the expected value of the variable: E(cy)=
c E(y) It can also be stated that E(c y + d) = (c E(y)) + d, where d is also a
constant
● For two independent random variables, y1and y2, E(y1y2)= E(y1) E(y2)
The variance of a random variable y is usually written var (y) The properties
of the ‘variance operator’, var, are as follows
● The variance of a random variable y is given by var (y) = E[y − E(y)]2
● The variance of a constant is zero: var (c)= 0
● For c and d constants, var (c y + d) = c2var (y).
● For two independent random variables, y1 and y2, var (c y1+ dy2)= c2
var (y1)+ d2var (y2)
The covariance between two random variables, y1and y2, may be expressed
as cov (y1, y2) The properties of the ‘covariance operator’ are as follows
● cov (y1, y2)= E[(y1− E(y1))(y2− E(y2))]
● For two independent random variables, y1and y2, cov (y1, y2)= 0
● For four constants, c, d, e and f , cov (c + dy1, e + fy2)= df cov (y1, y2)
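The properties listed above can be checked numerically by simulation; below is a rough sketch using two independent simulated random variables, with all constants and distributions chosen arbitrarily (equalities involving expectations hold only approximately in a finite simulated sample).

```python
import random

random.seed(0)
M = 100_000  # number of simulated draws

# Two independent random variables.
y1 = [random.gauss(2.0, 1.5) for _ in range(M)]
y2 = [random.uniform(-1.0, 3.0) for _ in range(M)]

def mean(z):
    return sum(z) / len(z)

def var(z):
    m = mean(z)
    return sum((zi - m) ** 2 for zi in z) / len(z)

c, d = 3.0, 5.0

# E(cy + d) = c E(y) + d
print(mean([c * a + d for a in y1]), c * mean(y1) + d)

# var(cy + d) = c^2 var(y)
print(var([c * a + d for a in y1]), c ** 2 * var(y1))

# For independent y1 and y2: E(y1 y2) is approximately E(y1) E(y2).
print(mean([a * b for a, b in zip(y1, y2)]), mean(y1) * mean(y2))
```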
It is often of interest to ask: ‘What is the probability that a random variable will take on a value within a given range?’ This information is given by a probability distribution. A probability is defined to lie between zero and one, with a probability of zero indicating an impossibility and one indicating a certainty.
There are many probability distributions, including the binomial, Poisson, log-normal, normal, exponential, t, Chi-squared and F. The most commonly used distribution to characterise a random variable is a normal or Gaussian (these terms are equivalent) distribution. The normal distribution is particularly useful, since it is symmetric, and the only pieces of information required to specify the distribution completely are its mean and variance, as discussed in section 3.2.4 above.
infor-The probability density function for a normal random variable with mean
µ and variance σ2is given by f (y) in the following expression:
f (y)= √1
2π e
−(y−µ)2/ 2σ2
(3.13)
Entering values of y into this expression would trace out the familiar ‘bell’
shape of the normal distribution, as shown in figure 3.3 below
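Expression (3.13) can be evaluated directly; the minimal sketch below computes the density at a few points for an assumed mean and variance.

```python
import math

def normal_pdf(y, mu=0.0, sigma2=1.0):
    """Probability density function of a normal random variable, equation (3.13)."""
    return (1.0 / math.sqrt(2 * math.pi * sigma2)) * math.exp(-(y - mu) ** 2 / (2 * sigma2))

# Evaluating the density at a few points traces out the familiar bell shape.
for y in [-3, -2, -1, 0, 1, 2, 3]:
    print(y, round(normal_pdf(y), 4))
```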
If a random sample of size N, y1, y2, y3, . . . , yN, is drawn from a population that is normally distributed with mean µ and variance σ², the sample mean, ȳ, is also normally distributed, with mean µ and variance σ²/N. In fact, an important rule in statistics, known as the central limit theorem, states that the sampling distribution of the mean of any random sample of observations will tend towards the normal distribution with mean equal to the population mean, µ, as the sample size tends to infinity. This theorem is a very powerful result, because it states that the sample mean, ȳ, will follow a normal distribution even if the original observations (y1, y2, . . . , yN) did not. This means that we can use the normal distribution as a kind of benchmark when testing hypotheses, as described in the following section.
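A sketch of the central limit theorem at work: sample means of draws from a decidedly non-normal (uniform) population; the sample size and number of replications are arbitrary choices.

```python
import random
import statistics

random.seed(42)
N = 50              # size of each sample
num_samples = 5000  # number of repeated samples

# Population: uniform on (0, 1), which is clearly not normal.
sample_means = [
    statistics.mean(random.random() for _ in range(N))
    for _ in range(num_samples)
]

# The distribution of sample means should be centred near the population mean (0.5),
# with standard deviation close to sigma / sqrt(N) = 0.2887 / sqrt(50) ≈ 0.041.
print(statistics.mean(sample_means), statistics.stdev(sample_means))
```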
3.4 Hypothesis testing
Real estate theory and experience will often suggest that certain parameters should take on particular values, or values within a given range. It is therefore of interest to determine whether the relationships expected from real estate theory are upheld by the data to hand or not. For example, estimates of the mean (average) and standard deviation will have been obtained from the sample, but these values are not of any particular interest; the population values that describe the true mean of the variable would be of more interest, but are never available. Instead, inferences are made concerning the likely population values from the parameters that have been estimated using the sample of data. In doing this, the aim is to determine whether the differences between the estimates that are actually obtained and the expectations arising from real estate theory are a long way from one another, in a statistical sense. Thus we could use any of the descriptive statistic measures discussed above (mean, variance, skewness, kurtosis, correlation, etc.) that were calculated from sample data to test the plausible population parameters given these sample statistics.
3.4.1 Hypothesis testing: some concepts
In the hypothesis-testing framework, there are always two hypotheses that go together, known as the null hypothesis (denoted H0, or occasionally HN) and the alternative hypothesis (denoted H1, or occasionally HA). The null hypothesis is the statement or the statistical hypothesis that is actually being tested. The alternative hypothesis represents the remaining outcomes of interest.
For example, suppose that we have estimated the sample mean of the price of some houses to be £153,000, but prior research had suggested that the mean value ought to be closer to £180,000. It is of interest to test the hypothesis that the true value of µ – i.e. the true but unknown population average house price – is in fact 180,000. The following notation would be used:

H0: µ = 180,000
H1: µ ≠ 180,000

This states that we are testing the hypothesis that the true but unknown value of µ is 180,000 against an alternative hypothesis that µ is not 180,000. This would be known as a two-sided test, since the outcomes of both µ < 180,000 and µ > 180,000 are subsumed under the alternative hypothesis.
Sometimes, some prior information may be available, suggesting for example that µ > 180,000 would be expected rather than µ < 180,000. In this case, µ < 180,000 is no longer of interest to us, and hence a one-sided test would be conducted:

H0: µ = 180,000
H1: µ > 180,000

Here, the null hypothesis that the true value of µ is 180,000 is being tested against a one-sided alternative that µ is more than 180,000.
On the other hand, one could envisage a situation in which there is prior information that µ < 180,000 was expected. In this case, the null and alternative hypotheses would be specified as

H0: µ = 180,000
H1: µ < 180,000

This prior information that leads us to conduct a one-sided test rather than a two-sided test should come from the real estate theory of the problem under consideration, and not from an examination of the estimated value of the coefficient. Note that there is always an equality under the null hypothesis. So, for example, µ < 180,000 would not be specified under the null hypothesis.
Broadly, if the value under the null hypothesis and the estimated value are far apart, the null hypothesis is more likely to be rejected; if the value under the null hypothesis and the estimated value are close to one another, the null hypothesis is less likely to be rejected. For example, consider µ = 180,000, as above. A hypothesis that the true value of µ is, say, 5,000 is more likely to be rejected than a null hypothesis that the true value of µ is 180,000. What is required now is a statistical decision rule that will permit the formal testing of such hypotheses.
In general, whether such null hypotheses are likely to be rejected will depend on three factors.
(1) The difference between the value under the null hypothesis, µ, and the estimated value, ȳ (in this case 180,000 and 153,000, respectively).
(2) The variability of the estimates within the sample, measured by the sample standard deviation, σ̂. In general, the larger this is, the more uncertainty there would be surrounding the average value; by contrast, if all the sample estimates were within the range (148,000, 161,000), we could be more sure that the null hypothesis is incorrect.
(3) The number of observations in the sample, N; as stated above, the more data points are contained within the sample, the more information we have, and the more reliable the sample average estimate will be. Ceteris paribus, the larger the sample size, the less evidence we would need against a null hypothesis to reject it, and so the more likely such a rejection is to occur.
If we take repeated samples of size N from a population that has a mean µ and a standard deviation σ, then the sample mean will be distributed with mean µ and standard deviation σ/√N. Suppose, for example, that we were interested in measuring the average transaction price of a three-bedroom house in a given area. If we drew many such samples and calculated the mean price for each, then, as the number of samples grew, this distribution of means would converge upon a normal distribution. This is an important definition, since it allows us to test hypotheses about the sample mean.
The way that we test hypotheses using the test of significance approach
would be to form a test statistic and then compare it with a critical value from a statistical table. If we assume that the population standard deviation, σ, is known, the test statistic will follow a normal distribution and we would obtain the appropriate critical value from the normal distribution tables. This will never be the case in practice, however, and therefore the following discussion refers to the situation when we need to obtain an estimate of σ, which we usually denote by s (or sometimes by σ̂). In this case, a different expression for the test statistic would be required, and the sample mean now follows a t-distribution with mean µ and variance σ²/(N − 1) rather than a normal distribution. The test statistic would follow a t-distribution and the relevant critical value would be obtained from the t-tables.
3.4.2 A note on the t- and the normal distributions
The normal distribution, shown in figure 3.3, should be familiar to readers. Note its characteristic ‘bell’ shape and its symmetry around the mean. A normal variate can be scaled to have zero mean and unit variance by subtracting its mean and dividing by its standard deviation.