CONTENTS

LIST OF CONTRIBUTORS vii

INTRODUCTION
Thomas B. Fomby and R. Carter Hill ix

A COMPARATIVE STUDY OF PURE AND PRETEST ESTIMATORS FOR A POSSIBLY MISSPECIFIED TWO-WAY ERROR COMPONENT MODEL
Badi H. Baltagi, Georges Bresson and Alain Pirotte 1

TESTS OF COMMON DETERMINISTIC TREND SLOPES APPLIED TO QUARTERLY GLOBAL TEMPERATURE DATA
Thomas B. Fomby and Timothy J. Vogelsang 29

THE SANDWICH ESTIMATE OF VARIANCE
James W. Hardin

TEST STATISTICS AND CRITICAL VALUES IN SELECTIVITY MODELS
R. Carter Hill, Lee C. Adkins and Keith A. Bender 75

ESTIMATION, INFERENCE, AND SPECIFICATION TESTING FOR POSSIBLY MISSPECIFIED QUANTILE REGRESSION
Tae-Hwan Kim and Halbert White

QUASI-MAXIMUM LIKELIHOOD ESTIMATION WITH BOUNDED SYMMETRIC ERRORS
Douglas Miller, James Eales and Paul Preckel 133

CONSISTENT QUASI-MAXIMUM LIKELIHOOD ESTIMATION WITH LIMITED INFORMATION
Douglas Miller and Sang-Hak Lee

AN EXAMINATION OF THE SIGN AND VOLATILITY SWITCHING ARCH MODELS UNDER ALTERNATIVE DISTRIBUTIONAL ASSUMPTIONS
Mohammed F. Omran and Florin Avram

ESTIMATING A LINEAR EXPONENTIAL DENSITY WHEN THE WEIGHTING MATRIX AND MEAN PARAMETER VECTOR ARE FUNCTIONALLY RELATED
Chor-yiu Sin
LIST OF CONTRIBUTORS

Lee C. Adkins Oklahoma State University, Stillwater, USA
Florin Avram Université de Pau, France
Badi H. Baltagi Texas A&M University, College Station, USA
Keith A. Bender University of Wisconsin-Milwaukee, Milwaukee, USA
Georges Bresson Université Paris II, Paris, France
James Eales Purdue University, West Lafayette, USA
Thomas B. Fomby Southern Methodist University, Dallas, USA
James W. Hardin University of South Carolina, Columbia, USA
R. Carter Hill Louisiana State University, Baton Rouge, USA
Tae-Hwan Kim University of Nottingham, Nottingham, UK
Sang-Hak Lee Purdue University, West Lafayette, USA
Douglas Miller Purdue University, West Lafayette, USA
Mohammed F. Omran University of Sharjah, UAE
Alain Pirotte Université de Valenciennes and Université Paris II, Paris, France
Paul Preckel Purdue University, West Lafayette, USA
Chor-yiu Sin Hong Kong Baptist University, Hong Kong
Timothy J. Vogelsang Cornell University, Ithaca, USA
Halbert White University of California, San Diego, USA
Tiemen Woutersen University of Western Ontario, Ontario, Canada
INTRODUCTION

It is our pleasure to bring you a volume of papers which follow in the tradition of the seminal work of Halbert White, especially his work in Econometrica (1980, 1982) and his Econometric Society Monograph No. 22, Estimation, Inference and Specification Analysis (1994). Approximately 20 years have passed since White's initial work on heteroskedasticity-consistent covariance matrix estimation and on maximum likelihood estimation in the presence of misspecified models, so-called quasi-maximum likelihood (QMLE) estimation. Over this time, much has been written on these and related topics, many contributions being by Hal himself. For example, following Hal's pure heteroskedasticity robust estimation work, Newey and West (1987) extended robust estimation to autocorrelated data. Extensions and refinements of these themes continue today. There is no econometric package that we know of today that does not have some provision for robust standard errors in most of the estimation methods offered.
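Hal's HC0 estimator itself is a one-liner: with OLS residuals e, the robust covariance is (X'X)⁻¹ X' diag(e²) X (X'X)⁻¹. A minimal NumPy sketch (our own illustrative code and variable names, not drawn from any package discussed in this volume) is:

```python
import numpy as np

def hc0_cov(X, e):
    """White (1980) HC0 covariance: (X'X)^-1 X' diag(e^2) X (X'X)^-1."""
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * (e ** 2)[:, None])
    return bread @ meat @ bread

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
# heteroskedastic disturbances: spread grows with |x|
y = 1.0 + 2.0 * x + (1.0 + np.abs(x)) * rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta_hat

se_hc0 = np.sqrt(np.diag(hc0_cov(X, e)))
se_classical = np.sqrt(np.diag(np.linalg.inv(X.T @ X) * (e @ e) / (n - 2)))
```

Under heteroskedasticity of this form, the robust standard errors are consistent while the classical ones are not; the Newey and West (1987) extension replaces the diagonal "meat" with a kernel-weighted sum of autocovariances.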
All of these innovations can be credited to the germinating ideas produced by Hal in his econometric research. Thus, we offer this volume in recognition of the pioneering work that he has done in the past and that has proved to be so wonderfully useful in empirical research. We look forward to seeing Hal's work continuing well into the future, yielding, we are sure, many more useful econometric techniques that will be robust to misspecifications of the sort we often face in empirical problems in economics and elsewhere.
Now let us turn to a brief review of the contents of this volume.
In the spirit of White (1982), Baltagi, Bresson and Pirotte, in their paper entitled "A Comparative Study of Pure and Pretest Estimators for a Possibly Misspecified Two-way Error Component Model," examine the consequences of model misspecification using a panel data regression model. Maximum likelihood, random and fixed effects estimators are compared using Monte Carlo experiments under normality of the disturbances but with a possibly misspecified variance-covariance matrix. In the presence of perfect foresight on the form of the variance-covariance matrix, GLS (maximum likelihood) is always the best in MSE terms. However, in the absence of perfect foresight (the more typical case), the authors show that a pretest estimator is a viable alternative given that its performance is a close second to correct GLS whether the true specification is a two-way, a one-way error component or a pooled regression model. The authors further show that incorrect GLS, maximum likelihood, or fixed effects estimators may lead to a big loss in mean square error.
In their paper "Tests of Common Deterministic Trend Slopes Applied to Quarterly Global Temperature Data," Fomby and Vogelsang apply the multivariate trend-slope tests of Franses and Vogelsang (2002) to compare global warming trends both within and across the hemispheres of the globe. They find that globally and within hemispheres the seasons appear not to be warming equally fast. In particular, winters appear to be warming faster than summers. Across hemispheres, it appears that the winters in the northern and southern hemispheres are warming equally fast whereas the remaining seasons appear to have unequal warming rates.
In his paper "The Sandwich Estimate of Variance," Hardin examines the history, development, and application of the sandwich estimate of variance. In describing this estimator he pays attention to applications that have appeared in the literature and examines the nature of the problems for which this estimator is used. He also describes various adjustments to the estimate for use with small samples and illustrates the estimator's construction for a variety of models.
In their paper "Test Statistics and Critical Values in Selectivity Models," Hill, Adkins, and Bender examine the finite sample properties of alternative covariance matrix estimators of the Heckman (1979) two-step estimator (Heckit) for the selectivity model so widely used in economics and other social sciences. The authors find that, in terms of how the alternative versions of asymptotic variance-covariance matrices used in selectivity models capture the finite sample variability of the Heckit two-step estimator, the answer depends on the degree of censoring and on whether the explanatory variables in the selection and regression equations differ or not. With severe censoring, and if the explanatory variables in the two equations are identical, then none of the asymptotic standard error formulations is reliable in small samples. In larger samples the bootstrap does a good job in reflecting estimator variability, as does a version of the White heteroskedasticity-consistent estimator. With respect to finite sample inference, the bootstrap standard errors seem to match the nominal standard errors computed from asymptotic covariance matrices unless censoring is severe and there is not much difference in the explanatory variables in the selection and regression equations. Most importantly, the critical values of the pivotal bootstrap t-statistics lead to better test size than those based on usual asymptotic theory.
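The pivotal bootstrap-t idea can be sketched outside the selectivity setting. The following illustration (hypothetical data and our own function names; a plain OLS slope stands in for the Heckit coefficient) resamples the studentized statistic and reads critical values from its empirical quantiles:

```python
import numpy as np

rng = np.random.default_rng(1)
n, B = 100, 499                       # sample size, bootstrap draws

x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 0.5 + 1.0 * x + rng.normal(size=n)

def slope_and_se(Xm, ym):
    """OLS slope and its conventional standard error."""
    b = np.linalg.lstsq(Xm, ym, rcond=None)[0]
    e = ym - Xm @ b
    s2 = e @ e / (len(ym) - Xm.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(Xm.T @ Xm)[1, 1])
    return b[1], se

b_hat, se_hat = slope_and_se(X, y)

# pairs bootstrap of the studentized (pivotal) slope statistic
t_star = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, size=n)
    b_s, se_s = slope_and_se(X[idx], y[idx])
    t_star[i] = (b_s - b_hat) / se_s

crit_lo, crit_hi = np.quantile(t_star, [0.025, 0.975])
```

Because the statistic is studentized in each resample, its bootstrap distribution typically approximates the finite sample null distribution better than the standard normal critical values, which is the source of the size improvement the authors report.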
To date, the literature on quantile regression and least absolute deviation regression has assumed either explicitly or implicitly that the conditional quantile regression model is correctly specified. In their paper "Estimation, Inference, and Specification Testing for Possibly Misspecified Quantile Regression," Kim and White allow for possible misspecification of a linear conditional quantile regression model. They obtain consistency of the quantile estimator for certain "pseudo-true" parameter values and asymptotic normality of the quantile estimator when the model is misspecified. In this case, the asymptotic covariance matrix has a novel form, not seen in earlier work, and they provide a consistent estimator of the asymptotic covariance matrix. They also propose a quick and simple test for conditional quantile misspecification based on the quantile residuals.

Miller, Eales, and Preckel propose in their paper "Quasi-Maximum Likelihood Estimation with Bounded Symmetric Errors" a QMLE estimator for the location parameters of a linear regression model with bounded and symmetrically distributed errors. The error outcomes are restated as a convex combination of the bounds, and they use the method of maximum entropy to derive the quasi-log likelihood function. Under the stated model assumptions, they show that the proposed estimator is unbiased, consistent, and asymptotically normal. Miller, Eales, and Preckel then conduct a series of Monte Carlo exercises designed to illustrate the sampling properties of the QMLE relative to the least squares estimator. Although the least squares estimator has smaller quadratic risk under normal and skewed error processes, the proposed QML estimator dominates least squares for the bounded and symmetric error distribution considered in their paper.
In their paper "Consistent Quasi-Maximum Likelihood Estimation with Limited Information," Miller and Lee use the minimum cross-entropy method to derive an approximate joint probability model for a multivariate economic process based on limited information about the marginal quasi-density functions and the joint moment conditions. The modeling approach is related to joint probability models derived from copula functions. They note, however, that the entropy approach has some practical advantages over copula-based models. Under suitable regularity conditions, the authors show that the quasi-maximum likelihood estimator (QMLE) of the model parameters is consistent and asymptotically normal. They demonstrate the procedure with an application to the joint probability model of trading volume and price variability for the Chicago Board of Trade soybean futures contract.
There is growing evidence in the financial economics literature that the response of current volatility in financial data to past shocks is asymmetric, with negative shocks having more impact on current volatility than positive shocks. In their paper "An Examination of the Sign and Volatility Switching ARCH Models Under Alternative Distributional Assumptions," Omran and Avram investigate the asymmetric ARCH models of Glosten et al. (1993) and Fornari and Mele (1997) and the sensitivity of these models to the assumption of normality in the innovations. Omran and Avram hedge against the possibility of misspecification by basing the inferences on the robust variance-covariance matrix suggested by White (1982). Their results suggest that using more flexible distributional assumptions on financial data can have a significant impact on the inferences drawn from asymmetric ARCH models.
Gourieroux et al. (1984) investigate the consistency of the parameters in the conditional mean, ignoring or misspecifying other features of the true conditional density. They show that it suffices to have a QMLE of a density from the linear exponential family (LEF). Conversely, a necessary condition for a QMLE being consistent for the parameters in the conditional mean is that the likelihood function belongs to the LEF. As a natural extension, in Chapter 5 of his book White (1994) shows that the Gourieroux et al. (1984) results carry over to dynamic models with possibly serially correlated and/or heteroskedastic errors. In his paper "Estimating a Linear Exponential Density when the Weighting Matrix and Mean Parameter Vector are Functionally Related," Sin shows that the above results do not hold when the weighting matrix of the density and the mean parameter vector are functionally related. A prominent example is an autoregressive moving-average (ARMA) model with generalized autoregressive conditional heteroskedasticity (GARCH) errors. However, correct specification of the conditional variance adds conditional moment conditions for estimating the parameters of the conditional mean. Based on the recent literature on efficient instrumental variables estimation (IVE) and the generalized method of moments (GMM), the author proposes an estimator that is based on the QMLE of a density from the quadratic exponential family (QEF). The asymptotic variance of this modified QMLE attains the lower bound for minimax risk. The modeling approach also accommodates the GARCH-in-mean (GARCH-M) model.
In his paper "Testing in GMM Models Without Truncation," Vogelsang proposes a new approach to testing in the generalized method of moments (GMM) framework. The new tests are constructed using heteroskedasticity autocorrelation (HAC) robust standard errors computed using nonparametric spectral density estimators without truncation. While such standard errors are not consistent, a new asymptotic theory shows that they nonetheless lead to valid tests. In an over-identified linear instrumental variables model, simulations suggest that the new tests and the associated limiting distribution theory provide a more accurate first order asymptotic null approximation than both standard nonparametric HAC robust tests and VAR-based parametric HAC robust tests. Finite sample power of the new tests is shown to be comparable to that of standard tests.
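A schematic of the "no truncation" idea: estimate the long-run variance with a Bartlett kernel whose bandwidth equals the full sample, so no autocovariance is cut off. This sketch is only illustrative (our own function name); the actual GMM statistics and their nonstandard critical values are developed in Vogelsang's paper:

```python
import numpy as np

def bartlett_lrv_full_bandwidth(v):
    """Bartlett-kernel long-run variance estimate with bandwidth equal to
    the sample size, i.e. no truncation of the autocovariances."""
    v = np.asarray(v, dtype=float)
    T = v.size
    v = v - v.mean()
    lrv = (v @ v) / T                      # gamma_hat(0)
    for j in range(1, T):
        gamma_j = (v[j:] @ v[:-j]) / T     # gamma_hat(j)
        lrv += 2.0 * (1.0 - j / T) * gamma_j
    return lrv

rng = np.random.default_rng(3)
u = rng.normal(size=200)
omega2 = bartlett_lrv_full_bandwidth(u)
```

Because the bandwidth grows one-for-one with the sample, this estimator does not converge to the true long-run variance; it converges to a random limit, which is why the resulting test statistics require the nonstandard critical values derived in the paper rather than normal or chi-square tables.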
In applied work, economists analyze individuals or firms that differ in observed and unobserved ways. These unobserved differences are usually referred to as heterogeneity, and one can control for the heterogeneity in panel data by allowing for time-invariant, individual-specific parameters. This fixed effect approach introduces many parameters into the model, which causes the "incidental parameter problem": the maximum likelihood estimator is in general inconsistent. Woutersen (2001) shows how to approximately separate the parameters of interest from the fixed effects using a reparameterization. He then shows how a Bayesian method gives a general solution to the incidental parameter problem for correctly specified models. In his paper in this volume, "Bayesian Analysis of Misspecified Models with Fixed Effects," Woutersen extends his 2001 work to misspecified models, showing that the score of the integrated likelihood is zero at the true values of the parameters. He then derives the conditions under which a Bayesian estimator converges at the rate of √N, where N is the number of individuals. Under these conditions, Woutersen shows that the variance-covariance matrix of the Bayesian estimator has the form of White (1982). He goes on to illustrate the approach by analyzing the dynamic linear model with fixed effects and a duration model with fixed effects.
Thomas B. Fomby and R. Carter Hill
Co-editors
REFERENCES
Fornari, F., & Mele, A. (1997). Sign- and volatility-switching ARCH models: Theory and applications to international stock markets. Journal of Applied Econometrics, 12, 49–65.
Franses, P. H., & Vogelsang, T. J. (2002). Testing for common deterministic trend slopes. Center for Analytic Economics Working Paper 01-15, Cornell University.
Glosten, L. R., Jagannathan, R., & Runkle, D. (1993). On the relationship between expected value and the volatility of nominal excess returns on stocks. Journal of Finance, 48, 1779–1801.
Gourieroux, C., Monfort, A., & Trognon, A. (1984). Pseudo-maximum likelihood methods: Theory. Econometrica, 52, 681–700.
Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica, 47, 153–161.
Newey, W., & West, K. (1987). A simple positive semi-definite heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica, 55, 703–708.
White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48, 817–838.
White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica, 50, 1–25.
White, H. (1994). Estimation, inference and specification analysis. Econometric Society Monograph No. 22. Cambridge, UK: Cambridge University Press.
Woutersen, T. M. (2001). Robustness against incidental parameters and mixing distributions. Working Paper, Department of Economics, University of Western Ontario.
A COMPARATIVE STUDY OF PURE AND PRETEST ESTIMATORS FOR A POSSIBLY MISSPECIFIED TWO-WAY ERROR COMPONENT MODEL

Badi H. Baltagi, Georges Bresson and Alain Pirotte
ABSTRACT
In the spirit of White's (1982) paper, this paper examines the consequences of model misspecification using a panel data regression model. Maximum likelihood, random and fixed effects estimators are compared using Monte Carlo experiments under normality of the disturbances but with a possibly misspecified variance-covariance matrix. We show that the correct GLS (ML) procedure is always the best according to MSE performance, but the researcher does not have perfect foresight on the true form of the variance-covariance matrix. In this case, we show that a pretest estimator is a viable alternative given that its performance is a close second to correct GLS (ML) whether the true specification is a two-way, a one-way error component model or a pooled regression model. Incorrect GLS, ML or fixed effects estimators may lead to a big loss in MSE.
A fundamental assumption underlying classical results on the properties of the maximum likelihood estimator is that the stochastic law which determines the behavior of the phenomena investigated (the "true" structure) is known to lie within a specified parametric family of probability distributions (the model). In other words, the probability model is assumed to be "correctly specified." In many (if not most) circumstances, one may not have complete confidence that this is so (White, 1982, p. 1).

Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later
Advances in Econometrics, Volume 17, 1–27
Copyright © 2003 by Elsevier Ltd.
All rights of reproduction in any form reserved
ISSN: 0731-9053/doi:10.1016/S0731-9053(03)17001-6
1 INTRODUCTION
Due to the non-experimental nature of econometrics, researchers use the same data set to select the model and to estimate the parameters in the selected model. Often, we employ a preliminary test estimator (or pretest estimator for short) whenever we test some aspect of a model's specification and then decide, on the basis of the test results, what version of the model to estimate or what estimation method to use. Unfortunately, the statistical properties of pretest estimators are, in practice, very difficult to derive.
The literature on pretest estimators is well surveyed in Judge and Bock (1978, 1983) and in the special issue of the Journal of Econometrics (1984) edited by Judge. More recently, asymptotic aspects have been considered and different selection strategies (especially general-to-specific and specific-to-general) have been discussed. For a summary of the latest developments, see Giles and Giles (1993) and Magnus (1999).

In the panel data literature, few pretest studies have been conducted. These include Ziemer and Wetzstein (1983) on the pooling problem and Baltagi and Li (1997) on the estimation of error component models with autocorrelated disturbances. The Baltagi and Li (1997) Monte Carlo study compares the finite sample performance of a number of pure and pretest estimators for an error component model with first-order AR or MA remainder disturbances. They show that the correct GLS procedure is always the best, but the researcher does not have perfect foresight on which one it is (GLS assuming a random error component with the remainder term following an AR(1) or MA(1) process). In that case, Baltagi and Li show that the pretest estimator is a viable alternative given that its performance is a close second to correct GLS whether the true serial correlation process is AR(1) or MA(1).
It is the aim of this paper to study the small sample performance of pure and pretest estimators in a panel data regression model when the two-way error component structure for the disturbances may be misspecified. More specifically, this study performs Monte Carlo experiments to compare the properties of 11 alternative estimators when the true model may be a two-way error component model with both individual and time effects, a one-way error component model with only time or individual effects, or simply a pooled regression model with no time or individual effects. The estimators considered are: Ordinary Least Squares (OLS), Fixed effects (Within), Feasible Generalized Least Squares (FGLS) and Maximum Likelihood (ML) assuming normality of the disturbances. The only type of misspecification considered is where one or both variance components are actually equal to zero. The pretest estimator is based on the results of two tests. The first test is the Kuhn-Tucker test suggested by Gouriéroux, Holly and Monfort (1982), which tests whether both random effects are zero versus the alternative that at least one of them is positive. If the null hypothesis that both time and individual effects are jointly zero is not rejected, the pretest estimator reduces to OLS. If it is rejected, a second test proposed by Baltagi, Chang and Li (1992) is implemented. This is a conditional LM test for whether one of the variance components is zero given that the other variance component is positive. Either hypothesis can be tested first; the order does not matter. If both hypotheses are rejected, the pretest estimator reduces to the two-way FGLS estimator. If one of these hypotheses is rejected while the other is not, the pretest estimator reduces to a one-way FGLS estimator. Of course, if both hypotheses are not rejected, the pretest estimator reduces to the OLS estimator. This is described in the flow chart given below, see Fig. 1.

Fig. 1. Flow Chart for the Pretest Estimator. Note: GHM: Gouriéroux, Holly and Monfort test. BCL: Baltagi, Chang and Li test. Relative MSE = MSE(estimator)/MSE(True GLS).

The SAS computer code underlying the pretest estimator in the flow chart is given in the Appendix.
Using Monte Carlo experiments, we compare the performance of this pretest estimator as well as the standard one-way and two-way error component estimators using the relative mean squared error (MSE) criterion. The basis for comparison is the MSE of the true GLS estimator, which is based on the true variance components. In fact, the relative MSE of each estimator is obtained by dividing its MSE by that of true GLS.

Section 2 describes the model and the pretest estimator. Section 3 presents the Monte Carlo design and the results of the experiments. Section 4 provides our conclusion.
2 PURE AND PRETEST ESTIMATORS FOR A TWO-WAY
ERROR COMPONENT MODEL
Consider the two-way error component model written in matrix notation:

y = γ ι_NT + X β + u = Z δ + u, with Z = [ι_NT, X] and δ = (γ, β')'  (1)

where y is an NT × 1 vector denoting the dependent variable y_{i,t} stacked such that the slower index is i = 1, 2, ..., N, and the faster index is t = 1, 2, ..., T. γ is a scalar, ι_NT is a vector of ones of dimension NT × 1, and X is an NT × k matrix of independent variables x_{j,i,t}, with j = 1, 2, ..., k. The disturbances follow a two-way error component model

u = Z_μ μ + Z_λ λ + ν  (2)

with Z_μ = I_N ⊗ ι_T and Z_λ = ι_N ⊗ I_T. I_N and I_T are identity matrices of dimension N × N and T × T; ι_N and ι_T are vectors of ones of dimension N × 1 and T × 1. The variance-covariance matrix of u is given by

Ω = E(uu') = σ²_μ (I_N ⊗ J_T) + σ²_λ (J_N ⊗ I_T) + σ²_ν I_NT  (3)

where J_T = ι_T ι'_T and J_N = ι_N ι'_N. The GLS estimator of δ is given by

δ̂_GLS = (Z' Ω⁻¹ Z)⁻¹ Z' Ω⁻¹ y  (4)

which, using the spectral decomposition of Ω, can be written as a matrix-weighted combination of the Between and Within variation in the data, with B denoting the Between variation and W denoting the Within variation.
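To make the dimensions concrete, the following sketch (illustrative values of N, T and the variance components; our own variable names) builds Z_μ = I_N ⊗ ι_T, Z_λ = ι_N ⊗ I_T and Ω = σ²_μ(I_N ⊗ J_T) + σ²_λ(J_N ⊗ I_T) + σ²_ν I_NT with Kronecker products, simulates the model, and computes GLS in its generic form (Z'Ω⁻¹Z)⁻¹Z'Ω⁻¹y:

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 10, 5
s2_mu, s2_lam, s2_nu = 4.0, 2.0, 1.0   # illustrative variance components

I_N, I_T = np.eye(N), np.eye(T)
iota_N, iota_T = np.ones((N, 1)), np.ones((T, 1))
J_N, J_T = iota_N @ iota_N.T, iota_T @ iota_T.T

Z_mu = np.kron(I_N, iota_T)    # NT x N selector of individual effects
Z_lam = np.kron(iota_N, I_T)   # NT x T selector of time effects
Omega = (s2_mu * np.kron(I_N, J_T)
         + s2_lam * np.kron(J_N, I_T)
         + s2_nu * np.eye(N * T))

# simulate y = gamma + x*beta + Z_mu mu + Z_lam lam + nu
gamma, beta = 1.0, 2.0
x = rng.normal(size=(N * T, 1))
u = (Z_mu @ rng.normal(scale=np.sqrt(s2_mu), size=(N, 1))
     + Z_lam @ rng.normal(scale=np.sqrt(s2_lam), size=(T, 1))
     + rng.normal(scale=np.sqrt(s2_nu), size=(N * T, 1)))
y = gamma + x * beta + u

Z = np.hstack([np.ones((N * T, 1)), x])      # [iota_NT, X]
Oinv = np.linalg.inv(Omega)
delta_gls = np.linalg.solve(Z.T @ Oinv @ Z, Z.T @ Oinv @ y)
```

For the panel sizes used later (up to N = 500), one would of course exploit the spectral decomposition of Ω rather than inverting the NT × NT matrix directly.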
We compute the MSE resulting from blind estimation of (i) a two-way error component (TWEC) model, (ii) a one-way individual error component (OWIEC) model, (iii) a one-way time error component (OWTEC) model or (iv) a pooled regression model with no individual or time effects. Note that if both variance components are positive, i.e. σ²_μ > 0 and σ²_λ > 0, estimating this TWEC model by OLS or assuming it is a OWEC model leads to a possible loss in efficiency. This loss will depend on the size of the panel, the magnitude of the variance components and the Between and Within variation in X.
Based on the sequence of hypothesis tests on the variance components described above, the pretest estimator is either OLS, the OWEC (individual or time) FGLS, or the TWEC estimator; see the flow chart given in Fig. 1. This pretest estimator is based on two Lagrange Multiplier (LM) tests. The first one is based on modifying the Breusch and Pagan (1980) joint LM test for the null hypothesis that both the individual and time variance components are zero, H0,A: σ²_μ = σ²_λ = 0 (see Baltagi, 2001). If we do not reject the null hypothesis, the pretest estimator is the OLS estimator. If we reject H0,A, we compute two conditional LM tests proposed by Baltagi, Chang and Li (1992). These test the following two null hypotheses, with the order being immaterial: H0,B: σ²_μ = 0 (given σ²_λ > 0) and H0,C: σ²_λ = 0 (given σ²_μ > 0). For the test of H0,B, the estimated disturbances ũ denote the one-way time effects GLS residuals using the maximum likelihood estimates σ̃²_λ and σ̃²_ν. For the test of H0,C, the estimated disturbances ũ denote the one-way individual effects GLS residuals using the maximum likelihood estimates σ̃²_μ and σ̃²_ν.
If both hypotheses H0,B and H0,C are rejected, the pretest estimator is the TWEC FGLS estimator. If we reject H0,C but do not reject H0,B, the pretest estimator is the OWEC time FGLS estimator. Similarly, if we reject H0,B but do not reject H0,C, the pretest estimator is the OWEC individual FGLS estimator. If both hypotheses H0,B and H0,C are not rejected, the pretest estimator is OLS. Hence, the pretest estimator is:

OLS if H0,A is not rejected, or if neither H0,B nor H0,C is rejected;
OW time FGLS if H0,C is rejected but H0,B is not;
OW individual FGLS if H0,B is rejected but H0,C is not;
TW-FGLS if both H0,B and H0,C are rejected.

This pretest estimator is compared with the pure one-way and two-way estimators using Monte Carlo experiments.
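The decision rules above (the SAS implementation is in the Appendix) reduce to a small mapping from test outcomes to estimators. A Python paraphrase, with boolean inputs standing in for the GHM and BCL test results, might look like:

```python
def pretest_choice(reject_A, reject_B, reject_C):
    """Map test outcomes to the estimator picked by the pretest rule.

    reject_A: GHM joint test rejects H0,A (both variance components zero)
    reject_B: BCL test rejects H0,B (sigma2_mu = 0 given sigma2_lambda > 0)
    reject_C: BCL test rejects H0,C (sigma2_lambda = 0 given sigma2_mu > 0)
    """
    if not reject_A:
        return "OLS"
    if reject_B and reject_C:
        return "TW-FGLS"
    if reject_B:                     # only individual effects detected
        return "OW-FGLS (individual)"
    if reject_C:                     # only time effects detected
        return "OW-FGLS (time)"
    return "OLS"                     # neither conditional test rejects
```

Note that the conditional tests are computed only when the joint GHM test rejects, exactly as in the flow chart of Fig. 1.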
3 THE MONTE CARLO RESULTS
First, we describe the Monte Carlo design. We use the following simple regression model:

y_{i,t} = γ + β x_{i,t} + u_{i,t}, i = 1, ..., N and t = 1, ..., T  (8)

with

u_{i,t} = μ_i + λ_t + ν_{i,t}  (9)

where μ_i ~ IIN(0, σ²_μ), λ_t ~ IIN(0, σ²_λ) and ν_{i,t} ~ IIN(0, σ²_ν), with σ² = σ²_μ + σ²_λ + σ²_ν = 20. The latter means that the total variance of the disturbances is fixed at 20 for all experiments. We consider three values for the number of individuals in the panel (N = 25, 50 and 500) and two values for the time dimension (T = 10 and 20). The exogenous variable is generated by choosing a random variable ω_{i,t}, uniformly distributed on the interval [−5, 5], and forming:

x_{i,t} = δ_1 x_{i,t−1} + ω_{i,t}, δ_1 = 0.5

For each cross-sectional unit, T + 10 observations are generated starting with x_{i,0} = 0. The first ten observations are then dropped in order to reduce the dependency on initial values. The parameters ρ_μ = σ²_μ/σ² and ρ_λ = σ²_λ/σ² are varied over the set (0, 0.01, 0.2, 0.4, 0.6, 0.8) such that (1 − ρ_μ − ρ_λ) is always positive. For each experiment, 5,000 replications were performed. For every replication, (NT + N + T) IIN(0, 1) random numbers were generated. The first N numbers were used to generate the μ_i's, the next T the λ_t's, and the remaining NT the ν_{i,t}'s.
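A sketch of this data-generating process (our own function names; NumPy stands in for the authors' SAS code) is:

```python
import numpy as np

def generate_x(N, T, delta1=0.5, burn=10, rng=None):
    """x_{i,t} = delta1 * x_{i,t-1} + omega_{i,t}, omega ~ U[-5, 5],
    started at x_{i,0} = 0 with the first `burn` observations dropped."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros((N, T + burn))
    omega = rng.uniform(-5.0, 5.0, size=(N, T + burn))
    for t in range(1, T + burn):
        x[:, t] = delta1 * x[:, t - 1] + omega[:, t]
    return x[:, burn:]

def generate_u(N, T, rho_mu, rho_lam, total_var=20.0, rng=None):
    """u_{i,t} = mu_i + lambda_t + nu_{i,t}, with the variance components
    set to rho_mu, rho_lam and (1 - rho_mu - rho_lam) shares of total_var."""
    if rng is None:
        rng = np.random.default_rng()
    mu = rng.normal(scale=np.sqrt(rho_mu * total_var), size=(N, 1))
    lam = rng.normal(scale=np.sqrt(rho_lam * total_var), size=(1, T))
    nu = rng.normal(scale=np.sqrt((1.0 - rho_mu - rho_lam) * total_var),
                    size=(N, T))
    return mu + lam + nu

rng = np.random.default_rng(7)
x = generate_x(25, 10, rng=rng)
u = generate_u(25, 10, rho_mu=0.4, rho_lam=0.4, rng=rng)
```

Setting rho_mu or rho_lam to zero reproduces the one-way and pooled designs used in the experiments.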
3.1 Results of the LM Tests
The Kuhn-Tucker test of Gouriéroux, Holly and Monfort (GHM) performs well in testing the null H0,A: σ²_μ = σ²_λ = 0. When one of the individual or time variance components is more than 1% of the total variance, this test has high power, rejecting the null in over 98% of the cases for T = 10 and N = 25. In fact, performance improves as T increases for fixed N or as N increases for a fixed T. Also, the power of this test increases as the other variance component increases. For example, for (N, T) = (25, 10) and for (ρ_μ = 0.01 and ρ_λ = 0, 0.01, 0.2, 0.4 and 0.6), the rate of rejection of the null increases very quickly from 7% to 12.7% to 97.9% to 99.9% to 100%. When both variance components are zero, the size of the test is around 5% for all values of N and T. In fact, the frequency of type I error is 4.8% and 5.1% for N = 25 and T = (10, 20); 4.4% and 5.5% for N = 50 and T = (10, 20); and 5.4% and 5.1% for N = 500 and T = (10, 20).
The conditional LM test of Baltagi, Chang and Li (BCL) for testing H0,B: σ²_μ = 0 implicitly assumes that σ²_λ > 0. Conversely, the other conditional LM test of BCL for testing H0,C: σ²_λ = 0 implicitly assumes that σ²_μ > 0. These two conditional LM tests perform well. When ρ_λ = 0, and if σ²_μ represents more than 1% of the total variance, the H0,B test has high power, rejecting the null in over 99.9% of the cases when N = 25 and T = 10. Similarly, when ρ_μ = 0, and if σ²_λ represents more than 1% of the total variance, the H0,C test also has high power, rejecting the null in more than 99.9% of the cases when N = 25 and T = 10. This power performance improves as T increases for fixed N or as N increases for a fixed T.
Finally, when both variance components are higher than 20% of the total variance, the non-rejection rate for the TWEC estimator is more than 99.3% for N = 25 and T = 10, and increases to 100% for the other values of N and T considered. Using the flow chart given above, the pretest estimator is determined for each replication depending on the outcome of these tests. For 5,000 replications and (N, [T]) = (25, [10, 20]), (50, [10, 20]), (500, [10, 20]), this choice of the pretest estimator is given by Tables 1–3. For example, for N = 25 and T = 10, and ρ_μ = ρ_λ = 0.4, the pretest estimator is identical to the two-way FGLS estimator for all 5,000 replications. If ρ_μ = 0.01 and ρ_λ = 0.8, the pretest estimator is a two-way FGLS estimator in 2,259 replications and a one-way time effects FGLS estimator in the remaining 2,741 replications.
We checked the sensitivity of our pretest estimator to the level of significance used by increasing it from 5 to 10%. The results are given in Table 1. For example, for N = 25 and T = 10 and ρ_μ = ρ_λ = 0.4, the 10% pretest estimator is still identical to the two-way FGLS estimator for all 5,000 replications. If ρ_μ = 0.01 and ρ_λ = 0.8, the 10% pretest estimator is a two-way FGLS estimator in 2,754 replications and a one-way time effects FGLS estimator in the remaining 2,246 replications. This is an improvement over the 5% pretest estimator in that we are selecting the right estimator more often.
3.2 Performance of the Estimators
Now let us turn to the relative MSE comparisons of pure and pretest estimators for (N, [T]) = (25, [10, 20]), (50, [10, 20]) and (500, [10, 20]). Tables 4–6 report our findings and Figs 2–4 plot some of these results based on 5,000 replications. From these tables and figures, we observe the following:

(1) When the true model is a pooled regression with no individual or time effects, i.e. ρ_μ = ρ_λ = 0, the two-way fixed effects estimator (Within) performs the worst. Its relative MSE with respect to true GLS, which in this case is OLS, is 1.456 for (N = 25, T = 10) and 1.264 for (N = 25, T = 20). The Within estimator wipes out the Between variation whereas the OLS estimator weights both the Between and Within variation equally. When the Between variation is important, this can lead to considerable loss in efficiency. This performance improves as the variance components increase. The one-way fixed effects estimators perform better than the two-way Within estimator, yielding relative MSEs of 1.041 and 1.384 for (N = 25, T = 10) and 1.056 and 1.190 for (N = 25, T = 20). Two-way FGLS is slightly worse than OLS, yielding a relative MSE of 1.007 for (N = 25, T = 10) and 1.010 for (N = 25, T = 20). One-way FGLS performs better than two-way FGLS, yielding relative MSEs of 1.000 and 1.005 for (N = 25, T = 10) and 1.002 and 1.005 for (N = 25, T = 20). The ML estimators perform well, yielding a relative MSE that is no more than 1.004 for (N = 25, T = 10), while the 5 and 10% pretest estimators yield a relative MSE of no more than 1.003 for (N = 25, T = 10) and no more

Note to Table 1: GHM H0,A: Gouriéroux, Holly and Monfort test (number of non-rejections). If GHM H0,A is rejected, the BCL tests apply: BCL OLS: both H0,B and H0,C are not rejected; BCL TIME: H0,B is not rejected while H0,C is rejected; BCL INDIV: H0,C is not rejected while H0,B is rejected; BCL TW: both H0,B and H0,C are rejected.

Table 2. Number of Non-Rejections of H0,A, H0,B and H0,C for 5,000 Replications.
Table 3. Number of Non-Rejections of H0,A, H0,B and H0,C for 5,000 Replications.
(2) When the true specification is a one-way individual error component model, OWIEC (ρ_λ = 0), the results are quite stable as N increases from 25 to 50 to 500. The two-way FGLS estimator yields a relative MSE of 1.004 for ρ_μ = 0.8, while the correct one-way FGLS estimator yields a relative MSE of 1.002. The wrong one-way FGLS yields a relative MSE of 9.090. The performance of two-way ML is similar to that of FGLS, yielding a relative MSE of 1.003. This is compared to 1.002 for the correct one-way ML and 9.093 for the wrong one-way ML estimator. The pretest estimator yields a relative MSE of 1.007 for the 5% level
and 1.004 for the 10% level. Figure 2, corresponding to N = 25 and T = 10, confirms that when the relevant variance-components ratio is zero, the wrong one-way Within estimator performs the worst in terms of relative MSE. For high values of this ratio (0.5–0.8), its relative MSE explodes. Similarly, OLS and the wrong one-way FGLS and ML estimators show the same increasing shape in their relative MSE as the ratio gets large. Inside the box, the correct one-way ML and FGLS perform the best, followed by two-way ML and FGLS. The 5% pretest estimator is in that box, with a relative MSE not higher than 1.041 for N = 25, T = 10 and variance-components ratios of 0 and 0.1.
(3) Similarly, when the true specification is a one-way time effect error component model, OWTEC (individual-effect variance component equal to zero), the two-way Within estimator continues to perform badly relative to true GLS, yielding a relative MSE of 1.443 for a time-effect variance ratio of 0.8 and N = 25, T = 10 (see Table 4). The corresponding relative MSE for OLS is 4.492. The correct one-way fixed effects estimator has a relative MSE of 1.001, while the wrong one-way fixed effects estimator has a relative MSE of 8.266, and the correct one-way FGLS estimator yields a relative MSE of 0.999. The wrong one-way FGLS yields a relative MSE of 4.492. The performance of two-way ML is similar to that of FGLS, yielding a relative MSE of 1.005. This is compared to 0.999 for the correct one-way ML and 4.502 for the wrong one-way ML estimator. The 5% pretest estimator yields a relative MSE of 1.002, while the 10% pretest estimator yields a relative MSE of 1.007.
Figure 3, corresponding to N = 25 and T = 10, confirms that when the individual-effect variance component is zero, the wrong one-way Within estimator performs the worst in terms of relative MSE. For high values of the time-effect variance ratio (0.5–0.8), its relative MSE explodes. Similarly, OLS and the wrong one-way FGLS and ML estimators show the same increasing shape in their relative MSE as the ratio gets large. Inside the box, the correct one-way ML and FGLS perform the best, followed by two-way ML and FGLS. The 5% pretest estimator is in that box, with a relative MSE not higher than 1.015 for N = 25 [...]

[...] FGLS estimator in 100% of the cases. Figure 4, corresponding to N = 25 and T = 10, confirms that for a variance-components ratio of 0.2, OLS performs the worst, followed by the wrong one-way time effects estimators and the wrong one-way individual effects estimators, whether Within, FGLS or ML. Inside the box, the correct two-way ML and FGLS estimators perform the best, followed closely by the 5% pretest estimator. The latter has a relative MSE no higher than 1.022 for N = 25, T = 10 and variance-components ratios of 0.2 and 0.1.
estimators can result in a big loss in MSE. The pretest estimator comes out with a clean bill of health, being no more than 3% above the MSE of true GLS. This performance improves as N or T increases. It is also not that sensitive to doubling the size of the test from 5 to 10%.
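Throughout these comparisons, estimator performance is summarized by MSE relative to true GLS. That criterion can be sketched in a few lines (a minimal illustration: the synthetic draws merely stand in for Monte Carlo estimates, and the two "estimators" are hypothetical, not the authors' simulation design):

```python
import numpy as np

def relative_mse(estimates, benchmark, true_beta):
    """MSE of an estimator over replications, divided by the MSE of a
    benchmark estimator (here playing the role of true GLS)."""
    mse = np.mean((np.asarray(estimates) - true_beta) ** 2)
    mse_bench = np.mean((np.asarray(benchmark) - true_beta) ** 2)
    return mse / mse_bench

# Synthetic stand-ins for 5,000 Monte Carlo replications of two estimators.
rng = np.random.default_rng(0)
true_beta = 1.0
gls_like = true_beta + 0.10 * rng.standard_normal(5000)  # efficient benchmark
wrong_fe = true_beta + 0.30 * rng.standard_normal(5000)  # misspecified estimator

print(relative_mse(gls_like, gls_like, true_beta))  # 1.0 by construction
print(relative_mse(wrong_fe, gls_like, true_beta))  # well above 1
```

A relative MSE of 1.007, as reported for the pretest estimator, thus means an MSE only 0.7% above that of true GLS.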
3.3 Robustness to Non-Normality
So far, we have been assuming that the error components are generated by the normal distribution. In this section, we check the sensitivity of our results to non-normal disturbances. In particular, we generate the μ_i's and λ_t's from χ² distributions and let the remainder disturbances follow the normal distribution. In all experiments, we fix the total variance to be 20. Table 7 gives the choice of the pretest estimator under non-normality of the random effects for N = 25 and T = 10, 20 using the 5 and 10% significance levels, respectively. Comparing the results to the normal case in Table 1, we find that the GHM test commits a higher probability of type II error under non-normality. For example, when the variance-components ratios are 0 and 0.2, GHM does not reject the null when false in 34% of the cases at the 5% level of significance and 29% of the cases at the 10% level of significance, as compared to 2.7 and 1.8% of the cases under normality. The BCL tests are affected in the same way. For variance-components ratios of 0.2 and 0.4, the pretest chooses the two-way estimator in only 89.2% of the cases at the 5% level and 92.6% at the 10% level. This is compared to almost 100% of the cases under normality, no matter what significance level is chosen. Despite this slight deterioration in the correct choice of the pretest estimator under non-normality, the resulting relative MSE performance reported in Table 8 seems to be unaffected. This indicates that, at least for our limited experiments, the pretest estimator seems to be robust to non-normality of the disturbances of the χ² type. We have also presented results for N = 25 and T = 20 at both the 5 and 10% significance levels. The choice of the pretest estimator as well as its relative MSE improves as T doubles from 10 to 20 for N = 25.
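The non-normal design just described can be mimicked as follows: χ² draws for the individual and time effects, centred and rescaled so that each component has its target variance, with the total variance fixed at 20. The degrees of freedom and the equal variance split below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 25, 10
total_var = 20.0                       # total variance fixed, as in the experiments
var_mu, var_lam = 4.0, 4.0             # illustrative variance shares (assumption)
var_nu = total_var - var_mu - var_lam  # remainder disturbance variance

def centered_chi2(size, df, target_var, rng):
    """Chi-square draws recentred to mean 0 and rescaled to a target variance."""
    x = rng.chisquare(df, size)
    return (x - df) * np.sqrt(target_var / (2 * df))  # chi2(df) has variance 2*df

mu = centered_chi2(N, 1, var_mu, rng)         # individual effects, chi-square
lam = centered_chi2(T, 1, var_lam, rng)       # time effects, chi-square
nu = rng.normal(0, np.sqrt(var_nu), (N, T))   # remainder disturbances, normal

u = mu[:, None] + lam[None, :] + nu           # two-way error component u_it
print(u.shape)  # (25, 10)
```

The recentring keeps the error components mean zero while preserving the marked skewness of the χ² distribution, which is what degrades the finite-sample behaviour of the GHM test.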
[Table 7. Number of Non-Rejections of H0,A, H0,B and H0,C for 5,000 Replications Under Non-Normality of the Random Effects. N = 25, T = 10 and T = 20.]

[Table 8. Relative MSE Under Non-Normality of the Random Effects. N = 25, T = 10 and T = 20.]
4 CONCLUSION
Our experiments show that the correct FGLS procedure is always the best, followed closely by the correct ML estimator. However, the researcher does not have perfect foresight on the true specification, whether it is two-way, one-way, or a pooled regression model with no time or individual effects. The pretest estimator proposed in this paper provides a viable alternative, given that its MSE performance is a close second to correct FGLS for all types of misspecified models considered. The fixed effects estimator has the advantage of being robust to possible correlation between the regressors and the individual and time effects. It is clear from our experiments that the wrong fixed effects estimator, like its counterpart the wrong random effects estimator, can lead to a huge loss in MSE performance. We checked the sensitivity of our results to doubling the significance level for the pretest estimator from 5 to 10% as well as to specifying non-normal random effects. We found our pretest estimator to be robust to non-normality.
REFERENCES
Baltagi, B. H. (2001). Econometric analysis of panel data. Chichester: Wiley.
Baltagi, B. H., Chang, Y. J., & Li, Q. (1992). Monte Carlo results on several new and existing tests for the error component model. Journal of Econometrics, 54, 95–120.
Baltagi, B. H., & Li, Q. (1997). Monte Carlo results on pure and pretest estimators of an error component model with autocorrelated disturbances. Annales d'Économie et de Statistique, 48, 69–82.
Breusch, T. S. (1980). Useful invariance results for generalized regression models. Journal of Econometrics, 13, 327–340.
Breusch, T. S., & Pagan, A. R. (1980). The Lagrange multiplier test and its applications to model specification in econometrics. Review of Economic Studies, 47, 239–253.
Giles, J. A., & Giles, D. E. A. (1993). Pre-test estimation and testing in econometrics: Recent developments. Journal of Economic Surveys, 7, 145–197.
Gouriéroux, C., Holly, A., & Monfort, A. (1982). Likelihood ratio test, Wald test, and Kuhn-Tucker test in linear models with inequality constraints on the regression parameters. Econometrica, 50, 63–80.
Judge, G. G., & Bock, M. E. (1978). The statistical implications of pre-test and Stein-rule estimators in econometrics. Amsterdam: North-Holland.
Judge, G. G., & Bock, M. E. (1983). Biased estimation. In: Z. Griliches & M. D. Intriligator (Eds), Handbook of Econometrics (Vol. 1, pp. 601–649). Amsterdam: North-Holland.
Magnus, J. R. (1999). The traditional pretest estimator. Theory of Probability and Its Applications,
[Appendix (GAUSS program, fragmentary): computes the one-way time and one-way individual ML residuals and tallies the number of non-rejections of H(0,A), H(0,B) and H(0,C) over the replications loop.]
TESTS OF COMMON DETERMINISTIC TREND SLOPES APPLIED TO QUARTERLY GLOBAL TEMPERATURE DATA

Thomas B. Fomby and Timothy J. Vogelsang
ABSTRACT
We examine the global warming temperature data sets of Jones et al. (1999) and Vinnikov et al. (1994) in the context of the multivariate deterministic trend-testing framework of Franses and Vogelsang (2002). We find that, across all seasons, global warming seems to be present for the globe and for the northern and southern hemispheres. Globally and within hemispheres, it appears that seasons are not warming equally fast; in particular, winters appear to be warming faster than summers. Across hemispheres, it appears that the winters in the northern and southern hemispheres are warming equally fast, whereas the remaining seasons appear to have unequal warming rates. The results obtained here seem to coincide with the findings of Kaufmann and Stern (2002), who use cointegration analysis and find that the hemispheres are warming at different rates.
1 INTRODUCTION
The use of heteroskedasticity autocorrelation (HAC) robust covariance matrix estimators to form standard errors is now a standard of practice in the econometrics literature and is a legacy of the seminal contribution of White (1980).1

Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later
Advances in Econometrics, Volume 17, 29–43
© 2003 Published by Elsevier Ltd.
ISSN: 0731-9053/doi:10.1016/S0731-9053(03)17002-8

Although
White (1980) focused on the case of heteroskedasticity, it was soon recognized, especially in the emerging generalized method of moments literature (Hansen, 1982), that robust covariance matrix estimators were needed for dependent, i.e. autocorrelated, data. As pointed out by Hansen (1982) and others, obtaining consistent standard errors for dependent, but stationary, data is equivalent to estimating a spectral density matrix at frequency zero. Thus began a fruitful marriage of the non-parametric spectral density estimation literature developed in the much earlier time series statistics literature (see Priestley, 1981) and the emerging robust standard error literature in econometrics. Important contributions in the econometrics literature that grew from this marriage include White and Domowitz (1984), Newey and West (1987), Gallant (1987), Gallant and White (1988), Andrews (1991), Andrews and Monahan (1992), Hansen (1992), Newey and West (1994), den Haan and Levin (1997) and Robinson (1998).
In this paper, we are interested in testing hypotheses about global warming, especially with respect to potential differences in global warming trends by hemisphere, North versus South. Because we approximate the trends in temperature with simple linear trend functions, hypotheses about global warming can be parameterized in terms of linear trend slope coefficients. As is well known from the classic work of Grenander and Rosenblatt (1957), deterministic trend parameter estimates have asymptotic variances that are proportional to zero frequency spectral densities. Therefore, the tests we use in this paper require HAC robust covariance matrix estimators.
Our analysis focuses on global temperature data that are aggregated to the seasonal level (winter, spring, summer, autumn). We examine global data as well as data for the northern and southern hemispheres. We want to examine the presence or absence of climatic warming, both globally and hemispherically, and to determine to what extent temperature trends differ across seasons globally, across seasons within hemispheres, and for seasons across hemispheres. We test our hypotheses using univariate trend tests proposed by Vogelsang (1998) and Bunzel and Vogelsang (2003) and multivariate trend tests developed by Franses and Vogelsang (2002). The data we use are the quarterly temperature series of Jones, Osborn, Briffa and Parker (1999), hereafter JOBP, and of Vinnikov, Groisman and Lugina (1994), hereafter VGL. The JOBP data span the years 1856–1998 while the VGL data span the years 1881–1993. Although both series are available in monthly frequencies, we examine the quarterly series in order to reduce the interesting tests of trend to a manageable number.
With respect to previous research on comparing the warming trends of the hemispheres, not much work has been done. For example, in the Intergovernmental Panel on Climate Change (IPCC) report Climate Change 2001: Synthesis Report, no comparisons of the temperature trends of the northern and southern hemispheres are mentioned. The National Oceanic and Atmospheric Administration's website states (page 2) that global surface temperatures are increasing but that "the warming has not been globally uniform" (report's emphasis). Furthermore, the site states (page 2): "Some areas (including parts of the southeastern U.S.) have, in fact, cooled over the last century." Evidently, there has been some regional analysis of temperature trends but, alas, it has not been reported in any formal government reports that these authors can find. However, there is one paper of note. Kaufmann and Stern (2002) apply cointegration analysis to hemispheric temperature series along with atmospheric forcing variables (carbon dioxide, sulfur emissions, etc.) to examine the effects that human activity might have on warming by hemisphere. They find, like we do here, that the hemispheres are warming at different rates.
The rest of the paper is organized as follows. In the next section, we describe the statistical models and test statistics. In Section 2, we present a univariate analysis that provides evidence on the existence of warming trends. In Section 3, we use multivariate analysis to test hypotheses about intra-seasonal and inter-hemispherical warming trends. We find that there is strong evidence of warming in all the seasons, but that the extent of warming varies across the seasons. Comparisons between the hemispheres suggest similar warming trends in winters but differences in the other seasons. Section 4 concludes.
2 UNIVARIATE ANALYSIS: EVIDENCE OF GLOBAL WARMING
We begin by describing a univariate analysis designed to answer the following simple question: for a given temperature series, is there evidence of a systematic warming trend? Let y_t denote a temperature series for t = 1, 2, ..., T. We approximate the systematic part of y_t with a linear time trend, which leads to the model

y_t = β1 + β2·t + u_t. (1)

Because we are interested in warming, we take warming to be the alternative hypothesis: H0: β2 = 0 is tested against H1: β2 > 0. A second model is obtained by taking partial sums of model (1):

z_t = β1·t + β2·t(t + 1)/2 + S_t, (2)

where z_t = Σ_{j=1}^{t} y_j and S_t = Σ_{j=1}^{t} u_j. For each of models (1) and (2), we consider t-tests that are not only robust to serial correlation in u_t but are also robust to the possibility that u_t has a unit root. These tests do not suffer from the usual over-rejection problem that is often present with HAC robust tests when serial correlation is highly persistent.
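To make models (1) and (2) concrete, the sketch below builds a simulated trending series, its partial sums z_t, and the two regressor matrices. The partial-sum regression form, with regressors t and t(t+1)/2, follows Vogelsang (1998) and is an assumption here; the simulated coefficients and error process are placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 150
t = np.arange(1, T + 1, dtype=float)

u = np.cumsum(0.1 * rng.standard_normal(T))  # persistent stochastic component
y = 0.2 + 0.01 * t + u                       # model (1): y_t = b1 + b2*t + u_t

# Model (2): partial sums z_t = sum_{j<=t} y_j, regressed on t and t(t+1)/2.
z = np.cumsum(y)
X1 = np.column_stack([np.ones(T), t])        # regressors for model (1)
X2 = np.column_stack([t, t * (t + 1) / 2])   # regressors for model (2)

b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
b2, *_ = np.linalg.lstsq(X2, z, rcond=None)
print(b1[1], b2[1])  # both slope estimates target the same trend coefficient
```

Working with the partial-sum series is what allows valid inference whether u_t is stationary or has a unit root, since summation shifts the order of integration of the errors by one.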
For model (1), we use the test recommended by Bunzel and Vogelsang (2003), and for model (2), we use one of the tests proposed by Vogelsang (1998). Both tests depend on a common adjustment factor that controls the over-rejection problem when u_t has a unit root. It is convenient to first describe this adjustment factor before defining the test statistics.
Consider model (1) augmented with additional regressors that are higher-order polynomial trend terms:

y_t = β1 + β2·t + Σ_{i=2}^{9} δi·t^i + u_t. (3)

Under the assumption that model (1) adequately captures the deterministic component of y_t, the δi coefficients in regression (3) are all zero. Consider the following test statistic, which is the standard Wald test (normalized by T−1) for testing the joint hypothesis that the δi coefficients are zero:

J = (RSS1 − RSS3)/RSS3,

where RSS1 is the OLS residual sum of squares from regression (1) and RSS3 is the OLS residual sum of squares from regression (3). The J statistic was originally proposed as a unit root test statistic. When u_t has a unit root, J has a well-defined asymptotic distribution, while when u_t is covariance stationary, J converges to zero.
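The J statistic can be computed directly from the two residual sums of squares, as sketched below. The normalization J = (RSS1 − RSS3)/RSS3 and the use of polynomial terms up to order 9 follow Vogelsang (1998) and are assumptions about details not fully legible here; the time index is rescaled to [0, 1] purely for numerical conditioning, which leaves the residual sums of squares unchanged.

```python
import numpy as np

def rss(y, X):
    """OLS residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return float(e @ e)

def j_statistic(y, max_power=9):
    """J = (RSS1 - RSS3)/RSS3: regression (1) is a linear trend; regression (3)
    adds the polynomial trend terms t^2, ..., t^max_power."""
    T = len(y)
    ts = np.arange(1, T + 1, dtype=float) / T   # rescaled trend, same fit
    X1 = np.column_stack([np.ones(T), ts])
    X3 = np.column_stack([np.ones(T)] + [ts ** i for i in range(1, max_power + 1)])
    rss1, rss3 = rss(y, X1), rss(y, X3)
    return (rss1 - rss3) / rss3

rng = np.random.default_rng(3)
T = 200
t = np.arange(1, T + 1)
stationary = 0.3 * rng.standard_normal(T)       # I(0) errors: J should be small
unit_root = np.cumsum(rng.standard_normal(T))   # I(1) errors: J need not vanish
print(j_statistic(1 + 0.05 * t + stationary))
print(j_statistic(1 + 0.05 * t + unit_root))
```

Because the regressors of (1) are nested in those of (3), J is non-negative by construction, and its behaviour under stationary versus unit-root errors is what drives the adjustment factor described in the text.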
Bunzel and Vogelsang (2003) analyzed t-tests for β2 using OLS estimates from model (1) and recommended the following test. Let β̂2 denote the OLS estimate