
DOCUMENT INFORMATION

Basic information

Title: The Bank of England’s Forecasting Platform: COMPASS, MAPS, EASE and the Suite of Models
Authors: Stephen Burgess, Emilio Fernandez-Corugedo, Charlotta Groth, Richard Harrison, Francesca Monti, Konstantinos Theodoridis, Matt Waldron
Advisors: Charlie Bean, Spencer Dale, Robert Woods
Institution: Bank of England
Field: Economic Forecasting
Type: Working Paper
Year: 2013
City: London
Pages: 112
File size: 1.23 MB


Structure

  • 2.1 Forecasting at the Bank of England
  • 2.2 The role of the forecasting platform
  • 2.3 Design principles
  • 3.1 COMPASS
  • 3.2 The suite of models
  • 3.3 MAPS
  • 3.4 EASE
  • 4.1 The general modelling approach
  • 4.2 The model
    • 4.2.1 Supply
    • 4.2.2 Price and wage setting
    • 4.2.3 Private domestic demand
    • 4.2.4 Interactions with the rest of the world
    • 4.2.5 Fiscal and monetary policy
    • 4.2.6 Forcing processes and shocks
  • 4.3 Estimation
    • 4.3.1 Data and measurement equations
    • 4.3.2 Priors
    • 4.3.3 Posterior parameter estimates
  • 4.4 Model properties
    • 4.4.1 A monetary policy shock
    • 4.4.2 A labour augmenting productivity shock
    • 4.4.3 A domestic risk premium shock
  • 5.1 Reasons to employ a suite of models
  • 5.2 Models which articulate economic shocks and channels missing from COMPASS
    • 5.2.1 Models with energy
    • 5.2.2 Models and tools for understanding the impact of the financial sector
  • 5.3 Models which expand the scope of the forecast
    • 5.3.1 The Post-Transformation Model (PTM)
    • 5.3.2 The Balance Sheet Model (BSM)
  • 5.4 Models which generate alternative forecasts
  • 6.1 EASE
  • 6.2 MAPS
    • 6.2.1 Modelling framework
    • 6.2.2 Estimation
    • 6.2.3 Model analysis
    • 6.2.4 Projection and simulation
    • 6.2.5 Decompositions
    • 6.2.6 Expectations & policy analysis
    • 6.2.7 Non-linear backward-looking models in MAPS
  • 7.1 The ‘misspecification algorithm’
    • 7.1.1 Understanding the economics of the misspecification
    • 7.1.2 Quantifying the effects of misspecification
    • 7.1.3 Incorporating the quantitative effects of misspecification
  • 7.2 Alternative approaches
  • 8.1 Introducing data news
    • 8.1.1 Does the identification of shocks look sensible given the data?
    • 8.1.2 How should the forecast be changed in light of the data news?
  • 8.2 Incorporating effects of VAT changes
  • 8.3 Incorporating effects of financial frictions
    • 8.3.1 Financial frictions and credit spreads
    • 8.3.2 The suite models
    • 8.3.3 Quantifying the effects of financial shocks
    • 8.3.4 Mimicking the effects of financial shocks using COMPASS
  • 8.4 Incorporating policy changes

Content

The platform consists of four components: COMPASS, a structural central organising model; a suite of models, used to fill in the gaps in the economics of COMPASS and to provide cross-checks; MAPS, the modelling toolkit used to work with COMPASS and the suite; and EASE, the user interface through which staff operate these tools.

Forecasting at the Bank of England

Each quarter, in accordance with section 18 of the Bank of England Act 1998, Bank of England staff prepare an Inflation Report for the Monetary Policy Committee. A central feature of the report is the fan charts, which illustrate the MPC’s best collective judgement about the most likely paths for inflation and output, along with the uncertainties surrounding those central projections.

Forecasting at the Bank of England therefore has two key characteristics: first, the forecasts are ‘owned’ by the MPC; second, they are expressed as probability distributions, since a full assessment of the outlook has to capture risks and uncertainties. In constructing their forecast, the MPC has the following objectives in mind:

• To discuss the economic outlook and come to a view on the balance of risks to economic activity and inflation.

• To come to a view on the appropriate response of monetary policy in light of the discussion of the economics of the forecast and the uncertainty around it.

• To communicate the outlook to the public in a manner that promotes transparency and accountability.

It is discussion of the economics of the forecast, including the balance of risks, that underpins those objectives, not a desire to maximise the accuracy of point forecasts per se. The internal process through which the staff provide inputs to the MPC’s forecast discussions is tailored to those objectives. Bean and Jenkinson (2001) describe the internal process that supports the production of the forecast. An important feature is a high level of engagement from the MPC, which takes place through a sequence of meetings in the weeks leading up to the production of the Inflation Report. At each stage of the process, MPC judgements are discussed and incorporated into the forecasts.

The forecast process structure described by Bean and Jenkinson (2001) remains broadly unchanged, but the tools staff use to implement it have evolved. Bean and Jenkinson (2001) identified a relatively standard macroeconometric model (MM) as a central forecasting tool, a role that was filled by the Bank of England Quarterly Model (BEQM) from 2003 (see Harrison et al., 2005). Since the November 2011 Inflation Report, the forecast process has been supported by the forecasting platform described in this paper.

1 This text appears in the foreword of each Inflation Report.

Changes to the forecast process may occur as the Bank implements recommendations from the Stockton Review, with details discussed at the end of Section 2.3.

3 The MM is described in Bank of England (1999) and Bank of England (2000).

The role of the forecasting platform

The forecasting platform is the coordinated set of tools and resources that staff rely on to produce the quarterly forecasts and support the MPC’s discussions; economic models form the central component of that toolkit, underpinning the analysis, scenarios and evidence that inform policy deliberations.

The Bank’s long-standing approach to forecasting has consistently recognised both the strengths and the weaknesses of macroeconomic models. In the foreword to the 1999 volume

‘Economic models at the Bank of England’, Governor Eddie George wrote: 4

The Bank’s use of economic models is pragmatic and pluralist: in an ever-changing economy, no single model can capture all the factors that matter for policy. The responsibility for forming judgements about those factors and their policy implications lies with the Committee, not with models or modellers. Nevertheless, economic models are indispensable tools in that process.

This view is reiterated in Bean and Jenkinson (2001, p438): 5

Economic models are imperfect reflections of the UK's complex economy; at best they serve as tools to illuminate the forces shaping economic activity and inflation. The MPC is acutely aware of these limitations, acknowledging that models aid analysis but must be interpreted cautiously when informing monetary policy.

In the forecast process, economic models play a supporting role rather than a starring one. The forecasting platform organises and coordinates contributions from a diverse range of models, which can provide a wide variety of inputs to the forecast, including:

• Elucidating the economic mechanisms that might be determining the behaviour of particular macroeconomic variables.

• Assessing the quantitative effects of particular shocks or events.

• Identifying which types of economic shocks best explain the current state of the economy.

• Quantifying the sensitivity of any of the answers above to different assumptions about the underlying structure of the economy.

• Exploring the policy implications of particular shocks or events.

No single economic model can deliver on all of the items in this list, which is one reason why the Bank has long relied on a suite of economic models covering different aspects of the economy. Each model has different strengths and limitations, and a diversified toolkit improves the robustness of policy analysis and forecasting by reducing blind spots and balancing insights from multiple perspectives.

The decision to build a new forecasting platform was motivated in part by rapid advances in the tools available to estimate and analyse the outputs of models, enabled by

BEQM documentation makes it clear that the new macroeconomic model is not the sole input into forecasting and policy processes; it serves as one of several factors—along with other data, models, and expert judgment—that shape projections and policy decisions (Harrison et al., 2005, p151).

Although the considerations outlined are important to the MPC, additional practical requirements must be met for the platform to be usable in practice. Specifically, staff must be able to apply judgement to the model efficiently, so that the forecast can be constructed within the required timetable.

advances in computing power (this paper was published as Bank of England Working Paper No. 471, May 2013). These advances have been accompanied by the development and deployment of new forecasting models across central banks and policy institutions, reflecting a concerted effort by many central banks to place structural economic models at the heart of their policy and forecast processes.

Design principles

At the heart of the new forecasting platform lies the Box and Draper dictum that ‘all models are wrong, but some are useful’. Since any model used to support the forecast will be misspecified, its value lies in how well it informs discussion and decision-making rather than in point-forecast accuracy. The platform’s purpose is therefore to help MPC members and Bank staff extract the most useful insights from a wide range of models, emphasising interpretation, synthesis and cross-model evidence while acknowledging each model’s limits.

If the aim were simply to produce the best statistical forecasts, a popular approach would be to combine multiple models, for example through a weighted average of their forecasts. The staff do generate forecasts from a suite of econometric models optimised for forecast performance, and these forecasts are used as cross-checks on the Monetary Policy Committee’s projection.
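The weighted-average combination idea mentioned above can be sketched as follows. This is a minimal illustration, not the Bank's method: the function name, forecast values and weights are all invented for the example (inverse-MSE weights would be one typical choice in practice).

```python
# Hypothetical sketch of pooling point forecasts from several models
# via a weighted average. All numbers are illustrative.
def combine_forecasts(forecasts, weights):
    """Weighted average of model point forecasts; weights should sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(f * w for f, w in zip(forecasts, weights))

# Three models' one-quarter-ahead inflation forecasts (per cent), made up:
forecasts = [2.1, 1.8, 2.4]
weights = [0.5, 0.3, 0.2]
pooled = combine_forecasts(forecasts, weights)
print(round(pooled, 3))  # 2.07
```

A forecaster would normally choose the weights from past forecast performance; the text notes there are different ways to do this.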

As noted in Section 2.1, the Inflation Report's primary aim is for the MPC to present a narrative that captures its best collective judgement about the forces shaping the current economy and the possible paths it may take in the future. By contrast, models optimised for statistical forecasting performance seldom provide a clear explanation of why they produce the forecasts they do.

Instead, the new forecasting platform centres on a central forecasting model that provides an organising framework for identifying the key forces shaping the current state of the economy and how they might affect the forecast. Surrounding this core is a suite of complementary models and tools designed to cross-check results, interrogate assumptions and adjust the forecast, especially in areas where the central model is likely to be deficient.

As already noted, the process of producing the Inflation Report forecasts using a central organising model, surrounded by other models and tools, continues the Bank of England’s long-standing approach. But the forecasting platform described here explicitly recognises the role of the model suite: in particular, the IT infrastructure used by Bank staff to produce and analyse forecasts has been designed to integrate the models and tools that support the forecast.

The decision to build the new forecasting platform was made before the financial crisis, and the project required substantial investment in developing new tools and IT systems, a process that necessarily took time to complete.

8 Examples of such central-bank macroeconomic models include the Czech National Bank’s g3 model (Andrle et al., 2009), the Reserve Bank of New Zealand’s KITT model (Beneš et al., 2009), Norges Bank’s NEMO model (Brubakk et al., 2006), the Riksbank’s RAMSES model (Adolfson et al., 2007), the ECB’s NAWM (Christoffel et al., 2008), the Federal Reserve Board’s EDO model (Edge et al., 2007; Chung et al., 2010), and the Bank of Canada’s ToTEM (Murchison and Rennison, 2006).

9 There are, of course, different ways to weight the forecasts. See, for example, Kapetanios et al.

From the late 1990s into the early 2000s, the Bank of England based its forecasts on the Medium-Term Macroeconomic Model (MTMM). In 2003, that role passed to the Bank of England Quarterly Model (BEQM). The move to a platform with a broader suite of models marks a material change in the design of the Bank’s forecasting infrastructure compared with those previous systems.

Although the forecasting platform retains this long-standing generic design, and so represents an evolution rather than a revolution, it differs in important ways. Crucially, the new platform explicitly recognises that the central model is misspecified, and it improves the ways in which additional insights can be applied to refine the forecasts and support decisions.

Delivering these benefits hinges on careful design of the central organising model, requiring a balanced assessment of the model’s strengths and weaknesses across multiple dimensions. Policymakers often want models that serve a broad range of purposes, and in the BEQM design discussions Harrison et al. (2005, p. 12) highlight a trade-off between empirical coherence and theoretical coherence across different model types. In practice, however, the central organising model involves trade-offs along many more dimensions than this dichotomy suggests, and these trade-offs are unlikely to be as smooth or linear as the simple trade-off described by Harrison et al. (2005) and Pagan (2003).

Our view is that there are typically trade-offs between the performance of a potential central organising model in the following dimensions:

• Theoretical foundations The behaviour of the central organising model should be consistent with the theory underpinning policymakers’ views of the monetary transmission mechanism.

• Empirical fit The central organising model should be able to explain the macroeconomic data well.

• Tractability The central organising model should be easy to use, easy to understand, reliable and robust.

• Flexibility It should be possible to examine easily the implications of alternative economic assumptions (e.g. the implications of different parameter values) on the behaviour of the central organising model.

• Comprehensiveness The central organising model should provide adequate coverage of the key economic mechanisms and variables required to support policymakers’ discussions.

Although a full treatment of these trade-offs lies beyond this paper, some examples are instructive: a large model is likely to cover the full range of macroeconomic variables relevant to policy and forecast discussions, but it is also likely to be less tractable than a smaller model. The trade-off between empirical coherence and theoretical coherence, emphasised by Harrison et al. (2005) and Pagan (2003), appears here as well, illustrating the competing demands on model design.

The design of the new forecasting platform reflects our assessment of the nature of the trade-offs listed above. Our assessment led us to build the platform around a central

Vector autoregression (VAR) models typically fit the data well, signaling empirical coherence between the model and observed outcomes In contrast, dynamic stochastic general equilibrium (DSGE) models are constrained by theory, so their predictions must align with the underlying economic assumptions, reflecting theoretical coherence.

organising model of a specific type: a New Keynesian Dynamic Stochastic General Equilibrium (DSGE) model, similar to those implemented by a number of other central banks in recent years. Several considerations support the choice of this modelling approach for policy analysis and forecasting within a modern monetary framework.

New Keynesian DSGE models provide a baseline, well-understood description of key elements of the monetary transmission mechanism that policymakers consider important. They show that monetary policy can affect activity and inflation in the short to medium term because of rigidities in the setting of nominal prices and wages, as discussed in Monetary Policy Committee (1999). They also emphasise that expectations and the stabilising role of monetary policy are essential for understanding the economy; both are central elements of the ‘consensus view’ of monetary policy discussed by Bean (2007).

Tools for using DSGE models and analysing their outputs are now well established, and incorporating these tools within the forecasting platform makes the operation of the central DSGE model more tractable and enables faster interpretation of its outputs.

COMPASS

COMPASS serves as the central organizing model for projection analysis and scenario simulation, providing a unified framework to build forecasts, analyze and explain projections, and run experiments that assess how sensitive forecasts are to alternative assumptions.

COMPASS is an open-economy New Keynesian DSGE model that shares many features with its central-bank antecedents. Because prices and wages are assumed to be sticky, monetary policy can influence aggregate demand, and hence output and employment, in the short and medium term, while in the long run output is determined by technology and the supply of production factors. Consequently, there is no long-run trade-off between inflation and output (or output growth). Additionally, because agents are forward-looking, expectations of future monetary policy actions can have important effects on current output and inflation.

The suite of models

COMPASS is a compact central model that is straightforward to use and understand; like any model, it is misspecified in certain respects, which can affect forecast accuracy. To mitigate these misspecifications, the suite models can be employed to improve robustness and to inform staff and MPC judgements about the forecast produced by COMPASS.

The models in the suite can be divided into three broad classes, according to their main purpose:

(a) Models which articulate economic shocks and channels which are omitted from COMPASS (see Section 5.2);

(b) Models which expand the scope of the forecast, by producing forecasts for additional variables not included in COMPASS (see Section 5.3);

(c) Models which generate alternative forecasts for the variables in COMPASS, which can be used to cross-check and adjust the central organising model’s forecast (see Section 5.4).

MAPS

MAPS, the Model Analysis & Projection System, is designed as a relatively general modelling language that works with many models rather than just the central organizing model; by enabling the same tools to be used across a broad range of models, MAPS greatly reduces the costs of using a suite of models in the production of forecast analysis.

MAPS offers two core capabilities: model analysis and the projection system. The model analysis functions estimate and interrogate the properties of compatible models, while the projection system provides tools to construct forecasts, apply judgement, and analyse the properties of forecasts produced by compatible models, including COMPASS.

The modular design of MAPS enables rapid development of new functions and capabilities by staff, and MAPS operates independently of the user interface, EASE, as shown in Figure 1. This decoupled architecture provides the flexibility to conduct new experiments in forecast analysis and to extend the toolkit in the future.

20 COMPASS is perhaps closest in structure to the RAMSES model developed at the Riksbank (Adolfson et al., 2007) and the ECB’s New Area Wide Model (Christoffel et al., 2008).

21 By contrast, the infrastructure supporting the MTMM and BEQM worked solely with those models.

EASE

EASE is the Economic Analysis & Simulation Environment, a new IT user interface designed to give users straightforward access to the latest models and tools. It enables the same tools to be applied to COMPASS and the suite models through a common user interface, and the results can be easily charted and compared. EASE supports staff workflow in producing inputs for key MPC meetings, increasing the efficiency of standard forecast operations and freeing up staff time to analyse the key economic questions posed by the forecast.

COMPASS stands for the Central Organising Model for Projection Analysis and Scenario Simulation, placing it at the heart of the forecasting platform. It functions as the main organising framework for constructing the MPC’s projections. It also acts as a tool for analysing projections, by identifying the most likely pattern of shocks underpinning them. And it enables examination of how alternative sets of assumptions, including different monetary policy responses, would alter those projections.

To fulfil its roles efficiently, COMPASS is designed to be small and simple. As emphasised in Section 5, it abstracts from a wide range of important economic mechanisms, which are instead incorporated through insights from the suite of models. Like many macroeconomic models used by central banks, COMPASS will evolve over time. This evolution is driven by two factors: first, model parameters will be re-estimated regularly as new and revised data become available (most likely once per year); second, the model’s structure is likely to change as we learn more about its performance.

Section 4 outlines the modelling framework by first detailing the high-level features of the modelling approach, clarifying the generic characteristics of DSGE models and how they relate to macroeconomic data, and then providing a high-level description of COMPASS that highlights its key components and how they fit together. Because COMPASS is a general equilibrium model, its overall behaviour arises from the interaction of all of its parts rather than from any single component in isolation.

Appendix A provides a detailed derivation along with a comprehensive list of all model equations. In Section 4.3 we describe how the model parameters are estimated using Bayesian methods, while Section 4.4 offers a concise summary of several key model properties. For readers seeking a deeper understanding, Appendix B contains a more comprehensive discussion of the model’s properties.

The general modelling approach

Here we outline the general features of our modelling approach, which follows the standard dynamic stochastic general equilibrium (DSGE) framework for this class of models. The model is a system of behavioural equations derived from the optimising decisions of agents such as households and firms. Because these problems are dynamic, agents’ expectations about future outcomes influence current choices through intertemporal budget constraints. Concretely, the model is defined by the first-order conditions from these optimisation problems, together with budget constraints and market-clearing conditions that jointly determine an equilibrium for the whole system. The behaviour of the model is governed by parameters describing preferences, technologies and other structural features.

For example, the Bank’s Model Development Team in the Monetary Analysis area—responsible for maintaining and developing the forecasting platform—plans to investigate alternative specifications for COMPASS’s world block as part of the planned autumn 2013 re-estimation.

23 Available at: http://www.bankofengland.co.uk/publications/Pages/workingpapers/2013/wp471.aspx

24 Available at: http://www.bankofengland.co.uk/publications/Pages/workingpapers/2013/wp471.aspx

Frictions, such as sticky prices, are incorporated through the appropriate specification of the objective functions and constraints in these optimisation problems. Because the model is derived from explicit optimisation problems, it uses a relatively small number of parameters compared with the number of variables, which aids tractability and interpretation.

The model is stochastic because exogenous shocks to preferences, technologies and constraints affect agents’ decisions. In the absence of shocks, the economy converges to a balanced growth path on which all variables grow at constant rates reflecting exogenous population and technology trends. Shocks temporarily move the variables away from this balanced growth path, and the speed of reversion depends on the persistence of those shocks and on the strength of the model’s propagation mechanisms, which in turn depend on the specific frictions included in the model.

As noted, the intertemporal nature of the optimisation problems means that expectations of future events can have important effects on current decisions. The default treatment of expectations in COMPASS follows the conventional approach of assuming that expectations are ‘model consistent’ (often also referred to as ‘rational expectations’). This means that agents’ expectations of the future paths of all variables coincide with the future paths of those variables produced by COMPASS in the absence of future unanticipated shocks. While this standard assumption represents a convenient benchmark, it has some very strong implications (for example, agents’ forecast errors are uncorrelated with actual out-turns). The MAPS toolkit described in Section 6.2.6 includes tools for analysing versions of COMPASS in which expectations are formed by alternative means.

We follow a conventional solution approach and solve a log-linear approximation to the model equations. To do this, we first de-trend the variables in the model by scaling them relative to the exogenous processes that generate growth in population and technology. This delivers a set of model equations in terms of stationary (de-trended) variables. In the absence of shocks, these stationary variables return to a steady state characterised by a constant value for each variable. We approximate the model equations by taking a log-linear approximation around this steady state. A set of ‘measurement equations’ relates the stationary model variables to observable data; we discuss these equations in more detail in Section 4.3.1.
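The de-trending step described above can be sketched numerically: a trending level variable is scaled by the exogenous technology trend and then expressed as a log deviation from its de-trended steady state. The trend growth rate, horizon and steady-state value below are invented for the illustration; this is not the Bank's code.

```python
import math

# Illustrative sketch of de-trending a growing series by a technology trend
# and expressing it as a log deviation from steady state -- the form in
# which the log-linearised model equations are written.
def log_deviation(level, trend, steady_state):
    """Stationary log deviation: log of (de-trended level / steady state)."""
    return math.log(level / trend) - math.log(steady_state)

trend = 1.02 ** 10          # assumed technology trend after 10 quarters of 2% growth
steady_state = 5.0          # assumed de-trended steady-state level
level = 5.1 * trend         # actual level: 2% above the balanced growth path
x_hat = log_deviation(level, trend, steady_state)
print(round(x_hat, 4))      # 0.0198, i.e. a log deviation of about +2%
```

The log-linearised equations then describe the dynamics of deviations such as `x_hat` around zero.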

The model

Supply

Domestic value added (GDP) is described by a Cobb-Douglas production function in which v_t = (1 − α_L) k_{t−1} + α_L l_t + ε̂_TFP,t, where v_t is value added, l_t is labour input (total hours worked), k_{t−1} is the capital stock and ε̂_TFP,t captures exogenous movements in total factor productivity. The parameter α_L, with 0 < α_L < 1, is the labour share in production; under this Cobb-Douglas specification it is optimal for firms to keep the expenditure shares of labour and capital in GDP constant in the long run.
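The log-linear supply relation above is simple enough to evaluate directly. The sketch below uses an illustrative labour share (0.7 is an assumption for the example, not the estimated COMPASS value); all quantities are log deviations from the balanced growth path.

```python
# Sketch of the log-linearised Cobb-Douglas supply relation from the text:
#   v_t = (1 - alpha_L) * k_{t-1} + alpha_L * l_t + tfp_t
def value_added(k_lag, l, tfp, alpha_L=0.7):
    return (1 - alpha_L) * k_lag + alpha_L * l + tfp

# A 1% rise in hours with capital predetermined and no TFP shock:
v = value_added(k_lag=0.0, l=0.01, tfp=0.0)
print(round(v, 4))  # 0.007 -> output rises by alpha_L * 1%
```

Because capital is predetermined (dated t−1), output responds within the period only through hours and TFP.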

The efficiency of value-added production fluctuates with temporary changes in total factor productivity (TFP), while the efficiency of labour increases over time through exogenous labour-augmenting technological progress. Together, these forces generate both transitory movements in productivity and a sustained trend rise in labour productivity.

26 See Section A.2 & Section A.4 in Appendix A for full details of the de-trending and log-linearisation process.

27 Unpublished versions of COMPASS incorporated variable capital utilisation with adjustment costs, tested in two forms: one in which higher utilisation raises the depreciation (wear and tear) of physical capital, and one in which utilisation imposes a direct cost in terms of resources available for consumption and other uses. In both specifications the model’s estimated dynamics were similar with and without variable capital utilisation, so, to preserve tractability, it was dropped from the model.

Within the model there is a common stochastic trend, χ̃_Z,t, whose logarithm evolves with a unit root and drift: log χ̃_Z,t = log Γ_Z + log χ̃_Z,t−1 + γ_Z,t, where γ_Z,t is a stochastic disturbance. The trend χ̃_Z,t does not itself appear in the log-linearised equations because the variables are expressed as log deviations from the model’s balanced growth path, having been de-trended by the technology term.
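The unit-root-with-drift process for the log of the trend can be simulated in a few lines. The drift and shock volatility below are assumptions for the sketch, not estimated values.

```python
import random

# Illustrative simulation of the common stochastic trend described above:
#   log chi_t = log Gamma_Z + log chi_{t-1} + gamma_t
# where gamma_t is a mean-zero disturbance. With a unit root, each shock
# shifts the level of the trend permanently.
def simulate_log_trend(T, log_drift=0.005, sigma=0.01, seed=0):
    rng = random.Random(seed)
    log_chi = [0.0]                      # normalise log chi_0 = 0
    for _ in range(T):
        log_chi.append(log_chi[-1] + log_drift + rng.gauss(0.0, sigma))
    return log_chi

path = simulate_log_trend(40)
print(len(path))  # 41
```

Setting `sigma=0` recovers the deterministic balanced growth path, along which the log trend rises by the drift each period.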

Firms hire labour and capital to produce value-added output, and cost minimisation determines how much capital they demand relative to labour as a function of the relative prices of the two inputs. A rise in the rental price of capital relative to the real wage lowers the demand for capital services. Under cost minimisation, the real marginal cost of producing a unit of value added is a weighted average of the input prices, given by mc_V,t = (1 − α_L) r_K,t + α_L w_t − ε̂_TFP,t, reflecting the input shares and the productivity term.
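The marginal-cost relation just quoted can be checked numerically. As before, the labour share used here is an illustrative assumption, not the COMPASS estimate.

```python
# Sketch of real marginal cost in the value-added sector from the text:
#   mc_V = (1 - alpha_L) * r_K + alpha_L * w - tfp
# (all quantities are log deviations)
def real_marginal_cost(r_K, w, tfp, alpha_L=0.7):
    return (1 - alpha_L) * r_K + alpha_L * w - tfp

# A 1% TFP improvement with unchanged factor prices lowers marginal cost 1%:
mc = real_marginal_cost(r_K=0.0, w=0.0, tfp=0.01)
print(round(mc, 4))  # -0.01
```

Conversely, a rise in either factor price raises marginal cost in proportion to that factor's share.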

Final output producers combine domestic value added with imported intermediates according to a Cobb-Douglas production function z_t = α_V v_t + (1 − α_V) m_t, where z_t is final output, v_t is domestic value added and m_t is imported intermediates. Cost minimisation by these producers generates an import demand that depends negatively on the relative price of imports to domestic value added: m_t − v_t = p_V,t − p_M,t − ε̂_M,t, where ε̂_M,t is a stochastic disturbance that shifts relative import demand for reasons unrelated to prices.
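The import demand relation can be illustrated directly; the numbers are made-up log deviations and the sign convention simply mirrors the equation as written in the text.

```python
# Sketch of the log-linearised import demand relation from the text:
#   m_t - v_t = p_V,t - p_M,t - eps_M,t
# i.e. imports rise relative to value added when import prices fall
# relative to value-added prices.
def relative_import_demand(p_V, p_M, eps_M=0.0):
    return p_V - p_M - eps_M

# Import prices fall 2% relative to value-added prices:
gap = relative_import_demand(p_V=0.0, p_M=-0.02)
print(round(gap, 4))  # 0.02 -> imports rise 2% relative to value added
```

The disturbance `eps_M` shifts this relation for non-price reasons, as the text describes.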

Final output can be used for consumption (by households and government), investment or exports. This means that:

Z̃_t denotes the total production of final output at time t and is allocated across Z̃_C,t (household consumption), Z̃_G,t (government consumption), Z̃_I,t (capital investment), Z̃_IO,t (other investment) and Z̃_X,t (exports). The tilde above each variable indicates that these series have not been de-trended. Retailers, operating under perfect competition, purchase the final-output good from producers and convert it into consumption, investment, government spending or export goods through a simple linear technology, with Z̃_C,t representing the portion directed to consumption goods (and analogously for the other uses).

In the stationary version of the model, all prices are expressed as relative prices, measured against the price of final output The real wage is also detrended by the level of labour-augmenting productivity to ensure that it is stationary.

Under a Cobb-Douglas production function, the real marginal cost of value added equals the labour share in value added: mc_V,t = w_t + l_t − v_t − p_V,t. In the retail technologies, C̃ denotes non-de-trended consumption and χ̃_C,t captures productivity in the consumption retail sector. There are analogous equations for investment, government and export goods that allow for different sectoral productivities. The χ̃_C,t, χ̃_G,t, χ̃_I,t, χ̃_IO,t and χ̃_X,t variables all follow deterministic trends.

This approach allows trend growth rates to differ across sectors without requiring an explicit multi-sectoral supply side. Detrending and log-linearising equation (6) gives z_t = ω_CZ c_t + ω_GZ g_t + ω_IZ i_t + ω_IOZ i_O,t + ω_XZ x_t (8), where ω_CZ denotes the steady-state share of consumption expenditure in final output and the remaining coefficients (ω_GZ, ω_IZ, ω_IOZ, ω_XZ) capture the corresponding sectoral shares.
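The detrended accounting identity in equation (8) is simply a share-weighted sum, which can be sketched as follows (the share values here are illustrative, not the COMPASS calibration):

```python
# Log-linearised final-output identity (equation (8)): z_t is the
# share-weighted sum of the expenditure components, all measured as log
# deviations from steady state. Share values are illustrative only.
shares = {"c": 0.55, "g": 0.20, "i": 0.10, "io": 0.05, "x": 0.10}

def final_output(components, shares):
    """Final output as the share-weighted sum of expenditure components."""
    return sum(shares[k] * components[k] for k in shares)

# A 1% rise in consumption alone raises final output by the consumption share.
z = final_output({"c": 1.0, "g": 0.0, "i": 0.0, "io": 0.0, "x": 0.0}, shares)
```

The same arithmetic is what allows a shock to any single expenditure component to be translated into its contribution to final output growth.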

Price and wage setting

In value-added and final-output sectors, firms operate under monopolistic competition and price their products with a markup over marginal cost Price changes are costly, so firms account for these adjustment costs when setting prices As a result, the markup responds to both the current inflation rate and expected future inflation.

These assumptions give rise to pricing equations of the form:

π_V,t = μ̂_V,t + [1/(φ_V(1 + βΓ_H ξ_V))] mc_V,t + [ξ_V/(1 + βΓ_H ξ_V)] π_V,t−1 + [βΓ_H/(1 + βΓ_H ξ_V)] E_t π_V,t+1   (9)

with an analogous equation for final output price inflation, π_Z,t.

These pricing equations describe how inflation evolves in the model, with π_V,t representing inflation in value-added prices and π_Z,t inflation in final output prices; E_t π_V,t+1 and E_t π_Z,t+1 are the corresponding one-period-ahead expectations. Exogenous fluctuations in producers' desired margins are captured by the mark-up shocks μ̂_V,t and μ̂_Z,t. These shocks are assumed to be purely transitory, with no persistence, and serve to proxy temporary changes in market conditions or in taxes applied to spending or production.

Both pricing equations share the same generic form because the same nominal rigidities affect both producer types The equations show that inflation tends to rise when real marginal costs rise (firms pass higher costs through as price increases), when expected inflation rises (firms raise current prices in anticipation of higher future prices), or when past inflation is higher (since price adjustment costs depend on past inflation) The responsiveness of current inflation to these drivers depends on the parameters that govern the nominal rigidities For example, if the parameter governing the degree of indexation to past value-added inflation, ξ_V, is set to zero, the equation becomes purely forward-looking for value-added pricing, with past inflation not influencing current price-setting And if the rigidity parameter φ_V is larger, value-added firms will tend to increase prices by less in response to higher costs.
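The role of the rigidity and indexation parameters can be illustrated by computing the reduced-form coefficients of a generic hybrid Phillips curve of this type. This is a sketch under the approximation βΓ_H ≈ 1; the parameter values are illustrative, not COMPASS estimates:

```python
def nkpc_coefficients(phi, xi, beta_gamma=0.999):
    """Coefficients of a hybrid Phillips curve of the generic form
    pi_t = slope*mc_t + w_lag*pi_{t-1} + w_lead*E_t pi_{t+1} (+ markup shock).
    `phi` is the price-rigidity parameter, `xi` the indexation parameter and
    beta_gamma stands in for beta*Gamma_H (approximately one)."""
    denom = 1.0 + beta_gamma * xi
    slope = 1.0 / (phi * denom)   # pass-through of real marginal cost
    w_lag = xi / denom            # weight on lagged inflation (indexation)
    w_lead = beta_gamma / denom   # weight on expected future inflation
    return slope, w_lag, w_lead

# With no indexation (xi = 0) the curve is purely forward-looking, and a
# higher rigidity parameter phi flattens the response to marginal cost.
s0, lag0, lead0 = nkpc_coefficients(phi=100.0, xi=0.0)
s1, lag1, lead1 = nkpc_coefficients(phi=100.0, xi=0.5)
```

The two calls reproduce the comparative statics described in the text: setting ξ to zero removes the lagged-inflation term, and raising φ shrinks the marginal-cost slope.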

30 Note that the productivity trend in the consumption retail sector is normalised so that χ̃_C,t ≡ χ̃_Z,t, and that imports share the same trend as exports. See Section A.2 in Appendix A for discussion.

31 The terms β and Γ_H represent households' discount factor and population growth respectively, where βΓ_H ≃ 1.

Final output inflation and CPI inflation are equivalent in this model because consumption taxes such as VAT are not included Since marginal costs in the final output sector are a weighted average of imported prices and domestic value-added prices, CPI inflation is a function of both domestic and imported prices.

Importers purchase output produced overseas and sell it to domestic producers. Import prices, quoted in domestic currency, are set as a markup over the cost importers pay for world export goods on world markets. Since price adjustments are costly, the markup of import prices over costs depends on changes in the rate of import price inflation:

π_M,t = μ̂_M,t + [1/(φ_M(1 + βΓ_H ξ_M))] (p^F_X,t − q_t − p_M,t) + [ξ_M/(1 + βΓ_H ξ_M)] π_M,t−1 + [βΓ_H/(1 + βΓ_H ξ_M)] E_t π_M,t+1   (11)

Equation (11) ties import price inflation, π_M, to the foreign currency price of world exports, p_X^F, and the real exchange rate, q, with domestic prices summarized by p_M, so that p_X^F − q − p_M is the domestic currency price of world exports relative to imports An exchange-rate appreciation, i.e., a rise in q, tends to lower the domestic-currency price of world export goods, which in turn reduces import price inflation Price adjustment costs create an incomplete pass-through from exchange-rate movements to import prices in the short run; as prices gradually adjust, the pass-through is complete in the long run.

Exporters operate under monopolistic competition and retain some price-setting power in world markets. They set export prices in foreign currency as a markup over marginal costs, where marginal cost is the price of domestic final output expressed in foreign currency, subject to the costs of adjusting prices. These price-adjustment costs imply that the markup is a function of changes in export price inflation:

π^EXP_t = μ̂_X,t + [1/(φ_X(1 + βΓ_H ξ_X))] (q_t − p^EXP_t) + [ξ_X/(1 + βΓ_H ξ_X)] π^EXP_t−1 + [βΓ_H/(1 + βΓ_H ξ_X)] E_t π^EXP_t+1   (12)

In equation (12), π^EXP_t is export price inflation expressed in foreign currency, q is the real exchange rate and p^EXP_t is the relative export price in foreign currency. An appreciation of the domestic exchange rate raises marginal cost measured in foreign currency. As with import prices, the pass-through from exchange rate movements to export prices expressed in foreign currency is incomplete in the short run owing to price adjustment costs, but complete in the long run.

Assuming similar adjustment costs apply to nominal wages, the model yields a wage-setting relationship in which wages are set as a markup over the cost of supplying labour, with the size of the markup depending on the costs of adjusting wage inflation:

π^W_t = μ̂_W,t + [1/(φ_W(1 + βΓ_H ξ_W))] (mrs_t − w_t) + [ξ_W/(1 + βΓ_H ξ_W)] π^W_t−1 + [βΓ_H/(1 + βΓ_H ξ_W)] E_t π^W_t+1   (13)

where π^W_t denotes wage inflation, w is the real wage measured in terms of final output (or, equivalently, consumption), and mrs_t denotes the marginal cost of supplying labour, which is increasing in the amount of labour supplied, in the labour supply shock ε̂_L,t, and in the habit-adjusted consumption of optimising households, c_O,t − ψ_C c_O,t−1.

32 Section 8.2 uses an extended version of COMPASS that includes a consumption tax rate to show how the effects of changes in VAT can be incorporated into a forecast built using COMPASS.

It is assumed that p^F_X measures the world price of exported goods relative to the world price of final-output goods. The quantity of labour supplied depends on households' valuation of leisure: as the cost of supplying labour rises relative to the real wage, workers demand higher wages, which pushes up wage inflation.

Private domestic demand

Households are either constrained or unconstrained. Constrained, or 'rule-of-thumb', households lack access to financial markets and cannot save, so they spend their current labour income on consumption. Unconstrained, or 'optimising', households can accumulate assets to smooth their consumption over time; they also exhibit habit formation, disliking abrupt changes in consumption, which imparts inertia to both the level and the rate of change of consumption. They save in two types of asset: physical capital, which is rented to firms, and deposits, which a portfolio packager invests in domestic and foreign bonds and on which it pays households a return subject to a stochastic risk premium. Together, asset accumulation and habit formation generate more persistent consumption dynamics.

Unconstrained households choose a consumption path that maximises lifetime utility. Along the optimal path, the marginal utility forgone from reducing current consumption by one unit equals the expected marginal utility gained from consuming the proceeds of that saving in the next quarter. Combining this intertemporal condition with the consumption behaviour of constrained households yields an Euler equation for aggregate consumption (equation (14)), tying together current and expected future consumption in the economy.36

Equation (14) shows that consumption is a function of expected future consumption and lagged consumption: expected future consumption matters because unconstrained households are forward-looking, while lagged consumption matters when the habit formation parameter ψ_C is strictly positive.37 The second line shows that consumption is sensitive to the real interest rate, r_t − E_t π^Z_t+1, the opportunity cost of current consumption relative to consumption one period ahead (the cost of borrowing or the return to saving), adjusted for the effects of the risk premium shock, ε̂^B_t.

Labor supply declines as consumption rises because leisure and consumption are substitutes, and the marginal utility of consumption falls with higher consumption The relevant measure of consumption is that of optimising households, c_o, which is defined and discussed in detail in Section 4.2.3.

We assume households own the capital stock directly rather than holding equity claims on firms that accumulate capital; in the absence of frictions that would cause firms and households to value capital differently, the model behaves identically under either assumption, so treating households as direct holders of the capital stock is the simpler and equally valid approach.

36 This equation is derived by combining equations (A.276), (A.287) and (A.288) from Appendix A.

37 If ψ C = 0, lagged consumption does not have a direct effect on current consumption.

Working Paper No. 471 (May 2013)

The consumption response to the risk premium shock, ε̂^B_t, depends on the degree of financial constraints and on intertemporal substitution. The impact of changes in the adjusted real interest rate on consumption is larger the greater the share of unconstrained households in the economy (0 < ω_O ≤ 1) and the higher the elasticity of intertemporal substitution. The third line of equation (14) shows that contemporaneous labour income also affects consumption, with these income effects becoming more important as the share of constrained households (1 − ω_O) increases.
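The role of habit formation in the Euler equation can be illustrated with a textbook-style sketch of the weights on lagged and expected future consumption. This is not the exact COMPASS equation (14), which also carries the risk-premium and labour-income terms discussed above; the functional form below is an illustrative assumption:

```python
def euler_weights(psi_c):
    """Illustrative weights on lagged and expected future consumption in a
    habit-formation Euler equation of the generic form
    c_t = w_lag*c_{t-1} + w_lead*E_t c_{t+1} - sensitivity*(real-rate terms).
    A textbook-style sketch, not the exact COMPASS equation (14)."""
    w_lag = psi_c / (1.0 + psi_c)   # zero when there is no habit formation
    w_lead = 1.0 / (1.0 + psi_c)    # forward-looking weight
    return w_lag, w_lead

# With psi_c > 0, lagged consumption gets a strictly positive weight.
w_lag, w_lead = euler_weights(0.8)
```

Consistent with footnote 37, setting ψ_C = 0 removes any direct effect of lagged consumption on current consumption.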

Capital is an input to value-added production, and the value of one unit of capital is the present value of the returns it is expected to generate: the discounted stream of future rental payments, adjusted for depreciation, plus the continuation value of holding capital, discounted at a real interest rate that includes the risk premium disturbance.

Equation (15) defines Tobin's Q, tq_t, as the value of one unit of capital, determined by the interaction between the current return on capital, r^K_t, and the discounted, depreciation-adjusted expected future payoff from holding capital, (1 − δ_K) E_t r^K_t+1. The relevant real interest rate in the model is r_t − E_t π^Z_t+1 + ε̂^B_t, that is, the rate adjusted for the risk premium shock, and this rate combines with Tobin's Q to influence investment decisions. The depreciation rate δ_K captures wear and tear on capital, reducing the future value of capital services and thus the incentive to invest.

Investment in physical capital is subject to adjustment costs, so although increases in the value of capital (tq_t) induce additional investment, this effect is tempered by the desire to smooth changes in investment:

i_t = [1/(1 + βΓ_H)] i_t−1 + [βΓ_H/(1 + βΓ_H)] E_t i_t+1 + [1/(ψ_I(1 + βΓ_H)(Γ_H Γ_Z Γ_I)²)] tq_t + ε̂_I,t   (16)

Equation (16) defines investment, i, where ε̂_I is a shock to the investment adjustment-cost function and ψ_I ≥ 0 determines the size of adjustment costs: as ψ_I approaches zero, investment becomes almost costless to adjust. The term (Γ_H Γ_Z Γ_I)² appears because Γ_H Γ_Z Γ_I is the steady-state growth rate of investment and the adjustment cost is specified to be quadratic in the deviation of investment growth from its trend rate.
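The link between the adjustment-cost parameter and the strength of the investment response to Tobin's q can be sketched numerically. The functional form mirrors the coefficient in the equation above but should be read as an illustrative assumption, as should the parameter values:

```python
def investment_q_sensitivity(psi_i, growth=1.005, beta_gamma=0.999):
    """Sensitivity of investment to Tobin's q under quadratic adjustment
    costs: smaller psi_i means cheaper adjustment and a stronger response.
    `growth` stands in for the steady-state gross growth rate of investment
    (Gamma_H * Gamma_Z * Gamma_I); form and numbers are illustrative."""
    return 1.0 / (psi_i * (1.0 + beta_gamma) * growth ** 2)

s_low_cost = investment_q_sensitivity(0.5)   # cheap adjustment
s_high_cost = investment_q_sensitivity(2.0)  # costly adjustment
```

As the text notes, driving ψ_I towards zero makes the sensitivity arbitrarily large, i.e. investment responds almost one-for-one with the value of capital.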

Interactions with the rest of the world

Exporters supply whatever quantity of exports is demanded by the rest of the world, so export supply adjusts to global demand. Export demand is a simple function of world output and the relative price of exports:

x_t = z^F_t + ε̂^κF_t − (p^EXP_t − p^F_X,t)   (17)

Higher world output raises export demand, while a rise in the price of domestic exports relative to the world export price reduces it.

An adjustment accounts for the expected one-period-ahead growth of the trend in final output, E_t γ^Z_t+1. Under the estimation described in Section 4.3, this term equals zero.

The risk-premium shock can be viewed as a reduced-form proxy for the factors that widen the wedge between the risk-free real interest rate and the rate households actually face It is purely exogenous and does not arise endogenously from financial frictions For a framework that incorporates financial frictions, see Section 8.3, which describes a suite model that does include these frictions.

40 The households' discount factor β incorporates a degree of over-discounting; this mechanism ensures that the model's net foreign asset position is stationary.

The presence of β and Γ_H reflects the same discounting as in the Phillips curves described above (see footnote 31 on page 16). Here x denotes exports, z^F world output, p^EXP domestic export prices expressed in foreign currency and p^F_X world export prices. The exogenous process ε̂^κF_t captures shocks to foreign preferences for domestically produced exports, so export demand may fluctuate for reasons unrelated to the overall level of world demand or to the relative price of domestic exports.

Optimal portfolio allocation between foreign and domestic bonds specifies that the re- turns on domestic and foreign bonds are equalised when measured in a common currency.

So the real exchange rate satisfies an uncovered interest parity (UIP) condition:

q_t = E_t q_t+1 + r_t − E_t π^Z_t+1 + ε̂^BF_t   (18)

Equation (18) includes an exogenous shock to the UIP condition, denoted ε̂^BF_t. Here q_t is the real exchange rate, defined as the price of domestic consumption relative to foreign consumption, so that a higher q_t represents an appreciation of the domestic currency, and r_t − E_t π^Z_t+1 is the ex ante real interest rate. The shock ε̂^BF_t captures disturbances to UIP that can arise from movements in the world real interest rate (assumed constant in the model) or from shifts in the global risk premium. Together, these elements connect domestic real returns, expected inflation and exchange-rate dynamics to exogenous factors outside the domestic economy.
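Iterating equation (18) forward shows that today's real exchange rate reflects the whole expected future path of real interest rates and UIP shocks. The perfect-foresight sketch below illustrates this; the additive placement of the shock and the terminal value are assumptions of the illustration:

```python
def real_exchange_rate(real_rates, uip_shocks, q_terminal=0.0):
    """Iterate the linearised UIP condition (18) forward:
    q_t = E_t q_{t+1} + (r_t - E_t pi^Z_{t+1}) + shock_t,
    so q_t equals the terminal value plus the sum of expected future
    real-rate and risk-premium terms (higher q = appreciation)."""
    q, path = q_terminal, []
    for r, s in zip(reversed(real_rates), reversed(uip_shocks)):
        q = q + r + s            # step one period back in time
        path.append(q)
    return list(reversed(path))

# A transitory 1pp rise in the real rate appreciates the currency on impact.
path = real_exchange_rate([1.0, 0.0, 0.0], [0.0, 0.0, 0.0])
```

The example reproduces the textbook UIP result: a temporary real-rate rise causes an immediate appreciation followed by an expected depreciation back to the terminal level.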

The domestic economy is treated as small relative to the rest of the world, so world output and world prices are unaffected by domestic developments and are modelled as simple exogenous AR(1) processes. World output inherits permanent labour-augmenting productivity shocks realised domestically, at a speed controlled by the parameter ζ_ωF, while world export prices follow their own AR(1) dynamics. Cointegration between world output and domestic output is imposed to ensure balanced growth, and ζ_ωF decouples the speed with which world output inherits LAP shocks from the persistence of world output shocks.

Fiscal and monetary policy

Real government expenditure is assumed to follow a simple autoregressive process:

g_t − g_t−1 + γ^Z_t = (ρ_G − 1) g_t−1 + ε̂_G,t   (21)

As detailed in the section beginning on page A7 of Appendix A, the domestic risk premium shock arises in the returns paid to households by a portfolio packager The returns the portfolio packager earns on investments in domestic and foreign bonds are not subject to this shock, and as a result, the shock does not feature in the UIP condition.

Earlier, unpublished versions of COMPASS included a net foreign asset term in the UIP condition, a specification often used to ensure that the model reverts to a unique steady-state net foreign asset position after temporary shocks. In the version analysed here, that term is replaced by over-discounting of consumption, as noted in footnote 40 on page 19 (see Schmitt-Grohé and Uribe (2003) for a comparison of this type of approach with several alternatives).

Many small open-economy DSGE models treat foreign variables via vector autoregressions (VARs), as in Adolfson et al. (2007), Christoffel et al. (2008) and Harrison and Oomen (2010). We opt for a simpler specification in this study and plan to explore alternative specifications in future research.

The trend component of world demand evolves as ω^F_t = −γ^Z_t + (1 − ζ_ωF) ω^F_t−1, with 0 < ζ_ωF ≤ 1, where the parameter ζ_ωF controls the speed with which world demand inherits permanent domestic labour productivity shocks. An alternative, arguably more natural, approach would have the domestic economy inherit permanent productivity shocks from the rest of the world, but implementing that would require a much richer structure for the world block in order to identify those shocks.

This error-correction specification means that the real growth of government spending moves inversely with the deviation of detrended spending, g, from its steady state, while exogenous shocks to spending are captured by ε̂_G,t. A positive labour-augmenting productivity shock (which raises γ^Z_t) reduces detrended government spending in the short run, because spending does not rise one-for-one with the productivity gain; over the longer term, detrended spending adjusts back as the level of spending catches up with the higher level of productivity. The speed of this adjustment, and the persistence of spending shocks, are governed by 0 < ρ_G < 1.
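The short-run fall and subsequent recovery of detrended spending after a trend-growth surprise can be simulated directly from equation (21), rearranged as g_t = ρ_G g_t−1 − γ^Z_t + ε̂_G,t. Parameter values below are illustrative:

```python
def simulate_g(rho_g, gamma_z_path, eps_path, g0=0.0):
    """Simulate detrended government spending under equation (21),
    rearranged as g_t = rho_g*g_{t-1} - gamma^Z_t + eps^G_t. A positive
    trend-growth (labour-augmenting productivity) surprise lowers detrended
    spending on impact; it then mean-reverts at rate rho_g."""
    g, path = g0, []
    for gz, e in zip(gamma_z_path, eps_path):
        g = rho_g * g - gz + e
        path.append(g)
    return path

# One-off 1pp trend-growth surprise: g falls, then recovers geometrically.
path = simulate_g(0.8, [1.0, 0.0, 0.0, 0.0], [0.0] * 4)
```

Because g is measured relative to trend, the return of the simulated path towards zero corresponds to the level of spending gradually catching up with the higher productivity level.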

Government spending is financed by lump-sum taxes levied on unconstrained households, though in principle the government may run budget deficits or surpluses by increasing or reducing the stock of debt it issues to private agents. The lump-sum (non-distortionary) tax is the fiscal instrument used to stabilise government debt. Because the tax is non-distortionary, a 'Ricardian equivalence' result holds: for a given path of government spending, the path of government debt (and hence the fiscal rule for the lump-sum tax used to control it) has no effect on the decisions of private agents. This result allows us to simplify the model by assuming that the lump-sum tax balances the government's budget each period. The simplified specification of fiscal policy in COMPASS means that it is ill-suited to the analysis of changes in fiscal policy; instead, the effects of changes in fiscal policy are quantified and analysed using the suite of models.46

The central bank sets the domestic short-term nominal interest rate according to a simple reaction function, often called a Taylor rule:

r_t = θ_R r_t−1 + (1 − θ_R)(θ_π π^Z_t + θ_y ŷ_t)

so the current rate depends on its own lag, with θ_R governing the degree of interest-rate smoothing, and on the deviations of inflation from target and output from potential.

In the policy reaction function, r is the quarterly policy rate, π^Z_t is the deviation of quarterly consumer price inflation from the target and ŷ_t measures the output gap. The function allows for interest-rate smoothing, so the current rate depends on the rate set in the previous quarter. The output gap used here is the difference between value added, v_t, and the level of value added that would prevail if all prices (including wages) were perfectly flexible, that is, a flexible-price output gap.47 A common alternative defines the output gap as the difference between actual output and the level implied by the value-added production function with inputs measured at trend levels.48 In COMPASS, these two approaches yield very similar results in both estimation and the ensuing model properties, indicating that the two output-gap measures are quantitatively close over the estimation sample.
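The smoothing mechanics of the reaction function can be sketched as follows. The coefficient names and values are illustrative, not COMPASS estimates:

```python
def taylor_rate(r_lag, pi_gap, output_gap,
                theta_r=0.8, theta_pi=1.5, theta_y=0.25):
    """Interest-rate reaction function with smoothing, of the generic form
    r_t = theta_r*r_{t-1} + (1 - theta_r)*(theta_pi*pi^Z_t + theta_y*ygap_t).
    Coefficient names and values are illustrative assumptions."""
    return theta_r * r_lag + (1.0 - theta_r) * (theta_pi * pi_gap
                                                + theta_y * output_gap)

# A 1pp inflation overshoot, starting from a neutral rate and a closed gap:
r = taylor_rate(r_lag=0.0, pi_gap=1.0, output_gap=0.0)
```

With θ_R = 0.8, only one fifth of the desired long-run response is delivered in the first quarter; the remainder accrues gradually through the lagged-rate term.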

As of writing, the MPC forecast already incorporates fiscal policy changes using a range of empirical multipliers that vary with the specific fiscal instrument in question. Bank staff are also developing a fiscal suite of models to support more structural analysis of fiscal issues in the future.

In this setup, potential output is measured using a flexible-price, flexible-wage version of COMPASS, in which temporary fluctuations away from the desired mark-ups are ignored.

48 The definition of the ‘trend’ level of output used in the production function approach is

Note that, in practice, the Inflation Report forecasts have been published under the assumption that Bank Rate follows a measure of market expectations of Bank Rate and that government spending follows the government's published spending plans.

These ‘conditioning’ assumptions, and other issues related to changes in monetary policy, are discussed in Section 8.4.

Forcing processes and shocks

All movements of the endogenous variables away from the steady state are ultimately caused by exogenous, random shocks In COMPASS, each shock is assumed to be drawn from a Gaussian (normal) distribution with a mean of zero and a variance equal to a parameter that captures the magnitude of the shocks.

49 The shocks in the model are uncorrelated with each other, and the realisations of a particular shock are uncorrelated over time. Shocks in COMPASS are labelled using the symbol η.

Shocks affect the model via forcing processes that impart persistence to their impact. These forcing processes are typically AR(1) processes, with ρ denoting the AR coefficient and σ the scale of the process. The coefficient on the shock η in each forcing-process equation is (1 − ρ²)^(1/2)σ, so that the unconditional standard deviation of each forcing process equals σ, independently of its persistence.50

A complete list of the forcing processes is given below:

ε̂^ZF_t = ρ_ZF ε̂^ZF_t−1 + (1 − ρ²_ZF)^(1/2) σ_ZF η^ZF_t
ε̂^BF_t = ρ_BF ε̂^BF_t−1 + (1 − ρ²_BF)^(1/2) σ_BF η^BF_t
ε̂^L_t = ρ_L ε̂^L_t−1 + (1 − ρ²_L)^(1/2) σ_L η^L_t
ε̂^LAP_t = ρ_LAP ε̂^LAP_t−1 + (1 − ρ²_LAP)^(1/2) σ_LAP η^LAP_t
ε̂^TFP_t = ρ_TFP ε̂^TFP_t−1 + (1 − ρ²_TFP)^(1/2) σ_TFP η^TFP_t

49 See Section 6.2.1 for a discussion of the general modelling framework.

50 This is useful for separate identification of the standard deviations and persistence parameters in the estimation, as well as in setting the priors for the estimation.

Note that forcing processes are denoted ε̂, with the exception of the forcing processes for firms' desired price mark-ups, which are denoted μ̂ and which have no persistence, reflecting a strong prior that desired mark-ups should not fluctuate persistently away from their long-run levels. The value-added production function in equation (1) implies that growth in the unit-root process for trend technological progress can be written as: γ^Z_t = α_L ε̂^LAP_t + (1 − α_L) γ^Z_t−1.
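The (1 − ρ²)^(1/2)σ scaling used in the forcing processes can be verified by simulation, as a minimal sketch:

```python
import random

def simulate_forcing(rho, sigma, n, seed=0):
    """Simulate a forcing process of the form used above:
    eps_t = rho*eps_{t-1} + (1 - rho^2)^(1/2) * sigma * eta_t, eta_t ~ N(0,1).
    The (1 - rho^2)^(1/2) scaling makes the unconditional standard deviation
    of the process equal to sigma, regardless of the persistence rho."""
    rng = random.Random(seed)
    scale = (1.0 - rho ** 2) ** 0.5 * sigma
    eps, path = 0.0, []
    for _ in range(n):
        eps = rho * eps + scale * rng.gauss(0.0, 1.0)
        path.append(eps)
    return path

path = simulate_forcing(rho=0.9, sigma=1.0, n=100_000)
sd = (sum(x * x for x in path) / len(path)) ** 0.5  # close to sigma = 1
```

This normalisation is what allows the standard deviations σ and the persistence parameters ρ to be identified separately in the estimation (see footnote 50).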

Estimation

Data and measurement equations

To estimate the model, we need to specify how the variables in the model are related to UK data. Because the COMPASS equations are written in terms of de-trended variables, the de-trending step must be accounted for explicitly. This section first describes the data and then explains how the data map to the (de-trended) COMPASS variables.

We use data for fifteen macroeconomic variables (described below) and analyze a sample spanning 1993Q1–2007Q4 Although relatively short, this period is chosen to ensure that the estimation excludes large changes in monetary policy regime An alternative would be to extend the sample to capture more regime changes, but this approach introduces other issues We exclude the recent financial crisis from the sample to prevent this episode from disproportionately affecting the data properties and to avoid using data that are subject to larger revisions.

Model input data come from a set of raw observables, representing the data in their untransformed form before any de-trending These raw observables are then converted into the appropriate units using data transformation equations that remove trends and align the values with the scale used for COMPASS variables (typically 100 times the logarithmic deviation from steady state) Table 1 documents the raw observables, their data sources, and the specific data transformation equations employed.

The information in Table 1 shows how the raw data are mapped into model observable units.

As an example, consider the data transformation equation for the import price deflator:

dlnpmdef_t ≡ 100 Δln pmdef_t − Π^∗,tt_t − Π^m,tt_t   (24)

The transformed observable dlnpmdef_t is the variable that maps to COMPASS variables. It is constructed by taking the first difference of the log of the raw import deflator data, multiplying by 100, and then subtracting two time-varying trends that correct for deterministic movements in the import deflator over the sample that are not captured by the model's own trends.
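The transformation in equation (24) can be sketched directly; the series and trend values below are illustrative:

```python
import math

def transform_deflator(raw_levels, cpi_trend, import_trend):
    """Sketch of the data transformation in equation (24):
    dlnpmdef_t = 100*(ln pmdef_t - ln pmdef_{t-1}) - Pi(*,tt)_t - Pi(m,tt)_t,
    i.e. 100 times the log first difference of the raw import deflator, less
    the time-varying CPI-target and relative-import-price trends."""
    out = []
    for t in range(1, len(raw_levels)):
        dln = 100.0 * (math.log(raw_levels[t]) - math.log(raw_levels[t - 1]))
        out.append(dln - cpi_trend[t] - import_trend[t])
    return out

# A deflator growing at exactly the trend rate maps to (approximately) zero.
levels = [100.0 * math.exp(0.005 * t) for t in range(5)]
obs = transform_deflator(levels, [0.5] * 5, [0.0] * 5)
```

A series that grows at precisely the assumed trend rate is thus transformed into a zero-mean observable, which is the point of the de-trending step.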

These time-varying trends correct for the low-frequency decline in CPI inflation associated with the transition to inflation targeting in the early part of the sample, and for the fact that import prices have fallen less rapidly over the sample than the model's supply side would imply.

As is common in models like COMPASS, the number of observable data series is much smaller than the total number of endogenous variables in the model This gap partly reflects that many DSGE equations are identities—for example, market-clearing conditions that equate total expenditure with total output It also reflects that a large share of endogenous variables do not have measured counterparts in the data, such as Total Factor Productivity (TFP).

54 Data from 1987Q3–1992Q4 is used as a ‘training sample’ to initialise the Kalman filter.

Harrison and Oomen (2010) analyze UK macroeconomic data from the 1960s and address regime shifts by imposing deterministic breaks in the trends of nominal variables, illustrating the difficulties of assembling a consistently measured UK data set even when focusing on only a small number of series Building on this, Cúrdia and Finocchiaro (2013) propose a method to explicitly incorporate regime shifts into the estimation of structural models, offering a practical approach to account for structural change in macroeconomic analysis.

56 The term ‘observables’ comes from the state space modelling literature We use state space methods to estimate the model parameters using the Kalman filter For more details, see Section 6.2.2.

Scaling by 100 is a normalisation: the units of variables in COMPASS then correspond, approximately, to percentage deviations from the steady state, which makes the model's outputs easier to interpret.

The CPI inflation target trend, Π^∗,tt_t, is defined relative to the current 2% per annum CPI inflation target, so that it is zero for most of the sample. The trend Π^m,tt_t is computed as the average difference between the observed growth rate of import prices and the rate implied by the model's supply-side structure, after adjusting for the CPI inflation trend. The same approach is used for the other time-varying trends shown in the table.

That is, they are used to ensure that the mean of the transformed observables over the estimation sample is consistent with the assumptions embodied in the model. These trends are: Π^x,tt_t, dlnxkp^tt_t, dlnmkp^tt_t, Π^xf,tt_t and dlnyf^tt_t. The underlying economic rationale for these time-varying trends is that it is difficult to match the secular increase in global trade as a share of activity using standard supply-side assumptions; this is a feature of the data in many countries (a similar de-trending approach is taken by Adolfson et al. (2007)).

After the observable data have been expressed in appropriate units, a set of measurement equations links the transformed observables to the COMPASS variables, as shown in Table 1. A representative example is the measurement equation for import price inflation:

dlnpmdef_t = π^M_t + 100 ln(Π^∗/Γ_X) + σ_me,pm me^pm_t

The detrended log change of the import deflator maps to the model variable π^M_t (the deviation of quarterly import price inflation from its steady state), plus the model's steady-state trend growth in import prices, plus a random measurement error. Steady-state import price growth is determined by the inflation target, Π^∗, and the relative deterministic growth rate of technological progress in the import/export sector, Γ_X (see Appendix A, Section A.2, for details). The measurement error term, me^pm_t, captures potential measurement issues with import prices. We also include measurement errors for investment growth, wage growth, hours worked, export prices, and import and export volumes.

In the early portion of the sample, the inflation target Π^∗ is measured using the implicit inflation target implied by official policy announcements, following the approach of Batini and Nelson (2001).

The supply-side structure of the model indicates that relative import prices decline in step with relative productivity in the export retail sector, ΓX Because we cannot calibrate GDP, final output, and all expenditure components’ trend growth rates at once, the approach fixes the relative trend growth of exports and imports, ΓX, so as to achieve target trend growth rates for final output, ΓZ, and for GDP, ΓV In short, ln ΓX is determined by a relationship that combines the target growth rates ln ΓZ and ln ΓV through the model’s parameters, ensuring the export-import growth path aligns with the desired macro targets.
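The back-out of ln Γ_X can be sketched numerically. The link assumed below, Γ_V = Γ_Z Γ_X^(−(1−α_V)/α_V), is an illustrative reading of the model's trend structure rather than the exact COMPASS formula, and all numbers are hypothetical:

```python
def ln_gamma_x(ln_gamma_z, ln_gamma_v, alpha_v):
    """Back out ln(Gamma_X), the relative trend growth of exports/imports,
    from target trend growth rates of final output (Gamma_Z) and GDP
    (Gamma_V), assuming Gamma_V = Gamma_Z * Gamma_X^(-(1 - alpha_v)/alpha_v).
    This link is an illustrative assumption, not the exact COMPASS formula."""
    return (alpha_v / (1.0 - alpha_v)) * (ln_gamma_z - ln_gamma_v)

lx = ln_gamma_x(ln_gamma_z=0.007, ln_gamma_v=0.006, alpha_v=0.8)
# The assumed link then recovers the GDP growth target by construction.
ln_gv = 0.007 - (1.0 - 0.8) / 0.8 * lx
```

The point of the construction is that, with Γ_Z and Γ_V both targeted, Γ_X is not a free parameter: it is pinned down by the wedge between final-output and value-added trend growth.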

Like the structural shocks, measurement errors are modelled as iid draws from a standard normal distribution; the scaling parameter σ_me,pm sets the standard deviation of the measurement error and hence the magnitude of the observation noise.

Table 1: Observables, data transformation and measurement equations

| Mnemonic | Series (ONS codes) | Data transformation | Measurement equation |
|---|---|---|---|
| gdpkp | Real GDP (ABMI) | dlngdpkp_t ≡ 100Δ ln gdpkp_t | dlngdpkp_t = Δv_t + γ_t^Z + 100 ln(Γ^Z Γ^H (Γ^X)^(−(1−α_V)/α_V)) |
| ckp | Real consumption (ABJR+HAYO) | dlnckp_t ≡ 100Δ ln ckp_t | dlnckp_t = Δc_t + γ_t^Z + 100 ln(Γ^Z Γ^H) |
| ikkp | Real business investment (NPEL^a) | dlnikkp_t ≡ 100Δ ln ikkp_t | dlnikkp_t = Δi_t + γ_t^Z + 100 ln(Γ^Z Γ^H Γ^I) + σ^I_me me^I_t |
| gonskp | Real government spending (NMRY+DLWF^b) | dlngonskp_t ≡ 100Δ ln gonskp_t | dlngonskp_t = Δg_t + γ_t^Z + 100 ln(Γ^Z Γ^H Γ^G) |
| xkp | Real exports (IKBK^c) | dlnxkp_t ≡ 100Δ ln xkp_t, less an adjustment term | dlnxkp_t = Δx_t + γ_t^Z + 100 ln(Γ^Z Γ^H Γ^X) + σ^X_me me^X_t |
| mkp | Real imports (IKBL^c) | dlnmkp_t ≡ 100Δ ln mkp_t, less an adjustment term | dlnmkp_t = Δm_t + γ_t^Z + 100 ln(Γ^Z Γ^H Γ^X) |
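The transformations in Table 1 are 100 times the log first difference of each series. A minimal sketch of the transformation and the role of a scaled measurement error follows; the series values and the error scale `sigma_me` are made up for illustration.

```python
import numpy as np

def dln(series):
    """100 times the log first difference, as in Table 1's transformations."""
    s = np.asarray(series, dtype=float)
    return 100.0 * np.diff(np.log(s))

# Hypothetical quarterly real GDP levels (chained-volume index):
gdpkp = [100.0, 100.6, 101.1, 101.9]
dlngdpkp = dln(gdpkp)  # quarterly growth rates in per cent

# For series measured with error, the measurement equation adds a scaled
# iid N(0,1) term, sigma_me * me_t (sigma_me here is illustrative):
rng = np.random.default_rng(0)
sigma_me = 0.1
observed = dlngdpkp + sigma_me * rng.standard_normal(dlngdpkp.size)

print(np.round(dlngdpkp, 3))
```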

Priors

Although the vector θ_est could, in principle, cover all COMPASS parameters, we partition them into two groups. The first group contains parameters that determine the model's steady state but have little or no impact on its dynamic properties; the second contains parameters that drive the model's dynamic behaviour with minimal effect on the steady state. We calibrate the first group and hold it fixed when estimating the dynamic parameters θ_est using Bayesian methods, a common approach for models with a relatively large number of parameters.

Calibration of ‘steady state’ and other parameter values

As noted above, we calibrate a subset of the model parameters, chiefly those that primarily influence the steady state. Table 2 presents the values of these calibrated parameters; the rationale for each choice is described below.

Parameters that govern the steady-state shares of the expenditure components in final output, such as the consumption share ω_CZ, are calibrated to reproduce the average ratio of each expenditure component to total final expenditure over the estimation sample, 1993Q1–2007Q4.

Parameters governing steady-state growth rates are calibrated as follows:

• The steady-state growth rate of hours worked, Γ^H, is set to match the average growth of total hours worked in the estimation sample.

• Given Γ^H, the steady-state growth rate of final output per capita, Γ^Z, and the steady-state growth rate of investment relative to final output, Γ^I, are chosen so that steady-state consumption and business investment growth match their sample averages.

• The steady-state growth rate of imports and exports relative to final output, Γ^X, is then calibrated so that steady-state value-added growth matches average GDP growth in the estimation sample.

• The steady-state growth rate of government spending relative to final output, Γ^G, is backed out as a residual, so that the share-weighted sum of the expenditure components' growth rates in the final-output accounting identity delivers the calibrated value for steady-state final output growth.
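The residual step for Γ^G can be sketched numerically: under the accounting identity, the share-weighted (log) growth rates of the expenditure components relative to final output must sum to zero, so Γ^G is pinned down by the other calibrated growth factors. All shares and growth factors below are hypothetical, for illustration only.

```python
import math

# Hypothetical expenditure shares in final output (sum to one) and
# relative trend growth factors Gamma_i (relative to final-output growth).
shares = {"c": 0.45, "i": 0.08, "g": 0.15, "x": 0.20, "io": 0.12}
rel_growth = {"c": 1.0, "i": 1.003, "x": 1.008, "io": 1.0}  # Gamma_G unknown

# Share-weighted log relative growth of the non-government components:
other = sum(shares[k] * math.log(g) for k, g in rel_growth.items())

# Back out Gamma_G so the weighted log growth rates sum to zero:
gamma_g = math.exp(-other / shares["g"])
print(round(gamma_g, 4))
```

By construction, `other + shares["g"] * ln(gamma_g)` is zero, so the identity delivers the calibrated steady-state final output growth.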

61 Note that Canova and Sala (2009) show that this approach can generate potential identification problems.

Two inconsequential steady-state normalisations are applied: steady-state hours worked are normalised to 1, with ν_L backed out to achieve this; and the steady-state real exchange rate is normalised to be consistent with steady-state mark-ups in the import and export sectors that exceed 1, with κ_F backed out to deliver that. A full derivation of the steady state is provided in Section A.3 of Appendix A.

Table 2 presents all of the share parameter values; however, the consumption share in final output is determined as a residual rather than treated as a free parameter, reflecting the constraint that the expenditure shares of the components in final output sum to one.

The resulting consumption share is very close to its sample average. This is unsurprising, because any difference can only arise from variation in the growth rates of the final expenditure components treated as a residual in COMPASS: dwellings investment and stockbuilding are captured by the IO component and are assumed to grow at the same rate as final output. Since these residual components share the same growth trajectory as overall output, their contribution to deviations from the average is limited.

Consistent with the MPC's inflation objective, Π* is set at 2% per year. Given this target and the calibrated Γ^Z, the household discount factor β (time preference) is chosen so that the model's steady-state nominal interest rate R matches the sample average of just under 5.5% (65).
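As a sketch, in a standard model the steady-state Euler equation with log utility links R, Π* and trend growth via R = Π* Γ^Z / β; backing β out of that relation gives a quarterly discount factor just below one. This is a simplification of COMPASS's actual steady state, and the value of Γ^Z below is illustrative.

```python
# Back out the discount factor from a stylised steady-state relation
# R = Pi* x Gamma_Z / beta (log utility; a simplification, not COMPASS's
# exact steady-state equation; gamma_z is a hypothetical value).
pi_target = 1.02 ** 0.25     # 2% per year inflation target, quarterly
gamma_z = 1.005              # hypothetical quarterly trend growth factor
r_quarterly = 1.055 ** 0.25  # ~5.5% per year steady-state nominal rate

beta = pi_target * gamma_z / r_quarterly
print(round(beta, 4))  # a little below one
```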

The labour share (in value added), ω_LV = α_VL, is calibrated to match the average wage and salary share (adjusted for employer contributions) in nominal GDP measured at basic prices over the estimation sample. The value-added share (in final output), ω_VZ = α_VZ, is calibrated so that the share of imports in final output (equal to 1 − ω_VZ) matches the sample average of the imports share in total final expenditure at basic prices.

The steady-state final-output price mark-up is calibrated to a low value of 1.005, so that COMPASS's national accounting structure closely matches the ONS national accounts framework, permitting consistent comparisons.

• The capital depreciation rate, δ_K, is calibrated to match the value used in the Bank of England's previous quarterly forecast model, BEQM.

Two calibration choices shape the model's dynamics. First, the rate at which the rest of the world inherits domestic LAP shocks, ζ_ωF (see Section 4.2.4 for details), is calibrated to 0.9. Second, the over-discounting of consumption, ψ_C, is set to 0.01, a value chosen to have very little effect on model properties, because over-discounting is purely a device to ensure a well-defined net foreign asset position in steady state and to guarantee a return to steady state after temporary shocks; the effective discount factor is β(1 − ψ_C) (see Section A.5 of Appendix A for a brief discussion of the over-discounting approach).

We calibrate the persistence parameter of the LAP forcing process, ρ_LAP, to zero, reflecting the prior belief that shocks to the growth rate of labour-augmenting productivity, while permanent in their effect on the level, are not persistent. We also calibrate the forcing-process parameters for total factor productivity (TFP) and value-added mark-ups, since COMPASS is over-identified, containing more shock processes than observable variables, as discussed below.
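To illustrate what ρ_LAP = 0 implies, the sketch below (not COMPASS code) simulates a generic AR(1) forcing process. With zero persistence the growth-rate shocks are white noise, so the (log) productivity level, their cumulated sum, follows a random walk: each shock is permanent to the level but leaves no serial correlation in growth.

```python
import numpy as np

def simulate_ar1(rho, sigma, n, rng):
    """Simulate an AR(1) forcing process y_t = rho * y_{t-1} + sigma * eps_t."""
    y = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + sigma * eps[t]
    return y

rng = np.random.default_rng(1)

# rho = 0: growth shocks are one-off (white noise) ...
growth = simulate_ar1(rho=0.0, sigma=1.0, n=200, rng=rng)

# ... so the (log) productivity level is a random walk:
level = np.cumsum(growth)
```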

Over-identification is useful when a model is misspecified, because it provides flexibility in the economic rationale for imposing forecast judgements and reflects our broader approach to addressing misspecification in COMPASS (see Section 7). However, separately identifying all of the forcing-process parameters can be problematic, since some shocks have very similar economic effects; for components with small weights in final output, the data growth rates would have to diverge dramatically from final output growth to generate a meaningful wedge between the observed growth rate of government spending and that inferred from the model residuals.

65 Note that this is adjusted for the inflation trend term discussed above, Π*_t.

66 We proxy import volumes at basic prices with data for import volumes at market prices.

Basu and Fernald (2002) show that imperfect competition in the use of intermediate inputs creates a wedge between measured GDP and true value added, because the marginal product of intermediates diverges from their price. In COMPASS, where intermediate imports contribute to final output and the final-output market exhibits imperfect competition, this wedge is present. Calibrating the final-output sector's mark-up to a very small value, however, minimises the impact of imperfect competition on national income accounting measures.

To implement forecast judgements without distorting the model, we calibrate a subset of shocks. The calibration ensures these shocks have only a small impact on the model's properties while remaining effective instruments for imposing forecast judgements.

Posterior parameter estimates

Tables 3 and 4 summarise the Bayesian estimation of COMPASS under the priors described above, reporting moments and percentiles of the posterior distributions of the estimated parameters. Several features of the posterior estimates stand out:

• The posterior means for the ξ family, the parameters that govern indexation to lagged inflation, are all below 0.25 for final output, value-added, imports, exports, and wages. They also lie below the prior means, indicating that inflation indexation in these sectors is weaker than the priors assumed.

• Both the posterior mean for habit formation, ψ_C, and that for the coefficient of relative risk aversion, σ_C, are materially lower than the prior means, implying a surprisingly low degree of habit formation (given the empirical evidence discussed above) and a point specification of the utility function that is close to logarithmic (i.e. relative risk aversion close to one).

70 The half-life of a process is defined as the number of periods it takes for the process to fall to half its value as measured in the period of the shock. For an AR(1) process y_t = ρ y_{t−1} + η_t, the half-life is equal to ln 0.5 / ln ρ.
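A direct implementation of footnote 70's half-life formula (assuming 0 < ρ < 1):

```python
import math

def half_life(rho):
    """Half-life of an AR(1) process y_t = rho*y_{t-1} + eta_t: ln(0.5) / ln(rho)."""
    if not 0.0 < rho < 1.0:
        raise ValueError("half-life is defined here for 0 < rho < 1")
    return math.log(0.5) / math.log(rho)

print(round(half_life(0.5), 2))  # 1.0: falls to half its value in one period
print(round(half_life(0.9), 2))  # about 6.58 periods
```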

Not all variances in the data could be matched, so we took a judgemental approach and fixed the standard deviations of the LAP, labour supply, risk premium, and export price mark-up shocks in the minimum distance estimation, opting not to attempt to match GDP, hours, or export price growth. In the estimated model, all standard deviations (aside from those of the TFP and value-added mark-up shocks, which we calibrated) are informed by a combination of priors and likelihood within a full Bayesian framework. The minimum distance estimation was performed conditional on the means of the other priors, so there may exist parameter combinations that would improve the fit of the prior variances to the data, for example one with a relatively high intertemporal elasticity of substitution. An informal sensitivity analysis across different priors suggested a trade-off between the share of rule-of-thumb households, ω_o, and the degree of habit formation: rule-of-thumb consumption inherits wage stickiness, so a larger share of such households can dampen consumption in much the same way as higher habit formation.

• The posterior mean of the investment adjustment cost parameter is materially higher than its prior mean, and its posterior standard deviation is wider than the prior's, a pattern seen in only a few parameters. This parameter appears not to be particularly well identified: its posterior distribution is strongly sensitive to the specification of the priors.

• The posterior mean for the price elasticity of demand for UK exports is materially lower than the prior mean, implying a relatively low degree of price elasticity.

• Overall, the posterior means for the persistence parameters of the shock forcing processes, the ρ family, are lower than the prior means, suggesting that shocks impart less persistence to the endogenous variables than the priors assumed.

• The posterior means of the standard deviations of the shocks and measurement errors are, in general, slightly larger than the prior means.

Model properties

