Part III

Competitive equilibria and applications

Recursive (Partial) Equilibrium
7.1 An equilibrium concept
This chapter formulates competitive and oligopolistic equilibria in some dynamic settings. Up to now, we have studied single-agent problems where components of the state vector not under the control of the agent were taken as given. In this chapter, we describe multiple-agent settings in which some of the components of the state vector that one agent takes as exogenous are determined by the decisions of other agents. We study partial equilibrium models of a kind applied in microeconomics.1 We describe two closely related equilibrium concepts for such models: a rational expectations or recursive competitive equilibrium, and a Markov perfect equilibrium. The first equilibrium concept jointly restricts a Bellman equation and a transition law that is taken as given in that Bellman equation. The second equilibrium concept leads to pairs (in the duopoly case) or sets (in the oligopoly case) of Bellman equations and transition equations that are to be solved jointly by simultaneous backward induction.

Though the equilibrium concepts introduced in this chapter obviously transcend linear-quadratic setups, we choose to present them in the context of linear-quadratic examples in which the Bellman equations remain tractable.

1 For example, see Rosen and Topel (1988) and Rosen, Murphy, and Scheinkman (1994).
7.2 Example: adjustment costs
This section describes a model of a competitive market with producers who face adjustment costs.2 The model consists of $n$ identical firms whose profit function makes them want to forecast the aggregate output decisions of other firms just like them in order to determine their own output. We assume that $n$ is a large number, so that the output of any single firm has a negligible effect on aggregate output and, hence, firms are justified in treating their forecast of aggregate output as unaffected by their own output decisions. Thus, one of the $n$ competitive firms sells output $y_t$ and chooses a production plan to maximize

$$\sum_{t=0}^{\infty} \beta^t R_t \tag{7.2.1}$$

where

$$R_t = p_t y_t - .5d\,(y_{t+1} - y_t)^2 \tag{7.2.2}$$

subject to $y_0$ being a given initial condition. Here $\beta \in (0,1)$ is a discount factor, and $d > 0$ measures a cost of adjusting the rate of output. The firm is a price taker. The price $p_t$ lies on the demand curve
$$p_t = A_0 - A_1 Y_t \tag{7.2.3}$$

where $A_0 > 0$, $A_1 > 0$, and $Y_t$ is the marketwide level of output, being the sum of the output of the $n$ identical firms. The firm believes that marketwide output follows the law of motion

$$Y_{t+1} = H_0 + H_1 Y_t \equiv H(Y_t), \tag{7.2.4}$$

where $Y_0$ is a known initial condition. The belief parameters $H_0, H_1$ are among the equilibrium objects of the analysis, but for now we proceed on faith and take them as given. The firm observes $Y_t$ and $y_t$ at time $t$ when it chooses $y_{t+1}$.
The adjustment costs $.5d\,(y_{t+1} - y_t)^2$ give the firm the incentive to forecast the market price.

Substituting equation (7.2.3) into equation (7.2.2) gives

$$R_t = (A_0 - A_1 Y_t)\, y_t - .5d\,(y_{t+1} - y_t)^2.$$
2 The model is a version of one analyzed by Lucas and Prescott (1971) and Sargent (1987a). The recursive competitive equilibrium concept was used by Lucas and Prescott (1971) and described further by Prescott and Mehra (1980).
The firm's incentive to forecast the market price translates into an incentive to forecast the level of market output $Y$. We can write the Bellman equation for the firm as

$$v(y, Y) = \max_{y'}\left\{ A_0 y - A_1 y Y - .5d\,(y' - y)^2 + \beta v(y', Y') \right\} \tag{7.2.5}$$

where the maximization is subject to $Y' = H(Y)$. Here $'$ denotes next period's value of a variable. The Euler equation for the firm's problem is

$$-d\,(y' - y) + \beta v_y(y', Y') = 0. \tag{7.2.6}$$

Noting that for this problem the control is $y'$ and applying the Benveniste-Scheinkman formula from chapter 5 gives

$$v_y(y, Y) = A_0 - A_1 Y + d\,(y' - y).$$

Substituting this equation into equation (7.2.6) gives

$$-d\,(y_{t+1} - y_t) + \beta\left[A_0 - A_1 Y_{t+1} + d\,(y_{t+2} - y_{t+1})\right] = 0. \tag{7.2.7}$$
In the process of solving its Bellman equation, the firm sets an output path that satisfies equation (7.2.7), taking equation (7.2.4) as given, subject to the initial conditions $(y_0, Y_0)$ as well as an extra terminal condition. The terminal condition is

$$\lim_{t \to \infty} \beta^t y_t v_y(y_t, Y_t) = 0. \tag{7.2.8}$$

This is called the transversality condition and acts as a first-order necessary condition "at infinity." The firm's decision rule solves the difference equation (7.2.7) subject to the given initial condition $y_0$ and the terminal condition (7.2.8). Solving the Bellman equation by backward induction automatically incorporates both equations (7.2.7) and (7.2.8).
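This logic can be checked numerically. The sketch below is our own illustration, not from the text: it assumes particular parameter values and casts the firm's problem, for given beliefs $(H_0, H_1)$, as a discounted linear regulator solved by Riccati iteration. The simulated optimal path should then satisfy the Euler equation (7.2.7) at every date.

```python
import numpy as np

# Illustrative (assumed) parameter values and arbitrary beliefs Y' = H0 + H1*Y.
A0, A1, d, beta = 100.0, 0.05, 10.0, 0.95
H0, H1 = 95.0, 0.95

# State z = (1, y, Y), control u = y' - y.  One-period cost (negative profit):
#   -(A0*y - A1*y*Y) + .5*d*u**2,  with transitions y' = y + u, Y' = H0 + H1*Y.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [H0,  0.0, H1]])
B = np.array([[0.0], [1.0], [0.0]])
R = np.array([[0.0,   -A0/2, 0.0 ],
              [-A0/2,  0.0,  A1/2],
              [0.0,    A1/2, 0.0 ]])
Q = np.array([[0.5*d]])

# Riccati iteration for the discounted problem min sum_t beta^t (z'Rz + u'Qu).
P = np.zeros((3, 3))
for _ in range(5000):
    F = np.linalg.solve(Q + beta*B.T @ P @ B, beta*B.T @ P @ A)
    P_new = R + beta*A.T @ P @ (A - B @ F)
    if np.max(np.abs(P_new - P)) < 1e-10:
        P = P_new
        break
    P = P_new
F = np.linalg.solve(Q + beta*B.T @ P @ B, beta*B.T @ P @ A).ravel()

# Simulate the optimal path under the firm's policy u = -F @ (1, y, Y).
y, Y = 20.0, 20.0
path = [(y, Y)]
for t in range(40):
    u = -(F[0] + F[1]*y + F[2]*Y)
    y, Y = y + u, H0 + H1*Y
    path.append((y, Y))

# Euler equation (7.2.7) residuals along the simulated optimal path.
max_resid = max(
    abs(-d*(path[t+1][0] - path[t][0])
        + beta*(A0 - A1*path[t+1][1] + d*(path[t+2][0] - path[t+1][0])))
    for t in range(len(path) - 2))
print("max Euler residual:", max_resid)
```

At the firm's optimum the residuals are zero up to numerical error, for any beliefs $(H_0, H_1)$, because (7.2.7) is a first-order condition for the firm's problem given those beliefs.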
The firm's optimal policy function is

$$y_{t+1} = h(y_t, Y_t). \tag{7.2.9}$$

Then with $n$ identical firms, setting $Y_t = n y_t$ makes the actual law of motion for output for the market

$$Y_{t+1} = n\, h(Y_t/n, Y_t). \tag{7.2.10}$$
Thus, when firms believe that the law of motion for marketwide output is equation (7.2.4), their optimizing behavior makes the actual law of motion equation (7.2.10).

A recursive competitive equilibrium equates the actual and perceived laws of motion (7.2.4) and (7.2.10). For this model, we adopt the following definition:
Definition: A recursive competitive equilibrium3 of the model with adjustment costs is a value function $v(y, Y)$, an optimal policy function $h(y, Y)$, and a law of motion $H(Y)$ such that

a. Given $H$, $v(y, Y)$ satisfies the firm's Bellman equation and $h(y, Y)$ is the optimal policy function.

b. The law of motion $H$ satisfies $H(Y) = n\,h(Y/n, Y)$.
The firm's optimum problem induces a mapping $M$ from a perceived law of motion for output $H$ to an actual law of motion $M(H)$. The mapping is summarized in equation (7.2.10). The $H$ component of a rational expectations equilibrium is a fixed point of the operator $M$.
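One way to compute such a fixed point is direct iteration on $M$: guess $(H_0, H_1)$, solve the firm's problem given those beliefs, read off the implied actual law of motion, and repeat. The sketch below is our own illustration under assumed parameter values; the Riccati routine and the damping factor are implementation choices, not from the text.

```python
import numpy as np

A0, A1, d, beta, n = 100.0, 0.05, 10.0, 0.95, 1   # assumed illustrative values

def best_response(H0, H1):
    """Solve the firm's problem given beliefs Y' = H0 + H1*Y.
    State z = (1, y, Y), control u = y' - y; returns F with u = -F @ z."""
    A = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [H0,  0.0, H1]])
    B = np.array([[0.0], [1.0], [0.0]])
    # one-period cost = -(A0*y - A1*y*Y) + .5*d*u**2  (negative of profit)
    R = np.array([[0.0,   -A0/2, 0.0 ],
                  [-A0/2,  0.0,  A1/2],
                  [0.0,    A1/2, 0.0 ]])
    Q = np.array([[0.5*d]])
    P = np.zeros((3, 3))
    for _ in range(5000):
        F = np.linalg.solve(Q + beta*B.T @ P @ B, beta*B.T @ P @ A)
        P_new = R + beta*A.T @ P @ (A - B @ F)
        if np.max(np.abs(P_new - P)) < 1e-10:
            P = P_new
            break
        P = P_new
    return np.linalg.solve(Q + beta*B.T @ P @ B, beta*B.T @ P @ A).ravel()

# Damped iteration on M: the firm's policy y' = y - (F0 + F1*y + F2*Y)
# implies the actual law Y' = n*h(Y/n, Y) = -n*F0 + (1 - F1 - n*F2)*Y.
H0, H1 = 0.0, 0.9
for _ in range(500):
    F = best_response(H0, H1)
    H0_new, H1_new = -n*F[0], 1.0 - F[1] - n*F[2]
    if abs(H0_new - H0) + abs(H1_new - H1) < 1e-7:
        H0, H1 = H0_new, H1_new
        break
    H0, H1 = 0.5*(H0 + H0_new), 0.5*(H1 + H1_new)

print("equilibrium beliefs:", H0, H1)
```

At the fixed point the perceived and actual laws of motion coincide, so the computed $(H_0, H_1)$ should satisfy the equilibrium difference equation (7.2.16) derived below for the $n = 1$ case.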
The equilibrium just defined is a special case of a recursive competitive equilibrium, to be defined more generally in the next section. How might we find an equilibrium? The next subsection shows a method that works in the present case and often works more generally. The method involves noting that the equilibrium solves an associated planning problem. For convenience, we'll assume from now on that the number of firms is one, while retaining the assumption of price-taking behavior.

3 This is also often called a rational expectations equilibrium.
7.2.1 A planning problem
Our solution strategy is to match the Euler equations of the market problem with those for a planning problem that can be solved as a single-agent dynamic programming problem. The optimal quantities from the planning problem are then the recursive competitive equilibrium quantities, and the equilibrium price can be coaxed from shadow prices for the planning problem.

To determine the planning problem, we first compute the sum of consumer and producer surplus at time $t$, defined as

$$S_t = S(Y_t, Y_{t+1}) = \int_0^{Y_t} (A_0 - A_1 x)\, dx - .5d\,(Y_{t+1} - Y_t)^2. \tag{7.2.11}$$

The first term is the area under the demand curve. The planning problem is to choose a production plan to maximize

$$\sum_{t=0}^{\infty} \beta^t S(Y_t, Y_{t+1}) \tag{7.2.12}$$

subject to an initial condition $Y_0$. The Bellman equation for the planning problem is

$$V(Y) = \max_{Y'}\left\{ A_0 Y - \frac{A_1}{2} Y^2 - .5d\,(Y' - Y)^2 + \beta V(Y') \right\}. \tag{7.2.13}$$
The Euler equation is

$$-d\,(Y' - Y) + \beta V'(Y') = 0. \tag{7.2.14}$$

Applying the Benveniste-Scheinkman formula gives

$$V'(Y) = A_0 - A_1 Y + d\,(Y' - Y). \tag{7.2.15}$$

Substituting this into equation (7.2.14) and rearranging gives

$$\beta A_0 + d Y_t - \left[\beta A_1 + d(1 + \beta)\right] Y_{t+1} + d\beta Y_{t+2} = 0. \tag{7.2.16}$$

Return to equation (7.2.7) and set $y_t = Y_t$ for all $t$. (Remember that we have set $n = 1$. When $n \neq 1$, we have to adjust pieces of the argument for $n$.)
Notice that with $y_t = Y_t$, equations (7.2.16) and (7.2.7) are identical. Thus, a solution of the planning problem also is an equilibrium. Setting $y_t = Y_t$ in equation (7.2.7) amounts to dropping equation (7.2.4) and instead solving for the coefficients $H_0, H_1$ that make $y_t = Y_t$ true and that jointly solve equations (7.2.4) and (7.2.7).
It follows that for this example we can compute an equilibrium by forming the optimal linear regulator problem corresponding to the Bellman equation (7.2.13). The optimal policy function for this problem can be used to form the rational expectations $H(Y)$.4
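Because the equilibrium law of motion solves the linear difference equation (7.2.16), its slope $H_1$ can also be read off directly as the stable root of the associated characteristic polynomial, with the intercept pinned down by the steady state $Y_s = A_0/A_1$ implied by (7.2.16). A minimal sketch, under assumed parameter values:

```python
import numpy as np

A0, A1, d, beta = 100.0, 0.05, 10.0, 0.95   # assumed illustrative values

# Characteristic polynomial of (7.2.16) in lambda:
#   d*beta*lambda^2 - [beta*A1 + d*(1+beta)]*lambda + d = 0.
# The product of its roots is 1/beta > 1, so exactly one root is stable.
roots = np.real(np.roots([d*beta, -(beta*A1 + d*(1.0 + beta)), d]))
H1 = roots.min()                    # stable root, lies in (0, 1)
Y_s = A0/A1                         # steady state implied by (7.2.16)
H0 = (1.0 - H1)*Y_s
print("H0 =", H0, "H1 =", H1)
```

Selecting the stable root is the algebraic counterpart of imposing the transversality condition (7.2.8): the unstable root would generate explosive paths that violate it.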
7.3 Recursive competitive equilibrium
The equilibrium concept of the previous section is widely used. Following Prescott and Mehra (1980), it is useful to define the equilibrium concept more generally as a recursive competitive equilibrium. Let $x$ be a vector of state variables under the control of a representative agent and let $X$ be the vector of those same variables chosen by "the market." Let $Z$ be a vector of other state variables chosen by "nature," that is, determined outside the model. The representative agent's problem is characterized by the Bellman equation

$$v(x, X, Z) = \max_{u}\left\{ R(x, X, Z, u) + \beta v(x', X', Z') \right\} \tag{7.3.1}$$

where $'$ denotes next period's value, and where the maximization is subject to the restrictions

$$x' = g(x, X, Z, u), \tag{7.3.2}$$
$$X' = G(X, Z), \tag{7.3.3}$$
$$Z' = \zeta(Z). \tag{7.3.4}$$

Here $g$ describes the impact of the representative agent's controls $u$ on his state $x$; $G$ and $\zeta$ describe his beliefs about the evolution of the aggregate state. The solution of the representative agent's problem is a decision rule

$$u = h(x, X, Z). \tag{7.3.5}$$

4 The method of this section was used by Lucas and Prescott (1971). It uses the connection between equilibrium and Pareto optimality expressed in the fundamental theorems of welfare economics. See Mas-Colell, Whinston, and Green (1995).
To make the representative agent representative, we impose $X = x$, but only "after" we have solved the agent's decision problem. Substituting equation (7.3.5) and $X = x$ into equation (7.3.2) gives the actual law of motion

$$X' = G_A(X, Z), \tag{7.3.6}$$

where $G_A(X, Z) \equiv g[X, X, Z, h(X, X, Z)]$. We are now ready to propose a definition:

Definition: A recursive competitive equilibrium is a policy function $h$, an actual aggregate law of motion $G_A$, and a perceived aggregate law $G$ such that (a) Given $G$, $h$ solves the representative agent's optimization problem; and (b) $h$ implies that $G_A = G$.
This equilibrium concept is also sometimes called a rational expectations equilibrium. The equilibrium concept makes $G$ an outcome of the analysis. The functions giving the representative agent's expectations about the aggregate state variables contribute no free parameters and are outcomes of the analysis. There are no free parameters that characterize expectations.5 In exercise 7.1, you are asked to implement this equilibrium concept.
7.4 Markov perfect equilibrium
It is instructive to consider a dynamic model of duopoly. A market has two firms. Each firm recognizes that its output decision will affect the aggregate output and therefore influence the market price. Thus, we drop the assumption of price-taking behavior.6 The one-period return function of firm $i$ is

$$R_{it} = p_t y_{it} - .5d\,(y_{it+1} - y_{it})^2. \tag{7.4.1}$$

There is a demand curve

$$p_t = A_0 - A_1 (y_{1t} + y_{2t}). \tag{7.4.2}$$

5 This is the sense in which rational expectations models make expectations disappear from a model.

6 One consequence of departing from the price-taking framework is that the market outcome will no longer maximize welfare, measured as the sum of consumer and producer surplus. See exercise 7.4 for the case of a monopoly.
Substituting the demand curve into equation (7.4.1) lets us express the return as

$$R_{it} = A_0 y_{it} - A_1 y_{it}^2 - A_1 y_{it} y_{-i,t} - .5d\,(y_{it+1} - y_{it})^2, \tag{7.4.3}$$

where $y_{-i,t}$ denotes the output of the firm other than $i$. Firm $i$ chooses a decision rule that sets $y_{it+1}$ as a function of $(y_{it}, y_{-i,t})$ and that maximizes

$$\sum_{t=0}^{\infty} \beta^t R_{it}.$$

Temporarily assume that the maximizing decision rule is $y_{it+1} = f_i(y_{it}, y_{-i,t})$. Given the function $f_{-i}$, the Bellman equation of firm $i$ is

$$v_i(y_{it}, y_{-i,t}) = \max_{y_{it+1}}\left\{ R_{it} + \beta v_i(y_{it+1}, y_{-i,t+1}) \right\}, \tag{7.4.4}$$

where the maximization is subject to the perceived decision rule of the other firm

$$y_{-i,t+1} = f_{-i}(y_{-i,t}, y_{it}). \tag{7.4.5}$$

Note the cross-reference between the two problems for $i = 1, 2$.
We now advance the following definition:

Definition: A Markov perfect equilibrium is a pair of value functions $v_i$ and a pair of policy functions $f_i$ for $i = 1, 2$ such that

a. Given $f_{-i}$, $v_i$ satisfies the Bellman equation (7.4.4).

b. The policy function $f_i$ attains the right side of the Bellman equation (7.4.4).
The adjective Markov denotes that the equilibrium decision rules depend only on the current values of the state variables $y_{it}$, not their histories. Perfect means that the equilibrium is constructed by backward induction and therefore builds in optimizing behavior for each firm for all conceivable future states, including many that are not realized by iterating forward on the pair of equilibrium strategies $f_i$.
7.4.1 Computation
If it exists, a Markov perfect equilibrium can be computed by iterating to convergence on the pair of Bellman equations (7.4.4). In particular, let $v_i^j, f_i^j$ be the value function and policy function for firm $i$ at the $j$th iteration. Then imagine constructing the iterates

$$v_i^{j+1}(y_{it}, y_{-i,t}) = \max_{y_{it+1}}\left\{ R_{it} + \beta v_i^j(y_{it+1}, y_{-i,t+1}) \right\}, \tag{7.4.6}$$

where the maximization is subject to

$$y_{-i,t+1} = f_{-i}^j(y_{-i,t}, y_{it}). \tag{7.4.7}$$

In general, these iterations are difficult.7 In the next section, we describe how the calculations simplify for the case in which the return function is quadratic and the transition laws are linear.
7.5 Linear Markov perfect equilibria
In this section, we show how the optimal linear regulator can be used to solve a model like that in the previous section. That model should be considered to be an example of a dynamic game. A dynamic game consists of these objects: (a) a list of players; (b) a list of dates and actions available to each player at each date; and (c) payoffs for each player expressed as functions of the actions taken by all players.

The optimal linear regulator is a good tool for formulating and solving dynamic games. The standard equilibrium concept in these games, subgame perfection, requires that each player's strategy be computed by backward induction. This leads to an interrelated pair of Bellman equations. In linear-quadratic dynamic games, these "stacked Bellman equations" become "stacked Riccati equations" with a tractable mathematical structure.

We now consider the following two-player, linear-quadratic dynamic game. An $(n \times 1)$ state vector $x_t$ evolves according to a transition equation

$$x_{t+1} = A_t x_t + B_{1t} u_{1t} + B_{2t} u_{2t} \tag{7.5.1}$$

7 See Levhari and Mirman (1980) for how a Markov perfect equilibrium can be computed conveniently with logarithmic returns and Cobb-Douglas transition laws. Levhari and Mirman construct a model of fish and fishers.
where $u_{jt}$ is a $(k_j \times 1)$ vector of controls of player $j$. We start with a finite-horizon formulation, where $t_0$ is the initial date and $t_1$ is the terminal date for the common horizon of the two players. Player 1 maximizes

$$-\sum_{t=t_0}^{t_1 - 1}\left( x_t^T R_1 x_t + u_{1t}^T Q_1 u_{1t} + u_{2t}^T S_1 u_{2t} \right) \tag{7.5.2}$$

where $R_1$ and $S_1$ are positive semidefinite and $Q_1$ is positive definite. Player 2 maximizes

$$-\sum_{t=t_0}^{t_1 - 1}\left( x_t^T R_2 x_t + u_{2t}^T Q_2 u_{2t} + u_{1t}^T S_2 u_{1t} \right) \tag{7.5.3}$$

where $R_2$ and $S_2$ are positive semidefinite and $Q_2$ is positive definite.
We formulate a Markov perfect equilibrium as follows. Player $j$ employs linear decision rules

$$u_{jt} = -F_{jt} x_t, \quad t = t_0, \ldots, t_1 - 1,$$

where $F_{jt}$ is a $(k_j \times n)$ matrix. Assume that player $i$ knows $\{F_{-i,t};\ t = t_0, \ldots, t_1 - 1\}$. Then player 1's problem is to maximize expression (7.5.2) subject to the known law of motion (7.5.1) and the known control law $u_{2t} = -F_{2t} x_t$ of player 2. Symmetrically, player 2's problem is to maximize expression (7.5.3) subject to equation (7.5.1) and $u_{1t} = -F_{1t} x_t$. A Markov perfect equilibrium is a pair of sequences $\{F_{1t}, F_{2t};\ t = t_0, t_0 + 1, \ldots, t_1 - 1\}$ such that $\{F_{1t}\}$ solves player 1's problem, given $\{F_{2t}\}$, and $\{F_{2t}\}$ solves player 2's problem, given $\{F_{1t}\}$. We have restricted each player's strategy to depend only on $x_t$, and not on the history $h_t = \{(x_s, u_{1s}, u_{2s}),\ s = t_0, \ldots, t\}$. This restriction on strategy spaces accounts for the adjective "Markov" in the phrase "Markov perfect equilibrium."
Player 1's problem is to maximize

$$-\sum_{t=t_0}^{t_1 - 1}\left\{ x_t^T\left( R_1 + F_{2t}^T S_1 F_{2t} \right) x_t + u_{1t}^T Q_1 u_{1t} \right\}$$

subject to

$$x_{t+1} = (A_t - B_{2t} F_{2t})\, x_t + B_{1t} u_{1t}.$$

This is an optimal linear regulator problem, and it can be solved by working backward. Evidently, player 2's problem is also an optimal linear regulator problem.