
The Handbook of Experimental Economics, Volume 2



VOLUME 2


Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540

In the United Kingdom: Princeton University Press, 6 Oxford Street, Woodstock, Oxfordshire OX20 1TR

press.princeton.edu

Jacket image courtesy of Shutterstock

All Rights Reserved

ISBN 978-0-691-13999-9

Library of Congress Control Number: 2016935744

British Library Cataloging-in-Publication Data is available

This book has been composed in Minion Pro and Myriad Pro

Printed on acid-free paper. ∞

Typeset by Nova Techset Pvt Ltd, Bangalore, India

Printed in the United States of America

1 3 5 7 9 10 8 6 4 2


Preface xiii

Chapter 1 Macroeconomics: A Survey of Laboratory Research 1

John Duffy

1 Introduction: Laboratory Macroeconomics 1

2 Dynamic, Intertemporal Optimization 4

2.1 Optimal Consumption/Savings Decisions 4

2.2 Exponential Discounting and Infinite Horizons 12

2.3 Exponential or Hyperbolic Discounting? 13

2.4 Expectation Formation 14

3 Coordination Problems 21

3.1 Poverty Traps 21

3.2 Bank Runs 24

3.3 Resolving Coordination Problems: Sunspots 27

3.4 Resolving Coordination Problems: The Global Game Approach 30


3 Fundraising 108

3.1 Announcements: Sequential and Dynamic Giving 109

3.2 Lotteries 119

3.3 Auctions 123

3.4 Rebates and Matches 126

2 Functional MRI: A Window into the Working Brain 164

2.1 Functional MRI and the BOLD Signal 165

2.2 Design Considerations 166

2.3 Image Analysis 168

2.4 Summary of Functional MRI 171

3 Risky Choice 172

3.1 Statistical Moments 172

3.2 Prospect Theory 172

3.3 Causal Manipulations 175

3.4 Logical Rationality and Biological Adaptation 176

3.5 Summary of Risky Choice 177

4 Intertemporal Choice and Self-regulation 177

4.1 Empirical Regularities 178

4.2 Multiple-Self Models with Selves That Have Overlapping Periods of Control 181

4.3 Multiple-Self Models with Selves That Have Nonoverlapping Periods of Control 182

4.4 Unitary-Self Models 182

4.5 Theoretical Summary 183

5 The Neural Circuitry of Social Preferences 183

5.1 Social Preferences and Reward Circuitry 184

5.2 Do Activations in Reward Circuitry Predict Choices? 186

5.3 The Role of the Prefrontal Cortex in Decisions Involving Social Preferences 186

5.4 Summary 188

6 Strategic Thinking 189

6.1 Strategic Awareness 189

6.2 Beliefs, Iterated Beliefs, and Strategic Choice 190


6.3 Learning 192

6.4 Strategic Teaching and Influence Value 194

6.5 Discussion of Strategic Neuroscience 196

I Where Things Stood Circa 1995 218

II Models of Other-Regarding Preferences, Theory, and Tests 222

A Outcome-Based Social Preference Models 222

B Some Initial Tests of the Bolton-Ockenfels and Fehr-Schmidt Models 225

C Social Preferences versus Difference Aversion 231

D Models Incorporating Reciprocity/Intentions of Proposers 233

E Other-Regarding Behavior and Utility Maximization 235

F Learning 236

III Other-Regarding Behavior, Applications, and Regularities 240

A The Investment/Trust Game 240

B Results from Multilateral Bargaining Experiments 242

C A Second Look at Dictator Games 244

D Procedural Fairness 247

E Diffusion of Responsibility 249

F Group Identity and Social Preferences 253

G Generalizability 255

IV Gift Exchange Experiments 259

A An Initial Series of Experiments 259

B Incomplete Contracts 261

C Wage Rigidity 262

D The Effect of Cognitive Ability and the Big Five Personality Characteristics in Other-Regarding Behavior 264

E Why Does Gift Exchange Occur? 265

F Laboratory versus Field Settings and Real Effort 267


Chapter 5 Experiments in Market Design 290

5 Labor Market Clearinghouses 318

5.1 Designing Labor Markets for Doctors 318

5.2 Matching without a Clearinghouse: The Market for Economists, and Online Dating 327

2 Experiments in Committee Bargaining 352

2.1 Unstructured Committee Bargaining 352

2.2 Committee Bargaining with a Fixed Extensive Form Structure 359

3 Elections and Candidate Competition 381

3.1 The Spatial Model of Competitive Elections and the Median Voter Theorem 381

3.2 Multicandidate Elections 387

3.3 Candidate Competition with Valence 390

4 Voter Turnout 392

4.1 Instrumental Voting Experiments 392

4.2 The Effects of Beliefs, Communication, and Information on Turnout 397

4.3 Expressive Voting Experiments 398

5 Information Aggregation in Committees 400

5.1 Condorcet Jury Experiments 400

5.2 The Swing Voter’s Curse 406

6 Voting Mechanisms that Reflect Preference Intensity 410

6.1 Mechanisms Where a Budget of Votes Can Be Allocated Across Issues 411

6.2 Vote Trading and Vote Markets 414

7 Where Do We Go From Here? 418


III.A Methodological Notes 449

IV Token Economies 449

IV.A Methodological Notes 451

V Elderly 451

V.A Methodological Notes 455

VI Highly Demographically Varied (Representative) Sample 455

VI.A Methodological Notes 460

VII Subjects with Relevant Task Experience 461

VII.A Methodological Notes 468

II Gender Differences in Competitiveness 485

II.A Do Women Shy Away from Competition? 486

II.B Replication and Robustness of Women Shying Away from Competition 489

II.C Reducing the Gender Gap in Tournament Entry 492

II.D Performance in Tournaments 497

II.E Field Experiments on Gender Differences in Competitiveness 503

II.F External Relevance of Competitiveness 504

III Gender Differences in Selecting Challenging Tasks and Speaking Up 507

III.A Gender Differences in Task Choice 507

III.B Gender Differences in Speaking up 510

IV Altruism and Cooperation 512


IV.A Dictator-Style Games 515

IV.B Field Evidence and External Relevance of Gender Differences in Giving 519

IV.C Prisoner’s Dilemma and Public Good Games 520

IV.D New Directions 523

IV.E Conclusions 524

V Risk 525

V.A Early Work and Surveys by Psychologists 527

V.B Early and Most Commonly Used Elicitation Methods in Economics 530

V.C Early Economic Surveys 533

V.D Recent Economic Surveys and Meta-Analyses on Specific Elicitation Tasks 535

V.E Stability of Risk Preferences and Their External Relevance 538

V.F An Example of a Careful Control for Risk Aversion 543

V.G Conclusions 545

VI Conclusions 546

Acknowledgments 547

Notes 547

References 553

Chapter 9 Auctions: A Survey of Experimental Research 563

John H. Kagel and Dan Levin

INTRODUCTION 563

I Single-Unit Private Value Auctions 564

1.1 Bidding above the RNNE in First-Price Private Value Auctions 565

1.2 Bidding above the RNNE and Regret Theory 568

1.3 Using Experimental Data to Corroborate Maintained Hypotheses in Empirical Applications to Field Data 569

1.4 Second-Price Private Value Auctions 570

1.5 Asymmetric Private Value Auctions 572

1.6 Sequential Auctions 576

1.7 Procurement Auctions 578

1.8 Cash-Balance Effects and the Role of Outside Earnings on Bids 580

1.9 An Unresolved Methodological Issue 581

II Single-Unit Common Value Auctions 582

2.1 English Auctions 583

2.2 Auctions with Insider Information 587

2.3 Common Value Auctions with an Advantaged Bidder 588

2.4 New Results in the Takeover Game: Theory and Experiments 590

2.5 Additional Common Value Auction Results 592

2.6 Is the Winner’s Curse Confined to College Sophomores? 596

III Multiunit-Demand Auctions 598

3.1 Auctions with Homogeneous Goods—Uniform Price and Vickrey Auctions 598

3.2 More on Multiunit-Demand Vickrey Auctions 604


3.3 Auctions with Synergies 605

3.4 Sequential Auctions with Multiunit-Demand Bidders 607

IV Additional Topics 610

4.1 Collusion in Auctions 610

4.2 Bidder’s Choice Auctions: Creating Competition Out of Thin Air 615

4.3 Internet Auctions 617

4.4 Entry into Auctions 619

V Summary and Conclusions 623

Acknowledgments 623

Notes 623

References 629

Chapter 10 Learning and the Economics of Small Decisions 638

Ido Erev and Ernan Haruvy

INTRODUCTION 638

1 The Basic Properties of Decisions from Experience 641

1.1 Six Basic Regularities and a Model 641

1.2 The Effect of Limited Feedback 663

1.3 Two Choice-Prediction Competitions 665

2.3 Negative and Positive Transfer 671

2.4 The Effect of Delay and Melioration 671

2.5 Models of Learning in Dynamic Settings 672

3 Multiple Alternatives and Additional Stimuli 672

3.1 Successive Approximations, Hill Climbing, and the Neighborhood Effect 672

3.2 Learned Helplessness 674

3.3 Multiple Alternatives with Complete Feedback 675

3.4 The Effect of Additional Stimuli (Beyond Clicking) 675

4 Social Interactions and Learning in Games 677

4.1 Social Interactions Given Limited Prior Information 678

4.2 Learning in Constant-Sum Games with Unique Mixed-Strategy Equilibrium 680

4.3 Cooperation, Coordination, and Reciprocation 683

4.4 Fairness and Inequity Aversion 687

4.5 Summary and Alternative Approaches 688

5 Applications and the Economics of Small Decisions 688

5.1 The Negative Effect of Punishments 688

5.2 The Enforcement of Safety Rules 689

5.3 Cheating in Exams 691

5.4 Broken Windows Theory, Quality of Life, and Safety Climate 692


5.5 Hand Washing 692

5.6 The Effect of the Timing of Warning Signs 693

5.7 Safety Devices and the Buying-Using Gap 693

5.8 The Effect of Rare Terrorist Attacks 694

5.9 Emphasis-Change Training, Flight School, and Basketball 695

5.10 The Pat-on-the-Back Paradox 695

5.11 Gambling and the Medium-Prize Paradox 696

5.12 The Evolution of Social Groups 696

5.13 Product Updating 697

5.14 Unemployment 697

5.15 Interpersonal Conflicts and the Description-Experience Gap 698

5.16 Implications for Financial Decisions 699

5.17 Summary and the Innovations-Discoveries Gap 699


This second volume of the Handbook of Experimental Economics follows some 20 years after the original Handbook. There has been a lot of activity in a number of areas that were not covered in the 1995 Handbook, including the emergence of neuroeconomics, significant growth in macroeconomic experiments, and substantial growth in experiments that support market design research. The goal here is to cover some of these new growth topics and others not covered in the 1995 Handbook, as well as to update results in some areas of research (e.g., public goods and auctions) that were covered in 1995. Even more so than in the 1995 Handbook, there is no way to cover the entire field of experimental economics or to exhaustively cover the research areas each chapter addresses. Instead we left it to the authors of each chapter to curate important developments, with a view to reporting on series of experiments that highlight the back and forth between different experimenters, between experimenters and theorists, and between experimenters and practitioners. As in the 1995 Handbook, there is no chapter explicitly devoted to experimental methodology, because we continue to believe that methodological issues are best covered within the context of the substantive research questions under investigation. Also, there are a number of active areas of experimental research, both new and old, that we wish we could have reported on here, but to keep the Handbook manageable, we do not cover them.

Most of the experiments reported here consist of laboratory studies, but several chapters report extensively on field experiments devoted to understanding the same or related issues studied in lab experiments, as called for by the questions being investigated. There is considerable back and forth both between lab and field experiments and between experiments and naturally occurring field data. The ultimate goal in all cases is to better understand economic behavior as it relates to economic theory and policy applications, with the emphasis on the role of experiments, lab or field (as well as naturally occurring empirical data), in achieving these goals.

Many colleagues have contributed to the Handbook in addition to the chapter writers.

Earlier chapter drafts were presented at several conferences at which members of the experimental community were invited to comment on early outlines and drafts of the chapters.1 In addition, each chapter has circulated among specialists to get feedback on the results reported and to identify omissions. To be sure, not all this feedback has been incorporated, but much of it has been included in revising chapter drafts. In what follows we provide a brief overview of the contents of the chapters.

Chapter 1: “Macroeconomics: A Survey of Laboratory Research,” by John Duffy

This chapter surveys the growing body of macroeconomic experiments over the past 20 years.2 This is both possible and relevant due to changes in macroeconomic modeling that have come to rely more and more on microfoundations. (Analogously, evolutionary biologists can’t conduct experiments directly on the fossil record or on species extinction, but our understanding of evolution is enhanced by experiments on fruit flies and on DNA.)

The chapter reviews experiments directed at issues in macroeconomics ranging from intertemporal optimization to how agents form expectations, to resolving the many coordination problems inherent to the macroeconomy, and to policy applications. There are efforts to reconcile laboratory outcomes with field data. For example, experiments show that subjects fail to smooth consumption over their laboratory “lifetime,” typically overconsuming to begin with, which has a clear correspondence in field data that shows massive undersaving for retirement. In each area covered, gaps in the literature and related problems ripe for experimental investigation are reported. So not only does this chapter provide a summary of experimental macroeconomic research, it points macroeconomists to a number of open research questions that can be studied experimentally.

Chapter 2: “Using Experimental Methods to Understand Why and How We Give to Charity,” by Lise Vesterlund

The literature on voluntary giving has grown substantially since the first Handbook, with much research devoted to determining the factors that drive generosity. Indicative of the growth of research in the field, to have a manageable survey, this chapter focuses on why and how people give to charities, using a blend of laboratory and field experiments, along with relevant field data. The chapter covers two broad areas of research. The first part of the chapter focuses on sorting out motives for giving, emphasizing results from creative modifications of the traditional public good and dictator games. Issues under investigation are to what extent giving is intentional and to what extent it results from genuine concern for others (altruism), concern for self (“warm glow”), and mistakes. Experiments are reported that investigate alternative models of charitable giving, and self-image effects for giving.3

The second part of the chapter reviews research on fundraising mechanisms. Fundraising differs from the classic mechanism design problem as the fundraiser’s objective is to maximize contributions (net of expenses), and he or she must rely on voluntary giving. Topics reported on include the potential benefits of announcing early contributions even though this invites free riding on lead contributors, and the benefits of different competitive contribution mechanisms such as lotteries, winner-pay and all-pay auctions. Other fundraising techniques, such as matching and rebating contributions, are also studied. The chapter, especially in the second part, reviews laboratory and field experiments as well as naturally occurring field data related to the same issues.

Chapter 3: “Neuroeconomics,” by Colin Camerer, Jonathan Cohen, Ernst Fehr,

Paul Glimcher and David Laibson

At the time of the 1995 Handbook, we don’t think anyone would have envisioned that neuroeconomics would so grip the attention of an enthusiastic band of pioneers that it would need to have a chapter devoted to it in the subsequent Handbook. But the field has established itself in the interim and has critical mass and vibrancy, as evidenced by the Society for Neuroeconomics, established in 2004, and the Journal of Neuroscience, Psychology, and Economics, which started publication in 2008. The chapter is a team effort by prominent scholars in the field.

The chapter provides an introduction to neuroscience along with a summary of research results to date in four areas of neuroeconomics—choice under uncertainty, intertemporal choice and self-regulation, the neural circuitry of social preferences, and strategic thinking. The first section outlines the neurobiological foundations of the research, providing the overall motivation and goals of the research program, along with characterizing the relevant parts of the brain that serve as the seat of various types of behavior. This is followed by a discussion of research methods, with a focus on fMRI studies, including exactly what is being measured and how fMRI images are evaluated. Research summaries in each of the four substantive areas covered focus on questions and results in relation to leading economic models in each area that would help to pin down their validity (e.g., with respect to prospect theory, determining if there are different parts of the brain where gains and losses are evaluated). Overall, this is a primer for anyone interested in neuroeconomics (casually or otherwise), along with a discussion of early experimental results. It will be interesting to come back in a decade or two to revisit results in these four research areas and see to what extent these early results have laid the groundwork for economics grounded in biology.

Chapter 4: “Other-Regarding Preferences: A Selective Survey of Experimental Results,”

by David J. Cooper and John H. Kagel

The study of other-regarding preferences has intensified in experimental research as it became increasingly clear that the standard economic model of strictly own-income-maximizing agents fails to account for experimental outcomes for a number of topics (e.g., bargaining, public goods provision, trust and reciprocity, and workplace interactions). Perhaps the best way to view the research reported in this chapter is as an inquiry intended to narrow down what exactly is meant by “other-regarding” preferences. This research has gone hand in hand with the growth of behavioral economics, as much of the anomalous experimental behavior has been incorporated into economic models. In turn, these models have suggested new experiments to explore their predictions, which have deepened our understanding of the nature of other-regarding preferences.

The chapter covers two broad areas of research. The first has to do with research aimed at better understanding early results from bargaining games, many of which were reviewed in the earlier Handbook. Those earlier results led to the development of formal models of other-regarding preferences, which provided the motivation for whole new classes of experiments that would probably not have been considered except for these models. New lines of inquiry compared to those covered in the earlier Handbook concern procedural fairness, delegation of responsibility for unkind behavior, and group identity and social preferences, in addition to such staples as the trust and dictator games.4 The second broad area of research involves gift exchange in labor markets, a subfield of “efficiency wage theory,” in which employers offer above-market wages and are in turn rewarded with above-minimum effort. There is considerable discussion of the contributions of both laboratory and field experiments to better understand behavior in this area.

Chapter 5: “Experiments in Market Design,” by Alvin E. Roth

When the first volume of the Handbook appeared in 1995, the kinds of economic engineering that have come to be known as market design were just developing. New designs for spectrum auctions and for labor-market clearinghouses were proposed by economists in the 1990s and were adopted and implemented in new forms of market organization. Market design has continued to grow, and much of the chapter focuses on the way experiments have complemented other forms of investigation, not only to explore the underlying science but also to communicate it to the many interested parties among whom new market designs have to be coordinated if they are to be implemented.

The chapter considers the various roles that experiments played in the debates surrounding the initial design of auctions for radio spectrum licenses and the continuing role they have played in the development of more complicated auctions that allow bidders to bid on packages and not just on individual licenses. It also considers how experiments have played a role in understanding the role of the “hard-close” ending rule in online eBay auctions, in guiding the revision of eBay’s reputation mechanism, in the use of experiments to help design and implement labor market institutions, such as the clearinghouse “Matches” that are used in various markets for doctors, and the signaling mechanism used in the market for new PhD economists. Throughout, the emphasis is on how experiments play a role as one among many tools in bringing a new design from conception through implementation.

Chapter 6: “Experiments in Political Economy,” by Thomas R. Palfrey

The focus of this chapter is on political science experiments in the methodological tradition of economic experiments with incentivized subjects and controlled laboratory conditions. The experiments reported are theory driven, dealing with outcomes in nonmarket settings: elections, committees, and so on. The issues covered deal with resource allocations, mechanism design, efficiency, and distribution. However, the “currency” for deciding these issues is votes rather than money. Five basic areas of research are covered, all tightly linked to formal theoretical modeling in political science: (1) committee bargaining, (2) elections and candidate competition, (3) voter turnout, (4) information aggregation in committees, and (5) novel voting procedures designed to reflect the intensity of voter preferences.

The review of committee bargaining experiments includes early unstructured committee experiments within the framework of cooperative game theory, and more recent sequential bargaining experiments with a fixed extensive form based on noncooperative game theory. The section on elections and candidate competition covers both two-candidate and multicandidate elections, and asymmetric elections in which one candidate (e.g., the incumbent) has a built-in advantage. Voter turnout is modeled as a participation game, intended to rationalize turnout with costly voting in mass elections. The section on information aggregation in committees explores institutions designed to deal with the aggregation of agents’ private information assuming common preferences. Among the issues explored is how the swing voter’s curse (resulting from similar forces as the winner’s curse in common value auctions) is largely corrected for in voting, whereas it is typically not corrected for in auctions. The section on alternative voting mechanisms explores the inefficiency in outcomes when voters have strong cardinal preferences and a number of alternative mechanisms designed to correct these inefficiencies—for example, storable votes and combining voting with markets. Each subsection concludes with a concise summary of results and discussion of open questions to be explored in both theory and experiments.

Chapter 7: “Experimental Economics Across Subject Populations,”

by Guillaume R. Fréchette

This chapter reviews the results of experiments using nonstandard subjects. In particular, experiments using nonhuman animals, people living in token economies, children, the elderly, demographically varied samples, and professionals are reviewed. Investigating such diverse subject pools addresses the question of the generalizability of findings from the standard undergraduate subject pool, as well as which behaviors are learned and the impact of selection effects and/or experience on experimental outcomes. Reasons for why specific subject pools are interesting to study are discussed, along with some of the methodological issues associated with conducting experiments with these different subject pools.

The concluding section of the paper pulls these results together with respect to questions of interest in economics. For example, there is reasonably close adherence to GARP (the generalized axiom of revealed preference) across subject populations, which suggests that the behavior is fundamental, and what data there are available for children show that violations decrease with age, so that there is a learned component. For the voluntary contribution mechanism, contributions to the public good respond positively to increases in the marginal per capita return but decline with repetition across both students and nonstudents. The lone exception to this pattern is that young children (less than 12 years of age) do not exhibit decreasing contributions with repetition of the game. With respect to the important question of experiments with professionals versus college students participating in an experiment designed to capture basic elements of professional behavior (e.g., bidding in auctions), he concludes that in most cases results from students carry over, at least qualitatively, to the professionals.

Chapter 8: “Gender,” by Muriel Niederle

This chapter reports research exploring gender differences in economic environments. These differences were barely on experimental economists’ radar screen at the time of the first Handbook of Experimental Economics. However, since the turn of the millennium, there has been an explosion of research on gender differences in economics. These have been most extensively studied with respect to attitudes toward competition (with relevance to the glass ceiling effect), altruism and the closely related issue of cooperation, and risk preferences. There are considerable benefits to studying gender differences in the laboratory, as this eliminates many of the potential confounds encountered in field settings, which may be particularly important with respect to gender; for example, is the underrepresentation of women in some occupations a result of discrimination (real or anticipated) or a result of different attitudes to highly competitive environments?

Results are reported in three main areas. First, with respect to gender differences in risk preferences, the present survey is much more skeptical of consistent differences than earlier surveys, particularly on account of inconsistencies in results across different domains under similar procedures. This survey also notes a lack of economic significance (the small size) of gender differences typically reported. Second, the survey notes that gender differences in altruism tend to be quite mixed, with some studies finding stronger altruism in women and others not, with what differences there are being relatively small. Third, the survey reports large and consistent differences in reactions to competition between men and women in mixed-gender tournaments, with much smaller differences in outcomes between single-gender tournaments. Experiments exploring the implications of these results for affirmative action in labor markets, along with possible changes in institutional structures (e.g., with respect to education), are explored as well.

Chapter 9: “Auctions: A Survey of Experimental Research,” by John H. Kagel

and Dan Levin

There has been a significant amount of new experimental research on auctions in the last 20 years, much of it motivated by the FCC wireless auctions and the growth of Internet auctions. The first part of the chapter revisits some old issues in single-unit private value auctions (e.g., bidding above the risk-neutral Nash equilibrium in first-price private value auctions) as well as how techniques applied to field data can be used both to better explore the experimental data and to better inform some of the assumptions underlying these techniques. Other issues covered include asymmetric and sequential private value auctions, along with new results with respect to second-price private value auctions, including a clever field experiment. The second part of this chapter looks at single-unit common value auction experiments, including auctions with insider information and auctions with an “advantaged bidder” who values the item more than the other bidders, including the role of demographic and ability effects, standard issues in labor economics, on bidders’ ability to overcome the winner’s curse. New experimental results on the winner’s curse in the takeover game, prompted by new theoretical models aimed at better understanding the origin of the winner’s curse, are reported on as well.

The last half of the chapter largely covers topics that have gained prominence since publication of the first Handbook. Foremost among these are multiunit-demand auctions in which individual bidders demand multiple items that can be either substitutes or complements due to synergies between items (e.g., regional cell phone licenses that can be combined to provide nationwide coverage). Both theory and experiments here are a direct result of the FCC’s spectrum auctions. Also generating significant attention are experiments focusing on Internet auctions, which have been, and continue to be, of growing importance, while also having a variety of interesting institutional characteristics (e.g., a “buy-it-now” price prior to the start of the auction). Experiments in these areas have implications for market design issues covered in Roth (Chapter 5).

Chapter 10: “Learning and the Economics of Small Decisions,” by Ido Erev

and Ernan Haruvy

This chapter looks at economic outcomes tied to small decisions and whether or not these decisions are reinforced; that is, it looks at economic outcomes determined by indirect shaping processes more familiar to psychologists than economists. Unlike “decisions from description” typical of economic experiments, where the incentive structure is fully laid out, the experiments reported here mostly involve “decisions from experience,” in which decision makers do not receive a prior description of the incentive structure but must learn about it. This results in a number of notable differences from decisions from description. For example, in choice under uncertainty, people exhibit oversensitivity to rare events in decisions from description (as in prospect theory) but exhibit the opposite bias when they need to rely on experience. This “experience-description gap” shows up in a number of other settings as well.

While many economists might be tempted to dismiss the importance of decisions from experience versus decisions from description, their importance is particularly clear when performance of a task requires a series of small decisions, where the consequences of each decision are relatively small. (The importance of decisions from experience can also be seen from the fact that in most economic experiments, even after attempts at clearly describing the economic contingencies and payoffs, experimental outcomes rarely exhibit equilibrium behavior to begin with, but typically rely on some sort of learning process to move towards equilibrium outcomes.) The practical importance of the economics of small decisions shaped by their consequences is clearly brought out in the concluding section of the paper through examples such as the enforcement of safety rules, enhancing the performance of pilots and basketball players, and the implications for financial decision making.

We acknowledge with thanks the work of those who contributed chapters or parts of chapters to this edition.

John H. Kagel

Alvin E. Roth


Macroeconomics:

A Survey of Laboratory Research

John Duffy

1 INTRODUCTION: LABORATORY MACROECONOMICS

Macroeconomic theories have traditionally been tested using nonexperimental field data, most often national income account data on GDP and its components. This practice follows from the widely held belief that macroeconomics is a purely observational science: history comes around just once and there are no “do-overs.” Controlled manipulation of the macroeconomy to gain insight regarding the effects of alternative institutions or policies is viewed by many as impossible, not to mention unethical, and so, apart from the occasional natural experiment, most macroeconomists would argue that macroeconomic questions cannot be addressed using experimental methods.1

Yet, as this survey documents, over the past twenty-five years, a wide variety of macroeconomic models and theories have been examined using controlled laboratory experiments with paid human subjects, and this literature is growing. The use of laboratory methods to address macroeconomic questions has come about in large part due to changes in macroeconomic modeling, though it has also been helped along by changes in the technology for doing laboratory experimentation, especially the use of large computer laboratories. The change in macroeconomic modeling is, of course, the now widespread use of explicit microfounded models of constrained, intertemporal choice in competitive general equilibrium, game-theoretic, or search-theoretic frameworks. The focus of these models is often on how institutional changes or policies affect the choices of decision makers such as households and firms, in addition to the more traditional concern with responses in the aggregate time series data (e.g., GDP) or to the steady states of the model. While macroeconomic models are often expressed at an aggregate level—for instance, there is a “representative” consumer or firm or a market for the “capital good”—an implicit, working assumption of many macroeconomists is that aggregate sectoral behavior is not different from that of the individual actors or components that comprise each sector.2 Otherwise, macroeconomists would be obliged to be explicit about the mechanisms by which individual choices or sectors aggregate up to the macroeconomic representations they work with, and macroeconomists have been largely silent on this issue. Experimentalists testing nonstrategic macroeconomic models have sometimes taken this representativeness assumption at face value and conducted individual decision-making experiments with a macroeconomic flavor. But, as we shall see, experimentalists have also considered whether small groups of subjects interacting with one another via markets or by observing or communicating with one another might outperform individuals in tasks that macroeconomic models assign to representative agents.

While there is now a large body of macroeconomic experimental research as reviewed in this survey, experimental methods are not yet a mainstream research tool used by the typical macroeconomist, as they are in nearly every other field of economics. This state of affairs likely arises from the training that macroeconomists receive, which does not typically include exposure to laboratory methods and is instead heavily focused on the construction of dynamic stochastic general equilibrium models that may not be well suited to experimental testing. As Sargent (2008, p. 27) observes,

I suspect that the main reason for fewer experiments in macro than in micro is that the choices confronting artificial agents within even one of the simpler recursive competitive equilibria used in macroeconomics are very complicated relative to the settings with which experimentalists usually confront subjects.

This complexity issue can be overcome, but, as we shall see, it requires experimental designs that simplify macroeconomic environments to their bare essence or involve operational issues such as the specification of the mechanism used to determine equilibrium prices. Despite the complexity issue, I will argue in this survey that experimental methods can and should serve as a complement to the modeling and empirical methods currently used by macroeconomists, as laboratory methods can shed light on important questions regarding the empirical relevance of microeconomic foundations, questions of causal inference, equilibrium selection, and the role of institutions.3

Indeed, to date the main insights from macroeconomic experiments include (1) an assessment of the microassumptions underlying macroeconomic models, (2) a better understanding of the dynamics of forward-looking expectations, which play a critical role in macroeconomic models, (3) a means of resolving equilibrium selection (coordination) problems in environments with multiple equilibria, (4) validation of macroeconomic model predictions for which the relevant field data are not available, and (5) the impact of various macroeconomic institutions and policy interventions on individual behavior. In addition, laboratory tests of macroeconomic theories have generated new or strengthened existing experimental methodologies, including implementation of the representative-agent assumption, overlapping generations, and search-theoretic models, methods for assisting with the roles of forecasting and optimizing, implementation of discounting and infinite horizons, methods for assessing equilibration, and the role played by various market-clearing mechanisms in characterizing Walrasian competitive equilibrium (for which the precise mechanism of exchange is left unmodeled).

The origins of macroeconomic experiments are unclear. Some might point to A. W. Phillips’ (1950) experiments using a colored liquid-filled tubular flow model of the macroeconomy, though this did not involve human subjects! Others might cite Vernon Smith’s (1962) double-auction experiment demonstrating the importance of centralized information to equilibration to competitive equilibrium as the first macroeconomic experiment. Yet another candidate might be John Carlson’s (1967) early experiment examining price expectations in stable and unstable versions of the cobweb model. However, I will place the origins more recently with Lucas’s 1986 invitation to macroeconomists to conduct laboratory experiments to resolve macrocoordination problems that were unresolved by theory. Lucas’s invitation was followed up on by Aliprantis and Plott (1992), Lim, Prescott, and Sunder (1994), and Marimon and Sunder (1993, 1994, 1995), and, perhaps as the result of their interesting and influential work, over the past two decades there has been a great blossoming of research testing macroeconomic theories in the laboratory. This literature is now so large that I cannot hope to cover every paper in a single chapter, but I do hope to give the reader a good road map as to the kinds of macroeconomic topics that have been studied experimentally as well as to suggest some further extensions.

How shall we define a macroeconomic experiment? One obvious dimension might be to consider the number of subjects in the study. Many might argue that a macroeconomic experiment should involve a large number of subjects; perhaps the skepticism of some toward macroeconomic experiments has to do with the necessarily small numbers of subjects (and small scale of operations) that are possible in laboratory studies.4 The main problem with small numbers of subjects is that strategic considerations may play a role that is not imagined (or possible) in the macroeconomic model that is being tested, which may instead focus on perfectly competitive Walrasian equilibrium outcomes. However, research has shown that attainment of competitive equilibrium outcomes might not require large numbers of subjects. For example, the evidence from numerous double-auction experiments beginning with Smith (1962) and continuing to the present reveals that equilibration to competitive equilibrium can occur reliably with as few as three to five buyers or sellers on each side of the market. Duffy and others (2011) study bidding behavior in a Shapley-Shubik market game and show that with small numbers of subjects (e.g., groups of size two), Nash equilibrium outcomes are indeed far away from the competitive equilibrium outcome of the associated pure exchange economy. However, they also show that as the number of subjects increases, the Nash equilibrium subjects coordinate upon becomes approximately Walrasian; economies with just ten subjects yield market-based allocations that are indistinguishable from the competitive equilibrium of the associated pure exchange economy. Thus, while more subjects are generally better than fewer subjects for obtaining competitive equilibrium outcomes, it seems possible to establish competitive market conditions with the small numbers of subjects available in the laboratory.5

A more sensible approach is to define a macroeconomic experiment as one that tests the predictions of a macroeconomic model or its assumptions, or is framed in the language of macroeconomics, involving, for example, intertemporal consumption and savings decisions, inflation and unemployment, economic growth, bank runs, monetary exchange, monetary or fiscal policy, or any other macroeconomic phenomena. Unlike microeconomic models and games, which often strive for generality, macroeconomic models are typically built with a specific macroeconomic story in mind that is not as easily generalized to other nonmacroeconomic settings. For this reason, our definition of a macroeconomic experiment may be too restrictive. There are many microeconomic experiments—coordination games, for instance—that can be given both a macroeconomic interpretation and a more microeconomic interpretation, for example, as models of firm or team behavior. In discussing those studies as macroeconomic experiments, I will attempt to emphasize the macroeconomic interpretation.

The coverage of this chapter can be viewed as an update on some topics covered in several chapters of the first volume of the Handbook of Experimental Economics, including discussions of intertemporal decision making by Camerer (1995), coordination problems by Ochs (1995), and asset prices by Sunder (1995), though the coverage here will not be restricted to these topics alone. Most of the literature surveyed here was published since 1995, the date of the first Handbook volume. In addition, this chapter builds on, complements, and extends earlier surveys of the macroeconomic experimental literature by myself, Duffy (1998, 2008), and by Ricciuti (2008).

2 DYNAMIC, INTERTEMPORAL OPTIMIZATION

Perhaps the most widely used model in modern macroeconomic theory is the one-sector, infinite-horizon optimal-growth model pioneered by Ramsey (1928) and further developed by Cass (1965) and Koopmans (1965). This model posits that individuals solve a dynamic, intertemporal optimization problem in deriving their consumption and savings plan over an infinite horizon. Both deterministic and stochastic versions of this model are workhorses of modern real business cycle theory and growth theory.

In the urge to provide microfoundations for macroeconomic behavior, modern macroeconomists assert that the behavior of consumers or firms can be reduced to that of a representative, fully rational individual actor; there is no room for any “fallacies of composition” in this framework. It is, therefore, of interest to assess the extent to which macroeconomic phenomena can be said to reflect the choices of individuals facing dynamic stochastic intertemporal optimization problems. Macroeconomists have generally ignored the plausibility of this choice-theoretic assumption, preferring instead to examine the degree to which the time-series data on GDP and its components move in accordance with the conditions that have been optimally derived from the fully rational representative-agent model and especially whether these data react predictably to shocks or policy interventions.

2.1 Optimal Consumption/Savings Decisions

Whether individuals can in fact solve a dynamic stochastic intertemporal optimization problem of the type used in the one-sector optimal growth framework has been the subject of a number of laboratory studies, including Hey and Dardanoni (1988), Carbone and Hey (2004), Noussair and Matheny (2000), Lei and Noussair (2002), Ballinger, Palumbo, and Wilcox (2003), Carbone (2006), Brown, Chua, and Camerer (2009), Ballinger and others (2011), Crockett and Duffy (2013), Carbone and Duffy (2014), and Meissner (2016), among others. These studies take the representative-agent assumption of modern macroeconomics seriously and ask whether subjects can solve a discrete-time optimization problem of the form

max E_0 Σ_{t} β^t u(c_t)   subject to   c_t ≤ ω_t for all t,

where β ∈ (0, 1) is the period discount factor, u(·) is a concave period utility function, c_t denotes period t consumption, and ω_t denotes the household’s time t wealth.

Hey and Dardanoni (1988) assume a pure exchange economy, where wealth evolves according to ω_t = R(ω_{t−1} − c_{t−1}) + y_t, with ω_0 > 0 given. Here, R denotes the (constant) gross return on savings and y_t is the stochastic time t endowment of the single good; the mean and variance of the stochastic income process are made known to subjects.

By contrast, Noussair and associates assume a nonstochastic production economy, where ω_t = f(k_t) + (1 − δ)k_t, with f(·) representing the known, concave production function, k_t denoting capital per capita, and δ denoting the depreciation rate. In this framework, it is public knowledge that an individual’s savings, x_t, are invested in capital and become the next period’s capital stock, that is, x_t = k_{t+1}. The dynamic law of motion for the production economy is expressed in terms of capital rather than wealth: k_{t+1} = f(k_t) + (1 − δ)k_t − c_t, with k_0 > 0 given. The gross return on savings is endogenously determined by R = f′(k_t) + (1 − δ).

Solving the maximization problem given before, the first-order conditions imply that the optimal consumption program must satisfy the Euler equation

u′(c_t) = βR E_t[u′(c_{t+1})],

where the expectation operator is with respect to the (known) stochastic process for income (or wealth). Notice that the Euler equation predicts a monotonically increasing, decreasing, or constant consumption sequence, depending on whether βR is greater than, less than, or equal to 1. Solving for a consumption or savings function involves application of dynamic programming techniques that break the optimization problem up into a sequence of two-period problems; the Euler equation characterizes the dynamics of marginal utility in any two periods. For most specifications of preferences, analytic closed-form solutions for the optimal consumption or savings function are not possible, though via concavity assumptions, the optimal consumption/savings program can be shown to be unique.
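Because closed-form policies are rarely available, studies of this kind compute the optimal consumption rule numerically. The sketch below is a minimal illustration of that idea—backward induction on a discretized wealth grid for a finite-horizon version of the problem with CRRA utility, a two-point i.i.d. income draw, and no borrowing. All parameter values are hypothetical and are not taken from any of the experiments discussed here.

```python
import numpy as np

# Illustrative parameters (placeholders, not those of any cited study)
beta, R, T = 0.95, 1.02, 25          # discount factor, gross return on savings, horizon
gamma = 2.0                          # CRRA curvature
income_vals = np.array([0.5, 1.5])   # two equally likely income realizations
income_prob = np.array([0.5, 0.5])
grid = np.linspace(0.01, 20.0, 400)  # grid for wealth (cash on hand) omega

def u(c):
    return c ** (1.0 - gamma) / (1.0 - gamma)

V_next = u(grid)                     # final period: consume all remaining wealth
policy = np.zeros((T, grid.size))    # optimal consumption at each (period, wealth)
policy[-1] = grid

for t in range(T - 2, -1, -1):       # backward induction over periods
    V_now = np.empty_like(grid)
    for i, w in enumerate(grid):
        c = np.linspace(1e-3, w, 200)                    # candidate consumption (no borrowing)
        w_next = R * (w - c)[:, None] + income_vals[None, :]
        EV = np.interp(w_next, grid, V_next) @ income_prob
        total = u(c) + beta * EV
        best = int(np.argmax(total))
        V_now[i], policy[t, i] = total[best], c[best]
    V_next = V_now

# Example: optimal first-period consumption when initial wealth is 5 units
print(policy[0, np.searchsorted(grid, 5.0)])
```

The grid-and-interpolation approach trades exactness for simplicity; a finer grid or a continuous optimizer would sharpen the policy, but the structure—the Euler-equation trade-off between consuming today and carrying wealth forward—is the same one subjects face in these experiments.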

In testing this framework, Hey and Dardanoni (1988) addressed several implementation issues. First, they chose to rule out borrowing (negative saving) in order to prevent subjects from ending the session in debt. Second, they attempted to implement discounting and the stationarity associated with an infinite horizon by having a constant probability that the experimental session would continue with another period.6 Finally, rather than inducing a utility function, they supposed that all subjects had constant absolute risk-aversion preferences, and they estimated each individual subject’s coefficient of absolute risk aversion using data they gathered from hypothetical and paid choice questions presented to the subjects. Given this estimated utility function, they then numerically computed optimal consumption for each subject and compared it with their actual consumption choice. To challenge the theory, they considered different values for R and β as well as for the parameters governing the stochastic income process, y.

They report mixed results. First, consumption is significantly different from optimal behavior. In particular, there appears to be great time dependence in consumption behavior; that is, consumption appears dependent on past income realizations, which is at odds with the time-independent nature of the optimal consumption program. Second, they find support for the comparative statics implications of the theory. That is, changes in the discount factor, β, or in the return on savings, R, have the same effect on consumption as under optimal consumption behavior. So they find mixed support for dynamic intertemporal optimization.

Carbone and Hey (2004) and Carbone (2006) simplify the design of Hey and Dardanoni. First, they eliminate discounting and consider a finite-horizon, twenty-five-period model. They argue, based on the work of Hey and Dardanoni, that subjects “misunderstand the stationarity property” of having a constant probabilistic stopping rule. Second, they greatly simplify the stochastic income process, allowing there to be just two values for income—one high, which they refer to as a state where the consumer is “employed,” and the other low, in which state the consumer is “unemployed.” They use a two-state Markov process to model the state transition process: conditional on being employed (unemployed), the probability of remaining (becoming) employed was p (q), and these probabilities were made known to subjects. Third, rather than infer preferences, they induce a constant absolute risk-aversion utility function. Their treatment variables were p, q, R, and the ratio of employed to unemployed income; they considered two values of each, one high and one low, and examined how consumption changed in response to changes in these treatment variables relative to the changes predicted by the optimal consumption function (again numerically computed). Table 1.1 shows a few of their comparative statics findings.
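As a small illustration of the income process just described, the two-state employment chain can be simulated as below. The transition probabilities and income levels are hypothetical placeholders, not the values used by Carbone and Hey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: p = P(stay employed), q = P(become employed),
# and the high (employed) / low (unemployed) income levels.
p, q = 0.9, 0.4
y_employed, y_unemployed = 2.0, 0.5
T = 25

employed = True
income_path = []
for t in range(T):
    income_path.append(y_employed if employed else y_unemployed)
    if employed:
        employed = rng.random() < p   # remain employed with probability p
    else:
        employed = rng.random() < q   # become employed with probability q

print(income_path)
```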

An increase in the probability of remaining employed caused subjects to overreact in their choice of additional consumption relative to the optimal change, regardless of their employment status (unemployed or employed), whereas an increase in the probability of becoming employed—a decrease in the probability of remaining unemployed—led to an underreaction in the amount of additional consumption chosen relative to the optimal prediction. On the other hand, the effect of a change in the ratio of high-to-low income on the change in consumption was quite close to optimal. Carbone and Hey emphasize also that there was tremendous heterogeneity in subjects’ abilities to confront the life-cycle consumption/savings problem, with most subjects appearing to discount old-age consumption too heavily (when they should not discount at all) or optimizing over a shorter planning horizon than the twenty-five periods of the experiment.7 Carbone and Hey conclude that “subjects do not seem to be able to smooth their consumption stream sufficiently—with current consumption too closely tracking current income.”

Interestingly, the excess sensitivity of consumption to current income (in excess of that warranted by a revision in expectations of future income) is a well-documented empirical phenomenon in studies of consumption behavior using aggregate field data (see, e.g., Flavin 1981; Hayashi 1982; Zeldes 1989). This corroboration of evidence from the field should give us further confidence in the empirical relevance of the laboratory analyses of intertemporal consumption-savings decisions. Two explanations for the excess sensitivity of consumption to income that have appeared in the literature are (1) binding liquidity constraints and (2) the presence of a precautionary savings motive (which is more likely in a finite-horizon model). Future experimental research might explore the relative impacts of these two factors on consumption decisions.

Meissner (2016) modifies the finite-horizon, life-cycle planning environment of Carbone and Hey (2004) to allow subjects to borrow and not just to save. In particular, Meissner studies two regimes, one in which an individual’s stochastic income process has an upward-sloping trend and a second regime where this income process has a downward-sloping trend. Optimal behavior in the first regime involves borrowing in the early periods of life so as to better smooth consumption, while optimal behavior in the second regime involves saving in the early periods of life to better smooth consumption. Meissner parameterized the environment so that the optimal consumption path was the same in both income treatments, and subjects were given three opportunities or “lifetimes” to make consumption/savings/borrowing decisions in each of the two income treatments; that is, he uses a within-subjects design. A main finding is that in the decreasing-income regime, subjects have no trouble learning to save in the early periods of their life and can approximately smooth consumption over their lifetime. By contrast, in the increasing-income regime, most subjects seem averse to borrowing any amount, so that consumption deviates much further from the optimal path; consumption decisions in this treatment more closely track the upward-trend path of income, and there is not much difference with replication (i.e., there is little learning). Meissner attributes the latter finding to “debt aversion” on the part of his university student subjects. It would be of interest to explore whether such debt aversion continues in more-general subject populations involving individuals who may have some homegrown experience with acquiring debt.

Noussair and Matheny (2000) further modify the framework of Hey and associates by adding a concave production technology, f(k_t) = A k_t^α, α < 1, which serves to endogenize the return on savings in conformity with modern growth theory. They induce both the production function and a logarithmic utility function by giving subjects schedules of payoff values for various levels of k and c, and they implement an infinite horizon by having a constant probability that a sequence of rounds continues. Subjects made savings decisions (chose x_t = k_{t+1}), with the residual from their budget constraint representing their consumption. Noussair and Matheny varied two model parameters, the initial capital stock k_0 and the production function parameter α. Variation in the first parameter changes the direction by which paths for consumption and capital converge to steady-state values (from above or below), while variations in the second parameter affect the predicted speed of convergence; the lower is α, the greater is the speed of convergence of the capital stock and consumption to the steady state of the model.

Among the main findings, Noussair and Matheny report that sequences for the capital stock are monotonically decreasing regardless of parameter conditions, and theoretical predictions with regard to speed of convergence do not find much support. Consumption is, of course, linked to investment decisions and is highly variable. They report that subjects occasionally resorted to consumption binges, allocating nearly nothing to the next period’s capital stock, in contrast to the prediction of consumption smoothing. However, this behavior seemed to lessen with experience. A virtue of the Noussair-Matheny study is that it was conducted with both US and Japanese subjects, with similar findings for both countries.
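For reference, the steady state these convergence predictions refer to follows directly from the Euler equation and the law of motion for capital given earlier; the calculation below is the standard one implied by that setup (with no discounting of the steady state other than through β), not a result reported by Noussair and Matheny.

```latex
% In a nonstochastic steady state c_t = c_{t+1} = c^*, so the Euler equation
% u'(c_t) = \beta R\, u'(c_{t+1}) with R = f'(k) + 1 - \delta requires \beta R = 1:
\beta\left[\alpha A (k^*)^{\alpha-1} + 1 - \delta\right] = 1
\;\Longrightarrow\;
k^* = \left(\frac{1/\beta - 1 + \delta}{\alpha A}\right)^{\frac{1}{\alpha-1}},
\qquad
c^* = f(k^*) - \delta k^*.
```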

One explanation for the observed departure of behavior from the dynamically optimal path is that the representative-agent assumption, while consistent with the reductionist view of modern macroeconomics, assumes too much individual rationality to be useful in practice.8 Information on market variables (e.g., prices), as determined by many different interacting agents, may be a necessary aid to solving such complicated optimization decisions. An alternative explanation may be that the standard model of intertemporal consumption smoothing abstracts away from the importance of social norms of behavior with regard to consumption decisions. Akerlof (2007), for instance, suggests that people’s consumption decisions may simply reflect their “station in life.” College students (the subjects in most of these experiments), looking to their peers, choose to live like college students, with expenditures closely tracking income. Both of these alternative explanations have been considered to some extent in further laboratory studies.

Crockett and Duffy (2013) explore whether groups of subjects can learn to intertemporally smooth their consumption in the context of an infinite-horizon, consumption-based asset-pricing model, specifically, the Lucas tree model (Lucas 1978). In the environment they study, the only means of saving intertemporally is to buy or sell shares of a long-lived asset (a Lucas tree), which yields a known and constant dividend (amount of fruit) each period. Subjects are of two types, according to the endowment of income they receive in alternating periods; odd types receive high income in odd-numbered periods and low income in even-numbered periods, while even types receive high income in even-numbered periods and low income in odd-numbered periods. In one of Crockett and Duffy's treatments, subjects' induced utility function over consumption is concave, so that subjects have an incentive to intertemporally smooth their consumption by buying the asset in their high-income periods and selling it in their low-income periods (the heterogeneity of subject types allows for such trades to occur). Asset prices are determined via a double-auction mechanism, and these prices can be observed by all subject participants. Crockett and Duffy report that with these asset price signals, most subjects have little difficulty learning to intertemporally smooth their consumption across high- and low-income periods. Future experimental research on consumption smoothing through the purchase and sale of long-lived assets might investigate a more realistic, stochastic, life-cycle income process.
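A two-period back-of-the-envelope calculation illustrates the incentive at work here. The endowments, dividend, price, and square-root utility below are hypothetical numbers chosen for illustration (they are not the Crockett-Duffy parameters); the point is simply that, with concave induced utility, buying a share in the high-income period and selling it in the low-income period raises total payoff relative to consuming the endowment stream.

```python
# Sketch: why intertemporal smoothing via asset trades pays under concave utility.
# All numbers are hypothetical and chosen only to illustrate the mechanism.
import math

u = math.sqrt                              # a concave induced utility function
high, low, dividend, price = 24.0, 8.0, 2.0, 6.0

# Autarky: consume the endowment plus the dividend on an existing share each period.
autarky = u(high + dividend) + u(low + dividend)

# Smoothing: buy one extra share when income is high, sell it back when income is low.
smoothed = u(high + dividend - price) + u(low + dividend + price)

print(f"two-period utility, autarky : {autarky:.3f}")
print(f"two-period utility, smoothed: {smoothed:.3f}")
```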

Ballinger, Palumbo, and Wilcox (2003) explore the role of social learning in amodified version of the noisy pure exchange economy studied by Hey and Dardanoni(1988) In particular, they eliminate discounting (presumably to get rid of time depen-dence), focusing on a finite sixty-period horizon Subjects are matched into three-person

“families” and make decisions in a fixed sequence The generation 1 (G1) subject makesconsumption decisions alone for twenty periods; in the next twenty periods (21–40), his

or her behavior is observed by the generation 2 (G2) subject, and in one treatment, thetwo are free to communicate with one another In the next twenty periods (periods 41–

60 for G1, periods 1–20 for G2), both generations make consumption/savings decisions.The G1 subject then exits the experiment The same procedure is then repeated with thegeneration 3 (G3) subject watching the G2 subject for the next twenty rounds, and so on.Unlike Hey and Dardanoni, Ballinger and others induce a constant relative risk-aversionutility function on subjects using a Roth and Malouf (1979) binary lottery procedure.This allows them to compute the path of optimal consumption/savings behavior Thesepreferences give rise to a precautionary savings motive, wherein liquid wealth (saving)follows a hump-shaped pattern over the sixty-period lifecycle
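The precautionary-savings benchmark that such behavior is compared against can be computed by backward induction. The sketch below solves a much smaller version of this kind of problem on a discrete grid; the horizon, income process, and risk-aversion coefficient are stand-ins rather than the actual experimental parameters, but the simulated mean wealth path displays the same hump shape: wealth is built up in early and middle periods and run down near the end.

```python
# Sketch: a small finite-horizon precautionary-savings problem solved by backward
# induction on a discrete grid. The horizon, income process, and CRRA coefficient
# are illustrative stand-ins, not the Ballinger-Palumbo-Wilcox parameters.
import numpy as np

T, rho = 20, 3.0                                   # horizon and CRRA coefficient
incomes, probs = np.array([1.0, 3.0]), np.array([0.5, 0.5])   # i.i.d. income draws
grid = np.linspace(0.0, 40.0, 401)                 # grid for cash on hand

def u(c):
    c = np.maximum(c, 1e-9)                        # infeasible choices get huge penalties
    return c ** (1 - rho) / (1 - rho)

V = np.zeros(len(grid))                            # continuation value after the final period
policies = []                                      # optimal savings rules, last period first
for t in reversed(range(T)):
    EV = np.zeros(len(grid))                       # E[V(savings + income')] on the savings grid
    for y, p in zip(incomes, probs):
        EV += p * np.interp(grid + y, grid, V)
    cons = grid[:, None] - grid[None, :]           # rows: cash on hand, cols: savings
    values = u(cons) + EV[None, :]
    best = values.argmax(axis=1)
    policies.append(grid[best])
    V = values.max(axis=1)

# Simulate mean liquid wealth over the life cycle to see the precautionary "hump."
rng = np.random.default_rng(0)
mean_wealth = np.zeros(T)
for _ in range(2000):
    cash = 0.0
    for t in range(T):
        cash += rng.choice(incomes, p=probs)
        cash = np.interp(cash, grid, policies[T - 1 - t])   # carry optimal savings forward
        mean_wealth[t] += cash / 2000
print(np.round(mean_wealth, 2))
```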

Ballinger, Palumbo, and Wilcox's (2003) main treatment variable concerns the variance of the stochastic income process (high or low), which affects the peak of the precautionary savings hump; in the high case they also explore the role of allowing communication/mentoring or not (while maintaining observability of actions by overlapping cohorts at all times). Among their findings, they report that subjects tend to consume more than the optimal level in the early periods of their lives, leading to less savings and below-optimal consumption in the later periods of life. However, savings are greater in the high- as compared with the low-variance case, which is consistent with the comparative statics prediction of the rational intertemporal choice framework. They also find evidence for time dependence in that consumption behavior is excessively sensitive to near lagged changes in income. Most interestingly,


they report that the consumption behavior of generation 3 is significantly closer to the optimal consumption program than the consumption behavior of generation 1, suggesting that social learning by observation plays an important role and may be a more reasonable characterization of the representative agent.

Ballinger and others (2011) study a similar life-cycle consumption/savings problem but focus on whether cognitive and/or personality measures might account for the observed heterogeneity in a subject's savings behavior, in particular, the subject's use of shorter-than-optimal planning horizons. Using a careful multivariate regression analysis that accounts for potentially confounding demographic variables, they report that cognitive measures, and not personality measures, are good predictors of heterogeneity in savings behavior. In particular, they report that variations in subjects' cognitive abilities, as assessed using visually oriented "pattern-completion" tests and "working memory" tests that measure a subject's ability to control both attention and thought, can explain variations in a subject's life-cycle savings behavior, and that the median subject is thinking just three periods ahead.

Lei and Noussair (2002) study the intertemporal consumption/savings problem in the context of the one-sector optimal growth model with productive capital. They contrast the "social planner" case, where a single subject is charged with maximizing the representative consumer/firm's present discounted sum of utility from consumption over an indefinite horizon (as in Noussair and Matheny (2000)), with a decentralized market approach, wherein the same problem is solved by five subjects looking at price information. In this market treatment, the production and utility functions faced by the social planner are disaggregated into five individual functions assigned to the five subjects that aggregate up to the same functions faced by the social planner. For example, some subjects had production functions with marginal products of capital that were higher than for the economy-wide production function, while others had marginal products of capital that were lower. At the beginning of a period, production took place, based on the previous period's capital, using either the individual production functions in the market treatment or the economy-wide production function in the social-planner treatment. Next, in the market treatment, a double-auction market for output (or potential future capital) opened up. Agents with low marginal products of capital could trade some of their output to agents with high marginal products of capital in exchange for experimental currency units (subjects were given an endowment of such units each period, which they had to repay). The import of this design was that the market effectively communicated to the five subjects the market price of a unit of output (or future capital). As future capital could be substituted one for one with future consumption, the market price of capital revealed to subjects the marginal utility of consumption. After the market for output closed, subjects in the market treatment could individually allocate their adjusted output levels between future capital k_{t+1} (savings) and experimental currency units (consumption c_t). By contrast, in the social-planner treatment, there was no market for output; the representative individual proceeded directly to the step of deciding how to allocate output between future capital (savings) and current consumption. At the end of the period, subjects' consumption amounts were converted into payoffs using the economy-wide or individual concave utility functions, and loans of experimental currency units in the market treatment were repaid.

The difference in consumption behavior between the market and social-planner treatments is illustrated in Figure 1.1, which shows results from a representative session of one of Lei and Noussair's treatments. In the market treatment,



Figure 1.1: Consumption choices over two indefinite horizons (a, b) compared with optimal steady-state consumption (C bar). Market treatment (top) versus social planner treatment (bottom). Source: Lei and Noussair (2002).

there was a strong tendency for consumption (as well as capital and the price of output)

to converge to their unique steady-state values, while in the social planner treatment, consumption was typically below the steady-state level and much more volatile.

In further analysis, Lei and Noussair (2002) make use of a linear, panel data regression model to assess the extent to which consumption and savings (or any other time-series variable for that matter) can be said to be converging over time toward predicted (optimal) levels.9 In this regression model, y_{j,t} denotes the average (or economy-wide level) of the variable of interest by cohort/session j in period t = 1, 2, . . . , and D_j is a dummy variable for each of the j = 1, 2, . . . , J cohorts.

The regression model is written as

y_{j,t} = Σ_{j=1}^{J} α_j D_j (1/t) + β (t − 1)/t + ε_{j,t},    (1)

where β is the asymptotic value of the variable y to which all J cohorts of subjects are converging; notice that the α

coefficients have a full weight of 1 in the initial period 1 and then have exponentially declining weights, while the single β coefficient has an initial weight of zero that increases asymptotically to 1. For the dependent variable in (1), Lei and Noussair (2002) use: (1) the consumption and capital stocks (savings) of cohort j, c_{j,t} and k_{j,t+1}, (2) the absolute deviation of consumption from its optimal steady-state value, |c_{j,t} − c*|, and (3) the ratio of the realized utility of consumption to the optimum, u(c_{j,t})/u(c*). For the first type of dependent variable, the estimate β̂ reveals the values to which the dependent


variable, c_{j,t} and k_{j,t}, are converging across cohorts; strong convergence is said to obtain if β̂ is not significantly different from the optimal steady-state levels, c* and k*. For the second and third types of dependent variable, one looks for whether β̂ is significantly different from zero or one, respectively. Lei and Noussair also consider a weaker form of convergence that examines whether β̂ is closer (in absolute value) to the optimal, predicted level than a majority of the α̂_j estimates. Using all four dependent variables, they report evidence of both weak and strong convergence in the market treatment, but only evidence of weak (and not strong) convergence in the social planner treatment.10
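The mechanics of regression (1) are easy to see in a few lines of code. The sketch below generates artificial cohort data that converge toward a known level and then recovers the α_j and β coefficients by ordinary least squares on the weighted regressors D_j·(1/t) and (t − 1)/t; all parameter values are made up for illustration and are not Lei and Noussair's data.

```python
# Sketch: estimating the convergence regression (1) by OLS on simulated cohort data.
# The data-generating process and parameter values here are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
J, T, y_star = 4, 30, 10.0                      # cohorts, periods, predicted steady state
alphas = rng.uniform(4.0, 16.0, size=J)         # cohort-specific starting levels

rows, y = [], []
for j in range(J):
    for t in range(1, T + 1):
        # simulated cohort average converging from alphas[j] toward y_star
        y_jt = alphas[j] / t + y_star * (t - 1) / t + rng.normal(0, 0.3)
        d = np.zeros(J)
        d[j] = 1.0 / t                           # weight 1/t on the cohort dummy
        rows.append(np.append(d, (t - 1) / t))   # last regressor carries weight (t-1)/t
        y.append(y_jt)

X, y = np.array(rows), np.array(y)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("alpha_j estimates:", np.round(coef[:J], 2))
print("beta estimate    :", round(coef[J], 2), "(predicted level y* =", y_star, ")")
```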

Tests of convergence based on the regression model (1) can be found in several experimental macroeconomic papers reviewed later in this chapter. This methodology for assessing convergence of experimental time series is one of several methodologies that might be considered "native" to experimental macroeconomics. Therefore, allow me a brief digression on the merits of this approach. First, the notion that strong convergence obtains if β̂ is not significantly different from the predicted level, y*, while weak convergence obtains if |β̂ − y*| < |α̂_j − y*| for a majority of js is somewhat problematic, as strong convergence need not imply weak convergence, as when the α̂_j estimates are insignificantly different from β̂. Second, if convergence is truly the focus,

an alternative approach would be to use an explicitly dynamic adjustment model for each cohort j of the form

y_{j,t} = λ_j y_{j,t−1} + μ_j + ε_{j,t}.    (2)

Using (2), weak convergence would obtain if the estimates λ̂_j were significantly less than 1, while strong convergence would obtain if the estimate of the long-run expected value for y_j, μ̂_j/(1 − λ̂_j), was not significantly different from the steady-state prediction y*; in this model, strong convergence implies weak convergence, not the reverse.11

Finally, analysis of joint convergence across the J cohorts to the predicted level y* could be studied through tests of the hypothesis

H_0: (I_J − Λ)^{−1} μ = y* ι_J,

where Λ = diag(λ_1, . . . , λ_J), μ = (μ_1, . . . , μ_J)′, ι_J is a J × 1 vector of ones, and I_J is a J-dimensional identity matrix.
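For comparison, a sketch of the cohort-by-cohort adjustment model (2): simulate a few AR(1) cohort series that converge to a common level, estimate λ_j and μ_j by least squares, and back out the implied long-run value μ̂_j/(1 − λ̂_j). Weak convergence corresponds to λ̂_j significantly below 1; strong convergence to the implied long-run values being statistically indistinguishable from y*. The parameters below are illustrative only.

```python
# Sketch: the alternative cohort-by-cohort adjustment model (2), estimated by OLS.
# Simulated data; the convergence target and AR parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
y_star, T = 10.0, 50

def estimate_cohort(lam_true, y0):
    y = np.empty(T)
    y[0] = y0
    for t in range(1, T):
        # y_t = lam*y_{t-1} + mu + eps, with mu chosen so the long-run mean is y_star
        y[t] = lam_true * y[t - 1] + (1 - lam_true) * y_star + rng.normal(0, 0.2)
    X = np.column_stack([y[:-1], np.ones(T - 1)])
    (lam_hat, mu_hat), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return lam_hat, mu_hat / (1 - lam_hat)       # estimated long-run expected value

for j, (lam, y0) in enumerate([(0.6, 4.0), (0.8, 15.0), (0.7, 2.0)], start=1):
    lam_hat, longrun = estimate_cohort(lam, y0)
    print(f"cohort {j}: lambda_hat={lam_hat:.2f}, implied long-run value={longrun:.2f}")
```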

Returning to the subject of dynamic, intertemporal life-cycle consumption/savings

decisions, recent work has explored subject behavior in the case where there are two

(as opposed to just one) state variables: an individual's wealth (or "cash on hand") ω_t and some induced "habit" level for consumption h_t (following the macroeconomic literature on habit formation), so that the period objective function is of the form u(c_t, h_t). Brown, Chua, and Camerer (2009) study the case of internal habit formation, where each individual subject i has his or her own, personal habit level of consumption that evolves according to h^i_t = αh^i_{t−1} + c^i_t (α < 1) and has a period utility function that is increasing in the ratio c^i_t/h^i_t. Carbone and Duffy (2014) study the case of external habit formation, where h_t is the lagged average consumption of a group of N identically endowed subjects (i.e., h_t = N^{−1} Σ_{i=1}^{N} c^i_{t−1}) and u is an increasing function of the difference c_t − αh_t (α < 1). Both studies also explore social learning in this more

complex environment, with Brown and others exploring intergenerational learning and


Carbone and Duffy exploring peer-to-peer social learning. Both studies report that subjects have some difficulty with habit-formation specifications, as these require that subjects optimally save more early on in their life cycle (relative to the absence of a habit variable) to adjust for the diminishing effect that habits have on utility over the life cycle; consistent with earlier studies (without habit), consumers typically undersave early on in their life cycle. Brown and others find that information on the life-cycle consumption/savings choices made by prior experienced generations of subjects (intergenerational learning) improves the performance of subsequent generations of subjects (in terms of closeness to the optimal path). However, Carbone and Duffy report that social information on the contemporary consumption/savings choices of similarly situated peers (peer-to-peer learning) does not improve performance in the model with (or without) habit in the utility function.
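The mechanics behind the extra saving motive under internal habits can be illustrated with a quick simulation: holding total spending fixed, a consumption path that starts low and rises keeps the ratio c_t/h_t higher than a flat path, because the habit stock lags behind consumption. The functional forms and numbers below are illustrative only and are not those induced in either experiment.

```python
# Sketch: why internal habits reward saving early. Utility is assumed to depend on
# the ratio c_t/h_t with habit stock h_t = alpha*h_{t-1} + c_t; the paths and alpha
# are illustrative, not those induced by Brown, Chua, and Camerer (2009).
import numpy as np

alpha, T = 0.8, 20
flat   = np.full(T, 5.0)                         # spend the same amount every period
rising = np.linspace(3.0, 7.0, T)                # same total spending, tilted late

def ratio_utility(c):
    h, total = 1.0, 0.0
    for ct in c:
        h = alpha * h + ct                       # habit stock accumulates with consumption
        total += np.log(ct / h)                  # period payoff increasing in c_t/h_t
    return total

print("flat path  :", round(ratio_utility(flat), 3))
print("rising path:", round(ratio_utility(rising), 3))
```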

Future experimental research on dynamic, intertemporal consumption/savings plans might explore the impact of other realistic but currently missing features, such as mortality risk, an active borrowing and lending market among agents of different ages, consumption/leisure trade-offs, and the consequences of retirement and social security systems.

2.2 Exponential Discounting and Infinite Horizons

It is common in macroeconomic models to assume infinite horizons, as the representative household is typically viewed as a dynasty, with an operational bequest motive linking one generation with the next. Of course, infinite horizons are not operational in the laboratory, but indefinite horizons are. As we have seen, in experimental studies, these have often been implemented by having a constant probability δ that a sequence of decision rounds continues with another round.12 Theoretically, this practice should induce both exponential discounting of future payoffs at rate δ per round as well as the stationarity associated with an infinite horizon, in the sense that, for any round reached, the expected number of future rounds to be played is always δ + δ^2 + δ^3 + · · · , or, in the limit, δ/(1 − δ). Empirically, there is laboratory evidence that suggests that probabilistic continuation does affect subjects' perceptions of short-run versus long-run incentives as predicted by theory. For instance, Dal Bó (2005) reports lower cooperation in finite-duration experiments in comparison to indefinite-duration experiments having the same expected length. In particular, Dal Bó reports that aggregate cooperation rates are positively correlated with the continuation probability implemented.
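The discounting claim is easy to verify directly. The short Monte Carlo below checks that, with a constant continuation probability δ, the expected number of additional rounds equals δ/(1 − δ).

```python
# Sketch: with a constant continuation probability delta, the expected number of
# additional rounds is delta/(1-delta). A quick Monte Carlo check of that claim.
import random

random.seed(0)
delta, trials = 0.8, 200_000

extra_rounds = 0
for _ in range(trials):
    # count how many additional rounds occur after the current one
    while random.random() < delta:
        extra_rounds += 1

print("simulated expected future rounds:", round(extra_rounds / trials, 3))
print("theoretical delta/(1-delta)     :", round(delta / (1 - delta), 3))
```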

To better induce discounting at rate δ, it seems desirable to have subjects participate

in several indefinitely repeated sequences of rounds within a given session—as opposed

to a single indefinitely repeated sequence—as the former practice provides subjects with the experience that a sequence ends and thus a better sense of the intertemporal rate of discount they should apply to payoffs. A further good practice is to make transparent the randomization device for determining whether an indefinite sequence continues or not, for example, by letting the subjects themselves roll a die at the end of each round using a rolling cup. A difficult issue is the possibility that an indefinite sequence continues beyond the scheduled time of an experimental session. One approach to dealing with this problem is to recruit subjects for a longer period of time than is likely necessary, say, several hours, and inform them that a number of indefinitely repeated sequences of rounds will be played for a set amount of time—say, for one hour following the reading of instructions. Subjects would be further instructed at the outset of the session that after that set amount of time had passed, the indefinite sequence of rounds currently in


play would be the last indefinite sequence of the experimental session. In the event that this last indefinite sequence continued beyond the long period scheduled for the session, subjects would be instructed that they would have to return at a later date and time that was convenient for everyone to complete that final indefinite sequence.

In practice, as we have seen, some researchers feel more comfortable working with finite-horizon models. However, replacing an infinite horizon with a finite horizon may not be innocuous; such a change may greatly alter predicted behavior relative to the infinite-horizon case. For instance, the finite-horizon life-cycle model of the consumption-savings decision greatly increases the extent of the precautionary savings motive relative to the infinite-horizon case. Other researchers have chosen not to tell subjects when a sequence of decision rounds is to end (e.g., Offerman et al. 2001), or to exclude data from the end rounds (e.g., Ule et al. 2009), as a means of gathering data from an approximately infinite horizon. A difficulty with that practice is that the experimenter loses control of subjects' expectations regarding the likely continuation of a sequence of decisions and appropriate discounting of payoffs. This can be a problem if, for instance, the existence of equilibria depends on the discount factor being sufficiently high. Yet another approach is to exponentially discount the payoffs that subjects receive in each round but at some point in the session switch over to a stochastic termination rule (e.g., Feinberg and Husted 1993). A problem with this approach is that it does not implement

the stationarity associated with an infinite horizon.

2.3 Exponential or Hyperbolic Discounting?

Recently, there has been a revival of interest in time-inconsistent preferences with regard to consumption-savings decisions, where exponential discounting is replaced by

a quasi-hyperbolic form, so that the representative agent is viewed as maximizing

u(c_t) + β E_t Σ_{i=1}^{∞} δ^i u(c_{t+i}),

with β < 1. Agents with such preferences are time inconsistent (self-control problems) in that they systematically prefer to reverse earlier decisions, for example, regarding how much they have saved. Thus, a possible explanation for the departures from optimal consumption paths noted before in experimental studies of intertemporal decision making may be that subjects have such present-biased preferences. Indeed, Laibson (1997), O'Donoghue and Rabin (1999), and several others have shown that consumers with such preferences save less than exponential consumers. Although time-inconsistent preferences have been documented in numerous psychological studies (see, e.g., Frederick, Loewenstein, and O'Donoghue (2002) for a survey), the methodology used has often consisted of showing inconsistencies in hypothetical (i.e., unpaid) money-time choices (e.g., Thaler 1981). For example, subjects

are asked whether they would prefer $D now or $D(1 + r) t periods from now, where variations in both r and t are used to infer individual rates of time preference. Recently, nonhypothetical (i.e., paid) money-time choice experiments have been conducted that more carefully respect the time dimension of the trade-off (e.g., Coller, Harrison, and Rutström (2005); Benhabib, Bisin, and Schotter (2010)). These studies cast doubt on the notion that discounting is consistent with either exponential or quasi-hyperbolic


models of discounting. For instance, Benhabib and others report that discount rates appear to vary with both the time delay from the present and the amount of future rewards, in contrast to exponential discounting. However, Coller and others show that in choices between money rewards to be received only in the future, for example, seven days from now versus thirty days from now, variations in the time delay between such future rewards do not appear to affect discount rates, which is consistent with both exponential and quasi-hyperbolic discounting but inconsistent with continuous hyperbolic discounting. Consistent with quasi-hyperbolic discounting, both studies find that a small fixed premium attached to immediate versus delayed rewards can reconcile much of the variation in discount rates between the present and the future and between different future rewards. However, this small fixed premium does not appear to vary with the amount of future rewards (Benhabib et al.) and may simply reflect transaction/credibility costs associated with receiving delayed rewards (Coller et al.), making it difficult to conclude definitively in favor of the quasi-hyperbolic model. Anderson et al. (2008) make a strong case that time preferences cannot be elicited apart from risk preferences. Prior studies on time discounting all presume that subjects have risk-neutral preferences. However, if subjects have risk-averse preferences (concave utility functions), as is typically the case, the implied discount rates from

the binary time-preference choices will be lower than under the presumption of risk

neutrality (linear utility functions). Indeed, Anderson et al. (2008) elicit joint time and risk preferences by having each subject complete sequences of binary lottery choices (of the Holt and Laury (2002) variety) that are designed to elicit risk preferences, as well as sequences of binary time-preference choices that are designed to elicit their discount rates (similar to those in the Coller et al. study). They find that once the risk aversion of individual subjects is taken into account, the implied discount rates are much lower than under the assumption of risk-neutral preferences. This finding holds regardless of whether discounting is specified to be exponential or quasi-hyperbolic or some mixture.
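A stylized calculation shows why allowing for risk aversion lowers implied discount rates. Suppose a subject is indifferent between $100 in 7 days and $110 in 37 days; the background-consumption level and CRRA coefficient below are hypothetical, and the calculation follows the general logic of the joint-elicitation argument rather than the exact Anderson et al. procedure.

```python
# Sketch: why risk aversion lowers implied discount rates. The indifference pair,
# background consumption, and CRRA coefficient are hypothetical illustrations.
import math

t1, t2 = 7 / 365, 37 / 365          # sooner and later payment dates, in years
a1, a2 = 100.0, 110.0               # subject is assumed indifferent between these

def implied_rate(rho, omega):
    def u(x):
        return math.log(x) if abs(rho - 1.0) < 1e-9 else x ** (1 - rho) / (1 - rho)
    # indifference: exp(-r*t1)*u(omega+a1) = exp(-r*t2)*u(omega+a2)
    return math.log(u(omega + a2) / u(omega + a1)) / (t2 - t1)

print("implied annual rate, risk neutral (rho=0)       :", round(implied_rate(0.0, 0.0), 3))
print("implied annual rate, risk averse (rho=0.5, w=100):", round(implied_rate(0.5, 100.0), 3))
```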

Of course, one must use caution in extrapolating from experimental findings on intertemporal decision making to the intertemporal choices made by the representative household, firm, government agencies, or institutions in the macroeconomy. Internal, unaccounted-for factors may bias intertemporal decision making in ways that experimental evidence cannot easily address; for example, election cycles or other seasonal factors may influence decision making in ways that would be difficult to capture in a laboratory setting.

2.4 Expectation Formation

In modern, self-referential macroeconomic models, expectations of future endogenous variables play a critical role in the determination of the current values of those endogenous variables; that is, beliefs affect outcomes, which in turn affect beliefs, which affect outcomes, and so on. Since Lucas (1972) it has become standard practice to

assume that agents’ expectations are rational in the sense of Muth (1961), and indeed

most models are "closed" under the rational expectations assumption. The use of rational expectations to close self-referential models means that econometric tests of these models using field data are joint tests of the model and the rational expectations assumption, confounding the issue of whether the expectational assumption or other aspects of the model are at fault if the econometric evidence is at odds with theoretical predictions. While many tests of rational expectations have been conducted using survey data (e.g., Frankel and Froot 1987), these tests are beset by problems of interpretation,


for example, due to uncontrolled variations in underlying fundamental factors, or to the limited incentives of forecasters to provide accurate forecasts, or to disagreement about the true underlying model or data-generating process. By contrast, in the lab it is possible to exert more control over such confounding factors, to know for certain the true data-generating process, and to implement the self-referential aspect of macroeconomic models.

Early experimental tests of rational expectations involved analyses of subjects' forecasts of exogenous, stochastic processes for prices, severing the critical self-referential aspect of macroeconomic models but controlling for the potentially confounding effects of changes in fundamental factors (e.g., Schmalensee 1976; Dwyer et al. 1993). Later experimental tests involved elicitation of price forecasts from subjects who were simultaneously participants in experimental asset markets that were determining the prices being forecast (Williams 1987; Smith, Suchanek, and Williams 1988). As discussed in the prior handbook surveys by Camerer (1995) and Ochs (1995), many (though not all) of these papers found little support for rational expectations in that forecast errors tended to have nonzero means and were autocorrelated or were correlated with other observables. Further, the path of prices sometimes departed significantly from the rational expectations equilibrium. However, most of these experimental studies involve analyses of price forecasts in environments where there is no explicit mechanism by which forecasts determine subsequent outcomes, as is assumed in forward-looking macroeconomic models. Further, some of these experimental tests (e.g., Smith et al.) involved analyses of price forecasts for relatively short periods of time or in empirically nonstationary environments where trading behavior resulted in price bubbles and crashes, providing a particularly challenging test for the rational expectations hypothesis.
Marimon and Sunder (1993, 1994) recognized the challenge to subjects of both forecasting prices and then using those forecasts to solve complicated dynamic optimization problems. They pioneered an approach that has come to be known as a learning-to-forecast experimental design, another methodology that might be considered "native" to experimental macroeconomics. In their implementation, subjects were asked each period to form inflationary expectations in a stationary overlapping-generations economy. These forecasts were then used as input into a computer program that solved for each individual's optimal, intertemporal consumption/savings decision given that individual's forecast. Finally, via market clearing, the actual price level was determined and therefore the inflation rate. Subjects were rewarded only for the accuracy of their inflation forecasts and not on the basis of their consumption/savings decision, which was, after all, chosen for them by the computer program. Indeed, subjects were not even aware of the underlying overlapping-generations model in which they were operating—instead they were engaged in a simple forecasting game. This learning-to-forecast approach may be contrasted with a "learning-to-optimize" experimental design, wherein subjects are simply called upon to make choice decisions (e.g., consumption/savings) having intertemporal consequences but without elicitation of their forecasts (which are implicit). This is an interesting way of decomposing the problem faced by agents in complex macroeconomic settings so that it does not involve a joint test of rationality in both optimization and expectation formation; indeed, the learning-to-forecast experimental design has become a workhorse approach in experimental macroeconomics—see Hommes (2011) for a comprehensive survey.
More recently, some macroeconomists have come to believe that rational expectations presumes too much knowledge on the part of the agents who reside within these models. For instance, rational expectations presumes common knowledge of


rationality. Further, rational expectations agents know with certainty the underlying model, whereas econometricians are often uncertain of data-generating processes and resort to specification tests. Given these strong assumptions, some researchers have chosen to replace rational expectations with some notion of bounded rationality and ask whether boundedly rational agents operating for some length of time in a known, stationary environment might eventually learn to possess rational expectations from observation of the relevant time-series data (see, e.g., Sargent (1993, 1999) and Evans and Honkapohja (2001) for surveys of the theoretical literature).

Learning to forecast experiments have played a complementary role to the literature

on learning in macroeconomic systems. This literature imagines that agents are boundedly rational in the sense that they do not initially know the model (data-generating process) and behave more as econometricians, using possibly misspecified model specifications for their forecasting rules, which they update in real time as new data become available. In addition to the work of Marimon and Sunder (1993, 1994), this real-time, adaptive expectations approach has been explored experimentally using the learning-to-forecast design by Bernasconi, Kirchkamp, and Paruolo (2006), Hey (1994), Van Huyck, Cook, and Battalio (1994), Kelley and Friedman (2002), Hommes and others (2005, 2007), Heemeijer and others (2009), Bao and others (2012), and Bao, Duffy, and Hommes (2013). The use of the learning-to-forecast methodology has become particularly important in assessing policy predictions using the expectations-based New Keynesian model of the monetary-transmission mechanism in experimental studies by Adam (2007), Pfajfar and Zakelj (2015), Assenza and others (2013), and Petersen (2015), as will be discussed in Section 5.3.

Hommes and others (2007) provide a good representative example of this literature. They consider expectation formation by groups of six subjects operating for a long time (in the laboratory sense)—fifty periods—in the simplest dynamic and self-referential model, the cobweb model.14 In each of the fifty periods, all six subjects are asked to

supply a one-step-ahead forecast of the price that will prevail at time t, p^e_{i,t}, using all available past price data through time t − 1; the forecast is restricted to lie in the interval (0, 10). These price forecasts are automatically converted into supply of the single good via a supply function s(p^e_{i,t}; λ), which is increasing in p^e_{i,t} and has a common parameter λ governing the nonlinearity of the supply function. Demand is exogenous and given by

a linear function D(p_t). The unique equilibrium price p* is thus determined by the market-clearing condition

p_t = D^{−1}( Σ_{i=1}^{6} s(p^e_{i,t}) );

that is, the price is completely determined by subjects' price forecasts. However, Hommes and others add a small shock to exogenous demand, which implies that prices should evolve according to p_t = p̄_t + ε_t, where p̄_t denotes the market-clearing price above and ε_t ∼ N(0, σ_ε²). Thus, under rational expectations, all forecasters should forecast the same price, p*. In the new learning view of rational expectations, it is sufficient that agents have access to the entire past history of prices for learning of the rational-expectations solution to take place. Consistent with this view, Hommes and others do not inform subjects of the market-clearing process by which prices are determined. Instead, subjects are simply engaged in forming accurate price forecasts, and individual payoffs are a linearly decreasing function of the quadratic loss (p_t − p^e_{i,t})². The main treatment variable consists of variation in the supply function parameter λ, which affects the stability of the cobweb model under the assumption



Figure 1.2: Actual prices (top) and autocorrelations (bottom) from three representative sessions of the three treatments of Hommes et al. (2007): strongly unstable, unstable, and stable equilibrium under naive expectations.

of naive expectations (following the classic analysis of Ezekiel (1938)). The authors consider three values for λ, for which the equilibrium is stable, unstable, or strongly unstable under naive expectations.15 Their assessment of the validity of the rational-expectations assumption is based on whether market prices are biased (looking at the mean), whether price fluctuations exhibit excess volatility (looking at the variance), and whether realized prices are predictable (looking at the autocorrelations).
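The role of λ can be illustrated by iterating the cobweb map under naive expectations. The demand and supply functions below are illustrative stand-ins (not the exact specifications used by Hommes and others); they share the key feature that the slope parameter lam governs whether the unique equilibrium price (here p* = 6) is stable, unstable, or strongly unstable.

```python
# Sketch: cobweb price dynamics under naive expectations for different supply slopes.
# Hypothetical demand D(p) = 24 - 3p and supply s(p) = tanh(lam*(p - 6)) + 1 per
# forecaster; six forecasters, so total supply is 6*s(p^e). Illustration only.
import math

def simulate(lam, p0=5.0, periods=12):
    prices, p = [], p0
    for _ in range(periods):
        p_forecast = p                           # naive expectations: forecast last price
        supply = 6 * (math.tanh(lam * (p_forecast - 6.0)) + 1.0)
        p = (24.0 - supply) / 3.0                # invert linear demand D(p) = 24 - 3p
        prices.append(round(p, 2))
    return prices

for lam, label in [(0.25, "stable"), (0.6, "unstable"), (1.5, "strongly unstable")]:
    print(f"lam={lam} ({label}):", simulate(lam))
```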

Figure 1.2 shows a representative sample of prices and the autocorrelation of these prices from the three representative groups operating in the three different treatment conditions. This figure reveals the main finding of the study, which is that in all three treatments, the mean price forecast is not significantly different from the rational expectations value, though the variance is significantly greater than the rational expectations value, σ_ε² = 0.25 (there is excess volatility), in the unstable and strongly unstable cases. Even more interesting is the finding that the autocorrelations are not significantly different from zero (5% bounds are shown in the figures) and there is no predictable structure to these autocorrelations. The latter finding suggests that subjects are not behaving in an irrational manner, in the sense that there are no unexploited opportunities for improving price predictions. This finding is somewhat remarkable given the limited information subjects had regarding the model generating the data, though coordination on the rational expectations equilibrium was likely helped by having a unique equilibrium and a limited price range (0, 10).

Adam (2007) uses the learning-to-forecast methodology in the context of the two-equation, multivariate New Keynesian "sticky price" model that is a current workhorse of monetary policy analysis (e.g., Woodford 2003).16 In a linearized version of that model, inflation, π_t, and output, y_t, are determined by a system of expectational difference equations in which the one-step-ahead inflation forecast π^e_{t+1} and a shock term cv_t enter.
