Ebook: Experimental Business Research: Marketing, Accounting and Cognitive Perspectives (Volume III), Part 2

Chapter title: Exploring Ellsberg's Paradox in Vague-Vague Cases
Authors: Karen M. Kramer, Edward Mines Jr., David V. Budescu
Institution: University of Illinois, Urbana-Champaign
Document type: research paper
Year: 2005

Continuing from part 1, part 2 of the ebook Experimental Business Research: Marketing, Accounting and Cognitive Perspectives (Volume III) covers: exploring Ellsberg's paradox in vague-vague cases; overweighing recent observations: experimental results and economic implications; cognition in spatial dispersion games; cognitive hierarchy; partition dependence in decision analysis, resource allocation, and consumer...


EXPLORING ELLSBERG'S PARADOX

We explore a generalization of Ellsberg's paradox to the Vague-Vague (V-V) case, where neither of the probabilities (urns) is specified precisely, but one urn is always more precise than the other. We present results of an experiment explicitly designed to study this situation. The paradox was as prevalent in the V-V cases as in the standard Precise-Vague (P-V) cases. The paradox occurred more often when differences between ranges of vagueness were large. Vagueness avoidance increased with midpoint for P-V cases, and decreased for V-V cases. Models that capture the relationships between vagueness avoidance and observable gamble characteristics (e.g., differences of ranges) were fitted.

Key words: Ellsberg's paradox, ambiguity avoidance, vagueness avoidance, vague probabilities, imprecise probabilities, probability ranges, logit models

Over eighty years ago Knight (1921) and Keynes (1921) independently distinguished between the problems of choice under uncertainty and ambiguity. Forty years later, Ellsberg (1961) demonstrated the relevance of this distinction with the following simple problem: A Decision-Maker (DM) has to bet on one of two urns containing balls of two colors, say Red and Blue. The composition (proportions of the two colors) of one urn is known, but the composition of the other urn is completely unknown. Imagine that one of the colors (Red or Blue) is arbitrarily made more desirable, simply by associating it with a positive prize of size $x. If DMs are asked to choose one urn when each color is more desirable, many are more likely to select the urn with known content for both colors and "avoid ambiguity." This pattern of choices violates Subjective Expected Utility Theory (SEUT), and this tendency is widely known as the "(two-color) Ellsberg's paradox."
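The inconsistency can be verified with a short calculation. The sketch below (a minimal illustration, not code from the chapter; the function name is hypothetical) shows that no single subjective probability p of Red in the unknown urn is compatible with strictly preferring the known 50/50 urn on both presentations:

```python
# Sketch: why choosing the known urn (50 red / 50 blue) for BOTH target
# colors is inconsistent with Subjective Expected Utility Theory (SEUT).
# Under SEUT the DM acts as if the unknown urn holds Red with some fixed
# subjective probability p, so Pr(Blue) = 1 - p.

def seut_consistent(p):
    """True if a DM with subjective Pr(Red) = p could strictly prefer the
    known urn (Pr = 0.5) both when Red pays and when Blue pays."""
    prefers_known_for_red = 0.5 > p        # betting on Red: known urn better
    prefers_known_for_blue = 0.5 > 1 - p   # betting on Blue: known urn better
    return prefers_known_for_red and prefers_known_for_blue

# No p in [0, 1] satisfies both strict preferences simultaneously:
no_consistent_p = not any(seut_consistent(p / 100) for p in range(101))
```

Since p < 0.5 and p > 0.5 cannot both hold, the modal choice pattern has no SEUT rationalization.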

R. Zwick and A. Rapoport (eds.), Experimental Business Research, Vol. III, 131-154.
© 2005 Springer. Printed in the Netherlands.

The most common and appealing explanation of Ellsberg's paradox (e.g., Camerer and Weber, 1992) is that it is due to "ambiguity (or, in our terms, vagueness) aversion." The logic of this explanation is straightforward and compelling: if, within each pair, most DMs choose the more precise urn, the modal pattern of joint choices (across the two replications when Red or Blue are the target colors) would, necessarily, lead to the paradox. Various psychological explanations have been offered for the subjects' preference for the more precise urn. Subjects may simply choose the urn about which they have more knowledge and information (Edwards, cited in Roberts, 1963, footnote 4; Baron and Frisch, 1994; Keren and Gerritsen, 1999). The different levels of information may induce various levels of competence (Heath and Tversky, 1991). Other, more complex, explanations rely on perception of "hostile nature" (Yates and Zukowski, 1976; Keren and Gerritsen, 1999), anticipation of evaluation by others (Ellsberg, 1963; Fellner, 1961; Gärdenfors, 1979; Knight, 1921; MacCrimmon, 1968; Roberts, 1963; Toda and Shuford, 1965; Slovic and Tversky, 1974), self-evaluation (Ellsberg, 1963; Roberts, 1963; Toda and Shuford, 1965), perception of competition (Kühberger and Perner, 2003), and others (see reviews by Camerer and Weber, 1992 and Curley, Yates, and Abrams, 1986). Curley et al. (1986) tested some of these theories empirically and suggested that "evaluation by others" is the most promising for future research on the phenomenon's psychological rationale. Regardless of the underlying psychological reason(s), Ellsberg's paradox has become almost synonymous with vagueness avoidance. In fact, most empirical research has focused on single choices between pairs of gambles varying in their precision, and only very few studies (e.g., MacCrimmon and Larsson, 1979) have actually replicated the full paradoxical pattern across two choices.

Many researchers have tried to model the behavior underlying this paradox (see Camerer and Weber, 1992 for a comprehensive review, and Becker and Brownson, 1964; Curley and Yates, 1985, 1989; Einhorn and Hogarth, 1986, for typical studies). Most of this research has used Precise-Vague (P-V) cases, where the probabilities of the two colors in one urn are known precisely, but the probabilities in the other urn are vague (specified imprecisely). This work has identified some of the factors and conditions that contribute to the intensity of the preference for precision. For example, Einhorn and Hogarth (1986) used probability prediction, insurance pricing, and warranty pricing tasks to show vagueness avoidance at moderate to high probabilities of gains, and vagueness seeking for low probabilities of gains. Kahn and Sarin (1988) and Hogarth and Einhorn (1990) confirmed these results.

An interesting trend in the literature has been the extension of the paradox to new, more general, situations. It is possible to show that the paradoxical pattern of choices is obtained when the vagueness in the second urn is only partial, i.e., when the DM knows that Pr(Red) ≥ x and Pr(Blue) ≥ y, s.t. 0 ≤ x, y ≤ 1, but (x + y) < 1. This implies that x ≤ Pr(Red) ≤ (1 - y), i.e., Pr(Red) is within a range of size R = (1 - x - y) centered at M = (1 + x - y)/2. Similarly, y ≤ Pr(Blue) ≤ (1 - x), i.e., in a range of size R = (1 - x - y) centered at M = (1 + y - x)/2. The current study follows this trend by extending the paradox to Vague-Vague (V-V) cases, where the composition of both urns is only partially specified. Typically, the range of possible probabilities in one urn is narrower than the range of the second urn, but both ranges share the same central value. Thus, Pr(Red|Urn I) ≥ x1, Pr(Blue|Urn I) ≥ y1, Pr(Red|Urn II) ≥ x2, and Pr(Blue|Urn II) ≥ y2, subject to the constraints: 0 ≤ x1, y1, x2, y2 ≤ 1, (x1 + y1) < 1, (x2 + y2) < 1. Furthermore, |x1 - y1| = |x2 - y2|, but R1 = (1 - x1 - y1) ≠ R2 = (1 - x2 - y2). In other words, x1 ≤ Pr(Red|Urn I) ≤ (1 - y1) and x2 ≤ Pr(Red|Urn II) ≤ (1 - y2), and the common midpoint of both ranges is M = (1 + x1 - y1)/2 = (1 + x2 - y2)/2.
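The bookkeeping above can be sketched in a few lines of Python (a hypothetical helper, not from the chapter): given the lower bounds x = min Pr(Red) and y = min Pr(Blue) for one urn, it returns the implied range width R and midpoint M.

```python
# An urn is described by lower bounds x = min Pr(Red) and y = min Pr(Blue),
# which imply x <= Pr(Red) <= 1 - y: a range of width R = 1 - x - y
# centered at M = (1 + x - y) / 2.

def range_and_midpoint(x, y):
    assert 0 <= x and 0 <= y and x + y <= 1
    width = 1 - x - y
    midpoint = (1 + x - y) / 2
    return width, midpoint

# Urn I:  Pr(Red) >= 0.45, Pr(Blue) >= 0.45  ->  0.45 <= Pr(Red) <= 0.55
# Urn II: Pr(Red) >= 0.30, Pr(Blue) >= 0.30  ->  0.30 <= Pr(Red) <= 0.70
r1, m1 = range_and_midpoint(0.45, 0.45)
r2, m2 = range_and_midpoint(0.30, 0.30)
# Both urns share the midpoint 0.5 but differ in precision (R1 < R2),
# which is exactly the V-V configuration studied in this chapter.
```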

The effects of vagueness in P-V cases are relatively well understood (see, for example, the list of stylized facts in Camerer and Weber's 1992 review), but the V-V case is more complicated. Becker and Brownson (1964) found inconsistencies when they tried to relate vagueness avoidance to differences in the ranges of vague probabilities, and Curley and Yates' studies (1985, 1989) were inconclusive with regard to the presence and intensity of vagueness avoidance in V-V cases. Curley and Yates (1985) examined the choices subjects made in the P-V and V-V cases as a function of the width(s) of the range(s) and the common midpoint of the range of probabilities. They showed that people were more likely to be vagueness averse as the midpoint increased in P-V cases, but not in V-V cases. Neither vagueness seeking nor avoidance was the predominant behavior for midpoints below 40. The range difference between the two urns was not sufficient for explaining the degree of vagueness avoidance, and no effect of the width of the range was found in preference ratings over the pairs of lotteries.

Undoubtedly, the range difference (wider range minus narrower range) is the most salient feature of pairs of gambles with a common midpoint, and one would expect this factor to influence the degree of observed vagueness avoidance. Range difference captures the relative precision of the two urns, and DMs who are vagueness averse are expected to choose the more precise urn more often. In fact, it is sensible to predict a positive monotonic relationship between the relative precision of a pair of urns and the intensity of vagueness avoidance displayed. It is surprising that Curley and Yates could not confirm this expectation. We will consider this prediction in more detail in the current study.

However, the relative precision of a given pair cannot fully explain the DM's preferences in the V-V case. Consider, for example, the following three urns: Urn A: 0.45 ≤ p ≤ 0.55; Urn B: 0.30 ≤ p ≤ 0.70; Urn C: 0.15 ≤ p ≤ 0.85, where p is the probability of the desirable event (Red or Blue ball). All urns have a common midpoint (0.5) but vary in their (im)precision: Urn A has a range of 0.10, Urn B has a range of 0.40, and Urn C spans a range of 0.70. Imagine that a DM has to choose between A and B, and between B and C. In both pairs the range difference (relative precision) is the same (0.30), but vagueness avoidance is expected to be stronger for the A, B pair, because most people would prefer the higher certainty associated with A. If, on the other hand, there is a fair amount of vagueness in both urns, people may feel that vagueness is unavoidable, and may focus their attention on other features. For example, they may notice that, in the best possible case, Urn C offers a very high probability (0.85) of the desirable event. This shift of attention may reduce the tendency to avoid vagueness and may lead to indifference or vagueness seeking.
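The three-urn example can be put in numbers. The sketch below (illustrative only; the helper name is hypothetical) computes the two features discussed here, relative precision and minimal imprecision, for the A-B and B-C pairs:

```python
# The three urns from the text, as (lower, upper) probability bounds.
urns = {"A": (0.45, 0.55), "B": (0.30, 0.70), "C": (0.15, 0.85)}

def pair_features(bounds_1, bounds_2):
    """Relative precision = range difference (wider - narrower);
    minimal imprecision = width of the more precise urn's range."""
    w1 = bounds_1[1] - bounds_1[0]
    w2 = bounds_2[1] - bounds_2[0]
    narrow, wide = min(w1, w2), max(w1, w2)
    return {"relative_precision": round(wide - narrow, 2),
            "minimal_imprecision": round(narrow, 2)}

ab = pair_features(urns["A"], urns["B"])
bc = pair_features(urns["B"], urns["C"])
# Same relative precision (0.30) but different minimal imprecision
# (0.10 vs. 0.40), which is why stronger vagueness avoidance is
# predicted for the A-B pair.
```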

This example highlights the importance of the more precise urn in the pair. The range width of probabilities in this urn represents the greatest possible precision (an upper bound on it), which is what most DMs tend to seek (Becker and Brownson, 1964). We refer to this value as the pair's minimal imprecision. We predict that, everything else being equal, vagueness avoidance should increase as the minimal imprecision decreases. Conversely, as minimal imprecision increases (i.e., as the more precise urn becomes more vague), we should observe more instances of indifference between the two urns, and an increased tendency of vagueness preference.

The P-V pairs represent a special case in which the minimal imprecision is always 0. Thus, only considerations of relative precision are relevant for these choices. Otherwise, the level of vagueness avoidance depends on both minimal imprecision and relative precision. But the two factors are negatively correlated; thus, one is unlikely to encounter large levels of relative precision in cases with large minimal imprecision. For example, if the more precise urn in a pair has a high minimal imprecision, say 0.70, the relative precision cannot exceed 0.30. On the other hand, if the more precise urn in the pair has a low minimal imprecision, say 0.20, the relative precision can be as high as 0.80. In general, Max(Relative Precision) ≤ (1 - Minimal Imprecision), or Max(Minimal Imprecision) ≤ (1 - Relative Precision). One factor that constrains the minimal imprecision (and, indirectly, the relative precision) in a pair is the midpoint of the range. Note that for any urn, Max(Minimal Imprecision) ≤ 2 × Min{M, (1 - M)}, where M is the midpoint of the range, subject to 0 ≤ M ≤ 1. Thus, the effects of the two types of (im)precision may interact with the midpoint of the pair.
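The two constraints can be checked numerically. The sketch below (a hypothetical verification, not from the chapter) enumerates every feasible range width for an urn with a given midpoint and confirms both inequalities on a 1% grid:

```python
def feasible_widths(midpoint, step=0.01):
    """All range widths an urn with this midpoint can have, given that
    the interval [M - w/2, M + w/2] must stay inside [0, 1]."""
    cap = 2 * min(midpoint, 1 - midpoint)   # Max(Minimal Imprecision)
    n = int(round(cap / step))
    return [i * step for i in range(n + 1)]

def constraints_hold(midpoint):
    widths = feasible_widths(midpoint)
    # Width cap: no urn can be wider than 2 * Min{M, 1 - M}.
    ok_cap = max(widths) <= 2 * min(midpoint, 1 - midpoint) + 1e-9
    # For any pair: relative precision <= 1 - minimal imprecision.
    ok_pair = all(w - n <= 1 - n + 1e-9
                  for n in widths for w in widths if w >= n)
    return ok_cap and ok_pair

all_ok = all(constraints_hold(m) for m in (0.2, 0.5, 0.8))
```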

Choices in the V-V case can be summarized by the following reasonable scenario: DMs identify and focus first on the more precise urn. If it is "sufficiently precise" and/or "substantially more precise" than the other member of the pair, DMs are most likely to choose it. If, however, the narrower-range urn is neither "sufficiently precise" nor "substantially more precise" than the other member of the pair, DMs may be indifferent between the urns, and in some cases they may be tempted to favor the less precise urn. Choices in the P-V case reflect only considerations of relative precision. This qualitative description avoids the difficult questions of what exactly constitutes "sufficient precision", what is considered "substantially more precise", and what is the relative salience of these two factors. We will address these issues in more detail when we fit quantitative models to the tendency to avoid vagueness.

A good portion of the literature on choice under vagueness focuses on the ranges of the two urns, and a good deal of the experimental work (e.g., Curley and Yates, 1985; Yates and Zukowski, 1976) has studied the effects of the ranges, R_i (i = 1, 2), and midpoints, M_i (i = 1, 2), on DMs' choices. Consistent with this approach, our models will also emphasize the midpoint, relative precision, and minimal imprecision of the pair, where the latter two factors are defined by the ranges of probabilities of the two urns.

1. CURRENT STUDY

The purpose of the present study is to examine DMs' choices in the presence of vagueness, and their tendency to succumb to Ellsberg's paradox in the domain of gains. We will be especially concerned with the V-V case, where both lotteries are imprecise, and will contrast it with the choices in the "standard" P-V case, using a design similar to the one used by Curley and Yates (1985). We will, however, use a much larger number of V-V pairs covering more ranges at three different midpoints. The subjects' choices in each pair will be classified as vagueness seeking, vagueness avoiding, or indifferent to vagueness, and the proportions of vagueness avoidance choices will be analyzed as a function of the pairs' minimal imprecision, relative precision, and their common midpoint.

As indicated earlier, vagueness avoidance is expected to increase with relative precision and with reduction in minimal imprecision. There is empirical evidence that the intensity of vagueness avoidance increases with midpoint (Curley and Yates, 1985; Einhorn and Hogarth, 1986), and the midpoint may interact with the two precision measures of a pair. For example, we expect pairs with low midpoints to induce less vagueness avoidance than pairs with high midpoints. In addition, if the more precise urn's range is closer to the other urn's range, people are expected to feel more indifferent between the urns (and possibly be more vagueness seeking). For low midpoints, this behavior may appear at greater values of relative precision and smaller values of minimal imprecision than for other midpoints.

In our experiment we present each pair of urns twice, and make a different event (i.e., marble color) the "target" (i.e., the more desirable one) on each presentation. This allows us to analyze the subjects' choices not only in terms of their attitude to (im)precision on each trial but also in terms of the emerging response patterns when matched pairs are considered simultaneously. These patterns are (a) the classical Ellsberg's paradox (choosing the more precise urn twice); (b) the reversed paradox (choosing the more vague urn twice); (c) consistency (choosing different urns on the two occasions); (d) indifference on both occasions; and (e) weak indifference (being indifferent on one occasion and exhibiting a clear preference on the other).
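The five patterns follow mechanically from the two single-trial choices. A minimal Python sketch of the classification (the function name and choice codes are illustrative, not from the chapter):

```python
# Classify one subject's two choices on a matched pair. Each choice is
# 'VA' (picked the more precise urn), 'VS' (picked the vaguer urn), or
# 'I' (expressed indifference).

def classify(red_choice, blue_choice):
    pair = {red_choice, blue_choice}
    if pair == {"VA"}:
        return "classic paradox"        # more precise urn chosen twice
    if pair == {"VS"}:
        return "reverse paradox"        # vaguer urn chosen twice
    if pair == {"I"}:
        return "indifference"           # indifferent on both occasions
    if pair == {"VA", "VS"}:
        return "consistency"            # different urns: conforms to SEUT
    return "weak indifference"          # exactly one choice is 'I'
```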

Thus, the experiment verifies the presence of the paradoxical pattern in the V-V case, and compares its prevalence with the P-V case. The prevalence of the paradox will be analyzed as a function of the midpoint, range widths, and/or range differences. In general, we expect the factors that induce higher levels of vagueness avoidance to also increase the frequency of the paradoxical pattern, but an intriguing question that was never fully examined is whether the occurrence of the paradox can be predicted precisely from the subjects' attitudes towards precision. We expect Ellsberg's paradox to be the modal, but not the universal, pattern. In those cases where the paradox does not occur, we predict different patterns as a function of the common midpoint. We expect subjects to exhibit more indifference for pairs with a midpoint of 50, where it is easier and more natural to imagine either symmetric distributions of probabilities (Ellsberg, 1963, footnote 8) and/or a greater number of possible distributions (Ellsberg, 1961; Roberts, 1963), than with extreme midpoints. On the other hand, we expect subjects to be consistent with SEUT more often with extreme midpoints, where the imagined distributions are more likely to be asymmetric and to be skewed in opposite directions.

2. METHOD

Subjects: Subjects were 107 undergraduates registered in an introductory psychology class at the University of Illinois in Urbana-Champaign. They received an hour of credit for participation, and had a chance to win additional money at the end of the experiment.

Stimuli: The subjects saw representations of 63 different pairs of urns. The colors of the marbles in the two urns were red and blue. The pairs varied in terms of the (common) midpoint and the ranges of values in each urn. Fifteen pairs had a midpoint of 20, fifteen pairs had a midpoint of 80, and thirty-three pairs had a midpoint of 50. Throughout the paper the midpoint is equivalent to the "expected" number of red marbles (and 100 minus the midpoint to the "expected" number of blue marbles) in each urn under a uniform distribution. Six different range widths were used with a midpoint of 20 or 80 (0, 2, 20, 30, 38, 40), and ten ranges were used with a midpoint of 50 (0, 2, 20, 30, 38, 40, 50, 80, 98, 100).

Two groups of subjects were recruited. In one group (80 subjects) the urn with the narrower range was always presented on the left; in the second group (27 subjects) the placement of the urn with the narrower range was randomly determined on every trial. Our analysis did not indicate any position effect, so the data from both groups were combined.

Procedure: Subjects were run individually on personal computers in a lab. In the first part of the experiment, each of the 63 pairs was presented twice. In one presentation the desirable outcome was associated with the acquisition of a red marble. In the other presentation, the desirable outcome was associated with the acquisition of a blue marble. The 126 pairs were presented, one at a time, in a different randomized order for each subject. For each pair the subjects had to decide whether to select Urn I, Urn II, or either urn (i.e., express indifference). Figure 1 shows an example of the display for a midpoint of 20 (which is equal to a blue midpoint of 80).

Before the experiment, subjects were told that two pairs would be randomly selected and played at the conclusion of the experiment, and that if they had selected "either urn" a coin toss would determine the urn choice. These instructions encouraged subjects to choose one urn, yet allowed them the opportunity to express indifference if truly desired.

In the second part of the experiment, the same 63 pairs were presented in random order and subjects were asked to indicate, on a scale from 1-7, how dissimilar the contents of the two urns were. These judgments were used to examine the subjects' subjective perceptions of the urns. The results of this (multidimensional scaling) analysis indicated a high similarity of the subjectively scaled values to the actual stated values, so further discussion of these findings is unnecessary.

Figure 1. Example of a choice trial, red midpoint = 20. Actual colors were used with the words in the urn depictions. The response options shown were Urn I, either urn, and Urn II.

On average, subjects completed the experiment in approximately 30 minutes. At the conclusion of the experiment, a pair of urns was chosen, and the subjects' choices for each color were noted. To determine the subject's payoff, this pair of urns was prepared by placing 100 red and blue marbles in each urn. A random number generator, which used a uniform distribution over the relevant ranges of values, was used to determine the number of red marbles in the two urns. A marble was removed from the urn the subject (or the coin) selected. If the color of the selected marble matched the target color, the subject won $3. Otherwise, the subject did not receive any money. Twenty-one subjects received $0, 59 gained $3, and 27 gained $6 (average payoff = $3.17).
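The payoff procedure can be sketched as a small simulation (an illustrative reconstruction under stated assumptions, not the experiment's actual software; the function name and range are hypothetical):

```python
import random

def play_trial(chosen_range, target_is_red, prize=3, rng=random):
    """One payoff trial: the number of red marbles (out of 100) is drawn
    uniformly over the chosen urn's stated range, one marble is sampled,
    and a match with the target color pays the prize."""
    lo, hi = chosen_range                   # e.g. (30, 70) red marbles
    n_red = rng.randint(lo, hi)             # uniform over the stated range
    drew_red = rng.randrange(100) < n_red   # draw one of the 100 marbles
    return prize if drew_red == target_is_red else 0

random.seed(0)
payoffs = [play_trial((30, 70), target_is_red=True) for _ in range(10_000)]
# With a midpoint of 50, the long-run chance of winning is about one half,
# so the average payoff hovers around $1.50 per trial.
```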

3. RESULTS

Ellsberg's paradox refers to an inconsistent pattern of revealed preference in two related choice problems. The first section of the analysis will focus on the intensity of the paradoxical pattern in these joint choices. It is common to attribute the paradoxical pattern to the subjects' tendency to avoid the more vague of the two gambles. Of course, this avoidance of vagueness can only be observed directly in a single choice, between gambles that vary only with respect to their imprecision. The second part of the analysis will focus on these choices and will model subjects' propensity to choose the more precise gamble within a pair.

3.1 Analysis of joint choice patterns

Distribution of responses: For any given pair of urns there are nine distinct possible responses that can be classified into five patterns: classic paradox (CP), reverse paradox (RP), indifference (I), consistency (C), and weak indifference (WI). Indifference and consistency conform with SEUT. Weak indifference does not allow an unequivocal test of the paradox. All the patterns are illustrated in Table 1.

Table 1. The possible patterns of joint selection for any given pair

                Blue: VA                  Blue: I                   Blue: VS
Red: VA         Classic Paradox (CP)      Weak Indifference (WI)    Consistency (C)
Red: I          Weak Indifference (WI)    Indifference (I)          Weak Indifference (WI)
Red: VS         Consistency (C)           Weak Indifference (WI)    Reverse Paradox (RP)

Note: VA = vagueness avoidance, I = indifference, VS = vagueness seeking

The distribution of responses was determined for each pair across all subjects and was compared to the expected distribution under the null hypothesis of random responses using χ² tests. All the χ² values had right-hand p-values less than .05, and 61 (97%) had p-values less than .01. Thus, we reject the possibility that subjects' choices were random.
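The test in question is the standard goodness-of-fit chi-square against a uniform null over the nine responses. A self-contained sketch (the counts below are made up for illustration, not the study's data; .05 critical value for 8 degrees of freedom assumed to be 15.507):

```python
# Chi-square test of one pair's response counts against random responding.
observed = [60, 10, 15, 9, 4, 3, 2, 2, 2]      # hypothetical counts, 9 responses
expected = sum(observed) / len(observed)        # uniform null: equal cells

chi_sq = sum((o - expected) ** 2 / expected for o in observed)
CRIT_05_DF8 = 15.507                            # chi-square .05 cutoff, df = 8
rejects_random = chi_sq > CRIT_05_DF8           # True for these counts
```

A value far above the cutoff, as here, rejects the hypothesis of random choice for that pair.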

The distributions of choices over the nine patterns for P-V and V-V cases and for all midpoints are summarized in the various panels of Table 2. Panels 1-3 contain information for each midpoint separately; panel 4 is a subset of panel 2 that contains information for a midpoint of 50 but only for those ranges that were also used for the midpoints 20 and 80. Finally, panel 5 is a summary across all midpoints based on the subset of common ranges (i.e., panels 1, 3 and 4).

The marginal distributions (the last row and column in the table, labeled Total) document the predominance of vagueness avoidance for each color and each midpoint, for P-V and V-V cases. They also reveal a greater tendency of vagueness seeking than indifference for the extreme midpoints (20 and 80), and a reversed trend (more indifference than vagueness seeking) for the midpoint of 50.

Table 2. Percentages of each pattern for the P-V and V-V cases, by midpoint

2.1 Red Midpoint = 20; N = 535 (P-V), N = 1070 (V-V)

                        Red
Blue             VA       I      VS    Total
VA     P-V    33.60    9.20   24.90    67.70
       V-V    28.50    6.20   13.40    48.10
I      P-V     3.70    8.60    2.80    15.10
       V-V     6.20    8.20    2.10    16.50
VS     P-V     7.10    1.90    8.20    17.20
       V-V    18.30    4.60   12.50    35.40
Total  P-V    44.40   19.70   35.90   100.00
       V-V    53.00   19.00   28.00   100.00

2.2 Red Midpoint = 50 (includes all pairs); N = 963 (P-V), N = 2568 (V-V)

                        Red
Blue             VA       I      VS    Total
VA     P-V    40.90    7.10    7.40    55.40
       V-V    38.00    5.20    7.70    50.90
I      P-V     5.40   16.50    3.30    25.20
       V-V     5.80   18.50    3.50    27.80
VS     P-V     6.10    3.10   10.20    19.40
       V-V     7.90    3.50   10.00    21.40
Total  P-V    52.40   26.70   20.90   100.00
       V-V    51.70   27.20   21.20   100.00

2.3 Red Midpoint = 80

                        Red
Blue             VA       I      VS    Total
VA     P-V    32.70    5.80   11.70    50.20
       V-V    30.50    6.90   18.80    56.20
I      P-V     7.30    9.20    2.10    18.60
       V-V     4.70    8.70    3.20    16.60
VS     P-V    20.00    2.20    9.00    31.20
       V-V    13.40    2.50   11.40    27.30
Total  P-V    60.00   17.20   22.80   100.00
       V-V    48.60   18.10   33.40   100.00

2.4 Red Midpoint = 50 (including only ranges used for all midpoints); N = 535 (P-V), N = 1070 (V-V)

                        Red
Blue             VA       I      VS    Total
VA     P-V    38.70    6.70    7.20    52.60
       V-V    33.40    5.50    7.10    46.00
I      P-V     6.00   17.80    3.00    26.80
       V-V     6.80   20.90    4.40    32.10
VS     P-V     6.90    3.00   10.70    20.60
       V-V     7.00    3.70   11.10    21.80
Total  P-V    51.60   27.50   20.90   100.00
       V-V    47.20   30.10   22.60   100.00

2.5 All red midpoints, with only comparable pairs (panels 2.1 + 2.3 + 2.4)

                        Red
Blue             VA       I      VS    Total
VA     P-V    35.00    7.20   14.70    56.90
       V-V    30.80    6.20   13.10    50.10
I      P-V     5.70   11.80    2.60    20.10
       V-V     5.90   12.60    3.20    21.70
VS     P-V    11.30    2.40    9.30    23.00
       V-V    12.90    3.60   11.70    28.20
Total  P-V    52.00   21.40   26.60   100.00
       V-V    49.60   22.40   28.00   100.00

Note: VA = vagueness avoidance, I = indifference, VS = vagueness seeking

The distribution of the five general patterns for P-V and V-V cases is displayed in Figure 2. There is some slight variation across midpoints but, in general, the classic paradox was the most prevalent pattern, and the reverse paradox was the least prevalent one. As predicted, indifference was almost twice as prevalent for a midpoint of 50 as for the other two midpoints. Conversely, consistency was twice as frequent for extreme midpoints as for the midpoint of 50. In general, the results for P-V and V-V pairs were highly similar.

Figure 2. Distribution of the five general patterns for P-V and V-V cases, by midpoint (20, 50, 80). Counts from the figure legend: classic paradox n = 562, consistency n = 417, weak indifference n = 287, indifference n = 190, reverse paradox n = 375.

Consider again Table 2, which summarizes all choices and patterns. The margins documented the predominance of vagueness avoidance, and the upper left cell (VA, VA; e.g., 33.60 and 28.50 in Table 2.1) in every sub-table indicated that the classic paradox was the modal pattern. A natural question is whether the frequency of the paradox can be predicted exclusively from the subjects' global tendency to choose the more precise lottery. In other words, is Pr(Classic Paradox) = Pr(VA|Red) × Pr(VA|Blue)? Surprisingly, the answer is negative! In fact, in all tables the paradox occurred more frequently than one would predict from independent vagueness avoidance choices (overall, 5.83% above expectation). Conversely, the indifferent pattern and the reverse paradox were under-predicted by the marginal distributions (by 7.67% and 3.60%, respectively). Clearly, the rate of the various patterns (e.g., CP) was not driven exclusively by a constant tendency to avoid/prefer vagueness. The intensity of this tendency varied as a function of various features of the gambles. The rest of this paper is devoted to modeling the effects of these features on the intensity of vagueness avoidance.
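The independence check is simple arithmetic. Using the P-V margins of Table 2.1 (midpoint 20) as a worked example:

```python
# If vagueness avoidance on the two presentations were independent,
# Pr(CP) would equal Pr(VA | Red) * Pr(VA | Blue).
pr_va_red = 0.4440      # "Total" VA column margin for Red, Table 2.1 (P-V)
pr_va_blue = 0.6770     # "Total" VA row margin for Blue, Table 2.1 (P-V)

predicted_cp = pr_va_red * pr_va_blue   # about 0.3006
observed_cp = 0.3360                    # upper-left (VA, VA) cell, Table 2.1 (P-V)
excess = observed_cp - predicted_cp     # about 0.035 above the prediction
```

The observed rate exceeds the independence prediction, matching the text's conclusion that the paradox occurs more often than independent VA choices would imply.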

Log-linear models of the joint patterns: The frequency of each of the five patterns in Figure 2 was tabulated as a function of the urns' midpoint and their relative precision. Log-linear models were fit to each pattern, to determine the effect of the two factors on the observed frequency of the target pattern. The saturated model is:

ln(f_ij) = λ + λ_M(i) + λ_D(j) + λ_MD(ij)    (1)

where M is the Midpoint effect, D is the range Difference effect, and MD is the interaction of these effects. Reduced models are defined by constraining some of the parameters to equal 0. The fits of reduced versions of model (1) for the classic paradox are presented in Table 3, separately for the P-V and V-V pairs. For each case we show the frequencies being modeled, as well as the results of the model fits. For each model, we report the degrees of freedom (df), the likelihood ratio (G²), and the ratio G²/df. Usually, a model's goodness of fit is tested by comparing G² with its asymptotic sampling distribution (χ²). In this situation, that would be inappropriate because the observations are not independent, as required for a valid application of this test. An alternative procedure is to use the ratio G²/df as a descriptive measure of the fit of a model. In general, the closer the G²/df ratio is to 1, the better the fit of the model (e.g., Goodman, 1971a, 1975; Haberman, 1978). In both cases, the reduced model including the range difference effect alone was the best, judged by the proximity of its G²/df ratio to unity. It appears that the pair's relative precision is the most important predictor of the incidence of CP.
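The G² statistic used here is the standard likelihood-ratio statistic, G² = 2 Σ O ln(O/E). A self-contained sketch (the 3×2 table of counts is hypothetical, not the study's data) computes it for a main-effects-only log-linear model, whose expected counts are the usual independence estimates:

```python
import math

# Hypothetical counts of one pattern: midpoint (rows) x range difference (cols).
observed = [[40, 25], [30, 28], [20, 27]]

def g_squared(obs):
    """G^2 = 2 * sum O * ln(O / E) for the main-effects-only model,
    with E the usual row-total * column-total / grand-total estimates."""
    row = [sum(r) for r in obs]
    col = [sum(c) for c in zip(*obs)]
    total = sum(row)
    g2 = 0.0
    for i, r in enumerate(obs):
        for j, o in enumerate(r):
            e = row[i] * col[j] / total   # expected count under the model
            g2 += 2 * o * math.log(o / e)
    return g2

g2 = g_squared(observed)
df = (len(observed) - 1) * (len(observed[0]) - 1)   # (rows-1)*(cols-1)
ratio = g2 / df   # the descriptive G^2/df fit measure used in the text
```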

Table 3. Log-linear analysis of the frequency of the Classic Paradox

3.1a Frequency table of CP in the P-V case (N = 562). G²/df for the fitted models: .42, 1.50, .65.

3.1b Frequency table of CP in the V-V case (N = 988). G²/df for the fitted models: .88, 8.51, 1.03.

Note: if G²/df ≈ 1, the model fits

Set-association models: A more detailed analysis distinguishes between pairs with various levels of minimal imprecision. Table 4.1 shows the frequency of the CP pattern as a function of the narrower and wider ranges of the urns involved (across all three midpoints). This analysis involves constrained (triangular) arrays of frequencies, and requires fitting special types of log-linear models to measure the effects of the relevant factors. The set-association model (e.g., Wickens, 1989) allows testing the significance of hypothesized "treatment effects" in such triangular arrays of frequencies. The most general form of the model is:

ln(f_ij) = λ + λ_N(i) + λ_W(j) + λ_T(k)    (2)

where N is the Narrower range effect, W is the Wider range effect, and T is the "treatment effect." Naturally, when λ_T(k) = 0, there is no treatment effect and we obtain the "quasi-independence model", which is similar to a regular independence model but applies to partial tables (Bishop, Fienberg, and Holland, 1975; Wickens, 1989; Rindskopf, 1990). A variety of treatment effects can be specified to reflect various hypotheses. We fitted two such "effects". The first was the "CP pattern", in which it was hypothesized that the frequency of the Classic Paradox pattern would be greater for pairs where the relative precision was larger and the minimal imprecision was smaller. The second model simply distinguished between the P-V and V-V cases. All three models for the classic paradox are shown in Table 4, across all midpoints as well as

Table 4. Set-association models of Classic Paradox frequencies

4.1 Triangular table of frequencies over all midpoints (N = 485). G²/df values: 3.67, 4.74, 4.27.

4.3 Set-association model results, midpoint = 50 (N = 564). Models: quasi-independence; P-V vs. V-V; CP pattern. G²/df: 10.47, 13.33, 7.50.

4.4 Set-association model results, midpoint = 80 (N = 501). Models: quasi-independence; P-V vs. V-V; CP pattern. G²/df: 6.66, 8.16, 3.74.

4.5 Set-association model results, all midpoints (N = 1550). Models: quasi-independence; P-V vs. V-V; CP pattern. G²/df: 17.72, 22.22, 12.92.

Note: if G²/df ≈ 1, the model fits

for each midpoint separately. Again, the closer the ratio G²/df is to 1, the better the fit of the model. Note that the G²/df ratios of the models with the "P-V vs. V-V" treatment were comparable to those of the quasi-independence model, which suggested that subjects did not treat P-V and V-V pairs differently, and the paradoxical pattern occurred with similar intensity in both cases. On the other hand, for midpoints greater than, or equal to, 50, and over all midpoints, the model including the "CP pattern" is clearly superior to the quasi-independence and the "P-V vs. V-V" models. Thus, Ellsberg's paradox was more likely to occur in pairs with large relative precision and small minimal imprecision when the midpoint was greater than 20. With the low midpoint, the occurrence of the paradox appears to be independent of these joint effects of relative precision and minimal imprecision.

3.2 Analysis of choices within a single gamble

Distribution of responses: We have shown in Table 2 that in most cases subjects tend to choose the more precise of the two gambles in a pair. The marginal means of Table 2.5 indicate that across all (4,815 x 2 =) 9,630 cases examined, the more precise option was chosen (2,426 + 2,520 =) 4,946 times (i.e., 51.36% of the time). Vagueness preference was observed (1,325 + 1,274 =) 2,599 times (in 26.99% of


[Figure 3. Proportions of VA and VS choices in P-V and V-V pairs, for 107 subjects.]

the cases), and subjects expressed indifference towards (im)precision on (1,021 + 1,064 =) 2,085 occasions (21.65% of the cases). This general pattern held for extreme midpoints, for both colors, and for the two types of pairs (P-V and V-V). The distribution over the three choices varied slightly over midpoints, colors, and types of pairs (in particular, for the midpoint of 50, indifference was more prevalent than vagueness preference). However, the distinct preference for precision was almost constant across all cases.

The predominance of vagueness avoidance holds for most individual subjects as well. Figure 3 displays the trinomial distribution of choices for all 107 subjects, for P-V and V-V cases. Each subject is represented by two points (P-V and V-V cases) in the plane whose coordinates are the probability of choosing the more vague gamble, Pr(VS), on the x-axis, and the probability of choosing the more precise gamble, Pr(VA), on the y-axis. The third probability (of being indifferent) is implied by these two; it can be determined by simple subtraction, Pr(Ind) = 1 - Pr(VA) - Pr(VS), and inferred from each point's location relative to the origin (where Pr(Ind) = 1) and the negative diagonal (where Pr(Ind) = 0). The most important feature of this display for the current purposes is that 83 subjects (78%) for P-V, and 81 subjects (76%) for V-V, are located in the upper corner (above the main diagonal along which Pr(VA) = Pr(VS)), indicating that they displayed vagueness avoidance much more frequently than vagueness seeking.

Modeling vagueness avoidance: In this section we seek to model the subjects' choices

at the pair level as a function of the pair's type (P-V or V-V), midpoint, relative


precision, minimal imprecision, and the interactions among these factors. We focus on those cases where the subjects expressed a clear preference between the two options, and discard cases where subjects expressed indifference. The dependent variable is the log-odds (also called the logit) of choosing the more precise urn in a pair, i.e., Log{Pr(VA)/Pr(VS)}, as measured across the two complementary color choices for each pair. The predictors used in the model are:

1. The pair's Relative Precision (RELPR) = difference in widths between the two urns;
2. The pair's Minimal Imprecision (MINIM) = width of the imprecise range of the more precise urn;
3. The pair's Midpoint (MID);
4. The pair's type (TYPE) = a binary variable that distinguishes between the V-V and the P-V cases; and
5. All pair-wise interactions between these four (centered) factors.

The models were fitted to 57 of the pairs examined. We excluded six pairs with minimal imprecision greater than 40, because such extreme values are incompatible with the extreme midpoints (20 and 80).[6] The best model without interactions has an R² of 0.29 (R²_adj = 0.26) and is achieved by the following equation (all coefficients are standardized):

Logit(VA) = 0.40*RELPR - 0.24*MINIM.

As predicted, the tendency to avoid vagueness depends primarily on the relative precision (r = 0.50) and, to a lesser degree, on the minimal imprecision (r = -0.40). Although the midpoint and the type of the pair are not significant predictors (r = 0.02 and 0.21, respectively), they contribute to the prediction of the target behavior through their interactions with other factors. A model with the four factors and two interactions involving the midpoint achieves an impressive fit of R² = 0.71 (R²_adj = 0.68):

Logit(VA) = 0.40*RELPR - 0.22*MINIM + 0.05*MID - 0.03*TYPE - 0.54*(MINIM*MID) - 0.17*(TYPE*MID)
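A regression of this kind can be illustrated as an ordinary least-squares fit of the pair-level log-odds on centered predictors. Everything below is a sketch under our own assumptions: the data are simulated (the generating coefficients merely mimic the reported signs), and the use of statsmodels is not part of the chapter.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 57  # one row per pair of urns, as in the study

# Simulated pair-level data; the values below are invented for illustration.
df = pd.DataFrame({
    "RELPR": rng.uniform(5, 40, n),              # relative precision
    "MINIM": rng.choice([0, 2, 20, 30, 38], n),  # minimal imprecision
    "MID": rng.choice([20, 50, 80], n),          # common midpoint
})
# Log-odds of choosing the more precise urn, generated with the signs
# reported in the text (positive RELPR effect, negative MINIM and
# MINIM x MID effects) plus noise.
df["logitVA"] = (0.04 * df["RELPR"] - 0.02 * df["MINIM"]
                 - 0.001 * (df["MINIM"] - df["MINIM"].mean())
                 * (df["MID"] - df["MID"].mean())
                 + rng.normal(0, 0.1, n))

# Center the predictors, then fit a model with the MINIM x MID interaction.
for c in ["RELPR", "MINIM", "MID"]:
    df[c + "_c"] = df[c] - df[c].mean()
fit = smf.ols("logitVA ~ RELPR_c + MINIM_c + MID_c + MINIM_c:MID_c",
              data=df).fit()
print(fit.params.round(3))
```

Centering the factors before forming the product term, as in the chapter, keeps the main-effect coefficients interpretable at the average midpoint and imprecision.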

To fully understand the effects of the two interactions, consider Table 5, which lists the mean probability of choosing the more precise option (and avoiding vagueness) for all relevant combinations of the factors in question. The first column of the table shows that for the P-V pairs the tendency to avoid vagueness peaks at the highest midpoint (80). In the other columns (corresponding to the V-V pairs) the pattern is reversed, with the weakest vagueness aversion measured at the high midpoint (80). The table also shows that the tendency to avoid vagueness across various levels of minimal imprecision depends on the midpoint: vagueness avoidance decreases for high midpoints (50 and 80), but it increases for the low midpoint of 20, as minimal


Table 5. Interaction between the absolute imprecision (range width) of the pair of urns and the midpoint

                     Minimal imprecision / range width of pair
             P-V                        V-V
Midpoint     0          2          20         30         38         All
20           .59 (5)    .63 (4)    .68 (3)    .71 (2)    .70 (1)    .64 (15)
50           .73 (9)    .73 (8)    .70 (7)    .67 (2)    .55 (1)    .71 (27)
80           .76 (5)    .69 (4)    .57 (3)    .49 (2)    .34 (1)    .53 (15)
All          .70 (19)   .70 (16)   .67 (13)   .62 (6)    .53 (3)    .68 (57)

Notes: - Each cell displays the probability of choosing the more precise of the two urns. This probability is inferred from the mean Log{Prob(VA)/Prob(VS)}.
- The number in parentheses indicates the number of pairs.

imprecision increases. This pattern is inconsistent with the "perceived information" effect described by Keren and Gerritsen (1999).

The two interactions are not distinct because all P-V pairs have a minimal imprecision of 0. Thus, it is possible to fit a simpler version of the model by including only one interaction term, without sacrificing much in terms of goodness of fit. Indeed, the model:

Logit(VA) = 0.40*RELPR - 0.24*MINIM + 0.06*MID - 0.64*(MINIM*MID),

fits the data almost equally well (R² = 0.70, R²_adj = 0.67). This model does not include

the binary factor corresponding to the sharp dichotomy (P-V vs. V-V), but rather a continuous variable that captures the level of minimal imprecision. This highlights the fact that the two situations are not qualitatively distinct. It is, however, instructive to note that in the P-V case, where the minimal imprecision is 0, the relative precision is, simply, the range of the vague urn, and the model is reduced to a simple additive form involving the common midpoint (center) and the range of the more vague urn, as suggested by Curley and Yates (1985).

4 DISCUSSION

This study shows that people prefer precisely specified gambles and succumb to Ellsberg's paradox in "dual vagueness" (V-V) situations. The tendency to avoid the more vague urn and the prevalence of the classic paradox are similar in the P-V and the V-V situations. Our results indicate that P-V and V-V cases are not qualitatively different, and it is more appropriate to think of them as defining a continuum of "degree of vagueness". In both cases, the prevalence of the paradoxical pattern of choices depends primarily on the ranges of the two gambles (i.e., the relative precision and minimal imprecision of the pair) and, to a lesser degree, on the pair's common midpoint. The model fitted for the choices within a single pair also shows that the subjects' tendency to choose the more precise urn does not reflect a sharp P-V vs. V-V dichotomy. Rather, it is determined by the degree of minimal imprecision. The P-V case is just one, admittedly critical and intriguing, point on this imprecision continuum.

Several empirical regularities apply to all cases (P-V and V-V). One is the robust effect of the common midpoint: there are more choices consistent with SEUT for extreme midpoints, and a higher rate of indifference for the central value of 50. This can be attributed to the symmetry that underlies all the decisions for the 50 midpoint. In this case most, if not all, hypothetical and imagined distributions over the range are symmetric, and the midpoint is the most salient focal point of the range, regardless of the range width. This, of course, can increase the likelihood of indifference between the two urns. For the extreme midpoints, 20 or 80, the most salient feature is the asymmetry between the two colors, which favors consistent choices over indifference.

Becker and Brownson (1964) suggested that subjects are sensitive to the amount of information in each urn when making their decisions, and this resonates in some of the modern behavioral work (e.g., Heath and Tversky, 1991; Keren and Gerritsen, 1999). A sensible index of the differential level of information in the two urns is obtained by considering the difference in range width (relative precision) between the two urns. Log-linear models confirmed the relevance of the relative precision as a predictor of the rate of the paradoxical pattern, and the logit model results confirm the importance of relative precision for predicting the rate of vagueness avoidance within single pairs. These results indicate, unequivocally, that as relative precision increases, vagueness avoidance (and the tendency to succumb to the famous paradox) increases. Interestingly, this robust observation contradicts one of the conclusions drawn by Curley and Yates (1985), who determined that "ambiguity avoidance did not significantly increase with the interval range R."

Relative precision is the most important, but not the only, predictor of the regularities in the data. We have argued that its effects are complemented by, and contingent on, the minimal imprecision in a pair, as measured by the width of the narrower range. This expectation was also confirmed by two analyses: the fit of the set-association model for predicting the rate of the paradoxical pattern, and of the logit model for predicting the rate of vagueness avoidance within a single pair, was increased by the addition of predictors that capture the effect of the minimal imprecision and its interaction with the midpoint.

Although the P-V and V-V cases are similar, they are not identical. Indeed, we have uncovered several subtle, but systematic, differences between them. The first difference highlights the distinction between the two extreme midpoints. The marginal frequencies in Tables 2.1 and 2.3 show that for the P-V case there is less vagueness avoidance (and more vagueness seeking) for the low midpoint (20) than for the high midpoint (80). On the other hand, for V-V pairs, we found more vagueness avoidance (and less vagueness seeking) for the low midpoint than for the high midpoint.[7] This difference is reflected in the results for the two consistent patterns: although the overall level of consistency is about equal for the two types, as the midpoint increases there is a greater tendency to choose the more precise gamble in a P-V pair, whereas in the V-V case there is an opposite trend that favors less vagueness avoidance (see similar results in Curley and Yates, 1985; Einhorn and Hogarth, 1986; and Gardenfors and Sahlin, 1982, 1983).

What psychological processes can account for the particular pattern of observed differences between the P-V and V-V cases? In the P-V case the precise urn provides a clear reference point, and subjects have to consider primarily the parameters of the vague urn. Its upper limit offers an attractive probability (higher than that of the precise urn), but this is accompanied by the risk of a lower probability (the lower limit). The subjects' behavior in these cases seems to indicate that when the precise probability is "sufficiently high" (i.e., high midpoint) they resist the temptation of the upper limit and prefer the security of the precise urn (hence, the high level of vagueness avoidance). But for low midpoints the security offered by the precise option is not sufficient, and there is a greater tendency to opt for the vague urn, presumably because of its attractive upper limit (see Stasson et al., 1993, for a similar approach).

The V-V cases do not guarantee a security level, since the more precise urn is also vague. In most cases one would expect DMs to focus on the lower limits to ascertain the guaranteed security level in each urn. The higher security level would always be found in the more precise urn, hence for low midpoints DMs are likely to choose the more secure (i.e., the more precise) urn. However, the concern with security decreases for higher midpoints. Thus, vagueness avoidance decreases as the midpoint of the urns increases.

An alternative explanation for behavior in the V-V choices is that when comparing two vague urns with a common midpoint, subjects focus on the information available about the frequency of the two colors. In particular, it is easy to imagine that the unknown marbles in the urn are distributed according to the same rule as the known marbles. Consider two hypothetical urns (consisting of 100 marbles) with the same (high) midpoint of 70 Red marbles. If the DM knows that in Urn A there are 50 Red marbles and 10 Blue marbles (so the number of Reds is between 50 and 90), he/she may estimate the ratio of Red and Blue among the other (unknown) 40 marbles to also be 5:1. The DM's best guess would be that (100*5/6 =) 83 of the marbles in Urn A are Red and (100 - 83 =) 17 are Blue. Imagine that in Urn B there are 60 Red marbles and 20 Blue (so the number of Reds is between 60 and 80). The DM may infer that the ratio of the two colors is the same for the 20 unknown marbles, and his/her best guess would be that (100*3/4 =) 75 of the marbles in Urn B are Red, and the remaining (100 - 75 =) 25 are Blue. In this case, the DM would be more likely to choose the more vague Urn A, because he/she would expect it to have more Red marbles. If, however, the DM had to choose between the two urns when Blue marbles are desirable (low midpoint = 30), he/she would be more likely to pick the more precise Urn B. This is, indeed, the observed pattern in the data.
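The proportional-extrapolation heuristic in this example reduces to a two-line computation; the function name below is ours, not the authors':

```python
def best_guess_reds(known_red, known_blue, total=100):
    """Extrapolate the known Red:Blue ratio to the unknown marbles,
    as in the hypothetical urns described in the text."""
    known = known_red + known_blue
    unknown = total - known
    # Known reds plus the same proportion of reds among the unknown marbles.
    return known_red + unknown * known_red / known

# Urn A: 50 Red and 10 Blue known, so the number of Reds is between 50 and 90.
# Urn B: 60 Red and 20 Blue known, so the number of Reds is between 60 and 80.
print(round(best_guess_reds(50, 10)))  # -> 83 Reds expected in Urn A
print(round(best_guess_reds(60, 20)))  # -> 75 Reds expected in Urn B
```

Despite the common midpoint of 70, the extrapolation favors the more vague Urn A when Reds pay off, exactly as the text argues.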


5 AN ALTERNATIVE CLASS OF MODELS

We conclude by pointing out that the DM's evaluations of vague options can also be modeled in terms of the (lower and upper) bounds of the ranges that are, typically, presented numerically and/or graphically to the subjects. Specifically, let l_i and u_i be the lower and upper bounds of range i (i = 1, 2), respectively, and assume that when faced with a range of probabilities, the DM "resolves its vagueness" by considering a weighted average of the two end points: v_i = w*l_i + (1 - w)*u_i, where 0 < w < 1 indicates the relative salience of the lower bound.[8] Then the choice between the two vague lotteries can be thought of as a choice between two regular lotteries with probabilities v_1 and v_2, respectively. From a modeling point of view, focusing on the two bounds suggests a different parameterization of the problem, but the new parameters are simple linear transformations of the midpoints and ranges: l_i = M_i - R_i/2 and u_i = M_i + R_i/2. Note that if w > 0.5 the DM would, necessarily, exhibit vagueness avoidance, and if w < 0.5 he/she will appear to favor imprecision. And, if w = 0.5, the DM is insensitive to the range's (im)precision. Thus, we can think of w as a "coefficient of vagueness avoidance".

The two forms can be used interchangeably, and most models based on the ranges can be mapped into models involving lower and upper bounds. For example, consider the probabilistic model that assumes that the tendency to choose the more precise urn depends on the difference between the two ranges:

log[Pr(VA)/Pr(VS)] = (v_1 - v_2) = w(l_1 - l_2) + (1 - w)(u_1 - u_2)    (3)

It is easy to see that (l_1 - l_2) = -(u_1 - u_2) = RELPR/2 (i.e., half of the relative precision). Thus, fitting model (3) amounts to fitting a model invoking only relative precision. The coefficient of vagueness avoidance, w, can be inferred from the coefficient associated with the pair's relative precision.
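A minimal numerical check of this parameterization, using hypothetical ranges and an assumed w of 0.7 (all values below are illustrative):

```python
def resolve(lower, upper, w):
    """Resolve a vague probability range into a point value:
    v = w*l + (1 - w)*u, with w the coefficient of vagueness avoidance."""
    return w * lower + (1 - w) * upper

# Two urns with a common midpoint M = 0.5 and ranges R1 = 0.10, R2 = 0.40.
M, R1, R2 = 0.5, 0.10, 0.40
l1, u1 = M - R1 / 2, M + R1 / 2
l2, u2 = M - R2 / 2, M + R2 / 2

w = 0.7  # w > 0.5 implies vagueness avoidance
v1, v2 = resolve(l1, u1, w), resolve(l2, u2, w)

# Since the midpoints are equal, (l1 - l2) = -(u1 - u2) = RELPR/2, so the
# difference v1 - v2 collapses to (2w - 1) * RELPR / 2, as in model (3).
relpr = R2 - R1
assert abs((v1 - v2) - (2 * w - 1) * relpr / 2) < 1e-12
print(v1 > v2)  # the more precise urn receives the higher resolved value
```

The sign of v1 - v2 depends only on whether w exceeds 0.5, which is exactly why w can be read off the relative-precision coefficient.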

Although the two classes of models are statistically interchangeable, one form can be chosen over the other on the basis of its psychological plausibility, i.e., the congruence between its formulation and the assumed psychological processes underlying the subjects' behavior. We believe that the "end points" form of the model captures the psychological process involved in tasks where the subjects are required to evaluate one prospect at a time (see Budescu, Kuhn, Kramer, and Johnson, 2002, for studies of the CEs of vague lotteries). On the other hand, we think that when the DMs are asked to perform pair-wise choices between vague lotteries, as in the present study, they do not necessarily resolve the vagueness of each lottery before choosing. Rather, they are more likely to rely on direct comparisons of key features of the two alternatives, such as the relative and absolute (im)precision, as indicated in our models.

This distinction is based on the lucid analysis offered by Fischer and Hawkins (1993), who distinguished between qualitative and quantitative response tasks. Quantitative tasks (pricing, rating, ranking, and matching) are, typically, compensatory and rely on quantitative strategies involving trade-offs between the various attributes that define the options. Qualitative tasks (choice, strength of preference judgments) are non-compensatory and rely on a multi-stage mix of qualitative and quantitative strategies applied in a dimension-wise fashion. The non-compensatory rules are self-terminating and do not necessarily exhaust all the attributes of the options being compared. Fischer and Hawkins (1993) have argued that in a direct qualitative choice where neither option strongly dominates the other, people choose the option that is superior on the more important (prominent) dimension (see also Slovic, 1975). The more quantitative rating task is expected to induce a mental strategy of trade-offs between attribute values and, therefore, the more prominent attribute is not weighted as heavily. These principles apply here as well, and suggest an intriguing possibility that attitudes to vagueness may vary across tasks, inducing a "reversal" of attitudes to imprecision. This hypothesis should be tested systematically in future studies.

ACKNOWLEDGMENT

This research was supported, in part, by a National Science Foundation grant (SBR-9632448). Karen Kramer's work was supported, in part, by a NIMH National Research Service Award (MH 14257) to the University of Illinois at Urbana-Champaign. The research was conducted while the first author was a predoctoral trainee in the Quantitative Methods Program of the Department of Psychology, University of Illinois at Urbana-Champaign.

NOTES

1. We will use the terms "vagueness" and "imprecision" interchangeably instead of the usual (but in our opinion, inaccurate) "ambiguity" (e.g., Budescu, Weinberg and Wallsten, 1988; Budescu, Kuhn, Kramer, and Johnson, 2002).
2. This implies that the effects of minimal imprecision can be best studied by focusing on M = 0.5.
3. No reference was made to a uniform distribution during the study when subjects were making their choices, so their preferences were not affected by an assumption of equal chances. This distribution was chosen because of its convenience and intuitive appeal to determine the payoffs to the subjects.
4. If subjects choose Urn I, Urn II, and indifference randomly (i.e., with equal probability) and independently across the various pairs, we should observe the following distribution: (11% CP, 11% RP, 11% I, 22% C, and 44% WI).
5. We distinguished between two classes of pairs. One class consisted of all pairs where the narrower range was under 5 and the range difference was greater than 15. We expected that in all 8 pairs with these characteristics the frequency of the CP pattern would be higher than in the other (7) pairs, where the ranges were closer to each other in size.
6. We also fitted all the models to the full data set including the 63 pairs. All the qualitative trends were replicated and the quantitative details varied only slightly, so we do not reproduce these results here.
7. A blue midpoint of 20 is equivalent to a red midpoint of 80, and a blue midpoint of 80 is equivalent to a red midpoint of 20, when examining the marginals. Table 2 is organized by the red midpoint.
8. This form is closely related to the one proposed by Ellsberg in his 1961 paper.


REFERENCES

Baron, J., and Frisch, D. (1994). "Ambiguous Probabilities and the Paradoxes of Expected Utility," in Wright, G., and Ayton, P. (Eds.), Subjective Probability, Chichester: John Wiley & Sons Ltd.
Becker, S. W., and Brownson, F. O. (1964). "What Price Ambiguity? Or the Role of Ambiguity in Decision-Making." Journal of Political Economy, 72, 62-73.
Bishop, Y. M. M., Fienberg, S. E., and Holland, P. W. (1975). Discrete Multivariate Analysis, Cambridge, MA: MIT Press.
Budescu, D. V., Kuhn, K. M., Kramer, K. M., and Johnson, T. (2002). "Modeling Certainty Equivalents for Imprecise Gambles." Organizational Behavior and Human Decision Processes, 88, 748-768. (Erratum in the same volume, page 1214.)
Camerer, C., and Weber, M. (1992). "Recent Developments in Modeling Preferences: Uncertainty and Ambiguity." Journal of Risk and Uncertainty, 5, 325-70.
Curley, S. P., and Yates, J. F. (1985). "The Center and Range of the Probability Interval as Factors Affecting Ambiguity Preferences." Organizational Behavior and Human Decision Processes, 36, 273-87.
Curley, S. P., and Yates, J. F. (1989). "An Empirical Evaluation of Descriptive Models of Ambiguity Reactions in Choice Situations." Journal of Mathematical Psychology, 33, 397-427.
Curley, S. P., Yates, J. F., and Abrams, R. A. (1986). "Psychological Sources of Ambiguity Avoidance." Organizational Behavior and Human Decision Processes, 38, 230-56.
Einhorn, H. J., and Hogarth, R. M. (1986). "Decision Making under Ambiguity." Journal of Business, 59.
Fischer, G. W., and Hawkins, S. A. (1993). "Strategy Compatibility, Scale Compatibility, and the Prominence Effect." Journal of Experimental Psychology: Human Perception and Performance, 19, 580-597.
Gardenfors, P. (1979). "Forecasts, Decisions, and Uncertain Probabilities." Erkenntnis, 14, 159-81.
Gardenfors, P., and Sahlin, N. E. (1982). "Unreliable Probabilities, Risk Taking, and Decision Making." Synthese, 53, 361-86.
Gardenfors, P., and Sahlin, N. E. (1983). "Decision Making with Unreliable Probabilities." British Journal of Mathematical and Statistical Psychology, 36, 240-51.
Goodman, L. A. (1971a). "The Analysis of Multidimensional Contingency Tables: Stepwise Procedures and Direct Estimation Methods for Building Models for Multiple Classifications." Technometrics, 13, 33-61.
Goodman, L. A. (1975). "On the Relationship Between Two Statistics Pertaining to Tests of Three-Factor Interaction in Contingency Tables." Journal of the American Statistical Association, 70, 624-25.
Haberman, S. J. (1978). Analysis of Qualitative Data, New York: Academic Press.
Heath, C., and Tversky, A. (1991). "Preference and Belief: Ambiguity and Competence in Choice under Uncertainty." Journal of Risk and Uncertainty, 4, 5-28.
Hogarth, R. M., and Einhorn, H. J. (1990). "Venture Theory: A Model of Decision Weights." Management Science, 36, 780-803.
Kahn, B. E., and Sarin, R. K. (1988). "Modeling Ambiguity in Decisions under Uncertainty." Journal of Consumer Research, 15, 265-72.
Keren, G., and Gerritsen, L. E. M. (1999). "On the Robustness and Possible Accounts of Ambiguity Aversion." Acta Psychologica, 103, 149-172.
Keynes, J. M. (1921). A Treatise on Probability, London: Macmillan.
Knight, F. H. (1921). Risk, Uncertainty, and Profit, Boston: Houghton Mifflin.
Kuhberger, A., and Perner, J. (2003). "The Role of Competition and Knowledge in the Ellsberg Task."
MacCrimmon, K. R. (1968). "Descriptive and Normative Implications of the Decision Theory Postulates," in Borch, K., and Mossin, J. (Eds.), Risk and Uncertainty, London: MacMillan.
MacCrimmon, K. R., and Larsson, S. (1979). "Utility Theory: Axioms versus 'Paradoxes,'" in Allais, M., and Hagen, O. (Eds.), Expected Utility and the Allais Paradox, Dordrecht, Holland: D. Reidel.
Rindskopf, D. (1990). "Nonstandard Log-Linear Models." Psychological Bulletin, 108, 150-62.
Roberts, H. V. (1963). "Risk, Ambiguity, and the Savage Axioms: Comment." Quarterly Journal of Economics, 77, 327-36.
Slovic, P. (1975). "Choice Between Equally Valued Alternatives." Journal of Experimental Psychology: Human Perception and Performance, 1, 280-287.
Slovic, P., and Tversky, A. (1974). "Who Accepts Savage's Axiom?" Behavioral Science, 19, 368-73.
Stasson, M. P., Hawkes, W. G., Smith, H. D., and Lakey, W. M. (1993). "The Effects of Probability Ambiguity on Preferences for Uncertain Two-Outcome Prospects." Bulletin of the Psychonomic Society, 31.


OVERWEIGHING RECENT OBSERVATIONS:
EXPERIMENTAL RESULTS AND ECONOMIC IMPLICATIONS

We conduct an experimental study in which subjects choose between alternative risky investments. Just as in the "hot hands" belief in basketball, we find that even when subjects are explicitly told that the rates of return are drawn randomly and independently over time from a given distribution, they still assign a relatively large decision weight to the most recent observations - approximately double the weight assigned to the other observations. As investors in reality face returns as a time series, not as a lottery distribution (employed in most experimental studies), this finding may be more relevant to realistic investment situations, where a temporal sequence of returns is observed, than the probability weighing of single-shot lotteries as suggested by Prospect Theory and Rank Dependent Expected Utility. The findings of this paper suggest a simple explanation for several important economic phenomena, like momentum (the positive short-run autocorrelation of stock returns), and the relationship between recent fund performance and the flow of money to the fund. The results also have important implications for asset allocation, pricing, and the risk-return relationship.

1 INTRODUCTION

Normative economic theory of decision-making under uncertainty asserts how people should behave. Experimental studies dealing with choices under conditions of uncertainty report how people actually do behave when they are faced with several hypothetical alternative prospects. In many cases there is a substantial discrepancy between the observed experimental investment behavior and the normative theoretical behavior. This discrepancy casts doubt on the validity of the theoretical economic models which rely on the normative behavior,^ and may explain several economic "anomalies". This paper experimentally investigates and quantitatively

R. Zwick and A. Rapoport (eds.), Experimental Business Research, Vol. III, 155-183.
© 2005 Springer. Printed in the Netherlands.


measures individuals' tendency to overweigh recent observations, and analyzes the economic implications of this behavioral phenomenon for capital markets.

The importance of overweighing recent information in capital markets is not new, and has been noted by several researchers. Arrow [1982], in the context of a discussion of Kahneman and Tversky's work, highlights

"... the excessive reaction to current information which seems to characterize all the securities and futures markets." (p. 5)

De Bondt and Thaler [1985] assert that:

"... investors seem to attach disproportionate importance to short-run developments." (p. 794)

The present paper is an attempt to experimentally quantify this phenomenon, and to estimate some of its economic effects.

The result asserting that subjects tend to interpret a series of i.i.d. observations in a biased fashion is not new. The "Law of Small Numbers" (see Tversky and Kahneman [1971]) shows that subjects exaggerate the degree to which the probabilities implied by a small number of observations resemble the probability distribution in the population. The overweighing of recent observations can be considered as a special case of the "representativeness heuristic" suggested by Tversky and Kahneman [1974], by which people think they see patterns even in truly random sequences. For example, the pioneering work of Gilovich, Vallone, and Tversky [1985] shows that basketball fans believe that players have "hot hands", meaning that after making a shot a player becomes more likely to make the next shot. This belief is very widely held despite the fact that it is statistically unjustified (see also Albright [1993] and Albert and Bennett [2001]). Similarly, Kroll, Levy and Rapoport [1988] study an experimental financial market and show that subjects look for trends in returns even when they are explicitly told that returns are drawn randomly from a given distribution.

In a series of papers, Rapoport and Budescu [1992, 1997] and Budescu and Rapoport [1994] document the phenomenon of "local representativeness", by which subjects expect even short strings within a long sequence of binary i.i.d. signals to contain proportions of the two outcomes which are similar to those in the population. Rabin [2002] presents a model with the following results: when the proportions of the two possible outcomes in a binary i.i.d. process are known, a draw of one outcome increases the belief that in the next draw the other outcome will be realized. However, when the proportions of the two outcomes are unknown, subjects infer these proportions from very short sequences of outcomes. For example, if subjects believe that an average fund manager is successful once every two years, then they believe that an observation of two successful years in a row indicates that the manager has good investment talent. As we shall see below, the experimental results we obtain conform with this assertion by Rabin.

Another related issue is that of subjective probability distortion, or the use of decision weights (see Preston and Baratta [1948], Edwards [1953], [1962], Kahneman and Tversky [1979], Tversky and Kahneman [1992], and Prelec [1998]). In most of the above studies related to decision weights, the subjects choose between two options (x, p(x)) and (y, p(y)), but the payoffs, x and y, are not given as time series. Thus, we have single-shot decisions. The subjects have to choose between two lotteries, or one lottery and a certain income. Such experiments may have limited relevance for actual investing as, in practice, investors in the market observe rates of return as time series, e.g., several years of corporate earnings, several years of mutual fund returns, etc. Therefore, the time dimension may be very important to investors, and thus should be incorporated into the analysis. In the present study, which is relevant for phenomena taken from the capital market, we present the subjects with a choice between two alternatives with given historical time series of returns, (x_t) and (y_t), where t stands for time (year, month, etc.). Subjects are told that the time series are generated randomly from fixed distributions, thus they should rationally attach the same weight to each observation. We test whether they indeed do so, or whether they attach more weight to the recent observations. Thus, we are dealing with the subjective distortion of probabilities as a function of the temporal sequence, not as a function of the probability itself as in the more standard frameworks of decision weights (e.g., Prospect Theory, CPT, or Quiggin's [1982] Rank Dependent Expected Utility (RDEU)), which ignore the temporal sequence.

This paper has three main goals:

(i) To experimentally test whether the most recent observations are overweighed even though the subjects are told that rates of return are i.i.d.

(ii) To estimate quantitatively the magnitude of the decision weights that the subjects attach to the most recent observations.

(iii) To analyze the economic implications of this phenomenon in terms of momentum (the positive autocorrelation of stock returns), the relationship between mutual fund performance and the flow of money to the fund, and in terms of asset pricing.

The structure of the paper is as follows: Section 2 describes the experiments and provides the results. In Section 3 we suggest a method of quantitatively estimating the overweighing of the most recent observation. Section 4 discusses the economic implications of the results. Section 5 concludes the paper.

2 THE EXPERIMENTS AND RESULTS

In order to investigate the importance attached to recent observations we take two approaches. In the first approach we compare the choices of subjects among a set of alternative risky investments under two setups: once when the subjects are given the means and standard deviations of the normal return distributions, and once when instead they are given a time series of the returns on the alternative investments, such that the means and standard deviations are exactly as before. This approach is employed in Experiment I. In the second approach (Experiment II) we provide only the time series of the returns on the alternative investments. All subjects are given the exact same returns, but different subjects get a different time ordering of the returns. In this experiment we test directly whether the order of the returns affects the subjects' choices, i.e., whether they assign a higher decision weight to the most recent observation.

Altogether we have 287 subjects who made 415 choices (128 subjects made two choices each). The subjects are business school students and practitioners in financial markets (financial analysts and mutual fund managers).

All of the subjects successfully completed at least one statistics course and were familiar with the normal distribution and the concept of independence over time and, in particular, with the random walk. In all the tasks where rates of return are available, the subjects were told that the rates of return were drawn randomly and independently (i.i.d.) from fixed normal distributions. Moreover, in all tasks, the subjects were explicitly told that the next realized rate of return (which is relevant for their investment) is drawn randomly and independently from the corresponding normal distribution. These facts were emphasized in the instructions to the subjects.

2.1 Experiment I

In this experiment we have 128 subjects: 64 third-year undergraduate business students and 64 mutual fund managers and financial analysts, whom we call "practitioners". All of the subjects had the questionnaire for a relatively long period of time (at least a week); hence, they could make any needed calculations and make the choices without any time pressure.

The experiment, as many other experiments, did not involve any real financial reward or financial penalty to the subjects, which may constitute a drawback. However, Battalio, Kagel and Jiranyakul [1990] have shown that experiments with and without real money differ in the magnitude of the results but not in their essence. Harless and Camerer [1994] have shown that when real money is involved, the variance of the results decreases. Thus, it seems that the absence of money does not drastically change the results. Yet, because no real money was involved, one always suspects that the subjects may fill out the questionnaire randomly without paying close attention to the various choices. Fortunately, this was not the case, as shown below.

In this experiment the subjects are requested to complete two tasks. In Task I they are presented with five mutual funds and are told that the return distribution for each of the funds is normal, with given parameters, as presented in Table 1. The subjects are asked the following question: "Assuming that you wish to invest in only one mutual fund for one year, which fund will you select?"

In Task II the subjects are again asked to choose one of five mutual funds, and again they are told that the return distributions are normal and that returns are independent over time. However, in this task the subjects are given the last 5 annual return observations of each fund instead of the fund's mean and standard deviation (see Table 2). The returns in Task II are constructed such that the means and standard deviations of each fund are exactly identical to those in Task I.


Table 1. Means and Variances of Returns in Experiment I, Task I (columns: Fund, Mean, Standard Deviation; the table body is not legible in this copy)

2.1.1 Results

Table 3 reports the choices in Tasks I and II corresponding to the 5 mutual funds. As there are no significant differences between the choices of the students and the practitioners, we report here only the aggregate results. The main results are as follows:

1) The choices are not random: we test whether the subjects filled out the questionnaire randomly to quickly "get it over with" by employing the Chi-square goodness-of-fit test. To illustrate, in Task I the subjects had to choose one out of five mutual funds. If the subjects select the fund randomly, we expect on average 128/5 ≈ 26 subjects choosing each fund. Using the observed choices and the expected number of choices of each fund, we employ the Chi-square goodness-of-fit test with four degrees of freedom. We obtain in Task I a sample statistic of χ² = 129.3, when the 1% critical value is 13.3. In Task II the sample statistic is χ² = 100.4. Thus, both sample statistics are substantially larger than the corresponding critical value; hence, for each of the two tasks, the hypothesis that the subjects made a random choice is strongly rejected. Thus, it seems that despite the fact that there was no financial reward/penalty, most of the subjects took the task seriously.

Table 3. Results of Experiment I

2) When the return distributions are normal, the mean-variance rule is well known to be optimal under risk aversion (see Tobin [1958]). Moreover, it is also optimal under the Markowitz [1952b] reverse S-shape value function, and under the CPT S-shape value function (see Levy and Levy [2003]). Thus, it is natural to examine the mean-variance efficiency of the subjects' choices. Figure 1 presents the five funds in the mean-standard deviation space. It can easily be seen that Funds {D, C, E} are mean-variance efficient and Funds {B, A} are inefficient (see Figure 1). The inefficient funds, A and B, together were selected by only 3 out of 128 subjects in both Task I and Task II.

Thus, we have the encouraging result that 98% of the choices are mean-variance efficient. "Framing" the choices in terms of μ-σ or in terms of annual rates of return does not affect the percentage of the efficient choices, which remains very high.

3) In Task I, the choices were mainly of C and D, and not E. Looking at Table 1, we see that Fund E has a slightly higher expected return than Fund C but a much larger standard deviation. It is possible that this risk-return tradeoff induces most of the subjects to select Funds C and D and not Fund E.

4) The importance of the time sequence: because rates of return are i.i.d., theoretically, framing the choices in the two ways should not affect the choices. This is not the case: choices changed dramatically within the efficient set. While in Task I choices C and D were very popular, in Task II there is a substantial shift from Funds C and D to Fund E, which became the most popular choice, with almost half of the subjects selecting it (compared to less than 11% selecting E in Task I). Focusing on the shifts in choices between Tasks I and II within the efficient set, we conducted a χ² test to examine whether the shifts are significant. We obtain a sample statistic of 44.0, while the 1% critical value is only 9.2; hence the change in choices is highly significant.

Figure 1. The Funds in Experiment I

There is a wide range of possible explanations as to why subjects switched from C and D to E. However,

a close look at the rates of return in Table 2 reveals two important characteristics: in four out of five years, E shows a higher rate of return than D, and more importantly, in the last two years the returns on E are better than the returns on D. Though this information is irrelevant under the i.i.d. property, it seems that the subjects made use of it. This experimental finding, i.e., switching to the fund with the highest short-term performance (e.g., the performance in the last two years), conforms with the results of Kroll, Levy and Rapoport [1988], with Rabin [2002], and with Arrow's [1982] assertion of an "excessive reaction to current information". Thus, despite the randomness and independence over time of rates of return, investors switch between funds based on short-term performance.

The comparison of the rates of return on E and C is a little more involved: in two years they have the same rates of return, in two years E is better, and in one year C is better (see Table 2). However, in the last year, which probably was more important to the subjects, E is better, even though by only 1%. Thus, the seeming superiority of E over D is stronger than the superiority of E over C, which may explain why a larger shift occurred from D to E than from C to E (see Table 3).


Regardless of whether all rates of return affect choices, or only the last one or two observations affect the switch in the choices, one thing is clear: the subjects either misperceive randomness and overweigh recent outcomes, do not believe the i.i.d. information, or do not believe the normality.

To sum up, the subjects create patterns and draw conclusions from the irrelevant order of the historical rates of return. Theoretically, under the i.i.d. information and the data of Tasks I and II, no switch in choices should occur.

Finally, it is possible that in Task II the subjects do not assign relatively large decision weights to the last 1-2 observations, but rather employ some other complicated decision rules, e.g., "select the fund with the highest possible gain and the smallest possible loss" (like Fund E), or select the mutual fund based on mean, variance and, say, skewness, though skewness is irrelevant under a normal distribution. To address this issue, in Experiment II we refine the analysis regarding the role that recent rates of return play in decision making. This experiment is very simple, and more directly attempts to figure out the role of the most recent observation in the decision-making process.

2.2 Experiment II

The subjects participating in this experiment are 159 undergraduate business school students. The subjects have to choose between only two investment alternatives. As in Task II of the first experiment, the last five returns of each of these alternatives are presented to the subjects, and the subjects are told that the returns are drawn randomly and independently over time from normal distributions. We divide the subject population into two groups, and each subpopulation is given a different version of the questionnaire. One subpopulation is presented with two investment alternatives exactly identical to Funds D and E of Task II in Experiment I (see Questionnaire 1 in Table 4). The other subpopulation is presented with the same


Table 5. Results of Experiment II (in percent)

Fund      Questionnaire 1 (n = 66)    Questionnaire 2 (n = 93)
D                  29%                          45%
E                  71%                          55%
Total             100%                         100%

set of returns for each fund, but the time ordering of the returns is different (see Questionnaire 2 in Table 4). Specifically, Questionnaire 2 is designed such that if more weight is assigned to recent returns, Fund D becomes more attractive. Note that if an equal weight of 0.2 is attached to each observation, the results in the two questionnaires should be roughly the same. However, if one assigns a relatively large decision weight to the last one or two years, then E is improved relative to D in Questionnaire 1, while D is improved relative to E in Questionnaire 2.

2.2.1 Results

The results of Experiment II are reported in Table 5. Only 29% of the choices were D in Questionnaire 1, versus 45% in Questionnaire 2. A χ² test with one degree of freedom reveals that the difference is significant at α = 5%, with a sample statistic of 4.35, while the critical value is 3.84. Thus, there is a significant change, albeit not a very strong one, in choices in favor of the fund with the relatively good performance in the last two years. This is so despite the fact that the returns are exactly identical in the two questionnaires. Thus, Experiment II clearly reveals that the last two observations have an important role in determining choices.
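The 2×2 test can be sketched as follows. The cell counts are reconstructed from the reported percentages (29% of the 66 Questionnaire 1 subjects and 45% of the 93 Questionnaire 2 subjects choosing D), rounded to whole subjects, so the statistic comes out near the reported 4.35:

```python
# Pearson chi-square test (1 d.f.) for the D/E split across questionnaires.
# Counts are reconstructed from the reported percentages, rounded to subjects.
table = [[19, 47],   # Questionnaire 1: chose D, chose E
         [42, 51]]   # Questionnaire 2: chose D, chose E

row = [sum(r) for r in table]            # questionnaire totals: 66, 93
col = [sum(c) for c in zip(*table)]      # fund totals
n = sum(row)                             # 159 subjects

chi2 = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))

CRITICAL_5PCT_1DF = 3.84
significant = chi2 > CRITICAL_5PCT_1DF
```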

We advocate in this paper that probability is distorted in a particular way, emphasizing the last one or two observations. This is in contradiction to the CPT and RDEU probability distortions. For example, under the CPT distortion, probabilities should be distorted in the same way in both Questionnaires 1 and 2, overweighing the probabilities of the extreme outcomes of -2% and 45% in Fund E and -5% and 20% in Fund D, regardless of the sequence of appearance of these observations. Therefore, according to CPT the choices should not change across the two questionnaires. This is not the case in our experiment, indicating that the CPT weighting function may be inappropriate for time-series returns, as observed in the capital market.

Finally, as not all subjects chose E in Questionnaire 1, and not all subjects chose D in Questionnaire 2, it is obvious that the decision weight assigned to the last 2 observations is less than 100%, and some of the investors may perceive randomness correctly. In many cases some complicated decision rules are probably employed. Yet, it is enough that some investors overweigh recent observations to create several important economic phenomena. In the next section we attempt to quantitatively estimate the overweighing of the most recent observation.

3 ESTIMATING THE DECISION WEIGHTS

In this section we estimate the decision weights corresponding to temporal-sequence data, which are conceptually different from the decision weights in single-shot, lottery-type situations, as suggested by Prospect Theory and other models. In order to analyze the shift in choices and the decision weights applied to the most recent observations, one needs to make some assumptions regarding preferences. We start with general assumptions about the preference class (e.g., risk aversion), and then we refine the analysis by employing specific, commonly accepted utility/value functions.

Under the assumptions of normal rate of return distributions and risk aversion, the optimal investment rule which is consistent with von Neumann and Morgenstern [1953] expected utility maximization is the Markowitz [1952a] mean-variance rule (see Tobin [1958] and Hanoch and Levy [1969]). In this case the mean-variance rule coincides with Second degree Stochastic Dominance (SSD). When rates of return are drawn randomly and independently from normal distributions, the best estimates of the mean and variance are the corresponding sample statistics, assuming each observation has an equal weight of 1/n, n being the number of observations. Our findings imply that in expected utility calculations decision weights, w(p(x)), are employed rather than the objective probabilities, p(x), where w(p(x)) > p(x) for the last one or two observations. In this section, we attempt to estimate w(p(x)). We take two approaches. The first is the Stochastic Dominance approach, which allows us to place an upper bound on w(p(x)). In the second approach we assume various typical utility functions and obtain estimates of the median w(p(x)) in the population.

Several studies highlight the importance of overweighing the most recently observed return (see Kroll, Levy and Rapoport [1988], Chevalier and Ellison [1997], and Rabin [2002]). The results of Experiment I support this view. An increase in the decision weight of the most recent return explains the shift in choices from Funds C and D in Task I to Fund E in Task II. In contrast, the penultimate observation is not overweighed much, because such overweighing would have implied a shift in the choices to Fund B in Task II, a shift which did not occur (in the 4th year, the rate of return on Fund B was 34%, much higher than the 15% of Fund E; see Table 2). Thus, from the rates of return data and from the specific shift in choices, we conclude that the overweighing of the most recent return is probably the main factor, albeit not the only factor, inducing the shifts in choices observed in our experiments. Therefore, in what follows we analyze the subjects' choices under the assumption that for the 5th year w₅(p) > p = 0.2 and for each of the other four years wᵢ(p) = (1 - w₅)/4 < 0.2, where wᵢ(p) is the decision weight corresponding to year i (i = 1, 2, 3 and 4). As we employ Stochastic Dominance rules in estimating w₅(p), let us first define these rules.

3.1 Stochastic Dominance Approach

a Definitions

Consider the funds in Experiment I. When decision weights are employed such that the most recent observation is overweighed, Fund E becomes more attractive relative to the other funds. In employing the stochastic dominance approach we ask the following question: what should w₅(p) be such that E stochastically dominates the other funds? The answer to this question gives an upper bound on w₅(p), because if all subjects assign a weight equal to or greater than this critical value of w₅(p) to the fifth observation, they should all prefer Fund E in Task II. We investigate the critical value of w₅(p) by employing First and Second degree Stochastic Dominance rules. These decision rules are defined below.

i) First degree Stochastic Dominance (FSD):

Distribution F dominates distribution G for all increasing utility functions if and only if F(x) ≤ G(x) for all x, with a strict inequality for some value x₀. Namely,

F(x) ≤ G(x) for all x  ⟺  E_F U(x) ≥ E_G U(x) for all U with U′ > 0.   (1)

ii) Second degree Stochastic Dominance (SSD):

Define F and G as before, and let U be a concave utility function (U′ > 0, U″ < 0). Then,

∫_{-∞}^{x} [G(t) - F(t)] dt ≥ 0 for all x  ⟺  E_F U(x) ≥ E_G U(x)   (2)

for all U with U′ > 0, U″ < 0.

Thus, if risk aversion is assumed, SSD can be employed. Though we focus in this study on SSD (i.e., risk aversion), experimental studies show that risk-seeking also exists in preferences (see Friedman and Savage [1948], Markowitz [1952b], and Kahneman and Tversky [1979]). In particular, Levy and Levy [2001] show that at least 50% of the subjects are not risk averse. Hence, if preferences other than risk aversion are assumed, the corresponding Stochastic Dominance criteria should be employed. For example, the Prospect Stochastic Dominance (PSD)⁸ rule corresponds to the class of all Prospect Theory S-shape value functions, and the Markowitz Stochastic Dominance (MSD)⁹ rule corresponds to the class of all reverse S-shape value functions as suggested by Markowitz [1952b]. Here we focus on risk aversion and the SSD rule.¹⁰

b Implementation of the Stochastic Dominance Rules

First, note that Fund E dominates Fund A by FSD with the objective probabilities pᵢ = 0.2 (see Table 2). Any overweighing of the fifth-year probability, w₅ > 0.2, does not affect this FSD dominance.

Now let us turn to the more interesting case of Funds D and E, as given in Table 2 (and Questionnaire 1 in Table 4). Figure 2a provides the cumulative distributions of these funds when an equal probability of p = 0.2 is assigned to each observation, as should be done with a random sample composed of five independent observations. As we can see, the two cumulative distributions F_E and F_D intersect, so by equation (1) neither fund dominates the other by FSD.

Also, as can be seen from Figure 2a, ∫_{-∞}^{x} [F_E(t) - F_D(t)] dt < 0 for some x, hence D does not dominate E by SSD (see equation (2)), and ∫_{-∞}^{x} [F_D(t) - F_E(t)] dt < 0 for some x, hence E does not dominate D by SSD. Also, there is no dominance by the Mean-Variance rule.

Thus, it is very reasonable that with the objective probability p = 0.2, some risk averters (SSD or Mean-Variance decision makers) will select Fund D and some will select Fund E. Let us now demonstrate how, with w₅(p) > 0.2, Fund E may be considered better by some risk averters, and how beyond some critical value w₅*(p) Fund E even dominates Fund D by SSD, i.e., should be preferred by all risk averters (SSD dominance).

Assume that w₅(p) > 0.2. As the most recent observation is also the smallest return for both Funds D and E, this overweighing increases the first positive area and decreases the negative area (recall that increasing w₅(p) induces a decrease in the other decision weights wᵢ(p), i = 1, 2, 3, 4; see Figure 2b). Thus, there is some critical value w₅*(p) such that the negative area equals the first positive area, and hence E dominates D by SSD. To find the critical w₅*(p) the following condition must be fulfilled (i.e., equating the first areas enclosed between the two cumulative distributions):

w₅*(p)(-2 - (-5)) = [(1 - w₅*(p))/4](12 - (-2))

or: 3w₅*(p) = (1 - w₅*(p))(14/4). Hence,

12w₅*(p) = 14 - 14w₅*(p)

Figure 2a: With Objective Probabilities

which finally yields

w₅*(p) = 14/26 ≈ 0.54,

and the other decision weights are:

wᵢ(p) = (1 - 14/26)/4 = 3/26   (for i = 1, 2, 3, 4).
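The area-equating condition is linear in w₅*(p), so the critical weight can be checked mechanically (a sketch, using exact rational arithmetic):

```python
from fractions import Fraction

# Equate the negative area with the first positive area between the two
# weighted cumulative distributions of Funds E and D:
#   w5 * (-2 - (-5)) = ((1 - w5) / 4) * (12 - (-2))
# i.e. a linear equation in w5:  3*w5 = (1 - w5) * (14/4).
slope = Fraction(14, 4)
w5_star = slope / (3 + slope)     # solve 3*w5 = (1 - w5) * 14/4

w_other = (1 - w5_star) / 4       # weight of each of years 1-4
```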

Figure 2b draws the cumulative distributions of E and D with these decision weights, denoted by F*_E and F*_D. As can be seen from the figure, with these decision weights the negative area is equal to the first positive area (because (-2 - (-5))(14/26) = (12 - (-2))(3/26)). Because all other areas enclosed between the two distributions are positive, we have ∫_{-∞}^{x} [F*_D(t) - F*_E(t)] dt ≥ 0 for all x (with at least one strict inequality for some x), where the star emphasizes that these are subjective cumulative distributions with decision weights, rather than the objective cumulative distributions F_E and F_D (compare Figures 2a and 2b). Thus, with w₅(p) ≥ w₅*(p), Fund E (subjectively) dominates Fund D by SSD, and all risk averters are expected to choose E. Hence, with risk aversion, w₅*(p) = 0.54 is an upper bound on the fifth-year decision weight. If all subjects were risk averse and had w₅(p) ≥ w₅*(p), they would all choose Fund E in Task II. As 62 out of the 128 subjects selected Fund E and 34 still selected Fund D, we conclude that either these 34 subjects are not risk averse, or that for these subjects w₅ < 0.54.
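The whole argument can be verified numerically with a small SSD checker over the weighted ("subjective") distributions. The function below is our own sketch, not taken from the chapter; the return vectors follow Table 2 / Questionnaire 1, listed with the most recent year last:

```python
def ssd_dominates(a, b, weights):
    """True if series `a` SSD-dominates series `b` under the given decision
    weights: the running integral of (G - F) between the two weighted step
    CDFs must never go negative, and must be strictly positive somewhere."""
    pts = sorted(set(a) | set(b))

    def cdf(xs, x):
        # Weighted empirical CDF: total weight of observations <= x.
        return sum(w for v, w in zip(xs, weights) if v <= x)

    integral, strict = 0.0, False
    for lo, hi in zip(pts, pts[1:]):
        integral += (cdf(b, lo) - cdf(a, lo)) * (hi - lo)
        if integral < -1e-12:      # integral condition violated
            return False
        if integral > 1e-12:
            strict = True
    return strict

fund_d = [12, 14, 12, 20, -5]      # Fund D, most recent return last
fund_e = [15, 45, -2, 14, -2]      # Fund E, most recent return last

equal = [0.2] * 5                  # objective i.i.d. weights
w5 = 14 / 26                       # the critical fifth-year weight
skewed = [(1 - w5) / 4] * 4 + [w5]
```

With equal weights neither fund dominates the other; at w₅ = 14/26 the check confirms that E (subjectively) dominates D, as derived above.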

Using the same technique in the comparison of Funds C and E, we find that w₅*(p) = 0.5; i.e., for 0.2 < w₅(p) < 0.5 some subjects may switch from C to E, and for w₅(p) > 0.5 all risk averters are expected to shift from C to E. For the sake of brevity, we do not provide the detailed calculation of w₅*(p) corresponding to C and E.

c Relaxing the Risk-Aversion Assumption

The Second degree Stochastic Dominance approach is non-parametric; hence it does not make assumptions about the specific utility function. This approach provides us with an upper bound on the decision weight, in the sense that with risk aversion the experimental results reveal that it is not possible that all subjects have w₅(p) ≥ w₅*(p). Alternatively, it is possible that not all subjects are risk averters. Thus, in what follows we do not confine ourselves to concave preferences. In particular, we discuss Prospect Theory's S-shape preferences and Markowitz's reverse S-shape preferences (on these two preference types, see footnote 10).


c.1 PSD and MSD

So far we have employed SSD in the comparison of E and D. The experimental results can also be explained with non-concave preferences. Employing MSD and PSD reveals the following results: Fund E dominates Fund D by MSD (for the MSD rule see footnote 9). This dominance holds for the objective probabilities, pᵢ = 0.2, as well as for any overweighing of the most recent observation, w₅ > 0.2. On the other hand, neither E nor D dominates the other by PSD (see footnote 8 for the PSD rule), and this is true both for the objective probabilities and for any overweighing w₅ > 0.2. Therefore, the results of Table 3 regarding Funds D and E conform either with risk aversion and an increase in w₅, or, alternatively, with no overweighing and with about 2/3 of the choices (62 out of 96) conforming with MSD, i.e., with a reverse S-shape value function.

3.2 Direct Estimation of w₅(p)

Assuming a specific utility function enables a direct estimation of w₅(p). Surprisingly, the estimates obtained under different utility functions are very similar, which makes the results quite robust. Below we describe the estimation of w₅(p) under the assumption of a logarithmic utility function, a linear utility function, the Prospect Theory S-shape value function suggested by Tversky and Kahneman [1992], and the reverse S-shape value function suggested by Markowitz [1952b].

In applying the direct estimation approach it is beneficial to employ Questionnaire 2 of Experiment II, because here the subjects' choices were split almost evenly between the two funds (see Table 5). This allows us to obtain an estimate of the median w₅(p), as detailed below.

Logarithmic Utility Function

Consider Funds D and E of Questionnaire 2 in Experiment II (see Table 4). What is the value of w₅(p) which makes an individual with logarithmic preferences indifferent between these two funds? The answer is given by the solution to:

w₁ log(W(1 - 0.05)) + w₂ log(W(1 + 0.12)) + w₃ log(W(1 + 0.14)) + w₄ log(W(1 + 0.12)) + w₅ log(W(1 + 0.20))
= w₁ log(W(1 - 0.02)) + w₂ log(W(1 + 0.15)) + w₃ log(W(1 + 0.45)) + w₄ log(W(1 - 0.02)) + w₅ log(W(1 + 0.14)),

where W is the initial wealth, and wᵢ is the decision weight of observation i.

Recalling that in our framework wᵢ = (1 - w₅)/4 for i = 1, 2, 3, 4, and noticing that W cancels out, we have:

[(1 - w₅)/4][log(0.95) + log(1.12) + log(1.14) + log(1.12)] + w₅ log(1.20)
= [(1 - w₅)/4][log(0.98) + log(1.15) + log(1.45) + log(0.98)] + w₅ log(1.14)


which yields:

w₅ = {[log(0.98) + log(1.15) + log(1.45) + log(0.98)] - [log(0.95) + log(1.12) + log(1.14) + log(1.12)]} / {[log(0.98) + log(1.15) + log(1.45) + log(0.98)] - [log(0.95) + log(1.12) + log(1.14) + log(1.12)] + 4(log(1.20) - log(1.14))}

or:

w₅ ≈ 0.44
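The algebra above can be reproduced in a few lines (a sketch; the base of the logarithm is immaterial, since it cancels in the ratio, and the expression evaluates to roughly 0.445, i.e., the 0.44 reported above):

```python
from math import log

# Gross returns of Funds D and E in Questionnaire 2; year 5 (most recent)
# is kept separate because it carries the weight w5.
d_early, d_last = [0.95, 1.12, 1.14, 1.12], 1.20
e_early, e_last = [0.98, 1.15, 1.45, 0.98], 1.14

# From the indifference condition, with w_i = (1 - w5)/4 for i = 1..4:
diff = sum(log(x) for x in e_early) - sum(log(x) for x in d_early)
w5 = diff / (diff + 4 * (log(d_last) - log(e_last)))
```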

Suppose that different individuals with this specific type of preferences overweigh the fifth observation differently. Any individual with log utility who assigns a weight higher than 0.44 to the fifth observation prefers Fund D over E, and any individual who assigns a weight lower than 0.44 to the fifth observation prefers Fund E. Assuming a logarithmic utility function, the fact that approximately half of the subjects chose Fund D and half chose Fund E (see Table 5) implies that the median w₅ in the population is approximately 0.44.

Prospect Theory Value Function

Tversky and Kahneman [1992] suggest that preferences are described by the following value function:

V(x) = x^α          if x ≥ 0
V(x) = -λ(-x)^β     if x < 0

where x is the change in wealth, and α, β, and λ are constants which Tversky and Kahneman experimentally estimate as α = 0.88, β = 0.88, and λ = 2.25. With this value function, an indifference between Funds D and E implies:
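The indifference equation itself is not reproduced in this excerpt, but the implied w₅ can be recovered numerically. The sketch below assumes, for illustration, that the Questionnaire 2 percentage returns enter the value function directly as gains and losses; the solution comes out near 0.45, in line with the claim above that different preference specifications give very similar estimates:

```python
# Solve numerically for the w5 that makes a CPT decision maker indifferent
# between Funds D and E (Questionnaire 2 returns, taken here as percentage
# changes in wealth -- an assumption for illustration).
ALPHA, BETA, LAM = 0.88, 0.88, 2.25

def v(x):
    """Tversky-Kahneman [1992] value function."""
    return x ** ALPHA if x >= 0 else -LAM * (-x) ** BETA

fund_d = [-5, 12, 14, 12, 20]   # year 5 (most recent) last
fund_e = [-2, 15, 45, -2, 14]

def value(returns, w5):
    w_other = (1 - w5) / 4
    return w_other * sum(v(r) for r in returns[:4]) + w5 * v(returns[4])

# Bisection on value(D, w5) - value(E, w5), which is linear in w5:
# D's recent returns are the better ones, so its value rises with w5.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if value(fund_d, mid) < value(fund_e, mid):
        lo = mid
    else:
        hi = mid
w5 = (lo + hi) / 2
```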
