§10 signals a difference of opinion between the two authors: List (and I) do not agree with Harrison's qualifying his studies, which use the general population (in Denmark) instead of students but are completely artificial otherwise, as field studies. %}
Harrison, Glenn W. & John A. List (2004) “Field Experiments,” Journal of Economic Literature 42, 1009–1055.
{% Much of the paper, such as the first half of the abstract, is a general discussion of the pros and cons of controlled laboratory experiments versus less internally valid but more externally valid field data, a general topic extensively discussed in psychological textbooks and elsewhere.
They do not experiment with students in a lab, but at a major coin show in Orlando, with attendees of that show who could participate voluntarily, receiving a $5 participation fee plus a performance-contingent payment; this serves as an intermediate step between laboratory experiments and real field situations. They consider monetary prizes, and prizes in terms of special coins that carry extra uncertainty regarding their value. The finding of this paper is that there is more risk aversion for the second kind of prize than for the first. The authors discuss this finding in detail.
They also discuss background risk in detail, in particular that it cannot be ignored. Their study, however, takes background risk in the narrow sense of only the extra uncertainty of the special coins, and not in the grand sense of all the risks that we face regarding our investments etc., so they are overclaiming.
In footnote 3 on p. 434 they argue that regarding the Rabin (2000) paper, they side with the Cox & Sadiraj (2006) and Rubinstein (2002) criticisms (that I strongly disagree with).
Unfortunately, they do not use the correct formula (for x > y > 0)
xpy --> w(p)U(x) + (1-w(p))U(y)
which is the correct one not only for the updated 1992 (“cumulative”) prospect theory BUT ALSO for the original 1979 prospect theory (Kahneman & Tversky 1979, p. 276, Eq. 2). Instead, they make the well-known mistake of using the formula xpy --> w(p)U(x) + w(1-p)U(y). See their footnote 23 on p. 448, where they apparently think that the correct formula only applies to the new cumulative version, and p. 451 below Eq. 7.
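To make the difference concrete, here is a minimal Python sketch that I add for illustration (not code from the paper); the Tversky & Kahneman 1992 parametric weighting function and power utility are assumed as examples:

```python
# Minimal sketch (my illustration, not the authors' code): compare the
# correct 1979/1992 evaluation of a two-outcome gain prospect xpy
# (x with probability p, else y; x > y > 0) with the incorrect separable form.

def w(p, gamma=0.61):
    """TK'92 probability weighting function (assumed example)."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

def u(x, alpha=0.88):
    """Power utility for gains (assumed example)."""
    return x**alpha

def pt_correct(x, p, y):
    # Correct for both 1979 OPT and 1992 CPT when x > y > 0:
    # the two decision weights w(p) and 1 - w(p) sum to one.
    return w(p) * u(x) + (1 - w(p)) * u(y)

def pt_incorrect(x, p, y):
    # The mistaken separable form: w(p) + w(1-p) need not equal 1.
    return w(p) * u(x) + w(1 - p) * u(y)

# With x = y the prospect is a sure outcome, which exposes the error:
print(pt_correct(10, 0.5, 10))    # equals u(10), as it should
print(pt_incorrect(10, 0.5, 10))  # 2*w(0.5)*u(10) < u(10): spurious underweighting
```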
equate risk aversion with concave utility under nonEU: p. 455 makes the same mistake as so many economists do, equating risk attitude with utility curvature when the working hypothesis is not EU but a nonEU model, prospect theory in this case. When on p. 455 the authors report the results of prospect theory (taking the Tversky & Kahneman 1992 parametric families), they discuss the dependence of the (power) utility parameter in detail, but of the probability weighting parameter they only report the average value of 0.83.
They test power utility U(x) = x^r/r but also translated power utility U(x) = (x+w)^r/r with w an extra parameter, but find only small values of w (p. 455) (they have no loss outcomes).
Footnote 31, p. 456, shows how far the authors got carried away in their interpretation that their coins with extra risk represent everything relevant in life outside the lab, including every possible background risk: they apparently feel it necessary to negate this suggestion and to explain that, for instance, for health outcomes things may be different than for their special coins …
P. 456 illustrates again how the authors got carried away with their mission: “Indeed, in transferring the insights gained in the laboratory with student subjects to the field, a necessary first step is to explore how market professionals behave in strategically similar situations.” [italics added here].
They measure probability weighting but use the RIS. %}
Harrison, Glenn W., John A. List, & Charles Towe (2007) “Naturally Occurring Preferences and Exogenous Laboratory Experiments: A Case Study of Risk Aversion,” Econometrica 75, 433–458.
{% Selten, Sadrieh, & Abbink (1999) argued against paying in probability of gaining a prize, but this paper tries to restore it. %}
Harrison, Glenn W., Jimmy Martínez-Correa, & J. Todd Swarthout (2013) “Inducing Risk Neutral Preferences with Binary Lotteries: A Reconsideration,” Journal of Economic Behavior and Organization 94, 145–159.
{% Selten, Sadrieh, & Abbink (1999) argued against paying in probability of gaining a prize, but this paper tries to restore it, as did their 2013 paper. %}
Harrison, Glenn W., Jimmy Martínez-Correa, & J. Todd Swarthout (2014) “Eliciting Subjective Probabilities with Binary Lotteries,” Journal of Economic Behavior and Organization 101, 128–140.
{% Selten, Sadrieh, & Abbink (1999) argued against paying in probability of gaining a prize, but this paper tries to restore it, as did their 2013 and 2014 papers. %}
Harrison, Glenn W., Jimmy Martínez-Correa, & J. Todd Swarthout (2015) “Eliciting Subjective Probability Distributions with Binary Lotteries,” Economics Letters 127, 68–71.
{% %}
Harrison, Glenn W., Jimmy Martínez-Correa, & J. Todd Swarthout (2015) “Reduction of Compound Lotteries with Objective Probabilities: Theory and Evidence,” Journal of Economic Behavior and Organization 119, 32–55.
{% %}
Harrison, Glenn W., Jimmy Martínez-Correa, J. Todd Swarthout, & Eric R. Ulm (2017) “Scoring Rules for Subjective Probability Distributions,” Journal of Economic Behavior and Organization 134, 430–448.
{% Ask subjects to rank mortality causes according to their believed likelihood. Give real payment according to how close the reported ranking is to the real statistical ranking.
real incentives/hypothetical choice: hypothetical ranking and real-incentive ranking give same results. %}
Harrison, Glenn W. & E. Elisabet Rutström (2006) “Eliciting Subjective Beliefs about Mortality Risk Ordering,” Environmental & Resource Economics 33, 325–346.
{% survey on nonEU: a comprehensive, colored review of measurements of risk attitudes.
Appendix F is (as of April 2009) the best reference for Harrison’s econometric Stata analysis technique.
Section 1.4 and Appendix D seem to criticize BDM (Becker–DeGroot–Marschak). %}
Harrison, Glenn W. & E. Elisabet Rutström (2008) “Risk Aversion in the Laboratory.” In James C. Cox & Glenn W. Harrison (eds.) Risk Aversion in Experiments, Research in Experimental Economics Vol. 12, Emerald, Bingley.
{% random incentive system: uses it but, to my regret, pays three choices to each subject (p. 138 beginning of §2) so that the main purpose of the system, avoiding income effects, is not served.
Fits a mixture model to the data, where the mixture is of PT (in fact SPT, as explained below; I from now on write SPT) and EU. EU and SPT are not nested because a different utility function is taken for EU ((s+x)^r, with x the lottery payment and s the prior endowment (losses from prior endowment mechanism) at the beginning of the experiment) than for SPT (x^r for gains and x^r′ for losses). P. 137 has a nice history of mixture models in other fields. They measure probability weighting but use the RIS.
Because the statistical techniques of the authors apparently can only handle strict-preference data, they, strangely enough, do not use the indifferences in their data, even though indifferences are more informative than preferences (p. 139 end of §2: “indifferences .. to simplify we drop those.”, with footnote 14 there: “For the specification of likelihoods of strictly binary responses, such observations add no information.”). If the technique cannot draw info from indifference, then this is a problem for the technique!
Unfortunately, what this paper calls prospect theory is not prospect theory, neither in the new (1992) version nor in the original (1979) version. The paper writes, incorrectly (p. 140): “There are two variants of prospect theory, depending on the manner in which the probability weighting function is combined with utilities. The original version proposed by Kahneman & Tversky (1979) posits some weighting function which is separable in outcomes, and has been usefully termed Separable Prospect Theory (SPT) by Camerer & Ho [1994, p. 185]. …”
True, 1979 OPT cannot be used for more than two nonzero outcomes. The separable Edwards-type version used here, as used by Camerer & Ho (1994), however, does not work at all for three or more outcomes, leading to great over- and underweighting and highly unrealistic violations of stochastic dominance. All the more reason to turn to the new 1992 version of prospect theory!
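A small sketch that I add for illustration (assumed TK’92 parametric forms, not code from the paper) of how the separable form fails for three outcomes while the rank-dependent 1992 form does not:

```python
# Sketch (my illustration): with three outcomes the separable (SPT/Edwards)
# form can violate stochastic dominance; the rank-dependent 1992 form cannot.

def w(p, gamma=0.61):
    """TK'92 probability weighting function (assumed example)."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

def u(x, alpha=0.88):
    return x**alpha

def spt(prospect):
    """Separable form: each probability weighted in isolation."""
    return sum(w(p) * u(x) for x, p in prospect)

def rdu(prospect):
    """Rank-dependent form (CPT for gains): weight cumulative probabilities."""
    ranked = sorted(prospect, key=lambda xp: xp[0], reverse=True)
    value, cum = 0.0, 0.0
    for x, p in ranked:
        value += (w(cum + p) - w(cum)) * u(x)
        cum += p
    return value

A = [(10, 2/3), (0, 1/3)]               # stochastically dominates B
B = [(10, 1/3), (9.99, 1/3), (0, 1/3)]
print(spt(A), spt(B))   # SPT values B ABOVE the dominating A (w is subadditive)
print(rdu(A), rdu(B))   # RDU correctly values A above B
```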
They suggest that 60 choice questions is about the maximum that can be asked in one experiment.
The mixture model is WITHIN each subject and within each choice. That is, there is a mixture probability π. Consider a single subject. We specify both an EU model and an SPT model for this one subject (specify means choosing a utility function, a probability weighting function, and a loss aversion parameter, as the case may be). For each choice, there is a probability π such that the subject does EU with probability π and SPT with probability 1 − π. All choices within the subject are independent here. (Later an error theory will be added where the errors for different choices of one subject are related, so that within a subject in that sense there is no complete independence, but this only concerns the choice error and not the theory choice.) Thus the subject is not described by one model, but has a dual self. It is a bit like quantum mechanics, where properties such as the location of a particle may be a probability distribution over locations that in no way can be pinned down deterministically. Conte, Hey, & Moffat (2007) consider a between-subject mixture where a subject with probability π is EU and then does EU for all choices, and with probability 1 − π is SPT and then does SPT for all choices.
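For concreteness, a minimal sketch of the within-subject mixture likelihood just described (my reconstruction under simplifying assumptions — a simple Fechner/logit choice error — and not the authors' exact econometric specification, which is more elaborate):

```python
# Sketch of a within-subject mixture likelihood: for EACH choice the subject
# is EU with probability pi and SPT with probability 1 - pi.
import math

def choice_prob(v_left, v_right, mu=0.1):
    """Fechner/logit choice probability from two prospect evaluations."""
    return 1.0 / (1.0 + math.exp(-(v_left - v_right) / mu))

def mixture_loglik(choices, pi, eu_value, spt_value, mu=0.1):
    """choices: list of (left, right, chose_left); eu_value and spt_value
    evaluate a prospect under the fully specified EU and SPT models."""
    ll = 0.0
    for left, right, chose_left in choices:
        p_eu = choice_prob(eu_value(left), eu_value(right), mu)
        p_spt = choice_prob(spt_value(left), spt_value(right), mu)
        p_left = pi * p_eu + (1 - pi) * p_spt  # mixing within each choice
        ll += math.log(p_left if chose_left else 1.0 - p_left)
    return ll
```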
I would actually interpret the approach of this paper as a representative agent, because the same mixture model with the same parameters is fit to each subject. It is indeed not one fixed model, the same for all subjects, but it is a mixture of two models, the same for all subjects.
The authors find that a mix of EU and SPT works well and, hence, the funeral is for the representative agent. One can reinterpret it as a resurrection of the representative agent, where we only need two of them.
If they fit SPT with T&K’92 parametric families and with a representative agent, then they find loss aversion of about 1.38. If they fit a mixture model, with about half the subjects EU and about half SPT, then for the SPT subjects a loss aversion parameter of 5.78 results. A problem then is that the power for losses differs from the power for gains, so that loss aversion is not well defined. Probability weighting has parameter γ = 0.91 if not fit as a mixture model.
The intro, p. 136, writes that the primary methodological contribution is … co-existence of EUT and SPT …, but §1 describes many applications of mixture models used before in the literature, also in decision under risk (Wang & Fischbeck 2004). %}
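To see why loss aversion is not well defined here: with power α for gains and power β ≠ α for losses, the measured loss-aversion index depends on the unit of payment. A short derivation (a standard observation; the notation is mine):

```latex
% Utility with different powers for gains and losses:
%   U(x) = x^alpha for x >= 0,   U(x) = -lambda (-x)^beta for x < 0.
% Re-expressing money in cents, c = 100x, and renormalizing so that the
% gain part is again c^alpha, rescales the loss part:
\[
U(x)=
\begin{cases}
x^{\alpha}, & x \ge 0,\\
-\lambda(-x)^{\beta}, & x < 0,
\end{cases}
\qquad\Longrightarrow\qquad
\tilde U(c)=
\begin{cases}
c^{\alpha}, & c \ge 0,\\
-\lambda\,100^{\alpha-\beta}(-c)^{\beta}, & c < 0.
\end{cases}
\]
% The measured loss-aversion index thus changes by the factor
% 100^(alpha - beta) under a mere change of unit whenever alpha differs from beta.
```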
Harrison, Glenn W. & E. Elisabet Rutström (2009) “Expected Utility Theory and Prospect Theory: One Wedding and a Decent Funeral,” Experimental Economics 12, 133–158.
{% backward induction/normal form, descriptive; random incentive system
The paper essentially redoes the test of Starmer & Sugden (1991 AER) for probability weighting, but, unlike S&S, finds differences. It is written in a misleading manner. First it claims, for instance in the abstract, that the RIS (the authors call it RLIM) needs EU and that, therefore, all researchers using the RIS to investigate probability weighting or other violations of EU are completely wrong (bipolar). But later it points out that the RIS can also be justified without EU. Even, in the 3rd para of p. 436, they state that they will continue to use the RIS themselves (as do all others in the field, in the absence of a better alternative), which they indeed do in all their other papers. Here is a detailed account:
The authors (H&S henceforth) criticize researchers who estimate deviations from expected utility (EU) but still use the RIS. This would be a valid criticism if those researchers were to defend RIS by assuming EU. Such people could be called bipolar, as proposed by this paper. But this does not happen. Researchers justify RIS assuming something often called isolation. H&S mention this, and counter that violations of a general isolation exist. But the point is, the researchers need not assume general isolation, but only for their particular stimuli, presented in ways that minimize the risk of violations of isolation. This is in fact what H&S do themselves. In the following text, take the first independence condition as just general independence giving EU, and the second as only isolation for the particular stimuli and presentation of the experiment considered. Then H&S write, on p. 436 3rd para:
“A final implication is to just be honest when presenting experimental findings on RDU and CPT models about the assumed neutrality of the experimental payment protocol. In effect this is just saying that there might be two independence axioms at work: one for the evaluation of a given lottery in a binary choice setting, and another one for an evaluation of sets of choices in 1-in-K settings. If one estimates RDU and CPT models with a 1-in-K protocol one might claim to be allowing the first axiom to be relaxed while maintaining the second. It is logically possible for the latter axiom to be empirically false while the former axiom is empirically true. In the absence of better alternatives, we do this in our own ongoing research using 1-in-K protocols.”
Another good reason for using the RIS, despite any problems it has, is that other mechanisms have only bigger problems. H&S in some places suggest letting each subject make only one single choice, but properly mention the drawbacks: (1) it is very expensive; (2) it gives too little info within any individual, so that one can only make inferences about group averages; and (3) the revealed data may in fact be of low quality because subjects should learn the stimuli before revealing their true preferences. H&S (p. 435) also suggest alternative procedures by Cox et al. (2011), now appeared as Cox, Sadiraj, & Schmidt (2015 EE), which concern for instance paying all choices or the average over all choices. I add here that those procedures have obvious problems. In the second, subjects know beforehand that they will get about the average payoff, and that whatever choice they make affects their payoff very little.
To check that the first author himself invariably uses the RIS, also when measuring probability weighting (I had to do this for another purpose), I typed the search words
Glenn Harrison probability weighting
into google scholar on 8 March 2018 and then checked out the five most cited references:
Harrison, Glenn W. & E. Elisabet Rutström (2009) “Expected Utility Theory and Prospect Theory: One Wedding and a Decent Funeral,” Experimental Economics 12, 133–158.
P. 138: “After all 60 lottery pairs were evaluated, three were selected at random for payment.” [small variation of RIS] Figure 6 last panel reports results on probability weighting.
Harrison, Glenn W., John A. List, & Charles Towe (2007) “Naturally Occurring Preferences and Exogenous Laboratory Experiments: A Case Study of Risk Aversion,” Econometrica 75, 433–458.
P. 439: “The subject chooses A or B in each row, and one row is later selected at random for payout for that subject.” P. 455: “The probability weighting parameter γ is estimated to be 0.83”
Harrison, Glenn W., Steven J. Humphrey, & Arjen Verschoor (2010) “Choice under Uncertainty: Evidence from Ethiopia, India and Uganda,” Economic Journal 120, 80–104.
“At the end of the experiment one of the eight tasks was selected at random for each subject and the lottery chosen in that task was played-out for real money.” Figure 2, p. 90, reports results on probability weighting.
Andersen, Steffen, John Fountain, Glenn W. Harrison, & E. Elisabet Rutström (2014a) “Estimating Subjective Probabilities,” Journal of Risk and Uncertainty 48, 207–229.
P. 213: “One choice was selected to be paid out at random after all choices had been entered.” P. 219 Figure 3 reports results on probability weighting.
Andersen, Steffen, Glenn W. Harrison, Morten I. Lau, & E. Elisabet Rutström (2014b) “Discounting Behavior: A Reconsideration,” European Economic Review 71, 15–33.
P. 21: “There were 40 discounting choices and 40 risk attitude choices, and each subject had a 10% chance of being paid for one choice on each block.” [small variation of RIS] P. 24: “We model lottery choices behavior using a Rank-Dependent Utility (RDU) model, since all choices were in the gain frame, and find evidence of probability weighting …”
I also checked out a recent (at this moment of writing, 8 March 2018) study co-authored by the first author:
Andersen, Steffen, James C. Cox, Glenn W. Harrison, Morten I. Lau, E. Elisabet Rutström, & Vjollca Sadiraj (2018) “Asset Integration and Attitudes to Risk: Theory and Evidence,” Review of Economics and Statistics, forthcoming.
A footnote writes: “For each type of decision task the subjects had a 10% chance of getting paid. If they were paid in the part of the experiment analyzed, one of the 60 decision tasks was randomly selected and the chosen lottery was played out for payment.” The conclusion writes: “we find evidence of modest probability weighting”
It is weird that in the beginning the authors do not acknowledge the ways to reconcile the RIS with violations of EU, as properly written in many places later in their paper, but misleadingly write the opposite, overly eager to score their point on claimed bipolarity. Here is the beginning of their abstract:
“If someone claims that individuals behave as if they violate the independence axiom (IA) when making decisions over simple lotteries, it is invariably on the basis of experiments and theories that must assume the IA through the use of the random lottery incentive mechanism (RLIM). We refer to someone who holds this view as a Bipolar Behaviorist, exhibiting pessimism about the axiom when it comes to characterizing how individuals directly evaluate two lotteries in a binary choice task, but optimism about the axiom when it comes to characterizing how individuals evaluate multiple lotteries that make up the incentive structure for a multiple-task experiment.” [italics from original; bold added]
This text directly contradicts the 3rd para on p. 436, where they write that they themselves will continue to use the RLIM. Therefore, the term bipolar applies to the authors themselves. %}
Harrison, Glenn W. & J. Todd Swarthout (2014) “Experimental Payment Protocols and the Bipolar Behaviorist,” Theory and Decision 77, 423–438.
{% Subjects usually prefer new medical treatments over existing ones. But if it is pointed out to them that the new treatment involves more ambiguity about downsides, then this preference disappears. %}
Harrison, Mark, Carlo A. Marra, & Nick Bansback (2017) “Preferences for ‘New’ Treatments Diminish in the Face of Ambiguity,” Health Economics 26, 743–752.
{% discounting normative: argues that discounting is irrational. Unfortunately, the author uses complete discounting, where the future is completely ignored, as a straw man, and most of his paper only argues against that. As usual with philosophical writings, clarification and abbreviation could have been obtained by formal notation. The author points out (e.g. p. 47) that discounting often does not result from time per se but from other factors such as uncertainty. He compares discounting of the future with discounting of the past, and direct “psychological” utility with utility derived from future consequences, even if after one’s death. P. 56, next-to-last paragraph, brings up a good argument, namely that “reason” (something like normative appropriateness) should be irrespective of time. This argument amounts to dynamic consistency (forgone-branch independence): the optimal decision should not depend on the time point of decision. %}
Harrison, Ross (1981–1982) “Discounting the Future,” Proceedings of the Aristotelian Society 82, 45–57.
{% cancellation axioms: the authors show that in the absence of completeness, the weakest version of cancellation is really weaker than some other versions. %}
Harrison-Trainor, Matthew, Wesley H. Holliday, & Thomas F. Icard III (2016) “A Note on Cancellation Axioms for Comparative Probability,” Theory and Decision 80, 159–166.
{% discounting normative: seems to argue so. %}
Harrod, Roy F. (1948) “Towards a Dynamic Economics: Some Recent Developments of Economic Theory and Their Application to Policy.” Macmillan, London.
{% Uses the veil of ignorance, mentioned before by Vickrey (1945, p. 329). The term veil of ignorance seems to have been introduced only later, by Rawls. People should accept a social arrangement independently of the position they will have in it. Everyone should be able to imagine that the positions will be exchanged one day. Thus, it should be guided by a probability distribution over these positions. From this Harsanyi derives that welfare cardinal utility is equal to risky cardinal utility.
risky utility u = strength of preference v (or other riskless cardinal utility, often called value): Harsanyi derives that from his result. %}
Harsanyi, John C. (1953) “Cardinal Utility in Welfare Economics and in the Theory of Risk-Taking,” Journal of Political Economy 61, 434–435.
{% Individual utility of a social state is consequentialistic in the sense that it can depend on the commodity bundles of the other individuals, equity in the social state, etc. The latter is described as “owing to external economies and diseconomies” (e.g. p. 311 Footnote 5).
P. 313 l. 2/3 of first column claims that EU is normative.
P. 315: the individual utilities to be aggregated should be the subjective ones, not the ethical ones.
P. 316: veil of ignorance has equal chances to end up in each position.
P. 317 suggests the term “principle of unwarranted differentiation”: if you have observed everything of two individuals that you can think of, and it was all identical, then you can assume that they have the same level of utility. A nice term!
A nice paradox that I like to give to Ph.D. students: let X be the set of social states, Ui : X --> Re the utility of individual i, n the number of individuals, and W : X --> Re the utility of society. Harsanyi only assumes expected utility for individuals and society (postulates A and B), and Pareto optimality (postulate C); i.e., society is indifferent between two prospects over social states if all individuals are indifferent. How is it possible that this rules out equity considerations, and generates utilitarianism W(x) = a1U1(x) + … + anUn(x)? Pareto optimality is completely harmless and self-evident, and so are the expected utility assumptions. Harsanyi’s paradox! Assume richness; i.e., for every n-tuple of individual utilities, a social state exists that generates this n-tuple.
After a while, I add a hint: assume the above three postulates, and W(x) = U1(x) + … + Un(x) + Uj(x) where j is the individual with the lowest utility, i.e., Uj(x) ≤ Ui(x) for all i. W comprises some equity and clearly is not utilitarian, violating joint independence (for n=3 and coordinates denoting utilities, (1,3,0) ~ (2,2,0) but (1,3,4) ≺ (2,2,4)). Which axiom of Harsanyi is violated??
Answer: Pareto optimality is violated. For n=2, with social states denoted as pairs of individual utilities, 0.5 a probability, and prospects written in the xpy notation, the prospect (1,1)0.5(0,0) is strictly preferred to the prospect (1,0)0.5(0,1) by society, but both individuals are indifferent.
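A quick numeric check of these counterexamples (a sketch I add, with W = sum + min as in the hint above):

```python
# Numeric check of the Harsanyi-paradox examples above (my sketch):
# society's W adds a "min" (equity) term to the utilitarian sum.

def W(state):
    return sum(state) + min(state)

def expected(value, states, p=0.5):   # 50-50 prospect over two social states
    return p * value(states[0]) + (1 - p) * value(states[1])

# Pareto violation (n = 2): each individual gets 1 with probability 0.5
# under both prospects, so both individuals are indifferent, yet:
print(expected(W, [(1, 1), (0, 0)]))  # 0.5*(2+1) + 0.5*0 = 1.5
print(expected(W, [(1, 0), (0, 1)]))  # 0.5*1 + 0.5*1 = 1.0

# Joint-independence violation (n = 3) from the hint:
print(W((1, 3, 0)), W((2, 2, 0)))     # 4 4: society indifferent
print(W((1, 3, 4)), W((2, 2, 4)))     # 9 10: strict preference
```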
Pareto optimality is strong. It implies that for society the evaluation of a prospect over n-tuples of individual utilities depends only on the marginal distributions and not on correlations etc., which is Fishburn’s (1965) additive independence condition. This implies additive decomposability of W and rules out equity considerations. It also follows that Anscombe & Aumann (1963) is a corollary of Harsanyi (1955).
All these classical theorems are corollaries of a mathematical result, stated as follows by Wakker (1992, Economic Theory): “A linear function is a function of linear functions if and only if the linear function is a linear function of the linear functions.” %}
Harsanyi, John C. (1955) “Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility,” Journal of Political Economy 63, 309–321.
{% %}
Harsanyi, John C. (1968) “Games with Incomplete Information Played by “Bayesian” Players, Parts I, II, III,” Management Science 14, 159–182, 320–334, 486–502.