Bibliography



{% linear utility for small stakes: they make this assumption for pragmatic reasons.
The authors conjecture (p. 72 penultimate paragraph) that their deviating findings may be due to their stimuli of risky versus riskless choices, claiming that this differs from almost all prior work. This is not so: Tversky & Kahneman (1992) and many others also considered such choices (choice rather than WTP). %}

Harbaugh, William T., Kate Krause, & Lise Vesterlund (2002) “Risk Attitudes of Children and Adults: Choices over Small and Large Probability Gains and Losses,” Experimental Economics 5, 53–84.


{% equate risk aversion with concave utility under nonEU: p. 597: unfortunately, they use the term risk neutral for linear utility, also under PT, even though with linear utility there can then still be large deviations from risk neutrality due to probability weighting. They mention that only a few studies have tested the fourfold pattern using choices. The following search keywords in this bibliography can give related references:
concave utility for gains, convex utility for losses;
Risk averse for gains, risk seeking for losses

PT falsified
risk seeking for small-probability gains

P. 598 last para explains why their 2002 study is so unique.
losses from prior endowment mechanism: subjects received $22 at the beginning; well, it was put on a table in front of them and apparently not yet put in their pockets. They might have to pay back from that.
random incentive system: each subject was paid twice, so there is an income effect. When they played their first choice they did not yet know that a second would come (p. 601 l. 6), so this can be taken as being without the income effect (but then with a minor deception) (deception when implementing real incentives). The second time they were, again, endowed with $22.
Although the pricing tasks confirm the 4-fold pattern, I find it hard to interpret the stimuli and results. Subjects had to pay their WTP to get a gain prospect, so that losses could be involved and it was not really a gain prospect. The authors point this out in footnote 8 (p. 599) and discuss it more in §5, but nevertheless analyze what they call gain prospects as if they were gain prospects. A further complication is that, with the prior endowment put on the table before them, it is not clear to me whether subjects integrated or not, took it as house money or not, and so on.
P. 602 writes that loss aversion can explain that for losses the WTP in absolute value was usually found to be larger than for gains. If subjects took the prospects as the authors analyze and describe them (gain-prospects and loss-prospects), then there would be no mixed prospects and loss aversion would have no role to play. (loss aversion: erroneously thinking it is reflection)
Pp. 602-603 find relations at the individual level between gain- and loss-attitudes, unlike Cohen, Jaffray, & Said (1987), who found no relation.
In the choice task, subjects chose between prospects and their expected values, again endowed with $22 that was not given to them but put on the table before them. They found mostly nonsignificant deviations from EV, and the deviations all went opposite to the 4-fold pattern. I find it hard to assess the effect of the prior endowment mechanism, though. Much of this evidence goes not only against PT, but against any theory we know.
In some places the authors put forward dual-self theories when discussing their results. %}

Harbaugh, William T., Kate Krause, & Lise Vesterlund (2010) “The Fourfold Pattern of Risk Attitudes in Choice and Pricing Tasks,” Economic Journal 120, 595–611.


{% Soft discussion about HP-testing %}

Harcum, E. Rae (1990) “Distinction between Tests of Data or Theory: Null versus Disconfirming Results,” American Journal of Psychology 103, 359–366.


{% foundations of statistics, many nice references %}

Harcum, E. Rae (1990) “Deficiency of Education Concerning the Methodological Issues in Accepting Null Hypotheses,” Contemporary Educational Psychology 5, 199–211.


{% PT, applications, loss aversion: seem to find asymmetric price elasticities. %}

Hardie, Bruce G., Eric J. Johnson, & Peter S. Fader (1993) “Modeling Loss Aversion and Reference Dependence Effects on Brand Choice,” Marketing Science 12, 378–394.


{% %}

Hardin, Curtis & Michael H. Birnbaum (1990) “Malleability of ‘Ratio’ Judgments of Occupational Prestige,” American Journal of Psychology 103, 1–20.


{% Use hypothetical choice, defended on the basis of large outcomes and losses, something that I agree with.
Find that a fixed cost for delay, both for gains and losses, and independent of outcome magnitude, explains much; for instance, it explains a bias, confirmed empirically, to prefer immediate losses to future losses, whereas classical theories predict the opposite. %}

Hardisty, David J., Kirstin C. Appelt, & Elke U. Weber (2013) “Good or Bad, We Want It Now: Fixed-Cost Present Bias for Gains and Losses Explains Magnitude Asymmetries in Intertemporal Choice,” Journal of Behavioral Decision Making 26, 348–361.


{% Study intertemporal choice, for money, health, and environment, with delays of 0, 1, or 10 years. Use hypothetical choice which I think is best for such intertemporal studies.
For money they assume linear utility, and for health and environment they take the number of days (or weeks) of exposure to some gain or loss as the unit, of which utility is taken linearly, just as for money, when calculating discounting. They find that discounting is similar for money, health, and environment (maybe somewhat more discounting for health gains and somewhat less for health losses), so that this aspect of outcomes does not matter much. But the sign of the outcome (“valence”) matters much, with gains discounted way more strongly than losses.
P. 330 column 1 makes the strange claim that the dominant “rational-economic” assumption is that risk attitude should be independent of the outcome. I think, however, that no economist will think that utility should be the same for money, wine, life years, and the exponential of money. The authors add a clause, “after adjusting for differences in the marginal value of outcomes in different domains,” but it is unclear what that marginal value is other than utility, and adjusting for utility gives expected value, so risk neutrality, if I understand right. Maybe they think of probability weighting, with this claimed to be the same across domains?
To fit the data, they use hyperbolic discounting 1/(1+kt), with k the discount parameter. They find strong discounting for gains, with $250 today equivalent to $337.50 next year, and weak discounting for losses, with losing $250 today equivalent to losing $265 next year (p. 332). Correlations between gains and losses were weak. %}
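As a quick check of this arithmetic, here is a minimal sketch (Python; the indifference values are the ones reported above) that backs out the hyperbolic parameter k from an indifference between an immediate and a one-year-delayed amount:

```python
# Hyperbolic discounting: D(t) = 1/(1 + k*t).
# An indifference between x now and y at delay t implies
#   x = y/(1 + k*t),  hence  k = (y/x - 1)/t.

def hyperbolic_k(x_now, y_delayed, t_years):
    """Back out k from the indifference x_now ~ y_delayed at delay t_years."""
    return (y_delayed / x_now - 1) / t_years

print(hyperbolic_k(250.0, 337.50, 1))  # gains:  k = 0.35
print(hyperbolic_k(250.0, 265.00, 1))  # losses: k = 0.06
```

The two k values reproduce the strong-for-gains, weak-for-losses asymmetry described above.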

Hardisty, David J. & Elke U. Weber (2009) “Discounting Future Green: Money versus the Environment,” Journal of Experimental Psychology: General 138, 329–340.


{% bisection > matching: they measure discounting using matching, a choice list (they call it fixed-sequence choice titration), and bisection (they call it dynamic “staircase”), and compare and discuss these. Matching better fits hyperbolic discounting. Choice lists better predict real choices. The authors are negative on bisection.
End of §1.1, p. 3: the authors study discounting for periods taking up to 50 years. They use hypothetical choice. They properly motivate this, and I agree:
“Studying the discounting of complex outcome sets on long timescales can be logistically difficult in the lab, if the goal is to make choices consequential: tracking down past participants in order to send them their “future” payoffs is hard enough one year after a study, but doing so in 50 years may well be impossible. Truly consequential designs are even trickier when studying losses, since they require researchers to demand long-since endowed money from participants who may not even remember having participated in the study. Fortunately, hypothetical delay-discounting questions presented in a laboratory setting do appear to correlate with real-world measures of impulsivity such as smoking, overeating, and debt repayment (Chabris et al., 2008; Meier & Sprenger, 2012; Reimers et al., 2009), suggesting that even hypothetical outcomes are worth studying.”
As do Ariely, Loewenstein, & Prelec (2001), they use the nice term “coherent arbitrariness” for coherent choices that are coherent biases rather than coherent genuine preference. It is what Loomes, Starmer, & Sugden (2003 EJ) call the shaping hypothesis. Methods that can elicit more inconsistencies/noise can be good. The authors use the nice term “ability to detect inattentive participants” for it.
utility = representational?: although the authors do not really get into that, the term coherent arbitrariness nicely indicates disagreement with coherentism. %}

Hardisty, David J., Katherine F. Thompson, David H. Krantz, & Elke U. Weber (2013) “How to Measure Time Preferences: An Experimental Comparison of Three Methods,” Judgment and Decision Making 8, 214–235.


{% All comments below refer to 2nd edn.
Watch out that these authors use the term convex to designate only midpoint convexity. I will use the term in the usual way below.
Section 2.20, the definition of average, reminds me of Blackwell’s theorem, but I will not try to check out the link now.
Ch. 3: considers probability-contingent prospects (q1:x1,…,qn:xn) with all qj’s positive and summing to 1 and the xj’s real-valued. They take the prospects as abstract mathematical objects and never refer to probabilities or anything. I could not find out from the text whether they assume n variable or fixed. Most theorems and proofs seem to hold for both, as long as n, if fixed, is at least 2. What they call means are what DUR calls certainty equivalents under expected utility with possibly nonlinear utility U. Theorem 82 shows that the CE (certainty equivalent) is uniquely determined if U is continuous and strictly monotonic. Theorem 83 shows that CEs (certainty equivalents) determine U uniquely up to level and unit and sign of unit. P. 67 bottom states that we can always take U strictly increasing. (For just CEs it does not matter if we take U or −U.)
Theorem 84 shows that the CE is homogeneous, which is equivalent to constant relative risk aversion (CRRA), if and only if U is of the log-power family! This precedes Pfanzagl (1959) and others.
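Restated in CE notation (my paraphrase, not the book’s own formulation), the content of Theorem 84 is:

```latex
\mathrm{CE}(\lambda F) = \lambda\,\mathrm{CE}(F)\ \ \text{for all } \lambda > 0
\quad\Longleftrightarrow\quad
U(x) \in \bigl\{\, x^{\alpha}\ (\alpha > 0),\ \log x,\ -x^{\alpha}\ (\alpha < 0) \,\bigr\}
```

up to level and unit; i.e., U belongs to the log-power (CRRA) family.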
Theorem 85, and also Theorem 92, show the Pratt-Arrow-Yaari result that EU under U has lower certainty equivalents than EU under V iff U is a concave transformation of V. Theorem 243 extends this to nonsimple distributions.
Section 3.5-3.8 give results on convex functions that are useful in decision theory (midpoint convexity and the like).
Section 3.15 compares sums instead of averages, and Section 3.17 compares sets (I am not sure but maybe this book lets set refer to n-tuples).
Section 3.16 has all kinds of results on concavity of higher derivatives, that might be related to prudence.
Observation 88 in §3.7 (p. 73 in 2nd edn.) gives a beautiful result on convexity (full-force, and not just midpoint convexity) for continuous functions: they are convex as soon as for each pair of arguments there exists an argument in between them for which the function is below the chord. Beautiful proof:
“Suppose that PQ is a chord, and R a point on the chord below the curve. Then there is a last point S on PR and a first point T on RQ in which the curve meets the chord: S may be P and T may be Q. The chord ST lies entirely below the curve, contradicting the hypothesis.”
Observation 111, §3.18 (p. 91) shows that on any open interval, midpoint convexity plus boundedness on some nondegenerate subinterval imply continuity and full convexity on the whole open interval.
They refer to Jessen (1931) and M. Riesz (1927) for this result.
Theorem 215 gives the von Neumann-Morgenstern EU axiomatization if certainty equivalents exist!! The domain is the set of all simple prospects over ℝ, as explained in §6.19. The necessary and sufficient conditions for EU with a continuous strictly increasing utility U are:
[1] CE(x) = x;
[2] Strict stochastic dominance;
[3] CE(F) = CE(F*) ==> CE(tF+(1-t)G) = CE(tF*+(1-t)G) for all 0 < t < 1.
Condition [3], called quasi-linearity on p. 161, is nothing other than the celebrated independence condition. Footnote a then cites three references, by Nagumo, Kolmogoroff, and … de Finetti (1931) “Sul Concetto di Media”! They then say that they follow de Finetti’s proof. Note how continuity of CE, and the vNM Archimedean axiom, all follow from the conditions, mostly from CE existence. P. 161 last two lines state uniqueness up to level and unit.
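A minimal numerical illustration of condition [3] (quasi-linearity) under EU; the prospects and the square-root utility are made up for illustration and are not from the book:

```python
import numpy as np

def ce(probs, outcomes, u=np.sqrt, u_inv=lambda v: v ** 2):
    """Certainty equivalent under EU with utility u (default: square root)."""
    return u_inv(np.dot(probs, u(np.asarray(outcomes))))

# Two prospects with equal CE (hence equal EU), plus an arbitrary third one.
F      = ([0.5, 0.5], [1.0, 9.0])   # EU = 2, CE = 4
F_star = ([1.0], [4.0])             # EU = 2, CE = 4
G      = ([0.3, 0.7], [16.0, 0.25])

def mix(t, A, B):
    """Probability mixture tA + (1-t)B of two prospects."""
    return [t * p for p in A[0]] + [(1 - t) * q for q in B[0]], A[1] + B[1]

t = 0.4
print(ce(*mix(t, F, G)), ce(*mix(t, F_star, G)))  # equal, as [3] requires
```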
Theorem 216: velocity averaged by time is less than velocity averaged by distance.
Theorem 236 (p. 168): defines comonotonicity, called similarly ordered there.
Theorems 249 and 250 show that second-order stochastic dominance is necessary and sufficient for preferability under every concave utility function. This can be seen as follows: take a = 0, b = 1, let f be the generalized inverse of the distribution function F of a prospect that I will denote F, and let g be the generalized inverse of the distribution function G of a prospect that I will denote G. Then the integral from 0 to 1 of f is EV(F), and the integral from 0 to 1 of psi(f) is the EU of F under utility function psi. The inequality of integrals written in the beginning means that F is preferred to G under every convex utility. The necessary and sufficient condition is that F and G have the same expected value and every above truncation of the two at level y has higher expectation under F than under G. A discrete analog is in Theorem 108. That theorem compares n-fold sums. We can as well take averages and then have equal-probability lotteries, which capture all rational-probability lotteries. Then the majorization amounts to 2nd-order stochastic dominance, I guess, but I did not try to check more. %}
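A minimal sketch, in Python, of the truncation test just described for discrete prospects; the prospects and the concave utilities are made up, and the finite grid is a numerical stand-in for checking every truncation level y:

```python
import numpy as np

def e_min(outcomes, probs, y):
    """E[min(X, y)]: expectation of the prospect truncated above at y."""
    return float(np.dot(probs, np.minimum(outcomes, y)))

# Two equal-mean prospects; F is a mean-preserving contraction of G,
# so F should dominate G by second-order stochastic dominance (SSD).
F_x, F_p = np.array([4.0, 6.0]), np.array([0.5, 0.5])   # mean 5
G_x, G_p = np.array([1.0, 9.0]), np.array([0.5, 0.5])   # mean 5

grid = np.linspace(0.0, 10.0, 101)  # stand-in for "all levels y"
ssd = all(e_min(F_x, F_p, y) >= e_min(G_x, G_p, y) for y in grid)
print("F SSD-dominates G:", ssd)  # True

# Cross-check: every concave utility should then weakly prefer F to G.
for u in (np.sqrt, np.log1p, lambda x: -np.exp(-x)):
    print(np.dot(F_p, u(F_x)) >= np.dot(G_p, u(G_x)))  # True each time
```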

Hardy, Godfrey H., John E. Littlewood, & George Pólya (1934) “Inequalities.” Cambridge University Press, Cambridge. (2nd edn. 1952, reprinted 1978.)


{% Shows how errors in choice can affect choice paradoxes. %}

Harin, Alexander (2012) “Data Dispersion in Economics (I)—Possibility of Restrictions,” Review of Economics & Finance 2, 59–70.


{% %}

Harin, Alexander (2012) “Data Dispersion in Economics (II)—Inevitability and Consequences of Restrictions,” Review of Economics & Finance 2, 24–36.


{% PT falsified; They ask subjects introspective questions about the values of positive and small negative amounts. For small amounts they find stronger evaluations of positive amounts, deviating from loss aversion. For large amounts they find loss aversion. Experiment 1: how nice/unnice is it to gain/lose money. Experiment 2 repeats it for money gained/lost against a bookmaker. A control question could have been how happy subjects feel if they neither gain nor lose, so as to determine the value of the reference point and whether it is really the neutrality point of the scale the authors use.
Another aside is that loss aversion may be due to the overweighting of the loss experience/anticipation and not to the experience itself.
risk seeking for symmetric fifty-fifty gambles: experiment 3 asks for x such that (x, p; y) ~ (a, p; b) (not incentivized).
A problem with small amounts is that distorting factors such as joy of playing and framing are decisive. %}

Harinck, Fieke, Eric van Dijk, Ilja van Beest, & Paul Mersmann (2007) “When Gains Loom Larger than Losses,” Psychological Science 18, 1099–1105.


{% Not easy to see whether there is more risk aversion for gains than risk seeking for losses, e.g., because of different prizes. %}

Harless, David W. (1992) “Predictions about Indifference Curves inside the Unit Triangle: A Test of Variants of Expected Utility Theory,” Journal of Economic Behavior and Organization 18, 391–414.


{% %}

Harless, David W. (1992) “Actions versus Prospects: The Effect of Problem Representation on Regret,” American Economic Review 82, 634–649.


{% error theory for risky choice;
results are sensitive to the specifications of the respective theories that were chosen, for instance to whether convexity and concavity are taken as strict or weak. For RDU/PT the most relevant specification, i.e., that of inverse-S weighting functions, was not investigated.
losses from prior endowment mechanism: real payments with losses are implemented by subtraction from prior endowment. Further comments on this are on p. 1281.
EU is quite good for same supports, but is very bad for different supports (then it is dominated by either nonEU or EV).
The study deliberately avoids mixed gambles (Camerer, March 2002, personal communication) and, therefore, does not consider loss aversion. This means that one aspect at which prospect theory excels is excluded from the game!
P. 1263 claims that the average inconsistency rate is 15–25%, and gives references for it.
P. 1276 real incentives/hypothetical choice [italics from original]: paying participants appears to lower the error rate, increasing rejection of EU and many other theories rather than inducing conformity to them; P. 1281: no other differences between real and hypothetical payments.
P. 1268 (also 1281, 1282): EU violations in the interior of the triangle are less, but do not disappear.
P. 1281: no reflection for small gains and losses in the interior of the triangle; may be due to the real incentives where losses were subtracted from prior endowment, which for several/many? subjects means that they integrated payments and took these losses as gains. (Suggested in Footnote 24 on that page.)
P. 1281: curvature of indifference curves in the triangle depends on stakes.
P. 1285: nonlinear weighting of small probabilities is important (gives a citation of Morgenstern).
P. 1286: the authors give a piece of their mind to people who cling to EU. %}

Harless, David W. & Colin F. Camerer (1994) “The Predictive Utility of Generalized Expected Utility Theories,” Econometrica 62, 1251–1289.


{% %}

Harless, David W. & Colin F. Camerer (1995) “An Error Rate Analysis of Experimental Data Testing Nash Refinements,” European Economic Review 39, 649–660.


{% Got this reference from Ido Erev on September 5 1990 %}

Harley, Calvin B. (1981) “Learning the Evolutionarily Stable Strategy,” Journal of Theoretical Biology 89, 611–633.


{% Seems to argue for forward induction in game theory. %}

Harper, William L. (1986) “Mixed Strategies and Ratifiability in Causal Decision Theory,” Erkenntnis 24, 25–26.


{% Seems to argue for forward induction in game theory. %}

Harper, William L. (1991) “Ratifiability and Refinements in Two-Person Noncooperative Games.” In Michael Bacharach & Susan Hurley (eds.) Foundations of Decision Theory, 263–293, Basil-Blackwell, Oxford.


{% foundations of probability; foundations of statistics; Dutch book
Discuss matching probabilities and Dutch books, and their role in axiomatizations. But it brings in causal decision theory, and it is the philosophical style where no model is pinned down, making it more ambiguous but also more open to new ideas. %}

Harper, William, Sheldon J. Chow, & Gemma Murray (2012) “Bayesian Chance,” Synthese 186, 447–474.


{% foundations of statistics and foundations of probability %}

Harper, William L. & Clifford A. Hooker (1976, eds.) “Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science;” Vol. I, II, III. Reidel, Dordrecht.


{% Seems to have terms paramorph (model gives good empirical predictions without reflecting underlying process) and homeomorph (model also matches underlying process). %}

Harré, Rom (1970) “The Principles of Scientific Thinking.” MacMillan, London.


{% Investigates time preference for losses. For money discounting is positive (preference for deferring losses), but for other dreadful experiences it can be anything, and often is negative (prefer to have dreadful outcome soon). No relation between discounting for gains and for losses. They considered hypothetical choices (although there were questions about real experiences in Study 5). %}

Harris, Christine R. (2012) “Feelings of Dread and Intertemporal Choice,” Journal of Behavioral Decision Making 25, 13–28.


{% Adapt the well-known Exponential Euler Equation for the equilibrium path in intertemporal consumption to nonconstant, quasi-hyperbolic, discounting. A convex combination of βδ and δ replaces the classical discount factor. %}

Harris, Christopher & David Laibson (2001) “Dynamic Choices of Hyperbolic Consumers,” Econometrica 69, 935–957.


{% A sort of continuous extension of quasi-hyperbolic discounting, a variation of Jamison & Jamison’s (2011) split-rate quasi-hyperbolic discounting (not cited in this paper). Time is taken continuously. First, during some period, the “extended present” (my term), there is constant discounting (say the period during which the present self controls), but after it the discount function suddenly drops by a factor, while otherwise keeping the same exponential. There are some drawbacks to this model (see my comments there). The present paper varies this by taking the extended present to be random. A deterministic model would result if we took expected discounting as resulting from the above process and took that as deterministic, but no standard mathematical tools can be provided yet (p. 213 last para). I do not see whether or not it avoids the problem of Jamison & Jamison (2011). %}
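One way to formalize the verbal description above (my notation, not the paper’s): with an extended present of length τ, discounting is exponential up to τ and then drops once by a factor β; the paper’s variation makes τ random.

```latex
D(t)=
\begin{cases}
e^{-\rho t}, & 0 \le t \le \tau \quad\text{(extended present)},\\[2pt]
\beta\, e^{-\rho t}, & t > \tau, \qquad 0 < \beta < 1 .
\end{cases}
```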

Harris, Christopher & David Laibson (2013) “Dynamic Choices of Hyperbolic Consumers,” Quarterly Journal of Economics 128, 205–248.


{% Seems to argue that life duration is incommensurable with quality of life, and that one should never be traded for the other. %}

Harris, John (1987) “QALYfying the Value of Life,” Journal of Medical Ethics 13, 117–123.


{% Z&Z; elderly’s choices among health plans and supplemental insurance from Minneapolis–St. Paul Medicare health plan data. Statistical techniques to also estimate preferences on unobservable attributes. The authors use the term IIA not in the sense of social choice (Arrow 51), nor in the sense of individual-choice revealed preference (Nash 51, Arrow 59), but in the probabilistic-choice sense, as the central axiom of Luce (1959), where choice proportions are unaltered if third alternatives are dropped. %}

Harris, Katherine M. & Michael P. Keane (1999) “A Model of Health Plan Choice: Inferring Preferences and Perceptions from a Combination of Revealed Preference and Attitudinal Data,” Journal of Econometrics 89, 131–157.


{% %}

Harris, Lawrence (1991) “Stock Price Clustering and Discreteness,” Review of Finance 16, 1533–1597.


{% %}

Harris, Matthew C. & Jennifer L. Kohn (2017) “Reference Health and the Demand for Medical Care,” Economic Journal, forthcoming.


{% Points out that adaptive stimuli can distort incentive compatibility. Apparently BDM (Becker-DeGroot-Marschak) applied their method in an adaptive context and were unaware of the distortion mentioned. Then this paper measures certainty equivalents and risk attitude under EU in a nonadaptive way. %}

Harrison, Glenn W. (1986) “An Experimental Test for Risk Aversion,” Economics Letters 21, 7–11.


{% First obtains independent measurement of risk attitude, and then considers bargaining behavior of subjects. Discusses the issue of strategically reporting untrue risk attitude so as to improve the outcome of a bargaining game. %}

Harrison, Glenn W. (1986) “Risk Aversion and Preference Distortion in Deterministic Bargaining Experiments,” Economics Letters 22, 191–196.


{% Raises the “flat-payoff” criticism in the context of experiments by Smith, Walker, & Cox. Argues that Nash equilibrium payoff functions did not provide sufficient payoff saliency to observe deviations from equilibrium, or to distinguish risk-averse from risk-neutral bidders. It is a general difficulty with optimization problems that the payoff functions are flat near the optimum, so that small deviations from the optimum are punished little. Reassuring is that subjects often think long when choosing between options that are almost equivalent, where the value difference is only a few cents. Also reassuring can be, under single choice, that these few cents are earned in only a few seconds of work. The latter reassurance does not apply under the RIS, where the few cents difference concerns all efforts throughout the experiment. Harrison (2010, footnote 4, and in his earlier works) cites preceding works that raised the flat-payoff issue before, including von Winterfeldt & Edwards (1986, Chapter 11).
The data do suggest risk aversion.
Seems to criticize BDM. %}

Harrison, Glenn W. (1989) “Theory and Misbehavior of First-Price Auctions,” American Economic Review 79, 749–762.


{% Christiane, Veronika & I: discusses the issue of changing currency without changing values on p. 233. Mentions the nice term “numeraire illusion.”
real incentives/hypothetical choice: for moderate amounts ($5, $1, $0) 3 out of 20 subjects do Allais with real payment, 7 out of 20 with hypothetical. This difference is not significant.
Criticizes real-incentive experiments by Kahneman & Tversky in the sense that payments are too low, so that a wrong decision in each choice pair constitutes an expected loss of only some cents (the point raised before by Harrison 1989; for further discussion see my comments there).
Bayes-rule performance gets better with real payment and learning. %}

Harrison, Glenn W. (1994) “Expected Utility Theory and the Experimentalists,” Empirical Economics 19, 223–253.


{% real incentives/hypothetical choice: the topic of this paper. It reanalyzes Battalio, Kagel, & Jiranyakul (1990) and Kagel, MacDonald & Battalio (1990) at the individual level, finding that real incentives give more risk aversion for losses but less (rather than the commonly believed more) for gains. This is also found in the present paper by Harrison, analyzing data of Harrison & Rutström (2005) on hypothetical choice that were collected but not published.
parametric fitting depends on families chosen: p. 61 explains that findings of parametric fittings with error theory and maximum likelihood depend much on the parametric families and error theories chosen.
P. 62 nicely explains that, if unrealistic info is given to subjects in an experiment, then they will replace it with their own ideas about what is plausible.
P. 64: “In any event, the mere fact that hypothetical and real valuations differ so much tells us that at least one of them is wrong!” %}

Harrison, Glenn W. (2006) “Hypothetical Bias over Uncertain Outcomes.” In John A. List (ed.) Using Experimental Methods in Environmental and Resource Economics, 41–69, Elgar, Northampton, MA.


{% %}

Harrison, Glenn W. (2007) “Making Choice Studies Incentive Compatible.” In Barbara Kanninen (ed.) Valuing Environmental Amenities Using Stated Choice Studies: A Common Sense Guide to Theory and Practice, Springer, Dordrecht.


{% %}

Harrison, Glenn W. (2010) “The Behavioral Counter-Revolution,” Journal of Economic Behavior and Organization 73, 49–57.


{% I comment on the version of May 11, 2011.
This paper criticizes the statistical tests in the main text of
Abdellaoui, Mohammed, Aurélien Baillon, Laetitia Placido, & Peter P. Wakker (2011) “The Rich Domain of Uncertainty: Source Functions and Their Experimental Implementation,” American Economic Review 101, 695–723.
As one of the authors criticized, my role may be more that of someone involved than of an outside commentator. At any rate, I mostly use t-tests or Wilcoxon tests to test (in)equalities. I like them because they do not make any assumption about probabilistic error distributions within-subjects-between-stimuli. In particular, they do not assume those to be statistically independent. They only assume between-subject statistical independence, which I find more convincing.
Many econometric analyses do add assumptions about probabilistic error distributions within-subjects-between-stimuli, often assuming them independent. As usual, there are pros and cons, with different preferences in different fields. Harrison, however, only knows the latter econometric approach, says that one must specify within-subject errors, does not know that one can do without in t-tests and Wilcoxon tests, and claims that our tests are wrong for not doing what he knows. My cv on my homepage shows that I have a degree in mathematics with statistics as one specialization, and that until 1995 most of my teaching was in statistics, to mathematical, psychological, and medical students. I should know about t-tests! Harrison is effectively claiming that virtually every t-test used in the literature is wrong. He erroneously thinks that variables that, in his terminology, are estimates cannot be submitted to t-tests. In regressions as commonly used in econometrics, unlike in t-tests, it is often required that the independent variables have no errors. Maybe this is confusing Harrison. An alternative source of confusion may be that econometric analyses often impose error assumptions (often normality) on basic measurements, and then for derived concepts one has to investigate how the assumed errors propagate; one cannot just impose normal distributions on derived concepts. But we do not do anything of this kind.
Details: Abstract and many places; The criticism that we do not worry about sampling errors is because Harrison does not understand that we can avoid assumptions about within-subject errors.
P. 1 footnote 1: we do use calculations within subjects, getting indexes and parameters of utility and so on, sometimes based on minimizing squared distances. These are mathematical calculations and recodings of data. We do not assume any probabilistic theory and, in return, not any statistical claim is associated with these within-subject calculations. The results of such calculations can be submitted to (between-subject) t-tests or Wilcoxon tests. Not any speculation on within-subject errors needs to be made for that. (Errors there will contribute to variance of the t-statistic, but this variance is handled properly.) Harrison confuses recodings of data with estimations-endowed-with-statistical-claims.
A didactical example to clarify the difference between calculation/recoding and statistical estimation: imagine that one wants to investigate whether the relative density (weight per volume unit) of men is bigger than that of women. One measures the body weight and also the body volume of every person in a representative sample. Then one does a mathematical calculation and recoding, not a statistical estimation, by calculating the ratio of weight per volume for every person. Then one uses a t-test to compare those ratios. Glenn’s view is that this is wrong, that our ratio taking was a statistical estimation, that we have not specified the errors involved in this process, and so on. Would he want to forbid ever using a t-test to test relative density? Maybe he adds a reference to the statistical principle that estimations should be based on more than two observations (our weight-per-volume was calculated using only two observations, weight and volume), and that doing it with only two observations is too unreliable. Would he then want to forbid, worldwide, anyone from ever calculating the relative density of any human being? Anyway, he is just confusing general calculations/recodings with statistical estimations.
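To make the didactical example concrete, here is a minimal sketch (Python, with made-up data; it assumes scipy is available) of the recode-then-test procedure: the per-person ratio is a plain calculation, and only the between-subject t-test carries statistical claims.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40  # hypothetical sample size per group

# Made-up measurements: body weight (kg) and body volume (liters).
weight_men,   volume_men   = rng.normal(80, 10, n), rng.normal(76, 10, n)
weight_women, volume_women = rng.normal(65,  8, n), rng.normal(63,  8, n)

# Recoding, not estimation: one density value per person.
density_men   = weight_men / volume_men
density_women = weight_women / volume_women

# The only statistical step: a between-subject t-test on the recoded
# values; no within-subject error model is assumed anywhere.
t_stat, p_val = stats.ttest_ind(density_men, density_women)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```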
P. 4 2nd para: the random incentive system assumes isolation, which is one implication of independence (and a dynamic principle). Independence (+ dynamic) is sufficient, but not necessary, for validity of the random incentive system. Harrison misunderstands this point. Bardsley et al. (2010) explain the point well.
P. 4 footnote 6: This specification, rather than the main text, is required. Comparisons across different sources are not to be done directly through utility values (which are from different scales) but through certainty equivalents.
P. 6 & Table 1 do within-subject statistical tests for every subject. Unlike with us, errors within-subject-between-stimuli are assumed independent here. Our design was not made for this purpose, and the choices per subject are too few to get statistical conclusions this way. (Another problem is that statistical conclusions are inflated because the choices of one individual are not really independent according to my preferred views.) Table 1 indeed shows no statistical power. Harrison blames our design for it rather than his unfortunate test.
Pp. 6-7 criticize the semi-parametric fitting introduced by Abdellaoui (who has a degree in econometrics). The method first does parametric fitting to obtain a power utility function. Then, in the second stage, it uses that to estimate the things that interest us most: the event weighting functions. And the latter then is nonparametric. This two-stage way emphasizes that for the weighting function no parametric assumption whatsoever is made. For this reason, the first-stage estimates of w(0.5) are not used in the second stage (another thing criticized by Harrison on p. 7). In addition, Abdellaoui uses this method to stay close to techniques in decision analysis. The procedure here is within-subject, with no probabilistic assumption or statistical conclusion made at that stage. Again Harrison confuses calculations with statistical estimations by criticizing our absence of statistical assumptions/conclusions. The whole rest of the paper keeps confusing calculations and estimations, and between- and within-subject errors. %}

Harrison, Glenn W. (2011) “The Rich Domain of Uncertainty: Comment,” working paper.


Incorporated in Harrison (forthcoming) “The Methodologies of Behavioral Econometrics.”
{% Glenn expresses his characteristic opinions in his characteristic style:
“In general the book confounds scholarship with advocacy in a way that is now all too common in behavioral economics.”
“I am tired of reading scholarly work in this vein, and feeling the need to constantly check the record against what is alleged.”
That Glenn only knows econometric methods of doing statistics, and thinks that all else unknown to him must be wrong, appears for instance from the following text and its context:
“in general we need both theoretical and econometric assumptions to identify and estimate the latent construct”
Here is how he cites his friend Rabin (2000):
“The folk theorem on calibration of risk preferences for “small stakes,” originally stated by Hansson (1988) and popularized by others” %}

Harrison, Glenn W. (2015) Book Review of: Daniel Friedman, R. Mark Isaac, Duncan James, & Shyam Sunder (2014) “Risky Curves: On the Empirical Failure of Expected Utility.” Routledge, New York. Journal of Economic Psychology 48, 121–125.


{% %}

Harrison, Glenn (2017) “The Methodologies of Behavioral Econometrics.” In Michiru Nagatsu and Attilia Ruzzene (eds.) Philosophy and Interdisciplinary Social Science: A Dialogue, @–@, Bloomsbury, London, forthcoming.


{% equate risk aversion with concave utility under nonEU: they explicitly state, somewhere in the middle, that risk aversion, risk seeking, and so on refer only to utility curvature, also under prospect theory. Confusing, because then we do not know how to refer to what is traditionally called risk aversion (preference for EV, involving utility, probability weighting, and loss aversion)! Unfortunately, the paper, while mentioning original 1979 prospect theory, the separable-weighting generalization often used (though not really prospect theory), and the new 1992 version, leaves it completely unspecified which of these versions is used in the analysis, for instance by not giving the formula.
They measure probability weighting but use the RIS. %}

Harrison, Glenn, Steven J. Humphrey, & Arjen Verschoor (2010) “Choice under Uncertainty: Evidence from Ethiopia, India and Uganda,” Economic Journal 120, 80–104.


{% decreasing ARA/increasing RRA: this is a comment on Holt & Laury (2002, AER) “Risk Aversion and Incentive Effects.” It shows empirically that there is an order effect for the high-real-payment treatment, which always followed the low-real-payment treatment. They redid it (for 10 times higher payments, not 20 times) both with and without the order effect, and without the order effect the increase in risk aversion versus the low-payment group was reduced by about a factor of two. This order effect may be due to loss aversion (see my comments on the Holt & Laury paper). This study confirms the order effect empirically. On the positive side, it shows that half of the high-low real-payment difference of Holt & Laury is not due to the order effect and is genuine.
They confirm Holt & Laury (2002) on the following: women are more risk averse than men for low payment but not for high payment (gender differences in risk attitudes). %}

Harrison, Glenn W., Eric Johnson, Melayne M. McInnes, & E. Elisabet Rutström (2005) “Risk Aversion and Incentive Effects: Comment,” American Economic Review 95, 897–901.


{% decreasing ARA/increasing RRA: find increasing RRA.
Point out that empirical studies of the common ratio effect etc. can gain power by conditioning on the degree of risk aversion. The first pages mention that in existing studies there can always exist as yet unknown confounding factors, which of course holds for every statistical study. Also point out that subjects may be almost indifferent between all kinds of choices, so that these do not give much information, and that estimating their risk aversion helps us detect such near-indifferences.
They use questions similar to those in Holt & Laury (AER 2002), estimate a CRRA parameter from them (see the sketch below), and use that as an index of risk aversion to condition on. %}
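A minimal sketch of how a CRRA index can be backed out from a Holt–Laury choice list. The payoffs are the standard low-stake Holt & Laury (2002) values; the switch-row logic is my reconstruction for illustration, not necessarily the authors’ exact estimation procedure.

```python
import numpy as np

def crra_u(x, r):
    """CRRA utility x^(1-r)/(1-r), with log utility at r = 1."""
    return np.log(x) if r == 1 else x ** (1 - r) / (1 - r)

def predicted_switch_row(r):
    """First row (1..10) at which EU with CRRA r prefers the risky option B.
    Standard Holt & Laury payoffs: A = (2.00, p; 1.60), B = (3.85, p; 0.10),
    with p = row/10."""
    for row in range(1, 11):
        p = row / 10
        eu_a = p * crra_u(2.00, r) + (1 - p) * crra_u(1.60, r)
        eu_b = p * crra_u(3.85, r) + (1 - p) * crra_u(0.10, r)
        if eu_b > eu_a:
            return row
    return None  # never switches: extremely risk averse

# A risk-neutral subject (r = 0) switches at row 5; larger r (more risk
# aversion) pushes the switch later, so an observed switch row brackets r.
for r in (-0.5, 0.0, 0.5, 1.0):
    print(r, predicted_switch_row(r))
```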

Harrison, Glenn W., Eric Johnson, Melayne M. McInnes, & E. Elisabet Rutström (2003, March) “Individual Choice and Risk Aversion in the Laboratory: A Reconsideration,” Dept. of Economics, Moore School of Business, University of South Carolina, USA.

Published as

Harrison, Glenn W., Eric Johnson, Melayne M. McInnes, & E. Elisabet Rutström (2007) “Measurement with Experimental Control.” In Marcel Boumans (Ed.), Measurement in Economics: A Handbook, Ch. 4, 79–104, Elsevier, Amsterdam.


{% %}

Harrison, Glenn W. & Morten I. Lau (2005) “Is the Evidence for Hyperbolic Discounting in Humans just an Experimental Artefact?,” Behavioral and Brain Sciences 28, 657–657.


{% This paper considers the Rabin (2000) paradox, but, unfortunately, has many weaknesses.
Rabin (2000) puts loss aversion forward as the main factor to explain his paradox in the last para of his main text (pp. 1288-1289). This involves reference dependence, the main ingredient of prospect theory, the theory that shared in the 2002 Economics Nobel prize and whose 1979 introductory paper is the 2nd most cited in economics. Reference dependence indeed is the main factor explaining Rabin’s paradox. How, then, is it possible to write a paper on this topic while never even mentioning reference dependence or loss aversion? Yet this is what this paper does. It also does not cite Kahneman or Tversky.
Although the authors informally use the terms utility of final wealth versus utility of income to refer to the aforementioned difference, they do not formalize it, so that they cannot analyze the case properly. Their writing w+x suggests that wealth goes into the outcomes (changes with respect to the reference point) and leaves ambiguous the essence, where w should go into the reference point rather than into the outcome. They should have used a notation such as x_w, denoting the reference point w differently than the outcome x, so that readers can know. They also do not make this difference explicit in their experiment. The experiment, thus, seems to test constant absolute risk aversion, finding decreasing absolute risk aversion. This has been found in dozens of studies before, and is generally assumed. See the keyword decreasing ARA/increasing RRA in this bibliography for many references. It is implied by the common parametrizations of prospect theory, with power utility.
There is another problem. Rabin did not claim that all choices are invariant under wealth changes. He only claimed it for the preference 110_{0.5}(−100) ≺ 0, i.e., rejecting a fifty-fifty gamble of gaining $110 or losing $100. The authors consider 28 different lottery pairs in their Table 1 (p. 27), and not Rabin’s pair. So, they tested a different phenomenon, and then for different stimuli. (And a third problem: the largest wealth change is about $120, which is not enough to be very relevant.)
The animosity between experimental economists and behavioral economists was strong until about 2010 and is described by Svorenčík (2016); it is still very present in this paper, contributing to its confusions and non-objectivity. This explains not only why Kahneman & Tversky are not cited, and why the 2017 Nobel prize winner Thaler is insulted in footnote 6, but also why, whenever Rabin is cited, one can recognize an implicit negative suggestion. Below, italics are always added by me; the first two cases are debatable but fit the picture, and the last case (5) is clearest:
(1) P. 25 1st para: “Rabin (2000) … Although primarily used as an argument against EUT, it is now well-known that this logic applies to a much wider range of models that assume the argument of the utility function to be terminal wealth (Cox and Sadiraj, 2006; Safra and Segal, 2008).”
Here it suggests that Rabin himself did not see the wider implication of terminal wealth being violated. Well, Rabin himself, in his conclusion, immediately suggested that loss aversion (and, therefore, reference dependence) is the most likely cause, which violates terminal wealth.
(2) P. 25 3rd & 4th para: “We refer to this claim as the HRC, for ‘‘Hansson–Rabin calibration,’’ acknowledging Hansson (1988) and Rabin (2000). … using the simple example from Hansson (1988) since it is not widely known and illustrates the basic points. The generalization by Rabin (2000) can then be quickly stated.”
Here it downplays Rabin’s contribution by ascribing much to Hansson. Hansson, cited and credited by Rabin, had part of the idea being the calibration effect, but did not convey the wide implications. As an aside, Hansson’s work was brought to Rabin’s attention by Prelec (personal communication).
(3) P. 25, 2nd column, 2nd para: “Indeed, the only empirical example offered by Rabin (2000) uses a bounded CARA function.”
Here it suggests that Rabin was weak on empirical evidence.
(4) “Rabin (2000) draws the implication that P must then be false, and that one should employ models of decision-making under risk that relax proposition Q, such as Cumulative Prospect Theory. As a purely logical matter, of course, this is just one way to resolve this calibration puzzle.”
Here it suggests that Rabin’s conclusion is arbitrary.
(5) “Rabin and Thaler (2002, p. 230) make exactly this mistake in misunderstanding the existing experimental literature: ‘We refer any reader who believes in risk neutrality to pick up virtually any experimental test of risk attitudes. Dozens of laboratory experiments show that people are averse to far more favorable bets for smaller stakes. The idea that people are not risk neutral in playing for modest stakes is uncontroversial; indeed, nobody to our knowledge interprets the existing evidence as arguing that expected-value maximization (risk neutrality) is a good fit.’ ”
The authors here insult not only Rabin, but also the 2017 Nobel prize winner Thaler. There is nothing wrong with the content of the text by Rabin & Thaler, although I would have preferred a different style. The text by R&T is fully relevant to the issue at stake here, which escapes Harrison et al. because they are confused on the role of the reference point.
The paper overstates its (claimed) novelty of doing within-subject tests on p. 25 2nd para (“the absence of empirical tests is remarkable”) and in the 1st para of §3 (“All of the evidence claimed to support the premiss that decision makers in experiments exhibit small stakes risk aversion for a large enough finite interval comes from designs in which subjects come to the lab with varying levels of wealth and are faced with small-stakes lotteries.”), because Cox et al. (2013) tested within-subject variations before. The authors only cite Cox et al. for this in footnote 2 on p. 25. (Add to this what I wrote before: the authors are doing a within-subject test of constant absolute risk aversion, which has been done in dozens of papers before. But this is a matter of confusion rather than deliberately ignoring preceding work.)
As do most papers on individual choice today, the authors use the random incentive system (RIS), called RLIM by them, to implement real incentives. This is even though the first author, Harrison, has criticized the RIS on many occasions, erroneously claiming that it is valid only under expected utility (e.g., Harrison & Swarthout 2014, abstract). Footnote 9 gives a supposed justification. First follows the justification that motivates everyone. But then, to be consistent with the EU claim made elsewhere, the footnote adds a weak claim: “The second reason was that the null hypothesis being tested is normally stated assuming EUT [expected utility, which I abbreviate EU], and RLIM is valid under EUT.” This claim is weak because many studies have shown that expected utility is empirically violated. The stated null hypothesis can immediately be rejected based on an ocean of literature, making further tests redundant!
P. 27 l. -5: for higher levels of wealth, the authors seem to find a tendency toward risk seeking (they do not state the statistical level), deviating from the common finding of weak risk aversion. %}

Harrison, Glenn W., Morten I. Lau, Don Ross, & J. Todd Swarthout (2017) “Small Stakes Risk Aversion in the Laboratory: A Reconsideration,” Economics Letters 160, 24–28.


{% between-random incentive system (paying only some subjects): Footnote 16 reports a little side experiment to test the random incentive system: in one treatment one choice of each subject was paid, and in the other treatment each subject was paid at the end with probability only 1/10. They found no significant difference in the RRA coefficient.
decreasing ARA/increasing RRA: find bit of increasing RRA but close to constant;
253 people from the general population, and real incentives; relate to demographic variables; the mean power of utility found is 0.36 (= 1 − RRA coefficient). They only do EU data fitting, no nonEU.
The Appendix discusses Rabin’s calibration argument. The authors correctly cite Rabin’s text pointing out that loss aversion is the main explanation, and also correctly equate this with what experimental economists call utility of income. That Cox & Sadiraj and Rubinstein then have nothing to add anymore is not stated clearly but left ambiguous.
gender differences in risk attitudes: no difference %}

Harrison, Glenn W., Morten I. Lau, & E. Elisabet Rutström (2007) “Estimating Risk Attitudes in Denmark: A Field Experiment,” Scandinavian Journal of Economics 109, 341–368.


{% One point is that if you randomize subjects, then by coincidence one group may have more risk-averse subjects than the other, which can be prevented by measuring the risk attitudes of the subjects. %}

Harrison, Glenn W., Morten I. Lau, & E. Elisabet Rutström (2009) “Risk Attitudes, Randomization to Treatment, and Self-Selection into Experiments,” Journal of Economic Behavior and Organization 70, 498–507.


{% Although the paper
Harrison, Lau, & Rutström (2007) “Estimating Risk Attitudes in Denmark: A Field Experiment,” Scandinavian Journal of Economics 109, 341–368
has been criticized for using the term field experiment for nothing other than that the sample were not students, this paper continues to use the term (smoking is not enough of a field activity, and is more of a demographic variable).
They use the same data set as Harrison, Lau, & Rutström (2007), and the same estimation of discounting (taking as intertemporal utility the risky utility estimated from risky decisions assuming EU), but now add relations with smoking. I hoped that §2, entitled “Identifying risk and time preferences,” would discuss the identification of one with the other, but it did not. Instead, the title refers just to identifying each without looking at the relation between them.
Male smokers discount more than male nonsmokers. No difference for women. If I understand right, they find no relation between smoking and time inconsistency (the parameter of hyperbolic discounting). %}

Harrison, Glenn W., Morten I. Lau, & E. Elisabet Rutström (2010) “Individual Discount Rates and Smoking: Evidence from a Field Experiment in Denmark,” Journal of Health Economics 29, 708–717.


{% Measure risk attitudes of individuals over own risks, and over risks for others. Is done by usual choice list and assuming EU, as in Holt & Laury (2002). Find no difference if risk attitudes of others are unknown, but more risk aversion for choices concerning others if the risk attitudes of others are known. %}

Harrison, Glenn W., Morten I. Lau, E. Elisabet Rutström, & Marcela Tarazona-Gómez (2012) “Preferences over Social Risk,” Oxford Economic Papers 65, 25–46.


{% real incentives/hypothetical choice: for time preferences: use real payments, for discounting over 6 months up to some three years. Find an average discount rate of 28%. Discusses a censoring effect: for too low an interest rate subjects may refuse because the market offers better, i.e., subjects may arbitrage between experiment and market. Cite a Coller & Williams (1989) paper for this point.
The only text explaining how the future payments (which could be 3 years later) were implemented is on p. 1610 near the end.
between-random incentive system (paying only some subjects) was used. The authors write:
“Subjects were then told that a single payment option would be chosen at random for payment, and that a single subject would be chosen at random to be paid his preferred payment option for the chosen payoff alternative. The payment mechanism was explained as follows:
HOW WILL THE ASSIGNEE BE PAID?
The Assignee will receive a certificate which is redeemable under the conditions dictated by his or her chosen payment option under the selected payoff alternative. This certificate is guaranteed by the Social Research Institute. The Social Research Institute will automatically redeem the certificate for a Social Research Institute check, which the Assignee will receive given his or her chosen payment option under the selected payoff alternative. Please note that all payments are subject to income tax, and information on all payments to participants will be given to the tax authorities by the Social Research Institute.”
Pp. 1612-1613 acknowledge the point that subjects may not trust the implementation of the real incentives and may, therefore, discount extra. P. 1613 points out that experiments with hypothetical choices typically find discount rates of even more than the 28% found here. %}

Harrison, Glenn W., Morten I. Lau, & Melonie B. Williams (2002) “Estimating Individual Discount Rates in Denmark: A Field Experiment,” American Economic Review 92, 1606–1617.


{% §2.1, p. 1012, gives six criteria for when a study can be considered a field experiment:
The nature of the subject pool;
the nature of the information that the subjects bring to the task;
the nature of the commodity;
the nature of the task or trading rules applied;
the nature of the stakes;
the nature of the environment that the subject operates in.
P. 1027: “by some arbitrator from hell.”
P. 1028 has a nice discussion, “Context is not a dirty word,” about whether choice alternatives should be abstract or have a concrete context. It relates to a lesson I learned when teaching medical students: when I tried to attach real diseases etc. to branches in decision trees, the students would start discussing the diseases and not the decision-theoretic risk tradeoffs. So, I learned to use abstract diseases (disease 1, 2, …, etc.) to designate the branches.
P. 1031, in reply to the criticism of real incentives that they are too small, makes the common mistake of many experimental economists of putting forward Holt & Laury (2002) as a counterargument. For any practical purpose, the amounts in Holt & Laury (2002) of some hundreds of dollars are SMALL! No one would do a decision analysis for such stakes. Below three months of salary, utility should be linear, with nothing going on. %}

Harrison, Glenn W. & John A. List (2004) “Field Experiments,” Journal of Economic Literature 42, 1009–1055.