Bibliography


{% paternalism/Humean-view-of-preference: they argue that normative theory can help to correct deviations.
P. 207: “When we make decisions for ourselves, consideration of our own regret may be rational (especially if we think we cannot avoid it).” The bracketed remark is, I think, the essence.
P. 208 argues that standard gamble utility measurement may be distorted because of the certainty effect. “In particular, many people prefer sure things to gambles on general principles, as it were.” (SG doesn’t do well)
Suggest direct measurement of utility difference as alternative.
P. 208: “On the other hand, a feeling of ambiguity is often a sign that there are additional data we ought to be seeking or waiting for.”
paternalism/Humean-view-of-preference: p. 210: “Subjects should be confronted with their discrepancies from normative models – or discrepancies between decisions resulting from different ways of presenting the same problem – and asked to explain themselves.” %}

Hershey, John C. & Jonathan Baron (1987) “Clinical Reasoning and Cognitive Processes,” Medical Decision Making 7, 203–211.


{% inverse-S, utility elicitation results suggest such probability transformations;
SG higher than CE: probability equivalents give more risk aversion than CEs (certainty equivalents).
Some nice things for Z&Z on p. 949/950: people are more risk averse when a choice question is formulated as taking insurance than as gambling.
nonlinearity in probabilities %}

Hershey, John C., Howard C. Kunreuther, & Paul J.H. Schoemaker (1982) “Sources of Bias in Assessment Procedures for Utility Functions,” Management Science 28, 936–953.


{% PT falsified & reflection at individual level for risk: they present data that violate reflection by measuring risk attitudes for both gains and losses, both between and within subjects. There are no clear patterns in the findings, and relations go in all directions. Unfortunately, they do not report correlations, but only patterns of risk seeking/risk aversion, which is similar to median splits. Tversky & Kahneman (1992, p. 308) will criticize this research for underestimating the unreliability of individual choices.
Table 3 and p. 409: more risk aversion for gains than risk seeking for losses.
Risk averse for gains, risk seeking for losses: Table 3 is a nice way to inspect the data. The fourfold pattern is confirmed with one exception: for gains with probabilities below .01, down to .001, they do not find risk seeking. For probabilities .1 and .2 they do. For losses they do find the fourfold pattern of risk aversion for small probabilities but risk seeking for moderate and high probabilities. %}

Hershey, John C. & Paul J.H. Schoemaker (1980) “Prospect Theory’s Reflection Hypothesis: A Critical Examination,” Organizational Behavior and Human Performance 25, 395–418.


{% Seem to find that presenting risky decisions in context of insurance enhances risk aversion. %}

Hershey, John C. & Paul J.H. Schoemaker (1980) “Risk Taking and Problem Context in the Domain of Losses: An Expected Utility Analysis,” Journal of Risk and Insurance 47, 111–132.


{% utility elicitation
concave utility for gains, convex utility for losses: find that;
SG higher than CE: best reference for the viewpoint that extreme risk aversion in the PE version of the standard gamble results from loss aversion. That is, the subject takes the certain outcome as status quo, the gamble then becomes mixed (has a gain and a loss), and loss aversion leads to extreme risk aversion. Robinson, Loomes, & Jones-Lee (2001) give a nice confirmation through qualitative interviews. %}
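A minimal numeric sketch of this loss-aversion account (my own illustrative numbers: linear basic utility and the conventional loss-aversion coefficient 2.25; nothing here is estimated from the paper):

```python
lam = 2.25            # loss-aversion coefficient (conventional value, assumed here)
c, high = 50, 100     # sure amount c; the gamble pays `high` or 0

# If the sure amount c is adopted as reference point, the gamble (p: high, 1-p: 0)
# becomes mixed: gain high - c with probability p, loss -c with probability 1 - p.
# With linear basic utility, indifference requires p*(high - c) = lam*(1 - p)*c:
p = lam * c / (high - c + lam * c)
print(round(p, 3))    # ~0.692, far above the 0.5 that loss neutrality would give

# An EU analyst ignoring the reference-point shift reads this as U(50) = 0.69*U(100),
# i.e., as extreme risk aversion in the probability-equivalence (standard gamble) method.
```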

Hershey, John C. & Paul J.H. Schoemaker (1985) “Probability versus Certainty Equivalence Methods in Utility Measurement: Are They Equivalent?,” Management Science 31, 1213–1231.


{% %}

Herstein, Israel N. & John Milnor (1953) “An Axiomatic Approach to Measurable Utility,” Econometrica 21, 291–297.


{% A survey.
own little expertise = meaning of life: intro does the usual overselling of suggesting that “decisions from experience” capture all decisions in life that are not “decisions from description.” %}

Hertwig, Ralph (2012) “The Psychology and Rationality of Decisions from Experience,” Synthese 187, 269–292.


{% First three pages give a nice overview of the basic approach. P. 536 seeks to disentangle direct experience and repeated decisions as causes of underweighting of unlikely events, by comparing a repeated-decisions design with a sampling design. In both, outcomes are not experienced (being informed about points added is not experiencing outcomes, I think).
own little expertise = meaning of life: p. 535:
“Outside the laboratory, however, people often must make choices without a description of possible choice outcomes, let alone their probabilities. Because people can rely only on personal experience under such conditions, we refer to these as decisions from experience. Only a few studies have investigated decisions from experience in humans. In one (Barron & Erev, 2003) …” %}

Hertwig, Ralph, Greg Barron, Elke U. Weber, & Ido Erev (2004) “Decisions from Experience and the Effect of Rare Events in Risky Choice,” Psychological Science 15, 534–539.


{% real incentives/hypothetical choice: authors discuss topic mostly from an economic perspective (criticizing psychologists). For instance, p. 384 2nd para ends with:
The experimental standards in psychology, by contrast, are comparatively laissez-faire, allowing for a wider range of practices. The lack of procedural regularity and the imprecisely specified social situation “experiment” that results may help to explain why in the “muddy vineyards” (Rosenthal 1990, p. 775) of soft psychology, empirical results “seem ephemeral and unreplicable” …
This is, indeed, negative about psychology.
Then there follows a very long list of comments by many people, many prominent, and a reply, up to p. 451. Impressive!
P. 402 footnote 9 on definition of deception. %}

Hertwig, Ralph & Andreas Ortmann (2001) “Experimental Practices in Economics: A Challenge for Psychologists?,” Behavioral and Brain Sciences 24, 383–403.


{% %}

Hertwig, Ralph & Andreas Ortmann (2004) “The Cognitive Illusion Controversy: A Methodological Debate in Disguise That Matters to Economists.” In Rami Zwick & Amnon Rapoport (eds.) Experimental Business Research, 361–378, Kluwer, Dordrecht.


{% Test comprehension of probability in representative Swiss sample, finding that exposure to games of chance and education increase understanding, but more so in abstract problems than in real-world problems. (cognitive ability related to risk/ambiguity aversion) %}

Hertwig, Ralph, Monika Andrea Zangerl, Esther Biedert, & Jürgen Margraf (2008) “The Public's Probabilistic Numeracy: How Tasks, Education and Exposure to Games of Chance Shape It,” Journal of Behavioral Decision Making 21, 457–470.


{% Principal-agent with agent loss averse à la Köszegi-Rabin. %}

Herweg, Fabian, Daniel Müller, & Philipp Weinschenk (2010) “Binary Payment Schemes: Moral Hazard and Loss Aversion,” American Economic Review 100, 2451–2477.


{% Hesiodus is a Greek poet who lived maybe in the 8th century before Christ. It is not clear whether he came before or after Homerus.
“A fool is he who learns not until from experience” (my translation of “Een dwaas is hij die pas door ondervinding wijs wordt”, the Dutch translation of the Greek text.)

The fool knows after he's suffered;

The fool by suffering his experience buys

Even a fool learns by experience

Experience is the mistress of fools. %}

Hesiodus (800). In Wolther Kassies (2002, translator) “De Geboorte van de Goden van Werken en Dagen,” Athenaeum—Polak & Van Gennep.


{% Seems to write also on seeding hurricanes: it reduces total harm, but seeded hurricanes went to Cuba, Castro objected, and so the US stopped. %}

Hess, Wilmot N. (1974) “Weather and Climate Modification.” Wiley, New York.


{% revealed preference: probabilistically %}

Heufer, Jan (2011) “Stochastic Revealed Preference and Rationalizability,” Theory and Decision 71, 575–592.


{% Efficient ways to test quasi-concavity of preference in the probability triangle from observed choices from budget-subsets. %}

Heufer, Jan (2012) “Quasiconcave Preferences on the Probability Simplex: A Nonparametric Analysis,” Mathematical Social Sciences 65, 21–30.


{% revealed preference: shows that deriving SARP from WARP is equivalent to a question on Hamiltonian graphs. Gives graph-theoretic meaning to revealed preference. %}

Heufer, Jan (2014) “A Geometric Approach to Revealed Preference via Hamiltonian Cycles,” Theory and Decision 76, 329–341.


{% Revealed-preference implementation of Yaari’s (1969) more-risk-averse relation. Theory and an application to Choi et al.’s (2007) data set. %}

Heufer, Jan (2014) “Nonparametric Comparative Revealed Risk Aversion,” Journal of Economic Theory 153, 569–616.


{% %}

Hevell, Steven K. & Frederick A.A. Kingdom (2008) “Color in Complex Scenes,” Annual Review of Psychology 59, 143–166.


{% %}

Hey, John D. (1984) “The Economics of Optimism and Pessimism,” Kyklos 37, 181–205.


{% error theory for risky choice; Best core theory depends on error theory: seems to be %}

Hey, John D. (1995) “Experimental Investigations of Errors in Decision Making under Risk,” European Economic Review 39, 633–640.


{% Repetition reduces noise for some subjects, but not for all. %}

Hey, John D. (2001) “Does Repetition Improve Consistency?,” Experimental Economics 4, 5–54.


{% dynamic consistency: paper nicely and clearly emphasizes that plans in themselves cannot be inferred from observed choice in any obvious way. P. 125: “self-reported plans—for which there is no incentive for correct reporting.”
Do a new experiment where people announce a plan and then can deviate if they are willing to pay a little fee for that. Then people do not want to deviate. Maybe, more than the cost of deviating itself, it is that people then become aware that it makes sense to be dynamically consistent. %}

Hey, John D. (2005) “Do People (Want to) Plan?,” Scottish Journal of Political Economy 52, 122–138.


{% Best core theory depends on error theory: seems to be %}

Hey, John D. (2005) “Why We Should not Be Silent about Noise,” Experimental Economics 8, 325–345.


{% survey on nonEU, regarding ambiguity. %}

Hey, John D. (2014) “Choice under Uncertainty: Empirical Methods and Experimental Results.” In Mark J. Machina & W. Kip Viscusi (eds.) Handbook of the Economics of Risk and Uncertainty Vol. 1, 809–850 (Ch. 14), North-Holland, Amsterdam.


{% They test EU against two betweenness theories, finding that one improves EU but the other does not. %}

Hey, John D. & Daniela Di Cagno (1990) “Circles and Triangles: An Experimental Estimation of Indifference Lines in the Marschak-Machina Triangle,” Journal of Behavioral Decision Making 3, 279–306.


{% quasi-concave so deliberate randomization: take this as hypothesis to justify stochastic choice, and use quadratic utility; empirically it did not work well. %}

Hey, John D. & Enrica Carbone (1995) “Stochastic Choice with Deterministic Preferences: An Experimental Investigation,” Economics Letters 47, 161–167.


{% dynamic consistency; find in experiments that part of the subjects plan through the whole decision tree, and some don’t plan at all. %}

Hey, John D. & Julia A. Knoll (2007) “How Far Ahead Do People Plan?,” Economics Letters 96, 8–13.


{% Let subjects work out a three-stage dynamic decision tree with software that records what the subjects did. Some do backward induction, but most don’t do anything clear, and there is no clear conclusion. %}

Hey, John D. & Julia A. Knoll (2011) “Strategies in Dynamic Decision Making – An Experimental Investigation of the Rationality of Decision Behaviour,” Journal of Economic Psychology 32, 399–409.


{% random incentive system: test it and find it confirmed. Closing sentence (p. 263): “The conclusion seems to be that experimenters can continue to use the random incentive mechanism and that this paper can be used as a defence against referees who argue that the procedure is unsafe.” Argue that isolation facilitates the RIS. %}

Hey, John D. & Jinkwon Lee (2005) “Do Subjects Separate (or Are They Sophisticated)?,” Experimental Economics 8, 233–265.


{% random incentive system: test it and find it confirmed. Test spill-over effect—whether answers in experiments are affected by previous questions (like learning)—and find no evidence for it. %}

Hey, John D. & Jinkwon Lee (2005) “Do Subjects Remember the Past?,” Applied Economics 37, 9–28.


{% dynamic consistency; Test the dynamic decision approaches, resolute, naïve, sophisticated, empirically, in a nice design to disentangle them. Also ask for evaluations of trees so as to test for indifference versus strict preference. Unfortunately, the data are noisy and give no clear pattern. Maybe the stimuli were too complex. There is a confound in their design. In Trees 3 and 4 (p. 8) there is an alternative that clearly dominates another. It is well known that this generates a context effect, attracting subjects to the dominating alternative more than to nondominated alternatives, as demonstrated by Tversky & Simonson (1993) and many others. It is indeed what happens in the data.
deception when implementing real incentives: I much regret that the authors used deception, not playing for real what is suggested to the subjects in the beginning. There is no good reason for doing so, and the authors did it only to reduce their work load; i.e., the number of subjects to be run and the money to be paid to subjects (p. 13 last para).
Unfortunately, the second sentence of §2 incorrectly claims that the authors are the first to test the conditions with real incentives. They then qualify this by saying that they will only consider studies with “appropriate” real incentives. This is characteristic of a bad convention among experimental economists: if person A first developed some idea, and tested it with hypothetical choice, and then person B does all the same but with real incentives, then experimental economists will credit all priority to person B and completely ignore person A. Even if we ignore this point, the authors have a second problem: contrary to what they write in footnote 13, Busemeyer et al. (2000) did use real incentives, in their experiments 2 and 3. This paper by Hey & Lotito has enough extra to offer, such as the nice considerations of strengths of preferences. %}

Hey, John D. & Gianna Lotito (2009) “Naïve, Resolute or Sophisticated? A Study of Dynamic Decision Making,” Journal of Risk and Uncertainty 38, 1–25.


{% Incentives: use RIS; losses from prior endowment mechanism (subjects can lose £10, but are paid £10 a priori).
Use nice bingo blowers, a transparent device containing balls in three colors that are continuously moved around, so that subjects can only vaguely see the composition of the urn and have degrees of ambiguity. The more balls the harder to assess, so the more ambiguity.
Urn 1 (15 subjects): 2 pink, 5 blue, and 3 yellow balls;
Urn 2 (17 subjects): 4 pink, 10 blue, and 6 yellow balls;
Urn 3 (16 subjects): 8 pink, 20 blue, and 12 yellow balls (p. 90).
Each subject sees only one urn. Each next urn is more ambiguous than the one before.
Nicely, they test all kinds of theories of uncertainty/ambiguity. They consider three outcomes, being −£10, £10, and £100. They use cross-validation: one part of the data set is used to calibrate the parameters of the models, and then another part is used to see if the model predicts the choices properly there.
The data set, and the general scheme of testing many popular ambiguity theories, making them all tractable, are great, and could have led to a top paper. Unfortunately, there are many theoretical mistakes. The authors use several wrong formulas especially regarding the two versions of prospect theory. This invalidates the results and claims made.
I here use their notation CPT for the new 92 prospect theory, rather than my own (and Tversky’s!) preferred PT. And I use their PT iso my preferred OPT for the 79 version of prospect theory.
They test:
1. EV, 3 parameters: 2 probabilities and error variance s.
2. EU, 4 parameters: EV-ones + one U parameter (U(−10) = 0; U(100) = 1; U(10) is the only U-parameter; p. 89);
3. CEU (Choquet expected utility), 8 parameters: EU-ones + 6 − 2 (iso 2 subjective probabilities, now 6 for the capacity, for six of the eight events, with the empty and universal event not counting because there the capacities are fixed at 0 and 1); a small sketch after this list shows how such a capacity is used in the Choquet integral;
4. CPT of ’92, 9 parameters: CEU parameters +, supposedly and incorrectly, one more for loss aversion.
5. PT of Kahneman & Tversky ’79, 6 parameters: the EU ones + one more because subjective probabilities need not sum to 1 (or any other constant) + one more for loss aversion. This time loss aversion does genuinely generate an extra parameter, unlike with their CPT, because the decision weights need not sum to 1, implying that the 0 point of U matters.
6. DFT (Decision Field Theory of Busemeyer & Townsend 1993; called random SEU there), 4 parameters: as EU but different error theory. It, nicely, has the randomness on statewise utility differences and their probabilities.
7. Maxmin EU, 5 parameters: like EU, but with 3 minimum probabilities per state (so the family of all priors where each state has at least that min. probability; the mins are supposed to add to less than 1) iso 2 subjective probabilities (p. 95 footnote 16 and p. 109 are not clear on whether it is min or max probability, but it is min, as reanalyses by Amit Kothiyal showed).
8. Maxmax EU, 5 parameters: like maxmin.
9. α-maxmin (EU), 6 parameters: like maxmin but α is one more.
10. Maxmin, 1 parameter, probability of trembling-hand theory.
11. Maxmax, 1 parameter like maxmin.
12. Minimal regret, 1 parameter, like maxmin.
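A minimal sketch (my own, with hypothetical capacity values, not estimates from the paper) of how the Choquet integral in theory 3 evaluates a three-outcome act: events are ranked from best to worst outcome and each receives as decision weight the marginal contribution of the capacity.

```python
def choquet_value(utilities, capacity):
    """utilities: event label -> utility of the outcome received on that event.
    capacity: frozenset of labels -> capacity value, with W(universal event) = 1."""
    ranked = sorted(utilities, key=utilities.get, reverse=True)  # best outcome first
    value, cumulative, prev_w = 0.0, frozenset(), 0.0
    for ev in ranked:
        cumulative = cumulative | {ev}
        w = capacity[cumulative]
        value += (w - prev_w) * utilities[ev]   # marginal capacity weight times utility
        prev_w = w
    return value

U = {'pink': 1.0, 'blue': 0.35, 'yellow': 0.0}        # hypothetical utilities per color
W = {frozenset({'pink'}): 0.15,
     frozenset({'pink', 'blue'}): 0.70,
     frozenset({'pink', 'blue', 'yellow'}): 1.0}      # only the capacities needed here
print(choquet_value(U, W))   # 0.15*1.0 + (0.70-0.15)*0.35 + (1-0.70)*0.0 = 0.3425
```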
They do not test the smooth model because they have no multiple stages. P. 103 top, correctly points out that with the two-stage decomposition endogenous, as in the smooth model of KMM, there are too many parameters.
There are two problems with their CPT calculation (p. 88 & p. 108).
PROBLEM 1. They have no sign-dependence of weights. CPT in full generality would have all weights for losses completely independent of those for gains. This in its full generality means more parameters, which is not always good. If we don’t want to increase the number of parameters relative to CEU, then a plausible special case is taking the nonadditive measure the same for losses as for gains, but then using the formula of CPT rather than of CEU, which means weighting the losses dually relative to gains (à la reflection), and not the same as under CEU. Then the total weights need not sum to 1 as they do under CEU (and then CEU would not be nested in CPT or vice versa). This non-summing to 1 gives empirical meaning to setting utility 0 at a reference point (say, 0). The authors do the weighting fully the same as under CEU, so that the weights always sum to 1. Given that they also have a fixed reference point (0) under what they call CPT, what they call CPT is a special case of CEU. The utility- and loss-aversion-part is further discussed in the next Problem 2, where I will show that what they call CPT is data-equivalent to what they call CEU.
PROBLEM 2. They think to implement loss aversion for CPT by not normalizing U(−10) = 0 and instead normalizing U(0) = 0 (p. 89 l.-5 of middle para), leaving U(−10) < 0 free. But this does not work. What they call CPT is data-equivalent to CEU. It all has to do with, for a fixed reference point as is the case here (0 is the reference point), CPT generalizing CEU only because of sign-dependence of decision weights, which they do not have, and for CEU the rescaling of U(0) = 0 having no empirical impact. Here is a more detailed explanation:
Recall that event-weighting in their CPT is done the same way as in CEU. In particular, the decision weights of the events always sum to 1, something typical of CEU. This means that utility is unique up to unit and level (cardinal, interval scale). In other words, adding any constant to utility and multiplying utility by any positive constant at the outcomes −10, 10, and 100 does not affect the preference relation. The former increases all values of prospects by that same constant which does not affect preference, and the latter multiplies all values of prospects by that same positive constant which again does not affect preference.
OBSERVATION 1. Any CPT representation in their paper is a CEU representation.
PROOF. Denote the utility function under CPT by U. I define a U' leading to a CEU representation as follows:
U'(.) = [U(.) - U(-10)]/[U(100) - U(-10)].
U'(-10) = 0 and U'(100) = 1, as desired. Thus any representation called CPT in their paper can be turned into a representation called CEU that represents the same preference relation.
QED QED QED QED QED QED QED QED QED QED QED

What they call CPT therefore does not generalize CEU.


OBSERVATION 2. Any CEU representation in their paper is a CPT representation.
PROOF. Denote the utility function under CEU by U. I define a U* leading to a CPT representation. There are several ways to do this. At any rate we will have
U*(100) = U(100) = 1.
Further
U*(-10) < U(-10) = 0
implies that also
U*(10) < U(10).
Further
U*(0) = 0
implies
U*(10) > 0.
So we can define
U*(10) = z
for any value z with
0 < z < U(10) (< 1).
Then we define, at 100, 10, and -10:
U*(.) = {(1-z)/[1-U(10)]}U(.) - [U(10)-z]/[1-U(10)].
We, indeed, have U*(100) = 1, U*(10) = z, and
U*(-10) = - [U(10)-z]/[1-U(10)] < 0.
So we can define
U*(0) = 0
which does not affect preference but has utility increasing by being between
U*(-10) and U*(10).
Thus, if we start from a CEU representation as in this paper, then we can choose any value U*(10) strictly between 0 and U(10), and then get a CPT representation with that U* that represents the same preference relation.
QED QED QED QED QED QED QED QED QED QED QED

SUMMARY of Problem 1: what this paper calls CPT is, regarding core theory, identical to what it calls CEU, representing the same preference relations.


Another way to put the point is that for three outcomes -10, 10, and 100, only one parameter of utility is relevant when weights always add to 1 (which in fact is CEU): the ratio of utility differences
[U(10) - U(-10)] / [U(100) - U(-10)] .

In view of the above, differences in predictions (via statistical fittings) of CEU and CPT can result only from the error theory working out differently numerically under the different scalings of utility (although the division by V(xmax) - V(xmin) on p. 91 l.-3 in their probabilistic theory suggests that rescaling of utility will not matter).
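A small numerical sanity check of Observations 1 and 2 (my own sketch with made-up utilities and decision weights, not the authors' code): when decision weights always sum to 1, a positive affine rescaling of utility leaves every pairwise preference unchanged, so the paper's "CPT" and CEU specifications represent the same preferences.

```python
import random

def value(weights, utils):
    # CEU/"CPT" value as discussed above: decision weights (summing to 1) times utilities
    return sum(w * u for w, u in zip(weights, utils))

def random_weights():
    a, b = sorted(random.random() for _ in range(2))
    return [a, b - a, 1 - b]              # three nonnegative weights summing to 1

U = [0.0, 0.35, 1.0]                      # made-up U(-10), U(10), U(100), CEU-normalized
Uprime = [-0.4 + 1.2 * u for u in U]      # a positive affine rescaling, with U'(-10) < 0

random.seed(1)
for _ in range(1000):
    wA, wB = random_weights(), random_weights()
    assert (value(wA, U) > value(wB, U)) == (value(wA, Uprime) > value(wB, Uprime))
print("U and its affine rescaling induce the same preferences over all sampled prospects")
```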

Besides the above two problems for CPT, there are more problems.

PROBLEM 3. This problem concerns the implementation of PT (p. 88 & pp. 107-108). PT of KT 79 was originally defined for risk with given probabilities. This paper extends it to uncertainty by assuming subjective probabilities (probabilistic sophistication) and then applying (what are supposed to be) PT formulas. Extending to uncertainty this way in itself is fine. One problem is that PT is defined only for two nonzero outcomes, and this paper has three. For some prospects (only two outcomes, and both gains, so being 10 and 100) PT as defined by K&T 79 is RDU, using rank-dependent weighting, but this paper does not do that. What this paper does is more like an attempt to use Edwards-type transformation of separate-outcome probabilities (Wakker 2010 Eq. 5.3.3), which is called Separable Prospect Theory (SPT) by Camerer & Ho (1994, p. 185) for instance.


However, this is still not what they really do. Problem is that for 2-color events they take as weight simply the sum of the weights of the two colors (this appears for instance from only taking weights of the three single-color events on p. 95 -see also p. 108 top para-, and not of 2-color events), whereas a crucial point of the theories mentioned is nonadditivity: the weight of a 2-color event is NOT the sum of the two 1-color events. So they just have additivity there. Nonadditivity only shows up with the 3-color event involved.
They write on p. 88 l. 4 that, indeed, their theory is like EU, the only difference being that the sum of weights of the three atomic (“singular”) events, concerning one color, need not be 1. The big question is then how they take the weight of the (3-color) universal event, relevant for sure outcomes. If they take the sum of the three probabilities then this is just data-equivalent to EU, dropping the normalized probability 1, and there is no violation of monotonicity, but also this is just EU, which is very bad given that it is called PT. It seems that they take weight 1 for the universal three-color event, and not the sum of the three probabilities, and then there can be violations of monotonicity. Their theory then is EU with the only exception being that sure outcomes are over- or underweighted in utility relative to all else. This is in fact a (probabilistically sophisticated version of a) model called utility of gambling. The latter has EU for nondegenerate prospects but degenerate prospects are evaluated using a different utility function, reflecting the utility of (not) gambling. If the utility function for uncertainty is U then the utility function for certainty is kU for a k not equal to 1. Diecidue, Schmidt, & Wakker (2004, JRU, Observation 7) shows that this necessarily violates stochastic dominance. This also happens if k > 1, where k is the reciprocal of the sum of the three probabilities. This means that subadditivity does not help here, somewhat unlike a suggestion, not very explicit, in footnote 10 on p. 88. That footnote suggests that they assume subadditivity, and erroneously ascribes it to Kahneman & Tversky (1979). Empirically, superadditivity is commonly found and especially Tversky argued for it in support theory.
SUMMARY OF PROBLEM 3. What they call PT is a version of the utility of gambling models. It is too distinct from PT, and even from the separable version of it, to be called PT.
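A tiny numeric illustration of the dominance violation just described (hypothetical numbers of my own; k is the certainty multiplier discussed above, not a value taken from the paper):

```python
k = 1.1                    # k > 1: sure outcomes are overweighted relative to gambles
U = lambda x: x ** 0.5     # some increasing utility for nondegenerate prospects

sure_ten   = k * U(10)                        # degenerate prospect, evaluated as k*U(10)
dominating = 0.99 * U(10) + 0.01 * U(10.01)   # pays at least 10 and sometimes a bit more

print(sure_ten > dominating)   # True: the dominated sure 10 is strictly preferred
```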
PROBLEM 4. A fourth unsatisfactory implementation concerns the different treatment of the multiple priors models relative to the rank-dependent models. For multiple priors they take a tractable 3-dimensional subset (all probability distributions for which the probabilities of the single events exceed a lower bound; the three lower bounds are the parameters). But for CEU/CPT they do not do this and take CEU/CPT in full generality. In a 2007 version of their paper they wrote that multiple priors (then taken in full generality) is simply too general to fit any data. Hence to make it work they were forced to take a subset of the theory. But for CEU/CPT it would have been fair to do the same. Given that their source of uncertainty (one urn per subject) is reasonably uniform in the terminology of Abdellaoui et al. (AER, 2011), CEU/CPT would be nice with probabilistic sophistication and a one- or two-parameter fitted weighting function, having only 1 or 2 parameters more than EU, and being the same in this regard as multiple priors.
PROBLEM 5. The fifth problem (p. 85) concerns the distinction between direct decision rules and preference functionals. They consider maxmin and maxmax (and minimal regret) to be direct decision rules, but these obviously are preference functionals just as much, with max or min outcome as preference functional value. The distinction becomes unfortunate because they use different error theories for what they call direct decision rules (p. 91). Because the three direct-decision-rule theories are not very important anyhow, this fifth problem is not important.
The main text suggests that there is another problem with MaxMin and MaxMax, that the appendix however seems to put right. Main text: whereas for the alpha model the authors seem to appropriately take a set of priors, for G&S MaxMin they seem to take minimum probabilities per event, and not minimums of probability distributions, with similar problems for MaxMax. It may, for instance, happen for MaxMax that to get maximum probability at £100, the probability at £10 should not be maximal. The appendix pp. 108-109 puts things right by having MaxMax and MaxMin as special cases of alpha.
END OF FIVE PROBLEMS

Because of the problems mentioned, the empirical conclusions of this paper are not informative. These conclusions are that maxmax priors does best, maxmin and α-maxmin do well also, and others do worse. Big pity that such a nice experimental data set has been analyzed incorrectly.


P. 83: when criticizing statistical testing of theories, the authors only consider the case where one theory is nested within another.
second-order probabilities to model ambiguity: p. 84 4th para: they point out that second-order probabilities are not really ambiguity, and nicely explain that implementing ambiguity is not so easy.
suspicion under ambiguity: p. 84 5th para: they claim that their bingo blower is not subject to suspicion, but do not argue clearly why. Why could the researcher not do visual tricks with it, or systematically have few balls of the winning color, hoping for overestimation? That the subjects bet both on and against each color can help to rule out suspicion. A small remaining problem is that subjects may not know this and may still suspect that the ball compositions are deliberately unfavorable for the particular choice they consider.
P. 87 footnote 8 incorrectly suggests that the competence effect is [only] relevant for laboratory data. It also suggests that it can play no role in their study, but it can because urn 3 generates the least competence and urn 1 generates the most. P. 101 continues on this.
P. 88 writes, erroneously, that CPT assigns (subjective) probabilities to events and then transforms these. Then CPT would imply probabilistic sophistication, which is not correct. P. 93 will write that CEU is nested within CPT, so that they did not assume probabilistic sophistication. P. 95 writes that for CPT the weights (capacities?) are “weighted probabilities,” but I am pretty sure that they treated them just as the capacities for CEU.
P. 89 writes that explaining BDM (Becker-DeGroot-Marschak) is too complex.
P. 89: every subject must make 162 binary choices. Each must take at least 30 seconds. So the experiment takes more than 81 minutes per person. With so many choices for so much time, subjects can be expected to resort to a particularly simple heuristic. With outcomes 100, 10, and −10 it is mostly optimal to just maximize the chance/likelihood at 100. So subjects are prone to just do this always (suggested by the authors on p. 103 penultimate para). This may explain why the maxmax model does best, and better than maxmin.
The paper sometimes claims, holding it against CEU and CPT, that models with more parameters always predict better and, hence, should be punished for the extra parameters. More parameters always give better fits, but for predicting they may mostly pick up noise (overfitting) and then predict worse, so extra parameters are no clear advantage for prediction purposes.
They use the Bayesian information criterion rather than AIC to account for extra parameters. Sometimes (p. 96 penultimate para, p. 98 2nd para) the paper says that theories with more parameters should be judged more negatively for it. But this feels like double counting because the information criterion and the predictions already punish extra parameters.
Summarizing, I admire the empirical setup with marvelous stimuli (based on big money and time investments, with the marvelous idea of the bingo-blower), and also the general plan of testing many ambiguity theories. Maybe from now on every new ambiguity theory should be forced to be calibrated on this data set. But there are several problems with the core-theoretical parts underlying the analyses in this paper, invalidating the empirical claims. %}

Hey, John D., Gianna Lotito, & Anna Maffioletti (2010) “The Descriptive and Predictive Adequacy of Theories of Decision Making under Uncertainty/Ambiguity,” Journal of Risk and Uncertainty 41, 81–111.


{% %}

Hey, John D., Gianna Lotito, & Anna Maffioletti (2011) “The Descriptive and Predictive Adequacy of Theories of Decision Making under Uncertainty/Ambiguity: Corrigendum,” working paper.


{% Real incentives: everything is incentivized, using RIS. N = 24 subjects were interviewed five times, about half an hour per time. Consider 4 outcomes (0, 10, 30, 40 in £), and 28 probability distributions over them. Consider binary choices, bid-prices, ask-prices, and BDM (Becker-DeGroot-Marschak). Fit EU and RDU with an error theory added. Last para of §2 states that they assume all choices statistically independent, also within subjects. Find that RDU does not fit much better. One clear finding is that binary choice has less noise than the other (matching) procedures. In RDU, utility changes more than probability weighting between different elicitation methods. Utility parameters are even negatively correlated between different elicitation methods. %}

Hey, John D., Andrea Morone, & Ulrich Schmidt (2009) “Noise and Bias in Eliciting Preferences,” Journal of Risk and Uncertainty 39, 213–235.


{% error theory for risky choice; Best core theory depends on error theory: seems to be inconsistency of 25%; conclude that expected utility with noise is most plausible explanation %}

Hey, John D. & Chris Orme (1994) “Investigating Generalizations of Expected Utility Theory Using Experimental Data,” Econometrica 62, 1291–1326.


{% Use bingo blower (as in Hey, Lotito, & Maffioletti 2010) with three colors.
Treatment 1 (66 subjects): 2 pink, 5 blue, and 3 yellow balls;
Treatment 2 (63 subjects): 8 pink, 20 blue, and 12 yellow balls.
Treatments are between subjects.
Subjects can invest an amount of money x in one event and m − x in another, where one event E1 concerns one color and the other E2 either one color (then no payment if the 3rd color) or two other colors. They receive e1x if E1 happens and e2(m − x) if E2 happens, where the exchange rates e1 and e2 are set by the experimenter and vary over choices (if I understand right). A problem with such linear multiple-choice sets is that many functionals will usually predict corner solutions. Functionals that don’t (such as with power utility, because it has infinite derivative at 0, so no 0 investment in an optimum) don’t do so because of a weak point. In reality subjects choose interior solutions because of the compromise effect and maybe experimenter demand effects.
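A stylized sketch of the corner-solution point just made (my own illustrative numbers, not the paper's stimuli): a linear functional such as subjective expected value pushes the optimal allocation to a corner, whereas power utility, with infinite marginal utility at 0, gives an interior optimum.

```python
import numpy as np

m, e1, e2, q1, q2 = 10.0, 1.5, 1.0, 0.5, 0.5   # budget, exchange rates, subjective probabilities
x = np.linspace(0.0, m, 1001)                  # amount invested in E1; m - x goes to E2

ev = q1 * e1 * x + q2 * e2 * (m - x)                    # linear (expected value) functional
eu = q1 * np.sqrt(e1 * x) + q2 * np.sqrt(e2 * (m - x))  # expected square-root (power) utility

print(x[np.argmax(ev)])   # 10.0: corner solution
print(x[np.argmax(eu)])   # 6.0: interior optimum
```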
All prospects considered are two-outcome. 60 randomly chosen questions were used to calibrate the functionals, and then 16 for prediction.
The authors consider five theories that are all special cases of biseparable utility (see the unnumbered equation between Eqs. 14 and 15 on p. 16), although the authors use different names. For multiple prior theories they use, as sets of priors, sets with lower bounds for the three probabilities: P(pink) ≥ p1, P(blue) ≥ p2, P(yellow) ≥ p3, with the pj summing to less than 1. (As in Hey, Lotito, & Maffioletti (2010), who did not write this point clearly.) So it gives three free parameters. They consider no losses, so CEU is the same as PT.
Their theories (with nr. of parameters specified on p. 17 l. -3) are:
(1) SEU with 4 parameters (2 subjective probabilities, 1 utility, 1 error variance);
(2) CEU (biseparable utility in full generality) with 8 parameters; (6 capacities; 1 utility; 1 error variance);
(3) α-maxmin (AEU) with 6 parameters (3 for the set of priors; 1 for α; 1 utility, and 1 error variance);
(4) What they call vector expected utility (VEU), but what in fact is biseparable utility with w(p) = p − c for a constant c that usually is positive but that is also allowed to be negative. This violates stochastic dominance if the best outcome has outcome-probability < c (a small sketch after this list illustrates this). The authors always restrict c to less than the minimal probability occurring in their experiment, but this is ad hoc and this specification of binary RDU is therefore not useful. (I guess a similar restriction w.r.t. maximal probabilities applies for negative c, but did not check.) It does the opposite of inverse-S for small probabilities, not overestimating them but underestimating them. It is in fact neo-additive probability weighting with the two parameters the same except that one has the wrong sign. This theory has 5 parameters (2 subjective probabilities, 1 for c, 1 utility, and 1 error variance);
(5) The contraction model (COM); note that the contraction model has the set of priors as exogenously given, whereas this paper takes it as endogenous. Thus the contraction model simply is identical to maxmin EU. The contraction factor in their Eq. 13 is unidentifiable. 6 parameters (3 for the set of priors; 1 utility; 1 for the contraction factor, and 1 error variance);
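A minimal sketch (my own numbers) of the dominance problem noted in theory (4): with w(p) = p − c and c > 0, a best-outcome probability below c gives a negative decision weight, so improving the best outcome lowers the evaluation.

```python
c = 0.05                              # the constant subtracted from p (assumed positive here)

def v(p_best, u_best, u_worst):
    w = p_best - c                    # binary biseparable form with w(p) = p - c
    return w * u_best + (1 - w) * u_worst

print(v(0.03, 100, 0), v(0.03, 200, 0))   # -2.0 vs -4.0: the dominating prospect is valued lower
```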
Specification 1 assumes linear-exponential (CARA) utility, and specification 2 log-power (CRRA) utility. Specification 2 does better, and I think that this is because it accommodates the compromise effect better.
P. 3 discusses the difficulty of testing two-stage models experimentally.
P. 4 2nd para does not understand the role of the subjective (also called ambiguity neutral) probabilities used by Abdellaoui et al. (2011), based on Chew & Sagi (2008), because of which it is NOT the same as CEU but a special case.
In their results (p. 18 top), CEU performs poorly, which is because it is given way too many parameters, as explained for instance by Kothiyal, Spinu, & Wakker (2014 JRU), leading to great overfitting with the parameters picking up more noise than system; COM (= maxmin EU) performs poorly with its unidentifiable contraction factor; AEU does somewhat better because they don’t have redundant parameters, SEU yet better (although AEU is better on p. 25 l. 1), and VEU (vector EU) is best. In the results section they describe statistical tests, but I did not understand why they did not just do Wilcoxon to compare the predictive likelihoods of all theories.
P. 28 last para: for the more ambiguous blower the main change is that subjects take subjective probabilities closer to uniform, nicely confirming the cognitive interpretation of inverse-S. (cognitive ability related to likelihood insensitivity (= inverse-S)) %}

Hey, John D. & Noemi Pace (2014) “The Explanatory and Predictive Power of Non Two-stage-Probability Theories of Decision Making under Ambiguity,” Journal of Risk and Uncertainty 49, 1–29.


{% dynamic consistency: subjects can divide money over two risky prospects (say investments) in a first stage, and then, after the risk of the first stage is resolved, can divide the remainder again over two risky prospects. They must announce beforehand what their second-stage division will be, but in the second stage get the chance to revise. Thus we can test the dynamic decision principles. By looking at investments we get continuous observations and can test more. The authors fit RDU with the usual 4 dynamic types: resolute, sophisticated, naïve, and myopic (the latter meaning that at stage 1 they only optimize the stage-1 rewards, completely ignoring the investment to be made after). They get, roughly, 55% resolute, 23% sophisticated, 13% myopic and 10% naïve.
As always in John Hey’s papers, the 1992 probability weighting function family of Tversky & Kahneman (1992) is ascribed to Quiggin (1982). Footnote 22 of H&P refers to Quiggin “proposing” the T&K family without the 1/g exponent in the denominator. However, this family has been well known long before, and Quiggin properly cites Karmarkar for using it. (More precisely, Quiggin and Karmarkar consider a normalized version.) Quiggin then in fact criticizes it, for still violating stochastic dominance in the old fixed-probability transformation theory. %}
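For concreteness, these are the two weighting families being contrasted above (standard forms, stated from general knowledge rather than quoted from the paper); the only difference is the 1/g exponent in the denominator:

```python
def w_tk92(p, g):
    """Tversky & Kahneman (1992) probability weighting function."""
    return p**g / (p**g + (1 - p)**g) ** (1 / g)

def w_karmarkar(p, g):
    """Normalized form used by Karmarkar and cited by Quiggin: no 1/g exponent."""
    return p**g / (p**g + (1 - p)**g)

print(w_tk92(0.1, 0.61), w_karmarkar(0.1, 0.61))  # both overweight p = 0.1, but differently
```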

Hey, John D. & Luca Panaccione (2011) “Dynamic Decision Making: What Do People Do?,” Journal of Risk and Uncertainty 42, 85–123.


{% real incentives/hypothetical choice: they asked N = 9 subjects to express indifferences. Hypothetical choice, that is, in a paper by John Hey! %}

Hey, John D. & Elisabetta Strazzera (1989) “Estimation of Indifference Curves in the Marschak-Machina Triangle,” Journal of Behavioral Decision Making 2, 239–260.


{% Seems to show that the moments do not characterize the distribution, but I’m not sure %}

Heyde, Christopher C. (1963) “On a Property of the Lognormal Distribution,” Journal of the Royal Statistical Society, Series B, 25, 392–393.


{% crowding-out %}

Heyes, Anthony (2005) “The Economics of Vocation or `Why Is a Badly Paid Nurse a Good Nurse’?,” Journal of Health Economics 24, 561–569.


{% Seems to have argued against EU, and for moment models. %}

Hicks, John R. (1931) “The Theory of Uncertainty and Profit,” Economica 32, 170–189.


{% Commonly taken as the main paper to establish the ordinal view of utility in economics. Seems to show that indifference curves can be employed to reconstruct the theory of consumer behavior on the basis of ordinal utility, and to have emphasized how much one can do with only ordinal utility.
Edwards (1954): “This paper was to economics something like the behaviorist revolution in psychology.”
Zeuthen (1937) cites parts of it, for example from p. 225: “A theory aiming at establishing the results of human choices in terms of quantities exchanged and the ratios of such quantities (i.e., prices) may dispense with any assumption which is not purely behaviouristic, while a theory of human welfare must go back to psychological introspection.”
In relation to that, it seems to be a major paper in making economics exclude survey data and introspection from its domain, and rely exclusively on observable choice. %}

Hicks, John R. & Roy G.D. Allen (1934) “A Reconsideration of the Theory of Value: I; II,” Economica n.s., 1.1: 52–75; 1.2: 196–219.


{% Consider the case where uncertainty can be reduced to uncertainty about one’s own subjective discounting in the future. %}

Higashi, Youichiro, Kazuya Hyogo, & Norio Takeoka (2009) “Subjective Random Discounting and Intertemporal Choice,” Journal of Economic Theory 144, 1015–1053.


{% Correct a mistake in Mukerji & Tallon (2003 JME). %}

Higashi, Youichiro, Sujoy Mukerji, Norio Takeoka, & Jean-Marc Tallon (2008) “Comment on ‘Ellsberg’s Two-Color Experiment, Portfolio Inertia and Ambiguity’,” International Journal of Economic Theory 4, 433–444.


{% Considers decision from experience. If subjects can quickly and easily do very much sampling, then they properly estimate probabilities of rare events, so neither over- nor underweighting. %}

Hilbig, Benjamin E. & Andreas Glöckner (2011) “Yes, They Can! Appropriate Weighting of Small Probabilities as a Function of Information Acquisition,” Acta Psychologica 138, 390–396.


{% conservation of influence: seems to have written: freedom is the opportunity to make decisions. %}

Hildebrand, Kenneth (date unknown).


{% seminar presentation by David, November 9, 1994; information aversion: p. 97, aversion to information as a normative argument regarding choices of binary tests %}

Hilden, Joergen (1991) “The Area under the ROC Curve and Its Competitors,” Medical Decision Making 11, 95–101.


{% revealed preference %}

Hildenbrand, Werner (1989) “The Weak Axiom of Revealed Preference for Market Demand is Strong,” Econometrica 57, 979–985.


{% %}

Hildenbrand, Werner & Alan P. Kirman (1976) “Introduction to Equilibrium Analysis.” North-Holland, Amsterdam.


{% Simple proofs of relations between revealed preference axioms and Slutsky matrix properties %}

Hildenbrand, Werner & Michael Jerison (1990) “The Demand Theory of the Weak Axioms of Revealed Preference,” Economics Letters 29, 209–213.


{% %}

Hilhorst, Cokky, Piet Ribbers, Eric van Heck, & Martin Smits (2008) “Using Dempster–Shafer Theory and Real Options Theory to Assess Competing Strategies for Implementing IT Infrastructures: A Case Study,” Decision Support Systems 46, 344–355.


{% %}

Hill, Brian (2009) “Living without State-Independence of Utility,” Theory and Decision 67, 405–432.


{% %}

Hill, Brian (2009) “When is there State Independence?,” Journal of Economic Theory 144, 1119–1134.


{% This paper presents advanced maths to obtain a state-dependent version of Savage (1954), using useful techniques of Krantz et al. (1971) in an interesting way. It thus aims to obtain a genuine state-dependent generalization of Savage (1954). Wakker & Zank (1999, MOR) did something in this direction but, as the author correctly points out, they still needed monotonicity (ordinal state independence). Further, they used richness of outcomes rather than of states.
There still remain some mathematical problems in the results of this paper. A counterexample results from Savage (1954) in his original setup, with the power set of S as sigma-algebra (event space). As is well known (Ulam 1930), countable additivity of the probability measure P must be violated here. Then also the EU functional violates countable additivity, as is seen by considering indicator acts of events revealing the non-countable additivity of P. But, yet, Savage satisfies all axioms (A1-A5) of this paper. The measure U claimed in this paper is supposed to be countably additive though. The problems in the proof leading to this are, first, that the operation for countably many events (p. 2050 line 3 ff.) need not be well defined (it should be shown that it does not matter which countably many representative partial acts are chosen from indifference classes), which gives problems in the derivation of Archimedeanity (point 8 on p. 2053) and in the derivation of countable additivity (Proof of Proposition 2; p. 2053). %}

Hill, Brian (2010) “An Additively Separable Representation in the Savage Framework,” Journal of Economic Theory 145, 2044–2054.


{% The paper considers preferences in which a level of confidence in the preference plays a role. No uncertainty is considered, but later social choice is considered. A person does not have one preference, but a set of possible preferences; big sets reflect low confidence. For each decision situation an importance level is specified. If the importance is very high, only the most plausible preferences are accepted and, hence, there is more incompleteness. It reminds me of Nau (1992). %}

Hill, Brian (2012) “Confidence in Preferences,” Social Choice and Welfare 39, 273–302.


{% A generalization of multiple prior models. There is not one set of priors, but there are different levels of confidence (taken ordinally), and for each level of confidence there is a set of priors, being the priors that have at least that confidence. These sets are nested. The level of confidence chosen in a decision problem depends on the stakes of the decision problem. It reminds me of Nau (1992). I wonder how the model of this paper is related to Hill (2012), which seems to be similar. The paper does not discuss this relation. Refining the crude nature of multiple priors (in or out) is desirable of course. The model is very general, in requiring many sets of priors, and assigning such sets to stakes. Given a stake and a set of priors, the paper is pessimistic and does maxmin.
The paper uses Anscombe-Aumann.
P. 681 1st para points out that the paper takes the lowest (nonnull) outcome of an act as stake. So stake is minimum in this paper. It will suggest an interest in generalizations in §4. Note, to avoid terminological confusion, that stake is the opposite of goodness. Increasing the minimal outcome means decreasing the stake. The paper assumes that decreasing the worst outcome (“increasing the stake”) leads to bigger sets of priors and, hence, more ambiguity aversion. This is empirically violated by the commonly found ambiguity seeking for losses with ambiguity aversion for gains. %}

Hill, Brian (2013) “Confidence and Decision,” Games and Economic Behavior 82, 675–692.


{% Remarks about Johnstone’s sufficiency postulate, work on Zipf’s law, also fiducial inference, the species problem %}

Hill, Bruce M., David A. Lane & William D. Sudderth (1987) “Exchangeable Urn Processes,” Annals of Probability 15, 1586–1592.


{% %}

Hill, Clara E. & Michael J. Lambert (2004) “Methodological Issues in Studying Psychotherapy Processes and Outcomes.” In Michael J. Lambert (ed.) Bergin and Garfield’s Handbook of Psychotherapy and Behavior Change, 84–135, Wiley, New York.


{% %}

Hill, R. Carter, William E. Griffiths, & Guay C. Lim (2008) “Principles of Econometrics;” 3rd edn. Wiley, New York.


{% Consider welfare models with inequality aversion, diminishing sensitivity (w.r.t. the absolute value of the difference in income), and the Robin Hood principle (take from rich and give to poor), and logical relations between these. %}

Hill, Sarah A. & William Neilson (2007) “Inequality Aversion and Diminishing Sensitivity,” Journal of Economic Psychology 28, 143–153.


{% Splits up the risk premium under RDU into a part due to utility and a part due to probability weighting. %}

Hilton, Ronald W. (1988) “Risk Attitude under Two Alternative Theories of Choice under Risk,” Journal of Economic Behavior and Organization 9, 119–136.


{% information aversion %}

Hilton, Ronald W. (1990) “Failure of Blackwell’s Theorem under Machina’s Generalization of Expected-Utility Analysis without the Independence Axiom,” Journal of Economic Behavior and Organization 13, 233–244.


{% Good book for statistics I and II, used by Thom Bezembinder %}

Hinkle, Dennis E., William Wiersma & Stephen G. Jurs (1988) “Applied Statistics for the Behavioral Sciences.” Houghton, Boston.


{% %}

Hinnosaar, Toomas (2016) “On the Impossibility of Protecting Risk-Takers,” Economic Journal, forthcoming.


{% Seems to find violation of RCLA %}

Hirsch, Maurice L. Jr. (1978) “Disaggregated Probabilistic Accounting Information: The Effect of Sequential Events on Expected Value Maximization Decisions,” Journal of Accounting Research 16, 254–269.


{% %}

Hirschman, Albert O. (1992) “Rival Views of Market Society and Other Recent Essays.” Harvard University Press, Cambridge, MA.


{% Ch. 7 seems to show that intertemporal preferences have to reckon with subjective preferences if the market is not perfect, with different borrowing and lending rates. %}

Hirshleifer, Jack (1970) “Investments, Interest, and Capital.” Prentice-Hall, Englewood Cliffs, NJ.


{% Seems to have been the first to show that more information for society can lead to a loss of utility for all. For example, insurance will collapse under perfect information.
value of information %}

Hirshleifer, Jack (1971) “The Private and Social Value of Information and the Reward to Incentive Activity,” American Economic Review 61, 561–574.


{% value of information: shows that the value of information can be negative for society because it destroys risk sharing. Reminds me of how it can destroy insurance. Zilcha called this the “Hirshleifer effect.” %}

Hirshleifer, Jack (1975) “Speculation and Equilibrium: Information, Risk and Markets,” Quarterly Journal of Economics 89, 519–542.


{% Part I is on DUU.
§1.2 expresses the extreme viewpoint that decision under risk with objective probabilities is illusory and that probabilities should always be taken as subjective. Argues that Knight’s distinction is, therefore, not very useful.
§1.4.2: substitution-derivation of EU (P.s.: works only for !extraneous! probabilities, not for subjective/endogenous!)
§1.5 on risk aversion iff U is concave, Friedman & Savage (1948).
§1.6 has framing, Ellsberg, Allais, paradoxes.
Ch. 2 on optimal asset allocation, complete/incomplete markets, state-dependence, mean-variance analysis
Ch. 3 is on comparative statics. Pratt-Arrow index, index of RRA, stochastic dominance.
Ch. 4 is on market equilibrium under uncertainty.
