Bibliography



{% §4.1.1 argues that axioms should be independent. Related to this is the principle that axioms should be as weak as possible. This need not hold in descriptive studies, which want axioms to be as strong as possible so as to test theories as much as possible.
criticizing the dangerous role of technical axioms such as continuity: §4.1.3, p. 338, penultimate para discusses the point that continuity can add empirical content to other axioms, but is entirely optimistic and positive about it without seeing dangers. Pfanzagl (1968 §6.6), for instance, discusses this point but is more negative, seeing dangers there, and I agree. See also §9.1 of Krantz et al. (1971).
I like §4.3, that axioms should be conceptually compatible. As I see it, in Arrow’s voting paradox IIA and group-preference transitivity are conceptually incompatible, the former requiring that a choice between two alternatives give no info about a third, and the latter requiring that all choices between pairs of alternatives be made in the same state of info.
§4.4 argues strongly that it is not bad to have many axioms, a point that I don’t really understand.
A point that I missed in the discussion is that axiomatizations can give empirical meaning to theoretical constructs, and justify the use of the latter, for instance in the way that de Finetti justified subjective probabilities. %}

Thomson, William (2001) “On the Axiomatic Method and Its Recent Applications to Game Theory and Resource Allocation,” Social Choice and Welfare 18, 327–386.


{% %}

Thomson, William & Lin Zhou (1993) “Consistent Solutions in Atomless Economies,” Econometrica 61, 575–587.


{% foundations of statistics %}

Thorburn, Daniel (2005) “Significance Testing, Interval Estimation or Bayesian Inference: Comments to “Extracting a Maximum of Useful Information from Statistical Research Data,” by S. Sohlberg & G. Anderss,” Scandinavian Journal of Psychology 46, 79–82.


{% foundations of statistics: gives a decision foundation for the use of inversion of credible sets to test point hypotheses. %}

Thulin, Måns (2013) “Decision-Theoretic Justifications for Bayesian Hypothesis Testing Using Credible Sets,” Journal of Statistical Planning and Inference 146, 133–138.


{% Conjunction fallacy %}

Thüring, Manfred & Helmut Jungermann (1990) “The Conjunction Fallacy: Causality vs. Event Probability,” Journal of Behavioral Decision Making 3, 61–74.


{% error theory for risky choice; Generally credited with introducing random utility, later also developed by McFadden. %}

Thurstone, Louis L. (1927) “A Law of Comparative Judgment,” Psychological Review 34, 273–286.


{% Points out some sophisticated problems in an equal-interval-judgment experiment. %}

Thurstone, Louis L. (1929) “Fechner’s Law and the Method of Equal Appearing Intervals,” Journal of Experimental Psychology 12, 214–224.


{% Famous paper, measuring utility empirically through hypothetical choice over coats and hats. Can be credited with introducing hypothetical choice to measure preference. %}

Thurstone, Louis L. (1931) “The Indifference Function,” Journal of Social Psychology 2, 139–167.


{% %}

Tian, Guoqiang (1993) “Necessary and Sufficient Conditions for Maximization of a Class of Preference Relations,” Review of Economic Studies 60, 949–958.


{% information aversion: for genetic diseases such as Huntington’s disease, people can have themselves tested, but there is no cure for the disease. For example, if your father has it, you have .5 probability of also having it. Some want to have the test; others really do not want to know whether they have the bad gene. %}

Tibben, Aad, Petra G. Frets, Jacques J.P. van de Kamp, et al. (1993) “Presymptomatic DNA-Testing for Huntington Disease: Pretest Attitudes and Expectations of Applicants and Their Partners in the Dutch Program,” Am. J. Med. Genet. 48, 10–16.


{% information aversion %}

Tibben, Aad, Petra G. Frets, Jacques J.P. van de Kamp, et al. (1993) “On Attitudes and Appreciation 6 Months after Predictive DNA Testing for Huntington Disease in the Dutch Program,” Am. J. Med. Genet. 48, 103–111.


{% three-prisoners problem: shows that many empirical studies of cognitive dissonance are simply making the known three-prisoners mistake in their statistics. Very funny! %}

Tierney, John (2008) “And behind Door No. 1, a Fatal Flaw,” New York Times, Science, April 8, 2008.


{% con. probability; Formula of Bayes etc. in legal affairs. Many discussing contributors, among others Ward Edwards. %}

Tillers, Peter & Eric D. Green (1988) “Probability and Inference in the Law of Evidence: The Uses and Limits of Bayesianism.” Kluwer Academic Publishers, Dordrecht.


{% %}

Tilling, Carl, Nancy Devlin, Aki Tsuchiya, & Ken Buckingham (2010) “Protocols for Time Tradeoff Valuations of Health States Worse than Dead: A Literature Review,” Medical Decision Making 30, 610–619.


{% probability communication: surveys studies on various ways of communicating risks to patients, focusing on genetic risks. %}

Timmermans, Daniëlle R.M. (2004) “Being at Risk: The Communication and Perception of Genetic Risks,”


{% probability communication %}

Timmermans, Daniëlle R.M., A.C. Molenwijk, Anne M. Stiggelbout, & Job Kievit (2004) “Different Formats for Communicating Risks to Patients and the Effects on Choices of Treatment,” Patient Education and Counseling 54, 255–263.


{% %}

Timmermans, Daniëlle R.M., Peter Politser, & Peter P. Wakker (1995) “Aggregation, Rationality, and Risk Communication: Three Current Debates in Medical Decision Making.” In Jean-Paul Caverni, Maya Bar-Hillel, Francis Hutton Barron, & Helmut Jungermann (eds.) Contributions to Decision Making -I, 111–117, Elsevier Science, Amsterdam.


{% %}

Timmermans, Daniëlle R.M., Arwen J. Sprij, & Chris E. de Bel (1996) “The Discrepancy between Daily Practice and the Policy of a Decision Analytic Model: The Management of Fever without Focus,” Medical Decision Making 16, 357–367.


{% Gives techniques for optimizing a Choquet integral. %}

Timonin, Mikhail (2012) “Maximization of the Choquet Integral over a Convex Set and Its Application to Resource Allocation Problems,” Annals of Operations Research 196, 543–579.


{% utility = representational? %}

Tinbergen, Jan (1991) “On the Measurement of Welfare,” Journal of Econometrics 50, 7–13.


Abstract. The author believes in the measurability of welfare (also called satisfaction or utility). Measurements have been made in the United States (D.W. Jorgenson and collaborators), France (Maurice Allais), and the Netherlands (Bernard M.S. Van Praag and collaborators). The Israeli sociologists S. Levy and L. Guttman have shown that numerous noneconomic variables are among the determinants of welfare. The determinants are numerous; the author proposes a list of about fifty. Various mathematical functions have been proposed, of which the logarithm of the determinants shows the highest correlation with welfare, as measured.
{% conservation of influence: known for putting forward four “why” questions for actions, which are the cornerstone of modern ethology:
1st is the immediately preceding history: the bird sings because its nerves … etc. So, causal, as a physical phenomenon.
2nd concerns the longer past. The bird sings because it learned from its father. So this is development at the individual level.
3rd concerns an even longer past. The bird sings because its genes make it do so. Is evolutionary (but still basically causal, as were the preceding two).
4th concerns purpose: the bird (say male) sings to attract a female. Is functional, about purpose. Good singing improves survival. The fourth question requires consideration of: what would have happened had the bird not sung? Tinbergen did experiments in this spirit. Birds clean their nests of remainders of shells. Tinbergen put up artificial nests with remainders of shells, to find out that crows etc. then came to steal. %}

Tinbergen, Niko (1963) “On the Aims and Methods of Ethology,” Zeitschrift für Tierpsychologie 20, 410–433.


{% %}

Tinghög, Gustav, David Andersson, Caroline Bonn, Harald Böttiger, Camilla Josephson, Gustaf Lundgren, Daniel Västfjäll, Michael Kirchler, & Magnus Johannesson (2013) “Intuition and Cooperation Reconsidered,” Nature 498 (06 June 2013), E1–E2.


{% %}

Tinghög, Gustav, David Andersson, Caroline Bonn, Magnus Johannesson, Michael Kirchler, Lina Koppel, & Daniel Västfjäll (2016) “Intuition and Moral Decision-Making: The Effect of Time Pressure and Cognitive Load on Moral Judgment and Altruistic Behavior,” PLoS ONE 11, e0164012.


{% P. 152: classical problem that the discounted expected utility model cannot separate risk and time attitude, is explained nicely. %}

Tirole, Jean (1990) “In Honor of David Kreps, Winner of the John Bates Clark Medal,” Journal of Economic Perspectives 4 no. 3, 149–170.


{% %}

Tirole, Jean (2002) “Rational Irrationality: Some Economics of Self-Management,” European Economic Review 46, 633–655.


{% crowding-out: seems to have argued that monetary incentives could undermine the sense of civic duty. The example of blood donation seems to have been given in Titmuss (1971). %}

Titmuss, Richard M. (1970) “The Gift Relationship.” Allen and Unwin, London.


{% crowding-out for blood donation. %}

Titmuss, Richard M. (1971) “The Gift Relationship: From Human Blood to Social Policy.” Pantheon Books, New York.


Reprinted in Richard M. Titmuss, Brian Abel-Smith, & Kay Titmuss (1987, eds.) The Philosophy of Welfare, Allen and Unwin, London.
{% %}

Toda, Masanao & Emir H. Shuford, Jr. (1965) “Utility, Induced Utility, and Small Worlds,” Behavioral Sciences 10, 238–254.


{% Cites a man called Buffon who argued that all probabilities smaller than the probability for a man of sixty-five to die on a given day (which was .0001 then) should be ignored, says Stigler. %}

Todhunter, Isaac (1865) “A History of the Mathematical Theory of Probability from the Time of Pascal to That of Laplace.” Cambridge. (Reprinted 1949, 1965, Chelsea Publishing Co., New York.)


{% Asset-pricing models are examined assuming fat-tail rather than normal distributions. %}

Tokat, Yesim, Svetlozar T. Rachev, & Eduardo S. Schwartz (2003) “The Stable Non-Gaussian Asset Allocation: A Comparison with the Classical Gaussian Approach,” Journal of Economic Dynamics and Control 27, 937–969.


{% “All happy families are alike; each unhappy family is unhappy in its own way.” %}

Tolstoy, Leo, “Anna Karenina” (opening sentence).


{% losses from prior endowment mechanism; Unfortunately they paid three choices (one from each of three scanning runs) and not one, so that there is some income effect. It seems that some subjects received the prior endowment earlier than others, and then integrated less, but I should check this out.
Consider acceptance or rejection of 50-50 prospects such as gaining $20 or losing $10, each with probability 0.5. Gains range from $10 to $40 and losses from $5 to $20. Subjects are asked if they find the prospects very acceptable, a bit acceptable, or very/a bit unacceptable. Acceptability rates (not distinguishing between very or a bit (un)acceptable, so revealed-preference based) suggest, with linear utility, λ = 1.93 as median. So in this sense no risk seeking for symmetric fifty-fifty gambles.
They do not have decisions immediately followed by payment, aiming to generate decision utility and not experienced utility. They find no activation of negative emotions in the brain such as fear (amygdala), but activation of parts of the brain associated with evaluation. %}

Tom, Sabrina M., Craig R. Fox, Christopher Trepel, & Russell A. Poldrack (2007) “The Neural Basis of Loss Aversion in Decision Making under Risk,” Science 315, 515–518.


{% ISBN: 978-1-78471-991-3 %}

Tomer, John F. (2017) “Advanced Introduction to Behavioral Economics.” Edward Elgar Publishing, Vermont.


{% %}

Nakajima, Tomoyuki & Herakles Polemarchakis (2005) “Money and Prices under Uncertainty,” Review of Economic Studies 72, 223–246.


{% EU+a*sup+b*inf: Takes RDU for uncertainty as given. Then adds preference conditions, mainly strong null event consistency and extreme outcomes sensitivity (sure-thing principle for intermediate outcomes), that axiomatize the neo-additive case. %}

Toquebeuf, Pascal (2016) “Choquet Expected Utility with Affine Capacities,” Theory and Decision 81, 177–187.


{% %}

Torgerson, Warren S. (1958) “Theory and Methods of Scaling.” Wiley, New York.


{% ratio bias: if subjects are asked to produce sequences of equal distances (differences) or of equal ratios, they produce roughly the same sequences. P. 203: “It appears that the subject simply interprets this single relation in whatever way the experimenter requires. When the experimenter tells him to equate differences or to rate on an equal interval scale, he interprets the relation as a distance. When he is told to assign numbers according to subjective ratios, he interprets the same relation as a ratio.” %}

Torgerson, Warren S. (1961) “Distances and Ratios in Psychophysical Scaling,” Acta Psychologica 19, 201–205.


{% Proposes EU with U(x) = x(1 + k(x/(x+K))²). The function is concave for losses, tending to −∞ as x approaches −K (K is total wealth). It is convex for gains, starting with derivative 1 at x = 0 and tending to derivative (1+k) as x tends to ∞. The author does so to accommodate risk seeking for lotteries. This precedes Friedman & Savage (1948) in seeking to use utility curvature to model risk attitudes, rather than just using concave utility to get risk aversion. It has convexity for gains to accommodate gambling, and concavity for losses so as to accommodate insurance. It does not have a concave part for gains, as Friedman-Savage does.
risky utility u = strength of preference v: clearly uses this interpretation. %}

Törnqvist, Leo (1945) “On the Economic Theory of Lottery Gambles,” Skandinavisk Aktuarietidskrift 28, 228–246.
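The curvature claims in the annotation above (derivative 1 at x = 0, derivative tending to 1 + k for large gains) can be verified numerically. The sketch below uses illustrative values for k and K that are not taken from Törnqvist’s paper.

```python
# Numeric check of the utility function attributed to Törnqvist (1945):
# U(x) = x * (1 + k * (x/(x+K))^2), with illustrative k = 2 and K = 100.

def tornqvist_utility(x, k=2.0, K=100.0):
    return x * (1.0 + k * (x / (x + K)) ** 2)

def central_derivative(f, x, h=1e-3):
    # two-point central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Derivative at x = 0 should be approximately 1.
d0 = central_derivative(tornqvist_utility, 0.0)

# For large x the derivative should approach 1 + k (= 3 here).
dinf = central_derivative(tornqvist_utility, 1.0e6, h=1.0)

print(d0, dinf)
```

Both numbers come out very close to 1 and 1 + k, consistent with the annotation’s description of the gain branch.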


{% risky utility u = strength of preference v (or other riskless cardinal utility, often called value).: p. 132; utility elicitation;
Compares TTO, standard gamble, and category scaling.
SG doesn’t do well: it is only done with the high-education group, because it was too complex for the other members of the general public.
Category scaling behaves strangely, deviates from the others, and is judged difficult. %}

Torrance, George W. (1976) “Social Preferences for Health States: An Empirical Evaluation of Three Measurement Techniques,” Socio-Economic Planning Sciences 10, 129–136.


{% utility elicitation; relates SG to TTO?; introduces adaptive method. Takes EU as gold standard with respect to validity. %}

Torrance, George W. (1986) “Measurement of Health State Utilities for Economic Appraisal: A Review,” Journal of Health Economics 5, 1–30.


{% utility elicitation
P. 596 refers to dependence of health state utility on prognosis.
P. 599: SG doesn’t do well, author recommends using either VAS or TTO, but not SG. %}

Torrance, George W. (1987) “Utility Approach to Measuring Health-Related Quality of Life,” Journal of Chronic Diseases 40, 593–600.


{% utility elicitation; risky utility u = strength of preference v (or other riskless cardinal utility, often called value).: use vNM index for interpersonal aggregations.
questionnaire versus choice utility: they tranform direct judgment questions into vNM index by nonlinear transformation, and use the latter for interpersonal aggregations etc. %}

Torrance, George W., Michael H. Boyle, & Sargent P. Horwood (1982) “Application of Multi-Attribute Utility Theory to Measure Social Preferences for Health States,” Operations Research 30, 1043–1069.


{% SG gold standard; p. 560 takes EU normative and SG as gold standard.
Survey of QALYs; use MAUT techniques to combine dimensions in Health utilities index (vision, hearing, speech, dexterity, mobility, cognition, emotion, pain) and others into a QALY index; favor use of standard gamble. %}

Torrance, George W. & David H. Feeny (1989) “Utilities and Quality-Adjusted Life Years,” International Journal of Technology Assessment in Health Care 5, 559–575.


{% %}

Torrance, George W., David H. Feeny, William J. Furlong, Ronald D. Barr, Yuemin Zhang, & Qinan Wang (1996) “Multiattribute Utility Function for a Comprehensive Health Status Classification System: Health Utilities Index Mark 2,” Medical Care 34, 702–722.


{% %}

Torrance, George W., William J. Furlong, David H. Feeny, & Michael H. Boyle (1995) “Multi-Attribute Preference Functions. Health Utilities Index,” PharmacoEconomics 7, 490–502.


{% I thought for some time that they introduced QALYs, together with Patrick, Bush, & Chen (1973). Later I found that Fanshel & Bush (1970, p. 1050) preceded them.
P. 121 points out how prognosis about future health affects the current quality of life. %}

Torrance, George W., David L. Sackett, & Warren H. Thomas (1973) “Utility Maximization Model for Program Evaluation: A Demonstration Application, Health Status Indexes.” In Robert L. Berg (ed.) Health Status Indexes: Proceedings of a Conference Conducted by Health Services Research, Tucson, Arizona, 1972. Hospital Research and Educational Trust, Chicago IL.


{% utility elicitation; Introduces Time Tradeoff; explains standard gamble for measuring health states. (Although Fanshel & Bush (1970, p. 1050) preceded them.)
P. 120 has the nice example where, for one day, you prefer bed confinement to kidney dialysis, but for five years your preference switches. %}

Torrance, George W., Warren H. Thomas, & David L. Sackett (1972) “A Utility Maximization Model for Evaluation of Health Care Programs,” Health Services Research 7, 118–133.


{% between-random incentive system (paying only some subjects): one of 100 subjects is paid one choice. Given that the system is adaptive, it means that in principle it may not be incentive compatible. But for subjects it is totally impossible to recognize that it is adaptive, let alone to see how to exploit it. So theoretically it is not incentive compatible, but practically it is.
Use an adaptive system, well known in marketing, for measuring risk and time attitudes, which are measured through choice lists and indifferences derived from those. Adaptive means that for each subject, for each new question, it is calculated from the preceding questions what the most informative new question will be, according to some minimization of a correlation-matrix’s determinant or so, and that is asked as the next question to the subject. The authors find that people with big debts on their houses discount more than others, but are not different in risk attitude.
They also do a traditional nonadaptive measurement in which they find no significant relation, but here they measured only two indifferences for time and two for risk (where it is not clear to me how they could calculate loss aversion from only two indifferences), so they simply have less data and less power. %}

Toubia, Olivier, Eric Johnson, Theodoros Evgeniou, & Philippe Delquié (2013) “Dynamic Experiments for Estimating Preferences: An Adaptive Method of Eliciting Time and Risk Parameters,” Management Science 59, 613–640.


{% There are circularities in the definitions, and I think that this paper is basically unsound. A first problem is that the sets A0, A1, A2 are not well defined: “can be compared” can be interpreted in several ways, none leading to correct results. A second problem is that she only considers one-sided-unbounded utility, not two-sided. The latter is the most problematic case because integrals may not just be ∞ or −∞, but may be really undefined (“∞ − ∞”). A third problem is that extending preferences by independence and monotonicity may lead to intransitivities. I wrote two letters about this to the author at the end of the 1980s but she was too busy to reply. %}

Toulet, Claude (1986) “An Axiomatic Model of Unbounded Utility Functions,” Mathematics of Operations Research 11, 81–94.


{% %}

Toulet, Claude (1986) “Complete Ignorance and Independence Axiom: Optimism, Pessimism, Indecisiveness,” Mathematical Social Sciences 11, 33–51.


{% cognitive ability related to likelihood insensitivity (= inverse-S) & inverse-S (= likelihood insensitivity) related to emotions: hypothetical choices of WTP preceded by a task with images on the screen that either induced negative affect (fear) or neutral emotions. Probability weighting was derived assuming linear utility, using the Einhorn-Hogarth family. Also statistical numeracy was measured. For subjects with low statistical numeracy, negative affect increased inverse-S probability weighting. For subjects with high statistical numeracy, no effects were found. Optimism/pessimism never changed.
P. 38, 1st-2nd column, nicely states that the impact of emotions on probability weighting does not preclude taking it as cognitive: “Emotions are not only a consequence of choices but also often drive the cognitive process to arrive at a decision.” It then cites some papers on it, including, for probability weighting, Rottenstreich & Hsee (2001). %}

Traczyk, Jakub & Kamil Fulawka (2016) “Numeracy Moderates the Influence of Task-Irrelevant Affect on Probability Weighting,” Cognition 151, 37–41.


{% http://dx.doi.org/10.1080/01973533.2014.865505

foundations of statistics: an editorial saying that H0 testing is not a valid method of inference and banning it from the journal. See also Trafimow & Marks (2015). %}

Trafimow, David (2014) “Editorial,” Basic and Applied Social Psychology 36, 1–2.


{% DOI: http://dx.doi.org/10.1080/01973533.2015.1012991;
foundations of statistics: an editorial saying that H0 testing is not a valid method of inference and banning it from the journal. As a Bayesian, I could not agree more! If not H0, then what alternative approach? The editors give no clear reply, and point to the problems of having prior probabilities in the Bayesian approach. I agree with this. It is a difficult question to which we do not know a clear answer. Better no answer than the invalid Neyman-Pearson hypothesis testing. %}

Trafimow, David & Michael Marks (2015) “Editorial,” Basic and Applied Social Psychology 37, 1–2.


{% In an experiment, test how students, ranking various distributions over people, trade off efficiency and equity, for a lottery scenario and three social scenarios, with a veil of ignorance in varying degrees.
Real incentives: 5 students (also 5 different income groups were distinguished) are randomly drawn (per group I guess) and then one chosen allocation is randomly selected and paid to the five students. Risky utility is not the same as welfare utility. %}

Traub, Stefan, Christian Seidl, & Ulrich Schmidt (2009) “An Experimental Study on Individual Choice, Social Welfare, and Social Preferences,” European Economic Review 53, 385–400.


{% Point out the experimental flaw in Chechile & Cooke (1997). %}

Traub, Stefan, Christian Seidl, Ulrich Schmidt, & Peter Grösche (1999) “Knock-Out for Descriptive Utility or Experimental Error?,” Journal of Economics 70, 109–126.


{% %}

Trautmann, Stefan T. (2009) “A Fehr-Schmidt Model for Process Fairness,” Journal of Economic Psychology 30, 803–813.


{% %}

Trautmann, Stefan T. (2010) “Individual Fairness in Harsanyi’s Utilitarianism: Operationalizing All-Inclusive Utility,” Theory and Decision 68, 405–415.


{% They use hypothetical choice with large outcomes. Prospect theory and construal theory make opposite predictions for low-probability extreme outcomes (p. 256). Prospect theory fits data better than construal level theory. %}

Trautmann, Stefan T. & Gijs van de Kuilen (2012) “Prospect Theory or Construal Level Theory? Diminishing Sensitivity vs. Psychological Distance in Risky Decisions,” Acta Psychologica 139, 254–260.


{% probability elicitation. Compare five belief elicitation methods: introspection, CE measurement, PE measurement, a proper scoring rule assuming risk neutrality, and a proper scoring rule with correction for risk attitude. Belief is about behavior of others in an ultimatum game. It can serve as a: survey on belief measurement. They consider 4 criteria: two versions of internal validity: (1) additivity; (2) prediction of own behavior; and further (3) external validity (closeness to objective probability), (4) complexity.
They analyze CE measurement and proper scoring rules with and without correction for risk attitudes. They find that this correction improves, but maybe not by very much, so on the one hand they say that increasing complexity does not help but on the other that risk-attitude correction does. A drawback of this analysis, at least from the descriptive perspective, is that the first internal validity criterion, additivity, ignores ambiguity attitude (they only write this in footnote 16, p. 2133, where the same point is implicit in footnote 5).
They do the measurement with and without explicitly saying to subjects that this is about belief measurement, and find that it makes no difference. They cite Offerman et al. (2009) for the same result. (Offerman et al. thought that only the treatment with explicit mention was natural, but had to add the control treatment because a referee and editor required it.)
Results are summarized in §6. §6.1: nonadditivity is strong in all measurements, least so in the introspective one. §6.2: truth serums improve prediction of own behavior, but it is not very good. §6.3: the methods are all similarly close to the true probabilities. %}

Trautmann, Stefan T. & Gijs van de Kuilen (2015) “Belief Elicitation: A Horse Race among Truth Serums,” Economic Journal 125, 2116–2135.


{% %}

Trautmann, Stefan T. & Gijs van de Kuilen (2018) “Higher Order Risk Attitudes: A Review of Experimental Evidence,” European Economic Review 103, 108–124.


{% survey on nonEU: valuable survey on empirical studies of ambiguity.
ambiguity seeking for unlikely: p. 103 ff. documents and reviews this.
ambiguity seeking for losses: they document this.
They write in several places that ambiguity attitudes depend on the likelihood of events (p. 89 l. 9: “This literature has shown that attitudes towards ambiguity depend on the likelihood of the uncertain events.”; also p. 104 penultimate para). I would state this differently, and say that ambiguity aversion depends on likelihood. The latter is true: ambiguity aversion increases with likelihood. The former need not be: there is a-insensitivity everywhere, for all events the same. It MEANS ambiguity seeking for unlikely and ambiguity aversion for likely.
P. 89 l. -3: “Interestingly, the empirical literature has so far provided relatively little evidence linking individual attitudes toward ambiguity to behavior outside the laboratory in these, theoretically, ambiguity-sensitive decisions.”
P. 94 middle (on 2nd order probabilities for generating ambiguity): “if the theory regards unknown probabilities it might be inappropriate to operationalize them with known-risk compound lotteries.”
natural sources of ambiguity: Pp. 94-96, on natural sources of ambiguity, list three ways to control for unknown beliefs: (1) bets on events and their complements (which in fact is detecting source preference); (2) the source method; (3) first measure subjective beliefs (I assume introspectively) and then compare with bets with the same objective probabilities. They give no references, but here are some: Hogarth & Kunreuther (1995), Heath & Tversky (1991), Zeckhauser (2006).
P. 97 l. 10 ff: “Thus, the three-color problem elicits much lower ambiguity aversion than the two-color problem.”
P. 102 middle: “Given that many experiments use designs where risky and ambiguous bets are directly compared, while outside the laboratory there are often few truly unambiguous options, it is not clear how far quantitative laboratory measurements are representative of the preferences in potentially noncomparative real-world settings.”
P. 102 ll. -2/-1: “Interestingly, ambiguity aversion does not seem more justifiable than ambiguity seeking nor vice versa.” Here justifiability refers to group discussions.
P. 107 3rd para: it is correct that ambiguity aversion is a special case of source preference. The authors cite a paper where source preference relations measured for different (pairs of) sources were unrelated, which of course can happen. But then they are confused in suggesting that ambiguity aversion and source preference are different concepts.
P. 107 end of penultimate para: “However, there is surprisingly little evidence yet in support of the assumed link from Ellsberg-urn ambiguity attitude to behavior outside the laboratory, and thus on the external validity of the ambiguity attitude concept.”
P. 108 penultimate para: One Dimmock et al. paper finds significant relation with a-insensitivity and not with ambiguity aversion, and the other finds it the other way around. These findings are not contrary because finding a null of no relation does not mean much.
P. 109 2nd para: A careful consideration of these gain-loss differences seems warranted in applications in insurance of health, where losses play an important role.
P. 109 1st para of conclusion: “Given the relevance of these domains in the field, the universal focus of theoretical work on ambiguity aversion seems misplaced.”
P. 109 1st para of conclusion: “Are the psychological mechanisms leading to ambiguity aversion in one domain and ambiguity seeking in another domain the same?” My answer: Yes! The fourfold pattern of ambiguity all results from insensitivity.
P. 110 endnote 4: “It is noteworthy that the comparative-ignorance effect does not typically lead to decreased valuations for the ambiguous act, but to increased valuations of the risky act. Loosely speaking, the presence of ambiguity seems to make known‐probability risk look nicer. This can have implications for the elicited risk attitudes when measured jointly with ambiguity attitudes (see the section, Correlation between risk and ambiguity attitudes).” %}

Trautmann, Stefan T. & Gijs van de Kuilen (2015) “Ambiguity Attitudes.” In Gideon Keren & George Wu (eds.), The Wiley Blackwell Handbook of Judgment and Decision Making (Ch. 3), 89–116, Blackwell, Oxford, UK.


{% dynamic consistency: test whether subjects who beforehand subscribe to the a priori oriented process fairness, continue to accept it ex post. Most do. Do it also under ambiguity. This is a test of time consistency. %}

Trautmann, Stefan T. & Gijs van de Kuilen (2016) “Process Fairness, Outcome Fairness, and Dynamic Consistency: Experimental Evidence for Risk and Ambiguity,” Journal of Risk and Uncertainty 53, 75–88.


{% suspicion under ambiguity: This paper offers an original manner to control for suspicion (idea of Vieider): the prizes are videos where only the subjects themselves know which one they like better. So the experimenter has no possibility of, and no interest in, manipulating. %}

Trautmann, Stefan T., Ferdinand M. Vieider, & Peter P. Wakker (2008) “Causes of Ambiguity Aversion: Known versus Unknown Preferences,” Journal of Risk and Uncertainty 36, 225–243.

Link to paper
{% %}

Trautmann, Stefan T., Ferdinand M. Vieider, & Peter P. Wakker (2011) “Preference Reversals for Ambiguity Aversion,” Management Science 57, 1320–1333.

Link to paper
{% dynamic consistency %}

Trautmann, Stefan T. & Peter P. Wakker (2010) “Process Fairness and Dynamic Consistency,” Economics Letters 109, 187–189.

Link to paper
{% %}

Trautmann, Stefan T. & Peter P. Wakker (2017) “Making the Anscombe-Aumann Approach to Ambiguity Suitable for Descriptive Applications,” Journal of Risk and Uncertainty, forthcoming.


{% Other things equal, I would prefer the unknown Ellsberg urn to the known urn, because with the known urn the certainty you have is the certainty that you will never learn anything relevant, whereas for the unknown urn you may hope for some relevant info to come. In repeated choice it is clear that the unknown urn is preferable because one can learn. In experiments, subjects irrationally forgo this possibility under repeated choice and, because of ambiguity aversion, still choose the known urn. This paper shows this experimentally. %}

Trautmann, Stefan T. & Richard Zeckhauser (2013) “Shunning Uncertainty: The Neglect of Learning Opportunities,” Games and Economic Behavior 44–55.


{% Considered health profiles for which there was no special reason to expect that joint independence would be violated. In the pairs of choices that tested independence, more than half were in agreement with independence. This is, of course, a very conservative test of independence. Discusses, at the end, other empirical studies, pointing out that sequencing effects can be due to (negative) discounting. %}

Treadwell, Jon R. (1998) “Test of Preferential Independence in the QALY Model,” Medical Decision Making 18, 418–428.


{% Discuss implications of PT for CEAs (cost-effectiveness analyses), in particular whether quality of life assessment of general public should be used. %}

Treadwell, Jon R., & Leslie A. Lenert (1999) “Health Values and Prospect Theory,” Medical Decision Making 19, 344–352.


{% %}

Treisman, Anne, Daniel Kahneman, & Jacquelyn Burkell (1983) “Perceptual Objects and the Cost of Filtering,” Perception and Psychophysics 33, 527–532.


{% (cognitive ability related to likelihood insensitivity (= inverse-S)?) %}

Trepel, Christopher, Craig R. Fox, & Russell A. Poldrack (2005) “Prospect Theory on the Brain? Toward a Cognitive Neuroscience of Decision under Risk,” Cognitive Brain Research 23, 34–50.


{% Dutch book %}

Trockel, Walter (1992) “An Alternative Proof for the Linear Utility Representation Theorem,” Economic Theory 2, 298–302.


{% %}

Trope, Yaacov & Nira Liberman (2010) “Construal-Level Theory of Psychological Distance,” Psychological Review 117, 440–463.


{% Seems to be review of empirical evidence supporting construal level theory. %}

Trope, Yaacov, Nira Liberman, & Cheryl Wakslak (2007) “Construal Levels and Psychological Distance: Effects on Representation, Prediction, Evaluation, and Behavior,” Journal of Consumer Psychology 17, 83–95.


{% Applications of rank dependence to finance. %}

Tsanakas, Andreas (2008) “Risk Measurement in the Presence of Background Risk,” Insurance: Mathematics and Economics 42, 520–528.


{% utility families parametric: studies particular combinations of lotteries over multiattribute outcomes, and a preference for combining bad with good. It leads to multiattribute utility functions that are mixtures of exponential functions (mixex), relating this to alternating signs of derivatives. %}

Tsetlin, Ilia & Robert L. Winkler (2009) “Multiattribute Utility Satisfying a Preference for Combining Good with Bad,” Management Science 55, 1942–1952.


{% %}

Tsetlin, Ilia & Robert L. Winkler (2012) “Multiattribute One-Switch Utility,” Management Science 58, 602–605.


{% About the history of decision theory, relating it to related fields such as fuzzy set theory, operations research (and its crisis in the 1970s), and other fields, with 324 references. %}

Tsoukiàs, Alexis (2008) “From Decision Theory to Decision Aiding Methodology,” European Journal of Operational Research 187, 138–161.


{% intertemporal separability criticized: confirms it, and is a good reference for it. Surveys 38 empirical and theoretical studies of the conditions of the QALY model, such as independence of quality of life from time duration and preceding health states, etc. %}

Tsuchiya, Aki & Paul Dolan (2005) “The QALY Model and Individual Preferences for Health States and Health Profiles over Time: A Systematic Review of the Literature,” Medical Decision Making 25, 460–467.


{% Considers probability transformations for the RDU model (couched in terms of risk measures). What the author calls a one-parameter family is
w(p) = Ψ(Ψ⁻¹(p) + ln(θ))
where Ψ can be any strictly increasing and continuous transformation, considered to be “one parameter,” and θ is another parameter. %}

Tsukahara, Hideatsu (2009) “One-Parameter Families of Distortion Risk Measures,” Mathematical Finance 19, 691–705.
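The family in the annotation above can be sketched numerically. This is only my illustration, hypothetically instantiating the transformation as the standard normal distribution function, so that w(p) = Φ(Φ⁻¹(p) + ln θ):

```python
from statistics import NormalDist
from math import log

# Hypothetical instantiation of a one-parameter distortion family
# w(p) = Psi(Psi^{-1}(p) + ln(theta)), here with Psi the standard
# normal cdf; theta = 1 gives the identity w(p) = p.
_nd = NormalDist()

def distortion(p, theta):
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return _nd.cdf(_nd.inv_cdf(p) + log(theta))
```

With θ = 1 the weighting is the identity; θ > 1 shifts weight upward and θ < 1 downward, while strict increasingness in p is preserved for every θ.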


{% probability communication: Seems to write that statisticians recommend never reporting data using pie charts (as area of probability wheel). Seems that people can’t judge angles well. %}

Tufte, Edward (2001) “The Visual Display of Quantitative Information.” Graphics Press.


{% Seems to be an early mentioner of utility. According to Rothbard (1990), he seems to have said, in the context of time preference for money: “The focus should not be on the amount of metal repaid but on the usefulness of the money to the lender and borrower” %}

Turgot, Robert Jacques (1977) “The Economics of R.J. Turgot.” Edited by Peter D. Groenewegen, Martinus Nijhoff, The Hague.


{% To justify a nontrivial statement, one needs another one. To justify that other one, … and so on. This is the regress argument for infinitism, taken by some to prove that one needs infinitely many statements. It is like the children’s game of asking, after each answer, again, “Why?”, to quickly generate despair at the other end. Oh well … %}

Turri, John (2009) “On the Regress Argument for Infinitism,” Synthese 166, 157–163.


{% value of information %}

Tuteja, R.K. & U.S. Bhaker (1994) “On Characterization of Some Nonadditive Measures of “Useful” Information,” Information Sciences 78, 119–128.


{% People are not good at generating random sequences. %}

Tune, George S. (1964) “Response Preferences: A Review of Some Relevant Literature,” Psychological Bulletin 4, 286–302.


{% The game of my youth!!! %}

Tversky, Amos (1964) “On the Optimal Number of Alternatives at a Choice Point,” Journal of Mathematical Psychology 1, 386–391.


{% %}

Tversky, Amos (1964) “Finite Additive Structures,” Michigan Mathematical Psychology Program, MMPP 64-6, University of Michigan.


{% SEU = SEU !
real incentives: the random incentive system
P. 177 l. 9–10 suggests that measuring utility when probability weighting is nonlinear may be difficult. The tradeoff method of Wakker & Deneffe (1996) shows it’s not so difficult! Tversky writes: “To bypass the serious difficulty involved in simultaneous measurement of utility and subjective probability for each participant, researchers have derived and tested some empirical consequences of the SEU model.”
risky utility u = transform of strength of preference v: utility for money is measured in a riskless context and found to be linear, as follows. For pairs (ci,ca) of cigarettes and candies, W(ci,ca) is the buying or selling price for (ci,ca). W(ci,ca) = f(ci) + g(ca) works well in the data, so it is concluded that W(ci,ca) can be interpreted as riskless utility for money and, further, that riskless utility of money is therefore linear. Then risky utility for money is also measured, unfortunately in a somewhat confused manner. It is not always clear if the model is SEU à la Savage or SEU à la Edwards (where utility of gambling is involved), and whether or not probability weighting at 1 is defined and is or is not 1. All these cases are discussed. It also seems that the !logarithm of! von Neumann Morgenstern utility is taken as risky utility. It is concluded from the data that risky utility is different from riskless utility.
I like the general conclusion:
“The usefulness of utility theory for the psychology of choice, however, depends not only on the accuracy of its predictions but also on its potential value as a general framework for the study of individual choice behavior.” %}

Tversky, Amos (1967) “Additivity, Utility, and Subjective Probability,” Journal of Mathematical Psychology 4, 175–201.


{% %}

Tversky, Amos (1967) “A General Theory of Polynomial Conjoint Measurement,” Journal of Mathematical Psychology 4, 1–20.


{% N=11. Real incentives: the random incentive system.
P. 35 points out that the overestimation of small probabilities can explain both gambling and insurance.
decreasing ARA/increasing RRA: uses power utility for gains and losses separately. It fits well. Utility is linear for gains and concave for losses.
inverse-S: probability transformation is inverse-S, though not very pronounced. It should be kept in mind though that, because this paper considers one-nonzero-outcome prospects, the powers of utility and probability weighting are in fact unidentifiable. %}

Tversky, Amos (1967) “Utility Theory and Additivity Analysis of Risky Choices,” Journal of Experimental Psychology 75, 27–36.


{% T 69.1; p. 42 points out that choice between two multiattribute objects can be done both by “horizontal” and by “vertical” (i.e., first making intradimensional) comparisons. P. 45 writes that transitivity is one of the most basic and most compelling principles of rationality and bases it on the money pump argument. Justifies intransitivities on the basis of computation costs. %}

Tversky, Amos (1969) “Intransitivity of Preferences,” Psychological Review 76, 31–48.


{% Says, according to Birnbaum, that people tend to cancel common aspects in decision situations. %}

Tversky, Amos (1972) “Elimination by Aspects: A Theory of Choice,” Psychological Review 79, 281–299.


{% T 74.1
utility = representational?: & paternalism/Humean-view-of-preference;
Presents some biases and heuristics. P. 158, last two paragraphs, discusses whether internal consistency is the only requirement for rationality. It first mentions that many believe so. Amos then reacts: “I do not believe that the coherence, or the internal consistency, of a given set of probability judgments is the only criterion for their adequacy.” Later: “In particular, he will attempt to make his probability judgments compatible with his knowledge about (i) the subject matter; (ii) the laws of probability; (iii) his own judgmental heuristics and biases. [PW of around 1990: I must say that I see no role for (iii), at most biases are something to !avoid! and correct for. PW of 2016: After digesting behavioral literature for a quarter century, including collaboration with Amos, I conjecture that here he already had in mind the behavioral program of using biases to correct for them.] A deeper theoretical analysis of subjective probability will hopefully lead to the development of practical procedures whereby judged probabilities are modified or corrected to achieve a higher degree of compatibility with all these types of knowledge.”
SG doesn’t do well: seems to already argue for that. %}

Tversky, Amos (1974) “Assessing Uncertainty,” Journal of the Royal Statistical Society, Ser. B, 36, 148–159.


{% T 75.,
Nicely explains that in the Allais paradox the central issue can be how to define outcomes; what Broome (1991) calls “individuation.”
P. 163: “The axioms of utility theory (e.g., transitivity, substitutability) are accepted by most students of the field as adequate principles of rational behavior under uncertainty.” %}

Tversky, Amos (1975) “A Critique of Expected Utility Theory: Descriptive and Normative Considerations,” Erkenntnis 9, 163–173.


{% T 77.1;
measure of similarity;
Tradeoff method: the relation  defined in the appendix, p. 351, is similar to my derived tradeoff relation ~* (denoted ~t in my 2010 book) and the invariance axiom 5 there is similar to tradeoff consistency. %}

Tversky, Amos (1977) “Features of Similarity,” Psychological Review 84, 327–352.


{% %}

Tversky, Amos (1977) “On the Elicitation of Preferences: Descriptive and Prescriptive Considerations.” In David E. Bell, Ralph L. Keeney, & Howard Raiffa (eds.) Conflicting Objectives in Decisions, Wiley, New York.


{% %}

Tversky, Amos (1996) “Contrasting Rational and Psychological Principles in Choice.” In Richard J. Zeckhauser, Ralph L. Keeney, & James K. Sebenius (eds.) Wise Choices: Decisions, Games, and Negotiations, Harvard Business School Press, Boston.


{% T 1996.1
P. 186: “if gambles are represented as random variables, then any two realizations of the same random variables must be mapped into the same object.”
P. 188 bottom has a version of the pseudocertainty effect that avoids any dynamic aspect. Very nice! The page restates that this sheds new light on the normative status of the Allais paradox. P. 189, end of §4, points out that this is additional defense for the irrationality of the Allais paradox: “It is noteworthy that generalized utility models can account for the violation of substitution in the comparison of problems 5 and 6, but not for the violations of description invariance in problems 6 and 7.”
In many places Amos does not discuss his own views of what is normative, but how most people perceive normativeness. That is, he takes it as an empirical issue, as he did in Tversky & Slovic (1974). %}

Tversky, Amos (1996) “Rational Theory and Constructive Choice.” In Kenneth J. Arrow, Enrico Colombatto, Mark Perlman, & Christian Schmidt (eds.) The Rational Foundations of Economic Behavior: Proceedings of the IEA Conference Held in Turin, Italy, 185–197, St. Martin’s Press, New York.


{% Criticizes Lopes’ (1981) error that expected utility applies only to long-run decisions and not to single decisions. %}

Tversky, Amos & Maya Bar-Hillel (1983) “Risk: The Long and the Short,” Journal of Experimental Psychology: Learning, Memory, and Cognition 9, 713–717.


{% PT: data on probability weighting; natural sources of ambiguity:
inverse-S: found for both risk and uncertainty
real incentives: random incentive system only for second out of three studies.
Basketball fans would rather bet on basketball events, even while ambiguous, than on chance events with known probabilities.
decreasing ARA/increasing RRA: use power utility;
inverse-S; ambiguity seeking for unlikely: is found here (stated in a sentence on pp. 281–282); they have gain outcomes only.
P. 276, 2nd column, l. -3/-2, briefly discusses measuring the power in power utility, and uses a 1/3 probability gamble for a $100 gain because w(1/3) is approximately 1/3 on average.
P. 280: source-preference directly tested. They do it via certainty equivalents. %}

Tversky, Amos & Craig R. Fox (1995) “Weighing Risk and Uncertainty,” Psychological Review 102, 269–283.


{% measure of similarity %}

Tversky, Amos & Itamar Gati (1982) “Similarity, Separability and the Triangle Inequality,” Psychological Review 89, 123–154.


{% %}

Tversky, Amos & Thomas Gilovich (1989) “The Cold Facts about the “Hot Hand” in Basketball,” Chance 2, 16–21.


{% %}

Tversky, Amos & Thomas Gilovich (1989) “The “Hot Hand”: Statistical Reality or Cognitive Illusion?,” Chance 2, 31–34.


{% Last section is nice, on choice versus well-being; p. 113: judgment ≠ choice;
paternalism/Humean-view-of-preference: p. 116: the choice-judgment discrepancy raises an intriguing question: which is the correct or more appropriate measure of well-being? .... we lack a gold standard for the measurement of happiness.
References that people dislike it if all salaries increase but in unequal ways; and whether rich people are happier than poor people.
Final sentence: a few glorious moments could sustain a lifetime of happy memories for those who can cherish the past without discounting the present. %}

Tversky, Amos & Dale Griffin (1991) “Endowment and Contrast in Judgments of Well-Being.” In Fritz Strack, Michael Argyle, & Norbert Schwarz (eds.) Subjective Well-Being, Ch. 6, 101–118, Pergamon Press, Elmsford, NY.


{% Emphasize that scientists should pay more attention to power of tests. %}

Tversky, Amos & Daniel Kahneman (1971) “Belief in the Law of Small Numbers,” Psychological Bulletin 76, 105–110.


Reprinted as Ch. 2 in Daniel Kahneman, Paul Slovic, & Amos Tversky (1982, eds.) Judgment under Uncertainty: Heuristics and Biases, Cambridge University Press, Cambridge.
{% %}

Tversky, Amos & Daniel Kahneman (1973) “Availability: A Heuristic for Judging Frequency and Probability,” Cognitive Psychology 4, 207–232.


Abbreviated as Ch. 11 in Daniel Kahneman, Paul Slovic, & Amos Tversky (1982, eds.) Judgment under Uncertainty: Heuristics and Biases, Cambridge University Press, Cambridge.
{% Anchoring and adjustment heuristic. Discussion, p. 1130: “For judged probabilities to be considered adequate, internal consistency is not enough.” (paternalism/Humean-view-of-preference) %}

Tversky, Amos & Daniel Kahneman (1974) “Judgment under Uncertainty: Heuristics and Biases,” Science 185, 1124–1131.


Reprinted as Ch. 1 in Daniel Kahneman, Paul Slovic, & Amos Tversky (1982, eds.) Judgment under Uncertainty: Heuristics and Biases, Cambridge University Press, Cambridge.
{% %}

Tversky, Amos, & Daniel Kahneman (1977) “Causal Thinking in Judgment under Uncertainty.” In Robert E. Butts & K. Jaako J. Hintikka (eds.) Basic Problems in Methodology and Linguistics, 167–190, Reidel, Dordrecht.


{% %}

Tversky, Amos, & Daniel Kahneman (1980) “Causal Schemata in Judgments under Uncertainty.” In Martin Fishbein (ed.) Progress in Social Psychology, 49–72, Erlbaum, Hillsdale, NJ.


Reprinted as Ch. 8 in Daniel Kahneman, Paul Slovic, & Amos Tversky (1982, eds.) Judgment under Uncertainty: Heuristics and Biases, Cambridge University Press, Cambridge.
{% I only came to read this paper for the first time in January 2001 (having thought before that it would just be a didactical restatement of their earlier work). What a marvelous paper! It is extremely well written, with every line reflecting deep thought. It is one of the most impressive pieces I ever read. I regret that I was not aware of it when Tversky was alive and I would meet him and talk with him.
real incentives/hypothetical choice: all monetary experiments are done both with and without real incentives, these never giving different results.
paternalism/Humean-view-of-preference: the paper presents the various framing effects as deviations from rationality to be avoided if possible. Abstract: “is a significant concern for the theory of rationality.” P. 453 3rd column last para: “expected utility ... is based on a set of axioms, ..., which provide criteria for the rationality of choices.” (This suggests, but does not really 100% say, that EU is rational.) P. 456, 1st para: “The certainty effect reveals attitudes toward risk that are inconsistent with the axioms of rational choice”
p. 456, first para of 2nd column: after having identified an inconsistency of choice they say that one choice must be wrong but that it is hard to determine which. P. 457, 3rd column, 2nd para: “Such a discovery will normally lead the decision-maker to reconsider the original preferences, even when there is no simple way to resolve the inconsistency.” P. 458, 1st column, end of 3rd para, however writes, on consistency: “It implicitly assumes that the decision-maker who carefully answers the question “What do I really want?” will eventually achieve coherent preferences. However, the susceptibility of preferences to variations of framing raises doubt about the feasibility and adequacy of the coherence criterion.”
P. 453 introduces the famous Asian disease problem. I never liked it much. The message “200 people will be saved” does not make clear what will happen to the other 400 people, whether they will die or not.
P. 453 3rd column penultimate para: “a framing effect with contradictory attitudes towards risks involving gains and losses.” This is a common theme throughout the paper. The gain- and loss framing give different results. So, which is wrong, the gain or the loss framing? Answer: neither. The real problem is that preferences deviate from EV too much. (Under EV, a gain- or loss frame would give the same result.) Note that the authors call the attitudes for gains and losses not “different,” but “contradictory.” This word conveys the message, reflecting the deep writing of the authors. P. 454 2nd column last para states that for linear utility and probability weighting, framing would not matter. P. 457 top of 3rd para states that it is always framing together with nonlinearity.
P. 453 last para, last sentence: “When faced with a choice, a rational decision-maker will prefer the prospect that offers the highest expected utility.” This sentence unambiguously states that for the authors EU is rational.
P. 454: “the major qualitative properties of decision weights can be extended to cases in which the probabilities of outcomes are subjectively assessed rather than explicitly given. In these situations, however, decision weights may also be affected by other characteristics of an event, such as ambiguity or vagueness (9).” Here endnote 9 refers to Ellsberg (1961) and Fellner (1961). This sentence describes the source method!
Risk averse for gains, risk seeking for losses: p. 453 3rd column describes the fourfold pattern.
P. 454 1st column 2nd para and endnote (5): note that the authors point out that for pure-gain or pure-loss prospects a different formula should be applied, so that they really do not take the separate-weighting formula.
P. 454 2nd column middle para: “The simultaneous measurement of values and decision weights involves serious experimental and statistical difficulties.” Well, the Tradeoff method gives utilities fairly easily!
reference-dependence test: p. 454, 3rd column (Problem 3): the “Framing of acts” example is particularly interesting. For one thing, it demonstrates isolation beyond any doubt. I consider it to be the most impressive paradox of all of decision theory. Note that they replicated the phenomenon with real incentives (p. 458 Footnote 11): real incentives/hypothetical choice & losses from prior endowment mechanism.
real incentives/hypothetical choice: between-random incentive system (paying only some subjects): paid one of every 10 subjects in the incentivized version of Problems 3 and 4, finding results similar to those with hypothetical choices, given on p. 458 footnote 11.
Problems 5–6 test forgone-event independence (consequentialism) and find it well satisfied (22% and 26% choices for the risky option, respectively). The other dynamic decision principles together are strongly violated (58% R choice in Problem 7). P. 455 2nd column first para gives in fact the condition that Hammond (1988) called consequentialism; i.e., same assignments of outcomes to states of the world should be judged equivalently, no matter what the particular dynamic structure is that generates the assignment.
real incentives/hypothetical choice: between-random incentive system (paying only some subjects): p. 458 footnote 15: paid one of every 10 subjects for Problems 5–7. They found similar results, and conclude that the elimination of real payment reduces risk aversion but does not change the pattern.
There is also a discussion of probabilistic insurance.
RCLA: p. 456 1st para of 1st column treats RCLA as a framing phenomenon.
P. 456 3rd column 2nd para ff. discusses lability of reference outcomes. This text, continuing on the next page, was probably written by Kahneman. The sentence “Rather, the transaction as a whole is evaluated as positive, negative, or neutral, depending on ..” (p. 456 penultimate para) suggests that reference points are not chosen attribute-wise but overall, referring to the indifference class of the prospect.
P. 457 Problem 10: ratio bias plays a role here.
P. 458 1st column 2nd para recognizes that the inconsistencies can be considered rational in view of bounded rationality. It then suggests that prospect theory and framing give better models than “ad hoc” appeals to the notion of cost of thinking.
utility = representational?: p. 458, 1st column, third para, describes the strict representational view of preference well: “In order to avoid the difficult problem of justifying values, the modern theory of rational choice has adopted the coherence of specific preferences as the sole criterion of rationality.” I enjoyed how first T&K present, in a factual manner, the, I think overly restrictive, coherence-interpretation of rationality. Then, without being negative, typical of the marvelous Kahneman style (“In order to avoid the difficult problem of justifying values”), they push it aside for better interpretations. In a few sentences four or five philosophical issues, which take others pages to formulate, are taken care of.
P. 458, 1st column, last para, describes the “predictive criterion of rationality”.
utility = representational: somewhat before, referring to March (1978): “the common conception of rationality also requires that preferences or utilities for particular outcomes should be predictive of the experiences of satisfaction or displeasure associated with their occurrence.”
P. 458, 2nd column, first para: “A predictive orientation encourages the decision-maker to focus on future experience and to ask “What will I feel then?” rather than “What do I want now?” (This is opposite to p. 1256 of Weinstein et al. 1996 JAMA, claiming that community prefs, not patient prefs., should be used.) The former question, when answered with care, can be the more useful guide in difficult decisions.” They mention the hedonic experience of outcomes.
Then they go on to argue that experiences really following from a frame can be part of a normative analysis. For example, this can be applied to regret. I only partly agree, and am more paternalistic. Perception of goodness is not the criterion, but real goodness of the outcomes is. Perception of goodness only serves as a signal for real goodness of the outcomes. So, framing dependence is normatively acceptable only if it affects the goodness of outcomes, not if it only affects perception of goodness.
ratio-difference principle: people are more willing to drive 20 minutes to save $5 on a cheap calculator than on an expensive one. %}

Tversky, Amos & Daniel Kahneman (1981) “The Framing of Decisions and the Psychology of Choice,” Science 211, 453–458.


{% %}

Tversky, Amos, & Daniel Kahneman (1982) “Judgments of and by Representativeness.” In Daniel Kahneman, Paul Slovic, & Amos Tversky (eds.) Judgment under Uncertainty: Heuristics and Biases, Ch. 6, Cambridge University Press, Cambridge.


{% %}

Tversky, Amos, & Daniel Kahneman (1982) “Evidential Impact of Base Rates.” In Daniel Kahneman, Paul Slovic, & Amos Tversky (eds.) Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge.


{% %}

Tversky, Amos & Daniel Kahneman (1983) “Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review 90, 293–315.


{% Central theme of paper: normative and descriptive models must be different, because normative requirements simply are not descriptive.
P. S.253 (= 168 in Bell et al.), under the subheading transitivity: “Thus transitivity is satisfied if it is possible to assign to each option a value that does not depend on the other available options.”
coalescing: p. S263 (p. 178 in Bell et al.), problem 7, is their famous example where by a clever splitting of outcomes (coalescing) stochastic dominance is violated. The general procedure for generating violations of this kind is in Birnbaum (1997).
Interesting is Footnote 3 on p. S263, especially if compared to the corresponding Endnote 3 in the Bell et al. chapter (p. 189 there). They discuss the extension to multiple outcomes. The two foot/end-notes are different! The first, in the Journal of Business, shows that at that time they were already aware of Quiggin’s work, before Quiggin’s work became widely understood. But they did not understand his model well. In the Bell book they at least avoid the incorrect description of Quiggin’s idea. They say that the decision weights (my term; they use the term weighting function) then depend on all probabilities used (à la Quiggin). Then there is a strange remark on subadditivity in the foot/end note, claiming π(p1) + ... + π(pn) ≤ 1 (where π depends on the entire probability vector), which is, however, not plausible for n > 2.
real incentives/hypothetical choice: p. S274 (P. 187 in Bell et al.) suggests that real incentives are not important.
P. S251 abstract (not in Bell et al. it seems, where they, apparently, dropped the abstract), on invariance and dominance: “Because these rules are normatively essential but descriptively invalid, no theory of choice can be both normatively adequate and descriptively accurate.”
P. S272 (p. 185 in Bell et al.), about prospect theory: “in being unabashedly descriptive and in making no normative claims”
The paper nowhere states that violations of expected utility can be normative. To the contrary, on p. S267 ff. they put, under the term pseudocertainty effect, the dynamic principles forward that imply the independence/sure-thing principle, preceding Hammond (1988; T&K had it already in their Science 1981 paper), and argue that these principles have a normative status similar to invariance, which is beyond dispute. P. S268 has a nice discussion of regret. P. S270 credits Savage (1954, pp. 101–104) and Raiffa (1968, pp. 80–86) for inspiration.
P. S272: “… as shown in the discussion of pseudocertainty. It appears that both cancellation [= s.th.pr. = independence] and dominance have normative appeal, although neither one is descriptively valid.”
They agree with experimental economists that nonEU behavior will be reduced by learning and proper incentives:
“Indeed, incentives sometimes improve the quality of decisions, experienced decision makers often do better than novices, and the forces of arbitrage and competition can nullify some effects of error and illusion. Whether these factors ensure rational choices in any particular situation is an empirical issue, to be settled by observation, not by supposition (p. S273).” %}

Tversky, Amos & Daniel Kahneman (1986) “Rational Choice and the Framing of Decisions,” Journal of Business 59, S251–S278.


Reprinted in David E. Bell, Howard Raiffa, & Amos Tversky (1988, eds.) Decision Making: Descriptive, Normative and Prescriptive Interactions, 167–192, Cambridge University Press, Cambridge.
Reprinted in Robin M. Hogarth & Melvin W. Reder (eds.) “Rational Choice: The Contrast between Economics and Psychology,” 67–94, University of Chicago Press.
{% real incentives/hypothetical choice: p. 90 seems to suggest that there is little improvement of rationality when real monetary rewards are introduced. %}

Tversky, Amos & Daniel Kahneman (1987) “Rational Choice and the Framing of Decisions.” In Robin M. Hogarth & Melvin W. Reder (eds.) “Rational Choice: The Contrast between Economics and Psychology,” 67–94, University of Chicago Press.


{% Treats loss aversion for multiattribute outcomes, with no risk. Every attribute has a reference point, and loss aversion can be different for different attributes. An especially nice feature is that the paper really considers reference dependence; i.e., how preferences change if reference points change.
Pp. 1046–1047: that prospect theory does not specify what the reference point is, so that in this respect the theory is left unspecified: “A treatment of referent-dependent choice raises two questions: what is the reference state, and how does it affect preferences? The present analysis focuses on the second question.”
standard-sequence invariance?; the proof on p. 1059 goes wrong, but the main theorem is still correct. %}

Tversky, Amos & Daniel Kahneman (1991) “Loss Aversion in Riskless Choice: A Reference Dependent Model,” Quarterly Journal of Economics 106, 1039–1061.


{% biseparable utility
event/utility driven ambiguity model: event-driven
The purported plots of Wi(p) versus p (Fig. 3) are actually of CE(x,p;0). The correct plot is shown in Tversky & Fox (1995).
PT: data on probability weighting; Tradeoff method used theoretically.
P. 299, l. -6, writes, unfortunately, that the violation of stochastic dominance of PT can be handled by normalizing the decision weights so that they add to unity. There is no easy way to make this work. People again and again come up with the idea to consider (Σj w(pj)v(xj)) / (Σj w(pj)), but this formula does not give sensible results and continues to violate stochastic dominance (Wakker 2010 Exercise 6.7.1). For two-outcome prospects it reduces to RDU with a symmetric weighting function.
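The failure of the normalized form can be checked with a small numerical sketch (my own illustration, not from the paper): with w(p) = √p and linear value, the prospect (100, .5; 1, .25; 1, .25) dominates (100, .5; 0, .5) yet receives a lower normalized value.

```python
from math import sqrt

def normalized_value(prospect, w=sqrt, v=lambda x: x):
    # The problematic "normalized" form: (sum of w(p)v(x)) / (sum of w(p)).
    num = sum(w(p) * v(x) for x, p in prospect)
    den = sum(w(p) for x, p in prospect)
    return num / den

dominated = [(100, 0.5), (0, 0.5)]
dominant = [(100, 0.5), (1, 0.25), (1, 0.25)]  # pointwise at least as good

print(normalized_value(dominated))  # dominated prospect: 50 (up to rounding)
print(normalized_value(dominant))   # dominant prospect: about 42, i.e., lower
```

Splitting the zero outcome into two quarters inflates the denominator more than the numerator, so the dominant prospect is valued below the dominated one.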
Many people erroneously think that diminishing sensitivity only refers to the value/utility of outcomes, but it is a general principle of numerical perception that applies to the weighting function as well. P. 303 2nd para: “The principle of diminishing sensitivity applies to the weighting functions as well.” (cognitive ability related to likelihood insensitivity (= inverse-S))
Although experimental economists today (2010) usually credit Holt & Laury (2002) for introducing the choice list mechanism for measuring indifferences, this mechanism has been used long before. This T&K paper also uses it. Here is how the authors describe it: p. 305, l.-4 till p. 306, l.8: “The display also included a descending series of seven sure outcomes (gains or losses) logarithmically spaced between the extreme outcomes of the prospect. The subject indicated a preference between each of the seven sure outcomes and the risky prospect. To obtain a more refined estimate of the certainty equivalent, a new set of seven sure outcomes was then shown, linearly spaced between a value 25% higher than the lowest amount accepted in the first set and a value 25% lower than the highest amount rejected. The certainty equivalent of a prospect was estimated by the midpoint between the lowest accepted value and the highest rejected value in the second set of choices. We wish to emphasize that although the analysis is based on certainty equivalents, the data consisted of a series of choices between a given prospect and several sure outcomes. Thus, the cash equivalent of a prospect was derived from observed choices rather than assessed by the subject. The computer monitored the internal consistency …”
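The two-stage choice list quoted above can be paraphrased in code. This is my own sketch under simplifying assumptions: the prospect’s extreme outcomes are positive, the subject answers via a given preference function prefers_sure, and both grids contain accepted as well as rejected amounts.

```python
from math import exp, log

def choice_list_ce(prefers_sure, highest, lowest, n=7):
    """Estimate a certainty equivalent in two rounds, following the quoted
    procedure: a descending log-spaced grid between the prospect's extreme
    outcomes, then a linear refinement grid."""
    def split(amounts):
        accepted = [s for s in amounts if prefers_sure(s)]       # sure amount chosen
        rejected = [s for s in amounts if not prefers_sure(s)]   # prospect chosen
        return min(accepted), max(rejected)  # lowest accepted, highest rejected

    # Round 1: n sure amounts, logarithmically spaced (assumes highest > lowest > 0).
    grid1 = [exp(log(highest) + i * (log(lowest) - log(highest)) / (n - 1))
             for i in range(n)]
    low_acc, high_rej = split(grid1)

    # Round 2: linear grid between 25% above the lowest accepted amount
    # and 25% below the highest rejected amount.
    top, bottom = 1.25 * low_acc, 0.75 * high_rej
    grid2 = [top + i * (bottom - top) / (n - 1) for i in range(n)]
    low_acc2, high_rej2 = split(grid2)

    # CE = midpoint of lowest accepted and highest rejected in round 2.
    return (low_acc2 + high_rej2) / 2
```

For instance, a stand-in subject who accepts any sure amount of at least 35 for a prospect with extreme outcomes 100 and 10 yields an estimate in the mid-thirties; the point is that the certainty equivalent is derived from choices, not stated directly.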
