real incentives/hypothetical choice: the random incentive system was used (good to point out to all the mainstream experimental-economics referees who do not know the individual-choice literature well and keep complaining about this incentive system again and again).
Real-incentive low payoffs and hypothetical high payoffs gave similar risk aversion, and real-incentive high payoffs gave more risk aversion (40% of participants even chose the safe option in every choice there). Comparisons were within-subjects. The high-real payment came after the low-real payment. To participate in the high-real payments, participants first had to give up their earnings from the low payment, which they had to declare in writing. I once conjectured that this procedure might have generated a framing effect, where those who gained $3.85 in the first round would take that as their status quo and, due to loss aversion, would not want to risk ending up with less in the high-payment choice, making them avoid the risky option there in the 20x group (not in the 50x and 90x groups, because there all payments exceed $3.85). It would imply that those who gained $3.85 in the first round would be more risk averse later than others in 20x. Holt (June 20, 2003, personal communication) let me know that this did not happen in the data. Subjects who gained $3.85 in the first round even seemed to be less risk averse than those who gained the low risky outcome there, $0.10. So, my conjecture does not hold.
The idea of this paper that I like best is that they first do a low-stakes choice for (quasi)real, and then let subjects pay back before doing the big-stakes choice for real. Thus they can observe two real choices without income effects or anything, and do within-subject comparisons of real choices. It has been a fundamental problem of revealed preference that only one choice can really be observed, and the authors have found a way around this very fundamental problem. This is impressive. There is a considerable price to pay for what they achieve. That subjects are told that the small stakes are real incentives even though it is already known at that stage that these incentives will not be paid out is a mild form of deception (deception when implementing real incentives). The having to give back can generate all kinds of emotions, possibly including some kinds of loss aversion, which is another drawback. Yet solving the fundamental revealed-preference problem is such a great thing that it is worth the price.
The definition of the Saha utility in Eq. 2 is not correct for r > 1, when it becomes decreasing. It had, therefore, better be divided by 1 − r, similarly to how this is commonly done for CRRA.
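For comparison, a minimal sketch of the CRRA case that this division refers to (my illustration, not the paper's Eq. 2): with
\[ u(x) = \frac{x^{1-r}}{1-r} \quad (r \neq 1), \qquad u'(x) = x^{-r} > 0, \]
utility stays increasing for all r, whereas x^{1-r} by itself is decreasing in x for r > 1; the same kind of division is the assumed fix for the Saha form.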
My main problem with the hypo-real test here concerns a contrast effect. If participants have to do hypo but they already know that the hypo choices are surrounded by real ones before and after, then it is very explicit that there was no necessity for hypo. Subjects will, therefore, not pay much attention to hypo. Because these hypo high payments came immediately after the low real payments (with the high-real not yet seen), subjects just quickly did the same there as in the preceding low-real task. This is put forward by Harrison, Johnson, McInnes, & Rutström (2003, March) “Risk Aversion and Incentive Effects: Comment,” p. 3: “Subjects who are minimizing decision costs are unlikely to think hard about their choices when offered a hypo task even if the payoffs are higher, and thus would be predicted to anchor to their previous response in the first low real task. The responses in the high hypo treatment indeed look much more like the responses in the low real task #1 than they do the subsequent high real task #3.”
Hypo can be useful I think, but then subjects have to be well motivated for it in other ways than through real incentives. Thus real versus hypo is better tested between-subjects.
The experiment took each subject about an hour (Holt, November 16 ’04, personal communication).
The method of eliciting indifferences through lists of ranked choices, where the switching point indicates indifference, while often ascribed to these authors in experimental economics, has been used before in many papers, for example Kahneman, Knetsch, & Thaler (1990), Tversky & Kahneman (1992, described verbally in Subsection 2.1 pp. 305-306, where they do refinement of the indifference interval in a second stage), Tversky & Fox (1995, described verbally on p. 273, with same procedure as in Tversky & Kahneman 1992), Fox & Tversky (1998, p. 882, again same procedure as T&K’92), Coller & Williams (1999), Gonzalez & Wu (1999), with more references in Mitchell & Carson (1989).
gender differences in risk attitudes: women more risk averse than men for low payment but not for high payment (lack of power there!?).
It is unfortunate that this paper ignored decades of preceding literature on risk attitude measurement, including the 2002 Nobel-awarded prospect theory, with Kahneman & Tversky (1979) the 2nd most cited paper in all economic journals, Cohen, Jaffray & Said (1987), and the SURVEY by Farquhar (1987), and that it is/was AER policy to allow for this. The authors do cite K&T79 on p. 1645 top, but only for the question of hypothetical choice, and not for its insights into risk attitudes. The idea of the authors and AER was that experimental economics is better than all that preceded it, and is allowed to ignore everything preceding. The citation of K&T79 on hypothetical choice probably serves to discard them as invalid because of doing hypo. The authors thus ignore the numerous other preceding papers on prospect theory and other nonEU theories that did use real incentives, including Cohen et al. (1987) and Tversky & Kahneman (1981 Science). The latter did all monetary experiments both with and without real incentives, never finding a difference. The idea of the authors and AER saves literature-study efforts and facilitates priority claiming, and thus appeals to many. That we could ignore all the empirical problems of EU and return to its simplicity as in the 1970s will also appeal to many. Thus I have seen some working papers by young authors who, misled by the reputation of AER, thought that this paper must be the state of the art and embarked on reinventing, for instance, the certainty equivalent method.
The ignoring of preceding literature reminds me of a quotation by the prominent economist Carver, who at the end of his career wrote:
“But if they think that they have built up a complete system and can dispense with all that has gone before, they must be placed in the class with men in other fields, such as chemistry, physics, medicine, or zöology, who, because of some new observations, hasten to announce that all previous work is of no account.” Carver wrote this in his paper in QJE in … 1918! %}
Holt, Charles A. & Susan K. Laury (2002) “Risk Aversion and Incentive Effects,” American Economic Review 92, 1644–1655.
{% Paper confirms and replicates the order effects in Holt & Laury (2002) pointed out by Harrison, Johnson, McInnes, & Rutström (2005). It does all choices of Holt & Laury (2002), but between-subjects, so that each individual gets only one kind of treatment. The increase of risk aversion due to increased stakes indeed becomes smaller but remains. They also do hypo like this, without order effect. Also here, the effects are reduced but do not disappear, although they get small, especially if one compares them with the random differences between their 2002 and their 2005 data, which are of similar size.
real incentives/hypothetical choice: big problem with hypo here, as in 2002, is that it is surrounded by real-incentive choices, not only for other subjects but also for other experiments that the subjects were involved in simultaneously. So, the order effect due to the preceding low-stake-real-incentive choice of Holt & Laury 2002 was removed, indeed, but there were other order effects due to other experiments, not reported, that the subjects were involved in. This contrast effect encourages the subjects to not take hypo seriously and, hence, what Holt & Laury do here, as in 2002, is not a good hypo experiment. P. 903, footnote 5, cites from instructions for hypo: “Unlike the other tasks that you have done so far today, the earnings for this part of the experiment are hypothetical and will not be added to your previous earnings.” That is, the contrast effect is even made explicit. %}
Holt, Charles A. & Susan K. Laury (2005) “Risk Aversion and Incentive Effects: New Data without Order Effects,” American Economic Review 95, 902–904.
{% Test Bayesian updating by measuring conditional preferences using BDM (Becker-DeGroot-Marschak) to measure matching probabilities. There is not much new because all these things have been done before (e.g. Ward Edwards), but the authors do not cite preceding work. %}
Holt, Charles A. & Angela M. Smith (2009) “An Update on Bayesian Updating,” Journal of Economic Behavior and Organization 70, 125–134.
{% probability elicitation: proposes to use matching probabilities to measure subjective probabilities. Then it proposes the two-stage choice list to obtain indifferences, in an incentive-compatible way. As with Holt & Laury (2002), it is easy and clean for a general audience of nonspecialists, but novelty and positioning are problems.
The paper never explicitly writes that it assumes expected utility, but all theoretical analyses assume it. The paper claims that matching probabilities provide subjective probabilities while correcting for risk attitude, giving as argument that only two outcomes are involved and that utility can then be normalized (p. 111). Footnote 16 mentions works that use matching probabilities to assess ambiguity attitudes, but does not discuss what the empirical findings of ambiguity aversion, discussed elsewhere, imply for what this paper does.
That matching probabilities are not new is clear, and the paper cites many preceding works, such as Savage (1971). They were commonly used in early decision analysis; see also Raiffa (1968, p. 110, “judgmental probability”).
The paper suggests novelty of the two-stage choice list procedure with incentive compatibility, but exactly the same was done before for utility measurement by Anderson et al. (2006; cited in Footnote 11, but without discussing the overlap). The idea is to elicit, in a first stage, preferences between the prospect E0 (receiving a gain > 0 if event E happens and 0 otherwise) and the prospect p0 (receiving that gain with probability p and 0 otherwise) for p = 0/10, 1/10, …, 10/10. If preferences switch between, say, p = 3/10 and p = 4/10, then in a second stage such preferences are measured for p = 30/100, 31/100, …, 40/100. A naive implementation of the RIS (random incentive system) would not work because subjects could manipulate by switching late in the first stage, getting nice options in the second stage. Incentive compatibility is achieved by first randomly selecting a choice from the first stage and implementing it; only when the selected choice involves the switching value is a choice randomly selected from the second stage instead. Again, this was done by Anderson et al. before.
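A minimal sketch of this selection rule (my own illustration of the procedure as just described; the function name, the data structures, and the 10/100 grids are assumptions, not the paper's implementation):

    import random

    def two_stage_ris(first_stage, second_stage, switch_p):
        # first_stage: dict mapping each coarse p (0/10, ..., 10/10) to the option chosen there
        # second_stage: dict mapping each refined p (j/100 around the switch) to the option chosen there
        # switch_p: the first-stage row at which the subject switched
        p = random.choice(list(first_stage))      # draw a coarse row and implement it ...
        if p != switch_p:
            return p, first_stage[p]
        q = random.choice(list(second_stage))     # ... unless it is the switching row: then a
        return q, second_stage[q]                 # refined row is drawn and implemented instead

This is only meant to show the selection logic described above, namely that the second stage can be reached only through the single switching row, not the paper's or Anderson et al.'s actual implementation.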
A small variation of this two-stage procedure was introduced by Abdellaoui, Baillon, Paraschiv, & Wakker (2011 AER). They implemented it somewhat differently, in a third stage. In that third stage they put up all 101 preferences between E0 and p0 for p = j/100, indicated all preferences implied by monotonicity there, asked the subject to confirm, and then randomly selected one of these 101 choices for implementation. I think that in this procedure incentive compatibility is clearer to subjects. Because of space limitations, Abdellaoui et al. only explained their implementation in the Web Appendix to their paper. But the procedure was used in several follow-up papers by Baillon and others, for instance by Baillon & Bleichrodt (2015 AEJ) in this same journal.
P. 135: the BDM method, however, is notorious for being confusing to subjects. %}
Holt, Charles A. & Angela M. Smith (2016) “Belief Elicitation with a Synchronized Lottery Choice Menu That Is Invariant to Risk Attitudes,” American Economic Journal: Microeconomics 8, 110–139.
{% HIV %}
Holtgrave, David R., Ronald O. Valdiserri, A. Russell Gerber, & Alan R. Hinman (1993) “Human Immunodeficiency Virus Counseling, Testing, Referral, and Partner Notification Services,” Archives of Internal Medicine 153, 1225–1230.
{% PT, applications, loss aversion & decreasing ARA/increasing RRA: uses power utility; gains and losses are treated differently; risk averse for gains, risk seeking for losses in his model. He reviews some empirical evidence, a couple of studies with four or five subjects each. %}
Holthausen, Duncan M. (1981) “A Risk-Return Model with Risk and Return Measured as Deviations from a Target Return,” American Economic Review 71, 182–188.
{% %}
Hong, Yongmiao & Yoon-Jin Lee (2013) “A Loss Function Approach to Model Specification Testing and its Relative Efficiency,” Annals of Statistics 41, 1166–1203.
{% Study relations between emotions and ways of violating independence and dynamic decision principles. %}
Hopfensitz, Astrid & Frans van Winden (2008) “Dynamic Choice, Independence and Emotions,” Theory and Decision 64, 249–300.
{% Poor individuals who are intrinsically risk averse can still exhibit risk-seeking behavior if that can reduce inequality and they are also sensitive to that; %}
Hopkins, Ed (2018) “Inequality and Risk-Taking Behaviour,” Games and Economic Behavior 107, 316–328.
{% probability elicitation; linearly combining well-calibrated experts can destroy calibration %}
Hora, Stephen C. (2004) “Probability Judgments for Continuous Quantities: Linear Combinations and Calibration,” Management Science 50, 597–604.
{% Welfare where utility of individuals depends on utilities of other individuals, leading to implicit equations to be solved. Gives many preceding discussions of this point and seems to put everything right. %}
Hori, Hajime (2001) “Non-Paternalistic Altruism and Utility Interdependence,” Japanese Economic Review 52, 137–155.
{% %}
Hornberger, John C., Donald A. Redelmeier, & Jordan Peterson (1992) “Variability among Methods to Assess Patients’ Well-Being and Consequent Effect on a Cost-Effectiveness Analysis,” Journal of Clinical Epidemiology 45, 505–512.
{% real incentives/hypothetical choice: for time preferences: seems to be %}
Horowitz, John K. (1991) “Discounting Money Payoffs: An Experimental Analysis.” In Stanley Kaish & Ben Gilad (eds.) Handbook of Behavioral Economics, 2B, 309–324, Greenwich: JAI Press.
{% DC = stationarity: §2 nicely and correctly distinguishes between dynamic consistency and stationarity. %}
Horowitz, John K. (1992) “A Test of Intertemporal Consistency,” Journal of Economic Behavior and Organization 17, 171–182.
{% Proposes a “more impatient than” relation: a preference for an early increase over a late one by 1 should imply the same for 2. A follow-up paper is Benoît & Ok (2007). %}
Horowitz, John K. (1992) “Comparative Impatience,” Economics Letters 38, 25–29.
{% If the value of a good to be priced can depend on which random prize one chooses in BDM (Becker-DeGroot-Marschak), then, obviously, incentive compatibility can be distorted in just any way. This is the main point of the paper. At the end, it erroneously claims that BDM is incentive compatible under RDU. The mistake in the proof is that the integration used there implicitly assumes backward induction (“isolation”), because it just substitutes the value of the good also if it is a lottery. But with backward induction, every nonEU model would have incentive compatibility under BDM. If subjects do not use backward induction but reduction of compound lotteries, then BDM need not be incentive compatible under RDU, as it need not be under any nonEU model. %}
Horowitz, John K. (2006) “The Becker-DeGroot-Marschak Mechanism Is not Necessarily Incentive Compatible, even for Non-Random Goods,” Economics Letters 93, 6–11.
{% Opening sentence: “The assumption that having more of a good will lead an individual to place a lower value on an additional unit of that good, which we call diminishing marginal value, is a pervasive component of economists’ belief about human behaviour.” Then some sentences later they relate it to the “Marginalist Revolution” of the 1870s. This misled me on first reading into thinking that the authors were after the much more interesting diminishing marginal utility, rather than diminishing marginal “value” (which is something like how much money you want to pay). They do distinguish, e.g. by discussing “Gossen’s equivalent marginal utilities” in the 2nd para on p. 1. But many readers can easily get confused. In reality they test the much less interesting question of whether the marginal rate of substitution decreases in a good, with one special case where one of these two goods is money (the more you have of something, the less you pay for an additional unit). They claim that diminishing marginal value has not been tested before, but I guess that there must have been many investigations by economists and others into the behavior of marginal rates of substitution, especially if it is about how much money you are willing to pay. %}
Horowitz, John K., John A. List, & Kenneth E. McConnell (2007) “A Test of Diminishing Value,” Economica 74, 1–14.
{% real incentives/hypothetical choice: review of WTA/WTP. WTA/WTP disparities are not affected much by real versus hypothetical choice. The ratio WTA/WTP is larger the less ordinary (market-like) the good is. %}
Horowitz, John K. & Kenneth E. McConnell (2003) “A Review of WTA/WTP Studies,” Journal of Environmental Economics and Management 44, 426–447.
{% Contains Pascal’s proof of existence of God. %}
Horwich, Paul (1982) “Probability and Evidence.” Cambridge University Press, New York.
{% %}
Hosmer, David W., Jr. & Stanley Lemeshow (1989) “Applied Logistic Regression.” Wiley, New York.
{% proper scoring rules;
The authors paid three decisions, which generates some income effects.
This paper considers paying in probability of gaining a prize in the context of proper scoring rules, so as to have linear utility, given that under EU we have linearity in probability even if utility is not linear in money. Thus, in a way, an EU maximizer is turned into an expected value maximizer. Paying in probability underlies the Anscombe-Aumann (1963) model. Selten, Sadrieh, & Abbink (1999) made the nice observation that this expected value maximization is in fact generated for every probabilistically sophisticated decision maker (they did not use this term) who satisfies reduction of compound lotteries and prefers a higher to a lower probability of a prize, so that it is way more general than EU.
The present paper extends the technique to expected value optimization for eliciting more general variables than the subjective probability of an event or the mean of some variable (basically, the subjective expected value of any given transformation) and scoring rules, following a preceding observation of this kind by Bhattacharya & Pfleiderer (1985), and is basically the same as the simultaneous independent Schlag & van der Weele (2013, Theoretical Economics Letters), but more general in allowing every transformation. This of course greatly extends the scope. As an example, if the reported number r is punished by |x(s) − r|, its absolute distance from the realized value of a general random variable x, then under (induced) expected value maximization r will reveal the median of x. The subjective median of any random variable can be elicited this way. (This had been known before for utility linear in money by B&P’85.) The paper first derives the results assuming subjective expected utility, and then makes the extension that Selten et al. also made, namely that EU need not hold and only probabilistic sophistication is needed.
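A one-line check of the median claim (standard, not from the paper): for a continuous distribution, minimizing $\mathbb{E}|x-r|$ over $r$ gives $\frac{d}{dr}\mathbb{E}|x-r| = P(x<r) - P(x>r)$, which is zero exactly when $r$ is a median of $x$; so an expected-value maximizer facing the penalty $|x(s)-r|$ reports the median.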
The paper implements the probabilities through comparisons with uniform rvs. If the realization k of a random draw from an independent uniform distribution is below the value v = R(r,E) of the scoring rule R, which depends on the answer r chosen by the subject and the true event E, then one receives some prize, and otherwise nothing. This means of course that one receives the prize with probability v. I always have some difficulty and need some time before I understand that the comparison with the uniform variable amounts to paying with probability v.
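A minimal sketch of this payment rule (my own illustration, not the authors' code; the quadratic form of the score and the function name are assumptions):

    import random

    def binarized_payment(report_prob, event_occurred):
        # quadratic score in [0, 1]: higher when the report is closer to what happened
        outcome = 1.0 if event_occurred else 0.0
        v = 1.0 - (report_prob - outcome) ** 2
        k = random.random()                      # independent uniform draw on [0, 1]
        return k <= v                            # prize iff the draw falls below v, so P(prize) = v

The score enters only through P(prize) = v, which is what makes utility curvature over money irrelevant, as discussed above.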
In an experiment, the method, which involves complex stimuli, gets closer to true objective probabilities (known and given to subjects, implying that they could simply take subjective probabilities equal to the readily available objective probabilities) than payment in money with the quadratic scoring rule, a result opposite to Selten, Sadrieh, & Abbink (1999). The authors discuss this point at the end of p. 987 and on pp. 997-998. It would be interesting here, and throughout, to rediscuss the point using ambiguity theories and probability transformation with backward induction in the two-stage setup of this paper. %}
Hossain, Tanjim & Ryo Okui (2013) “The Binarized Scoring Rule of Belief Elicitation,” Review of Economic Studies 80, 984–1001.
{% %}
Hosszù, Miklós (1964) “On Local Solutions of the Generalized Functional Equation of Associativity,” Annales Universitatis Scientia Budapest Eötvõs Loránd Sectio Math. 7, 129–132.
{% game theory for nonexpected utility; Nash bargaining solution, applying PT %}
Houba, Harold, Alexander F. Tieman, & Rene Brinksma (1998) “The Nash- and Kalai-Smorodinsky Bargaining Solution for Decision Weight Utility Functions,” Economics Letters 60, 41–48.
{% Characterize Sugeno integral. Axiomatizations can also be used to criticize a model. This paper is remarkable in doing so: it criticizes the axioms (their Axiom 4 is the main one carrying the intuition of the Sugeno integral) and thereby (and also because of inspection of examples) writes (p. 14): “In view of all this, it may be concluded that Sugeno preferences must have a very limited field of application, at least in the realm of decision theory.”
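For reference, the standard discrete form of the Sugeno integral (my notation, not necessarily the paper's; it assumes utilities and the capacity are measured on a common scale such as [0,1]): with utilities ranked $u_{(1)} \le \dots \le u_{(n)}$ over states and $A_{(i)}$ the event of receiving at least $u_{(i)}$,
\[ S(u) = \max_{i} \min\bigl(u_{(i)},\, \mu(A_{(i)})\bigr), \]
so only min and max operations are used, which is what drives the ordinal character discussed next.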
The paper does not state uniqueness results. These are, however, interesting, because utility and the capacity/fuzzy measure are jointly ordinal (if utility is bounded then, after normalization of utility, a common strictly increasing transformation can be applied to the capacity and utility). Hence, the Sugeno integral can be used as an easy heuristic for an ordinal approach to decision theory. (This point I learned from Dubois in June 2000.) If I remember right (I think I saw it proved in some paper for additive measures), the Sugeno integral never deviates by more than 25% from the Choquet integral. So it can serve as a heuristic. %}
Hougaard, Jens Leth & Hans Keiding (1996) “Representation of Preferences on Fuzzy Measures by a Fuzzy Integral,” Mathematical Social Sciences 31, 1–17.
{% %}
Hougaard, Jens Leth & Hans Keiding (2005) “On the Aggregation of Health Status Measures,” Journal of Health Economics 24, 1154–1173.
{% Dutch book %}
Hougaard, Jens Leth & Hans Keiding (2005) “Rawlsian Maximin, Dutch Books, and Non-Additive Expected Utility,” Mathematical Social Sciences 50, 239–251.
{% DOI: 10.1111/risa.12359
one-dimensional utility: propose using polynomial functions as utility functions. A pro is that they have a conjugacy-type property in sequential optimization. %}
Houlding, Brett, Frank P. A. Coolen, & Donnacha Bolger (2015) “A Conjugate Class of Utility Functions for Sequential Decision Problems,” Risk Analysis 35, 1611–1622.
{% %}
Houser, Daniel, Daniel Schunk, & Joachim Winter (2010) “Distinguishing Trust from Risk: An Anatomy of the Investment Game,” Journal of Economic Behavior and Organization 74, 72–81.
{% %}
Hout, Ben A. van, Maiwenn J. Al, Gilhad S. Gordon, & Frans F.H. Rutten (1994) “Costs, Effects and C/E-Ratios alongside a Clinical Trial,” Health Economics 3, 309–319.
{% revealed preference %}
Houthakker, Hendrik S. (1950) “Revealed Preference and the Utility Function,” Economica, N.S. 17, 159–174.
{% Considers implications of and relations between additive separability of direct demand and indirect demand. %}
Houthakker, Hendrik S. (1960) “Additive Preferences,” Econometrica 28, 244–257.
{% second-order probabilities; describes the basic issues; not really new %}
Howard, Ronald A. (1988) “Uncertainty about Probability: A Decision Analysis Perspective,” Risk Analysis 8, 91–98.
{% substitution-derivation of EU
“We know from the seminal work of Arrow that there is no group decision process except dictatorship that satisfies a few simple requirements that we would place on any sensible decision process.” %}
Howard, Ronald A. (1992) “In Praise of the Old Time Religion.” In Ward Edwards (ed.) Utility Theories: Measurement and Applications, 27–55, Kluwer Academic Publishers, Dordrecht.
{% small probabilities: uses the term micromort for a 10^−6 probability of dying. Using an EU analysis with a utility function of money and life, we can establish the local exchange rate between money and risk of dying. Although this is only reframing, it will help in clarifying. As the author puts it (p. 408 bottom): “Although this change is cosmetic only, we should remember the size of the cosmetic industry.”
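A minimal sketch of the EU indifference behind such an exchange rate (my illustration, not the paper's notation): with wealth $w$, utility $u_L$ if alive and $u_D$ if dead, the compensation $c$ for accepting an extra death probability $p = 10^{-6}$ solves
\[ (1-p)\,u_L(w+c) + p\,u_D(w+c) = u_L(w), \]
and for small $p$ this gives approximately $c \approx p\,\bigl(u_L(w)-u_D(w)\bigr)/u_L'(w)$, a local money price per micromort.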
The beginning writes about the ethical principle that only the person himself can decide on trading his own life or death against money. P. 407: “Our ethical assumption is that each person, and only that person, has the right to make or to delegate decisions about risks to his life or well-being.” This is a strange principle because, in medical decision making, people have to trade off money for others’ lives on a daily basis. P. 411, end of 4th para, on avoiding states of health worse than death: “The restriction to nonnegative weights is, therefore, not a problem for those who have suicide as an option.”
Paper is written in the narrow decision-analysis style of thinking about nothing other than how to handle uncertainty and then nothing other than the expected utility formula. %}
Howard, Ronald A. (1984) “On Fates Comparable to Death,” Management Science 30, 407–422.
{% small probabilities: uses the term micromort for a 10^−6 probability of dying.
A mostly verbal discussion in the narrow decision-analysis style of thinking about nothing other than how to handle uncertainty and then nothing other than the expected utility formula.
P. 362 l. 7: I don’t see why the exchange rate between life duration and money should become infinite at some stage.
The abstract ends with a nice sentence: “that precision in language permits the soundness of thought that produces clarity of action and peace of mind.” %}
Howard, Ronald A. (1989) “Microrisks for Medical Decision Analysis,” International Journal of Technology Assessment in Health Care 5, 357–370.
{% %}
Howard, Ronald A. & James E. Matheson (1984, eds.) “The Principles and Applications of Decision Analysis” (Two volumes), Strategic Decisions Group, Palo Alto, CA.
{% %}
Howard, Ronald A. & James E. Matheson (1984) “Influence Diagrams.” In Ronald A. Howard & James E. Matheson (eds.) The Principles and Applications of Decision Analysis, 719–762, Vol. II, Strategic Decisions Group, Palo Alto.
{% simple decision analysis cases using EU;
regret; Total harm of seeding hurricanes is reduced, but still it is not done because then other people will be hurt and the decision makers would be responsible %}
Howard, Ronald A., James E. Matheson, & D. Warner North (1972) “The Decision to Seed Hurricanes,” Science 176, 1191–1202.
{% foundations of statistics; discusses the evidence for a hypothesis that can be derived from an observation, in philosophers’ style, with verbal discussions leading to the use of probabilities and simple formulas; cites Hempel and Popper, who wrote on the same subject. %}
Howson, Colin (1983) “Statistical Explanation and Statistical Support,” Erkenntnis 20, 61–78.
{% foundations of probability %}
Howson, Colin (1987) “Popper, Prior Probabilities, and Inductive Inference,” British Journal for the Philosophy of Science 38, 207–224.
{% %}
Howson, Colin (2008) “De Finetti, Countable Additivity, Consistency and Coherence,” British Journal for the Philosophy of Science 59, 1–23.
{% foundations of probability; Dutch book
Discuss Dutch books, Kyburg’s objections, and modifications to avoid those objections. %}
Howson, Colin (2012) “Modelling Uncertain Inference,” Synthese 186, 475–492.
{% foundations of probability %}
Howson, Colin & Peter Urbach (1989) “Scientific Reasoning. The Bayesian Approach.” Open Court, Chicago, 1993.
{% Argues for finite additivity and against countable additivity. Against conditioning paradoxes the author argues that conditioning should be rejected. %}
Howson, Colin (2014) “Finite Additivity, Another Lottery Paradox and Conditionalisation,” Synthese 191, 989–1012.
{% information aversion: under ambiguity aversion, people can dislike receiving info because info may turn known probabilities into unknown probabilities. %}
Hoy, Michael, Richard Peter, & Andreas Richter (2014) “Take-Up for Genetic Tests and Ambiguity,” Journal of Risk and Uncertainty 48, 111–133.
{% In separate evaluation, people pay too much attention to the attribute that is easy to evaluate in isolation, rather than to the important attribute (the “evaluability hypothesis”). For example, a dictionary has a torn cover or no defects, and has 10,000 or 20,000 entries. The number of entries is hard to assess and is ignored in separate evaluation. In other words, some attributes are easy to evaluate in a comparative sense but hard in an absolute sense. They receive more attention in choice than in rating or monetary pricing. %}
Hsee, Christopher K. (1996) “The Evaluability Hypothesis: An Explanation for Preference Reversals between Joint and Separate Evaluations of Alternatives,” Organizational Behavior and Human Decision Processes 67, 247–257.
{% Violations of monotonicity generated by “evaluability hypothesis” (see his OBHDP 96 paper) in separate evaluations. For example, if people receive an overfilled ice cream serving with 7 oz of ice cream they like it more than an underfilled serving with 8 oz of ice cream. If people receive a dinnerware set with 24 intact pieces, they judge it more favorably than 31 intact pieces (including the same 24) plus a few broken ones. %}
Hsee, Christopher K. (1998) “Less is Better: When Low-Value Options are Valued more Highly than High-Value Options,” Journal of Behavioral Decision Making 11, 107–122.
{% time preference;