real incentives/hypothetical choice: for time preferences: 0-, 1-, or 2-month delays, testing stationarity, with payments sent through postal services. Here one payment through the RIS (in addition to previous payments).
For fifty-fifty gains people are ambiguity averse, as usual.
Violations of stationarity: not significant.
Risk averse for gains, risk seeking for losses: pp. 242 & 244 report more risk seeking for mixed prospects (what they call losses) than for gains, going against the common hypothesis of loss aversion. Their implementation through prior endowment may have generated it, with too many subjects integrating.
For a gene called the 7-repeat allele (having to do with dopamine) they seem to find ambiguity seeking (p. 245) and no impatience.
reflection at individual level for risk: they have the data but do not seem to report it. %}
Carpenter, Jeffrey P., Justin R. Garcia, & J. Koji Lum (2011) “Dopamine Receptor Genes Predict Risk Preferences, Time Preferences, and Related Economic Choices,” Journal of Risk and Uncertainty 42, 233–261.
{% DOI: http://dx.doi.org/10.1177/0956797610384146
gender differences in risk attitudes: no difference %}
Carr, Priyanka B. & Claude M. Steele (2010) “Stereotype Threat Affects Financial Decision Making,” Psychological Science 21, 1411–1416.
{% Seems to have argued for a role of group selection in evolution. Was a sociologist pointing out that people living in small groups in primitive cultures avoided overpopulation by deliberately restraining fertility. Said that this was against selfish maximization of individual fertility and suggested that it must somehow be explained by group selection. %}
Carr-Saunders, Alexander M. (1922) “The Population Problem: A Study in Human Evolution.” Clarendon Press, Oxford.
{% information aversion; for the relation to Wakker (1988, JBDM 1) see my comments to Brocas & Carrillo (2000) %}
Carrillo, Juan-D. & Thomas Mariotti (2000) “Strategic Ignorance as a Self-Discipline Device,” Review of Economic Studies 67, 529–544.
{% Uncertainty increases concavity of consumption function. %}
Carroll, Christopher D. & Miles S. Kimball (1996) “On the Concavity of the Consumption Function,” Econometrica 64, 981–992.
{% In many situations, in particular under sufficient convexity, local incentive compatibility implies it globally. %}
Carroll, Gabriel (2012) “When Are Local Incentive Constraints Sufficient?,” Econometrica 80, 661–686.
{% measure of similarity %}
Carroll, J. Douglas (1976) “Spatial, Nonspatial and Hybrid Models for Scaling,” Psychometrika 41, 439–463.
{% “ ‘When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean—neither more nor less.’ ” %}
Carroll, Lewis (1871) “Alice through the Looking Glass.” MacMillan, London. (1994, Puffin Books, London.)
{% One of three papers in an issue on contingent valuation. Survey on contingent valuation and stated preferences, starting with the history of the Exxon Valdez. Concluding remarks (p. 40) argue in favor of contingent valuation because it is better than doing nothing. Carson is one of the main people working on contingent valuation, and favoring it most. %}
Carson, Richard T. (2012) “Contingent Valuation: A Practical Alternative when Prices Aren’t Available,” Journal of Economic Perspectives 26, 27–42.
{% %}
Carson, Richard T., Robert C. Mitchell, W. Michael Hanemann, Raimond J. Kopp, Stanley Presser, & Paul A. Ruud (1992) “A Contingent Valuation Study of Lost Passive Use Values Resulting from the Exxon Valdez Oil Spill.” Report to the Attorney General of the State of Alaska, Natural Resource Damage Assessment Inc., La Jolla, California, November 10.
{% %}
Carter, Charles F. (1993) “George Shackle and Uncertainty: A Revolution still Awaited,” Review of Political Economy 5, 127–137.
{% Cartesian dualism: res extensa versus res cogitans; there is the external world of things around us, and the internal world that we see when we close our eyes. %}
Cartesius
{% Presented at FUR-Oslo. End of §2 argues in favor of the lottery-equivalent method. On value of a statistical life through road safety: endnote 2 refers to surveys.
adaptive utility elicitation %}
Carthy, Trevor, Susan Chilton, Judith Covey, Lorraine Hopkins, Michael Jones-Lee, Graham Loomes, Nick Pidgeon, & Anne Spencer (1999) “On the Contingent Valuation of Safety and the Safety of Contingent Valuation: Part 2—The CV/SG ‘Chained’ Approach,” Journal of Risk and Uncertainty 17, 187–213.
{% proper scoring rules: shows that proper scoring rules for an RDU maximizer elicit his weighting function if utility is linear or is corrected for, thus generalizing Kothiyal, Spinu, & Wakker (2011, J. Multi-Cr. DA) from binary outcomes to multiple outcomes and general proper scoring rules. It also extends Abdellaoui’s (2000) elicitation method for decision weights, based on indifferences, to incentive compatible choices in proper scoring rule settings. %}
Carvalho, Arthur (2015) “Tailored Proper Scoring Rules Elicit Decision Weights,” Judgment and Decision Making 10, 86–96.
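As a back-of-the-envelope illustration of the binary result that Carvalho generalizes (a minimal sketch of my own; function names and parameter values are made up, not from the paper): under RDU with linear utility, an agent facing a quadratic scoring rule optimally reports his decision weight of the event, so the report identifies the weighting function directly.

```python
import numpy as np

# Sketch: an RDU agent with linear utility reports probability r for event E
# under the quadratic scoring rule, which pays 1-(1-r)^2 if E obtains and
# 1-r^2 otherwise. W_E and W_Ec are the (possibly nonadditive) decision
# weights of E and of its complement; they need not sum to 1.
def rdu_value(r, W_E, W_Ec):
    pay_E, pay_Ec = 1 - (1 - r)**2, 1 - r**2
    if pay_E >= pay_Ec:            # E carries the better outcome (r >= 1/2)
        return W_E * pay_E + (1 - W_E) * pay_Ec
    return W_Ec * pay_Ec + (1 - W_Ec) * pay_E

W_E, W_Ec = 0.65, 0.25             # hypothetical weights
grid = np.linspace(0, 1, 100001)
best = grid[np.argmax([rdu_value(r, W_E, W_Ec) for r in grid])]
print(best)                        # ~0.65: the optimal report reveals W_E
```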
{% Survey on proper scoring rules. Mostly on the areas where there were publications and how many there were. %}
Carvalho, Arthur (2016) “An Overview of Applications of Proper Scoring Rules,” Decision Analysis 13, 223–234.
{% proper scoring rules: %}
Carvalho, Arthur, & Kate Larson (2010) “Sharing a Reward Based on Peer Evaluations.” In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (pp. 1455–1456).
{% proper scoring rules: %}
Carvalho, Arthur, & Kate Larson (2011) “A Truth Serum for Sharing Rewards.” In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems (pp. 635–642).
{% proper scoring rules: %}
Carvalho, Arthur, & Kate Larson (2012) “Sharing Rewards among Strangers Based on Peer Evaluations,” Decision Analysis 9, 253–273.
{% foundations of statistics %}
Carver, Ronald P. (1978) “The Case against Statistical Significance Testing,” Harvard Educational Review 48, 378–399.
Reprinted in Omar F. Hamouda & J.C. Robin Rowley (1997, eds.) “Statistical Foundations for Econometrics.” Edward Elgar, Cheltenham.
{% Author discusses behavioral influences in economics, with many points still debated as much today. The writing style is phenomenal, as is so often the case with papers written before the 1930s, and is in itself enough reason to read this paper. The paper points out that behaviorists look at different phenomena, where there is less rationality. Nice final sentence, about how new groups of scholars behave throughout every discipline of science:
“But if they think that they have built up a complete system and can dispense with all that has gone before, they must be placed in the class with men in other fields, such as chemistry, physics, medicine, or zöology, who, because of some new observations, hasten to announce that all previous work is of no account.”
This sentence may also reflect the intergenerational battle in which young people would rather claim novelty than credit their predecessors, and in which older researchers complain that they have seen it all before. The author was 63 in 1918, after a long life with prominent positions. %}
Carver, Thomas N. (1918) “The Behavioristic Man,” Quarterly Journal of Economics 33, 195–201.
{% standard-sequence invariance: they don’t really use the invariance axiom, but do use standard sequences to get sequences of outcomes that are equally spaced in utility units.
Use the Gul-independence version of bisymmetry on all two-outcome acts, to get CEU (Choquet expected utility) for all two-outcome acts (proved in Lemma A.5, p. 54) (this could also have been done by means of a variation of standard-sequence invariance, or tradeoff consistency as I call it, the more so as they introduce this concept later). Then use standard sequences as in Krantz et al. (1971) to define outcome-mixtures of acts. Use that to adapt constant-act independence and uncertainty aversion of Gilboa & Schmeidler (1989) and of Chateauneuf (1991) to continuous instead of linear utility, in their constant-independence axiom 6. Thus, this paper is the first to axiomatize the multiple priors model with continuous utility. A valuable result! %}
Casadesus-Masanell, Ramon, Peter Klibanoff, & Emre Ozdenoren (2000) “Maxmin Expected Utility over Savage Acts with a Set of Priors,” Journal of Economic Theory 92, 35–65.
{% Like their JET paper, but obtains uncertainty aversion by mixing through an event B à la Gul-independence. They do need a generalized ethically-neutral event for it: generalized in the sense that SEU should hold for binary acts depending on the event but, contrary to Ramsey (1931) and Faruk Gul (1992, JET), the event need not have probability 0.5 and can have any nondegenerate probability. %}
Casadesus-Masanell, Ramon, Peter Klibanoff, & Emre Ozdenoren (2000) “Maxmin Expected Utility through Statewise Combinations,” Economics Letters 66, 49–54.
{% Subjects can choose to precommit or not. If the usual violation of stationarity is due to intertemporal preference, subjects will prefer to commit (under some assumptions), but if it is instead uncertainty about future outcomes (receiving new info in between) then they will not want to commit. They also get options to increase flexibility. This is tested. %}
Casari, Marco (2009) “Pre-Commitment and Flexibility in a Time Decision Experiment,” Journal of Risk and Uncertainty 38, 117–141.
{% DC = stationarity: this paper carefully distinguishes the three concepts and tests them separately, in particular, employing the longitudinal data required for testing time consistency (also known as dynamic consistency). It is very similar to Halevy (2015), but the two studies were done independently and do not cite each other. The three preference conditions are nicely displayed in Figure 1, p. 128. This paper cites several predecessors in the intro and Section 1. It uses nice terms for the three conditions, being absence of static choice reversal (Halevy: stationarity), absence of dynamic choice reversal (Halevy: time consistency), and absence of calendar choice reversal (Halevy: time invariance). The end of Section 3 properly explains that stationarity and time consistency can be equated only if we assume time invariance, a result stated formally by Halevy.
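To fix the notation (my own formalization, loosely following Halevy 2015 rather than this paper’s wording): write (x,t) for outcome x at calendar time t, and ≽τ for preference at decision time τ ≤ t. Then, roughly:
no static choice reversal (stationarity): (x,t) ≽τ (y,t+Δ) iff (x,t+σ) ≽τ (y,t+σ+Δ);
no dynamic choice reversal (time consistency): (x,t) ≽τ (y,s) iff (x,t) ≽τ′ (y,s) for decision times τ < τ′ ≤ min(t,s);
no calendar choice reversal (time invariance): (x,t) ≽τ (y,s) iff (x,t+σ) ≽τ+σ (y,s+σ).
Halevy’s formal result, just mentioned, then amounts to: any two of these conditions imply the third.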
Prospects to choose from are losses: (1) listen to 20 minutes of unpleasant noise now; (2) do it in 2 weeks; (3) do it in 4 weeks. Subjects are asked for their preferences now, and after two weeks are again asked for their preferences at that moment over the remaining prospects. The stimuli are really implemented (subjects are paid to make up for it). Subjects have to attend all three sessions anyhow, so no savings of transaction costs in that sense. Under discounted utility, the preferences are determined solely by whether there is impatience (then postpone the unpleasant thing) or negative impatience (then do it as soon as possible). So whether discounting is exponential or hyperbolic or otherwise plays no role.
A big problem of longitudinal choice is that intertemporal conditions such as time consistency make a big ceteris paribus assumption: in the time between the decisions, nothing relevant should have changed, with no new info received for instance. In reality this is hard to implement. For instance, not now but in two weeks the subject knows whether he has a headache then. There thus is, more or less endogenous, uncertainty about one’s own preference. The authors nicely put this point center stage, using the term stochastic utility for it (a term elsewhere used mostly for the uncertainty of the analyst, rather than the subject, about preferences). Subjects have an option to pay some for flexibility, which means that in two weeks they get the chance to revise their time-0 choice. If they do pay, then probably there is stochastic utility. Buying flexibility is through an auction, which may encourage subjects to pay (too) much.
Calendar choice reversals (so violations of time invariance) are usually due to factors other than time preference (which makes it understandable that many intertemporal choice studies assume time invariance explicitly; many do it implicitly). This paper finds such reversals and has to draw the somewhat negative conclusion that other things are going on. As for me, I usually like to get extra things, whether good or bad, over with as fast as possible, simply because then I can forget and need not plan for them anymore. This, rather than negative impatience, can explain why most subjects wanted the noise listening to be done right away, as people often want to take negative consumptions as soon as possible.
Another nice aspect of the paper is that the stimuli used, nonmonetary, avoid the problem of saving money or getting interest rates from the market, because the stimuli purely concern consumption that cannot be transferred in time. %}
Casari, Marco & Davide Dragone (2015) “Choice Reversal without Temptation: A Dynamic Experiment on Time Preferences,” Journal of Risk and Uncertainty 50, 119–140.
{% Seem to write that body length is often taken as an index of quality of life. %}
Case, Anne, Angela Fertig, & Christina Paxson (2005) “The Lasting Impact of Childhood Health and Circumstance,” Journal of Health Economics 24, 365–389.
{% one-dimensional utility %}
Caserta, Agata, Alfio Giarlotta, & Stephen Watson (2008) “Debreu-Like Properties of Utility Representations,” Journal of Mathematical Economics 44, 1161–1079.
{% %}
Casey, Jeff T. (1991) “Reversal of the Preference Reversal Phenomenon,” Organizational Behavior and Human Decision Processes 48, 224–251.
{% %}
Casey, Jeff T. (1995) “Predicting Buyer-Seller Pricing Disparities,” Management Science 41, 979–999.
{% All hypothetical; ambiguity seeking for losses: they find this.
ambiguity seeking for unlikely: they find this for gains.
Vagueness in probabilities is compared to vagueness in outcomes.
reflection at individual level for ambiguity: they have within-individual data but do not report on this. %}
Casey, Jeff T. & John T. Scholz (1991) “Boundary Effects of Vague Risk Information on Taxpayer Decisions,” Organizational Behavior and Human Decision Processes 50, 360–394.
{% Subjects receive a card that is worth $2 (they will later receive that amount for it). Their subjective value of the card is then measured using BDM. By any rationality standard, BDM should give the value $2. But this does not happen, and the measured value is usually higher. The authors argue that this is so fundamental that it should not be taken to reflect preference, but only that subjects do not understand the decision procedure. For the latter misunderstanding the authors use the strange term game form misconception. This term is strange because it suggests that the authors only think of game theory, and not of the many other preference situations. But so be it. This paper is part of a general direction of research by Plott, arguing that many biases found are too irrational to be taken as reflecting preference. The many biases such as framing are indeed of interest in decision making at low levels of rationality, as with marketing and consumers buying in supermarkets, which is what psychologists often study, but not if we are interested in higher-level preferences such as with financial traders, or if we have normative interests. In the same spirit as Plott, I usually study theories that satisfy transitivity, even if it is violated empirically.
Note that in the terminology of this paper, choice refers to descriptively revealed choice, and preference refers to some sort of true underlying rational value system.
P. 1236 has a nice expression: “testing a scale by measuring a known weight.”
P. 1237: “Many decision makers appear to confuse the second-price auction incentives of the BDM with a first-price auction.”
The text is often verbose.
One problem I have with the experiment is that the amount, $2, is so small that subjects may deviate from the obvious just for fun. Another is that Fig. 2, p. 1244, may confuse subjects. Its upper left part says that subjects will sell the card and have to name an offer price. This suggests to subjects that trading is to come. The bottom of the card explains the BDM payment system, but in no way makes clear that the suggestion of the upper left part will not happen at a later stage, and that this BDM payment is all there will be. The random prize has been determined beforehand, which is nice (the authors point out on p. 1244 that this excludes that the prize offered might depend on what the subjects do, which in fact excludes, in my terminology, manipulation), and is tangible in the sense that it is below a covering card to be removed by the subjects, which is also nice. The randomization, however, concerns the random prize only, and not the whole decision situation, which is a deviation from the Prince mechanism.
After a first round, subjects did it a second time. Subjects who in the first round had given a wrong value and lost because of it (the random prize falling between the stated and true value) did better in the second round, but not perfectly.
The authors claim to exclude framing, but their claim is incorrect. Subjects who erred in the first round usually improve their behavior in the second round due to learning. The authors claim that framing would exclude such learning because the frame stays the same. This claim is incorrect: nobody studying framing will think that learning cannot exist. %}
Cason, Timothy N. & Charles R. Plott (2014) “Misconceptions and Game Form Recognition: Challenges to Theories of Revealed Preference and Framing,” Journal of Political Economy 122, 1235–1270.
{% risk aversion %}
Cass, David & Joseph E. Stiglitz (1972) “Risk Aversion and Wealth Effects on Portfolios with Many Assets,” Review of Economic Studies 39, 331–354.
{% Seems to do de Finetti-like maths, playing much on finite additivity, in finance, incorporating correlatedness with market. %}
Cassese, Gianluca (2008) “Asset Pricing with No Exogenous Probability Measure,” Mathematical Finance 18, 23–54.
{% Assume M and m are the maximal and minimal outcomes, with utilities 1 and 0. Then the graph of the utility function can be interpreted as the distribution function of a “benchmark” random variable. The expected utility of a random variable then becomes the probability of the rv exceeding the benchmark rv (assuming stochastic independence). This is nice. Known properties such as concavity of utility are reformulated for the new interpretation. %}
Castagnoli, Erio & Marco LiCalzi (1996) “Expected Utility without Utility,” Theory and Decision 41, 281–301.
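A quick numerical check of this benchmark interpretation (my own illustration, with made-up numbers): take U(x) = √x on [0,1], so the benchmark is any random variable with distribution function √, e.g. the square of a uniform variable; then EU of a lottery equals the probability that the lottery beats an independent benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Lottery X: 0.3 or 0.8, each with probability 1/2; utility U(x) = sqrt(x),
# normalized with U(0) = 0 and U(1) = 1 on [0,1].
eu = 0.5 * np.sqrt(0.3) + 0.5 * np.sqrt(0.8)

# Benchmark B with distribution function U: B = V^2 for V uniform on [0,1],
# since P(V^2 <= x) = sqrt(x). Draw X and B independently.
B = rng.uniform(size=n) ** 2
X = rng.choice([0.3, 0.8], size=n)
print(eu, (X >= B).mean())   # both ~0.721: EU = P(X exceeds the benchmark)
```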
{% This paper considers a remodeling of utility as the probability of attaining some goal. In f ≽ ξ, ξ is the goal to be attained and can be a random variable, f is an act, and the preference f ≽ ξ means that the goal has been attained. Goals may be something like obtaining enough money to pay all bills each month, enough food to survive, producing offspring, etc.
Assume U on [a,b], normalized to U(a) = 0 and U(b) = 1. U(12) = 0.7 now means that the probability of achieving one's goal is 0.7 if the outcome received is 12. Taking as benchmark a random variable ξ with distribution function U, the probability of 12 achieving the goal of exceeding the benchmark is indeed 0.7. This is the basic idea of the model. The benchmark ξ, and its probability distribution, are taken endogenously. This remodeling of utility is interesting. It was introduced in earlier papers by the authors, such as Castagnoli & LiCalzi (1996, Theory and Decision). For more references, see Abbas & Matheson (2009).
The contribution of the present paper is to establish the re-interpretation of utility in a number of commonly used preference representations, primarily additive decomposability of Debreu (1960) and several of its extensions. For infinite state spaces, a complication is that the reinterpretations of utilities as probabilities must be combined with traditional subjective probabilities established in the "overt" state space, and this requires the derivation of nonelementary measure-theory results on the extension of measures from non-algebras to algebras. The authors resolve this complication, with a useful summary of known results in Appendix A.
The material on measures on non-algebras in this paper is of special interest for some recent developments in decision theory, by Zhang (1999, MSS) and Abdellaoui & Wakker (2005, Theory and Decision). %}
Castagnoli, Erio & Marco LiCalzi (2005) “Benchmarking Real-Valued Acts,” Games and Economic Behavior 57, 236–253.
{% %}
Castagnoli, Erio, Fabio Maccheroni, & Massimo Marinacci (2000) “Restricting Independence to Convex Cones,” Journal of Mathematical Economics 45, 535–558.
{% One frictionless asset in a market with Choquet expectations as prices forces the whole market to be frictionless. Because of this one frictionless asset, there can be no rank-dependent kinks in the weighting function. %}
Castagnoli, Erio, Fabio Maccheroni, & Massimo Marinacci (2004) “Choquet Insurance Pricing: A Caveat,” Mathematical Finance 14, 481–485.
{% %}
Castaldo, Adriana, Fabio Maccheroni, & Massimo Marinacci (2004) “Random Sets and Their Distributions,” Sankhya (Series A) 66, 409–427.
{% Considers reference dependence regarding both peers and aspirations. For poor people aspirations matter most, and for rich people peers do. %}
Castilla, Carolina (2012) “Subjective Well-Being and Reference-Dependence: Insights from Mexico,” Journal of Economic Inequality 10, 219–238.
{% %}
Castillo, Ismaël, Johannes Schmidt-Hieber, & Aad van der Vaart (2015) “Bayesian Linear Regression with Sparse Priors,” Annals of Statistics 43, 1986–2018.
{% Present the Chew & Waller (1986) choices (tests of the common consequence effect as in Allais, but with the common outcome passing from lowest to intermediate to highest) to 1275 8th-grade children. Find that risk aversion correlates positively with fewer disciplinary referrals and with completing high school. They find that EU fits choices as well as some nonEU theories, where for PT they unfortunately do not consider inverse-S probability weighting but only convex and concave. As rationality index they take the minimum number of choices to change so as to satisfy EU (p. 71). They also assume an error theory (trembling hand) and much emphasize its role.
between-random incentive system: do this. %}
Castillo, Marco, Jeffrey L. Jordan, & Ragan Petrie (2018) “Children’s Rationality, Risk Attitudes and Field Behavior,” European Economic Review 102, 62–81.
{% Didactical explanation of risk aversion under EU through concavity of utility and the risk premium, with some real-world data on auto-insurance premium loading and nice exercises with a practical touch. %}
Cather, David A. (2010) “A Gentle Introduction to Risk Aversion and Utility Theory,” Risk Management and Insurance Review 13, 127–145.
{% %}
Cattin, Philippe & Dick R. Wittink (1989) “Commercial Use of Conjoint Analysis: An Update,” Journal of Marketing 53, 91–96.
{% Try to replicate Dijksterhuis et al. (2004) but find the opposite. %}
Calvillo, Dustin P. & Alan Penaloza (2009) “Are Complex Decisions Better Left to the Unconscious? Further Failed Replications of the Deliberation-without-Attention Effect,” Judgment and Decision Making 4, 509–517.
{% Uses the statistically powerful adaptive technique to compare the fit of several discount models, at the individual level. Unsurprisingly, quasi-hyperbolic and hyperbolic discounting perform poorly because they cannot accommodate increasing impatience, whereas this, even if a minority pattern, still happens frequently, and one cannot simply dismiss all those individuals. (P. 236: 25% of their subjects have increasing impatience.) Thus, the final sentence of the abstract writes: “specific properties of models, such as accommodating both increasing and decreasing impatience, that are mandatory to describe temporal discounting broadly.”
P. 249: “Another significant result of the present study was the prevalence of increasing impatience (concavity of the discounting curve) in our sample. This phenomenon challenges the prevailing practice in the literature of modeling temporal discounting as exclusively non-increasing, while providing strong confirmation of the results from a small number of recent studies, notably by Attema et al. (2010); Abdellaoui et al. (2010); and Abdellaoui et al. (2013). Among the models we analyzed, only the Constant Sensitivity model can accommodate increasing impatience.”
P. 250: “We believe the success of the Constant Sensitivity model demonstrates that increasing impatience and the extended present are likely to be relatively common behavioral variants, which reinforces the value of utilizing models that accommodate this behavior. The success of the neuroscience-inspired Double Exponential model … We anticipate that analysis of the unique characteristics of the Constant Sensitivity and Double Exponential models may yield important results in future studies. In addition, if increasing impatience, the extended present, and mixture are all important for describing discounting behavior, we propose that a mixture of Constant Sensitivity and Double Exponential would be a logical extension.”
P. 250: “Several promising models have been developed in recent years that merit inclusion in future model comparison studies (Bleichrodt et al. 2009; Benhabib et al. 2010; Scholten and Read 2006, 2010).” It is useful to note here that the model of Bleichrodt et al. (2009) agrees with and extends Ebert & Prelec’s constant sensitivity model in the same way as negative powers extend positive powers for CRRA utility. Bleichrodt, Kothiyal, Prelec, & Wakker (2013) renamed the family “unit invariance.” Bleichrodt et al. (2009) predicted, and this paper confirms, the following about their families: “They serve to flexibly fit various patterns of intertemporal choice better than hyperbolic and quasi-hyperbolic discounting can do, by allowing any degree of increasing or decreasing impatience. Thus, the CADI and CRDI [now called unit invariance] discount families are the first that can be used to fit data at the individual level.”
P. 250: “In addition, it should be noted that all of the models tested assume linear utility, an assumption which has some support at the aggregate level, but could potentially introduce distortions if there is significant heterogeneity at the individual level (Abdellaoui et al. 2013). However, over the range of reward magnitudes involved in our experiment, any effect of nonlinear utility would likely be small.” (linear utility for small stakes) %}
Cavagnaro, Daniel R., Gabriel J. Aranovich, Samuel M. McClure, Mark A. Pitt, & Jay I. Myung (2016) “On the Functional Form of Temporal Discounting: An Optimized Adaptive Test,” Journal of Risk and Uncertainty 52, 233–254.
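For concreteness, a small sketch (mine, with made-up parameters) of Ebert & Prelec’s constant sensitivity family D(t) = exp(−(rt)^s), the model that the paper finds can accommodate increasing impatience: s < 1 gives decreasing and s > 1 increasing impatience, visible in the per-period discount factors D(t+1)/D(t).

```python
import numpy as np

def constant_sensitivity(t, r=0.1, s=1.0):
    """Ebert & Prelec's constant sensitivity discount function exp(-(r*t)**s)."""
    return np.exp(-(r * t) ** s)

t = np.arange(0, 6)
for s in (0.5, 1.0, 1.5):   # decreasing, constant, and increasing impatience
    D = constant_sensitivity(t, s=s)
    # Rising ratios = discounting flattens out = decreasing impatience (s < 1);
    # falling ratios = increasing impatience (s > 1); s = 1 is exponential.
    print(s, np.round(D[1:] / D[:-1], 3))
```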
{% DOI: http://dx.doi.org/10.1287/mnsc.1120.1558
A theoretical method for optimally designing an adaptive experiment to discriminate between decision theories. Illustrated in simulated data to discriminate EU, weighted utility, OPT, and PT (they write CPT). %}
Cavagnaro, Daniel R., Richard Gonzalez, Jay I. Myung, & Mark A. Pitt (2013) “Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach,” Management Science 59, 358–375.
{% N = 19 subjects. Adaptive method for fitting probability weighting in the probability triangle, with outcomes $25, $350, and $1000. Choices were hypothetical. At each question, the computer calculates the optimal next question to ask. The paper finds that two-parameter families work way better than one-parameter ones, especially because there are very optimistic subjects with high elevation, which one-parameter families cannot capture (p. 281 para 2). The Prelec 2-parameter and linear-in-log-odds (Goldstein & Einhorn) families are about equally good, although Prelec 2-parameter is mostly better for the subjects with extremely high elevation. P. 281 2nd para: Prelec 1-parameter does not do very well, primarily because universal subproportionality does not hold. %}
Cavagnaro, Daniel R., Mark A. Pitt, Richard Gonzalez, & Jay I. Myung (2013) “Discriminating among Probability Weighting Functions Using Adaptive Design Optimization,” Journal of Risk and Uncertainty 47, 255–289.
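For reference, the two winning families in their standard forms (parameter names are mine): Prelec’s two-parameter function w(p) = exp(−β(−ln p)^α) and the linear-in-log-odds (Goldstein & Einhorn) function w(p) = δp^γ/(δp^γ + (1−p)^γ). In both, one parameter (α, γ) mainly controls curvature and the other (β, δ) elevation, which is what lets them capture the very elevated, optimistic subjects.

```python
import numpy as np

def prelec2(p, alpha, beta):
    """Prelec's two-parameter probability weighting function."""
    return np.exp(-beta * (-np.log(p)) ** alpha)

def lin_log_odds(p, gamma, delta):
    """Goldstein-Einhorn linear-in-log-odds weighting function."""
    return delta * p**gamma / (delta * p**gamma + (1 - p) ** gamma)

p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
print(prelec2(p, alpha=0.65, beta=0.6))        # inverse-S, elevated (beta < 1)
print(lin_log_odds(p, gamma=0.65, delta=1.5))  # similar shape via delta > 1
```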
{% %}
Cebul, Randall D. (1984) “A Look at the Chief Complaints Revisited: Current Obstacles and Opportunities for Decision Analysis,” Medical Decision Making 4, 271–283.
{% value of information: shows how the Blackwell theorem, of more informative being equivalent to yielding higher SEU, can be extended to maxmin EU. %}
Çelen, Bogaçhan (2012) “Informativeness of Experiments for MEU,” Journal of Mathematical Economics 48, 404–406.
{% The authors propose a new risk model that assigns to X = (p1:x1,…,pn:xn) with expected value EV the value EV + 2[λE(X−EV)⁺ + (1−λ)E(X−EV)⁻]. Here Y⁻ is defined as ≤ 0, as is often done in decision theory (especially if Y concerns nonquantitative losses for which −Y is not defined) but not in mathematical probability theory or measure theory, where Y⁻ is usually taken ≥ 0. λ = ½ gives back EV. A pessimist will have λ < ½. I note that the model could have been rewritten as EV + (2λ−1)E(|X−EV|), showing it is an analog of mean-variance. A behavioral foundation is in Blavatskyy (2010 Management Science), something the authors are not aware of.
They further generalize by replacing EV by (g(p1)x1 + ... + g(pn)xn)/(g(p1) + ... + g(pn)). Wakker (2010 Exercise 6.7.1) showed that this violates stochastic dominance whenever g is nonlinear. So, I would have preferred that the authors had done rank-dependent probability weighting. The authors show how the model can accommodate all kinds of phenomena. They do not provide a behavioral foundation or empirical test.
Rieger (2017) comments, pointing out for instance that the model is close to Gul’s (1992) disappointment aversion model, treating EV the way Gul treats the CE. %}
Cenci, Marisa, Massimiliano Corradini, Alberto Feduzi, & Andrea Gheno (2015) “Half-Full or Half-Empty? A Model of Decision Making under Risk,” Journal of Mathematical Psychology 68–69, 1–6.
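A minimal numerical sketch (mine) of the base model above, also checking the mean-absolute-deviation rewriting EV + (2λ−1)E(|X−EV|):

```python
import numpy as np

def half_full_value(probs, outcomes, lam):
    """EV + 2[lam*E(X-EV)+ + (1-lam)*E(X-EV)-], with (X-EV)- taken <= 0."""
    probs, outcomes = np.asarray(probs), np.asarray(outcomes, dtype=float)
    ev = probs @ outcomes
    dev = outcomes - ev
    e_pos = probs @ np.maximum(dev, 0.0)   # E(X-EV)+ >= 0
    e_neg = probs @ np.minimum(dev, 0.0)   # E(X-EV)- <= 0
    return ev + 2 * (lam * e_pos + (1 - lam) * e_neg)

probs, outcomes = [0.5, 0.5], [0.0, 100.0]   # EV = 50, E|X-EV| = 50
for lam in (0.3, 0.5, 0.7):                  # pessimist, EV-maximizer, optimist
    print(half_full_value(probs, outcomes, lam),  # 30.0, 50.0, 70.0
          50 + (2 * lam - 1) * 50)               # the rewriting agrees
```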
{% Generalizes Gilboa, Maccheroni, Marinacci, & Schmeidler (2010). The objectively rational preference is still Bewley-type. The subjective one generalizes the maxmin-EU relation of Gilboa et al. (2010) to the general uncertainty averse (quasi-convex) preferences of Cerreia-Vioglio et al. (2011). So, the paper assumes the Anscombe-Aumann model. %}
Cerreia-Vioglio, Simone (2016) “Objective Rationality and Uncertainty Averse Preferences,” Theoretical Economics 11, 523–545.
{% biseparable utility violated;
The cautious expected utility model takes not one utility function, but a set W of such functions. Each lottery is evaluated by the, for that lottery, most risk averse utility function in W. That is, the certainty equivalent CE of lottery x is V(x) = inf over v ∈ W of CE_v(x), where CE_v(x) is the CE of x under EU with utility function v. It is dual to maxmin EU for uncertainty, with linearity in probability rather than in utility (maxmin EU has linear utility in the AA sense). Cautious EU can be risk averse if all functions in W are concave, and risk seeking if all those functions are convex. One can increase risk aversion by applying a concave transformation to all functions in W, and increase risk seeking by applying a convex transformation. Thus, the model itself does not very directly speak to risk aversion, but what it adds to EU is entirely in the direction of risk aversion. For comparison, RDU can add risk aversion to EU by adding a convex probability weighting function to EU.
One can readily formulate maxmin generalizations of cautious EU. The model shares with Chew’s (1983) weighted utility (and with the smooth ambiguity model, although that is for ambiguity) the spirit of getting the action/variance-in-data from the outcomes, and it will not work well to accommodate the fourfold pattern of risk attitude, with risk aversion depending on the probabilities considered. For instance, if we face an outcome interval where the utility functions differ much, then the nonEU part of the formula will add much risk aversion. If we then go to another outcome interval where the utility functions are all equal, then there the formula satisfies EU. The outcomes we deal with, and not the events/probabilities, determine risk attitude. This is different for RDU or prospect theory, where the relevant probabilities determine how we deviate from EU.
The cautious model will not be very tractable for calculations, just as with maxmin EU, because for the very evaluation of a single lottery a minimization problem, over a set of utility functions, must already be carried out.
Whereas for most lotteries the model adds a layer of risk aversion, it does not do so for riskless lotteries. These get a kind of privileged treatment. Thus a necessary axiom is negative certainty independence (NCI):
x ~ δ ⟹ λx + (1−λ)c ≽ λδ + (1−λ)c
for all lotteries x, c, sure outcomes δ, and 0 < λ < 1. A way to see this: if, in λx + (1−λ)c, I could for x take the most aversive utility function for x, and for c the most aversive utility function for c, then I would have indifference. In reality I cannot minimize for both x and c at the same time. Putting NCI differently, and assuming RCLA, replacing any sublottery in a multistage lottery by its certainty equivalent always worsens the case. Put yet differently, and very nicely, any conditional CE (recommended to be used by McCord & de Neufville 1986) exceeds the unconditional CE. Thus the model can be taken as a nice new insight into McCord & de Neufville: it characterizes when M&d ALWAYS find lower risk aversion. In combination with the RDU model the condition is very restrictive because it is imposed irrespective of the ranking of the outcomes and, indeed, it cannot be reconciled with RDU (unless EU). Loosely speaking, as soon as there is rank dependence, we can always arrange the conditional CE to come out relatively favorable but also relatively unfavorable, and the latter violates NCI.
NCI implies convexity (also called quasi-convexity or quasi-concavity) w.r.t. probabilistic mixing: if x ~ y ~ δ, then λx + (1−λ)y ≽ λx + (1−λ)δ ≽ λδ + (1−λ)δ ~ x. That is, in λx + (1−λ)y we twice substitute conditional CEs (p. 697 footnote 8), each time worsening the lottery. So, there is a general preference for probabilistic mixing.
Theorem 1 p. 698 shows that under usual monotonicity/continuity/weak ordering, the condition (NCI) is not only necessary, but also sufficient, for cautious EU. Here again is the duality with maxmin EU: convexity means that we have a minimum over dominating linear functions, but a certainty-independence-type condition is needed extra because we have ordinal inputs. The negative certainty independence axiom of the authors nicely combines these two conditions. W’s closed convex hull is unique up to redundant utilities (giving too high CEs to ever be minimal, as resulting for instance from any convex transform; they get some sort of Kannai-type minimally concave utilities); see §2.5 p. 701. It is an incredibly appealing mathematical result, connecting simple concepts in a way never noticed by anyone before.
I disagree with many empirical claims in the paper though.
(1) Pp. 694-695 mention Quiggin’s RDU and betweenness as the most popular alternatives to EU, overlooking the Nobel-awarded prospect theory, whose 1979 introduction is the second-most cited paper ever in an economic journal. P. 712 writes that the NCI model, like betweenness and RDU, is not designed to distinguish between gains and losses. Here it is strange that PT is not mentioned. Kahneman & Tversky’s papers are cited only for particular empirical facts, and in the definition of RDU, PT is just mentioned as comprising it. Cautious utility can capture sign-dependence well in one way: it can let the set of utility functions for losses be very different from that for gains. It cannot capture sign-dependence in the sense that its deviation from EU is always to take the minimal EU, both for gains and losses. A sign-dependent generalization could be to take the max for the loss-part, or do maxmin with a different W for losses than for gains.
(2) P. 695 claims “Third, our model is consistent with the main stylized facts of preferences under risk as surveyed in Camerer (1995) and Starmer (2000).” Like most theoretically-oriented economists, the authors are not well aware of the common empirical finding of the fourfold pattern. They show no awareness of risk seeking for small-probability gains. They do explicitly point out that they do not seek to accommodate sign dependence (p. 712), and they do point out that they can accommodate risk seeking for losses (by having utility functions in W convex for losses), but what their NCI adds for losses goes in a risk averse and, I think, wrong direction. Whereas RDU adds layers to EU that can be risk averse or risk seeking and, importantly, can do so depending on the probabilities considered, cautious EU only adds a layer of risk aversion to EU that is outcome-oriented and not probability-oriented.
(2a) Problems for losses: people have a special aversion to sure losses, contrary to NCI. The common finding is
(½:−100, ½:0) ≻ −50 (risk seeking)
but, mixing it fifty-fifty with a sure 0, I predict
(¼:−100, ¾:0) ≺ (½:−50, ½:0),
violating NCI.
(2b) Problems for low-likelihood gains: people will dislike certain outcomes if they compete with small-probability high gains (leading to inverse-S under RDU). Thus the common finding is
(10⁻⁶:10⁶, rest: 0) ≻ 1 (risk seeking)
but, mixing it fifty-fifty with a sure 10⁶, I predict
(½+10⁻⁶/2 : 10⁶, rest: 0) ≺ (½:10⁶, ½:1),
violating NCI.
NCI implies universal convexity of preference, but I expect it to be violated in many instances. Wakker (2010 Theorem 7.4.1) shows that under RDU (= PT for gains), convexity of preference (p ~ q ⟹ λp + (1−λ)q ≽ q; a condition called quasi-concavity by Wakker) is equivalent to concavity of probability weighting. Most empirical evidence, however, suggests the opposite for gains: convex probability weighting (under inverse-S, usually for moderate and high probabilities, although weakly in the interior). This gives counterevidence to convexity of preference (modulated by violations of RDU). The authors mention this difference between their model and RDU in footnote 37, p. 713. I expect that neither convexity nor concavity of preference holds very generally (for gains or losses), depending on configurations of lotteries as with inverse-S.
P. 713 suggests that betweenness is more restrictive (= parsimonious) than the NCI model, and that the latter is permissive (= less parsimonious), but then suggests that RDU is even more permissive (although staying vague by saying that “there are instances”). I see this differently. The set W of utility functions (even modulo closed convex hull, redundant utilities, and affine transformations) is of higher dimensionality than RDU’s (1 utility function + 1 weighting function). The NCI axiom, only imposing some inequalities and not being symmetric in the left- and right-hand sides of preference, is more permissive than comonotonic independence or betweenness. The latter are symmetric in the left- and right-hand sides of preference, amounting to invariant preferences and to preserving indifferences. This is the same as convexity being more permissive than linearity.
Related to the above point of cautious utility being permissive, elicitation will be problematic. The elicitation discussed at the end of Section 2 (p. 702 bottom) confuses empirical observation with identifiability. It only shows how observations exclude some utility functions, and writes that if we know the whole preference relation then the set W must be identifiable (up to its uniqueness of course). Such observations hold for every model, and give no clue on how much a finite number of observations narrows down the set W. As always, one can do parametric fitting. But then one should not only restrict the utility functions considered, but also the set of utility functions considered. If this is done to a high degree, then cautious EU can become sufficiently parsimonious to be empirically tractable for data fitting and predicting. But it will take creativity to find empirically satisfactory parametric subfamilies.
P. 707 l. 5 claims that RDU has a continuous (onto) weighting function, but this is not common because there is much interest in discontinuities at p = 0 and p = 1.
EVALUATION:
Cautious EU and its axiomatization are mathematically highly appealing and esthetic. In full generality the model is way more general (less parsimonious) than other models and, hence, less tractable. But more restricted (parsimonious) subfamilies can be developed and, in particular, the complexity of solving a minimization problem for every lottery to be evaluated can be made tractable this way.
Empirical problems are that, whereas RDU imposes an extra layer on EU that can give either extra risk aversion or extra risk seeking and, in particular, can have that depend on probabilities, which is empirically and psychologically desirable, this model only imposes an extra risk-aversion layer (cure: could be modified by maxmin generalizations) that is outcome-oriented (no cure conceivable for this). %}
Cerreia-Vioglio, Simone, David Dillenberger, & Pietro Ortoleva (2015) “Cautious Expected Utility and the Certainty Effect,” Econometrica 83, 693–728.
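A minimal sketch (mine) of evaluating a lottery under cautious EU; the set W here consists of three CRRA (power) utilities, an arbitrary made-up choice:

```python
import numpy as np

def ce_crra(probs, outcomes, rho):
    """EU certainty equivalent under CRRA utility x**(1-rho)/(1-rho), rho < 1."""
    probs, outcomes = np.asarray(probs), np.asarray(outcomes, dtype=float)
    eu = probs @ (outcomes ** (1 - rho) / (1 - rho))
    return ((1 - rho) * eu) ** (1 / (1 - rho))

def cautious_ce(probs, outcomes, rhos):
    """Cautious EU: the LOWEST certainty equivalent over the utility set W."""
    return min(ce_crra(probs, outcomes, rho) for rho in rhos)

W = [0.2, 0.5, 0.8]                         # hypothetical CRRA indices in W
probs, outcomes = [0.5, 0.5], [10.0, 90.0]
print([round(ce_crra(probs, outcomes, r), 2) for r in W])  # ~[46.17, 40.0, 33.82]
print(cautious_ce(probs, outcomes, W))      # ~33.82: most risk averse verdict
```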
{% quasi-concave so deliberate randomization %}
Cerreia-Vioglio, Simone, David Dillenberger, Pietro Ortoleva, & Gil Riella (2017) “Deliberately Stochastic.” Unpublished paper, Columbia University.
{% Rational means transitive and monotonic. Then there are in principle mathematical ways to relate preferences to sets of priors. They axiomatize the basic Anscombe-Aumann model with representation I(u∘f), where f is an act, u a vNM utility function obtained by EU for risk plus monotonicity/backward induction, and I a general functional. %}
Cerreia-Vioglio, Simone, Paolo Ghirardato, Fabio Maccheroni, Massimo Marinacci & Marciano Siniscalchi (2011) “Rational Preferences under Ambiguity,” Economic Theory 48, 341–375.
{% Assume that the subjective measure of the financial market is nonadditive, and then use the Choquet integral. Assume risk neutrality for given probabilities.
This paper illustrates how many representation theorems for Choquet integrals can be applied in finance. %}
Cerreia-Vioglio, Simone, Fabio Maccheroni, & Massimo Marinacci (2015) “Put-Call Parity and Market Frictions,” Journal of Economic Theory 157, 730–762.
{% This paper revives the local utility analysis by Machina (1982), connecting it with the valuable generalization of vNM EU that allows for incompleteness by Baucells & Shapley (2008) and Dubra, Maccheroni, & Ok (2004), two papers written independently and simultaneously, using sets of vNM utilities and unanimous agreement. It further shows that prospect theory with risk aversion and prudence must reduce to EU. I conjecture that this is because prudence should then be taken in a comonotonic cosigned way. The authors define prudence in terms of the 3rd derivative of utility in EU, but this is just in that definition and does not refer to the utility actually used, so it does not require the utility actually used to be differentiable. %}
Cerreia-Vioglio, Simone, Fabio Maccheroni, & Massimo Marinacci (2016) “Stochastic Dominance Analysis without the Independence Axiom,” Management Science, forthcoming.
{% Impose preference conditions that are variations of the multiple-priors characterization, for generalized coherent risk measures à la Artzner et al., using techniques of linear decision theory in finance interpretations. Show that sometimes convexity had better be weakened to quasi-convexity to relate it to diversification. %}
Cerreia-Vioglio, Simone, Fabio Maccheroni, Massimo Marinacci, & Luigi Montrucchio (2011) “Risk Measures: Rationality and Diversification,” Mathematical Finance 21, 743–774.
{% This paper assumes the AA framework, with linearity of the vNM utility function. Then it gives a general representation for quasi-convex functionals; i.e., it characterizes quasi-convexity of preference, interpreted as uncertainty aversion. For the special case of RDU for uncertainty (also known as CEU), because utility is linear, their quasi-convexity will be equivalent to convexity of the weighting function.
To explain the model, I first discuss concave functionals. (It would be more convenient if the weakening of concavity, called quasi-convexity, were called quasi-concavity here, but I stick with the terminological conventions of this field.)
Assume the usual Anscombe-Aumann model with n states of nature and prize set X. Take u(x), the vNM utility of prize x, as the unit of outcome. Take a functional V that now is nothing but a function from u(X)ⁿ to ℝ. It is well known that V is concave if and only if it is the minimum of the dominating linear functions. In the presence of monotonicity and normalization, we can take those dominating linear functions as EV functionals determined by the subjective probabilities assigned to states. Because EV in u units is usually called expected utility in the Anscombe-Aumann model, I will do so too henceforth. So, a functional then is concave if and only if it is a multiple priors model, which is nice to know.
Gilboa & Schmeidler (1989) characterized multiple priors imposing concavity of preference (uncertainty aversion), which amounts to quasi-convexity, rather than concavity, of the representing functional. They mainly added certainty independence to go from quasi-convex to concave.
The present paper drops concavity of the functional (and certainty independence), imposing only quasi-convexity. Then the functional is not the minimum of a set of dominating EU functionals, but of a quasi-concave G transform of those EU functionals. Here G depends not only on its u(x) input, but it can also entirely depend on the EU functional; i.e., on the subjective probabilities p chosen on the state space. Its quasi-convexity concerns both mixing in u(x) and in p. We need not consider a subset of dominating EU functionals, but can just use all EU functionals, by letting G take value infinite for all the EU functionals to be ignored. The functional is obviously general, depending on all subjective probabilities over S. But it is a convenient way to unify many models.
The paper describes for many models what they mean in terms of their G function, such as the variational model (G is additively decomposable), the Chateauneuf-Faro (2009) model (G is multiplicatively decomposable), the smooth model (for concave φ), and probabilistic sophistication.