biseparable utility violated %}
Cerreia-Vioglio, Simone, Fabio Maccheroni, Massimo Marinacci, & Luigi Montrucchio (2011) “Uncertainty Averse Preferences,” Journal of Economic Theory 146, 1275–1330.
{% Assume Anscombe-Aumann model.
P. 271, footnote 2 writes that probabilistic sophistication was introduced by Machina & Schmeidler (1992). However, it existed long before. M&S were the first to axiomatize it. Cohen, Jaffray, & Said (1987, first step on p. 1) describe it, for instance.
They take uncertainty aversion in the Schmeidler sense, of quasi-concavity w.r.t. probabilistic mixing. Then they use techniques such as in their 2011 JET paper for the case of probabilistic sophistication. %}
Cerreia-Vioglio, Simone, Fabio Maccheroni, Massimo Marinacci, & Luigi Montrucchio (2012) “Probabilistic Sophistication, Second Order Stochastic Dominance and Uncertainty Aversion,” Journal of Mathematical Economics 48, 271–283.
{% The authors define a statistical model and a common decision theory model, which assumes Anscombe-Aumann. They define a mechanism relating the statistical model to the decision theory model, and then show how all kinds of ambiguity models can be related to statistical techniques.
Theorem 6 characterizes the smooth model, but has the two-stage setup exogenous. (See footnote 31.) %}
Cerreia-Vioglio, Simone, Fabio Maccheroni, Massimo Marinacci, & Luigi Montrucchio (2013) “Ambiguity and Robust Statistics,” Journal of Economic Theory 148, 974–1049.
{% The authors take Savage’s SEU model, with state space S and subjective probability P, as point of departure. They assume an additional set M, interpreted as possible models of which we do not know which one is true, and apparently taken to be a set of subjective probability measures m on S. The beginning of the paper carefully explains that S is outcome relevant, and M is only instrumental. They assume that P is a weighted average over M, so is the 2nd-order distribution over S. As a Bayesian I am happy to see that the authors are exemplary Bayesians here! P. 6755 middle of 2nd column writes: “The first issue to consider in our !!normative!! approach” [exclamation marks added], suggesting that the authors consider their approach to be normative.
A necessary and sufficient condition for P to be derivable from M is that if m(A) = m(B) for all m ∈ M then 1A0 ~ 1B0 (1A0: get $1 under A and $0 otherwise) (p. 6756 Proposition 1).
A question addressed in this paper is when the 2nd stage can be recovered from P. Without further info about M it obviously cannot. The main case is when all measures in M are mutually orthogonal (by which the authors mean disjoint supports) or, more generally, when the elements of M are linearly independent. The authors cite Teicher (1963) for this result on p. 6756 1st para following Proposition 1. Note that this is an extreme case, where the different models considered are completely different. The authors add results referring to supports and absolute continuity. They give a mathematical intertemporal example, stationary and ergodic, where the condition is satisfied.
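The linear-independence condition can be illustrated on a tiny two-state example; the specific measures and numbers below are hypothetical, chosen only to show that linearly independent models pin down the second-order weights, whereas coinciding models do not:

```python
# Sketch: when the prior P is a mixture of the models in M, the mixture
# weights are unique iff the models are linearly independent.
# The two-state measures below are made-up illustrative examples.

def mixture_weight(m1, m2, P, tol=1e-12):
    """Solve P = w*m1 + (1-w)*m2 on a two-point state space.
    Returns the unique w, or None if the models coincide
    (then the weights are not identified)."""
    denom = m1[0] - m2[0]
    if abs(denom) < tol:
        return None  # m1 and m2 agree on state 1: no identification
    return (P[0] - m2[0]) / denom

m1, m2 = (0.9, 0.1), (0.2, 0.8)   # linearly independent models
P = (0.55, 0.45)                   # P = 0.5*m1 + 0.5*m2
print(mixture_weight(m1, m2, P))   # recovers w = 0.5 (up to rounding)

print(mixture_weight((0.5, 0.5), (0.5, 0.5), (0.5, 0.5)))  # None
```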
It is encouraging for theoreticians that PNAS took this mathematical paper. The authors relate to many important ideas, such as Hansen & Sargent’s robust approach, Wald, Marschak, model uncertainty, with much knowledge of history. %}
Cerreia-Vioglio, Simone, Fabio Maccheroni, Massimo Marinacci, & Luigi Montrucchio (2013) “Classical Subjective Expected Utility,” Proceedings of the National Academy of Sciences 110, 6754–6759.
{% The variational model has a cost function c(p) for each prior p. This paper analyses uniqueness, concerning the set of all c-functions that represent a given preference. It shows that there are a lower bound c* and an upper bound d*, and that a function can serve as c iff it lies between c* and d*. The paper introducing the variational model, Maccheroni et al. (2006), had an unboundedness assumption making d* infinite/redundant. This paper interprets c* as degree of ambiguity aversion and d* as degree of ambiguity, but it is unclear to me how this can be defended. %}
Cerreia-Vioglio, Simone, Fabio Maccheroni, Massimo Marinacci & Aldo Rustichini (2015) “The Structure of Variational Preferences,” Journal of Mathematical Economics 57, 12–19.
{% Investigate several stock market indices for the period 1997–1999, finding that daily returns are nonnormal and autocorrelated, whereas weekly and longer-term returns are normally distributed and independent. %}
Cesari, Riccardo & David Cremonini (2003) “Benchmarking, Portfolio Insurance and Technical Analysis: A Monte Carlo Comparison of Dynamic Strategies of Asset Allocation,” Journal of Economic Dynamics and Control 27, 987‑1011.
{% Study 11,000 (!) Swedish twins. Ask them many simple questions to test for loss aversion, discounting, and so on. Find that loss aversion and ambiguity aversion (and several other anomalies) are partly explained genetically, with some 20% of variance explained this way. Impatience is not genetically related. %}
Cesarini, David, Magnus Johannesson, Patrik K. E. Magnusson, & Björn Wallace (2012) “The Behavioral Genetics of Behavioral Anomalies,” Management Science 58, 21–34.
{% Subjects use whole regions of indifference but only when ambiguity is involved. %}
Cettolin, Elena & Arno Riedl (2015) “Revealed Incomplete Preferences,” working paper.
{% %}
Cettolin, Elena & Arno Riedl (2017) “Justice under Uncertainty,” Management Science, forthcoming.
{% Find that difference in chess performance of men and women can be explained entirely by fewer women playing chess. %}
Chabris, Christopher F. & Mark E. Glickman (2006) “Sex Differences in Intellectual Performance: Analysis of a Large Cohort of Competitive Chess Players,” Psychological Science 17, 1040–1045.
{% Seem to find no relation between risk aversion and impatience. %}
Chabris, Christopher F., David Laibson, Carrie L. Morris, Jonathon P. Schuldt & Dmitry Taubinsky (2008) “Individual Laboratory-Measured Discount Rates Predict Field Behavior,” Journal of Risk and Uncertainty 37, 237–269.
{% value of information %}
Chade, Hector & Edward Schlee (2003) “Another Look at the Radner-Stiglitz Nonconcavity in the Value of Information,” Journal of Economic Theory 107, 421–452.
{% %}
Chai, Junyi, Chen Li, Peter P. Wakker, Tong V. Wang, & Jingni Yang (2016) “Reconciling Savage’s and Luce’s Modeling of Uncertainty: The Best of Both Worlds,” Journal of Mathematical Psychology 75, 10–18.
{% They reanalyze the data of Andreoni & Sprenger (2012 AER 3333-3356) and Augenblick et al. (QJE 2015). They find many violations of elementary WARP and monotonicity, almost exclusively with subjects who did not always make boundary choices. They point out that this is a serious confound. %}
Chakraborty, Anujit, Evan M. Calford, Guidon Fenig, & Yoram Halevy (2017) “External and Internal Consistency of Choices Made in Convex Time Budgets,” Experimental Economics 20, 687–706.
{% Risk sharing when different individuals have different ambiguity attitudes, analyzed using RDU for uncertainty. They may not want to share risks for extreme events, something also seen with no-insurance for extreme events. %}
Chakravarty, Surajeet & David Kelsey (2015) “Sharing Ambiguous Risks,” Journal of Mathematical Economics 56, 1–8.
{% doi:10.1111/jpet.12160 %}
Chakravarty, Surajeet & David Kelsey (2017) “Ambiguity and Accident Law,” Journal of Public Economic Theory 19, 97–120.
{% losses from prior endowment mechanism; RIS for each individual.
N = 85; very bright students; use 4 choice lists, for gains, losses, known and unknown probabilities (Ellsberg urns), always first with known probabilities, so order effects cannot be excluded (p. 206 bottom).
They consider the smooth model, with a risky x0.50 equivalent to an ambiguous 100E0, where the 2nd-order probability of E is 0.5. Writing φ for the 2nd-order utility function and normalizing U(0) = φ(0) = 0, the smooth model implies
φ(0.5U(x)) = 0.5φ(U(100)). (*)
Unfortunately, as a colleague pointed out to me, the paper uses a different, incorrect, equation:
0.5U(x) = 0.5φ(U(100)). (**)
That Eq. ** cannot be correct can for instance be seen directly because replacing φ by φ/2 should not affect preference, which goes wrong in Eq. **.
The authors are not clear and do not write Eq. ** explicitly, but still it can be seen that they use it because: (1) it is suggested on p. 204 top; (2) a colleague of mine could exactly reproduce their Table 2 using Eq. **, and not using Eq. *; (3) their repeated claims that risk attitude cancels when measuring ambiguity attitude (assuming that U and φ are power functions) only follow from the incorrect Eq. **, and not from the correct Eq. *.
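Point (3) can be checked numerically. A minimal sketch: the power specifications U(x) = x^a and φ(y) = y^b, the normalization U(0) = φ(0) = 0, and the parameter values are all assumptions for illustration:

```python
# Sketch: compare the correct smooth-model indifference equation (*)
#   phi(0.5*U(x)) = 0.5*phi(U(100))
# with the incorrect equation (**)
#   0.5*U(x) = 0.5*phi(U(100)),
# assuming illustrative power forms U(x) = x**a, phi(y) = y**b.

def x_correct(a, b):
    # (*): (0.5 * x**a)**b = 0.5 * (100**a)**b, solved for x
    rhs = 0.5 * (100 ** a) ** b
    return (rhs ** (1 / b) / 0.5) ** (1 / a)

def x_incorrect(a, b):
    # (**): 0.5 * x**a = 0.5 * (100**a)**b, i.e. x = 100**b:
    # the risk-attitude parameter a cancels.
    return (100 ** (a * b)) ** (1 / a)

# Under (**) x depends on b only (same answer for a = 0.5 and a = 2):
print(x_incorrect(0.5, 0.8), x_incorrect(2.0, 0.8))
# Under (*) risk attitude does not cancel (different answers):
print(x_correct(0.5, 0.8), x_correct(2.0, 0.8))
```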
suspicion under ambiguity: subjects can choose color to gamble on, controlling for suspicion.
Risk averse for gains, risk seeking for losses & convex utility for losses & ambiguity seeking for losses: they find risk aversion for gains, risk seeking for losses, ambiguity neutrality for gains, and weak ambiguity seeking for losses. Importantly, note that ambiguity attitude is what goes BEYOND risk attitude, so that weak ambiguity seeking for losses means somewhat MORE risk seeking under ambiguity than under risk (uncertainty amplifies risk). They find risk and ambiguity aversion positively correlated for gains, but unrelated for losses (p. 214) (correlation risk & ambiguity attitude).
reflection at individual level for ambiguity & reflection at individual level for risk: they find the opposite, both for risk- and for ambiguity attitudes. Subjects risk averse for gains are also mostly risk averse for losses, and risk seeking for gains then mostly so for losses. Subjects ambiguity averse for gains are also mostly ambiguity averse for losses, and ambiguity seeking for gains then mostly so for losses. Unfortunately, they only report preference patterns and no correlations (of utility parameters that can serve as risk/ambiguity aversion parameters). They also find no relation between reflection for risk and reflection for ambiguity at the individual level, but it is not very clear.
They use the KMM (Klibanoff, Marinacci, & Mukerji) model. In their theory, they also allow for “subjective probability” at the extreme outcome as a subjective variable, which amounts to biseparable utility. In their data analysis they, however, do not do this and just take subjective probabilities as 50-50 (pp. 215-216). %}
Chakravarty, Sujoy & Jaideep Roy (2009) “Recursive Expected Utility and the Separation of Attitudes towards Risk and Ambiguity: An Experimental Study,” Theory and Decision 66, 199–228.
{% Assumes a set of priors, and does all kinds of maxmin regret things etc. Focuses on predictive distributions. %}
Chamberlain, Gary (2000) “Econometrics and Decision Theory,” Journal of Econometrics 95, 255–283.
{% Is RDU for uncertainty when nondegeneracy is violated, i.e., there is no more than one nonnull state (no two disjoint nonnull events if state space is infinite) in every comonotonic subset. %}
Chambers, Christopher P. (2007) “Ordinal Aggregation and Quantiles,” Journal of Economic Theory 137, 416–431.
{% proper scoring rules %}
Chambers, Christopher P. (2008) “Proper Scoring Rules for General Decision Models,” Games and Economic Behavior 63, 32–40.
{% This paper assumes that the empirical content of a theory is (at most) what it can predict for a finite data set (p. 2304 penultimate para). UNCAF (universal negation of conjunctions of atomic formulas) axioms such as the weak axiom of revealed preference and transitivity are falsifiable and UNCAF, but continuity is not, and neither, under some assumptions about choice, is completeness. The paper introduces formal terminology and results for this assumption, referring to mathematical logic and model theory.
P. 2305 has a nice quotation from Carl Sagan: “Absence of evidence is not evidence of absence.”
P. 2308: two theories are observationally equivalent (Thom Bezembinder used the term data equivalent) if they have the same implications for finite data sets.
P. 2308: “The empirical content of a theory is the most permissive observationally equivalent weakening of the theory.” It is next formalized in Definition 3.
The authors on p. 2311 2nd para say that decision theorists often call continuity technical. I discussed the dangers of continuity, of not just being technical, on several occasions, such as Wakker (1988 JMP p. 432-433). Nice to see that the authors cite (on p. 2314) Adams et al. (1970) and Pfanzagl (1966) on these issues.
While not formalized, I used similar criteria of observability/empirical content in some works. I use it for instance to point out the dangerous empirical status of continuity. My book Wakker (2010 p. 38 penultimate para) writes: “A third argument against completeness concerns the richness of the models assumed, that constitute continuums, with choices between all prospect pairs assumed observable. We will never observe infinitely many data, let alone continuums (Davidson & Suppes 1956). Here completeness is an idealization that we make to facilitate our analyses. Although it has sometimes been suggested that completeness and continuity for a continuum-domain are innocuous assumptions (Arrow 1971 p. 48; Drèze 1987 p. 12), several authors have pointed out that these assumptions do add empirical (behavioral) implications to other assumptions. It is, unfortunately, usually unclear what exactly those added implications are (Ghirardato & Marinacci 2001b; Krantz et al. 1971 §9.1; Pfanzagl 1968 §6.6; Schmeidler 1971; Suppes 1974 §2; Wakker 1988).” The topic is central in Wakker (1988 JMP p. 422 and Example 7.3 and what follows) and Köbberling & Wakker (2003 p. 410 last three paras).
P. 2315 2nd para presents Samuelson’s counter to Friedman, where Samuelson very strictly separates falsifiable and nonfalsifiable. If the readers can bear another self-reference, Wakker (2010 p. 3 middle) counters in an opposite direction, by arguing that usually we do not know what will be falsifiable and what not. %}
Chambers, Christopher P., Federico Echenique, & Eran Shmaya (2014) “The Axiomatic Structure of Empirical Content,” American Economic Review 104, 2303–2319.
{% ordered vector space: maths seems to be related to de Finetti’s additive representation but more complex because it involves Scitovsky sets (weakly dominating allocations) and gets a probability distribution over prize vectors. An axiom that joining two societies (they consider populations of variable sizes) should respect separate orderings is close to additivity axiom of de Finetti or independence axiom of vNM. %}
Chambers, Christopher P. & Takashi Hayashi (2012) “Money-Metric Utilitarianism,” Social Choice and Welfare 39, 809–831.
{% %}
Chambers, Christopher P. & M. Bumin Yenmez (2017) “Choice and Matching,” American Economic Journal: Microeconomics, forthcoming.
{% Propose a generalization of mean-variance where the combination of mean and variance can be anything monotonic (so, only weak separability in the two) and, the main contribution, it goes for uncertainty/ambiguity rather than for risk. Assume Anscombe-Aumann, although as often these days they just take a mixture space (p. 616). They mention Anscombe-Aumann (AA) as one case, but explicitly also consider the case of monetary outcomes and linear utility, referring to “finance applications” for its interest. Assuming AA, the mean is mean AA-EU. Instead of variance they take a generalized dispersion measure, satisfying conditions specified below.
A probability measure on the state space S is derived subjectively à la Savage (or AA). The model is very general and encompasses Siniscalchi’s (2009) vector utility, variational, multiplier, and many other models. The authors share with Siniscalchi (2009) a complementarity axiom (here taken objectively rather than subjectively as by Siniscalchi: p. 619 footnote 8) which rules out likelihood insensitivity/inverse-S, so that I think the model will not be suited to fit empirical ambiguity attitudes. There may be interest in finance though, and the paper is targeted to that. They generalize Grant & Polak (2013) JET mainly by giving up the additive decomposability in mean and dispersion, but only have weak separability and some other (in)equalities there (complementarity independence, common-mean certainty independence, and common-mean uncertainty aversion).
P. 613: a measure of dispersion is the subjective EU an agent would be willing to give up to achieve constant EU over the state space.
P. 613: they argue that ambiguity aversion need not always be constant as in Grant & Polak (2013), which motivates the generalization.
P. 614: the general form is
V(f) = φ(E(U∘f), ρ(U∘f)), where φ is bivariately weakly separable, E(U∘f) denotes the subjective AA EU, ρ(U∘f) captures dispersion about E(U∘f), and φ(y,0) = y. P. 615: φ(μ,0) − φ(μ,ρ) is the absolute uncertainty premium in utils.
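As an illustration of such a mean-dispersion evaluation (a value weakly separating the mean AA-EU and a dispersion index), here is a minimal sketch. The additive combination, the mean-absolute-deviation dispersion index, the parameter theta, and all numbers are my own assumptions, closer to a Grant & Polak (2013)-style special case than to the authors' general model:

```python
# Sketch of a mean-dispersion evaluation V(f) = phi(mean, dispersion):
# here phi(m, d) = m - theta*d (an additive special case) and dispersion
# is the mean absolute deviation of state utilities from the mean EU.
# All functional choices and numbers are illustrative assumptions.

def dispersion_value(utils, probs, theta=0.5):
    mean_eu = sum(p * u for p, u in zip(probs, utils))
    mad = sum(p * abs(u - mean_eu) for p, u in zip(probs, utils))
    # phi(m, 0) = m holds: constant acts are evaluated at their EU.
    return mean_eu - theta * mad

probs = [0.5, 0.5]
print(dispersion_value([10, 10], probs))  # constant act: 10.0
print(dispersion_value([20, 0], probs))   # same mean EU, dispersed: 5.0
```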
P. 617b lists axioms, including subadditivity (3(b)) and symmetry (3(d)), each ruling out likelihood insensitivity. Symmetry is captured by Axiom A5, complementarity independence (p. 619). Axiom A.6 (p. 620) is common-mean uncertainty aversion and also rules out likelihood insensitivity. Axiom A.7 (p. 620) is common-mean certainty independence, imposed only for acts f and g that have a common “mean” (AA-EU). Axioms A.1-A.7 are necessary and sufficient for their model (Theorem 2, p. 621).
P. 623 penultimate para: their A.7 is not weaker than weak certainty independence of Maccheroni et al. (2006 JET), but common-mean translation invariance is weaker than the translation invariance property implied by weak independence.
Pp. 625-626: they can handle Machina’s examples. Pp. 627: relations to CAPM. %}
Chambers, Robert G., Simon Grant, Ben Polak, & John Quiggin (2014) “A Two-Parameter Model of Dispersion Aversion,” Journal of Economic Theory 150, 611–641.
{% Take beliefs as sets of probabilities. These can be described by their tangents. Results on approximations are given. The authors add interpretations in terms of beliefs. %}
Chambers, Robert G. & Tigran Melkonyan (2008) “Eliciting Beliefs,” Theory and Decision 65, 271–284.
{% Dutch book: consider restrictions on arbitrage. %}
Chambers, Robert G. & John Quiggin (2008) “Narrowing the No-Arbitrage Bounds,” Journal of Mathematical Economics 44, 1–14.
{% %}
Chandler, Jesse J. & Emily Pronin (2012) “Fast Thought Speed Induces Risk Taking,” Psychological Science 23, 370–374.
{% %}
Chandrasekhar, Pammi V.S., C. Monica Capra, Sara Moore, Charles Noussair, & Gregory S. Berns (2008) “Neurobiological Regret and Rejoice Functions for Aversive Outcomes,” Neuroimage 39, 1472–1484.
{% Test the decision-under-uncertainty (DUU) theory of Chichilnisky (2009). Use the Tradeoff method to measure utility and probability weighting. A problem is that in the experiment extremity of an event is generated by its outcome, whereas in the theory an event is to be extreme irrespective of the outcome. %}
Chanel, Olivier & Graciela Chichilnisky (2009) “The Influence of Fear in Decisions: Experimental Evidence,” Journal of Risk and Uncertainty 39, 271–298.
{% Subjects’ risk aversion is measured before and after a change of wealth derived from a task they carried out. Their change is both absolute in the sense of just getting an extra positive or negative payment for their work, but also relative in the sense of getting more or less than the average of what other subjects get. Risk aversion is measured by fitting EU with log-power (CRRA) utility. Because of several things going on such as perception of inequality it is not easy to interpret the results. %}
Chao, Hong, Chun-Yu Ho, & Xiangdong Qin (2017) “Risk Taking after Absolute and Relative Wealth Changes: The Role of Reference Point Adaptation,” Journal of Risk and Uncertainty 54, 157–186.
{% Shows that for nonexpected utility models, including rank dependence and prospect theory, with first-order risk aversion, heterogeneity can lead to extra deviations from the representative agent model. %}
Chapman, David A. & Valery Polkovnichenko (2009) “First-Order Risk Aversion, Heterogeneity, and Asset Market Outcomes,” Journal of Finance 64, 1863–1887.
{% Explain that reference dependence as solution to Rabin’s paradox is very inconvenient for finance. Propose to assume Rabin’s small-scale risk aversion in a restricted number of choice situations, in which the calibration does not go through and no paradoxes result for large-scale risks. %}
Chapman, David A. & Valery Polkovnichenko (2011) “Risk Attitudes toward Small and Large Bets in the Presence of Background Risk,” Review of Finance 15, 909–927.
{% time preference; in experiment 3, she measured utility under risk (using one gain-choice to fit power-utility for gains and one loss-choice to fit power-utility for losses) and used this measurement to measure discounting of utility rather than of money. Seems to have been the first to have done so for money, although for health it had been done before (Stiggelbout et al. 1994 MDM). %}
Chapman, Gretchen B. (1996) “Temporal Discounting and Utility for Health and Money,” Journal of Experimental Psychology: Learning, Memory, and Cognition 22, 771–791.
{% time preference; argues that people do not always prefer increasing sequences, but instead the kind of sequences that they are used to, for example, decreasing for health %}
Chapman, Gretchen B. (1996) “Expectations and Preferences for Sequences of Health and Money,” Organizational Behavior and Human Decision Processes 67, 59–75.
{% time preference; extends on Chapman (1996). %}
Chapman, Gretchen B. (2000) “Preferences for Improving and Declining Sequences of Health Outcomes,” Journal of Behavioral Decision Making 13, 203–218.
{% %}
Chapman, Gretchen B. & Arthur S. Elstein (1995) “Valuing the Future: Temporal Discounting of Health and Money,” Medical Decision Making 15, 373–386.
{% marginal utility is diminishing: discusses many “local” deviations due to last penny needed to buy a house etc. Does not discuss loss aversion, contrary to what may be suggested by footnote 4 on p. 673 of Robertson (1954) %}
Chapman, Sydney (1913) “The Utility of Income and Progressive Taxation,” Economic Journal 22, 25–35.
{% Investigates standard litigation model, but assumes RDU (= CEU) for ambiguity with neoadditive weighting function. Follows up on Teitelbaum (2007). %}
Chappe, Nathalie & Raphael Giraud (2013) “Confidence, Optimism and Litigation: A Litigation Model under Ambiguity,” working paper.
{% %}
Chareka, Patrick (2009) “The Central Limit Theorem for Capacities,” Statistics & Probability Letters 79, 1456–1462.
{% Measure CEs (certainty equivalents) in game situations. CEs are higher in coordination game (which is cooperative) than in matching pennies (which is competitive). These things are moderated if “opponent” is random computer. Neuroimaging is used to find correlations with brain activities.
A difficulty is that the measurement of the CEs in this paper interferes with the games. What happens is, first, players are asked what they play if they have to play a game. Next, some players are given the choice to either play the game, or instead get a sure outcome for themselves (and then the same sure amount for their opponent). This impacts the game by forward induction. If your opponent had the choice between the game and the sure amount, and chose the game, then this signals that she wants to get more money from the game than the sure money amount, which for instance may rule out some equilibria. In the coordination game it makes it extra safe to also enter there and go for a high amount. Thus, it makes coordination games extra attractive. %}
Chark, Robin & Chew Soo Hong (2015) “A Neuroimaging Study of Preference for Strategic Uncertainty,” Journal of Risk and Uncertainty 50, 209–227.
{% Use real incentives.
Use front-end delay: choices between receiving money after 2 versus 9 days (proximate), after 31 versus 38 days (intermediate), and after 301 versus 308 days (remote). They find decreasing impatience when going from proximate to intermediate, but not when going from intermediate to remote. %}
Chark, Robin, Soo Hong Chew & Songfa Zhong (2015) “Extended Present Bias: a Direct Experimental Test,” Theory and Decision 79, 151–165.
{% %}
Chark, Robin, Chew Soo Hong, & Songfa Zhong (2016) “Individual Longshot Preferences,” working paper.
{% %}
Charles-Cadogan, Godfrey (2016) “Expected Utility Theory and Inner and Outer Measures of Loss Aversion,” Journal of Mathematical Economics 63, 10–20.
{% %}
Charnes, Abraham & William W. Cooper (1959) “Chance-Constrained Programming,” Management Science 5, 197–207.
{% %}
Charness, Gary (2004) “Attribution and Reciprocity in an Experimental Labor Market,” Journal of Labor Economics 22, 665–688.
{% correlation risk & ambiguity attitude: seem to find positive correlation (p. 139) %}
Charness, Gary & Uri Gneezy (2010) “Portfolio Choice and Risk Attitudes: An Experiment,” Economic Inquiry 48, 133–146.
{% survey on nonEU: survey a few methods of measuring risk attitudes, mostly from close researchers, pointing out that they do not seek for completeness. They present a section “the multiple price list” as a method, citing some papers that elicited indifferences through what I would call price list rather than multiple price list. %}
Charness, Gary, Uri Gneezy, & Alex Imas (2013) “Experimental Methods: Eliciting Risk Preferences,” Journal of Economic Behavior and Organization 87, 43–51.
{% A refinement of the Charness & Levin (2005) design gives violations of stochastic dominance. The larger the groups to decide and the more transparent the stimuli the fewer the violations of stochastic dominance. %}
Charness, Gary, Edi Karni, & Dan Levin (2007) “Individual and Group Decision Making under Risk: An Experimental Study of Bayesian Updating and Violations of First-Order Stochastic Dominance,” Journal of Risk and Uncertainty 35, 129–148.
{% They study the Linda conjunction fallacy of Kahneman & Tversky (1983). In the replication they find 58% rather than the 85% (note the reversal of digits …; typo!?) that K&T did; here subjects received a flat $2 payment. Then they redid, telling the subjects that there was a correct answer, and paying $4 to who gave the correct answer. This reduced the error rate to 33% (real incentives/hypothetical choice; p. 554). They also let groups of 2 and also of 3 answer. The groups, especially of 3, had much lower error rates, both with answer-contingent payment and with flat payment.
Note that paying for the correct answer versus flat is a way of rewarding different than the real-hypothetical decisions distinction. Here it is not a decision the outcome of which is real or hypothetical, but just a different payment for an effort. In the hypothetical treatment there is no reference to any hypothetical payment. %}
Charness, Gary, Edi Karni, & Dan Levin (2010) “On the Conjunction Fallacy in Probability Judgment: New Experimental Evidence Regarding Linda,” Games and Economic Behavior 68, 551–556.
{% Consider a three-color Ellsberg urn with 36 balls (slips in an envelope, but I write balls), with a known number X of red balls, and 36−X black and yellow balls in unknown proportion. They find the switching value X, which is similar to a matching probability but not the same because the number of black/yellow balls also changes. Subjects who switch between 11 and 13 are ambiguity neutral. Those choosing known or unknown for X = 12 are then both taken as ambiguity neutral. The latter holds for 60% (n = 164) of their subjects. Subjects who for X = 12 choose risky are categorized as ambiguity averse in most other studies but as neutral in this study; if there are many such subjects, this explains much of their finding. Further, 20% is inconsistent, 12% is ambiguity seeking, and only 8% is ambiguity averse. (ambiguity seeking) Strange that so few of the latter. One might conjecture that many subjects are very weakly ambiguity averse, choosing red in classical Ellsberg experiments and also here when X = 12, in which case the majority of the subjects categorized as ambiguity neutral would choose to bet on red. This did not happen. Footnote 15 (p. 11) points out that only 50% of these subjects (82 of 164) chose red. Given the outstanding nature of red versus the other two colors, this can be taken as roughly ambiguity neutral.
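That X = 12 is the ambiguity-neutral switching value can be checked with a one-line computation, assuming (my own simplifying assumption) an SEU maximizer with a symmetric prior over the unknown black/yellow composition, so that black gets probability (36−X)/72 and red gets X/36:

```python
# Sketch: ambiguity-neutral switching value in the 36-ball three-color
# Ellsberg urn. X red balls are known; the 36-X black/yellow balls are
# in unknown proportion. An SEU maximizer with a symmetric prior over
# the black/yellow composition assigns probability (36-X)/72 to black.

def prefers_red(X):
    return X / 36 > (36 - X) / 72

# The neutral point is X = 12: strictly prefer red only above it.
print([X for X in range(10, 15) if prefers_red(X)])  # [13, 14]
```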
suspicion under ambiguity: §2 discusses an experiment where they did not control for suspicion, then finding 25% ambiguity aversion. In the beginning of the paper the authors suggest that they deviate from most other studies, and find less ambiguity aversion than those others, because they, supposedly unlike the others, control for suspicion. However, as my key word shows, most other studies have controlled for suspicion also in the past.
They also study subjects who try to convince each other of their preferences, with an incentive to convince. Ambiguity-neutral subjects can convince ambiguity-seeking and inconsistent subjects, but less so ambiguity-averse ones.
For both the first part, individual choice, and the second part (convince others), one choice was paid for real, which entails a mild income effect.
P. 20: “ambiguity aversion by no means seems as prevalent as some studies have suggested.” %}
Charness, Gary, Edi Karni, & Dan Levin (2013) “Ambiguity Attitudes and Social Interactions: An Experimental Investigation,” Journal of Risk and Uncertainty 46, 1–25.
{% updating: Considers a case where Bayesian updating means CHANGING a successful strategy, so that the former can be distinguished from heuristic-like continuation of strategies that were successful in the past (more or less a myopic version of CBDT), as follows. A coin has been flipped, giving H or T, unknown to the agent. There are an upper and a lower urn, containing B(lack) and W(hite) balls, where the composition of the upper urn is always more extreme than that of the lower. One ball will be drawn from an urn, where B gives a valuable prize and W does not, and sometimes the agent can choose from which urn this is to be done, upper or lower. H is more favorable because, if H, then the upper urn contains 6 B balls and 0 W balls, and the lower urn contains 4 B and 2 W, whereas if T then the upper urn contains 0 B balls and 6 W balls, and the lower urn contains 2 B and 4 W.
            if H             if T
upper urn:  {B,B,B,B,B,B}    {W,W,W,W,W,W}
lower urn:  {B,B,B,B,W,W}    {B,B,W,W,W,W}
A first draw is done from the lower urn, and the agent sees its result. The agent can then choose from which urn the second and last draw should be made. If the first draw from lower is favorable and gives B, then Bayesian updating recommends to switch and 2nd draw should be from the upper urn. If the first draw is unfavorable and gives W, then Bayesian updating recommends not to switch and 2nd draw should again be from the lower urn. Myopic continuation of successful strategy, and changing of bad strategy, would suggest opposite.
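The Bayesian recommendation can be verified by direct computation. For simplicity the sketch below assumes the second draw from the lower urn is with replacement (my own simplifying assumption; the ranking is the same without):

```python
# Sketch: Bayesian updating in the two-urn setup.
# Upper urn: 6B/0W if H, 0B/6W if T. Lower urn: 4B/2W if H, 2B/4W if T.
# First draw is from the lower urn; then choose the urn for the second.
# Second draw from the lower urn is assumed with replacement.

def posterior_H(first_draw):
    pB_H, pB_T = 4/6, 2/6             # P(B | H), P(B | T), lower urn
    pH = 0.5                           # prior P(H)
    like_H = pB_H if first_draw == "B" else 1 - pB_H
    like_T = pB_T if first_draw == "B" else 1 - pB_T
    return pH * like_H / (pH * like_H + (1 - pH) * like_T)

def p_black_second(first_draw, urn):
    pH = posterior_H(first_draw)
    if urn == "upper":
        return pH * 1.0                # upper urn is all black iff H
    return pH * 4/6 + (1 - pH) * 2/6   # lower urn

# After a favorable first draw (B), switching to the upper urn is better:
print(p_black_second("B", "upper"), p_black_second("B", "lower"))
# After an unfavorable first draw (W), staying with the lower urn is better:
print(p_black_second("W", "upper"), p_black_second("W", "lower"))
```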
In the experiment the authors find the two strategies about fifty-fifty. No payment in the first draw reduces the error rate; higher prizes, presence of affect in the first draw (if subjects know before the first draw whether B or W will be favorable), and being male reduce it too (gender differences in risk attitudes; gender differences in ambiguity attitudes). %}
Charness, Gary & Dan Levin (2005) “When Optimal Choices Feel Wrong: A Laboratory Study of Bayesian Updating, Complexity, and Affect,” American Economic Review 95, 1300–1309.
{% %}
Charness, Gary & Matthew Rabin (2000) “Social Preferences: Some Simple Tests and a New Model.”
{% equity-versus-efficiency: seems to be on it %}
Charness, Gary & Matthew Rabin (2002) “Understanding Social Preferences with Simple Tests,” Quarterly Journal of Economics 117, 817–869.
{% In games people behave differently if felt to be part of group, watched by them, than if not. %}
Charness, Gary, Luca Rigotti & Aldo Rustichini (2007) “Individual Behavior and Group Membership,” American Economic Review 97, 1340–1352.
{% Paying people to exercise increases their exercising. %}
Charness, Gary & Uri Gneezy (2009) “Incentives to Exercise,” Econometrica 77, 909–931.
{% gender differences in risk attitudes: women more risk averse than men.
They investigate illusion of control, ambiguity aversion, and myopic loss aversion. In direct choices people behave as usual, preferring to have control and to choose the unambiguous option. But they will not pay even small amounts for these preferences, and do not invest more, suggesting that the effects found are very weak.
P. 137: in Ellsberg subjects can choose the winning color, so, control for suspicion.
They investigate illusion of control for simple risky choices between-subjects, so that there is no contrast effect, and find none (p. 138).
P. 139: in Ellsberg, they find no direct ambiguity aversion. However, in a treatment (T8) where subjects can either invest in the ambiguous urn or the unambiguous, but have to pay some for the latter, the appreciation of the former is HIGHER than in the other treatments. This can be explained by the contrast effect known from marketing, where appreciation of an option is increased by adding an irrelevant inferior option (Tversky & Simonson 1993).
P. 141 quotes Albert Einstein, “everything should be as simple as it is, but not simpler.” %}
Charness, Gary & Uri Gneezy (2010) “Portfolio Choice and Risk Attitudes: An Experiment,” Economic Inquiry 48, 133–146.
{% Seem to find that seniors are more risk averse, and more cooperative, than juniors. %}
Charness, Gary & Marie-Claire Villeval (2009) “Cooperation and Competition in Intergenerational Experiments in the Field and the Laboratory,” American Economic Review 99, 956–978.
{% gender differences in risk attitudes: several results
inverse-S (= likelihood insensitivity) related to emotions
Measure probability weighting w. Relate it to the five-factor model of psychology. Use hypothetical choice. Use the Tversky & Kahneman (1992) stimuli except mixed. Find that emotional balance moves w towards EU, both regarding likelihood insensitivity and regarding optimism. Also being male rather than female does so. The one-parameter Prelec family does best, then the one-parameter T&K 92, then the two-parameter Prelec family (compound invariance), and, finally, Goldstein & Einhorn. They test reflection and find it confirmed. For gains, gender matters with men less likelihood insensitive than women. For losses, emotional balance leads to closer conformity with EU both for less likelihood insensitivity and pessimism. Emotional intelligence does more for gains, and emotional balance for losses. Seems that losses are treated more emotionally and less cognitively than gains. Several times no significance was reached. %}
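The four weighting families compared can be sketched as follows (a sketch; parameter names are generic, not the paper’s notation):

```python
import math

def prelec1(p, a):
    """One-parameter Prelec family: w(p) = exp(-(-ln p)^a)."""
    return math.exp(-((-math.log(p)) ** a))

def tk92(p, c):
    """Tversky & Kahneman (1992): w(p) = p^c / (p^c + (1-p)^c)^(1/c)."""
    return p ** c / (p ** c + (1 - p) ** c) ** (1 / c)

def prelec2(p, a, b):
    """Two-parameter Prelec family (compound invariance): w(p) = exp(-b(-ln p)^a)."""
    return math.exp(-b * (-math.log(p)) ** a)

def goldstein_einhorn(p, d, g):
    """Goldstein & Einhorn (1987): w(p) = d*p^g / (d*p^g + (1-p)^g)."""
    return d * p ** g / (d * p ** g + (1 - p) ** g)

# With a < 1 the one-parameter Prelec family is inverse-S: it overweights
# small probabilities, underweights large ones, with fixed point at 1/e.
assert prelec1(0.05, 0.65) > 0.05
assert prelec1(0.95, 0.65) < 0.95
assert abs(prelec1(1 / math.e, 0.65) - 1 / math.e) < 1e-12
```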
Charupat, Narat, Richard Deaves, Travis Derouin, Marcelo Klotzle, & Peter Miu (2013) “Emotional Balance and Probability Weighting,” Theory and Decision 75, 17–41.
{% value of information; Paper considers multiple priors. “Revising” is their term for info that reduces the number of probability measures included in the set of priors. “Focusing” is, if I understand right, the traditional case of receiving info about an event. %}
Chassagnon, Arnold & Jean-Christophe Vergnaud (1999) “A Positive Value of Information for a Non-Bayesian Decision-Maker,”
{% ordering of subsets: this paper gives necessary and sufficient conditions, in full generality, for existence of probability measure representing qualitative probability relation. The ultimate result!
Assume that ≽ is a preference relation on an algebra of events (subsets of a state space S, also called universal event). We call P agreeing if P is a finitely additive probability measure on the algebra, and
E ≽ F ⟹ P(E) ≥ P(F),
E ≻ F ⟹ P(E) > P(F).
This amounts to the usual E ≽ F ⟺ P(E) ≥ P(F) if ≽ is a weak order, but it is nicer because it also covers the practically realistic case of incomplete observations. 1E denotes the indicator function of E. The condition necessary and sufficient for comparative probability (existence of an agreeing probability) is, besides well boundedness (S ≻ ∅ and S ≽ E for all E):
For all A ≻ B there exists ε > 0 such that:
Ej ≽ Fj, j = 1,…,n, m > 0, k ≥ 0
&
m·1A + Σ1Ej + k·1∅ = m·1B + Σ1Fj + k·1S
& k ≤ εm
cannot be.
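A toy illustration of why such configurations must be excluded (finite case, k = 0, with hypothetical events of my own): pointwise equality of the indicator sums transfers, under any candidate agreeing P, to an equality of probabilities, so Ej ≽ Fj would force P(A) ≤ P(B), contradicting A ≻ B:

```python
from fractions import Fraction as F
from itertools import product

S = (0, 1, 2)  # a toy three-state space

def indicator_sum(m, A, events):
    """Pointwise value of m*1_A + sum_j 1_{E_j} over the states of S."""
    return tuple(m * (s in A) + sum(s in E for E in events) for s in S)

A, B = {0}, {2}
Es, Fs = [{1, 2}], [{0, 1}]          # the pair E_1, F_1
m = 1
# The indicator identity m*1_A + 1_{E_1} = m*1_B + 1_{F_1} holds pointwise:
assert indicator_sum(m, A, Es) == indicator_sum(m, B, Fs)

# Hence for EVERY probability P on S: P(A) + P(E_1) = P(B) + P(F_1),
# so E_1 >-= F_1 (i.e. P(E_1) >= P(F_1)) would force P(A) <= P(B).
for p0, p1 in product(range(11), repeat=2):
    if p0 + p1 <= 10:
        P = {0: F(p0, 10), 1: F(p1, 10), 2: F(10 - p0 - p1, 10)}
        PA, PB = P[0], P[2]
        PE, PF = P[1] + P[2], P[0] + P[1]
        assert PA + PE == PB + PF
```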
For finite S the condition is equivalent to excluding k ≠ 0 (or, ε = 0) and was demonstrated by Kraft, Pratt, & Seidenberg (1959). It then amounts to the well-known necessary and sufficient condition for solving linear inequalities. The general way of turning this into preference conditions was explained beautifully by Scott (1964). For infinite S we have to ensure Archimedeanity, and the condition ensures it. Substitution of P shows that ε reflects P(A) − P(B). %}
Chateauneuf, Alain (1985) “On the Existence of a Probability Measure Compatible with a Total Preorder on a Boolean Algebra,” Journal of Mathematical Economics 14, 43–52.
{% %}
Chateauneuf, Alain (1987) “Continuous Representation of a Preference Relation on a Connected Topological Space,” Journal of Mathematical Economics 16, 139–146.
{% The fundamental lemma characterizes multiple priors. %}
Chateauneuf, Alain (1988) “Uncertainty Aversion and Risk Aversion in Models with Nonadditive Probabilities.” In Bertrand R. Munier (ed.) Risk, Decision and Rationality, 615–629, Reidel, Dordrecht.
{% Axiom A5.1 can be used to imply proportionality of additive value functions. Published in JME 32 1999. %}
Chateauneuf, Alain (1990) “On the Use of Comonotonicity in the Axiomatization of EURDP Theory for Arbitrary Consequences,” CERMSEM, University of Paris I; extended abstract presented at Fifth International Conference on the Foundations and Applications of Utility, Risk and Decision Theory (FUR-90).
{% Theorem 2 characterizes the multiple priors model just as Gilboa & Schmeidler (1989, JME) did, but with linearity of utility referring to money-addition and not to the mixing of probabilities as in G&S. Chateauneuf and Gilboa & Schmeidler obtained their results independently, although at a late stage Gilboa helped Chateauneuf to correct a mistake in Chateauneuf’s theorem, acknowledged in Footnote 9 of Chateauneuf’s paper. The “fundamental lemma” on p. 623 of Chateauneuf (1988) stated the same result. Although it referred to a 1986 working paper of Gilboa & Schmeidler’s 1989 paper, the results were obtained independently.
Theorem 1 provides an alternative to Schmeidler (1989), again with monetary outcomes and linear utility. It uses a nice weakening of comonotonic independence building on Anger (1977). Chateauneuf uses addition independence and not mixing independence.
biseparable utility %}
Chateauneuf, Alain (1991) “On the Use of Capacities in Modeling Uncertainty Aversion and Risk Aversion,” Journal of Mathematical Economics 20, 343–369.
{% %}
Chateauneuf, Alain (1994) “Combination of Compatible Belief Functions and Relation of Specificity.” In Ronald R. Yager, Janusz Kacprzyk, & Mario Fedrizzi (eds.) Advances in the Dempster-Shafer Theory of Evidence, 97–114, Wiley, New York.
{% Not all decomposable capacities are distorted probabilities, but many are. There may be some vague similarity with sources of uniform ambiguity. %}
Chateauneuf, Alain (1996) “Decomposable Capacities, Distorted Probabilities and Concave Capacities,” Mathematical Social Sciences 31, 19–37.
{% Tradeoff method: Axiom A4 is a weakened version of tradeoff consistency (if the latter were imposed on all events and not just states of nature). It is used jointly with something like tail independence, and suffices to imply proportionality of the additive value functions.
A4 says, for, say, outcomes always ordered from best to worst, so a ≽ c and b ≽ d:
(1) (p1:x1, p2:a, p3:a) ~ (p1:y1, p2:b, p3:b) and
(2) (p1:x1, p2:c, p3:c) ~ (p1:y1, p2:d, p3:d) imply
(3) (p1:x1, p2:a, p3:c) ~ (p1:y1, p2:b, p3:d).
(1) and (2) imply ab ~* cd, and so do (1) and (3). So, this is a nice weakening of tradeoff consistency. It kind of implies, loosely speaking, that Vp1+p2 is proportional to Vp2. A reformulation: if replacing the tradeoff ab by the tradeoff cd on an event A does not affect indifference, then neither should it do so on any subset of A.
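A quick sanity check with hypothetical numbers that EU, a special case of RDU, satisfies A4: with u(x) = √x, probabilities (.5, .3, .2), and equal utility tradeoffs for ab and cd, indifferences (1) and (2) indeed force (3):

```python
import math

# Hypothetical numbers: u(x) = sqrt(x), so u(9)-u(4) = u(4)-u(1) = 1,
# giving a = 9, b = 4, c = 4, d = 1 (hence a >= c and b >= d),
# and x1 = 16, y1 = 25 chosen to produce indifference (1).
u = math.sqrt
p1, p2, p3 = 0.5, 0.3, 0.2
x1, y1 = 16, 25
a, b, c, d = 9, 4, 4, 1

def eu(o1, o2, o3):
    """EU of the act paying o1, o2, o3 with probabilities p1, p2, p3."""
    return p1 * u(o1) + p2 * u(o2) + p3 * u(o3)

assert abs(eu(x1, a, a) - eu(y1, b, b)) < 1e-12   # indifference (1)
assert abs(eu(x1, c, c) - eu(y1, d, d)) < 1e-12   # indifference (2)
assert abs(eu(x1, a, c) - eu(y1, b, d)) < 1e-12   # A4's conclusion (3)
```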
Additionally nice is that it also is a weakening of vNM-probability-mix independence. %}
Chateauneuf, Alain (1999) “Comonotonicity Axioms and Rank-Dependent Expected Utility Theory for Arbitrary Consequences,” Journal of Mathematical Economics 32, 21–45.
{% Corollary 2 on p. 86 shows that risk aversion can hold under rank-dependent utility with a nonconcave (even strictly convex) utility function, as soon as the probability weighting function is sufficiently convex. For example, if U(x) = x^n, n > 1, then f(p) ≤ p^n will do (and is actually necessary and sufficient). %}
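A numerical spot-check of this claim (under my reading of the dropped relation symbol as f(p) ≤ p^n; taking the boundary case f(p) = p^n with n = 2): RDU certainty equivalents stay below expected values even though utility is strictly convex.

```python
import random

n = 2

def U(x):          # strictly convex utility
    return x ** n

def f(p):          # convex probability weighting, boundary case f(p) = p^n
    return p ** n

def rdu_ce(outcomes, probs):
    """RDU certainty equivalent; outcomes are ranked from best to worst."""
    pairs = sorted(zip(outcomes, probs), reverse=True)
    cum_prev, value = 0.0, 0.0
    for x, p in pairs:
        cum = cum_prev + p
        value += (f(cum) - f(cum_prev)) * U(x)   # rank-dependent decision weight
        cum_prev = cum
    return value ** (1 / n)

random.seed(0)
for _ in range(1000):
    xs = [random.uniform(0, 10) for _ in range(4)]
    ws = [random.uniform(0.01, 1) for _ in range(4)]
    ps = [w / sum(ws) for w in ws]
    ev = sum(x * p for x, p in zip(xs, ps))
    assert rdu_ce(xs, ps) <= ev + 1e-9   # risk aversion: CE never exceeds EV
```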
Chateauneuf, Alain & Michèle Cohen (1994) “Risk Seeking with Diminishing Marginal Utility in a Non-Expected Utility Model,” Journal of Risk and Uncertainty 9, 77–91.
{% %}
Chateauneuf, Alain & Michèle Cohen (2000) “A New Approach to Individual Behavior under Uncertainty and to Social Welfare,” Fuzzy Measures and Integrals: Theory and applications 40, 289–313.
{% This paper contains a sketch of the proof of Savage’s (1954) SEU theorem, based on notes that Jaffray used. During one of my first visits to him, when I was a young researcher, end of the 1980s, he showed me his handwritten notes. Good to now see that they are public. %}
Chateauneuf, Alain, Michèle Cohen, & Jean-Yves Jaffray (2006) “Decision under Uncertainty: The Classical Models.” In Denis Bouyssou, Didier Dubois, Henri Prade, & Marc Pirlot (eds.) Decision-Making Process: Concepts and Methods, Ch. 9, 385–400, Wiley, New York.
{% %}
Chateauneuf, Alain, Michèle Cohen, & Robert Kast (1997) “Comonotone Random Variables in Economics: A Review of Some Results,” CERMSEM, CEM, University of Paris I.
{% %}
Chateauneuf, Alain, Michèle Cohen, & Isaac Meilijson (1997) “New Tools to Better Model Behavior under Risk and Uncertainty: An Overview,” Finance 18, 25–46.
{% %}
Chateauneuf, Alain, Michèle Cohen, & Isaac Meilijson (2004) “Four Notions of Mean-Preserving Increase in Risk, Risk Attitudes and Applications to the Rank-Dependent Expected Utility Model,” Journal of Mathematical Economics 40, 547–571.
{% %}
Chateauneuf, Alain, Michèle Cohen, & Isaac Meilijson (2001) “Comonotonicity-Based Stochastic Orders Generated by Single Crossings of Distributions, with Applications to Attitudes to Risk in the Rank-Dependent Expected Utility Model,” CERMSEM, CEM, University of Paris I.
{% %}
Chateauneuf, Alain, Michèle Cohen, & Isaac Meilijson (2004) “More Pessimism than Greediness: A Characterization of Monotone Risk Aversion in the Rank-Dependent Expected Utility Model,” Economic Theory 25, 649–667.
{% A characterization of convex Choquet integrals. They do not use a comonotonic-additivity like axiom. General characterizations of not-necessarily convex Choquet integrals are in Wakker (1993 MOR). %}
Chateauneuf, Alain & Bernard Cornet (2018) “Choquet Representability of Submodular Functions,” Mathematical Programming B 168, 615–629.
{% %}
Chateauneuf, Alain, Rose-Anna Dana, & Jean-Marc Tallon (2001) “Optimal Risk-Sharing Rules and Equilibria with Choquet-Expected-Utility,” Journal of Mathematical Economics 34, 191–214.
{% Use Anscombe-Aumann setup as did Schmeidler (1989), and simplify his axioms somewhat. %}
Chateauneuf, Alain, Jürgen Eichberger, & Simon Grant (2003) “A Simple Axiomatization and Constructive Representation Proof for Choquet Expected Utility,” Economic Theory 22, 907–915.
{% event/utility driven ambiguity model: event-driven
Neo-additive means: non-extreme-outcome additive.
The simplest and most well known version of the neo-additive model is EU+a*sup+b*inf. (My 2010 book defines it this way, explaining in Footnote 3, p. 319 that details about null events are ignored.) The authors write it as (1−δ)EU + δα*sup + δ(1−α)*inf. The authors consider somewhat more general models, first explained intuitively: A subjective probability measure P is given. All events E with positive probability P(E) > 0 are possible and P-nonnull. However, there may be nonempty P-null events E with P(E) = 0 that are still considered to be possible. “Possible” thus is an additional category, broader than P-nonnull. A person maximizes EU w.r.t. P but assigns some extra weight to the infimum and the supremum POSSIBLE outcomes. Given P (in fact, for each P), the maximal set of possible events that can be considered is all nonempty events, leading to the aforementioned well known model EU+a*sup+b*inf. Given P, the minimal set of possible events that can be considered is only all P-nonnull events. This leads to the RDU model with W(.) = w(P(.)) with w a neo-additive probability weighting function (w linear on (0,1) under the most common case of a ≥ 0 and a+b ≤ 1). This is the probabilistically sophisticated version of the neo-additive model. The authors allow for intermediate cases between these two extremes. In the notation of the authors, the weight for the supremum possible outcome is δα, and the weight for the infimum possible outcome is δ(1−α). δ indexes distrust in EU, α designates optimism beyond EU, and 1−α designates pessimism beyond EU.
Both δ and α are from [0,1], and are allowed to take the extreme values 0 or 1. δ = 1 and α = 0 indicate maximal pessimism, going by the infimum possible outcome (most extreme is if all nonempty events are possible, when acts are evaluated by their infimum outcomes).
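The functional above can be sketched as follows (a minimal sketch for the maximal-possibility case, where sup and inf range over all outcomes; the symbols δ and α follow the reconstruction above):

```python
def neo_additive(outcomes, probs, u, delta, alpha):
    """Neo-additive evaluation, maximal-possibility case:
    (1 - delta)*EU + delta*(alpha*sup + (1 - alpha)*inf),
    where delta is the weight shifted away from EU and alpha the optimism share."""
    eu = sum(p * u(x) for x, p in zip(outcomes, probs))
    best = max(u(x) for x in outcomes)
    worst = min(u(x) for x in outcomes)
    return (1 - delta) * eu + delta * (alpha * best + (1 - alpha) * worst)

u = lambda x: x                            # linear utility for illustration
act = ([0, 50, 100], [0.25, 0.5, 0.25])    # outcomes with P-probabilities

assert neo_additive(*act, u, delta=0.0, alpha=0.5) == 50.0   # pure EU
assert neo_additive(*act, u, delta=1.0, alpha=0.0) == 0.0    # maximal pessimism
assert neo_additive(*act, u, delta=0.2, alpha=0.5) == 50.0   # symmetric deviation
```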
We can infer possibility from preferences. Event E is possible if and only if xEy ≁ y for some outcomes x,y. xEy denotes the binary act in the usual way. There is a small inaccuracy in the paper regarding null/possible events, explained later. I first explain the paper’s terminology and some other things. The paper uses the term null for impossible; hence nonnull means possible (including both what the authors call universal and what they call essential), which, as explained, is broader than P-nonnull. They denote the set of null events by N.
For the preference foundation, the authors use a subjective midpoint operation defined by Ghirardato, Maccheroni, Marinacci, & Siniscalchi (2003). This is somewhat complex to observe, especially because it needs many certainty equivalents, but it is possible. Very well, the authors use it only for 50-50 mixtures which, as just explained, are reasonably well observable. They do not use general mixtures as GMMS do, which are not really observable (for instance, for a 1/3-2/3 mixture GMMS need infinitely many observations).
P. 544: they interpret 1−δ as an index of confidence in the EU probability, α as an index of optimism, and (1−α) as an index of pessimism (the authors there confuse optimism and pessimism).