biseparable utility violated %}
Klibanoff, Peter, Massimo Marinacci, & Sujoy Mukerji (2005) “A Smooth Model of Decision Making under Ambiguity,” Econometrica 73, 1849–1892.
{% Give extension of their 2005 Econometrica paper to a sequential setting. At each time point, certainty equivalents are substituted recursively, combining the utility of current consumption with the utility of the next period’s certainty equivalent through a discounted-utility evaluation. They cite preference axiomatizations of discounted-utility evaluations, with no need to write these out in their paper.
A big conceptual decision they took is that this is not a sequential setup of their model, but their model of a sequential setup. That is, the ambiguity is at the beginning and concerns the future path as a whole (consumption plans). They then do backward induction. But in their model it is reasonable that ambiguity disappears at future nodes because of more and more repeated observations, which they explain repeatedly (e.g. p. 937 §2.4; p. 952 l. 8). They consider a model where there is a clear, well-definable objective probability; the only thing is that it is unknown, and it becomes more and more known as more (frequentist!) info comes in over time, as is common in statistics (p. 937 writes “the true process”). In this sense the ambiguity considered here is not purely subjective but is iid-type.
I was glad to see that p. 958 points out that the Epstein & Schneider (2003 JET) rectangle version of multiple priors was preceded by Sarin & Wakker (1998). %}
Klibanoff, Peter, Massimo Marinacci, & Sujoy Mukerji (2009) “Recursive Smooth Ambiguity Preferences,” Journal of Economic Theory 144, 930–976.
{% Discuss, within smooth models, some definitions of ambiguity by Epstein, Ghirardato et al., Nehring, and others. I see things differently in the sense that whether an event is ambiguous is better NOT taken as endogenous. We researchers decide beforehand, without having seen any preference, that it is the unknown urn that is ambiguous in the Ellsberg two-urn experiment. %}
Klibanoff, Peter, Massimo Marinacci, & Sujoy Mukerji (2011) “Definitions of Ambiguous Events and the Smooth Ambiguity Model,” Economic Theory 48, 399–424.
{% For my comments, see Epstein (2010). %}
Klibanoff, Peter, Massimo Marinacci, & Sujoy Mukerji (2012) “On the Smooth Ambiguity Model: A Reply,” Econometrica 80, 1303–1321.
{% Consider the usual Anscombe-Aumann (AA) approach for ambiguity. Assume a countably infinite sequence of realizations of the state of nature that in a way are iid, and impose event symmetry, which is like de Finetti’s (1937) exchangeability. Their main axiom, Axiom 5 (p. 1951, event symmetry), requires, more precisely, that mixing an act with a cylinder-event-A-indicator function does not change preference value if a permutation is applied to A.
They get a kind of multiple priors representation. For every prior on the state space there is an EU representation. The representation then is a general overall aggregation of these EU representations.
What I find typical of multiple prior representations, as opposed to two-stage representations, is that a prior is in or out of the prior set, and those in are treated similarly, as are those that are out, with for instance not one receiving a higher weight than another. (The latter happens in two-stage models.) This need not be the case for the general aggregator here, as it is not for the smooth model, which is why their model for me is more two-stage than multiple prior. The model is not like the usual two-stage models in that one cannot, after every resolution of the 1st-stage uncertainty, plug in any continuation. Instead, there is only an act contingent on the state space, and the second-stage decomposition is endogenous, with everything following conditional on a 1st-stage resolution of uncertainty relating to that same act contingent on the state space, as in the smooth model. P. 1946 penultimate para assumes so much richness that they come close enough to the product-space richness of regular two-stage models to do the required maths.
They define a prior as nonnull (or relevant) if every open set containing it is nonnull. One can restrict the set of priors aggregated by G to the set D of nonnull priors if one wants.
They formulate the usual Yaari (1969)-type condition of being more ambiguity averse. It implies that (I would then say: it can be applied only if) the risk attitude (vNM U in EU) is the same and the nullness of priors (so the above set D) is the same. They interpret this as meaning that the set D captures ambiguity, and G ambiguity aversion. This is plausible and a nice direction. Yet I see limitations. First, going only by priors being null or nonnull is crude. For instance, if two persons a priori do not think that any prior is impossible, then according to this definition they perceive the same ambiguity. But one of the two may be fairly sure about what the right prior is, and the other may be more diffuse, so that they perceive ambiguity differently. A second limitation is that the AA model (through monotonicity on p. 1950) imposes an implausible separability on the ambiguous horse states (Wakker 2010 §10.7.1 and Machina 2014 Example 3), precluding many kinds of ambiguity attitudes. It would be more desirable to also compare ambiguity attitudes of decision makers who have different risk attitudes and different sets D, using for instance techniques of Baillon, Driesen, & Wakker (2012); this suggests to me that the Yaari-type condition is too restrictive, in the same way as I consider Yaari (1969) too restrictive for EU. That the condition in this paper restricts to the same set D then does not mean that D has nothing to do with ambiguity aversion, but that the definition is too limited. %}
Klibanoff, Peter, Sujoy Mukerji, & Kyoungwon Seo (2014) “Perceived Ambiguity and Relevant Measures,” Econometrica 82, 1945–1978.
{% source-dependent utility: a subjective version of Kreps & Porteus (1978). The authors assume a finite Savage state space. There are T time points, and at each time point one receives more info about the true state of nature. This can be modeled by T partitions of the state space, each later one more refined (a filtration). They assume recursive backward induction with certainty equivalent substitution. At each time point SEU holds within that stage. They consider all kinds of cases, such as a fixed filtration (which I find most interesting) or all filtrations. It can be the same SEU model at each stage, or entirely different ones, or the same subjective probability in all of them but different utilities (this is closest to Kreps & Porteus), or also the same utilities.
I regret that the authors have at each stage assumed an Anscombe-Aumann model to derive SEU there. This means there are not T+1 stages, but 2T+2, with at every time point first the event of the partition revealed but then also a lottery carried out. It also means that they still need objective probabilities as did Kreps & Porteus. %}
Klibanoff, Peter & Emre Ozdenoren (2007) “Subjective Recursive Expected Utility,” Economic Theory 30, 49–87.
{% They fit EU, RDU, and PT to data about call options in the S&P 500 index, using a representative agent, power utility (same power for gains and losses in PT), Prelec and T&K one-parameter weighting functions, and loss aversion. PT fits best, and all empirical findings of PT are confirmed. Unfortunately, they do the rank-dependent integration bottom-to-top, so the wrong way, and the parametric families of T&K and Prelec mean something different than is common in the literature. For EU they cannot reject risk neutrality. Whereas in Prelec's paper his one-parameter family predicts no probability weighting if the best of two outcomes receives probability 1/3, as these authors do it there is no probability weighting if the worst of two outcomes receives probability 1/3. %}
Kliger, Doron & Ori Levy (2009) “Theories of Choice under Risk: Insights from Financial Markets,” Journal of Economic Behavior and Organization 71, 330–346.
{% One of three papers in an issue on contingent valuation. Gives survey on contingent valuation and stated preferences, starting with the history of the Exxon Valdez. Passive use value: your value of things existing without you using them.
P. 14: induced value vs. homegrown value. %}
Kling, Catherine L.; Daniel J. Phaneuf, & Jinhua Zhao (2012) “From Exxon to BP: Has Some Number Become Better than No Number?,” Journal of Economic Perspectives 26, 3–26.
{% Subjects judge prospects played once, five times, and fifty times. Confirm fallacies found before, such as overestimation of probability of loss. Also ask for risk perception (or verbal interpretation), and find that probability of loss determines it more than variance. %}
Klos, Alexander, Elke U. Weber, & Martin Weber (2005) “Investment Decisions and Time Horizon: Risk Perception and Risk Behavior in Repeated Gambles,” Management Science 51, 1777–1790.
{% %}
KLST: Krantz, Luce, Suppes & Tversky (1971)
{% Point out that disparity between buyer’s and seller’s point of view is too big to be explained by income effect (whether or not buyer or seller was endowed a priori with lottery or sure amount of money possibly to be exchanged). %}
Knetsch, Jack L. & John A. Sinden (1984) “Willingness to Pay and Compensation Demanded: Experimental Evidence of an Unexpected Disparity in Measures of Value,” Quarterly Journal of Economics 99, 507–521.
{% %}
Knez, Peter, Vernon L. Smith, & Arlington W. Williams (1985) “Individual Rationality, Market Rationality, and Value Estimation,” American Economic Review 75, 397–402.
{% The monetary value of a statistical life is between $7.7 million and $8.3 million per year. They measure WTA and WTP through wage increases for extra risks. %}
Kniesner, Thomas J., W. Kip Viscusi, & James P. Ziliak (2014) “Willingness to Accept Equals Willingness to Pay for Labor Market Estimates of the Value of a Statistical Life,” Journal of Risk and Uncertainty 48, 187–205.
{% foundations of probability
P. 20 and 224 and further (especially p. 226) seem to explain that risk refers to objective probability.
Ch. VIII opening page (p. 233 in version I saw): risk is for “measurable” uncertainty; i.e., when there is a “group of instances.” So risk concerns frequentist probability. Uncertainty concerns “unmeasurable uncertainty” which is also designated by “subjective probability” and it concerns the exercise of “judgment …. which … actually guide most of our conduct.”
If we interpret unmeasurable as nonadditive (which I think is not what Knight thought of; I think that additive subjective probability was called uncertainty by Knight), then the two-stage model is suggested, where first a probability judgment is formed that may well be nonadditive, next decisions are derived from it. %}
Knight, Frank H. (1921) “Risk, Uncertainty, and Profit.” Houghton Mifflin, New York.
{% questionnaire versus choice utility: argues for cardinal utility on basis of introspection and psychophysical measurement.
Principle of Complete Ignorance: p. 234 seems to argue that probabilities are irrelevant for single events
P. 305 gives nice comparisons with physical notions such as mass and force, comparing utility with force.
P. 303 suggests that measurability is obtained through tradeoffs with some other quantity that apparently is assumed linear in utility.
P. 304 suggests that introspection can reveal orderings of differences. %}
Knight, Frank H. (1944) “Realism and Relevance in the Theory of Demand,” Journal of Political Economy 52, 289–318.
{% %}
Knoef, Marike & Klaas de Vos (2009) “Representativeness in Online Panels: How Far Can We Reach?,” Working Paper, Tilburg University.
{% %}
Knutson, Brian, Scott Rick, G. Elliott Wimmer, Drazen Prelec, & George F. Loewenstein (2007) “Neural Predictors of Purchases,” Neuron 53, 147–156.
{% %}
Knutson, Brian, G. Elliott Wimmer, Scott Rick, Nick G. Hollon, Drazen Prelec, & George F. Loewenstein (2008) “Neural Antecedents of the Endowment Effect,” Neuron 58, 814–822.
{% %}
Köbberling, Veronika (2003) “Risk Attitude: Preference Models and Applications to Bargaining,” Ph.D. dissertation, METEOR, Maastricht University, the Netherlands.
{% %}
Köbberling, Veronika (2003) “Comments on: Edi Karni and Zvi Safra (1998),” Journal of Mathematical Psychology 47, 370.
{% %}
Köbberling, Veronika (2004) Book Review of: Itzhak Gilboa & David Schmeidler (2001) A Theory of Case-Based Decisions, Cambridge University Press, Cambridge; Economica 71, 508–509.
{% strength-of-preference representation %}
Köbberling, Veronika (2006) “Preference Foundations for Difference Representations,” Economic Theory 27, 375–391.
{% game theory for nonexpected utility %}
Köbberling, Veronika & Hans J.M. Peters (2003) “The Effect of Decision Weights in Bargaining Problems,” Journal of Economic Theory 110, 154–175.
{% note 1, p. 224 surveys the findings on convex versus concave utility for gains versus losses %}
Köbberling, Veronika, Christiane Schwieren, & Peter P. Wakker (2007) “Prospect-Theory’s Diminishing Sensitivity versus Economics’ Intrinsic Utility of Money: How the Introduction of the Euro Can Be Used to Disentangle the Two Empirically,” Theory and Decision 63, 205–231.
Link to paper
{% Tradeoff method %}
Köbberling, Veronika & Peter P. Wakker (2003) “Preference Foundations for Nonexpected Utility: A Generalized and Simplified Technique,” Mathematics of Operations Research 28, 395–423.
Link to paper
Background paper, used in proofs
{% Tradeoff method %}
Köbberling, Veronika & Peter P. Wakker (2004) “A Simple Tool for Qualitatively Testing, Quantitatively Measuring, and Normatively Justifying Savage’s Subjective Expected Utility,” Journal of Risk and Uncertainty 28, 135–145.
Link to paper
{% The exponential utility form recommended in this paper to fit loss aversion is found to fit data best by von Gaudecker, Hans-Martin, Arthur van Soest, & Erik Wengström (2011). %}
Köbberling, Veronika & Peter P. Wakker (2005) “An Index of Loss Aversion,” Journal of Economic Theory 122, 119–131.
Link to paper
Link to typo
(If the link does not work on some computers: go to Papers and comments; go to paper 05.2 there; see comments there.)
{% ambiguity seeking for losses: found
ambiguity seeking for unlikely: found
Confirm the fourfold pattern of ambiguity attitudes: find perfect a(mbiguity-generated)-insensitivity with ambiguity seeking for unlikely gains and ambiguity aversion for moderate and likely gains, and ambiguity seeking for losses. They are probably the first to test ambiguity aversion for mixed prospects, and find neutrality there. So, the common loss aversion for risk is not amplified for ambiguity.
They measure ambiguity attitudes from direct choice between an ambiguous and a nonambiguous option where an ambiguity-neutral person should be indifferent, and also from matching probabilities. Differences between gains, losses, mixed, high, and low probabilities are between-subjects. They use the same implementations everywhere, giving very clean data.
P. 276, §6.2, points out that the smooth model can accommodate sign dependence, but not insensitivity. Multiple priors models as existing today cannot handle sign/reference dependence, but generalizations are straightforward. %}
Kocher, Martin G., Amrei Marie Lahno, & Stefan T. Trautmann (2018) “Ambiguity Aversion Is not Universal,” European Economic Review 101, 268–283.
{% decision under stress;
losses from prior endowment mechanism: they do not do this, but use a very interesting alternative, in Experiment 1, that can be called “losses from posterior endowment”. Subjects are told that there are two parts, first A and then B. They are told that in A they may lose, and in B they may gain, without being told how much each is. In reality, the gain in Part B will always at least cover the loss in Part A. The endowment is not prior but posterior, so to say. No untrue things are told to subjects here, so in this sense there is no deception. But subjects can come out saying: “They may tell you that you may lose but in reality, don’t worry.” Another small drawback is that there is an income effect of a weak kind. Part B was not relevant so it does not matter there, but in Part A subjects know that more money is coming. Because they don’t know how much, this income effect is really weak. Despite these two minor drawbacks, this is by far the best implementation of real incentives for losses that I ever saw; in fact, the only one in the literature so far that I consider valid. The losses-from-prior-endowment mechanism has drawbacks that are too big, with 1/3 of the subjects integrating the payments. So, this is a very interesting new way to implement losses!
Study time pressure (TP) for choices under risk, for pure gains, pure losses, and mixed prospects (both gains and losses). TP does not affect risk aversion under gains, increases it (turning majority risk seeking into majority risk aversion) for losses, and has a mixed effect for mixed prospects: effect 1: when choosing between a nondegenerate pure-gain prospect and a mixed prospect, TP moves preference towards the pure-gain prospect. Effect 2: when choosing between a nondegenerate pure-loss prospect and a mixed prospect, TP moves preference towards the mixed prospect.
The authors claim that their finding on mixed prospects falsifies PT, but I disagree. It only falsifies PT-with-the-added-assumption-that-no-parameter-of-PT-other-than-loss-aversion-will-be-affected-by-TP. (Then indeed Effect 1 implies increased loss aversion and Effect 2 implies decreased loss aversion. The latter claim is subtle and requires some thinking, but is correct; see Exercise 9.3.8 in my 2010 book.) However, there is too little evidence for the added assumption. For gains they find no change in risk aversion, but this is a null hypothesis accepted, which is weak evidence. Also, they only carry out particular tests of risk aversion, and not of insensitivity. For losses they do in fact find a change of risk attitude, falsifying the above added assumption. A more detailed investigation of the parameters of PT and their interactions, with possibly more detailed data, would be required before we can draw concrete conclusions about PT and its parameters under TP. The big picture of the results is increased insensitivity under TP, agreeing with PT.
As an aside, the EU-with-aspiration is not really a deviation from PT. It is an extreme degree of PT, with extreme insensitivity towards outcomes. Diecidue & Van de Ven show this in a mathematical sense, with the discontinuity of U at 0. This is a natural extension of the steepness of U at 0 that PT postulates.
Whereas PT is not violated by the data as I see it, EU-with-aspiration is in a way. It is violated by the change in attitude for losses, or at best has nothing to say on that.
In summary, I disagree with both of the following sentences in their conclusion “Our results show that typical nonexpected utility patterns as modeled by prospect theory may not provide an appropriate description of choice behavior if time pressure becomes important. We have shown that recently developed models of expected utility with an aspiration level (Diecidue & van de Ven 2008) may be a useful alternative in such situations.”
Experiment 1 had some order effects, but Experiment 2 controlled for them and showed that they play no role.
They also study effects of providing info about expected values. This only had effect for the choices with mixed prospects, moving these choices towards expected value maximization. Besides the awareness explanation proposed by the authors in the last para of the paper, it may also be because for mixed prospects, with loss aversion coming in, preferences are volatile rather than conscious, making subjects more open to any kind of external influence. %}
Kocher, Martin G., Julius Pahlke, & Stefan T. Trautmann (2013) “Tempus Fugit: Time Pressure in Risky Decisions,” Management Science 59, 2380–2391.
{% %}
Kocher, Martin G. & Matthias Sutter (2006) “Time Is Money—Time Pressure, Incentives, and the Quality of Decision-Making,” Journal of Economic Behavior & Organization 61, 375–392.
{% Gives statistics about returns on stocks during the past century. %}
Kocherlakota, Narayana R. (1996) “The Equity Premium: It’s Still a Puzzle,” Journal of Economic Literature 34, 42–71.
{% Assumes both uncertainty and time. The first aggregation is over time, through standard constant discounting. The next aggregation, over uncertainty, is through maxmin. The model is much the same as Anscombe-Aumann (AA), only with temporal options and discounted utility instead of EU. Uncertainty aversion of Gilboa & Schmeidler now becomes preference for smoothing over events rather than over time. It is more involved because the extraneous weights of objective probabilities are now not available, but it is achieved with intertemporal hedging (p. 241). Stationarity (p. 243) nicely becomes an analog of certainty independence. The paper provides a related result for the variational model. %}
Kochov, Asen (2015) “Time and No Lotteries: An Axiomatization of Maxmin Expected Utility,” Econometrica 83, 239–262.
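In a formula sketch (my notation, not necessarily the paper’s): with C the set of priors, δ the discount factor, and u the instantaneous utility, the representation just described evaluates a consumption stream c = (c0, c1, …) as

```latex
V(c) \;=\; \min_{p \in C}\; \mathbb{E}_p\!\Big[\sum_{t=0}^{\infty} \delta^{t}\, u(c_t)\Big],
```

so that discounted utility takes the place that EU of lotteries has in the Gilboa & Schmeidler setup.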
{% Has time and uncertainty together. %}
Kochov, Asen (2015) “Stationary Cardinal Utility,” in preparation.
{% %}
Koçkesen, Levent & Efe A. Ok (2004) “Strategic Delegation by Unobservable Incentive Contracts,” Review of Economic Studies 71, 397–424.
{% %}
Koçkesen, Levent, Efe A. Ok, & Rajiv Sethi (2000) “The Strategic Advantage of Negatively Interdependent Preferences,” Journal of Economic Theory 92, 274–299.
{% %}
Kóczy, László Á. & Alexandru Nichifor (2013) “The Intellectual Influence of Economic Journals: Quality versus Quantity,” Economic Theory 52, 863–884.
{% %}
Kodrzycki, Yolanda K. & Pingkang Yu (2006) “New Approaches to Ranking Economics Journals,” Contributions to Economic Analysis & Policy 5, Article 24.
{% %}
Koehler, Derek J., Lyle A. Brenner, & Amos Tversky (1997) “The Enhancement Effect in Probability Judgment,” Journal of Behavioral Decision Making 10, 293–313.
{% Nice illustration of ad hoc techniques used in law to deal with probabilities. %}
Koehler, Jonathan J. & Arienne P. Brint (2001) “Psychological Aspects of the Loss of Chance Doctrine,”
{% small probabilities: small probabilities are overweighted if people can easily imagine an example, and underweighted otherwise, also if the imaginability-manipulation is clearly rationally irrelevant. %}
Koehler, Jonathan J. & Laura Macchi (2004) “Thinking about Low-Probability Events: An Exemplar-Cuing Theory,” Psychological Science 15, 540–546.
{% %}
Koele, Pieter & Joop van der Pligt (1993) “Beslissen en Oordeel.” Boom, Amsterdam.
{% ambiguity seeking for unlikely: they confirm this. They also let subjects decide on behalf of others and then find the same. No significant differences with individual choices. Nice thing is that when determining matching probabilities (the authors use the term probability equivalent) for unlikely event they take a choice list that is symmetric for ambiguity neutrality (then p = 0.10; they took 0.10, 0.19, 0.04, 0.16, 0.07, and 0.13; see Table 1) so that there is no center-bias or regression to the mean.
I disagree with the sentence in the final para of the conclusion: “studies, we find that ambiguity attitudes depend strongly on the likelihood range considered.” I think that ambiguity attitude is the same for low and moderate likelihoods: always it is insensitivity. I would agree with the sentence of the authors if they had replaced the term “ambiguity attitudes” with the term “ambiguity aversion.” %}
Koenig-Kersting, Christian & Stefan T. Trautmann (2016) “Ambiguity Attitudes in Decisions for Others,” Economics Letters 146, 126–129.
{% %}
Koerts, Johan & Erik de Leede (1973) “Statistical Inference and Subjective Probability,” Statistica Neerlandica 27, 139–161.
{% Quiggin says he claims that there must be fundamental uncertainty, because otherwise there could not be free will. %}
Koestler, Arthur (1965) “The Roots of Coincidence.” Picador, London.
{% questionnaire for measuring risk aversion: seem to propose questionnaire for risk-attitude. %}
Kogan, Nathan & Michael A. Wallach (1964) “Risk-Taking: A Study in Cognition and Personality.” Holt, Rinehart & Winston, New York.
{% %}
Kohlas, Jürg & Paul-André Monney (1994) “Theory of Evidence—A Survey of its Mathematical Foundations, Applications and Computational Aspects,” ZOR - Mathematical Methods of Operations Research 39, 35–68.
{% dynamic consistency; normal/extensive form; nice exposition of principles in refinements of the Nash equilibrium concepts %}
Kohlberg, Elon (1990) “Refinement of Nash Equilibrium: The Main Ideas.” In Tatsuro Ichiishi, Abraham Neyman, Yair Tauman (eds.) Game Theory and Applications, 3–45, Academic Press, New York.
{% normal/extensive form; decision trees; dynamic consistency. Footnote 3 says: “We adhere to the classical point of view that the game under consideration fully describes the real situation—that any (pre)commitment possibilities, any repetitive aspect, any probabilities of error, or any possibility of jointly observing some random event, have already been modelled in the game tree.” Later, they nicely write that players are in cubicles where there is “not even a window” and, thus, nicely exclude observations of sunspots.
They argue for forward induction (and I agree) in the game where Harsanyi & Selten (1988) argue for backward induction. Harper (1986, 1991) developed a logic with ratifiability to justify forward induction. Reviewed in Joyce & Gibbard (1998). %}
Kohlberg, Elon & Jean-François Mertens (1986) “On the Strategic Stability of Equilibria,” Econometrica 54, 1003–1037.
{% consistency: Observation in §2 (p. 108) shows that under some dynamic conditions, two-stage CEU (Choquet expected utility) must be SEU.
dynamic consistency. Non-EU & dynamic principles by restricting domain of acts: considers a two-stage structure with first-stage events E1,…,En with Ej = {sj1,…,sjnj}, and a ranking of the states in which the states of each first-stage event are ranked consecutively (s(j−1)n(j−1) ≽ sj1 ≽ … ≽ sjnj ≽ s(j+1)1), calling such acts nest-comonotonic. On this subset we have everything the same as under SEU, also if we reduce with CE (certainty equivalent) substitution. So here different ways to evaluate dynamic prospects, and to update (Section 4), agree as they do under SEU. The author shows how restrictive backward induction is.
Then he imposes an axiom requiring that the CE substitution for each event E should be independent of the rank of E. It holds if and only if the weighting function W is an exponential transform of a probability measure (also implying probabilistic sophistication). He assumes richness both for outcomes and for states.
Corollary 2 (p. 113) characterizes CEU with state-dependent utility as in Chew & Wakker (1996). Theorem 2 relates first- and second-stage exponential CEU by the Bayesian update rule for weighting functions. %}
Koida, Nobuo (2012) “Nest-Monotonic Two-stage Acts and Exponential Probability Capacities,” Economic Theory 50, 99–124.
{% Seems to have put forward representative income as the welfare analog of the certainty equivalent of expected utility. The AKS (Atkinson-Kolm-Sen) index takes the difference between the average value and the representative value (which is the analog of the risk premium) and divides it by the absolute value of the average. Similar indexes have been used ad hoc in risk theory to measure risk aversion, but their problem is that in the small they tend to 0, as if risk neutrality. %}
Kolm, Serge-Christophe (1969) “The Optimal Production of Social Justice.” In Julius Margolis & Henri Guitton (eds.) Public Economics, Macmillan, London.
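In a formula sketch (my notation): with μ the average income and ξ the representative (equally-distributed-equivalent) income, the AKS index just described is

```latex
I_{\mathrm{AKS}} \;=\; \frac{\mu - \xi}{|\mu|},
```

which for positive average income reduces to the familiar form 1 − ξ/μ, with μ − ξ the welfare analog of the risk premium.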
{% risky utility u = transform of strength of preference v, haven’t checked if latter doesn’t exist. Seems to argue that. %}
Kolm, Serge-Christophe (1993) “The Impossibility of Utilitarianism.” In Peter Koslowski & Yuichi Shionoya (eds.) The Good and the Economical: Ethical Choices in Economics and Management, 30–66, Springer, Berlin.
{% %}
Kolm, Serge-Christophe (1998) “Chance and Justice: Social Policies and the Harsanyi-Vickrey-Rawls Problem,” European Economic Review 42, 1393–1416.
{% %}
Kolm, Serge-Christophe (2002) “Modern Theories of Justice.” MIT Press, Cambridge, MA.
{% %}
Kolmogorov, Andrej N. (1930) “Sur la Notion de Moyenne,” Rendiconti della Academia Nazionale dei Lincei 12, 388–391.
{% The “bible” where he lays down the current axiomatic foundations of probability theory. %}
Kolmogorov, Andrej N. (1933) “Grundbegriffe der Wahrscheinlichkeitsrechnung.” Springer, Berlin. Translated into English by Nathan Morrison (1950) “Foundations of the Theory of Probability,” Chelsea, New York. 2nd English edn. 1956.
{% The “bible” where he lays down the current axiomatic foundations of probability theory. %}
Kolmogorov, Andrej N. (1950) “Foundations of the Theory of Probability,” Chelsea, New York. 2nd English edn. 1956.
{% %}
Kolmogorov, Andrej N.: 4 discussions of his work in The Annals of Statistics 18 (1990), pp. 987–1031.
{% PT falsified: a theory where people choose several reference points, and primarily go by the probability of exceeding those, fits the data well. It is like Diecidue & van de Ven (2008) and Payne (2005), although they do not cite those. It is also like Lopes’ model, which is cited. However, the reference points are simply introduced here physically as thresholds above which the subjects gain points to participate in a bonus. Thus they are just outcomes rather than psychological thresholds, and in this sense the paper does not really show that thresholds lead to deviations from just maximizing outcomes. %}
Koop, Gregory K. & Joseph G. Johnson (2012) “The Use of Multiple Reference Points in Risky Decision Making,” Journal of Behavioral Decision Making 25, 49–62.
{% probability intervals: %}
Koopman, Bernard O. (1940) “The Bases of Probability,” Bulletin of the American Mathematical Society 46, 763–774.
Reprinted in Henry E. Kyburg Jr. & Howard E. Smokler (1964, eds.) Studies in Subjective Probability, Wiley, New York; 2nd edn. 1980, Krieger Publishing Co., New York.
{% %}
Koopman, Bernard O. (1940) “The Axioms and Algebra of Intuitive Probability,” Annals of Mathematics 41, 269–292.
{% %}
Koopman, Bernard O. (1941) “Intuitive Probability and Sequences,” Annals of Mathematics 42, 169–187.
{% P. 140 seems to plead for introspection, though it may be only hypothetical choice as Savage also wanted. %}
Koopmans, Tjalling C. (1957) “The Construction of Economic Knowledge.” In Three Essays on the State of Economic Science (Tjalling C. Koopmans, ed.) 127–166, McGraw-Hill Book Company, Ch. II.
{% P. 306: stationarity is independence of calendar time %}
Koopmans, Tjalling C. (1960) “Stationary Ordinal Utility and Impatience,” Econometrica 28, 287–309.
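In a formula sketch: for consumption streams, stationarity in Koopmans’ sense can be written as

```latex
(c, x_1, x_2, \ldots) \;\succcurlyeq\; (c, y_1, y_2, \ldots)
\quad\Longleftrightarrow\quad
(x_1, x_2, \ldots) \;\succcurlyeq\; (y_1, y_2, \ldots)
```

for every common first outcome c: dropping a shared first period and advancing the rest does not affect preference, so only relative, not calendar, time matters.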
{% Kirsten&I; %}
Koopmans, Tjalling C. (1972) “Representations of Preference Orderings with Independent Components of Consumption,” & “Representations of Preference Orderings over Time.” In Charles Bartlett McGuire & Roy Radner (eds.) Decision and Organization, 57–100, North-Holland, Amsterdam.
{% %}
Koopmans, Tjalling C., Peter A. Diamond, & Richard E. Williamson (1964) “Stationary Utility and Time Perspective,” Econometrica 32, 82–100.
{% %}
Koopmanschap, Marc A., Frans F.H. Rutten, B. Martin van Ineveld, & Leona van Roijen (1997) “Reply to Johannesson’s and Karlsson’s Comment,” Journal of Health Economics 16, 257–259.
{% Subjects choose between safe options and fifty-fifty risks for gains and losses, always with at most one nonzero outcome. They also chose between an immediate payment and a delayed larger payment. They did so when having pain, and when not. For gains there was more risk seeking under pain, with no difference for losses. Pain increased impatience. %}
Koppel, Lina, David Andersson, India Morrison, Kinga Posadzy, Daniel Västfjäll, & Gustav Tinghög (2017) “The Effect of Acute Pain on Risky and Intertemporal Choice,” Experimental Economics 20, 878–893.
{% Generalizes Savage (1954) to algebras of events. Furthermore, to mosaics of events. Also does it with probabilistic sophistication. He has a finely ranged probability, meaning that for each ε > 0 there is a partition with all events having smaller probability. %}
Kopylov, Igor (2007) “Subjective Probabilities on “Small” Domains,” Journal of Economic Theory 133, 236–265.
{% EU+a*sup+b*inf: works in the Anscombe-Aumann (AA) model with two primitives: direct choice, and choice maintained after any deferral. It leads to a multiple priors model where choice after any deferral relates to unanimous preference over all priors, and immediate choice goes by a sort of maxmin model, taking ε times the infimum plus (1−ε) times EU. The set of priors is derived endogenously here. The same model with this set exogenous is in Kopylov (2016).
Section 2.5 is on complete ignorance (Principle of Complete Ignorance). %}
Kopylov, Igor (2009) “Choice Deferral and Ambiguity Aversion,” Theoretical Economics 4, 199–225.
{% Representation à la Dekel, Lipman, & Rustichini (2009, RESTUD) over menus that can capture temptation and so on. %}
Kopylov, Igor (2009) “Finite Additive Utility Representations for Preferences over Menus,” Journal of Economic Theory 144, 354–374.
{% Extends probabilistic sophistication to infinite and unbounded distributions, so that normal distributions and so on can be handled, mainly by using Arrow’s monotone continuity. %}
Kopylov, Igor (2010) “Unbounded Probabilistic Sophistication,” Mathematical Social Sciences 60, 113–118.
{% Uses techniques (truncation continuity) from my 93 MOR paper, with a countable additivity axiom added. That way it achieves useful simplifications. Also, very importantly, this paper is the FIRST to axiomatize CONSTANT DISCOUNTING FOR CONTINUOUS TIME. Often as this functional has been used, no one had ever axiomatized it before. There are close results by Grodal and Vind, and by Harvey & Østerdal, but they did not really have it; Kopylov is the first. %}
Kopylov, Igor (2010) “Simple Axioms for Countably Additive Subjective Probability,” Journal of Mathematical Economics 46, 867–876.
{% preference for flexibility: Gul & Pesendorfer’s (2001) menu framework. Example: paying for not going to the gym. Avoiding tasks for fear of negative self-evaluation. Has a utility component that reflects emotional costs and benefits of perfectionism. %}
Kopylov, Igor (2012) “Perfectionism and Choice,” Econometrica 80, 1819–1843.
{% one-dimensional utility: states continuity conditions that are suited for simple proofs and extensions of domains while preserving the continuity. %}
Kopylov, Igor (2016) “Canonical Utility Functions and Continuous Preference Extensions,” Journal of Mathematical Economics 67, 32–37.
{% Considers multiple priors with the set Π of priors exogenously given. Good arguments can be given for taking Π as exogenous. And, as done so often, the Anscombe-Aumann model is used. The model is a convex combination of EU and maxmin EU: (1−ε)EU_p + ε·inf_{q∈Π}EU_q for a subjective probability measure p. So, a special case of neo-additive (EU+a*sup+b*inf). §1.3 shows that the model can be rewritten as maxmin EU with ε-contamination multiple priors. The set of priors is exogenous here; the same model with this set endogenous is in Kopylov (2009).
A monotonicity condition over Π ensures that p is in Π. The security level of each act is its EU minimized over Π. For acts with the same security level, vNM independence holds, so that then EU governs. The preference value of an act depends on both the EU mentioned and the security level, leading to the convex combination. Given the linearity present in the Anscombe-Aumann model, the convex combination results. Note that Jaffray (1994 §3.4.3) also characterizes maxmin with Π exogenously given.
The paper also considers updating with a weakening of dynamic consistency (dynamic consistency). %}
Kopylov, Igor (2016) “Subjective Probability, Confidence, and Bayesian Updating,” Economic Theory 62, 635–658.
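The convex combination described in the annotation above is straightforward to evaluate numerically. Below is a minimal sketch; the priors, act utilities, and the value of ε are all illustrative assumptions, not taken from Kopylov's paper:

```python
# Sketch of the evaluation (1 - eps)*EU_p + eps * min over the prior set Pi.
# The priors, act utilities, and eps are illustrative, not from Kopylov (2016).

def expected_utility(prior, utilities):
    """Expected utility of an act under one prior over the states."""
    return sum(p * u for p, u in zip(prior, utilities))

def contamination_value(p, priors, utilities, eps):
    """(1 - eps) times EU under the central prior p, plus eps times the
    security level: the EU minimized over the exogenous set of priors."""
    security = min(expected_utility(q, utilities) for q in priors)
    return (1 - eps) * expected_utility(p, utilities) + eps * security

# Two states; the act yields utility 1 in state 1 and 0 in state 2.
p = [0.5, 0.5]                                   # central subjective prior
Pi = [[0.3, 0.7], [0.5, 0.5], [0.7, 0.3]]        # exogenous prior set containing p
act = [1.0, 0.0]
print(contamination_value(p, Pi, act, eps=0.2))  # = 0.8*0.5 + 0.2*0.3, i.e. about 0.46
```

With ε = 0 the model reduces to subjective EU, and with ε = 1 to pure maxmin over Π, matching the rewriting as ε-contamination noted in §1.3.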
{% %}
Korchin, Sheldon J. (1976) “Modern Clinical Psychology.” Harper & Row Inc., New York.
{% %}
Korhonen, Pekka, Herbert Moskowitz, & Jyrki Wallenius (1992) “Multiple Criteria Decision Support—A Review,” European Journal of Operational Research 63, 361–375.
{% Dutch book: they test this in fact (although not referring to uncertainty but only to multicriteria choice), using hypothetical choice, having 144 students choose between pairs of credit points and grade points for the coming academic year. %}
Korhonen, Pekka J., Kari Silvennoinen, Jyrki Wallenius, & Anssi Öörni (2012) “Can a Linear Value Function Explain Choices? An Experimental Study,” European Journal of Operational Research 219, 360–367.
{% conditional probability %}
Koriat, Asher (2008) “Alleviating Inflation of Conditional Predictions,” Organizational Behavior and Human Decision Processes 106, 61–76.
{% Written from legal perspective, but with very detailed discussion and review of endowment effect. %}
Korobkin, Russell (2003) “The Endowment Effect and Legal Analysis,” Northwestern University Law Review 97, 1227–1293.
{% Explains lattices and Möbius inverses from general mathematics. %}
Koshevoy, Gleb A. (1998) “Distributive Lattices and Products of Capacities,” Journal of Mathematical Analysis and Applications 219, 427–441.
{% game theory for nonexpected utility %}
Koskievic, Jean-Max (1997) “Bargaining and Rank Dependent Utility Model,”
{% value of information: takes emotions of fear for negative information as part of the utility function. Thus, aversion to information can arise. But it can’t be anything and, for instance, it will never make a person go to a bad doctor instead of a good doctor. %}
Köszegi, Botond (2003) “Health Anxiety and Patient Behavior,” Journal of Health Economics 22, 1073–1084.
{% Surveys behavioral ideas, such as loss aversion, in contract theory and mechanism design. %}
Köszegi, Botond (2014) “Behavioral Contract Theory,” Journal of Economic Literature 52, 1075–1118.
{% Biggest contribution of this paper is to give background to what the reference point is, and doing so in a tractable and implementable manner. Big question in prospect theory is what the reference point is. This paper, as explained p. 1136 end, gives an answer, using common economic-model inputs (besides the gain-loss function and interpretations of utility as introspective rather than revealed-preference measurable).
u(c|r) is utility of outcome c if reference outcome is r. The authors consider U(F|G) with F and G prospects (probability distributions over outcomes), F being the prospect received, and G being the reference prospect. (conservation of influence: would be nice to reconsider it from that perspective). So, not only the object received but also the reference point can be random, as in Sugden (2003). G need not be status quo but is EXPECTED prospect. (Here expected could be a natural-language term, but it also is taken as a formal expectation integrating out over a probability distribution over decision situations, where apparently an expectation is the operation to be used but this is only applied to the second-stage probabilities. I will ignore this extra stage in what follows.) If a person decides to choose some F from an available set, then F will also become the expectation, and U(F|F) is the evaluation to be considered. Choosing from an available set then amounts to maximizing the function F --> U(F|F) which, in this interpretation, could be taken as just a consumption utility function of F with no reference dependence involved. Caveat is that F must be a personal equilibrium (PE) in the sense that U(F´|F) should not exceed U(F|F) for the available F´. The best such, maximizing F --> U(F|F), is the preferred personal equilibrium (PPE). F|F is a PE if sufficiently strong assumptions of loss aversion are made, favoring the reference point enough relative to other points. A strange thing is that in all evaluations U(F|G) the authors assume F and G stochastically independent, also if F is G. It means that what is known as disappointment (under regret you compare with other things that could have happened had you acted differently; under disappointment you compare with other things that could have happened had nature, coincidence, acted differently) plays a big role in this model. 
It is also remarkable that in optimizing F --> U(F|F) (rather than staying put in the first PE one runs into), the reference point is apparently something to choose so as to optimize, and utilities of different reference points are compared to each other. The function U(F|F), with stochastic independence of the one F from the other F, is like the one of Delquié & Cillo (2006). Traditional models only have choices GIVEN a reference point, and endogeneity of a reference point then means no more than that we infer from choices what the reference point is but still without assuming that the reference point was an actual thing to choose.
I next give details about U. The authors propose that utility U(c) consists of two components: first a consumption utility, second a gain-loss component (I would prefer to interpret it as a more general perception component that also captures diminishing sensitivity etc., similar to Sugden’s (2003, JET) gain-loss interpretation which in fact also captures more general psychological perceptions), and get U(c) = m(c) + n(c|r), where n(c|r) = μ(m(c) − m(r)). They take U as a sum of m and n, and not as a composition where U(c) would be u(φ(c)) with φ a (mis)perception, which would be my preferred way to model it. The sum suggests that psychological perception is an additional source (error possibly) of utility, besides consumption, rather than an intervening misperception. This point is essential when they impose the assumptions of prospect theory on the utility-difference transformation μ.
They propose that the reference point is the expectation of future consumption. If this expectation is related to the decision yet to be taken (rather than a decision made before, in the past), then an implicit definition results. Equilibria are formulated for when this can happen consistently. In the case of multiple equilibria, the one with highest utility is selected, which, if not taken as if, would suggest that the consumer is actively choosing between different reference points to take. Traditionally, reference points are not objects of choice, but aspects determining choice.
The authors derive predictions about more or less willingness to buy depending on whether one had long time to get accustomed to new situation with adaptation of reference point. They also get some self-fulfilling results where a consumer wants to buy iff he expects to buy.
They in fact take multidimensional commodity bundles with, for simplicity, additively separable utility (with a common discussion that separability is justified under a proper consequentialistic definition of components):
U(c) = U(c1,…,cn) = U1(c1) + ... + Un(cn) with each Uk(ck) = mk(ck) + nk(ck|rk).
For the underlying consequentialism assumption the nicest discussion that I know is in Broome (1991). For the plausibility of this decomposition, separability of the components is crucial, because consumers have to really perceive them separately so as to take reference points for each separately.
Sometimes they take μ linear outside of 0, so that all it does is generate a kink at the reference point. They explicitly assume that nonrevealed-preference-based introspective or psychological inputs are used to determine various components of utility. This interpretation is desirable to justify comparisons between U(F|G) and U(F´|G´) with G different from G´, as happens in this paper, because it is hard to give revealed-preference foundations to it. U(F|G) is decreasing in G, so that if we were completely free to choose G we would simply choose G extremely low to attain infinite happiness.
Whether status quo is different from what is expected is partly terminological. One could argue that status quo by definition incorporates what one then expects.
We all know from everyday experience that we sometimes manipulate our expectations, e.g. lowering them to avoid disappointment. This looks like choosing the reference point. It is, however, a minor marginal effect to change our utilities just a little bit. It can only justify a small part of utility. Making yourself more happy by choosing a different reference point is no more than an illusion. Loss aversion, on the other hand, can more than double our perception of utility. Hence, these two don’t sit together well if treated as the same component as done in this paper.
biseparable utility: the most popular special case, with μ piecewise linear, is biseparable utility, and even RDU (Masatlioglu & Raymond 2016). %}
Köszegi, Botond & Matthew Rabin (2006) “A Model of Reference-Dependent Preferences,” Quarterly Journal of Economics 121, 1133–1165.
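The decomposition U(c) = m(c) + μ(m(c) − m(r)) and the independent-draws evaluation U(F|F) discussed in the annotation above can be sketched as follows. The piecewise-linear μ with slope η = 1 and loss-aversion parameter λ = 2, linear consumption utility m, and the example lottery are illustrative assumptions, not parameter values from the paper:

```python
# Sketch of U(F|G) = E_F[m] + E_F E_G[mu(m(c) - m(r))], with F and G drawn
# independently (the "disappointment" feature noted in the annotation).
# mu, m, and the example lottery are illustrative assumptions.

def mu(x, eta=1.0, lam=2.0):
    """Piecewise-linear gain-loss function: kinked at 0, steeper for losses."""
    return eta * x if x >= 0 else eta * lam * x

def U(F, G, m=lambda c: c):
    """F, G: lists of (outcome, probability) pairs. m: consumption utility."""
    consumption = sum(p * m(c) for c, p in F)
    gain_loss = sum(p * q * mu(m(c) - m(r)) for c, p in F for r, q in G)
    return consumption + gain_loss

# A fifty-fifty lottery over 0 and 10, evaluated against itself: the U(F|F)
# that a personal equilibrium maximizes.
F = [(0.0, 0.5), (10.0, 0.5)]
print(U(F, F))  # 5.0 + 0.25*mu(10) + 0.25*mu(-10) = 5.0 + 2.5 - 5.0 = 2.5
```

Because the reference draw is independent of the outcome draw, even F evaluated against itself carries a strictly negative gain-loss term under loss aversion (here −2.5), exactly the disappointment-like feature the annotation points out.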
{% Use their 2006 QJE model to predict risk attitudes after small or big gains or losses, being expected or being surprises. %}
Köszegi, Botond & Matthew Rabin (2007) “Reference-Dependent Risk Attitudes,” American Economic Review 97, 1047–1073.
{% Dynamic model on plans for future consumption. Meant to be rational. Loss aversion over changes in beliefs. Reference point endogenously resulting from sophisticated optimization as in their other papers. So one doesn’t improve utility by choosing better alternatives, but by changing one’s perception. A classical modeling would be that one chooses between pairs (F,G), with G a choice object rather than a reference point.
P. 912 Eq. 1: instant utility is sum of reference-dependent classical consumption utility and gain-loss utility derived from changes in belief about future outcomes.
P. 913 2/3: money in prospect theory is news about future consumption.
Pp. 913-914: belief-comparisons go through quantile-comparisons.
P. 914: loss aversion consists of two parts: (1) Kink of utility at 0 (their A4); (2) U´(-x) > U´(x) for all x > 0 (their A2).
Their (A3) has U convex for losses and concave for gains. But they will often assume utility linear for gains and losses.
P. 930: they write that their model crucially depends on what people believe, which makes it hard to test. %}
Köszegi, Botond & Matthew Rabin (2009) “Reference-Dependent Consumption Plans,” American Economic Review 99, 909–936.
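The quantile-by-quantile belief comparison mentioned above (pp. 913-914) can be sketched numerically: gain-loss utility of news applies μ to the difference between the new and old beliefs' quantile functions, averaged over percentiles. The gain-loss function, grid size, and example beliefs are illustrative assumptions:

```python
# Sketch of news utility via quantile comparison: average mu(new - old quantile)
# over a percentile grid. mu, the grid, and the example beliefs are illustrative.

def mu(x, eta=1.0, lam=2.0):
    """Piecewise-linear gain-loss function with loss aversion."""
    return eta * x if x >= 0 else eta * lam * x

def quantile(dist, q):
    """Inverse CDF of a discrete distribution given as sorted (outcome, prob) pairs."""
    cum = 0.0
    for c, p in dist:
        cum += p
        if q <= cum:
            return c
    return dist[-1][0]

def news_utility(old, new, n=1000):
    """Average of mu(new quantile - old quantile) over an n-point midpoint grid."""
    qs = [(i + 0.5) / n for i in range(n)]
    return sum(mu(quantile(new, q) - quantile(old, q)) for q in qs) / n

old = [(0.0, 0.5), (10.0, 0.5)]    # yesterday's belief about future consumption
new = [(0.0, 0.25), (10.0, 0.75)]  # good news: the good outcome became likelier
print(news_utility(old, new))  # 2.5: a quarter of the percentiles move from 0 to 10
```

Bad news in the reverse direction would weigh more heavily: swapping `old` and `new` gives −5.0 under λ = 2, illustrating loss aversion over changes in beliefs.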
{% %}
Kothiyal, Amit (2012) “Subjective Probability and Ambiguity.” Ph.D. dissertation, Erasmus School of Economics, Erasmus University Rotterdam, the Netherlands.
{% proper scoring rules %}
Kothiyal, Amit, Vitalie Spinu, & Peter P. Wakker (2011) “Comonotonic Proper Scoring Rules to Measure Ambiguity and Subjective Beliefs,” Journal of Multi-Criteria Decision Analysis 17, 101–113.
Link to paper
{% finite additivity %}
Kothiyal, Amit, Vitalie Spinu, & Peter P. Wakker (2011) “Prospect Theory for Continuous Distributions: A Preference Foundation,” Journal of Risk and Uncertainty 42, 195–210.
Link to paper
{% http://dx.doi.org/10.1287/opre.2013.1230 %}
Kothiyal, Amit, Vitalie Spinu, & Peter P. Wakker (2014) “Average Utility Maximization: A Preference Foundation,” Operations Research 62, 207–218.
Link to paper
{% DOI: http://dx.doi.org/10.1007/s11166-014-9185-0
Shows that prospect theory with the source method better fits/predicts data than other popular ambiguity models. It thus corrects an analysis by Hey, Lotito, & Maffioletti (JRU, 2010). %}
Kothiyal, Amit, Vitalie Spinu, & Peter P. Wakker (2014) “An Experimental Test of Prospect Theory for Predicting Choice under Ambiguity,” Journal of Risk and Uncertainty 48, 1–17.
Link to paper
{% Demonstrate complexity aversion (w.r.t. nr. of stages and branches). %}
Kovarik, Jaromir, Dan Levin, & Tao Wang (2016) “Ellsberg Paradox: Ambiguity and Complexity Aversions Compared,” Journal of Risk and Uncertainty 52, 47–64.
{% %}
Krabbe, Paul F.M. (1998) “The Valuation of Health Outcomes,” Ph.D. dissertation, Erasmus University, Rotterdam, the Netherlands.
{% time preference. In a nicely simple setup they show that the value of a health state depends on what came before or after, so there is a sequence effect.
intertemporal separability criticized: sequence effects %}
Krabbe, Paul F.M. & Gouke J. Bonsel (1998) “Sequence Effects, Health Profiles, and the QALY Model: In Search of Realistic Modeling,” Medical Decision Making 18, 178–186.
{% ordering of subsets: show that the five necessary conditions for representability of an ordering of subsets by a finitely additive probability measure are also sufficient if the state space has 4 or fewer elements. With 5 or more states they are not, and counterexamples exist. With 5 states we still always have almost representability, but with 6 or more that too can fail. They give necessary and sufficient conditions for finite state spaces, amounting to the duality conditions for solving linear inequalities.
Their result in fact shows that for only two consequences and no more than 4 states of nature, Savage’s (1954) axioms (with the richness condition P6 removed) are necessary and sufficient for SEU. (De Finetti’s additivity axiom is the sure-thing principle if there are only two outcomes.) %}
Kraft, Charles H., John W. Pratt, & Abraham Seidenberg (1959) “Intuitive Probability on Finite Sets,” Annals of Mathematical Statistics 30, 408–419.
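As the annotation above notes, representability by a finitely additive measure amounts to solving a system of linear inequalities. A minimal brute-force sketch for a 3-state space (the "hidden" measure generating the ranking and the grid resolution are illustrative assumptions, not from the paper):

```python
# Sketch: search for a finitely additive probability measure representing a
# given strict ordering of all subsets of a 3-state space. The hidden measure
# and the 0.05 grid are illustrative; the paper's general criterion is the
# duality condition for such linear inequality systems.
from itertools import chain, combinations

STATES = (0, 1, 2)

def subsets(states):
    """All subsets of the state space, as tuples."""
    return list(chain.from_iterable(combinations(states, k) for k in range(len(states) + 1)))

def prob(measure, event):
    return sum(measure[s] for s in event)

def represents(measure, ranked_events):
    """True iff the measure is strictly increasing along the ranked events."""
    values = [prob(measure, e) for e in ranked_events]
    return all(a < b for a, b in zip(values, values[1:]))

# Events ranked from least to most likely under a hidden measure (0.55, 0.25, 0.2).
hidden = {0: 0.55, 1: 0.25, 2: 0.2}
ranking = sorted(subsets(STATES), key=lambda e: prob(hidden, e))

# Grid search over candidate measures (step 0.05) for one representing the ranking.
grid = [i / 20 for i in range(21)]
found = next(
    (m for p0 in grid for p1 in grid if p0 + p1 <= 1
     for m in [{0: p0, 1: p1, 2: round(1 - p0 - p1, 10)}]
     if represents(m, ranking)),
    None,
)
print(found)  # some representing measure, e.g. {0: 0.55, 1: 0.25, 2: 0.2}
```

With 3 (indeed up to 4) states, any ordering satisfying the five necessary conditions admits such a measure; the 5-state counterexamples are exactly orderings for which this search space is empty.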
{% Ambiguity as 2nd-order probabilities. A choice made can alter subjective beliefs/tastes, with regret coming in, implying Schmeidler’s (1989) quasi-convexity interpreted as uncertainty aversion. %}
Krähmer, Daniel & Rebecca Stone (2013) “Anticipated Regret as an Explanation of Uncertainty Aversion,” Economic Theory 52, 709–728.
{% Discussed by Anne Stiggelbout on Nov. 18, 1992.
Argue that discounting of money should be as strong as for health states. %}
Krahn, Murray & Amiram Gafni (1993) “Discounting in the Economic Evaluation of Health Care Interventions,” Medical Care 31, 403–418.
{% %}
Krahnen, Jan-Pieter & Martin Weber (1999) “Does Information Aggregation Depend on Market Structure?,” Zeitschrift für Wirtschafts- und Sozialwissenschaften 119, 1–22.
{% %}
Krahnen, Jan-Pieter & Martin Weber (1999) “Generally Accepted Rating Principles: A Primer.”
{% %}
Krantz, David H. (1975) “Color Measurement and Color Theory. I. Representation Theorem for Grassman Structures,” Journal of Mathematical Psychology 12, 283–303.
{% Tries to characterize belief functions. %}
Krantz, David H. (1982) “Foundations of the Theory of Evidence.” Paper presented at the Society for Mathematical Psychology, Princeton, NJ.
{% %}
Krantz, David H. & Laura K. Briggs (1990) “Judgments of Frequency and Evidence Strength,” Dept. of Psychology, Columbia University, New York.
{% If you study this book, then you will be 100 years ahead of your field. The first chapter explains how measurement starts with counting, and how standard sequences capture this. It gives a general technique for getting cardinal measurement in ordinal preference models. I was lucky to be exposed to this technique at a young age. In my young years I wrote many papers using this technique, using the term tradeoff. Unfortunately, I co-founded the right way to market it, using merely indifferences, only in Köbberling & Wakker (2004 JRU), mathematically matured in the follow-up paper Köbberling & Wakker (2003). The present, 2013, generation (I write this para in 2013) working on ambiguity and uncertainty has forgotten this technique of KLST and, hence, mostly uses the unsatisfactory Anscombe-Aumann (1963) model to get cardinality. The present 2013 generation does not have the insights of the previous generation, which by coincidence had some exceptionally deep mathematicians, being KLST. Unfortunately, Luce in his return to decision theory in the 1990s had lost his technique, and used the unsatisfactory joint receipt to get cardinality.
standard-sequence invariance; Fig. 1 in §1.2 (p. 18) depicts the construction of standard sequences.
restricting representations to subsets: p. 276
Pointed out to me by Han Bleichrodt (in Nov. 2003): §6.3.4, p. 266, 3rd and 4th para claim, without any justification, that a local version of triple cancellation implies a global one. In the algebraic approach, no such results are available in the literature though.
Thm. 4.2: strength-of-preference representation.
Kirsten&I: Ch. 6, with finitely many time points;