Bibliography



{% loss aversion: erroneously thinking it is reflection: p. 768 3rd para thinks that loss aversion generates different predictions for losses than for gains, not realizing that loss aversion is only about exchanges between gains and losses.
real incentives/hypothetical choice: for time preferences: they have several references on it on p. 775. %}

Green, Leonard, & Joel Myerson (2004) “A Discounting Framework for Choice with Delayed and Probabilistic Rewards,” Psychological Bulletin 130, 769–792.


{% Seems that they use Mazur discounting and linear utility;
Choice task between delayed reward (with fixed amount) and immediate reward. Immediate reward was adjusted to find indifference point. Delays between 3 months and 20 years. Delayed rewards between $100 and $100,000.
Hypothetical questions. Larger amounts are discounted less than smaller amounts. This could be explained by convex utility (and not by concave). Hyperbolic discounting fits data better than exponential, which could also be explained by convex utility (possibly also by concave utility).
Authors give an overview of explanations for the fact that discounting varies with reward size: overview of magnitude effect.
Data of 4 of the 24 subjects plotted at the individual level. %}
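
A minimal sketch (my own numbers, not the authors’ code) of the two discounting models under linear utility, showing how a single indifference point identifies the parameter k in Mazur’s hyperbolic form V = A/(1 + kD):

import math

def mazur_value(amount, delay, k):
    # Mazur hyperbolic discounting: V = A / (1 + k*D)
    return amount / (1.0 + k * delay)

def exponential_value(amount, delay, k):
    # Exponential discounting: V = A * exp(-k*D)
    return amount * math.exp(-k * delay)

# Hypothetical indifference point: $40 now ~ $100 in 2 years.
# Under Mazur discounting: 40 = 100/(1 + 2k), so k = (100/40 - 1)/2.
immediate, delayed, delay = 40.0, 100.0, 2.0
k = (delayed / immediate - 1.0) / delay
print(k)                               # 0.75
print(mazur_value(delayed, delay, k))  # 40.0, reproducing the indifference

# The magnitude effect found by the authors means that a k fitted to
# $100 rewards overstates the discounting of $100,000 rewards.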

Green, Leonard, Joel Myerson, & Edward McFadden (1997) “Rate of Temporal Discounting Decreases with Amount of Reward,” Memory and Cognition 25, 715–723.


{% Seems that they use exponential, Mazur, and general hyperbolic discounting; hypothetical questions; assume linear utility; fit data at individual level; fix delayed amount, 8 delays per subject and find immediate amount; claim that for children in 2 out of 12 cases exponential and hyperbolic discounting could not fit the data (R2 less than (???) or equal to 0), for young adults also 2 out of 12, for older adults 2 out of 32; Fig 1, 2, and 3 may show some concave parts of the discount functions. %}

Green, Leonard, Joel Myerson, & Pawel Ostaszewski (1999) “Discounting of Delayed Rewards across the Life Span: Age Differences in Individual Discounting Functions,” Behavioural Processes 46, 89–96.


{% %}

Green, Paul E. (1963) “Risk Attitudes and Chemical Investment Decisions,” Chemical Engineering Progress 59, 35–40.


{% %}

Green, Paul E. & V. Srinivasan (1978) “Conjoint Analysis in Consumer Research: Issues and Outlook,” Journal of Consumer Research 5, 103–123.


{% %}

Greenberg, Leslie S. (1986) “Change Process Research,” Journal of Consulting and Clinical Psychology 54, 4–9.


{% %}

Greenberg, Leslie S. & William M. Pinsof (1986) “The Psychotherapeutic Process: A Research Handbook.” New York: Guilford Press.


{% risky utility u = strength of preference v (or other riskless cardinal utility, often called value); Participants did direct quantitative judgments of utility. Next they did welfare evaluations, and risky decisions (sure vs. two-outcome gamble) where outcomes were money and where outcomes were their own utility assessments. For utility outcomes, risk aversion remained, though less pronounced than for monetary outcomes. For welfare, there was a similar aversion regarding equity. The result is plausible if risky utility = direct assessment and there is extra risk aversion because of nonEU, say probability transformation. The authors, however, never consider that the subjects may deviate from EU (and additively-separable utilitarianism). Instead they argue that all deviations are caused by misunderstandings of the concept of utility.
P. 245 4th para, about participants facing outcomes in terms of their own direct assessments of utility, and nicely and appropriately suggesting that the participants just treat these as monetary outcomes: “In making such esoteric judgments, do they take the pains necessary to exclude whatever momentarily inappropriate intuitions they have developed over a lifetime of reasoning about the goods of everyday life?”
P. 246 first half gives informal version of the aggregation argument. %}

Greene, Joshua & Jonathan Baron (2001) “Intuitions about Declining Marginal Utility,” Journal of Behavioral Decision Making 14, 243–255.


{% survey on nonEU: well on EU that is. Gives nice survey of empirical risk studies up to that point, especially regarding relations with demographic variables.
questionnaire for measuring risk aversion: uses it. No significant correlation between risk attitude measurements and general insurance questions. Maybe because the former are for mixed prospects, and the latter for losses. %}

Greene, Mark R. (1963) “Attitudes toward Risk and a Theory of Insurance Consumption,” The Journal of Insurance 30, 165–182.


{% doi: 10.1007/s10654-016-0149-3
foundations of statistics: discusses p-values. The paper does not bring new insights but does an exceptionally thorough job. Especially impressive is that it has 100 or so references on the topic. I kept track of such references all my life and the key word “Foundations of Statistics” gives about 120 references at this moment of writing (01Nov2016).
The paper many times repeats that p-values and the like are only valid if all assumptions made are valid, which I do not find very informative. Only point to note is that p-value is probability conditional on H0 being true.
P. 338 2nd column 1st para: “Many problems arise however because this statistical model often incorporates unrealistic or at best unjustified assumptions. This is true even for so-called ‘‘non-parametric’’ methods, which (like other methods) depend on assumptions of random sampling or randomization.”
P. 338 2nd column 2nd para points out a problem of classical methods that is avoided under the likelihood principle: “There is also a serious problem of defining the scope of a model, in that it should allow not only for a good representation of the observed data but also of hypothetical alternative data that might have been observed.”
P. 338 2nd column 2nd para “many decisions surrounding analysis choices have been made after the data were collected—as is invariably the case [33].”
P. 339 1st column 3rd para “In conventional statistical methods, however, ‘‘probability’’ refers not to hypotheses, but to quantities that are hypothetical frequencies of data patterns under an assumed statistical model. These methods are thus called frequentist methods, and the hypothetical frequencies they predict are called ‘‘frequency probabilities.’’ ”
P. 343 the 16th common misinterpretation of P value comparisons and predictions:
“16. When the same hypothesis is tested in two different populations and the resulting P values are on opposite sides of 0.05, the results are conflicting.
No!”
P. 343 the 17th common misinterpretation of P value comparisons and predictions: “17. When the same hypothesis is tested in two different populations and the same P values are obtained, the results are in agreement. No! Again, tests are sensitive to many differences between populations that are irrelevant to whether their results are in agreement. Two different studies may even exhibit identical P values for testing the same hypothesis yet also exhibit clearly different observed associations. For example, suppose randomized experiment A observed a mean difference between treatment groups of 3.00 with standard error 1.00, while B observed a mean difference of 12.00 with standard error 4.00. Then the standard normal test would produce P = 0.003 in both; yet the test of the hypothesis of no difference in effect across studies gives P = 0.03, reflecting the large difference (12.00 - 3.00 = 9.00) between the mean differences.”
P. 347 penultimate para sings the usual song of statistical analyses. %}
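
The arithmetic in misinterpretation 17 can be checked directly; a minimal sketch:

from math import erf, sqrt

def two_sided_p(z):
    # Two-sided P value of a standard normal test statistic z.
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

p_A = two_sided_p(3.0 / 1.0)    # study A: mean difference 3.00, SE 1.00
p_B = two_sided_p(12.0 / 4.0)   # study B: mean difference 12.00, SE 4.00
print(round(p_A, 3), round(p_B, 3))   # 0.003 0.003: identical P values

# Test of equal effects across the two studies: difference 9.00 with
# SE = sqrt(1.00^2 + 4.00^2).
p_diff = two_sided_p((12.0 - 3.0) / sqrt(1.0**2 + 4.0**2))
print(round(p_diff, 3))               # ~0.029, the P = 0.03 in the quote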

Greenland, Sander, Stephen J. Senn, Kenneth J. Rothman, John B. Carlin, Charles Poole, Steven N. Goodman, & Douglas G. Altman (2016) “Statistical Tests, P Values, Confidence Intervals, and Power: A Guide to Misinterpretations,” European Journal of Epidemiology 31, 337–350.


{% Pp. 36-37: “The term “uncertainty” is meant here to encompass both “Knightian uncertainty,” in which the probability distribution of outcomes is unknown, and “risk,” in which uncertainty of outcomes is delimited by a known probability distribution. In practice, one is never quite sure what type of uncertainty one is dealing with in real time, and it may be best to think of a continuum ranging from well-defined risks to the truly unknown.”
P. 37: “In essence, the risk-management approach to monetary policymaking is an application of Bayesian decision-making.”
P. 37: "Given our inevitably incomplete knowledge about key structural aspects of an ever-changing economy and the sometimes asymmetric costs or benefits of particular outcomes, a central bank needs to consider not only the most likely future path for the economy, but also the distribution of possible outcomes about that path. The decision-makers then need to reach a judgment about the probabilities, costs, and benefits of the various possible outcomes under alternative choices for policy.”
P. 37: “The product of a low-probability event and a potentially severe outcome was judged a more serious threat to economic performance than the higher inflation that might ensue in a more probable scenario.”
P. 38 suggests ambiguity aversion: “When confronted with uncertainty, especially Knightian uncertainty, human beings invariably attempt to disengage from medium- to long-term commitments in favor of safety and liquidity.”
P. 38: “In pursuing a risk-management approach to policy, we must confront the fact that only a limited number of risks can be quantified with any confidence.”
P. 38: “…how … the economy might respond to a monetary policy initiative may need to be drawn from evidence about past behavior during a period only roughly comparable to the current situation.”
P. 39, that subjective info cannot be ignored: “Yet, there is information in those bits and pieces. For example, while we have been unable to readily construct a variable that captures the apparent increased degree of flexibility in the United States or the global economy, there has been too much circumstantial evidence of this critically important trend to ignore its existence.”
P. 39: “Thus, both econometric and qualitative models need to be continually tested.”
P. 40: “In fact, uncertainty characterized virtually every meeting, and as the transcripts show, our ability to anticipate was limited.” %}

Greenspan, Alan (2004) “Innovations and Issues in Monetary Policy: The Last Fifteen Years,” American Economic Review, Papers and Proceedings 94, 33–40.


{% foundations of statistics; shows many biases in research results that result from statistical hypothesis testing. Superficial reading suggests it is a nice paper. %}

Greenwald, Anthony G. (1975) “Consequences of Prejudice against the Null Hypothesis,” Psychological Bulletin 82, 1–20.


{% Seems to point out that within-subjects has more power. %}

Greenwald, Anthony G. (1976) “Within-Subjects Designs: To Use or not to Use?,” Psychological Bulletin 83, 314–320.


{% %}

Greenwood, John D. (1990) “Kant’s Third Antinomy: Agency and Causal Explanation,” International Philosophical Quarterly 30, 43–57.


{% P. 227, middle, on the parameter of exponential utility (denoted α): “Few studies attempt to estimate α though.”
Using comments by Frans van Winden of March 16, 2005:
On Table 4: dividing the implied average coefficients of relative risk aversion, mentioned below the table, by the estimates of absolute risk aversion (alpha-hat in Table 4), I get an estimate of mean consumption which is (roughly) between 1.3 (167/130) and 2 (209/104). Is this 1300 and 2000 dollar, respectively? If so, is it then correct to say that the alpha-hat is between 0.08 (104/1300) and 0.05 (104/2000) in dollars (and somewhat higher if we use 130 instead of 104 as estimate of alpha-hat)? %}
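
The back-of-the-envelope above can be spelled out; a minimal sketch, assuming CARA utility u(c) = −exp(−αc), for which absolute risk aversion is α and relative risk aversion at consumption c is αc, so that implied mean consumption is RRA/ARA:

# Implied average RRA mentioned below Table 4, divided by the
# estimated absolute risk aversion alpha-hat from Table 4:
for rra, ara in ((167.0, 130.0), (209.0, 104.0)):
    print(rra / ara)   # ~1.28 and ~2.01
# If consumption is measured in thousands of dollars (i.e., 1,300 and
# 2,000 dollars), alpha-hat in per-dollar units is 104/1300 ~ 0.08 or
# 104/2000 ~ 0.05, which is the question raised above.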

Gregory, Allen W., Jean-François Lamarche, & Gregor W. Smith (2002) “Information-Theoretic Estimation of Preference Parameters: Macroeconomic Applications and Simulation Evidence,” Journal of Econometrics 107, 213–233.


{% Cited by Schkade on SPUDM 97: preference elicitation should be architecture rather than archaeology. It seems that they wrote on p. 179: “not as archaeologists, carefully uncovering what is there, but as architects, working to build a defensible expression of value.” %}

Gregory, Robin, Sarah Lichtenstein, & Paul Slovic (1993) “Valuing Environmental Resources: A Constructive Approach,” Journal of Risk and Uncertainty 7, 177–197.


{% natural-language-ambiguity: seem to investigate tolerance of ambiguity (in general natural-language sense) only from negative perspective regarding threat, discomfort, and anxiety, and not regarding positive aspects such as curiosity and attraction toward ambiguous situations. %}

Grenier, Sébastien, Anne-Marie Barrette, & Robert Ladouceur (2005) “Intolerance of Uncertainty and Intolerance of Ambiguity: Similarities and Differences,” Personality and Individual Differences 39, 593–600.


{% %}

Greiner, Ben (2004) “The Online Recruitment System ORSEE - A Guide for the Organization of Experiments in Economics.” In Kurt Kremer & Volker Macho (eds.) Forschung und Wissenschaftliches Rechnen 2003, 79–93, GWDG Bericht 63 (Research and scientific computation 2003. GWDG report 63), Göttingen: Gesellschaft für Wissenschaftliche Datenverarbeitung.


{% real incentives/hypothetical choice: they seem to have tested it and seem to have found systematic quantitative differences, but same qualitative effects
random incentive system: seems to be one of the first studies to use it. %}

Grether, David M. & Charles R. Plott (1979) “Economic Theory of Choice and the Preference Reversal Phenomenon,” American Economic Review 69, 623–638.


{% reply to Pommerehne, Schneider, & Zweifel %}

Grether, David M. & Charles R. Plott (1982) “Economic Theory of Choice and the Preference Reversal Phenomenon: Reply,” American Economic Review 72, 575.


{% cognitive ability related to risk/ambiguity aversion:
Examine big data set on people’s estimates of their survival probabilities. Inverse-S fits the data well. Likelihood insensitivity correlates well with direct measurements of cognitive ability, supporting its cognitive interpretation. (cognitive ability related to likelihood insensitivity (= inverse-S)).
I would reinterpret this study as one on ambiguity using the source method (Abdellaoui et al. 2011). People face uncertain probabilities and the probability weighting function is a source function. %}
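
The source-method reinterpretation can be illustrated with a minimal sketch, assuming (my choice, not necessarily the paper’s specification) the neo-additive form w(p) = b + a·p on (0,1):

def neo_additive(p, a, b):
    # Neo-additive weighting: w(0)=0, w(1)=1, w(p)=b+a*p in between.
    # The slope a indexes likelihood sensitivity (a=1: Bayesian;
    # a<1: inverse-S, i.e., likelihood insensitivity).
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return b + a * p

# An insensitive agent (a=0.6, b=0.2) overweights small survival
# probabilities and underweights large ones:
for p in (0.05, 0.50, 0.95):
    print(p, round(neo_additive(p, 0.6, 0.2), 3))
# 0.05 -> 0.23, 0.50 -> 0.5, 0.95 -> 0.77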

Grevenbrock, Nils, Max Groneck, Alexander Ludwig, & Alexander Zimper (2017) “What Determines Survival Belief Formation beyond Statistical Learning?,” working paper.


{% %}

Grieco, Daniela & Robin M. Hogarth (2009) “Overconfidence in Absolute and Relative Performance,” Journal of Economic Psychology 30, 756–771.


{% Descriptively examine Bayesian updating. Distinguish between strength of evidence, which is what probability it would generate if there were no other evidence (or if its “weight” were infinite), and weight of evidence which is how much this evidence will weigh relative to other (say, prior) evidence. For example, if we make a number of observations strength is the observed relative frequency, and the number of observations is the weight. The authors conjecture that subjects are not sufficiently sensitive to the weight dimension, and treat weights as all the same, “average,” which means underestimating large weights and overestimating small weights. Verify it in a number of experiments. It explains patterns of both over- and under-confidence found in the literature. %}
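
A minimal sketch (my own numbers) of the strength/weight distinction for the standard biased-coin task: the Bayesian posterior depends on strength and weight jointly, whereas the conjectured heuristic responds mainly to strength:

from math import log, exp

def posterior_h1(heads, tails):
    # Hypotheses: P(heads) = 0.6 (H1) vs. 0.4 (H2), equal priors.
    # The log-likelihood ratio reduces to (heads - tails)*log(0.6/0.4).
    llr = (heads - tails) * log(0.6 / 0.4)
    return 1.0 / (1.0 + exp(-llr))

# Same strength (60% heads), different weight (sample size):
print(round(posterior_h1(6, 4), 3))     # n = 10:  ~0.692
print(round(posterior_h1(60, 40), 3))   # n = 100: ~1.0
# Underusing weight means reporting similar confidence in both cases:
# overconfidence for n = 10, underconfidence for n = 100.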

Griffin, Dale & Amos Tversky (1992) “The Weighing of Evidence and the Determinants of Confidence,” Cognitive Psychology 24, 411–435.


{% They compared betting odds of people with frequency of winning. The former is interpreted as derived from decision weights, the latter as objective probability. E.g. for horses with betting odds derived from decision weight .10 the frequency of winning is smaller, say .08, suggesting that objective probability .08 is transformed into decision weight .10.
inverse-S: racetrack betting finds nonlinear probability inverse-S weights. These data from a different domain do corroborate Preston & Baratta (1948) with intersection of diagonal around .18. Main drawback of horse racing data is that the population is more risk seeking than average people are.
P. 290 argues that people perceive probabilities nonlinearly. %}
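
A minimal sketch (illustrative numbers, not Griffith’s data) of how decision weights are read off betting odds:

def implied_weight(odds_to_one):
    # A horse at odds a-to-1 carries implied decision weight 1/(a+1).
    return 1.0 / (odds_to_one + 1.0)

# Longshots at 9-to-1 carry implied weight .10; if such horses win
# only 8% of the time, probability .08 gets decision weight .10:
print(implied_weight(9.0))   # 0.10
# Repeating this across odds classes traces out an inverse-S weighting
# function, crossing the diagonal near .18 as in Preston & Baratta (1948).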

Griffith, Richard M. (1949) “Odds Adjustments by American Horse Race Bettors,” American Journal of Psychology 62, 290–294.


{% Seems inverse-S; not in Holland %}

Griffith, Richard M. (1961) “A Footnote on Horse Race Betting,” Transactions of the Kentucky Academy of Science 22, 78–81.


{% Asks subjects (two population samples of 10,000 each) hypothetical choices between (now: $1000) vs. (in 2 years: $1500) and (in 5 years: $1000) vs. (in 7 years: $1500), as tests of patience and one test of stationarity. Relates it to smoking. Present-biased people do not smoke more, but have a harder time quitting. %}

Grignon, Michel (2009) “An Empirical Investigation of Heterogeneity in Time Preferences and Smoking Behaviors,” Journal of Socio-Economics 38, 739–751.


{% %}

Grigoriev, Pavel G. & Johannes Leitner (2006) “Dilatation Monotone and Comonotonic Additive Risk Measures Represented as Choquet Integrals,” Statistics and Decisions 24, 27–44.


{% First editions of the book appeared in 1812 (Vol. 1) and 1814 (Vol. 2). The 7th edition was the final one; the brothers died after it appeared.
conservation of influence: “Hans im Glück” %}

Grimm, Jakob L.K. & Wilhelm K. Grimm (1857) “Kinder- und Hausmärchen.” (7th edn.)


{% On disposition effect: people hold on to losing stocks and sell gaining stocks. %}

Grinblatt, Mark & Bing Han (2005) “Prospect Theory, Mental Accounting and Momentum,” Journal of Financial Economics 78, 311–339.


{% foundations of quantum mechanics %}

Grinbaum, Alexei (2007) “Reconstruction of Quantum Theory,” British Journal for the Philosophy of Science 58, 387–408.


{% Gives mixture-like axiom to characterize proportionality of additive value function %}

Grodal, Birgit (1978) “Some Further Results on Integral Representation of Utility Functions,” Institute of Economics, University of Copenhagen, Copenhagen. Appeared in rewritten form in Ch. 12 of Karl Vind (2003) “Independence, Additivity, Uncertainty.” With contributions by B. Grodal. Springer, Berlin.


{% revealed preference %}

Grodal, Birgit & Werner Hildenbrand (1989) “The Weak Axiom of Revealed Preference in a Productive Economy,” Review of Economic Studies 56, 635–639.


{% state-dependent utility %}

Grodal, Birgit & Jean-François Mertens (1976) “Integral Representations of Utility Functions,” Institute of Economics, University of Copenhagen. CORE DP6823. Appeared in rewritten form as Ch. 11 by Birgit Grodal in Karl Vind (2003) “Independence, Additivity, Uncertainty.” With contributions by B. Grodal. Springer, Berlin.


{% %}

Groes, Ebbe, Hans-Jörgen Jacobsen, Birgitte Sloth, & Torben Tranaes (1999) “Testing the Intransitivity Explanation of the Allais Paradox,” Theory and Decision 47, 229–245.


{% %}

Groes, Ebbe, Hans-Jörgen Jacobsen, Birgitte Sloth, & Torben Tranaes (1998) “Nash Equilibrium with Lower Probabilities,” Theory and Decision 44, 37–66.


{% %}

Groes, Ebbe, Hans-Jörgen Jacobsen, Birgitte Sloth, & Torben Tranaes (1998) “Axiomatic Characterization of the Choquet Integral,” Economic Theory 12, 441–448.


{% %}

Gronchi, Giorgio & Elia Strambini (2017) “Quantum Cognition and Bell’s Inequality: A Model for Probabilistic Judgment Bias,” Journal of Mathematical Psychology 78, 65–75.


{% %}

Groot Koerkamp, Bas, M. G. Myriam Hunink, Theo Stijnen, James K. Hammitt, Karen M. Kuntz, & Milton C. Weinstein (2007) “Limitations of Acceptability Curves for Presenting Uncertainty in Cost-Effectiveness Analysis,” Medical Decision Making 27, 101–111.


{% CBDT; do one numerical specification of CBDT, and compare it to one other predictive model invented by the authors themselves (a MAX heuristic). They find that CBDT better predicts choices if current info is available, but that their own model does better otherwise. A difficulty when implementing a second memory is how to make the info of the first memory disappear. The authors do so by telling subjects that for the second memory they should take the info of the first memory as irrelevant. %}

Grosskopf, Brit, Rajiv Sarin, & Elizabeth Watson (2015) “An Experiment on Case-Based Decision Making,” Theory and Decision 79, 639–666.


{% intertemporal separability criticized; seems to question additivity over disjoint time periods. %}

Grossman, Michael (1982) “The Demand for Health after a Decade,” Journal of Health Economics 1, 1–3.


{% This paper shows that subjects have a preference for skewness (always taken to be positive skewness), citing preceding literature finding this too. The paper only considers gains. It presents choices between prospects that have the same expected value and variance (taken as riskiness), but differ in skewness. If subjects positively evaluate skewness, they are of course willing to take some extra risk so as to get extra skewness, as this paper shows empirically. §4.5, p. 213, shows that preference for skewness is indistinguishable from the overweighting of small probabilities. Thus preference for skewness amounts to the same as inverse-S probability weighting. Prudence amounts to the same. Unfortunately, the authors only cite 1979 prospect theory for it, and not the many more recent papers showing it. The key words “inverse-S” and “risk seeking for small-probability gains” in this annotated bibliography give many papers on it. %}
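
A minimal sketch (my own numbers) of the indistinguishability in §4.5: two gain lotteries with equal mean (10) and variance (9) but different skewness, evaluated by rank-dependent utility with linear utility and an assumed inverse-S-like weighting function w(p) = p^0.7:

def rdu_two_outcomes(p_high, high, low, w=lambda p: p ** 0.7):
    # Rank-dependent value of (p_high: high, 1-p_high: low), high >= low.
    return w(p_high) * high + (1.0 - w(p_high)) * low

symmetric = (0.5, 13.0, 7.0)   # mean 10, variance 9, zero skewness
skewed = (0.1, 19.0, 9.0)      # mean 10, variance 9, positive skewness
print(round(rdu_two_outcomes(*symmetric), 2))  # ~10.69
print(round(rdu_two_outcomes(*skewed), 2))     # ~11.0: skewness preferred
# The preference for the skewed lottery here is nothing but the
# overweighting of its 0.1 probability of the high outcome.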

Grossman, Philip J. & Catherine C. Eckel (2016) “Loving the Long Shot: Risk Taking with Skewed Lotteries,” Journal of Risk and Uncertainty 51, 195–217.


{% %}

Grove, Adam J. & Joseph Y. Halpern (1998) “Updating Sets of Probabilities.” In David Poole et al. (eds.) Proceedings of the Fourteenth Conference on Uncertainty in AI, 173–182, Morgan Kaufmann, Madison, WI.


{% intuitive versus analytical decisions; Mechanical Prediction means based on quantitative (statistical, computer, etc.) analyses, and clinical means direct intuitive judgments by specialists (unfortunate term, originated from medical domain and now has become generally accepted). This meta-analysis finds that in most cases the mechanical predictions did better.
I agree that mechanical does better than commonly thought, and deserves more attention. The work done in decision theory can be considered to be one big attempt at promoting quantitative analyses. Still, mechanical will not be preferable in most cases.
Concerning a different but more interesting question, when can mechanical analysis contribute anything at all beyond other, e.g. clinical, analyses, I guess that it can in 1 out of 10,000 cases. 1 out of 10,000 is so MUCH that it is worth dedicating one’s life to. So, how to explain the finding of this meta-analysis? I think that it was subject to a selection bias. Published studies concern those rare and interesting cases where mechanical can do something. The obvious point that mechanical mostly doesn’t work is too trivial to be published. %}

Grove, William M., David H. Zald, Boyd S. Lebow, Beth E. Snitz, & Chad Nelson (2000) “Clinical versus Mechanical Prediction: A Meta-Analysis,” Psychological Assessment 12, 19–30.


{% %}

Groves, Robert M., Robert B. Cialdini, & Mick P. Couper (1992) “Understanding the Decision to Participate in a Survey,” Public Opinion Quarterly 56, 475–495.


{% Mechanism for public goods avoiding free riding. The payment scheme is quadratic in a way reminiscent of the quadratic proper scoring rule. %}

Groves, Theodore & John O. Ledyard (1977) “Optimal Allocation of Public Goods: A Solution to the ‘Free Rider’ Problem,” Econometrica 45, 783–809.


{% %}

Gruber, Jonathan & Botond Köszegi (2001) “Is Addiction “Rational”? Theory and Evidence,” Quarterly Journal of Economics 116, 1261–1303.


{% Argues against libertarian paternalism, that it is manipulative, deliberately circumventing people’s own deliberations, deliberately not making clear to people what they do, and that it will certainly not work if people see through it. I disagree with all these views. %}

Grüne-Yanoff, Till (2012) “Old Wine in New Casks: Libertarian Paternalism still Violates Liberal Principles,” Social Choice and Welfare 38, 635–645.


{% DOI: 10.1214/16-STS561 %}

Grünwald, Peter (2016) “Contextuality of Misspecification and Data-Dependent Losses,” Statistical Science 31, 495–498.


{% updating; conditional probability: many papers have discussed the issue that conditioning on an observed event can only be done under a ceteris paribus assumption, entailing that the observation does not carry other information, and does not affect anything conditional upon the event. This paper provides mathematical conditions and formulas stating when exactly Bayes formula for conditioning holds and when not, referring to some other recent papers, and many statistical papers, on similar issues. The mathematics by itself is not particularly hard, but is very illuminating by bringing in the right concepts. The three-prisoners problem provides a good illustration of when Bayes formula need not hold. No-one will, after reading this paper, ever again fall victim to forgetting the ceteris paribus condition of Bayes formula. The precise mathematical statements work better than vague philosophical discussions.
Nice concept: the naïve [state] space contains only the states that determine the consequences resulting from acts. There are also observations, which do not directly affect consequences of acts, but only indirectly through their influence/information about the naïve state space. To condition upon information often more than just the naïve state space is required. We also need to know the probabilities of the “sophisticated” state space, which describes both the naïve states and (part of) the observations; i.e., what Shafer called the protocol. In the three-prisoners problem, you also need to know what the jailor does when he has a choice which of the other two prisoners to indicate, before you can calculate posterior probabilities. The sophisticated space should also describe those things. %}
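
A minimal enumeration sketch of the three-prisoners problem, showing why the protocol in the sophisticated space cannot be ignored. Prisoner A asks the jailer to name one of B, C who will not be pardoned; q is the probability that the jailer names B when he has a choice (i.e., when A is the one pardoned):

from fractions import Fraction

def p_A_pardoned_given_jailer_names_B(q):
    third = Fraction(1, 3)
    # P(jailer names B), summed over the pardoned prisoner A, B, C:
    p_names_B = third * q + third * 0 + third * 1
    return (third * q) / p_names_B   # Bayes in the sophisticated space

print(p_A_pardoned_given_jailer_names_B(Fraction(1, 2)))  # 1/3 (fair jailer)
print(p_A_pardoned_given_jailer_names_B(Fraction(1)))     # 1/2
# Naive conditioning on the naive event "B is not pardoned" always
# gives 1/2; it is correct only under the q = 1 protocol.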

Grünwald, Peter D. & Joseph Y. Halpern (2002) “Updating Probabilities.” In Adnan Darwiche & Nir Friedman (eds.) Uncertainty in Artificial Intelligence, Proceedings of the Eighteenth Conference, 187–196, Morgan Kaufmann, San Francisco, CA.


{% %}

Guala, Francesco (2000) “The Logic of Normative Falsification: Rationality and Experiments in Decision Theory,” Journal of Economic Methodology 7, 59–93.


{% Methodological discussion of debates about preference reversals and BDM (Becker-DeGroot-Marschak) mechanism. %}

Guala, Francesco (2000) “Artefacts in Experimental Economics: Preference Reversals and the Becker-DeGroot-Marschak Mechanism,” Economics and Philosophy 16, 47–75.


{% equity-versus-efficiency: seems to be on it %}

Guala, Francesco & Antonio Filippin (2017) “The Effect of Group Identity on Distributive Choice: Social Preference or Heuristic?,” Economic Journal 127, 1047–1068.


{% [Note written on the back of the cover page of my copy of Gudder’s article:]
I spent several hours (spread out over years, starting from Gudder’s paper) on finding out whether axiom M5, cancellation, was implied by the others, M1–M4 and M6. It almost is, but not completely. I did observe a possible weakening of M5 in the presence of the other axioms. It can be derived (took me some hours) from Axioms M1–M4 and M6 that [ApC=BpC for some 0 < p < 1] implies [ApC=BpC for all 0 < p < 1]. So then only for p=1 we may have inequality. Hence, Axiom M5 may be weakened to: if ApC=BpC for all 0 < p < 1, then A=B. Examples violating this condition, but satisfying M1–M4 and M6, can be constructed.
An open question to me is whether, in the presence of the full force of M5, the ‘three-dimensional’ associativity in the axioms can be weakened to the ‘two-dimensional’ associativity as has been used by von Neumann & Morgenstern and others. %}

Gudder, Stanley P. (1977) “Convexity and Mixtures,” SIAM Review 19, 221–240.


{% foundations of quantum mechanics; notion of probability in quantum theory; compares quantum-probability theory with Kolmogorov-probability theory %}

Gudder, Stanley P. (1979) “Stochastic Methods in Quantum Mechanics.” North-Holland, Amsterdam.


{% %}

Gudder, Stanley P. & Frank Schroeck (1980) “Generalized Convexity,” SIAM Journal on Mathematical Analysis 11, 984–1001.


{% CBDT %}

Guerdjikova, Ani (2004) “Asset Prices in an Overlapping Generations Model with Case-Based Decision Makers with Short Memory.” Working paper nr. 504. University of Mannheim.


{% CBDT: analyzes optimality results when the similarity function is concave in a Euclidean distance measure. Some anomalies of nonexistence can be resolved by allowing convexities in the similarity function. %}

Guerdjikova, Ani (2008) “Case-Based Learning with Different Similarity Functions,” Games and Economic Behavior 63, 107–132.


{% Application of ambiguity theory;
Analyse market populated with EU maximizers and smooth ambiguity maximizers, who will survive in the long run under all kinds of assumptions and who will affect market prices. %}

Guerdjikova, Ani & Emanuela Sciubba (2015) “Survival with Ambiguity,” Journal of Economic Theory 155, 50–94.


{% Social planner trades off preference for flexibility against ambiguity aversion of individuals in a society; axioms are given. %}

Guerdjikova, Ani & Alexander Zimper (2008) “Flexibility of Choice versus Reduction of Ambiguity,” Social Choice and Welfare 30, 507–526.


{% A theoretical paper on auctions with EU, showing that in general the utility function is not identifiable, but it is under some exclusion restrictions. %}

Guerre, Emmanuel, Isabelle Perrigne, & Quang Vuong (2009) “Nonparametric Identification of Risk Aversion in First-Price Auctions under Exclusion Restrictions,” Econometrica 77, 1193–1227.


{% Multiattribute utility à la Keeney & Raiffa, with attributes referring to time points. A nice weakening of utility independence, referring only to preceding time points, leading to semi-separable utility.
Appealing case of Keeney & Raiffa’s (1976) utility independence: attributes 1,…,n refer to time points. Each timeset {j,…,n} is utility independent from past consumption iff there is a “semi-separable” utility U(x1,…,xn) = Σj=1,…,n uj(xj) Πi=1,…,j−1 ci(xi). %}
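
A minimal sketch of the semi-separable form, with hypothetical uj and ci (the ci capture how past consumption discounts the future, the departure from full additive separability):

from math import sqrt

def semi_separable_utility(x, u, c):
    # U(x1,...,xn) = sum_j u_j(x_j) * prod_{i<j} c_i(x_i)
    total, past_factor = 0.0, 1.0
    for j, xj in enumerate(x):
        total += past_factor * u[j](xj)
        if j < len(x) - 1:
            past_factor *= c[j](xj)
    return total

# Hypothetical health profile on [0,1] per period, u_j = sqrt, and
# worse past health shrinking the weight of later periods:
u = [sqrt, sqrt, sqrt]
c = [lambda xi: 0.5 + 0.5 * xi, lambda xi: 0.5 + 0.5 * xi]
print(round(semi_separable_utility([1.0, 0.81, 0.64], u, c), 3))  # 2.624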

Guerrero, Ana M. & Carmen Herrero (2005) “A Semi-Separable Utility Function for Health Profiles,” Journal of Health Economics 24, 33–54.


{% Defines more risk averse in the smooth ambiguity model, applying the Yaari technique to the vNM utility function. Say the agent becomes more risk averse by a concave utility transformation h, replacing u by h(u). Then the smooth ambiguity aversion function φ has to be replaced by φ∘h⁻¹. So, risk and ambiguity attitude are not well separated. %}

Guetlein, Marie-Charlotte (2016) “Comparative Risk Aversion in the Presence of Ambiguity,” American Economic Journal: Microeconomics 8, 51–63.


{% Happiness depends on income but also on reference level. Reference level has negative effect on utility in Western Europe, but positive in Eastern Europe, probably in being predictor for future utility. %}

Caporale, Guglielmo Maria, Yannis Georgellis, Nicholas Tsitsianis, & Ya Ping Yin (2009) “Income and Happiness across Europe: Do Reference Values Matter?,” Journal of Economic Psychology 30, 42–51.


{% small probabilities; anonymity protection %}

Guiasu, Radu Cornel & Silviu Guiasu (2010) “New Measures for Comparing the Species Diversity Found in Two or More Habitats,” International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems 18, 691–720.


{% Application of ambiguity theory;
Survey of the use of ambiguity models in finance. %}

Guidolin, Massimo & Francesca Rinaldi (2013) “Ambiguity in Asset Pricing and Portfolio Choice: A Review of the Literature,” Theory and Decision 74, 183–217.


{% CBDT %}

Guilfoos, Todd & Andreas Duus Pape (2016) “Predicting Human Cooperation in the Prisoner’s Dilemma Using Case-Based Decision Theory,” Theory and Decision 80, 1–32.


{% linear utility for small stakes: this is how they justify, in §2, why they use a hypothetical question with a large amount. In this, they correctly specify that they assume expected utility.
decreasing ARA/increasing RRA: this is what they find.
Use household survey data of 8,135 subjects of 1995 Bank of Italy Survey of Household Income and Wealth (SHIW). Risk attitude is measured through the following question, presented "unprepared":
"We would like to ask you a hypothetical question that you should answer as if the situation were a real one. You are offered the opportunity of acquiring a security permitting you, with the same probability, either to gain 10 million lire or to lose all the capital invested. What is the most that you would be prepared to pay for this security?"
Here 10 million lire is about $5000. I am afraid that the question leaves many ambiguities. The authors have in mind that it designates a 50-50 prospect. Problem 1. However, one thing that is unclear is whether other outcomes might also occur. In practice that will always be the case, so it is very likely that subjects will assume that there could be other outcomes.
Problem 2. A second difficulty is the vagueness in "with the same probability." In practice, people never have probabilities given for securities, so the subjects won't know what probability is being referred to, and will have a hard time picking up that these probabilities are the same.
Problem 3. A third difficulty is that the subjects don't know what guarantee they have that their money will be treated in a fair way. If you invest in stocks you may lose all money, but you will read in the paper that that was the "fair" outcome that the bank had to offer you. If you just give money to a stranger under the terms that maybe the stranger will not return the money, and you don't know the rules of the game, you just will not do it because you don't trust the stranger.
The data, indeed, are bad. Of the 8,135 subjects, more than half, 4,677, did not report a positive willingness to pay for the security: 3,091 wanted to pay only 0 for it, and 1,586 said they did not know. Only 3,458 were willing to pay a positive amount. The authors argue that this is because of the "complexity" of the question and that it is good to get rid of those who don't understand, but I think that the security is way more unfavorable than the authors take it to be. It is also unfortunate that the subjects dropped are not randomly misunderstanding, but comprise the most risk averse and ambiguity averse among the subjects.
Despite the above problems, the data set is so very nice that it is still very interesting to analyze the relation between the answers given and demographic variables etc., among the 3,458 that did want to pay a positive amount.
In this group, the young take less risks than the old. %}

Guiso, Luigi & Monica Paiella (2003) “Risk Aversion, Wealth and Background Risk,” Bank of Italy Economic Working Paper No. 483.


{% Use same data as in their other paper “Risk Aversion, Wealth and Background Risk.” Their measure correlates significantly with decisions including occupational choice, portfolio allocation, investment in education, job change, and moving decisions, all in predicted direction. It also explains 2.2% of the variation in income, about one-third of what is explained by age, gender, and area of birth. %}

Guiso, Luigi & Monica Paiella (2005) “The Role of Risk Aversion in Predicting Individual Behaviours,” Bank of Italy Economic Working Paper No. 4591.


{% %}

Guiso, Luigi, Paola Sapienza, & Luigi Zingales (2008) “Trusting the Stock Market,” Journal of Finance 63, 2557–2600.


{% %}

Gul, Faruk (1987) “Noncooperative Collusion in Durable Goods Oligopoly,” Rand Journal of Economics 18, 248–254.


{% %}

Gul, Faruk (1989) “Bargaining Foundations of Shapley Value,” Econometrica 57, 81–95.


{% Idea of the model: to prepare, first consider traditional EU for (p1:x1,…,pn:xn), with x1 ≥ … ≥ xn. Then the CE (certainty equivalent) satisfies, with xk ≥ CE ≥ xk+1,
Σi≤k pi(U(xi) − U(CE)) = Σj>k pj(U(CE) − U(xj)).
This paper considers a generalization of EU where there exists a β > −1 such that the CE satisfies
Σi≤k pi(U(xi) − U(CE)) = (1+β) Σj>k pj(U(CE) − U(xj)) (*)
That is, the disappointing outcomes (worse than the lottery, so than its CE) are reweighted by a factor 1+β. β > 0 is in the spirit of loss aversion. In his equation on top of p. 673, the weights are α/(1+(1−α)β) and (1+β)(1−α)/(1+(1−α)β), so the bad outcomes are indeed overweighted by (1+β) relative to the good outcomes, confirming my Eq. (*). Eq. (*) is the easiest way to understand and analyze the model, I think.
P. 670 above Def. 1 once and for all imposes that big sure money amounts are preferred to small ones, which will imply that utility is strictly increasing. Stochastic dominance can readily be inferred from Eq. (*) above, where by transitivity it suffices to consider only improvements that do not cross the U(CE) level: if one outcome is increased then, both if Σi≤k pi(U(xi) − U(CE)) was increased and if Σj>k pj(U(CE) − U(xj)) was decreased, to maintain the equality the CE value has to increase too. Thus we get classical weak stochastic dominance (increasing any monetary outcome weakly improves the prospect).
biseparable utility: p. 677 points out that for two-outcome lotteries this theory is a special case of rank-dependent utility, with probability weighting function (I write p for probability where Gul writes α)
p → p/(1 + (1−p)β).
If the probability of the worst outcome is 1−p, then its weight is (1−p)(1+β)/(1 + (1−p)β). In other words, we at first leave the good-outcome probability p unaffected but give the bad-outcome probability 1−p an extra weight factor 1+β. Then we normalize. This means that the Wakker & Deneffe (1996) Tradeoff method also measures utility for Gul’s disappointment aversion theory. Pity I did not know this before Sept. 98 so could not mention it in the 96-paper.
Disappointment aversion is a betweenness model, having linear indifference sets and EU within each indifference set, and satisfying quasi-convexity and quasi-concavity w.r.t. probability mixing. (It is not a special case of Chew’s (1983) weighted utility.) I guess that Gul did not know these models when inventing his theory, but with his creativity just automatically invented the best and nicest model that can be. %}
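
A minimal sketch solving Eq. (*) above for the certainty equivalent by bisection, with linear utility and an assumed β:

def gul_ce(probs, outcomes, beta, u=lambda x: x, tol=1e-9):
    # Eq. (*): sum of p*(u(x)-u(CE)) over elating outcomes equals
    # (1+beta) times the sum of p*(u(CE)-u(x)) over disappointing ones.
    def excess(ce):
        gain = sum(p * (u(x) - u(ce)) for p, x in zip(probs, outcomes) if u(x) >= u(ce))
        loss = sum(p * (u(ce) - u(x)) for p, x in zip(probs, outcomes) if u(x) < u(ce))
        return gain - (1.0 + beta) * loss
    lo, hi = min(outcomes), max(outcomes)
    while hi - lo > tol:            # excess() decreases in ce
        mid = (lo + hi) / 2.0
        if excess(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# 50-50 gamble over 100 and 0, beta = 1: the bad-outcome weight is
# (1-p)(1+beta)/(1+(1-p)beta) = 2/3, so CE = 100/3.
print(gul_ce([0.5, 0.5], [100.0, 0.0], 1.0))  # ~33.333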

Gul, Faruk (1991) “A Theory of Disappointment Aversion,” Econometrica 59, 667–686.


{% Gives a mixture-like axiom (Assumption 2, nowadays called act-independence) to characterize proportionality of additive value functions. %}

Gul, Faruk (1992) “Savage’s Theorem with a Finite Number of States,” Journal of Economic Theory 57, 99–110. (“Erratum,” 1993, Journal of Economic Theory 61, 184.)


{% %}

Gul, Faruk (1996) “Rationality and Coherent Theories of Strategic Behavior,” Journal of Economic Theory 70, 1–31.


{% Aumann (1987, Econometrica) introduced correlated equilibria but based it on an, I think, unsound application of Savage’s (1954) model. For instance, Aumann had states of the world describe acts and probabilities, which cannot be, because in Savage’s model probabilities and acts can be defined only after states of the world have been defined. In this paper, Gul also criticizes Aumann’s model. A reply by Aumann follows. %}

Gul, Faruk (1998) “A Comment on Aumann’s Bayesian View,” Econometrica 66, 923–927.


{% %}

Gul, Faruk (1999) “Efficiency and Immediate Agreement: A Reply to Hart and Levy,” Econometrica 67, 913–917.


{% %}

Gul, Faruk (2001) “Unobservable Investment and the Hold-Up Problem,” Econometrica 69, 343–376.


{% %}

Gul, Faruk & Dilip Abreu (2000) “Bargaining and Reputation,” Econometrica 68, 85–117.


{% %}

Gul, Faruk, Salvador Barbera, & Ennio Stacchetti (1993) “Generalized Median Voting Schemes and Committees,” Journal of Economic Theory 61, 262–289.


{% %}

Gul, Faruk, Avinash Dixit, & Gene Grossman (2000) “A Theory of Political Compromise,” Journal of Political Economy 108, 531–567.


{% dynamic consistency; all conditions concern sets of optimal probability distributions in a choice situation and, thus, within equivalence classes, which is equivalent to betweenness. %}

Gul, Faruk & Outi Lantto (1990) “Betweenness Satisfying Preferences and Dynamic Choice,” Journal of Economic Theory 52, 162–177.


{% If agents can choose their time of decision, these points seem to be clustered together, because they can anticipate each other’s information in some sense. %}

Gul, Faruk & Russell Lundholm (1995) “Endogenous Timing and the Clustering of Agents’ Decisions,” Journal of Political Economy 103, 1039–1066.


{% %}

Gul, Faruk & David G. Pearce (1996) “Forward Induction and Public Randomization,” Journal of Economic Theory 70, 43–64.


{% dynamic consistency: seem to argue against the multiple-agent view of dynamic decisions. Dynamically consistent agents may prefer that some ex ante inferior options are deleted. %}

Gul, Faruk & Wolfgang Pesendorfer (2001) “Temptation and Self-Control,” Econometrica 69, 1403–1435.


{% %}

Gul, Faruk & Wolfgang Pesendorfer (2004) “Self-Control and the Theory of Consumption,” Econometrica 72, 119–158.


{% dynamic consistency: in dynamic decisions, planned choice usually plays a big role. But we cannot observe plans. This paper does not have plans in the formal model. At time point 1 we choose between decision problems at time point 2. To this they apply principles of revealed preference, and signals of lack of self-control in case of strict preference for subsets, etc. %}

Gul, Faruk & Wolfgang Pesendorfer (2005) “The Revealed Preference Theory of Changing Tastes,” Review of Economic Studies 72, 429–448.


{% A preference axiomatization of random expected utility for random choice: a probability distribution over vNM utilities leads to random choice. Preference axioms: mixture continuity, monotonicity (adding prospect to choice set of feasible prospects does not increase probability of choosing another prospect) and independence. %}

Gul, Faruk & Wolfgang Pesendorfer (2006) “Random Expected Utility,” Econometrica 74, 121–146.


{% %}

Gul, Faruk & Wolfgang Pesendorfer (2006) “The Canonical Type Space for Interdependent Preferences,” working paper.


{% dynamic consistency: compulsive consumption: if deviating from prior-commitment consumption. Addiction: if consumption leads to more compulsive consumption. They do dynamic model with cycles of addiction and voluntary commitment to prohibition. %}

Gul, Faruk & Wolfgang Pesendorfer (2007) “Harmful Addiction,” Review of Economic Studies 74, 147–172.


{% Endnote 3 explains why the authors avoid the term behavioral economics. They focus on the issue of using choiceless inputs in economics, departing from the revealed preference paradigm. However, they then unfortunately mostly focus on one small subset of choiceless inputs: neuro-economics inputs, and often seem to take the latter as fully capturing the former. (P. 9 middle calls “psychology and economics,” their term for behavioral economics, a predecessor of neuroeconomics!?). This is because they react much to a Camerer, Loewenstein, & Prelec (2005) paper that greatly overstates the role and potential of neuro-economics.
They take a very strict and I think overly dogmatic revealed-preference viewpoint. (Becker & Murphy 1977 is another good reference for such dogmatic viewpoints.) Again and again they argue that economics can ignore choiceless inputs, because, as they argue, those are defined to be outside economics. But it cannot be denied that sometimes choiceless inputs can better predict consumer choices or, say, patient preferences, than choice-based inputs. The authors never take issue with this point, leaving me puzzled. The real reason why the ordinalists in the 1930s chose to go this way is that it gives unambiguous clear definitions, as a pro, with the con of losing inputs and info. The tradeoff between this pro and con cannot be judged on methodological arguments, or in an ivory tower. It came from over half a year of experience, showing that the con of losing inputs and info is too big. Such arguments are not found in this paper. To understand such points, it is better to have worked in a hospital for a year (one can never explain to doctors that they should ignore info they read from the faces of patients …) than to have proved theorems.
Typical is p. 2 3rd para, on subjective states and hedonic utility being legitimate topics of study. “This may be true …” So, about the whole field of psychology, they don’t say that it is legitimate, but only that it may be legitimate. %}

Gul, Faruk & Wolfgang Pesendorfer (2008) “The Case for Mindless Economics.” In Andrew Caplin & Andrew Schotter (eds.) Foundations of Positive and Normative Economics, 3–39, Oxford University Press, New York.


{% %}

Gul, Faruk & Wolfgang Pesendorfer (2009) “Partisan Politics and Aggregation Failure with Ignorant Voters,” Journal of Economic Theory 144, 146–174.


{% event/utility driven ambiguity model: partly event-driven, through diffuse events and their approximations, but also partly outcome-driven, through the function λ that depends on the minimal outcome m and the maximal outcome M.
This paper is an intriguing variation of Jaffray’s (1989 Operations Research Letters) model of decision under ambiguity. A detailed explanation of Jaffray’s ideas is in Wakker (2011, Theory & Decision). In short, Jaffray adopted a philosophy of complete absence of information, applying to events that I, following G&P, call diffuse here. Consider a partition of the state space S into diffuse events {D1,…,Dn}. Such Dj’s are exchangeable (interchanging outcomes of two does not affect preference). But even no statement of D1 being less likely (in gambling-on sense) than D2 ∪ … ∪ Dn, or, for that matter, than D1 ∪ … ∪ Dn−1, is accepted. Thus, with 100D0 meaning that 100 results under event D and nothing otherwise, the problematic indifference
100_{D1}0 ~ 100_{D1 ∪ … ∪ Dn−1}0
follows (Wakker 2011 Figure 4.1). (Cohen & Jaffray (1980) take another route by giving up completeness, but we maintain completeness here.) This violation of at least strict monotonicity is the price to pay for avoiding any subjective commitment about uncertainty. G&P treat their diffuse events the same way, mainly through Axiom 3. Under weak monotonicity, it follows that an act conditional on the above ambiguous partition is completely characterized by its inf outcome m and its sup outcome M. We denote it, simply, (m,M). Its utility U(m,M) will be λU(m) + (1−λ)U(M), with λ depending on m and M.
Continuing on Jaffray’s model, he also assumes unambiguous events (similar to the ideal events of G&P except that they are objective and exogenous) that have objective probabilities. He allows for conditioning on unambiguous events (Wakker 2011 p. 18 l. 1). G&P define unambiguous (called ideal) events through this possibility to condition, in other words, these events and their complements satisfy the sure-thing principle/separability (p. 3). Conditional on unambiguous events, we have diffuseness with total absence of information (thus here a strict separation of complete unambiguity and complete ambiguity). Jaffray’s model considers acts conditioned on unambiguous events E1,…,En that have probabilities p1,…,pn, resulting in a probability distribution (p1:(m1,M1),…, pn:(mn,Mn)) and utility p1U(m1,M1) + ... +pnU(mn,Mn). Jaffray gave a preference foundation based on an independence axiom imposed on what amounts to probabilistic mixtures of the above kinds of general acts.
G&P generalize Jaffray’s model by not assuming the objective probabilities p1,…,pn of unambiguous (G&P call it ideal) events given beforehand. Instead, they impose the Savage axioms on the unambiguous/ideal events Ej, and then derive the probabilities pj subjectively from the Savage axioms. They specify this relation with Jaffray’s model on p. 20: “Hence, EUU theory and Jaffray’s model stand roughly in the same relationship as Savage’s theory and von Neumann-Morgenstern theory.” I regret that they did not cite Jaffray’s work earlier than p. 20.
The job of G&P is less trivial than may seem from the above. Several problems to be solved are solved cleverly, leading to tractable modeling. Thus the separation between ambiguous and unambiguous events is obtained nicely endogenously, through the sure-thing principle (allowing conditioning) in their definition of ideal. Mostly by imposing pointwise continuity (Axiom 6, p. 4; they don’t use this term) they ensure at the same time that the probability measure will be countably additive, and that an algebra of ideal events can be extended to a sigma-algebra. And, the sigma algebra need not be all subsets, avoiding the problems demonstrated by Banach & Kuratowski (1929) and Ulam (1930). G&P need not commit to a product structure of ideal/diffuse or these being given a priori, because the ideal/diffuse separation follows naturally from the axioms. I expect that Jaffray would have been delighted to see this work. For one, it revives his ideas, in a refined version. What is not consistent with his views, is that he really only wanted objective probabilities, and not any subjectivity in beliefs such as with subjective Savage probabilities.
For an event F that is neither unambiguous/ideal nor diffuse, we can find maximal ideal events E ⊆ F and minimal ideal events E′ ⊇ F, giving a lower- and upperbound probability. The uncertainty “between” E and E′ is treated by complete absence of info (diffuse). Diffuse events can in fact be defined, and this is what G&P do (p. 3), as the events with E = ∅ (or null) and E′ = S (up to null). P. 21 (Conclusion) and throughout say that ideal events are perfectly quantifiable and diffuse events are completely unquantifiable.
P. 3 ll. -9/-6: the text “Note that Savage’s … uncertainty of events” is incomprehensible to me. I do not know what “arbitrarily unlikely” events would be in Savage’s model. Null events would not work here. I also do not understand “possibility for infinite collections of sets,” or why these events could be used to calibrate the uncertainty of events. Maybe the authors refer to finite additivity of P in Savage, where countable partitions of all null events can exist, but this is something different.
P. 5 Theorem 1 gives the representation theorem, a variation of Savage (1954). %}

Gul, Faruk & Wolfgang Pesendorfer (2014) “Expected Uncertain Utility Theory,” Econometrica 82, 1–39.
