Bibliography



§9.1, p. 293, discusses the case where we have only one urn available, and finds probabilistic sophistication satisfied there. The authors argue that intuitively it may not be clear whether the urn is unambiguous, but there is no behavioral evidence for ambiguity and, hence, they formally call it unambiguous. If behavioral information cannot show whether there is ambiguity, then I would prefer to leave it unspecified rather than “randomly” choose one option.%}

Epstein, Larry G. & Jiankang Zhang (2001) “Subjective Probabilities on Subjectively Unambiguous Events,” Econometrica 69, 265–306.


{% dynamic consistency
DC = stationarity
Recursive utility: backward induction, CE-substitution (certainty equivalent substitution), see (3.4); they first aggregate over states and only then over time. They resolve the equating of risk attitude and intertemporal attitude, usually considered undesirable, by using a Kreps-Porteus (1978) model, where first a u function is used to aggregate over risk and then a nonlinear transform of this u function is used to aggregate over time.
below: “recursive structure immediately implies the intertemporal consistency of preferences (in the sense of Johnsen & Donaldson (1985) ... and the stationarity of preference (in the sense of Koopmans (1960), for example).”
Paper assumes special (parametric) families of utility; also considers, in “Class 3,” the Chew/Dekel betweenness family %}

Epstein, Larry G. & Stanley E. Zin (1989) “Substitution, Risk Aversion, and the Temporal Behavior of Consumption and Asset Returns: A Theoretical Framework,” Econometrica 57, 937–969.


{% %}

Epstein, Larry G. & Stanley E. Zin (1990) “ ‘First-Order’ Risk Aversion and the Equity Premium Puzzle,” Journal of Monetary Economics 26, 387–407.


{% %}

Epstein, Larry G. & Stanley E. Zin (1991) “Substitution, Risk Aversion, and the Temporal Behavior of Consumption and Asset Returns: An Empirical Analysis,” Journal of Political Economy 99, 263–286.


{% Paper was finished around 1991, not published then, but now published in a special issue of the journal dedicated to valuable unpublished works. %}

Epstein, Larry G. & Stanley E. Zin (2001) “The Independence Axiom and Asset Returns,” Journal of Empirical Finance 8, 537–572.


{% Seems that:
real incentives/hypothetical choice: for time preferences %}

Epstein, Leonard H., Jerry B. Richards, Frances G. Saad, Rocco A. Paluch, James N. Roemmich, & Caryn Lerman (2003) “Comparison between two Measures of Delay Discounting in Smokers,” Experimental and Clinical Psychopharmacology 11, 131–138.


{% conservation of influence; dynamic consistency: Roese worked life-long on counterfactual thinking. %}

Epstude, Kai, & Neil J. Roese (2008) “The Functional Theory of Counterfactual Thinking,” Personality and Social Psychology Review 12, 168–192.


{% Risk averse for gains, risk seeking for losses was found in context of drugs with side effects. %}

Eraker, Stephen A. & Harold C. Sox (1981) “Assessment of Patients’ Preferences for Therapeutic Outcomes,” Medical Decision Making 1, 29–39.


{% Given to me by Palli Sipos; foundations of quantum mechanics, in deterministic way %}

d’Espagnat, Bernard (1979) “The Quantum Theory and Reality,” Scientific American 241, Nov. 1979, 158–167, 171–181.


{% %}

Erdös, Paul & Peter Fishburn (1997) “Distinct Distances in Finite Planar Sets,” Discrete Mathematics 175, 97–132.


{% %}

Erdös, Paul, Peter C. Fishburn, & Zoltán Füredi (1991) “Midpoints of Diagonals of Convex n-Gons,” SIAM Journal on Discrete Mathematics 4, 329–341.


{% %}

Erev, Ido (1998) “Signal Detection by Human Observers: A Cutoff Reinforcement Learning Model of Categorization Decisions under Uncertainty,” Psychological Review 105, 280–298.


{% Find that participants who express their uncertainties in terms of probabilities, behave worse in a number of cases (e.g., violate dominance more, in Experiment 2). The tasks are somewhat complex, e.g. there is a game in Experiment 1 where they state probabilities over opponents’ moves and there are income effects etc. in the lottery choices of Experiment 2, so it is not very easy to decide on the real causes for the findings. %}

Erev, Ido, Gary Bornstein, & Thomas S. Wallsten (1992) “The Negative Effect of Probability Assessments on Decision Quality,” Organizational Behavior and Human Decision Processes 55, 78–94.


{% Posted a first data set on the internet, which people could use to calibrate their preferred model. Many researchers were invited to try out their preferred model in this competition. Then it was determined which model best predicted the data in a second data set. An exemplary way of comparing models! For what they call decisions from description, a stochastic variation of prospect theory did best. For what they call decisions from experience, a small-sample model did best. The first three authors organized it, and the last seven were from winning teams. A nice enterprise! %}

Erev, Ido, Eyal Ert, Alvin E. Roth, Ernan Haruvy, Stefan M. Herzog, Robin Hau, Ralph Hertwig, Terrence Stewart, Robert West, & Christian Lebiere (2010) “A Choice Prediction Competition: Choices from Experience and from Description,” Journal of Behavioral Decision Making 23, 15–47.


{% P. 577 suggests that the loss aversion parameter of prospect theory can be replaced by utility curvature; i.e., that these are collinear. I disagree, however. They have many different empirical implications, even if not for the particular choice problems considered by the authors. The Katz (1964) experiment with its many repetitions concerns repeated choice that is subject to the law of large numbers, and not to one-shot decisions. %}

Erev, Ido, Eyal Ert, & Eldad Yechiam (2008) “Loss Aversion, Diminishing Sensitivity, and the Effect of Experience on Repeated Decisions,” Journal of Behavioral Decision Making 21, 575–597.


{% Consider their usual setup of risky/uncertain prospects that the subjects must get to know through sampling. Then, investigate when subjects overweight rare events and when they neglect/underweight them. They implement the St. Petersburg paradox truncated after five tosses, and find, as predicted by prospect theory because of the overweighting of small probabilities, risk seeking rather than the conventionally assumed risk aversion (§2.4; hypothetical payment). They ran this paradox with different framings, finding results depending on the framing. %}

Erev, Ido, Ira Glozman, & Ralph Hertwig (2008) “What Impacts the Impact of Rare Events,” Journal of Risk and Uncertainty 36, 153–177.


{% preferring streams of increasing income %}

Erev, Ido, Shlomo Maital, & Ori Or-Hof (1997) “Melioration, Adaptive Learning and the Effect of Constant Re-evaluation of Strategies.” In Gerrit Antonides, W. Fred van Raaij, & Shlomo Maital (eds.) Advances in Economic Psychology, Wiley, New York.


{% %}

Erev, Ido & Alvin E. Roth (1998) “Predicting how People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria,” American Economic Review 88, 848–881.


{% Propose the ENO of a theory. ENO is the equivalent number of observations: how many observations would give information as good as the theory does. Reminds me of the value of prior info in inductive reasoning of Carnap. This paper does it in the context of games and regressions. %}

Erev, Ido, Alvin E. Roth, Robert L. Slonim, & Greg Barron (2007) “Learning and Equilibrium as Useful Approximations: Accuracy of Prediction on Randomly Selected Constant Sum Games,” Economic Theory 33, 29–51.


{% Risk averse for gains, risk seeking for losses. Participants had to play many repeated (single-person) games and were told that the purpose was to maximize total earnings. Obviously, that is an income effect to an extreme degree. That was further enhanced because total score was always displayed. Any theory will then recommend that in each single game one should maximize expected value. It turned out that participants came closest to EV maximization if no probabilities were given or judged, less so if they had to give their subjective probability assessments first, and worst if they were given the objective probabilities. That result is puzzling by !any! weakly-rational theory. Maybe additionally given/judged probabilities caused confusion and overflow of information?
If the results came from single-choices, Exhibit 4 would provide counterevidence against the Tversky & Wakker (1995) claim of higher sensitivity towards chance than towards uncertainty. Given, however, the repeated choices and income effect, this experiment is a different ball game. %}

Erev, Ido & Thomas S. Wallsten (1993) “The Effect of Explicit Probabilities,” Journal of Behavioral Decision Making 6, 221–241.


{% A classic paper. If objective probability is predicted from subjective we see overconfidence, but if subjective probability is predicted from objective we see underconfidence. %}

Erev, Ido, Thomas S. Wallsten, & David V. Budescu (1994) “Simultaneous Over- and Underconfidence: The Role of Error in Judgment Processes,” Psychological Review 101, 519–527.


{% The authors use richness in state space. That is, assume that the grand (Savage) state space is a product of the two issues/sources, issue b with B events and issue a with A events. They take the decomposition as exogenous and not as endogenous as KMM did. So, all events involved are observable. Here is a typical prospect yielding outcome xij for event Ai-intersection-Bj.

      B1   …   Bm
A1   x11  …  x1m
 ⋮     ⋮        ⋮
An   xn1  …  xnm

For their basic theorem, I reinterpret their central axiom 5b (a|b strong comparative probability) so as to make clear that in each partition into B-events each single B event is assumed separable (weak separability w.r.t. the B-partition), implying folding back/backward induction for the B events, something that their paper does not state clearly I think.


For prospects with outcomes depending only on A events, they impose all the Machina & Schmeidler axioms, giving probabilistic sophistication there. Then they assume all B events separable, i.e., we can do folding back (= backward induction) with respect to those events. !!!This assumption follows immediately from their Axiom 5b by taking event B2 empty.!!! Such implications of separability have been known since the 1950s at least, with Strotz (1957; not his famous time-consistency paper but another pearl he produced) being a nice paper on it. Moreover, Ergin & Gul assume that every preference conditional on any B event agrees with the unconditional preference over the A events (also their Axiom 5b). (This also implies in a way that the events of the two sources are statistically independent.) Then we may as well replace all acts conditioned on any B event (such conditioned acts are then acts with outcomes depending only on the A events; i.e., columns in the above matrix) by what I interpret as a fixed outcome. The latter would be a conditional certainty equivalent if there was richness (continuum) of outcomes. The authors do not assume the latter, but they assume richness of events. Then, with a maximal prize X and a minimal prize x, we can replace every act with outcomes dependent on A events by an equivalent XAx, which can be equated with the event A conditional on which the big prize is obtained, denoted (A:X). Assume that this way we have (A1:x1j, …, An:xnj) ~ (Abj:X) for each j, for an appropriate event Abj. The above displayed matrix prospect can then be replaced by the equivalent
(B1:(Ab1:X), …, Bm:(Abm:X)).
On these acts all Machina-Schmeidler axioms are imposed (mainly Axiom 5b again). A recursive probabilistic sophistication model results.
The authors have thus axiomatized a version of a two-stage Anscombe-Aumann model where probabilistic sophistication holds for both stages, with subjective probabilities for both stages.
§3 considers axioms concerning second-order risk aversion in some versions that in general are not equivalent but, as the authors show, are equivalent if we impose probabilistic-sophistication rank-dependent utility. Unfortunately, these axioms use probabilities as inputs (as do KMM in their smooth ambiguity aversion). Probabilities are subjective here, so not directly observable, and conditions that use them I prefer not to call preference conditions. They have the same observability status as conditions that use utility numbers as inputs. So, Theorem 2 (p. 911) in this paper, while mathematically and logically correct, does not really give preference conditions. Another point here is that the authors consider source-preference only in an Anscombe-Aumann two-stage setup, whereas it can easily be done for general sources with no need to have a two-stage statistically independent setup.
Theorem 3 shows that if we reinforce the Machina-Schmeidler probabilistic sophistication axioms into the Savage axioms, then we get an axiomatization of recursive expected utility. Theorem 4 gives a result with RDU.
Interesting is the introduction showing that the Ellsberg 3-color paradox can be considered as involving two sources. A nice alternative view of Ellsberg’s three-color example: the three balls are numbered 1, 2, and 3, with 3 the number of the known color Red. At a first stage there is uncertainty about the color composition of the urn, at the second about the number of the ball drawn. These two together determine the color. Gambling on the known color is gambling on only stage-2 uncertainty; gambling on an unknown color involves also stage-1 uncertainty. It does not allow for a completely disjoint source-approach to Ellsberg 3-color. %}
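The folding-back evaluation described in the annotation above can be illustrated numerically. The sketch below assumes, for simplicity, expected utility with the same (square-root) utility at both stages, which is only a special case of the recursive probabilistic sophistication the paper axiomatizes; all probabilities and outcomes are made-up examples, not from the paper.

```python
import math

# Illustrative folding back (backward induction) for a two-stage act.
# Rows are A-events, columns are B-events; x[i][j] is the outcome on
# the intersection of Ai and Bj. All numbers below are made up.

def recursive_value(x, p_a, p_b, u, u_inv):
    """Replace each B-column by its conditional certainty equivalent
    over the A-events, then aggregate the resulting one-stage act
    over the B-events."""
    ces = [u_inv(sum(p_a[i] * u(x[i][j]) for i in range(len(p_a))))
           for j in range(len(p_b))]
    return sum(p_b[j] * u(ces[j]) for j in range(len(p_b)))

u, u_inv = math.sqrt, lambda v: v * v   # assumed utility and its inverse

x = [[100.0, 64.0],     # outcomes on A1∩B1, A1∩B2
     [ 36.0, 16.0]]     # outcomes on A2∩B1, A2∩B2
p_a = [0.5, 0.5]        # subjective probabilities over the A-events
p_b = [0.4, 0.6]        # subjective probabilities over the B-events

v = recursive_value(x, p_a, p_b, u, u_inv)
```

With the same expected utility functional at both stages the recursion collapses to plain expected utility over the product space; the recursive structure only gains content when the two stages are aggregated differently, as in the paper's theorems.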

Ergin, Haluk & Faruk Gul (2009) “A Theory of Subjective Compound Lotteries,” Journal of Economic Theory 144, 899–929.


{% On choices between menus. %}

Ergin, Haluk & Todd Sarver (2010) “A Unique Costly Contemplation Representation,” Econometrica 78, 1285–1339.


{% Generalize Kreps-Porteus (1978) by considering choices of menus and hidden actions. %}

Ergin, Haluk & Todd Sarver (2015) “Hidden Actions and Preferences for Timing of Resolution of Uncertainty,” Theoretical Economics 10, 489–541.


{% DOI: 10.1111/test.12121 %}

Erickson, Tim (2017) “Beginning Bayes,” Teaching Statistics 39, 1–38.


{% Asked academics to judge the value of abstracts, where for each abstract in one treatment they had added a nonsensical sentence with an equation, and in the second treatment they had not. The ones with the equation received higher evaluations. Reminds me of the finding that “equations reduce citations” of Fawcett & Higginson (2012). %}

Eriksson, Kimmo (2012) “The Nonsense Math Effect,” Judgment and Decision Making 7, 746–749.


{% A whole issue on ceteris paribus. %}

Erkenntnis (2002), Volume 57, Issue 3.


{% risk seeking for symmetric fifty-fifty gambles; Show that loss aversion is volatile. Their 2013 paper is more extensive. %}

Ert, Eyal & Ido Erev (2008) “The Rejection of Attractive Gambles, Loss Aversion, and the Lemon Avoidance Heuristic,” Journal of Economic Psychology 29, 715–723.


{% Add further evidence that loss aversion is volatile. The authors go further and seriously question the prevalence of loss aversion, and provide balanced evidence to support their view.
They show that six factors can increase risk aversion and, hence, loss aversion: (1) framing safe alternative as status quo (formulating choice as accept/reject lottery rather than binary choice); (2) focusing on probability of gain, 0, and loss; (3) high stakes; (4) high nominal amounts; (5) highly attractive risky prospects elsewhere in experiment creating contrast effect; (6) fatiguing subjects (difficult long experiment).
Study 3 finds central tendency effect (tendency to choose answer in the middle) for choice lists.
Relative loss aversion means that gain prospects, after being translated into mixed prospects, give more risk aversion, confirmed by the well known Payne, Laughhunn, & Crum (1980, 1981). The present paper finds the opposite in several choices, providing the strongest evidence against loss aversion in the literature that I am aware of. Thus the summary at the end, p. 229, writes that they find “weaker risk aversion in choice between mixed prospects than in choice between gains.” An explanation can be that this is always in situations where in the gains-choices the risky gain lottery has a possibility of yielding 0, which generates special aversion. Or it can be that the stakes were so small that joy of gambling came in, but this is admittedly not a strong counter because joy of gambling is hard to model or to give predictions.
P. 227 2nd column 2nd para: much risk neutrality for small stakes (linear utility for small stakes)
risk seeking for symmetric fifty-fifty gambles: not found on p. 220 penultimate para. %}

Ert, Eyal & Ido Erev (2013) “On the Descriptive Value of Loss Aversion in Decisions under Risk: Six Clarifications,” Judgment and Decision Making 8, 214–235.


{% 24 subjects chose between risky and ambiguous options in the usual way. 32 subjects got the chance to first sample unlimitedly from the ambiguous option before choosing. In the former case we have the usual likelihood insensitivity and a-insensitivity, with preference for the ambiguous urn if winning is unlikely and the opposite if likely. Still, the case is different here because if, for instance, the objective probability is 1/10, the ambiguous urn is described just as having an unknown probability of winning or losing, so it is dichotomous, like an ambiguous 0.5 probability, which makes it natural that also Bayesians prefer ambiguous for unlikely and risk for likely.
In second treatment subjects can sample from ambiguous. Then those who happened to have favorable sample will prefer ambiguous, and the others the opposite. Introspective measurements of beliefs suggest that preferences are not due to belief generated by sampling. Hence, due to motivation it may be. %}

Ert, Eyal & Stefan T. Trautmann (2014) “Sampling Experience Reverses Preferences for Ambiguity,” Journal of Risk and Uncertainty 49, 31–42.


{% %}

Esponda, Ignacio & Emanuel Vespa (2014) “Hypothetical Thinking and Information Extraction in the Laboratory,” American Economic Journal: Microeconomics 6, 180–202.


{% EU+a*sup+b*inf: extends the Cohen security/potential model to nonsimple lotteries. %}

Essid, Samir (1997) “Choice under Risk with Certainty and Potential Effects: A General Axiomatic Model,” Mathematical Social Sciences 34, 223–247.


{% %}

Essl, Andrea & Stefanie Jaussi (2017) “Choking under Time Pressure: The Influence of Deadline-Dependent Bonus and Malus Incentive Schemes on Performance,” Journal of Economic Behavior and Organization 133, 127–137.


{% ISBN: 9789462982802 %}

Ester, Peter & Arne Maas (2016) “Silicon Valley: Planet Startup. Disruptive Innovation, Passionate Entrepreneurship & High-tech Startups.” Amsterdam University Press, Amsterdam.


{% %}

Estes, William K. (1956) “The Problem of Inference from Curves Based on Group Data,” Psychological Bulletin 53, 134–140.


{% Strongly argue for cognitive revolution. %}

Estes, William K., Allen Newell, John R. Anderson, John Seely Brown, Edward A. Feigenbaum, James Greeno, Patrick J. Hayes, Earl Hunt, Stephen H. Kosslyn, Mitchell Marcus, & Shimon Ullman (1983) “Report of the Research Briefing Panel on Cognitive Science and Artificial Intelligence,” Research Briefings 1983. National Academy Press, Washington DC.


{% dynamic consistency %}

Etchart, Nathalie (2002) “Adequate Moods [Models] for Non-EU Decision Making in a Sequential Framework,” Theory and Decision 52, 1–28.


{% PT: data on probability weighting;
decreasing ARA/increasing RRA: a bit less risk seeking for large losses than for small.
concave utility for gains, convex utility for losses: the latter is found;
utility elicitation;
inverse-S is found for losses, both large and small; also upper and lower subadditivity are.
No real incentives, but flat payment.
N=35 subjects. Considers loss outcomes. Tradeoff method: uses that to elicit utility for losses. It is mostly convex, but less so than others (p. 224). With utility for losses given, use CE (certainty equivalent) questions to measure the probability weighting function. Does it for small (down to $1200) and large (down to $14000) losses. Finds more pessimism/risk aversion for large losses than for small. For small probabilities, significantly more pessimistic for large losses, for other probabilities no significant differences. That probability weighting does not depend much on outcomes is good news for PT. (probability weighting depends on outcomes)
P. 218: nice citation of Allais (1988), that risk is too complex to expect one fixed probability weighting function. %}
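The certainty-equivalent step described in the annotation above can be sketched as follows. Under prospect theory, a prospect yielding loss x with probability p (and 0 otherwise) has value w(p)u(x), so once u for losses has been elicited (e.g., by the tradeoff method), an indifference CE ~ (x, p; 0) identifies w(p) = u(CE)/u(x). The power utility and the certainty equivalents below are hypothetical illustrations, not data from the paper.

```python
# Recovering the probability weighting function for losses from certainty
# equivalents, given a previously elicited utility. All inputs below are
# hypothetical illustrations, not data from the paper.

def weight_from_ce(ce, x, p, u):
    """w(p) from the indifference CE ~ (x with probability p, else 0):
    u(ce) = w(p) * u(x), hence w(p) = u(ce) / u(x)."""
    return u(ce) / u(x)

# Hypothetical convex utility for losses: u(x) = -(-x)^0.9 for x <= 0.
u = lambda x: -((-x) ** 0.9)

x = -1200.0                                          # the fixed loss outcome
ces = {0.05: -150.0, 0.50: -500.0, 0.95: -1100.0}    # hypothetical CEs
w = {p: weight_from_ce(ce, x, p, u) for p, ce in ces.items()}
# Inverse-S pattern in these illustrative numbers: w(0.05) > 0.05
# (overweighting the small probability), w(0.50) < 0.50, w(0.95) < 0.95.
```

Note that the recovered w depends on the utility family assumed, which is exactly the sensitivity discussed in the 2009 paper below.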

Etchart, Nathalie (2004) “Is Probability Weighting Sensitive to the Magnitude of Consequences? An Experimental Investigation on Losses,” Journal of Risk and Uncertainty 28, 217–235.


{% Tradeoff method;
Uses the method of Abdellaoui (2000) to measure probability weighting. N = 30 subjects, all interviewed individually. Flat payment. §3.1 suggests that shallow probability weighting in the middle can have not only a cognitive but also a strategic origin, in cases where the distinction does not matter for decisions. I did not fully understand this because it suggests that probability weighting cannot be identified. Probably it rather means that the payoff differences were so small that subjects just did not care at all, something sometimes called the peanut effect.
Basic treatment is with small losses. Change of level means adding a negative constant to all outcomes (as with constant absolute risk aversion), making losses worse without changing differences. It had little effect except some more underweighting near p = 1. Change of spacing means, roughly though not precisely, multiplying the outcomes by a constant > 1 (as with constant relative risk aversion), making all distances bigger. It led to more pessimism and much more sensitivity except at small probabilities. (probability weighting depends on outcomes)
Utility is fitted using exponential utility, expo-power utility, or an uncommon inverse-S family (the latter may capture that utility can get concave again for very serious losses, often thought to happen near ruin). The utility family chosen may affect the results on probability weighting. For instance, under power utility, subtracting a constant from all amounts leads to more linear utility, forcing probability weighting to capture more risk aversion and pessimism. But then, it is hard to avoid such things.
losses from prior endowment mechanism: properly criticized on p. 51 middle for, for instance, generating house money effect.
inverse-S: is confirmed.
§4.2, p. 58, retrospectively gives another interpretation for a deviating finding in her 2004 paper. That paper may have mixed level and spacing effects.
§ 4.2, p. 59, top, again discusses cognitive interpretation of inverse-S. (cognitive ability related to likelihood insensitivity (= inverse-S))
§4.1, p. 57 bottom, says that the TO method assumes that probability weighting remains constant during the experiment. This is common to any theory. If EU is used, it is likewise not assumed that utility can change halfway through the analysis or experiment. %}

Etchart, Nathalie (2009) “Probability Weighting and the ‘Level’ and ‘Spacing’ of Outcomes: An Experimental Study over Losses,” Journal of Risk and Uncertainty 39, 45–63.


{% losses from prior endowment mechanism: a beautiful study, of central importance to real incentives for losses. They use RIS. Do a treatment with real losses! So, they really provide the gold standard to assess other incentive schemes. Compare it with hypothetical choice and losses from prior endowment mechanism; within-subject, the three measurements at least 15 days apart each. Find no differences. Use choice lists for gains, finding CEs (certainty equivalents). Biggest loss was €20.
real incentives/hypothetical choice: they do find differences for gains (also showing that their design does have power) with, as usual, more risk aversion under real incentives. In the real loss treatment, 17 subjects actually lost money. There were, however, two other sessions (within subjects it was) where they could make up. In the end, after the three sessions, 2 subjects had really lost money (p. 69 footnote 9). They had small-probability losses and found mostly risk aversion (p. 72).
Use the semiparametric measurement of PT of Abdellaoui et al. (2008).
Risk averse for gains, risk seeking for losses: utility is slightly concave for gains and also slightly concave for losses.
inverse-S: their nonparametric estimations of probability weighting confirm inverse-S for both gains and losses.
convex utility for losses: when they fitted PT (probability weighting and utility) utility was slightly convex but close to linear (p. 75). For gains, utility was concave.
inverse-S: their probability weighting function is inverse-S both for gains and for losses, based on fitting at p-values 0.05, 0.25, 0.50, 0.75, 0.95.
reflection at individual level for risk: they have the within-individual data but do not report on this.
They seem to test for order effects of first presenting gains or losses but find no order effects. %}

Etchart, Nathalie & Olivier l’Haridon (2011) “Monetary Incentives in the Loss Domain and Behavior toward Risk: An Experimental Comparison of Three Reward Schemes Including Real Losses,” Journal of Risk and Uncertainty 42, 61–83.


{% Propose a new preference condition, fatalism. Consider two prospects αpβ and (α−ε)(p+δ)(β−ε): the first yields the good outcome α with probability p and the bad outcome β with probability 1−p; ε and δ are positive, so the second prospect has worse outcomes but better probabilities. If
(α−ε)(p+δ)(β−ε) ≼1 αpβ always implies
(α−ε)(p+δ)(β−ε) ≼2 αpβ,
then ≼2 is more fatalistic than ≼1: ≼2 appreciates the improvement in probability less than ≼1 does. Under RDU, for decision makers with the same utility functions, the condition is necessary and sufficient for w2′(p) ≤ w1′(p).
While formally different from insensitivity (inverse-S), the condition is very similar in spirit. The authors do not refer to insensitivity. They have a nice application: it reflects willingness to invest in prevention. %}
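The fatalism condition can be checked numerically under RDU. The sketch below uses linear utility, a fully probability-sensitive weighting function w1, and a w2 that is flat on the interior of [0,1] (so its derivative is smaller there); the weighting functions and all numbers are my own illustrative assumptions, not from the paper.

```python
# Numerical illustration of the fatalism condition under RDU with a common
# (here linear) utility. The weighting functions and all numbers are my own
# illustrative assumptions, not taken from the paper.

def rdu_binary(good, bad, p, w):
    """RDU value of the binary prospect: `good` with probability p,
    `bad` otherwise (good >= bad), under weighting function w and
    linear utility."""
    return w(p) * good + (1 - w(p)) * bad

w1 = lambda p: p               # fully sensitive to probability
w2 = lambda p: 0.3 + 0.4 * p   # flat on the interior: w2'(p) < w1'(p)

alpha, beta = 10.0, 0.0        # original prospect: alpha with prob p, else beta
eps, delta, p = 0.5, 0.1, 0.3  # improvement: worse outcomes, better probability

orig1 = rdu_binary(alpha, beta, p, w1)
impr1 = rdu_binary(alpha - eps, beta - eps, p + delta, w1)
orig2 = rdu_binary(alpha, beta, p, w2)
impr2 = rdu_binary(alpha - eps, beta - eps, p + delta, w2)

accepts_1 = impr1 >= orig1   # the sensitive agent accepts the improvement
accepts_2 = impr2 >= orig2   # the fatalistic agent rejects it
```

In this example the agent with the flatter weighting function rejects the probability improvement that the sensitive agent accepts, which is the behavioral content of being more fatalistic.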

Etner, Johanna & Meglena Jeleva (2014) “Underestimation of Probabilities Modifications: Characterization and Economic Implications,” Economic Theory 56, 291–307.


{% survey on nonEU; a useful concise survey; I focus below on details that I see differently.
Survey mostly theoretical models of ambiguity, but no axioms. Review some empirical findings too. They use the terms uncertainty and ambiguity interchangeably. My preference is that uncertainty is general, and ambiguity is the difference between uncertainty and risk. The same concept, although with different terminology, is in Cohen, Jaffray, & Said (1987). The authors cite Wald for the deterministic maxmin. Although Wald also introduced maxminEU, characterized by Gilboa & Schmeidler (1989), they do not cite him for it.
P. 242, cumulative prospect theory (I prefer not to write the term cumulative), unfortunately the authors do not reflect the weighting for losses. Hence what they call the weighting function for losses is the dual of Tversky & Kahneman (1992).
P. 242, §3.2.1 first line: the authors apparently do not know that Wald, and Luce & Raiffa (1957 Ch. 13) for instance, extensively discussed multiple priors way before Choquet expected utility was introduced. Thus they call CEU “first generation” in the beginning of §3.2, and multiple priors a follow-up in the beginning of §3.2.1. Multiple priors existed way before CEU!
P. 248: as many do, the authors give priority to Segal for using multistage probabilities for ambiguity. But many did it before (Gärdenfors 1979; Gärdenfors & Sahlin 1983; Kahneman & Tversky 1975 p. 30 ff.; Larson 1980; Yates & Zukowski 1976). §3.2.1 takes multiple priors endogenous, and §3.3 considers multiple priors with priors exogenous. §3.4 cites Chew & Sagi on sources. But Tversky initiated it, with 1995 papers with me (Wakker) and Fox on it. Chew worked with Tversky in the early 1990s, although they did not finish a paper, and this is how Tversky influenced Chew as he influenced me.
