biseparable utility violated: Eq. 6 proposes a quadratic form EU1 + (EU2)^2/2, with EU2 a different expected utility model than EU1, suggesting that this is about the simplest conceivable deviation from expected utility. It violates biseparable utility. %}
Machina, Mark J. (1982) “ ‘Expected Utility’ Analysis without the Independence Axiom,” Econometrica 50, 277–323.
{% %}
Machina, Mark J. (1982) “A Stronger Characterization of Declining Risk Aversion,” Econometrica 50, 1069–1079.
{% %}
Machina, Mark J. (1983) “Generalized Expected Utility Analysis and the Nature of Observed Violations of the Independence Axiom.” In Bernt P. Stigum & Fred Wendstøp (eds.) “Foundations of Utility and Risk Theory with Applications,” Ch. 12, 263–293, Reidel, Dordrecht.
{% P. 97 argues that any theory violating stoch. dominance will be: “in the author’s view at least, unacceptable as a descriptive or analytical model of behaviour.” %}
Machina, Mark J. (1983) “The Economic Theory of Individual Behavior toward Risk: Theory, Evidence and New Directions,” Technical Report No. 433, Center for Research on Organizational Efficiency, Stanford University, Stanford.
{% %}
Machina, Mark J. (1983) “Axioms and Models in Decision Making under Uncertainty,” A Review of Peter C. Fishburn, “The Foundations of Expected Utility,” Journal of Mathematical Psychology 27, 328–334.
{% dynamic consistency: favors abandoning RCLA when time is physical %}
Machina, Mark J. (1984) “Temporal Risk and the Nature of Induced Preferences,” Journal of Economic Theory 33, 199–231.
{% quasi-concave so deliberate randomization %}
Machina, Mark J. (1985) “Stochastic Choice Functions Generated from Deterministic Preferences over Lotteries,” Economic Journal 95, 575–594.
{% survey on nonEU
P. 148: “Note that it is neither feasible nor desirable to capture all conceivable sources of uncertainty when specifying the set of states for a given problem: it is not feasible since no matter how finely the states are defined there will always be some other random criterion on which to further divide them, and not desirable since such criteria may affect neither individuals’ preferences nor their opportunities. Rather, the key requirements are that the states be mutually exclusive and exhaustive so that exactly one will be realized, and (for purposes of the present discussion) that the individual cannot influence which state will actually occur.” %}
Machina, Mark J. (1987) “Choice under Uncertainty: Problems Solved and Unsolved,” Journal of Economic Perspectives 1 no. 1, 121–154.
{% %}
Machina, Mark J. (1987) “Decision-Making in the Presence of Risk,” Science 236, 537–543.
{% dynamic consistency: favors abandoning forgone-event independence, so, favors resolute choice,
dynamic choice; see Alias-literature;
Dutch book; dynamic consistency; in mom-example argument that incorporating all relevant aspects in consequences is intractable.
Paper clarifies many issues in this domain and introduces the current terminology for dynamic decisions, although it does not define that terminology explicitly, so readers have to infer it from context. On p. 1624, 2nd column, l. 13, Mark discusses the sometimes hidden assumption of consequentialism (= what I like to call forgone-branch independence). This assumption is discussed on p. 173, as part of the “first objection” in §4 of Wakker (1988) “Nonexpected Utility as Aversion of Information,” JBDM 1 (e.g. through the requirement that information should be free of charge). %}
Machina, Mark J. (1989) “Dynamic Consistency and Non-Expected Utility Models of Choice under Uncertainty,” Journal of Economic Literature 27, 1622–1688.
{% %}
Machina, Mark J. (1989) “Comparative Statics and Non-Expected Utility Preferences,” Journal of Economic Theory 47, 393–405.
{% dynamic consistency %}
Machina, Mark J. (1991) “Dynamic Consistency and Non-Expected Utility.” In Michael Bacharach & Susan Hurley (eds.) Foundations of Decision Theory, 39–91, Basil-Blackwell, Oxford.
{% dynamic consistency; see Alias-literature. %}
Machina, Mark J. (1992) Book Review of: Edward F. McClennen (1990) “Rationality and Dynamic Choice: Foundational Explorations,” Cambridge University Press, Cambridge; Theory and Decision 33, 265–271.
{% Wrote: “The publication history of the rank-dependent expected utility attests to its role as the most natural and useful modification of the classical expected utility formula.” %}
Machina, Mark J. (1994) Book Review of: John Quiggin (1993) “Generalized Expected Utility Theory - The Rank-Dependent Model,” Kluwer Academic Publishers, Dordrecht; Journal of Economic Literature 32, 1237–1238.
{% Uses, nicely, the term probability triangle. %}
Machina, Mark J. (1995) “Non-Expected Utility and the Robustness of the Classical Insurance Paradigm,” Geneva Papers in Risk and Insurance Theory 20, 9–50.
{% %}
Machina, Mark J. (2001) “Payoff Kinks in Preferences over Lotteries,” Journal of Risk and Uncertainty 23, 207–261.
{% Opening para: “The appearance of Ellsberg’s classic 1961 article posed such a challenge to accepted theories of decision making that, after some initial rounds of discussion, the issues he raised remained well-known but largely unaddressed, simply because researchers at the time were helpless to address them. It took more than a quarter of a century, and the successful resolution of separate issues raised by Allais (1953), before decision scientists were in a position to take on the deeper issues raised by the Ellsberg paradox.” %}
Machina, Mark J. (2001) “Further Readings on Choice under Uncertainty, Beliefs and the Ellsberg Paradox,” Preface to Daniel Ellsberg (2001) “Risk, Ambiguity and Decision.” Garland Publishers, New York, pp. xxxix ff.
{% Assumes preference functional V over acts under uncertainty in Savage-model. The state space is an interval, with s = the temperature in Beijing, etc. Assumes that V is differentiable w.r.t. small variations in the state. This implies that acts depending only on the kth digit of s become like objective probability distributions as k increases. So, we can infer the risk preference functional therefrom. It has often been said, and I agree, that risk (known probabilities) is not different from uncertainty (unknown probabilities), but instead is a limiting case. This paper substantiates this claim, and even proves it in a formal mathematical manner. Those who say that objective probabilities do not exist and should not be used, and that only a subjective Savage state space should be considered, get objective probabilities delivered in their backyard by this paper.
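The almost-objective mechanism can be illustrated numerically. Below is a minimal sketch (my own illustration, not Machina’s formal construction), assuming a hypothetical smooth density f(x) = 6x(1−x) on [0,1] for the state: the distribution of the kth decimal digit of the state approaches the uniform distribution on 0..9 as k grows, so bets on deep digits become almost-objective events.

```python
# Illustration (not Machina's construction): for an assumed smooth density
# f(x) = 6x(1-x) on [0,1], with CDF F(x) = 3x^2 - 2x^3, the kth decimal
# digit of the state becomes (almost) uniformly distributed as k grows.
def F(x):
    """CDF of the assumed density f(x) = 6x(1-x)."""
    return 3 * x**2 - 2 * x**3

def digit_prob(k, d):
    """P(kth decimal digit of the state equals d), computed exactly via F."""
    step = 10 ** -k
    # The event {kth digit = d} is a union of 10^(k-1) intervals of length 10^-k.
    return sum(F((10 * m + d + 1) * step) - F((10 * m + d) * step)
               for m in range(10 ** (k - 1)))

def max_deviation(k):
    """Largest deviation of the kth-digit distribution from uniformity."""
    return max(abs(digit_prob(k, d) - 0.1) for d in range(10))

for k in (1, 2, 3):
    print(k, max_deviation(k))  # deviations shrink rapidly with k
```

The deviations from uniformity shrink rapidly with k, which is the sense in which deep-digit events deliver objective probabilities from a purely subjective state space.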
The two-stage Anscombe-Aumann model with mixing before the states is more general than this model as commonly used today, with mixing after the states (the former can allow for correlations, the latter can concern only marginals). As a model, Mark’s model comes out equivalent to mixing before, but further preference restrictions follow that in fact make it equivalent to mixing after (p. 16).
As regards dynamic decision principles, the paper seems to assume the reduction + dynamic consistency of Machina (1989) (§2.2 & p. 15) as if generally accepted. The model of the paper comes out equivalent, not by assumption but by implication (p. 16).
P. 23 middle para expresses source dependence. P. 24 uses the term “source of uncertainty.”
P. 32 points out that monotonicity in Grant (1995) can be generated from set-inclusion. %}
Machina, Mark J. (2004) “Almost-Objective Uncertainty,” Economic Theory 24, 1–54.
{% The result of over ten years of work, presented already in Cachan 1992 under the title “Robustifying the Classical Model of Risk Preferences and Beliefs” %}
Machina, Mark J. (2005) “ ‘Expected Utility/Subjective Probability’ Analysis without the Sure-Thing Principle or Probabilistic Sophistication,” Economic Theory 26, 1–62.
{% The paper provides two examples of plausible preferences that violate RDU (CEU (Choquet expected utility), as Machina calls it) for uncertainty. Baillon, L’Haridon, & Placido (2009) show that the examples also violate most other nonEU models for uncertainty popular today. In particular, the examples violate the comonotonic sure-thing principle and even tail-independence. I find the second example, the reflection example (pp. 389-390), impressive, nay, brilliant. But other than that I prefer different interpretations and explanations than the author gives for almost everything.
The reflection example (with my interpretations): an urn contains 100 balls. 50 balls marked 1 or 2 in unknown proportion, and 50 marked 3 or 4 in unknown proportion. One ball is drawn randomly. Ej: the number drawn is j. Consider (with $1000 as unit) preferences between f5 and f6, and then between f7 and f8:
(50 balls for E1 ∪ E2; 50 balls for E3 ∪ E4)
f5 = (E1:4, E2:8, E3:4, E4:0),
f6 = (E1:4, E2:4, E3:8, E4:0),
f7 = (E1:0, E2:8, E3:4, E4:4),
f8 = (E1:0, E2:4, E3:8, E4:4).
Ambiguity averse people will have f6 > f5 because f6 has one outcome, 4, resulting with known probability ½, whereas f5 has all outcomes ambiguous. For exactly the same reason, ambiguity averse people will have f7 > f8. These claims were confirmed empirically by L’Haridon & Placido (2010).
Btw., because of informational symmetry, f7 is like f6 and f8 is like f5, so that the second preference follows from the first from informational symmetry.
RDU however predicts indifference between the four acts because RDU considers likelihoods of what are known as goodnews (“decumulative;” “ranks”) events. For all four acts, the goodnews event of receiving 8 contains one Ej, the goodnews event of receiving 4 or 8 contains three Ejs, and the goodnews event of receiving 0, 4, or 8 contains all four Ejs. Because of informational symmetry, each goodnews event has the same weight under each act, implying immediately that the four acts are indifferent by RDU, simply having identical Choquet integrals. (Btw: Machina uses a different reasoning, being that the comonotonic sure-thing principle, and even tail independence, require that a strict preference between f5 and f6 be the same as between f7 and f8, rather than between f8 and f7 as informational symmetry has it. Because informational symmetry is unquestionable, RDU hence cannot have strict preferences and must have indifferences.)
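The claimed indifference can be checked mechanically. Here is a small sketch (my own illustration), with one hypothetical capacity that respects the informational symmetry between the two halves {E1,E2} and {E3,E4} of the urn; RDU then assigns all four acts the same Choquet integral:

```python
# Hypothetical symmetric capacity: v(A) depends only on how many events A
# contains from each ambiguous half {E1,E2} and {E3,E4}, symmetrically.
def capacity(A):
    k1 = len(A & {1, 2})       # events from the first half of the urn
    k2 = len(A & {3, 4})       # events from the second half
    k1, k2 = sorted((k1, k2))  # informational symmetry between the halves
    # Illustrative values; a full half contributes its known probability 1/2,
    # single ambiguous events are weighted pessimistically (an assumption).
    table = {(0, 0): 0.0, (0, 1): 0.15, (0, 2): 0.5,
             (1, 1): 0.35, (1, 2): 0.65, (2, 2): 1.0}
    return table[(k1, k2)]

def choquet(act):
    """Choquet integral of act (dict: event -> outcome) w.r.t. the capacity."""
    events = sorted(act, key=act.get, reverse=True)  # best outcomes first
    total, good, prev = 0.0, set(), 0.0
    for e in events:
        good.add(e)                      # goodnews event so far
        w = capacity(frozenset(good))
        total += act[e] * (w - prev)     # decision weight of event e
        prev = w
    return total

f5 = {1: 4, 2: 8, 3: 4, 4: 0}
f6 = {1: 4, 2: 4, 3: 8, 4: 0}
f7 = {1: 0, 2: 8, 3: 4, 4: 4}
f8 = {1: 0, 2: 4, 3: 8, 4: 4}
vals = [choquet(f) for f in (f5, f6, f7, f8)]
print(vals)  # all four Choquet integrals coincide
```

Any other capacity respecting the symmetry gives the same conclusion, because only the weights of the goodnews events matter.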
(Another btw.: Sarin & Wakker 1992 axiomatized RDU using an axiom, called cumulative dominance, stating that acts are equivalent whenever all their goodnews events have the same likelihood.)
I like Machina’s reflection example very much because it addresses a fundamental issue of RDU (with similar implications for most other nonEU theories, as demonstrated by Baillon, L’Haridon, & Placido): RDU focuses on likelihoods of goodnews events, but Machina’s example shows that subjects are also partly driven by likelihoods of separate-outcome events, as considered in old pre-rank-dependent nonadditive probability models.
I regret that Machina does not refer to the role of separate-outcome events and the unambiguity of one outcome in his reasoning against indifference. Instead he uses a complex riding-on reasoning (f5 has two small ambiguities and f6 one big one; if one had something like aversion to mean-preserving spreads one would prefer f5; as Baillon, L’Haridon, & Placido rightfully point out, ambiguity is more cognitive than motivational, is more subject to diminishing sensitivity, and is more categorical, ambiguous versus unambiguous, than a matter of more versus less, so that the two ambiguities of f5 will count more negatively than the one ambiguity of f6) that can only be understood by specialists, and then only after some effort, but that will enter the mind of no natural subject that I can think of. He thus does not choose sides for one strict preference or the other even though the direction is clear enough I think, and he further refers to an unclear tradeoff between objective and subjective uncertainty.
Machina’s 50:51 example, while equally valid as the reflection example, is way less clear. Now unambiguity must be traded against an objective-likelihood argument in a first choice problem (between f1 and f2) and also in a second choice problem (between f3 and f4). In the second choice problem the ambiguity degree of all goodnews events is the same as in the first, and it can be proved that under RDU the preference in the second choice problem should be the same as in the first. In the second choice problem the ambiguity degree of the separate-outcome events is not the same as in the first, and therefore choices can be different. Because of the tradeoff with objective probability this example is less clear, and will work less well empirically than the reflection example. Machina’s explanation on pp. 388-389 again (as in the reflection example) does not raise the argument of a separate-outcome event, unfortunately. Instead it raises an unclear correlation argument. One problem is that correlation is not defined as he discusses it. (You need numbers to correlate, so how should this be with events? Indicator functions will not help. He could formalize the first point in terms of stochastic-like or sigma-algebra-like independence. Btw., on p. 388 last line, “corrected” should be “correlated”: a typo.) He proceeds by claiming that in the second choice problem some correlations are smaller, and this obviously is not clear either.
He also overstates implications. P. 389 4th para suggests that models like RDU, that keep comonotonic separability, retain the Ellsberg problem. He tries to suggest there that his example is as strong and fundamental as Ellsberg’s. This is not so; it is different, and less strong, albeit surely interesting.
P. 390 seems to write: “If there is a general lesson to be learned from Ellsberg’s examples and the examples here, it is that the phenomenon of ambiguity aversion is intrinsically one of non-separable preferences across mutually exclusive events, and that models that exhibit full—or even partial—event-separability cannot capture all aspects of this phenomenon.”
The closing para suggests that all nonEU models for ambiguity should consider interactions and violations of separability of events. I in fact agree, but I disagree that Machina’s examples, which are only two examples (or the Ellsberg examples, which Machina puts on the same footing there), could prove this in general, as Machina suggests. Even worse, Machina claims that every partial form of event-separability will fail. This claim is completely unfounded. Machina has done no more than show a problem for comonotonic separability (sure-thing principle) and even for tail-separability (independence). Theories that completely give up every event-separability will probably be too general to be tractable. For the same reason, the general Machina (1982) nonexpected utility, while useful for making some theoretical points, is too general for most purposes.
Something else I found amazing is that on several occasions (p. 390 2nd para “the issue is not how individuals ought to choose …” and the closing sentence on p. 391) Machina treats ambiguity purely descriptively, and not at all normatively. I, as a Bayesian, like to have ambiguity only descriptively, but still would not explicitly exclude any normatively-based discussion of it.
P. 390: “… ambiguity aversion is intrinsically one of nonseparable preferences across mutually exclusive events, …” %}
Machina, Mark J. (2009) “Risk, Ambiguity, and the Rank-Dependence Axioms,” American Economic Review 99, 385–392.
{% “Science is the process of distributing zeros throughout the determinant matrix”
citation of which Mark did not remember what the source was. Maybe Samuelson? %}
Machina, Mark J. (2010), lecture
{% Consider the Ellsberg 3-color paradox. The two usually assumed strict preferences violate the sure-thing principle, as is well known. This paper shows, nicely, that one of the two strict preferences implies the other by the sure-thing principle (+ some natural symmetry assumptions), so that one strict preference already gives a violation of the sure-thing principle. Moreover, the derivation of the one strict preference from the other needs only the restriction of the sure-thing principle to events with known probabilities, where the sure-thing principle is less controversial. (Jaffray always pleaded for the latter condition.) Thus one of the two strict preferences, together with the symmetry conditions, already implies a violation of the sure-thing principle.
Another thing I like is that the paper shows that the Ellsberg 3-color urn is best taken as a mix of two sources of uncertainty (what Mark calls pure objectivity and pure subjectivity). This point had been alluded to before by Ergin & Gul (2009) and Abdellaoui et al. (2011 AER p. 718), but Machina makes it more clear than anyone else could. Unfortunately, he does not explicitly connect to the idea of sources.
There are interpretations in the paper that I find unfortunate. The sure-thing principle for events with known probabilities is best taken as a special case of the general sure-thing principle, and not as a different condition. This paper tries to suggest that the conditions for purely objective and “purely subjective” (a term of this paper that I do not find very useful) uncertainty are two different animals. What could prove the paper’s claim better than the (erroneous) claim that, whereas the general sure-thing principle is violated by the two strict preferences of Ellsberg, the sure-thing principle for known probabilities would even imply those preferences, rather than be violated by them? So the paper makes this incorrect claim (end of abstract: “the standard Ellsberg-type preference reversal is actually implied by the Independence Axiom over its purely objective uncertainty;” there are similar claims on p. 433 1st para & end of p. 435). This is not so. Only that condition TOGETHER WITH one of the two strict preferences (and some natural symmetry conditions) does so.
Claims of compatibility with the sure-thing principle over purely subjective uncertainty (p. 433 top) are also misleading, because it is only compatible in the sense of not directly violating a very particular version of the condition restricted to very particular events chosen by Mark.
The demonstration that one strict preference in the 3-color Ellsberg paradox, together with the usual informational symmetries and the sure-thing principle for events with known probabilities, implies the other strict preference, is as follows. Assume 1 R(ed) ball and 2 B(lack) and Y(ellow) balls in unknown proportion, the usual informational symmetries, the sure-thing principle, and 100R0 ≻ 100B0 (100R0 denoting the act yielding 100 if R and 0 otherwise). Number the three balls, with the R ball nr. 1. Denote by BY (ball 2 is B and ball 3 is Y), YB, BB, and YY the four possible compositions of the urn, where the one R ball is suppressed. Now, subtly, as in Table 4 (p. 432), interpret 100B0 as (1/3:0, 1/3:100{BB,BY}0, 1/3:100{BB,YB}0), where the first probability 1/3 describes what happens under ball 1 (payoff 0), the second what happens under ball 2 (the payment 100{BB,BY}0, contingent on the composition of the urn, yields 100 if BB or BY, i.e., if ball 2 is B, and 0 otherwise), and the third what happens under ball 3. Interpret 100R0 as (1/3:100, 1/3:0, 1/3:0). So we rewrite the assumed preference (reordering outcomes for the unambiguous prospect) as
(1/3:0, 1/3:0, 1/3:100) ≻ (1/3:0, 1/3:100{BB,BY}0, 1/3:100{BB,YB}0).
By the s.th.pr. we get, replacing the common outcome 0 on the first coordinate by 100,
(1/3:100, 1/3:0, 1/3:100) ≻ (1/3:100, 1/3:100{BB,BY}0, 1/3:100{BB,YB}0),
rewritten as
(1/3:0, 1/3:100, 1/3:100) ≻ (1/3:100, 1/3:100{BB,BY}0, 1/3:100{BB,YB}0). The latter says: 100{B,Y}0 ≻ 100{B,R}0. This is the second strict preference that is traditionally taken as the second assumption. QED %}
Machina, Mark J. (2011) “Event-Separability in the Ellsberg Urn,” Economic Theory 48, 425–436.
{% This paper presents some examples on choice under ambiguity that trigger new thoughts and insights. It discusses implications for some theories. I do have different opinions about the interpretations of RDU (I prefer this term to CEU (Choquet expected utility)) and of Anscombe-Aumann (AA), explained below.
The paper is entirely focused on the AA model, as if that were the only way to go as soon as a model has both risk and ambiguity; this is the common thinking in the field today, but I disagree with it. A first stage has ambiguous (horse) events, a second stage has risk (roulette) events, and backward induction is used: first the second-stage lotteries are replaced by their certainty equivalents according to EU, and then these are processed according to an ambiguity theory handling the uncertainty about the horses. Not only is the EU assumption empirically questionable here, but so is the backward induction assumption. It entails conditioning on each individual ambiguous event, that is, treating each such event as separable. While still questionable, it is least questionable if the resolution of risk comes in a stage after the resolution of ambiguity, so if it is two-stage as usually assumed in AA and as also assumed above. A typical case is where the second-stage risk is conditional on the first-stage resolution; i.e., the roulette lottery li will only be carried out if horse hi wins the race. This is the case of pp. 3821-3822, where first the composition of the Ellsberg urn (the horse) is determined and the corresponding objective risk (roulette lottery) is only carried out if the corresponding composition of the urn obtains.
Several authors have argued that the two-stage setup of AA with the risky events second and then backward induction is unfortunate. Conditioning and separability are, under the assumption of EU for risk, more plausible for roulette events and, hence, it would work better to put the horse race first. Wakker (2011 Theory and Decision, p. 19 top and p.19 penultimate para) cites Jaffray (personal communication) for this viewpoint. Further arguments are in Wakker (2010 §10.7), Baillon, Chen, & Halevy (2015), and Bommier (2017).
For the above reason, footnote 11 on p. 3818, claiming simultaneous resolution of all uncertainties in this paper, is misleading. It makes the backward induction assumed throughout the paper less convincing. As I wrote above, pp. 3821-3822, describing Ellsberg’s urn as two-stage, are for instance very hard to reconcile with the simultaneity claim of footnote 11.
P. 3815: the slightly bent coin example is a small variation of Machina’s (2009) reflection example. The risk in the 2009 example need not be perfect risk, but can involve a little ambiguity, close to risk, while maintaining the paradox. This is what the bent coin example illustrates. These two examples are genuine counterexamples to RDU. RDU assumes that people go entirely by cumulative events, but in reality people are still guided a bit also by single-outcome events, and these examples beautifully show it.
P. 3815 thermometer example uses the basic idea of Machina (2004 ET). If the DM is subject to the Allais paradox for risk, then in a continuum state space with enough differentiability it will show up. For instance, if we measure temperature, we can gamble on the 5th & 6th digits, and these are by all means subject to objective probabilities (my countryman, the philosophical mathematician L.E.J. Brouwer, would say that they are undetermined), and can be used to bring up the Allais paradox with risk. This point can be understood without reading the mathematical proofs that Machina provides for completeness.
P. 3815, the third example on ambiguity at high versus low outcomes, considers Ellsberg’s 3-color paradox, with outcomes 100, c, and 0, where c is the CE (certainty equivalent) of 100½0 (the fifty-fifty lottery yielding 100 or 0), assuming EU for risk. In urn 1 we have the highest outcome 100 at the unambiguous color red, and the other outcomes at the other colors (0 for black and c for white). In urn 2 we have the lowest outcome 0 at the unambiguous color red, and the other outcomes at the other colors (c for black and 100 for white). In the former case, ambiguity is at the lower outcomes, and in the latter it is at the higher outcomes. DMs may well strictly prefer one urn to the other and not be indifferent. If we, however, use an AA model conditioning on the true composition of the urn, then, as shown in the table on p. 3831, the two urns yield the same EU conditional on each composition. Hence all AA-based models require indifference. The example, if not giving indifference, falsifies the whole AA approach.
DETOUR [Wakker (2010 Figure 10.7.1)]. I hope that the readers can now bear a self-reference. Wakker (2010 Figure 10.7.1) illustrates the same kind of failure of AA but, I think, more clearly. It assumes two horses s1 and s2; write 100½0 for the fifty-fifty lottery yielding 100 or 0. In the first choice situation, in the left figure, the choice is between (s1: 100½0, s2: 100½0) and (s1: c, s2: 100½0). It assumes c such that we have indifference, taking c = 40, but I will maintain Machina’s notation c here. Under AA, c must then be the CE of 100½0. The second choice situation, in the right figure, has a choice between (s1: 100½0, s2: c) and (s1: c, s2: c). So the common outcome under s2, 100½0, has been replaced by another, under AA equivalent, common outcome c. It is plausible that ambiguity aversion gives a strict preference for the sure c in the second situation, but AA requires indifference. This example considers two choices, as does Machina’s (c is derived from an indifference there too), but has simpler stimuli, and the violation of indifference is more plausible, with a clear direction predicted. Note that under AA all four prospects considered in my example assign the same EU to s1 and s2, and should all four be indifferent. A difference with Machina’s example is that my example does not appeal to whether ambiguity aversion is increasing or decreasing in outcomes, but to ambiguity aversion per se.
[END OF DETOUR]
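The AA backward-induction step in the detour example can be sketched as follows (my own illustration, assuming EU with linear utility for risk, so that the CE of the fifty-fifty 100-or-0 lottery is c = 50): replacing each state’s lottery by its certainty equivalent reduces all four acts to the same statewise vector, so any AA-based functional must rank them indifferent.

```python
# Sketch of AA backward induction in the detour example, assuming
# (hypothetically) EU with linear utility for risk.
def ce(lottery):
    """Certainty equivalent under EU with linear utility (an assumption)."""
    return sum(p * x for p, x in lottery)

fifty_fifty = [(0.5, 100), (0.5, 0)]
c = ce(fifty_fifty)  # 50 under linear utility

# The four acts of the detour example: state -> lottery (sure c is [(1, c)]).
acts = {
    'A': {'s1': fifty_fifty, 's2': fifty_fifty},
    'B': {'s1': [(1, c)],    's2': fifty_fifty},
    'C': {'s1': fifty_fifty, 's2': [(1, c)]},
    'D': {'s1': [(1, c)],    's2': [(1, c)]},
}

# AA backward induction: replace each state's lottery by its CE first.
reduced = {name: (ce(act['s1']), ce(act['s2'])) for name, act in acts.items()}
print(reduced)  # every act reduces to the same statewise vector
```

Whatever ambiguity functional is applied to these reduced statewise vectors, the four acts come out indifferent, which is the violation of AA that the detour argues for.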
P. 3818: if we face the simultaneous uncertainty of an ambiguous horse race and a risky lottery, then in general under nonEU correlations between conditional lotteries may be relevant. If lottery 1 gives a high outcome conditional on s1, then does lottery 2 give a high outcome under s2? Outside the separability of EU this can be relevant. However, by the very notation on p. 3818, describing only the roulette lotteries conditional on the horses (explicitly called conditional on p. 3820 l. 3), Machina already excludes such information, and is already focusing on the AA model with the questionable conditioning-on/separability-of horses. This affects all models considered, in Eqs. 1-6. Whereas smooth preferences, variational preferences, and nowadays multiple priors have been considered only in the AA framework, RDU has well been considered outside of it (Gilboa 1987; Wakker 2010), and I regret that Machina implicitly assumes that it satisfies AA. Footnote 34 on p. 3832 states the point, and so do p. 3835 lines -4/-2: “and hence cannot be strictly ranked by models which evaluate the objective uncertainty in mixed prospects solely through these statewise values.”
P. 3821 l. -3 is misleading in calling the AA approach the “appropriate state space” although a linguistic escape route for the author can be that some lines before he has conditioned on the AA model.
P. 3829 l. -3 and elsewhere: I would not take the example as single-source. The 7th decimal generates a different subalgebra than the 1st digit and East-West, and these different subalgebras are better taken as different sources. The whole source method assumes one grand state space, with sources different subalgebras (or more general systems than algebras because intersection-closedness is not a natural requirement here).
Conclusions of the paper such as RDU being violated by the examples are often misleading, because it is not RDU that is violated but RDU-joint-with-AA. A linguistic escape route for the author can be that on p. 3819 he defines RDU as incorporating AA, so whatever he says about RDU is to be taken that way.
In the ambiguity at low versus high outcomes problem, RDU (without AA) can very easily accommodate the strict preferences. I write v for the weighting function rather than the usual W, to avoid confusion with W for white. For empirical plausibility we would need nonEU for risk, but let me stay with the paper and AA and have EU for risk. Then we let the decision weight of event R always be 1/3, i.e., v(R ∪ E) − v(E) = 1/3 for the relevant disjoint events E (so v(R) = 1/3 and v(W ∪ B) = 2/3). We set v(R ∪ W) = 0.6 < 2/3, inducing pessimism for the low-ranked events in urn 1, and v(W) = 0.4 > 1/3, generating optimism for the high-ranked events in urn 2. A strict preference for urn 2 results. If we want to use a more detailed state space specifying the compositions of the urns, then we take the weighting function v such that the union of all events giving W has v-value 0.4, the union of all events giving R ∪ W has v-value 0.6, and so on. The latter weighting function does NOT have the horse events (composition of urn) separable and, hence, does not fit in the AA model, but this is desirable to avoid the unwarranted separabilities.
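The numbers above can be checked in a small sketch (assuming, for illustration only, linear utility, so that c, the CE of the fifty-fifty 100-or-0 lottery, equals 50):

```python
# Capacity values from the annotation; v(W u B) = 2/3 is implied by giving
# R decision weight 1/3 also at the worst rank.  Linear utility assumed.
v = {frozenset({'R'}): 1/3,             # R best-ranked: weight 1/3
     frozenset({'W'}): 0.4,             # > 1/3: optimism at the top of urn 2
     frozenset({'R', 'W'}): 0.6,        # < 2/3: pessimism in urn 1
     frozenset({'W', 'B'}): 2/3,        # so R worst-ranked also gets 1/3
     frozenset({'R', 'W', 'B'}): 1.0}

def rdu(act):
    """RDU value of act (dict: color -> utility of outcome) under v."""
    colors = sorted(act, key=act.get, reverse=True)  # best outcome first
    total, cum, prev = 0.0, set(), 0.0
    for col in colors:
        cum.add(col)
        w = v[frozenset(cum)]
        total += act[col] * (w - prev)   # rank-dependent decision weight
        prev = w
    return total

c = 50  # CE of the fifty-fifty 100-or-0 lottery under linear utility
urn1 = {'R': 100, 'W': c, 'B': 0}
urn2 = {'W': 100, 'B': c, 'R': 0}
print(rdu(urn1), rdu(urn2))  # urn 2 comes out strictly preferred
```

So RDU per se, without the AA separabilities, accommodates the strict preference that the AA-based models must rule out.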
P. 3835 end of 1st para, misleadingly, writes: “But for that same reason so would each of the four major models, which suggests that correcting for attitudes toward objective risk, none can depart from SEU in the direction of a Friedman-Savage (1948)-type aversion to ambiguity in low-likelihood disasters coupled with a preference for ambiguity in low-likelihood high-stakes gains.” Not only can RDU without AA do this easily; more than that, what the author describes is the major empirical finding (likelihood insensitivity generated by ambiguity; ambiguity seeking for unlikely).
criticism of monotonicity in Anscombe-Aumann (1963) for ambiguity: p. 3835 3rd bulleted point: one has to read this point three times before one sees that it in fact says that the AA model itself is violated here. The AA model is described using the complex words “by models which evaluate the objective uncertainty in mixed prospects solely through these statewise values.” %}
Machina, Mark J. (2014) “Ambiguity Aversion with Three or More Outcomes,” American Economic Review 104, 3814–3840.
{% %}
Machina, Mark J. & William S. Neilson (1987) “The Ross Characterization of Risk Aversion: Strengthening and Extension,” Econometrica 55, 1139–1149.
{% The definitions of mean-preserving spreads were given explicitly by Rothschild & Stiglitz for discrete distributions and for density functions. This paper shows that these definitions also work for general distributions. %}
Machina, Mark J. & John W. Pratt (1997) “Increasing Risk: Some Direct Constructions,” Journal of Risk and Uncertainty 14, 103–127.
{% This paper characterizes the first part of EU (that uncertainties are expressed in terms of probabilities) without requiring the second part (that probability-weighted average utility is used as evaluation), calling the first part probabilistic sophistication. This separation into two steps had often been described before, for instance in Cohen, Jaffray, & Said (1987), but also in decision-analysis works of the 1960s. The present paper is the first to give a decision foundation to it. %}
Machina, Mark J. & David Schmeidler (1992) “A More Robust Definition of Subjective Probability,” Econometrica 60, 745–780.
{% P. 118: beliefs are derived from bets. In several places the authors write that probabilistic sophistication is normative (last sentence of abstract: “correct,” “proper”; last sentence of §1: “rational formulation”; p. 121 next-to-last sentence: “proper normative term”). P. 122, point 2, claims that most people think that violations of expected utility are not mostly due to violations of probabilistic sophistication, but are mostly due to violations of expected utility with probabilities given. Both claims go against Schmeidler (1989). Fortunately, both authors have dissociated themselves from both of these claims on later occasions. %}
Machina, Mark J. & David Schmeidler (1995) “Bayes without Bernoulli: Simple Conditions for Probabilistically Sophisticated Choice,” Journal of Economic Theory 67, 106–128.
{% survey on nonEU %}
Machina, Mark J. & Marciano Siniscalchi (2014) “Ambiguity and Ambiguity Aversion.” In Mark J. Machina & W. Kip Viscusi (Eds.) “Handbook of the Economics of Risk and Uncertainty, Vol. 1,” North-Holland, Amsterdam.
{% questionnaire versus choice utility: seems to criticize economists who asked business men for their probability judgments. %}
Machlup, Fritz (1946) “Marginal Analysis and Empirical Research,” American Economic Review 36, 519–544.
{% Find extremity-orientedness in DFE, much like their apparently preceding 2014 JBDM paper. In this paper they use only 50-50 prospects, but find risk seeking for gains (risk seeking for symmetric fifty-fifty gambles) and risk aversion for losses. %}
Madan, Christopher R., Elliot A. Ludvig, & Marcia L. Spetch (2014) “Remembering the Best and Worst of Times: Memories for Extreme Outcomes Bias Risky Decisions,” Psychonomic Bulletin and Review 21, 629–636.
{% real incentives/hypothetical choice: for time preferences: they seem to compare real with hypothetical choice. Discount rate 0.053 for hypothetical and 0.037 for real. %}
Madden, Gregory J., Andrea M. Begotka, Bethany R. Raiff, & Lana L. Kastern (2003) “Delay Discounting of Real and Hypothetical Rewards,” Experimental and Clinical Psychopharmacology 11, 139‑145.
{% %}
Maddy, Penelope (1988) “Believing the Axioms. I,” Journal of Symbolic Logic 53, 481–511.
{% Show that complexity negatively affects the value in choices between lotteries over two-period payments. %}
Mador, Galit, Doron Sonsino, & Uri Benzion (2000) “On Complexity and Lotteries’ Evaluations — Three Experimental Observations,” Journal of Economic Psychology 21, 625–637.
{% Seem to show that default enrolment in pension savings, as in the later paper Thaler & Benartzi (2004), actually reduces total savings because people who by themselves would have saved more now save only the default. %}
Madrian, Brigitte & Dennis F. Shea (2001) “The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior,” Quarterly Journal of Economics 116, 1149–1159.
{% In agreement with the finding of Tversky & Fox (1995, QJE), they find that WTP and WTA give less ambiguity aversion than pairwise choice. They show how this phenomenon will generate preference reversals. %}
Maffioletti, Anna (2002) “The Effect of Elicitation Methods on Ambiguity Aversion: An Experimental Investigation.”
{% %}
Maffioletti, Anna & Michele Santoni (2000) “Do Trade Union Leaders Exhibit Ambiguity Reaction?,” Rivista Internazionale di Scienze Sociali 4, 357–376.
{% %}
Maffioletti, Anna & Michele Santoni (2002) “Ambiguity and Partisan Business Cycles,” Finanzarchiv 59, 387–406.
{% natural sources of ambiguity:
Throughout, they measure everything by asking for minimal selling prices and taking those as certainty equivalents. This can be expected to have generated a general overestimation of the certainty equivalents and, thus, a general underestimation of risk aversion, which is indeed found.
EXPERIMENT 1 (N = 25): they do the usual Ellsberg urn. In addition, fictitious elections where a highly reputable opinion-poll agency says that the probability of some party winning lies in [0.4,0.6], [0.3,0.7], [0.2,0.8], [0.1,0.9], or [0.0,1.0], respectively. So, it is always probability 0.5 plus ambiguity. They find risk neutrality and ambiguity aversion. The latter increases as the ambiguity (the interval around 0.5) gets larger.
ambiguity seeking for unlikely: they report this on p. 222.
EXPERIMENT 2 (N = 34): they used the random incentive system with random prize system (Becker–DeGroot–Marschak; BDM). Now it referred to some real elections in Italy and the UK, taking place some days later (natural sources of ambiguity). First a minimal-selling-price task for known probabilities 0.1, 0.2, …, 0.9, where they find risk neutrality. Then subjects had to give minimal selling prices for a number of disjoint events related to the elections. The authors assumed linear utility, and thus derived decision weights. The decision weights turn out to add up to considerably more than 1 in total. Together with the risk neutrality found for given probabilities, this suggests massive ambiguity seeking (comparing with risk neutrality for given probabilities makes it an interpersonal comparison, which as such is not affected by the underestimation of risk aversion generated by asking for minimal selling prices). Strange and interesting.
correlation risk & ambiguity attitude: although they have the data, they do not report this. %}
Maffioletti, Anna & Michele Santoni (2005) “Do Trade Union Leaders Violate Subjective Expected Utility? Some Insights from Experimental Data,” Theory and Decision 59, 207–253.
{% The author assumes prospect theory (new ’92 version). He lets -expectation denote the PT value if utility were linear and there were no loss aversion; i.e., if decision weights were probabilities. In terms of these, he gives results on risk aversion. In particular, he shows that if the utility functions for gains and for losses are both power functions, with powers between 0 and 1 and the loss-power closer to 1 (closer to linear), then weak risk aversion in the sense of -expectation must be violated. Note that the preference conditions use probability weighting as input, and are directly observable to the extent that probability weighting is. This paper further demonstrates that power utility is questionable near 0. %}
Maggi, Mario A. (2006) “Loss Aversion and Perceptual Risk Aversion,” Journal of Mathematical Psychology 50, 426–430.
{% Kirsten&I: results like: if space of programs is compact, locally convex, etc., and functional is upper continuous, etc., then optimum exists. %}
Magill, Michael J.P. (1981) “Infinite Horizon Programs,” Econometrica 49, 679–711.
{% Final sentence (translated from Dutch to English): “So even if you were not permitted to add up apples and oranges, it is best to still do it, at least if you’re an economist.” %}
Magnus, Jan R. (1997) “Appels en Peren,” Univers 15 (December 11) 4.
{% Shows terminology for doing additive representation theory for economists: technology, input, and the like. %}
Magnus, Jan R. & Alan D. Woodland (1990) “Separability and Aggregation,” Economica 57, 239–247.
{% Reply by Eels (1987); also concerns R.C. Jeffrey model %}
Maher, Patrick (1987) “Causality in the Logic of Decision,” Theory and Decision 22, 155–172.
{% Popper, Kuhn, Bayesians %}
Maher, Patrick (1990) “Why Scientists Gather Evidence,” British Journal for the Philosophy of Science 41, 103–119.
{% The authors use the term preference reversal differently than the experimental decision literature does. They test dynamic and sequential reformulations of the 3-color Ellsberg paradox and find that these reformulations matter. That is, they find some dynamic decision principles violated. %}
Maher, Patrick & Yoshihisa Kashima (1997) “Preference Reversal in the Ellsberg Problems,” Philosophical Studies 88, 187–207.
{% Dutch book: the author proposes an interpretation of Dutch books that implies the laws of probability without implying perfect knowledge about oneself. The reasoning involves bets on one’s own probability judgments. %}
Mahtani, Anna (2015) “Dutch Books, Coherence, and Logical Consistency,” Noûs 49, 522–537.
{% normal/extensive form %}
Mailath, George J., Larry Samuelson, & Jeroen M. Swinkels (1994) “Normal Form Structures in Extensive Form Games,” Journal of Economic Theory 64, 325–371.
{% %}
Mak, King-Tim (1987) “Coherent Continuous Systems and the Generalized Functional Equation of Associativity,” Mathematics of Operations Research 12, 597–625.
{% %}
Mak, King-Tim (1988) “Separability and the Existence of Aggregates.” In Wolfgang Eichhorn (ed.) Measurement in Economics (Theory and Applications of Economic Indices), 649–670, Physica-Verlag, Heidelberg.
{% Review of quality of life measurements of elderly. %}
Makai, Peter, Werner B.F. Brouwer, Marc A. Koopmanschap, Elly A. Stolk, & Anna P. Nieboer (2014) “Quality of Life Instruments for Economic Evaluations in Health and Social Care for Older People: A Systematic Review,” Social Science & Medicine 102, 83–93.
{% Didactical text to show how EU can be used in farming. %}
Makeham, John P., Alfred H. Halter, & John L. Dillon (1968) “Best-Bet Farm Decisions.” The Agricultural Business Research Institute, University of New England, Armidale, Australia.
{% %}
Makridakis, Spyros & Robert L. Winkler (1983) “Averages of Forecasts: Some Empirical Results,” Management Science 29, 987–996.
{% %}
Malakooti, Benham (1991) “Measurable Value Functions for Ranking and Selection of Groups of Alternatives,” Journal of Mathematical Psychology 35, 92–99.
{% foundations of quantum mechanics: %}
Maleeh, Reza (2015) “Bohr’s Philosophy in the Light of Peircean Pragmatism,” Journal for General Philosophy of Science 46, 3–21.
{% %}
Maleki, Hamed & Sajjad Zahir (2013) “A Comprehensive Literature Review of the Rank Reversal Phenomenon in the Analytic Hierarchy Process,” Journal of Multi-Criteria Decision Analysis 20, 141–155.
{% %}
Malinas, Gary (1993) “Reflective Coherence and Newcomb’s Problems: A Simple Solution,” Theory and Decision 35, 151–166.
{% %}
Malinvaud, Edmond (1952) “Note on von Neumann-Morgenstern’s Strong Independence Axiom,” Econometrica 20, 679.
{% revealed preference %}
Malishevski, Andrey V. (1993) “Criteria for Judging the Rationality of Decisions in the Presence of Vague Alternatives,” Mathematical Social Sciences 26, 205–247.
{% foundations of quantum mechanics %}
Malley, James D. & John Hornstein (1993) “Quantum Statistical Inference,” Statistical Science 8, 433–457.
{% %}
Malmnäs, Per-Erik (1981) “From Qualitative to Quantitative Probability.” Almqvist & Wiksell International, Stockholm.
{% loss aversion: erroneously thinking it is reflection: in several places, for instance in the title, the authors suggest that they investigate loss aversion. In reality they only investigate reflection, i.e., risk aversion for gains versus risk seeking for losses. Although they are in the context of prospect theory, they unfortunately equate risk aversion with concave utility and risk seeking with convex utility (equate risk aversion with concave utility under nonEU). This is only correct if we assume no probability weighting (an assumption common in finance) and nonmixed prospects. The latter the authors seem to assume throughout although it is not clear (see below).
The authors consider WTP-WTA for gain or loss lotteries, which they designate as gain- or loss domain. My concern here is that if one pays for a gain prospect in WTP, then due to the payment one may still lose. Hence one in reality then deals with mixed prospects, and not with gain prospects as the authors assume. The authors, however, throughout assume to be either in a gain-domain where there are only gains, or in a loss domain where there are only losses. Then loss aversion never plays a role. All their speculations, indeed, only concern reflection and not loss aversion, although they suggest otherwise.
P. 104 bottom affirmatively cites a strange claim from another paper that subjects with an unbounded utility function for gains and a bounded utility function for losses are risk seeking, along with some other similar claims. Probably this claim was only made for a particular utility family used, probably CARA (linear-exponential), and then under EU I guess.
The first two experiments do WTP-WTA with a second-price sealed-bid auction, and the third does money allocation. The authors investigate patterns of risk attitude, such as the fourfold pattern, but find all kinds of patterns (reflection at individual level for risk). Due to my confusion about whether the authors deal with mixed prospects or not, I do not know how to interpret their results. %}
Malul, Miki, Mosi Rosenboim, & Tal Shavit (2013) “So when Are You Loss Averse? Testing the S-Shaped Function in Pricing and Allocation Tasks,” Journal of Economic Psychology 28, 631–645.
{% intuitive versus analytical decisions; Compare result of decision analysis to directly expressed intuitive preference (which probability at … would make these two treatments indifferent?, etc.)
Their text suggests they take direct intuitive judgment as gold standard and think that decision analysis should merely agree with direct intuition, in deviation from Raiffa’s (1961) citation on decision analysis “We do not have to teach people what comes naturally.” Compare Kimbrough & Weber (1994) who also confront decision analysis results with direct intuitive choices. %}
Man-Son-Hing, Malcolm, Andreas Laupacis, Annette M. O’Connor, Dougal Coyle, Renee Berquist, & Finlay McAlister (2000) “Patient Preference-Based Treatment Thresholds and Recommendations: A Comparison of Decision-Analytic Modeling with the Probability-Tradeoff Technique,” Medical Decision Making 20, 394–403.
{% Introduces fractals (although he does not use that term yet) to suggest that the length of the English coast is infinite. %}
Mandelbrot, Benoit (1967) “Statistical Self-Similarity and Fractional Dimension,” Science 156, 636–638.
{% Seems to have written that private vices lead to public benefits, meaning that if all individuals pursue their self-interest then this will give good results for society. It is, I think, a poem with comments added later. %}
Mandeville, Bernard (1714) “The Fable of the Bees, or Private Vices, Public Benefits.”
{% %}
Mandler, Michael (1999) “Dilemmas in Economic Theory: Persisting Foundational Problems of Microeconomics.” Oxford University Press, New York.
{% A discussion piece arguing for incomplete preference %}
Mandler, Michael (2004) “Status Quo Maintenance Reconsidered: Changing or Incomplete Preferences?,” Economic Journal 114, F518–F535.
{% A discussion piece arguing for incomplete preference. Distinguishes between actively chosen bundles and passively retained bundles. %}
Mandler, Michael (2005) “Incomplete Preferences and Rational Intransitivity of Choice,” Games and Economic Behavior 50, 255–277.
{% Argues for non-revealed-preference inputs, such as neuroeconomic measurements of utility.
Cardinality and ordinality are meta-properties in the sense that they relate to properties of, say, utility functions. Consider the property of being vNM utility in the sense that probability-weighted average utility represents choices over prospects (EU). For each preference relation, the set of vNM functions consists of one such function together with all of its strictly increasing affine transforms. The general concept of vNM utility can be called cardinal. Also each single vNM utility u can be called cardinal. One can look at the set of all strictly increasing affine transforms of u; i.e., the set of all possible u’s to represent the given preference through probability-weighted average. Work by Luce & Narens on n-point uniqueness and m-point homogeneity, and other works by Eichhorn and, if I remember right, in Foundations of Measurement Vol. II, give reasons why ordinal and cardinal scales naturally arise, as do nominal scales, ratio scales, absolute scales, and possibly metric scales (preserving orderings of differences, which need not be cardinal if the range is coarser than a continuum), and why other kinds of scales do not naturally arise. One thing is that sets of admissible transformations have nice group structures.
This paper looks only at the latter thing, taking sets of functions. It designates such sets with the broad term psychology. It considers, for instance, the set of all concave functions, or the set of all continuous functions, without yet relating it to defining properties. A nice illustration is from work on stochastic dominance and incomplete preferences: if we only know that utility is concave, already we can conclude that a prospect is more preferred than a mean-preserving spread.
The paper organizes concepts such as one “psychology” being weaker than another if being a superset; etc. Thus, obviously, the set of all concave utility functions is intermediate between an ordinal and cardinal class.
A nice example of a singleton psychology is in health, where cardinal utility is further pinned down by setting U = 0 at death and U = 1 for perfect health, so that utility is uniquely determined and so that all utility results from all different studies can immediately be compared.
The paper points out that now properties such as continuity of utility can be given a background justification, being of a continuous psychology (p. 1131). %}
Mandler, Michael (2006) “Cardinality versus Ordinality: A Suggested Compromise,” American Economic Review 96, 1114–1136.
{% %}
Mandler, Michael (2007) “Strategies as States,” Journal of Economic Theory 135, 105–130.
{% On revealed preference with choice functions and incompleteness.
Considers sequential trades but one-shot consumption at the end of all trades. Thus we can observe several preferences. Distinguishing incompleteness and indifference is not possible in one-shot decisions, but it is in sequential decisions. Real indifference is preference substitutability. Incompleteness must be involved if a sequence of nonpreferences, if taken as indifference, would lead to the choice of a dominated option. %}
Mandler, Michael (2009) “Indifference and Incompleteness Distinguished by Rational Trade,” Games and Economic Behavior 67, 300–314.
{% Imagine maximization of a preference relation with 2n indifference classes. We can do this maximization by asking n yes-or-no questions, each time dropping the alternatives with the “no” answer: first question separates upper and lower half, 2nd separates upper half of upper half and upper half of lower half from lower half of upper half and lower half of lower half, and so on. So this is an efficient procedure-like way to maximize utility. Nice! %}
Mandler, Michael, Paola Manzini, & Marco Mariotti (2011) “A Million Answers to Twenty Questions: Choosing by Checklist,” Journal of Economic Theory 147, 71–92.
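The n-question procedure described above can be sketched as a binary search over indifference classes; this is my own illustrative code, with invented names, not taken from the paper.

```python
def choose_by_checklist(items, rank, n_questions):
    """Find the top-ranked items among `items`, where `rank` maps each
    item to an indifference class in {0, ..., 2**n_questions - 1},
    using only n_questions yes-or-no questions."""
    lo, hi = 0, 2 ** n_questions  # invariant: the best rank present lies in [lo, hi)
    for _ in range(n_questions):
        mid = (lo + hi) // 2
        # question: "is any remaining alternative in the upper half?"
        if any(rank(x) >= mid for x in items):
            items = [x for x in items if rank(x) >= mid]  # drop the "no" alternatives
            lo = mid
        else:
            hi = mid
    # after n questions the interval has width 1, so lo is the maximal rank present
    return [x for x in items if rank(x) == lo]

# 10 alternatives ranked by identity: 4 questions suffice (2**4 = 16 classes).
print(choose_by_checklist(list(range(10)), lambda x: x, 4))  # → [9]
```

Each question halves the candidate rank interval, which is why 2^n indifference classes need only n questions.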
{% N = 74. Hypothetical (footnote 11, p. 447: because BDM (Becker-DeGroot-Marschak) needs, according to the authors, EU). Btw, although EU, implemented the natural way in dynamic choice, is sufficient for BDM, it is not necessary! A common confusion.
PT falsified: when they tried to refine EU by CEU (Choquet expected utility), they actually got worse results. So CEU picks up more noise than essential things. To elicit CEU, they first assume EU for given probabilities so as to get utility and then elicit capacities from that. Or they equate the capacity of an event with the probability of a matched known-probability event, which also requires EU for risk. Martin Weber (personal communication) conjectured that the poor performance of CEU may be due to participants first getting many known-probability questions preceding the ambiguity questions which may have distorted their ambiguity perception.
ambiguity seeking for losses: they find ambiguity aversion for gains but, on average, ambiguity neutrality for losses. P. 448 2nd para: significant difference between gains and losses. Capacities for losses are significantly different than for gains.
reflection at individual level for ambiguity: although they have the data, within-subject, they do not report it. %}
Mangelsdorff, Lukas & Martin Weber (1994) “Testing Choquet Expected Utility,” Journal of Economic Behavior and Organization 25, 437–457.
{% They give subjects hypothetical info, such as imagining $1500 damage to their car and what they would do, and then have them do a simple cognitive task. Poor people do worse on the cognitive task than rich people. In a control treatment the poor perform as well as the rich, so it is the info that does it. In a 2nd treatment, they ask farmers in India to do the cognitive task shortly before their harvest (then pressure and uncertainty) and after. Again, before the harvest the farmers do worse than after. The authors interpret their findings as meaning that financial uncertainty makes the poor cognitively worse and, hence, makes them take worse decisions (poverty trap). The problem is that there are too many confounds. It may just be that the hypothetical info in the first treatment at that moment annoys poor people more than rich, and nothing more than that causes the difference. The authors try to control for some things, such as physical measurements of stress, but there remain too many uncontrolled emotions to support their interpretations. Psychologists, when studying such vaguely defined concepts, will use 15 rather than 2 experiments, each individually questionable but together making the story plausible. There are many studies into priming effects, where small ad hoc details, rather than something as far-reaching as cognitive ability, impact choices.
When the authors write, top middle column of the first page: “This suggests a causal, not merely correlational, relationship between poverty and mental function. We tested this using two …” they are overly optimistic about the possible correlations found, and even more so about these being causal. %}
Mani, Anandi, Sendhil Mullainathan, Eldar Shafir, & Jiaying Zhao (2013) “Poverty Impedes Cognitive Function,” Science 341, 976–980.
{% Try to replicate findings by Ariely, Loewenstein & Prelec (2003) with N = 116 subjects on anchoring. They find the same effects, but considerably weaker. They argue that fundamentals in economics may be less in danger than often thought and suggested by Ariely et al.
Simonsohn, Simmons, & Nelson (2014) criticize this study, arguing that it has the same effect size as Ariely et al, but has too much noise to draw any conclusion, so that it does not disprove the findings of Ariely et al., and does not provide the new evidence claimed in the title.
This paper next presents a theoretical model, with researcher competence as a parameter, to analyze how big the chance at false positives is. Has to do with the publication bias.
It is not very surprising that findings of great irrationality are volatile and can much depend on very small details, in the same way as loss aversion is very volatile. Yet such irrationalities, such as loss aversion, are often so strong that we should reckon with them. %}
Maniadis, Zacharias, Fabio Tufano, & John A. List (2014) “One Swallow Doesn’t Make a Summer: New Evidence on Anchoring Effects,” American Economic Review 104, 277–290.
{% crowding-out: p. 62 seems to point out that a rise in interest rate crowds out private investment. %}
Mankiw, N. Gregory (1994) “Macroeconomics.” Worth Publishers, New York.
{% Compare VAS evaluations from the general public with those from patients in a health state. Cannot compare well in an absolute sense because of different endpoints to the scale. But relative weightings can be compared. There is agreement on physical dimensions such as mobility, but there is discrepancy regarding mental aspects such as fear and suffering from pain. %}
Mann, Rachel, John Brazier, & Aki Tsuchiya (2009) “A Comparison of Patient and General Population Weightings of EQ-5D Dimensions,” Health Economics 18, 363–372.
{% %}
Manne, Alan S. (1952) “The Strong Independence Assumption-Gasoline Blends and Probability Mixtures,” Econometrica 20, 665–668.
{% Z&Z
The RAND Health Insurance Experiment, seems to have shown that under free health care the consumers spend 46% more than in a plan with 95% coinsurance. %}
Manning, Willard G., Joseph P. Newhouse, Naihua Duan, Emmett B. Keeler, Arleen Leibowitz, & M. Susan Marquis (1987) “Health Insurance and the Demand for Medical Care: Evidence from a Randomized Experiment,” American Economic Review 77, 251–277.
{% P. 416 defines uncertainty as decisions with known probabilities; i.e., what is more commonly called risk. P. 416: “For whatever reason, the study of decisions under ambiguity has remained a peripheral concern of the profession.”
Ambiguity is handled through statistical identification techniques.
Seems to allow for incomplete preferences under ambiguity, and writes on p. 418 and elsewhere as if it were a general fact that the addition of new choice alternatives may lead to an inferior action under ambiguity, something that in fact follows only in the very particular model that the author considers later. %}
Manski, Charles F. (2000) “Identification Problems and Decisions under Ambiguity: Empirical Analysis of Treatment Response and Normative Analysis of Treatment Choice,” Journal of Econometrics 95, 415–442.
{% proper scoring rules-correction;
Based on a lecture and, hence, not to be judged by the usual criteria of conciseness, innovativeness, and completeness of references.
probability elicitation;
questionnaire versus choice utility: pleas for incorporating also choiceless data. Reports some studies by himself such as telephonic interviews asking people for direct probability judgments.
He takes “rational expectations” to mean that consumers know true probabilities. His “solution” to the problem of ambiguity is that subjects be allowed to express intervals of probability.
P. 1337, for economists’ reasons to exclude choiceless data: “I sought to determine the scientific basis underlying economists’ hostility to measurement of expectations [direct judgments of subjective probabilities], but found it to be meager.” He then, however, does not connect with the broader issue of the ordinal revolution, but considers only discussions of probability judgment.
P. 1343, on the problem whether direct judgments of probability (expectations) are valid, mentions that they have “face validity,” and then continues: “Having demonstrated that probabilistic questioning does ‘work’ …”