Bibliography




{% ambiguity seeking: dictators prefer ambiguous unfair allocations to unambiguous unfair allocations because then their selfishness is harder to criticize. %}

Haisley, Emily C. & Roberto A. Weber (2010) “Self-Serving Interpretations of Ambiguity in Other-Regarding Behavior,” Games and Economic Behavior 68, 614–625.


{% Dutch book: under arbitrage, which is the same as a Dutch book, your neutral decisions can be combined into a sure loss, which is bad. The paper opens with a purported counterargument: your neutral decisions can then also be combined into a sure gain, and isn’t that something very good for you? Oh well … It continues with many arguments in the same spirit. P. 801: “I have not seen any argument that in virtue of avoiding the inconsistency of Dutch-bookability, at least some coherent agents are guaranteed to avoid all inconsistency.” %}

Hájek, Alan (2008) “Arguments for–or against–Probabilism?,” British Journal for the Philosophy of Science 59, 783–819.


{% %}

Hájek, Alan & Harris Nover (2012) “Rationality and Indeterminate Probabilities,” Synthese 187, 33–48.


{% Logarithmic utility seems to be induced by a growth-optimal model (p. 350); argues strongly against mean-variance, that it violates stoch. dom. etc. %}

Hakansson, Nils H. (1971) “Capital Growth and the Mean-Variance Approach to Portfolio Selection,” Journal of Financial and Quantitative Analysis 6, 517–557.
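The dominance point can be made concrete with a small numeric sketch (my own illustration, not from the paper; the penalty coefficient k is an assumption): a mean-variance evaluator can rank a sure 1 above a prospect that first-order stochastically dominates it.

# Sketch in Python: mean-variance can violate first-order stochastic dominance.
# Y pays 1 or 100 with equal probability, so Y dominates the sure payment X = 1,
# yet with variance penalty k = 0.03 (an assumed coefficient) X scores higher.
outcomes, probs = [1.0, 100.0], [0.5, 0.5]
mean = sum(p * x for p, x in zip(probs, outcomes))
var = sum(p * (x - mean) ** 2 for p, x in zip(probs, outcomes))
k = 0.03
print(1.0, mean - k * var)   # X: 1.0; Y: 50.5 - 73.5075 = -23.0: X ranked above Y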


{% dynamic consistency
Discussed in Paris on March 8, 1999.
Dynamic consistency condition definition seems to comprise RCLA, and is restricted to fixed counterfactual strategies.
Uses a “generalized conditional dominance condition”: if, for every element of the partition, I prefer replacing f by g on that element alone, then I prefer replacing f by g in total. Given dynamic consistency (which is defined in this paper to imply reduction of events), the condition is weaker than forgone-event independence but is “in that spirit.” The condition was introduced independently by Grant, Simon, Atsushi Kajii, & Ben Polak (1999) “Decomposable Choice under Uncertainty.” %}

Halevy, Yoram (1998) “Trade between Rational Agents as a Result of Asymmetric Information.”


{% Considers an experiment with four urns, each with 10 balls of two colors, red and black.
Urn 1 is fifty-fifty;
Urn 2 is of unknown composition.
suspicion under ambiguity: as it should be, subjects can choose the color on which to gamble, so no suspicion.
second-order probabilities to model ambiguity; Urn 3 is two-stage: the first stage randomly chooses one of the 11 compositions of the urn, and the second stage carries out the drawing of the ball from that composition.
Urn 4 is also two-stage, but randomly chooses only a 0-10 or 10-0 composition. So, urn 4 is quite like urn 1.
The author compares the explanation of ambiguity aversion through violations of probabilistic sophistication (Epstein 1999) with the one of Segal that assumes that for the ambiguous urn 2 the subjects subjectively assume a two-stage uncertainty with the first stage uncertainty about the composition of the urn, coupled with violations of RCLA. He makes the plausible but debatable assumption that probabilistic sophistication does not assume violation of RCLA. The two theories then differ regarding predictions about urn 3:
The probabilistic-sophistication explanation says urns 1, 3, 4 all have known probabilities and will be treated alike, with only urn 2 valued lower.
The RCLA-violation-explanation says that urns 2 and 3 will be treated similarly, and will be valued lower than urn 1.
The data support the latter prediction, with urns 2 and 3 valued lower than urn 1. At the group-average level urns 2 and 3 are treated alike, but I guess there remain many differences at the individual level. The author distinguishes two subgroups with different attitudes.
Urn 4 also seems to be treated like urns 2 and 3. Subjects may simply have a general dislike of complex urns. %}

Halevy, Yoram (2007) “Ellsberg Revisited: An Experimental Study,” Econometrica 75, 503–536.
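A quick check of the reduction claim (a sketch under RCLA, with the 11 compositions equally likely in the first stage):

# Under reduction of compound lotteries, two-stage urn 3 collapses to urn 1:
# each composition n/10 of red balls, n = 0..10, has first-stage probability 1/11.
p_red_urn3 = sum((n / 10) * (1 / 11) for n in range(11))
p_red_urn4 = 0.5 * 0 + 0.5 * 1     # only the 0-10 and 10-0 compositions
print(p_red_urn3, p_red_urn4)      # both 0.5, the same as fifty-fifty urn 1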


{% Assumes a consumption stream (c0,c1, …). Assumes that at each time point there is a probability r of death (“implicit risk”), modeled as the 0 consumption outcome from there on, so that the probability of consuming ct (and then all preceding consumptions) is (1−r)^t. In some formal results (Theorem 1) conditions are assumed over all values of r. Risk is processed using nonEU. Each consumption ct is assumed nonnegative; i.e., it is at least as good as death (stated on p. 1152 2nd para l. 2).
The representation is of the form SUMt g((1−r)^t)δ^t u(ct), where u is utility (with the scaling u(0) = 0), δ is the intertemporal discount factor, and g is a probability weighting function. Thus, constant discounting results iff g is linear (EU, the discount factor being δ(1−r)), diminishing impatience corresponds with convex g, and increasing impatience corresponds with concave g. In this way the immediacy effect of intertemporal choice becomes the certainty effect of decision under risk. This analogy has been alluded to many times in the literature but this paper gives a formal model capturing it.
The author uses “diminishing impatience” for the immediacy effect and otherwise uses the expression strongly diminishing impatience. I next discuss separability issues, resulting from emails with the author in March 09.
Saito (2011) will show that there is a confusion of these concepts and that in Theorem 1 there (p. 1150) diminishing impatience (in Halevy’s terminology) does not imply common ratio, but instead is equivalent to the certainty effect, and it is strong diminishing impatience (in Halevy’s terminology) that is equivalent to the common ratio effect.
OBSERVATION 1. The SUM representation above can be obtained by first aggregating risks at each time point, and only then aggregating over time. At each time point t, the probability of consuming ct is (1−r)^t and the probability of consuming 0 is 1 − (1−r)^t. The RDU value at time point t is g((1−r)^t)u(ct). Next these RDU values are aggregated over time, discounted by δ, giving the above SUM.

OBSERVATION 2. The SUM representation above can also be obtained by first aggregating over time, i.e., by considering all consumption paths and their probabilities, and then calculating RDU. This is explained in §4, in particular Theorem 2, p. 1154. This is the author's interpretation in the paper.

The two observations displayed here imply that we have weak separability of time points (even strong, additive) and also weak separability of the risky events. It is well known, by applications of Gorman's (1968) theorem, that this implies strong complete additive separability of time and risk. So, a puzzle for the readers maybe: how can we then still have nonEU? The answer is that we are considering a restricted, comonotonic, domain. Of two uncertain events, the one with the longer life duration always has the better outcomes. The events always have the same ranking position and we look at RDU within one comonotonic set. Thus, even the sure-thing principle holds for uncertainty, and replacing a common outcome on an uncertain event by another one will not affect preference. The model very efficiently combines the aggregation conveniences of classical expected utility and discounted utility models (making the model tractable) with empirical features of nonexpected utility. %}

Halevy, Yoram (2008) “Strotz Meets Allais: Diminishing Impatience and the Certainty Effect,” American Economic Review 98, 1145–1162.
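The agreement of the two aggregation orders in Observations 1 and 2 can be verified numerically. The following sketch is my own illustration, with an assumed weighting function, utility, and consumption stream, and with death taken as certain after period T to keep the horizon finite.

import math

r, delta, T = 0.1, 0.95, 40                 # assumed death risk, discount factor, horizon
g = lambda p: p ** 0.7                      # assumed probability weighting function
u = lambda c: math.sqrt(c)                  # utility with u(0) = 0
c = [5.0 + 0.2 * t for t in range(T + 1)]   # an arbitrary consumption stream

# Observation 1: aggregate risk at each time point, then discount over time.
v1 = sum(g((1 - r) ** t) * delta ** t * u(c[t]) for t in range(T + 1))

# Observation 2: RDU over whole consumption paths ranked by survival time.
# Longer survival always gives a better path (u >= 0), so the domain is comonotonic.
surv = [(1 - r) ** t for t in range(T + 1)] + [0.0]   # P(alive at t); death certain after T
path = [sum(delta ** s * u(c[s]) for s in range(t + 1)) for t in range(T + 1)]
v2 = sum((g(surv[t]) - g(surv[t + 1])) * path[t] for t in range(T + 1))

print(v1, v2)   # equal up to rounding: both aggregation orders give the same value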


{% DC = stationarity: this paper carefully distinguishes the three concepts and tests them separately, in particular, employing the longitudinal data required for testing time consistency (also known as dynamic consistency). It is very similar to Casari & Dragone (2015), but the two studies were done independently and do not cite each other.
The paper uses the common term time consistency for what could be called decision-time independence (the calendar time of consumption remains fixed, but the calendar time of decision-taking is changed; it is a between-preference-relations condition if we take preference relations at different times as different preference relations), the common term stationarity for what could be called consumption-time independence (the calendar time of decision remains fixed, but the calendar time of consumption is changed; it is the only within-preference-relation condition), and the term time invariance for what could be called age independence (the whole decision situation, with both time of decision and time of consumption, is shifted in time). Time invariance means that we can use stopwatch time. Although the terms by themselves do not describe the concepts, and could from this perspective be interchanged, they have several advantages:
- they have all been used before in the sense used here;
- they are short;
- time consistency can be argued to be normative, so, the strong term consistency works well;
- time invariance is not normative but is empirically plausible as ceteris paribus; causes of violation can be taken as distortions; here the immediacy effect does not imply violations; the neutral term invariance fits well with this role.
The definitions are in §3, p. 341.
P. 341 Proposition 4 states that every two of the conditions imply the third. I once jokingly said that this result is a corollary of transitivity of the identity relation. This claim is clearest in Fact 5: stationarity holds iff x2 = x1; time invariance iff x21 = x1; and time consistency iff x21 = x2. One can also do it by each condition requiring that choices in two of three choice situations are the same.
As argued above, violations of time invariance are a bit like violations of ceteris paribus. This paper finds that mostly time invariance is violated. It may be because the late time points in the experiment were close to the end of the term, or because students had by then gotten used to the experiment, or had built up confidence after seeing that the experiment did pay in early rounds.
P. 342, following the Proof of Proposition 4, points out that much of the literature has assumed time invariance implicitly. This annotated bibliography has a key word DC = stationarity: for studies that made this confusion, and studies that did not. %}

Halevy, Yoram (2015) “Time Consistency: Stationarity and Time Invariance,” Econometrica 83, 335–352.
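The transitivity point behind Proposition 4 can be spelled out mechanically; a sketch using the Fact 5 formulation (the domain of the x values is arbitrary):

from itertools import product

# Stationarity: x2 == x1; time invariance: x21 == x1; time consistency: x21 == x2.
# Brute force confirms that any two of the conditions imply the third.
for x1, x2, x21 in product(range(3), repeat=3):
    stat, invar, cons = x2 == x1, x21 == x1, x21 == x2
    assert (not (stat and invar)) or cons
    assert (not (stat and cons)) or invar
    assert (not (invar and cons)) or stat
print('every two of the conditions imply the third')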


{% biseparable utility violated; Consider the unknown urn of Ellsberg as a second-order probability distribution. In repeated choice, if the compositions of the various unknown urns are positively correlated, then aversion to mean-preserving spreads will imply aversion to repeated choices + repeated payments on the unknown urn versus the known one (it would be the opposite if the urns were negatively correlated). In single-choice situations people may have been conditioned to act as if repeated. Thus, ambiguity aversion could be generated by aversion to mean-preserving spreads. %}

Halevy, Yoram & Vincent Feltkamp (2005) “A Bayesian Approach to Uncertainty Aversion,” Review of Economic Studies 72, 449–466.
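A minimal numeric sketch of the correlation argument (my own numbers, with the unknown urn taken as all-red or all-black, each with second-order probability 0.5): two paid bets on red from the same unknown urn spread total wins more than two independent bets on the known fifty-fifty urn, at the same mean.

# Distribution of the number of wins over two paid draws (bets on red).
known = {0: 0.25, 1: 0.5, 2: 0.25}   # fair urn, independent draws: Binomial(2, 0.5)
unknown = {0: 0.5, 2: 0.5}           # same unknown urn twice: both win or both lose
mean = lambda d: sum(k * p for k, p in d.items())
var = lambda d: sum((k - mean(d)) ** 2 * p for k, p in d.items())
print(mean(known), mean(unknown))    # 1.0 1.0: equal means
print(var(known), var(unknown))      # 0.5 1.0: the unknown urn is a mean-preserving spread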


{% biseparable utility violated; Considers a combination of the models of Seo (2009) and Ergin & Gul (2009). Uses a domain similar to that of Seo (2009), with a state space and then objective probabilities both before and after the states. Their domain is actually smaller, with compound lotteries and Savage acts. Thus they can use the prior probabilities to calibrate subjective probabilities over the state space with matching probabilities as Seo did, and they need not invoke the second-order acts of KMM. They generalize Seo’s model by not assuming expected utility within each stage, but only probabilistic sophistication, similar to Ergin & Gul (2009). Their model is supported empirically by evidence from Halevy (2007). %}

Halevy, Yoram, Massimiliano Amarante, & Emre Ozdenoren (2008) “Uncertainty and Compound Lotteries: Calibration,” working paper, University of British Columbia.


{% %}

Haliassos, Michael & Christis Hassapis (2001) “Non-Expected Utility, Saving and Portfolios,” Economic Journal 111, 69–102.


{% %}

Hall, Jane, Karen Gerard, Glenn Salkeld, & Jeff Richardson (1992) “A Cost Utility Analysis of Mammography Screening in Australia,” Social Science and Medicine 34, 993–1004.


{% %}

Hall, Robert (1988) “Intertemporal Substitution in Consumption,” Journal of Political Economy 96, 221–273.


{% Suppose that L is a set of lotteries over a finite set of prizes, and ≽ a vNM preference relation. Suppose g maps L to L s.t. ℓ ≻ ℓ′ iff g(ℓ) ≻ g(ℓ′). Maybe the DM misperceives probabilities and thinks g(ℓ) iso ℓ, and does vNM (with a different u) over the various g(ℓ)? This model is described.
Problem: g leaves much freedom. %}

Haller, Hans (1985) “Expected Utility and Revelation of Subjective Probabilities,” Economics Letters 17, 305–309.


{% Presented at University of Saarbrücken, Dept. of Economics, July 1996, Saarbrücken, Germany.
equilibrium under nonEU; Nice paper that points out how the definition of support for nonadditive measures determines what kind of equilibrium results. The definition of support should not be chosen ad hoc merely to get the kind of equilibrium wanted; rather, the other way around, first one should find good reasons for defining the support and then one should see what equilibrium results.
The definition of support is important for what people call the consistency requirement of Nash equilibrium. Here consistency requirement means that the equilibrium strategies in the support are all optimal. %}

Haller, Hans (2000) “Non-Additive Beliefs in Solvable Games,” Theory and Decision 49, 313–338.


{% %}

Halmos, Paul R. (1950) “Measure Theory.” Van Nostrand, New York.


{% restricting representations to subsets: for virtually all representation theorems in the literature, richness of structure is essential. This paper proves this point for Cox’s famous axiomatization of probability. See also the keyword criticizing the dangerous role of technical axioms such as continuity. %}

Halpern, Joseph Y. (1999) “A Counterexample to Theorems of Cox and Fine,” Journal of Artificial Intelligence Research 10, 67–85.


{% %}

Halpern, Joseph Y. (2002) “Characterizing the Common Prior Assumption,” Journal of Economic Theory 106, 316–355.


{% Theoretically oriented book on reasoning with probabilities and generalizations of probability. Each chapter has many, 40 or so, exercises.
Ch. 1 starts with some classical probability-reasoning puzzles.
Ch. 2 does probability with a betting axiomatization, upper and lower probability, Dempster-Shafer belief, the more general possibility measures (assigning the max to a disjoint union, as in fuzzy logic), and the most general, plausibility measures, which generalize weighting functions/capacities by having as range a partially ordered set.
updating; Ch. 3 considers updating for various non-Bayesian belief indexes. Ch. 4 is on independence and Bayesian networks, considering them also for nonadditive measures of beliefs. Ch. 5 is on expectation, inner and outer, and then decision theory based on this in §5.4. §5.4.3 has the marvelous generalized expected utility developed by Chu & Halpern (2008, Theory and Decision). Ch. 6 considers multi-agents, with the important topic of protocols in §6.6. Ch. 7 develops logic for uncertainty reasonings, and Ch. 8 is on defaults and counterfactuals. Ch. 9 is on belief revision, comparing it with conditional logic. Ch. 10 brings 1st-order modal logic, and Ch. 11 is on a very interesting topic: “From Statistics to Beliefs,” discussing for instance reference classes and random worlds. %}

Halpern, Joseph Y. (2003) “Reasoning about Uncertainty.” The MIT Press, Cambridge, MA.


{% conditional probability; Presents mathematical relations between the concepts, which can be equivalent under countable additivity, a finite state space, and so on. %}

Halpern, Joseph Y. (2009) “Lexicographic Probability, Conditional Probability, and Nonstandard Probability,” Games and Economic Behavior 68, 155–179.


{% conservation of influence: determining causality is like determining influence. Involves counterfactual thinking. %}

Halpern, Joseph Y. (2016) “Actual Causality.” The MIT Press, Cambridge, MA.


{% updating; Paper distinguishes between belief functions as “generalized probability,” which reflects a belief of a person that he is willing to act upon (which may be subjective in my terminology, although Joe in a discussion with me in Jan. 2002 never wanted to commit to this term), and that we are born with and keep updating, and as “evidence,” which is a piece of (in my terms, objective) information that need not reflect anybody’s belief. E.g., evidence may be sample size + relative frequencies in a data set. Paper has the nice interpretation that evidence is to be taken as an updating function mapping prior beliefs to posterior beliefs (claimed as possibly new on p. 290). Two pieces of evidence can be combined through Dempster/Shafer’s formula; a belief can be updated through evidence. The writings of Shafer, Dempster, and Smets do not always fit very clearly/completely in one or the other category. P. 289 points out that combining beliefs, as with combining experts, requires subjective judgment of the importance weights of the experts.
Second part of paper, starting in §4 on p. 288, assumes the special model as in statistics, where there are hypotheses with nonprobabilized uncertainty and then, conditional on each hypothesis, probabilized uncertainty about the observations. It imposes that evidence, to be proper, should map each Bayesian prior probability towards a posterior Bayesian probability. This then implies that evidence should be like a normalized likelihood function; i.e., as an additive probability (e.g., Theorem 4.6, p. 301). And this is a conclusion of the paper, that evidence is best represented as (Bayesian, additive) probability.
P. 303 gives refs to papers showing that under some axioms à la Cox, evidence must be represented by likelihood. %}

Halpern, Joseph Y. & Ronald Fagin (1992) “Two Views of Belief: Belief as Generalized Probability and Belief as Evidence,” Artificial Intelligence 54, 275–317.
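The evidence-as-updating-function interpretation is easy to state in code; a sketch with assumed likelihood numbers for two hypotheses:

# Evidence as a normalized likelihood: an updating function mapping ANY
# Bayesian prior to the corresponding posterior (posterior ~ prior x evidence).
likelihood = {'H1': 0.8, 'H2': 0.2}          # assumed likelihoods of the observed data
total = sum(likelihood.values())
evidence = {h: l / total for h, l in likelihood.items()}

def update(prior, evidence):
    post = {h: prior[h] * evidence[h] for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

print(update({'H1': 0.5, 'H2': 0.5}, evidence))   # posterior from a uniform prior
print(update({'H1': 0.1, 'H2': 0.9}, evidence))   # the same evidence updates every prior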


{% Generalize the model by Chateauneuf & Faro (2009). %}

Halpern, Joseph Y. & Samantha Leung (2016) “Maxmin Weighted Expected Utility: A Simpler Characterization,” Theory and Decision 80, 581–610.


{% %}

Halpern, Joseph Y. & Michael O. Rabin (1987) “A Logic to Reason about Likelihood,” Artificial Intelligence 32, 379–405.


{% %}

Halpern, Joseph Y. & Mark R. Tuttle (1993) “Knowledge, Probability, and Adversaries,” Journal of the ACM 40, 917–960.


{% foundations of quantum mechanics %}

Halpin, John E. (1991) “What is the Logical Form of Probability Assignment in Quantum Mechanics,” Philosophy of Science 58, 36–61.


{% About half of 1075 farmers (p. 126 bottom) were asked hypothetical questions about willingness to pay or accept for risky gains and losses. Assuming EU, their utility functions were derived. Their utility curvature was related to their kind of business, whether more or less risky, and to other characteristics. Farmers with weak loss aversion (utility steep for losses and shallow for gains) engaged in risky activities such as cash crops and fat-stock feeding. Farmers with strong loss aversion engaged in safe activities such as general farming. Qualitative relations are reported, but no statistics. The authors use an unclear terminology of high and low marginal utility, where high for gains probably means more convex, so more risk seeking, and high for losses means the opposite.
risky utility u = transform of strength of preference v: they clearly and repeatedly favor this view, e.g. footnote 4 p. 119.
Pp. 122-123 lists the vNM axioms without independence, but with a nice point 4 which is like the DUR assumption 2.1.2 of my book (only generated probability distribution over outcomes matters) or like no-framing.
P. 123 penultimate para explains that direct matching would be too complex for subjects, so indifferences were derived from choices. P. 124 text explains that they took midpoints between preference switches as indifference points. It is not very clear to me what exactly their stimuli were in Table 1.
P. 124 note a at the table writes that they only use small probabilities so as to avoid distorting effects of what we nowadays (2013) call probability weighting.
SG doesn’t do well: p. 124 note c at table says that variations in outcomes are easier to understand than variations in probability.
P. 131 l. 9 reports a 26 times higher marginal utility for losses than for gains, but it is not clear. %}

Halter, Alfred N. & Christopher Beringer (1960) “Cardinal Utility Functions and Managerial Behavior,” Journal of Farm Economics 42, 118–132.


{% risky utility u = strength of preference v (or other riskless cardinal utility, often called value) %}

Halter, Alfred N. & Gerald W. Dean (1971) “Decisions under Uncertainty: With Research Applications.” South-Western Publishing Co., Cincinnati, Ohio.


{% Real incentives, Dutch book, or reference-dependence test: consider repeated private value auctions, where commonly repeated payments are used and it is assumed that prior gains do not affect behavior. These authors, however, show that cash balance does affect bidding behavior.
random incentive system: this paper gives evidence to support it.
Get some evidence for target and aspiration levels. %}

Ham, John C., John H. Kagel, & Steven F. Lehrer (2005) “Randomization, Endogeneity and Laboratory Experiments: The Role of Cash Balances in Private Value Auctions,” Journal of Econometrics 125, 175–205.


{% %}

Hamao, Yasushi, Ronald W. Masulis, & Victor Ng (1990) “Correlations in Price Changes and Volatility across International Stock Markets,” Review of Financial Studies 3, 281–307.


{% %}

Hamilton, Barton H. (2000) “Does Entrepreneurship Pay? An Empirical Analysis of the Returns to Self-Employment,” Journal of Political Economy 108, 604–631.


{% Introduces a mathematical preference model combining health and wealth evaluations, giving preference axiomatizations, and implications for QALY, DALY, and so on. %}

Hammitt, James K. (2013) “Admissible Utility Functions for Health, Longevity, and Wealth: Integrating Monetary and Life-Year Measures,” Journal of Risk and Uncertainty 47, 311–325.


{% %}

Hammond, John S., Ralph L. Keeney, & Howard Raiffa (1999) “Smart Choices.” Harvard Business School Press, Boston.


{% utility = representational?: Coherence means internal consistency. Correspondence means good relations to external world. %}

Hammond, Kenneth R. (2006) “Beyond Rationality.” Oxford University Press, New York.


{% intuitive versus analytical decisions: cognitive continuum theory: people combine analytic and intuitive judgments. %}

Hammond, Kenneth R., Robert M. Hamm, Janeth Grassia, & Tamra Pearson (1987) “Direct Comparison of the Efficacy of Intuitive and Analytical Cognition in Expert Judgments,” IEEE Transactions on Systems, Man, and Cybernetics 17, 753–770.


{% %}

Hammond, Kenneth R. & Doreen Victor (1988) “Annotated Bibliography for Risk Perception and Risk Communication,” Center for Research on Judgment and Policy, University of Colorado at Boulder.


{% dynamic consistency: favors abandoning time consistency, so, favors sophisticated choice, because he considered precommitment only viable if an extraneous device is available to implement it (p. 162/163). P. 162 defines sophisticated and myopic choice; also defines precommitment (called resolute choice by McClennen) but, similar to me, thinks that that is not really an available option. Hammond says that if it is indeed available, then it should be added as a new decision option, a new branch in the tree. (To which McClennen would probably reply that precommitment is in the head and needs no additional decision option, and Machina would reply that tastes themselves have changed and thus generate what seems to be precommitment.)
Hammond takes paths (called "branches") as primitives. n = x(t) means that node n occurs at time point t in the path x. In a decision node, all paths emanating from it are simply in the choice set. The one actually happening from there on is the one most preferred by the choice function, though not necessarily in a preference sense, and a myopic person will therefore end up with addiction. For any subset of paths, one considers the choice between them by simply snipping off all other paths and otherwise leaving the tree as is. Then from that one sees what choice is revealed. Preference between two paths x and y in a node n is then inferred by deleting !all! other paths, and then seeing what is chosen.
Such procedures do not seem to be useful if there are interactions between paths in the sense that the preference between x and y can be affected by another path z, such as happening in game theory when other actors also choose. Also it is problematic for DUU and nonEU when there is nonseparability (e.g., my paper "counterfactual”).
Coherence: Choice function over paths in some fixed node satisfies some revealed preference conditions to agree with a (weak) ordering.
Consistency: choices at different time points reveal the same preferences between paths; it is, basically (6.2 suggests, but I am not 100% sure), the thing violated by myopic choice.
Endogenously changing tastes describe changes due to previous decisions (so, violations of DC (dynamic consistency), e.g., previous decisions of chance). Exogenously changing tastes describe changes due to the progression of time (say, factors not in the tree; so, violations of stationarity). Seems that the first may rather be a violation of history-independence and the second of stationarity???
The paper shows that in trees where sophisticated choice is coherent, it agrees with myopic choice. In other words, whenever myopic choice leads to irrationality, then sophisticated choice is not coherent.
There may be a point in the last lines of the conclusion. Sophisticated choice may seem like some sort of resolution of changing taste, but it still is incoherent so the basic irrationality still remains. %}

Hammond, Peter J. (1976) “Changing Tastes and Coherent Dynamic Choice,” Review of Economic Studies 43, 159–173.


{% dynamic consistency %}

Hammond, Peter J. (1977) “Dynamic Restrictions on Metastatic Choice,” Economica 44, 337–350.


{% dynamic consistency %}

Hammond, Peter J. (1983) “Ex-Post Optimality as a Dynamically Consistent Objective for Choice under Uncertainty.” In Prasanta K. Pattanaik & Maurice Salles (eds.) Social Choice and Welfare, 175–205, North-Holland, Amsterdam.


{% dynamic consistency %}

Hammond, Peter J. (1986) “Consequentialist Social Norms for Public Decisions.” In Walter P. Heller, Ross M. Starr, & David A. Starrett (eds.) Social Choice Public Decision Making: Essays in Honor of Kenneth J. Arrow, Vol. I, 3–27, Cambridge University Press, Cambridge.


{% %}

Hammond, Peter J. (1987) “Subjective Probabilities with State Independent Utilities on State Dependent Consequence Domains,” Stanford University, Institute of Mathematical Studies in the Social Sciences, Economics Technical Report No. 520.


{% Short accessible version of his idea; dynamic consistency %}

Hammond, Peter J. (1988) “Consequentialism and the Independence Axiom.” In Bertrand R. Munier (ed.) Risk, Decision and Rationality, 503–515, Reidel, Dordrecht.


{% dynamic consistency; see Alias-file %}

Hammond, Peter J. (1988) “Consequentialist Foundations for Expected Utility,” Theory and Decision 25, 25–78.

First version seems to have been:

Hammond, Peter J. (1985) “Consequential Behavior in Decision Trees and Expected Utility,” Working Paper no. 112, Institute for Mathematical Studies in the Social Sciences, Stanford University, Palo Alto, CA, USA.


{% dynamic consistency %}

Hammond, Peter J. (1989) “Consistent Plans, Consequentialism, and Expected Utility,” Econometrica 57, 1445–1449.


{% Seems to be strong on the impossibility of interpersonal comparisons of utility. %}

Hammond, Peter J. (1991) “Interpersonal Comparisons of Utility: Why and How They Are and Should Be Made.” In John Elster & John E. Roemer (eds.) Interpersonal Comparisons of Well-Being. Studies in Rationality and Social Change, Cambridge University Press, New York.


{% %}

Hammond, Peter J. (1998) “Objective Expected Utility.” In Salvador Barberà, Peter J. Hammond, & Christian Seidl (eds.) Handbook of Utility Theory, Vol. 1, Principles, 145–211, Kluwer Academic Publishers, Dordrecht.


{% %}

Hammond, Peter J. (1998) “Subjective Expected Utility.” In Salvador Barberà, Peter J. Hammond, & Christian Seidl (eds.) Handbook of Utility Theory, Vol. 1, Principles, 213–271, Kluwer Academic Publishers, Dordrecht.


{% foundations of statistics: collects classical papers. %}

Hamouda, Omar F. & J.C. Robin Rowley (1997, eds.) “Statistical Foundations for Econometrics.” Edward Elgar, Cheltenham.


{% %}

Hampton, Jean (1994) “The Failure of Expected-Utility Theory as a Theory of Reason,” Economics and Philosophy 10, 195–242.


{% %}

Hamrich, Harvey J. & Joseph M. Garfunkel (1991) “Clinical Decisions: How Much Analysis and How Much Judgment?” (Editors’ Column), Journal of Pediatrics 118, 67.


{% Analyzes newsvendor where only mean and variance are known, and ambiguity aversion is captured through maxmin evaluations. %}

Han, Qiaoming, Donglei Du, & Luis F. Zuluaga (2014) “A Risk- and Ambiguity-Averse Extension of the Max-Min Newsvendor Order Formula,” Operations Research 62, 535–542.


{% dynamic consistency: favors abandoning forgone-event independence, so, favors resolute choice %}

Hanany, Eran & Peter Klibanoff (2007) “Updating Preferences with Multiple Priors,” Theoretical Economics 2, 261–298.


{% game theory for nonexpected utility %}

Hanany, Eran & Zvi Safra (1998) “Existence and Uniqueness of Ordinal Nash Outcomes,” University of Tel-Aviv.


{% %}

Handa, Jagdish (1977) “Risk, Probabilities, and a New Theory of Cardinal Utility,” Journal of Political Economy 85, 97–122.


{% DOI: http://dx.doi.org/10.1257/aer.103.7.2643
An American health insurance company forced its clients to change health insurance in 2004. (So it is not a nudge.) In the following years, clients were free to change or not. The author can measure inertia and adverse selection (they have data on client claims). He finds that removing inertia primarily increases adverse selection. This agrees with Wakker, Timmermans, & Machielse (2007), who also found that helping clients by providing health-expenses info is not good, because it enhances adverse selection too much. %}

Handel, Benjamin R. (2013) “Adverse Selection and Inertia in Health Insurance Markets: When Nudging Hurts,” American Economic Review 103, 2643–2682.


{% foundations of statistics: a revival of Fisher’s fiducial approach. Abstract writes: “The main idea of GFI is to carefully transfer randomness from the data to the parameter space using an inverse of a data-generating equation without the use of Bayes’ theorem.” %}

Hannig, Jan, Hari Iyer, Randy C. S. Lai, & Thomas C. M. Lee (2016) “Generalized Fiducial Inference: A Review and New Results,” Journal of the American Statistical Association 111, 1346–1361.


{% %}

Hanoch, Giora (1977) “Risk Aversion and Consumer Preferences,” Econometrica 45, 413–426.


{% Z&Z; Examines welfare effects of compulsory insurance versus free-market versus a mix of compulsory plus voluntary, a variation of Dahlby (1981), a paper which seems to be a classic. Assume that all individuals have the same utility function. %}

Hansen, Bodil O. & Hans Keiding (2002) “Alternative Health Insurance Schemes: A Welfare Comparison,” Journal of Health Economics 21, 739–756.


{% one-dimensional utility; considers relative risk premium (risk premium expressed in terms of percentage of wealth) and characterizes its decreasingness in terms of sums of utility functions on a particular domain of prospects. %}

Hansen, Frank (2007) “Decreasing Relative Risk Premium,” B.E. Journal of Theoretical Economics: Vol. 7: Iss. 1 (Topics), Article 37.


{% %}

Hansen, Kristian Schultz & Lars-Peter Østerdal (2006) “Models of Quality-Adjusted Life Years when Health Varies over Time: Survey and Analysis,” Journal of Economic Surveys 20, 229–255.


{% Summarizes views from other papers. %}

Hansen, Lars P. (2007) “Beliefs, Doubts and Learning: Valuing Macroeconomic Risk; Richard T. Ely Lecture,” American Economic Review, Papers and Proceedings 97, 1–30.


{% DOI: 10.1214/16-STS570 %}

Hansen, Lars Peter & Massimo Marinacci (2016) “Mean–Variance and Expected Utility,” Statistical Science 31, 511–515.


{% %}

Hansen, Lars P. & Thomas J. Sargent (2001) “Robust Control and Model Uncertainty,” American Economic Review, Papers and Proceedings 91, 60–66.


{% %}

Hansen, Peter & Thomas J. Sargent (2007) “Recursive Robust Estimation and Control without Commitment,” Journal of Economic Theory 136, 1–27.


{% %}

Hansen, Peter & Thomas J. Sargent (2007) “Robustness.” Princeton University Press, Princeton.


{% PT, applications: nonadditive measures, large market-based measures of risk aversion; robust decision makers want robustness against specification errors about income shocks. uncertainty amplifies risk: they seem to argue for this, where uncertainty is model-uncertainty and the phenomenon amplified is aversion. %}

Hansen, Lars Peter, Thomas J. Sargent, & Thomas D. Tallarini (1999) “Robust Permanent Income and Pricing,” Review of Economic Studies 66, 873–908.


{% P. 78 argues against the dynamically consistent rectangular multiple priors model that was argued for by Epstein & Schneider (2003) (that had been put forward by Sarin & Wakker, 1998, JRU, pp. 87–119, §2) before. %}

Hansen, Lars Peter, Thomas J. Sargent, Gauhar A. Turmuhambetova, & Noah Williams (2006) “Robust Control and Model Misspecification,” Journal of Economic Theory 128, 45–90.


{% proper scoring rules: invented, around the end of 1995, the idea that one can let people bet on scientific predictions by email, à la W.K.B. Hofstee. %}

Hanson, Robin (2002) Piece entitled “Wanna Bet?” in Nature 420, November 2002, pp. 354–355.


{% revealed preference %}

Hansson, Bengt (1968) “Choice Structure and Preference Relations,” Synthese 18, 443–458.


{% Considers combinations P*Q of prospects P and Q, interpreted as receiving both of them, where they are played independently. Assumes that if P ~ Q, then P*C ~ Q*C. Under EU, this is implied by constant absolute risk aversion but need not hold in general, similarly as with Samuelson’s colleague-paradox. The author doesn’t seem to be aware of this. He points out that P*Q can be nonequivalent to P′*Q even though P ~ P′, if P′ is more risky than P, under EU. Taking his operation too seriously, he does not conclude from it that his operation is no good, but instead that EU must be no good and that we should reckon with riskiness beyond EU (p. 181 middle sentence). §2 discusses reference dependence, but the lowest para of p. 183 confuses money and utility. P. 184 compares the level of U(m,x) with the level of U(m′,x′), where m is the reference point and x is money, in direct manners. However, the common thinking is that preferences can only compare alternatives under one and the same reference point. Hence, U(m,x) is a ratio scale that is completely independent of U(m′,x′), and comparison of their levels is not meaningful. We can compare their degrees of concavity, yes, but their levels no. %}

Hansson, Bengt (1975) “The Appropriateness of the Expected Utility Model,” Erkenntnis 9, 175–193.
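A small numeric instance of the point (my own construction, with square-root utility, which is concave but not CARA): P ~ Q under EU, yet adding an independent sure amount C to both reverses the indifference.

import math
u = math.sqrt                          # concave but not constant absolute risk averse
eu = lambda lot: sum(p * u(x) for p, x in lot)

P = [(1.0, 4)]                         # sure 4
Q = [(0.5, 0), (0.5, 16)]              # 50-50 of 0 or 16
print(eu(P), eu(Q))                    # 2.0 2.0: P ~ Q

C = 9                                  # sure amount received in addition to either prospect
P_C = [(1.0, 4 + C)]
Q_C = [(0.5, 0 + C), (0.5, 16 + C)]
print(eu(P_C), eu(Q_C))                # 3.606 vs 4.0: now Q*C is strictly preferred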


{% linear utility for small stakes: gives a nice argument.
Nice example, showing that, if a person is indifferent between (.5, W; .5, W+21) and W+7 for sure, for all W, then the person prefers a sure gain of 7 to the gamble (.4,M; .6,0) for all M! I got this reference from footnote 2 of Rabin (2000, Econometrica), who presents similar ideas. %}

Hansson, Bengt (1988) “Risk Aversion as a Problem of Conjoint Measurement.” In Peter Gärdenfors & Nils-Eric Sahlin (eds.) Decision, Probability, and Utility; Selected Readings, 136–158, Cambridge University Press, Cambridge.
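The bounded-utility mechanism behind such calibration results can be sketched as follows (my own reconstruction, not Hansson's exact derivation; the paper's precise constants may differ). Under EU, the stated indifference at every wealth level W gives u(W+7) − u(W) = u(W+21) − u(W+7), so with Dk denoting the utility gain on [7k, 7(k+1)] we get Dk = Dk+1 + Dk+2. Keeping all Dk positive forces geometric decay, so utility is bounded, and a small-probability chance at any gain M, however large, can never beat a modest sure gain.

# D_k = D_{k+1} + D_{k+2} with all D_k > 0 forces D_k = D_0 * r**k,
# r = (sqrt(5) - 1)/2, so u is bounded above by u(7) / (1 - r).
r = (5 ** 0.5 - 1) / 2
u7 = 1.0                         # normalize u(7) - u(0) = 1
u_sup = u7 / (1 - r)             # about 2.618: an upper bound on u(M) for every M
print(u_sup)
print(0.3 * u_sup < u7)          # True: e.g., a 0.3 chance at ANY gain M loses to sure 7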


{% PT, applications %}

Hansson, Helena & Carl Johan Lagerkvist (2014) “Decision Making for Animal Health and Welfare: Integrating Risk-Benefit Analysis with Prospect Theory,” Risk Analysis 34, 1149–1159.


{% probability elicitation: applied to experimental economics.
They measure matching probabilities of events using BDM (Becker-DeGroot-Marschak), but in a particular way. To “control for belief,” and to focus entirely on the (un)clarity of the mechanism, they take matching probabilities of events with known probability, such as the event of winning from a bag A with 10 chips, 2 of which are winning. Let us focus on the latter event. In the “declarative” design (direct matching in fact) they present subjects with an alternative bag B, with an unknown composition of winning chips, that has 1, …, or 9 winning chips, each with probability 1/9 of being the true bag. So this B is an Ellsberg-type bag with an unknown nr. of winning chips, generated using second-order probabilities (second-order probabilities to model ambiguity). The subjects perceive ambiguity (or second-order probability) at this stage, but will like the unknown bag more because the known one has only two winning chips.
Then the subjects have to submit a nr. X. If the nr. of winning chips ≥ X, so that the unknown bag B is more favorable, then the draw will be from B, and otherwise from A. Given that they depict the unknown bag with a question mark, some subjects may have misunderstood and may have erroneously thought that they were supposed to guess the right nr. of winning chips. Another misunderstanding may be that subjects first make up their mind that they like bag B more, and then think that they always get their preferred bag B if they submit 0, thus encouraging them to submit 0. The design encourages the subjects not to perceive the possible decision situations in isolation, as desirable for BDM (Becker-DeGroot-Marschak), but as an integrated meta-lottery.
Once the subjects understand the decision task properly, they understand that it is a trivial decision task (a test of stochastic dominance). In a lecture in Atlanta, Oct. 2010, the first author explained that in the experiment subjects were encouraged to follow their “gut feeling,” probably so as to make it seem less trivial.
The design reminds me somewhat of that of Bohnet et al. (2008 AER), which, when properly understood, was only the elicitation of an SG (standard gamble) probability, but the BDM mechanism was implemented, not through an ambiguous bag, but through the percentage of subjects in an experiment who deceived in a trust game, arousing trust and indignation emotions in subjects who do not see through the BDM mechanism.
The authors are enthusiastic, expressing it at the end of their abstract: “Our findings hold practical value to anyone interested in eliciting beliefs from representative populations, a goal of increasing importance when conducting large-scale surveys or field experiments.” %}

Hao, Li & Daniel Houser (2012) “Belief Elicitation in the Presence of Naïve Respondents: An Experimental Study,” Journal of Risk and Uncertainty 44, 149–160.
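The incentives of the cutoff rule can be simulated; the sketch below reflects my reading of the design (bag A wins with probability 0.2; bag B has n winning chips out of 10, n uniform on 1..9; the draw comes from B iff n ≥ X), and shows that for a risk-neutral subject the optimal report matches bag A's winning probability.

def win_prob(X, pA=0.2, ns=range(1, 10)):
    # Probability of winning when submitting cutoff X: bag B is used iff n >= X.
    return sum((n / 10 if n >= X else pA) for n in ns) / len(ns)

print([round(win_prob(X), 4) for X in range(11)])
print(max(range(11), key=win_prob))   # 2 (tied with 3): 2 winning chips matches pA = 0.2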


{% Under EU with homogeneous beliefs but heterogeneous utility (“risk aversion”), if all consumers have convex absolute risk aversion then so has representative agent. %}

Hara, Chiaki, James Huang, & Christoph Kuzmics (2007) “Representative Consumer's Risk Aversion and Efficient Risk-Sharing Rules,” Journal of Economic Theory 137, 652–672.


{% Analyze all logical implications of subsets of the vNM EU axioms. They take as a nice starting point a characterization of all preference relations that satisfy vNM independence and nothing else. They assume in this that the outcome set is a separable metric space. Then the characterization is that there is a collection of sets of continuous utility functions such that x R y (lottery x is preferred to lottery y) if and only if every set in the collection contains one utility function whose EU accommodates the preference. So, within each set there is a “there exists” quantification, but across sets there is a “for all” quantification. The first can deliver all required richness, the second all required restrictions. This paper is the linear analog of Nishimura & Ok (2016). With linearity added, the results are nicer. There is no clear uniqueness result for the sets to be chosen. As with N&O, because there is much richness in the sets to be chosen, one can always choose the utility functions continuous. The authors call their representation coalitional minmax. %}

Hara, Kazuhiro, Efe A. Ok, & Gil Riella (2015) “Coalitional Expected Multi-Utility Theory,” working paper.
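The “for all”/“there exists” structure of the representation is easy to state in code; a sketch with a hypothetical collection of two utility sets over three prizes:

# x R y iff EVERY set in the collection contains SOME utility function
# whose expected utility ranks x weakly above y.
def eu(u, lot):                        # lot: probability vector over prizes
    return sum(p * ui for p, ui in zip(lot, u))

def coalitional_pref(x, y, collection):
    return all(any(eu(u, x) >= eu(u, y) for u in U) for U in collection)

collection = [                         # hypothetical utility sets on prizes 0, 1, 2
    [(0.0, 0.6, 1.0)],
    [(0.0, 0.8, 1.0), (0.0, 0.2, 1.0)],
]
x, y = (0.0, 1.0, 0.0), (0.5, 0.0, 0.5)    # sure middle prize vs 50-50 over extremes
print(coalitional_pref(x, y, collection))  # True
print(coalitional_pref(y, x, collection))  # False: x strictly preferred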


{% PT falsified: this paper falsifies any other classical economic theory as well, with its extensive risk seeking, especially for gains.
Choices between prospects with one nonzero outcome and the sure outcome that was always the expectation of the prospect. Did it for children, young adults, and adults, ages 5-8, 9-13, 14-20, and 21-64. Did it for probabilities 0.02, 0.10, 0.80, and 0.98. Finds in everything almost exactly the opposite of the fourfold pattern predicted by prospect theory: people seem to underweight small probabilities and overweight high probabilities, both for gains and for losses, yielding the exact opposite of the fourfold pattern. As people get older they are closer to expected value maximization. People are closer to expected value maximization for gains than for losses. People are more risk averse for gains than for losses.
Real incentives: random incentive system where one choice is played for real. Implementation of losses: through prior endowment mechanism to ensure no real loss.
P. 59: people who violated monotonicity tended to be more risk averse.
P. 60 bottom: strange is that the majority of choices, 56%, were risk seeking, and were so mostly for gains. Maybe the design generated a strong joy of gambling? This is evidence against prospect theory, but against any other current theory as well. %}