{% questionnaire versus choice utility: the author interprets the likelihood relation as a cognitive primitive, not based on observable preference, and related to betting-on only through a one-sided implication (called likelihood compatibility on p. 1056). He has been sympathetic to such interpretations since his youth, often referring to them in personal communications. He argues that taking the ordering as primitive is more convincing than taking the set of priors as primitive. He imposes few restrictions on the likelihood relation and on preferences over event-contingent prospects (acts), only a kind of stochastic dominance relation, and argues that having few such relations is desirable.
There is some rhetoric: p. 1056 penultimate para incorrectly suggests that the model advanced here is “the” formalization of verbal statements by Ellsberg (1961) and Schmeidler (1989), putting words about an incomplete cognitive likelihood ordering into their mouths that they did not write themselves. A second example is p. 1057 top, on models that relax the one-sided implication of likelihood compatibility, where the author writes that this “severs radically the connection between belief and preference” but has no argument to offer other than restating definitions.
P. 1070 has a mysterious suggestion that belief be not just the cognitive likelihood relation but also the corresponding behavior. This may reflect other suggestions elsewhere in the paper, also hard for me to understand, that the cognitive relation be only part of belief and that there be more to belief.
The end of the paper considers utility-sophistication (preferences depend on outcomes only through their utilities, via some functional) and in these terms derives results showing that multiple-prior preferences have exactly the same set of priors as results from the likelihood ordering, with a central role for the dyadic events, on which all priors agree. %}

Nehring, Klaus (2009) “Imprecise Probabilistic Beliefs as a Context for Decision-Making under Ambiguity,” Journal of Economic Theory 144, 1054–1091.


{% ordering of subsets %}

Nehring, Klaus D.O. & Clemens Puppe (1996) “Continuous Extensions of an Order on a Set to the Power Set,” Journal of Economic Theory 68, 456–479.


{% %}

Nehring, Klaus D.O. & Clemens Puppe (1998) “A Theory of Diversity,” working paper.


{% %}

Neil Yu, Ning & Thorsten Chmura (2013) “Belief-Ordering Identification of Ambiguity Attitudes with Application to Partnership Dissolving Experiments,”


{% coalescing %}

Neilson, William S. (1992) “Some Mixed Results on Boundary Effects,” Economics Letters 39, 275–278.


{% %}

Neilson, William S. (1992) “A Mixed Fan Hypothesis and Its Implications for Behavior towards Risk,” Journal of Economic Behavior and Organization 19, 197–211.


{% Published as Neilson (2010, JRU). %}

Neilson, William S. (1993) “Ambiguity Aversion: An Axiomatic Approach Using Second Order Probabilities,” working paper, Dept. of Economics, University of Tennessee, Knoxville, TN.


{% game theory for nonexpected utility %}

Neilson, William S. (1994) “Second Price Auctions without Expected Utility,” Journal of Economic Theory 62, 136–151.


{% If a person does RDU, and turns down a gamble (p: 125; 1−p: −100) at every level of wealth, where w(p) = 1/2, then we get the same phenomena as Rabin (2000, Econometrica) got for the special case of w(1/2) = 1/2 (this is EU). Of course, under common assumptions on w, such gambles have to be more extreme and the examples are then no longer empirically realistic, so I think that this is no paradox for RDU. %}

Neilson, William S. (2001) “Calibration Results for Rank-Dependent Expected Utility,” Economics Bulletin 4, 1–5.
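In formulas, the calibration step is as follows (a sketch in my notation; W denotes wealth, u utility):
\[
w(p)\,u(W+125) + (1-w(p))\,u(W-100) \le u(W)
\iff
u(W+125) - u(W) \le u(W) - u(W-100)
\quad \text{if } w(p) = 1/2,
\]
and the latter inequality, holding at every W, is exactly the premise of Rabin's concavity calibration for the 50-50 gamble, so his conclusions follow verbatim, with the probability p solving w(p) = 1/2 in place of 1/2.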


{% %}

Neilson, William S. (2002) “Comparative Risk Sensitivity with Reference-Dependent Preferences,” Journal of Risk and Uncertainty 24, 131–142.


{% This paper considers a structure that is isomorphic to an additively decomposable structure, where the isomorphism (from the additively representable space to our structure) is (x0,x1,…,xn) → (x0, x1−x0, …, xn−x0). It translates axioms that characterize additively decomposable representations through this isomorphism. That is, with xici denoting x with xi replaced by ci, for all i not equal to 0, xici > yici if and only if xi(ci+ε) > yi(ci+ε), for all variables in question, which is as usual. For the 0th coordinate, however, we now have (c0,x1,…,xn) > (c0,y1,…,yn) if and only if (c0+ε, x1+ε, …, xn+ε) > (c0+ε, y1+ε, …, yn+ε). The condition just stated is equivalent to the author's self-referent separability. The additive representation maps, through the isomorphism, into u0(x0) + u1(x1−x0) + … + un(xn−x0). It means that the xj's designate final wealth.
The Fehr & Schmidt (1999) model is a special case of this model. I do not agree with the author's suggestion, at the top of p. 687 and in the abstract, that he has now axiomatized the Fehr-Schmidt model. One reason is that an axiomatization of a special case of a general model can be very different from one of the general model (e.g., all quantitative models are special cases of the general quantitative representation that is characterized by transitivity, completeness, and countable order-denseness, which does not mean that the latter result can claim all existing axiomatizations).
The model gives a nice point of departure for reference-dependence through differences, which can be useful in welfare evaluations, risky choice (prospect theory), etc. A difficulty is that under prospect theory it is natural to compare different options only if they have the same reference level.
The author defines constant absolute risk aversion by relating it to a common increase of the reference level x0 and the other final wealth levels xj, so that changes w.r.t. x0 (xi − x0) are unaffected. This implies separability w.r.t. the 0th coordinate and implies that the model depends only on the deviations w.r.t. x0, i.e., x1−x0, …, xn−x0, and not on x0 itself. It does not imply exponential utility. %}

Neilson, William S. (2006) “Axiomatic Reference-Dependence in Behavior toward Others and toward Risk,” Economic Theory 28, 681–692.
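In formulas, the translation sketched above (my notation; ε denotes a common shift):
\[
\phi(x_0,x_1,\ldots,x_n) = (x_0,\, x_1-x_0,\ldots,\, x_n-x_0),
\qquad
V(x_0,x_1,\ldots,x_n) = u_0(x_0) + \sum_{j=1}^{n} u_j(x_j-x_0),
\]
and standard independence w.r.t. the 0th coordinate of the additive structure, pushed through \(\phi\), becomes self-referent separability:
\[
(c_0,x_1,\ldots,x_n) \succcurlyeq (c_0,y_1,\ldots,y_n)
\iff
(c_0+\varepsilon,\, x_1+\varepsilon,\ldots,\, x_n+\varepsilon) \succcurlyeq (c_0+\varepsilon,\, y_1+\varepsilon,\ldots,\, y_n+\varepsilon).
\]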


{% biseparable utility violated; source-dependent utility; event/utility driven ambiguity model: utility-driven:
this is the published version of Neilson (1993). Nothing essential was changed. The paper considers a two-stage setup as in Anscombe-Aumann, with known probabilities and vNM EU in the second stage, but unknown (hence subjective) probabilities and Savage-EU in the first stage. So it uses richness of the state space. The utility functions in the two stages can be different, so that reduction of compound lotteries is violated. So it is the smooth model of KMM, but with the two stages exogenously given, meaning that it is in fact the Kreps & Porteus (1978) model, only with the first-stage probabilities subjective instead of objective.
The first-stage utility (first here refers to the left stage, the one resolved first temporally) is more concave than the second-stage utility (interpreted as ambiguity aversion) if and only if weak risk aversion holds in the first stage in terms of second-stage utility units. This condition has a drawback. Using second-stage utilities as inputs is not a big problem, because these can readily be expressed as second-stage probabilities. However, using the first-stage subjective probabilities needed to define the first-stage expectations in weak risk aversion is problematic, because these are not given as empirical primitives, unlike in Kreps & Porteus, where the first-stage probabilities were objective and not subjective. %}

Neilson, William S. (2010) “A Simplified Axiomatic Approach to Ambiguity Aversion,” Journal of Risk and Uncertainty 41, 113–124.
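In formulas, the recursive evaluation just described (a sketch in my notation; P is the subjective first-stage probability over states s, and f(s) the second-stage lottery that act f assigns to s):
\[
V(f) = \sum_{s} P(s)\, \varphi\!\Bigl(\sum_{i} p^{s}_{i}\, u_2(x^{s}_{i})\Bigr),
\qquad \varphi = u_1 \circ u_2^{-1},
\]
with ambiguity aversion iff \(\varphi\) is concave (u1 more concave than u2), and reduction of compound lotteries iff \(\varphi\) is affine.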


{% PT falsified: a useful paper putting PT to new tests and demonstrating that we need better parametric families.
The defenses of PT demonstrating that it accommodates the Allais paradox, gambling, insurance, etc., have usually focused on only one of these phenomena. Parametric fittings of PT had not yet been checked for what they say about these known phenomena. This paper is the first, to my knowledge, to see whether the parameters found for PT can do more and explain known patterns of choices jointly, and whether they give plausible behavior outside the immediate paradoxes. The current parametric families don't perform well. For example, the T&K families, if explaining the Allais paradox, must be very risk averse, too much so to give much gambling for low probabilities. Similar observations apply to the coexistence of gambling and insurance. Risk premia are calculated and often are not very plausible. %}

Neilson, William S. & C. Jill Stowe (2001) “A Further Examination of Cumulative Prospect Theory Parameterizations,” Journal of Risk and Uncertainty 24, 31–46.
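A minimal sketch, in Python, of the kind of plausibility check this paper performs; the weighting and value families and the estimates (0.88, 2.25, 0.61, 0.69) are T&K 92's, but the example gamble and all code names are mine:

def w(p, c):  # T&K 92 probability weighting function
    return p**c / (p**c + (1 - p)**c) ** (1 / c)

def cpt(gain, p, loss, a=0.88, lam=2.25, gamma=0.61, delta=0.69):
    # CPT value of (gain with probability p; loss otherwise), gain > 0 > loss;
    # the same power a is used for gains and losses, as in T&K 92's estimates.
    return w(p, gamma) * gain**a - lam * w(1 - p, delta) * (-loss)**a

v = cpt(125, 0.5, -100)                                # about -29: rejected
ce = v ** (1 / 0.88) if v >= 0 else -(-v / 2.25) ** (1 / 0.88)
print(v, ce)                                           # ce about -18.5

The expected value of this gamble is +12.5, so the implied risk premium exceeds 30 for a moderate-stakes 50-50 gamble, illustrating the strong risk aversion that the authors point to.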


{% %}

Neilson, William S. & C. Jill Stowe (2003) “A Theory of Other-Regarding Preferences with Rank-Dependence,”


{% foundations of probability: argues that reasoning should be based on conditional probabilities, which can exist in a deterministic world if the conditioning statement need not be a complete description. Seems to assume, à la Carnap’s logical probability, that such conditional probability is objective. Then many philosophical problems can be solved. %}

Nelson, Kevin (2009) “On Background: Using Two-Argument Chance,” Synthese 166, 165–186.


{% Do belief measurement in games for a continuum of events, by assuming a parametric family. Over the strategies of each individual opponent: a unimodal beta distribution, a triangular distribution, the union of two or three triangular distributions, or the union of a unimodal beta and a triangular distribution, depending on what fits best. Joint distributions are probably obtained by assuming stochastic independence. %}

Neri, Claudia (2015) “Eliciting Beliefs in Continuous-Choice Games: A Double Auction Experiment,” Experimental Economics 18, 569–608.
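A minimal sketch, in Python, of one of the parametric belief densities described above; the beta-plus-triangular mixture form is as in the paper, but all parameter values and names are my own illustration:

import numpy as np
from scipy.stats import beta, triang

def belief_density(x, mix=0.6, a=4.0, b=2.0, c=0.3, lo=0.0, width=0.5):
    # Mixture of a unimodal beta density (a, b > 1) on [0, 1] and a
    # triangular density with support [lo, lo + width] and mode lo + c*width.
    return mix * beta.pdf(x, a, b) + (1 - mix) * triang.pdf(x, c, lo, width)

print(belief_density(np.linspace(0, 1, 5)))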


{% %}

Nestle, Frank O., Hannes Speidel, & Markus O. Speidel (2002) “High Nickel Release from 1- and 2-Euro Coins,” Nature 419, 132.


{% Can measure gravity at the quantum level better than done before. So, they can test the equivalence principle better than before: gravitational mass (how strongly a body attracts, and is attracted by, other bodies; in Dutch “zware massa”) and inertial mass (how much a body resists being accelerated by forces; in Dutch “trage massa”) are the same. %}

Nesvizhevsky, Valery V., Hans G. Börner, Alexander K. Petukhov, Hartmut Abele, Stefan Baeßler, Frank J. Rueß, Thilo Stöferle, Alexander Westphal, Alexei M. Gagarski, Guennady A. Petrov, & Alexander V. Strelkov (2002) “Quantum States of Neutrons in the Earth's Gravitational Field,” Nature 415 (January 17), 297–299.


{% concave utility for gains, convex utility for losses: gives an evolutionary explanation. Considers repeated decision problems from an evolutionary perspective, building on Robson (2001). Takes utility as a rewarding system optimized by the individual, and examines when it best serves evolutionary survival. Then utility should be steepest in the regions met most frequently, and where mistakes have the most serious consequences. For intertemporal choice it can generate violations of stationarity. For risk it leads to a utility function that is convex below some point and concave above it, where the point is the status quo that occurs most frequently. So, quite as prospect theory has it. %}

Netzer, Nick (2009) “Evolution of Time Preferences and Attitudes toward Risk,” American Economic Review 99, 937–955.


{% Dutch book %}

Neuefeind, Wilhelm & Walter Trockel (1995) “Continuous Linear Representability of Binary Relations,” Economic Theory 6, 351–356.


{% lecture of Jan 2009 %}

Neugebauer, Tibor (2009) “The Petersburg Paradox: 300 Years of Introspection and Experimental Evidence at Last,”


{% New Investigator award, Aug. 23, 1992, 9:00: %}
{% information aversion!! Demonstrates, among other things, that prospect theory can sometimes, in special circumstances, lead to information aversion; i.e., that there exists an example. %}

Newman, D. Paul (1980) “Prospect Theory: Implications for Information Evaluation,” Accounting Organizations and Society 5, 217–230.


{% %}

Newton, Isaac (1687) “Philosophiae Naturalis Principia Mathematica.”


{% I can calculate the motion of heavenly bodies, but not the madness of people. %}

Newton, Isaac


{% value of information: seems to argue that receiving info is always good in game theory, as long as opponents are not aware of it. %}

Neyman, Abraham (1991) “The Positive Value of Information,” Games and Economic Behavior 3, 350–355.


{% Studies critical regions based on maximal likelihood ratio from point of view of posterior probability, as Neyman & Pearson (1933) formulate it. %}

Neyman, Jerzy (1928) “Contribution to the Theory of Certain Test Criteria,” Bulletin de l'Institut International de Statistique 24, 44–86.


{% Seems to argue that the performance of a statistical procedure is only relevant in repeated use and that it is a mistake to think in terms of learning about a particular θ. %}

Neyman, Jerzy (1977) “Frequentist Probability and Frequentist Statistics,” Synthese 36, 97–131.


{% Introduce “principle of likelihood.” For simple hypotheses that means going by the likelihood ratio, which is Bayesian. For composite hypotheses, you take the quotient of the upper bound of the likelihood over H0 and the upper bound of the likelihood over H1.
I think that they did not use the size of the test as criterion here because in a later paper they present that as new. %}

Neyman, Jerzy & Egon S. Pearson (1928) “On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference: Part I,” Biometrika 20A, 175–240.


{% %}

Neyman, Jerzy, & Egon S. Pearson (1928) “On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference: Part II,” Biometrika 20A, 263–294.


{% foundations of statistics; This paper introduces their classical Neyman-Pearson model. In an earlier paper they had introduced the “principle of likelihood,” which for simple hypotheses amounts to the likelihood ratio and Bayesianism. (For composite hypotheses it does some, more or less ad hoc, upper-bound taking of likelihoods before taking the quotient.) Three things make NP take power and size, rather than the likelihood ratio, as the basis of statistics: (1) their desire not to use prior probabilities; (2) the frequentist interpretation that can be given to size and power; (3) the nice extension to composite hypotheses of size and power through uniformly most powerful tests in some important cases.
P. 293 and several other places refer to earlier Biometrika paper for introduction of “principle of likelihood” (see at that reference).
Maybe this paper is the first that relates it to the size and, thus, makes all of humanity go wrong for a whole century, in my (Bayesian) opinion. They explicitly motivate their approach by the desire of not using prior probability.
The introduction, p. 291, chooses words that lead towards where they want to go: “Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong. Here, for example, would be such a “rule of behavior”: to decide whether a hypothesis, H, of a given type be rejected or not, calculate a specified character, x, of the observed facts; if x > x0 reject H, if x ≤ x0 accept H. Such a rule tells us nothing as to whether in a particular case H is true when x ≤ x0 or false when x > x0. But it may often be proven that if we behave according to such a rule, then in the long run we shall reject H when it is true not more, say, than once in a hundred times, and in addition we may have evidence that we shall reject H sufficiently often when it is false.”
End of the introduction, p. 293, on the principle of likelihood: “It was clear, however, in using it that we were still handling a tool not fully understood, and it is the purpose of the present investigation to widen, and we believe simplify, certain of the conceptions previously introduced.”
P. 295, around Eq. 11: “Principle of likelihood.” For simple hypotheses that means going by the likelihood ratio, which is Bayesian. For composite hypotheses, you take the quotient of the upper bound of the likelihood over H0 and the upper bound of the likelihood over H1.
P. 296, again talking towards where they want to go: “From the point of view of mathematical theory all that we can do is to show how the risk of the errors may be controlled and minimised.
The principle upon which the choice of the critical region is determined so that the two sources of errors may be controlled is of first importance.”
P. 296 explains, on the two errors in statistics: “in determining just how the balance should be struck, must be left to the investigator.”
P. 297, 1st paragraph (last 1.5 pages of §II), then says that the probability of incorrectly rejecting H0 can be controlled to be what they denote by ε (that's the level of significance), and the rest of the paper then takes that as criterion. So here is the dramatic moment when 20th-century statistics went the wrong way. P. 298, Eq. (15), displays the significance-level criterion formally.
The same page says “as far as our judgment on the truth or falsehood of H0 is concerned, if an error cannot be avoided it does not matter on which sample we make it.” I disagree. First, the more extreme the sample, the more one will believe the incorrect hypothesis. Further, if there are several alternative hypotheses, I can imagine that the error of kind I (false rejection of H0) is more serious as the sample suggests more strongly that an alternative far remote from H0 is true. I do not understand the footnote added by NP there. NP continue with “It is the frequency that matters,” which is of course where they are heading, so which may explain their assumption. They argue for the same point more explicitly in their 1933 paper in Proceedings of the Cambridge Philosophical Society 29, p. 497, where they write that errors of type I (incorrect rejection of H0) are essentially different from those of type II. They write that all incorrect rejections of H0 are equivalent, no matter what the sample, but not so all incorrect acceptances of H0 (then it will depend on the alternative hypothesis that is true, they say). They probably write this to justify their consideration of the frequency of incorrect rejections of H0. It seems quite implausible to me. The more extreme the sample is, the more one, incorrectly, believes in the alternative and, if there are more alternatives, the more remote is the alternative hypothesis now incorrectly assumed instead of H0, so the worse it seems to me.
P. 300, Eq. 24, derives the lemma of Neyman-Pearson, as it is called nowadays: for simple hypotheses, to have the most powerful test at a given significance level, one should maximize the likelihood ratio. Later it is extended to composite hypotheses. P. 301 points out, in the context of simple hypotheses, that also a Bayesian approach would go by the likelihood ratio: “In this case even if we had precise information as to the a priori probabilities of the alternatives H1, H2, ... we could not obtain an improved test.”
P. 308, second paragraph, discusses prior probabilities. “But in general, we are doubtful of the value of attempts to combine measures of the probability of an event if a hypothesis be true, with measures of the a priori probability of that hypothesis. The difficulty seems to vanish in this as in the other cases, if we regard the λ [λ is the likelihood ratio criterion] surfaces as providing (1) a control by the choice of ε of the first source of error (the rejection of H0 when true); and (2) a good compromise in the control of the second source of error (the acceptance of H0 when some H1 is true). The vague a priori grounds on which we are intuitively more confident in some alternatives than in others must be taken into account in the final judgment, but cannot be introduced into the test to give a single probability measure.”
P. 313, on prior probabilities over a composite hypothesis to take some average of size etc.: “We have, in fact, no hesitation in preferring to retain the simple conception of control of the first source of error (rejection of H0 when it is true) by the choice of ε, which follows from the use of similar regions. This course seems necessary as a matter of practical policy, apart from any theoretical objections to the introduction of measure of a priori probability.”
The rest of the paper elaborates on many cases and examples.
The summary repeats the criterion of first fixing the level of significance and then optimizing power, calling it “A new basis has been introduced.” %}

Neyman, Jerzy & Egon S. Pearson (1933) “On the Problem of the Most Efficient Tests of Statistical Hypotheses,” Philosophical Transactions of the Royal Society of London A 231, 289–337.
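For reference, the lemma of p. 300 in modern notation (my formulation, not theirs): for a simple hypothesis H0 with density f0 against a simple alternative H1 with density f1, the test
\[
\text{reject } H_0 \iff \lambda(x) = \frac{f_1(x)}{f_0(x)} > k,
\qquad k \text{ such that } P_{H_0}\bigl(\lambda(X) > k\bigr) = \varepsilon,
\]
is most powerful among all tests of size (significance level) ε.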


{% foundations of statistics; This paper comes after the 1933 one in the Philosophical Transactions.
P. 493 is explicit on their desire not to use prior probability and also on their being seduced by the unfortunate coincidence of size having a long-run meaning: “Yet if it is important to take into account probabilities a priori in drawing a final inference from the observations, the practical statistician is nevertheless forced to recognize that the values of i [the prior probabilities of the hypotheses] can only rarely be expressed in precise numerical form. It is therefore inevitable from the practical point of view that he should consider in what sense, if any, tests can be employed which are independent of probabilities a priori. Further, the statistical aspect of the problem will appeal to him. If he makes repeated use of the same statistical tools when faced with a similar set of admissible hypotheses, in what sense can he be sure of certain long run properties?”
P. 502/503 points out that sometimes numerical measures can be assigned to the consequences of both types of error and then expectation of those measures should be taken.
P. 507, Definition D, in definition of most powerful test given significance level, uses explicitly the words “independent of the probabilities a priori.” %}

Neyman, Jerzy & Egon S. Pearson (1933) “The Testing of Statistical Hypotheses in Relation to Probabilities A Priori,” Proceedings of the Cambridge Philosophical Society 29, 492–510.


{% foundations of statistics %}

Neyman, Jerzy & Egon S. Pearson (1936) “Contributions to the Theory of Testing Statistical Hypotheses,” Statistical Research Memoirs 1, June 1936.


{% cancellation axioms: %}

Ng, Che Tat (2016) “On Fishburn’s Questions about Finite Two-Dimensional Additive Measurement,” Journal of Mathematical Psychology 75, 118–126.


{% cancellation axioms: Consider, for a finite two-dimensional set X1 × X2 with |X1| = m, |X2| = n, how many cancellation axioms are needed to imply all cancellation axioms. m = 4 and n = ∞ needs cancellation axioms up to order 6. %}

Ng, Che Tat (2018) “On Fishburn’s Questions about Finite Two-Dimensional Additive Measurement, II,” Journal of Mathematical Psychology 64, 409–447.
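For reference, a standard formulation of kth-order cancellation (mine, not necessarily Ng's): for pairs x^1,…,x^k and y^1,…,y^k in X1 × X2 such that the first coordinates of the x's are a permutation of those of the y's, and likewise for the second coordinates,
\[
x^j \succcurlyeq y^j \;\text{ for all } j = 1,\ldots,k-1
\;\Longrightarrow\;
y^k \succcurlyeq x^k.
\]
Additive representability of a weak order on a finite X1 × X2 is equivalent to all orders of cancellation holding; Fishburn's question concerns the least order that suffices for given m and n.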


{% Utility of gambling %}

Ng, Che-Tat, R. Duncan Luce, & Anthony A.J. Marley (2009) “Utility of Gambling when Events are Valued: An Application of Inset Entropy,” Theory and Decision 67, 23–63.


{% risky utility u = strength of preference v (or other riskless cardinal utility, often called value), is based on just noticeable difference %}

Ng, Yew-Kwang (1984) “Expected Subjective Utility: Is the von Neumann-Morgenstern Utility the Same as the Neoclassicals?,” Social Choice and Welfare 1, 177–186.


{% Total utility theory;
P. 1848, on ordinalistic revolution: “In a very important sense, these changes represent an important methodological advance, making economic analysis based on more objective grounds. However, the change or correction has been carried to an excess, making economics unable to tackle many important problems, divorced from fundamental concepts, and even misleading.”
P. 1848 also describes the similar behaviorist/cognitive (citing Chomsky on latter) revolutions in psychology.
P. 1848 and p. 1854 mention that Arrow's impossibility theorem shows that social choice without cardinal utility doesn't work.
P. 1851 cites many hostile references against [risky utility u = strength of preference v (or other riskless cardinal utility, often called value) ]
P. 1851 and further assumes as given a cardinal index of happiness and suggests it as a basis of cardinal utility; also: risky utility u = strength of preference v (or other riskless cardinal utility, often called value), based on just noticeable difference. %}

Ng, Yew-Kwang (1997) “A Case for Happiness, Cardinalism, and Interpersonal Comparability,” Economic Journal 107, 1848–1858.


{% risky utility u = strength of preference v (or other riskless cardinal utility, often called value); p. 213: “Thus, these subjective cardinal utility functions exist before the vNM construction is used.” [italics from original] Gives many nice refs. %}

Ng, Yew-Kwang (1999) “Utility, Informed Preference, or Happiness: Following Harsanyi's Argument to Its Logical Conclusion,” Social Choice and Welfare 16, 197–216.


{% Tradeoff method: Assumption 2 is an analogue of TO consistency, stated directly in quantitative terms. %}

Ng, Yew-Kwang (2000) “From Separability to Unweighted Sum: A Case for Utilitarianism,” Theory and Decision 49, 299–312.


{% C-E analyses for public funding etc. from happiness perspective. %}

Ng, Yew-Kwang (2003) “From Preference to Happiness: Towards a More Complete Welfare Economics,” Social Choice and Welfare 20, 307–350.


{% %}

Nguyen, Hung T. (1978) “On Random Sets and Belief Functions,” Journal of Mathematical Analysis and Applications 65, 531–542.


{% Consider distortion functions as coherent risk measures. Those distortion functions are nothing but Quiggin's (1982) RDU for risk, but there is no cross-reference, although they do cite Schmeidler for the Choquet integral. Consider transformation functions derived from probability distribution functions and their roles in Black-Scholes, for instance. Under realistic generalizations of B-S, risk-neutral probabilities are less convincing. %}

Nguyen, Hung T., Uyen H. Pham & Hien D. Tran (2012) “On Some Claims Related to Choquet Integral Risk Measures,” Annals of Operations Research 195, 5–31.
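In formulas, the identity alluded to above (standard, and not specific to this paper): for a distortion function g (increasing, with g(0) = 0 and g(1) = 1) and a nonnegative risk X,
\[
\rho_g(X) = \int_0^{\infty} g\bigl(P(X > t)\bigr)\, dt,
\]
the Choquet integral of X w.r.t. the capacity g∘P; reading g as a probability weighting function, this is Quiggin's RDU functional with linear utility. For general X the usual second integral over the negative part is added.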


{% Seems to have given a nice example of purported violation of transitivity: “Nothing is better than eternal happiness. A ham sandwich is better than nothing. Therefore, a ham sandwich is better than eternal happiness.” %}

Nickerson, Raymond S. (1986) “Reflections on Reasoning.” Erlbaum, Hillsdale, NJ.


{% foundations of statistics %}

Nickerson, Raymond S. (1999) “Statistical Significance Testing: Useful Tool or Bone-Headedly Misguided Procedure?,” Book Review of: Lisa L. Harlow, Stanley A. Mulaik, & James H. Steiger (1997, eds.) What if there Were No Significance-Tests?, Erlbaum, Mahwah, N.J.; Journal of Mathematical Psychology 43, 455–471.


{% foundations of probability; foundations of statistics %}

Nickerson, Raymond S. (2004) “Cognition and Chance—The Psychology of Probabilistic Reasoning.” Lawrence Erlbaum Associates, Mahwah, New Jersey.


{% Give statistical arguments that gains and losses cannot just be combined and are better treated separately, in a large-scale study of some 6,000 patients. %}

Nichol, Michael B. & Joshua D. Epstein (2008) “Separating Gains and Losses in Health when Calculating the Minimum Important Difference for Mapped Utility,” Quality of Life Research 17, 955–961.


{% JRU misspelled the name Aylit Tina Romm, but here it is done correctly.
Subjects repeatedly gamble on drawings with replacement from an unknown Ellsberg urn, where the sure-thing principle is tested each time. In one treatment, subjects are informed about the result of the drawing each time, so that they get to know the composition of the urn, and in the other treatment they are not. The former is called “statistical learning,” and the latter is called “learning through mere thought.” Learning through mere thought reduced violations of the sure-thing principle, but statistical learning did not. The latter is surprising, and the authors write that they have no explanation for it. %}

Nicholls, Nicky, Aylit Tina Romm, & Alexander Zimper (2015) “The Impact of Statistical Learning on Violations of the Sure-Thing Principle,” Journal of Risk and Uncertainty 50, 97–115.


{% ISBN 0-324-27086-0; ISBN for non-USA: 0-324-22505-9; price in 2004: €48 %}

Nicholson, Walter (2005) “Microeconomic Theory; Basic Principles and Extensions” 9th edn. South-Western, Thomson Learning, London.


{% %}

Niederée, Reinhard (1992) “What Do Numbers Measure? A New Approach to Fundamental Measurement,” Mathematical Social Sciences 24, 237–276.


{% In 1943, Niebuhr wrote the following prayer, often cited and called the Serenity Prayer:
“God, give us grace to accept with serenity the things that cannot be changed,
courage to change the things that should be changed,
and the wisdom to distinguish the one from the other.”
He wrote it for the Congregational church in the hill village of Heath, Massachusetts. It is quoted as an epigraph at the beginning of the 1976 book, on the page preceding the preface. This book contains sermons etc. by him, edited by his wife Ursula M. Niebuhr after his death. She explains the Serenity Prayer on p. 5.
Two Dutch translations are:
Geef mij de kalmte om te aanvaarden
wat ik niet kan veranderen
de kracht om te veranderen wat ik kan
de wijsheid om het onderscheid te zien.
(Amnesty International, 1999)
and
Geef mij de innerlijke rust om de dingen, die ik niet kan veranderen, te aanvaarden, de moed om datgene te veranderen waartoe ik bij machte ben, en de wijsheid om te zien waar het verschil ligt (source unknown).
A variation of the prayer is cited in Vonnegut, Kurt (Jr.) (1969). %}

Niebuhr, Reinhold (1976) “Justice and Mercy.” Harper and Row, New York.


{% Life expectancy cannot be an ultimate criterion because the utility of life duration can be nonlinear. %}

Nielsen, Jytte Seested, Susan Chilton, Michael Jones-Lee, & Hugh Metcalf (2010) “How Would You Like Your Gain in Life Expectancy to Be Provided? An Experimental Approach,” Journal of Risk and Uncertainty 41, 195–218.


{% %}

Nielsen, Lars Tyge (1984) “Unbounded Expected Utility and Continuity,” Mathematical Social Sciences 8, 201–216. (See also Nielsen, Lars Tyge (1987) “Corrigenda,” Mathematical Social Sciences 14, 193–194.)


{% This paper assumes EU with risk aversion, implying concave utility. Then we are close to differentiability. Given concave utility, necessary and sufficient conditions are given for differentiability, which amount to excluding first-order risk aversion by requiring risk premia and probability risk premia to vanish as stakes become small. Such a preference condition involving limits has the same observability status as continuity. %}

Nielsen, Lars Tyge (1999) “Differentiable von Neumann-Morgenstern Utility,” Economic Theory 14, 285–296.
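In formulas, the limit condition is of the following kind (my paraphrase): with π(w, εX̃) the risk premium at wealth w of a zero-mean risk X̃ scaled by ε > 0,
\[
\lim_{\varepsilon \downarrow 0} \frac{\pi(w, \varepsilon \tilde{X})}{\varepsilon} = 0,
\]
i.e., second-order risk aversion in the sense of Segal & Spivak (1990); a strictly positive limit (first-order risk aversion) is what the condition excludes.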


{% common knowledge %}

Nielsen, Lars Tyge, Adam Brandenburger, John Geanakoplos, Richard McKelvey, & Talbot Page (1990) “Common Knowledge of an Aggregate of Expectations,” Econometrica 58, 1235–1239.


{% Measure risk attitudes in a number of ways. One is the choice list. Others are introspective and hypothetical questions. N = 300 households. The measures have significant but small correlations. Associated with age, gender, and education, but not with wealth. %}

Nielsen, Thea, Alwin Keil, & Manfred Zeller (2013) “Assessing Farmers’ Risk Preferences and Their Determinants in a Marginal Upland Area of Vietnam: A Comparison of Multiple Elicitation Techniques,” Agricultural Economics 44, 255–273.


{% %}

Nielsen, Thomas D. & Jean-Yves Jaffray (2001) “An Operational Approach to Rational Decision Making Based on Rank-Dependent Utility,”


{% %}

Nielsen, Thomas D. & Jean-Yves Jaffray (2006) “Dynamic Decision Making without Expected Utility: An Operational Approach,” European Journal of Operational Research 169, 226–246.


{% Seems to have written: “For believe me the secret for harvesting from existence the greatest fruitfulness and the greatest enjoyment is to live dangerously! Build your cities on the slopes of Vesuvius! Send your ships into uncharted seas! Live at war with your peers and yourselves! Be robbers and conquerors as long as you cannot be rulers and possessors, you seekers of knowledge!” %}

Nietzsche, Friedrich (1882) “Die Fröhliche Wissenschaft.” Translated into English by Walter Kaufmann (1974) “The Gay Science.” Vintage Books, New York.


{% CPB %}

Nieuwenhuis, Ate (1994) “Simultaneous Maximization, the Nash Noncooperative Equilibrium, and Economic Model Building,” Central Planning Bureau, The Hague, the Netherlands.


{% intuitive versus analytical decisions: a meta-analysis of the unconscious-thought hypothesis, which claims that decisions improve if people are distracted beforehand, so that they cannot give the decision conscious thought and have to decide purely, for lack of a better term, intuitively. The hypothesis had been advanced by Dijksterhuis et al. (2004) and many others. The paper is negative on this hypothesis. %}

Nieuwenstein, Mark R., Tjardie Wierenga, Richard D. Morey, Jelte M. Wicherts, Tesse N. Blom, Eric-Jan Wagenmakers, & Hedderik van Rijn (2015) “On Making the Right Choice: A Meta-Analysis and Large-Scale Replication Attempt of the Unconscious Thought Advantage,” Judgment and Decision Making 10, 1–17.


{% ordering of subsets %}

Niiniluoto, Ilkka (1972) “A Note on Fine and Tight Qualitative Probabilities,” Annals of Mathematical Statistics 43, 1581–1591.


{% %}

Nilsson, Nils J. (1986) “Probabilistic Logic,” Artificial Intelligence 28, 71–87.


{% Nice explanation of hierarchical Bayesian estimation, done for PT. The authors use exactly the same parametric family as T&K 92 and as in Example 9.3.1 of Wakker (2010). They run into big numerical problems when estimating loss aversion and discuss this extensively, but do not pin down the mathematical reason. That mathematical reason is described in §9.6 of Wakker (2010). P. 89 2/3, 1st column: the authors recommend using the same power for gains and losses so as to fix utility and disentangle utility from loss aversion, and use this α = β restricted PT in the rest of the paper. That this restriction avoids all kinds of numerical problems is explained in §9.6 of Wakker (2010). %}

Nilsson, Håkan, Jörg Rieskamp, & Eric-Jan Wagenmakers (2011) “Hierarchical Bayesian Parameter Estimation for Cumulative Prospect Theory,” Journal of Mathematical Psychology 55, 84–93.
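A minimal sketch, in Python, of the identification problem; the mathematical point is that of §9.6 of Wakker (2010), but the code, numbers, and names are mine:

def pt_utility(x, alpha, beta, lam):
    # Power utility for PT: x**alpha for gains, -lam * (-x)**beta for losses.
    return x**alpha if x >= 0 else -lam * (-x)**beta

# With alpha != beta, rescaling the money unit by c multiplies gain utilities
# by c**alpha but loss utilities by c**beta, so a fitted loss-aversion
# parameter depends on the money unit; with alpha == beta it does not.
for c in (1.0, 100.0):                      # e.g., euros versus cents
    g = pt_utility(c * 1.0, 0.9, 0.7, 2.25)
    l = pt_utility(-c * 1.0, 0.9, 0.7, 2.25)
    print(c, -l / g)                        # 2.25 at c = 1; about 0.9 at c = 100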


{% %}

Ninio, Anat, & Daniel Kahneman (1974) “Reaction Time in Focused and in Divided Attention,” Journal of Experimental Psychology 103, 393–399.


{% How people develop awareness of probability/statistics, and how that is also matter of evolution of awareness. %}

Nisbett, Richard E., David H. Krantz, Christopher Jepson, & Ziva Kunda (1983) “The Use of Statistical Heuristics in Everyday Inductive Reasoning,” Psychological Review 90, 339–363.


{% %}

Nisbett, Richard E. & Lee Ross (1980) “Human Inference: Strategies and Shortcomings of Social Judgment.” Prentice-Hall, London.


{% %}

Nisbett, Richard E. & Timothy D. Wilson (1977) “Telling More than We Can Know: Verbal Reports on Mental Processes,” Psychological Review 84, 231–259.


{% revealed preference: on compact path-connected space, a single-valued choice function defined on all finite subsets cannot be continuous. %}

Nishimura, Hiroki & Efe A. Ok (2014) “Non-Existence of Continuous Choice Functions,” Journal of Economic Theory 153, 376–391.


{% Shows that every (continuous and) reflexive binary relation on a (compact) metric space can be represented by means of the maxmin, or dually, minmax, of a (compact) set of (compact) sets of continuous utility functions.
Maxmin utility representation: x ≽ y if and only if sup_{S∈𝒮} inf_{u∈S} (u(x) − u(y)) ≥ 0. Here 𝒮 is a collection of sets of utility functions, and S is a set of utility functions. This can be done with u continuous, for every reflexive ≽. One can also take, dually, a minmax representation. There is no clear uniqueness result for the sets to be chosen. Because there is much richness in the sets to be chosen, one can always choose the utility functions continuous. %}

Nishimura, Hiroki & Efe A. Ok (2016) “Utility Representation of an Incomplete and Nontransitive Preference Relation,” Journal of Economic Theory 166, 164–185.


{% revealed preference: a variation of Afriat's theorem that allows for general choice domains. It considers a one-dimensional representation, defining rationalizability (this formal term is common in this field, although I regret it) as the choice set being a SUBSET of the preference-best elements, but, and this is the central issue of this paper, the preference relation should satisfy a dominance relation. Richter (1966) gave completely general (for general choice domains) necessary and sufficient conditions when rationalizability is defined in the more common sense of the choice set being identical to the preference-best elements. Further results are given, including continuity and intertemporal properties. %}

Nishimura, Hiroki, Efe A. Ok, & John K.-H. Quah (2017) “A Comprehensive Approach to Revealed Preference Theory,” American Economic Review 107, 1239–1263.


{% Define more uncertainty averse under CEU (Choquet expected utility) as one capacity dominating the other. Show then that more uncertainty aversion makes laborers search for a shorter time for a new job, whereas more risk aversion makes them search longer. %}

Nishimura, Kiyohiko G. & Hiroyuki Ozaki (2004) “Search and Knightian Uncertainty,” Journal of Economic Theory 119, 299–333.


{% Whereas an increase in risk increases the value of irreversible investment, an increase in ambiguity (equated with multiple priors here) decreases it. %}

Nishimura, Kiyohiko G. & Hiroyuki Ozaki (2007) “Irreversible Investment and Knightian Uncertainty,” Journal of Economic Theory 136, 668–694.


{% %}

Noël, Marie-Pascale & Xavier Seron (1997) “On the Existence of Intermediate Representations in Numerical Processing,” Journal of Experimental Psychology: Learning, Memory, and Cognition 23, 697–720.


{% conservation of influence: Proved that in every situation of symmetry there is a quantity subject to a conservation law. In particular, symmetry w.r.t. time translations implies the law of conservation of energy, a useful result for relativity theory. %}

Noether, Emmy A. (1918) “Invariante Variationsprobleme,” Nachr. König. Gesellsch. Wissen. Göttingen, Math-Phys. Klasse, 235–257; translated into English by M. A. Travel (1971), Transport Theory and Statistical Physics 1, 183–207.


{% updating: %}

Noguchi, Yuichi (2015) “Merging with a Set of Probability Measures: A Characterization,” Theoretical Economics 10, 411–444.


{% Explains the immediacy effect and decreasing impatience by a model with constant discounting but time-dependent utility (as the author puts it: no constant marginal utility over time), including time-dependent background consumption. Reminds me of the hidden stakes for decision under uncertainty of Kadane & Winkler (1988). Has a numerical illustration with power utility.