Bibliography



§3 ff. gives comparative results on uncertainty aversion, i.e., the usual stronger preference for sure outcomes or unambiguous acts, leading to lower weights on M in the mixtures of m and M and/or concave-transformation results. As usual, such comparative results require the same preferences over unambiguous/ideal acts (p. 9 bottom). As usual in the ambiguity literature, the authors think only of aversion/seeking and not of my “hobby” of insensitivity.
P. 13 points out that the authors are providing a subjective behavioral foundation of belief functions in the sense of Shafer. They also refer to Dempster, but with him the probabilities of the unambiguous events were objective as in Jaffray’s works. §5 (p. 14 ff.) discusses the separation of uncertainty perception and attitude.
P. 17 uses the term and concept of source introduced by Tversky, imposing probabilistic sophistication as with the uniform sources of Wakker (2008 New Palgrave). However, they only cite Epstein & Zhang (2001) here. I disagree with this reference because Epstein equated probabilistic sophistication with unambiguity (criticized by Wakker 2008) and did not have the general concept of source. This concept was introduced by Tversky in the early 1990s, with Heath & Tversky (1991) and Tversky & Kahneman (1992) already mentioning the concept and Tversky & Fox (1995) and Tversky & Wakker (1995) developing it.
The Jaffray-G&P model is intellectually very interesting, and Jaffray based it on principled philosophies. Yet I think that it will not work well practically. The decisions contingent on diffuse events are too crude to be realistic (in a way similar to α-maxmin, but more extreme). In this sense, diffuse events are different, and do not have the same idealized status as, for instance, the fine events of Savage (1954). Although we can never really get the infinite refinedness that Savage requires, we can get events close to it, and then preferences will be as Savage assumes. Diffuse events are different. They concern assumptions not only about events, but also about preferences on them. I cannot imagine events WITH preferences even close to what is assumed for diffuse events.
As with most theoretical ambiguity models in the literature today, G&P do not state whether the model is meant to be normative or descriptive. They only show that they can accommodate the Ellsberg and Machina paradoxes, but this is typical economic armchair theorizing. The authors do not otherwise relate to empirical findings. Neither normatively nor descriptively do I think that behavior w.r.t. diffuse events will be close to what this model assumes. Well, on the normative side, I think that Jaffray meant his model to be normative, but few will follow his extreme aversion to using subjective inputs to model uncertainty (only the utility of outcomes can be subjective for him). The analogy with α-maxmin that G&P put up (p. 7), in considering only inf and sup and having a kind of violation of strict dominance, is exactly where α-maxmin is also unsatisfactory. Added to this, in α-maxmin things are not as extreme as here with diffuse events, because there the sup and inf are taken over expectations w.r.t. probabilities rather than directly over outcomes.
Another problem I have is that in EUU, ambiguity attitude is partly outcome driven, through the weight used when mixing m and M, which is similar to the smooth model. I think that it should instead be event driven, for instance to accommodate the empirically prevailing insensitivity.
I wonder whether, if we have an Anscombe-Aumann two-stage model with ideal events in the second stage, the first-stage events regarding horses could then all be diffuse, like the horizontal events in Figure 2, or what else the diffuse events would then be. %}

Gul, Faruk & Wolfgang Pesendorfer (2014) “Expected Uncertain Utility Theory,” Econometrica 82, 1–39.


{% biseparable utility: satisfied.
event/utility driven ambiguity model: event-driven: relative to their 2014 (ECMA) theory, here the Hurwicz mixture parameter is independent of outcomes, so that the theory is no longer utility driven.
This paper axiomatizes and analyzes an interesting special case of their 2014 expected uncertain utility theory. It strengthens their original axiom 4 (Savage’s P4) to hold for all events, and not only for the ideal (unambiguous) events. This brings many new concepts, in particular related to source dependence. Tversky pushed source dependence in ambiguity theory in the early 1990s and influenced young authors, including Craig Fox and me in those days, and also Chew Soo Hong. Gul independently developed the same understanding, together with his co-author Ergin, in their 2009 paper, where they used the term issues. This paper pursues these ideas. It brings in empirical realism by not focusing exclusively on ambiguity aversion (in my terminology, which involves what these authors capture through their source-dependent weighting functions; see below). They consider uncertainty loving at poor odds.
Their basic model of Hurwicz expected utility (HEU) is
   W(f) = ∫ [α·u(f1(ω)) + (1−α)·u(f2(ω))] dμ(ω),   (*)
with f1 and f2 the lower and upper ideal envelopes of the act f.
It has three subjective inputs: the Hurwicz pessimism index α, the utility function u, and a sort of unambiguous-subjective probability measure μ explained later. f is a Savage act mapping states from the state space to the outcome set [l,m]. The outcome set is compact and utility is continuous and, hence, bounded, avoiding many problems about integrals being undefined. A prior is a sigma-algebra endowed with a countably additive atomless probability measure. There is a special prior, being the μ in (*). Events in its sigma-algebra are called ideal, and are to be taken as unambiguous. Here the subjective probabilities are assumed to be, let me say, not unknown. P. 469 3rd line below Eq. 1: “the events the decision maker perceives to be least uncertain.” As is common in the field today, ambiguity is considered here to be some sort of objective property of events/info, to be “perceived” (subjectively indeed) by a decision maker (see also p. 468 3rd para l. 2). I rather take ambiguity as sensitive/cognitive, without committing to whether it is in the events or in the decision maker’s mind. Savage’s sure-thing principle is imposed on ideal events, implying EU there; this is what defines ideal events. I did not study the paper enough to see why/how the ideal events are a sigma-algebra, a λ-system, or what. The set of priors contains all priors that are extensions of μ. So, on the ideal events they are uniquely determined, and outside them they can be whatever they want.
Diffuse events are defined to intersect every nonnull ideal event. They will be taken to be completely ambiguous and handled in the complete-absence-of-info sense of Jaffray, taking an α/(1−α) mix of the infimum and supremum outcomes without using likelihood info in any way. They are centrally used in the theoretical derivation. But I think that such events will not be found in reality, neither empirically nor normatively (the latter for people who, unlike me, study ambiguity normatively).
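As a numeric illustration (my own, not the paper’s; assuming linear utility u, pessimism index α = 0.7, and bets paying 100 on the event and 0 otherwise):
   ideal event E with μ(E) = 0.5:  0.5·u(100) + 0.5·u(0) = 50;
   diffuse event D (with D’s complement also diffuse):  α·u(0) + (1−α)·u(100) = 0.7·0 + 0.3·100 = 30.
The bet on D is evaluated only from its conditional infimum 0 and supremum 100, so no likelihood information about D plays any role.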
Axiom 6 combines pointwise-convergence continuity, which strengthens finite additivity into countable additivity, and supnorm continuity, implying continuity in the utility dimension without restricting event weighting. Axiom 2 imposes a kind of strict monotonicity that, natural as it looks, need not hold under finite additivity (Wakker 1993 RESTUD). But finite-additivity-type phenomena are probably excluded by Axiom 6.
The paper does not impose some grand sigma-algebra of events but apparently allows all subsets (although each prior has a sigma-algebra specified), and then, by the axiom of choice (continuum hypothesis), many weird things can be around. I did not study the paper enough to see why diffuse events would exist. The authors seem to use the continuum hypothesis for it, so it is not very constructive. The diffuse events are important because Axiom 3 concerns them and is the only non-Savage-type axiom, which should determine the α-maxmin nature, with respect to all extensions of μ, that is characteristic of the preference functional.
Then the paper turns to sources. These are priors on sigma-algebras where probabilistic sophistication holds. For sources, the multiple-priors representation turns out to be equivalent to an RDU representation with a weighting function that is an α/(1−α) mixture of a convex weighting function, call it π, and its dual (the convexity of π is mentioned in passing on p. 476, 2 lines below Proposition 6). Thus within a source they have an RDU (the authors write the inconvenient and outdated RDEU) representation with, as weighting function, this α/(1−α) mixture of a convex weighting function and its dual. (Stated again in Proposition 6.) This is like Abdellaoui et al.’s (2011) source method, with the weighting function as source function. The authors interpret, much in deviation from my interpretations, α as a person-dependent, source-independent ambiguity attitude, and π as a source-dependent risk attitude (p. 468 3rd para last sentence). Thus they come to the unfortunate claim that ambiguity attitude is source-independent but risk attitude is source-dependent.
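A minimal numerical sketch (my own; the particular convex function, the value of α, and all function names are illustrative assumptions, not the authors’) of how such an α/(1−α) mixture of a convex weighting function and its dual can overweight unlikely events and underweight likely ones, coming close to inverse-S:

```python
# Illustrative sketch of a within-source weighting function of the form
# alpha*pi(p) + (1-alpha)*pi_hat(p), with pi convex and pi_hat its dual.
# The particular pi, alpha, and probabilities are assumptions, chosen only
# to illustrate overweighting of unlikely events.

def pi(p):
    """An assumed convex weighting function."""
    return p ** 3

def pi_dual(p):
    """Dual of pi: pi_hat(p) = 1 - pi(1 - p); concave when pi is convex."""
    return 1.0 - pi(1.0 - p)

def source_weight(p, alpha=0.5):
    """Mixture weighting used within a source."""
    return alpha * pi(p) + (1.0 - alpha) * pi_dual(p)

def rdu_binary(x_good, x_bad, p_good, alpha=0.5, utility=lambda x: x):
    """RDU value of a binary source act: x_good with matching probability
    p_good, x_bad otherwise."""
    w = source_weight(p_good, alpha)
    return w * utility(x_good) + (1.0 - w) * utility(x_bad)

if __name__ == "__main__":
    # Overweighting of unlikely events ("uncertainty loving at poor odds"):
    print(round(source_weight(0.05), 3))        # about 0.071 > 0.05
    # Underweighting of likely events:
    print(round(source_weight(0.95), 3))        # about 0.929 < 0.95
    # A long-shot bet exceeds its expected value under linear utility:
    print(round(rdu_binary(100.0, 0.0, 0.05), 2))   # about 7.14 > 5
```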
Proposition 2: the authors obtain every such weighting function as a power series, and every such power series occurs. This is a richness somewhat like the existence of diffuse events. The power-series result reminds me of the Weierstrass theorem, that the polynomials are dense in the set of continuous functions on a compact interval. The richness means that almost any ambiguity attitude allowed by this model is actually exhibited for particular events. This is big richness.
P. 479, middle para: “For the remainder of this paper, we fix , the agent’s uncertainty perception.” This statement in one sentence in the flow of the text is restrictive and, more importantly, is not directly behavioral. It means that later comparative results, such as Proposition 3 on more uncertainty averse, have a nonbehavioral component.
This paper is close to my opinions on ambiguity, but still I prefer some different terminologies and interpretations. First, I think that sources are better defined not endogenously but exogenously, in the modeling stage, like commodities. After all, that the known Ellsberg urn is known is not derived from preference, but decided exogenously. Then, subcases of sources are called uniform sources, and they are of special interest, with probabilistic sophistication. This is indeed endogenous. Risk should better refer only to objective known probabilities (decided exogenously). The differences between the source-dependent weighting functions then reflect differences in ambiguity attitudes, and not in risk attitude. This is why Abdellaoui et al. (2011) called their corresponding weighting function (so, the α/(1−α) mixture of the convex function and its dual) a source function, where its difference with the objective-risk weighting function reflects ambiguity. Speaking of source-dependent risk attitude may work easiest when first presenting to uninitiated audiences, but cannot survive. Not so much risk attitudes, but rather ambiguity attitudes, are a rich domain. Kilka & Weber (2001) used the same unfortunate terminology of source-dependent risk attitude. It is so confusing that I usually avoid citing it, even though it otherwise has many great ideas.
The paper claims to handle Allais, but I disagree. Allais entails a deviation from EU under known objective, unambiguous, probabilities. For those probabilities (μ; although μ is not meant to be objective, it is at least unambiguous) this paper assumes EU. It interprets the weighting part of what I would call the source function as source-dependent risk attitude and then says that the EU deviations captured by it are Allais-type risk attitudes.
The aim to handle Allais-type deviations from EU under risk, Ellsberg-type deviations from EU under ambiguity, and source-dependence in general, in one unified theory, an aim of this paper (p. 466 1st para), is shared by prospect theory for ambiguity, of which the source method is a special case. My 2010 book writes on p. 2, penultimate para: “At this moment of writing, 30 years after its invention, prospect theory is still the only theory that can deliver the full spectrum of what is required for decision under uncertainty, with a natural integration of risk and ambiguity.” In many places in my book and papers, and in many applications of the source method, it is emphasized that there is no commitment to EU for risk (or unambiguous events).
P. 471 l. 5 is important in excluding source preference within a source, which leads to Wakker’s (2008) uniform sources. Proposition 3 compares different sources regarding the attitudes derived from betting on ambiguous events, a condition that can readily be reformulated in terms of matching probabilities. A caveat is that the authors’ result requires the other subjective inputs to be identical. Their more-uncertain condition again is not a preference condition because it involves the nonbehavioral uncertainty perception. The corresponding dominance condition for the weighting function comes close to more inverse-S because the weighting function is a combination of a convex function and its dual. P. 473 bottom points this out by discussing uncertainty loving at poor odds, which is related to the overweighting of unlikely events.
§4.1 puts forward that the set of unambiguous events need not be an algebra.
Section 5 presents results for RDU, such as higher aversion to mean-preserving spreads corresponding with more concave utility and more pessimistic probability weighting. A problem is that mean-preserving spreads are defined using subjective probabilities, implying that they are not really preference conditions.
P. 468 bottom & p. 477 footnote 22 point out that the model of this paper has preferences not only over one-source acts, but also over multi-source acts. Abdellaoui et al. (2011) have that too: for multi-source acts they have the general prospect theory representation using general nonadditive event weighting functions. Their source method is a specification of prospect theory and stays within general prospect theory. Footnote 22 and the top of p. 477 are right that Abdellaoui et al. provide no very explicit preference foundation. Such a preference foundation, by the way, follows from the preference foundation of general prospect theory (Tversky & Kahneman 1992) combined with the preference foundation of Chew & Sagi (2008) within sources. But this paper gives an alternative route with a unified axiomatization and many powerful tools and concepts for analyzing source dependence.
Note that, with expected utility available on a Savage-type rich domain, RDU can easily be axiomatized by cumulative dominance (Sarin & Wakker 1992). %}

Gul, Faruk & Wolfgang Pesendorfer (2015) “Hurwicz Expected Utility and Subjective Sources,” Journal of Economic Theory 159, 465–488.


{% Ambiguity=amb.av=source.pref %}

Gul, Faruk & Wolfgang Pesendorfer (2014) “Calibrated Uncertainty,” working paper.


{% %}

Gul, Faruk & Andrew Postlewaite (1992) “Asymptotic Efficiency in Large Exchange Economies with Asymmetric Information,” Econometrica 60, 1273–1292.


{% %}

Gul, Faruk & Ennio Stacchetti (1999) “Walrasian Equilibrium with Gross Substitutes,” Journal of Economic Theory 87, 95–124.


{% %}

Gul, Faruk & Ennio Stacchetti (2000) “The English Auction with Differentiated Commodities,” Journal of Economic Theory 92, 66–95.


{% %}

Gul, Faruk & Hugo Sonnenschein (1988) “On Delay in Bargaining with One-Sided Uncertainty,” Econometrica 56, 601–611.


{% %}

Gul, Faruk, Hugo Sonnenschein, & Robert Wilson (1986) “Foundations of Dynamic Monopoly and the Coase Conjecture,” Journal of Economic Theory 39, 155–190.


{% %}

Gumen, Anna, Efe Ok, & Andrei Savochkin (2014) “Decision-Making under Subjective Risk: Toward a General Theory of Pessimism,” working paper.


{% DOI: 10.1002/bdm.1840
Redo a Dutt et al. (2014) study with some modifications. Dutt et al. generate ambiguity through second-order probabilities. But in the DFE (decisions from experience) treatment they let subjects sample only the outcome, with no knowledge of the 2nd-order process, so that subjects in fact sample a fifty-fifty 1st-order process. This paper lets subjects sample from the 2nd-order distribution; i.e., it lets them sample what the 1st-order composition is. Thus the subjects experience the 2nd-order distribution. The subjects know it is one of three: one dichotomous (1st-order p is 0 or 1), one normal, and one uniform. Experience reduces ambiguity aversion relative to description. I agree that this paper better brings out the 2nd-order distribution. But a problem is that for all 2nd-order distributions, the reduced 1st-order probability is 1/2. If subjects understand this, then they know that it does not matter what the 2nd-order distribution is. Both Dutt et al. and this paper, in the experienced-ambiguity treatment, renew the procedure each time so that the previous observations do not inform about the actual process faced next.
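A small arithmetic check (my own; assuming the dichotomous 2nd-order distribution puts probability 1/2 on each composition, and the normal and uniform ones are symmetric around 1/2): the reduced 1st-order probability is E[p] = 0.5·0 + 0.5·1 = 1/2 in the dichotomous case, and likewise 1/2 in the two symmetric cases, so a subject who reduces the compound distribution should be indifferent between the three.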
It is natural that the 50% of subjects for whom sampling from the ambiguous option happened to come out favorably prefer the ambiguous option (as reported in the last sentence of the abstract), and that the other 50% disprefer it. %}

Güney, Şule & Ben R. Newell (2015) “Overcoming Ambiguity Aversion through Experience,” Journal of Behavioral Decision Making 28, 188–199.


{% %}

Guo, Xianping (2007) “Continuous-Time Markov Decision Processes with Discounted Rewards: The Case of Polish Spaces,” Mathematics of Operations Research 32, 73–87.


{% %}

Guppy, Andrew (1992) “Subjective Probability of Accident and Apprehension in Relation to Self-Other Bias, Age, and Reported Behavior,” Accident Analysis and Prevention 25, 375–382.


{% questionnaire for measuring risk aversion %}

Gupta, Nabanita, Anders Poulsen, & Marie-Claire Villeval (2005) “Do (Wo)Men Prefer (Non-)Competitive Jobs?,” working paper.


{% %}

Gurevich, Gregory, Doron Kliger, & Ori Levy (2009) “Decision-Making under Uncertainty—A Field Study of Cumulative Prospect Theory,” Journal of Banking & Finance 33, 1221–1229.


{% %}

Gustafsson, Anders, Andreas Herrmann, & Frank Huber (2007) “Conjoint Measurement: Methods and Applications (2nd edn.).” Springer, Berlin.

{% P. 342: “Utilities as well as subjective beliefs, e.g. in the form of subjective probabilities, are not directly observable: how should they if they do not exi[s]t?!” %}

Güth, Werner (1995) “On Ultimatum Bargaining Experiments – A Personal Review,” Journal of Economic Behavior and Organization 27, 329–344.


{% %}

Güth, Werner (2008) “(Non)behavioral Economics: A Programmatic Assessment,” Zeitschrift für Psychologie/Journal of Psychology 216, 244–253.


{% Seem to find that the strategy method gives different results than posterior choice. %}

Güth, Werner, Steffen Huck, & Wieland Müller (2001) “The Relevance of Equal Splits in Ultimatum Games,” Games and Economic Behavior 37, 161–169.


{% %}

Guthrie, Chris (2003) “Prospect Theory, Risk Preference, and the Law,” Northwestern University Law Review 97, 1115–1163.


{% %}

Guttman, Louis (1944) “A Basis for Scaling Qualitative Data,” American Sociological Review 9, 139–150.


{% foundations of statistics %}

Guttman, Louis (1985) “The Illogic of Statistical Inference for Cumulative Science.” In Omar F. Hamouda & J.C. Robin Rowley (1997, eds.) “Statistical Foundations for Econometrics.” Edward Elgar, Cheltenham.


{% preferring streams of increasing income
intertemporal separability criticized: sequence effects %}

Guyse, Jeffery L., L. Robin Keller, & Thomas Eppel (2002) “Valuing Environmental Outcomes: Preferences for Constant or Improving Sequences,” Organizational Behavior and Human Decision Processes 87, 253–277.


{% Use hypothetical choice. Given that they consider serious time delays and losses, I agree with their decision.
dominance violation by pref. for increasing income: not exactly that, but general preferences for sequencing effects, which do imply intertemporal separability criticized. Discusses discrepancies between matching and choice. They do not consider binary choice but rankings of multiple alternatives. They are maybe the first to investigate the choice-matching discrepancy in intertemporal choice within subjects.
decreasing/increasing impatience: Find no evidence for decreasing (or increasing) impatience (p. 245, 2nd column, 2nd para: “it is interesting to observe that short/long term asymmetry did not surface in our within-subjects design for either elicitation technique.”) %}

Guyse, Jeffery L. & Jay Simon (2011) “Consistency among Elicitation Techniques for Intertemporal Choice: A within-Subjects Investigation of the Anomalies,” Decision Analysis 8, 233–246.


{% Seems to have used VAS to measure discounting. %}

Gyrd-Hansen, Dorte (2002) “Comparing the Results of Applying Different Methods of Eliciting Time Preferences for Health,” Health Economics in Prevention and Care 3, 10–16.


{% discounting normative: seems to argue against discounting. %}

Gyrd-Hansen, Dorte & Jes Søgaard (1998) “Discounting Life-Years: Whither Time Preference?,” Health Economics 7, 121–127.


{% The author expresses an extreme econometric viewpoint in preface 2nd para: “The method of econometric research aims, essentially, at a conjunction of economic theory and actual measurements, using the theory and technique of statistical inference as a bridge pier. But the bridge itself was never completely built. So far, the common procedure has been, first to construct an economic theory involving exact functional relationships, then to compare this theory with some actual measurements, and, finally, "to judge" whether the correspondence is "good" or "bad." Tools of statistical inference have been introduced, in some degree, to support such judgments, e.g., the calculation of a few standard errors and multiple-correlation coefficients. The application of such simple "statistics" has been considered legitimate, while, at the same time, the adoption of definite probability models has been deemed a crime in economic research, a violation of the very nature of economic data. That is to say, it has been considered legitimate to use some of the tools developed in statistical theory without accepting the very foundation upon which statistical theory is built. For no tool developed in the theory of statistics has any meaning-except, perhaps, for descriptive purposes-without being referred to some stochastic scheme.” %}

Haavelmo, Trygve (1944) “The Probability Approach in Econometrics,” Econometrica 12, supplement, July 1944, pp. iii-vi+1-115.


{% Presidential address, meeting of Econometric Society, Philadelphia, Dec. 29 1957; pleads for the use of subjective probability. “are realities in the minds of people” and “ways and means can and will be found to obtain actual measurements of such data.”
P. 357: “I think most of us feel that if we could use explicitly such variables as, e.g., what people think prices or incomes are going to be, or variables expressing what people think the effects of their actions are going to be, we would be able to establish relations that could be more accurate and have more explanatory value. But because the statistics on such variables are not very far developed, we do not take the formulation of theories in terms of these variables seriously enough. It is my belief that if we can develop more explicit and a priori convincing economic models in terms of these variables, which are realities in the minds of people even if they are not in the current statistical yearbooks, then ways and means can and will eventually be found to obtain actual measurements of such data.” %}

Haavelmo, Trygve (1958) “The Role of the Econometrician in the Advancement of Economic Theory,” (Presidential address, Econometric Society, Philadelphia, Dec. 29, 1957) Econometrica 26, 351–357.


{% DOI: http://dx.doi.org/10.1111/risa.12025
Present subjects with hazards and their objective probabilities, and then ask them to express subjective degrees of likelihood/probability. The severity of the hazard does not affect the expressed degrees. %}

Haase, Niels, Frank Renkewitz, & Cornelia Betsch (2013) “The Measurement of Subjective Probability: Evaluating the Sensitivity and Accuracy of Various Scales,” Risk Analysis 33, 1812–1828.


{% Chapters on cognitive neuroscience, attention, recognition and action, representation of knowledge: neural networks, learning and memory, language, reading and writing, problem solving, reasoning and choice, and applications.
Final page, p. 440/441, discusses whether it is better to investigate cognitive psychology in the laboratory or in the real world. (cognitive ability related to risk/ambiguity aversion) %}

Haberlandt, Karl (1994) “Cognitive Psychology.” Allyn and Bacon, Boston. (2nd edn. 1980, Krieger, New York.)


{% This paper investigates if a status quo, or an expectation just prior to it, serves as reference point.
In the first (“indirect”) experiment, a choice list determines the CE (= certainty equivalent) of (0.5: CHF10, 0.5: 0). Payment is in Swiss francs (CHF). This determines the risk aversion of N=121 subjects, with the random incentive system used to implement real incentives. This is done for a control treatment and for two experimental treatments. The control group receives a sure prior endowment of CHF4, the first experimental group receives (0.5: CHF4, 0.5: CHF8), and the second experimental group receives (0.75: CHF4, 0.25: CHF12). The two experimental groups have an expected prior endowment of CHF6. The prior endowments were carried out before the choice lists; that is, the lotteries of the two experimental groups were resolved before the aforementioned measurement of risk aversion. For the subjects in the experimental groups who received CHF4 as prior endowment, we can see whether they take that CHF4 as reference point and so behave as in the control group, or whether they take the expected CHF6 as reference point and behave differently. It turns out that, for both experimental groups, they are somewhat less risk averse than the control group. (The evidence for the first experimental group is not so strong, p = 0.04 one-sided.) The two experimental groups are mutually similar. Had they taken the expected CHF6 as their reference point, then the prospect would have been perceived as mixed, leading to greater, and not smaller, risk aversion for the experimental groups. So other things must be going on. %}

Hack, Andreas & Frauke Lammers (2011) “The Role of Expectations in the Formation of Reference Points,” working paper.


{% %}

Hacking, Ian (1965) “Logic of Statistical Inference.” Cambridge University Press, New York.


{% conditional probability; dynamic consistency; %}

Hacking, Ian (1967) “Slightly More Realistic Probability,” Philosophy of Science 34, 311–325.


{% foundations of probability; seems to be very good, authoritative. Seems to write that from the beginning probability had a dual role, one to reflect empirical frequencies, and the other to reflect subjective degree of belief. %}

Hacking, Ian (1975) “The Emergence of Probability.” Cambridge University Press, New York.


{% Introduce second-order stochastic dominance. %}

Hadar, Joseph & William R. Russell (1969) “Rules for Ordering Uncertain Prospects,” American Economic Review 59, 25–34.


{% %}

Hadar, Josef & Tae Kun Seo (1995) “Asset Diversification in Yaari’s Dual Theory,” European Economic Review 39, 1171–1180.


Argues against C/E (cost-effectiveness) analyses. P. 2219 seems to write: “There is a fact about the human psyche that will inevitably trump the utilitarian rationality that is implicit in cost-effectiveness analysis: people cannot stand idly by when an identified person’s life is visibly threatened if effective rescue measures are available.” He seems to have proposed, on p. 2223, a rule where “cost is not considered in determining the importance of treatment.” %}

Hadorn, David C. (1991) “Setting Health Care Priorities in Oregon: Cost-Effectiveness Meets the Rule of Rescue,” JAMA 265, 2218–2225.


{% Despite the broad title, they only investigate the Asian disease problem with militaries, to find that those are generally risk seeking. %}

Haerem, Thorvald, Bård Kuvaas, Bjørn T. Bakken, & Tone Karlsen (2011) “Do Military Decision Makers Behave as Predicted by Prospect Theory?,” Journal of Behavioral Decision Making 24, 482–497.


{% risky utility u = strength of preference v (or other riskless cardinal utility, often called value) %}

Hagen, Ole (1984) “Neo-Cardinalism.” In Ole Hagen & Fred Wendstøp (eds.) Progress in Utility and Risk Theory, 145–164, Kluwer (was Reidel), Dordrecht.


{% measure of similarity %}

Hahn, Ulrike, Nick Chater, & Lucy B. Richardson (2003) “Similarity as Transformation,” Cognition 87, 1–32.


{% %}

Hahnemann, W. Michael (1991) “Willingness to Pay and Willingness to Accept: How Much Can They Differ?,” American Economic Review 81, 635–647.


{% Test Benartzi-Thaler myopic loss aversion, finding it, surprisingly, even more pronounced for professional traders than for students. %}

Haigh, Michael S. & John A. List (2005) “Do Professional Traders Exhibit Myopic Loss Aversion? An Experimental Analysis,” Journal of Finance 60, 523–534.


{% Application of ambiguity theory;