Working knowledge


This utilitarian idea underlay the classical economic distinction between "necessities" and "luxuries." However, many of us learn from life that it is best to give up inessential necessities so as to keep the essential luxuries.

The classical distinction implies that we really want the money and material comforts, and we don't really want the other stuff—we just think we do. This bit of folk psychology (masquerading as classical economics) does not deserve the effort of refutation. Not even God can distinguish a "real" want from a want we only think we feel. Certainly Downs knew perfectly well that people work for the approbation of their friends and loved ones.

Gary Becker opposes rationality to various other things: “...custom and tradition, the compliance somehow induced by social norms, or the ego and the id” (Becker 1986:117, quoted in Green and Shapiro 1994:186). This is actually a good start at a "rational" definition. Becker has moved progressively toward accommodating such things as conformity, social pressure, guilt, and desire to please into his models of rational behavior. Moreover, he now admits (dare I say: a bit lamely?): "My work may have sometimes assumed too much rationality, but I believe it has been an antidote to the extensive research that does not credit people with enough rationality” (Becker 1996:155-156).

Economists and others also tend to confuse "rational" with "self-interested." Some even confuse "self-interested" with "selfish." For some economists, it would appear, the rational person occupies his or her time trying to rip off his or her fellow citizens, freeloading on the public good (the famous "free-rider problem"; Olson 1965).

“Self-interest" is not necessarily "rational." People do irrational things every day for impeccably selfish motives. They make stupid mistakes. They act from passion of the moment, without reflection. They endanger their lives for short-term, trivial pleasures. This has led some modelers to drop the "rational" and talk only of self-interest (Mansbridge 1990b:255); this, however, makes modeling impossible and brings us back to the vapid claim.

Rational choice theory depends on a folk psychology that sees people as acting on their beliefs to satisfy their desires (Frank 1988; Rosenberg 1992). The language of "beliefs and desires" is inadequate in and of itself (Rosenberg 1992).

Yet a third definition of rationality, identified in contemporary writings with the economic philosopher Jon Elster, focuses on a quite different issue: Rationality critically involves people being able to take one step backward in order to take two forward. Elster likes to quote the French phrase reculer pour mieux sauter, “step back to jump better.” New information should powerfully influence such choices. If Robinson Crusoe learned how to make a fishnet, he could decide to catch fish by hand, dry them, and then go without fishing for a few days while he wove the net (Elster 1993:144-146; see also Elster 1989).

Rational-choice theorizing might better be seen as normative. Unfortunately, while clearly a liberal vision by comparison with autocratic and totalitarian ideas (Holmes 1990; Mansbridge 1990a), it is not a norm that deserves to be accepted without a hard look. Economists are trying to sell an ideal: the ideal of the isolated, competitive person. Sociability and emotionality are downvalued. Cutthroat competition and social irresponsibility are seen as inevitable (Olson 1965) and even desirable (Becker 1996; for problems with these approaches, see Elster 1989, Green and Shapiro 1994, Sen 2010).

The more highly specialized and extremely complex the rational-choice models become, on the whole, the less believable they are. As calculations become more complex, there is more opportunity to mix emotion and mistake into the equations. Ten calculations provide, other things being equal, ten times as many chances for mistakes as one calculation. (See Russell Hardin 1988:8-10, on the need to invoke “bounded rationality” in accounting for ethical choices).

All forms of rational choice are subject to the criticism that they require certainty about the world—at least about one’s feasible set and one’s goals. Modern rational choice theories have developed beyond this, and can accommodate risk and uncertainty. “Risk” refers to a chancy situation with known risks, like roulette, or like entering a forest where you know that exactly one out of a hundred people is eaten by a jaguar. “Uncertainty” refers to true ignorance of the situation; one does not know whether the forest has jaguars or not. Humans fear being out of control, and especially being ignorant even of how one could exert control. So uncertainty is harder on people than risk. Turning uncertainty into risk, and risk into certainty, is always a major goal of real-world learning and research.

Until we have such certainty, it is often rational not to act "rational." The best way to maximize our immediate utility is to wing it. Spending inordinate time calculating the absolute best course of action is irrational. Sometimes there is no optimum to find; at best, it might take years to find it (Elster 1989; Green and Shapiro 1994). The beginning of an infinite regress is here: rational-choice diehards say we have to calculate—rationally, of course!—when and how much to optimize....

More fatal is the difficulty of separating means and ends in human action (Whitford 2002). Almost all human goals are only way-stations on the route to higher-level goals. People are not always sure of exactly what their goals are. Some goals are culturally absorbed, and—again—only partially known on a conscious level. Others are vague: one starts out the day to do something—anything—and gradually falls into doing some particular thing because of trivial contingent factors.

People also naturally try to accomplish several goals with one act. I tend to use a rough “rule of three”: For people to do anything really important, they have to have about three good reasons. Calculating rationality in such cases is difficult at best, and impossible (at least in real-world circumstances) if one of the reasons is a higher-order goal while the others are not.

Herbert Simon wrote that people resort to “bounded rationality” (Simon 1967). That phrase is something of a cop-out; it lets us use the word “rationality” for intuitive approximations and inspired improvising that are not really rational calculations. Some users of the “bounded rationality” concept imply that people somehow rationally calculate when not to calculate. This is another “infinite regress trap” that needs no discussion.

Short-term vs. long-term tradeoffs are an intractable problem for rational-choice theorists. Gary Becker (1996) alleges that drug addicts and alcoholics are those who value the very short term; they act for immediate enjoyment. The non-addicts think more of the long term. Quite apart from the absurdity of this as a full explanation of addiction, it is not even a good partial explanation, since it tells us nothing of why people differ in this regard. It merely restates the fact that some people are addicts, in pseudo-economic language.

If the Darwinian anthropologists are right (and they are), the fact is that many of our goals and many of our means of achieving them are set by our genes, without our awareness. Our prehistoric ancestors evolved in a world of vast and unknowable dangers. They could not analyze the best way to get an ideal mate. They could not accurately predict storms, climate changes, or predator attacks. They evolved to be increasingly able to deploy rules of thumb, applied by inspired intuition. Such behavior usually outperformed rational calculus when people were fleeing bears or taking advantage of an unexpected opportunity to shoot an antelope.

The human tendency to stick with the tried and true is “irrational” (as pointed out by the sociologist Randall Collins, 2001:171f), but it is surely sensible. Risk-taking is bad enough in the stock market and other economic spheres. In daily life, everyone learns sooner rather than later that the known and safe is generally a better bet than the unknown and chancy, even when the potential payoffs of risk-taking are huge. Most of us learn this the hard way. “Experience keeps a dear [i.e., very expensive] school, but fools will learn in no other,” said Benjamin Franklin.

Some have argued that emotions are deployed in the service of reason, or are rationally managed. I have no idea what lives these sages live, but suffice it to say that my own experience is quite the reverse. My life and my family’s and friends’ lives would certainly have been profoundly different and vastly better if these grave experts were right. I can only wish that people could learn to use emotions that way (on this issue, see Frank 1988; Gibbard 1992; Nussbaum 2001; Sen 2009; and on the general case Stets and Turner 2006).

Politics is dominated by emotion, not by rational choice (Caplan 2007; Westen 2007). We would like to think otherwise, but the facts are clear. Drew Westen points out that an intuitive assessment of a candidate’s character is probably a better way of assessing her than a considered take on what little the media usually present of the candidate’s actual positions. Not only do we have imperfect information; the candidate has no way of knowing what emergencies will arise. The unexpected is the rule. So we are wise to trust our intuitions (cf. Gigerenzer 2007) in choosing a candidate who can be trusted to lead and choose wisely if things suddenly get tough.

The paradox of "self-interested" behavior in which the group defines the "self-interest" has often been raised, but only partially expounded. It deserves more attention by social theorists. In many groups, it is self-interested to do something against one’s self-interest, because that shows you will do anything for the group, and thus gets you accepted (Atran 2002). The more self-harming or self-endangering the behavior, the more loyalty it shows. Judith Rich Harris (1998) has stressed the importance of the peer group, and thoughtfully criticized the enormous (irrational!) underestimate of that importance in most psychological literature. And peer groups typically insist on such tests.

By contrast, people are amazingly sloppy about material and monetary goals. Chimpanzees throwing darts at a board—in actual experiments, computers generating random bets—routinely equal the best stock analysts in predicting the stock market. Usually, buyers’ and sellers’ errors correct each other in the stock market, so it seems to be performing rationally, but its true character is revealed when people rush to one extreme or another, generating boom and bust cycles.

Every con man knows that a con game requires a willing sucker—someone who wants to believe the unlikely. Political “spin” artists propagate the most amazingly improbable beliefs in the teeth of universally publicized evidence to the contrary. People also put up with "users"—individuals who demand special favors and are highly incensed if they don't get them—instead of driving them off, as rational choice theory and “cheater detection” would predict. The warm, friendly con artist, the confident, sophisticated "dude," and the appealing user never lack for willing victims. They take advantage of deep social instincts that always seem to outweigh rationality.

A final insight is provided by experimental economics: people primed by exposure to pictures of money, or word games involving it, later act more selfishly, calculatingly, and “rationally” (in narrow economic terms) than people not so primed (Vohs et al. 2006). This is related to the observation that economics students act more selfishly and calculatingly than other students (Henrich et al. 2004). The experimenting teams speculate on the evolutionary contours involved; presumably humans evolved in a world in which some things were scarce—food in a famine year, for instance—and thus evolved an ability to slip into rational self-interest when sociability became suicidal. If true, this would prove that “irrational” sociability was normally the more rational course in terms of personal action for successful reproduction, since natural selection made it the default.

Of course, one can say, with Jon Elster (1989:48): "Being irrational and knowing it is a big improvement over being naively and unthinkingly irrational" (his emphasis). This is perhaps true, but to save the general theory, one must specify how people actually choose. Otherwise, rational choice is reduced almost to the level of the old story of the man who got into the burning bed because "it seemed like a good idea at the time" (Fulghum 1991).

Grave authors have argued over whether the "ultimate" goal of humans is happiness, self-preservation, Darwinian reproduction of the species, or virtue (Sidgwick 1907). But recall Viktor Frankl's "meaning." People have to live for something: a family, an ideal, a dream, even a pet—something beyond and outside themselves, to which they have committed themselves (Frankl 1959, 1978). If it is necessary, for our self-interest, to have higher interests in mind, how do we calculate utilities? And, by definition, it would be impossible for a selfish individual, desperate for Franklian meaning, rationally to choose impassioned commitment. We cannot rationally force ourselves to be happy, any more than we can obey the command to be disobedient (Elster 1983). A state trying to maximize welfare can only provide opportunities—“the pursuit of happiness.” It cannot "make" its citizens happy. Some states have given their citizens the pathological "meaning" of fanatic loyalty, but they cannot "make" their citizens find Franklian meanings.

The final and ultimate finding in this chain is that people are made happier by spending on others than by spending on luxuries for themselves. Elizabeth Dunn and coworkers (2008) performed several experiments and also asked people who had won lotteries and otherwise gotten unexpected windfalls about their happiness levels. It turned out that money does buy happiness—if you give it to someone who needs it. Spending on self soon grows wearying.

An argument by Paul Bloom (2010) that morality owes much to rational and logical thought fails to convince me. He maintains that morality has been carefully thought out by many grave minds throughout history. Anthropological evidence does not confirm this. Morality is learned through culture, and changes only slowly, in response to people learning from daily interaction that people are getting helped or hurt. The strong emotional reactions to these helps and hurts are what matter. The supposed reasoning of philosophers like Immanuel Kant and John Rawls is rather widely said to be largely apologetics for their pre-existing, basically Christian, moral positions. Both Kant and Rawls come up with versions of Jesus’ Golden Rule as their moral touchstone.

Finally, Amartya Sen (2009, esp. pp. 176-193) has turned his brilliant gaze on rationality, but again he assumes too much of it in people. He knows that human history has seen many more genocides, senseless wars, and unnecessary famines than policies grounded on anyone’s genuine material self-interest or justice, but he writes in hope that we will be more reasonable.

Rationality Limited and Defended
“Now, in his heart, Ahab had some glimpse of this, namely: all my means are sane, my motive and my object mad.” -Melville (Moby Dick, chapt. 41, last page)
Rationality, at least in the ordinary sense of the word, does exist. The trouble with it is exactly what Ahab saw: it is often put to the service of irrational ends. Extreme anti-tax agitators become so blinded by minimizing their tax bites that they forget that without taxes they would have no police, fire, or armed forces protection. Rational maximizers of income or prestige or calling may literally work themselves to death; I have known several people who did exactly that.

People are perhaps most rational when pursuing intangible ends. No one is more rational in the narrow sense—rational about planning the means to get to a specific goal—than a patriotic soldier undertaking a dangerous mission or suicide bomber planning the details of her bombing run. I have heard terrorists described as “animals,” but never is a human more human—more rational, careful, and moral—than when devising a plan for self-sacrifice in the name of religion or group hate. Conversely, some people rebel just to rebel, without being rational about it and without pursuing a higher goal. And we do not understand them, either.

Microeconomic theory provides good descriptions and tolerable predictions of human behavior in the business world and in commercial farming—even the small-scale peasant farming once thought to be "irrational" (Barlett 1980, 1982). Even simpler cultures that lack money, or any concept of a general and fungible currency, have production, distribution, and exchange activities that can be analyzed by economic theory, or by the spinoff of neoclassical economics known as optimal foraging theory (Smith 1991; Smith and Winterhalder 1992). Much of the time, people really are choosing after conscious deliberation of the alternatives, and really are choosing the best way to get what they want. In short, rational-choice models very often work. Their failures, unfortunately, are often in precisely the situations where we most need to know what we are doing.

Old models that talk as if "society" or "culture" or "the mob" acted are simply wrong, and are now thoroughly superseded. Moreover, where once anthropologists thought that people in a given culture mindlessly conformed to that culture's "rules," we now know that culture always offers choices, and we know that individuals have considerable scope for independent action and for strategic and tactical choice (see e.g. Bourdieu 1977, 1990). If rational-choice models are inadequate, at least they save us from the mystical, mass-action-at-a-distance models that once bedevilled social science.

Microeconomic theory is much better at modeling problems than at predicting what solutions a given society will invoke. (Hence the need for macroeconomic theory.) The Tragedy of the Commons is easily modeled by microeconomics, and is remarkably universal. Successful strategies for preventing it, however, differ considerably from place to place. Each cultural group draws on its own history and on its own preferred methods of coping with social problems. Some groups invoke religious taboos, others use taxation, others use force. In some very abstract sense, all these ways might be seen as equivalent. In real-world terms, an activist or developer or politician might find it very hard to see the equivalence. We cannot predict from strict micro theory what a given society will do.

On the basis of such findings, Michael Taylor, formerly a rational choice theorist, has qualified his stance. Experience with Native Americans and their utter devotion and dedication to their land was a major factor (Taylor 2006). He found that the Native people often would not take, and sometimes genuinely did not want, any financial compensation for land lost. It was irrelevant; the land was gone. They kept fighting for land rights, but did not see anything as a replacement. Taylor noted that this “irrational” stubbornness worked well enough to allow them to hang onto a few fragments of land; individual rational choosers gave up and accommodated, and wound up worse off in the long run.

The failure of rational self-interest to predict all of human behavior explains the curious fact that early-day writers from Aristotle to Gibbon seem so much more accurate and in tune with their subject than modern ones. The early ones recognized the importance of emotion, and explained everything from economics to the fall of Rome with that in mind. The moderns tend to seek rational explanations for grand human actions, and they fail.

Donald Calne has advocated rationality as a norm rather than a fact, in his book Within Reason (1999). Like Calne, I see rationality as a desirable ideal to strive for—rather than an abstract mathematical concept, a vague something-we-are-born-with, or a coldly calculating selfishness. Unlike Calne, I do not see it as separable from emotion. I also have, perhaps, a livelier sense of the difficulties of avoiding irrationality, especially the irrationalities of hatred and prejudice. We need more than rationality; we need some emotional management. But we do need the rationality. Unfortunately, people usually do the most rational thing only after exhausting all available alternatives.

George Marcus (2002), pointing out much of the above, counseled politicians to go with emotion, and rather downplayed rationality. This I believe is a mistake. It would be enormously better if we were more rational—not in the sense of individual selfishness, but in the sense of thinking about what we were doing. The best hope is probably Alan Gibbard’s: “Wise choices, apt feelings” (1992). We can control our emotions enough to make them work for us; we can damp down hate and viciousness, work on love and care, and otherwise improve ourselves, as counseled by philosophers from Aristotle and Confucius onward. We can then use emotion in its proper role, motivating and shading cognition. The result would be more reasonable choices—not for individual self-interest, but for the common good.

We can now harness the enormous power of computers to get far, far closer to adequate information than we could in earlier times. This not only will make decision-making easier; it will allow us to go more safely with hunches and intuition and insight. The computers “have our backs.” They have enough stored data to let us check our hunches at a few key-clicks.

Southwestern Native Americans tell stories of the creation of the world by Wolf, the elder brother who did everything right, and Coyote, the younger brother who did everything foolish. Often, they end a story with the line: "We should have followed Wolf, but we followed Coyote." People are not very stupid, not usually very bad, not very selfish—but the stupidity, evil, and selfishness interact, and in a multiplicative, not additive, way.

Summing up all that has gone before, we find that people act above all for social reasons: acceptance, love, care, position and status, regard and respect. They act emotionally, for emotionally-given ends.

Even when they do work for money, they often work counterproductively. They often act foolishly because of simplifying assumptions or because of prejudice and hate. Competition should theoretically cause the most rational to succeed, but this fails if competition is imperfect or if all the competitors are equally wrong-headed.

On the other hand, they can be highly rational in pursuit of these goals, and structuring institutions by assuming rational self-interest usually works up to a point. People want money for social and status reasons more than for “material wealth,” but they do want money, and thus raising or lowering pay scales does matter, even when people are religious enough to choose God over Mammon. The point is not to deny rationality but to see it as only one way to move toward goals. The problem with rational choice theory has been the insidious and thoroughly wrong assumption that people work simply for material wealth or for ill-defined “power.”


VI: Culture
"Nor can it be but a touch of arrogant ignorance, to hold this or that nation Barbarous, or these or those times grosse, considering how this manifold creature Man, wheresoever hee stand in the world, hath alwayes some disposition of worth."  
Samuel Danyel, poet, 1603; quoted in Penguin Book of Renaissance Verse, ed. David Norbrook and H. R. Woudhuysen, 1993, p. xxii.

Culture from Interaction

From all the above, we can construct culture. It is rooted in biology, developed from individuals, and constructed by interactions.

Today we are fairly used to respecting other cultures and finding every culture worthy of respectful attention as a creation of the human spirit. This valuing of diversity, as opposed to judging every culture according to how similar it is to our own, is a very new thing. In spite of foreshadowings like Danyel’s beautiful quote above, the idea of valuing diversity was basically invented by one man, Johann Gottfried von Herder, a German theologian of the late 18th century (see Herder 2002). It propagated through anthropology. Herder influenced the German Enlightenment thinkers who inspired early ethnologists like Adolf Bastian and Franz Boas. The normal human way, unfortunately, is still to consider “different” as a synonym for “inferior.” This not only prevents mutual respect; it prevents the biased ones from learning valuable knowledge.

