Muhlenbein, M. (ed.) 2015. Basics in Human Evolution. Elsevier (in press). Chapter 27: Agriculturalism




The Columbian Exchange

Prior to 1492, for the most part people grew crops derived from the agricultural origin center closest to them, typically with the addition of a small number of locally derived species. After 1492, there was a massive exchange of organisms – pathogens, crops, domestic animals, weeds, and “varmints” such as rats – between continents, which Crosby (1973) famously dubbed the Columbian Exchange. The European colonial powers consciously introduced crops all over the globe, with the result that many species found favorable habitat overseas. The classic example was the introduction of the potato from Peru to Ireland, where it was much better adapted than the Near Eastern grains. One net result was a global increase in agricultural productivity, because farmers now had crops that were better matched to their environments.

Crosby (1986) proposed that European crops, weeds, grazing animals, and pathogens all worked together mutualistically to replace native ecosystems in the “neo-Europes” – overseas regions with climates similar to that of Europe. His thesis is overstated, but not entirely without merit (Blumler, 1993a); it is more applicable, however, to the “neo-Fertile Crescents” than to the neo-Europes. Construction of the Fertile Crescent wild cereal/annual grassland ecological niche overseas after 1492 caused rapid and massive impacts on places with winter-rain, summer-dry climates such as California. There, a significant portion of the native vegetation was lost to agriculture, and even the remaining wildlands suffered, as the crops to some extent (radish is an example [Panetsos and Baker, 1967]), and the associated weeds much more so, were able to become invasive species and dominate over huge areas (Blumler, 1995; 2011).

Ecological imperialism in a different sense is illustrated by the mint and parsley families. Both are diverse in the Mediterranean and adjacent Near East, and many members of both became culinary herbs (Table 3). Both families are diverse in other parts of the world, too, yet few of these are readily available in stores. For instance, Mexico and California have hundreds of species of sage (Salvia), but the commonly used species is native to the Mediterranean region, where the genus is less diverse. In effect, these seasoning herbs have come along for the ride, hitchhiking on the spectacular diffusion of the Near Eastern agro-pastoralist system.

Another significant outcome of the Columbian Exchange was the diffusion of the tropical plantation system and slavery, though not its origin: sugar plantations worked by slaves were widespread across the Mediterranean well before 1492 (Lewis, 1990). The colonial powers very quickly discovered that the key to plantation success was to grow the crop overseas from the region where it had originated. By moving the crop to another continent or island, they enabled it to escape its native insect pests and pathogens. Most plantation crops, such as sugar, cotton, indigo, and tobacco, were not nutritious; plantation sugar, in particular, may mark the inception of the modern problem of overproduction of cheap “junk food” that we tend to overeat, to the detriment of our health (see Low et al. chapter).

‘Scientific Breeding’ and the Industrial Revolution

The rise of scientific breeding in the nineteenth century further extended the Columbian Exchange as agronomists searched the globe for crop varieties that would grow in their home environments. Subsequently, yields skyrocketed as fossil fuels were applied to agriculture, first through machinery such as tractors and then through other inputs such as artificial fertilizer, herbicides, pesticides, and pumped irrigation water (Blumler, 2008; 2009). Smil (2006) argued cogently that the Haber-Bosch process (using methane to manufacture artificial fertilizer from nitrogen gas) was the most important invention of the twentieth century because of its dramatic effect on agricultural yields. These practices constituted a ‘Green Revolution’, though the term was only coined when the practices were extended to poor countries after World War II. Until then, agriculture had relied entirely upon renewable resources; since fossil fuels are not renewable, it is uncertain whether current yields are sustainable in the long run. Much hope rests with further plant breeding, but so far the much-debated GMOs feature little besides the insertion of herbicide-resistance genes. In short, our crop plants remain surprisingly similar to their wild progenitors physiologically, though not morphologically (Blumler, 1994; 2008; cf. Bamakhramah et al., 1984; Amir and Sinclair, 1994).

Crop Evolution

There are several excellent, illuminating treatments of aspects of crop domestication (e.g., Harlan, 1975; Harlan et al., 1973; Hancock, 1992; 2005; Doebley et al., 2006; Purugganan and Fuller, 2009). These are incomplete, however, and since archaeologists typically do not know a great deal about biology, there has been little critical feedback within the agricultural origins literature (Blumler, 1991b; Blumler and Byrne, 1991). Crop genetics and evolution are complicated, and the agricultural origins literature sometimes oversimplifies in the interest of searching for generalities. For instance, there has been much discussion of the rate at which domesticated traits such as indehiscence would have evolved, with some arguing for rapid (Zohary, 1969; Iltis, 1983; Hillman and Davies, 1990; Paterson, 2002) and others for protracted evolution (Harlan, 1975; Purugganan and Fuller, 2011). The reality is that either is possible, depending on circumstances, species, and trait (Blumler and Byrne, 1991). On the other hand, the agronomic literature related to crop breeding is enormous, with an emphasis on the pragmatic; informed historical perspective therefore tends to be scattered and buried within that vast literature. Here, I describe some generalities and emphasize those aspects of crop evolution that relate to human diet and the evolutionary mismatch question.



Crop Choice

Diamond (1997) argued that agriculture began wherever there were domesticable wild plants and/or animals. This is clearly untenable: for instance, the wild progenitors of many early African domesticates grow both in the Sahel and south of the equator, but domestication only occurred in the Sahel. Wild onions grow all over the Northern Hemisphere, but only species located in or near centers of origin were domesticated. One could multiply such examples indefinitely. Nonetheless, it probably is true that only certain species were domesticable initially, while others could be added to the system as it developed. Zohary and Spiegel-Roy (1975) noted that the earliest domesticated woody species in the Near East – the vine, the olive, the date, and the fig – have in common that they can be propagated vegetatively, without employing complicated techniques such as grafting. Woody species are outcrossers and, as such, cannot be expected to breed true; vegetative propagation allowed humans to maintain the variants that they wanted. Tree crops grown from seed, such as the almond, were domesticated later.

Phylogenetic constraint clearly played a role. Many of our domesticates came from certain characteristic families: grasses for cereals; legumes; mints and parsleys for seasoning (and mints for medicinal teas); mustard and spinach families for vegetables; the poisonous nightshade family for vegetables, including staples such as potatoes and eggplant; the squash family for cucumber, melons, and gourds as well as the squashes themselves; and the rose family for most of our temperate fruits (apple, pear, peach, apricot, plum, cherry, quince, strawberry, raspberry, blackberry, including more than one species in several cases, plus additional species from east Asia [loquat, etc.]). The aforementioned almond also is a rose family member.

Vavilov (1917) suggested that some crops originated as weeds: as a crop was taken into a new, and not so favorable, environment, the weed might be better adapted to that environment and overwhelm the crop. Then the farmers would begin to eat the weed. Vavilov pointed out that rye is cold-tolerant and frequently a weed in wheat fields, and suggested that it became a crop as wheat moved north. He made a similar argument regarding oats (Vavilov, 1926). Vavilov called this secondary domestication. It is likely that many crops arose in this manner.



Crop Mimics

Some secondary domesticates began as crop mimics (Barrett, 1983; Hancock, 2005). These are weeds that hide from the humans attempting to eliminate them by resembling the crop. The wild progenitor of the crop has the potential to be a particularly good mimic, and may pick up useful genes from the crop through hybridization. In the case of mimics unrelated to the crop, their seeds often are larger than those of their wild ancestors, enabling the weed to mimic the large seeds of the domesticate with which it grows. Since traditional seed-cleaning entailed running the harvest through sieves of different-sized meshes, the mimic would stay with the crop while the smaller-seeded wild form would be eliminated (Hillman, 1984). Also, some mimics have become indehiscent, and so are harvested unintentionally with the crop and then sown again with it, also unintentionally. In this case, the mimic is entirely dependent on humans for its survival, though not yet used as food. The ultimate stage is when people begin intentionally planting the mimic, usually under conditions that it tolerates better than the crop. Sand oat (Avena strigosa) is an example. Until recently, it was cultivated on sandy soils on which other grains yielded poorly; it existed not only as a crop, but also as a crop mimic in common oat (A. sativa), and as a wild plant. The crop and the crop mimic are indistinguishable, and both are becoming rare today, close to if not entirely extinct except in seed banks, as agricultural systems continue to change. The spread of mechanical seed-cleaning technology is eliminating many seed mimics, so much so that some now are classified as endangered species.



Characteristic Evolutionary Changes in the Crops

Unconscious (Automatic) vs. Conscious Evolution

Some traits would have developed as a result of repeated cultivation, harvest, and re-planting, whether those traits were selected consciously or not (Table 4). Other characteristic changes probably were consciously selected by farmers (Blumler and Byrne, 1991; Blumler, 1994). The trends listed in Table 4 represent generalizations only; there are several exceptions to many of them. The literature on this topic tends to over-emphasize a few tendencies. For instance, it has long been asserted that under domestication perennials evolve the annual habit. While this is certainly the case for a few species, such as cotton, most if not all of the cereal progenitors and most of the legumes were annuals prior to cultivation and domestication. In part, this reflects the fact that annual-dominated vegetation is much easier to manipulate with primitive tools than perennial-dominated vegetation, and so cultivation is more likely to arise there than where there is a dense, perennial sod (Blumler, 1992b; 1998a). Similarly, Harlan et al. (1973) and Purugganan and Fuller (2009) asserted that cultivation produced natural selection for increased seedling vigor and, therefore, larger seeds. This is unlikely: if anything, wild progenitor seedlings appear to be more vigorous than their domesticated descendants (Chapin et al., 1989; Blumler, 1992b; 1994; 1998a). Seed size did increase under cultivation in many species, but the initial increase probably was phenotypic, reflecting the comparatively good growing conditions in the cultivated field. Subsequently, farmers may have consciously selected for larger seeds, as is generally true of the plant part harvested for food.


Much has been made of the breeding of improved varieties as a part of the Green Revolution, but in fact, the changes have been minor. Primarily they have involved adjusting the crop to the exigencies of mechanical harvest and abundant supplies of nutrients and water. The former require uniform height and maturity, while the latter can cause traditional varieties to lodge, not because they grow poorly, but because they grow too well (Blumler, 2008).

Loss of the Competitive, Protective, and Dispersal Functions

Wild plants must compete with their neighbors, protect their seeds from predators, and disperse the seeds to suitable germination sites. In domesticated crops, these functions are largely taken over by the farmer. Wheat is an example. Fig. 3 shows a diaspore (seed dispersal unit) of wild emmer wheat. Typically, it tightly encloses (and protects) two grains, which are elongated to fit into the enclosing glumes. Zohary (1965) described the diaspore as arrow-shaped. Recurved barbs along the awns protect it from birds and also ensure that it can only move in the direction the arrow is pointing. The heavy ‘arrowhead’ ensures that it tends to fall near the mother plant, after which it creeps over the ground, thanks in part to the hygroscopic awns, which spread apart and draw close again with changes in humidity. If it encounters a crack in the soil, or a spot adjacent to a rock, it burrows in and plants itself (see Blumler, 1991b; 1999).

Under cultivation, this structure was no longer needed. Selection favored (or at least, no longer disfavored) mutations that caused the diaspore to open up, so that the seed could become plumper, and even ‘naked’ (not enclosed within hulls) (Blumler, 1998b). This made for easier processing of the harvested grain, and increased harvest index, since photosynthate that had gone into creating the diaspore could now go into seed-filling. But it made the crop dependent on humans to protect the ripening grain from birds and other granivores.

It is the enormous effort required to protect cultivated plants from competition (by plowing and weeding) and predation, and to disperse (that is, plant) their seeds, that makes farming hard work compared to hunting and gathering. One could argue that, in a sense, the domesticated plants have enslaved humans!



Speciation

Most crop evolution has occurred at the intraspecific level; for example, emmer and durum wheat, though morphologically very different from wild emmer, are still fully interfertile with it and with each other (Zohary, 1969; Harlan, 1975). But there have been a few cases of speciation under domestication. The most important example is bread wheat, an allopolyploid derivative of emmer and a closely related goatgrass, Aegilops squarrosa, that apparently arose more than once in cultivated fields (Dvorak et al., 1998; Giles and Brown, 2006; Gu et al., 2006). A more recent example is triticale, another allopolyploid, in this case of bread wheat and rye, intentionally created by crop geneticists. Genetic interactions among polyploid crops and their wild or weedy relatives can be very complex, and allow for rapid evolution (Zohary, 1965; Harlan, 1975; Paterson, 2002; Hancock, 2005).



Co-Evolution

Crop domestication is a co-evolutionary process. In evolutionary biology, co-evolution occurs when a genetic change in species A leads to a genetic change in species B, which in turn causes further genetic change in species A. Domesticated plants and animals clearly underwent significant genetic changes as a result of their adoption into a farming system, but it is less certain that humans have always responded with genetic changes. As Zuk (2013) pointed out, our co-evolutionary response can be 1) genetic; 2) microbial; and/or 3) cultural. Perhaps the best-studied example is the digestion of milk, which in most mammals ceases after early childhood. Consequently, the enzyme lactase, which digests lactose, is no longer active in most adult mammals. Some pastoralists have evolved lactose tolerance, that is, lactase remains active throughout their lives (see M. Little’s chapter). But as Zuk noted, there also are groups of people who appear to have intestinal flora that digest lactose; this would be an example of a microbial co-evolutionary response. Finally, in many societies dairy is consumed primarily as cheese or yogurt, which contain little lactose. This reflects a cultural response, since it depends on human behavior: humans intentionally create the conditions that enable the appropriate microbes to interact with the milk and break down the lactose. Another example of cultural co-evolutionary interaction was outlined above: the loss of competitive, dispersal, and protective functions in the crop in conjunction with the taking over of those functions by humans.



Due to Human Preferences

Since agriculture began, humans have consciously selected for increased yield and for palatability. Palatability tends to mean more sugar, more calories, and fewer toxins (which often have a bitter taste), but also less fiber and fewer anti-oxidants (Table 5). There are many exceptions to these tendencies, however. For instance, anthocyanin, an anti-oxidant, decreased in some crops, such as several cereals, but increased in others, such as purple cabbage. Bitterness was selected against in most plants that defended themselves with cyanogenic glycosides, such as almond. But bitter almond is still used as a flavoring, and many seasoning herbs, such as oregano and hops, remain bitter.

Increasing caloric content and yield often entailed increasing starch in the plant part harvested and eaten. Harlan et al. (1973) and others have stated that the protein content of grains decreased under cultivation/domestication, but it appears that this is true only on a percentage basis. In tetraploid wheat, at least, the total amount of protein in the domesticated grain is often about the same as, or even greater than, in its wild ancestor, but seed size has increased due to the opening up of the investing structures, as described above; the extra volume in the grain floret typically fills with starch, after the protein has first been transported into the developing grain. And while the protein content of cereals is lower on a percentage basis than in their wild ancestors, today they typically are supplemented with legumes, which are high in protein. Legumes were much less available to Paleolithic peoples, because in the wild they usually are toxic, except sometimes when still green. Because of their association with nitrogen-fixing bacteria, legumes usually have more nitrogen than they need; the excess typically is used to produce alkaloids and other poisons such as prussic acid. Although legumes eventually became important in all agricultural origin centers, the timing of their domestication was highly variable. In the Fertile Crescent, domesticated legumes appeared about as early as cereals. In China, it appears that no legume was domesticated for many millennia after the appearance of cereal agriculture. This is somewhat paradoxical, in that the Near East also had dairy as a source of protein, while China did not.

Palatability for humans reflects dietary preferences or cravings that go back to hunter-gatherer times or before, and which presumably would have been adaptive then, as evolutionary psychologists have pointed out (Tooby and Cosmides, 1990). This is also the basis of the paleo diet perspective (Eaton and Konner, 1985; Lindeberg, 2010). We crave salt, fat, and sugar. Except along seacoasts, it was difficult to obtain enough salt, while fat was generally the limiting item in the hunter-gatherer diet: most game animals are lean, and there are relatively few sources of edible oilseeds in nature. Sugar craving encouraged humans to eat wild fruits. Occasionally a hunter-gatherer band might encounter a beehive and feast on honey for a few days, but then they would be back to square one, with little sugar other than that in fruits. With agriculture, we gained the ability to satiate ourselves (and then some!). Sugar, fat, and salt are all now cheap and abundant, to the detriment of our health.

Humans need fiber and anti-oxidants, too, and these often are insufficient in the modern diet. In hunter-gatherer times, however, there usually would have been more than enough of both, so, if anything, selection would have favored mutations that encouraged avoidance of these substances.

Evaluating the Paleo Diet Hypothesis

Proponents of the paleo diet differ somewhat; here, I will emphasize the perspective of Lindeberg (2010), whose book features a comprehensive review of the nutrition literature. In it, he demonstrates that the consensus beliefs of nutritionists frequently have little or no empirical support. For instance, there is no evidence that insoluble fiber, the kind present in cereals with the exception of oats, is of any benefit. Lindeberg stressed that from an evolutionary perspective it is not normally in a plant’s interest to allow its seeds to be eaten, and so plants tend to defend their seeds with toxic compounds. In the case of cereals, these compounds are concentrated in the bran, which raises the question (though Lindeberg does not discuss it) of whether whole grains are perhaps not really as nutritious as refined cereals. In this regard, Jasny’s (1940) classic study of the reasons people choose one grain over another assumed that all humans would prefer white flour when they could get it, an assumption entirely reasonable at the time, when only monks given to asceticism ate whole grains in preference to refined breads, white rice, and the like.

Lindeberg (2010) stated categorically that grains and beans were not eaten in Paleolithic times. This appears extreme: there is some evidence for consumption of grains then (Henry et al., 2010), and both grains and bean-family members were or are eaten by modern hunter-gatherers. Wild grass seeds were harvested in significant quantities by some Australian Aborigines (Head et al., 2012), and also by native peoples in the Great Basin and California. California Indians also feasted upon several species of native clover during the spring (clovers are legumes). The native clovers are less well-defended than the introduced, Old World clovers, but even so, at times individuals would eat so much that they suffered severely from bloat (Chesnut, 1897). California Indians also ate native vetches (broad bean, pea, and lentil are domesticated vetches of Near Eastern origin), but they ate them while the pods were still immature (much as Americans and east Asians tend to eat peas). In the eastern US, one of the favored wild foods was the groundnut (Apios americana), another member of the bean family; British settlers learned about this resource from the native peoples, and often depended on it during the first year of settlement out on the frontier, until their crops began to produce. Of course, these examples are from the ethnographic present, and Paleolithic peoples probably did not have the same diet; what they did eat remains somewhat speculative.

Lindeberg also emphasized that Paleolithic peoples ate a diverse diet, which would have served to mitigate the impacts of plant poisons. Given that toxic compounds tend to differ from species to species, a mixed feeding strategy can ensure that only small quantities of any one toxin are ingested. This is an entirely reasonable argument; however, the literature on diet breadth is conflicting. Some believe that diet simplified under agriculture, with its emphasis on a few cereal grains or tubers; others argue that hunter-gatherers made relatively little use of seasonings, spices, and the like. Certainly, recent times have seen a global narrowing of the food base, with a few staples such as maize, rice, and wheat becoming more and more predominant (Harris, 2009). This has especially affected the urban poor. In contrast, the rich and even the middle class today typically have a much greater choice of vegetables and fruits (though not necessarily of grains and tubers) than ever before, and they have access to fresh vegetables and fruits year-round, which was not the case even 50 years ago. Harris (2009) found that hunter-gatherers are reported to consume from 50 to over 100 plant species; but the Japanese have a folk saying that one should eat at least 25 different foods each day. Thus, it is by no means clear that diet breadth has narrowed for everyone.

Because of the supposed lack of grains in the Paleolithic diet, Lindeberg (2010) argued that starch is a food to which we are poorly adapted. On the other hand, humans have evolved increased copy numbers of the amylase (starch digestion) gene in recent times (Perry et al., 2007). Lindeberg asserted that the amount of evolution that has occurred is insufficient to enable us to deal with the very large increase in the amount of starch we consume, but this rests upon his belief that we did not consume grains at all prior to the Neolithic. In addition, in east and southeast Asia people have selected mutant forms of several cereals that are high in amylopectin, the best-known example being ‘sticky rice’, also called ‘sweet rice’ (Sakamoto, 1996; Huang et al., 2012). While nutritional research on the sticky cereals has so far been limited, it is worth noting that traditionally they are fed to invalids. In short, agriculture was associated with a shift to grains or tubers as staples, which, along with the increased proportion of starch in the domesticated grains, ensures that we are eating more starch today, though how much more is unclear.

In any case, refined cereals are useful in another circumstance. Early childhood deaths account for most mortality in poor countries, and their major cause, directly or indirectly, is dysentery. While dysentery probably has always been with us, it is almost certain that its prevalence increased with the rise in population density associated with farming. The traditional food of choice for individuals suffering from dysentery is starch, the purer the better: fiber only exacerbates the symptoms of someone with diarrhea. (A further complication is that some fibers, such as that in celery, probably are not harmful under these conditions; grain fibers, however, most definitely are.)

Anti-nutritional compounds are present in many other domesticates besides the seed crops, as Lindeberg (2010) pointed out. For instance, while levels of cyanogenic glycosides generally have been reduced sufficiently that we can ingest domesticated crops without being fatally poisoned, he reviewed evidence that cyanogenic glycosides are goitrogenic even when ingested in small quantities. This is likely to become particularly problematic when a crop is consumed as a staple. For instance, cabbage is goitrogenic, and the traditional method of boiling it for a long period before consumption may have been desirable when it was eaten in large quantities, as it often was.

While a mixed Paleolithic diet would have reduced the deleterious effects of plant toxins, humans also developed processing techniques that reduced the levels of these compounds, ranging from cooking to fermentation to the ingestion of clays that adsorb toxins (Johns, 1990; Johns and Kubo, 1988). Fermentation is a particularly widespread and interesting case. Yogurt and cheese have already been mentioned. An example involving legumes is soy, which in its native east Asia is seldom cooked and consumed in the whole, ripe state. Instead, for the most part it is consumed unripe (edamame, bean sprouts), or after fermentation or similar processing (tofu, soy sauce, miso, black bean sauce, tempeh, natto).



