Dietary Reconstruction and Nutritional Assessment of Past Peoples: The Bioanthropological Record

Clark Spencer Larsen. Cambridge World History of Food. Editor: Kenneth F Kiple & Kriemhild Conee Ornelas. Volume 1. Cambridge, UK: Cambridge University Press, 2000.

The topics of diet (the foods that are eaten) and nutrition (the way that these foods are used by the body) are central to an understanding of the evolutionary journey of humankind. Virtually every major anatomical change wrought by that journey can be related in one way or another to how foods are acquired and processed by the human body. Indeed, the very fact that our humanlike ancestors had acquired a bipedal manner of walking by some five to eight million years ago is almost certainly related to how they acquired food. Although the role of diet and nutrition in human evolution has generally come under the purview of anthropology, the subject has also been of great interest to scholars in many other disciplines, including the medical and biological sciences, chemistry, economics, history, sociology, psychology, primatology, paleontology, and numerous applied fields (e.g., public health, food technology, government services). Consideration of nutriture, defined as “the state resulting from the balance between supply of nutrition on the one hand and the expenditure of the organism on the other,” can be traced back to the writings of Hippocrates and Celsus and represents an important heritage of earlier human cultures in both the Old and New Worlds (McLaren 1976, quoted in Himes 1987:86).

The purpose of this chapter is threefold: (1) to present a brief overview of the basic characteristics of human nutriture and the history of human diet; (2) to examine specific means for reconstructing diet from analysis of human skeletal remains; and (3) to review how the quality of nutrition has been assessed in past populations using evidence garnered by many researchers from paleopathological and skeletal studies and from observations of living human beings. (See also Wing and Brown 1979; Huss-Ashmore, Goodman, and Armelagos 1982; Goodman, Martin, et al. 1984; Martin, Goodman, and Armelagos 1985; Ortner and Putschar 1985; Larsen 1987; Cohen 1989; Stuart-Macadam 1989. For a review of experimental evidence and its implications for humans, see Stewart 1975.) Important developments regarding nutrition in living humans are presented in a number of monographic series, including World Review of Nutrition and Dietetics, Annual Review of Nutrition, Nutrition Reviews, and Current Topics in Nutrition and Disease.

Human Nutriture and Dietary History

Although as living organisms we consume foods, we must keep in mind that it is the nutrients contained in these foods that are necessary for all of our bodily functions, including support of normal growth and maturation, repair and replacement of body tissues, and the conduct of physical activities (Malina 1987). Estimates indicate that modern humans require some 40 to 50 nutrients for proper health and well-being (Mann 1981). These nutrients are typically divided into six classes—carbohydrates, proteins, fats, vitamins, minerals, and water. Carbohydrates and fats are the primary energy sources available to the body. Fats are a highly concentrated source of energy and are stored in the body to a far greater degree than carbohydrates. Fat stores typically amount to about 15 to 30 percent of body weight (Malina 1987), whereas carbohydrates represent only about 0.4 to 0.5 percent of body weight in childhood and young adulthood (Fomon et al. 1982). Proteins, too, act as energy sources, but their two primary functions are tissue growth, maintenance, and repair, and a variety of physiological roles.

The building blocks of proteins are chains of nitrogen-containing organic compounds called amino acids. Most of the 22 amino acids can be produced by the body at a rate that is necessary for the synthesis of proteins, and for this reason they are called nonessential amino acids. Eight, however, are not produced in sufficient amounts and therefore must be supplied to the body as food (essential amino acids). Moreover, all essential amino acids have to be present simultaneously in correct amounts and consumed in the same meal in order to be absorbed properly. As noted by W. A. Stini (1971: 1021), “a reliance on any one or combination of foods which lacks even one of the essential amino acids will preclude the utilization of the rest, resulting in continued and increased excretion of nitrogen without compensatory intake.”

Vitamins, a group of 16 compounds, are required only in very small amounts. Save for vitamin D, none of these substances can be synthesized by the body, and if even one is missing or is poorly absorbed, a deficiency disease will arise. Vitamins are mostly regulatory in their overall function. Minerals are inorganic elements that occur in the human body either in large amounts (e.g., calcium and phosphorus) or in trace amounts (called trace elements: e.g., strontium, zinc, fluorine). They serve two important types of functions, namely structural, as in bone and blood production, and regulatory, such as proper balance of electrolytes and fluids. Water, perhaps the most important of the nutrients, functions as a major structural component of the body, in temperature regulation, and as a transport medium, including the elimination of body wastes. About two-thirds of body weight in humans is water (Malina 1987).

Throughout the course of evolution, humans, by adaptation, have acquired a tremendous range of means for securing foods and maintaining proper nutriture. These adaptations can be ordered into a temporal sequence of three phases in the evolution of the human diet (following Gordon 1987). The first phase involved the shift from a diet composed primarily of unprocessed plant foods to one that incorporated deliberate food-processing techniques and included significant amounts of meat. These changes likely occurred between the late Miocene epoch and early Pleistocene (or by about 1.5 million years ago). Archaeological and taphonomic evidence indicates that the meat component of diet was likely acquired through a strategy involving scavenging rather than deliberate hunting. Pat Shipman (1986a, 1986b) has examined patterns of cut marks produced by stone tools and tooth marks produced by carnivores in a sample of faunal remains recovered from Olduvai Bed I dating from 2.0 to 1.7 million years ago. In instances where cut marks and tooth marks overlapped on a single bone, her analysis revealed that carnivore tooth marks were followed in sequence by hominid-produced cut marks. This pattern of bone modification indicates that hominids scavenged carcasses of animals killed by other predators.

The second phase in the history of human diet began in the Middle Pleistocene epoch, perhaps as long ago as 700,000 years before the present. This phase is characterized by deliberate hunting of animal food sources. In East Africa, at the site of Olorgesailie (700,000 to 400,000 years ago), an extinct species of giant gelada baboon (Theropithecus oswaldi) was hunted. Analysis of the remains of these animals by Shipman and co-workers (1981) indicates that although the deaths of many were not due to human activity, young individuals were selectively killed and butchered by hominids for consumption.

Some of the most frequently cited evidence for early hominid food acquisition is from the Torralba and Ambrona sites, located in the province of Soria, Spain (Howell 1966; Freeman 1981). Based on an abundance of remains of large mammals such as elephants, along with stone artifacts, fire, and other evidence of human activity, F. Clark Howell and Leslie G. Freeman concluded that the bone accumulations resulted from “deliberate game drives and the killing of large herbivores by Acheulian hunting peoples” (1982: 13). Richard G. Klein (1987, 1989), however, subsequently argued on the basis of his more detailed observations of animal remains from these sites that despite a human presence as evidenced by stone tools, it is not possible to distinguish between human and carnivore activity in explaining the extensive bone accumulations. First, the relatively greater frequency of axial skeletal elements (e.g., crania, pelves, vertebrae) could be the result of the removal of meatier portions of animal carcasses by either humans or the large carnivores who frequented the site. Second, the overabundance of older elephants could represent human hunting, but it also could represent carnivore activity or natural mortality. Thus, although hominids in Spain were quite likely acquiring protein from animal sources, the evidence based on these Paleolithic sites is equivocal. We know that early hominids acquired meat through hunting activity, but their degree of success in this regard is still unclear.

By later Pleistocene times (20,000 to 11,000 years ago), evidence for specialized hunting strategies clearly indicates that human populations had developed means by which larger species of animals were successfully hunted. For example, at the Upper Paleolithic site of Solutré, France, Howell (1970) noted that the remains of some 100,000 horses were found at the base of the cliff, and at Predmosti, Czechoslovakia, the remains of about 1,000 mammoths were found. Presumably, the deaths of these animals resulted from purposeful game drives undertaken by local communities of hominids. Virtually all faunal assemblages studied by archaeologists show that large, gregarious herbivores, such as the woolly mammoth, reindeer, bison, and horse, were emphasized, particularly in the middle latitudes of Eurasia (Klein 1989). But some of the best evidence for advances in resource exploitation by humans is from the southern tip of Africa. In this region, Late Stone Age peoples fished extensively, and they hunted dangerous animals like wild pigs and buffalo with considerable success (Klein 1989).

Because of the relatively poor preservation of plant remains as compared to animal remains in Pleistocene sites, our knowledge of the role of plant foods in human Paleolithic nutriture is virtually nonexistent. There is, however, limited evidence from a number of localities. For example, at the Homo erectus site of Zhoukoudian in the People’s Republic of China (430,000 to 230,000 years before the present), hackberry seeds may have been roasted and consumed. Similarly, in Late Stone Age sites in South Africa, abundant evidence exists for the gathering of plant staples by early modern Homo sapiens. Based on what is known about meat and plant consumption by living hunter-gatherers, it is likely that plant foods contributed substantially to the diets of earlier, premodern hominids (Gordon 1987). Today, with the exception of Eskimos, all-meat diets are extremely rare in human populations (Speth 1990), and this almost certainly was the case in antiquity.

The third and final phase in the history of human diet began at the interface between the Pleistocene and Holocene epochs about 10,000 years ago. This period of time is marked by the beginning of essentially modern patterns of climate, vegetation, and fauna. The disappearance of megafauna, such as the mastodon and the mammoth, in many parts of the world at about this time may have been an impetus for human populations to develop new means of food acquisition in order to meet protein and fat requirements. The most important change, however, was the shift from diets based exclusively on food collection to those based to varying degrees on food production.

The transition involved the acquisition by human populations of an intimate knowledge of the life cycles of plants and animals so as to control such cycles and thereby ensure the availability of these nutriments for dietary purposes. By about 7,000 years ago, a transition to a plant-based economy was well established in some areas of the Middle East. From this region, agriculture spread into Europe, and other independent centers of plant domestication appeared in Africa, Asia, and the New World, all within the next several millennia.

It has been both the popular and scientific consensus that the shift from lifeways based exclusively on hunting and gathering to those that incorporated food production—and especially agriculture—represented a positive change for humankind. However, Mark N. Cohen (1989) has remarked that in game-rich environments, regardless of the strategy employed, hunters may obtain between 10,000 and 15,000 kilocalories per hour. Subsistence cultivators, in contrast, average between 3,000 and 5,000 kilocalories per hour.

More important, anthropologists have come to recognize in recent years that the shift from hunting and gathering to agriculture was characterized by a shift from generally high-quality foods to low-quality foods. For example, animal sources of protein contain all essential amino acids in the correct proportions. They are a primary source of vitamin B12, are high in vitamins A and D, and contain important minerals. Moreover, animal fat is a critical source of essential fatty acids and fat-soluble vitamins (Speth 1990). Thus, relative to plant foods, meat is a highly nutritious food resource. Plant foods used alone generally cannot sustain human life, primarily because of deficiency in essential amino acids (see discussion in Ross 1976). Moreover, in circumstances where plant foods are emphasized, a wide variety of them must be consumed in order to fulfill basic nutritional requirements. Further limiting the nutritional value of many plants is their high fiber content, especially cellulose, which is not digestible by humans.

Periodic food shortages resulting from variation in a number of factors—especially rainfall and temperature, along with the relative prevalence of insects and other pests—have been observed in contemporary human populations depending on subsistence agriculture. Some of the effects of such shortages include weight loss in adults, slowing of growth in children, and an increase in prevalence of malaria and other diseases such as gastroenteritis, and parasitic infection (Bogin 1988).

Archaeological evidence from prehistoric agriculturalists, along with observation of living peasant agriculturalists, indicates that their diets tend to be dominated by a single cereal staple: rice in Asia, wheat in temperate Asia and Europe, millet or sorghum in Africa, and maize in the New World. These foods are oftentimes referred to as superfoods, not because of nutritional value but rather because of the pervasive focus by human populations on one or another of them (McElroy and Townsend 1979).

Rice, a food staple domesticated in Southeast Asia and eventually extending in use from Japan and Korea southward to Indonesia and eastward into parts of India, has formed the basis of numerous complex cultures and civilizations (Bray 1989). Yet it is remarkably deficient in protein, even in its brown or unmilled form. Moreover, the low availability of protein in rice inhibits the activity of vitamin A, even if the vitamin is available through other food sources (Wolf 1980). Vitamin A deficiency can trigger xerophthalmia, one of the principal causes of blindness. White rice—the form preferred by most human populations—results from processing, or the removal of the outer bran coat, and consequently, the removal of thiamine (vitamin B1). This deficiency leads to beriberi, a disease alternately involving inflammation of the nerves, or the heart, or both.

Wheat was domesticated in the Middle East very early in the Holocene and has been widely used since that time. Wheat is deficient in two essential amino acids—lysine and isoleucine. Most human populations dependent on wheat, however, have dairy animals that provide products (e.g., cheese) that make up for these missing amino acids. Yet in some areas of the Middle East and North Africa where wheat is grown, zinc-deficient soils have been implicated in retarding growth in children (Harrison et al. 1988). Moreover, the phytic acid present in wheat bran chemically binds with zinc, thus inhibiting its absorption (Mottram 1979).

Maize (known as corn in the United States) was first domesticated in Mesoamerica. Like the other superfoods, it formed the economic basis for the rise of civilizations and complex societies, and its continued domestication greatly increased its productivity (Galinat 1985). In eastern North America, maize was central in the evolution of a diversity of chiefdoms (Smith 1989), and its importance in the Americas was underscored by Walton C. Galinat, who noted:

[By] the time of Columbus, maize had already become the staff of life in the New World. It was distributed throughout both hemispheres from Argentina and Chile northward to Canada and from sea level to high in the Andes, from swampland to arid conditions and from short to long day lengths. In becoming so widespread, it evolved hundreds of races, each with special adaptations for the environment including special utilities for man. (Galinat 1985: 245)

Like the other superfoods, maize is deficient in a number of important nutrients. Zein—the protein in maize—is deficient in lysine, isoleucine, and tryptophan (FAO 1970), and if maize consumers do not supplement their diets with foods containing these amino acids, such as beans, significant growth retardation is an outcome. Moreover, maize, although not deficient in niacin (vitamin B3), contains it in a chemically bound form that, untreated, will withhold the vitamin from the consumer. Consequently, human populations consuming untreated maize frequently develop pellagra, a deficiency disease characterized by a number of symptoms, including rough and irritated skin, mental symptoms, and diarrhea (Roe 1973). Solomon H. Katz and co-workers (1974, 1975; see also Katz 1987) have shown that many Native American groups treat maize with alkali (e.g., lye, lime, or wood ashes) prior to consumption, thereby liberating niacin. Moreover, the amino acid quality in alkali-treated maize is significantly improved. Most human populations who later acquired maize as a dietary staple did not, however, adopt the alkali-treatment method (see Roe 1973). Maize also contains phytate and sucrose, whose negative impact on human health is considered later in this chapter.

Dietary Reconstruction: Human Remains

Chemistry and Isotopy

Skeletal remains from archaeological sites play a very special role in dietary reconstruction because they provide the only direct evidence of food consumption practices in past societies. In the last decade, several trace elements and stable isotopes have been measured and analyzed in human remains for the reconstruction of diets. Stanley H. Ambrose (1987) has reviewed these approaches, and the following is drawn from his discussion (see also van der Merwe 1982; Klepinger 1984; Sealy 1986; Aufderheide 1989; Keegan 1989; Schoeninger 1989; Sandford 1993).

Some elements have been identified as potentially useful in dietary reconstruction. These include manganese (Mn), strontium (Sr), and barium (Ba), which are concentrated in plant foods, and zinc (Zn) and copper (Cu), which are concentrated in animal foods. Nuts, which are low in vanadium (V), contrast with other plant foods in that they typically contain high amounts of Cu and Zn. Like plants, marine resources (e.g., shellfish) are usually enriched in Sr, and thus the dietary signatures resulting from consumption of plants and marine foods or freshwater shellfish should be similar (Schoeninger and Peebles 1981; Price 1985). In contrast, Ba is deficient in the bones of marine animals, thereby distinguishing these organisms from terrestrial ones in this chemical signature (Burton and Price 1990).

The greatest body of research on elemental composition has been done with Sr. In general, Sr levels decline as one moves up the food chain—from plants to herbivores to primary carnivores—as a result of natural biopurification (a process called fractionation). Simply put, herbivores consume plants that are enriched with Sr contained in soil. Because very little of the Sr that passes through the gut wall in animals is stored in flesh (only about 10 percent), the carnivore consuming the herbivore will have considerably less strontium stored in its skeleton. Humans and other omnivores, therefore, should have Sr concentrations in their skeletal tissues that are intermediate between those of herbivores and carnivores. Thus, based on the amount of Sr measured in human bones, it is possible (with some qualifications) to determine the relative contributions of plant and meat foods to a diet.
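
The logic of placing omnivores between local herbivore and carnivore endpoints can be illustrated with a simple linear interpolation. The sketch below is only a heuristic: the baseline values, the function name, and the assumption of strictly linear mixing are illustrative rather than drawn from the sources cited, and real applications must account for local geology, Sr/Ca normalization, and the diagenetic problems discussed below.

```python
# Illustrative sketch only: estimates the plant-derived share of the dietary
# Sr signal by placing a human bone value between local herbivore and
# carnivore baselines. Assumes simple linear mixing, which real studies
# qualify heavily; all numbers are hypothetical.

def plant_fraction(sr_human_ppm, sr_herbivore_ppm, sr_carnivore_ppm):
    """Return the approximate fraction of the Sr signal attributable to
    plant foods (0 = carnivore-like diet, 1 = herbivore-like diet)."""
    if sr_herbivore_ppm == sr_carnivore_ppm:
        raise ValueError("Baseline values must differ to interpolate")
    fraction = (sr_human_ppm - sr_carnivore_ppm) / (sr_herbivore_ppm - sr_carnivore_ppm)
    return min(max(fraction, 0.0), 1.0)  # clamp to the interval [0, 1]

# Hypothetical baseline values (ppm Sr in bone) for a single locality.
print(plant_fraction(sr_human_ppm=300.0,
                     sr_herbivore_ppm=400.0,
                     sr_carnivore_ppm=150.0))  # -> 0.6
```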

Nonetheless, in addition to the aforementioned problem with shellfish, there are three chief limitations to Sr and other elemental analyses. First, Sr abundance can vary widely from region to region, depending upon the geological context. Therefore, it is critical that the baseline elemental concentrations in local soils—and plants and animals—be known. Second, it must be shown that diagenesis (the process involving alteration of elemental abundance in bone tissue while it is contained in the burial matrix) has not occurred. Some elements appear to resist diagenesis following burial (e.g., Sr, Zn, Pb [lead], Na [sodium]), and other elements show evidence for diagenesis (e.g., Fe [iron], Al [aluminum], K [potassium], Mn, Cu, Ba). Moreover, diagenetic change has been found to vary within even a single bone (e.g., Sillen and Kavanaugh 1982; Bumsted 1985). Margaret J. Schoeninger and co-workers (1989) have evaluated the extent of preservation of histological structures in archaeological bone from the seventeenth-century Georgia coastal Spanish mission Santa Catalina de Guale. This study revealed that bones with the least degree of preservation of structures have the lowest Sr concentrations. Although these low values may result from diet, more likely they result from diagenetic effects following burial in the soil matrix. And finally, pretreatment procedures of archaeological bone samples in the laboratory frequently are ineffective in completely removing the contaminants originating in groundwater, such as calcium carbonate, thus potentially masking important dietary signatures.

Valuable information on specific aspects of dietary composition in past human populations can also be obtained by the analysis of stable isotopes of organic material (collagen) in bone. Isotopes of two elements have proven of value in the analysis of diets: carbon (C) and nitrogen (N). Field and laboratory studies involving controlled feeding experiments have shown that stable isotope ratios of both carbon (13C/12C) and nitrogen (15N/14N) in an animal’s tissues, including bone, reflect similar ratios in the diet. Because the variations in isotopic abundances between dietary resources are quite small, the values in tissue samples are expressed in parts per thousand (‰) relative to established standards and are reported as delta (δ) values.
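
For reference, the delta notation (conventional in this literature, though not spelled out in the text) expresses the sample ratio relative to an agreed standard, typically the PDB carbonate standard for carbon and atmospheric nitrogen for nitrogen:

```latex
% Delta notation for stable isotope ratios (shown for carbon; nitrogen is analogous)
\delta^{13}\mathrm{C} =
  \left( \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{sample}}}
              {\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1 \right)
  \times 1000\ \text{‰}
```

A more positive δ13C value thus indicates a sample relatively enriched in the heavier isotope compared with the standard.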

The δ13C values have been used to identify two major dietary categories. The first category has been used to distinguish consumers of plants with different photosynthetic pathways, including consumers of C4 plants (tropical grasses such as maize) and consumers of C3 plants (most leafy plants). Because these plants differ in their photosynthetic pathways, they also differ in the amount of 13C that they incorporate. Thus, C4 plants and people who consume them have δ13C values that differ on average by about 14 ‰ from other diets utilizing non-C4 plants. Based on these differences, it has been possible to track the introduction and intensification of maize agriculture in eastern North America with some degree of precision. The second category of dietary identification reflected by δ13C values includes primarily marine foods. Marine fish and mammals have more positive δ13C values (by about 6 ‰) compared to terrestrial animals feeding on C3 foods, and less positive values (by about 7 ‰) than terrestrial animals feeding on C4 foods (especially maize) (Schoeninger and DeNiro 1984; Schoeninger, van der Merwe, and Moore 1990).

Nitrogen stable isotope ratios in human bone are used to distinguish between consumers of terrestrial and marine foods. Margaret Schoeninger and Michael J. DeNiro (1984; see also Schoeninger et al. 1990) have indicated that in many geographical regions the δ15N values of marine organisms differ from those of terrestrial organisms by about 10 parts per thousand on average, with consumers of terrestrial foods being less positive than consumers of marine foods. Recent research on stable isotopes of sulphur (i.e., 34S/32S) suggests that they may provide an additional means of distinguishing diets based on marine foods from those based on terrestrial foods because of the relatively greater abundance of 34S in marine organisms (Krouse 1987). A preliminary study of prehistoric populations from coastal Chile has supported this distinction in human remains representative of marine and terrestrial subsistence economies (Kelley, Levesque, and Weidl 1991).

As already indicated, in coastal areas of the New World the contribution of maize to the diets of populations that also consumed marine resources is difficult to assess on the basis of carbon stable isotope values alone, because the isotopic signatures of marine foods and of partial maize diets are similar (Schoeninger et al. 1990). However, by using both carbon and nitrogen isotope ratios, it is possible to distinguish between the relative contributions to diets of marine and terrestrial (e.g., maize) foods (Schoeninger et al. 1990).
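
A rough sketch of how the two isotope systems complement one another is given below: a collagen sample is compared with assumed dietary endmembers in δ13C–δ15N space. The endmember values, the nearest-neighbor rule, and the function names are illustrative assumptions for demonstration only, not published calibrations.

```python
# Illustrative sketch: classify a collagen sample by comparing its delta
# values to assumed dietary endmembers. Endmember values are hypothetical
# placeholders; real studies derive them from local fauna and flora.

ENDMEMBERS = {                      # (delta13C, delta15N), illustrative only
    "C3 terrestrial": (-20.0, 6.0),
    "C4 terrestrial (maize)": (-8.0, 6.0),
    "marine": (-13.0, 16.0),
}

def nearest_diet(d13c, d15n):
    """Return the assumed endmember closest to the sample in isotope space."""
    def distance(endmember):
        ec, en = ENDMEMBERS[endmember]
        return ((d13c - ec) ** 2 + (d15n - en) ** 2) ** 0.5
    return min(ENDMEMBERS, key=distance)

# A sample with relatively positive delta13C but low delta15N looks like
# maize rather than marine food once nitrogen is taken into account.
print(nearest_diet(d13c=-10.0, d15n=7.5))   # -> 'C4 terrestrial (maize)'
print(nearest_diet(d13c=-12.5, d15n=15.0))  # -> 'marine'
```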

Stable isotopes (C and N) have several advantages over trace elements in dietary documentation. For example, because bone collagen is not subject to isotopic exchange, diagenetic effects are not as important a confounding factor as in trace elemental analysis (Ambrose 1987; Grupe, Piepenbrink, and Schoeninger 1989). Perhaps the greatest advantage, however, is that because of the relative ease of removing the mineral component of bone (as well as fats and humic contaminants) and of confirming the presence of collagen through identification of amino acids, the sample purity can be controlled (Ambrose 1987; Stafford, Brendel, and Duhamel 1988). However, collagen abundance declines in the burial matrix, and it is the first substance to degrade in bone decomposition (Grupe et al. 1989). If the amount of surviving collagen falls below about 5 percent of its original value, then the isotopic information is suspect (see also Bada, Schoeninger, and Schimmelmann 1989; Schoeninger et al. 1990). Therefore, human fossil specimens, which typically contain little or no collagen, are generally not conducive to dietary reconstruction.

Teeth and Diet: Tooth Wear

Humankind has developed many means of processing foods before they are eaten. Nevertheless, virtually all foods have to be masticated by use of the teeth to one extent or another before they are passed along for other digestive activities. Because food comes into contact with teeth, the chewing surfaces of teeth wear. Defined as “the loss of calcified tissues of a tooth by erosion, abrasion, attrition, or any combination of these” (Wallace 1974: 385), tooth wear—both microscopic and macroscopic—provides information on diets of past populations. The importance of tooth wear in the reconstruction of diet has been underscored by Phillip L. Walker (1978: 101), who stated, “From an archaeological standpoint, dietary information based on the analysis of dental attrition is of considerable value since it offers an independent check against reconstruction of prehistoric subsistence based on the analysis of floral, faunal and artifactual evidence.”

Recent work with use of scanning electron microscopy (SEM) in the study of microwear on occlusal surfaces of teeth has begun to produce important data on diet in human populations (reviewed in Teaford 1991). Field and laboratory studies have shown that microwear features can change rapidly. Therefore, microwear patterns may give information only on food items consumed shortly before death. These features, nevertheless, have been shown to possess remarkable consistency across human populations and various animal species and have, therefore, provided insight into past culinary habits. For example, hard-object feeders, including Miocene apes (e.g., Sivapithecus) as well as recent humans, consistently develop large pits on the chewing surfaces of teeth. In contrast, consumers of soft foods, such as certain agriculturalists (Bullington 1988; Teaford 1991), develop smaller and fewer pits as well as narrower and more frequently occurring scratches.

Macroscopic wear can also vary widely, depending upon a host of factors (Molnar 1972; Foley and Cruwys 1986; Hillson 1986; Larsen 1987; Benfer and Edwards 1991; Hartnady and Rose 1991; Walker, Dean, and Shapiro 1991). High on the list of factors affecting wear, however, are the types of foods consumed and manner of their preparation. Because most Western populations consume soft, processed foods with virtually all extraneous grit removed, tooth wear occurs very slowly. But non-Western populations consuming traditional foods (that frequently contain grit contaminants introduced via grinding stones) show rapid rates of dental wear (e.g., Hartnady and Rose 1991). Where there are shifts in food types (e.g., from hunting and gathering to agriculture) involving reduction in food hardness or changes in how these foods are processed (e.g., with stone versus wooden grinding implements), most investigators have found a reduction in gross wear (e.g., Anderson 1965, 1967; Walker 1978; Hinton, Smith, and Smith 1980; Smith, Smith, and Hinton 1980; Patterson 1984; Bennike 1985; Inoue, Ito, and Kamegai 1986; Benfer and Edwards 1991; Rose, Marks, and Tieszen 1991).

Consistent with reductions in tooth wear in the shift to softer diets are reductions in craniofacial robusticity, both in Old World settings (e.g., Carlson and Van Gerven 1977, 1979; Armelagos, Carlson, and Van Gerven 1982; y’Edynak and Fleisch 1983; Smith, Bar-Yosef, and Sillen 1984; Wu and Zhang 1985; Inoue et al. 1986; y’Edynak 1989) and in New World settings (e.g., Anderson 1967; Larsen 1982; Boyd 1988). In prehistoric Tennessee Amerindians, for example, Donna C. Boyd (1988) has documented a clear trend for a reduction in dimensions of the mandible and facial bones that reflects decreasing masticatory stress relating to a shift to soft foods. Although not all studies of this sort examine both craniofacial and dental wear changes, those that do so report reductions in both craniofacial robusticity and dental wear, reflecting a decrease in hardness of foods consumed (Anderson 1967; Inoue et al. 1986). Other changes accompanying shifts from hard-textured to soft-textured foods include an increase in malocclusion and crowding of teeth due to inadequate growth of the jaws (reviewed by Corruccini 1991).

B. Holly Smith (1984, 1985) has found consistent patterns of tooth wear in human populations. In particular, agriculturalists—regardless of regional differences—show highly angled molar wear planes in comparison with those of hunter-gatherers. The latter tend to exhibit more evenly distributed, flat wear. Smith interpreted the differences in tooth wear between agriculturalists and hunter-gatherers as reflecting greater “toughness” of hunter-gatherer foods.

Similarly, Robert J. Hinton (1981) has found in a large series of Native American dentitions representative of hunter-gatherers and agriculturalists that the former wear their anterior teeth (incisors and canines) at a greater rate than the latter. Agriculturalists that he studied show a tendency for cupped wear on the chewing surfaces of the anterior teeth. Because agriculturalists exhibit a relatively greater rate of premortem posterior tooth loss (especially molars), Hinton relates the peculiar wear pattern of the anterior teeth to the use of these teeth in grinding food once the molars are no longer available for this masticatory activity.

Specific macroscopic wear patterns appear to arise as a result of chewing one type of food. In a prehistoric population from coastal Brazil, Christy G. Turner II and Lilia M. Machado (1983) found that in the anterior dentition the tooth surfaces facing the tongue were more heavily worn than the tooth surfaces facing the lips. They interpreted this wear pattern as reflecting the use of these teeth to peel or shred abrasive plants for dietary or extra-masticatory purposes.

Teeth and Diet: Dental Caries

The health of the dental hard tissues and their supporting bony structures are intimately tied to diet. Perhaps the most frequently cited disease that has been linked with diet is dental caries, which is defined as “a disease process characterized by the focal demineralization of dental hard tissues by organic acids produced by bacterial fermentation of dietary carbohydrates, especially sugars” (Larsen 1987: 375). If the decay of tooth crowns is left unchecked, it will lead to cavitation, loss of the tooth, and occasionally, infection and even death (cf. Calcagno and Gibson 1991). Carious lesions can develop on virtually any exposed surface of the tooth crown. However, teeth possessing grooves and fissures (especially posterior teeth) tend to trap food particles and are, therefore, more prone to colonization by indigenous bacteria, and thus to cariogenesis. Moreover, pits and linear depressions arising from poorly formed enamel (hypoplasia or hypocalcification) are also predisposed to caries attack, especially in populations with cariogenic diets (Powell 1985; Cook 1990).

Dental caries is a disease with considerable antiquity in humans. F. E. Grine, A. J. Gwinnett, and J. H. Oaks (1990) note the occurrence of caries in dental remains of early hominids dating from about 1.5 million years ago (robust australopithecines and Homo erectus) from the Swartkrans site (South Africa), albeit at low prevalence levels. Later Homo erectus teeth from this site show higher prevalence than australopithecines, which may reflect their consumption of honey, a caries-promoting food (Grine et al. 1990). But with few exceptions (e.g., the Kabwe early archaic Homo sapiens from about 130,000 years before the present [Brothwell 1963]), caries prevalence has been found to be very low until the appearance of plant domestication in the early Holocene. David W. Frayer (1988) has documented one of these exceptions—an unusually high prevalence in a Mesolithic population from Portugal, which he relates to the possible consumption of honey and figs.

Turner (1979) has completed a worldwide survey of archaeological and living human populations in which diet was documented and the percentage of carious teeth tabulated. The samples were subdivided into three subsistence groups: hunting and gathering (n = 19 populations), mixed (combination of agriculture with hunting, gathering, or fishing; n = 13 populations), and agriculture (n = 32 populations). By pooling the populations within each subsistence group, Turner found that hunter-gatherers exhibited 1.7 percent carious teeth, mixed subsistence groups exhibited 4.4 percent carious teeth, and agriculturalists exhibited 8.6 percent carious teeth.
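
The pooling step described here amounts to summing tooth counts across the populations within each subsistence group before computing a percentage, rather than averaging per-population rates; a minimal sketch with hypothetical counts (not Turner's data) is shown below.

```python
# Illustrative sketch: pooled percent carious teeth per subsistence group.
# All counts are hypothetical placeholders used only to show the arithmetic.

groups = {  # group -> list of (carious teeth, total teeth observed) per population
    "hunter-gatherer": [(12, 800), (9, 650)],
    "mixed": [(30, 700), (41, 900)],
    "agricultural": [(88, 1000), (95, 1100)],
}

for name, populations in groups.items():
    carious = sum(c for c, _ in populations)
    total = sum(t for _, t in populations)
    print(f"{name}: {100.0 * carious / total:.1f}% carious teeth")
```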

Other researchers summarizing large comparative samples have confirmed these findings, especially with regard to a dichotomy in caries prevalence between hunter-gatherers and agriculturalists. Clark Spencer Larsen and co-workers (1991) compared 75 archaeological dental samples from the eastern United States. Only three agriculturalist populations exhibited less than 7 percent carious teeth, and similarly, only three hunter-gatherer populations exhibited greater than 7 percent carious teeth. The greater frequencies of carious teeth in the agricultural populations are largely due to those people’s consumption of maize (see also Milner 1984). The cariogenic component of maize is sucrose, a simple sugar that is more readily metabolized by oral bacteria than are more complex carbohydrates (Newbrun 1982). Another factor contributing to high caries prevalence in later agricultural populations may be that maize is frequently consumed in the form of soft mushes. These foods have the tendency to become trapped in grooves and fissures of teeth, thereby enhancing the growth of plaque and contributing to tooth decay due to the metabolism of sugar by indigenous bacteria (see also Powell 1985).

High prevalence of dental caries does not necessarily indicate a subsistence regime that included maize agriculture, because other carbohydrates have been strongly implicated in prehistoric nonagricultural contexts. Philip Hartnady and Jerome C. Rose (1991) reported a high frequency of carious lesions—14 percent—in the Lower Pecos region of southwest Texas. These investigators related elevated levels of caries to the consumption of plants high in carbohydrates, namely sotol, prickly pear, and lechuguilla. The fruit of prickly pear (known locally as tuna) contains a significant sucrose component in a sticky, pectin-based mucilage. The presence of a simple sugar in this plant food, coupled with its gummy nature, is clearly a caries-promoting factor (see also Walker and Erlandson 1986, and Kelley et al. 1991, for different geographical settings involving consumption of nonagricultural plant carbohydrates).

Nutritional Assessment

Growth and Development

One of the most striking characteristics of human physical growth during the period of infancy and childhood is its predictability (Johnston 1986; Bogin 1988). Because of this predictability, anthropometric measures are among the most commonly used indices in the assessment of health and well-being, including nutritional status (Yarbrough et al. 1974). In this regard, a number of growth standards based on living subjects have been established (Gracey 1987). Comparisons of individuals of known age with these standards make it possible to identify deviations from the “normal” growth trajectory.

Growth is highly sensitive to nutritional quality, especially during the earlier years of infancy and early childhood (birth to 2 years of age) when the human body undergoes very rapid growth. The relationship between nutrition and growth has been amply demonstrated by the observation of recent human populations experiencing malnutrition. These populations show a secular trend for reduced physical size of children and adults followed by increased physical size with improvements in diet (e.g., for Japanese, see Kimura 1984; Yagi, Takebe, and Itoh 1989; and for additional populations, Eveleth and Tanner 1976).

Based on a large sample of North Americans representative of different socioeconomic groups, Stanley M. Garn and co-workers (Garn, Owen, and Clark 1974; Garn and Clark 1975) reported that children in lower income families were shorter than those in higher income families (see also review in Bogin 1988). Although a variety of factors may be involved, presumably the most important is nutritional status.

One means of assessing nutrition and its influence on growth and development in past populations is by the construction of growth curves based on comparison of length of long bones in different juvenile age groups (e.g., Merchant and Ubelaker 1977; Sundick 1978; Hummert and Van Gerven 1983; Goodman, Lallo, et al. 1984; Jantz and Owsley 1984; Owsley and Jantz 1985; Lovejoy, Russell, and Harrison 1990). These data provide a reasonable profile of rate or velocity of growth. Della C. Cook (1984), for example, studied the remains of a group ranging in age from birth to 6 years. They were from a time-successive population in the midwestern United States undergoing the intensification of food production and increased reliance on maize agriculture. Her analysis revealed that individuals living during the introduction of maize had shorter femurs for their age than did individuals living before, as hunter-gatherers, or those living after, as maize-intensive agriculturalists (Cook 1984).
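
A minimal sketch of how such a growth profile can be constructed is shown below: long-bone lengths are grouped by estimated (dental) age class and the mean length per class is compared across time periods. All ages, lengths, and group labels are hypothetical placeholders rather than values from the studies cited.

```python
# Illustrative sketch: build a growth profile (mean femur length by dental
# age class) for two hypothetical burial samples. All numbers are invented
# placeholders used only to show the form of the comparison.

from collections import defaultdict
from statistics import mean

# (age class in years, femur diaphysis length in mm) for each individual
samples = {
    "pre-maize": [(1, 130), (1, 134), (3, 182), (3, 178), (5, 228)],
    "early maize": [(1, 126), (1, 128), (3, 170), (3, 173), (5, 219)],
}

def growth_profile(individuals):
    """Return {age class: mean femur length} for a list of (age, length)."""
    by_age = defaultdict(list)
    for age, length in individuals:
        by_age[age].append(length)
    return {age: mean(lengths) for age, lengths in sorted(by_age.items())}

for group, individuals in samples.items():
    print(group, growth_profile(individuals))
```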

Comparison of skeletal development, a factor responsive to nutritional insult, with dental development, a factor that is relatively less responsive to nutritional insult (see the section “Dental Development”), can provide corroborative information on nutritional status in human populations. In two series of archaeological populations from Nubia, K. P. Moore, S. Thorp, and D. P. Van Gerven (1986) compared skeletal age and dental age and found that most individuals (70.5 percent) had a skeletal age younger than their dental age. These findings were interpreted as reflecting significant retardation of skeletal growth that was probably related to high levels of nutritional stress. Indeed, undernutrition was confirmed by the presence of other indicators of nutritional insult, such as iron-deficiency anemia.

In living populations experiencing generally poor nutrition and health, if environmental insults are removed (e.g., if nutrition is improved), then children may increase in size, thereby more closely approximating their genetic growth potential (Bogin 1988). However, if disadvantageous conditions are sustained, then it is unlikely that the growth potential will be realized. Thus, despite prolonged growth in undernourished populations, adult height is reduced by about 10 percent (Frisancho 1979). Sustained growth depression occurring during the years of growth and development, then, almost certainly has negative consequences for final adult stature (Bogin 1988 and references cited therein).

In archaeological populations, reductions in stature have been reported in contexts with evidence for reduced nutritional quality. On the prehistoric Georgia coast, for example, there was a stature reduction of about 4 centimeters in females and 2 centimeters in males during the shift from hunting, gathering, and fishing to a mixed economy involving maize agriculture (Larsen 1982; Angel 1984; Kennedy 1984; Meiklejohn et al. 1984; Rose et al. 1984; and discussions in Cohen and Armelagos 1984; Larsen 1987; Cohen 1989). All workers documenting reductions in stature regard them as reflecting a shift to the relatively poor diets that are oftentimes associated with agricultural food production, such as maize in North America.

Cortical Bone Thickness

Bone tissue, like any other tissue of the body, is subject to environmental influences, including nutritional quality. In the early 1960s, Garn and co-workers (1964) showed that undernourished Guatemalan children had reduced thickness of cortical (sometimes called compact) bone compared with better nourished children from the same region. Such changes were related to loss of bone during periods of acute protein energy malnutrition. These findings have been confirmed by a large number of clinical investigations (e.g., Himes et al. 1975; and discussion in Frisancho 1978).

Bone maintenance in archaeological skeletal populations has been studied by a number of investigators. Most frequently expressed as a ratio of the amount of cortical bone to total subperiosteal area—percent cortical area (PCCA) or percent cortical thickness (PCCT)—bone maintenance has been interpreted by most investigators working with archaeological human remains as reflecting nutritional or health status (e.g., Cassidy 1984; Cook 1984; Brown 1988; Cohen 1989). It is important to note, however, that bone also remodels itself under conditions of mechanical demand, so that bone morphology that might be interpreted as reflecting a reduction in nutritional status may in fact represent an increase in mechanical loading (Ruff and Larsen 1990).
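
For reference, the two indices can be computed from standard midshaft measurements. The sketch below assumes the simple ring (annulus) model of a long-bone cross section often used in radiogrammetry; the measurement values and function names are illustrative rather than taken from the studies cited.

```python
import math

# Illustrative sketch: percent cortical thickness (PCCT) and percent cortical
# area (PCCA) from midshaft measurements, assuming a ring-shaped (annular)
# cross section. Measurements are hypothetical, in millimeters.

def percent_cortical_thickness(total_breadth, medullary_breadth):
    """Summed cortical thickness as a percentage of total bone breadth."""
    return 100.0 * (total_breadth - medullary_breadth) / total_breadth

def percent_cortical_area(total_breadth, medullary_breadth):
    """Cortical area as a percentage of total subperiosteal area,
    treating both outlines as circles."""
    total_area = math.pi * (total_breadth / 2.0) ** 2
    medullary_area = math.pi * (medullary_breadth / 2.0) ** 2
    return 100.0 * (total_area - medullary_area) / total_area

print(percent_cortical_thickness(12.0, 6.0))  # -> 50.0
print(percent_cortical_area(12.0, 6.0))       # -> 75.0
```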

Cortical Bone Remodeling and Microstructure

An important characteristic that bone shares with other body tissues is that it must renew itself. The renewal of bone tissue, however, is unique in that the process involves destruction followed by replacement with new tissue. This characteristic cycle of destruction (resorption) and replacement (deposition) occurs mostly during the years of growth and development prior to adulthood, but it continues throughout the years following. Microstructures observable in bone cross sections have been analyzed and have provided important information about bone remodeling and its relationship to nutritional status. These microstructures include osteons (tunnels created by resorption and partially filled in by deposition of bone tissue), Haversian canals (central canals associated with osteons), and surrounding bone.

As with cortical thickness, there is a loss of bone mass that can be observed via measurement of the degree of porosity through either invasive (e.g., histological bone thin sections) or noninvasive (e.g., photon absorptiometry) means. With advancing age, cortical bone becomes both thinner and more porous. Cortical bone that has undergone a reduction in bone mass per volume—a disorder called osteoporosis—should reflect the nutritional history of an individual, especially if age factors have been ruled out (Martin et al. 1985; Schaafsma et al. 1987; Arnaud and Sanchez 1990). If this is the case, then bone loss can affect any individual regardless of age (Stini 1990). Clinical studies have shown that individuals with low calcium intakes are more prone to bone loss in adulthood (Nordin 1973; Arnaud and Sanchez 1990; Stini 1990). It is important to emphasize, however, that osteoporosis is a complex, multifactorial disorder and is influenced by a number of risk factors, including nondietary ones such as body weight, degree of physical exercise, and heredity (Evers, Orchard, and Haddad 1985; Schaafsma et al. 1987; Arnaud and Sanchez 1990; Stini 1990; Ruff 1991; Lindsay and Cosman 1992; Heaney 1993).

Porosity of bone also represents a function of both the number of Haversian canals and their size (Atkinson 1964; Thompson 1980; Burr, Ruff, and Thompson 1990). Therefore, the greater the number and width of Haversian canals, the greater the porosity of bone tissue. The density of individual osteons appears to be related to nutritional quality as well. For example, the presence of osteons containing hypermineralized lines in archaeological human remains likely reflects periods of growth disturbance (e.g., Stout and Simmons 1979; Martin and Armelagos 1985).

Samuel D. Stout and co-workers (Stout and Teitelbaum 1976; Stout 1978, 1983, 1989) have made comparisons of bone remodeling dynamics between a series of hunter-gatherer and maize-dependent North and South American archaeological populations. Their findings show that the single agricultural population used in the study (Ledders, Illinois) had bone remodeling rates that were higher than those of the other (non-maize) populations. They suggested that because maize is low in calcium and high in phosphorus, parathyroid hormone levels could be increased. Bone remodeling is strongly stimulated by parathyroid hormone, and chronically elevated levels of the hormone produce the disorder known as hyperparathyroidism.

In order to compensate for bone loss in aging adults (particularly after 40 years of age), there are structural adaptations involving more outward distribution of bone tissue in the limb bones. In older adults, such adaptation contributes to maintaining the biomechanical strength despite bone losses (Ruff and Hayes 1982). Similarly, D. B. Burr and R. B. Martin (1983; see also Burr et al. 1990) have suggested that the previously discussed material property changes may supplement structural changes. Thus, different rates of bone turnover in human populations may reflect mechanical adaptations that are not necessarily linked to poor nutrition.

Skeletal (Harris) Lines of Increased Density

Nonspecific markers of physiological stress that appear to have some links with nutritional status are radiographically visible lines of increased bone density, referred to as Harris lines. These lines either partly or completely span the medullary cavities of tubular bones (especially long bones of the arms and legs) and trace the outlines of other bones (Garn et al. 1968; Steinbock 1976). These lines have been found to be associated with malnutrition in modern humans (e.g., Jones and Dean 1959) and in experimental animals (e.g., Stewart and Platt 1958). Because Harris lines develop during bone formation, age estimates can be made for time of occurrence relative to the primary ossification centers (e.g., Goodman and Clark 1981). However, the usefulness of these lines for nutritional assessment is severely limited by the fact that they frequently resorb in adulthood (Garn et al. 1968). Moreover, Harris lines have been documented in cases where an individual has not undergone episodes of nutritional or other stress (Webb 1989), and they appear to correlate negatively with other stress markers (reviewed in Larsen 1987).

Dental Development: Formation and Eruption

Like skeletal tissues, dental tissues are highly sensitive to nutritional perturbations that occur during the years of growth and development. Unlike skeletal tissues, however, teeth—crowns, in particular—do not remodel once formed, and they thereby provide a permanent “memory” of nutritional and health history. Alan H. Goodman and Jerome C. Rose (1991: 279) have underscored the importance of teeth in the anthropological study of nutrition: “Because of the inherent and close relationship between teeth and diet, the dental structures have incorporated a variety of characteristics that reflect what was placed in the mouth and presumably consumed” (see also Scott and Turner 1988).

There are two main factors involved in dental development—formation of crowns and roots and eruption of teeth. Because formation is more heritable than eruption, it is relatively more resistant to nutritional insult (Smith 1991). Moreover, the resistance of formation to environmental problems arising during the growth years is suggested by low correlations between formation and stature, fatness, body weight, or bone age, and lack of secular trend. Thus, timing of formation of tooth crowns represents a poor indicator for assessing nutritional quality in either living or archaeological populations. Eruption, however, can be affected by a number of factors, including caries, tooth loss, and severe malnutrition (e.g., Alvarez et al. 1988, 1990; Alvarez and Navia 1989). In a large, cross-sectional evaluation of Peruvian children raised in nutritionally deprived settings, J. O. Alvarez and co-workers (1988) found that exfoliation of the deciduous dentition was delayed. Other workers have found that eruption was delayed in populations experiencing nutritional deprivation (e.g., Barrett and Brown 1966; Alvarez et al. 1990). Unlike formation, eruption timing has been shown to be correlated with various measures of body size (Garn, Lewis, and Polacheck 1960; McGregor, Thomson, and Billewicz 1968). To my knowledge, there have been no archaeological populations where delayed eruption timing has been related to nutritional status.

Dental Development: Tooth Size

Unlike formation timing, tooth size appears to be under the influence of nutritional status. Garn and co-workers (Garn and Burdi 1971; Garn, Osborne, and McCabe 1979) have indicated that maternal health status is related to the size of the deciduous and permanent dentitions. The relationship between nutrition and tooth size in living populations has not been examined. However, the role of nutrition as a contributing factor to tooth size reduction has been strongly implicated in archaeological contexts. Mark F. Guagliardo (1982) and Scott W. Simpson, Dale L. Hutchinson, and Clark Spencer Larsen (1990) have inferred that the failure of teeth to reach their maximum genetic size potential occurs in populations experiencing nutritional stress. That is, comparison of tooth size in populations dependent upon maize agriculture revealed that juveniles had consistently smaller teeth than adults. Moreover, Larsen (1983) reported a reduction in deciduous tooth size between hunter-gatherers and maize agriculturalists on the prehistoric southeastern U.S. coast. Because deciduous tooth crowns are largely formed in utero, it was suggested that smaller teeth in the later period resulted from a reduction in maternal health status and placental environment.

Dental Development: Macrodefects

A final approach to assessing the nutritional and health status of contemporary and archaeological populations has to do with the analysis of enamel defects in the teeth, particularly hypoplasias. Hypoplasias are enamel defects that typically occur as circumferential lines, grooves, or pits resulting from the death or cessation of enamel-producing cells (ameloblasts) and the failure to form enamel matrix (Goodman and Rose 1990). Goodman and Rose (1991) have reviewed a wide array of experimental, epidemiological, and bioarchaeological evidence in order to determine whether hypoplasias represent an important means for assessing nutritional status in human populations, either contemporary or archaeological. They indicate that although enamel hypoplasias arising from systemic (e.g., nutrition) versus nonsystemic factors (e.g., localized trauma) are easily identifiable, identification of an exact cause for the defects remains an intractable problem. T. W. Cutress and G. W. Suckling (1982), for example, have listed nearly 100 factors that have a causal relationship with hypoplasias, including nutritional problems. The results of a number of research projects have shown that a high frequency of individuals who have experienced malnutrition have defective enamel, thus suggesting that enamel is relatively sensitive to undernutrition. Moreover, it is a straightforward process to estimate the age at which individual hypoplasias occur based on matching the hypoplasia with dental developmental sequences (e.g., Goodman, Armelagos, and Rose 1980; Rose, Condon, and Goodman 1985; Hutchinson and Larsen 1988).
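
The age-matching procedure can be approximated with a simple calculation: if enamel is assumed to be laid down at a roughly constant rate between crown initiation and crown completion, the position of a defect between the cusp tip and the cervix yields an estimated age at formation. The sketch below uses that linear simplification with hypothetical crown formation ages; published approaches (e.g., the charts used by Goodman and colleagues) rely on dental developmental standards rather than this simplified equation.

```python
# Illustrative sketch: estimate age at hypoplasia formation assuming enamel
# is deposited at a constant rate from cusp tip (crown initiation) to cervix
# (crown completion). The formation ages below are hypothetical placeholders.

def defect_age(defect_height_mm, crown_height_mm, age_start_yr, age_end_yr):
    """defect_height_mm is measured from the cementoenamel junction (CEJ);
    a defect near the cusp tip formed early, one near the CEJ formed late."""
    fraction_formed = (crown_height_mm - defect_height_mm) / crown_height_mm
    return age_start_yr + fraction_formed * (age_end_yr - age_start_yr)

# Hypothetical permanent canine: crown assumed to form between ~0.5 and
# ~6.5 years, crown height 11 mm, defect located 4 mm above the CEJ.
print(round(defect_age(4.0, 11.0, 0.5, 6.5), 1))  # -> ~4.3 years
```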

Studies based on archaeological human remains have examined hypoplasia prevalence and pattern (reviewed in Huss-Ashmore et al. 1982; Larsen 1987). In addition to determining the frequency of enamel defects (which tends to be higher in agricultural populations), this research has looked at the location of defects on tooth crowns in order to examine age at the time of defect development. Contrary to earlier assertions that the age pattern of defects is universal in humans, with most hypoplasias occurring in the first year of life (e.g., Sarnat and Schour 1941; see discussion in Goodman 1988), these studies have served to show that there is a great deal of variability in the age of occurrence of hypoplasias. By and large, however, most reports on age patterning in hypoplasia occurrence indicate a peak in defects at 2 to 4 years of age, regardless of geographic or ecological setting (Hutchinson and Larsen 1988; Goodman and Rose 1991), a pattern that most workers have attributed to the nutritional stresses of postweaning diets (e.g., Corruccini, Handler, and Jacobi 1985; Webb 1989; Blakely and Mathews 1990; Simpson et al. 1990). Analyses of prevalence and pattern of hypoplasia in past populations have largely focused on recent archaeological populations. In this respect, there is a tendency for agricultural groups to show higher prevalence rates than nonagricultural (hunter-gatherer) populations (e.g., Sciulli 1978; Goodman et al. 1984a, 1984b; Hutchinson and Larsen 1988).

Unlike most other topics discussed in this chapter, this indicator of physiological stress has been investigated in ancient hominids. In the remains of early hominids in Africa, the Plio-Pleistocene australopithecines, P. V. Tobias (1967) and Tim D. White (1978) have noted the presence of hypoplasias and provided some speculation on relative health status. Of more importance, however, are the recent analyses of hypoplasias in European and Near Eastern Neanderthal (Middle Paleolithic) populations. Marsha D. Ogilvie, Bryan K. Curran, and Erik Trinkaus (1989) have recorded prevalence data and estimates of developmental ages of defects on most of the extant Neanderthal teeth (n = 669 teeth). Their results indicate high prevalence, particularly in the permanent teeth (41.9 percent of permanent teeth, 3.9 percent of deciduous teeth). Although these prevalences are not as high as those observed in recent archaeological populations (e.g., Hutchinson and Larsen 1990; Van Gerven, Beck, and Hummert 1990), they do indicate elevated levels of stress in these ancient peoples. Unlike other dental series, the age of occurrence of hypoplasias on the permanent dentition shows two distinct peaks: an earlier peak between ages 2 and 5 years and a later peak between ages 11 and 13 years. The earlier peak is consistent with the findings of other studies; that is, it may reflect nutritional stresses associated with weaning (Ogilvie et al. 1989). The later peak may simply represent overall high levels of systemic stress in Neanderthals. Because genetic disorders would likely have been eliminated from the gene pool, Ogilvie and co-workers argue against genetic agents as a likely cause. Moreover, the very low prevalence of infection in Neanderthal populations suggests that infection was an unlikely cause, leaving nutritional deficiencies, especially in the form of periodic food shortages, as the most likely causative agents.

Analysis of dental hypoplasia prevalence from specific Neanderthal sites confirms the findings of Ogilvie and co-workers, particularly with regard to the Krapina Neanderthal sample from eastern Europe (e.g., Molnar and Molnar 1985). With the Krapina dental series, Larsen and co-workers (in preparation) have made observations on the prevalence of an enamel defect known as hypocalcification, which is a disruption of the mineralization process following deposition of enamel matrix by ameloblasts. The presence of these types of enamel defects confirms the unusually high levels of stress in these early hominid populations, levels that are likely related to undernutrition.

Dental Development: Microdefects

An important complement to the research done on macrodefects has been the observation of histological indicators of physiological stress known as Wilson bands or accentuated striae of Retzius (Rose et al. 1985; Goodman and Rose 1990). Wilson bands are features visible in thin section under low magnification (×100 to ×200) as troughs or ridges in the otherwise flat enamel surface. Concordance of these defects with hypoplasias is frequent, but certainly not universal in humans (Goodman and Rose 1990), a circumstance that may be related to differences in histology or etiology or both (Rose et al. 1985; Danforth 1989). K. W. Condon (1981) has concluded that Wilson bands may represent short-term stress episodes (less than one week), whereas hypoplasias may represent long-term stress episodes (several weeks to two months).

Jerome C. Rose, George J. Armelagos, and John W. Lallo (1978) have tested the hypothesis that as maize consumption increased and the consumption of animal protein decreased in the weaning diet, there should be a concomitant increase in the frequency of Wilson bands. Indeed, there was a fourfold increase in the rate (per individual) among the full agriculturalists compared with the earlier hunter-gatherers. They concluded that the declining quality of nutrition reduced the resistance of the child to infectious disease, thus increasing the individual’s susceptibility to infection and the likelihood of exhibiting a Wilson band. Most other studies of prehistoric populations from other cultural and geographic contexts have confirmed these findings (references cited in Rose et al. 1985).
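
The rate comparison at the heart of this finding is easily expressed in code. The Python fragment below is purely illustrative: the counts are invented, not the data reported by Rose and colleagues, and it shows only how a per-individual Wilson band rate and a between-group ratio of the kind described above might be computed.

# Purely illustrative: the counts are invented and do not reproduce the data
# of Rose, Armelagos, and Lallo (1978).

def per_individual_rate(total_bands, n_individuals):
    """Mean number of Wilson bands observed per individual in a sample."""
    return total_bands / n_individuals

hunter_gatherer_rate = per_individual_rate(total_bands=12, n_individuals=40)
agriculturalist_rate = per_individual_rate(total_bands=48, n_individuals=40)

print(f"Hunter-gatherer rate: {hunter_gatherer_rate:.2f} bands per individual")
print(f"Agriculturalist rate: {agriculturalist_rate:.2f} bands per individual")
print(f"Rate ratio: {agriculturalist_rate / hunter_gatherer_rate:.1f}x")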

Specific Nutritional Deficiency Diseases

Much of what is known about the nutritional quality of diets of past populations is based on the nonspecific indicators just discussed. It is important to emphasize that rarely is it possible to relate a particular hard-tissue pathology with a specific nutritional factor in archaeological human remains, not only because different nutritional problems may exhibit similar pathological signatures, but also because of the synergy between undernutrition and infection (Scrimshaw, Taylor, and Gordon 1968; Gabr 1987). This relationship has been succinctly summarized by Michael Gracey (1987: 201): “Malnourished children characteristically are enmeshed in a ‘malnutrition-infection’ cycle being more prone to infections which, in turn, tend to worsen the nutritional state.” Thus, an episode of infection potentially exacerbates the negative effects of undernutrition as well as the severity of the pathological signature reflecting those effects.

Patricia Stuart-Macadam (1989) has reviewed evidence for the presence in antiquity of three specific nutritional diseases: scurvy, rickets, and iron-deficiency anemia. Scurvy and rickets are produced by respective deficiencies in vitamin C (ascorbic acid) and vitamin D. Vitamin C is unusual in that it is required in the diets of humans and other primates, but only of a few other animals. Among its other functions, it serves in the synthesis of collagen, the structural protein of the connective tissues (skin, cartilage, and bone). Thus, if an individual is lacking in vitamin C, the formation of the premineralized component of bone (osteoid) will be considerably reduced.

Rickets is a disease of infants and young children resulting either from insufficient dietary sources of vitamin D (e.g., fish and dairy products) or, of greater importance, from lack of exposure to sunlight. The insufficiency reduces the ability of bone tissue to mineralize, resulting in skeletal elements (especially long bones) that are more susceptible to deformation, such as abnormal bending.

Both scurvy and rickets have been amply documented through historical accounts and in clinical settings (see Stuart-Macadam 1989). Radiographic documentation shows that bones undergoing rapid growth, namely those of infants and young children, have the greatest number of changes. In infants, for example, the ends of the long bones and the ribs are most affected and show “generalized bone atrophy and a thickening and increased density” (Stuart-Macadam 1989: 204). In generally undernourished children, rickets can be expressed as thin, porous bone with wide marrow spaces; in better-nourished individuals, by contrast, bone tissue is more porous because of excessive bone deposition. Children with rickets oftentimes show pronounced bowing of the long bones, affecting both weight-bearing (leg) and non-weight-bearing (arm) bones.

Both scurvy and rickets, however, have been only marginally documented in the archaeological record, and mostly in historical contexts from the medieval period onward (Moller-Christiansen 1958; Maat 1982). Stuart-Macadam (1989) notes that only in the period of industrialization during the nineteenth century in Europe and North America has rickets shown an increase in prevalence.

Anemia is any condition in which hemoglobin or red blood cells are reduced below normal levels. Iron-deficiency anemia is by far the most common form in living peoples, affecting more than half a billion of the current world population (Baynes and Bothwell 1990). Iron is an essential mineral that must be ingested; it plays an important role in many body functions, especially the transport of oxygen to the body tissues (see Stuart-Macadam 1989). The bioavailability of iron from dietary sources depends on several factors (see Hallberg 1981; Baynes and Bothwell 1990). With respect to absorption, the major determinant is the form of iron contained in the foods consumed: heme or nonheme. Heme sources of iron, found in animal products, are efficiently absorbed (Baynes and Bothwell 1990).

In contrast, nonheme forms of iron from various vegetable foods vary a great deal in their bioavailability. Moreover, a number of substances found in foods actually inhibit iron absorption. Phytates found in many nuts (e.g., almonds, walnuts), cereals (e.g., maize, rice, whole wheat flour), and legumes inhibit dietary iron bioavailability (summarized in Baynes and Bothwell 1990). In addition, unlike the proteins found in meat, plant proteins, such as those of soybeans, nuts, and lupines, inhibit iron absorption. Thus, populations depending heavily on plants generally experience reduced levels of iron bioavailability. Tannates found in tea and coffee also significantly reduce iron absorption (Hallberg 1981).

There are, however, a number of foods known to enhance iron bioavailability when consumed in combination with nonheme sources of iron. For example, ascorbic acid is a very strong promoter of iron absorption (Hallberg 1981; Baynes and Bothwell 1990). Citric acid from various fruits has also been implicated in promoting iron absorption, as has lactic acid from fermented cereal beers (Baynes and Bothwell 1990). In addition, Miguel Layrisse, Carlos Martinez-Torres, and Marcel Roche (1968; and see follow-up studies cited in Hallberg 1981) have provided experimental evidence from living human subjects that nonheme iron absorption is enhanced considerably by the consumption of meat and fish, although the specific mechanism of this enhancement is not clear (Hallberg 1981).
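
The interplay of these enhancers and inhibitors can be summarized schematically. The Python fragment below is a crude qualitative sketch only: the component lists and the simple scoring are invented for illustration and are not the quantitative absorption algorithms discussed by Hallberg or by Baynes and Bothwell; it merely indicates how the factors named above push the bioavailability of a meal's nonheme iron up or down.

# Crude qualitative sketch only: component lists and scoring are invented for
# illustration, not drawn from published iron-absorption algorithms.

ENHANCERS = {"ascorbic_acid", "citric_acid", "lactic_acid", "meat", "fish"}
INHIBITORS = {"phytates", "tannates", "plant_protein"}

def nonheme_iron_outlook(meal_components):
    """Return a rough qualitative rating of nonheme iron bioavailability."""
    enhancer_count = len(meal_components & ENHANCERS)
    inhibitor_count = len(meal_components & INHIBITORS)
    score = enhancer_count - inhibitor_count
    if score >= 1:
        return "relatively high"
    if score == 0:
        return "intermediate"
    return "relatively low"

# A maize-and-legume meal taken with tea: inhibitors dominate.
print(nonheme_iron_outlook({"phytates", "tannates", "plant_protein"}))
# The same plant-based meal with fish and a vitamin C source: enhancers offset
# the inhibitors.
print(nonheme_iron_outlook({"phytates", "plant_protein", "fish", "ascorbic_acid"}))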

Iron-deficiency anemia can be caused by a variety of other, nondietary factors, including parasitic infection, hemorrhage, blood loss, and diarrhea; infants can be affected by predisposing factors such as low birth weight, gender, and premature clamping of the umbilical cord (Stuart-Macadam 1989, and references cited therein). The skeletal changes observed in clinical and laboratory settings are found primarily in the cranium and include the following: increased width of the space between the inner and outer surfaces of the cranial vault and the roof areas of the eye orbits; unusual thinning of the outer surface of the cranial vault; and a “hair-on-end” orientation of the trabecular bone between the inner and outer cranial vault (Huss-Ashmore et al. 1982; Larsen 1987; Stuart-Macadam 1989; Hill and Armelagos 1990). Postcranial changes have also been observed (e.g., Angel 1966) but are generally less severe and occur in reduced frequency relative to the genetic anemias (Stuart-Macadam 1989). The skeletal modifications result from hypertrophy of the blood-forming tissues in order to increase the output of red blood cells in response to the anemia (Steinbock 1976).

Skeletal changes similar to those documented in living populations have been found in archaeological human remains from virtually every region of the globe. In archaeological materials, the bony changes (pitting and/or expansion of the cranial bones) have been identified by various terms, most typically porotic hyperostosis or cribra orbitalia. These lesions have rarely been observed prior to the adoption of sedentism and agriculture during the Holocene, although J. Lawrence Angel (1978) has noted occasional instances extending back into the Middle Pleistocene. Although the skeletal changes have been observed in individuals of all ages and both sexes, Stuart-Macadam (1985) has concluded that iron-deficiency anemia produces them in young children, during the period when most of the growth of the cranial bones is occurring. By contrast, the presence of porotic hyperostosis and its variants in adults largely represents anemic episodes dating to the early years of growth and development. Thus, it is not possible to evaluate iron status in adults on the basis of this pathology.

Many workers have offered explanations for the presence of porotic hyperostosis since the pathology was first identified more than a century ago (Hill and Armelagos 1990). Recent discussions, however, have emphasized local circumstances, including nutritional deprivation brought about by focus on intensive maize consumption, or various contributing circumstances such as parasitism, diarrheal infection, or a combination of these factors (e.g., Hengen 1971; Carlson, Armelagos, and Van Gerven 1974; Cybulski 1977; El-Najjar 1977; Mensforth et al. 1978; Kent 1986; Walker 1986; Webb 1989). Angel (1966, 1971) argued that the primary cause for the presence of porotic hyperostosis in the eastern Mediterranean region was the presence of abnormal hemoglobins, especially thalassemia. His hypothesis, however, has remained largely unsubstantiated (Larsen 1987; Hill and Armelagos 1990).

Several human archaeological populations have been shown to have moderate to high frequencies of porotic hyperostosis after establishing agricultural economies. However, this is certainly not a ubiquitous phenomenon. For example, Larsen and co-workers (1992) and Mary L. Powell (1990) have noted that the late prehistoric populations occupying the southeastern U.S. Atlantic coast have a very low prevalence of porotic hyperostosis. These populations depended in part on maize, a foodstuff that has been implicated in reducing iron bioavailability, but a strong dependence on marine resources (especially fish) may have greatly enhanced iron absorption. In the following historic period, these native populations show a marked increase in porotic hyperostosis. This probably came about because, after the arrival of Europeans, consumption of maize greatly increased while that of marine resources decreased. Moreover, native populations began to use sources of water that were likely contaminated by parasites, which would have brought on an increase in the prevalence of iron-deficiency anemia (see Larsen et al. 1992).

Conclusions

This chapter has reviewed a range of skeletal and dental indicators that anthropologists have used in the reconstruction of diet and assessment of nutrition in past human populations. As noted throughout, such reconstruction and assessment, where we are dealing only with the hard-tissue remains, is especially difficult because each indicator is so often affected by other factors that are not readily controlled. For this reason, anthropologists attempt to examine as many indicators as possible in order to derive the most complete picture of diet and nutrition.

In dealing with archaeological skeletal samples, there are numerous cultural and archaeological biases that oftentimes affect the sample composition. Jane E. Buikstra and James H. Mielke have suggested:

Human groups have been remarkably creative in developing customs for disposal of the dead. Bodies have been interred, cremated, eviscerated, mummified, turned into amulets, suspended in trees, and floated down watercourses. Special cemetery areas have been reserved for persons of specific status groups or individuals who died in particular ways; for example, suicides. This variety in burial treatments can provide the archaeologist with important information about social organization in the past. On the other hand, it can also severely limit reliability of demographic parameters estimated from an excavated sample. (Buikstra and Mielke 1985: 364)

Various workers have reported instances of cultural biases affecting cemetery composition. In late prehistoric societies in the eastern United States, young individuals and sometimes others were excluded from burial in primary cemeteries (e.g., Buikstra 1976; Russell, Choi, and Larsen 1990), although poor preservation of thinner bones of these individuals—particularly infants and young children—along with excavation biases of archaeologists, can potentially contribute to misrepresentation (Buikstra, Konigsberg, and Bullington 1986; Larsen 1987; Walker, Johnson, and Lambert 1988; Milner, Humpf, and Harpending 1989). This is not to say that skeletal samples offer a poor choice for assessing diet and nutrition in past populations. Rather, all potential biases—cultural and noncultural—must be evaluated when considering the entire record of morbidity revealed by the study of bones and teeth.

Representation in skeletal samples is made especially problematic by the potential for differential access to foods in past human societies. For example, as revealed by analysis of the prevalence of dental caries, women ate more cariogenic carbohydrates than men in many agricultural or partially agricultural societies (reviewed in Larsen 1987; Larsen et al. 1991). Even in contemporary foraging groups where food is supposedly distributed equitably among all members regardless of age or gender, various observers have found that women frequently receive less protein and fat than men and that their diet is often nutritionally inferior to that of males (reviewed in Speth 1990). In these so-called egalitarian societies, women are regularly subject to food taboos, including taboos on meat (e.g., Hausman and Wilmsen 1985; see discussions by Spielmann 1989 and Speth 1990). Such taboos can be especially detrimental if they are imposed during critical periods such as pregnancy or lactation (Spielmann 1989; Speth 1990). If nutritional deprivation occurs during either pregnancy or lactation, the health of the fetus or infant can be severely compromised, and delays in growth are likely. Thus, when assessing nutrition in past populations, it is important that the factors affecting quality of diet in females and other members of society (e.g., young children, old adults), and the potential for variability in the health of these individuals, be carefully evaluated.

Of equal importance in the study of skeletal remains is the role of other sources of information regarding diet in archaeological settings, especially plant and animal remains. All available sources should be integrated into a larger picture, including plant and animal food remains recovered from archaeological sites, and corroborative information made available from the study of settlement patterns and ethnographic documentation of subsistence economy. The careful consideration of all these sources of information facilitates a better understanding of diet and nutrition in peoples of the past.