Susan Kent. Cambridge World History of Food. Editors: Kenneth F. Kiple & Kriemhild Conee Ornelas. Volume 1. Cambridge, UK: Cambridge University Press, 2000.
Until the nineteenth century, unspecified chronic anemia was known as chlorosis, or the “green sickness,” referring to the extreme pallor that characterized severe cases. For centuries, “chlorosis, or green sickness, was attributed to unrequited passion. Medieval Dutch painters portrayed the pale olive complexion of chlorosis in portraits of young women” (Farley and Foland 1990: 89). Although such extreme cases are not common in Western societies today, less severe acquired anemia is quite common. In fact, acquired anemia is one of the most prevalent health conditions in modern populations.
Technically, anemia is defined as a subnormal number of red blood cells per cubic millimeter (cu mm), a subnormal amount of hemoglobin per 100 milliliters (ml) of blood, or a subnormal volume of packed red blood cells per 100 ml of blood, although other indices are usually also used. Rather than imputing anemia to unrequited love, modern medicine generally attributes it to poor diets that fail to replenish iron lost through rapid growth during childhood, menstruation, pregnancy, injury, or hemolysis. One of today’s solutions to the frequency of acquired anemia is to increase dietary intake of iron. This is accomplished by indiscriminate and massive iron fortification of many cereal products, as well as the use of prescription and nonprescription iron supplements, often incorporated in vitamin pills. However, a dietary etiology of anemia has, in the past, been assumed more often than proven. Determining such an etiology is complicated by the fact that the hematological presentation of dietary-induced iron-deficiency anemia resembles that of the anemia of chronic disease. Of the many types and causes of acquired anemia, only those associated with diet and chronic disease are discussed here (for an overview of others, see Kent and Stuart-Macadam, this volume).
Anemia Due to Diet or Chronic Disease?
Although causes differ, patients with either iron-deficiency anemia or the anemia of chronic disease/inflammation have subnormal circulating iron levels called hypoferremia. Below-normal levels of circulating iron are manifested in low hemoglobin/hematocrit, serum iron, and transferrin saturation levels. Because serum ferritin is an indirect measure of iron stores, it provides a sensitive index to distinguish these two anemias (Cook and Skikne 1982; Zanella et al. 1989). When the body does not have enough iron as a result of diet, bleeding, or other causes, serum ferritin values are subnormal and reflect the subnormal amount of iron in the bone marrow. In the anemia of chronic disease/inflammation, however, serum ferritin levels are normal to elevated because circulating iron is transferred to storage, as reflected by the combination of subnormal circulating iron levels with normal or elevated serum ferritin values.
Removing iron from circulation reduces its availability to pathogens that require it for proliferation (Weinberg 1974, 1984, 1990, 1992). When compared with bone marrow aspirations, serum ferritin values correctly detected iron deficiency in 90 percent of patients; serum iron in 41 percent; total iron binding capacity in 84 percent; and transferrin saturation in 50 percent (Burns et al. 1990). Erythrocyte sedimentation rate (ESR) is a nonspecific measure of the presence of infection or inflammation. If used in combination with serum ferritin, ESR can be useful in distinguishing anemia of chronic disease from iron deficiency anemia (Beganovic 1987; Charache et al. 1987; Witte et al. 1988).
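The diagnostic reasoning described above can be summarized schematically. The following sketch is illustrative only, not a clinical algorithm: the cutoffs (serum ferritin below about 12 ng/mL indicating depleted stores, above about 100 ng/mL suggesting the anemia of chronic disease) are figures cited elsewhere in this chapter, and the function name and structure are hypothetical.

```python
def classify_hypoferremia(hemoglobin_subnormal, serum_ferritin_ng_ml, esr_elevated):
    """Illustrative sketch of the distinction drawn in the text; not clinical advice."""
    if not hemoglobin_subnormal:
        return "not anemic"
    # Subnormal ferritin reflects depleted bone-marrow iron stores:
    # true iron deficiency from diet, bleeding, or other iron loss.
    if serum_ferritin_ng_ml < 12:            # illustrative cutoff cited in this chapter
        return "iron-deficiency anemia"
    # Normal-to-elevated ferritin despite low circulating iron means iron has
    # been shifted to storage; an elevated ESR points to infection or inflammation.
    if serum_ferritin_ng_ml >= 100 or esr_elevated:
        return "anemia of chronic disease/inflammation"
    return "indeterminate; further tests (e.g., bone marrow aspiration) needed"

# Example: low hemoglobin, ferritin of 150 ng/mL, raised ESR
print(classify_hypoferremia(True, 150, True))
```

The point of the sketch is simply that hypoferremia alone cannot separate the two anemias; it is the iron-store measure (serum ferritin), ideally read alongside the ESR, that does the diagnostic work.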
Although iron-deficiency anemia is most often attributed to diet, in the absence of blood loss or parasites even the most frugal diets rarely result in iron-deficiency anemia. According to C. K. Arthur and J. P. Isbister (1987: 173), “iron deficiency is almost never due to dietary deficiency in an adult in our community [i.e., Western society].” The average person requires very little iron intake to replace that lost through perspiration, menstruation, and urination, because as much as 90 percent of the iron needed for the formation of new blood cells is derived from recycling senescent red blood cells (Hoffbrand 1981). In addition, individuals suffering from iron deficiency absorb significantly more iron from the same diet, and excrete significantly less, than nondeficient individuals. In other words, “the intestine can adjust its avidity to match the body’s requirement” (O’Neil-Cutting and Crosby 1987: 491). Nonetheless, it has been estimated that approximately 30 percent of a world population of nearly 4.5 billion are anemic, with at least half of this 30 percent (500 to 600 million people) thought to have iron-deficiency anemia (Cook and Lynch 1986). Even in affluent Western societies, 20 percent of menstruating females have been reported to be iron deficient (Arthur and Isbister 1987: 172). Recent surveys in the United States, however, indicate that these figures are much too high (Cook et al. 1986).
Elsewhere, researchers have discovered, to their surprise, that diet is not always responsible for the acknowledged high morbidity of a particular region. According to P. Aaby (1988: 290-1), “We started out by assuming that malnutrition was the major determinant of high child mortality and that changing this pattern would have a beneficial effect on mortality. Much to our distress, we found very little evidence that nutritional status [as measured by the presence of anemia] was strongly related to the variation in nutritional practices.”
Failure to distinguish anemia etiology correctly can be seen in several studies. For example, Arthur and Isbister report that “of 29 patients ‘diagnosed’ as having iron deficiency anaemia, only 11 patients in fact had true iron deficiency when reviewed by the authors. Most patients had the anaemia of chronic disease that was misdiagnosed as iron deficiency. There is a strong clinical impression amongst hematologists that this problem of misdiagnosis is not unique to hospital based specialists but also applies more widely” (Arthur and Isbister 1987: 172). Almost 50 percent of infants diagnosed as iron deficient were found to have occult gastrointestinal blood loss without anatomic lesions, which was the source of their anemia (Fairbanks and Beutler 1988: 208). Although the following discussions are hampered somewhat by the fact that not all studies include serum ferritin tests needed to distinguish dietary iron-deficiency anemia from the anemia of chronic disease, an attempt is made to differentiate when possible and to indicate when this is not possible.
Diet and Iron-Deficiency Anemia
When iron-deficiency anemia is caused by severely deficient diets, it appears to be a straightforward problem: Not enough iron is ingested to replace losses. In reality, many cases of iron-deficiency anemia involve unidentified blood loss that causes, accentuates, and/or perpetuates dietary-induced iron-deficiency anemia. (Note that throughout this paper, we are specifically discussing iron and diet, not protein, vitamins, and other nutrients that are also integral parts of a diet).
Anemia is rarely caused by diet alone. This is partly because healthy humans lose only 1 to 3 milligrams (mg) of iron per day and partly because almost all food and, in many areas, drinking water contain some iron. Women in a Swedish study lost on average between 0.6 and 0.7 mg of iron per day through menstruation, and 95 percent lost on average less than 1.4 mg per day (Fairbanks and Beutler 1988: 205).
Most individuals are able to replenish these losses with a normal diet. For example, Westerners ingest meat almost daily and consume additional iron in fortified foods (such as that contained in many wheat products). Wine and cider can provide 2 to 16 mg or more iron per liter (Fairbanks and Beutler 1988). Moreover, depending on the quantity of the body’s iron stores, between 15 and 35 percent of heme iron, which is found in red meat, poultry, and fish, and between 2 and 20 percent of nonheme iron in plants is absorbed (Monsen 1988: 787). Higher amounts of nonheme iron in vegetables are absorbed when consumed with heme iron in meat or in combination with iron absorption enhancers, such as ascorbic acid (Hallberg, Brune, and Rossander 1989). Several studies of the amount of iron ingested by North Americans indicate that normal individuals who are not suffering from unnatural blood loss consume sufficient iron to prevent deficiency. For example, a study of 51 women found that those who consumed only poultry or fish or who were vegetarians had a mean iron intake of 12 mg, whereas those who regularly ate red meat had a mean iron intake of 13 mg (Monsen 1988: 789). The mean iron intake of 162 women in another study was 11.1 mg (Subar and Bowering 1988). Female long-distance runners ingested between 11.5 mg and 14 mg per day (Manore et al. 1989).
Blood loss and true iron deficiency also result in increased iron absorption. Research shows that iron-replete men with sufficient iron stores absorb 2.5 percent of the nonheme iron they ingest, as compared to 26 percent of the heme iron consumed (Cook 1990). These figures can be contrasted with those for iron-deficient males, who absorb about 22 percent of ingested nonheme iron and 47 percent of heme iron (Cook 1990: 304). The same flexibility applies to iron loss. Normal males lose about 0.9 mg of iron per day. In contrast, hypoferremic males lose only about 0.5 mg per day, and hyperferremic males lose about 2.0 mg per day (Finch 1989).
Although 82 percent of women studied in New York State recorded an average consumption of only 75 percent of the United States Recommended Dietary Allowance (RDA) of iron, the amount ingested was a mean of 11.1 mg, which exceeds the amount of iron lost through normal physiological processes (Subar and Bowering 1988; also see Brennan et al. 1983 for similar conclusions based on a study of low-income pregnant women). Thus, if we arbitrarily assume an absorption rate of 18 percent of all iron consumed, the mean iron available is well above the 1.4 mg that a growing number of scientists now regard as all the average menstruating woman needs to absorb daily to replace losses (Monsen 1988: 786). This suggests that previously recommended replacement allowances for average menstruating women may have been set too high. In fact, the RDA is higher than the average iron loss for all groups. As a result, many people take in less than the RDA level of iron without suffering dietary iron deficiency.
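Using the chapter’s own figures, the arithmetic behind this claim can be made explicit (the 18 percent absorption rate is the arbitrary assumption stated above):

$$0.18 \times 11.1\ \text{mg/day} \approx 2.0\ \text{mg/day absorbed},$$

comfortably above the 1.4 mg per day that menstruating women need to replace.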
Albeit rare, severely iron-deficient diets do exist, though not necessarily because of low iron intake. Adult East Indian vegetarians ingest large quantities of iron, as much as 30 mg per day (Bindra and Gibson 1986). At the same time, they consume such large amounts of inhibitory substances that some suffer from iron-deficiency anemia (Bindra and Gibson 1986). These items include tannin (especially from tea); dietary fiber; coffee; calcium phosphate; soy protein; and phytates in bran and nuts, such as walnuts, almonds, peanuts, hazelnuts, and Brazil nuts (Macfarlane et al. 1988; Monsen 1988; Muñoz et al. 1988). The larger the amount of phytate consumed, the less iron is absorbed (Hallberg, Brune, and Rossander 1989). What appears to occur is that although the diet contains sufficient iron for replacement of normal physiological losses, inhibitory factors limit the absorption of that iron.
In addition to malabsorption of iron, moderate-to-heavy consumption of fish has been associated with significantly prolonged bleeding times in both males and females (Herold and Kinsella 1986; Houwelingen et al. 1987; Sullivan 1989). The prolonged bleeding time presumably results in significantly more blood loss and lower iron stores. This may explain why young women who habitually eat fish as their major source of protein have levels of serum ferritin and anemia similar to those of strict vegetarians who consume iron-inhibitory substances (Worthington-Roberts, Breskin, and Monsen 1988). Here again, the amount of iron ingested is not directly related to levels of anemia. Occult blood loss and iron absorption are more important factors in fostering iron-deficiency anemia than is the actual amount of dietary iron ingested.
Illness and the Anemia of Chronic Disease
The anemia of chronic disease/inflammation has the same hematological presentation as dietary iron deficiency anemia, with the exception of normal to elevated serum ferritin levels. As stated earlier, both normal-to-elevated serum ferritin (reflecting adequate or above-normal iron stores) and elevated ESR indicate the presence of infection and/or the inflammatory process. In fact, such conditions, created by the body’s generalized nonspecific defense against invading pathogens or neoplasia (cancer), reduce the availability of circulating iron. This denies to many bacteria, parasites, and neoplastic cells the iron they need to proliferate. These hematological changes are caused by the production of interleukin-1, interleukin-6, and tumor necrosis factor alpha (Weinberg 1992).
There is abundant research to support the conclusion that anemia is one of the body’s defenses against disease (fever and the manufacture of various cytokines are other defenses [Kent, Weinberg, and Stuart-Macadam 1994]). A broad spectrum of illnesses is associated with the anemia of chronic disease (Cash and Sears 1989). The role of iron in enhancing infection, and of anemia as a defense, has been detailed in a number of publications by Eugene Weinberg (1974, 1984, 1986, 1990, 1992). Most bacteria and parasites require iron but cannot store it. They survive and thrive by extracting needed iron from their host. As a consequence, increased available iron can be detrimental to health, as can be seen in modern populations receiving iron supplements or consuming large amounts of dietary iron. For example, the incidence of serious bacterial infections increased significantly when infants were given intramuscular iron-dextran (Barry and Reeve 1977: 376):1 “When the iron administration was stopped the incidence of disease in Polynesians decreased from 17 per 1,000 to 2.7 per 1,000 total births.” Other research substantiated the association between an increased incidence of bacterial infections, such as Escherichia coli sepsis, and the administration of intramuscular iron-dextran in neonates (Becroft, Dix, and Farmer 1977).
A number of health problems are connected with increased iron intake. In New Guinea, infants with respiratory infections given iron-dextran had longer hospital stays and higher morbidity than those not given iron supplements (Oppenheimer et al. 1986a). Malarial infection was greatly increased in infants with higher transferrin saturation levels, leading the investigators to conclude that hypoferremia has a protective role against malaria (Oppenheimer et al. 1986b). Furthermore, in a study of 110 patients in Africa, those with anemia had fewer malarial attacks than those with higher iron levels (Masawe, Muindi, and Swai 1974). Anemic patients who did not exhibit signs of malaria developed malaria after iron therapy was initiated. In contrast to nonanemic Turkana from Kenya, mildly anemic Turkana who consume little meat, poultry, or other heme iron have a lower incidence of infectious diseases, including malaria, brucellosis, and amebiasis (Murray, Murray, and Murray 1980a). Among the Maasai, anemic individuals had a significantly lower incidence of amebic dysentery. Examination of the cow’s milk consumed by the Maasai “showed that it not only had a concentration of iron below the minimum necessary for the growth of E. histolytica [an amoeba] but also contained partly saturated lactoferrin and transferrin, which may actively compete with the parasite in the colon for ambient iron” (Murray, Murray, and Murray 1980b: 1351).
When associated with malaria and other infections, serum ferritin levels are generally increased in anemic children (Adelekan and Thurnham 1990). This indicates that iron stores are present in these children, despite the lowered serum iron, transferrin saturation, and hemoglobin levels, all of which suggests strongly that the anemia is not the result of dietary inadequacies. The body removes circulating iron to storage in an attempt to reduce its availability to invading malaria parasites or bacteria. Food taboos that restrict young children’s intake of iron by prohibiting the consumption of meat are found in many areas where malaria is endemic, from Africa to India to Malaysia and New Guinea (Lepowsky 1985). These taboos may be partially explained as an attempt to aid the body’s hypoferremic defense against malaria (Lepowsky 1985: 120).
Some diseases result in highly elevated transferrin saturation levels that make iron much more accessible to pathogens. In diseases such as leukemia, patients have been observed with sera that were 96 to 100 percent saturated (normal is 16 to 50 percent). As a consequence, leukemia patients are unusually susceptible to infection (Kluger and Bullen 1987). Thus in one study of 161 leukemia patients who died, 78 percent of the deaths were attributed to infection and not to leukemia directly (Kluger and Bullen 1987: 258).
Another investigation provides an excellent demonstration of the direct relationship between the anemia of chronic disease and infection/inflammation. Over a period of 30 days, researchers analyzed the blood of healthy, well-nourished, and nonanemic infants before and after they were immunized with live measles virus (Olivares et al. 1989). After immunization, hemoglobin, serum iron, and transferrin saturation levels fell significantly, whereas serum ferritin levels rose significantly as the body shifted circulating iron to storage. These levels persisted for 14 to 30 days, while the body maintained hypoferremia, or low levels of circulating iron, in an attempt to make iron less available. Even “in those infants with prior evidence of normal iron status, the viral process induced changes indistinguishable from iron deficiency” (Olivares et al. 1989: 855). The authors noted that changes in the white blood cells mimicked a bacterial infection, as did changes in iron levels; the rapid proliferation of the virus imitated the rapid proliferation of bacteria (Olivares et al. 1989: 855). The body was unable to differentiate between the two and responded in the same manner by producing hypoferremia.
Anemia in Prehistory
In prehistoric populations, the prevalence of anemia equaled or surpassed that found in today’s world. There was a general increase in anemia through time, from a few extremely rare occurrences in the Paleolithic (Anderson 1968) and Mesolithic (Janssens 1970) to more common occurrences in the Neolithic (Angel 1967) and the Bronze Age (Cule and Evans 1968). Anemia was even more frequent in recent groups, such as the prehistoric American southwestern Pueblos (El-Najjar 1977).
Prehistoric study is possible because anemia is identifiable in skeletal material by a cranial pathology called porotic hyperostosis (known as cribra orbitalia when the orbits are affected). Physical effects include cranial lesions characterized by a sievelike porosity involving parts of the outer skull, often causing a thinning of the outer table, textural changes, and sometimes a “hair on end” appearance (Moseley 1965; Stuart-Macadam 1987). Porotic hyperostosis was originally thought to result from hereditary anemias, such as thalassemia or sickle cell anemia. Later, roentgenograms (X-rays) of living young anemic children identified chronic acquired anemia as a cause of porotic hyperostosis (Shahidi and Diamond 1960; Powell, Weens, and Wenger 1965). More recent and conclusive studies confirm the link between porotic hyperostosis and acquired anemia (Stuart-Macadam 1987).
Whereas it is not difficult to identify anemia in skeletal populations, it is far more difficult to differentiate dietary iron-deficiency anemia from the anemia of chronic disease. At first it was thought that the chemical makeup of bones was an indirect measure of the amount of meat consumed (Bumsted 1980, 1985). However, problems of chemical contamination through mineral loss and contact with minerals in soil have yet to be resolved. A study specifically designed to test the reliability of current bone chemical analyses concluded that their “results suggest that postmortem alteration of dietary tracers in the inorganic phases of bone may be a problem at all archaeological sites and must be evaluated in each case” (Nelson et al. 1986: 1941). Other problems associated with stable isotope and trace element analyses of bone are numerous, and interpretations utilizing current techniques are not conclusive (Klepinger 1984; Aufderheide 1989; Keegan 1989; Sillen, Sealy, and van der Merwe 1989). However, indirect measures of dietary iron and disease, which rely on a basic understanding of iron, anemia, diet, and disease, are possible. The presence of porotic hyperostosis in nonhuman primates, such as chimpanzees, gorillas, orangutans, and various species of Old World monkeys, including baboons (Hengen 1971), reinforces the unlikelihood that an impoverished diet is frequently the cause.
Anemia in the New World
Poor diet, sometimes in combination with infectious diseases, has served as the usual explanation for skeletal evidence of anemia among prehistoric Native Americans from the eastern United States to South America. However, faunal remains, mobility patterns, new medical information, and other nontraditional lines of evidence indicate that disease, rather than diet, may often be a better interpretation.
Eastern United States

Although some investigators still claim a maize-dependent diet as the sole or primary cause of porotic hyperostosis in prehistoric southeastern skeletal populations (Robbins 1986), more recent reevaluations suggest that diet may not have been a major cause. During the Mississippian period, an increase in porotic hyperostosis in children, particularly those under the age of six, occurred in the lower Illinois Valley. Nutritional anemia alone was probably rare in this age group, especially among breast-fed children. However, the condition coincides with a general increase in destructive bone lesions indicative of chronic inflammations in skeletal populations (Cook 1984: 257-9). During the period of these burials (between A.D. 1100 and 1300), Cahokia grew to be the largest pre-Columbian city in North America, with a population of 25,000 to 43,000 (Cook 1984). Non-Western sedentary, aggregated communities often have heavy pathogen loads, as discussed in the section “Anemia of Sedentism.” The skeletal populations exhibit a concomitant increase in anemia and inflammation. This combination illustrates the link between increased population density and increased health problems. Diet cannot be implicated in the anemia because, during this time, the:
Mississippian diet based on archaeological data and ethnohistoric documentation … seems remarkably abundant in the essential elements: protein from both animal and vegetable sources, carbohydrates from cultivated and collected plant foods, and oils from seeds, nuts, and animal fat, all of these rich in minerals and vitamins from undepleted soils.… This ecological diversity, coupled with the sophisticated systems of multiple cropping, surplus food storage, and redistribution described by early European observers, offered protection against nutritional stress from local fluctuations in weather and resource populations. (Powell 1988: 58)
Also during the Mississippian period, at Dickson Mounds, Illinois, the incidence of porotic hyperostosis rose from 13.6 percent to 51.5 percent, coinciding with dramatic increases in infectious bone lesions (Goodman et al. 1984: 289). According to the investigators, infection rates rose from 25 percent of the tibiae examined in samples from earlier periods to 77 to 84 percent during the later Mississippian period (Goodman et al. 1984). Although the authors attributed the rise in infectious lesions to nutritional stress brought on by an intensification of agriculture in the later period, a change in demography and, particularly, in aggregation may well be responsible. Studies indicate that the earlier Late Woodland sites in the area had population estimates of 50 to 75 individuals; later Mississippian sites had 440 to 1,170 individuals (Goodman et al. 1984). Such a dramatic demographic change must have had serious health consequences. These Illinois examples are not isolated cases for the eastern United States. In fact, similar increases in infection and porotic hyperostosis through time are reported for the Ohio River Valley (Perzigian, Tench, and Braun 1984) and Georgia (Larsen 1984).
Southwestern United States
A maize-dependent diet has also been implicated in the increase through time of porotic hyperostosis in skeletal material from the southwestern part of the United States. Mahmoud Y. El-Najjar and colleagues were among the first and certainly the most prolific to detail the association between frequency of porotic hyperostosis over time and increasing horticultural activities (El-Najjar et al. 1976; El-Najjar 1977). However, as in the eastern United States, the archaeological data from the Southwest do not seem to support a view of diets sufficiently impoverished as to account for the spectacular increase in porotic hyperostosis through time in Anasazi skeletal populations. Instead, archaeological, ethnographic, and ethnohistorical sources all indicate a varied diet, including meat from both wild and domesticated animals, with the latter including turkeys and dogs (Kent and Lee 1992). In addition, wild plants were consumed in combination with maize, beans, and squash, all of which constituted an adequate diet in terms of iron (Kent 1986).
What, then, caused the increase in porotic hyperostosis? As the Anasazi began to adopt horticulture in general, and maize cultivation in particular, they became more sedentary and aggregated, living in communities called pueblos. Although this was not a linear progression that occurred everywhere, neither was the rise in porotic hyperostosis linear. Higher frequencies of porotic hyperostosis occur in skeletal populations from large, sedentary, aggregated pueblos in which disease was more common than in skeletal populations from smaller, more dispersed settlements where it would have been more difficult for disease vectors to infect populations (Kent 1986). Coprolite data (preserved fecal material) support this interpretation by documenting a lower parasite load for the upland populations in contrast to the valley groups. Coprolites containing parasites from various Anasazi sites are significantly correlated with porotic hyperostosis, whereas those containing different portions of either maize or animal residue are not (Reinhard 1992). In other words, dietary evidence of reliance on meat or maize was not correlated with skeletal evidence of anemia, but parasitism was correlated with porotic hyperostosis.
Elsewhere in the United States
Anemia, apparently caused by chronic diseases and inflammations, produced relatively high levels of porotic hyperostosis throughout the prehistoric United States. It can be seen in coastal California Native American skeletal populations who had a heavy dietary dependence on marine resources (Walker 1986). Eskimo skeletal populations also reveal relatively high percentages of porotic hyperostosis, some of which are even higher than those of southwestern and eastern prehistoric populations (Nathan 1966). Certainly the Eskimo example of porotic hyperostosis was not the result of an insufficient intake of meat. Instead, it was probably the result of seasonally sedentary aggregations and associated disease common to the winter villages. Prehistoric skeletons from all parts of Texas exhibit much lower incidences of porotic hyperostosis; less than 5 percent of 348 adult crania and 15.1 percent of 73 juvenile crania had any evidence of porotic hyperostosis (Goldstein 1957). Although precise locations of the various skeletal material were not cited, it is probable that the populations examined represent the nomadic groups that inhabited much of Texas; if so, this would explain the low incidence of infectious disease and porotic hyperostosis.
Central and South America
High frequencies of porotic hyperostosis were found on the crania of prehistoric Maya from Central America. For example, 21 skulls of Mayan children 6 to 12 years old were found in the Sacred Cenote of Chichén Itzá in the Yucatan; 67 percent had porotic hyperostosis. A number of the adult crania from the cenote also had healed examples of the pathology (Hooton 1940). Prehistoric Maya living in Guatemala suffered both from a high rate of infectious diseases, as evidenced by lesions thought to be related to yaws or syphilis, and from dietary inadequacies, such as vitamin C deficiency, as evidenced by bone lesions (Saul 1973). As a consequence, it is difficult to determine whether the porotic hyperostosis was caused by true dietary deficiency, by infectious diseases, or by a combination of the two.
In contrast, a study of Ecuadoran skeletal populations clearly demonstrates the influence of disease in promoting porotic hyperostosis. Although coastal populations ate iron-rich seafood, in addition to meat and cultivated products, their skeletal remains reveal progressively higher frequencies of porotic hyperostosis over time as they became more sedentary and aggregated (Ubelaker 1992). Coastal peoples suffered from conditions aggravated by large-village life, such as probable parasitic infestations, including hookworm, which causes gastrointestinal bleeding.2 An examination of mummies from coastal Peru and Chile indicates that the most common cause of death was acute respiratory disease in adults and children (Allison 1984). Approximately half of the mummified individuals died from their first attack of pneumonia (Allison 1984). In contrast, porotic hyperostosis is not common in skeletons of maize farmers who occupied more dispersed habitations in the highlands of Ecuador, where parasites cannot survive because of the altitude and cold climate (Ubelaker 1992).
Although prehistoric cases of dietary-induced porotic hyperostosis may occur, they probably are not representative of entire populations. Iron-deficiency anemia caused by extremely inadequate diets is thought to result from factors not common until the twentieth century. Although not all paleopathologists (particularly those who are more diet-oriented and less hematologically oriented) agree with the emphasis placed here on the anemia of chronic disease, many recent studies and reinterpretations agree that this is a better explanation than iron deficiency.
Anemia in the Old World
The Old World represents a vast area occupied by humans for a long time period, and thus only a few examples of prehistoric incidences of anemia are presented here.
Hereditary (for example, thalassemia) and acquired cases of anemia from the Mediterranean area cannot be differentiated, although porotic hyperostosis is found in skeletal populations from the area (Angel 1984). In the Levant, Mesolithic Natufians apparently had good health, as did the Paleolithic hunter-gatherers who preceded them, although sample size is very small (Smith et al. 1984). They remained a relatively healthy population during the early Neolithic. Deterioration began in later periods when population aggregations grew larger and more permanent and infectious diseases became endemic. According to P. Smith, O. Bar-Yosef, and A. Sillen, “This deterioration … seems to be related to chronic disease rather than to periodic bouts of food shortages, as indicated by the distribution of developmental lesions in the teeth and bones and the poor condition of all individuals examined” (Smith et al. 1984: 129).
In South Asia, there was a similar trend toward increasing porotic hyperostosis and infection, which has been attributed to nutritional deficiencies and disease (Kennedy 1984) but may also have been caused by demographic factors. Skeletons from the Harappan civilization city of Mohenjo-Daro on the Indus River, Pakistan, have a relatively high incidence of porotic hyperostosis; this led one anthropologist to suggest the possible presence of thalassemia and malaria in the area 4,000 years ago (Kennedy 1984: 183). However, the occurrence of malaria is difficult to evaluate, particularly in light of the high population density of Mohenjo-Daro, which could have promoted a number of infectious diseases common to such sedentary aggregations.
Porotic hyperostosis is found at varying frequencies throughout European prehistory (Hengen 1971), but as no systematic temporal study has been done to date, it is not possible to delineate and interpret trends. However, an interesting geographical pattern has been discerned suggesting that populations located closer to the equator have higher levels of porotic hyperostosis. This might be the result of generally higher pathogen loads in these areas (Stuart-Macadam 1992). However, porotic hyperostosis was well represented, particularly in children, in a large Romano-British cemetery (Stuart-Macadam 1982). This is interesting because it has been suggested that lead poisoning, which can cause severe anemia, was one of the factors that contributed to the ultimate collapse of the Roman Empire (Gilfillan 1965).
According to some research, prehistoric Nubian populations experienced nutritional deficiencies after they became agriculturalists (Armelagos et al. 1984). This interpretation is based on the occurrence of porotic hyperostosis, long-bone growth patterns compared to those of North Americans, dentition abnormalities, and premature osteoporosis. Moreover, bone growth patterns of skeletons from a medieval Nubian Christian cemetery (A.D. 550 to 1450) have been used to infer nutritional stress (Hummert 1983). Nonetheless, it is usually recognized that such stress can be caused by a number of nutritional deficiencies not necessarily related to iron. More recent coprolite data have been employed in support of the hypothesis that an impoverished diet caused the high frequency of porotic hyperostosis in Christian Nubian populations (Cummings 1989). However, it is possible to interpret the data quite differently. For example, meat does not usually preserve in coprolites, and humans do not usually ingest the large bones that would be indicative of meat consumption. Therefore, the amount of meat consumed may be substantially underrepresented in coprolite data. Bearing this in mind, it is impressive that 33.3 percent of the 48 coprolites analyzed contained evidence of fish bones or scales, as well as one pig bone and an unidentifiable mammal bone (Cummings 1989: 86-92). Such evidence contradicts the contention that a diet poor in iron produced the high incidence of porotic hyperostosis in this population.
Anemia in Australia
Frequencies of both infection and anemia (porotic hyperostosis) are low in the desert region of Australia, particularly when compared to other parts of the continent, such as the Murray River area (Webb 1989). Murray River skeletons display a pronounced increase in porotic hyperostosis and infections coinciding with archaeological evidence of restricted mobility, aggregation, and possible overcrowding (Webb 1989: 145-8). Such evidence again suggests that the anemia in question is the result of chronic disease and not diet. Though lower than in the Murray River area, the relatively high incidence of anemia and infection among prehistoric Aborigines occupying the tropical portions of Australia can be attributed to parasitism, such as hookworm infection (Webb 1989: 155-6).
Acquired Anemia in Today’s World
The perspective of dietary deficiency and the anemia of chronic disease presented here permits new insights into a variety of issues facing contemporary populations. Because iron-deficiency anemia and the anemia of chronic disease have not always been distinguished, particularly in earlier studies, the two are discussed together and are differentiated whenever possible.
Anemia of Sedentism
As noted, heavy pathogen loads are characteristic of non-Western sedentary aggregations. In such settings, high morbidity is visible not only hematologically but also in the frequency of parasitic infections. For example, studies indicate that nomadic Amazonians had a lower frequency of parasitic infection than seminomadic horticulturalists and sedentary villagers. In these latter populations, 100 percent of some age groups were infected with parasites (Lawrence et al. 1980). As a whole, the sedentary populations had many more multiple parasitic infections, ranging from 4.2 to 6.8 species per person (Lawrence et al. 1980). Sedentary and aggregated Brazilian Amazonian villages had roundworm (Ascaris) infection rates ranging from 65 to 100 percent of the population and heavy worm burdens, including hookworm (Necator americanus) and whipworm (Trichuris trichiura) (Chernela and Thatcher 1989). This situation contrasts with the inhabitants of more nomadic and dispersed villages, who had half that rate of roundworm infection (34 percent) and light worm burdens that were asymptomatic (Chernela and Thatcher 1989).
From 60 to 100 percent of the residents of two Colombian villages were found to be infested with parasites (Schwaner and Dixon 1974). In the village that lacked sanitation of any kind and in which shoes were rarely worn, 100 percent of the population had either double or triple infections (roundworm, whipworm, and hookworm). In the other village, which had outdoor latrines and where shoes were worn, 60 percent were infected, and 70 percent of those were infected by more than one species (Schwaner and Dixon 1974: 34). Sedentism, aggregation, and the lack of adequate sanitation are all implicated in creating breeding grounds for parasitic and other types of infections. Such conditions lead to high morbidity and chronic hypoferremia as the body attempts to defend itself against the heavy pathogen load. Furthermore, heavy infestations of hookworm and other parasites cause blood loss that even good diets cannot replenish.
The relationship between sedentism and anemia can be seen more easily by comparing nomadic and sedentary people who consume similar diets, thereby factoring out diet as a causal variable of anemia. Hematological research among recently sedentary and still-nomadic Basarwa (“Bushmen” or San, as they have been referred to in the literature) illustrates hypoferremia operating in a modern population. The 1969 population of the !Kung Basarwa (who live in the northwestern part of the Kalahari Desert) contained a few ill individuals, as do most populations, but was judged mostly healthy (Metz, Hart, and Harpending 1971). Their meat intake did not change dramatically between the dry seasons of 1969 and 1987 (Kent and Lee 1992). There were, however, significant changes in mobility patterns, from a relatively nomadic to a relatively sedentary way of life, and concomitantly higher morbidity rates ensued. The number of individuals with below-normal serum iron and transferrin saturation levels also rose significantly between 1969 and 1987 (Kent and Lee 1992). It is significant that the 1987 Dobe !Kung population, with a roughly adequate meat intake, had more individuals with subnormal serum iron and transferrin saturation values than did the Chum!kwe !Kung population, with an acknowledged deficient diet (Fernandes-Costa et al. 1984; Kent and Lee 1992). It is furthermore significant that no Dobe !Kung individuals had the low serum ferritin values that would indicate a dietary deficiency.
To evaluate the role of sedentism in promoting the anemia of chronic disease, a hematological study was conducted in 1988 among a different group of recently sedentary Basarwa living near the Kutse Game Reserve in the eastern half of the Kalahari (Kent and Dunn 1993). Their diet was repetitive but adequate; meat was consumed several times a week (Kent and Dunn 1993). The Kutse Basarwa often complained of illness; respiratory problems were common. Both serum iron and transferrin saturation means were significantly lower for the 1988 Kutse Basarwa than those obtained in 1969 or 1987 among the Dobe Basarwa (Kent and Dunn 1993). At the same time, serum ferritin levels were higher in the adult Kutse population than in the 1987 adult !Kung. This is consistent with the view that the anemia of chronic disease is more prevalent in situations of high morbidity where the body attempts to protect itself from continuous cycles of insult. The children’s mean ferritin remained approximately the same (Kent and Dunn 1993).
If hypoferremia operates in response to heavy and chronic pathogen loads, then it should be visible in more than just the hematological profile of a population. Ethnographic observations and informant interviews indicate that there is a high level of morbidity at Kutse but not a deficient diet (Kent and Dunn 1993). Both the 1987 !Kung and 1988 Kutse Basarwa hematological studies indicate that the anemia of chronic disease is common in these populations and is activated in situations of high morbidity. A 1989 follow-up hematological study of the Kutse community reinforces the interpretation of an adequate diet but high morbidity as a result of aggregation (Kent and Dunn n.d.). As pointed out, anemia is not a body defect or disease but is actually a defense against a heavy pathogen load, which operates by reducing circulating levels of iron required by organisms to proliferate.
Anemia of Development and Modernization
Development and modernization projects are often promoted both by local governments and by international businesses and foreign governments as a means to westernize a developing country. Generally, nontraditional subsistence activities are encouraged because the government or politically dominant group believes the change to be beneficial. Extremely deficient diets documented by Susan Stonich (1991), Benjamin Orlove (1987), and others are the consequence of governmental pressure to grow export crops for a world market, which is a relatively recent phenomenon. This type of severe dietary deprivation has usually been correlated with colonialism (Franke 1987; Ross 1987).
Nomadic peoples have been targets of development and modernization schemes. For example, sedentarization at Chum!kwe, located in Namibia on the western edge of the Kalahari Desert, began in 1960. At that time the South African administration of the Namibia Territory decided to move all the !Kung Basarwa of Nyae Nyae into a single settlement (Kent and Lee 1992). For the next 20 years the waterhole at Chum!kwe (augmented by boreholes) supported a standing population of 800 to 1,000 people. These conditions of high aggregation, with their attendant social, nutritional, and alcohol-related stresses, prevailed when a 1981 study was made (Fernandes-Costa et al. 1984).
At Chum!kwe in 1981, hunting and gathering had fallen to very low levels due to the high level of aggregation and reduced mobility of the population (Fernandes-Costa et al. 1984). Store-bought foods and government rations, high in refined carbohydrates, formed the mainstay of the diet; some milk and grain from home-grown sources supplemented the diet. Meat was eaten infrequently. By the 1980s alcohol abuse had become a major social problem, with Saturday night brawls often resulting in injury and death (Volkman 1982). Young children as well as adults drank beer, which sometimes provided the only calories consumed all day (Marshall and Ritchie 1984: 58). The health of the Chum!kwe population was portrayed as poor.
The investigators attributed the incidence of anemia primarily to nutritional causes. However, 35.9 percent of all adults had a serum ferritin value above 100 nanograms per milliliter (ng/mL), indicating the presence of the anemia of chronic disease. No men and only 6 percent of women had subnormal serum ferritin levels (Fernandes-Costa et al. 1984). Diet was also thought to be responsible for the 33 percent of children who had subnormal serum ferritin levels (less than 12 ng/mL), although bleeding from parasitic infections could be another explanation. In fact, 3 of 40 stool samples contained Giardia lamblia and 8 of 40 contained hookworm (Necator americanus) (Fernandes-Costa et al. 1984: 1302). Both parasites can cause lowered serum ferritin and other depressed iron indices through blood loss and competition with the host for nutrients. Unfortunately, the age and sex of the afflicted are not mentioned; however, children tend to become infected with parasites more often than adults and tend to carry heavier parasite loads. Thus, parasites might explain the number of children with subnormal serum ferritin values.
A very similar situation occurred among the Australian Aborigines. They were encouraged to live at settlements where they subsisted on government subsidies of flour and sugar (Taylor 1977). The consequence of this planned social change implemented by the government was malnutrition and frequent infections:
After weaning from breast milk, the children graduated to the staple and nutritionally deficient diet of the settlement and entered its unhygienic, usually overcrowded, and certainly highly pathogenic environment. Here they acquired repeated respiratory and gastrointestinal infections which, if they did not prove fatal, may have impaired the children’s ability to absorb nutrients from the diet [through malabsorption caused by the anemia of chronic disease and by numerous types of infections]. Thus the problem was compounded and a vicious cycle seemed to be set up in which the survivors of the process proceeded to adulthood to rear more children under the same circumstances. (Taylor 1977: 147-8)
Anemia around the World
In the United States the frequency of anemia is fairly low, approximately 6 percent or less for children under 17 years of age; more girls than boys have subnormal levels (Dallman 1987). The number of children with subnormal hemoglobin values declined from 7.8 percent in 1975 to 2.9 percent in 1985 (Centers for Disease Control 1986). A similar decline occurred in adults, as indicated by the Second National Health and Nutrition Examination Survey (NHANES II, 1976-80). In this study of 15,093 subjects, anemia was found in 5.7 percent of infants, 5.9 percent of teenage girls, 5.8 percent of young women, and 4.4 percent of elderly men (Dallman, Yip, and Johnson 1984). The later surveys included serum ferritin measurements and differentiated between iron-deficiency anemia and the anemia of chronic disease (Expert Scientific Working Group 1985). The decline may therefore be more illusory than real, in that the more recent surveys separated the two anemias, which made the overall frequency of each appear to be lower. Although based on a small sample size, the anemia of chronic disease accounted for 10 percent of anemic females between 10 and 44 years of age, 34 percent of anemic females between 45 and 74 years of age, and 50 percent of anemic males (Expert Scientific Working Group 1985: 1326).
There are a few populations with a lower rate of anemia than that in the United States. One example is the Waorani Indian horticulturalist-hunters of eastern Ecuador. In 1976, none of the 114 individuals studied, including males and females of all ages, had subnormal hemoglobins or hematocrits, despite a few light helminth infestations and four children with presumed bacterial-induced diarrhea (Larrick et al. 1979: 155-7). Once again, the low frequency of anemia was probably the result of a seminomadic, dispersed settlement pattern.
Anemia frequency is much higher in other parts of the world. During the 1970s, studies conducted of nonwhite children in South Africa showed a higher prevalence of iron-deficiency anemia (38.8 percent) and anemia of chronic disease (18.8 percent) than figures reported for the same time period in the United States (Derman et al. 1978).
In regions such as Southeast Asia, a high incidence of hookworm infection causes bleeding. Surveys in these areas show that the frequency of anemia is correspondingly high, ranging from 25 to 29 percent in males and from 7 to 45 percent in nonpregnant women and children (Charoenlarp et al. 1988). Approximately 20 percent of the subjects given oral iron for three months as part of a study in Thailand remained anemic, and some of the individuals ended up with serum ferritin levels above 100 µg/L (Charoenlarp et al. 1988: 284-5). Both the lack of response to oral iron supplementation and serum ferritin levels above 100 are common indications of the anemia of chronic disease; the other incidences of anemia can be attributed to varying degrees of hookworm and other parasitic infections. From 5 to 15 percent of women, 3 to 27 percent of children, and 1 to 5 percent of men in Burma were anemic, but no serum ferritin values were measured; hookworm infection rates varied from 0 to 70 percent, depending upon locality (Aung-Than-Batu et al. 1972).
Anemia was found in 22 percent of pregnant women, 12 percent of nonpregnant women, and 3 percent of men in seven Latin American countries (Cook et al. 1971). Unfortunately, serum ferritin was not measured, and so it is not possible to distinguish the percentage of individuals with iron deficiency or with anemia of chronic disease.
Anemia and Parasites
Parasite load is an influential factor in the frequency of anemia around the world. Parasites compete with the host for nutrients and can cause bleeding that results in substantial iron losses. The body responds to parasitic infections with hypoferremia to deprive the parasites of needed iron. Parasites are most common in the tropical regions of the world, where their distribution is related to the ecological complexity and diversity that characterize such habitats (Dunn 1968; Stuart-Macadam 1992). Hookworms can cause as much as 0.21 ml of blood loss per worm per day in infected individuals (Ubelaker 1992).
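The scale of such losses can be illustrated with a rough calculation. The 0.21 ml figure is from Ubelaker (1992); the worm burden of 50 and the approximation that whole blood carries about 0.5 mg of iron per milliliter are assumptions introduced here only for illustration:

$$50\ \text{worms} \times 0.21\ \text{ml/day} \approx 10.5\ \text{ml of blood per day};\qquad 10.5\ \text{ml} \times 0.5\ \text{mg/ml} \approx 5\ \text{mg of iron per day},$$

several times the 1 to 3 mg that healthy individuals lose daily and more than an ordinary diet can be expected to replace.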
In Bangladesh, where over 90 percent of children have helminth infections, between 74 percent and 82 percent of the children tested had subnormal hemoglobin levels (Florentino and Guirriec 1984). Serum ferritin studies were not included. However, because of the blood loss associated with helminth infections, individuals were probably suffering from iron-deficiency anemia, albeit not diet induced. Of 1,110 children tested in India, 33.8 to 69.4 percent were anemic; 26.7 to 65.3 percent suffered from roundworm (Ascaris) and Giardia infections, both of which cause anemia (Florentino and Guirriec 1984). Although the link with parasites is a little more tenuous among Nepal villagers, 67.6 percent of nonpregnant women had subnormal hemoglobin levels; the high consumption of dietary iron absorption inhibitors may also be a factor in causing anemia in this population (Malville 1987).
In Indonesia, 37.8 to 73 percent of children studied were anemic, and 22 to 93 percent were infested with hookworm, probably causing their anemia (Florentino and Guirriec 1984: 85). This study also reported that in Malaysia and Singapore, 83 percent of 30 children were anemic; in the Philippines, 21.1 to 47.2 percent of 2,509 children, depending on their age group, were anemic; and in China, 23 percent of 1,148 children were anemic.
Among various Pygmy hunter-gatherer groups, parasite levels are extremely high because of fecal contamination and the high level of parasites characteristic of tropical environments (Pampiglione and Ricciardi 1986). Various types of malaria (Plasmodium falciparum, Plasmodium malariae, and Plasmodium ovale) were found in 18.1 to 59.4 percent of 1,188 people from four different groups of Pygmies living in the Central African Republic, Cameroon, and Zaire (Pampiglione and Ricciardi 1986). Numerous other parasites were also present: hookworms (41.3 to 85.8 percent); roundworms (16.7 to 64.9 percent); amoebas (Entamoeba, 6.4 to 35.8 percent); Giardia (5.3 to 11.4 percent); whipworm (Trichuris, 77.9 to 91.9 percent); and others (Pampiglione and Ricciardi 1986). Many individuals suffered from several parasitic infections, in addition to yaws (10 percent with active symptoms), scabies, chiggers, and other afflictions (Pampiglione and Ricciardi 1986: 160-1). Neighboring Bantu-speaking farming peoples, despite more access to Western medical care and medicine, also have a high incidence of parasitism. Malaria (P. falciparum) was found in 91.2 percent of 321 persons; 26.8 percent suffered from amoebas; 53 percent from roundworms; 80.3 percent from hookworms; 69.8 percent from whipworm; and 16.5 percent from Giardia (Pampiglione and Ricciardi 1986: 163-4).
In Liberia, pregnant women in one study also had high rates of parasitism: 38 percent had hookworm; 74 percent had roundworm; and 80 percent had whipworm (Jackson and Jackson 1987). Multiple infections were also common; between 24 and 51 percent of the women had two infections and 12.5 percent had three infections (Jackson and Jackson 1987).
In Bolivia, 11.2 to 44 percent of children tested were anemic (Florentino and Guirriec 1984). Of these, 79 percent suffered from roundworm and 12 percent had hookworm infections (Florentino and Guirriec 1984).
The intention here is not to provide a detailed overview of anemia throughout the world, which would require an entire book. Rather, it is to show that anemia is a widespread condition in non-Western societies due to endemic chronic infections and parasitic diseases. In some of the cases mentioned in this section, poor health in general resulted from ill-planned development schemes or encouragement of cash or export crops at the expense of subsistence farming, creating contaminated water supplies and poor sanitation. This should present a challenge to Western societies to eradicate disease aggressively in developing countries. Development agencies need to become more aware of the underlying causes of anemia and high morbidity in many of these countries. Attempts to change agricultural patterns to produce more food per acre or the provision of iron supplements may not achieve the goal of eliminating anemia on a worldwide basis.
Anemia in infants and children. The decline in the number of children with acquired anemia in Western nations is as impressive as the decline in the number of adults with anemia. One study in the United States showed that in 1971, 23 percent of 258 children were anemic, whereas in 1984, only 1 percent of 324 children were anemic (Dallman and Yip 1989). More recent surveys show that of those few children who are anemic, the majority have the anemia of chronic disease rather than dietary iron deficiency, as evidenced by normal-to-elevated serum ferritin levels (Reeves et al. 1984; Jansson, Kling, and Dallman 1986).
There is a vast literature on anemia in infants and children (for example, Pochedly and May 1987). Most of it relates the anemia to the rapid growth of children and the low iron content of their diets, although many studies do not distinguish between iron-deficiency anemia and the anemia of chronic disease (Ritchey 1987). For example, a study of 148 British toddlers claims that lower hemoglobin levels are associated with two dietary factors: (1) prolonged breast-feeding and (2) early introduction of whole cow’s milk (Mills 1990). Consumption of cow’s milk before 12 months of age has been shown to cause gastrointestinal bleeding in 39.5 percent of infants fed cow’s milk, versus 9.3 percent of infants fed formula (Fomon et al. 1981).
We cannot assume, however, that all of these cases of childhood anemia were the result of dietary factors, because serum ferritin was not measured. In fact, there are more compelling reasons to suggest that at least some of the anemia was the anemia of chronic disease. Women who breast-feed their children the longest (a year or more) in Western societies tend to be the underprivileged, who are also subjected to higher rates of infection or inflammation due to crowding, inadequate housing, and the stress of minority status (see the section “Anemia and Sociopolitical Class and Race”). Such children may be at higher risk from pathogens and have a greater incidence of the anemia of chronic disease. Clearly, serum ferritin must be measured in order to draw reliable conclusions from any study of anemia. It is interesting to note that a study of 9 hospitalized children (6 fed formula and 3 fed cow’s milk) indicated that those with lower hemoglobin levels, attributed to starting cow’s milk at 6 months of age, were less likely to be hospitalized for infections than the formula-fed infants, who had higher hemoglobin levels (Tunnessen and Oski 1987). This could be interpreted as an effect of lower hemoglobin levels, which provide protection against infections, both in the contracting of diseases and in their virulence. The subnormal serum ferritin values found in 17.4 percent of infants fed cow’s milk, versus only 1 percent of infants fed enriched formula, could be the result of increased diarrhea and gastrointestinal bleeding associated with cow’s milk feeding, as well as of allergies to cow’s milk, found in 0.3 to 7.5 percent of children (Foucard 1985; Tunnessen and Oski 1987).
Further difficulty in interpreting the etiology of anemia in children arises from the fact that most minor illnesses that commonly afflict children, as well as immunizations, can significantly depress hemoglobin levels (Dallman and Yip 1989). Iron-deficiency anemia is simulated in these cases because hemoglobin levels drop, but the anemia of chronic disease is the actual cause, as evidenced by normal serum ferritin levels. Moreover, serum iron levels drop significantly during the first year of life and are lower than adult levels as a normal physiological developmental change (Saarinen and Siimes 1977; Dallman and Reeves 1984).
In an attempt to combat anemia, most health-care workers routinely recommend iron-fortified infant formula without first ascertaining whether an infant is anemic and, if so, why. For example, the Women, Infants, and Children (WIC) program requires disadvantaged women to use iron-fortified formulas; nonfortified formulas are not provided (Kent, Weinberg, and Stuart-Macadam 1990). Fortified infant formulas were routinely prescribed by all but 16 percent of 251 physicians in Washington State (Taylor and Bergman 1989). It is hoped that the information presented here demonstrates the potential harm of this practice. Fortified formulas should be used only in cases where infants have anemia due to bleeding or other conditions known to create iron levels insufficient to maintain health.
Anemia and Pregnancy
Pregnancy is often associated with a mild iron deficiency that has been linked to the nutritional needs associated with the rapid growth of a fetus. However, a few researchers have suggested that slight hypoferremia may help defend mother and fetus from invading pathogens (Weinberg 1987; Stuart-Macadam 1987). Such a defense may be particularly valuable during the latter phases of gestation when cell-mediated immunity lessens to prevent immunological rejection of the fetus (Weinberg 1987). That lower iron levels are a normal physiological part of pregnancy is supported by a number of studies that have failed to demonstrate any benefit derived from routinely prescribed prophylactic iron (Hemminki 1978; Bentley 1985). Nonetheless, many physicians continue to recommend that pregnant women take oral iron supplements.
There are, of course, anemic pregnant women whose serum ferritin values fall below the normal cutoff of 12 ng/ml. Premature labor was recorded in 48 percent of pregnant women with serum ferritin values below 10 ng/ml, in contrast to 11 percent of those with normal serum ferritin levels (Goepel, Ulmer, and Neth 1988; Lieberman et al. 1988). However, a very large study of 35,423 pregnant women reported no reliable association between anemia and problem pregnancies:
When the hematocrits of women in term labor were compared with those of women in preterm labor, a spurious dose-response effect for anemia was created. We conclude that anemia is not a strong factor in the pathogenesis of preterm birth and that comparison of hematocrits from women who are in preterm and term labor produces biased results (Klebanoff et al. 1989: 511).
Furthermore, and importantly, “we do not believe that there is sufficient evidence to justify a randomized clinical trial of treatment of borderline anemia during pregnancy” (Klebanoff et al. 1989: 515).
Anemia and the Elderly
Elderly men and postmenopausal women have the highest incidence of the anemia of chronic disease among all North Americans surveyed (Cook et al. 1986). Although some physicians have regarded anemia as a “normal” part of aging, a number of studies have shown that it often is the anemia of chronic disease resulting from increased vulnerability to pathogens and neoplasia, perhaps because of lowered immune defense (Zauber and Zauber 1987; Daly and Sobal 1989; Thomas et al. 1989). Serum ferritin levels rise with age, probably because the elderly have increased susceptibility to systemic insults (Zauber and Zauber 1987; Baldwin 1989; Daly 1989; Stander 1989). In one survey, only 13 percent of 259 elderly persons were anemic, but of those who were, 77 percent had the anemia of chronic infections (Guyatt et al. 1990: 206). The frequency of infectious diseases may not be primarily the result of age alone but, instead, of social factors involved with aging in Western society, such as depression, poor care, and crowding in nursing homes. That is, the anemia of chronic disease may be more common among the elderly than in the population in general because of social variables unique to this group.
Anemia and Sociopolitical Class and Race
Whatever its cause, diet or chronic disease, anemia is still the poor person’s disease. Although the prevalence of anemia among African-Americans has dropped, as it has for all Americans, the frequency among them is still higher than among other groups: Anemia among African-Americans declined from 21 percent between 1975 and 1977 to 19.2 percent between 1983 and 1985 (Yip et al. 1987). Unfortunately, serum ferritin was not measured in the earlier surveys, making conclusions based on improved iron and general dietary nutrition problematic. However, later studies that include ESR (the sedimentation rate, an indication of infection) show that the anemia of chronic disease is more prevalent among lower socioeconomic classes, including African-Americans (Yip and Dallman 1988).
It was once proposed that a racial characteristic of blacks is a significantly lower hemoglobin level than that of whites, and as a result, some researchers suggested using separate hemoglobin standards based on race (Garn, Smith, and Clark 1975; Garn and Clark 1976; Garn, Shaw, and McCabe 1976). However, only hemoglobin and hematocrit were analyzed in these studies. Genetics, paleontology, physiology, anatomy, and other sciences all suggest that there are neither absolute races of humans nor corresponding genes that are restricted solely to one segment of the human population; that is, there are no “black” genes that all blacks share.4 So-called racial groups are based on continuous traits that are arbitrarily divided into supposedly discrete groups. This is particularly true in Western society where so much gene flow (that is, interracial matings) has occurred.
What, then, accounts for the lower hemoglobin levels among African-Americans? There are a number of sociological reasons why one subgroup is subjected to more bodily insults than another, particularly in a society where racism is, unfortunately, all too common (Kent 1992). Hematological data support this contention. Later studies that measured serum ferritin show that whereas African-Americans have lower hemoglobin levels, they concomitantly have higher serum ferritin levels, ranging from 271.7 ng/ml in black men to 111 ng/ml in black women; this contrasts with 93.63 ng/ml in white men and 37.9 ng/ml in white women (Blumenfeld et al. 1988). Even though sedimentation rates (or ESR) were not elevated among a sample of African-Americans in one study, there was a higher incidence of chronic disease, as indicated by the number of serum ferritin levels above 100 ng/ml, which, in combination with low hemoglobin levels, defines the anemia of chronic disease/inflammation. Other investigations corroborate these findings. Of 78 African-Americans studied, 17.6 percent had elevated serum ferritin levels, leading the investigators to conclude that in addition to the anemia of chronic disease, there were also a number of individuals with iron overload who suffered from occult inflammatory conditions (Haddy et al. 1986: 1084). The reason that more blacks than whites have the anemia of chronic disease is probably related to poverty: Many suffer from overcrowding, inadequate shelter, poor medical care, and the stress of minority status.
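Because several of the studies discussed in this chapter turn on the same pairing of indices, a schematic illustration may be useful. The short Python sketch below encodes the distinction only as it is described here: the 12 ng/ml and 100 ng/ml serum ferritin figures are those cited in the text, whereas the function name, the hemoglobin cutoff, and the “indeterminate” branch are hypothetical conveniences for illustration, not clinical criteria.

```python
# Minimal sketch of the hematological distinction drawn in this chapter.
# The ferritin figures (12 ng/ml subnormal, 100 ng/ml elevated) are those
# cited in the text; the function name, the hemoglobin cutoff, and the
# "indeterminate" branch are hypothetical, for illustration only.

def classify_hypoferremia(hemoglobin_g_dl: float,
                          serum_ferritin_ng_ml: float,
                          hemoglobin_cutoff_g_dl: float = 12.0) -> str:
    """Pair hemoglobin with serum ferritin to suggest a type of anemia."""
    if hemoglobin_g_dl >= hemoglobin_cutoff_g_dl:
        return "not anemic at this hemoglobin cutoff"
    if serum_ferritin_ng_ml < 12:
        # Subnormal ferritin: iron stores are depleted.
        return "consistent with dietary or blood-loss iron-deficiency anemia"
    if serum_ferritin_ng_ml > 100:
        # Normal-to-elevated ferritin despite low hemoglobin: iron has been
        # withdrawn into storage rather than lost.
        return "consistent with the anemia of chronic disease/inflammation"
    return "indeterminate without further indices (e.g., ESR, transferrin saturation)"


# Low hemoglobin with elevated ferritin points away from a dietary cause.
print(classify_hypoferremia(10.5, 150))
# Low hemoglobin with subnormal ferritin points toward depleted iron stores.
print(classify_hypoferremia(10.5, 8))
```

Read this way, low hemoglobin accompanied by elevated serum ferritin, as in the findings just described, falls on the chronic-disease side of the distinction rather than the dietary one.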
Black and white economic levels were matched in the studies mentioned. However, blacks at the same income level as whites may suffer more stress and infectious disease because of their minority position and associated problems, such as prejudice and alcoholism, which were not taken into account in these studies. Other research indicates that although blacks have a lower hematocrit, it is only 0.7 percent lower than that of whites; R. Yip, S. Schwartz, and A. Deinard report that the “lower value in blacks may be accounted for by mild thalassemias, which are associated with lower hematocrit values. The use of the same diagnostic criteria for anemia among all races will permit uniform detection of nutritional anemia” (Yip, Schwartz, and Deinard 1984: 824). The higher incidence of thalassemias among blacks in the United States is a geographical, not a racial, trait, since whites indigenous to Mediterranean regions cursed in the past with endemic malaria also have a higher rate of thalassemias.
Various other minorities in North America are impoverished and suffer from prejudice, which is reflected in their hematology. The prevalence of anemia among Chinese-Canadians is similar to that of black Americans but dissimilar to that of white Americans (Chan-Yip and Gray-Donald 1987). Hispanic females between the ages of 20 and 44 had statistically higher rates of anemia than white or black females, although the etiology of the anemia unfortunately was not determined (Looker et al. 1989). Thus, lower hemoglobin levels are not specifically a black-associated trait but do appear to be a minority-associated trait.
Native Americans are similarly affected, with significantly higher incidences of anemia. Between 22 and 28 percent of Alaskan Native Americans were found to be anemic: Of these, 65 percent had iron deficiency and 35 percent had the anemia of chronic disease (Centers for Disease Control 1988). Parasites were not investigated in these studies, but earlier studies of this group indicate that they are common and a potential reason for the reported high frequency of iron deficiency (Rausch 1951; Schiller 1951). Pneumococcal disease also is endemic among Alaskan natives (Davidson et al. 1989). Poverty and poor nontraditional diets, combined with overcrowding in semisedentary and sedentary villages, contribute to this unfortunate situation. As discussed in the section “Elsewhere in the United States,” skeletal material reveals that anemia was common in this population in the past as well, even though large quantities of heme iron in the form of meat were routinely consumed. Such anemia was the result of endemic health problems associated with winter village life and of infection with parasites acquired through contact with infected dogs, seals, polar bears, and other animals (Kent 1986).
By contrast, Native American children in Arizona were reported to have lower rates of anemia (Yip et al. 1987). This is difficult to assess because information was not provided as to which group of Native Americans was involved in the study. The frequency of anemia could be related to the dispersed settlement patterns of some groups, such as the Navajos, or to the season when the study was conducted, because many Native American children attend boarding schools during the winter but spend the summer at home. The frequency might also be attributed to the length of time children are breast-fed, which, at least in the past, was of longer duration than among Euroamerican children. In other studies, Native Americans had the same relatively low rate of anemia as Euroamerican children (Yip, Schwartz, and Deinard 1984). Whatever the cause, the lower incidence of anemia among Arizona Native Americans in contrast to Alaskan Native Americans again demonstrates that anemia is not related to genetic or racial factors but is related to environmental and sociopolitical factors, such as poverty and its associated diseases, like alcoholism.
Anemia and Alcoholism, AIDS, and Drugs
Anemia, primarily the anemia of chronic disease, is correlated with a number of health problems currently affecting all countries to various degrees. Although alcohol enhances iron absorption and can cause overload and consequent pathologies (Rodriguez et al. 1986), alcoholics may also suffer from anemia. Between 13 and 62 percent of chronic alcoholics are anemic, primarily as a result of acute blood loss and illness, including liver disease (Savage and Lindenbaum 1986).
AIDS, or acquired immunodeficiency syndrome, is associated with anemia, but its interpretation is difficult. Anemia may be the result of the body’s defense against the virus or its defense against the many secondary infections associated with AIDS. Anemia in AIDS patients can also be partly related to malnourishment from malabsorption associated with the condition. The latter is the least likely explanation, however, because the anemia is associated with normal or increased serum ferritin levels (Beutler 1988).
Secondary infections are more likely to cause anemia in this group. Many AIDS patients suffer from neoplasia of various types (especially Kaposi sarcoma and non-Hodgkin’s lymphoma) and from a wide range of bacterial infections that are particularly virulent because of the host’s weakened resistance (Brasitus and Sitrin 1990). As a result of their compromised immunological systems, AIDS patients are often afflicted with atypical mycobacterial infections as well (Ries, White, and Murdock 1990). Whatever the ultimate cause of the anemia, it appears that the hypoferremia associated with AIDS is primarily the anemia of chronic disease and occurs as the body attempts to defend itself against very formidable infections.
Anemia and Performance
It has long been suggested that iron-deficiency anemia adversely affects performance, as measured by a battery of tests designed to evaluate activity, ability, and endurance (Dallman 1989). In fact, many studies claim to demonstrate a relationship between poor mental or physical performance and low hemoglobin level. However, a number of these studies indicate that changes noted were not statistically significant, did not include a control group, or were ambiguous in what prompted the improvements noted (Lozoff and Brittenham 1987; also see Lozoff, Jimenez, and Wolf 1991, who indicate the difficulties in determining cause and effect). The problem encountered here, as throughout this discussion of anemia, is the separation of cause and consequence: physiological problem or defense.5
Furthermore, serious questions exist in almost every study that interprets behavioral, mental, or other functional limitations associated with dietary iron-deficiency anemia. For instance, no one denies that someone severely iron deficient as a result of blood loss will perform poorly compared to a healthy person. However, when anemia is reversed, other health problems are also corrected. Intake of calories, protein, and vitamins is improved in most of the studies designed to investigate performance and anemia. In the case of anemia of chronic disease, disease and/or parasite loads were reduced as the result of medication. Is it the increase in iron that is causing the improvement in skills, or is it the improvement in overall health and nutrition? Ingestion of oral iron can stimulate appetite, and there is often a weight gain associated with higher hemoglobin levels (Kent et al. 1990). It is this overall improvement in nutrition that has been suggested as the cause of the improved mental faculties (Wadsworth 1992).
Iron is necessary to catalyze various facets of the humoral and cell-mediated immune systems (Weinberg 1984). However, severely deficient individuals are not usually part of the performance test groups reported in the literature. Although iron-deficient individuals have a lower maximum work capability and endurance (Gardner et al. 1977; Lozoff and Brittenham 1987), we again are left with questions. How much of poor performance is related to iron deficiency and how much is related to poor calorie, vitamin, and protein intake and to parasitic or other disease?
Other research is difficult to assess. For example, one study compared anemic and nonanemic Chilean infants by conducting various mental and psychomotor tests that included talking and walking; anemic infants performed more poorly than nonanemic ones (Walter et al. 1989). The difficulty in interpreting this finding is that the mental and psychomotor scores were calculated for each group according to iron status and not according to gender. Female infants, in general, mature more rapidly than males; therefore, the gender composition of each group is vital for determining whether the nonanemic group performed better than the anemic one simply because it contained more female infants, regardless of the presence or absence of anemia. After three months of iron supplementation, which reversed the anemia, “no significant improvement was detected between the scores at 12 and 15 months of age” (Walter et al. 1989: 12). This suggests either that the gender composition of the groups may have affected initial performance rates or that the anemia itself did not significantly affect the infants’ ability on the tests.
Many studies, particularly earlier ones, also do not measure serum ferritin levels but simply define all anemia as iron deficiency from poor diets and ignore improvements in health during the study (Soemantri, Pollitt, and Kim 1985; Chwang, Soemantri, and Pollitt 1988). In one study, anemic and nonanemic Thai fifth-grade students received either an iron supplement or a placebo concurrently with an anthelminthic drug to kill parasites (Pollitt et al. 1989). In addition, all other infections were treated on the first day of the study. Consequently, the end results are not comparable to the initial test results; that is, the cause of changes could be attributed to improved general health as much as to improved iron status.
As noted by Moussa Youdim (1989), test results often implicate factors other than iron, such as social and economic status, as the causes of differences in anemic and iron-replete students’ performance before treatment. Even without taking this into account, studies indicate no difference in performance between treated and untreated children, and research “provides no support for an assumption of causality [between test scores and iron supplementation]” (Pollitt et al. 1989: 695).
Iron-deficient infants in Costa Rica who were weaned early (average 4.9 months) or never breast-fed (16 percent) were given cow’s milk, known to cause gastrointestinal bleeding in a high percentage of infants (Lozoff et al. 1987). These children performed less well on a series of mental and motor tests than did nonanemic infants (Lozoff et al. 1987). Later, after their anemia was corrected with iron supplements, their performances improved (Lozoff et al. 1987). It is suggested that rectifying blood losses, rather than diet, made the difference.
Nondietary causes of anemia are prevalent in many of the populations studied, as in New Guinea, where both malaria (and with it the anemia of chronic disease) and thalassemia are common. A series of tests measuring attention span was given to New Guinea infants who were malaria-positive or malaria-negative, with or without iron-dextran supplementation (Heywood et al. 1989). The only significant difference recorded was that malaria-positive infants had longer fixation times than infants with no evidence of malaria, in both cases regardless of iron supplementation (Heywood et al. 1989). As Horowitz (1989) points out, the precise meaning of longer fixation times is not known; differences may not reflect mental development but the opposite, or may simply reflect age, maternal education, or general health status.
The results of another study of iron-deficient children in Indonesia were also ambiguous. After receiving iron supplementation, neither anemic nor iron-replete children significantly improved their performance on some tests, but both groups improved on others (Soewondo, Husaini, and Pollitt 1989). Factors not investigated, such as increases in caloric intake, changes in psychological state, or normal physiological maturation, may improve scores regardless of anemia. If so, this creates problems for many of the studies that attempt to measure anemia and performance. An insightful commentary by Betsy Lozoff on this study concludes that “at this stage in the research on the behavioral effects of ID [iron-deficiency anemia], it seems reasonable to keep asking whether alterations in affect, motivation, or fatigue might underlie cognitive-test-score findings” (Lozoff 1989: 675).
Anemia and Sports
An interesting correlation exists between long-distance running and subnormal serum ferritin levels. Although a high cutoff of less than 20 ng/ml was used to determine iron deficiency among 30 female high school long-distance runners, 45 percent were classified as anemic; among 10 other female runners, 50 to 60 percent were so classified (Rowland et al. 1988; Manore et al. 1989). Several studies indicate that, regardless of dietary iron intake, individuals who ran the most miles per week had the lowest ferritin levels; if an injury occurred that prevented further running, iron indexes rose (Manore et al. 1989).
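The effect of the cutoff itself can be shown with a hypothetical example. The sketch below uses invented serum ferritin values, not the data of the studies just cited; only the 20 ng/ml cutoff comes from those studies, and the 12 ng/ml figure is the normal cutoff mentioned earlier in this chapter.

```python
# Hypothetical illustration of how the choice of serum ferritin cutoff drives
# the reported prevalence of "iron deficiency" among runners. The ferritin
# values below are invented; they are not the data of the studies cited above.

ferritin_ng_ml = [8, 11, 14, 17, 19, 22, 25, 31, 38, 46]  # invented sample of 10 runners

for cutoff in (12, 20):
    flagged = sum(value < cutoff for value in ferritin_ng_ml)
    prevalence = 100 * flagged / len(ferritin_ng_ml)
    print(f"ferritin < {cutoff} ng/ml: {flagged} of {len(ferritin_ng_ml)} "
          f"runners ({prevalence:.0f}%) classified as iron deficient")

# With the stricter 12 ng/ml cutoff, 2 of 10 runners are flagged (20 percent);
# raising the cutoff to 20 ng/ml flags 5 of 10 (50 percent).
```

The same individuals thus yield very different prevalence figures depending on where the line is drawn, which is one reason the high cutoff noted above matters when interpreting the 45 to 60 percent figures.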
Lowered serum ferritin levels in athletes have been attributed to many factors: increased iron loss through heavy sweating; trauma; slightly increased destruction of red blood cells in stressed tissues, such as muscles and the soles of the feet; common use of analgesics, such as aspirin and aspirinlike drugs, which cause blood loss; and gastrointestinal blood loss (Robertson, Maughan, and Davidson 1987). A significant, though clinically unimportant, increase in fecal blood loss occurred in male marathon runners who had not taken any drugs prior to running. The 28 percent who had taken an analgesic known to promote bleeding had blood losses that could eventually result in anemia (Robertson et al. 1987). Most physicians consider the blood loss and its cause inconsequential in healthy athletes and do not recommend routine hematological monitoring or iron supplementation (Wardrop 1987).
Conclusions and Direction of Future Studies
Despite many studies supporting the view that iron fortification and supplementation might be harmful for infants, pregnant women, and others (for example, Hibbard 1988), some physicians continue to advocate indiscriminate iron fortification (Arthur and Isbister 1987; Taylor and Bergman 1989). However, a growing number of studies concerning the anemia of chronic disease conclude that “[o]ur data … do not support the routine prescription of iron … in patients on CAPD [continuous, ambulatory peritoneal dialysis]” (Salahudeen et al. 1988). In fact, even though the use of iron chelators (agents that bind iron) to reduce chronic disease is still experimental, initial studies look promising (Vreugdenhil et al. 1989). That is, less iron, rather than more, may reduce morbidity.
Treating all anemia as dietary iron-deficiency anemia can have potentially deleterious effects on the populations most in need of health-care assistance. Iron-deficiency anemia occurs in areas where war, export or cash cropping, and other extreme situations deny people the basic calories, vitamins, and nutrients required to sustain life; dietary improvements are sorely needed in those situations. Anemia from blood loss, caused primarily by parasites but also by the premature introduction of cow’s milk to infants, is an acute health-maintenance problem requiring rectification through medication, improved sanitation, and education. The anemia of chronic disease is a positive defense against infection and inflammation and, as such, should not be interfered with; however, the underlying diseases that cause the anemia need to be eradicated.
Iron supplementation is a relatively easy, inexpensive counter to dietary anemia. Perhaps that partially explains why so many people cling to the idea that iron taken orally or through injections will reduce morbidity. However, the complexity of iron and its relationship to anemia are emphasized here to show that simple solutions cannot solve complex problems.