Susan Kent & Patricia Stuart-Macadam. Cambridge World History of Food. Editor: Kenneth F Kiple & Kriemhild Conee Ornelas. Volume 1. Cambridge, UK: Cambridge University Press, 2000.
Iron has played a critical role in the evolution of life. The ancient Greeks, believing iron to be a special gift sent to earth by one of the gods, named it sideros, or star (Liebel, Greenfield, and Pollitt 1979). As the second most common metal, iron accounts for 5 percent of the earth’s crust; it is also found in both sea- and freshwater (Bernat 1983). Scientists believe that the earth’s atmosphere was originally a reducing one with very low oxygen pressure. As a result, large amounts of reduced iron would have been available for living organisms (Bothwell et al. 1979). Iron is an essential element for all organisms, with the possible exception of some Lactobacillus (Griffiths 1987; Payne 1988). In animals, the processes of DNA replication, RNA synthesis, and oxygen and electron transport require iron. Today most iron in the environment exists in an oxidized state and is less available to organisms. However, the problems of extracting insoluble iron have been overcome during evolution. A variety of sophisticated mechanisms have evolved that are specific to different kingdoms and/or different species (e.g., mechanisms plants use to be able to live in acidic or iron-poor environments) (Bothwell et al. 1979). Such mechanisms in animals include iron complexing agents, which transport iron and deliver it to cells, and low-molecular-weight compounds, such as fructose and amino acids, that reduce iron into a soluble form (Griffiths 1987; Simmons 1989: 14).
Metabolic processes within humans involve the presence of free radicals, that is, substances that are reactive because of instability in the arrangement of electrons (Wadsworth 1991). Iron may be present as a free radical. Such instability makes iron highly likely to donate or accept electrons. As a result, iron is versatile and able to serve a number of functions within cells. These functions include acting as a catalyst in electron transport processes and serving as a transporter of oxygen. Iron is a key component of hemoglobin, the oxygen carrier found in red blood cells. It is involved in many other extracellular processes as well (Woods, DeMarco, and Friedland 1990). Iron also is required for collagen synthesis, the production of antibodies, removal of fats from the blood, conversion of carotene to vitamin A, detoxification of drugs in the liver, and the conversion of fuel nutrients to energy (Long and Shannon 1983). In addition to its importance in the maintenance of normal metabolic processes, iron's involvement in pathological change and initiation of disease is a critical facet of host defense.
Iron occurs in the human body in two states: circulating in blood and in storage. The absolute amount of iron in the body is actually quite small, despite its critical importance in metabolism. A healthy adult has an average of between 4 and 5 grams (g) of iron with a range of 3 to 6 g (Bernat 1983). Approximately 30 percent of all iron in the body is in storage. Most iron utilized in the body is needed by bone marrow to make new red blood cells. After a life span of about 120 days, red blood cells are destroyed and the released hemoglobin is broken down. Liberated iron from the hemoglobin is stored in macrophages in the liver, spleen, and kidney for future use.
Most iron in a healthy person (2.5 to 3 g) is found in red blood cells in the form of hemoglobin (Moore 1973; McLaren, Muir, and Kellermeyer 1983). Small amounts of iron, approximately 0.13 g, occur in myoglobin, a protein in muscle tissues. Only 0.008 g of iron is located in heme tissue enzymes, 0.5 g is in nonheme enzymes, and about 0.004 g is bound to the iron-binding protein transferrin found in blood (Arthur and Isbister 1987).
Between 0.7 and 1.5 g of iron is found in storage (Bezkorovainy 1989). It occurs either in a diffuse soluble form as ferritin, an intracellular protein, or as hemosiderin, an insoluble aggregate form of ferritin (Bezkorovainy 1989). Ferritin is formed when apoferritin (an iron-free protein) combines with iron freed by the breakdown of hemoglobin in senescent red blood cells (Simmons 1989: 14). Basically, ferritin is an apoprotein shell, assembled from 24 subunits of either the H or the L type, enclosing a core of iron (Worwood 1989). The two types of subunits are of slightly different size and electrical charge. Different organs have various percentages of one or the other type of subunit. For instance, H subunits predominate in ferritin located in the heart, and L subunits predominate in ferritin located in the liver. It has been suggested that organs with a high iron content, such as the liver and spleen, have a predominance of the L subunits, whereas those with less iron, such as the heart, intestine, pancreas, and placenta, have more of the H subunits (Bezkorovainy 1989: 53). Therefore, L-subunit-dominated ferritin is thought to be concerned primarily with long-term iron storage, whereas H-dominated ferritin is concerned primarily with preventing iron overload (Bezkorovainy 1989: 53). Although the majority of ferritin is located in the organs that make up the reticuloendothelial system, a small amount exists in plasma. Most ferritin molecules contain only 2,000 atoms of iron (Fairbanks and Beutler 1988). However, up to 4,500 atoms of iron can be incorporated into the ferritin internal cavity. Iron atoms enter the ferritin protein coat through tiny channels or pores, which are formed by the positioning of subunits (Fairbanks and Beutler 1988; Worwood 1980: 204; Worwood 1989). Hemosiderin represents the end point of the intracellular storage iron pathway.
Hemosiderin granules are actually denatured, partly degraded, aggregated molecules of ferritin (Bezkorovainy 1989).
Balance is the crucial feature of human iron metabolism. Too much iron is toxic and too little can produce severe anemia (Arthur and Isbister 1987). In fact, much of the toxicity of iron appears to be related to its propensity to form unstable intermediates with unpaired electrons or free radicals (Griffiths 1987). Lethal accumulations of iron can occur in most organs of the body but the liver, spleen, pituitary, and heart are particularly vulnerable to excess iron (Weinberg 1984; Stevens et al. 1988; Cook 1990; Kent, Weinberg, and Stuart-Macadam 1990, 1994). The immune system can also be compromised by excess iron. For this reason it is extremely important that iron be chelated (bound) to prevent uncontrolled free-radical reactions. In normal animals and humans, iron is almost always bound to proteins, leaving only an extremely low concentration of free iron; in fact, in most biological systems iron is bound to complexing agents. Over 99 percent of iron in plasma is chelated to transferrin to prevent uncontrolled free-radical reactions (Griffiths 1987). However, in individuals with hyperferremia (iron overload), as much as 35 percent of iron may not be transferrin-bound, causing iron to accumulate in the liver, spleen, and heart (Weinberg 1989: 10).
By contrast, insufficient iron can produce severe anemia that impairs the quality of life and may eventually lead to cardiac and respiratory failure (Weinberg 1989). The body maintains an equilibrium in part through the transfer of circulating iron to storage, as well as through the conversion of free iron to bound iron. The key to body iron supplies is recycling; only a very small amount is absorbed from food or lost through excretion (sweat, feces, urine, or menstruation). Iron is conserved in a nearly closed system. Each iron atom cycles repeatedly from plasma and extracellular fluid to the bone marrow, where it is incorporated into hemoglobin, and then to the blood, where it circulates for approximately 120 days within erythrocytes (red blood cells). Afterward, iron is released to plasma. This is accomplished by phagocytes (cells that engulf and then consume debris and foreign matter) of the liver or spleen (the reticuloendothelial system) that destroy old erythrocytes and digest their hemoglobin. The process releases iron to plasma, which continues the cycle (Fairbanks and Beutler 1988: 195). Within each cycle, small amounts of iron are transferred to storage sites, and small amounts of storage iron are released to plasma. In addition, small amounts are absorbed by the intestinal tract from ingested food containing iron, and small amounts are lost through sweat, urine, feces, or blood (Fairbanks and Beutler 1988: 195). Consequently, little dietary iron is needed to replace iron loss.
Iron Requirements in Humans
Because most needed iron is recycled, humans actually require very small quantities to maintain health. For example, healthy 70-kilogram (kg) adult males lose approximately 1 milligram (mg) of iron daily through excretion by the skin and the gastrointestinal and urinary tracts (Bothwell et al. 1989). The required replacement is thus 1 mg daily. Children lose only about 0.5 mg daily, but their required replacement is approximately 1 mg daily to compensate for the needs of growth. Early estimates of iron loss among women of reproductive age were high, an average of 2 to 3 mg per day, resulting in an equal replacement requirement (Hoffbrand and Lewis 1981; Arthur and Isbister 1987). However, more recent and systematic studies call such estimates into question. A Swedish study showed that the average woman loses between 0.6 and 0.7 mg per day through menstruation; 95 percent of women lose an average of less than 1.4 mg per day (Fairbanks and Beutler 1988: 205). Scientists are beginning to recommend that the average menstruating woman absorb only 1.4 mg of iron daily to replace losses (Monsen 1988: 786).
Absorption of iron occurs mainly in the duodenum and jejunum, although all parts of the small intestine and even the colon may be involved. Little is known about the precise mechanisms by which the body’s need for iron is communicated to the mucosal cell of the intestine (Cook 1990). But however it is achieved, the body is able to adapt to a wide range of iron requirements and intakes by modifying the rate of gastrointestinal absorption. Absorption of iron varies according to health, physiological status, iron stores, age, and sex. Studies have shown that the absorption of iron from an adequate diet might range from a fraction of a milligram to 3 or 4 mg a day, depending on body iron content. For example, hyperferremia, or iron overload, is associated with decreased iron absorption from dietary sources. Blood loss and true iron deficiency result in increased iron absorption. Absorption also varies according to the iron source. Iron-deficient males absorb 20 percent more nonheme iron and 21 percent more heme iron than iron-replete males (Cook 1990: 304). The same adaptability applies to iron loss. Normal males lose about 0.9 mg of iron per day. In contrast, hypoferremic males lose only about 0.5 mg per day and hyperferremic males lose about 2.0 mg per day (Finch 1989).
According to James Cook (1990), the three major determinants of body iron in healthy individuals are physiological iron demands, dietary supply of available iron, and adaptation. It is the dietary supply of available iron that is most often focused upon by the medical profession and nutritionists. Dietary iron supply is actually determined by three main factors: total iron intake, content of heme iron, and bioavailability of nonheme iron (Cook 1990). However, actual absorption of iron is a function of the adaptability of iron metabolism and an individual’s iron status. Dietary iron comes in two forms – heme and nonheme. The largest percentage of dietary iron is inorganic, or nonheme, which is absorbed primarily in the duodenum. Its absorption is influenced by the presence of inhibitors and enhancers found in ingested foods. The most important inhibitors include tannin (found in tea), coffee, phytates in bran, calcium phosphate, egg yolk, polyphenols, and certain forms of dietary fiber. Although bran and phytate in the diet do inhibit iron absorption, there is speculation that high bran and phytate intake over a prolonged period may induce changes in the intestine or its microflora and reduce the inhibitory effect of these substances (Cook 1990). Enhancers include meat and organic acids such as citric acid, lactic acid, alcohol, certain spices and condiments in curry powder, and ascorbic acid (Bothwell et al. 1989).
Unlike nonheme iron, heme iron is absorbed directly into the intestinal cell as an intact porphyrin complex and is unaffected by other components of the diet. Heme iron usually accounts for only 10 to 15 percent of the total daily intake of iron; however, the absorption of heme iron is relatively high and it can account for as much as 25 percent of the iron absorbed from the daily diet (Hallberg 1980: 118).
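The proportions in this paragraph can be checked with a short worked example. The absorption fractions below are illustrative assumptions, not figures from the text (heme iron is simply taken to be absorbed far more efficiently than nonheme iron); with them, a 12 percent heme share of dietary intake supplies roughly a quarter of the iron actually absorbed, matching the figures cited.

```python
# Worked example: heme iron is ~10-15% of intake but can supply ~25%
# of absorbed iron, because its fractional absorption is much higher.
# The two absorption fractions are assumptions chosen for illustration.
intake_mg = 15.0              # total daily dietary iron (illustrative)
heme_fraction = 0.12          # heme share of dietary iron (text: 10-15%)
heme_absorption = 0.25        # assumed fractional absorption of heme iron
nonheme_absorption = 0.10     # assumed fractional absorption of nonheme iron

heme_absorbed = intake_mg * heme_fraction * heme_absorption
nonheme_absorbed = intake_mg * (1 - heme_fraction) * nonheme_absorption
total_absorbed = heme_absorbed + nonheme_absorbed

# Heme share of absorbed iron, as a percentage
print(round(100 * heme_absorbed / total_absorbed))  # 25
```

With these assumed fractions, only about 1.8 mg of the 15 mg ingested is absorbed, which also illustrates why total intake alone is a poor guide to iron supply.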
Most Westerners, who ingest some kind of meat almost daily, easily obtain sufficient amounts of iron to replace daily losses. In fact, in some cases, dietary consumption is four to seven times greater than the amount of iron needed. They also consume iron through fortified foods, such as most wheat products, or through wine and cider, which can add 2 to 16 mg or more per liter (Fairbanks and Beutler 1988). A study of military personnel showed that 20 to 30 mg of iron per day was ingested, in part because food was cooked in iron containers (Fairbanks and Beutler 1988: 194-5).
Several factors make it difficult to quantify the bioavailability of iron in the body. Uncertainties concerning the nature of the interactions that occur between enhancers and inhibitors make it difficult to determine the true bioavailability of iron from various foods or diets. Compounding the problem are misconceptions that result from not considering a food source within its natural context. Often quoted is the belief that milk is a poor source of iron. Although this is true for humans drinking bovine milk, it is not true for human infants drinking human breast milk. There is a very high bioavailability of iron in human breast milk for human infants. As much as 40 percent of iron in human milk is absorbed by human infants, whereas less than half that amount is absorbed from bovine milk (Pochedly and May 1987). Iron availability of milk is species-specific, and calves absorb significantly more from bovine milk than they would from human milk (Picciano and Guthrie 1976).
Not only is Western dietary intake of iron high; the Recommended Dietary Allowance (RDA) of the United States Food and Nutrition Board is high: 18 mg per day for premenopausal women and 10 mg per day for men and older women (Eaton, Shostak, and Konner 1988). This high RDA leads to perceived high levels of iron deficiency in populations throughout the world. Medical and government policy has traditionally encouraged more iron consumption over less. Accordingly, iron supplements are routinely administered to pregnant women and infants. Iron supplementation on a massive scale has been carried out in the United States since the 1940s, when the Food and Nutrition Board of the National Academy of Science endorsed fortification of wheat flour with iron as well as vitamins. It is now standard practice to fortify a broad range of foods, including many of the items available on the shelves of supermarkets around the world. In fact, the only infant formula available to disadvantaged women participating in the American WIC (Women, Infants, and Children) program is one fortified by iron; non-iron-fortified formula is prohibited (Kent et al. 1990). Yet iron is naturally found in almost all foods.
Particularly rich sources of iron include organ meats, red meat, blackstrap molasses, cocoa, oysters, clams, and dried beans (Long and Shannon 1983). Water also can contain iron; that from wells and bore-holes can have more than 5 mg of iron per liter (Fairbanks and Beutler 1988: 195). This is in addition to the large amount of iron that can be absorbed by food cooked in iron containers. Even if iron were nonexistent in a person’s diet, which is very unlikely, it would take years to produce true dietary iron deficiency without concomitant blood loss that depletes body iron stores: “[I]ron deficiency is almost never due to dietary deficiency in an adult in our community. A diagnosis of dietary iron deficiency in an adult male or postmenopausal female usually means that the site of blood loss has been missed” (Arthur and Isbister 1987: 173). In fact, occult gastrointestinal blood loss has been detected in nearly 50 percent of infants misclassified as dietary-induced iron deficient (Fairbanks and Beutler 1988: 195). In addition, many cases of the anemia of infection/inflammation caused by chronic diseases have been mistaken for iron-deficiency anemia.
Measurement of Iron in Humans
Several blood indices are used to measure iron but only the most common ones are briefly described here. One of the most reliable methods is bone marrow aspiration and biopsy, which permits a direct measurement of the body’s iron stores as indicated by the amount of ferritin and hemosiderin present. However, because of the discomfort to the subject, the possibility of introducing infection, and the costs involved, the procedure is usually conducted only when other, less invasive measurements indicate a need for a highly accurate assessment.
In past studies hemoglobin and hematocrit (a measurement of the packed cell volume) were employed to ascertain iron sufficiency. The widespread use of automated electronic blood cell counters, such as the Coulter counter, makes hemoglobin/hematocrit values easily obtainable and therefore frequently used indices to assess iron adequacy. In addition, the relative ease of determining these indices makes them popular measurements in situations where automated machines do not exist. Unfortunately, a number of variables interact with and influence these indices, making them poor measures of iron status. Hemoglobin/hematocrit measurements can be unreliable because they are affected by cultural factors, such as smoking, and environmental factors, such as altitude and the iron content of drinking water and soil (Kent et al. 1990).
Serum iron is also not a very reliable measure of iron status. It, like hemoglobin and hematocrit, is influenced by a number of factors, including a woman’s menstrual cycle, the presence of infection, and the time of day blood is drawn (there is a diurnal cycle wherein serum iron values are as much as 30 percent higher in the morning than in late evening [Fairbanks and Beutler 1988: 200]). Transferrin is another component of blood measured to investigate a person’s iron status. Transferrin is a plasma iron-binding protein. One molecule has two iron-binding sites, each of which can bind one atom of ferric iron (Noyes 1985). Although originally thought to be similar, the two iron-binding sites have more recently been recognized as not chemically or physically equivalent, for reasons not well understood (Bezkorovainy 1989). The primary function of transferrin is to transport iron from the intestinal tract, spleen, and liver to sites such as the bone marrow for hemoglobin synthesis, to macrophages for storage, to the placenta for fetal needs, or to other cells for iron-containing enzymes (Fairbanks and Beutler 1988: 200).
Most transferrin, between 70 and 90 percent, is transported to bone marrow (Fairbanks and Beutler 1988: 200). Total iron-binding capacity (TIBC) is the capacity of transferrin to bind iron, and it represents the highest amount of iron that the plasma (i.e., serum) can bind (Bezkorovainy 1980: 10; Fielding 1980: 15). In humans, only about one-third of the transferrin iron-binding sites are occupied by iron; the rest are free (Bezkorovainy 1989b). Transferrin saturation, the amount of transferrin that is saturated with iron (calculated by dividing the serum iron value by the total iron-binding capacity), is usually considered a better assessment of iron metabolism than is transferrin, serum iron, or total iron-binding capacity alone. Nevertheless, transferrin saturation values are affected by chronic disease and inflammation, in addition to insufficient diet, blood loss, and various types of iron overload. Therefore, by themselves, these measurements are not reliable in determining the etiology of an anemia or iron overload. Furthermore, “transferrin measurements are relatively insensitive, as the degree of change with variation in iron stores is small relative to assay variability. Serum ferritin measurement is a key indicator of iron stores” (Cook and Skikne 1989: 350). In fact, and contrary to conventional practices, physicians are now suggesting different methods to measure iron status: “Traditional tests of serum iron and TIBC may be useful but currently are not recommended for indirect measurement of iron stores [i.e., measurement in the absence of a bone marrow biopsy]” (Beissner and Trowbridge 1986: 88-90). More recently, it has been noted that “a wide range of values for transferrin saturation and total iron-binding capacity is consistent with either iron deficiency anemia or anemia of chronic disease … The two disorders can be distinguished by the serum ferritin level, iron stain of a bone marrow specimen for hemosiderin, or both” (Farley and Foland 1990: 92).
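The calculation of transferrin saturation described above (serum iron divided by total iron-binding capacity) is simple to express. The sketch below is illustrative only; the function name is ours, and the example values are chosen merely to reproduce the roughly one-third saturation typical of healthy humans mentioned in the text.

```python
def transferrin_saturation(serum_iron_ug_dl, tibc_ug_dl):
    """Percentage of transferrin iron-binding capacity occupied by iron.

    Both inputs are concentrations in the same units (e.g., micrograms
    per deciliter), so the units cancel and the result is a percentage.
    """
    return 100.0 * serum_iron_ug_dl / tibc_ug_dl

# Illustrative (not clinical) values: about one-third saturation.
print(round(transferrin_saturation(100, 300)))  # 33
```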
Red blood cell counts are routinely calculated by most automated blood cell counters. Red blood cells are often malformed, with reduced life spans, in various types of anemia associated with ineffective erythropoiesis (red cell production) (Jacobs and Worwood 1982: 175). However, red blood cells usually have a normal life span in most iron overload disorders without severe liver disease (McLaren et al. 1983). At one time it was thought that red blood cell size might be useful in distinguishing iron deficiency from the anemia of chronic disease, but recent studies have disproved this proposition (e.g., Thompson et al. 1988; Osborne et al. 1989).
Mean corpuscular volume (MCV) reflects the average volume of a red blood cell. A decreased cell size can signify a hemoglobin synthesis disorder, but a smaller size is also associated with both iron-deficiency anemia and anemia of chronic disease/inflammation, as well as other iron disorders such as thalassemia and sideroblastic anemia. Moreover, elevated MCV can result from a variety of disorders other than infection, including idiopathic ones. Although widely used because it is measured as part of the automated complete blood cell count, the main disadvantage of MCV is that it does “not distinguish true iron deficiency from secondary iron-deficient erythropoiesis due to inflammatory or neoplastic disease” (Cook and Skikne 1989: 351). As a result, a number of researchers contend that MCV can provide only a crude measure of iron status (Labbe and Finch 1980: 52).
Storage iron is held within the ferritin molecule. Experiments show that serum ferritin is the most reliable noninvasive diagnostic indicator of iron status. Its reliability surpasses that of free erythrocyte protoporphyrin, which in the past was thought by some to be more diagnostic in discriminating the etiology of anemia (Zanella et al. 1989). Serum ferritin is a particularly sensitive measurement because it reflects changes in iron stores before they are completely exhausted or, in the case of overload, increased (e.g., Finch and Huebers 1982; Cook and Skikne 1989). Numerous studies now show that serum ferritin is a reliable measure of iron stores, second only to actual bone marrow aspiration (e.g., Thompson 1988; Burns et al. 1990; Guyatt et al. 1990).
These hematological measurements together present a reliable view of the iron status of an individual. However, when all are not available, serum ferritin, in conjunction with a measure of the amount of circulating iron, is minimally necessary for reliable interpretations.
Iron and Infection
One of the most exciting frontiers in iron studies concerns the body’s efforts to diminish iron content in response to disease. Hypoferremia (or low circulating-iron level) is associated with an array of chronic diseases and inflammatory responses, including neoplasia (cancer), bacterial and parasitic infections, and rheumatoid arthritis (Kent 1992). The associated anemia of chronic disease is characterized by subnormal hemoglobin, subnormal serum iron, subnormal transferrin saturation levels, and normal to elevated serum ferritin levels.
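The diagnostic pattern just described can be summarized as a small decision rule: both disorders show subnormal hemoglobin and serum iron, but serum ferritin separates them, since it remains normal to elevated in the anemia of chronic disease and falls in true iron deficiency. The sketch below is illustrative only; the function name and the numeric cutoffs are assumptions chosen for demonstration, not clinical thresholds.

```python
def classify_anemia(hemoglobin_g_dl, serum_ferritin_ng_ml,
                    low_hb=12.0, low_ferritin=15.0):
    """Toy decision rule distinguishing iron-deficiency anemia from the
    anemia of chronic disease by serum ferritin.

    `low_hb` and `low_ferritin` are illustrative cutoffs (assumptions),
    not clinical reference values.
    """
    if hemoglobin_g_dl >= low_hb:
        return "not anemic"
    if serum_ferritin_ng_ml < low_ferritin:
        # Depleted stores: consistent with dietary deficiency or blood loss
        return "consistent with iron-deficiency anemia"
    # Stores intact or elevated despite low hemoglobin: iron is being
    # sequestered, the pattern of the anemia of chronic disease
    return "consistent with anemia of chronic disease"

print(classify_anemia(10.0, 200))  # consistent with anemia of chronic disease
```

In practice, as the sources quoted earlier note, such indices are interpreted together (and confirmed against bone marrow findings where necessary), not by a single threshold.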
Hypoferremia in reaction to chronic diseases and inflammations has been noted in experimental animals as well as in humans. Studies show that within 24 hours after exposure to pathogenic microbes, serum iron levels fall and a resistance to otherwise lethal doses of a variety of pathogens is produced (Payne 1988). There is even a slight drop in hemoglobin levels after smallpox revaccination, which is thought to be a rather innocuous procedure (Reizenstein 1983).
The body reduces iron levels by preventing macrophages from releasing the metal from storage to transferrin and by reducing intestinal iron absorption. Transferrin saturation levels drop concomitantly as iron is sequestered in macrophage storage. Serum ferritin levels increase, serving as an indirect measure of body iron stores. The reduction of available iron affects microorganisms and rapidly multiplying neoplastic (cancer) cells because they require iron for proliferation. However, they either cannot store the element in a nontoxic form, as is the case for a number of pathogens, or they require larger amounts of iron for rapid proliferation than they possess, as is the case for neoplastic cells. Invaders requiring host iron thus include bacteria, fungi, and protozoan parasites, as well as neoplastic cells.
Microinvaders acquire iron from their hosts in several ways. Perhaps most intriguing is acquisition through the production of small iron-binding compounds called siderophores. These iron chelators, or “iron scavengers,” seek out iron in the host and donate it to the microorganism that produced them. In addition, some siderophores may serve as iron donors for another strain or species (Bezkorovainy 1980: 305-6). Siderophores compete with the host for the iron-binding proteins. The body responds by producing fever, which reduces the ability of some bacteria to produce siderophores (Lee 1983; Weinberg 1984). That is, fever serves as a countermeasure to microorganism proliferation by inhibiting their ability to extract iron from the host. The resulting hypoferremia prevents pathogens from obtaining sufficient quantities of growth-essential iron; they therefore fail to multiply or multiply with considerable difficulty (Kochan 1973: 22). Studies have documented that “a single episode of fever and/or inflammation in man appears to restrict the release of iron from effete red blood cells by the reticuloendothelial system, leading to a decrease in serum iron concentration and to stimulation of the production of ferritin [to store iron] for a prolonged period” (Elin, Wolff, and Finch 1977: 152).
It has long been recognized that an important deterrent to pathogens in infants is provided by lactoferrin, an iron-binding protein that occurs in breast milk. Lactoferrin binds free iron and thereby deprives microorganisms of necessary iron; it also has a propensity to interact with or complement specific antibodies (Bullen, Rogers, and Griffiths 1974). When iron-binding proteins become saturated with iron there is an increase in the amount of free iron (i.e., nonbound iron that is more accessible to pathogens). Both in vitro and in vivo experiments have shown that the bacteriostatic properties of lactoferrin, for example, are abolished when iron-binding proteins are saturated with iron, making the body more vulnerable to microorganism invasion (Pearson and Robinson 1976).
Understanding hypoferremia as a defense against disease helps explain the geographical distribution of anemia. George Wadsworth (1975), for example, noted that levels of hemoglobin in women living in temperate climates usually exceed 10 g per 100 milliliters (ml), whereas usual levels in women living in tropical climates may be only 8 g per 100 ml. The etiology of hypoferremia provides a clue to this otherwise puzzling observation. Parasitic and bacterial infections are endemic in tropical environments (e.g., Goodner 1933). The higher prevalence of infection has been related to the increased total complexity and diversity of the tropical ecological system (Dunn 1968). Hypoferremia is produced by the body in an attempt to ward off the numerous pathogenic insults associated with tropical environments. However, in these cases, and particularly when exacerbated by the blood loss and nutrient malabsorption that often accompany protozoan and bacterial infections, hypoferremia might develop to such a severe state that it becomes a problem in and of itself (Stuart-Macadam 1988; Kent and Weinberg 1989; Kent et al. 1990: 67).
The role of hypoferremia as a nonspecific body defense against disease is not completely accepted, although microbiologists in particular have long recognized the role of iron in host defense. As early as 1868, Armand Trousseau noted that iron supplementation reactivated quiescent pulmonary tuberculosis (Weinberg 1984; Keusch and Farthing 1986). Research in 1944 showed that iron-binding proteins in egg whites and serum were involved in the inhibition of Shigella dysentery (Schade and Caroline 1944). Starving humans who are susceptible to some diseases are immune to others in which microorganisms must exact iron from their hosts to obtain the amount needed to proliferate. For example, refeeding programs in Niger reactivated malaria infections (Murray et al. 1976). Similar refeeding programs in Somalia resulted in a recrudescence of a number of different infections, including parasitic and bacillary dysentery, acute infectious hepatitis, brucellosis, tuberculosis, and others (Murray et al. 1976; Murray and Murray 1977). Moreover, a study of two groups of Kenyan Turkana pastoralists indicated that lower iron levels enhanced host immunity. One group drank primarily milk; the other group drank milk and consumed over 150 g of fish per day, ingesting almost a third more iron than the non-fish-eating group. Those who ate fish were not anemic but were plagued by malaria, brucellosis, molluscum contagiosum, common warts, diarrhea, and Entamoeba histolytica. The non-fish-eating Turkana were slightly anemic but had significantly lower incidences of these diseases (Murray, Murray, and Murray 1980).
There is now a body of literature based on in vivo and in vitro experiments that demonstrates the role of iron in disease (e.g., Weinberg 1966, 1974, 1977, 1984, 1990; Bullen et al. 1974; Masawe, Muindi, and Sway 1974; Strauss 1978; Oppenheimer et al. 1986; Bullen and Griffiths 1987; Crosa 1987; Griffiths and Bullen 1987; Kluger and Bullen 1987; Selby and Friedman 1988; Stevens et al. 1988; Kent et al. 1990, 1994). Even so, a large percentage of the public, and even those in the health-care community, are not aware that in certain circumstances diminished iron is not a pathological disease state but a physiological defense. They are unaware that the current focus on iron fortification of food and on iron-supplementation preparations is misdirected and potentially counterproductive to the body’s attempt to ward off illness.
Supplying iron-fortified foods or vitamins to developing countries and disadvantaged minorities in Western nations without ascertaining the prevalence of dietary-induced, versus disease-induced, anemia may be harming the very people the food is intended to help. The practice of indiscriminate food fortification is putting these populations at risk for contracting a variety of diseases, including cancer (Finch 1989). Studies show that the high iron content of most Western diets particularly affects men and postmenopausal women, who, as a result, suffer more from certain types of neoplasia and from myocardial infarction. In fact, recent treatment uses iron chelators to bind iron at the location of inflammation, such as at the joints in rheumatoid arthritis, where it is thought iron causes tissue damage, among other problems (Biemond et al. 1988). In addition, drugs such as aspirin prevent heart attacks, at least in part, by causing chronic intestinal bleeding, which reduces iron levels (Sullivan 1989; also see Arthur and Isbister 1987 for discussion of aspirin and other anti-inflammatory drugs that can cause occult blood loss).
Iron and Heart Disease
Despite extensive epidemiological studies to identify risk factors for coronary disease, many unanswered questions remain. Individuals with multiple risk factors often never develop cardiac events, whereas others who would seem to be at low risk experience cardiovascular problems (Sullivan 1989). Therefore, other possible factors, such as iron levels, should be examined for a contributory role in causing heart disease.
Some studies suggest a link between cardiac diseases and iron levels (Sullivan 1989; Salonen et al. 1992). Excess iron, as occurs in beta-thalassemia and chronic iron overload, has been associated with ischemic heart disease and heart attacks (Leon et al. 1979). Men, who have more iron than premenopausal women (who regularly lose iron through menstruation), also have more heart disease (Weinberg 1983; Sullivan 1989). Postmenopausal women, who have more iron than premenopausal women, have a higher risk of heart disease, although, of course, they constitute an older group. But women taking oral contraceptives, which reduce the amount of menstrual bleeding, are also more prone to heart disease (Sullivan 1983; Frassinelli-Gunderson, Margen, and Brown 1985; Colditz et al. 1987).
Not all cardiologists agree that the association between iron and cardiac disease is causal. However, J. L. Sullivan (1989) and others (Salonen et al. 1992) have suggested that high serum ferritin values, which measure the amount of iron in storage, are a powerful predictor of heart disease. Agents used clinically that reduce heart attacks, such as the iron chelator deferoxamine and aspirin, cause increased bleeding times and occult blood loss. Moderate to heavy fish consumption also causes occult bleeding, thereby reducing iron levels and the risk of heart disease (Kromhout, Bosschieter, and Coulander 1985; Herold and Kinsella 1986; Houwelingen et al. 1987). The consequent reduction in iron levels among consumers of fish, iron chelators, or drugs that promote bleeding could be responsible for the reduction of heart disease. This is in contrast to persons who consume large amounts of red meat, which contains heme iron, more readily absorbed than the nonheme iron found in vegetables. Another interesting association between available iron and heart attacks is based on the circadian rhythm of the body. Serum iron levels have diurnal patterns, with an increase in the early morning. Heart attacks also occur three times more frequently during this time of day (Sullivan 1989: 1184).
Iron and Parasites
Parasites are a serious threat to the health of rural communities in the tropics; over 90 percent of this population are infected with at least one, and often several, nematode species, including hookworm, roundworm, and whipworm (Behnke 1987). There are basically three ways in which parasites can cause anemia. One is the anemia of chronic disease discussed earlier. A second is by competing for the same nutrients as the host, including iron. A third is by causing actual blood loss. Hookworm (Necator sp.), in particular, can cause heavy blood loss and, as a result, anemia. Hookworm infestations have been found in 12 percent to 95 percent of study populations throughout the tropical regions of the world, from the Amazon rain forest to Southeast Asia (Florentino and Guirriec 1984: 65). Malaria parasites are also dependent on iron from their hosts. The sickle-cell trait and heterozygous forms of thalassemia, both of which cause anemia, provide some protection against the malaria parasite, and their geographical distributions reflect this advantage.
Iron and Cancer
Another promising avenue of iron research is the examination of the role of iron in neoplastic disease. Neoplastic cells require more iron than normal cells because of their rapid proliferation. In an attempt to accommodate their increased need for iron, these cells extract iron from their host. In reaction, the body attempts to prevent their proliferation by producing hypoferremia. Anemia is a condition commonly associated with neoplasia of various types (Miller et al. 1990). The type of anemia that usually occurs with neoplasia is that of chronic disease/inflammation, as indicated by elevated serum ferritin levels. For example, patients with acute leukemia, Hodgkin's disease, multiple myeloma, and other malignancies have elevated to highly elevated serum ferritin concentrations that tend to become greater as the disease progresses (Bezkorovainy 1980; Worwood 1980).
Males with higher iron stores than premenopausal women have a significantly increased risk of cancer of the colon, bladder, esophagus, and lung. Women with higher levels of iron, because they are postmenopausal or hyperferremic, also have a greater risk of cancer (Selby and Friedman 1988; Stevens et al. 1988). Very high serum ferritin levels are associated with other types of neoplasia, including malignant histiocytosis, primary hepatocellular carcinoma (liver cancer), and multiple malignancies (Hann et al. 1989; Ya-You et al. 1989). Other studies show that serum ferritin levels are significantly different in patients with benign and malignant effusions (abnormal fluid accumulations). Of the patients with benign nonneoplastic effusions, 96 percent had serum ferritin levels below 1,000 nanograms (ng) per ml; very high serum ferritin levels of over 3,000 ng per ml were encountered only in patients with malignancies (Yinnon et al. 1988).
Even more indicative is a study of patients afflicted with malignant histiocytosis and virus-associated hemophagocytic syndrome. At the onset of their disease these patients were anemic while having levels of serum ferritin above 1,000 ng per ml. As the disease progressed, serum ferritin levels increased to greater than 12,000 ng per ml (Esumi et al. 1988). Patients were given either chemotherapy or prednisone, depending on their disease. All patients with consistently high serum ferritin levels died within three months of treatment, whereas all patients with serum ferritin values less than 1,000 ng per ml lived beyond three months and are alive today with normal serum ferritin levels (Esumi et al. 1988). The more severe the disease, the higher the ferritin levels. We interpret these dramatic findings as the body’s failed attempt to thwart disease by sequestering as much iron as possible. As the threat of disease decreased, serum ferritin levels began to drop. Noriko Esumi and colleagues (1988: 2071) conclude that “serum ferritin level in histiocytic proliferative disorders is a useful indicator of disease activity in both neoplastic and reactive conditions rather than only a marker of malignant process.”
The popular belief that non-Western groups have a lower incidence of cancer because they have healthier diets than Westerners may be correct, but for reasons other than those usually proposed. As noted earlier, Westerners tend to have high iron levels because of a generally high-iron diet, ubiquitous fortification of cereal products, and the widespread practice of taking vitamins fortified with iron. High iron levels have been implicated in increasing vulnerability to neoplasia. It may be that in attempting to improve our diets to reduce morbidity, we ingest excess iron and so actually worsen our diets and encourage higher levels of morbidity.
Hyperferremia: Causes and Problems
In addition to high serum ferritin values and high concentrations of iron in the liver and spleen, increased concentrations of iron and ferritin occur in the bile of hyperferremic individuals. Bile iron is increased in patients with idiopathic hemochromatosis to as much as twice that of normal individuals; bile ferritin is increased to as much as five times that of normal individuals (Hultcrantz et al. 1989).
There are basically two types of hyperferremia. Inherited hyperferremia is usually termed hereditary (or idiopathic) hemochromatosis. This disorder is autosomal (involving a nonsex chromosome) recessive, the responsible gene being located on the short arm of chromosome 6 (Edwards et al. 1988: 1355). Although the full manifestation of the disease occurs only in affected homozygotes, a small proportion of heterozygotes have been found to exhibit minor abnormalities in laboratory tests that measure the body's iron burden (Edwards et al. 1988). The incidence of hemochromatosis in a presumably healthy population of 11,065 European-Americans (with transferrin saturation values of above 62 percent) was 5 people per 1,000; the amount of iron loading in the liver and consequent organ damage varied widely (Edwards et al. 1988). Studies conducted in Scotland and France yielded a lower number of affected individuals, although the incidence was still substantial, ranging from 1 in 400 to 1 in 517 persons (McLaren et al. 1983: 223).
Thalassemia syndromes (i.e., thalassemia major, intermedia, minor, and minima) are inherited disorders in which intestinal iron absorption is increased due to the hemolytic anemia and ineffective erythropoiesis (i.e., production of red blood cells). The condition is complicated by transfusion therapy (McLaren et al. 1983: 216). The result is hyperferremia. Occasionally a complete absence of serum transferrin (termed atransferrinemia) occurs, in which marrow is without iron but heavy iron deposits are found in the heart, pancreas, liver, and mucous glands (McLaren et al. 1983). The cause is linked to the homozygous expression of an autosomal recessive gene. Heterozygous expression of this recessive gene can also result in anemia.
More unusual is hereditary hypochromic anemia, which is a sex-linked recessive disorder that is variable in its manifestation. A poorly understood disorder, porphyria cutanea tarda, results from a defect in the heme synthetic pathway. This condition is often found in combination with hepatic siderosis (iron storage), possibly from increased dietary iron absorption or heterozygosity for idiopathic hemochromatosis (McLaren et al. 1983: 226).
Acquired hyperferremia results from a number of causes. Three of the most common disorders that produce acquired hyperferremia are transfusional iron overload (also called transfusional hemosiderosis), alcoholism and cirrhosis with hemosiderosis, and African siderosis (or Bantu siderosis). Iron overload can be a by-product of prolonged intravenous iron administration or repetitive transfusions. As a common complication of maintenance dialysis, for example, hyperferremia requires regular phlebotomy (bloodletting) to correct the situation (McCarthy et al. 1989).
Alcohol abuse stimulates absorption of dietary iron and reduces the ability of intestinal epithelial cells to prevent excessive transport of iron into the blood (Mazzanti et al. 1987). Hyperferremia can even occur in individuals who have not yet developed liver damage (Rodriguez et al. 1986).
In most parts of sub-Saharan Africa, African (or Bantu) siderosis results from brewing beer in iron containers and cooking food in cast-iron pots. Large deposits of iron occur in the liver, spleen, and, to a lesser extent, in the macrophages of the bone marrow. One study indicated that the average rural southern African male ingests between 50 and 100 mg of iron daily in beer alone (Bothwell et al. 1965: 893). In a variety of studies of Africans who died from accidental deaths and other causes in South Africa, between 40 and 89 percent of the subjects exhibited varying degrees of iron overload (Bothwell and Bradlow 1960). The condition is detectable as early as late adolescence and becomes most severe between the ages of 40 and 60 years. In another study, 75 percent of the males autopsied manifested siderosis, compared with 25 percent of the women (Bothwell and Isaacson 1962: 524). The lower incidence of siderosis among women is due to a combination of greater iron loss through menstruation and the cultural practice of women drinking less beer and therefore ingesting less iron.
In the past 25 years the frequency of siderosis has dropped. This decline is particularly notable among urban Africans who do not drink as much of the traditional home-brewed beer as the rural population but still ingest large amounts of alcohol in different forms (MacPhail et al. 1979). A study conducted in the mid-1980s in Zimbabwe revealed that 12 percent of the men tested had an elevated serum ferritin level and a transferrin saturation of over 70 percent (Gordeuk, Boyd, and Brittenham 1986). However, although the frequency of siderosis in various areas of Africa has been declining over the past 25 to 30 years, it remains a serious health problem among a large percentage of rural populations.
Because hyperferremia results in the accumulation of iron in various organs, particularly the liver, spleen, heart, and pancreas, it can lead to severe cirrhosis, heart failure, and diabetes mellitus. Although the precise mechanism(s) involved is not well understood, it appears that tissue damage occurs in the liver and other organs as a result of the toxicity of excess iron. Cirrhosis occurs even in hyperferremic patients who do not drink any alcohol, although the combination of alcohol consumption and hyperferremia exacerbates the cirrhosis (McLaren et al. 1983: 240). In addition, different etiologies of iron overload appear to have slightly different manifestations. For example, the ratio of liver iron to marrow iron was much greater in patients with hereditary hemochromatosis than in those with African siderosis (McLaren et al. 1983: 236).
Hyperferremic individuals are plagued by problems other than those associated with the toxicity of excess iron. For reasons detailed earlier, they also suffer from an increased susceptibility to bacterial and parasitic microorganisms and neoplastic diseases (cancer). For example, one study demonstrated an increased risk of contracting Yersinia enterocolitica and Yersinia pseudotuberculosis bacteremia in dialysis patients with iron overload (Boelaert et al. 1987). Similar studies revealed an increased risk and virulence from several iron-sensitive microorganisms, including Listeria monocytogenes, Brucella spp., and Vibrio vulnificus.
Problems Associated with Insufficient Iron
As in the case of hyperferremia, there are two basic categories of hypoferremia (or low circulating-iron levels), based on their etiology. Most common are the acquired anemias, including iron deficiency, drug-induced anemia, and anemia of chronic disease/inflammation. Less common are hereditary anemias, such as sickle-cell anemia and congenital sideroblastic anemia. Some, such as sickle-cell anemia, are classified in a clinical setting by the morphology of the red blood cells.
Macrocytic anemia can be induced by diet, can result from malabsorption, or can be caused by an inherited disorder. Because vitamin B12 (cobalamin) and folic acid (folate) are required for normal red blood cell nuclear growth and synthesis, deficiencies of these nutrients can cause anemia (Simmons 1989: 12-14). Vitamin B12 deficiency from dietary causes is very rare and occurs only in strict vegetarians who exclude all meat, eggs, and milk. However, this deficiency can also arise from a number of disorders, including impaired absorption of B12 or folate, as in pernicious anemia; malabsorption, which can result from certain drugs, from diseases such as sprue and celiac disease, or from gastrectomy; competition from parasites such as the fish tapeworm; hereditary impairment of absorption capabilities; increased requirements of the vitamin due to pregnancy, tumors, and hyperthyroidism; or impaired utilization of the vitamin, as in red cell enzymopathy, abnormal binding proteins, absence of transport protein, or nitrous oxide administration (Simmons 1989: 40).
Folate deficiencies can result from the following: a lack of green vegetables in a diet; alcoholism; impaired absorption due to sprue and celiac diseases; drugs used to treat malignant diseases; malaria and bacterial infections; increased requirements stemming from pregnancy; conditions such as hyperthyroidism; or impaired utilization as occurs with drugs like phenytoin (Simmons 1989: 40-1).
Two types of hypochromic microcytic anemia are most prevalent. One is caused by blood loss that is not counterbalanced by a sufficient dietary intake; the second, less common type results from unmet nutritional demands even without blood loss. In men, bleeding from the gastrointestinal tract is the most frequent cause, and although this condition may occur in women as well, menorrhagia (excessive menstrual flow) is more often responsible. The other principal cause of this variety of anemia is chronic disease, in which iron deficiency appears to form a nonspecific defense mechanism against disease.
Less common is sideroblastic anemia, which can be acquired or congenital. Acquired sideroblastic anemia can be drug- or toxin-induced (from cancer chemotherapy, antituberculous drugs, and ethanol), or it can be idiopathic, as in the preleukemic or dysmyelopoietic syndromes (Beissner and Trowbridge 1986). Patients with sideroblastic anemia tend to accumulate excess iron in the tissues and therefore are vulnerable to many of the problems associated with iron overload. Lead poisoning can also cause hypochromic microcytic anemia.
Some anemias are associated with specific geographical regions. For example, thalassemias are a varied group of inherited disorders characterized by one or more defects in the synthesis of the normal alpha or beta globin chains of hemoglobin. They can occur in homozygous or heterozygous states and include thalassemia major, thalassemia intermedia, thalassemia minor, and thalassemia minima (Simmons 1989: 55). The geographical distribution of thalassemia is primarily in the Mediterranean region, although it is also found in Southeast Asia, the Middle East, and the Orient and among immigrants from those areas. There is much variation in the clinical manifestations of these genetic disorders. The defect in hemoglobin chains causes a reduction in hemoglobin in afflicted individuals. Ironically, and as discussed earlier, thalassemia can also result in iron overload of some organs because intestinal iron absorption is increased due to the hemolytic anemia and ineffective production of red blood cells. In the heterozygous form, however, thalassemia has been postulated to be a deterrent to malaria infections, perhaps by causing a reduction in the amount of circulating iron.
A normocytic normochromic anemia that involves the premature destruction of red blood cells is sickle-cell anemia, an inherited autosomal dominant disorder. Sickle-cell anemia is lethal in the homozygous form, but it can also become symptomatic in heterozygotes in situations of oxygen deprivation, such as at high altitudes. Geographically, sickle-cell anemia is most common in equatorial Africa but is also found, to a lesser extent, in the Mediterranean region and India. This distribution has been attributed to the improved immunity against malaria parasites (e.g., Plasmodium falciparum) enjoyed by individuals who are heterozygous for the sickle-cell trait. Approximately 8.5 percent of American blacks are heterozygous for the sickle-cell trait and are relatively symptom-free; in contrast, homozygous individuals suffer from hemoglobin levels between 5 and 9 g per deciliter (dl), leg ulcers, hematuria, and other afflictions (Simmons 1989: 68-70).
Common autosomal dominant normocytic normochromic anemias that represent a defect in the red cell membrane include hereditary spherocytosis, found primarily in people of northern European descent, and hereditary elliptocytosis, found worldwide (Simmons 1989: 62-4). Spherocytic (congenital and acquired) anemia involves a deficiency in the glucose-6-phosphate dehydrogenase enzyme (abbreviated G6PD) that is sex-linked, with full expression in affected males and partial expression in females (Simmons 1989: 65). Its geographic distribution is worldwide, with the highest frequency in African populations, although it is also found in Italian, Greek, Asian, and Jewish populations.
Congenital nonspherocytic hemolytic anemia results from deficiencies in several red cell enzymes, including glucose-6-phosphate dehydrogenase (G6PD). This condition is found in Asian, European, and Mediterranean populations, as well as in other populations to a lesser extent (Simmons 1989). In Mediterranean peoples, this anemia can also be triggered by ingestion of, or contact with, the fava bean (in some sensitive males, usually under the age of 12, even by inhaling the bean's pollen). It can likewise be triggered by the use of antimalarial and other drugs in African and Mediterranean populations. A third trigger, worldwide, is infection with viral or bacterial pathogens in persons with G6PD deficiency; implicated agents and conditions include Escherichia coli, Salmonella, Streptococcus, Rickettsiae, viral hepatitis, pneumococcal pneumonia, malaria, and rheumatoid arthritis (Simmons 1989: 65).
A variety of anemias are associated with immunological disorders, including transfusion reactions, ABO or Rh blood-group incompatibility between fetus and mother, and autoimmune hemolytic anemia, a condition in which antibodies or lymphocytes attack cells of the person who produced them. The most common type of autoimmune hemolytic anemia is the warm-antibody type, so called because the autoantibody reacts most efficiently with red cells at 37° Celsius (C); it occurs especially in systemic lupus erythematosus and lymphomas. Less common is the cold-antibody type, in which the antibodies are optimally reactive at temperatures less than 37° C; it includes cold hemagglutinin disease and paroxysmal cold hemoglobinuria (Simmons 1989: 78-81).
Conclusions and Future Directions for Research
As we learn more about iron, we learn more of its multifaceted interrelationship with health and disease in a variety of conditions and situations. It is also apparent that this interrelationship is a complex one. For example, lowered iron levels do not necessarily mean higher morbidity, nor is anemia primarily a nutritional disorder. We predict that future studies of iron will concentrate on its role in health, especially during the periods of rapid growth that occur in childhood and pregnancy, and in response to chronic infection, inflammation, and malignancy.
Further elucidation of iron as a cause of, and as a defense against, disease should reduce morbidity levels all over the world. Therefore, we have much to gain through multidisciplinary research that investigates all aspects of iron. Presented here is an overview of the fascinating picture of iron, a picture only now coming into focus.