Nutrition and the Decline of Mortality

John M. Kim. Cambridge World History of Food. Editors: Kenneth F. Kiple & Kriemhild Conee Ornelas, Volume 2, Cambridge University Press, 2000.

Together with economic growth and technological advances, improvements in health and longevity are the typical hallmarks of a population’s transition to modern society. Among the earliest countries to undergo such experiences were England and France, where mortality rates began declining steadily during the eighteenth century. Elsewhere in western and northern Europe, health and longevity began to improve during the nineteenth century. In the twentieth century, this pattern has been replicated in developing countries throughout the world.

Understanding the causes that underlie this pattern of mortality decline is important not only as a matter of historical interest but also because of the practical implications for policies that aim to improve life in developing countries, and for forecasting changes in mortality in developed countries. Accordingly, there has been much interest in identifying the causes of patterns of mortality decline and measuring their impact. By the 1960s, a consensus had emerged that the factors underlying mortality trends could be grouped into four categories, as reported in a study by the United Nations (UN) (1953): (1) public-health reforms, (2) advances in medical knowledge, (3) improved personal hygiene, and (4) rising income and standards of living. A later UN study (1973) added a further category, “natural causes,” such as a decline in the virulence of pathogens.

McKeown’s Nutrition Hypothesis

Against this consensus view, the British epidemiologist Thomas McKeown argued in a series of influential articles and books from 1955 to the mid-1980s that the contribution of medicine to the decline of mortality before the twentieth century had been relatively small. Rather, relying on a residual argument that rejected other factors as plausible explanations, he proposed that improvement in nutrition was the primary cause of the mortality decline.

McKeown’s view was best set forth in his 1976 book, The Modern Rise of Population. In it, he argued that the modern growth of population was attributable to a decline of mortality rather than to changes in fertility, and he sought to identify what had brought about death-rate reductions since the eighteenth century. His investigation relied heavily—indeed, almost exclusively—on cause-of-death information, which had been nationally registered in England and Wales since 1837. McKeown himself was aware that his evidence was geographically limited and did not fully cover the time period during which mortality had declined. Nevertheless, he believed that his findings could be generalized to explain the modern mortality experience of other European countries.

McKeown’s analysis of causes of death in England and Wales during the period from 1848-54 to 1971 led him to the conclusion that the reduction of the death rate was associated predominantly with infectious diseases. Of the decline in mortality during that period, 40 percent resulted from a reduction in airborne diseases, 21 percent from reduction in water- and foodborne diseases, and 13 percent from reduction in other types of infections. The remainder (26 percent) was attributable to a lesser incidence of non-infective conditions. Thus, McKeown found that three-quarters of the mortality decline since the mid-nineteenth century could be explained by the reduction in infectious diseases. He further reasoned that despite sketchy historical evidence from the period before the beginning of cause-of-death registration, this trend could be extrapolated backward to the start of the modern mortality decline around the beginning of the eighteenth century. By his reckoning, 86 percent of the reduction in death rates from then until 1971 had resulted from a decline in mortality from infectious diseases. For McKeown, then, this conclusion was the central feature of mortality decline and constituted evidence against which the merits of alternative explanations of mortality decline would be judged.

McKeown methodically classified possible reasons for the lesser incidence of mortality from infectious diseases into four categories: (1) changes in the character of the diseases themselves, (2) advances in medical treatment, or the prevention and treatment of diseases by immunization and therapy, (3) reduced general exposure to disease, and (4) increased general resistance to disease because of improved nutrition. Taking these categories one at a time, he systematically considered each of the major disease groups (airborne infections, water- and foodborne infections, and other infections) in turn, and concluded that the only category that satisfactorily explained the decline in mortality was increased disease resistance resulting from improved nutrition.

In examining changes in the character of diseases as a possible explanation of mortality decline, McKeown found little reason to believe that this had been responsible for a substantial reduction in deaths from infectious diseases. By changes in the character of diseases he meant changes in the interaction between the infectious microorganism and the human host and whether such changes meant a decline in the virulence of the pathogen or an increased resistance in the human host through natural selection.

Although he acknowledged that a change in the character of scarlet fever did result in a reduction of deaths from that disease in the latter half of the nineteenth century, McKeown thought it “frankly incredible” that the other major airborne diseases of the period, which included tuberculosis, influenza, and measles, had all undergone such fortuitous changes simultaneously. On the contrary, he pointed out that tuberculosis, for example, continues to have devastating effects on populations not previously exposed to it. Nor did he think it likely that natural selection could have increased people’s resistance to such diseases, leading to a decline in mortality. For such genetic selection to have occurred, McKeown pointed out that certain deleterious effects of early industrialization and urbanization, such as crowding, should have produced high mortality in the eighteenth century. Through natural selection, that experience would have left a population with greater resistance to the airborne diseases, which would account for lower mortality rates later on. However, McKeown believed that death rates during the eighteenth century had simply been too low to support such a theory.

Water- and foodborne diseases warranted a similar conclusion. McKeown did not altogether rule out the possibility that changes in the character of those diseases could have played some role in the reduction of mortality associated with them. But he thought that improved hygiene, leading to reduced exposure, was a much more convincing explanation. As for vector-borne diseases, typhus was mentioned as one that might have been affected by a change in its character. However, the contribution of the decline of this disease to the fall of mortality over the past three centuries was small.

The next possible reason for a decline in mortality from infectious diseases that McKeown dealt with was medical advances, and it is here that he marshaled historical evidence most impressively. He built his case against a significant role for medical treatment by examining the temporal pattern of death rates of the most lethal diseases. He first took care to point up the distinction between the interests of the physician and those of the patient. Although since the eighteenth century many important advances have been made in medical knowledge and institutions, such advances were not always immediately effective against diseases; they often required considerable intervals before becoming of practical, demonstrable benefit to the patient. In making this distinction, McKeown contended that whether different preventive measures and treatments in history had been effective could not be judged reliably from contemporary assessments; instead, their efficacy would best be determined in light of critical present-day knowledge.

Tuberculosis, the largest single cause of death in the mid-nineteenth century and the decline of which was responsible for a fifth of the subsequent reduction in mortality, served as a case in point. The identification of the tubercle bacillus by Robert Koch in 1882 was an important event in the progress of medical knowledge, but its immediate contribution to reducing tuberculosis was minimal. In addition, of the numerous treatments that were tried in the nineteenth and early twentieth centuries, none could be judged by modern medical knowledge to have been effective against the disease. Rather, McKeown suggested that effective treatment actually began only with the introduction of streptomycin in 1947 and bacille Calmette-Guérin (BCG) vaccination on a large scale in England and Wales from 1954. By these dates, however, mortality from tuberculosis had already fallen substantially. Roughly 60 percent of the decline since 1848-54 had already taken place by the turn of the twentieth century, and this decline continued up to the introduction of effective chemotherapy around 1950. If the medical contribution was meaningful only after 1950, it was therefore impossible for medical advances against tuberculosis to have been a major factor in the mortality decline, most of which had taken place by then.

Similarly, when the temporal patterns of mortality decline from the other major airborne diseases of the nineteenth century were compared with the dates of introduction of effective treatment or immunization against them, McKeown found that most of the fall in the death rates they produced had also occurred before effective medical measures became available. An exception to this was smallpox, the decline of which since the mid-nineteenth century was thought to be the result of mass vaccination. But McKeown pointed out that the reduction in smallpox mortality was associated with only 1.6 percent of the reduction in the death rate from all causes. McKeown also doubted that the rapid decline of mortality from diseases spread by water and food since the late nineteenth century owed much to medical measures. He thought that, in many cases, immunization was relatively ineffective even at the time he wrote, and that therapy of some value was not employed until about 1950.

Finally, reduced exposure to infection was considered as a possible explanation of mortality decline. In the case of airborne diseases, McKeown once again found little reason to think that reduced exposure had been a major factor in their decline. Indeed, he thought that the fall in deaths from measles and scarlet fever owed very little to reduced exposure, and although he allowed that reduced exposure did play a role in the decline of other airborne diseases such as tuberculosis, whooping cough, diphtheria, and smallpox, he believed that this was only a secondary consequence of other influences that lowered the prevalence of disease in the community.

In the case of water- and foodborne diseases, McKeown conceded that reduced exposure had played a greater role in reducing mortality than it had with airborne diseases, especially during the second half of the nineteenth century. Purification of water, efficient disposal of sewage, provision of safe milk, and improved food hygiene all contributed to the decline of mortality. He also felt that personal hygiene, particularly regular bathing, may have encouraged the abatement of typhus in the eighteenth and nineteenth centuries.

In summary, McKeown dismissed the ability of both changes in the character of infectious diseases and advances in medical treatment to account for the modern mortality decline. He thought the contribution of sanitation and hygiene had been somewhat more significant but still of limited scope and primarily confined to the second half of the nineteenth century. Therefore, advances in general health based on improved nutrition constituted the only possibility left that could explain the mortality decline, and, in McKeown’s view, it was also the most credible explanation.

In support of his circumstantial case for nutrition, which he had arrived at by a process of elimination, McKeown offered some pieces of positive evidence. First, he pointed out that the great expansion of the English and Welsh populations during the eighteenth and early nineteenth centuries had been accompanied by an important increase in domestic food production. However, as McKeown conceded, the central question for the nutritional argument was whether the amount of food per capita, rather than total food consumption, had increased during that period. He found that evidence to settle the matter directly was unavailable and chose instead to consider the relationship between malnutrition and infection. Thus, as the second piece of evidence in support of nutrition, he pointed to the situation in developing countries, where malnutrition contributes largely to the high level of infectious deaths. Malnourished populations are more susceptible to infections and suffer more seriously when they are infected. McKeown also emphasized the dynamic interaction that exists between nutrition and infection, frequently characterized as synergism. Since infections adversely affect nutritional status, a vicious cycle between disease and malnutrition often results—a cycle characteristic of poverty and underdevelopment.

Reaction to the McKeown Thesis

McKeown’s thesis drew the attention of scholars from a wide spectrum of disciplines. As it provided a theoretical framework that wove together such themes as industrialization, urbanization, rising standards of living, changing health, and shifting demographic patterns, it could hardly have failed to attract the interest of social scientists, especially demographers and economic historians. A number of studies extended McKeown’s argument to the history of mortality rates in the United States (Meeker 1972; Higgs 1973, 1979; McKinlay and McKinlay 1977). In all probability, the contemporary interest in McKeown’s nutritional thesis was also in part a consequence of great public concern over the “population bomb”—the fear that the explosive worldwide population increase in the post-World War II period would eventually lead to catastrophic shortages of food and other natural resources. It is not difficult to understand that an audience constantly reminded, and so vividly, of the Malthusian link between food supply and population growth would have been receptive to McKeown’s argument giving primacy to nutrition as the factor explaining the decline of mortality.

Reaction to the McKeown thesis, however, was by no means uniformly favorable. Although acknowledging that nutrition had played a role in mortality decline, many scholars nevertheless felt that McKeown had greatly overstated its importance. Of particular concern to critics of the nutrition hypothesis were the gaps in historical evidence. McKeown himself had freely conceded that the basic data were inadequate, but he had still believed that enough pieces existed to “cover the canvas” with a sketch or a comprehensive interpretation, the details of which could be filled in later as data and methodology improved. Not surprisingly, much of the research that countered his viewpoint addressed this evidential gap: Some offered differing interpretations of the sparse existing data, whereas others unearthed new evidence. Whatever the merits of McKeown’s own arguments, there is no question that the debate he ignited over the role of nutrition in the modern decline of mortality was productive in that it defined the issues to be researched and spurred the search for new evidence.

Some critics were skeptical that an insufficient supply of food had been responsible for the high mortality rates in preindustrial societies. P. E. Razzell (1974), for example, questioned the food-supply hypothesis, citing the absence of a significant mortality differential between social classes. If nutrition was the critical factor, one would expect the aristocracy to enjoy lower mortality levels than the poor. Yet, in what came to be known as the “peerage paradox,” he found that there was little difference between the mortality rates of the peerage and those of the laboring classes in England before 1725, and although presumably the poorer classes should have benefited more from any overall improvement in diet, the reduction in mortality after 1725 was greater among the aristocracy. Moreover, M. Livi-Bacci (1991) noted that in several other European countries as well, the aristocracy had not enjoyed any advantage in mortality over the lower classes. Furthermore, his examination of the European experience from the time of the Black Death to the era of industrialization led him to doubt whether nutritional improvement had shown any long-term interrelationship with mortality rates.

Other criticisms of the McKeown thesis were mainly directed against his underestimation of the role of public health. S. H. Preston and E. van de Walle (1978) concluded that, at least in France, water and sewage improvements had played a major role in urban mortality decline during the nineteenth century. Similarly, S. Szreter (1988) and A. Hardy (1993) argued that in England, preventive public-health initiatives had made significant contributions to the decline in prevalence and severity of a number of diseases, including smallpox, typhoid fever, typhus, and tuberculosis. In addition, purification of milk was thought specifically to have contributed to the fall in infant mortality (Beaver 1973).

McKeown’s heavy reliance on English and Welsh data may also have caused him to overstate the case for nutrition. In an analysis of changes in mortality patterns among 165 populations, Preston (1976) pointed out that the English and Welsh experience was exceptional. Between 1851-60 and 1891-1900, decreased mortality from respiratory tuberculosis accounted for 44 percent of the drop in age- and sex-standardized death rates in England and Wales, whereas the normal reduction in other countries had been 11 to 12 percent. Similarly, a decrease in deaths from other infectious and parasitic diseases accounted for 48 percent of the mortality decline in England, compared to the standard 14 percent elsewhere. Preston (1976: 20) concluded that “the country with the most satisfactory early data appears to offer an atypical account of mortality decline, a record that may be largely responsible for prevailing representations of mortality reduction that stress the role of specific and readily identified infectious diseases of childhood and early adulthood.”

During several decades of often heated debate over the nutrition hypothesis, virtually all aspects of McKeown’s argument have been examined in great detail by critics and supporters alike. The points on which there is agreement after such prolonged and extensive investigation are certainly worth noting. That nutrition did play a role in the mortality decline is not disputed; the disagreement is over the magnitude of its contribution. Although McKeown was generally not receptive to criticisms of his thesis, in a later work (1988) he did acknowledge that the contribution of public-health measures had been greater than he had originally concluded. One important point of McKeown’s that has survived intact is that specific therapeutic medical treatments had little impact on mortality reduction.

Generalizing the last point, some critics have commented that no single factor by itself—including nutrition—appears able to account for the mortality decline. Preston (1976) found that nutrition, as proxied by income levels, accounted for only about 20 percent of the fall in mortality between the 1930s and the 1960s. In fairness to McKeown, however, it should be noted that the period covered by Preston’s study coincides with the era of antibiotics, whereas McKeown’s arguments relied on the trends of mortality patterns before that time.

Diet Versus Nutritional Status

In the debate over the role of nutrition in mortality decline, a great deal of confusion has been caused by differences in the ways the term “nutrition” has been understood and used by different investigators. Some have interpreted nutrition to mean food supply or diet, whereas others have followed epidemiologists or nutritionists by taking it to mean nutritional status, or net nutrition, which is the balance between the intake of nutrients and the claims against it. McKeown, the original proponent of the nutrition hypothesis, used the term in both senses, although his writings indicate that he was aware of the difference between the two concepts. Lately, R. W. Fogel (1986, 1993, 1994) has suggested that when nutrition is mentioned in connection with food, the terms “diet” or “gross nutrition” be used. He advocated that the term “nutrition” itself be reserved for use in the sense of net nutrition—the balance of nutrients that becomes available for cellular growth—to avoid any further confusion.

There are clear advantages to adopting such a definition, not least of which is that it clarifies and suggests new avenues of research (as discussed shortly). However, as this new definition of nutrition is still not completely free of some pitfalls that could lead to further misunderstanding, it is worth considering in some detail what it actually means. That is, what are the determinants of nutritional intake, and what factors count as claims against that intake?

In a broad sense, nutritional intake may be taken to mean food supply, as has often been assumed in the debate over the historical role of nutrition. But complications exist even in the quantification of this relatively simple measure. Leaving aside the difficulty of obtaining reliable figures for gross food production in the past, one must also face the tricky issue of how to estimate the losses in nutrients that occur because of different, and often inefficient, food-storage, preparation, and preservation technologies.

These considerations, in turn, immediately suggest that quality, as well as quantity, is an important aspect of nutrition, a fact that may go far in explaining the peerage paradox. McKeown (1976) maintained that the peerage must have been eating unhealthy, or even infected, food, in which case the larger quantities they consumed could hardly be taken as grounds for expecting better health than that found among the lower classes. Fogel (1986) has also pointed out that toxic substances in the diet of the aristocrats, such as large quantities of salt and alcohol, would have had negative effects on their health and mortality. He has emphasized that the impact on the overall mortality rate of the peerage would have been especially large if it showed up mainly as high infant mortality, possibly because of adverse effects on the fetuses of mothers who apparently imbibed huge quantities of wine and ale on a daily basis. Moreover, Fogel has noted that the decline in infant and child mortality among the peerage after 1725 was paralleled by a gradual elimination of the toxic substances from the aristocratic diet between 1700 and 1900.

Another important way in which nutritional intake can be significantly influenced is through the presence of diseases that affect the body’s ability to absorb nutrients. This ability can be measured as Atwater factors—food-specific rates at which the energy value of a person’s food intake is transformed into metabolizable energy. Atwater factors are usually in the 90 to 95 percent range for healthy populations, but they can be as low as 80 percent among undernourished populations in which recurrent episodes of acute diarrhea may impair the absorption of nutrients (Molla et al. 1983).
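
As a rough illustration of what such differences in absorption imply, the short sketch below (in Python, with hypothetical figures rather than data from any study cited here) scales a fixed gross intake by an Atwater-style availability factor to show how much metabolizable energy is lost when absorption is impaired.

```python
# Illustrative sketch only: how an Atwater-style availability factor
# converts gross food energy into metabolizable energy. The factors
# 0.93 and 0.80 are hypothetical values within the ranges cited above.
def metabolizable_energy(gross_intake_kcal: float, atwater_factor: float) -> float:
    """Energy actually available to the body after absorption losses."""
    return gross_intake_kcal * atwater_factor

# The same 2,400 kcal diet under two absorption regimes:
healthy = metabolizable_energy(2400, 0.93)         # 2,232 kcal
undernourished = metabolizable_energy(2400, 0.80)  # 1,920 kcal
print(healthy - undernourished)                    # ~312 kcal lost to impaired absorption
```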

Against the nutritional intake, all claims on nutrients must be deducted in order to arrive at the figure for nutritional status, or the balance that can be metabolized for cellular growth. The claims on nutrients can be broadly classified into three categories of energy expenditures: the energy required for basic maintenance of the body, the energy for occupational and discretionary activities, and the energy to fight infections. The first of these categories, basic maintenance, accounts for most of the body’s energy usage and consists mainly of the Basal Metabolic Rate (BMR). BMR is the energy required to maintain body temperature and to sustain the normal functioning of organs, including heart and respiratory action. Roughly speaking, it is equivalent to the energy expended during rest or sleep and can be considered the default cost of survival in terms of energy.
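
The accounting implied by this definition can be set out in a few lines. The sketch below (hypothetical figures, not drawn from any source cited here) treats net nutrition simply as metabolizable intake minus the sum of the three categories of claims.

```python
# Minimal accounting sketch of "net nutrition": the balance left after
# the three broad categories of claims are deducted from intake.
# All kilocalorie figures below are hypothetical.
def net_nutrition(intake_kcal, bmr_kcal, activity_kcal, infection_kcal):
    """Energy available for cellular growth; a negative value means
    the body is drawing on its own stores."""
    return intake_kcal - (bmr_kcal + activity_kcal + infection_kcal)

# A healthy adult, and the same adult during a febrile infection
# with reduced appetite:
print(net_nutrition(2600, 1600, 900, 0))    # +100 kcal surplus
print(net_nutrition(1800, 1600, 600, 300))  # -700 kcal deficit
```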

Although there is some variation among individuals, BMR varies mainly by age, sex, and body weight. In particular, the association with body weight is strong enough that within any age/sex category, BMR can be predicted by a linear equation in body weight alone (WHO 1985). The BMR for an adult male, aged 20 to 39 and living in a moderate climate, ranges between 1,350 and 2,000 kilocalories (kcal) a day, which would amount to somewhere between 45 and 65 percent of his total daily energy intake. It should be noted that BMR does not allow for such basic survival activities as eating, digestion, and minimal personal hygiene.
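
The linear prediction mentioned above can be illustrated with the widely quoted FAO/WHO/UNU (1985) coefficients. The exact coefficients and age bands below should be taken as approximations used for this sketch, not as the figures behind the range quoted in the text.

```python
# Sketch of the linear BMR prediction described in the WHO (1985) report:
# within an age/sex band, BMR is approximately a linear function of body
# weight. The coefficients are the commonly quoted FAO/WHO/UNU (1985)
# values and should be treated as approximate.
def bmr_kcal_per_day(weight_kg: float, age: int, sex: str) -> float:
    if sex == "male":
        if 18 <= age < 30:
            return 15.3 * weight_kg + 679
        if 30 <= age < 60:
            return 11.6 * weight_kg + 879
    elif sex == "female":
        if 18 <= age < 30:
            return 14.7 * weight_kg + 496
        if 30 <= age < 60:
            return 8.7 * weight_kg + 829
    raise ValueError("age/sex band not covered in this sketch")

print(bmr_kcal_per_day(65, 25, "male"))  # about 1,670 kcal/day
```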

Occupational and discretionary activities account for most, if not all, of the energy requirements beyond basic maintenance. Discretionary activities include walking, recreation, optional household tasks, and exercise. The pattern of energy expenditure among these categories will necessarily vary with individual activity patterns, which are influenced greatly by age, sex, occupation, culture, and technology. A World Health Organization (WHO) report (1985) estimated the energy requirements of a young male office clerk to be 1,310 kcal for basic maintenance (51 percent), 710 kcal for work (28 percent), and 560 kcal for discretionary activities (22 percent). The energy usage of a young subsistence farmer with a moderate work level in Papua New Guinea was given as 1,060 (40 percent), 1,230 (46 percent), and 390 (15 percent) kcal over the same expenditure categories. In yet another pattern of energy usage, a young housewife in an affluent society requires 1,400 (70 percent), 150 (8 percent), and 440 (22 percent) kcal, respectively.
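
The percentage shares quoted above follow directly from the kilocalorie figures in the WHO report; the brief sketch below simply redoes that arithmetic.

```python
# Recomputing the expenditure shares from the kilocalorie figures quoted
# above (WHO 1985 examples as cited in the text).
profiles = {
    "office clerk":       {"maintenance": 1310, "work": 710,  "discretionary": 560},
    "subsistence farmer": {"maintenance": 1060, "work": 1230, "discretionary": 390},
    "affluent housewife": {"maintenance": 1400, "work": 150,  "discretionary": 440},
}

for name, parts in profiles.items():
    total = sum(parts.values())
    shares = {k: round(100 * v / total) for k, v in parts.items()}
    print(f"{name}: {total} kcal total, shares (%) {shares}")
# clerk ~51/28/22, farmer ~40/46/15, housewife ~70/8/22 percent
```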

The adverse effects of infections on nutrition go far beyond their impact on the body’s ability to absorb nutrients. Fever directly increases metabolic demands, and the excess energy expenditure so induced by an infection therefore constitutes a separate, additional claim on nutrients. Other effects of infections that are similarly harmful include the loss of nutrients resulting from vomiting, diarrhea, reduced appetite, or restrictions on diet. For instance, R. Martorell and colleagues (1980) estimated that during an episode of diarrhea, the loss of total energy intake from reduced food intake alone can be as much as 40 percent. Malnutrition also has the effect of weakening the immune system and, thus, making the body more susceptible to infections, which can have further negative effects on nutrition in a deteriorating cycle.

The preceding discussion shows that nutritional balance is jointly determined by nutritional intake and various claims on that intake. Both intake and claims encompass a range of factors broad enough to include almost every determinant of health and mortality. There is obviously a great difference between nutrition in the sense of diet and such a broadly inclusive concept as net nutrition. It is therefore natural to ask whether the precision of the definition of nutritional status masks a vagueness in its practical usefulness.

As a hypothetical example, suppose a new medical therapy is introduced that effectively cures a certain infectious disease, and that this therapy has the effect of substantially improving nutritional status by saving nutrients that would formerly have been lost to prolonged infection. If the overall result is a reduction in mortality, credit ought to go to the new medical treatment, but by definition, it can also be said that the lower death rate is the result of improved nutrition. Similarly, an effective public-health measure that lowers the prevalence of some infectious disease could still be considered a case in which improved nutrition leads to mortality decline.

Evidently the different causes of mortality decline can no longer be considered mutually exclusive when nutrition is defined as net nutrition, and it is no longer clear what to make of statements that compare the contribution of nutrition to that of medical treatment or public-health measures. This ambiguity is perhaps especially noticeable and problematic to those who continue to think about the role of nutrition in the context of the debate initiated by McKeown.

A related difficulty that follows from the new definition is that, even today, nutrition as a net balance is quite difficult to measure accurately. Estimating how nutritional status has changed throughout the past several centuries is even harder. In order to calculate claims against the intake of nutrients, the determinants of those claims—such as a population’s body weight distribution (from which BMR is derived), its members’ activity levels in work and leisure, and the prevalence, severity, and duration of infections suffered by them—all must be taken into account. Once again, the new definition does not appear to be very helpful for those interested in applying it directly to resolve issues raised by the nutrition hypothesis of mortality decline.

It should be noted, however, that ambiguity and difficulty of measurement are not problems newly introduced by the adoption of a definition of nutrition as net nutritional balance. Instead, the new definition serves to highlight the fact that the debate over the role of nutrition has lacked agreement on the exact meaning of nutrition. The view of nutrition as the balance between nutrient intake and energy expenditure also suggests that influences on health and longevity cannot be easily or even meaningfully sorted into discrete, measurable categories. Rather, health and mortality outcomes are now viewed as the joint result of several different processes that continually interact to determine health and aging.

In recent years, such a reassessment of the relationship between nutrition, health, and mortality has led some researchers, notably Fogel and his colleagues, to shift their focus away from the original debate over McKeown’s nutrition hypothesis to other aspects of nutrition and mortality. The balance of this chapter briefly considers these new developments.

Anthropometric Measures

The difficulty of measuring or estimating nutritional status—especially with regard to the potential data deficiencies in historical research or in studies of developing countries—has prompted investigators to search for other measures that can be used as proxies for nutrition. Fortunately, there exists a class of measures that are comparatively easily observed (often even far back into the past) and are also known to be sensitive to variations in nutritional status. These are anthropometric measures, especially those of body height and body weight. Since weight is positively correlated to height, a measurement of weight-for-height is usually used, the most popular being the Body Mass Index (BMI), also known as the Quetelet index. It is derived as body weight in kilograms (kg) divided by the square of height in meters (m), or kg/m².
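
In computational terms the index is a one-line calculation; the example below uses a hypothetical subject of 70 kilograms and 1.75 meters, giving a BMI of about 22.9.

```python
# BMI (Quetelet index) as defined above: weight in kilograms divided by
# the square of height in meters.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(round(bmi(70, 1.75), 1))  # 22.9 for a hypothetical 70 kg, 1.75 m adult
```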

Height and BMI reflect different aspects of a person’s nutritional experience. Adult height is an index that represents past cumulative net nutritional experience during growth. Studies in developing countries have shown that nutrition in early childhood, from birth to about age 5, is especially important in determining final adult height (Martorell 1985). Malnutrition among children of this age causes growth retardation, which is known as stunting. Although it is possible for some “catch-up” growth to occur later, this is unlikely to happen in an impoverished environment that was probably responsible for the malnourishment and stunting in the first place. BMI, in contrast, primarily reflects a person’s current nutritional status. It varies directly with nutritional balance; a positive balance increases body mass, and a negative balance indicates that the body is drawing on its store of nutrients.

Interest in height and BMI as proxy measures for nutrition has, in the context of research on nutrition and mortality, naturally been directed to their association with morbidity and mortality. Although anthropometric measures had previously been used as predictors of the risks of morbidity and mortality for young children, H. Th. Waaler’s (1984) large-scale study of Norwegian adults was among the first to show that height and weight could be used to predict morbidity and mortality risks for adults as well. When Waaler analyzed age- and sex-specific risks of dying by height classes among 1.8 million Norwegian adults between 1963 and 1979, he found that there was a stable relationship between adult height and mortality risk that could be characterized as a J-shaped curve. Within each age/sex group, mortality risk was highest among the shortest group of people and declined at a decreasing rate as height increased. This negative association between height and mortality risk has received much attention in historical research, which has sought to tie the increasing secular trends in the mean heights of different populations to parallel improving trends in their health and life expectancies (Fogel 1986; Floud, Wachter, and Gregory 1990; Komlos 1994; Steckel 1995).

Waaler also found a stable relationship between BMI and mortality risk, which can be characterized as a U-shaped curve. Risk is unresponsive to weight over a wide range, from about 20 to 28 BMI, but it increases sharply at either tail beyond that range. As historical data on weight distribution are harder to come by than data on height, little research making use of this risk-BMI relationship has yet been done. However, in an interesting development, some attempts have been made to use height and BMI simultaneously to predict mortality risk, rather than using each anthropometric measure separately (Fogel 1993, 1994; Kim 1996).

As height represents an individual’s early nutrition and BMI his current nutritional status, the resulting height-weight-risk surface makes better use of all nutritional information than the height-risk or BMI-risk curves. Fogel (1994) used such a surface to suggest that the combined effect of increases in body height and weight among the French population can explain about 90 percent of the French mortality decline between 1785 and 1870, but only about 50 percent of the actual mortality decline since then. Another study, using a similar surface to track secular changes in height, weight, and the risks of old-age morbidity in the United States, also has found that factors other than height or BMI explain a larger share of elderly health improvement from 1982 to 1992 than during the period from 1910 to the early 1980s (Kim 1996).
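
The idea of a joint height-weight-risk surface can be illustrated schematically. In the sketch below, the grid of relative risks is invented purely for illustration (it does not reproduce Waaler's or Fogel's estimates); the point is only that risk is read off, or interpolated from, a surface indexed by both height and BMI.

```python
# Toy illustration of a height-by-BMI mortality-risk surface. The grid
# values are invented for this example and are NOT Waaler's or Fogel's
# estimates; bilinear interpolation reads the surface at any height/BMI.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

heights_m = np.array([1.55, 1.65, 1.75, 1.85])
bmis = np.array([18.0, 22.0, 26.0, 30.0])
# Rows indexed by height, columns by BMI; risk relative to a reference of 1.0.
relative_risk = np.array([
    [1.8, 1.4, 1.3, 1.6],
    [1.5, 1.1, 1.0, 1.3],
    [1.3, 0.9, 0.9, 1.2],
    [1.2, 0.8, 0.8, 1.1],
])

surface = RegularGridInterpolator((heights_m, bmis), relative_risk)
print(surface([[1.70, 23.5]]))  # interpolated relative risk for one hypothetical person
```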

In closing this chapter, it seems appropriate to devote a bit more attention to recent methodologies that may help to shed more light on the relationship between nutrition and mortality decline. For example, in addition to height and BMI, waist-to-hip ratio (WHR) is another anthropometric measure that has gained acceptance as a predictor of chronic diseases, especially coronary heart disease and non-insulin-dependent diabetes mellitus (Bjorntorp 1992; Hodge and Zimmet 1994; Baumgartner, Heymsfield, and Roche 1995). Although BMI and WHR are generally correlated and are therefore both linked to the risk of chronic diseases associated with obesity, a weakness of BMI is that it does not indicate body composition, such as lean body mass versus total body fat, nor the distribution of fat within the body.

By contrast, WHR, as a measure of central adiposity, has been found in a number of studies to have predictive power independent of BMI. J. M. Sosenko and colleagues (1993) have reported that WHR was significantly higher among diabetic women and also among men (although not as markedly as among women), whereas BMI failed to differentiate between diabetics and nondiabetics. Similarly, A. R. Folsom and colleagues (1994) examined Chinese men and women aged 28 to 69 from both urban and rural areas and found that abdominal adiposity—represented by an elevated WHR—was independently associated with cardiovascular risk factors. Although BMI was also associated in a similar direction with most of these risk factors, the mean level of BMI in this study was relatively low, ranging from 20.1 to 21.9 among 4 sex- and age-groups, confirming that WHR is useful as a predictor of cardiovascular disease even among a lean Asian population. S. P. Walker and colleagues (1996) found in a five-year follow-up study of 28,643 U.S. male health professionals that BMI was only weakly associated with stroke risk, but that WHR was a much better predictor even when BMI, height, and other potential risk factors were taken into account. Yet although these studies suggest that there is a strong case for using WHR in conjunction with or perhaps as a substitute for BMI, it should be remembered that data on BMI are often more easily obtained or reasonably estimated, making BMI useful as a predictor or proxy for health or nutritional status in studies involving historical populations or in other situations in which detailed anthropometric data are lacking.

The preceding discussion on the role of adult height as a predictor of morbidity and mortality in middle or old age, combined with the theory that nutrition during the very early stages of life is a major determinant of adult height, suggests the possibility that the roots of some later-in-life diseases can be found very early in life, and intriguing research carried out by D. J. P. Barker and others (Barker 1993, 1994) has made this possibility seem a probability. In focusing on events surrounding birth for explanations of diseases in later life, Barker and his colleagues located United Kingdom birth records from Hertfordshire, Sheffield, and Preston, all of which contained detailed information on the infants. The Hertfordshire records covered births between 1911 and 1930 and included birth weight, weight at 1 year of age, number of teeth, and other details. The records from Sheffield and Preston covered roughly the same period but were even more detailed, and their inclusion of length from crown to heel, head circumference, biparietal and other head diameters, placental weight, and (after 1922) chest and abdominal circumferences allowed the computation of various body proportions. Linking these birth records to the individuals still alive as adults, and to the death records of those who had since died, made it possible to investigate any relationship between at-birth measurements and health events in later life.

This research, compactly summarized by Barker (1994), has uncovered numerous associations between various birth measurements and disease in later life, including hypertension, excessive levels of blood cholesterol, non-insulin-dependent diabetes, and death rates from cardiovascular disease. Among men, low birth weight and low weight at 1 year of age were associated with premature death from cardiovascular disease. Females whose weight was low at birth but above average as adults also experienced increased death rates. Other measurements at birth that indicate slow fetal growth have also been found to predict higher death rates from cardiovascular disease. These include thinness (as measured by the ponderal index—birth weight/length³), small head circumference, short length, low abdominal circumference relative to head size, and a high placental-weight-to-birth-weight ratio. As slow growth in utero is often followed by slow growth afterward, these findings suggest one pathway through which nutrition, adult height, and mortality in old age may be related (cf. Barker, Osmond, and Golding 1990).
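
The ponderal index mentioned above is straightforward to compute; the newborn in the example below is hypothetical.

```python
# Ponderal index as given above: birth weight divided by the cube of
# crown-heel length (here in kg and meters). The example newborn is
# hypothetical: 3.2 kg and 0.50 m give an index of 25.6.
def ponderal_index(birth_weight_kg: float, length_m: float) -> float:
    return birth_weight_kg / length_m ** 3

print(ponderal_index(3.2, 0.50))  # 25.6
```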

From birth records linked to survivors rather than to death records, Barker and his colleagues were also able to analyze the connection between measurements at birth and chronic diseases or their risk factors among the survivors. Babies who were small-for-dates or had a higher ratio of placental weight to birth weight, both of which indicate undernutrition in utero, were found to have higher systolic and diastolic blood pressure as children and as adults. These findings were independent of, or dominated, the effects of the later-life environment, including the current weight, alcohol consumption, and salt intake of the subjects. The blood pressure of the mother was also found to be a nonfactor. Reduced liver size, as measured by abdominal circumference, was associated with raised serum concentrations of total and LDL cholesterol in both men and women. Based on these findings, Barker has suggested that impaired liver growth in late gestation may permanently alter the body’s LDL cholesterol metabolism, resulting in an increased risk of coronary heart disease later in life. In both men and women, low birth weight predicted higher rates of non-insulin-dependent diabetes and impaired glucose tolerance.

British birth records have also been useful in establishing a link between infectious disease in childhood and chronic disease in later life. A follow-up study of men from the Hertfordshire records, which recorded illnesses periodically throughout infancy and early childhood, showed that death rates from chronic bronchitis were higher among those who had low birth weights and low weights at 1 year of age (Barker et al. 1991). In addition, S. O. Shaheen and colleagues (1995) have reported that among survivors represented in the Hertfordshire and Derbyshire records, men who had suffered bronchitis or pneumonia in infancy had significantly impaired lung function as measured by mean FEV1 (forced expiratory volume in one second). These findings support the hypothesis that lower-respiratory-tract infections in early childhood lead to chronic obstructive pulmonary disease in late adult life.

The relationship of maternal (and thus fetal) nutrition—and that of infants—with diseases of later life was one, of course, that McKeown did not examine. That such a relationship seems to exist is one more powerful example of the complexity of the interplay between nutrition on the one hand and mortality on the other.