William Muraskin. Cambridge World History of Food. Editors: Kenneth F. Kiple & Kriemhild Conee Ornelas. Volume 2, Cambridge University Press, 2000.
The importance of nutrition to the preservation of human health cannot be reasonably denied. However, the extent of its power may have been overstated in recent years. For millions of Americans, “natural foods and vitamins” are seen as almost magical preservers of health, beauty, and longevity. Indeed, claims for the healing properties of nutrients have become an integral part of the post-World War II “baby-boomer” generation’s vision of the world. For many, “faith” in the power of proper nutrition is part of a secular religion that comes close to denying the inevitability of aging and death. Vitamin C is considered a panacea one day and beta-carotene the next, as are such foods as broccoli and garlic. With such a cornucopia of natural “medicines,” who would ever think that the history of humankind would reveal so much disease and ill health?
Although such popular exaggerations of the benefits of various nutriments are easily dismissed by serious scholars, other more scholarly claims are not. One suggestion dealing with the historical importance of nutrition has received remarkably widespread support in academic circles and, among historians, has become the orthodox explanation for understanding a key aspect of the modern world: increased longevity.
The McKeown Thesis
The classic formulation of this explanation was provided by the medical historian Thomas McKeown (1976, 1979), who argued that the decline of mortality in the Western world over the last three hundred years was largely the result of rising living standards, especially increased and improved nutrition. Equally important, the decline was not the result, as so many had rather vaguely believed, of any purposeful medical or public-health interventions.
Such a theory, of course, was not long confined to historical debates because of its clear policy implications for the allocation of resources in the developing (and the developed) world today. If the historical decline of mortality in the West occurred independently of science and medicine, then the high death rates in the developing world could best be combated by vigorously pursuing higher living and nutritional standards. Not only was money expended on high-technology hospitals and doctors ill spent, but even funds for immunization campaigns were a waste of limited resources. As McKeown put it: “If a choice must be made, free school meals are more important for the health of poor children than immunization programmes, and both are more effective than hospital beds” (McKeown 1979).
This revolutionary theory has a number of emotional and intellectual attractions. First, it constitutes a devastating attack on the legitimacy and claims of societal “authority,” represented in this case by the medical profession and, to a lesser extent, the public-health and scientific establishments. The “experts” who claim the right to guide and control society and its inhabitants, based upon their past accomplishments, are revealed as self-deluded, if not as outright frauds. Such a critical position resonates very well in an era characterized by relentless attacks on all types of authority and one in which the delegitimization of social institutions is a daily affair.
What radical social critic could have better put what many regard as an imperious medical profession in its place than McKeown, who wrote (in an attempt to dispel the “erroneous belief” that he was hostile to the medical profession): “… if medical intervention is often less effective than most people, including doctors, believe, there is also a need … for greater emphasis on personal care of the sick (the pastoral role of the doctor) …” (McKeown 1979: ix). (Later in the same work [p. 112], he stated that the key advance in personal medical interventions in the past was the control of cavities by the dental profession.)
Second, McKeown’s view of the nature of health is also very congenial to the post-World War II generation’s view of life, death, and aging: “Most diseases, including the common ones, are not inevitable. … [Modern medicine is based on a different idea.] It assumes we are ill and must be made well, whereas it is nearer to the truth to say that we are well and are made ill” (McKeown 1979: 117-18). In addition, his work has an appeal to earlier generations, especially his claim that persons in their 70s who lived a healthy life would be as vigorous and capable as they were in their 30s! What the thesis seems to promise is eternal youth, if not eternal life, completely under the control of the careful and concerned individual without the messy intervention of doctors and other self-proclaimed experts. Such a view, when coupled with impressive scholarship, is very persuasive by itself, and the fact that the medical profession had indeed severely exaggerated its performance historically has also made it easier for many to embrace the argument.
Third, the McKeown thesis can be and has been profitably used by people representing a wide variety of political and ideological positions. In the West, it can be aimed at established authority figures in attempts to delegitimize basic Western institutions. The supposed accomplishments of science, medicine, and the public-health service, which bolster the West’s claims to superiority as a civilization, can be denied.
For leftists in the developing world, the McKeown thesis could free their countries from dependence on Western medical and scientific expertise and technology. Better a homegrown political upheaval with land redistribution than a measles shot or a CAT scan machine. Since good health is promised as a natural by-product of a rising standard of living and better nutrition, its achievement depends on radical politics, not on “imperialistic” science or medicine (Muraskin 1995).
But the Right also has had reasons to be attracted to the McKeown thesis, which lends support to deemphasizing expensive medical care for the “masses,” both at home and abroad. If medicine and technology are not the key to lower mortality, why waste money on them in a time of escalating, and certainly threatening, medical costs? If the poor are underserved by doctors, maybe they are better off than their richer neighbors who are plagued by ineffective medical personnel and their iatrogenic interventions.
Despite the appeal of the McKeown thesis to so many political sectors, there are a number of reasons for criticism, not the least among them that McKeown reached his conclusion about the importance of nutrition in the mortality decline through the process of elimination of possible alternatives. If all the other possibilities are false, then the last standing explanation is the correct one.
Almost all of those who have questioned McKeown’s argument have been struck by this methodology. For such a procedure to be effective, it is mandatory that all possible alternatives be presented and adequately explored. But this is impossible because history, science, and medicine are full of unknowns. Thus, although such a procedure may be suggestive and help generate a useful hypothesis, strong direct evidence must be presented to make an adequate case.
McKeown postulated that there were only five possible reasons for the decline of mortality in the West:
- Medical intervention (including immunizations).
- Public-health measures (for example, better sanitation, water purification, and food hygiene).
- Changes in the nature and virulence of microorganisms.
- Reduced contact with microorganisms.
- Increased and improved nutrition.
He then proceeded to argue that until recently, it was assumed that “medical” interventions were the primary cause of increased life expectancy. This term, however, tended to include both personal medical care and public-health policies—two types of activities that must be separated for purposes of analysis.
The main thrust of his argument was aimed at the claims of personal medical care. He demonstrated that the decline in mortality was well advanced long before medical science developed effective forms of disease therapy or prevention. His proofs were elaborate and rather convincing. McKeown has clearly, and probably permanently, deflated the claim that personal medical care played a major role in increasing longevity in the West. But such a claim could exist only because of the ahistorical nature of most public discourse in the West.
If most laypeople and doctors felt that modern medicine had performed miracles, very few of them, if challenged, would have insisted that medicine’s effectiveness went back much before the advent of antibiotics in the 1940s or sulphonamides in the 1930s. Although there were great scientific minds, like Robert Koch or Louis Pasteur, in the late nineteenth century, there was no adequate control over infectious diseases. For most, modern medicine was probably viewed as so miraculous specifically because the nineteenth century (and the first third of the twentieth century) was seen as so bleak and unhealthy. Thus, McKeown’s debunking of the tendency to view earlier medicine through the lens of the present was, in fact, an easy victory.
His larger thesis, however, required similar attacks on other possible causes of mortality decline. He disposed of the importance of immunizations (which he grouped with personal medical care rather than public-health activities) by arguing, on the one hand, that they came significantly later than the decline in mortality from the various diseases they were designed to prevent and, on the other hand, that vaccinations did little or no good in the absence of better nutrition.
He gave considerably more credit for the lowering of mortality to public-health measures, especially clean water and safe food, than he gave to personal medical care, but as with immunizations, he argued that these measures were put in place long after mortality had already significantly declined.
McKeown next considered the possibility that a decline in the natural virulence of microorganisms or a decrease in exposure of the general population to dangerous pathogens led to the decline of mortality in the West. His discussion of these possibilities, however, was far more cursory than his argument about the ineffectiveness of personal medical care. In short, his convincing argument against much of a role for clinical medicine in the improvement of life expectancy did not carry over into an analysis of the other possibilities he set forth. Thus, one could argue that McKeown presented a fascinating and stimulating hypothesis that somehow came to be understood as an established truth.
Attacks on the McKeown Thesis
One of the most interesting assaults on the credibility of the McKeown thesis originated with a Danish anthropologist, Dr. Peter Aaby. His extensive research on measles mortality in Guinea-Bissau, West Africa, provided a unique opportunity for such an attack because McKeown used the World Health Organization’s (WHO) position on measles to support his argument for the importance of nutrition over immunization in the decline of mortality. McKeown (citing WHO) claimed that vaccination of an underfed child is not protective, whereas a well-nourished child does not need vaccination to survive the disease. In other words, another bowl of rice is preferable to a measles shot.
Aaby claimed to have shared such a view until his discoveries in Guinea-Bissau no longer supported it. He wrote that there are three general ways of looking at measles mortality:
- Emphasizing “host factors” (malnutrition, age of infection, genetics).
- Emphasizing “transmission factors” (greater exposure, more virulent strains, synergism between infections).
- Emphasizing “treatment and medical care factors” (ineffective treatment or neglect of effective treatments). (Aaby 1992: 155-6)
Certainly, most of the emphasis has been on host factors. In the case of measles, which kills about one and a half million children a year in the developing world, “[t]hose who die are seen as somehow weaker than other individuals,” and “severe measles has been explained particularly with reference to [host factors such as] malnutrition, the age at infection, genetic susceptibility and underlying disease” (Aaby 1992: 156).
Such an interpretation, Aaby maintained, easily leads to the implication that severe measles is a disease of the weak who are on the road to death, if not from this disease then from another. The implication for policy is that little is accomplished by immunization because of the phenomenon of “replacement mortality.” Thus, the solution to preventing death from measles is not to fight the disease but to fight general malnutrition. Clearly, this is a position strongly supportive of McKeown’s own view.
Aaby, however, found that the situation in Guinea-Bissau was not so supportive. The children who died were not noticeably different in nutritional level from those who survived. Moreover, the general level of nutrition in the country was quite adequate—better, in fact, than in countries (such as Bangladesh) with much lower rates of mortality from measles. Instead of nutrition, according to Aaby, the factor differentiating those who died and those who did not was whether the child was an “index case” or a “secondary case” in a family.
Index cases (the first persons infected) were usually exposed outside the home. By contrast, secondary cases were usually the siblings of those index cases who had brought the infection into the home. The difference in mortality risk between the two groups was striking, with secondary cases much more likely to succumb to the illness. The apparent reason for the disparity was that the index cases had experienced much less exposure to the virus (contracted in social interactions outside the home) than the secondary cases—who were continuously exposed by actually living with an infected sibling. The key, according to Aaby, was not nutritional status in the face of measles, but rather the degree of exposure to the illness.
Aaby also compared rates of measles mortality between different countries. He found that Bangladesh had significantly lower rates than Guinea-Bissau despite poorer levels of nutrition. But in Guinea-Bissau, fully 61 percent of the children under 3 years old were secondary cases (with the case fatality rate [CFR] a horrifying 25 percent), whereas in Bangladesh, secondary cases were only 14 percent, and the CFR was 3 percent.
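Aaby’s logic here can be made concrete with a little arithmetic. The following sketch is purely illustrative and not taken from Aaby: it treats the overall case fatality rate (CFR) as a weighted average of an index-case rate and a secondary-case rate, with the index-case figure a hypothetical value chosen only to show the effect of the case mix.

\[
\mathrm{CFR}_{\mathrm{overall}} = p_{\mathrm{sec}} \cdot \mathrm{CFR}_{\mathrm{sec}} + (1 - p_{\mathrm{sec}}) \cdot \mathrm{CFR}_{\mathrm{index}}
\]

If, hypothetically, index cases died at 5 percent and secondary cases at the 25 percent reported for Guinea-Bissau, a population in which 61 percent of cases are secondary would show an overall CFR of about 0.61 × 25% + 0.39 × 5% ≈ 17%, whereas one with only 14 percent secondary cases would show about 0.14 × 25% + 0.86 × 5% ≈ 7.8%, even if nutrition were identical in the two populations. The point is not the particular numbers but that the proportion of intensively exposed secondary cases can by itself drive large differences in measles mortality.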
The reason proposed for the extreme variance was that “[l]arger families and a high incidence of polygamy mean[t] that children in West Africa h[ad] a much greater risk of becoming secondary cases …” because there were large numbers of young, susceptible siblings living at home (Aaby 1992: 160). This factor also seemed to account for the lower mortality rates Aaby found among East Africans when compared to West Africans—that is to say there were fewer wives and fewer children in East Africa.
In addition, Aaby observed that the severity or mildness of the disease in the index case correlated with the likelihood of mortality and secondary cases. In other words, the more severe the index case, the higher the rate of death in secondary cases; and the milder the index case, the lower the death rate of subsequent cases. There also seemed to be an “amplification phenomenon” in which severe cases brought into a household (or institution or military camp) created waves of infection, each more severe than the last.
The Guinea-Bissau research also uncovered other unexpected transmission factors that correlated with significantly higher levels of measles mortality. The most surprising finding was that infection by a member of the opposite sex produced a noticeably greater chance of death than infection by someone of the same gender. Studies in other developing countries have found a similar cross-gender transmission factor, and Aaby, doing historical research, discovered that such a situation existed in Copenhagen at the turn of the twentieth century (Aaby 1992: 162).
In addition, this concentration on transmission factors, rather than host factors, brought to light a “delayed impact,” which constitutes a long-term measles effect. Most studies deal with acute measles death (that is to say, within one month of the appearance of the rash). But in Guinea-Bissau, “children who had been exposed to measles at home during the first six months of their lives had a mortality [rate] between ages six months and five years which was three times higher than community controls (34 percent vs. 11 percent)” (Aaby 1992: 164).
When background factors were taken into account, “the mortality hazards ratio was 5.7 times higher … among the exposed children than the controls” (Aaby 1992: 164). The “delayed excess mortality” existed both among children who had measles and among those without clinical symptoms. In light of this finding, it seems possible that the total mortality from measles infection is far higher in the developing world than is assumed.
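It is worth pausing over the arithmetic of this passage, since the crude and adjusted figures measure the same phenomenon in two ways. As a simple check (not an additional result from Aaby):

\[
\frac{34\%}{11\%} \approx 3.1,
\]

which is the crude risk ratio behind the statement that exposed children died at roughly three times the rate of community controls; once background factors were controlled for, the corresponding hazard ratio rose to 5.7.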
Aaby has speculated on the possible meanings of this delayed mortality. One is “some form of persistent infection and immuno-suppression” at work. In addition, community studies in Nigeria, Guinea-Bissau, Senegal, Zaire, Bangladesh, and Haiti have shown that children immunized against measles have experienced major drops in overall mortality in the years after vaccination:
In all studies the reduction in mortality was greater than expected from the direct contribution of measles death to over all mortality. For example, in Bangladesh, the reduction in mortality between 10 and 60 months of age was 36% although measles accounted only for 4% of all deaths among the controls … Thus measles immunization seems to be highly effective in preventing both acute and delayed mortality from the disease. (Aaby 1992: 167)
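The force of this finding lies in a simple comparison. If measles accounted for only about 4 percent of deaths among the controls, then even preventing every acute measles death could not, by itself, have cut overall mortality by much more than 4 percent. Reading the Bangladesh figures this way (an illustration of the logic, not a calculation made by Aaby):

\[
\frac{36\%\ \text{observed reduction in overall mortality}}{4\%\ \text{of deaths directly due to measles}} = 9,
\]

so most of the benefit of immunization in that study must have come from averted delayed mortality rather than from averted acute measles deaths.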
Aaby believed that what he learned from Guinea-Bissau and other developing countries sheds light on the historical decline of mortality in the West. He suggested that transmission factors, rather than improved nutrition (or age of infection), can best account for the decline of measles mortality in the developed world. In summing up his detailed argument he wrote:
It seems likely that the most important causes of measles mortality decline were social changes which diminished the risk of intensive exposure within the family. Chief among these were the fall in family size [that is to say fewer susceptible siblings at home to infect] and greater social contact among young children which increased the risk [and benefits] of infection outside the home. Furthermore, the continual reduction in the numbers of fatal cases has reduced the risk of transmission of measles in a severe form [and thus eliminated the amplification effect]. (Aaby 1992: 170-1)
This conceptualization of the decline of measles mortality in terms of transmission rather than nutrition, Aaby believed, provides a model that can be used for other diseases as well. For example, he maintained that McKeown severely underestimated the importance of smallpox vaccination to the decline of mortality in the West, and he suggested that smallpox (which struck the well nourished and malnourished alike) may have had the same kind of delayed mortality effect as measles. Thus, people weakened by smallpox may have been more vulnerable to tuberculosis (TB) or other diseases. He speculated that the decline of smallpox may have led the way to the decline of TB, and the latter, McKeown maintained, was the key to the decline of mortality in the developed world (Aaby 1992: 178).
The significance for us of Aaby’s work is that transmission factors such as severity of exposure, size of family, cross-sex infection, and delayed mortality were not discussed and then successfully eliminated by McKeown’s analysis. Thus, it would seem that whether Aaby is right or wrong about other diseases, or even about the decline of measles in the West, he has made the point that McKeown’s nutrition thesis is vulnerable and, perhaps, has been embraced too readily.
A second attack was made by Simon Szreter, a specialist in British history, who challenged McKeown’s work not simply on its relevance to other countries but even on its accuracy for Great Britain. He stated his thesis very succinctly:
It will be urged that the public health movement working through local government, rather than nutritional improvements through rising living standards, should be seen as the true moving force behind the decline of mortality in [the late nineteenth century]. (Szreter 1988: 2)
Szreter pointed out that although McKeown explicitly recognized the positive role that hygiene improvements and public-health measures involving municipal sanitation played in saving lives, he nevertheless maintained that “their impact and effects were … very much of a secondary and merely reinforcing kind” (Szreter 1988: 3) compared to better nutrition. It is interesting to note that Szreter believed that the effect of McKeown’s emphasis on nutrition and his “devastating case against the pretension of the ‘technocratic’ section of the post-war medical profession” has led to the belief that “organized human agency in general had remarkably little to do with the historical decline of mortality in Britain …” (Szreter 1988: 3).
This belief, in turn, led Szreter to criticize McKeown for his failure to carefully assess “the independent role of those socio-political developments which were responsible for such hard-won improvements as those in working conditions, housing, education, and various health services” (Szreter 1988: 11). Moreover, given McKeown’s emphasis on the role of food, it might have been expected that he would look closely at the history of the fight against food adulteration, but he did not. Rather, McKeown treated political, social, and cultural changes that were also arguably important in the mortality decline as a simple “automatic corollary of changes in a country’s per capita real income” (Szreter 1988: 11).
According to Szreter, though, in the last third of the nineteenth century, after the “heroic age” of public-health activism in Britain had ended without much success on the national level, countless underpaid and overworked officials fought bitterly but successfully for better sanitation and increased disease prevention at the local level. There was nothing automatic about either their struggles or their victories—though historians, said Szreter, have missed the importance of such activities by focusing on the apparent ineffectiveness of the national sanitation reform movement during the middle decades of the nineteenth century and the decline of that movement after 1871 (Szreter 1988: 21-5).
McKeown, however, contended that the key to the decline of mortality in Britain was the decline of TB, which is caused by an airborne pathogen that does not respond to public-health measures but does respond to improved nutrition, and that, in any event, the decline of the disease predated effective medical or public-health measures. In addition, he alleged that TB was in decline before most other major diseases. This chronology is important, because if TB declined after public-health interventions, or if other diseases declined first, then those previous events, rather than improved nutrition, might account for the fall in TB mortality.
McKeown claimed that TB declined in Britain from 1838 onward (that is to say quite early). However, Szreter contended that there was actually a fluctuation in TB mortality, which rose once more after 1850 and did not decline again until after 1866-7. Moreover, Szreter, like Aaby, pointed out that smallpox mortality had declined considerably earlier than TB; thus, even if tuberculosis had started to decline as early as 1838, McKeown would still have underestimated the possible effects of that prior decline on tuberculosis mortality (Szreter 1988: 15).
In addition, McKeown placed his emphasis on the decline in airborne diseases (as opposed to water- and foodborne diseases) because airborne diseases were not amenable to public-health interventions. Thus, their decline would seem to indicate an alternative source of amelioration. However, the airborne disease category that McKeown highlighted included not only TB, which did decline, but also a composite group—bronchitis, pneumonia, and influenza—that constituted the second most important cause of death in the mid-nineteenth century. That group of airborne diseases increased until after 1901, becoming the single most common cause of death and a greater killer than TB had been in 1850 (Szreter 1988: 13).
According to Szreter, one of the major effects of dividing the airborne diseases into two categories—TB and bronchitis/pneumonia/influenza—is that when the increased mortality of the latter group is set against the lower mortality of the former, it leaves the decline in food- and waterborne diseases as the most important reason for the decline of mortality, which does not support McKeown’s argument. Yet Szreter pointed out that the almost complete elimination of smallpox, cholera, and typhoid during the late nineteenth century is proof of the effectiveness of public-health interventions, and that the rise of mortality for the bronchitis/pneumonia/influenza group “may well be evidence that in those areas … where preventive legislation and action was not forthcoming,” problems occurred (Szreter 1988: 27). It is significant that clean air was an issue neglected by Victorian reformers, and the resulting urban smog probably goes far to help explain the high incidence of respiratory disease.
Szreter also indicated that infants did not benefit from the late-nineteenth-century decline of mortality in Britain; yet, after 1901, infant mortality fell rapidly. This reduction of infant mortality required the intervention of social services and the willingness of families to allow middle-class social workers to enter homes to instruct in hygienic food preparation (Szreter 1988: 28-31).
Thus, for Szreter, “[t]he invisible hand of rising living standards, conceived as an impersonal and ultimately inevitable by-product of general economic development, no longer takes the leading role as historical guarantor of the nation’s mortality decline” (Szreter 1988: 34-5).
A third attack on the McKeown thesis comes from Leonard Wilson, an American historian. If Szreter argued that McKeown exaggerated the significance of the decline in TB mortality, Wilson directly challenged McKeown on the reason for TB’s decline, which Wilson attributed not to improved nutrition but rather to segregation of those who had the disease.
In support of his position, Wilson highlighted a fact that exposes one of the more glaring weaknesses in McKeown’s argument: Tuberculosis was widespread among persons of the upper classes, who were, of course, the most likely to be well nourished. McKeown, however, claimed that despite adequate nutrition, their defenses were overwhelmed by constant contact with the lower classes—among whom the bacterium was ubiquitous. Wilson pointed out that the problem with this line of reasoning is that it acknowledges that the key to infection for the upper classes was their degree of exposure, which is one of the possible alternatives to the theory of improving nutrition (Wilson 1990). Thus, at the very least, the McKeown thesis turns out to be a dual theory of the decline of TB: a nutrition theory for the poor and an exposure theory for the rich, which seems to render the nutrition thesis considerably less persuasive.
The thesis that Wilson advanced is that the decline of TB mortality was closely linked to the degree to which individuals with TB were segregated during the periods when they were most infectious. He argued that in Great Britain the provision of poor relief in workhouses (rather than at home) and the establishment of sanatoria led to a decline in the TB death rate compared to other societies that allowed infectious individuals to live freely in the community in close contact with their families. He contended that McKeown underestimated the importance of these segregating institutions because most individuals spent only limited time in them, and they did not cure individuals with the disease. But Wilson suggested that some segregation, although less helpful than a lot of separation, was still better than none and was sufficient to account for the declining mortality rate. The failure to cure tubercular individuals, although a tragedy for them, was less important than preventing transmission to others.
To test this thesis, Wilson compared the experience of a number of countries and ethnic groups. He found, for example, that Ireland experienced both declining food prices and rising incomes in the years after 1870, but experienced no significant decline in TB mortality.
Yet during the same period, Ireland did enjoy a decline in the typhus death rate, which no one claims was nutritionally related. Typhus-infected individuals were segregated and the contagion was controlled. By contrast, tuberculosis victims, almost all of whom were reduced to poverty, were given home relief, which allowed them to live surrounded by family members whom they continued to infect. But in England, in contrast to Ireland, relief was restricted to poor-law infirmaries and workhouses, which kept infectious individuals out of the community and away from their families during the period when they were most intensively infectious. Those segregating institutions also taught the infected how to dispose of their sputum and lower the danger of spreading the infection when they were again free to go home (Wilson 1990: 384).
According to Wilson, before the advent of antibiotics, the segregation of contagious individuals was a necessity if diseases like TB or typhus were to be controlled. He pointed out that leprosy also declined, in both England and continental Europe, with the isolation of lepers. An exception, however, was Norway, where lepers were not segregated, and there the disease not only failed to decline but actually increased during the nineteenth century (Wilson 1990: 384-5).
Wilson accused McKeown of dismissing the importance of the discoveries of Koch and others in the decline of tuberculosis mortality because no therapies came out of those scientific breakthroughs for generations. Such a view, he asserted, ignores the fact that many nations and municipalities instituted segregation and isolation procedures soon after the cause of TB was discovered. For example, in New York City after 1889, the Health Department emphasized the danger of contagion and pushed for sanitary disposal of sputum, disinfection of rooms, and the opening of special hospitals for TB patients. Vigorous policing helped maintain the long and steady decline of tuberculosis in the city from 1882 to 1918, at which point three large TB hospitals were built.
Thus, Wilson argued that the decline in tuberculosis was not the result of a rising standard of living but rather of reduced opportunities for patients to spread the infection, and he pointed out that the recognition of the importance of segregation came directly from Koch’s discoveries (Wilson 1990: 381).
In another study, this time of tuberculosis in Minnesota, Wilson was able to look directly at the effect of standard of living on the decline of the disease and to test the relationship between nutrition (or at least standard of living) and TB in a kind of “natural laboratory” that existed in that state.
For many decades, Minnesota had the reputation of having a remarkably healthy climate—with a very low tuberculosis rate. This changed as European immigrants reached the state, many of them suffering from the disease. The Irish immigrants had the highest rate—in keeping with conditions in their native land, where home relief allowed ill individuals to remain with their families. In Minnesota, the Irish still continued to live at home if infected and, consequently, infected their relatives. Scandinavian immigrants also had high rates of tuberculosis, as did the countries from which they came. However, German immigrants had a low rate of TB, similar to conditions in their homeland.
These different groups settled in Minneapolis, where their social and economic conditions were remarkably uniform. There was no major difference in their standard of living, only in their TB rates, which reflected their countries of origin, not their current conditions. What ultimately brought those rates down for the Irish and Scandinavians was not better food and housing but the decision to build sanatoria and segregate infectious individuals (Wilson 1992).
Thus, Wilson argued, McKeown ignored the key role played by public-health measures in lowering the TB death rate by finding the source of infection and working to prevent its transmission. Going beyond Szreter, Wilson suggested that medical men, from Koch down to doctors in the sanatoria, were vital elements in this process.
The scholarly articles of Wilson, Szreter, and Aaby go a long way toward undermining the claims that McKeown made about the pivotal role of nutrition in the decline of mortality in the West. Their arguments have been significantly advanced by the publication of Anne Hardy’s book, The Epidemic Streets: Infectious Disease and the Rise of Preventive Medicine, 1856-1900 (1993), a detailed study of disease in London during the last half of the nineteenth century. Hardy looked intensively at the history of eight major infectious diseases of the period (whooping cough, measles, scarlet fever, diphtheria, smallpox, typhoid, typhus, and tuberculosis) but did not restrict herself to the relatively superficial level of citywide sources. She focused instead on the district level, and on many occasions provided elaborate quantitative analyses of disease incidence street by street in particularly unhealthy areas. What she illustrated is the incredible complexity of the disease situation that broad-based national and city sources obscure. Hardy skillfully integrated quantitative and qualitative materials, and by doing so, enabled the reader to appreciate the immense amount of ambiguity and confusion involved in questions of disease etiology.
Hardy’s discussion required her to investigate the myriad social, cultural, economic, political, and biological factors that influenced the morbidity and mortality rates for each disease. For example, tuberculosis rates were determined by the nature and location of housing, the extent of overcrowding, culturally shaped fears of fresh air, medical and folk-nursing practices, occupational hazards, class-based food preferences, and the synergistic effects of simultaneous infections—to name just a few.
Hardy also discussed “high risk” occupations in which workers were exposed to filth and foul air in closed and unventilated rooms. These included not only tailors and furriers but also such well-paid (and well-fed) workers as printers and clerks. The growth, decline, and geographic concentration of different trades directly influenced the local TB rates. Popular fears of fresh air and night chills led to nursing practices that kept the sick in closed rooms where family members were exposed to concentrated doses of pathogens. Ethnic groups in London (Jewish, Irish, Italian) differed in their cleanliness and general hygienic practices, and in methods of child rearing; this differentially affected their disease rates—independent of their generally inadequate incomes.
Hardy made it quite clear that before investigators can generalize about infectious disease in broad fashion, they must have a firm knowledge of the complexities and subtleties of the ways in which people actually live. In her work, Hardy did what professional historians are best at doing: puncturing those grand theories that social scientists and historical amateurs sometimes produce.
Of course, the confusion and ambiguity of real history is less intellectually satisfying than sweeping theories of cause and effect. Hardy’s detailed discussions often make the reader cry out for simplicity and certainty, but neither history nor Hardy is able to oblige. Nevertheless, Hardy was willing to present some broad conclusions from her study. In looking at the major infectious diseases that afflicted the people of London, she commented:
The epidemiological record clearly suggests … that it was not better nutrition that broke the spiral of deaths from infectious disease after 1870, but intervention by the preventive authorities, together with natural modifications in disease virulence, which reduced exposure to several of the most deadly infections.…McKeown was a scientist speculating on historical phenomena, … and [he was] unfamiliar … with historical realities. (Hardy 1993: 292-3)
Like Szreter, Hardy found that the power of preventive medicine did not derive from national governmental policy but was rooted “in the persevering implementation of local measures based on local observation by a body of professional men whose sole responsibility was to their local community” (Hardy 1993: 292). As England’s population was “above the critical threshold of under-nutrition below which resistance to infection is affected,” it was consciously planned human intervention, not improved nutrition, that was the key to lower mortality rates (Hardy 1993: 292).
In summing up her survey of the eight major diseases she studied, Hardy pointed out the significant difference between those that affected children and those that affected adults. The latter were responsive to preventive actions, whereas the former were not. And it was among the adult diseases that the major drop in mortality in the late nineteenth century actually occurred:
The impact of smallpox, typhoid and typhus, and (more arguably) of tuberculosis, was significantly reduced through the activities of the preventive administration … The reduction in deaths from both typhoid and typhus through diligent local activity and public-health education was a major achievement of the Victorian preventive administration. And for tuberculosis, similarly, general sanitary improvements, in the sense of slum-clearances, new housing, constant water supplies, and the growing emphasis on domestic cleanliness, were probably important. Environmental and occupational factors were clearly of considerable importance … and the Victorian evidence suggests that these were more potent … than … nutritionally satisfactory diet. (Hardy 1993: 290-1)
Hardy did not claim that her work ends the historical discussion of the role of nutrition in the decline of infectious disease in the West, but she has clearly moved the debate to a higher level by demonstrating, as did Szreter, that accurate knowledge of infectious disease requires intensive study of local materials.
Investigators of the modern developing world have also contributed to the search for explanations of mortality declines other than that of a rising standard of living and improved nutrition. John Caldwell, for example, has argued for the importance of “cultural, social and behavioral determinants of health” in the developing world (Caldwell 1993: 125). He reported on a major 1985 conference that looked closely at a group of health “success stories,” in which poor countries achieved high life expectancies despite severely limited resources (for example, the Indian state of Kerala [66 years; per capita income $160-270], Sri Lanka [69 years; $320], Costa Rica [74 years; $1,430], China [67 years; $310]). The conference organizers concluded that “the exercise of ‘political will’” by China, and of both “political and social will” by Kerala, Sri Lanka, and Costa Rica were keys to their success. They placed their “emphasis on comprehensive and accessible health programmes with community involvement and the importance of education, especially female schooling” (Caldwell 1993: 126).
Caldwell carried out additional analysis of the conference material, combined it with other data from high-achieving/low-income countries, and found that the strongest correlation with reduced mortality was the educational level of women of maternal age. He contended that the most efficacious of the noncommunist countries have benefited from a historical “demand for health services and education, especially the all-important schooling of girls, arising from the long-existing nature of the societies, particularly the independence of their women, [and] an egalitarian-radical-democratic tradition …” (Caldwell 1993: 126). These positive factors, however, although ultimately vital, could not bear fruit until modern health services became available: “When health services arrived here and elsewhere mortality fell steeply because of a symbiotic relationship between an educated independent population determined to use them and make them work, and readily available health services” (Caldwell 1993: 126).
Caldwell concluded from his research that mortality levels similar to those of industrial societies could be reached in the developing world within two decades if all children were educated through elementary school. Education and modern medicine interact as a potent combination. Thus, if Caldwell and his colleagues provided no direct evidence to undermine McKeown’s claim that the standard of living and nutrition was the key to the decline of mortality in the West, they nevertheless undercut his argument for the relevance of such a notion for the developing world today.
Clearly, Szreter, Caldwell, Wilson, and Aaby were not in agreement as to the reasons for either past or present declines in mortality. But they did agree on the conclusion that the importance of nutrition and rising standards of living has been substantially overstated. Each one of them provided a provocative and plausible alternative explanation that was either unknown to McKeown and his supporters or given insufficient attention. The work of these researchers and others (Christopher Murray and Lincoln Chen 1993) has kept alive the debate over the role of improving nutrition in the decline of mortality in the West.