The State, Health, And Nutrition

Carol F. Helstosky. Cambridge World History of Food. Editors: Kenneth F. Kiple & Kriemhild Conee Ornelas, Volume 2, Cambridge University Press, 2000.

Overview

The science of nutrition has influenced consumers in their choices about the kinds and optimal amounts of food to eat, but there are other influences as well, such as prosperity levels within a given population, the efficiency of transportation and distribution systems, and the standards of hygiene maintained by food producers, processors, and retailers.

One factor, however, that has not received much scholarly attention is the increased role of the state, either through direct or indirect means, in the production, distribution, and consumption of food. Only recently have historians addressed the development of food policies (mostly those in Europe) in order to understand the state’s role in controlling and monitoring food supplies.

In early modern European societies, the maintenance of public order and the control of food supply were intimately related; religious and secular authorities alike had a vested interest in ensuring that production met the demands of consumption. The actions of these authorities (the distribution of food or price-fixing, for example) were largely responses to localized crises. What distinguishes modern food policies from their early modern antecedents are the intended goals of these policies, as well as the scientific nature of their implementation.

The rise of industrialization and urbanization in the nineteenth century prompted new concerns about food supplies. The competitive nature of an industrialized, capitalist food market increased popular anxieties about adulteration; one of the more important roles of the state was to regulate the hygienic quality of food supplies. The economic conditions of the nineteenth century also provoked greater concern that populations risked dietary deficiencies and, therefore, poor health. Social reformers and scientific experts took a more active and deliberate interest in monitoring the health of the laboring classes through the measurement of food consumption levels.

It is not mere coincidence that the rise of modern European food policies paralleled the development of the science of nutrition. As physiologists and social scientists explored both the content of foods and the uses which consumers made of them, state policies utilized this information to safeguard public health, thereby increasing the productivity and longevity of the population. This chapter provides a schematic, comparative overview of state intervention in popular diet throughout the nineteenth and early twentieth centuries, a period when there was an increased recognition of the extent and effects of dietary deficiencies but no cohesive state programs to guarantee proper nutrition for all.

It was often for the sake of military strength that European governments showed interest in the nutritional status of their populations, but even then much of the burden for improving diets was placed on the nutritional education of housewives. World War I, as a contest of industrial and military strength, made necessary the efficient management of resources, including food. Governments were thrown into the roles of suppliers and distributors of foodstuffs, and much of food policy formation was piecemeal, at times chaotic, in nature.

Then, after the states had assumed new regulatory powers over food supplies, the question of whether continued intervention was necessary, or even desirable, formed the basis for interwar debates over the link between food policies, civilian health, and greater economic productivity. Although military and economic competition between nations continued to make civilian health important to national well-being, questions of degrees of state intervention to safeguard health were never adequately resolved. What was clear by the time of World War II, however, was that European states had become increasingly interested parties in matters of food consumption as they related to public health.

The Nineteenth Century

Throughout the nineteenth century, the twin processes of industrialization and urbanization brought dramatic changes to European food consumption habits. Decreased opportunity for domestic agricultural production placed populations, particularly urban ones, at the mercy of price fluctuations. Public discontent over food issues shifted away from riots and disturbances over food availability and toward forms of protest over wages, prices, and standards of living. The division of labor and the monetary economy in industrial capitalist societies brought the rise of commercial middlemen concerned with profits rather than increasing food supplies. This new business ethic fed fears of food adulteration and of unscrupulous practices pursued for the sake of greater profits. Social scientists and reformers observed that in industrialized areas, the poorer segments of the laboring classes suffered from malnutrition, staving off their hunger with cheap carbohydrates such as those provided by bread, potatoes, liquor, and sugar.

Nonetheless, the nineteenth century can be characterized as one in which the quality and variety of diet slowly improved. In addition to sugar, new foodstuffs like wheaten flour, margarine, coffee, and chocolate became urban dietary staples as industrializing nations experienced what Hans Teuteberg has termed “the democratization of dietary gratification” (Teuteberg 1975: 79). By the latter half of the nineteenth century, the rationalization of agriculture, decreasing transport costs, and the industrial mass production of foodstuffs reduced food prices. In Germany, as household incomes rose, families consumed a richer diet and shifted their preferences from potatoes and grain to dairy products and meat, allotting less income for subsistence in relation to other expenditures (Offer 1989: 40).

This is not to say, of course, that meat was regularly consumed by everyone. Rural consumers might have eaten meat only once a week or on holidays. It is interesting to note, however, that such changes in consumption patterns signaled new criteria for evaluating diet. In Great Britain, for example, the increase in meat and sugar consumption was seen as a shift to more energy-producing foods and therefore as more desirable for the average worker (Burnett 1966; Mennell 1985; Shammas 1990).

It was in the late nineteenth century that members of scientific communities began to shape dietary evaluation criteria. The science of physiology broke foods down into the essential components of carbohydrates, proteins, and fats. Social scientists, charitable organizations, and parliamentary committees documented the consumption habits of the laboring classes while evaluating standards of living and health. The scientific community also scrutinized some of the troubling effects of industrialization on dietary patterns. Levels of concern and scientific conclusions depended, of course, on national context. Italian physiologists, observing the effects of late-nineteenth-century industrialization on the population, noted that at the same time consumers could afford more nutritious foods, they could also afford alcohol, tobacco, sugar, and coffee, all of which could cancel out the beneficial effects of an improved diet (Helstosky 1996).

The term “diet,” distinct from food or food consumption, indicated that there were new criteria for evaluating the place of food in everyday life. Diet implied a certain level of adequacy in food consumption patterns, whether measured by the emerging body of scientific knowledge about calories and nutrients or evaluated in terms of food’s capacity to fuel laborers. Therefore, diet was simultaneously a prescriptive and descriptive term, denoting both current habits and a nutritional goal set by members of the scientific community. Statistical knowledge about actual dietary practice relied largely upon concrete records of consumption habits in the form of family budget studies. These studies ranged from government-funded inquests of large populations to studies of smaller groups, sometimes a single family, funded by universities and charitable concerns.

Generally, the period between 1850 and 1910 in western Europe was characterized by a broad range of state and private investigations into the living standards of agricultural and industrial workers (Porter 1986; Hacking 1990). One of the most influential of these, Frédéric Le Play’s Enquête agricole—commissioned by the French Ministry of Agriculture, Commerce, and Public Works—was published in 36 volumes between 1869 and 1870. In Le Play’s analysis, dietary habits constituted only a small part of the standard of living; other factors, such as literacy, wage rates, housing conditions, and delinquency, were also given analytical weight.

The European social scientific community followed Le Play’s lead in structuring their own monographs; food consumption habits and spending were only parts of detailed works encompassing a wide array of social problems. Data on working-class family budgets from Belgium became the basis upon which the German statistician Ernst Engel formulated his famous law: The proportion of outgo used for food, other things being equal, is the best measure of the material standard of living of a population; the poorer the individual, family, or people, the greater must be the percentage of the income necessary for the maintenance of physical sustenance (Engel 1895).
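Engel’s law is often restated in modern notation; a minimal formulation, using symbols that are illustrative rather than Engel’s own, is:

$$ w_{f} = \frac{E_{\text{food}}}{Y}, \qquad \frac{\partial w_{f}}{\partial Y} < 0, $$

that is, the share $w_{f}$ of income (or total expenditure) $Y$ devoted to food falls as income rises, because food spending grows less than proportionally with income. The food budget share thus serves as an inverse index of the material standard of living.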

Nineteenth-century determinations of living standards underscored the centrality of experts and the use of scientific criteria to define social problems. Such determinations also had the effect of emphasizing the importance of the relationship between dietary practice and the physical and social condition of a given population.

By the turn of the century, concrete budget studies of the laboring classes became the basis from which physiologists made nutritional recommendations for dietary intake. There was, however, little agreement over such recommendations. German physiologists like Carl von Voit and Jacob Moleschott defended high levels of dietary protein (between 100 and 130 grams per day), whereas American physiologist Russell Chittenden claimed the body could function adequately on only 60 grams of protein per day (Offer 1989: 39-44). There is no question that scientific recommendations reflected prevailing habits and consumption levels that varied from nation to nation. Uniform nutritional standards were not formulated until the Interallied Scientific Commission met in 1917 and recommended a daily intake of 145 grams of protein, 75 grams of fat, 490 grams of carbohydrates, and 3,300 calories for the average man of 154 pounds working at least an eight-hour day (“Scientists and the world’s food problems” 1918: 493).
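As a rough consistency check, assuming the standard Atwater energy factors (about 4 kcal per gram of protein or carbohydrate and 9 kcal per gram of fat), which the source does not state explicitly:

$$ 145 \times 4 + 75 \times 9 + 490 \times 4 = 580 + 675 + 1960 = 3{,}215 \ \text{kcal}, $$

which is close to the Commission’s recommended 3,300 calories; the small remainder is plausibly accounted for by rounding and by minor energy sources outside the three macronutrient totals.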

Debates over what constituted an adequate intake of nutrients and calories, combined with tabulations of consumption habits by class and region, naturally led to questions of how to improve the nutritional quality of diet and of who was to be responsible for such improvements. To some extent, European social scientists and physiologists looked to their governments to take a more active role in guaranteeing better nutrition for all. National governments sponsored investigations into living standards, and on the municipal and national levels, social welfare policies ameliorated poor living conditions through housing reform, consumer cooperatives, and health education programs.

There were no national food policies in the nineteenth century. There were, however, governmental actions that had the effect of improving nutritional standards, such as the adoption of standardized controls against food falsification and fraud, as well as assistance for specific populations at risk of dietary deficiencies. The first of these interventions, the regulation of the food market, was arguably one of the most important means of protecting public health. Yet most European governments were reluctant to intervene in the food market, even though in Great Britain, for example, works detailing fraudulent practices ranged from chemist Frederick Accum’s Treatise on the Adulterations of Food (1820) to Henry Mayhew’s London Labour and the London Poor (1861).

Such works provoked vigorous political and scientific debates over regulation, but in England there was no governmental attempt to intervene between producer, retailer, and consumer until the enactment of adulteration legislation in 1870 (Burnett 1966). Similarly, in Germany, controls for milk, one of the most commonly adulterated foods, were slow to develop, and it was not until 1879 that uniform food controls and punishments for adulteration were passed (Teuteberg 1994).

In Italy, regulations governing fraudulent retail practices were instituted in 1890 only to be judged ineffectual by the scientific community. As physiologist Adolfo Zerboglio noted in 1897, “[I]t is common knowledge that the poor are obliged to stave off their hunger with spoilt food … everyone knows how certain merchants will push poor quality food onto the poor, so as not to keep it in stock” (Zerboglio 1897: 12).

Similarly, governmental interventions to assist populations at risk, such as infants and children, were not fully organized and implemented until the turn of the twentieth century. Diet had improved for many as a result of the increased consumption of fresh fruits and vegetables, butter, and eggs. This was particularly true for Italy and Germany, where the effects of later industrialization were making themselves felt.

In Britain, the inadequate height of military recruits for the Boer War spawned a national debate focused on civilian health and fears of racial degeneration. The minimum height requirement for infantry recruits was lowered from 5 feet 6 inches to 5 feet 3 inches in 1883, and to 5 feet in 1902 (Drummond and Wilbraham 1939: 484-5). Although the stunted growth and malnourished physiques of urban dwellers were of considerable concern, more attention in the ensuing scientific and political debates was given to housing and sanitary conditions than to the influence of inadequate nutrition on public health. The Royal College of Surgeons and Royal College of Physicians were both reluctant to undertake an inquiry into the nutritional status of the population. However, an interdepartmental Committee on Physical Deterioration finally concluded that poor nutrition was in fact playing a role in the continued physical degeneration of the British people. The committee’s findings, issued in 1904, focused considerable attention on the preference for refined, white bread among the working classes, but also concentrated on the feeding of infants and children; working-class mothers were found to be poorly nourished and therefore unable to breast-feed properly (Drummond and Wilbraham 1939: 485-7; Winter 1980).

Mercantilist demographic pressures, coupled with the growing fear of national degeneration, led to a more focused educational campaign to raise the nutritional status of the family. Both charitable organizations and eugenics societies undertook to instruct British women on infant nutrition. These efforts, however, generated feelings of anxiety and inadequacy among working-class mothers faced with the task of juggling household budgets to procure better-quality foods when the formidable appetites of their families demanded quantity (Davin 1978). Social scientists and reformers frequently noted that working-class mothers sacrificed their own health in order to feed their families properly, and pregnant women, younger mothers, and children often subsisted on a diet of bread with margarine and tea (Ross 1993).

Similarly, European-wide campaigns to increase infant birth weight (and reduce infant mortality) focused on the health of the mother without making positive contributions toward maternal nutrition. By the late 1870s, doctors had come to recognize that poor maternal health (including nutrition), as well as short intervals between pregnancies, contributed to low birth weights and consequently to low levels of infant health, and by the end of the century the practice of weighing and measuring newborns in hospitals had become common. Policy suggestions in Switzerland, Germany, the Netherlands, and France, however, focused not on improving maternal nutrition but rather on the importance of a period of rest for mothers before giving birth to increase birth weight and reduce infant mortality (Ward 1993: 24-5).

In Britain, concern for the physical health of the population continued through World War I because of the size and health of military recruits and resurfaced once again during the “hungry thirties” (Winter 1980; Mayhew 1988). During this period, scientific documentation of poor physical stature associated individual health with living conditions, whereas public debates over state intervention linked standards of living to national economic health and military strength. Great Britain was by no means the exception; across Europe in the early twentieth century, solutions to the problem of popular welfare were debated at national and international conferences on population, eugenics, hygiene, infant mortality, and social work.

Such concerns may be understood as characteristic of modernity (Horn 1994: 3-5) and must be seen within a broader context of national and local governmental efforts to “regulate the social” (Steinmetz 1993). But state intervention to improve housing conditions and sanitation was less controversial than efforts to improve diets. European governments were fundamentally uneasy with direct interventions in the everyday life of the family, and any policy that sought to redress nutritional deficiencies for entire populations was not easily reconciled with either liberal democratic governments or capitalist markets.

Even nutritional assistance to populations at risk—infants, schoolchildren, and the poor—was fraught with tension. At the municipal level, assistance was meted out with few problems. In the northern regions of Italy, where pellagra was prevalent, municipal governments worked in conjunction with charitable societies to organize and extend soup kitchens to those consuming the monotonous, maize-based diet that caused the disease (Porisini 1979). And in Germany, socialist and women’s organizations pressured municipal governments to experiment with consumer cooperatives and other means of making food more affordable for the working classes (Steinmetz 1993: 4).

On the national level, however, intervention to provide more nutritious foods for those in need was limited to the provision of school meals and milk. School feedings were justified on both economic and eugenic grounds: to guarantee proper nutrition to the neediest so as to ensure the health of future generations.

Under pressure from school boards, school feedings were incorporated into national education acts on the grounds that improperly nourished children could not perform well. In France, free and subsidized meals for children were considered to be an integral part of the school system, and in Norway, breakfast was offered to all children, rich or poor. In the Netherlands, however, the School Act of 1900 contained provisions that food and clothing be supplied to needy children—a policy judged less successful precisely because school feeding was linked to poverty and therefore tainted by paternalism and charity (den Hartog 1994: 71).

In Great Britain, school-feeding policies grew out of charitable initiatives like the Destitute Children’s Dinner Society (1864), but the government was long reluctant to act on a national level for fear that the poor would become dependent on the state for sustenance and other material needs. National intervention, however, was finally justified by the turn-of-the-century debate over health, efficiency, and empire triggered by the small stature of many who served in the Boer War (1899-1902).

In twentieth-century Europe, such an extension of state responsibility for assuring nutritional adequacy has often resulted from military (rather than economic or social) considerations. School meals were no exception. As John Burnett recently observed, “War, the fear of it and the retreat of the danger of it, has been a major influence throughout the history of school meals” (Burnett 1994: 55). But although the state increasingly justified intervention on the grounds of national survival, local officials and others continued to view the provision of free or cheap meals and milk as an act of charity.

World War I

The circumstances of World War I dramatically altered the attitude of European governments toward promoting civilian health through food policies. They were forced by wartime conditions to provide enough food to keep their populations fit for military and civilian service. If the war was a watershed in the history of modern food policy, it was not because governments were able to promote scientific principles of nutrition. Although scientific knowledge about vitamins and other nutrients in food was developing, it was difficult to incorporate this knowledge into wartime policies based on expediency.

Within both belligerent and occupied nations, the consumption of proteins and fats declined, and food shortages developed even in the postwar period. In western Europe, malnutrition aggravated mortality from tuberculosis, nephritis, and pulmonary diseases, especially in Germany and Belgium.

Within the Allied nations, World War I further spread the “democratization of dietary gratification” by narrowing dietary distinctions between socioeconomic classes. The homogenization of consumption patterns has been an overall trend in the twentieth century; however, “the principal agency in narrowing the gap between the rich and the poor was the social effect of the war” (Oddy 1990: 262). Full employment provided regular as well as increased incomes, and the extension of state control over food modified the division of food on the basis of price. To some extent, food consumption levels improved in the Allied nations because military provisioning made more nutritious foods available to soldiers. In the case of Great Britain, “the nation which went to war in 1914 was still so chronically undernourished that for millions of soldiers and civilians wartime rations represented a higher standard of feeding than they had ever known before” (Burnett 1966: 217). There is little doubt that this was the case for soldiers, but to make a similar argument for civilians would be a more complicated matter.

Indeed, a comparison between Great Britain and Italy during the war demonstrates that food policies aimed at civilians had a complex effect on living standards and consumption habits. In both nations, histories of popular diet published after the war claimed that living standards had been so miserable that wartime rationing actually improved the nutrition of many (Bachi 1926; Drummond and Wilbraham 1939). But if these assertions were true, it was for different reasons: The malnourished in Britain comprised the urban poor and the industrial working class, whereas in Italy it was the rural “backwards” peasantry that was in the worst dietary shape. In both nations, full employment and higher wages were the most significant factors in improving living standards. Rationing policies sought to make more food available to all. However, as price was still the limiting factor in working-class consumption, allowances for more expensive goods like meat did not make much difference if consumers could not afford a full ration. And as one history of British food policy points out, scientific experts still used calories to judge the adequacy of diet, so it was entirely possible that experts would not detect an actual decline in nutritional standards based on food substitution (Barnett 1985: 180-1).

In Great Britain and Italy, bread was never rationed but made available in varying weights and consistencies at different, sometimes subsidized, prices. Yet ensuring the availability of cheap bread, as a matter of wartime food policy, made a significant difference in working-class consumption habits. Not only were consumers able to stave off hunger with as much bread (barring occasional shortages) as they could eat, but subsidized bread freed up more of the household budget to purchase other necessity and non-necessity items. Non-necessity items in these cases were often modest: a few extra eggs, dairy products, coffee, or alcohol.

Moreover, even after the war had ended, the public furor ignited by the termination of the Italian bread subsidy in 1919 demonstrates that consumers had come to view cheap staple foods as an entitlement. The postwar bread riots and disturbances in Italy differed from those of earlier times; consumers still clamored for cheap bread, but they did so now in order to afford more coffee, tobacco, and wine (Helstosky 1996). Thus, food policies in Britain and Italy did not transform consumption habits among the lower classes all that dramatically, although there were subtle dietary changes that were significant for those who experienced them. The increased consumption of eggs and dairy products and of refined wheat-flour bread were critical indicators of an improved living standard.

Although the war’s duration called for tightly organized systems of food controls, food policies in Europe progressed on a piecemeal basis, were sometimes chaotic in organization, and mixed private voluntarism with state bureaucracy. The aim was to ensure the health of the labor and military forces, and even in the absence of legal governments, as in the case of Belgium, a national committee (the Comité national de secours et d’alimentation) attempted to coordinate food aid at different levels to guard the welfare of Belgian labor and thus assure future economic security (Scholliers 1994: 40-1).

Whereas national governments coordinated food imports, requisitioning, and shipments of food out to military forces, municipal governments were usually the first to institute policies important for consumers. In Italy, the national government responded to, and coordinated, prefectoral initiatives on price ceilings, rationing, and domestic trade controls. In wartime Berlin, the municipal government acted to control commercial practices in order to protect consumers and preserve domestic order (Davis 1993).

If the implementation of consumer-oriented policies had its origins in local politics, however, such policies branched out into national politics after 1917, when war weariness on both sides demanded a greater equalization of experience within populations, especially in terms of resource sharing and sacrifice. The most commonly discussed example of the shift from producer-oriented, paternalistic policies to consumer-oriented ones has been that of Germany (Kocka 1984; Offer 1989; Davis 1993). Increasing civil annoyance with the uneven distribution of food led to bitter criticism of food policies and mechanisms of distribution. Consumers chafed at the Reich’s calls for a domestic truce (Burgfrieden) when they perceived disparities in the distribution and acquisition of food between regions and classes. The hardships experienced during the “turnip winters” of 1917 and 1918 led to civil unrest and attacks on commercial middlemen and contributed to a more generalized criticism of governmental war aims and military policy (Kocka 1984: 53-4).

As the war dragged on, Allied governments paid increased attention to consumption issues, primarily to keep civilian workers content and productive. Moreover, because the Allies borrowed both money and food from the United States, some consideration had to be given to managing consumption in order to extend food supplies. Voluntary measures urging austerity were common; wartime ministries resorted to rationing only as a last resort. Austerity campaigns forged a new bond between citizens, particularly women, and government. Budget management and providing the family with a more nutritious diet were tasks that fell upon the housewife, whether or not she worked. During the war, “economy could now be urged upon the housewife all the more strongly because it was justified on social rather than private grounds” (Mennell 1984: 249). It is likely that propaganda slogans like “food wasted is another ship lost” publicly reinforced women’s intermediary roles as food preparers and paved the way for future negotiations between state and housewife for provisioning responsibilities.

The amount of food that Allied nations were able to conserve was never enough in the eyes of Herbert Hoover, U.S. Food Administrator after August 1917 (and later the director of U.S. relief efforts). Following the advice of his “diet squad” of American physiologists, Hoover suggested that Europeans do more to curtail consumption by reducing their intake and substituting foods (Offer 1989: 377). It was during the negotiations between U.S. food administrators and European officials that the idea of transforming scientific findings directly into food policies was debated.

The nutritional studies of American scientists claimed that a general reduction in calories lowered individual body weight and basal metabolism, leading to a more economical working of the body. Hoover and others urged a general reduction in food consumption in order to reduce Allied wheat consumption by one-fourth, thereby easing the wartime strain on U.S. grain stocks (Offer 1989: 378). The British Royal Society soundly rejected such a proposal, noting that any such policy of forced reduction would risk both industrial efficiency and political stability.

Wartime conditions brought about a contradictory situation with respect to popular diet. Although the salaries of workers rose to allow for increased consumption, shortages of supplies and pressure from the United States acted to deny it, and whether consumers were able to purchase more nutritious foods with their additional money is open to question. After the war, the Carnegie Foundation sponsored research that examined changes in living standards and food supplies that had occurred during the war. Authors of these books—mostly economists and agricultural experts—observed mixed results. Economist Riccardo Bachi’s study of wartime Italy, for example, concluded that the war improved the nutritional quality of diet primarily because prewar consumption levels were so low (Bachi 1926). Italian physiologists, however, contradicted Bachi’s findings, asserting that consumers used higher wages to purchase non-nutritive goods like tobacco, coffee, and wine, thus succumbing to what one scientist termed “alimentary sensualism” (D’Alfonso 1918: 28).

Dr. M. Demoor of Belgium’s Académie Royale de Médecine rejected the possibility of any objective and scientific study of alimentation during the war: The incomplete documentation of living standards meant that nothing but speculation was possible (Henry 1924: 195). In France, the continuation of subsidized prices for special groups like pregnant women, families with more than two children, and the aged indicated that rations were still not adequate for all segments of the population (Augé-Laribé and Pinot 1927: 258-9). It is difficult, then, to generalize about the nutritional content of diet during and immediately after the war. There were few detailed monographs written on living standards, and those that were usually addressed the situation of workers in industrialized areas.

It is also difficult to generalize about the impact of state policy on food consumption habits, given the wide variation in national experience. Assessments of policy have as much to do with social scientific observations of living standards (which are sorely lacking in many cases) as they do with public perceptions of policy efficacy. Consumers in Germany, where essential foods were rationed and often unavailable, interpreted the effects of food policies far differently from the way that consumers did in Britain, where only sugar and meat were rationed and most goods were continuously available for purchase.

Despite the sometimes chaotic nature of food policy development, what seems clear is that European governments assumed a greater responsibility for civilian health over the course of the war. Whereas state intervention prior to the war was limited to the regulation of food distribution and assistance for the few, wartime policies sought to control the mechanisms of production, distribution, and consumption. The few scientific observations of changing living standards during the war, combined with the standardized nutritional recommendations of the Interallied Scientific Commission, led to an even greater awareness of dietary “averages” to be met as policy goals for the interwar period. There was no question that European governments had become more involved in matters of food consumption as a result of the war; the questions open for debate during the interwar period were whether intervention should continue and for what purposes.

The Interwar Years into World War II

Wartime consumption patterns—more dairy products, meat, fruits, and vegetables, as well as further increases in wheat consumption—were consolidated during the prosperous 1920s and reversed in the following decade of economic crisis. Generally, with the continued development of food processing and retailing in the interwar period, all social classes were enjoying a greater variety of foods. These included industrially created foods like margarine, breakfast cereals, preserves, and meat or fish pastes, along with canned fruits and vegetables. Scientific knowledge about food continued to advance, and vitamins as well as minerals were pronounced essential components of diet. Recommendations for caloric intake, however, fell slightly during this period because greater numbers of people were living more sedentary lifestyles.

Feeding populations at risk became more closely related to the promotion of commercial concerns. The “Milk for Schoolchildren” program in Britain, for example, was sponsored by the National Milk Publicity Council. Similarly, nutritional education programs for housewives and mothers, as well as domestic economy literature, counseled women to buy only standardized, commercial products they could trust (such as foods from Nestlé and Liebig or brand-name products such as Bovril or Kellogg’s Corn Flakes). This was probably useful advice for many housewives, especially wherever consumers patronized local shops with less than ideal hygienic practices.

In terms of broader food policies, the most interesting changes in the interwar period reflected the wide-ranging turbulence created by competing systems of political economy. This was the era when both liberal democracies and authoritarian regimes alike took a greater interest in popular health, although their concerns manifested themselves differently. In Great Britain, dietary standards were debated in an effort to address chronic poverty, although, typically, the state ultimately balked at the idea of assuming broader powers of intervention. In Fascist Italy and Nazi Germany, by contrast, food production and consumption were more carefully controlled so that the dictatorships could successfully implement policies of greater economic self-sufficiency. The economic crisis of the 1930s sharpened both the impulse toward autarky and the debate over state responsibility for minimal versus optimal health.

In Great Britain, the publication of John Boyd-Orr’s Food, Health and Income in 1936 and B. Seebohm Rowntree’s Poverty and Progress in 1941 pushed the issues of dietary standards and civilian health into the political forefront. Boyd-Orr’s study found that 10 percent of the working population during these depression years earned wages that were insufficient to purchase a nutritionally adequate diet. Moreover, he determined that half of the general population consumed a diet that satisfied hunger but was deficient in the nutrients that would maintain what he called optimal health (Boyd-Orr 1937: 8). Lower than optimal dietary standards, he believed, were a financial drain on the state, and he argued for greater state intervention in nutritional matters. Both the findings and the conclusions of Boyd-Orr, however, were roundly criticized in both political and scientific circles for having greatly exaggerated dietary deficiencies in the general population.

Politically, the debate focused on whether the state should work to ensure dietary improvements for all or simply continue protecting populations at risk, such as children. Fear of creeping socialism prevented drastic intervention, but both sides of the political spectrum agreed to focus on improving child nutrition, the left in the hopes of extending state welfare and the right because of eugenic concerns with degeneration (Oddy 1990: 276). The scientific debate over popular nutrition centered upon whether income or food preparation played a more important role in determining the nutritional quality of diet. Scientists were divided on whether malnutrition was a product of ignorance or insufficient income (Mayhew 1988: 450). Despite Boyd-Orr’s calls for greater state involvement, responsibility for proper nutrition ultimately was shifted to consumers; as physiologist E. P. Cathcart of the University of Glasgow stated, “It has been our experience, as a result of repeated dietary studies, that one of the most prominent contributory factors toward defective and deficient dietaries is not so much the inadequacy of income as its faulty expenditure” (Pike 1985: 36).

Thus, Britain did not undertake more drastic measures to improve the overall quality of diet. Governmental involvement remained limited to the continued nutritional education of mothers, school feedings, and interventions to meet the needs of the economic crisis.

With economic and agricultural crisis also came more action from European governments to protect domestic agriculture. These interventions assumed the primary form of tariffs, and when tariffs alone were insufficient, the second line of defense consisted of import quotas and milling requirements for domestic grains. Great Britain abandoned laissez-faire agricultural policies by adopting import duties and creating marketing boards. France organized marketing boards for agricultural staples like wheat and wine; Denmark introduced domestic market supports; Nazi Germany instituted a comprehensive organization of production, marketing, and trade; Italy intensified domestic grain production and tightened controls over imports (Tracy 1989).

Such protectionist policies had indirect effects on food consumption habits in the sense that they made domestic staples like wheat more expensive. If staple goods came to occupy a greater portion of the household food budget, this meant that less money was available for the non-necessity items consumers had become accustomed to purchasing during and after the war.

The effect of economic policies on food consumption habits seems particularly important in light of the fact that an adequate standard of living was tied intimately to the promotion of economic and political systems. In Fascist Italy, for example, the regime founded the Committee to Study the Problem of Alimentation in 1928. One of the Committee’s responsibilities was to organize, conduct, and publicize grand inquests into the living standard, in order to prove to the rest of the world that fascism as an economic system was leading the nation out of its backward status.

By contrast, sociologists in Belgium worked to measure the nutritional standard of the working classes and, as Peter Scholliers has observed, “[t]heir writings had ideological aims, stressing the fact that the capitalist system was, in the long run, capable of ensuring a decent standard of living for all people” (Scholliers 1992: 73). Although authoritarian regimes in Italy and Germany worked to exert tighter controls over food production, distribution, and consumption, the primary goal of such policies was not to improve nutritional standards. Rather, the motive was to ensure that populations could survive on less food, should there be another war or invasion.

The experience of World War II threw European governments back into the roles of providers and coordinators of food supplies. Rationing was implemented earlier and, because of the duration and severity of the conflict, was imposed on more food items than during the previous war. The struggle for adequate sustenance was a more difficult one in many areas, as exemplified by the extension of the black market in foodstuffs and the use of goods as viable currency throughout the continent. As in the case of World War I, European governments again acted out of expediency, making it difficult to ensure proper nutrition through a rationalized program of food controls. It would not be until after the war and well into the economic miracle Europeans experienced in the 1950s and 1960s that consumers in some nations would experience the culture of superabundance and confront for the first time the health problems associated with overconsumption.

Summary

State intervention in matters of nutrition during the nineteenth and early twentieth centuries in Europe can be characterized by a hesitancy and a reluctance to assume greater responsibilities for the overall health of the population. Governmental concern over popular diet had as much to do with mercantilist and militarist anxieties as it did with a growing public awareness about the importance of nutrition in building and maintaining health. Even when limited interventions—such as the feeding of schoolchildren—sought to safeguard the economic and military future of European nations, these actions were rooted in the voluntaristic paternalism that characterized charity in earlier periods. Pressure to guarantee optimal health for entire populations came mostly from the scientific and social-scientific communities, but it was only under wartime conditions that states acted with such broad measures to guarantee a minimal subsistence for all.

This is not to say, however, that state activities did not influence patterns of food consumption. Direct and indirect intervention in food markets affected the allocation of household budgets and therefore the nutritional composition of diets. Over the course of the nineteenth and early twentieth centuries, European governments demonstrated an increasing interest in safeguarding the health of populations through food consumption; but their limited range of activities demonstrates the political and economic constraints under which they functioned. Prior to the formation of the post-1945 welfare state, which ideally regarded optimal health as a right of citizenship, decisions to implement food policies as a means of building labor productivity, reducing mortality, or satisfying eugenic concerns depended upon economic and political circumstances as much as they did on the scientific knowledge of nutrition.