Brian Murton. Cambridge World History of Food. Editors: Kenneth F. Kiple & Kriemhild Conee Ornelas. Volume 2, Cambridge University Press, 2000.
Despite swelling populations around much of the globe, the enormous expansion of agricultural productivity, the rapid development of transport facilities, and the establishment of globally interlinked market networks have made it theoretically possible to provide adequate food for all. Yet famine and hunger still persist and, indeed, proliferate in some parts of the world. Their durable character represents a perplexing and wholly unnecessary tragedy (Drèze and Sen 1989; Watts and Bohle 1993b). Although the extent of hunger in the world will never be known with precision (Millman 1990), it has been estimated that in the early 1990s, more than 500 million adults and children experienced continuous hunger and even more people were deemed vulnerable to hunger, with over 1 billion facing nutritional deficiencies (WIHD 1992).
The community concerned with world hunger is far from unanimous in its understanding of the situation. S. Millman (1990) likens the situation to the parable of the elephant and the blind men, whereby hunger is perceived differently by those encountering different aspects of it. It is significant that these varying perceptions correspond to particular disciplinary or professional orientations, leading to different diagnoses of the nature of the problem and its underlying causes and implying distinct foci for policy interventions.
Problems of food supply, then, are among the most bewildering, diffuse, and frustrating of humankind’s contemporary dilemmas. Within the lifetime of each of us, official views of the world food situation have oscillated from dire predictions of starving hordes to expectations of a nirvana of plentiful food, then back to impending doom. One expert states that famine is imminent, whereas another declares that our ability to adequately feed the world’s people is finally within reach.
The term “famine” is one of the most powerful, pervasive, and (arguably) emotive words in our historical vocabulary. This in itself makes it all the more difficult to isolate its meaning and wider significance. But to this end, the ensuing chapter addresses the following topics: a brief review of famine in history; a definition of famine and its various dimensions; the identification and description of a range of cultural and social famine coping practices in traditional societies; an examination of the literature on the causes of famine, along with some of the recent attempts to model famine vulnerability and the famine process; and, finally, a focus on past and present attempts to develop famine policies.
Famine in History
Famines are woven into the fabric of history; indeed, as D. Arnold (1988) has pointed out, they are important historical markers, often used by chroniclers and historians in search of points in time when the normal rhythms of human life were interrupted, or when momentous developments appeared to have a discernible beginning or end. In singling out famines in this way, historians have done rather more than just seize upon convenient dates by which to slice up history into manageable portions. They have also followed the perceptions and traditions of the people themselves. In preliterate societies, particular famines, along with other collective catastrophes, served as a common means of recording and recovering the experience of the past.
We know that famines have led to untimely deaths for at least 6,000 years, which makes it necessary to simplify the enormously complex array of data and sources concerning them. W. A. Dando (1980) has identified a system of world famine regions for the period from 4000 B.C. to A.D. 1980. The first region, for the period from 4000 to 500 B.C., is Northeast Africa and West Asia, where the earliest authentic records of famine have been found, including several from the Nile Valley dated as early as 4247 B.C. The famine in Egypt of Joseph’s day (1708 B.C.), recorded in the Old Testament, appears to have been part of a much more widespread scarcity throughout West Asia. Until 500 B.C., the literature of this area continued to be studded with accounts of famines.
In the thousand years after 500 B.C., the heaviest concentration of accounts mentioning famine is found in the region that became the Roman Empire. Imperial Rome may have brought order and organization to subjugated peoples, but Mediterranean Europe still experienced at least 25 major famines during this period. After A.D. 500, western Europe emerged as the region to which the extant accounts of famine most frequently refer. Up to A.D. 1500, the British Isles suffered from at least 95 famines, France suffered 75, and other famines or severe food shortages were recorded throughout these territories as well.
Indeed, famines are a much discussed part of the European experience until the nineteenth century. However, both the incidence and extent of major European famines seem to have begun to decline in the seventeenth and eighteenth centuries (Grigg 1985). Thus, the last major famine in England occurred in A.D. 1620, in Scotland in the 1690s, in Germany, Switzerland, and Scandinavia in 1732, and in France in 1795. During the nineteenth century, many parts of Europe were afflicted with harvest failure and high prices in 1816, and the 1840s were also a period of acute food shortage, including the “Great Famine” in Ireland (1846–51). But in general, even localized food crises diminished in Europe after the eighteenth century, and although until the mid-1800s most Europeans remained in a chronic state of under-nourishment (Cipolla 1976), at least western Europe had shaken off the specter of famine.
The fourth famine region was eastern Europe, whose various sectors experienced more than 150 recorded famines between A.D. 1500 and 1700. Expanding the time frame a little, in Russia at least 100 hunger years and 121 famine years were recorded between A.D. 971 and 1974 (Dando 1980). Famine occurrence in imperial Russia reached its high point in the nineteenth century, but even after the revolution of 1917, the scourge of famine could not be eliminated. In 1921 and 1922, approximately 9 million people starved to death; in 1933 and 1934, the number was between 4 and 7 million, and in 1946 and 1947, it was nearly 2 million. The famines of 1921 and 1922 resulted from the breakdown of order associated with civil war, and those of the 1930s reflected Stalin’s forced collectivization of agriculture. The famine of 1946–7 was also man-made. A decision to rigorously restore the provisions of the collective farm charter, which had been relaxed during World War II, coupled with drought and the diversion of scarce grain abroad to promote communist goals, produced this famine (Dando 1980).
From A.D. 1700 to 1970, Asia emerged as the world’s foremost famine area. In South Asia, legends and records document more than 90 major famines over the last 2,500 years, two-thirds of them after 1700, but most were localized (Kulkarni 1990), with only an occasional one (like the Durga Devi famine of the late fourteenth century in the Deccan) covering a large area. Although all of India suffered to some extent in the early eighteenth century, without question the late eighteenth and nineteenth centuries were that country’s time of famines. They devastated Bengal in 1770, the Ganges Valley, western India, the Deccan, and Madras in 1783, and almost all of the peninsula in 1790. In the first half of the nineteenth century, major famines took place from 1802 to 1804, in 1806 and 1807, in 1812, in 1824, in 1825 and 1826, from 1832 to 1834, in 1837 and 1838, and in 1854. Most of these, however, were limited in their extent, although they did cause intense suffering and death regionally.
But the period between 1860 and 1880 in India was one in which five major famines and three local scarcities followed each other in rapid succession (Kulkarni 1990). The famines were widespread, but after 1880 there was a period when only local scarcities occurred and no famines. Then, in 1895, perhaps the most disastrous famine of the century began in the middle and upper Ganges Valley and spread, in 1896, to the whole South Asian region. This was followed by the famine of 1899 and 1900, which devastated a large area of the peninsula and the northwest as well.
There have also been a number of famines in the twentieth century in South Asia. In 1907 and 1908, one descended on the middle and upper Ganges Valley, but thereafter, although there were scarcities, no famine occurred until that of Bengal in 1943. Nor did scarcity and famine cease after India gained its independence in 1947. There was widespread scarcity (and probably starvation, despite denials of this by the Indian central government and various state governments) in 1952 and 1953, and again between 1965 and 1967, 1970 and 1973, and in 1986 and 1987. In addition, Bangladesh was severely affected by famine in 1974.
Over the past 2,000 years, China has recorded perhaps as many as 90 famines per century. But, as in South Asia, the nineteenth century saw China’s most devastating famines. Droughts, floods, locusts, hurricanes, and earthquakes were natural disasters that induced crop failures, but the breakdown of civil society and warfare also were important factors. Four of these famines alone (in 1810, 1811, 1846, and 1849) are reported to have claimed 45 million lives. Nine million died in the famine from 1875 to 1878 in northern China. Other severe famines were recorded in 1920 and 1929, and there was a particularly harsh one between 1958 and 1961, when it is estimated that between 14 and 26 million (and perhaps even as many as 40 million) people died (Kane 1988; Article Nineteen 1990).
Elsewhere in Asia there have been localized famines in many places over the past 200 years or so, and of late, areas such as Timor and Kampuchea have been afflicted. In the latter country, the decade of the 1970s was one of continuous food crisis, deepening into famine in 1975 and 1979; the latter has been described as the “most catastrophic famine in the history of the Khmer people” (Ea 1984).
Even North and South America, as well as the Pacific, have not escaped unscathed from famine. For example, in the middle of the fifteenth century, a four-year sequence of frosts and droughts produced a terrible famine in central Mexico (Townsend 1992), and famines, caused by a combination of drought and lack of political and economic cohesion, have been frequent in northeast Brazil (Cunniff 1996). Even in the lush, tropical Caribbean islands, famines were common during the seventeenth and eighteenth centuries, through a combination of physicoenvironmental, biological, and socioeconomic circumstances (Watts 1984). In the Pacific, small islands, vulnerable to environmental and societal perturbations, have always been susceptible to food shortages and famine (Currey 1980). Indeed, even in Hawaii, since European contact, famines have occurred on average every 21 years (Schmitt 1970).
Although famines have struck many parts of the world in the past two decades, sub-Saharan Africa has been especially hard hit and has afforded the contemporary world some extremely powerful and distressing visions of famine. However, famine in Africa is not a new phenomenon. Oral traditions from many areas mention numerous occurrences in precolonial times (Arnold 1988b), and colonial records document many more (Hill 1977; Watts 1983). Yet, as late twentieth-century events in Bosnia-Herzegovina reminded us, no place is free from famine, and one of its contemporary faces is worn by victims of war.
Nevertheless, although nearly 200 million people each year continue to be plagued by hunger, the trend in famines since the end of World War II has clearly been a downward one (Kates et al. 1988). This trend reflects both a lessening of famine generally and a major shift in incidence from populous Asia to less populous Africa. The Alan Shawn Feinstein World Hunger Program at Brown University, using data averaged for 7-year periods beginning in 1950, has found that the average number of people residing in countries in which The New York Times reported famine was 790 million annually from 1957 to 1963, but declined to an average of 265 million in the period from 1978 to 1984. Since that time, the average has dropped below 200 million.
Definitions and Dimensions of Famines
Millman (1990) has identified three hunger-related situations in the world today: food shortage, food poverty, and food deprivation. These situations are distinguished from one another primarily by the level of human organization (from an entire population to households to the individual) at which scarcity is manifested. “Food shortage” indicates a situation in which total food supplies within a bounded region are insufficient to meet the needs of its population. “Food poverty” refers to the situation in which a household cannot obtain enough food to meet the needs of all of its members. “Food deprivation” refers to insufficient food availability for an individual. At each level, the commonly used term “food security” can be taken to mean an ability to avoid the corresponding hunger situation. Food shortage is among the causes of food poverty, which in turn is among the causes of food deprivation. However, other factors may operate to cause food poverty when there is no food shortage, and food deprivation where there is no food poverty.
Thus, hunger is not a single, uniform experience. Its manifestations range from vulnerability resulting from dietary traditions that mesh imperfectly with variations in need over the life cycle, to household food insecurity rooted in poverty, to aggregate food-supply shortfalls, which, when they worsen, can become the massive hunger crises affecting large numbers within specified regions and generating substantial increases in mortality. The latter, in popular thinking, are called “famines.” Certainly, the most poignant manifestations of hunger are famines, which we have become accustomed to thinking of as “disasters” of a particularly horrific kind, replete with human misery on a massive, almost unimaginable, scale.
Famine: A Subjective Window on Poverty and Hunger
One of the debates over famine that has emerged involves the question of whether it is a discrete event or merely the tip of an iceberg of underlying social, economic, and political processes (Currey 1992). Many writers (e.g., Mellor and Gavian 1987) have defined famine as a discrete event, separate from chronic hunger. Furthermore, the World Bank (1986) has separated transitory food insecurity from chronic food insecurity, and the World Food Programme has focused most of its attention on nutrition in times of disaster. B. Currey (1992), however, has questioned whether such crisis management is the most cost-effective means of reducing world hunger and has suggested that more efforts be directed toward building resilient agricultural systems and long-term monitoring of rural development.
Such differences in approach highlight one of the problems with famine, which, as a concept and as a historical phenomenon, presents us with a fundamental paradox: It is both event and structure (Arnold 1988). On one hand, it is clearly an “event.” There may be widely different opinions as to exactly when a particular famine begins or ends, but there is common agreement that it occupies a finite span of historical time and human experience. Basically, famine signifies an exceptional (if periodically recurring) event—a collective catastrophe of such magnitude as to cause social and economic dislocation. It generally results in abnormal levels of destitution, hunger, and death. It can lead to the complete disintegration of customary patterns of work and subsistence and can greatly disrupt customary norms of social behavior, not to mention levels of mortality.
And yet, at the same time, famine cannot be meaningfully considered in isolation from the economic, social, and political structures of a specific society. Occasionally, we document a specific disaster, resulting in a famine, in a society that is otherwise relatively secure in its provisioning. More commonly, however, famine acts as a revealing commentary upon a society’s deeper and more enduring difficulties. In other words, famine can be viewed as a subjective window on poverty and hunger. The proximate cause of a famine may lie in some apparently unpredictable “natural disaster,” like a flood or drought, or in a human calamity like a civil war or invasion (Arnold 1988). But these are often no more than “triggering” events, intensifying or bringing to the fore a society’s already extant vulnerability to food shortages and famine. Any historical understanding of famine must, therefore, be alert to its structural causes and to its social and cultural parameters, as well as to what happened during the crisis itself.
This approach to famine, as a meeting and intermingling of event and structure, partially derives from a perspective that sees history not as simply a narrative sequence of great events and personalities but as a search for underlying structures and recurrent patterns, famines among them (Arnold 1988). In a similar manner, it has been recognized among social scientists that individual action or agency is constrained by the relationship between agency and structure, and this dialectic takes place not only locally but on a world scale (Wisner 1993). Thus, although a famine can arise locally, the constraints on humans that create their vulnerability to it can originate in the influence of structures located on other continents.
Famines as History
Famines form an integral part of formal attempts to record and recall the past in many parts of the world. Among other things, they act as an aid to recall, as a reminder of their terrifying consequences, and, sometimes, as a key to their putative meanings. The practice of naming famines is one indication that in the popular memory at least, all famines were far from identical, whatever similarities they might bear for outside observers. For example, India’s famines commonly bore the title of the Hindu calendar year in which they occurred. Others were named after vengeful deities or were seen as marking the onset of the Kali Yuga, the Hindu age of suffering, corruption, and human misery. In Africa, rural people have often named famines after what they exchanged or sold to get food, or after what people ate.
Such examples highlight the fact that famines are “hitching posts” of history (Shipton 1990) and the poles around which experiences and impressions are organized and collected. Thus, famine forms a link between the world of personal memory and the broader domain of collective consciousness, despite the fact that many academic historians have been skeptical about the authenticity of famine accounts found in folklore, oral history, and even chronicles.
In the past, as famines touched the lives of millions deeply and directly, they lived on in collective memory, and terror of their return kept them alive in that memory. We should remember that it has only been in the last century or so that people in Western societies have felt themselves immune to famine and discarded such cumulative folk experience as redundant. Perhaps it is this lack of fear of famine that most critically divides us from our own past and from the lives of a large part of the world’s peoples today (Arnold 1988).
Famine as Demographic Crisis
One of the ways in which famines impress themselves upon collective memory and experience is through the colossal and devastating mortality involved. However, mortality statistics can give only a rough impression of a famine’s magnitude, and it would be unrealistic to credit such data with any real precision. Until the last century or two, few governments kept reliable and detailed records of vital data. Historians who have attempted to reconstruct famine mortality in the remote past have tried to compensate for this deficiency by using parish records, tax returns, and similar sources. But these records seldom provide a dependable picture of mortality trends over a wider area, and even where there was some form of birth and death registration, famine mortality was often grossly underreported. Local officials themselves fell ill, died, or deserted their posts. The deaths of villagers who wandered off elsewhere in search of food passed unrecorded. Thus, it is not surprising that our understanding of the relationships between demographic processes and famine is extremely limited (Hugo 1984).
Indeed, although mortality is viewed as one of famine’s major effects and is, in fact, an integral part of many of its definitions, there is surprisingly little data to precisely quantify the impact of famine on mortality rates, even in modern times; the famines of the 1970s and 1980s stand out as examples (Hugo 1984). Accounts of famines in preindustrial societies usually paint pictures of death on a massive scale. P. A. Sorokin (1942, 1975), one of the earliest writers to systematically summarize the demographic impacts of famine on mortality, has suggested that in affected areas, death rates sometimes reached 200, 500, or even 800 for every 1,000 people, as compared with normal rates of 10 to 30. He maintains that in the Soviet famine of 1921, for example, regional death rates reached 600 per 1,000.
The concept of “excess deaths” has been found to be useful in examining the impact of contemporary famines on mortality. “Excess deaths” refers to the “number of deaths over and above those that would have occurred if previous nutritional conditions had prevailed” (Bongaarts and Cain 1981). Very few studies have employed this concept. One of these, by A. K. Sen (1980), makes several reasonable corrections to the existing mortality data for the 1943 Bengal Famine and concludes that the total figure for excess mortality associated with that famine was 3 million. Another, by J. C. Caldwell (1975), estimates that the excess mortality for the entire Sahelian region during the famines of 1970 to 1974 was no more than 250,000, despite massive publicity that insisted that many more people were dying, and M. Alamgir (1980), in a third study, estimates the excess mortality for Bangladesh in 1974 and 1975 as 1,500,000.
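The arithmetic behind the concept is simple, and a minimal sketch may help (the notation and the figures that follow are illustrative assumptions, not drawn from the studies just cited). If a population at risk $P$ has a baseline crude death rate $m_b$ and a crisis lasts $t$ years, then excess mortality is

\[
E = D_{\text{obs}} - m_b\,P\,t,
\]

where $D_{\text{obs}}$ is the number of deaths actually recorded. For a hypothetical population of 10 million with a normal crude death rate of 15 per 1,000 per year, a single crisis year would be expected to produce about 150,000 deaths; were 450,000 deaths observed, the excess mortality would be some 300,000. In practice, both terms must themselves be estimated from the imperfect records described above, which is why corrections of the kind Sen makes are so consequential.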
Famines also seem to have distinct phases of mortality response (Ruzicka and Chowdhury 1978; Bongaarts and Cain 1981). There appears to be an initial phase during which mortality rates respond immediately to the food crisis. This is followed by a second phase that sees mortality rates at least twice as high as normal, and a third involving a gradual rate decline. Finally, there is a phase in which mortality rates are actually lower than “normal” because the most vulnerable groups in the population have already been severely pruned.
Demographers have also tended to neglect the differential mortality within subgroups in populations during famines. Best documented are excess death rates among infants and children. Illustrative is the Bangladesh Famine of 1974 and 1975, during which there was a 70 percent increase in the infant mortality rate in a sample of 228 villages containing 120,000 people, which meant 529 deaths per 1,000 live births (Ruzicka and Chowdhury 1978). A study of the 1980 famine in Karamoja, Uganda, measured an infant mortality rate of 607 per 1,000 live births.
The elderly also are especially vulnerable to excess mortality in times of famine (Chen and Chowdhury 1977), as are pregnant and lactating women (Bongaarts and Cain 1981). In addition, there is no doubt that famine mortality disproportionately affects poor and landless people. For example, in Bangladesh, the 1975 crude death rate among landless families was three times higher than among those with at least three acres of land (Chen and Chowdhury 1977). It also is clear that in the 1943 Bengal Famine, the most affected group in terms of excess mortality was that of agricultural laborers (Mukherji 1965).
Although discussion of the demographic impact of famine has concentrated on assessments of mortality, another important dimension is its impact on fertility (the demonstrated capacity of women for reproducing) and fecundity (a predisposition or latent capacity for reproducing). There is considerable evidence to suggest that fertility, as demonstrated by birth rates, follows a distinctive pattern during famines (Bongaarts and Cain 1981). Birth rates initially remain at prefamine levels, then rapidly decline, some nine months after the famine’s onset, to rates only 30 to 75 percent of those that are normal (depending on the severity of the famine). The low point in birth rates usually occurs nine months after the end of the crisis. But following this point, the conception rate recovers quickly, and the famine-induced period of depressed fertility is followed by one in which the birth rate exceeds prefamine levels for up to three years. Reasons for this pattern are varied. Fecundity is decreased by minimal nutrition and psychological stress. Hunger also tends to diminish the frequency of intercourse, and spouses are often separated by temporary migration during famines. In addition, there is an increase in voluntary birth control, sexual abstinence, abortion, and, historically at least, infanticide (Sorokin 1942; Connell 1955; Ruzicka and Chowdhury 1978).
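The timing of this response can be summarized schematically (the stylized timeline below is our paraphrase of the pattern just described, not a fitted demographic model). Writing $B(t)$ for the birth rate and $B_0$ for its prefamine level,

\[
\frac{B(t)}{B_0} \approx
\begin{cases}
1 & \text{for roughly nine months after onset,}\\
0.30\text{–}0.75 & \text{thereafter, with the trough about nine months after the crisis ends,}\\
>1 & \text{during the rebound, for up to three years,}
\end{cases}
\]

with the depth of the decline depending on the famine’s severity.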
As yet, there is little information on socioeconomic differentiation in the pattern of fertility response to famine, but the scattered evidence suggests that it is the poor and landless who most reduce their rate of conception during such a crisis (Stein et al. 1975; Ruzicka and Chowdhury 1978).
A third important demographic feature of famine is migration, which in traditional societies has historically been one of the most important ways that people have coped with famine. Although migration should be studied in relationship to mortality and fertility, in this chapter it will be considered with other coping strategies.
Famine and Disease
Assessments of the demographic impact of famine are greatly complicated by the disease factor because, in most famines, mortality from epidemic disease has greatly exceeded that from actual starvation. For example, during the course of the Bengal Famine of 1943, starvation was identified as the cause of death in only about 5 percent of cases; cholera, malaria, and smallpox accounted for the great majority of the 3 million deaths. Moreover, in the 1940s, just as in the nineteenth century, the colonial administration in India was loath to acknowledge starvation as a cause of death. This situation has continued since 1947 because famine mortality in India has remained a political matter.
The reasons for the intertwining of epidemics and famine appear to be both physiological and social. Malnutrition can weaken the body’s immune responses, creating a diminished resistance to infection and a reduced capacity to recover from it. Further, migration has the effect of spreading disease to areas not directly affected by hunger. During times of famine, personal hygiene also tends to be neglected. Debilitated people may fail to wash, and they may drink filthy or contaminated water. They may consume “famine foods” (unripe grain, grass, or roots) in an attempt to suppress hunger; this sometimes causes diarrhea and vomiting, which results in a further weakening of the body and a greater risk of spreading disease. The high level of famine mortality, in other words, is partly a consequence of the nature of the expedients people adopt to try to escape from hunger and partly one of the disruption created by famine in customary patterns of social behavior.
Famine Chronology
All famines have their own internal chronology. Because hunger seldom kills outright and immediately, the symptoms of growing hunger may not be apparent to outsiders until destitution and debilitation have already reached an advanced stage. Thus, the duration of famine can be reckoned according to a variety of different criteria, and definitions made by those actually subject to the famine may differ greatly from those made by officials. Those experiencing a famine might date its onset from the first warning signs of approaching calamity, such as the delayed arrival of the rains and the first withering of standing crops. Officials, by contrast, might see a famine as beginning only when food prices climb to abnormal heights or when the signs of distress among the starving poor are given governmental recognition. Equally, an official definition of famine might end with the closing of state-managed relief work, though this may occur well before survivors of the famine feel themselves free of its grip.
Traditional Cultural and Social Coping Practices
“Coping” is the manner in which people act, within the constraints of existing resources and the range of expectations of a situation, to achieve various ends (Blaikie et al. 1994). In general, such action involves no more than “managing resources,” but more specifically, it usually means how this management is accomplished under unusual and adverse circumstances. Thus, coping can include defense mechanisms, active ways of solving problems, and methods for handling stress. Resources for coping with famine include labor power, land, tools, seed for crops, livestock, draft animals, cash, jewelry, other items of value that can be sold, and, of course, storable food stocks, as well as skills and specialized knowledge. In order for tangible resources to be mobilized, people must be entitled to command them, which may be achieved in many ways. Among them are using the market, exercising rights, calling upon obligations (of household members, kin, patrons, friends, and the general public, by appeals to moral duty as in alms and charitable giving), stealing, or even committing violence. In many cases, specialized knowledge is required with certain resources, for instance, in locating wild foods, determining the moisture capacity of certain soils, discovering water sources, or finding wage labor in distant cities or plantations.
Although the range of strategies to prevent or minimize the risk of famine is enormous, two generalizations can be made. First, the objective of many of these strategies is to secure necessities, such as access to a minimum level of food, shelter, and physical security, rather than to increase income. Second, maintaining command of these basic needs in a risky environment usually implies employing multiple, varied methods of access to resources. These include diversifying production strategies, setting up nonagricultural income sources, strengthening or multiplying social support networks, and developing a demographic strategy aimed at the creation, maintenance, and mobilization of human labor.
Diversification is one strategy, and the production of farming people is usually diversified, involving mixed cropping, intercropping, the cultivation of nonstaple crops, and the use of kitchen gardens. The result is often a “normal surplus” in good years because it is planned on the basis of meeting subsistence needs even in bad (but not the worst conceivable) years. Because planting a greater variety of crops provides the best chance of an optimum yield under all variations of weather, plant disease, and pest attack, it represents one of the most important precautionary strategies for coping with food shortages (Klee 1980; Wilken 1988).
Another important preventive/mitigating strategy involves the development of social support networks. These include a wide variety of rights and obligations among members of the same household (for example, wives and husbands, parents and children), within the extended family, and within other groups with a shared identity, such as clan, tribe, and caste. Parents may try to make strategic choices of marriage for children into comparatively wealthy families, which might increase their ability to call on resources in difficult times (Caldwell, Reddy, and Caldwell 1986). Within the household and family, successfully securing resources in potentially disastrous times depends upon the implicit bargaining strength of its members and on their “fallback” positions (Agarwal 1990) or “breakdown” positions, if cooperation in this bargaining process should fail (Sen 1988, 1990).
It has also been argued that in societies where people habitually live in the shadow of hunger, those people develop, or are the beneficiaries of, social and cultural practices that ensure that even the poorest will not starve to death. Such practices are based on ideas of “shared poverty” and “mutual assistance,” which some writers term the “moral economy of the poor” (Scott 1976). Under circumstances of famine, those who have food share it with destitute kinsmen and needy neighbors. These noneconomic relations in times of hardship include those between patrons and clients and between rich and poor. They offer a minimum subsistence and a margin of security, and they constitute a “subsistence ethic” based on reciprocity. There are many examples of this type of relationship (see Bhatia 1963 on India and Scott 1976 on Southeast Asia).
Although most examples are drawn from the past, it would seem that this “moral economy” has not completely broken down in the contemporary world. For example, reports indicate that during the drought and period of extreme food shortage from 1980 to 1983 in southern India, the support system worked well, at least for the aged (Caldwell et al. 1986). Further, A. Gupta (1988) goes so far as to say that the continued existence of such support in present-day India is responsible for the retention of people in the countryside. In Nepal, it has been found that the wealthy are encouraged to avoid reducing daily wages for agricultural work in difficult times and to refrain from selling grain outside the village (Prindle 1979).
In general, it seems that when the outcome of a season is still uncertain, landlords and patrons make some provision for laborers and the poor. But once signs of a famine are evident, landowners respond by reducing their number of field hands. As the crisis deepens, workers in other sectors of an agrarian society, such as fishermen, artisans, and a range of rural dependents, become affected by the lack of patronage and support from the wealthy as well, a situation that has been documented in many parts of the world (Arnold 1988).
Rural households try to build up stores of food and salable assets. However, the first is difficult to achieve for people who are involved in a web of impoverishment and exploitation that is a normal and continuing part of life. In many parts of Africa, Asia, and Latin America (and in Europe in the past), even in “normal years,” most households experience shortfalls in production for their own consumption. Furthermore, some staples, like potatoes (which simply cannot be kept for as long as a year—or until the next harvest), rot and become inedible. Hence the historical attractiveness of cereal crops, because they can be stored for long periods. However, most people do have a range of salable assets (e.g., furniture, cooking utensils, jewelry, farm implements, livestock, and land) that can be converted to food as necessary.
Another strategy to mitigate the potential impact of famine is that of having a large number of children, thus improving security by increasing possible future family income. This strategy is an important one in places like Bangladesh, where children are considered to be a less risky investment than land (Cain 1978).
Once famine has begun, precautionary mechanisms developed in advance are put into practice; other strategies, which cannot be prepared beforehand, also come into play as the famine unfolds. When there is a potential food shortage and possible famine, the period during which stress develops can be long, allowing for a succession of strategies. A review of a number of major studies of coping mechanisms in the face of famine (Watts 1983b; Corbett 1988; Rahmato 1988; Waal 1989; Agarwal 1990; Brown 1991; O’Brien and Gruenbaum 1991) clearly identifies a sequence of activities. These include religious rituals and ceremonies, identification of scapegoats, reduction of the amount of food consumed and a longer spacing out of meals, substitution of lower-quality and wild foods, and calling for resources from others (especially family and kin), along with generating household income by wage labor, petty commodity production, and the sale of easily disposable items (as long as such sales do not undermine future productive capacity). But as the food crisis deepens, loans from moneylenders and the sale of important items, such as draft animals, agricultural implements, and livestock, become common. Finally, if all preceding strategies have failed to maintain minimum food levels, migration often ensues. But let us now examine some of these coping mechanisms in more detail.
The cultural context of famine has everywhere been reflected in religion, and one of the first responses to famine in any society has likely been an intensification of ritual. Prayers offered up in churches, mosques, and temples are supplemented by special rituals and ceremonies in streets, fields, and public places. Deities, saints, even plants and animals, are invoked.
When rituals fail to bring relief, more divisive or desperate responses can follow. Sometimes, scapegoats are sought out. Other times, the physical and spiritual anguish brought on by famine and pestilence has bred religious fanaticism. In some instances, the persistence of famine has caused doubts about the gods of the established pantheon. For example, in the famines of the late 1870s in India and China, Western missionaries won converts by pointing to the apparent failure of local deities to protect worshipers from want. In some societies, however, the people place great faith in divine will and believe it blasphemous to question their gods’ purposes and intentions.
As a famine marches remorselessly onward, societies respond more materially. For example, the planting of a crop might be delayed, crop varieties with shorter growing seasons or lower water requirements might be planted, parts of farms might be abandoned and effort focused on better locations, or seed grain might be consumed rather than sown. While this is happening, households might reduce the amount of food consumed at each meal and space meals out over longer and longer intervals (just as they do during the “hungry gap,” a period of seasonal food shortage that is a part of normal life; during this time, people know that they will lose some weight and then recover) (see Garine 1991).
In order to eke out food supplies, adulteration of staples often occurs. As a crisis deepens further, wild, “famine” foods replace staples. Different items fall into this category in different parts of the world. In Niger, for example, the pith of palm trees and lily roots are used, and in nineteenth-century Ireland, it was nettles, berries, fungi, seaweed, frogs, and rats. In China, people have eaten grass, bark, and even earth to quell hunger. As “famine” foods were often found on communal lands, continuing access to such lands today can be especially significant in densely populated regions (Blaikie, Harriss, and Pain 1985; Agarwal 1990; Chambers, Saxena, and Shah 1990).
At this stage, people usually sell the few assets they have to buy food, especially once food resources available from family and kin have been exhausted, along with other sources of family income such as wage labor, petty commodity production, and artisanal work. Houses are stripped of their furniture, doors, and window frames. Women sell cooking utensils and jewelry, and finally, farm animals and implements are sold, thus jeopardizing prospects of agricultural recovery after the famine ends. People may try to borrow money, but often moneylenders and even banks are loath to give credit in years when there is no harvest to lay claim to. As a last resort, land might be sold, which (along with migration) is one of the unmistakable signs of acute and deepening crisis. In Bengal in 1943, for example, over 250,000 households sold all their land, and 600,000 more sold part of their holdings. Considering the importance of land, such sales are a sign of desperation. As in the past, losing land as a result of famine is still one way in which property holders sink into the residual class of landless laborers. The sale brings short-term relief from starvation but has the net effect of increasing vulnerability.
While all of this is happening, those in the worst-hit portion of the population frequently begin to contest their deprivation. Laborers and tenants petition landlords, patrons, and governments, demanding to be fed. In the past, and in some places today, a lack of response to such pleas for relief has led to the looting of grain stores, market stalls, warehouses, carts, trucks, and barges. It is at such moments of mounting tension, anger, and fear that the idea of the poor having a right to food is most forcefully evinced (Thompson 1971). But food riots in most places and times have not lasted indefinitely; rather, they have died away once provision was made for the basic needs of the hungry or, more often, once the situation deteriorated further until no food was to be had by any means.
One common alternative has been recourse to what officialdom and the propertied elites have seen as “crime”; like food riots and looting, this measure has been a characteristic and almost universal way of coping with famine. Famine crime has assumed many forms, ranging from an intensification of normal banditry, sheep stealing, and petty theft, to murder. Usually, as society fractures under famine’s pressure, crimes against property and persons soar.
When all attempted strategies have failed to maintain minimum food levels, migration occurs. Although this can be considered a demographic response to famine, it is also an important coping strategy. Much famine migration has been short-term and over relatively short distances, and once conditions have improved, people return to their homes and farms. This was true in Europe in the past (Sorokin 1942), and it certainly has been the case in a variety of other places in the world. During the Indian famines of the nineteenth century, there were many such movements to unaffected areas, and to the cities, in search of relief. In Brazil, the northeast has witnessed temporary flights to the towns in 1878, in 1915, from 1930 to 1932, in 1942, and in 1958. The Sahelian crisis from 1970 to 1974 also produced large-scale population movement (Caldwell 1975), much of which focused on refugee camps. There was also much short-distance local migration, some of it to the cities.
But famine-induced migration also has a permanent dimension, which is one of the more enduring demographic consequences of famine. It can be documented from preindustrial Europe, where this type of “forced” migration resulted in the colonization of new agricultural areas. Perhaps the most spectacular famine-induced migration out of any area in Europe was associated with the Irish famine from 1846 to 1851, when about 1 million people migrated to the United States and England. Between 1852 and 1916, another nearly 5 million Irish left, three-quarters of them bound for the United States. During the Sahelian crisis, much of the famine-induced urbanization, especially of pastoralists, was permanent (Colvin 1981). The sequence of famines in the Brazilian northeast has produced a permanent migration of peasants to the tropical rain forests of Amazonia and to São Paulo and Rio de Janeiro. In the nineteenth century, famine was one of the impelling forces behind the Indian diaspora to Natal, Mauritius, Malaya, Fiji, Guyana, and nearby Sri Lanka (then Ceylon). The exodus from southern India to the tea and coffee plantations of Sri Lanka reached its peak during the 1870s, a time when a high level of labor demand was matched by famine in Madras.
Women and Famine
The burden of famine has fallen, and in many developing-world societies continues to fall, with exceptional severity on women. One reason for this is that, in many parts of the world, women traditionally either have been the main agricultural producers or have constituted a substantial part of the agrarian workforce. Colonial regimes often had no practical interest in developing opportunities for women. Education and new employment opportunities were directed toward men, who took up work in the mines, on the plantations, and in the cities, often far from their home villages, weakening their commitment to subsistence labor, which, more often than not, was left to the women. Thus, the onset of a famine hits women directly. Their food production dwindles, and when field laborers are dismissed or left without employment, they lose cash and in-kind income as well.
The burden of famine also has fallen heavily on women because of their customarily low status in patriarchal societies. In many societies there is a cultural expectation that women will sacrifice their food and, ultimately, their lives to enable their husbands and sons to survive. Women normally eat after the men, and when food is in short supply, female children tend to be neglected and resources concentrated on male children. There is a great deal of historical and contemporary evidence to show that part of the burden of hunger and suffering has been transferred to women through neglect, starvation, abandonment, and sale into prostitution. Women, in short, have been victimized in the interests of male survival.
Famines, therefore, impose enormous physical and emotional suffering upon women. Women have often killed their children; marital relationships can be strained to the point of divorce and abandonment; and hunger can drive women into prostitution and slavery. They have been sold by landowners, moneylenders, and other males with authority over them, for money and for food. Further, in many parts of the world, even today, one of the commonest responses to famine has been the sale of children, especially girls. Thus, the devaluing of life that occurs in a famine has often further favored male power and ascendancy.
Conflicting Ideas about Famine Causation
An enormous literature has grown up to explain why famines occur. Once, “acts of God” and “freaks of nature” were seen as self-sufficient explanations for why people hungered and died. Warfare, blockades, and deliberate hoarding of grain have also been commonly used to explain why famine happened. Today, writers on the causes of famine are more disposed to see these as only precipitating or contributory factors, and, increasingly, famine has come to be regarded as a complex phenomenon, more a symptom than a cause.
One of the main sources of confusion about the subject arises from the multiple causes of famines and their great variety in space and time. Some of the literature makes a distinction between “general and predetermining factors,” or the time–geographic dimensions of famine (long-term, intermediate, and immediate), and the trigger mechanism of the actual famine (Currey 1979, 1980; Murton 1980). W. I. Torry (1986) uses similarly distinct “ultimate” and “proximate” causes of famine, and P. M. Blaikie and colleagues (1994) employ the terms “root causes” and “underlying pressures,” which create “unsafe conditions.” It is also important to acknowledge that if there are many combinations of factors and mechanisms that bring about famine, then each famine is unique. Indeed, the task of building theories of famines is particularly difficult because of the complexity of each specific case. Any theory will involve an understanding not only of the existing systems of production but also of the distribution of food in terms of access to land and inputs, as well as the operation of the market, the determination of prices, and the behavior of traders in food staples (Cannon 1991). Government policies with regard to food production and distribution (and famine relief) may also play a profound role. Then there is always a series of contextual events peculiar to each famine, a “sequence of events” (Alamgir 1981), or in Currey’s (1984) parlance, a “concatenation.”
Two main (and largely competing) types of famine explanation, based on differing sets of causal mechanisms, can be identified. Many commentators have assumed that a famine arises out of an actual shortfall in the means of subsistence, or as it is commonly labeled, a food availability decline (often abbreviated to FAD). Either some natural disaster occurs, causing a crop failure to reduce the aggregate amount of food available, or population in the long term outstrips the quantity of food available. The other mechanism involves the decline in some people’s entitlements to food (abbreviated to food entitlement decline, or FED). According to this explanation, first articulated by Sen (1981), famine is a result of the ways in which access to food is reduced because of the operation of social and political processes that deny or lessen “entitlement” to food.
These processes may involve a deterioration in the ability of people to grow their own food or to buy it through various forms of exchange. To this context should be added the impact of various hazards that may not reduce the overall amount of food but instead affect the success of different groups of people in fulfilling their entitlements. This type of explanation focuses much more firmly on relations of power within a society that may account for the distribution of assets and income (unequal in “normal” times) that become a matter of life and death in times of famine. This model tends to reduce the causal importance of natural events, which, although they may be linked to a decline in the aggregate supply of food (as with the impact of drought, flood, or pest attack), are analyzed in the context of the political economy of root causes and predetermining factors. In other words, people are made vulnerable to the impact of a natural hazard by their place in the economic, political, and social processes that affect their exchange entitlements.
Food Availability Decline (FAD)
The school of thought that attributes famine to an aggregate decline in the supply of food is clearly linked to explanations of famine in terms of natural events. In particular, drought has been identified as a major immediate cause of crop failure and, therefore, of a decline in food supply. It is difficult to find pure supply-side explanations of famine in the recent literature, but between the decline in aggregate food supplies and its immediate causes (such as drought) on one side, and the detailed mechanisms that actually precipitate famines on the other, the emphasis is usually on the former (Blaikie et al. 1994).
In addition to analyses of recent famines, the historical literature contains considerable discussion of the long-term shifts of climate that have appeared to gradually undermine a society’s apparently secure subsistence base. Most historians have been wary of embracing the type of “climatic determinism” posited by Ellsworth Huntington (1915, 1919) and have tended to see climatic variations as too short-term and peripheral to provoke major subsistence crises; others, however, have acknowledged that a substantial number of historical famines in many parts of the world were preceded by partial or complete failure of the rains (Arnold 1988).
Unfortunately, much of the writing also tends to subtly argue that if famines could be attributed to natural causes, they could be explained in terms of exceptional events and not by continuing and normal social processes (Hewitt 1983b). K. Hewitt (1983a) even argues that a previous generation of academics and practitioners virtually ostracized those who sought explanations that went deeper than the impact of the natural hazard. Given the dominance of science and technology in the modern era, the publication of any analysis of causes that failed to suggest that hazards could be modified and responded to by technology resulted in the exile of its authors from the mainstream of social explanation.
Another important element in the food availability decline approach has been the “overpopulation” thesis. Deeply rooted in Western thought, the thesis is most commonly identified with the writings of Thomas Malthus (Turner 1986), who believed that the food supply was relatively inelastic, increasing at best by arithmetical progression, whereas population rose in geometrical leaps and bounds. In periodically sweeping away the excess population, famines maintained a rough equilibrium between population and subsistence. However, this thesis was to prove untenable for Britain (and Europe generally), where a transformation of industry and commerce improved agricultural productivity and transportation and led to the increased importation of foodstuffs. Not only did rapid population growth fail to trigger famine, but standards of living rose. Increasing prosperity and material security, however, were also accompanied by a growing practice of birth control through various means.
Although Malthus’s thesis seemed to square with nineteenth- and early-twentieth-century reality only in places like India and China, since the 1950s, famines and food crises in Africa and Asia have led to a strong revival of interest in his ideas. Population in the developing world has risen sharply, largely as the result of improved medical services and sanitation. Food production is frequently, but not altogether accurately, assumed to have not kept up, and with too little food to go around and too many mouths to feed, famine has been predicted on a global scale by numerous “prophets of doom,” as, for example, Paul Ehrlich (1968) and W. and P. Paddock (1967). But very often these dire predictions are based on simple measurements of global food supply stocks versus rising population rates, without taking into account high levels of wasteful food consumption in the West, the nature of foodstuffs produced, and the unequal nature of food distribution (between nations, and within states, classes, and even families).
Food Entitlement Decline (FED)
In Poverty and Famines (1981), the Indian economist Amartya Sen challenges the view that famines are caused by food availability decline. As already noted, he views famine as the result of the many and complex ways in which people’s access to food is reduced because of the operation of social and political processes that deny or lessen their “entitlement” to food. Such an approach distinguishes between the aggregate availability or supply of food and an individual’s access to, or ownership of, food. People obtain food through five different types of “entitlement relationships” in private-ownership market economies (Sen 1981; Drèze and Sen 1989, 1990): (1) There is production-based entitlement, which is the right to own food that one produces with one’s own or hired resources; (2) there is trade-based entitlement, which describes the rights associated with ownership when they are transferred through commodity exchange; (3) there is own-labor entitlement, which combines trade-based and production-based entitlement when one sells one’s own labor power; (4) there is inheritance and transfer entitlement, which is the right to own what is given by others (gifts) and what is transferred by the state, such as pensions; (5) there are extended entitlements, which exist outside legal rights (for example, ownership) and are based on legitimacy and expectations of access to resources.
Such entitlements are not fixed and equal but vary according to an individual’s position within a wider system of production, exchange, control, and distribution. Entitlements are either owned by a person or can be exchanged by that person for other commodities. People are vulnerable to starvation if their endowment does not contain adequate food or resources to produce food and their capacity to exchange labor or other goods and services cannot be translated into enough food. This situation can occur without a decline in aggregate food supply and without any disruption or malfunction of the market.
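Sen develops this condition formally; the following is a simplified restatement in our own notation, offered only as a sketch. Let $\mathbf{x}$ be a person’s endowment vector and $E(\mathbf{x})$ the entitlement set, that is, the set of all commodity bundles the person can command through production, trade, own labor, and transfer. If $f(\mathbf{y})$ denotes the food content of a bundle $\mathbf{y}$ and $f_{\min}$ the minimum food requirement, then the person faces starvation precisely when

\[
f(\mathbf{y}) < f_{\min} \quad \text{for every } \mathbf{y} \in E(\mathbf{x}),
\]

that is, when no attainable bundle contains enough food. A harvest failure is neither necessary nor sufficient for this condition: $E(\mathbf{x})$ can collapse through falling wages or rising prices while the aggregate food supply is unchanged.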
The food entitlements decline approach just discussed recognizes the relations of power within a society that may account for the distribution of assets and income and that become a matter of life and death in times of famine. It also acknowledges the importance of changes in purchasing power. Further, it disaggregates regional food production and availability and follows through how food is distributed to individuals (it permits analysis of intrahousehold food allocation and explains why the rich never die in famines and why some classes benefit from them). This approach involves the regional, national, and world economy in the analysis and draws attention to the possible prevention of famines by food imports.
There have, of course, been criticisms of the food entitlement decline approach. First, there is a scale and boundary problem: If the analysis is stretched to include a big enough area, there is always enough food to avert a famine. Second, some famines clearly have had their origins in food availability decline, and although it may be incorrect to identify this as an ultimate or even most important cause, it is inescapable that a fall in the amount of locally produced food (because of war, drought, or longer-term environmental decline) hinders the ability of people to find alternative sources of food. Third, initially, entitlements, as well as resources (endowments), were conceived of as static and given. But recent research (Watts 1991) has pointed out that they are fought over and constitute the terrain of struggle within societies in which group interests (defined by class, caste, gender, age, ethnicity) are in contradiction.
In conclusion, the entitlements approach to the analysis of famine has released famine study from theoretical constraints. However, its pursuit of a single theory of the mechanisms of famine has diverted attention from multiple causality and from the possibility that famines at different times in the same place may be caused by different mixes of factors. This concern has led to the further development of the concept of famine vulnerability, to which the food entitlement decline approach alludes but which it fails to pursue in depth.
Famine Vulnerability Models
Two models have recently been proposed that attempt to take into account the multiplicity of factors that make people vulnerable to famine. The first, developed by M. Watts and H. G. Bohle (1993a, 1993b), argues that the locally and historically specific configuration of poverty and hunger defines what they call a “space of vulnerability.” These researchers set out to provide a theoretical means by which this space can be “mapped” with respect to its social, political, economic, and structural–historical coordinates. They endeavor to radically extend the concept of entitlements, not simply in a social or class sense but in a political and structural sense, to take account of (1) the particular distribution of entitlements and how they are reproduced in specific circumstances; (2) the larger arena of rights in which entitlements are defined, fought over, contested, and won and lost (that is, empowerment or enfranchisement); and (3) the structural properties (what they call “crisis proneness”) of the political economy that precipitate entitlement crises.
Watts and Bohle review the extensive literature relating to “entitlement and capability,” “empowerment and enfranchisement,” and “class and crisis.” They emphasize that these processes can be grasped only relationally (as congeries of social relations), and they develop a tripartite structure that defines the space of vulnerability through the intersection of the three causal powers: command over food (entitlement); state–civil society relations seen in political and institutional terms (enfranchisement/empowerment); and the structural–historical form of class relations within a specific political economy (surplus appropriation/crisis proneness).
The intersection of these causal powers produces three parallel analytical concepts: economic capability, property relations, and class power. Economic capability emerges from particular configurations of entitlement and empowerment, property relations from the intersection of entitlement and political economy, and class power from specific forms of political economy and empowerment. The three causal powers or processes (entitlement, empowerment, political economy) are conceived of as accounting for mass poverty associated with specific long-term (structural) changes. Famine results from violent short-term changes in these same mechanisms.
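Because the scheme is strictly combinatorial, with each analytical concept arising from the intersection of exactly two causal powers, it can be restated as a small lookup table. The sketch below is only a mnemonic for Watts and Bohle’s triangle; the identifiers are our own, though the pairings follow their published scheme.

```python
# Watts and Bohle's tripartite structure as a lookup table (illustrative
# naming; each concept sits at the intersection of two causal powers).
ANALYTICAL_CONCEPTS = {
    frozenset({"entitlement", "empowerment"}): "economic capability",
    frozenset({"entitlement", "political economy"}): "property relations",
    frozenset({"empowerment", "political economy"}): "class power",
}

def concept_for(power_a, power_b):
    """Return the analytical concept defined by two causal powers."""
    return ANALYTICAL_CONCEPTS[frozenset({power_a, power_b})]

print(concept_for("entitlement", "political economy"))   # property relations
```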
The space of vulnerability also has an internal structure in which it is possible to locate vulnerable groups and regions. Because the concept of vulnerability is relational, the space and shape of vulnerability are given by its social relations. For example, if famine is described as a food entitlement problem, vulnerability is located in the realm of economic, and especially market, relations. If, conversely, famine resides in the powerlessness of individuals, classes, and groups to claim and enforce food entitlements, then vulnerability is determined by the power and institutional relations within civil society. Finally, if famine is driven by processes of exploitation and surplus appropriation, it accordingly occupies a location within the space of vulnerability that lies in the realm of class relations.
It is also possible to place both vulnerable groups (social) and vulnerable regions (spatial) within the space of vulnerability. In the former case, vulnerable individuals, groups, and classes can be located according to the causal processes that present possibilities and constraints in the sphere of subsistence. Individuals and groups vulnerable to market perturbations and unable to cope with food entitlement decline because they are resource- and/or asset-poor may be located in the “economic space” of vulnerability. If the likelihood of deprivation is rooted in politics that can be inscribed in gender (patriarchal politics), work (production politics), and the public sphere (state politics)—all of which may render individuals and groups powerless—their location in the “political space” of vulnerability is determined by power and institutional relations. Finally, if deprivation arises from processes of surplus extraction and appropriation, individuals and groups are located in the “structural–historical” space of vulnerability given by specific configurations of class relations. All these spaces obviously exist simultaneously. Determining their precise weighting becomes important in assessing the ways in which famine differs among Somalia, Kampuchea, and Bangladesh.
This social map of vulnerability has a geographic or spatial counterpart. Vulnerable regions can be located in relation to the tripartite structure of causal processes. Economically marginal regions that regularly or sporadically experience fluctuations in productivity and prices are most liable to food entitlement crises (they occupy the “economic space” of vulnerability). Peripheral regions experience vulnerability through relations of dependency on a regional core that drains surpluses and resources away (they occupy the “political space” of vulnerability). Finally, regions shaped by endemic crises and conflicts (both economic and ecological) due to processes of commercialization, proletarianization, and marginalization are logically situated in the “structural–historical” space produced by class relations.
In summary, this modeling of the “spaces of vulnerability” integrates many of the factors identified by previous research on famine into a more logical and causal structure. It provides a way to integrate the intersections of structures, tendencies, and conjunctures as they impinge on the famine process. The specific content of the social space of vulnerability, the actual concatenation of events that might trigger famine, and the specific structural forces at work, although deriving from the abstract causal structure, will naturally be time- and place-specific.
Blaikie and colleagues (1994) also have produced a comprehensive dynamic framework, which they call an “access model.” They focus on the way unsafe conditions arise in relation to the economic and political processes that allocate assets, income, and other resources in a society. Natural events are integrated into the model through a focus on how resources are allocated by social processes. “Access” involves the ability of an individual, family, group, class, or community to use resources that are directly required to secure a livelihood.
Access to those resources is always based on social and economic relations, usually including the social relations of production, gender, ethnicity, status, and age. Rights and obligations obviously are not equally distributed among all people, and it is argued that less access to resources leads to increased vulnerability. The model incorporates the notion of “trigger events”: war, as in the case of Ethiopia (1984 and 1990), Angola, Chad, Sudan, and Mozambique (1984); or natural hazards, as in the case of the Sahel (1970–6), Sudan (1985), and Ethiopia (1973). The model analyzes the structures and processes of famine in relation to making a living in normal times. It is an iterative approach in which “external” shocks and triggers have their impact upon the structures and processes of political economy.
Famine Policy
Although an enormous literature has grown up to explain why famines occur and what to do about them, it is clear today that there is a disjuncture between explanation and policy. Explanation is largely a product of the academic world, whereas policies for dealing with famine are a product of famine relief agencies, governmental advisers, and governments. The lack of affinity between the two types of literature is surprising. In an ideal world there would be a progressive and interacting relationship between theories of famine avoidance and relief on the one hand and policy on the other, but instead they remain far apart. The two sides speak almost different languages and are pervaded by different constraints and concerns.
Policy in the Past
Certainly, the fortunes of the state, whether in Europe, Africa, or Asia, have long been bound up with the containment or prevention of famine and, more generally, with provisioning the populace. Protecting its subjects from starvation and extreme want has for centuries been one of the primary functions of government and one of the principal expectations of the public. If we look for pre-twentieth-century evidence of the state as an agency of famine control and as a provider of famine relief, we can find it in places as far apart as China and Europe. In China, state paternalism, fostered by the ideology of the Confucian state, led to the protection of peasants from the worst effects of natural disasters, as well as measures for famine prevention and relief. But this system began to break down in the nineteenth century, and the situation remained chaotic until after 1949. Famine control was one of the first priorities of the new Communist state and, seemingly, one of its great successes, until the famine from 1958 to 1961 cruelly exposed the limitations of China’s agrarian revolution.
In medieval times, European states did what little they could to avert the threat of famine. As Europe crossed the threshold from medieval to modern times, problems of provisioning (especially of the growing cities) increased, as did the problems of maintaining order in food-shortage situations. One response in England in the late sixteenth century was the issuing of orders to counter the problem of vagrancy and destitution, which became the basis for the English Poor Law. In France, the provisioning of Paris received special attention from the government, and alarm over basic subsistence needs helped to make the revolution of 1789.
In Europe, however, as agriculture became more productive and increasingly market-oriented, governments sought to free themselves from the obligation to feed people and regulate markets. In France, attempts to make subsistence a matter of individual rather than state concern were short-lived, but in England, ideas of “free trade” gained increasing momentum in the late eighteenth century and the early years of the nineteenth. In 1814 and 1815 the Corn Laws, the last bastion of old protectionists and paternalists, were eroded, and in 1846, under Sir Robert Peel, they were finally swept away. By this time in Britain, policy-makers were imbued with the belief that market forces should not be tampered with and that self-reliance must not be weakened or local effort superseded by the activities of the government. This commitment to laissez-faire, and the notion that the state should not intervene in famines, was immediately tested in Ireland, where initially the state’s role was seen as being strictly confined to providing employment on public works. Only in 1847 was emergency relief belatedly instituted.
The Indian Famine Codes
In India, the East India Company administration also strongly supported the new orthodoxy. Even during the famines and shortages of the early nineteenth century, the presidency and provincial governments adhered firmly to the principles of noninterference in the operation of the market and trade. But it became evident, particularly during the heavy mortality of the 1860s and 1870s, that a policy of laissez-faire alone would not meet the extreme recurrent crises of famine. A number of reports demonstrated that the government had lost enormous amounts of revenue by not investing in irrigation, other preventive works, and railways to stimulate production and to move food to food-shortage areas. Although the general principle of nonintervention in the grain trade remained inviolate until World War II, by the late 1870s there were moves, however hesitant, toward greater state responsibility for the Indian economy and for the welfare of the Indian people.
This change in policy resulted in the first coherently written explanation of famine linked to policy recommendations. From the 1860s extensive reports were written on famine, and in 1878 the First Famine Commission was appointed (Brennan 1984). Its reports of 1880 led to the drafting of the Famine Codes of the mid-1880s by each of the provinces of India. The reports also contained much speculation about the causes of famine and are especially instructive on the relation between theories of famine causation and policies of prevention, relief, and rehabilitation.
Reasonably effective policies were formulated from these efforts. The Famine Codes reveal a professed dislike of interference in the operation of the market through price controls, and contain the belief that free trade is the best guarantee of satisfying effective demand. There is also an aversion to charity and free handouts and a strong ethic of “self-help.” Thus, the backbone of famine relief was massive public works generating guaranteed employment, plus free assistance for those unable to work. That the latter breached ideology was accepted as necessary to prevent people from dying. Tests were established to ensure that only the deserving received relief. There were detailed instructions in the Codes about early warning signs of impending famine; the duties of the police, medical officers, and other local officials; wages and rations; famine relief works; and many other practical matters. The Codes were used by the British until 1947. Their effectiveness is still the subject of heated debate, with an overstated radical and nationalist critique at one extreme and an imperialist apologist defense at the other, but since 1947, the Codes have continued to form the basis of famine prevention and relief. Thus, the Maharashtra drought of 1970 to 1973 was effectively prevented from triggering a famine by an employment guarantee scheme similar to that found in the earlier Codes.
Contemporary Policy Directions
Contemporary policymaking recognizes that for each link in any explanation of a famine there is a range of policy measures. A key issue is matching policy action to the level of the problem at which it can be effectively altered. Although an integrated explanation of famine may be intellectually fulfilling, a policy has to be located at a level at which it can make a significant impact: There must be short-run effectiveness, as human lives often depend on it. However, a focus on the short term can often lead to a loss of any sense of the real causes of vulnerability. This is the contradiction that faces the makers of famine policy: They are restricted by the temporal and spatial scales at which they must work and to which they must fit their policies.
Food Security
At the international and national level, achieving aggregate food security has been an important policy goal. In particular, national food self-reliance is frequently seen as a defense against famine. However, with global increases in flows of goods and information, localized and even national production failures have become increasingly remediable by imports of food, whether as trade or aid. At the global level, food supplies over the last couple of decades have been sufficient to provide an adequate (although near-vegetarian) diet to all, if distributed equitably. However, global food supplies are still not distributed according to need, and shortages at regional, national, and even subcontinental levels have continued.
During the 1980s and early 1990s, some shortages were caused by the continuing difficulty that international humanitarian relief efforts have had in gaining access to affected populations caught up in civil wars. This has led to discussion in a number of countries of when and how national sovereignty should yield to humanitarian care for people in disastrous situations. This problem is more than a relief policy issue, and it raises geopolitical and, above all, ethical concerns about the human right to food, shelter, and health care (de Waal 1991).
The theoretical advances afforded by Sen’s entitlement theory have led to some reevaluation of the importance of food security at the national and international levels. The insights that inform the theory also suggest that a disaggregated approach to food security is required, which has led to a growing policy concern for vulnerable groups at the local level and to the promotion of the idea that entitlements should be strengthened. However, at the level of large regions and nations, concerns over food availability remain important, especially where agreements among countries to exchange grain are involved, as in the case of southern Africa.
Any discussion of food security must also consider the political and economic significance of food aid since World War II. The United States, as a major food surplus state, has used food aid as an important political tool. Food assistance has been withheld from Marxist-run governments but assured to reliable and dependent allies. Yet, in addition to these political objectives, continuing food aid abroad has helped to save grain farmers from bankruptcy at home. Overseas aid has been vital to the U.S. economy as a safety valve for its domestic overproduction. In 1961 the Kennedy administration was faced with the greatest American food surplus in history, and it was this domestic situation, rather than humanitarian concern, that prompted massive grain shipments overseas in the following years, notably in 1966, when one-fifth of the entire U.S. wheat crop was sent to India to relieve the effects of famine in Bihar.
Food aid has accounted for 28 percent of U.S. overseas development assistance since 1946. The Agricultural Trade Development and Assistance Act of 1954 (Public Law 480) was debated and approved by Congress not for humanitarian reasons or for development ends, but in order to promote trade and dispose of existing surpluses. Food aid has served to open new markets for American farm products, especially in Africa and Asia, often to the detriment of local agriculture, and the threat of food withdrawal or denial has been used to put pressure on countries to accept other forms of American economic intervention and political control.
Relief and Development
The issue of food aid raises another very important question: Should famine prevention efforts be short-term, consisting basically of food relief, or long-term and developmental in scope? In many ways this is something of an academic question, because so long as there are food shortages and famines in the contemporary world, and donors willing to provide food aid and cash, short-term relief assistance will continue to be important.
Today, in addition to relief provided by national governments in times of crisis, there are a number of United Nations agencies and numerous nongovernmental organizations involved in famine relief (see Busetto 1993; Cullinan 1993; Katona-Apte 1993; Longford 1993; Singer 1993). The earliest United Nations agency established to provide humanitarian relief was the United Nations International Children’s Emergency Fund (UNICEF), set up in 1946 to help children in the aftermath of World War II. Various other agencies, such as the United Nations High Commissioner for Refugees, established in 1951, and the United Nations Relief and Works Agency, established in 1950, have been active in the provision of relief, but since 1963 the World Food Programme has been the primary international agency involved in food aid.
Over the past three decades, the World Food Programme has invested approximately $13 billion and has provided more than 40 million tons of food aid to combat world hunger and to promote economic and social development. Food aid has been used to assist more than 1,600 development projects and 1,200 emergency operations. Within the United Nations system, the World Food Programme is now the largest source of grant assistance to developing countries, the largest source of assistance for poor women, and the largest provider of grant assistance for environmental activities in developing countries. The agency is headquartered in Rome, and although it has employees in 85 countries, it spends less than 6 percent of its budget on administration.
From the outset, the agency aimed to serve as more than a mechanism for surplus disposal or a means of providing charity for the poor. Indeed, by the late 1980s, two-thirds of its resources were being spent on development activities. However, numerous emergencies since that time have channeled 60 percent of its current resources into relief operations, and in 1993, new development commitments received only $250 million, in contrast to $778 million in 1988. At present, the agency has at any one time about 5 million tons of food in transit on 50 chartered ships, but it is hoped that once present emergencies end it will again be able to emphasize development.
Throughout the world, there are also a very large number of nongovernmental organizations, such as Save the Children, Oxfam, the Lutheran World Federation, the Red Cross, Africare, Food for the Hungry, and Catholic Relief Services, involved in relief activities. Although many have other objectives, because they operate at local levels they are well placed to provide famine relief, sometimes parallel to large-scale governmental and international aid and sometimes as a conduit for that aid. Many also endeavor to administer food relief in ways that strengthen local livelihoods in the long run. Basically, these groups are involved in both relief and development, as they realize that relief alone can create dependency. But the great number of such organizations can occasionally create the problem of “swamping” affected areas with official and nongovernmental bodies that frequently do not coordinate their activities and often collectively provide excessive relief assistance but not enough help for long-term mitigation.
Longer-term mitigation involves less direct measures for preventing famine, and the most important policy requirement is the strengthening of rural livelihoods. Of course, these measures can have objectives other than famine prevention and have development goals that are justifiable as ends in themselves. For example, in India, the government has attempted land reform, improved agricultural production technology, encouraged better processing and storage, operated a fairly effective system of public distribution of food, and developed a well-tried decentralized emergency response mechanism. All of this means that a reasonably effective famine prevention strategy has emerged, although malnutrition is still widespread. Nevertheless, as we have seen, it is difficult anywhere to demonstrate the level of impact of general rural development policies in famine prevention because the causes of famine are conjunctional and always involve a complexity of factors.
Early Warning Systems
The Indian Famine Codes established the idea of warning indicators that could be used to predict the onset of famine. For many decades they were the only “early warning system” in operation, but since the Sahelian famine of the early 1970s, a number of famine early warning systems have been established (Blaikie et al. 1994). There are many different approaches to such systems. Some involve the use of sophisticated technology, such as the Food and Agriculture Organization’s Global Information and Early Warning System, developed in the 1970s, which predicts crop yields by establishing biomass from satellite imagery. Another comparable approach is the United States–funded Famine Early Warning System. Approaches like these are all based on the assumption that famines are primarily caused by natural events and that technological solutions will be important mitigating elements. However, other early warning systems are much less complex and involve the collection and collation of local-level information on food stocks, prices, and such things as the sale of household assets.
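The simpler, field-level style of system can be pictured with a short sketch. The fragment below is a hypothetical illustration of threshold-based monitoring, not the design of any actual system such as FEWS or GIEWS; the indicators, field names, and thresholds are our own assumptions.

```python
# Hypothetical local-indicator early warning check: flag districts where
# several stress signals deviate sharply from baseline at the same time.
def famine_warning(observations, baseline):
    alerts = []
    for district, obs in observations.items():
        base = baseline[district]
        signals = [
            obs["grain_price"] > 1.5 * base["grain_price"],   # price spike
            obs["food_stocks"] < 0.5 * base["food_stocks"],   # stocks drawn down
            obs["asset_sales"] > 2.0 * base["asset_sales"],   # distress sales
        ]
        if sum(signals) >= 2:          # demand corroborating indicators
            alerts.append(district)
    return alerts

baseline = {"north": {"grain_price": 10, "food_stocks": 100, "asset_sales": 5}}
current = {"north": {"grain_price": 18, "food_stocks": 40, "asset_sales": 14}}
print(famine_warning(current, baseline))   # ['north']
```

Requiring two or more corroborating signals reflects a point made throughout this chapter: no single indicator, whether price, stock level, or asset sales, is by itself a reliable predictor of famine.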
Summary and Conclusion
Although there is considerable uncertainty regarding the nature and extent of the world’s food problems, hunger is a contemporary reality, even within rich countries. Everyone must eat, and because of this simplest of all imperatives, famine is a subject of urgent contemporary concern. It casts a harsh but clear light on the nature and problems of the societies it afflicts and the world in which it exists. However, “famine” is a rather imprecise term, normally used to describe events associated with natural hazards such as drought, floods, and pest infestations, and with political events such as war, malicious or inappropriate state policies, and ethnic prejudice. Famine is often (though not always) preceded by prolonged malnutrition and hunger, and part of the difficulty in defining it arises from the need to distinguish between “ordinary” hunger and the phenomenon of famine. The latter term is generally reserved for situations in which starvation, precipitated by a natural or political event, affects large numbers of people in a distinct area, causing more deaths from starvation and illness than would normally be expected. By contrast, ordinary hunger is conventionally explained as a result of the imperfections of economic systems. Clearly, then, famine remains in danger of being explained as something caused by an exceptional political or natural factor, one that is abnormal and outside the responsibility of the usual operation of economic and political systems.
Fortunately, of late it has been recognized that natural hazards or political events are no more than immediate “triggers” or proximate causes. The “ultimate causes” must be sought out and understood in the normal economic and political sphere. Basically, a proper understanding of famine and all its dimensions requires an approach that does not merely deal with shortages of food but focuses instead on the inability of people to consume enough food. Thus, instead of treating famine as a shortage of food, it is preferable to analyze the different ways in which people are prevented (by natural or human events) from getting enough to eat. Using this alternative approach, it can be asked why certain groups of people are predisposed to famine before the impact of any trigger event, natural or political. Such a predisposition has been termed vulnerability, and the ultimate causes of famine can then be analyzed as those that create the conditions in which certain trigger events lead to the collapse in people’s ability to acquire adequate nutrition.
This issue is not merely theoretical because the policy implications of understanding famine as inadequate consumption rather than inadequate availability are likely to be very different. When famine occurs even though food is available, or when famine affects some groups but not others, then food aid may be irrelevant. This issue is now at the core of the interpretation of famine and associated policy directions. It is one not of food security but of entitlement security and its corollary, the reduction of vulnerability among those who are the most vulnerable.