Penelope Nestel. Cambridge World History of Food. Editor: Kenneth F Kiple & Kriemhild Conee Ornelas, Volume 2, Cambridge University Press, 2000.
One hundred and fifty million children in the developing world, or one in three, are seriously malnourished (United Nations Development Program 1990). This includes 38 million children underweight, 13 million wasted, and 42 million stunted. In addition, 42 million children are vitamin A–deficient (West and Sommer 1987), 1 billion people, including children, are at risk of iodine-deficiency disease (Dunn and van der Haar 1990), and 1.3 billion have iron-deficiency anemia (United Nations 1991).
Malnutrition, whether undernutrition per se or a specific micronutrient deficiency, is usually the result of an inadequate intake of food because households do not have sufficient resources. In a review of the income sources of malnourished people in rural areas, J. von Braun and R. Pandya-Lorch (1991) found that among households with a per capita income below $600 (U.S.), there was a close relationship between household food security and the nutritional status of children.
The problems associated with nutritional deprivation are compounded when access to sanitation is limited, because poor sanitation and hygiene result in increased morbidity. This condition is often accompanied by a reduction in food intake at the very time when energy and nutrient requirements are high. About one-third of the population of developing countries has access to sanitation, and just over half has access to safe water, but there are large urban–rural differences. Ease of access to water is 48 percent lower in rural areas than in urban areas, and overall access to sanitation is 77 percent lower in rural areas (United Nations Development Program 1990).
Although urban populations may appear better off than rural ones, the plight of the urban poor is getting worse. The average annual growth rate of urban populations in developing countries is 3.7 percent, whereas that of rural ones is 0.9 percent (Hussain and Lunven 1987). However, in urban slums and squatter settlements, the population of which represents one-quarter of the world’s “absolute poor,” the growth rate may exceed 7 percent per annum. This rapidly expanding population comprises people who often lack the resources to obtain an adequate diet for either themselves or their children.
There are also gender, social, and cultural inequalities in addition to urban–rural disparities. In the rural areas of Africa and the urban slums of Latin America, women commonly head the household. In many cases, these women are compelled to seek employment, but in general their remuneration is low because they tend to work in subsistence agriculture or in the informal sector, where women’s wages are lower than those of men. Social and cultural inequalities also often favor males over females, which is manifested in, for example, better health care and nutrition for males and neglect of female education. Irrespective of the environmental and political situation, social, economic, and cultural interactions exist that make improving the nutritional status of the poor a formidable task.
Targeting interventions so that those most nutritionally vulnerable, especially children, can benefit is important not only from an economic perspective but also in terms of the nutritional impact of the intervention program. Thus, although policy makers may be interested in improving the nutritional status of preschoolers, to focus solely on this age group is neither realistic nor practical (Berg 1986; Beaton and Ghassemi 1987). Indeed, in order to improve the nutritional status of infants and young children, it is more cost-effective to use programs that would benefit the entire household. The objective of a food-policy program aimed at the poor should be to improve their nutritional status by improving their access to food. However, unless nutritional considerations are explicitly stated in a food policy, it cannot be considered a nutrition-oriented food policy (Berg 1986).
Food policies can be based on economic and nutrition interventions. Specific economic interventions, which aim to stabilize food prices, may include those to increase agricultural and livestock production, to commercialize agriculture, to hold buffer stocks, and to subsidize consumer food prices. In addition to food policies, there are also other macroeconomic policies that influence the availability and consumption of food. An overvalued foreign exchange rate, for example, may discourage farmers from producing food because imported food can be cheaper. Similarly, food production can decline when industrial development is carried out at the expense of agricultural development.
Nutrition interventions, by contrast, include food fortification, the use of formula foods, supplementary feeding programs, food-for-work and food-for-cash programs, and nutrition education. The boundaries between economic and nutrition interventions are not always distinct. Supplementary feeding and food-for-work programs, for example, can be considered as consumer food-price subsidies or as income transfers.
This chapter reviews the different economic and nutrition interventions that are used as policy tools to increase the energy and nutrient intakes of the poor, thereby increasing their household food security and the nutritional status of their infants and young children.
Agricultural and Livestock Production
Increased agricultural and livestock production are often prerequisites for food security because they generate additional amounts of both food and income. Increasing productivity, however, entails changing or modifying an existing production system to make it more cost-effective. This may be achieved in a number of ways: first, by increasing yields per unit of land area or worker; second, by ensuring the availability of sufficient inputs—fertilizer, pesticides, and so forth; and last, by establishing price relationships that can act as an incentive to both innovation and the mobilization of resources (Mellor undated).
The Green Revolution, through the use of improved technology, was instrumental in expanding world food production during the 1970s—in particular that of rice and wheat in Asia. Similar approaches are needed for the staple food crops grown in other parts of the world, especially in Africa. Specifically, attention needs to be given to root crops as well as to the more drought-tolerant cereals, such as sorghum and millet.
Expanding the area under cultivation and increasing cropping intensity are the means of increasing agricultural production, be it for home consumption or for sale. Outside of Africa, there is little scope to expand the area under cultivation. However, through the wider use of those improved crop varieties that are more resistant to pests and diseases, farm yields could be stabilized. The greater use of early-maturing varieties, which shorten the growing season, would also allow for multiple cropping. Intercropping, too, increases overall yields, stabilizes production, and allows food to be grown throughout a longer season. Management practices such as pest and disease control, the use of crop residues and animal manure as fertilizer, and soil conservation also help to augment yields and at the same time to lower the costs of production.

Livestock productivity can be increased through better disease and parasite control. This not only results in farmers’ getting a better yield of milk and a better price for their dairy products or meat but also improves the work performance of draft animals and thus the asset value of the animals themselves. An increase in agricultural and livestock production can improve the nutritional status of households by increasing household food security, which can in turn reduce the risk of infants and young children becoming undernourished.
There is a great deal of evidence to show that small farmers are more productive per unit of land than are large farmers (Lipton 1985). This is because small farmers depend on family labor to grow high-value or double crops, to improve land in the slack season, to reduce fallow lands in area and duration, and to enhance yields through better cultivation practices. Large farmers tend to use imported machinery as a labor-displacing strategy, and such machinery depends on imported fuel and spare parts (Norse 1985; Longhurst 1987).
The inequality in both land ownership and tenure systems is the primary cause of poverty in the rural areas of Asia, Latin America, and, to a lesser extent, Africa (Norse 1985). Access to land clearly determines access to food and income (Braun and Pandya-Lorch 1991). Landless households that depend on the sale of their labor for survival are nutritionally the most vulnerable group. Sharecroppers and tenant farmers, who give part of their crop or work at peak agricultural times in lieu of rent, and very small farmers are also extremely vulnerable nutritionally.
Availability of Inputs and Services
Unless soils are replenished, the production potential of land slowly declines as soil nutrients are depleted. Inputs, particularly fertilizer and water, are needed in order to maintain production levels. The money needed to finance these inputs, generated through credit, as well as an advisory service in the form of agricultural extension services, must be available. Both inputs and services should be available to women as well as to men because a great number of women are already involved in agriculture and dependence upon women’s earnings increases as employment for men falls (Mencher 1986).
In addition to inputs and services, it is essential that the reliability and efficiency of the marketing infrastructure for both food and nonfood commodities be improved if farm income levels are to be raised. Distortions in policies that adversely affect the availability of agricultural inputs and services to the small farmer will limit any improvement in agricultural production and, ultimately, influence income and the nutritional status of young children.
Commercialization of Agriculture
The integration of small farmers into a market economy entails a shift from subsistence food crop to cash crop production. However, increasing the production of cash crops at the expense of domestic food crops can disrupt traditional economic and social relationships. This, in turn, may have an adverse impact on the income, food consumption, and nutritional status of the poor (Senauer 1990). The evidence to support this theory, however, is mixed, with some studies showing commercialization having a positive effect on nutritional status (Braun and Kennedy 1986; Braun 1989) and others showing either no effect (Kennedy and Cogill 1987) or a negative one (Pinstrup-Anderson 1985; Kennedy and Alderman 1987).
Commercialization of agriculture need not necessarily be detrimental to nutritional status for two reasons. First, even though the promotion of cash crop production may decrease domestic food production, the foreign exchange generated from agricultural exports can more than offset the cost of importing food for the domestic market. Second, nutritional problems are not necessarily the result of a lack of food, but may be caused by a lack of access to sufficient food, which in itself is determined by income levels and food prices. Thus, increasing the income of poor farmers, whether from food or non-food crops, rather than simply increasing production, should be a priority for agricultural production programs.
In general, part of any increase in income is spent on food. Among the poor, the additional proportion spent on food may be quite high. This does not, however, necessarily mean that more energy will be consumed, because families may diversify their diet rather than increase the absolute quantity of the foods they already eat (Braun and Kennedy 1986). Thus, an increase in income alone will not necessarily result in better nutrition, and other interventions, such as nutrition education, are also needed.
Another aspect of commercialization is that income flow, as opposed to total income, is important in determining the impact of income on nutritional status. A lump sum of money, such as that generated from growing cash crops, is more likely to be spent on nonfood items such as school fees or housing, whereas continuous income is more likely to be spent on food (Kennedy and Alderman 1987).
The status of the person who controls the household’s money may also be important. In many developing countries, the income generated from commercial crops is viewed as belonging to men, whereas that from food crops goes to women (FAO 1987). When women have some decision-making power, they are more likely than men to purchase food and health care (Kennedy and Cogill 1987). In some countries, such as India, the money earned by mothers has been found to have a greater impact on child nutrition than that from other income sources (Kumar 1979). Braun and Pandya-Lorch (1991) have claimed that women spend more of their income on food, whereas men spend their income according to personal taste on either food or nonfood items. This observation has important implications for programs that encourage expanding the cultivation of a particular crop that can change the way in which a household’s land is allocated for food and nonfood crops. Any program that reduces the amount of land available for food production without altering the control and use of income can adversely affect nutritional status.
Type of income can also influence consumption patterns. In India, S. Kumar (1979) found that payment in kind, rather than in cash, was used for consumption. Likewise, J. Greer and E. Thorbecke (1983) noted that in Kenya, income from food production rather than from off-farm work was associated with increased food intake. E. Kennedy and H. Alderman (1987) suggested that income from home gardens and home production was more likely to increase household food intake than was an equivalent amount of cash income. This interpretation emphasizes the importance of the role of women in agricultural programs, given that, in general, they control the food produced for household consumption and the income generated from the sale of food crops. Obviously, their position has important implications for child nutrition.
Buffer Stocks

Buffer stocks are strategic food stores that are used to meet fluctuations in demand, or to accommodate fluctuations in supply, when there are no alternative sources of food. Buffer stocks are primarily used to stabilize prices—for example, at times of harvest failure or just prior to the harvest season when supplies are low.
In order to stabilize prices, buffer stocks should be released onto the market and sold at a controlled, or intervention, price once a predetermined consumer “ceiling price” has been reached. Over time, the intervention price, paid by the consumer, needs to fall, as does the producer price. In order to avoid a decline in production associated with the use of buffer stocks, grain has to be bought up when the producer price on the free market falls below a predetermined “floor price.”
The margin between the floor price paid to farmers and the ceiling price paid by consumers is crucial to the stabilization desired from the use of the buffer stock. If the range is wide, so that the floor price is low and the ceiling price high, then net flows of food will be small and infrequent. This situation will allow stock levels to be relatively small, but the effect on the market will also be small. In contrast, if the margin is too narrow, the floor price may rise above the free-market producer price, or the ceiling price fall below the free-market consumer price, and stock levels will fluctuate wildly. Thus, the narrower the margin, the larger the food stock required to maintain it, but the greater the stability in prices.
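The trade-off described above can be sketched numerically. The sketch below is illustrative only: the price series, lot size, starting stock, and band values are hypothetical, chosen simply to show that a narrower floor-to-ceiling margin forces more frequent interventions than a wide one.

```python
def buffer_stock_step(market_price, stock, floor_price, ceiling_price, lot=100):
    """One period of band defence: sell from the stock at the ceiling,
    buy into the stock at the floor, otherwise do nothing."""
    if market_price >= ceiling_price and stock >= lot:
        return "sell", stock - lot   # release grain to pull the price down
    if market_price <= floor_price:
        return "buy", stock + lot    # buy grain to prop the price up
    return "hold", stock             # price inside the band: no intervention

def interventions(prices, floor_price, ceiling_price, start_stock=500):
    """Count the buy/sell operations needed to defend a given band."""
    stock, trades = start_stock, 0
    for p in prices:
        action, stock = buffer_stock_step(p, stock, floor_price, ceiling_price)
        trades += action != "hold"
    return trades

# Hypothetical seasonal price series for one commodity.
prices = [90, 110, 95, 130, 80, 120, 105, 70, 140, 100]

wide = interventions(prices, floor_price=75, ceiling_price=135)    # 2 trades
narrow = interventions(prices, floor_price=95, ceiling_price=115)  # 7 trades
```

With the wide band, the agency trades only twice and its stock barely moves; with the narrow band, it must intervene in seven of the ten periods, which is the sense in which tighter stabilization requires a larger stock.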
Political pressure—from farmers (to increase producer prices) and from urban consumers (to keep consumer prices low)—can undermine a buffer-stock scheme, as can private commodity dealers. For buffer stocks to be effective in stabilizing prices, the operators must have foresight and knowledge about the different types of price fluctuations (regional, seasonal, annual, inflationary, and so forth). In addition, the administration of buffer stocks should be independent so as to resist political pressure. Finally, storage costs of the buffer stock should be low (Streeten 1987).
Buffer stocks are both difficult to administer and expensive to maintain. For example, 1 ton of buffer-stock wheat held in the Sahel can cost $500 (U.S.), as compared with a price of $200 (U.S.) per ton (including freight) for wheat purchased on the world market (Streeten 1987). Aside from capital costs (such as warehouses, staff training, and the initial cost of the grain) and recurrent costs (wages, pesticides, building maintenance, and so forth), buffer stocks incur such additional costs as the interest on the funds tied up in the stock, the value of losses caused by pests, and losses resulting from deterioration in quality. S. Maxwell (1988) suggests that these additional costs may be larger than the capital or maintenance costs, although they are rarely included in estimates of buffer-stock costs. In Sudan, for example, these additional costs were equivalent to 40 percent of the purchase price of sorghum.
Buffer stocks have been used as a means to get grain to food-deficient areas in Indonesia (Levinson 1982) and as a famine reserve in Ethiopia (Maxwell 1988). These programs were implemented with the assumption that the poor, who are the most nutritionally vulnerable, would not suffer from any further deterioration in nutritional status.
Consumer Food-Price Subsidies
Consumer food-price subsidies are essentially income transfer policies that enable consumers to buy food at a lower price. They exist because cheap food is often regarded as a nutritional as well as a political necessity (Pinstrup-Anderson 1985, 1988).
Food subsidies have been found to have a positive impact on nutritional status in a number of countries (Pinstrup-Anderson 1987). This impact has been achieved in three ways. First, food subsidies increase the purchasing power of households because their members can buy more food for the same price. Second, they may reduce the price of food relative to the price of other goods, thus encouraging households to buy more food. Third, they may make certain foods cheaper relative to others and thereby change the composition of the diet.
Food subsidies, however, are not necessarily the most economic means of increasing the food intake of the poor. This is because they benefit all income levels: The rich, who do not need the subsidy, gain more than the poor because the rich can afford to buy more of the subsidized food. In addition, households do not necessarily increase their intake of the subsidized food; they may instead use the savings from its availability to buy other food or nonfood items that may be of limited nutritional benefit.
There is evidence, however, that food subsidies do increase the real income of the poor. In one study, food subsidies represented the equivalent of 15 to 25 percent of total incomes of poor people (Pinstrup-Anderson 1987). Because poor households spend between 60 and 80 percent of their income on food, the economic benefit of subsidized food as a proportion of current income is larger for the poor than for the rich, even if the absolute value of the subsidy is greater for the rich.
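The distributional point above can be made concrete with a small calculation. The incomes, budget shares, and subsidy rate below are hypothetical (they are not the figures from the cited studies); they only illustrate how a rich household can capture a larger absolute subsidy while the poor household gains more relative to its income.

```python
def subsidy_benefit(income, food_budget_share, subsidy_rate):
    """Return (absolute benefit, benefit as a share of income),
    assuming the subsidy applies to all the food the household buys."""
    absolute = income * food_budget_share * subsidy_rate
    return absolute, absolute / income

# Hypothetical households: the poor spend 70% of income on food, the rich 30%.
poor_abs, poor_share = subsidy_benefit(income=600, food_budget_share=0.70,
                                       subsidy_rate=0.25)
rich_abs, rich_share = subsidy_benefit(income=6000, food_budget_share=0.30,
                                       subsidy_rate=0.25)
# poor: 105 in absolute terms, 17.5% of income
# rich: 450 in absolute terms, but only 7.5% of income
```

The rich household receives more than four times the absolute transfer, yet the subsidy is worth more than twice as much to the poor household as a fraction of income, which is the mechanism the text describes.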
Food-subsidy programs have also been found to increase energy intakes at the household level, although the effect at the individual level is not always apparent. In Kerala, India, Kumar (1979) found that the subsidized ration resulted in an increase in energy intake by the lowest income group of 20 to 30 percent.
Food subsidies are often considered more beneficial to urban than to rural populations. Alderman and Braun (1986) learned that the Egyptian subsidies on bread benefited the urban population more than did subsidies on wheat flour. However, the opposite was true in rural areas where there were fewer bakeries to sell subsidized bread. R. Ahmed (1979), in Bangladesh, found that although only 9 percent of the population lived in urban areas, two-thirds of the subsidized cereal went to urban consumers. In Pakistan, B. Rogers and colleagues (1981) discovered that ration shops, which sold rationed subsidized wheat flour, had no significant effect on the energy intake of the rural population, whereas that of the urban population was increased by 114 calories per capita. Clearly, the transportation and administrative costs involved in serving a scattered rural population may limit the extent of a subsidy network. In addition, the threat of political unrest in urban areas is often a reason for governments to pursue food-price subsidies.
The political and economic climate invariably determines who benefits from a food subsidy program and how the program is implemented. The effect of food subsidies on household food consumption and, ultimately, on the nutritional status of children will depend, in part, on whether there is a general food subsidy, a rationed food subsidy, or a targeted food subsidy. In addition, the choice of the food to be subsidized is important.
General Food Subsidy
A general food subsidy is one in which the market price of a specified food (or foods) is subsidized to below the cost of supply. Although politically very popular, such programs may not be the most efficient or effective way to improve the nutritional status of the poor. General price subsidies are costly and may divert financial resources from other programs, such as employment creation and wage increases, that would increase the purchasing power of the poor (Reutlinger 1988). In addition, in order to help pay for the subsidy, the programs themselves may cause increases in the prices of other foods that are important in the diet of the poor.
Yet a number of countries have implemented general food subsidy programs, which, in spite of their costs, have benefited the poor. For example, wheat flour and bread are subsidized throughout Egypt at ration shops, with no restrictions on the amount that may be purchased. Oil, sugar, and rice are also subsidized, but the supply of these is rationed. In addition, there are government-controlled cooperatives that sell subsidized frozen meat, poultry, and fish. Alderman and Braun (1984) have reviewed the Egyptian program, in which over 90 percent of the population derived benefits. In 1980, the subsidies cost the government some $1.6 billion, which was equivalent to 20 percent of concurrent government expenditure. But along with the cost burden of the general subsidy, there were noticeable nutritional gains, as shown by the fact that per capita energy intake in Egypt was greater than that in any of the countries having a per capita gross national product (GNP) up to double that of Egypt.
Access to subsidized food, however, was biased toward the rich in Egypt for a number of reasons (Alderman, Braun, and Sakr 1982). First, the lack of refrigeration facilities in rural areas precluded cooperatives from operating there, although ration shops were omnipresent. Second, the wealthier neighborhoods of Cairo appeared to have better supplies of the subsidized commodities, such as rice, than did the poor areas. In addition, although subsidized rationed commodities, such as oil, were not always rationed in the rich neighborhoods, oil was not even always available in the poor areas. Third, in addition to the regular cooperatives, civil servants and workers had their own workplace cooperative shops (in locations where more than 200 people were employed), which received extra allocations of subsidized meat and poultry. Fourth, wealthier households could afford to employ servants to fetch and queue for the subsidized food. Last, wealthier households could afford to pay bribes to ensure preferential access to subsidized food.
The Egyptian study found that although there were large variations in access to the subsidized food and also inequalities in the distribution system, the poor did benefit from an effective increase in purchasing power as a result of access to cheap food. Indeed, food subsidies accounted for 12.7 percent of expenditures for the lowest income quartile in urban areas and 18 percent of total expenditures of the poorest households in rural areas.
Rationed Food Subsidies: Dual Prices
A dual price system is one in which certain quantities of food are rationed at a subsidized fixed price, and unlimited quantities are available at the prevailing market price. Typically, a household’s quota for a rationed food is less than the total amount required by the household, and the difference is made up from purchases in the open market, where prices are higher. This arrangement effectively allows households to increase their expenditure on food.
Dual price systems, utilizing ration shops, have been implemented in India for wheat, rice, and small amounts of coarse grains (Scandizzo and Tsakok 1985) and in Pakistan for the wheat flour known as “atta” (Alderman, Chaudhry, and Garcia 1988). The ration price in India was 60 percent, and that in Pakistan 51 percent, of the open market price.
Targeted Food Subsidies: Food Stamps
A food-stamp program is one in which coupons are sold to a selected target group who may then buy specific foods of a stated value in authorized stores. The store cashes in the coupons at a bank and the bank is reimbursed by the government. The rationale behind food stamps, in contrast to cash transfers, is that households are more likely to increase food intake with transfers in kind rather than in cash (Kumar 1979). In other words, “leakage,” meaning the purchase of non-food items or more expensive “prestigious” food, is lower. Kennedy and Alderman (1987) noted that unless the value of the food stamps is more than that which a household would normally spend on food, there is no reason to expect the nutritional effect to be greater than that of giving a direct cash transfer. If, however, the cost of the coupons is close to that which the household would normally spend on food but the quantity of food that can be purchased with the coupons is higher, then the nutritional effect would be larger than that of a cash transfer. This is because, in addition to the income effect, food is cheaper at the margin. An example frequently cited is that of the U.S. food-stamp program, which until 1979 enabled a household to spend, say, $100 (U.S.) to receive $150 (U.S.) worth of food stamps.
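The pre-1979 U.S. figures cited above make the "cheaper at the margin" argument easy to compute. A minimal sketch (the helper names are ours, not terms from the program):

```python
def marginal_food_price(coupon_cost, coupon_value):
    """Effective price of $1 worth of food bought through the stamp program."""
    return coupon_cost / coupon_value

def income_transfer(coupon_cost, coupon_value):
    """Pure income effect: extra food purchasing power received."""
    return coupon_value - coupon_cost

# Pre-1979 U.S. example from the text: pay $100, receive $150 of stamps.
price = marginal_food_price(100, 150)   # each $1 of food costs about $0.67
bonus = income_transfer(100, 150)       # a $50 transfer in food purchasing power
```

Because the marginal price of food falls below 1, the household has a price incentive to buy food rather than other goods, over and above the $50 income transfer, which is why the expected nutritional effect exceeds that of an equivalent cash grant.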
Although it has been argued that food-stamp programs are costly to implement, S. Reutlinger (1988) has pointed out that the direct costs of administering a food-stamp program could be small because it brings in customers, so there would be no need for shops or banks to charge for processing the coupons. In addition, marketing costs may fall, which could lead to lower food prices because of an increased volume of shop sales. Thus, the only administrative costs are for printing the coupons, handling the distribution, and regulating against abuses.
As with all food-distribution programs, identifying the target group is not easy. Colombia, for example, has had a program in which only households in low-income regions with young children and pregnant or lactating women were eligible to participate. Coupons were distributed through health centers, where growth monitoring and nutrition education were carried out concurrently. The value of the food stamps was 2 percent of average income, but there was no evidence that the nutritional status of the target group improved (Pinstrup-Anderson 1987). In Sri Lanka, only households whose “declared” incomes were below a specified level could receive food stamps. However, this restriction meant that wage-earning workers on the tea estates where hunger was present did not qualify for food stamps even though they were one of the nutritionally most needy groups (Edirisinghe 1987).
In order to increase food intake and, thus, the nutritional status of the poor, the value of the food stamps must be indexed to inflation. This was not done in Sri Lanka, and so, over a four-year period, the value of the food stamps declined to 56 percent of their original value. The net result was that the already low total energy intake of the poorest 20 percent of the population declined by 8 percent.
Like other food-price subsidy programs, coupons do not necessarily result in increased consumption if the food issued does not meet the perceived needs of the household. The latter will depend on a household’s composition and food preferences. In Colombia, for example, the foods that could be purchased with food stamps included highly processed foods not normally consumed by the poor.
In addition, the need for cash to buy the coupons often discriminates against the neediest households, which do not always have a regular source of cash. Moreover, even when a household does have income, it may not be sufficient to cover the cost of the coupons. But unfortunately, setting up a progressive program in which the price of the coupons is based on what a household can pay is not feasible in many societies (Pinstrup-Anderson 1987).
Choice of Subsidized Food
Most programs subsidize a staple cereal, and when the entire population consumes significant quantities of that staple, the cost can become prohibitive. In order to cover the costs and ensure that the poorest sections of the population benefit, the selection of the food to be subsidized is critical, particularly where more than one food is considered a staple. In Brazil, for example, C. Williamson-Gray (1982) found that subsidizing wheat bread resulted in a decrease in the energy intake of the poor. This was because the poor substituted subsidized bread, which even after the subsidy remained more expensive than rice, for rice and other foods, thus decreasing their total intake of food. Williamson-Gray suggested that subsidizing cassava, rather than wheat, would have been better. Such a subsidy would have effectively increased the income of the poor because cassava is not prominent in the diet of the rich.
It has been pointed out that where “inferior” foods are subsidized, the subsidy program becomes self-targeting toward the group that consumes the “inferior” food. For this to happen, however, the “inferior” food must be consumed by a large proportion of the targeted population. Various studies cited by P. Pinstrup-Anderson (1988) indicate that subsidizing highly extracted wheat flour benefits the poor more than the rich largely because the latter perceive such flour as lower in quality because of its physical appearance.
As can be seen, a range of economic policies have been used, directly or indirectly, to improve food security among the poor. Undernutrition is inexorably linked with poverty, and unless poor households have the ability and resources to feed themselves, their infants and young children will suffer. For this reason, the focus of food-related economic policies has been at the household level.
Nutrition Interventions

Like economic interventions, nutrition interventions can be targeted or nontargeted. Once implemented, nontargeted interventions, such as food fortification, protect the entire population against a specific nutrient deficiency from an early age. Targeted interventions, such as the use of formula foods, feeding programs, and nutrition education, attempt to focus specifically on infants and young children, although many also include pregnant and lactating women. This section reviews the most commonly used nutrition interventions: food fortification, formula foods, supplementary feeding programs, and nutrition education.
Food Fortification

Food fortification is the addition of specific nutrients to foods. It can be used to restore nutrients lost in processing or to increase the level of a specific nutrient in a food. Examples of fortified food commonly consumed in developing countries include milk containing vitamin D (originally introduced to prevent infantile rickets) and salt to which iodine is added (to prevent goiter).
To ensure that the most vulnerable members of the population benefit from food fortification, the vehicle for fortification must be a staple food consumed throughout the year by a large proportion of people with relatively little inter- and intraindividual variation. Intake levels must be self-limiting to minimize the possibility of toxicity. The level of fortification must be such that it contributes significantly to nutritional requirements without altering the taste, smell, look, physical structure, or shelf life of the food vehicle. Because control and monitoring procedures must be adopted at the manufacturing level to ensure that fortification levels are adequate, legislation may be necessary (International Nutritional Anaemia Consultative Group 1977, 1981; Bauernfeind and Arroyave 1986; Arroyave 1987).
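The arithmetic behind setting a fortification level can be sketched briefly. The following is an illustrative calculation only; the intake, target, and loss figures are hypothetical examples, not recommendations, and real programs derive them from dietary surveys and reference standards.

```python
# Illustrative sketch: estimating the fortification level for a staple food.
# All figures are hypothetical examples, not program recommendations.

def fortificant_concentration(daily_intake_g, nutrient_target_ug, losses_fraction=0.2):
    """Micrograms of nutrient needed per gram of the food vehicle,
    allowing for losses in processing, storage, and cooking."""
    required = nutrient_target_ug / (1 - losses_fraction)
    return required / daily_intake_g

# Example: salt iodization. Suppose average salt intake is ~10 g/day,
# the aim is to supply 150 ug iodine/day, and 20% is lost before consumption.
conc = fortificant_concentration(daily_intake_g=10, nutrient_target_ug=150)
print(f"{conc:.2f} ug iodine per g salt")  # about 19 parts per million
```

The same logic explains why the vehicle must be consumed with little inter-individual variation: the concentration is fixed at manufacture, so individual doses scale directly with individual intake of the vehicle.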
The advantages of food fortification are many. Such a procedure is socially acceptable; it does not require the active participation of the consumer; it requires no changes in food purchasing, cooking, or eating habits; and the fortified food remains the same in terms of taste, smell, and appearance. Because the product to be fortified should already be marketed and have a widespread distribution system, fortified food can be introduced quickly; its benefits are readily visible; legislation for compliance is possible; its use is relatively easy to monitor; it is the cheapest intervention for a government; and it is the most effective sustainable method of eliminating a micronutrient deficiency (International Nutritional Anaemia Consultative Group 1977, 1981; Bauernfeind and Arroyave 1986; Venkatesh Mannar 1986; Arroyave 1987).
The main disadvantage of food fortification is that although it is applied to a food that is processed and marketed throughout a society, only those who consume that food will benefit. Fortification, for example, will not benefit people who use only locally produced or unprocessed foods. Other disadvantages are that fortified food reaches nontargeted as well as targeted individuals and may not be the most economical way to reach the target group. Moreover, when the cost of fortification is passed on to the consumer, purchasing patterns may change adversely among those most intended to benefit. Fortification also involves recurring costs. Finally, political will, legislation, and mechanisms to enforce decisions to fortify foods are necessary to ensure the success of such programs (Bauernfeind and Arroyave 1986; Arroyave 1987).
Among the conditions in children for which the use of fortified foods has been advocated in the developing world are xerophthalmia, an early symptom of which is night blindness (International Vitamin A Consultative Group 1977), goiter (Dunn and van der Haar 1990), and nutritional anemia (International Nutritional Anaemia Consultative Group 1977, 1981; United Nations 1991). One example of the use of a specific commodity as a vehicle for vitamin A fortification to control xerophthalmia is sugar in Guatemala (Arroyave 1986, 1987). Another is the use of vitamin A–fortified monosodium glutamate in the Philippines (Latham and Solon 1986) and Indonesia (Muhilal et al. 1988), although large-scale programs have not been implemented. Iodization of salt to control goiter has been successfully introduced in China, India, and 18 countries in Central and South America, although some of the latter countries have had a recurrence of iodine deficiency because of a lack of continuous monitoring and control measures (Venkatesh Mannar 1986). Fortification of wheat flour with an iron salt to control nutritional anemia is being implemented in Grenada, and trials with other food vehicles have been conducted in a number of developing countries. In Thailand, for example, fish sauce, a widely used condiment, is fortified with an iron salt (WHO 1972). In South Africa, curry powder has been fortified with iron EDTA (ethylenediaminetetraacetic acid) (Lamparelli et al. 1987), and in Chile, reconstituted low-fat milk is fortified with an iron salt, whereas wheat biscuits have been fortified with bovine hemoglobin (Walter 1990).
Formula foods are premixed foods made from unconventional sources. They may be made by combining a local cereal grain with a vegetable/pulse protein rich in the amino acids deficient in the cereal. Originally, formula foods were developed as low-cost milk substitutes to be fed to weaning children. Two classic examples are a milk substitute made from corn and cottonseed flour (INCAPARINA), and corn-soya-milk (CSM). CSM is an important food-aid commodity and a major staple of many ongoing supplementary feeding programs.
The advantages of formula foods are that they are cheaper than conventional sources of animal protein, vitamins, and minerals and can be formulated to meet the specific nutritional needs of a target group while providing energy at the same time. In addition, they do not require cold storage.
The disadvantages are that although the costs are less than those of animal protein supplements, they are higher than those of cereal staples because of manufacturing and distribution costs. The main beneficiaries of formula foods tend to be urban populations because the formula foods are generally sold through commercial channels and can become prohibitively expensive in rural areas. B. Popkin and M. C. Latham (1973) estimated that formula foods were priced at between 8 and 40 times the cost of homemade traditional foods on a cost-per-nutrient basis. In addition, a lack of familiarity with formula foods may result in low levels of acceptability. Thus, on balance, the use and potential of such foods appear to be somewhat limited in terms of developing countries, particularly in rural areas.
Supplementary Feeding Programs
Supplementary feeding programs are conducted at health centers, schools, or community facilities, which have the advantage of keeping capital outlays low by using existing institutional infrastructures. They also facilitate decisions on intrahousehold food distribution in situations when food and income are limited. Politically, such programs are acceptable because they often generate support among the recipients.
There are, however, disadvantages involved in feeding programs. First and foremost is their cost. Apart from the possible cost of the food (which may or may not be donated food aid), there are the administrative and operational expenses of the program. These include those of transport, storage, wages, overhead, physical plant, fuel, and cooking utensils. In addition, the operation of feeding programs in health centers or schools imposes extra work on the staff, which may be detrimental to the institution’s core concerns and thus is itself a hidden cost. Further questions include whether the intended beneficiaries actually participate in and benefit from the program, whether a feeding program based on imported food aid encourages black marketeering, whether the food is used for political ends, and whether feeding programs create psychological, nutritional, and political dependence.
Participation in any feeding program depends upon a number of factors, which ultimately determine the success of the program when that success is measured in terms of an improvement in nutritional status. Criteria for success include the quality and quantity of food reaching the recipients, the regularity of supply, the timing of meals, the nutritional status of the participants, and the degree of targeting.
The quality of foods served in a feeding program is important in enhancing the overall quality of the diet provided. For example, powdered milk fortified with vitamins A and D is preferable to unfortified milk. Where premixes are employed, the addition of sugar to an oil/cereal base or oil to a sugar/cereal base not only increases the energy density of the food but also renders it more palatable and, thus, increases the likelihood of its being eaten. The use of premixes has both advantages and disadvantages. On the positive side, leakage of one of the commodities (sharing or selling the commodity) is restricted, especially if a high value is put on a particular food. On the other hand, although premixes may be made up of foods that are consumed locally, the appearance and consistency of a cooked premix may be alien to the intended beneficiaries.
The quantity of food each participant receives is critical to the success of any feeding program. In general, "average" rations are based on the extent to which the general diet is deficient in both energy and specific nutrients, but no allowances are made for individual variation or for leakage. Kennedy and Alderman (1987), for example, cite a CARE review finding that in four feeding programs, only 62 to 83 percent of the total energy gap was met by the supplementary foods. In another feeding program, in India, sharing reduced the amount of the ration consumed by child beneficiaries by 50 percent.
Minimizing leakage through the use of increased rations, or by having on-site feeding programs, will inevitably add to costs, but these measures are more likely to encourage attendance and produce viable results (Kennedy and Knudsen 1985). One difficulty encountered in all supplementary feeding programs, regardless of leakage, is ensuring that children actually eat an adequate amount of food. Undernourished children are often anorexic. Their total intake is likely to be greater if they are fed on a “little and often” basis rather than at set meals. This means that on-site feeding centers need to operate beyond conventional working hours and may require additional staff.
The regularity of supply of supplementary food is essential for it to have an impact. Regularity, however, can be impaired by transportation and bureaucratic constraints. Any uncertainty in supply can result in a reluctance on the part of local administrators to invest either time, personnel, or resources in a supplementary feeding program. Uncertainty in supply may also jeopardize existing programs by generating ill will among the recipients, especially if they have to travel or wait for food that fails to arrive.
The timing of the issuance of supplementary meals is also important for two reasons. First, it influences how much food a child will eat, and second, it may determine the extent to which the supplementary meal is considered as a substitute for the home meal. A meal served early in the morning is unlikely to be fully eaten by a participant if the individual has already eaten at home. At the same time, it may deter a mother from giving her child an adequate meal first thing in the morning if she knows there will be one at the feeding center. The same consideration applies to other supplementary meals given near the times that meals would normally be eaten in the home. To overcome this effect, feeding programs in some countries (for example, Guatemala) changed the time at which the food was distributed so that it was perceived as a “snack” rather than a meal (Anderson et al. 1981; Balderston et al. 1981).
The initial nutritional status of the participants in a feeding program influences the success of the program. Indeed, the greatest benefit from a supplementary feeding program ought to be seen among the most undernourished children (Beaton and Ghassemi 1987). For this reason the “targeting” of feeding programs becomes very important (Kennedy and Knudsen 1985; Kennedy and Alderman 1987).
Yet targeted programs are often politically difficult to implement because they involve singling out one group of people, which may not be socially or culturally acceptable. At the same time, groups not in nutritional danger, who may wish to benefit from what is perceived as a "free handout," must be excluded. In order to target successfully, there must be some criteria with which to identify the intended beneficiaries (Anderson 1986). All too often, the data for doing this are limited (Cornia 1987).
Where undernutrition is widespread, feeding programs that are well targeted geographically can be effective in reaching the nutritionally disadvantaged population. This type of targeting is generally used in drought situations—for example, in Sudan and Ethiopia between 1984 and 1986. Geographic targeting is appropriate for areas in which there is a concentration of intended beneficiaries. Kennedy and Alderman (1987) have suggested that if less than 20 percent of households or children in an area are nutritionally needy, then geographic targeting on its own is unlikely to be an effective tool.
Once vulnerable individuals have been identified, targeting at the community level can be directed at households or individuals. This will largely depend on whether the program aims to improve the energy intake of vulnerable households or only that of vulnerable children and women. At the community level, targeting often relies on some arbitrary cutoff point using nutritional criteria such as weight-for-height, weight-for-age, and in some cases, mid-upper-arm circumference. Children are enrolled in and discharged from feeding programs based on nutritional status criteria. Once a child reaches and maintains an acceptable level of nutrition for a specified time period (at least one month), he or she can be discharged from the program.
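The enrollment-and-discharge logic described above can be sketched as a simple screening rule. The cutoffs below are hypothetical illustrations of the "arbitrary cutoff point" the text mentions; actual programs take their thresholds and reference values from published growth standards.

```python
# Illustrative sketch of nutritional-status screening for program
# enrollment and discharge. The 80%/85% cutoffs are hypothetical;
# real programs use cutoffs tied to reference growth standards.

def screen_child(weight_kg, reference_median_kg,
                 enroll_below=0.80, discharge_above=0.85):
    """Classify a child by weight-for-height as a percent of the
    reference median for his or her height."""
    pct = weight_kg / reference_median_kg
    if pct < enroll_below:
        return "enroll"
    elif pct >= discharge_above:
        # In practice, discharge also requires maintaining this
        # status for a specified period (at least one month).
        return "eligible for discharge"
    return "continue monitoring"

print(screen_child(9.0, 12.0))   # 75% of median -> "enroll"
print(screen_child(11.0, 12.0))  # ~92% of median -> "eligible for discharge"
```

Note that a child between the two cutoffs is neither newly enrolled nor discharged, which is what makes discharge criteria stricter than enrollment criteria and prevents children from cycling in and out of the program on small weight fluctuations.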
Because undernutrition often involves not only a simple food deficit but also problems of sanitation and hygiene, it is important to address these factors as well so as to reduce the likelihood of children being caught in the undernutrition–morbidity cycle that inevitably ends in their being readmitted into a feeding program.
A feeding program must not overlook the importance of maintaining an improved nutritional status once it has met its objectives and the causes of undernutrition have been removed (Pinstrup-Anderson 1987). G. H. Beaton and H. Ghassemi (1987) have suggested that although the implementation of a feeding program may improve nutritional status, it may also disrupt the equilibrium between a community and its environment. Because of this, when a feeding program is terminated, it is essential that it be phased out over a period of time in order to ensure that the positive results of the program are not lost and that the circumstances that led to the establishment of the program in the first place do not recur.
Supplementary feeding programs fall into four categories: on-site, take-home, school, and community programs. Each of these can also be used as a vehicle for nutrition education, which is an essential component of any feeding program that hopes to modify nutritional behavior.
On-Site and Take-Home Feeding Programs
On-site and take-home feeding programs are generally aimed at children 6 months to 5 years old, women in the last trimester of pregnancy, and women in the first six months of lactation. On-site programs involve the beneficiaries’ attendance at a feeding center once or twice a day to receive food rations, whereas take-home programs distribute rations at regular intervals.
As previously mentioned, on-site programs tend to cost more than take-home programs, and although the beneficiaries of the former are more likely to consume the food themselves, the coverage of such programs is usually limited in both area and numbers. In addition, there is some evidence that the supplementary food is more likely to serve as a substitute (rather than a supplement) for the food that would otherwise be eaten at home (Beaton and Ghassemi 1987). Kennedy and Alderman (1987) point out that because a household operates as an economic unit, it attempts to promote the well-being of all its members. A child who receives food at a feeding center may be considered to have been fed already and is thus given less food at home, with a resulting net energy intake that is considerably lower than intended by the planners of supplementation. There is also the danger that when the responsibility for feeding a child is removed (albeit temporarily) from the mother, undernutrition may come to be regarded as a disease to be cured through outside interventions, rather than through a reallocation of resources within the household.
Take-home programs have more scope for leakage than do on-site programs because the food may be shared among the whole household or sold. M. Anderson and colleagues (1981) compared supplementary feeding programs in five countries and found that 79 to 86 percent of children attending on-site feeding centers ate their ration, whereas only 50 percent of children receiving a take-home ration did so.
Distributed food is more likely to be shared if it is a food that is widely accepted by the local population as appropriate for consumption. Indeed, Beaton and Ghassemi (1987), in their review of over 200 reports on feeding programs, found that sharing accounted for 30 to 60 percent of the food distributed, and the overall net increase in food intake of the target population was only 45 to 70 percent of the food distributed. The investigators concluded that on-site and take-home programs are not effective in reaching children less than 2 years of age, although such children are nutritionally a very vulnerable group. The reasons cited for failing to enroll children younger than 2 years included the use of unfamiliar weaning foods that are considered inappropriate by mothers, as well as the mothers' lack of knowledge about feeding children of this age.
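The gap between food distributed and net dietary impact that Beaton and Ghassemi describe can be made concrete with a short arithmetic sketch. The ration size and the leakage and substitution fractions below are hypothetical, chosen only to fall within the ranges the review reports.

```python
# Illustrative sketch: net dietary impact of a take-home ration after
# leakage (sharing or sale) and substitution for home food.
# All figures are hypothetical.

def net_intake_gain(ration_kcal, leakage_fraction, substitution_fraction):
    """Energy actually added to the target child's diet."""
    consumed_by_child = ration_kcal * (1 - leakage_fraction)
    # Part of what the child eats merely displaces food he or she
    # would otherwise have been given at home.
    displaced_home_food = consumed_by_child * substitution_fraction
    return consumed_by_child - displaced_home_food

# A 500 kcal ration with 40% shared or sold and 30% substitution
# yields a net gain of roughly 210 kcal -- well under half the
# energy distributed, consistent with the ranges cited above.
gain = net_intake_gain(500, leakage_fraction=0.40, substitution_fraction=0.30)
print(round(gain))
```

The sketch shows why ration sizes are sometimes deliberately inflated, and why on-site feeding (which cuts leakage but not substitution) does not by itself guarantee the intended net gain.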
Because of the time often required for travel to a feeding center, feeding programs may suffer from low participation and low attendance and have difficulties in reaching the most vulnerable groups. In their review of feeding programs, Beaton and Ghassemi (1987) found that participation rates were 25 to 80 percent of the intended level of distribution. Low participation and attendance result in part from the fact that mothers are often working, either domestically (collecting water or firewood, preparing food, looking after a large family, and so forth), or on their land, or in the informal sector. Participating children left with older siblings do not always benefit from the feeding program because the older children may not know how to prepare the supplementary food or not understand its importance. Indeed, experience in Sudan showed that older children were more interested in playing than in bringing an undernourished child to a feeding center on time or in attempting to feed a child who was anorexic.
School Feeding Programs
School feeding programs have been widely adopted in many countries. Such programs encourage school enrollment and attendance and improve school performance and cognitive development (Freeman et al. 1980; Jamison 1986; Moock and Leslie 1986). Many, however, operate only during the school year. School meals may replace, rather than supplement, meals eaten at home, although in situations when the mother is not at home during the day, it is questionable whether the child would get a midday meal anyway. In the latter situation, the advantage of the school meal outweighs the disadvantage of its possibly being a replacement meal. In India, school feeding programs were found to influence household expenditure patterns because school meals were cereal-based, which allowed households to spend less money on cereals and more on milk, vegetables, fruits, and nonvegetarian foods (Babu and Hallam 1989).
Among the poor, however, school-age children are often required to work to augment the household’s meager income or to look after younger siblings while the adults work. Indeed, in many situations, children from the most vulnerable households are the least likely to be at school and, thus, the least likely to benefit from a school feeding program.
Community Feeding Programs
Strictly speaking, community feeding programs provide replacement or substitute rather than supplementary meals. Community-organized mass feeding programs, known as comedores, have been implemented in Lima, Peru (Katona-Apte 1987). There, members of a group of women take turns preparing morning and midday meals. Standardized portions are sold at set prices to participants, whose entitlement depends on household size. The money collected is used to purchase foods additional to those contributed by donor agencies. Households pay for meals depending upon their circumstances, although the comedor does limit the number of free meals it issues each day.
The advantage of community feeding programs is that they benefit from economies of scale. Both the quantity and the quality of the meals are thus likely to be nutritionally superior to those of meals that would have been prepared at home. Communal kitchens also afford participating mothers more time for other activities, and yet one more benefit of a community-based program is that it gives women the opportunity to interact with and help each other.
The disadvantages of community-based feeding programs are that they do not allow for individual food preferences and that they take the control and responsibility of feeding the family away from the household. This runs the risk of child undernutrition coming to be perceived as a problem that the household cannot solve.
Food-for-Work and Food-for-Cash Programs
Food-for-work (FFW) and food-for-cash (FFC) programs are indirect nutrition interventions involving labor-intensive employment programs. Such programs effectively subsidize labor through the provision of either food or wages as remuneration, thereby improving a household’s access to food and reducing the risk of child undernutrition. The short-term outcome of such programs is the creation of employment for the poor, whereas the long-term one is income generation for both poor and nonpoor through the development of an asset base (Biswas 1985; Stewart 1987; Braun, Teklu, and Webb 1991; Ravallion 1991).
The participation and involvement of both the local community and government in the design and implementation of the program is essential if the poorest people, who generally have no power base, are to participate in and benefit from it. Payment, whether in cash or as food, must be determined by the prevailing food-market conditions (Braun et al. 1991). Poor people already spend a large proportion of their income on food, and any increase in income is generally translated into further food purchases (Alderman 1986; Braun and Kennedy 1986). In order to ensure that this change takes place, the existing food market must be such that it can cope with the increased demand without price increases that will negate some positive effects of the program.
There are numerous advantages to FFW or FFC programs in terms of improving household food security and, thus, nutritional status. First, because of their relatively low level of remuneration, they do tend to reach the poor and are, consequently, self-targeting. Second, they are relatively low-cost in relation to the jobs they create. Third, they develop and improve a much needed infrastructure. Fourth, they increase local purchasing power. Fifth, they increase food demand through the generation of income for the poor, which will stimulate the local market. Sixth, there is no notion of charity. Seventh, they enable communities to be involved in their own asset creation, including that of environmental sanitation, thereby reducing the morbidity–undernutrition risk. Eighth, they can be implemented in the slack agricultural season, thereby reducing seasonal fluctuations in income and, thus, dependency on moneylenders in rural areas (Biswas 1985; Stewart 1987; Braun et al. 1991).
The constraints of FFC and FFW programs are that many developing countries—particularly in Africa—do not have the institutional capabilities to set up, monitor, and maintain them. In many situations, it is not easy to determine the wage rate or to identify suitable small-scale infrastructural projects. In addition, the quality of the work carried out may be poor because of insufficient funds or inadequate work performance. Finally, the programs depend not only on the existence of surplus labor but also on the willingness of the laborers to be mobilized (Braun et al. 1991; Ravallion 1991).
Funds for FFW or FFC programs are generated through food aid, which can be used directly or indirectly, with the latter meaning that the food is sold and the proceeds are used to subsidize the program. As a result of administrative costs, food aid has been estimated to cost between 25 and 50 percent more than purely financial aid (Thomas et al. 1975; World Bank 1976), and food aid is not appropriate in rural areas with surplus agricultural production because it can depress local prices and incomes.
Nutrition Education
The main purpose of nutrition education is to change behavior patterns that determine the distribution of food within a household so as to reduce any intra-household inequality in nutritional status. However, without a proper understanding of the problems and constraints that households face, nutrition education on its own will have little impact. For example, if income is the most limiting factor, then education to reallocate income or food within the household is unlikely to be effective (Pinstrup-Anderson 1987). Indeed, nutrition education is likely to be useful only when households are capable of responding—for example, when a large proportion of the budget is spent on nonessential goods.
The most widely used themes in nutrition education are the encouragement of breast feeding; the introduction of appropriate food supplements for breast-fed infants between the ages of 4 and 6 months; the best use of scarce money to purchase the cheapest combination of nutritious food; the minimization of food wastage through improved food preparation and preservation; safer food preservation procedures; and ways to cook more efficiently, thereby saving on fuel costs (Manoff 1987). However, each theme cannot be treated as a discrete educational entity. For example, the promotion of breast feeding is related to the nutrition of both pregnant and lactating women, the introduction of weaning foods, the prevention and control of diarrheal disease, the adverse marketing practices of manufacturers of commercial breast-milk substitutes, changes in hospital practices, and changes in policies such as paid maternity leave and provision of nurseries at the workplace.
Nutrition education techniques have relied largely on mass media campaigns and face-to-face communication through maternal-child-health clinics, women's groups, agricultural extension agents, and so forth. Few, however, involve the community, although R. C. Hornik (1987) and R. K. Manoff (1987) have suggested that unless communities develop the messages and undertake the administrative and financial responsibility for nutrition education programs on their own, it is unlikely that nutrition education will be effective on a large scale. The premise is that if people pay for a service, they are more likely to use it and to change their behavior.
Although the mass media has the potential to inform a large number of people, the number of people who can be reached in the lowest economic strata—nutritionally the most vulnerable—is generally not large. Irrespective of literacy levels, the majority of people in developing countries do not have access to newspapers, and there are only about 170 radios and 40 televisions per 1,000 people in the entire developing world (United Nations Development Program 1990). Moreover, in many developing countries there are multiple languages and dialects, as well as differing religious beliefs, ethnic groups, and cultural practices. Nationwide mass-media messages may, therefore, not be very appropriate.
Women comprise the majority of the 870 million illiterate adults in the developing world (United Nations Development Program 1990). But mothers are not the only ones to whom messages need to be directed in order to effect a change in behavior. The medical profession, too, could play a much larger role than it currently does. In addition, school-children, religious leaders, and the members of any institutionalized program have a role to play in nutrition education. If behavior is to be truly changed, however, the focus must be on ways to educate and communicate both relevant and appropriate nutrition advice.
The preceding sections show the range of nutrition policies that may be used to improve the nutritional status of infants and young children. Each policy has been analyzed in terms of its benefits and constraints, but it is difficult to ensure that the most nutritionally vulnerable children do, indeed, benefit from an intervention. For this reason, it is more appropriate to identify the vulnerable groups—that is, poor households with specific characteristics—than the vulnerable children. In short, to improve the nutritional status of infants and young children, economic and nutrition interventions must be aimed at poor households.