Paul Rozin. Cambridge World History of Food. Editor: Kenneth F Kiple & Kriemhild Conee Ornelas, Volume 2, Cambridge University Press, 2000.
We can think of the world, for any person, as divided into the self and everything else. The principal material breach of this fundamental dichotomy occurs in the act of ingestion, when something from the world (other) enters the body (self). The mouth is the guardian of the body, a final checkpoint, at which the decision is made to expel or ingest a food.
There is a widespread belief in traditional cultures that “you are what you eat.” That is, people take on the properties of what they eat: Eating a brave animal makes one brave, or eating an animal with good eyesight improves one’s own eyesight (reviewed in Nemeroff and Rozin 1989). “You are what you eat” seems to be “believed” at an implicit level, even among educated people in Western culture (Nemeroff and Rozin 1989). It is an eminently reasonable belief, since combinations of two entities (in this case, person and food) usually display properties of both. Thus, from the psychological side, the act of eating is fraught with affect; one is rarely neutral about what goes in one’s mouth. Some of our greatest pleasures and our greatest fears have to do with what we eat.
The powerful affect associated with eating has a strong biological basis. Humans, like rats, cockroaches, raccoons, herring gulls, and other broadly omnivorous species, can thrive in a wide range of environments because they discover nutrients in many sources. But although the world is filled with sources of nutrition, the omnivore (or generalist) faces two problems. One is that many potential foods contain toxins. A second is that most available (nonanimal) foods are nutritionally incomplete. To the extent that omnivorous animals cannot find sufficient animal foods, their survival requires an apt selection of a variety of different plant foods. Animal foods tend to be complete sources of nutrition, but they are harder to come by because they are less prevalent and because they are often hard to procure (for example, they move). Hence, the omnivore must make a careful selection of foods, avoiding high levels of toxins and ingesting a full range of nutrients. Any act of ingestion, especially of a new potential food, is laden with ambivalence: It could be a good source of nutrition, but it might also be toxic.
Specialist species, such as those that eat only insects, can identify food as small moving things; specialists that eat only one species or group of plants may identify food by some common chemical property of these plants. There is no simple, genetic way to program an omnivorous animal to identify food by its sensory properties, because anything might be food and anything might contain toxins (or harmful microorganisms). Hence, omnivores must basically learn what is edible and what is not and what constitutes a good combination of edibles. This learning is facilitated by a few biologically (genetically) based biases:
- An innate tendency in many species (including rats and humans) to ingest things that taste sweet (correlated with the presence of calories in nature) and to avoid things that taste bitter (correlated with the presence of toxins in nature [Garcia and Hankins 1975]).
- Conflicting tendencies to be interested in new foods and a variety of foodstuffs but to fear anything new. This results from what I have called the generalist’s (omnivore’s) dilemma: the importance of exploring new foods to obtain adequate nutrition across different areas and seasons and, yet, the dangers of ingesting toxins in doing so. The conflicting tendencies are manifested as a cautious sampling of new foods and a tendency to eat a number of different foods on any given day.
- Special abilities to learn about the consequences of ingestion. Here omnivores face a difficult challenge, because both the positive and negative effects of foods occur many minutes to hours or even days after ingestion. But omnivores, like rats and humans, can learn to associate a food with the consequences of ingestion even if they occur some hours later (Garcia, Hankins, and Rusiniak 1974; Rozin 1976).
- A few special preprogrammed food selection systems. These include systems that signal to an animal that it is in need of energy (what we loosely call “hunger”), water (“thirst”), and sodium (“sodium appetite”). The sodium system, at least for the rat, is linked to an innate recognition of the taste of the needed nutrient. As shown originally by the great food selection psychologist, Curt Richter, rats deprived of sodium for the first time in their lives show an immediate preference for the taste of sodium (salt) (Richter 1956; Denton 1982; Schulkin 1991). There is some evidence for something that may resemble an innate sodium-specific hunger in humans (Beauchamp, Bertino, and Engelman 1983; Shepherd 1988).
The psychology of human food-related behavior falls naturally into two areas. One is concerned with how much is eaten (the starting and stopping of eating, defining the meal). Almost all psychological research on eating is devoted to understanding what determines how much people and animals eat and the disorders of this process (such as obesity and anorexia). These issues are discussed elsewhere in this work. But this chapter focuses on the second area, what is eaten, that is, on food selection.
The Human Omnivore
Food selection can be accomplished by genetic programming (for example, the avoidance of bitter tastes), by individual experience with foods (for example, learning that a certain food causes adverse symptoms), or by social transmission. Social transmission obviously has the virtue of sparing an organism the efforts and risks of discovering what is edible and what is not. In nonhuman animals, there is social transmission that can be called inadvertent; that is, animals may learn to eat what their parents or conspecifics eat by some sort of exposure to the conspecifics when eating (see Galef 1988, for a review). Humans, however, have two other powerful modes of transmission of food preferences and information (see Rozin 1988, for a review). One is an indirect social effect: social institutions, cuisines, and technological advances that make certain varieties of nutritious foods easily available (for instance, by agriculture or importation) and reduce the likelihood of contact with harmful potential foods (such as poisonous mushrooms). The second effect is explicit teaching about food preferences by example or by transmission of information (for example, “don’t eat wild-growing mushrooms”). There is no firm evidence for any nonhuman species of explicit (intentional) teaching about appropriate foods (Galef 1988).
Humans share with animals both the nutritive need for food and the derivation of pleasure from ingestion. However, humans supplement these two functions of food with others as well (Rozin 1990b). Food and eating can serve a social function by providing an occasion for the gathering of kin at mealtimes. Food also acts as a vehicle for making social distinctions (as in serving certain foods to indicate high regard for a guest) and as culture-wide social markers (as when cuisines serve as distinctive characteristics of different groups). In the form of cuisine, food also produces an aesthetic response that extends well beyond animal pleasure. And finally, for many people in the world, most particularly Hindu Indians, food is a moral substance; eating certain foods and avoiding others is a means of preserving purity, avoiding pollution, and leading a proper life (Appadurai 1981). The multiple functions of food for humans make the understanding of human uses of food and attitudes to food an extremely complex task.
The Psychological Categorization of Food
We are accustomed to biological/taxonomic (species) and nutritional classifications of foods. But for a psychology of food, the important distinctions are those made in the human mind, and these are only weakly related to scientific classifications of foods.
The simplest approach to human food choice is to determine, for any group of people, the types and amounts of foods consumed (or, more easily, purchased). We can call this measure food use. It is of special significance from the point of view of economics, but it has a major shortcoming as a measure of human food selection, since much food use is determined by cost and availability. It would certainly be inaccurate to assume that because a group consumes more of X than Y, they prefer X, when it may be that X is simply more available or less costly.
A more appropriate psychological measure is preference, which indicates, with price and availability controlled, which foods a person or group chooses. Yet it would be a mistake to assume that preference is a direct and infallible measure of liking for particular foods. A dieter, for example, might like ice cream better than cottage cheese but prefer (choose) cottage cheese. Liking, then, is a third way of describing food selection. It usually refers to attitudes about the sensory properties of foods, most particularly the oral sensations (tastes, aromas [flavors], “mouth feel”). Thus, a psychology of human food choice must address the question of why particular humans or groups of humans use, prefer, and like particular foods.
Further analysis suggests that there is a richer psychological categorization of foods for humans. Of all the potential edibles in the world (which essentially means anything that can be gotten into the mouth), a first simple division is between items that are accepted and those that are rejected by any individual or group. However, acceptance or rejection can be motivated by any of three reasons (or any combination of these) (see Table VI.11.1; Rozin and Fallon 1980; Fallon and Rozin 1983).
One reason is “sensory-affective.” This has to do with liking or disliking the sensory properties of a food. Foods that are accepted or rejected primarily on sensory-affective grounds can be called “good tastes” or “distastes,” respectively. If X likes lima beans and Y does not, the reason is almost certainly sensory-affective. Most of the differences in food likes within a well-defined cultural group have to do with differences in sensory-affective responses.
Table VI.11.1. Psychological categorization of acceptance and rejection. Source: Fallon and Rozin (1983).

| Motivation | Rejection | Examples | Acceptance | Examples |
| --- | --- | --- | --- | --- |
| Sensory-affective | Distaste | Beer, spinach | Good taste | Saccharin |
| Anticipated consequences | Danger | Allergy foods, carcinogens | Beneficial | Medicines |
| Ideational | Inappropriate | Grass | Appropriate | Ritual foods |
| Ideational | Disgust | Feces | Transvalued | Leavings of admired ones, or deities |
A second reason for rejecting a food is anticipated consequences. One may reject a food (which we call a dangerous food) because one believes ingestion will be harmful (it may be high in fats, or may contain carcinogens). Or one may accept a food (which we call a beneficial food) because it is highly nutritive or curative (perhaps a vitamin pill or a particular vegetable).
A third, uniquely human reason for food acceptance or rejection is what we call ideational; it is the nature of the food and/or its origin that primarily determines our response. Thus, we reject a very large number of potential foods because we have learned simply that they are not food: paper, tree bark, stones. These (inappropriate) entities may not be harmful and may not taste bad; they simply are not food. This inappropriate category is very large, yet there is a much smaller (appropriate) category on the positive side: foods that we eat just because they are food. Certain ritual foods might fall into this category.
We have now used three reasons to generate three categories of acceptance (good tastes, beneficial, and appropriate) and three of rejection (distaste, danger, and inappropriate). There remains one more major category of rejection that represents, like inappropriate, a fundamentally ideational rejection. We call this category disgust: Although it is primarily an ideational rejection, disgusting items are usually believed to taste bad and are often believed to be harmful. Feces seems to be disgusting universally; in American and many other Western cultures, insects fall within this category as well.
Disgusting items have the unique property that if they touch an otherwise acceptable food, they render that food unacceptable; we call this psychological contamination (Rozin and Fallon 1987). The category of disgusts is large. The opposite positive category, which we call transvalued foods, is very small. These are foods that are uplifting for ideational reasons and which are often thought to be beneficial and tasty. A good example is “prasad” in Hindu culture. Food that has been offered to the gods, and is subsequently distributed to worshippers who believe it to have been partly consumed by the gods, is considered especially desirable (Breckenridge 1986).
A psychological account of food choice would have to explain how foods come to be in any of the eight categories we have generated.
Accounting for Food Preferences and Likes
It must be true that human food attitudes result from some combination of three sources of information or experience: biological heritage (that is, our genes), cultural environment, and unique individual experience. I shall examine each of these contributing causes, with emphasis on the last, most psychological of the three. Of course, the distinction among genetic, cultural, and individual–psychological origins of human preferences is somewhat arbitrary, and there is a great deal of interaction among these forces. Nonetheless, the distinction is very useful.
Genetic Aspects of Food Selection
By their nature, omnivores carry into the world little specific information about what is edible and what is not. I noted taste biases, suspicion of new foods with an opposing tendency to seek variety, and some special abilities to learn about the consequences of food. However, there are individual differences among people that have genetic bases and that influence food selection. A small minority of these directly affect the nervous system and food choice. The best-investigated example is an inherited tendency to taste (or not to taste) a certain class of bitter compounds (identified by one member of this category, phenylthiocarbamide, or PTC). This ability is inherited as a simple, single-locus, Mendelian recessive trait (Fischer 1967) and probably has some modest effect on preferences for certain bitter substances, such as coffee. The incidence of PTC tasting differs in different cultural groups.
More commonly, genetic differences in metabolism affect the consequences that different foods have and, hence, whether they might enter the beneficial or harmful category for any individual. A well-investigated example is lactose intolerance, but others include a sensitivity to wheat protein (gluten) and sensitivity to a potential toxin in fava beans (see Katz 1982; Simoons 1982, for reviews). In the case of lactose intolerance, the inability of most human beings to digest milk sugar (lactose) as adults (a genetically determined trait) accounts for the absence or minimal presence of raw milk and other uncultured milk products in the majority of the world’s cuisines. Those who are lactose intolerant can develop lower gastrointestinal symptoms (cramps, diarrhea) upon ingestion of even relatively moderate amounts of uncultured milk.
The fact remains that although there are genetic influences on food choice, they are rather minor, at least at the individual level. An indication of this is that the food preferences of identical twins are not much (if at all) more similar than are the preferences of fraternal twins (Rozin and Millman 1987).
Culture and Food Selection: Cuisine
Although it has never been directly demonstrated, it seems obvious that the major determinant of any individual’s food preferences is his or her native cuisine. Put another way, if one wanted to guess as much as one could about a person’s food attitudes and preferences, the best question to ask would be: What is your native culture or ethnic group? Not only would the answer be very informative, but no other question one could ask would be remotely as informative. In terms of the taxonomy of food acceptances and rejections (Table VI.11.1), cultural forces generally determine the ideational categories and strongly influence the good tastes–distastes and danger–beneficial categories.
This influence, obviously acquired in childhood, can be generally described as a body of rules and practices that defines what is appropriate and desirable food and how it is to be eaten. The most systematic description of the food itself comes from Elisabeth Rozin’s (1982, 1992) analysis of cuisine. She posits three components that give a characteristic ethnic quality to any dish: the staple ingredients, the flavor principles (for example, the recurrent soy sauce, ginger root, and rice wine combination in Chinese food), and the method of processing (for example, stir-frying for many Chinese foods).
Furthermore, cuisines specify the appropriate ordering of dishes within a meal (Douglas and Nicod 1974) and the appropriate combinations and occasions on which particular foods are consumed (Schutz 1988). All of these factors, along with rules about the manner and social arrangement of eating and the importance and function of food in life, are major parts of the human food experience and are basically part of the transmission of culture. That these cultural forces are strong is indicated by the persistence of native food habits, sometimes called the conservatism of cuisine in immigrant groups. Generations after almost all traces of original-culture practices are gone, the basic food of the family often remains the native cuisine (see, for example, Goode, Curtis, and Theophano 1984).
Psychological Determinants of Food Selection
Within the minimal constraints of biology and the substantial constraints and predispositions imposed by culture, any individual develops a set of culture-appropriate, but also somewhat unique, food preferences. I shall now address what little we know about how this happens.
Early Childhood: Milk and Weaning
Humans and other mammals begin life with one food: milk. It is both nutritive and associated with maternal nurturance. A first trauma in life is weaning away from this “superfood,” which is accomplished in culturally variable ways. There is no evidence that humans develop a special attachment to this earliest food, although such a permanent attachment (liking) has been documented with respect to species recognition in a number of species: Exposure to a member of the species (usually a parent) at a critical period in development leads to a permanent attachment to, and preferential recognition of, that object. But such imprinting on the earliest food would be highly maladaptive for mammals, because they would then spend their lives seeking an unattainable food. Milk, for example, only became available as a food for some humans after infancy with the development of dairying, which occurred relatively recently in human history (see Rozin and Pelchat 1988, for further discussion).
Neophilia and Neophobia
As omnivores, humans show both a tendency to be interested in new foods and a fear of new foods. Both tendencies have benefits and risks. The neophilic (attraction to new food) tendency manifests itself not only in an interest in genuinely new foods but also in a desire for variety in the diet. Thus, humans (and other animals) tend to come to like a food less if they eat it almost exclusively (boredom effect) and tend to eat more when confronted with a variety of foods. This general phenomenon has been called sensory specific satiety and has been subjected to systematic analysis by B. J. Rolls and her colleagues (Rolls et al. 1985).
Below the age of about 2 years, children seem to have little neophobia and, quite willingly, put almost anything into their mouths (reviewed in Rozin 1990c). Presumably, parental monitoring of their food access controls this otherwise dangerous tendency. After 2 years of age, at least in American culture, children sometimes enter a neophobic phase, in which they refuse all but a few types of food. We do not know the origin or adaptive value (if any) of this pattern. In American culture, some adults also find an extremely limited range of foods acceptable, though most children with this extreme neophobia recover from it. Neophobia varies greatly across individuals in North America, and a scale has been developed to measure this tendency in both children and adults (Pliner and Hobden 1992).
Food Preferences and the Adult Taxonomy
We know very little about how, within any culture and individual, specific foods become categorized in accordance with the taxonomy presented in Table VI.11.1. I review here what is known.
Best understood is the origin of the distinction between the danger and distaste categories. Generally, if a relatively new food is consumed, and this is followed within hours by unpleasant symptoms that include nausea, the food comes to be a distaste. However, if nausea is not a part of the symptoms, the result is usually that the food enters the danger category; that is, it is avoided because of anticipated harm, but is not distasteful (Pelchat and Rozin 1982). The taste-aversion learning originally described in rats by J. Garcia and his colleagues (1974) seems to be an example of acquired distaste mediated by nausea. Hence, nausea appears to be a magic bullet, producing distastes. This is not, however, to say that this is the only way in which distastes can be produced. There is evidence, for example, that when a food is accompanied by a bad taste (for example, when a bitter taste is added to a food), the food itself becomes distasteful, even when the originally unpleasant taste is removed (Baeyens et al. 1990).
So far as we know, there is no magic bullet like nausea on the negative side that makes a potential food into a good taste (a liked food). However, this happens very frequently. There is evidence for three processes that contribute to acquired likes.
One of these is mere exposure. That is, simple exposure to a food (meaning its ingestion) is usually accompanied by an increased liking for the food (Pliner 1982). The mechanism may require nothing more than the exposure itself; the effects of mere exposure on liking have been shown in many domains other than food (Zajonc 1968).
A second process that accounts for acquired likes is classical (Pavlovian) conditioning. That is, if an already liked entity (called an unconditioned stimulus) is paired with (that is, is contingent with) a relatively neutral potential food, the potential food tends to become more liked. This process of change in liking for a stimulus as a result of contingent pairing in humans has been called evaluative conditioning (Martin and Levey 1978; Baeyens et al. 1990). One example involves pairing of a flavor (conditioned stimulus) with the pleasant experience of satisfaction of hunger (unconditioned stimulus). In a laboratory setting, humans have been shown to increase liking for a food the ingestion of which is followed by satiety (Booth, Mather, and Fuller 1982).
Another example involves presentation (contingent pairing) of one flavor of herbal tea (conditioned stimulus) with sugar (unconditioned stimulus) and another flavor of tea without sugar. After a number of presentations, subjects tend to prefer the flavor of the tea that was paired with sweetness, even when that flavor is presented without sugar (Zellner et al. 1983).
Within the context of evaluative conditioning, it is likely that the major class of potent (unconditioned) stimuli that influence food likes is social. In particular, the perception (often, perhaps, by facial expression) of positive affect in another (respected) person in conjunction with consumption of a particular food probably makes that food more liked by the “observer.” Recently, a first study has shown such an effect in the laboratory. Subjects who observe a person indicating pleasure in drinking a beverage from a glass of a particular shape (as opposed to other shapes) come to prefer that particular glass shape (Baeyens et al. 1994).
A third process that influences liking is also social, but it does not operate through conditioning. Rather, the perception of liking or value in a respected other operates more directly to influence one’s own attitudes. For example, when a respected other uses a food as a reward (indicating that he/she values the food), a child tends to come to prefer (like) that food more (Birch, Zimmerman, and Hind 1980). Furthermore, there is evidence, both from the general literature in social psychology (Lepper 1983) and the development of food preferences (Birch et al. 1982), that the perception that the self or others consume a food for clear personal gain (nutrition, social advantage) is destructive to the development of liking. That is, when children perceive that a food preference (in themselves or others) is “instrumentally” motivated, that is, connected to a specific reward, they tend not to shift their liking for the food in question.
Reversal of Innate Aversions
Humans are almost unique among mammalian (if not all) generalists in developing strong likes for foods that are innately unpalatable. These include such common favorites as the oral or nasal irritants in chilli peppers, black pepper, ginger, horseradish, and alcoholic beverages or tobacco, and the bitter tastes in certain fruits, tobacco, most alcoholic beverages, coffee, chocolate, burnt foods, and so forth. Reversals of innate aversions seem to occur in all cultures (cuisines). They might well be accounted for in terms of the operation of the factors (mere exposure, evaluative conditioning, social influence) that have already been identified. But there are also some special mechanisms of acquired liking that require an initially negative response. Two have been identified, with particular reference to the acquisition of a liking for the innately negative oral burn of chilli pepper (Rozin 1990a).
In the service of maintaining the body at an optimum level of function, humans and other species seem to have compensatory mechanisms, which neutralize disturbances by producing, internally, events that oppose the original event. These internally generated events are called opponent processes (Solomon 1980). For example, it is widely believed that endorphins, natural opiatelike substances, are released in the brain to reduce a chronic pain experience. R. L. Solomon (1980) and others have suggested that opponent processes become more potent as they are repeatedly stimulated and have used this feature to account for the basic process of addiction. Normally, when one experiences something painful, one withdraws and avoids the situation in the future. However, in the case of the chilli pepper and other culturally supported innately negative entities, there is a strong cultural force that reintroduces the aversive experience. Thus, children end up repeatedly sampling the unpleasant burn of chilli pepper, permitting the development of a strong opponent process, which may grow to the extent that it over-compensates for the pain and produces net pleasure (perhaps by oversecretion of endogenous opiates) (Rozin 1990a). Hence, the negativity of an innately aversive substance may provide the conditions for an acquired liking.
A more cognitive account of the reversal of innate aversions engages a uniquely human interest in mastering nature. The signals sent to the brain from innately unpalatable substances impel the organism to reject them; they are adaptively linked to a system that rids the organism of harmful entities. But many innately unpalatable substances are not harmful, at least in modest doses (chilli pepper, ginger, coffee). Though chilli pepper makes the novice feel as if his palate will peel away, it is, in fact, harmless. It is possible that the realization that a substance–experience elicits bodily defensive mechanisms, but is actually safe, is a source of pleasure. We call this pleasure that comes from mastery over nature benign masochism (Rozin 1990a). A more striking instance is the unique human activity of enjoying bodily manifestations of fright (as opposed to pain): People presumably enjoy roller coasters because their body is frightened, but they know that they are actually safe.
Disgust: The Food-Related Emotion
The strongest reaction to a potential food is surely the revulsion associated with disgusting entities (Table VI.11.1). The emotion of disgust elicited in these situations is characterized by withdrawal, a sense of nausea, and a characteristic facial expression. According to our analyses (Rozin and Fallon 1987; Rozin, Haidt, and McCauley 1993), following on the seminal contributions of Charles Darwin (1872) and A. Angyal (1941), disgust is originally a food-related emotion, expressing “revulsion at the prospect of oral incorporation of offensive substances. The offensive substances are contaminants, that is, if they contact an otherwise acceptable food, they tend to render that food unacceptable” (Rozin and Fallon 1987: 23). Our analysis holds that disgust originates from distaste (and shares, to some extent, a facial expression with distaste), but is in fact quite distinct from distaste in adults. For example, the idea of a distasteful entity in one’s stomach is not upsetting, but the idea of a disgusting entity in one’s stomach is very upsetting.
The intensity of the disgust experience derives, in part, from the “you are what you eat” principle, since, by this principle, one who ingests something offensive becomes offensive. Virtually all food-related disgust entities are of animal origin. We have proposed that the core disgust category is all animals and their products (Rozin and Fallon 1987). In line with Angyal’s suggestion (1941), we agree that feces is the universal disgust substance, and almost certainly the first disgust, developmentally. The potency of disgust elicitors is evidenced by the contamination property, a feature that seems to characterize disgust cross-culturally.
Disgust provides an excellent example of how a food system becomes a template or model for more elaborated systems. The emotion of disgust seems, through cultural evolution, to have become a general expression of anything offensive, including nonfoods. Our analysis suggests that principal elicitors of disgust cross-culturally include reminders of our animal origin (such as gore, death), a wide range of interpersonal contacts (such as wearing the clothing of undesirable persons), and certain moral offenses often involving purity violations or animality (Rozin et al. 1993).
Transmission of Food Preferences and Attitudes
Most available models for the acquisition of food preferences would predict a rather high correlation between the food preferences of parents and those of their children. For the first years of life, parents are the primary teachers and exemplars of eating and food choice for children. In addition, to the extent that there are individual differences in preferences that have genetic contributions, one would expect parent–child resemblances. Yet, the literature (reviewed in Rozin 1991) indicates very low correlations between the food preferences of parents and their children (in the range of 0 to 0.3), whether the younger generation in the studies is one of preschoolers or of college age. There is no reasonable account for these low correlations; they do not result from the fact that the parents have different preferences for the foods under study, so that the child gets “mixed messages.” Parent–child correlations remain low even when the parents are very similar in their preferences for the foods under study (Rozin 1991). Alternative sources of influence on children include peers and siblings (for whom there is evidence of some substantial resemblance [Pliner and Pelchat 1986]), and the media. The effectiveness and mode of operation of these and other influences have yet to be evaluated.
The low parent–child correlation for food preferences is part of what has been called the family paradox (Rozin 1991). The paradox is extended by the fact that there is no consistent evidence that mothers (as principal feeders and food purveyors) show higher child resemblance than do fathers, nor that same-sex parents have greater influence than opposite-sex parents—as views about identification might suggest (Pliner 1983; Rozin 1991).
The puzzling lack of parent–child resemblance for food (and other) preferences contrasts with much higher parent–child resemblance for values (Cavalli-Sforza et al. 1982; Rozin 1991). Apparently, values (such as religious or political attitudes) are more subject to parental influence. In this regard, it is of interest that food preferences may, under some circumstances, become values. Food is intimately related to moral issues in Hindu India, such that a leading scholar of this area has described food in Hindu India as a “biomoral substance” (Appadurai 1981). But even in Western cultures, specific foods may enter the moral domain. For example, vegetarianism for some has moral significance. It would be interesting to see if parental influence is greater on children for whom a preference has been moralized than for those for whom the preference is not moralized.
Attitudes and Choice: An Alternative Approach
The framework I have provided for examination of human food selection emphasizes the psychological categorization of foods and the combined influences of biological, psychological, and cultural factors on food selection. The emphasis has also been on the developmental history of food preferences. There are alternative formulations from within both psychology and marketing. A particularly prominent approach, based on the study of attitudes and their relation to behavior, examines the factors currently influencing selection of a particular food, independent of the ultimate origin of these causes. In other words, this approach models what would be going on in the head of a person in a supermarket faced with a choice between two products. For convenience, the forces acting on a person at a time of choice can be categorized as those attributable to the food and its properties, the person, and socioeconomic factors (Shepherd 1988).
The theory of reasoned action (Ajzen and Fishbein 1980) has probably been the most successful framework for analysis of choices in terms of the action of contemporary forces. According to this approach, the principal cause of a behavior (for instance, selecting a particular food) is the intention to behave in such a manner, and the principal predictors of the intention are personal attitudes to the behavior (for instance, whether it is seen as good or bad) and subjective norms (the perceived opinions of significant others about whether the behavior should be performed). Personal attitudes are themselves a function of a set of beliefs about the outcome of the behavior (food choice) in question (will it promote overweight? will it produce a pleasant taste?) and the evaluation of that outcome (is it good or bad?). A set of salient beliefs is determined, and the sum of the products of each belief and its evaluation constitutes the attitude score. The subjective norm is predicted in the same manner by the sum of normative beliefs about whether specific relevant people or groups think that the target person should perform the behavior, each norm multiplied by the motivation to comply with it.
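The expectancy-value arithmetic described above can be sketched in a few lines of code. This is only an illustration: the particular beliefs, numerical scale values, and regression weights below are invented for the example and are not drawn from any actual study.

```python
# A minimal sketch of the Ajzen-Fishbein expectancy-value computation.
# All numbers here are hypothetical, chosen only to illustrate the model.

def attitude_score(beliefs_and_evals):
    # Attitude = sum over salient beliefs of
    # (belief strength) x (evaluation of that outcome)
    return sum(b * e for b, e in beliefs_and_evals)

def subjective_norm(norms_and_motivations):
    # Subjective norm = sum of
    # (normative belief) x (motivation to comply with that referent)
    return sum(n * m for n, m in norms_and_motivations)

def intention(attitude, norm, w_attitude=0.6, w_norm=0.4):
    # Intention is a weighted combination of attitude and subjective norm;
    # in real applications the weights are estimated empirically
    # (e.g., by regression), not fixed in advance as they are here.
    return w_attitude * attitude + w_norm * norm

# Hypothetical choice: whether to buy whole milk (scales run -3 to +3).
beliefs = [(+3, +3),   # "it will taste good" (very likely) x "good taste is very good"
           (+2, -2)]   # "it is high in fat" (likely) x "high fat is bad"
norms = [(+2, +1)]     # "my family thinks I should buy it" x modest motivation to comply

a = attitude_score(beliefs)   # 3*3 + 2*(-2) = 5
n = subjective_norm(norms)    # 2*1 = 2
print(intention(a, n))        # 0.6*5 + 0.4*2 = 3.8
```

The worked numbers show why personal attitudes can dominate: with more salient beliefs than normative referents, the attitude term tends to contribute more to the predicted intention, consistent with the findings described below for Western samples.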
A number of studies (see Tuorila and Pangborn 1988; Towler and Shepherd 1992) have applied the Ajzen-Fishbein model to food choices, with some success, in terms of prediction of consumption from (usually) questionnaire data on beliefs, evaluations of beliefs, subjective norms, and motivation to comply with those norms. Target foods have included a variety of dairy products, meats, and other high-fat foods. The results, on subjects from the developed Western world, show a tendency for personal attitudes to be better predictors of choice than subjective norms. They also suggest that the taste of a food is the most important predictor of attitudes, with health effects often the second-best predictor (see Shepherd 1989b for a general discussion).
Food Selection Pathology in America
Throughout history, food has been viewed as not only the principal source of sustenance but also a major source of pleasure. In the United States, a surfeit of food, a developing concern about the long-term harmful effects of certain foods, and a standard of beauty (at least among the white middle class) far thinner than the average woman’s body have all contributed to an ambivalent, sometimes negative attitude to food (Polivy and Herman 1983; Rodin et al. 1985; Becker 1986; Rozin 1989; Rodin 1992).
Consumption of foods, especially those that are highly tasty and fattening, becomes for many a source of fear and guilt, rather than pleasure. The focal group for this (what I will call) pathology of food is white middle-class American women, among whom there is considerable concern about obesity and, consequently, considerable efforts aimed at dieting (Rodin 1992). One consequence of this is that eating disorders, such as anorexia nervosa and various forms of bulimia, are increasing in American women, whereas such disorders are at much lower levels in American males and virtually absent in developing countries (McCarthy 1990).
One cause of this hyperconcern about food, as mentioned, is the standard of thinness accepted by American women; unlike American men, American women see themselves as substantially overweight (Rozin and Fallon 1988). Furthermore, white middle-class American women are more worried about being above ideal weight than are men; that is, even when men recognize that they are overweight, food is less likely to become a source of fear and guilt for them.
Although food is obviously a necessity for survival, potentially harmful effects on health of certain foods have come to be a matter of concern for many Americans. This recent concern results from a number of factors:
- The reduction in deaths from infectious diseases has produced a corresponding increase in deaths from degenerative diseases, shifting the focus of food risk from acute to chronic effects.
- Epidemiological evidence on harmful effects of food, which is accumulating at a rapid rate, is conveyed to a public fascinated with such information in a salient—even sensational—way by the media.
- Americans have become obsessed with longevity or, perhaps, immortality.
- Americans (and others) are unprepared to deal with and evaluate the constant wave of food (and other) risk information to which they are exposed. They have never been taught cost–benefit analysis or the interpretation of very low probabilities, and they hold beliefs about food that are often incorrect. For example, many believe that if something is thought to be harmful at high levels, then it is also harmful at trace levels. Such dose insensitivity causes some to shun even traces of salt, fats, or sugars (Rozin, Ashmore, and Markwith 1996). In addition, many Americans falsely believe that “natural” foods are invariably healthy, and processed foods are likely to be harmful. In fact, over the short term, the opposite is clearly true (that is, acute illness is more likely to be caused by natural foods), and there is reason to believe that in many cases, the long-term risks posed by the consumption of natural foods are higher than those posed by processed versions of the same foods (Ames et al. 1987). The centrality of food in life and the great concern about putting things in the mouth and body probably cause people to overestimate long-term food risks relative to other risks, such as driving.
The result is that eating, which is one of the greatest sources of pleasure and health that humans can experience, has become a nightmare for some and, in fact, threatens to become an American cultural nightmare. We can only hope that this obsessive concern does not, like so many other attitudes and beliefs of Americans, spread around the world.