Marco Antonio Viniegra Fernández. Scientific Thought: In Context. Editor: K Lee Lerner & Brenda Wilmoth Lerner. Volume 1, Gale, 2009.
Few modern scientific disciplines can boast a tradition as long and vast as that of nutrition. From very early in history, physicians and philosophers alike linked food with wellbeing. The ancient Greek philosopher Anaxagoras (c.500-c.428 BC) argued that because food facilitated the body’s growth, it must contain generative components he called “homoeomeries.” This was one of the first known mentions of nutrients and an expression of the idea that the primary components of life are ingested.
Using diet as a way to preserve or regain health was a staple of the Western world for millennia. But scientific advances by chemist Antoine Lavoisier (1743-1794) and experimental physiologist François Magendie (1783-1855) during the late eighteenth century laid the foundation for modern nutrition. Thanks to them, ancient ideas about the body’s function were transformed by new discoveries and new ways of thinking. By the nineteenth century, engineering the food system through dietetics had become a method for mastering nature and society. Nutrition was believed to hold the key to life itself.
Since 1785, when Claude Berthollet (1748-1822) first identified nitrogen in animal tissues, 12 vitamins and roughly the same number of minerals have been discovered. Many more are waiting to be studied; these include the so-called phytonutrients found in plants, such as lycopene, which makes tomatoes red. New genetic discoveries may revolutionize our understanding of the effect that food has on our bodies. Nutrition is a living science with a rich history, and all of this started with a simple question that still guides the science of nutrition today: How is the food we eat used in our bodies?
Historical Background and Scientific Foundations
Long before Berthollet, humans learned how to start fires, create tools, hunt, fish, and harvest plants—and, around 8000 BC, how to cultivate them. Our prehistoric ancestors also learned to distinguish between poisonous and therapeutic plants. As early as 3500 BC they had invented the wheel, were aware of the seasons, and recognized the need for specialized medical skills, including herb gathering, bone setting, and childbirth.
Medicine became a way to counter the dangers of life and a way to perceive the surrounding world. Wise men and women were not only recognized for their healing skills, but were also thought to be able to affect and control the mystical powers behind illness and life. “Healing” invoked those mystical powers, and food was part of this mystic process—administering the proper herbs; cleaning, preparing, and blessing them; and honoring the forces of nature.
The Egyptians and Babylonians possessed a complex religious system in which food was part of healing: incantations were used to free the body from the evil spirits thought to cause disease, and food helped the body in this fight. By the fourth century BC, a group of texts written over the course of two centuries by physicians of the Hippocratic medical school—although attributed to Hippocrates (c.460-c.375 BC) himself—had coalesced into what is known today as the Hippocratic corpus. These influential writings defined health as a state of balance of the body’s internal fluids or humors.
Nutrition was part of a regimen or “diet,” which involved specific exercises, sleep patterns, and even sexual customs. This was believed to be a way to maintain or recover health. People were supposed to eat according to the characteristics of the food itself and their own constitutions. For example, some people were considered to be of cold constitution, and therefore required to follow a diet of hot ingredients, such as beef, to balance the humors. Other people were instructed to follow a diet that complemented a cold, wet, or dry constitution. The regimen was one of the most important innovations of this time: it allowed people to exercise control over their bodies and their health. Medicine was defined by the Hippocratic authors as a theoretical science separate from magic and religion.
In the second century AD a Greek physician named Galen of Pergamum (AD 129-c.216) gave a formal structure to the Hippocratic regimen. He defined the four specific humors that supposedly made up the body, each with distinct qualities: Blood was hot and wet, phlegm was cold and wet, yellow bile was hot and dry, and black bile was cold and dry. Harmonizing the ideas of his day’s authorities—Hippocrates, Plato (428-348 BC), and the Stoics—Galen perfected the regimen and created a medical theory with both physiological and moral relevance that characterized Western medicine and dietetics and remained largely unchanged for more than 1,600 years. Food was the quintessential medical treatment.
The Chemical Revolution
By the eighteenth century, science in general and chemistry in particular had become domains reserved for specialists who shared a complex heritage of knowledge inspired by classical and medieval authorities and were, therefore, holders of unchallengeable, unique truth. Antoine Lavoisier (1743-1794) sought to transform the ancient language of chemists into a more accessible system. His work emphasized experimentation, and his studies of human metabolism earned him the title of father of chemistry and nutrition.
Lavoisier’s measurement of carbon dioxide in exhaled breath challenged the accepted idea that respiration’s purpose was to cool the heart and eliminate remaining ingested materials. With the help of Pierre Simon Laplace (1749-1827), Lavoisier demonstrated that body heat was caused by food oxidation. As a result of their work, scientists began to study the contents of food and its effects. Their discoveries led to a new concept of the human body as a system of chemical reactions, a machinelike organism composed of specific, identifiable, and measurable elements, such as minerals and gases. Chemistry came to be seen as the key to understanding this machine.
In 1785 Claude Berthollet discovered the presence of ammonia vapor in decomposing animal matter, proving nitrogen’s presence in animal tissues. By the early 1800s François Magendie had shown that food without nitrogen couldn’t support animal life, which meant that, unlike plants, animals were unable to absorb atmospheric nitrogen to supplement a diet low in nitrogen.
Nitrogen became the main criterion for measuring food value and diet, and it was believed to be the body’s main source of energy. Furthermore, a state called nitrogen balance, measured as the difference between the amount of nitrogen taken in and the amount excreted or lost, became the ideal for both humans and animals.
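The nitrogen-balance bookkeeping described above can be sketched in a few lines of Python. Note the assumptions: the 6.25 protein-to-nitrogen conversion factor (dietary protein is roughly 16% nitrogen) is a modern convention rather than part of this nineteenth-century story, and the intake and loss figures are invented for illustration.

```python
# Nitrogen balance: intake minus losses, the nineteenth-century ideal of equilibrium.
# The 6.25 conversion factor and the gram figures below are illustrative assumptions.

def nitrogen_from_protein(protein_g: float) -> float:
    """Approximate grams of nitrogen in a given mass of dietary protein (~16% N)."""
    return protein_g / 6.25

def nitrogen_balance(intake_g: float, urinary_g: float, other_losses_g: float = 0.0) -> float:
    """Positive = net retention (growth); negative = net loss (wasting)."""
    return intake_g - (urinary_g + other_losses_g)

intake = nitrogen_from_protein(80)                        # 80 g protein -> 12.8 g N
balance = nitrogen_balance(intake, urinary_g=11.0, other_losses_g=2.0)
print(f"{balance:+.2f} g N/day")                          # slightly negative: net loss
```

A subject in "nitrogen balance" would show a value near zero: losses fully replaced by intake, the state nineteenth-century physiologists took as the dietary ideal.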
In 1803 English meteorologist and chemist John Dalton (1766-1844) first outlined modern atomic theory. While researching the physical properties of the atmosphere and other gases, he came to the conclusion that all substances were chemical combinations of indivisible particles or “atoms.” According to Dalton’s theory, atoms of one element could combine with atoms of others, forming compounds like ammonia, water, and carbon dioxide. These ideas not only revolutionized chemistry, but also influenced nutrition, since they made it clear that food contained a series of elements that, combined, constituted the body’s nutritional requirements.
By the mid-1800s, science was considered the tool with which man would master nature and exploit the physical world—perhaps even allow one nation to dominate others. Nutrition became a tool of social policy. Anything that was faster and bigger was identified as better and healthier: bigger bodies with bigger muscles were identified with the perfect soldier, the untiring worker, and the powerful generative male; larger populations of healthy individuals were associated with social progress and the moral and physical strength of superior nations. Quantity became the focus of nutrition at a time in which abundance of food was associated with power and supremacy.
During the nineteenth century the belief that science was an instrument for controlling nature gained popularity. Nutrition acquired a narrower dimension and a wider scope, becoming an empirical biochemical science as well as a political tool. The perfect example of this new path followed by nutrition was the concept of protein.
Food analysis, under the atomic theory of chemical compounds, led to the idea of “animal substances” that were the primary materials for animal and human bodies. Different forms of these basic building blocks were identified in Germany and Britain (albumin, fibrin, casein, etc.). However, a Dutch physician named Gerrit Mulder (1802-1880) suggested in 1839 that all these substances were nothing but compounds of a common radical combined with different proportions of phosphorus, sulfur, or both. This hypothetical radical was named “protein,” from the Greek proteios, meaning “first place.”
These ideas were welcomed enthusiastically by Justus von Liebig (1803-1873), the founder of biochemistry. In his book Animal Chemistry or Organic Chemistry in its Application to Physiology and Pathology, published in Germany in 1842, he argued that because he could not find any presence of fat or carbohydrate in his analyses of muscles, the energy needed for their movement must come from an explosive breakdown of protein molecules, which resulted in urea production and excretion. Protein was then considered the only true nutrient; fat and carbohydrates were believed to be useful only as a sort of protection against the effects of oxygen in the body. The idea of a single nutrient as the source for bodily growth was very attractive. This nutrient became associated with many ideas about the body and society: the powerful body of the soldier, the strong body of the worker, and the energetic bodies of the young were all seen as the result of a diet abundant in protein. The health and growth of the nation was also associated with the abundance of foodstuffs believed to contain larger amounts of such a nutrient.
Edward Smith (1819-1874), a British physician concerned with the welfare of prisoners submitted to forced labor, conducted experiments that revealed some problems with von Liebig’s theories of protein. Smith measured the urea excretion of prisoners both after their day’s work and during their subsequent rest days, and found no difference in the amounts. The constant amount of urea, regardless of physical activity, contradicted Liebig’s theory that the body secreted higher levels of urea as a consequence of protein breakdown.
Further research by German physiologist Adolf Fick (1829-1901) and German chemist Johannes Wislicenus (1835-1902) contradicted Liebig’s results as well: after following a very low nitrogen diet, they collected their urine during and after a climb following a relatively easy path to a hotel atop a Swiss mountain. Their analysis showed that, according to von Liebig’s theory of nitrogen, protein breakdown, and urea excretion, they should not have been able to get enough energy from their diet to make the climb. These results were confirmed in Britain by Fick’s brother-in-law, English chemist Edward Frankland (1825-1899), who had developed a technique for measuring the heat of combustion of foods and urea in the body. Frankland argued that the extra energy must have come from sources in their diets other than protein.
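Frankland’s combustion measurements anticipated the modern energy accounting of food. A minimal sketch using today’s Atwater conversion factors (which postdate this episode; the day’s gram figures are invented for illustration) shows the arithmetic behind his argument: on a low-nitrogen diet, protein supplies only a small fraction of the total energy, far too little to power a mountain climb.

```python
# Modern Atwater conversion factors, in kcal per gram. These factors and the
# example intake below are illustrative assumptions, not data from the text.
ATWATER = {"protein": 4, "carbohydrate": 4, "fat": 9}

def dietary_energy(grams: dict[str, float]) -> float:
    """Total metabolizable energy of a day's macronutrients, in kcal."""
    return sum(ATWATER[m] * g for m, g in grams.items())

# A hypothetical low-nitrogen day like the one Fick and Wislicenus ate
day = {"protein": 20, "carbohydrate": 350, "fat": 100}

total = dietary_energy(day)                          # 80 + 1400 + 900 = 2380 kcal
from_protein = ATWATER["protein"] * day["protein"]   # only 80 kcal from protein
print(f"{total:.0f} kcal total, {from_protein} kcal from protein")
```

Under Liebig’s theory, muscular work could draw only on that small protein share; the surplus the climbers actually expended had to come from fat and carbohydrate, just as Frankland concluded.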
Nevertheless, protein remained the key to both industrialization and social advantage for most of the nineteenth century. Philanthropists and politicians in both Europe and the United States were united under the idea, strongly publicized by von Liebig and his followers, that chemistry was the solution for plant, animal, and human breeding—and would ultimately reveal the secrets of life itself. Nutrition became as much a matter of social policy as it was a matter of chemistry.
Disease as Deficiency
During the late nineteenth and early twentieth centuries, diseases caused by dietary deficiencies were targeted by nutritional science. As diseases became identified with a specific dietary lack, scientists, politicians and the general public began to regard all illness as a consequence of deficiency. Nutrition research began to affect political thought: a rich country needed strong citizens to work and increase the nation’s economic and political resources. A proper diet became an inseparable component of any political agenda. Industrialized nations like Britain, the Netherlands, and the United States adopted social policies that treated food as a means to better development and a more prominent place in the world.
In 1747 British naval physician James Lind (1716-1794) had performed the first known scientific nutrition experiment, discovering that citrus fruit cured scurvy, a deadly and painful disorder caused by a lack of vitamin C that sailors often developed on long voyages. Even though Lind published his findings in 1753, the discovery was ignored for many years. Similar outbreaks of scurvy occurred during the 1840s in several English and Scottish prisons after they stopped serving potatoes, and serious outbreaks spread across Europe after a fungus destroyed the potato harvest. The relationship between diet and the disease became clear, and further research showed that green vegetables were an effective treatment when neither potatoes nor fruit were available. So even though the vitamin C contained in fruits and vegetables would not be identified until the 1930s, scientists were able to associate the disease with specific dietary elements.
The development of the germ theory of disease, and the possibility that disease-causing microorganisms could be isolated and treated, relieved an enormous amount of human suffering. Germ theory became much more than science; it became a way of thinking that touched every aspect of society. Unfortunately, it also caused some deficiency diseases to be attributed to microbes.
After the discovery of iodine in 1811 by the French chemist Bernard Courtois (1777-1838), goiter (a swelling in the neck due to an enlarged thyroid gland) was thought to be caused by an iodine deficiency. But because the treatment required high doses of often-toxic iodine it was abandoned, and goiter was attributed to an unknown infectious agent or a microorganism. Because these types of diseases were difficult or even impossible to reproduce in animals, research became even more difficult.
An interesting consequence of the research on dietary deficiency diseases was the notion of vitamins. Dietary deficiencies were discovered in the early nineteenth century, but it was the Dutch physician Gerrit Grijns (1865-1944), working on beriberi in Java (now part of Indonesia) at the end of the century, who proved that the disease is caused by a nutritional deficiency (now known to be of vitamin B1) and first proposed that some essential organic nutrients existed only in very small quantities.
Other findings followed as scientists realized that something other than proteins and minerals was necessary for an adequate diet. By 1906 Frederick Gowland Hopkins (1861-1947), professor of biochemistry at Cambridge University, had identified the amino acid tryptophan and noted that it was necessary for the survival of mice. He proposed the idea of “accessory food factors” essential to health that the body cannot synthesize.
The Golden Age of Nutrition
While researching beriberi, Polish-born biochemist Casimir Funk (1884-1967) theorized, in 1912, that the anti-beriberi factor would be an organic base that contained an amine group. He also suggested that diseases like pellagra, scurvy, and rickets were also caused by deficiencies of other factors that had yet to be discovered. He called these “vitamines” (the “e” was dropped after it was shown that they were not all amines). Vitamins became the major research topic in nutrition for many decades to come, a period often referred to by modern nutritionists as the “Golden Age of Nutrition.”
Working at the University of Wisconsin in 1913, American biochemist Elmer McCollum (1879-1967) discovered the first vitamin, which he called “fat-soluble factor A,” in butter and cod liver oil. Four years later, he and his colleagues discovered “water-soluble factor B” (now recognized as the vitamin B complex) in milk whey. Incorporating Funk’s term of “vital amine,” McCollum later named them “vitamine” A and “vitamine” B. McCollum published a book called The Newer Knowledge of Nutrition in 1918, and the hunt for other vitamins began. The roster was completed in the 1940s.
McCollum developed the purified diets still used today in research (mixtures made only from major nutrients like protein, carbohydrates, fat, and minerals in forms as pure as possible). Furthermore, he switched from cows and horses to mice and rats for his research at the University of Wisconsin, introducing a more laboratory-based model—research conducted in an enclosed environment, where most external factors can be controlled.
In 1919 British physician and pharmacologist Sir Edward Mellanby (1884-1955) applied McCollum’s studies to his work with dogs suffering from rickets, but he wrongly identified the disease as a vitamin A deficiency because it could be cured with vitamin A-rich cod liver oil. In 1922, however, McCollum and his team cured rickets with cod liver oil in which the vitamin A had been destroyed. When they looked for an explanation for this, they discovered vitamin D.
That same year, H.M. Evans and Katherine Bishop, while studying rats at Berkeley, discovered vitamin E, which is essential for growth and pregnancy. After further study, the vitamin was isolated in 1935 and called tocopherol (from Greek roots meaning “to bear offspring”). In 1936 the Swiss chemist Paul Karrer (1889-1971) synthesized vitamin E and received the 1937 Nobel Prize in chemistry for his work on several vitamins. In 1935 the Australians Eric J. Underwood (1905-1980) and Hedley R. Marston (1900-1965) independently discovered the need for cobalt in the diet, and by 1936 the American physician Eugene Floyd Dubois (1882-1959) had linked work and school performance to caloric intake.
Since the discovery of proteins, much research had focused on finding the food components capable of maximizing growth rates, on the assumption that bigger was better. But in 1917 American biochemist Lafayette Mendel (1872-1935) and his colleagues at Yale reported that restricting the food supply of female rats during their first year of life led to a longer life span. The rats in question also remained fertile for a longer time and had vigorous offspring. Clive McCay (1898-1967), one of Mendel’s students and, in 1927, a professor at Cornell University, obtained similar results, particularly with male rats. He commented, afterward, that “it seems little short of heresy to present data [supporting] the ancient theory that slow growth favors longevity.”
Nutrition and Social Policy
In 1941 the first recommended dietary allowances (RDAs) were established by Lydia J. Roberts (1879-1965), Hazel K. Stiebeling (1896-1989), and Helen S. Mitchell at the National Research Council (the working arm of the United States National Academy of Sciences and the United States National Academy of Engineering). The idea behind the RDAs was the need for dietary standards, particularly during wartime rationing, when they would be used for nutrition recommendations for the armed forces, civilians, and overseas populations needing food relief. RDAs were also meant to explain food labels and new food products; compose diets for schools, prisons, hospitals, or nursing homes; and inform health policy makers and public health officials. Nutrition was a state business, and social policy was its expression.
In Great Britain, the government implemented proposals by nutritional scientists such as John Boyd Orr (1880-1971), Jack Drummond (1891-1952), and Hugh Macdonald Sinclair (1910-1990) to improve diet and distribute better food to a larger number of people. Their nutritional prescriptions became an essential part of the 1939-1945 war effort, when the national food system was engineered to make it more nourishing, and Orr’s proposals developed into social programs. Orr became the first director general of the United Nations Food and Agriculture Organization, and he won the Nobel Peace Prize in 1949 for his advocacy of world food supply equity. His positions have been influential all over the globe.
New Approaches to Nutrition
During the 1960s a research team from Vanderbilt University studying dwarfism and hypogonadism (sexual immaturity) among teenage boys in Egypt discovered that adding zinc salts to their diets stimulated growth and maturity. These studies also proved that knowing how much of a nutrient was in the diet was not enough to determine whether or not it was adequate. A more pertinent question was its bioavailability.
The bioavailability of minerals like selenium and chromium, and of B vitamins like niacin, depends on their chemical forms or on how they are combined with other nutrients. This meant that the idea of the diet as a simple union of specific components was now suspect. A proper diet became a whole diet—that is, a diet in which the interactions of all the different elements present in all the food consumed were integral and inseparable. Nutritionists began to emphasize the importance of a variety of foodstuffs, with vegetables, fruits, and grains taking a central role. Research on cholesterol, fat, and carbohydrates gained importance during this time, changing the understandings of food’s primary components—less fat was better; a diet lower in cholesterol and salt was ideal; a larger variety in greens was essential. The study of nutrition was reshaped by all these findings; it became a science concerned with fat cells, molecules of carbohydrates, and protein components.
Protein … Again!
The role and quality of necessary proteins in the diet remained a central focus. Were animal proteins or synthetic amino acids—the organic compounds that are the building blocks of proteins—necessary to balance vegetable proteins? How much protein do we really need? Though procedures to determine this have been in development since 1985, no definitive answer has been obtained, and scholars still suggest caution in justifying the need for increased protein.
The lack of protein, on the other hand, has been well documented. Kwashiorkor, a West African protein-deficiency disease first studied in 1935, whose symptoms include a swollen abdomen, hair discoloration, and depigmented skin, highlighted the problem of worldwide food distribution. Scientists developed fish protein concentrate (FPC) and single-cell (yeast) protein nutrients as supplements for protein-deficient diets.
Protein deficiency was declared a worldwide problem, especially for developing countries, and in 1968 the United Nations made money available for research on FPC. This work, however, did little to improve the lives of those affected, and the quality of food produced under these projects proved poor both in protein and calorie values—furthermore, many of its components were proven toxic. This happened at a time in which nutritional science enjoyed a great reputation, and many resources were available for research. The FPC nutritionists had conducted research projects that actually had very little connection with problems of public concern and had drawn fictitious links with these issues simply to obtain funds.
Is More Always Better?
Since the 1960s, research on food intake has linked the typically excessive Western diet to obesity and related diseases. As a result, self-control and portion size became new trends in nutrition—eat less and watch what you eat. Scientists began a quest to educate the public on the evils of fat-laden diets and the art of avoiding certain foods and controlling portions. Dieting slowly became a moral value, and a slim or athletic figure was considered a sign of health.
Modern Cultural Connections
Unfortunately, the distance between nutritional science and society has grown in recent years. First, research has revealed a gap between the dictates of common sense and scholarly findings: “common sense” had been a well-recognized dietary guideline since Galen’s time, resting on the idea of eating according to one’s own constitution and doing so in moderation. Second, conflicting scientific results and disagreements among nutritionists make it difficult for people to know what exactly constitutes “sound nutrition.” Third, from Dr. Atkins’ Diet Revolution of 1972 to the South Beach Diet of 2003, “independent” experts selling dietary advice outside the scientific mainstream contradict academic research. Fourth, the food industry has acquired significant power. A good example is the colossal struggle over the 1990 Nutrition Labeling and Education Act (the source of the now-common “nutrition facts” label that appears on almost every edible item). In this fight, the food industry successfully campaigned to secure the right, against the Food and Drug Administration’s wishes, to claim health attributes for foods and supplements when supported by the fuzzy standard of “significant scientific agreement among qualified experts.”
Certainly, fraud and economic interest are present in today’s dietary advice, but the distance between science and society seems to be the real enemy: nutritionists are still more concerned with nutrition described in chemical, molecular terms, and still reinforce the idea of “self-control” as a moral value, often ignoring the reasons people actually eat the way they do. However, nutrition has an essential value: it reveals that what we eat affects our bodies and that, against the advice of quacks, we need diversity, variety, and moderation in our diets.
Genetics and Nutrition: The Last Step?
The science of genetics may help clarify the confusing nutrition landscape. After the human genome was decoded in 2003, American and European nutritionists formulated a new dietary approach called nutrigenomics, which uses personalized genetic profiles to help people select the foods best suited to their bodies. Furthermore, many of the nutrigenomics studies currently conducted in the United States are concerned with how to prevent, delay, and treat diseases such as asthma, obesity, type 2 diabetes, cardiovascular disease, and prostate cancer.
Nutrigenomics and other studies currently in development may reintroduce the idea of nutrition as a whole: a set of customs involving individual necessities and personal constitutions, tuned according to what nutrition can teach each person about his or her lifestyle. The ancient model established by the Greeks may come back, this time reshaped by genetics and modern science.
Primary Source Connection
The following article by Marian Burros for the New York Times addresses the power of the food industry to influence what are supposed to be “objective” research studies about food and nutrition. The peer-reviewed study cited by Burros indicates that the bias found in such studies affects the general public in negative ways.
Bias is Found in Food Studies with Financing From Industry
Research studies financed by the food industry are much more likely to produce favorable results than independently financed research, a report to be published today said.
The report, in the peer-reviewed journal PLoS Medicine, is the first systematic study of bias in nutrition research.
Of 24 studies of soft drinks, milk and juices financed by the industry, 21 had results favorable or neutral to the industry, and 3 were unfavorable, according to the research led by Dr. David S. Ludwig, director of the Optimal Weight for Life Program at Children’s Hospital Boston and an associate professor at the Harvard Medical School.
Of 52 studies with no industry financing, 32 were favorable or neutral to the industry and 20 were unfavorable. The biases are similar to findings for pharmaceuticals. Bias in nutrition studies, Dr. Ludwig said, may be more damaging than bias in drug studies because food affects everyone.
“These conflicts could produce a very large bias in the scientific literature, influence the government’s dietary guidelines which are science based,” he said in an interview. “They also influence the advice health care providers give their patients and F.D.A. regulations of food claims. That’s a top-order threat to public health.” The American Beverage Association, which sponsored at least one study in the article, said the authors had their own biases.
“This is yet another attack on the industry by activists who demonstrate their own biases in their review by looking only at the funding sources and not judging the research on its merits,” the president of the trade group, Susan K. Neely, said in a statement.
The new study looked at research published in scientific journals from 1999 to 2003. Studies of milk, juice and soft drinks were chosen, Dr. Ludwig said, because they deal with a controversial area that involves children and highly profitable products.
Of 206 articles, 111 reported their sponsors. Two investigators with no knowledge of the sponsors, of who wrote or published the articles, or even of the articles’ titles, classified their conclusions as favorable, neutral or unfavorable to the industry.
Another investigator, with no knowledge of the articles’ conclusions, determined the financing sources and whether or not the sponsors stood to gain or lose from favorable conclusions.
A study of carbonated beverages in 2003 published in The International Journal of Food Sciences and Nutrition and financed by the American Beverage Association when it was known as the National Soft Drink Association found that boys with high weights did not consume more regular soft drinks than boys who were not overweight but did consume more diet soft drinks.
The soft drink industry has cited the study to bolster its position that soft drinks are not related to obesity.
“My co-authors and I rely heavily on scientific method in order to make sure we do not have bias in our studies,” said Richard A. Forshee, the lead author of that study and deputy director at the Center for Food, Nutrition and Agriculture Policy at the University of Maryland.
Also in 2003, a study of soft drinks in The Archives of Pediatrics and Adolescent Medicine found a direct relationship between the number of soft drinks consumed and obesity. Foundations sponsored that study.
“For people who think science is completely objective, these results might come as a big shock,” said Prof. Marion Nestle of the Nutrition, Food Studies and Public Health Department at New York University.
Burros, Marian. “Bias is Found in Food Studies with Financing from Industry.” New York Times (January 9, 2007).
Primary Source Connection
When results of the first long-term study about the social nature and prevalence of obesity were published in the New England Journal of Medicine, news agencies reported the findings with sensational headlines such as “Obesity May Be Contagious” and “Overweight? Blame It on Your Friends.” The epidemic of obesity is increasing in the United States and, in fact, is spreading among developed nations throughout the world. The following excerpt from the 2007 study reported by scientists Nicholas A. Christakis and James H. Fowler indicates that social ties have indeed fueled the obesity epidemic, but could also be useful in slowing it down.
Nicholas A. Christakis is a physician and professor of sociology and medical sociology at Harvard University. James H. Fowler is a professor of political science at the University of California, San Diego. Both researchers are writing Connected!, a book about how social networks influence health and everyday life, due to be published in 2010.
The Spread of Obesity in a Large Social Network Over 32 Years
The prevalence of obesity has increased substantially over the past 30 years. We performed a quantitative analysis of the nature and extent of the person-to-person spread of obesity as a possible factor contributing to the obesity epidemic.
We evaluated a densely interconnected social network of 12,067 people assessed repeatedly from 1971 to 2003 as part of the Framingham Heart Study. The body mass index was available for all subjects. We used longitudinal statistical models to examine whether weight gain in one person was associated with weight gain in his or her friends, siblings, spouse, and neighbors.
The prevalence of obesity has increased from 23% to 31% over the recent past in the United States, and 66% of adults are overweight. Proposed explanations for the obesity epidemic include societal changes that promote both inactivity and food consumption. The fact that the increase in obesity during this period cannot be explained by genetics and has occurred among all socioeconomic groups provides support for a broad set of social and environmental explanations. Since diverse phenomena can spread within social networks, we conducted a study to determine whether obesity might also spread from person to person, possibly contributing to the epidemic, and if so, how the spread might occur.
Whereas obesity has been stigmatized in the past, attitudes may be changing. To the extent that obesity is a product of voluntary choices or behaviors, the fact that people are embedded in social networks and are influenced by the evident appearance and behaviors of those around them suggests that weight gain in one person might influence weight gain in others. Having obese social contacts might change a person’s tolerance for being obese or might influence his or her adoption of specific behaviors (e.g., smoking, eating, and exercising). In addition to such strictly social mechanisms, it is plausible that physiological imitation might occur; areas of the brain that correspond to actions such as eating food may be stimulated if these actions are observed in others. Even infectious causes of obesity are conceivable.
We evaluated a network of 12,067 people who underwent repeated measurements over a period of 32 years. We examined several aspects of the spread of obesity, including the existence of clusters of obese persons within the network, the association between one person’s weight gain and weight gain among his or her social contacts, the dependence of this association on the nature of the social ties (e.g., ties between friends of different kinds, siblings, spouses, and neighbors), and the influence of sex, smoking behavior, and geographic distance between the domiciles of persons in the social network….
Our study suggests that obesity may spread in social networks in a quantifiable and discernible pattern that depends on the nature of social ties. Moreover, social distance appears to be more important than geographic distance within these networks. Although connected persons might share an exposure to common environmental factors, the experience of simultaneous events, or other common features (e.g., genes) that cause them to gain or lose weight simultaneously, our observations suggest an important role for a process involving the induction and person-to-person spread of obesity.
Our findings that the weight gain of immediate neighbors did not affect the chance of weight gain in egos and that geographic distance did not modify the effect for other types of alters (e.g., friends or siblings) help rule out common exposure to local environmental factors as an explanation for our observations. Our models also controlled for an ego’s previous weight status; this helps to account for sources of confounding that are stable over time (e.g., childhood experiences or genetic endowment). In addition, the control in our models for an alter’s previous weight status accounts for a possible tendency of obese people to form ties among themselves. Finally, the findings regarding the directional nature of the effects of friendships are especially important with regard to the interpersonal induction of obesity because they suggest that friends do not simultaneously become obese as a result of contemporaneous exposures to unobserved factors. If the friends did become obese at the same time, any such exposures should have an equally strong influence regardless of the directionality of friendship. This observation also points to the specifically social nature of these associations, since the asymmetry in the process may arise from the fact that the person who identifies another person as a friend esteems the other person.
Finally, pairs of friends and siblings of the same sex appeared to have more influence on the weight gain of each other than did pairs of friends and siblings of the opposite sex. This finding also provides support for the social nature of any induction of obesity, since it seems likely that people are influenced more by those they resemble than by those they do not. Conversely, spouses, who share much of their physical environment, may not affect each other’s weight gain as much as mutual friends do; in the case of spouses, the opposite-sex effects and friendship effects may counteract each other.
Obesity in alters might influence obesity in egos by diverse psychosocial means, such as changing the ego’s norms about the acceptability of being overweight, more directly influencing the ego’s behaviors (e.g., affecting food consumption), or both. Other mechanisms are also possible. Unfortunately, our data do not permit a detailed examination. However, some insight into possible mechanisms can be gained from a consideration of the roles of smoking and geographic distance in obesity.
The tendency of persons to gain weight when they stop smoking is well known, and the coincidence of a decrease in smoking and an increase in obesity in the overall population has been noted. However, the present study indicates that regardless of whether smoking cessation causes weight gain in individual persons, and regardless of whether smoking-initiation or smoking-cessation behavior itself spreads from person to person, any spread in smoking behavior is not a significant factor in the spread of obesity. This finding indicates that smoking behavior does not mediate the interpersonal effect in the spread of obesity. It also suggests that the psychosocial mechanisms of the spread of obesity may rely less on behavioral imitation than on a change in an ego’s general perception of the social norms regarding the acceptability of obesity. This point is further reinforced by the relevance of the directionality of friendship.
Hence, an ego may observe that an alter gains weight and then may accept weight gain in himself or herself. This weight gain in an ego might, in turn, be determined by various behaviors that an ego chooses to evince, and these behaviors need not be the same behaviors that an alter evinces. The observation that geographic distance does not modify the effect of an alter’s obesity also provides support for the concept that norms may be particularly relevant here. Behavioral effects might rely more on the frequency of contact (which one might reasonably expect to be attenuated with distance), whereas norms might not.
The spread of obesity in social networks appears to be a factor in the obesity epidemic. Yet the relevance of social influence also suggests that it may be possible to harness this same force to slow the spread of obesity. Network phenomena might be exploited to spread positive health behaviors, in part because people’s perceptions of their own risk of illness may depend on the people around them. Smoking- and alcohol-cessation programs and weight-loss interventions that provide peer support—that is, that modify the person’s social network—are more successful than those that do not. People are connected, and so their health is connected. Consequently, medical and public health interventions might be more cost-effective than initially supposed, since health improvements in one person might spread to others. The observation that people are embedded in social networks suggests that both bad and good behaviors might spread over a range of social ties. This highlights the necessity of approaching obesity not only as a clinical problem but also as a public health problem.
Christakis, Nicholas A., and James H. Fowler. “The Spread of Obesity in a Large Social Network Over 32 Years.” New England Journal of Medicine 357 (2007): 370–379.