Communicating the Complexities and Uncertainties of Behavioral Science

S. Stocking & Johnny Sparks. Handbook on Communicating and Disseminating Behavioral Science. Editors: Melissa K. Welch-Ross & Lauren G. Fasig. Sage Publications, 2007.

Journalists can be gluttons for behavioral science news. On any given day, they may offer selections from a wide-ranging menu of behavioral science findings:

  • Low-calorie diets are good for you.
  • Children who eat dinner with their families are less likely to use drugs and alcohol.
  • Yoga improves respiratory function.
  • Low self-esteem at 11 predicts drug dependency at 20.
  • Mild depression is often a precursor to major depression in the elderly.
  • Prenatal nicotine complicates the breathing of newborns.

It’s a rich news feast, some of it lovingly cooked up and quite tasty and some of it cooked up as quickly as a microwave dinner and destined to give some consumers—especially scientists—indigestion.

What makes the difference between the tasty news stories and the hard-to-digest news stories for scientists is often the way journalists treat the complexities and uncertainties of their research. To the extent that journalists preserve the critical complexities and uncertainties, the meal may satisfy; to the extent that they render complex and uncertain science as simple and sure—or the science as more uncertain than it is—a good many scientists may reach for the Tums.

The significance of journalists’ treatment of complex and uncertain science cannot be overstated. It is not just a matter of how scientists like the news about their work. It can also be a matter of how those who are the biggest consumers of science news are able to understand and use the science they consume. Indeed, when scientific complexities and uncertainties are poorly rendered, scientific conclusions may be poorly understood by the public, and when they are poorly understood, they can lead to personal and professional decisions that are less than optimal. The stakes, for those who aspire to a society informed by the best that science has to offer, are high.

In this chapter, we will briefly discuss the challenges of communicating scientific complexities and uncertainties to different audiences. We will review some of the more prevalent problems that surveyed scientists, scholars, and others have associated with journalistic coverage of scientific complexities and uncertainties. We will present some solutions (or best practices) based on extrapolations from existing research and informed observation. Finally, we will outline some areas for research that we hope will create more solid understandings of media coverage of scientific complexities and uncertainties. Our hope is that such understandings will make it possible for scientists and journalists to modify their practices in ways that enable members of the public and policy makers alike to make good use of behavioral science.

Challenges of Communicating to Different Audiences

When behavioral scientists write for their colleagues, they operate according to conventions as to what is necessary for peers’ understandings. They have a pretty good idea of what others in their field of expertise want and need to know about populations, measures, methods, limitations of the findings, and all the rest.

When scientists work to communicate their research findings to the public, though, the conventions as to what is necessary for the public’s understanding may be less clear. For one thing, audiences are more variable; what is likely to be wanted and needed by a public that is highly attentive to science, for example, may not be wanted and needed by a less attentive audience (J. D. Miller, 1986). Moreover, much of scientists’ communication to the public is mediated by journalists, who have their own conventions as to what works. Though there is considerable variation among journalists, most are likely to be less interested in pleasing scientists than in engaging their audiences.

In journalism, complexities related to populations studied, measures taken, controls, and so forth—matters of no small significance to scientists—are likely to be omitted, simplified, or amplified in an effort to attract and keep the public’s attention. From the standpoint of scientists who are accustomed to a measure of control over the communication of their findings, the challenges can seem daunting. Depending on the audience and on the claims of other sources journalists may consult, the uncertainties or limitations in the findings may be either downplayed or amplified to a degree that scientists find inaccurate or discomforting. We will discuss each of these problems in turn in this section.

Loss of Scientific Complexity and Uncertainty

For behavioral scientists who are accustomed to carefully explaining the complexities and limits of their methods and findings for other behavioral scientists to evaluate and replicate, journalistic summaries of their research can be a source of bewilderment if not absolute consternation. Indeed, when scientists have been asked to identify problems or errors in science news, omission of information has often been at the top of the list (Borman, 1978; Dunwoody, 1982; Singer & Endreny, 1993; Tankard & Ryan, 1974). In their book-length treatment of social science news coverage, Weiss and Singer (1988) found oversimplification of complexities a top concern for social scientists. In another study, when scientists were asked what they would change in journalists’ stories about their work, their most common response was to add more details. Discovery stories—those ubiquitous stories that present the findings of a new study and their significance—appear to be particularly prone to the kinds of omissions of information that scientists deem important (Broberg, 1973).

Loss of Research Methods

One of the most visible omissions across media, according to studies of science news, is research methods (Frazer, 1995; Pellechia, 1997; Schmierbach, 2005; Weiss & Singer, 1988). If research methods receive a sentence or even a paragraph in print media, that often is considered adequate, if not more than adequate. In broadcast media—where journalists typically have relatively fewer words to work with and often confine themselves to two questions: What did you find, and why is it important?—we might expect methods to get even shorter shrift.

Even in elite print media that have their own science writers (often including social science writers), research methods can fail to make it into news accounts. Consider a New York Times story about new behavioral research findings on the effects of divorce on children. The relatively lengthy article got high marks from the scientist who did the study—until she saw five letters to the editor, three of which sought to counsel her on her research methods (Eisenberg, 2005; Lazarus, 2005; Petrison, 2005; Roughton, 2005; Silverman, 2005). Given what she perceived as public misunderstanding of her work, the scientist was prompted to write her own letter to the editor explaining exactly what she had done in the study (Marquardt, 2005). The original Times story, though otherwise “excellent” in the scientist’s eyes, had said little about her research methods (Lewin, 2005, p. 13).

Failure to discuss research methods may not only bother scientists but mislead the public. Let’s say a journalist covers a study that concludes television is harmful to toddlers. If the journalist fails to specify the nature of the sample studied, parents whose children do not fit the profile of study participants or watch the same kinds of television content may be led to worry unnecessarily about the damaging effects of television on their kids.

Likewise, if news accounts cover claims about the brain-enhancing value of classical music for infants but fail to mention that the original research on which these claims are based was conducted not on babies but on adults (and moreover that the effects were short-lived and small), a credulous public may be led to wrongly believe that playing Mozart to their babies will make them into little Einsteins. This may appear to be a fairly benign thing, until you consider that widespread media coverage of the so-called Mozart effect led the governor of one state to authorize the expenditure of taxpayers’ money to send recordings of Mozart home with all new mothers (Sack, 1998). During times of documented social problems and budget shortfalls, public policies based on spurious research are at the very least questionable.

On a larger scale, if methods are routinely slighted in news accounts, it may be difficult for the public to learn to distinguish between different kinds of studies—and the relative certainty they convey. As a result, the public may be led to conclude that any scientific study conveys as much certainty as another.

Obviously, this is not so. In general, experimental studies tend to carry more certainty than nonexperimental studies, and a randomized, double-blind experiment offers greater certainty than a study with a nonrandom sample. Moreover, meta-analyses, which systematically assess research findings across investigations, tend to carry more weight than single studies. If journalists fail to explain the methods that were used and the level of certainty those methods convey or, worse, fail to mention methods at all, they may lead members of the public to assume a study is a study is a study, which is, of course, grossly mistaken.

The failure of journalists to distinguish good studies from bad studies is no academic matter. In education, for example, the failure of many in the press to appreciate the difficulty of arguing causal relations from qualitative methods has led to what some scientists see as distorted coverage of debates over the best approaches to teaching reading and has long contributed to poor educational policy. According to these scientists (Lyon, Shaywitz, Shaywitz, & Chhabra, 2005; Reyna, 2005; Shavelson & Towne, 2005), approaches to reading instruction based on subjective impressions and weak qualitative studies have guided the decisions of many teachers and administrators in some schools for decades.

One long-popular approach to teaching reading, the whole-language approach, has assumed that learning to read is as natural as learning to speak. Whole-language methods—which teach reading within natural contexts like letter writing and book writing without separating reading skills into discrete, teachable components—can be enjoyable for students and teachers alike, and they certainly have appeared to work in qualitative studies of children whose parents have prepared them well in reading fundamentals at home.

However, as large- and small-scale experiments with diverse populations have demonstrated, efforts to teach reading based on the whole-language philosophy fail miserably for children who have not learned reading fundamentals (phonemic awareness, phonics, fluency, vocabulary and comprehension strategies). This means, of course, that schools that cling to whole-language approaches for poorly prepared children are failing to provide those children with the skills they will need to finish school, get decent jobs, track finances, and take their places as fully functioning citizens in society.

The news media have begun to do a better job of reporting on this contentious issue since two national scientific consensus panels have weighed in against whole-language approaches, according to a scientist who has been at the forefront of efforts to transform education into a scientifically driven enterprise. But in his words, they initially “took any kind of research to support [whole-language] claims, no matter whether it was trustworthy or not” (R. Lyon, personal communication, June 6, 2006; also see Moats, 2000).

Loss of Caveats

Like the loss of research methods, the loss of scientific caveats can be bothersome to behavioral scientists, judging from anecdotal accounts, and in point of fact, such losses are not uncommon. Weiss and Singer (1988) have documented the hardening of provisional findings as social science moves from the scientific to the popular press. Discourse analyst Jeanne Fahnestock has found that popular accounts of science exaggerate the knowledge claims and downplay the caveats and other qualifiers (Fahnestock, 1986). Studies of media coverage of the risks associated with hazards (Singer & Endreny, 1993) and studies of science documentaries (Collins, 1987; Hornig, 1990) have likewise found a tendency to minimize uncertainties in popular accounts of science. In a recent study of medical science news, too, stories rarely carried cautions about intrinsically limited research methods (Woloshin & Schwartz, 2006).

Caveats, of course, are crucial to communicating science to other scientists. Not only do they set up the conditions for the construction of knowledge gaps, which scientists may then seek to fill (Stocking & Holstein, 1993; Zehr, 1999), but they also may preempt criticism, enhance credibility, and demonstrate mastery of the process of publication, among other things (Rier, 1999). By pointing to the limitations of the research in their scientific articles, scientists can protect themselves from charges by other scientists of overreaching interpretations of their findings (Stocking & Holstein, 1993). By offering caveats in conversations with journalists, scientists can also work to protect members of the public from overinterpreting results, with serious implications for public perceptions and actions.

To borrow an example from a social psychology textbook (Aronson, Wilson, & Akert, 2005), the media may report that the more time fathers spend with their children, the less likely they are to abuse their children; however, if they fail to caution that correlation does not necessarily mean causation and that other factors may underlie the relationship, credulous members of the public may jump to the conclusion that spending more time tending their children would be an effective intervention with dads at risk for abusing. If, in fact, the reason for this association is that fathers who already possess good parenting skills spend more time with their children, this intervention may make abuse more likely for the at-risk father who lacks such skills.
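
For readers who find it helpful to see this logic worked out, the brief sketch below is our own illustration, not part of the textbook example: it uses an entirely hypothetical "parenting skill" variable and invented numbers to show how a lurking third factor can produce a correlation between time with children and lower abuse risk even when time itself has no causal effect.

```python
# Our illustration only: a toy simulation showing how a confound (a hypothetical
# "parenting skill" variable) can produce a correlation between fathers' time
# with children and lower abuse risk even though time has no causal effect here.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

skill = rng.normal(size=n)                             # unobserved confound
time_with_kids = 10 + 2 * skill + rng.normal(size=n)   # skill -> more time
abuse_risk = 1 - 0.5 * skill + rng.normal(size=n)      # skill -> less risk
# Note: time_with_kids never enters abuse_risk, so time has no causal effect.

print("Raw correlation (time, risk):",
      round(float(np.corrcoef(time_with_kids, abuse_risk)[0, 1]), 2))

def residuals(y, x):
    """Remove the linear effect of x from y (a crude way to 'control' for x)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# After the confound is taken into account, the association disappears.
print("Correlation controlling for skill:",
      round(float(np.corrcoef(residuals(time_with_kids, skill),
                              residuals(abuse_risk, skill))[0, 1]), 2))
```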

Loss of Scientific Context

Several studies of scientists’ perceptions of the accuracy of science news have found particularly troublesome the loss of scientific context as science moves from the scientific literature to the popular domain. Tankard and Ryan (1974), for example, found that scientists rated continuity with prior research as one of the top problems in science news. Weiss and Singer (1988), likewise, found that social scientists cited “fragmentation, with no attempt to relate an individual story to a whole body of research,” as one of their five top criticisms of media coverage of social science research (p. 130). Even some journalists have expressed concern that science stories tell but a small part of the whole scientific story (Hartz & Chappell, 1997).

Studies of news media content corroborate these perceptions. Pellechia (1997), for example, has examined science stories over three decades and found that prior research or future studies were mentioned in fewer than 60% of the stories. Food scientists, who since 1997 have done content analyses every other year of news media coverage of nutrition, dietary choices, and food safety, have also consistently found a lack of contextual information; the 2005 study found an uptick in the use of science to buttress claims of benefit or harm, but many of the citations were simply “studies show,” “research suggests,” or “according to the research” tags that did nothing to enhance the public’s sense of the overall state of the science with respect to the issues at hand (IFICF & CMPA, 2005).

Discovery stories may be especially prone to minimize context, for in many cases, they are single-source stories, and such stories, as Weiss and Singer (1988) have concluded, take on faith what the investigating scientist says and fail to present the points of view of other scientists who might present another, broader picture.

The problems for audiences with context-free stories are illuminated by a study that found that nonaggressive media content—more than aggressive content—worsened the symptoms of emotionally disturbed children (Gadow & Sprafkin, 1993). If the news media failed to put this study into a larger scientific context, a parent of an emotionally disturbed child might prematurely conclude that aggression-laden television would actually be better for his or her child than television with nonaggressive content. In fact, this 1993 study flew in the face of the vast majority of studies (Anderson et al., 2003) on media violence and aggression, a point parents of an emotionally disturbed child surely would want to know.

Exaggerations of Scientific Unknowns and Uncertainties

Though the more common complaint from scientists is that journalists make science appear less complex and more certain than it in fact is, sometimes—as the first author has noted elsewhere (Stocking, 1999; Stocking & Holstein, 1993)—journalists make science appear more complex, or at least more uncertain, than scientists believe it to be; that is to say, the news media sometimes work in ways that exaggerate the unknowns and uncertainties of science, possibly contributing to public bafflement about the scientific enterprise.

Unexplained Flip-Flops

The very certainty of many caveat-free discovery stories in science, when followed in rapid succession by other equally certain but contradictory discovery stories, may be expected to magnify uncertainty in the public mind. Let’s say, for just one example, that a study appears to suggest that children are not harmed by day care. It is quickly followed by another that appears to suggest that children are harmed. What is the public to make of such seemingly contradictory findings?

“Scientists,” the first author once heard a taxi driver say when he learned of her interest in science, “can’t make up their minds about anything.” His remark reflected a lack of awareness that different studies, if properly understood in all their complexity, might not be contradictory at all. It may be, of course, that day care has been found to be harmful under the circumstances of some studies (when the amount of time in day care is extensive, for example, or in low-quality day care arrangements) but not under the circumstances investigated in other studies (when the amount of time is less or when the quality of day care is higher). Even if single studies are roughly comparable and the findings do appear to directly contradict one another, it does not mean that scientists don’t know what they are doing; any individual scientific study is inherently uncertain, meaning the findings of one study can contradict in the short term a study that is similar or that may even appear to be identical.
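
To make the point concrete, the small simulation below (again our own illustration, with invented numbers) shows how a handful of small studies of the very same modest, real effect can easily appear to contradict one another through nothing more than sampling error.

```python
# Our illustration only: five small studies of the same real but modest effect,
# showing how sampling error alone can make similar studies appear to conflict.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_effect = 0.2     # hypothetical real effect, in standard-deviation units
n = 40                # hypothetical (small) sample size per group

def run_study():
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    _, p = stats.ttest_ind(treated, control)
    return treated.mean() - control.mean(), p

for study in range(1, 6):
    diff, p = run_study()
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"Study {study}: observed difference = {diff:+.2f}, p = {p:.2f} ({verdict})")

# With samples this small, replications of the same true effect often fail to
# reach p < 0.05, and the observed difference can even point the "wrong" way,
# so back-to-back news stories can easily look like a flip-flop.
```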

Science, it has been said, proceeds a little like a sailboat, first one way, then the other, but over time making progress in a particular direction. Without such understandings conveyed in news media accounts, scientifically illiterate members of the public, like the taxi driver, may experience a kind of cognitive whiplash that tempts them to dismiss science—and scientists—out of hand.

Controversy-Driven Exaggerations

Conflict, which has long been observed to be a staple of journalism (Burnham, 1987; Nelkin, 1995; Pellechia, 1997; Weigold, 2001), can also magnify scientific uncertainties beyond what some scientists think reasonable. This can be particularly true when journalists, in the interest of being fair or objective, attempt to balance claims that opposing sides make in a controversy, without clarifying that one side carries more scientific weight than another. (For further discussion, see work on weight-of-evidence reporting.)

Taking advantage of journalistic balancing practices, groups that find scientific findings threatening to their particular interests have been known to spin the inevitable holes and uncertainties of science to their own advantage. Historically, for example, the tobacco industry worked to magnify in the press the unknowns and uncertainties in the science linking tobacco and lung cancer (K. Miller, 1992). More recently, the fossil fuel industry did the same with the science of global climate change (Gelbspan, 1997), leading—at least for a time—to news stories that gave no more weight to the consensus reports of thousands of scientists around the world than to the contrarian views of a minority of scientists (Boykoff & Boykoff, 2004).

Likewise, the pork industry magnified the unknowns and uncertainties associated with research that revealed danger to the health and well-being of people who lived around industrial hog farms (Stocking & Holstein, in press); in one instance, the industry appropriated a caveat from the scientist’s research, transforming it into what appeared to be an admission of guilt and using it to discredit the science (Stocking & Holstein, in press). Intelligent design advocates also magnified the gaps in Darwin’s thinking on evolution (Mooney & Nisbet, 2005). The result of the actions of these “sowers of uncertainty” (Pollack, 2003, p. 13): Consensual science was rendered more uncertain than it in fact was.

In the behavioral sciences, too, interested parties to conflicts can make scientific findings appear less certain—and the scientists less capable—than they are. For example, the science of sex—an interdisciplinary enterprise including medicine, biology, physiology, psychology, sociology, and anthropology—has frequently been attacked by political, religious, and cultural conservatives as bad science, and the scientists who do the research have been attacked as incompetent, morally deviant, or both (Bancroft, 2004).

Solutions

Studies on solutions to the problems that have been identified are few. But this doesn’t mean that solutions do not exist. Inferences drawn from existing research, coupled with our own and others’ informed experience as journalists and practitioners of behavioral science, have given rise to what might be considered best practices in the public communication of the complexities and uncertainties of science. Though many of the solutions to the problems surely lie with journalists, we will concentrate in this section on those solutions over which we have come to believe scientists, who are the primary audience for this handbook, have some control.

Solutions to Loss of Scientific Complexities and Uncertainties

Solutions to the Loss of Methods

Audience considerations appear to be one of the most important reasons that mainstream news media accounts give relatively little attention to methods. With the possible exception of some stories considered vital to the public interest, it is thought that the public will not tolerate the level of complexity desired by most scientists. “We’re not in the education business,” many journalists will tell you; “we’re in the information business” (West, 1986, cited in Weigold, 2001). This is no doubt dismaying news to behavioral scientists in the academy who would prefer that journalists act as they themselves try to act in their classes, teaching students about the important complexities and uncertainties of science. But it is not as though education-minded scientists can do nothing. Although most journalists will downplay or exclude methods in their stories, some won’t. Behavioral scientists who are convinced a story can’t be told without the complexities and uncertainties they deem important may be able to find journalists who have the time, the space, and the capacity to go beyond what is customary.

Since scientists report more satisfactory experiences with science writers than they do with general assignment reporters (Valenti, 1999), and since journalists in the main report little formal training in science (Weaver & Wilhoit, 1996), it is tempting to imagine that behavioral scientists ought to restrict their interviews to science writers. However, in their study, Weiss and Singer (1988) found that social scientists rated the stories written by beat reporters (including science reporters) as no better in completeness, accuracy, and emphasis than stories written by general assignment reporters. Indeed, the few highly rated stories in their study were produced not by science writers but by general assignment reporters who expressed a concern for satisfying the values of social science. It is hard to know why the small number of science writers in this study did not produce more highly rated social science stories, but given evidence that science writers view the social sciences as “garbage science” relative to the biological and physical sciences (see Dunwoody, 1986), it could be that science writers don’t work to reflect social scientists’ values or invest as much in social science stories as they do in other science news accounts. Since these findings are based on a small number of journalists and fail to offer firm guidelines for scientists, probably the best thing for a behavioral scientist to do before agreeing to an interview is to check online to find out how receptive an individual journalist is to conveying the scientific complexities and uncertainties of behavioral science studies. In looking across a reporter’s stories, it is usually possible to get a sense of the quality of a journalist’s work.

In our experience, a surprising number of reporters, even those without a science background, will be open to cultivating an understanding of research methods. Most will want to do this, not so they can actually write extensively about research methods in their stories but so they can better decide for themselves whether research findings are trustworthy enough to write about in the first place. For behavioral scientists, it may be in the vetting of stories that they can be of greatest assistance in the public communication of science. They may, for example, be able to explain to journalists the degree to which a statistically significant finding is of any practical importance. Or they may point out that the size of the N—a heuristic that journalists often use to determine the quality of research, according to Schmierbach (2005)—is but one indicator of the soundness of a study. They can thus help journalists to sort the scientific wheat from the chaff. Many journalists appear to rely on particular scientists to help vet stories they cannot vet themselves, and scientists who would relish becoming a part of a journalist’s news net (Tuchman, 1978) can perform a great public service, though behind the scenes and without the level of recognition they have been trained to accrue for themselves and their institution.
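
As a rough illustration of both points (ours, with invented numbers, not drawn from any study discussed here), the sketch below shows how a very large sample can make a practically negligible difference look impressively "significant," which is one reason neither a large N nor a small p value by itself tells a journalist that a finding matters.

```python
# Our illustration only: with a very large sample, a practically trivial
# difference can still be "statistically significant," which is why sample size
# and p values alone say little about whether a finding matters in practice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50_000                                            # hypothetical huge sample
group_a = rng.normal(loc=100.0, scale=15.0, size=n)   # e.g., a test-score scale
group_b = rng.normal(loc=100.5, scale=15.0, size=n)   # true difference: 0.5 pts

_, p = stats.ttest_ind(group_a, group_b)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p-value: {p:.1e}")           # tiny, so "statistically significant"
print(f"Cohen's d: {cohens_d:.2f}")  # about 0.03, a negligible effect in practice
```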

Solutions to the Loss of Caveats

Research on journalists’ use of caveats, as well as the reasons for their use, is slim (Rier, 1999), offering little guidance for how to increase the likelihood that journalists will use important caveats. However, in our experience, journalists won’t look favorably on extensive caveats that undermine the significance of the research. Significance is an important news value, after all, and if the significance is undermined too much, so is the journalist’s story. On the other hand, an important caveat that does not appear to undermine the significance of the research, if articulated well and emphasized, may find its way into some journalistic accounts.

For the scientist, this may mean articulating ahead of time the significance of the research, along with the particular caveats that the public needs to decide whether it can use the findings. It has been our observation that when it comes to significance, most journalists, out of concern for their audiences, are going to focus on the practical values of the research over scientific significance, though it is also true that some journalists will be open to using, along with claims about practical significance, assertions about the scientific value of the work.

As for caveats, those most likely to be used will be those that let the audience know the value of the research for them as decision makers. So if the research is primarily of scientific interest, with little immediate practical value, it will be necessary to explain that additional studies will be needed before the practical value of the findings will become clear; if the potential practical significance of the research is high, as with research that has implications for the treatment of autistic children, it may also be necessary to explain the particular kind of studies that will be needed. If the research has immediate practical value, in contrast, it may be important to explain what the research does not tell us that someone would need to know before making a decision. For example, is there anyone to whom this finding does not apply? Would the study on the harmful effects of television on toddlers apply to all children or just to those who receive little in the way of alternative stimulation in their environments? Would the study on the effects of day care apply to all children or only to children in a particular kind of day care environment? And what kinds of studies would be needed to answer these questions? In our view, a caveat that arises out of this kind of thinking is likely to be more informative than the general but not very revealing “more research is needed.” The latter caveat, far from being helpful, does little to help the public make decisions or to improve its general understanding and appreciation of behavioral science findings. It may, in fact, be that general caveats of this nature simply lead the public to assume that the research study in question is inadequate.

Solutions to the Loss of Scientific Context

For journalists, who tend to shortchange context in news coverage of all sorts, the larger scientific context may or may not be regarded as important for the story. For behavioral scientists, however, it is likely to be regarded as critical. A single study is often but one piece in a large scientific puzzle, and every researcher knows that a single piece in a large picture puzzle does not give you a very accurate idea of the picture on the box.

To the extent that scientists believe context is important to the public’s understanding and use of their research (and it can matter most with findings that have serious practical implications and/or that are likely to add fuel to one side or another of an inflamed public controversy), it is important to explicitly state how the research fits with the larger body of scientific knowledge. Do these results confirm, extend, or contradict the bulk of prior findings? And if they contradict prevailing scientific understandings, what is the public to make of this?

Consider a study that concluded that long hours in day care can lead to more aggression in some children regardless of the quality of the care. While this particular finding cast a pall on previous studies that had concluded that high-quality care does not adversely affect children, child care experts agree that quality of care still matters. The amount of aggression observed in children who spent long hours in different day care settings was mild, and even long hours of high-quality day care were found to have the positive benefits that earlier studies had identified for children, a picture that parents need to understand as they make their decisions about their children’s welfare.

Conveying such context can be tricky. It can help to think about innovative ways to do this, possibly by creating info-graphics that reflect the complexities or by listing bulleted points that journalists can insert into their stories. There is no guarantee, of course, that journalists will take the time to convey this larger context. But it is certainly the case that if you don’t make the attempt to explicitly provide that context, it is much less likely to find its way into a story. It can also be useful to refer journalists to other scientists who can comment on the research and how it fits with the larger state of science in the area.

Solutions to Exaggerated Unknowns and Uncertainties

Solutions to Unexplained Flip-Flops

As we have indicated, many of the flip-flops that laypersons perceive in the news are more apparent than real. It may appear, for example, that the latest study on fat is a flat-out reversal of the prevailing wisdom on fat intake, when in fact it may simply be a refinement of what is already known—namely, that fat still poses dangers for health, but it is not fat per se but the type of fat—good versus bad—that matters.

One way to correct public perceptions of apparent flip-flops is to use a transformative explanation. A transformative explanation is an explanatory technique in which one identifies (or anticipates) a mistaken public perception, acknowledges the intuitive plausibility of the perception, explains the limitations of the plausible view, and then explains the superiority of the correct view (Rowan, 1999).

When a new study concluded that a low-fat diet does not reduce the risk of breast cancer, scientists reacted quickly to identify and acknowledge the plausibility of a conclusion that the public might draw from the latest findings—namely, that fat may now be okay. Although it is not clear that the federal agency that sponsored the research took all the other steps involved in a formal transformative explanation, it did host a press conference in which scientists said “that they hoped women would not start eating fat because of this study” (Kantrowitz & Kalb, 2006, p. 44). The scientists then explained the more complex reality and why the public should continue to consume fat judiciously (Arnett, 2006; Brody, 2006). “These studies are more complicated than a simple headline or sound bite can convey,” one official told Newsweek magazine, which attempted to clarify the situation for the public, “and that’s an important lesson for all of us” (Kantrowitz & Kalb, 2006, p. 44). Although good fat, bad fat, and no fat were not completely explained in the news coverage of the initial study, later coverage did demonstrate a clearer recognition of the complexities of the matter.

Solutions to Controversy-Driven Exaggerations of Unknowns and Uncertainties

The solution to controversy-driven exaggerations of unknowns and uncertainties in science coverage may be more difficult to manage than the exaggerations due to apparent flip-flops. In our experience, when vocal opponents of a particular line of research magnify unknowns and uncertainties as a strategic rhetorical tool, it can be difficult for journalists to ignore such claims. Knowing this, it would appear wise for scientists to, at the very least, anticipate the claims of the opposition and prepare to defend their findings. It may help, for example, to anticipate that some journalists are likely to give equal weight to the claims of scientists and media-savvy nonscientists, as well as prepare to explicitly articulate the relative weightiness of the scientific findings. Depending on the dynamics of the controversy, it may also help, when journalists commit outright errors in their accounts, to request corrections. Alternatively, it can be useful to register a comment with a media ombudsman if there is one; the ombudsman will often circulate concerns to staff, even if he or she doesn’t write anything for public consumption. In addition, it can help to write op-ed pieces or letters to the editor to set the record straight. Media relations professionals at one’s institution can often be helpful, too, in preparing for and containing a controversy.

In the case of the public health researcher who studied the health effects of industrial farms, the scientist found he had no choice but to publicly counter an industry trade association when it blasted as pseudo-science the exploratory self-report methods he used in an epidemiological study comparing the health effects of industrial hog farms with those of other livestock farms. In this particular case, the industry also made sharp ad hominem attacks and took active steps to shut down the scientist’s research (Stocking & Holstein, in press). The scientist, who never denied the exploratory nature of his findings, felt himself to have been naive about the political dynamics that can arise when science threatens entrenched interests, and he documented his experiences in a professional journal to warn other scientists who do environmental health research (Wing & Wolf, 2000). Concluding that the threatened industry was trying to intimidate him, he also granted interviews with a few journalists, who wrote explicitly about industry’s aggressive attacks on his work (Stocking & Holstein, in press).

In a related vein, when critics have viciously attacked both the soundness and morality of research produced by Indiana University’s Kinsey Institute for Research in Sex, Gender, and Reproduction, staff members have had to work gingerly to counter what were clearly distortions of the research record so as not to jeopardize funding. The former director of the institute, like the researcher who studied the health hazards of large-scale hog farms, documented some of the institute’s travails in a publication for his peers (Bancroft, 2004). His staff prepared themselves for the inevitable media inquiries by developing and circulating in-house a line-by-line rebuttal of the opposition’s criticisms. But for the most part, then and now, they have kept a low profile with respect to their opponents’ charges, responding to public accusations only when necessary so as not to give the opposition a platform.

Research Needs

We have focused here on those problems that scientists and media scholars have identified, and we have offered solutions that we think are within the scientists’ control. We believe, based on experience, that many of the solutions we have offered will be effective. But it is important to remind ourselves that these solutions are based on interpretations of a limited body of data, informed by experience. Much remains to be formally explored with respect to news media treatments of scientific complexities and uncertainties, as well as with respect to the factors that affect these treatments. We will use the rest of this chapter to outline some areas that we think offer fruitful questions for investigation. We will consider, first, studies that might be conducted on news media treatments of complexities and uncertainties, followed by suggestions for research on the roles played by journalists and scientists in those treatments; on the testing of interventions to improve media coverage; and on the effects of media coverage on audiences.

Media Treatments of Scientific Complexity and Uncertainty

Though converging empirical evidence indicates that science news often lacks research methods, context, and caveats, the research has several limitations for behavioral scientists looking for guidance into how to explain the complexities and uncertainties of their work. Most of the studies have involved content analyses of science news, broadly construed, and most of these studies have examined the content of a limited set of print media. Weiss and Singer’s (1988) treatment of social science news in print and broadcast news is the rare exception, on both counts, but it is growing old and needs replication in a changing media environment. Other studies have involved surveys of scientists, only some of whom were from the behavioral sciences.

Moreover, in studies that have concluded, as Weiss and Singer (1988) did, that the news media tend to slight scientific complexities or uncertainties, there have always been science news stories that did give ample attention to methods, context, or caveats. Also, one recent study in the medical sciences reported that a majority of media accounts of science presented at medical science meetings at least included basic study facts, if not cautions about the limitations of the studies (Woloshin & Schwartz, 2006). These apparent departures from the dominant empirical patterns demand our attention: Who are the journalists who give more attention to the complexities and uncertainties of the sciences, especially the behavioral sciences, and how, if at all, do they differ from other journalists? And what factors might lead some reporters to give ample attention to methods, context, or caveats, while many other journalists slight them?

Given how little is known about these matters, it could be useful to conduct content analyses of a few major behavioral science news stories, supplemented by exploratory interviews with the journalists who produced the stories and possibly with other actors in the communication process. It could be useful, for example, to analyze the attention to scientific complexities and uncertainties that a diversity of media outlets (for example, print news magazines, newspapers, wire services, women’s media, and online media) has given to the same widely covered behavioral science studies. Let’s imagine that content analysis revealed that weekly news magazines and elite newspapers whose audiences share a similar demographic profile differed significantly from one another in their coverage of the complexities and uncertainties of the aggression and day care story; interviews with the journalists who wrote and edited these stories could help to identify some of the factors other than audience demographics that could account for the differences in coverage (characteristics of the individual journalists and their editors, for example). Subsequent research could then explore more directly the difference that these factors make to media coverage.

In one such exploratory investigation, the first author compared the coverage by major newsweeklies of findings linking heart disease to the consumption of iron; one news magazine devoted just a few paragraphs to the findings, while another devoted one page, and a third ran a lengthy cover story. Attention to claims about the unknowns and uncertainties of the science varied directly with the amount of space given to the findings: the stories that accorded the least significance to the knowledge claims also gave the least attention to the unknowns and uncertainties. The observed differences could not be accounted for by audience demographics, as the three news magazines appealed to similar audiences. Instead, as was discovered in interviews, the differences had most to do with variations in the individual journalists’ interests in the findings (the editor on the cover story had had a recent heart attack and knew of related research). Differences in perceived constraints on the amount of space the magazines would give to emerging science also played a role (Stocking, 1996).

Journalists’ Potential Contributions to the Problems

It is certainly plausible to believe that characteristics of individual journalists, including their personal relationship to the findings and their knowledge about science, could account for at least some of the variation in the coverage of complexities and uncertainties.

On the presumption that journalists’ knowledge of a scientific issue would affect their coverage, Wilson (2000) surveyed 249 journalists who reported on climate change and found that more than half did not know the level of scientific consensus on the issue. “Instead of correctly understanding where (and why) the scientific debate occurs, reporters were confused; they exaggerated the debate and underplayed the consensus” (Wilson, 2000). Importantly, journalists who spent the most time with scientists had more accurate knowledge of the uncertainties in the debate. Whether greater knowledge leads to more accurate treatment in actual stories, though, is not clear. It could be, for example, that even the more knowledgeable journalists will feel compelled, out of an interest in journalistic fairness, to give equal time to competing sides in the debate, regardless of the level of scientific consensus. Wilson’s survey did not address the effects of knowledge on actual media content.

Though it makes intuitive sense that journalists’ knowledge would affect media coverage of both uncertainties and complexities, Weiss and Singer (1988) found no relationship between journalists’ formal training in the social sciences and their abilities to develop stories that social scientists viewed as accurate, complete, and having appropriate emphasis. Instead, they found a modest relationship between journalists’ years on the job and their ability to develop such stories. It may well be that on-the-job experience provides journalists with the knowledge that they need to cover the social sciences well in scientists’ eyes. But without more studies examining the direct effects of experience and knowledge on coverage of complexities, it is difficult to say.

One exploratory study suggests that, in addition to journalists’ knowledge of science, journalists’ perceptions of their journalistic roles may be a factor in media’s treatment of scientific unknowns and uncertainties. The first author and a colleague analyzed news content that contained claims and counterclaims about the unknowns and uncertainties in a research study that threatened an industry’s interests and also talked to the journalists who produced the content (Stocking & Holstein, in press). If a journalist saw himself as a simple disseminator of information (to name just one kind of role; see Weaver & Wilhoit, 1996), he was more likely to treat a threatened industry’s claims about the scientific gaps and uncertainties in the research as no less deserving of space than the knowledge claims made by the scientist and so would balance the competing claims, without regard to scientific merit. If a journalist saw herself as a popular-mobilizer who worked hard to get the views of laypersons into the news, she was likely to give less space to industry’s claims and more space to the scientific claims that bolstered laypersons’ complaints about industry activities. As provocative as these findings appear to be, they too are limited in that they emerged in a study that was not designed to test for the effects of journalists’ roles, and they are based on a very small sample of journalists covering one particular controversy for varying types of newspapers, all limitations that need remedying in future research (for more discussion of research options, see Stocking, 1999).

Scientists’ Potential Contributions to the Problems

Thus far, our discussion has tended to assume that many of the problems with media coverage of the complexities and uncertainties of science originate with journalists, but this may in fact not be the case. To the extent that scientists and scientific institutions feel competitive pressures to communicate to nonscientists to enhance their visibility among those who appropriate or dispense funding for science, they themselves may make findings appear less complex and more certain than they are and so contribute to the problems identified here. Indeed, Weiss and Singer (1988) have expressed concern that journalistic values, which emphasize producing good, newsworthy stories of interest to readers, may come to influence some scientists, to the detriment of traditional scientific values.

Consider a recent study on the relationship between oxytocin and trust (Kosfeld, Heinrichs, Zak, Fischbacher, & Fehr, 2005). The journal Nature, which published the experimental study, asked a well-known neuroscientist to write a commentary on the investigation. The scientist, who has written popular books about science, wrote an engaging piece that greatly simplified the basic research findings and exaggerated the practical implications. The online news service of Nature picked up this scientist’s spin of the study and produced its own news account that also reduced the complexities of the science and contained few caveats. Subsequent news media coverage adopted the emphases of the commentary and online story, with the result that in the larger public domain, the complexities and uncertainties in the findings were slighted and the practicalities of future applications overplayed (Vergano, 2005; Verrengia, 2005).

A similar thing happened with the “day care causes aggression” study. One scientist made a catchy statement about the findings in a conference call with journalists. The journalists snatched it up, and until other scientists involved in the study did some fast repair work, media coverage of the complexities of the study suffered (L. Fasig, personal communication, July 25, 2006).

It is examples such as these that lead us to wonder the following: To what extent do scientists’ own statements—in scientific journals, in interviews, and in news releases prepared by their institutions—fail to explain research methods or offer needed caveats or scientific context, thereby contributing to the oversimplification and lack of provisionality of so many of their findings in the news media? We know of no studies that have explored this important question.

Of course, scientists may contribute not just to the problems identified here but also to the solutions. So, to what extent do scientists, in fact, engage in many of the remedies we have proposed in this chapter? To what extent, for example, do scientists work with journalists behind the scenes to help them vet scientific studies? Who serves in these vetting roles? In addition, to what extent do scientists take the communication of scientific complexities and uncertainties into their own hands, writing op-ed pieces and letters to the editor to combat what they see as distortions? Who takes on truth squad roles, requesting corrections or contacting newspaper ombudsmen when media get things wrong? What motivates individuals to take on these vetting and truth squad roles, and how, if at all, are they rewarded in their institutions and professional communities?

And most critically, to what extent do these and other practices affect journalists’ treatments? Do they improve the quality of the news in scientists’ eyes or in consumers’ eyes? If yes, under what conditions? If not, then why not? Studies that respond to such questions could do much to guide behavioral scientists as they work with journalists to present science to those who might benefit.

Interventions to Improve Media Coverage

Despite the gaps and uncertainties in our empirical knowledge about news media treatments of scientific complexities and uncertainties and factors that might account for variations in treatments, some scientific organizations have moved ahead to create workshops to improve the media coverage.

Most of the workshops aimed at improving science news that we know about have been concerned with sciences other than the behavioral sciences, and most have been directed at journalists. The National Institutes of Health (NIH), for example, has conducted workshops on evidence-based medicine for health and medical writers in print and broadcast. The workshops combine lectures with hands-on exercises that offer practice in evaluating the soundness of scientific studies. Journalists are pretested on their knowledge of methods and statistics as the workshop begins, surveyed at the conclusion of the workshop, and surveyed again months later to see if what they learned remains intact. However, the effects on journalists’ actual selections of studies to cover and treatment of research methods and statistics in their stories are not yet clear; the scientists who run the workshop, now a cooperative venture among NIH, Dartmouth University, and the Veterans Administration (NIH, 2006), intend in the future to supplement their follow-up surveys of workshop participants with examinations of the journalists’ actual stories (S. Woloshin, personal communication, June 14, 2006).

Workshops directed at scientists, to assist them as they work with journalists to improve media coverage, have also been offered over the years and appear to be growing in popularity in response to institutional imperatives to promote public visibility of science and in response to scientists’ own expressed interest in learning how to better communicate their work to journalists (Hartz & Chappell, 1997). The American Association for the Advancement of Science (AAAS) sponsored one of the most interesting such efforts, attended by the first author. This effort is particularly noteworthy because it was one in which scientists were asked to write news stories about unfamiliar science in very short order, to give them a feel for the constraints journalists operate under in their work. This session did not explicitly address the issues raised in this chapter, though a session organized for scientists very well could, requiring the scientists to decide on the spot how much in the way of research methods and context could be included in a 350-word newspaper story, as well as what caveats ought to be included for a particular audience.

Workshops that bring together scientists and journalists in equal numbers to discuss public communication issues appear to be rare. But they are not unprecedented. On the assumption that both scientists and journalists play a role in how sex research gets communicated in the press, Indiana University’s Kinsey Institute, along with the School of Journalism, designed a workshop for equal numbers of sex researchers and journalists in 2006. The institute surveyed groups of journalists and scientists about their interactions with each other’s profession and followed that up with a daylong meeting in which leading science writers and sex researchers met to hear the findings, discussed commonalities and differences in their professions, and worked toward developing a list of best practices for communicating research on this highly sensitive topic. Similar workshops, collaboratively organized by programs in the behavioral sciences and communications, might be designed to cross-train scientists and journalists in the particular challenges of communicating scientific complexities and uncertainties to the public.

Given the public’s stake in the outcomes of such workshops, it could be an exciting and useful innovation to actually involve members of the public in the discussions. Such workshops could become the basis for collaborative research by scientists and communications researchers, so as to answer the many outstanding questions about how scientific complexities and uncertainties are covered in the news, as well as the equally compelling questions as to what the public comes to understand about the complexities and uncertainties of science and the difference that understanding, if any, makes to their lives.

Audiences’ Responses to Journalists’ Treatments

How the public actually responds to news media accounts of the complexities and uncertainties of science is clearly an area ripe for research. Many scholars, including ourselves, have asserted that the tendencies to simplify science in popular discourse can affect the public’s understanding and decision making; however, only a handful of scholars have begun to explore the actual relationship between the journalists’ treatments and public understandings and behavior.

In research that addressed the presumed importance of scientific context, participants in a focus group read two stories—one on global warming and the other on AIDS; when asked to talk about what inhibited their understanding of the issues, one of the principal problems participants mentioned was a lack of context. Building on this work, Corbett and Durfee (2004), in a laboratory experiment, manipulated the presence of scientific context in news stories and found that the inclusion of context significantly decreased audiences’ perceptions of the uncertainty of the science. Controversy injected into a story—by the inclusion of methodological criticisms and conflicting findings from other studies—significantly increased audiences’ perceived levels of uncertainty. In their conclusions, Corbett and Durfee called for additional research with particular attention to other factors that may influence public perceptions of uncertainty, including single-source stories, visuals, story structure and framing, and journalistic balancing practices that often give equal weight to scientists and nonscientists or to mainstream scientists and fringe scientists.

Research on public responses to the inclusion of research methods and caveats in news stories is also needed. In his work, Rier (1999) has suggested that there is a need to understand the circumstances under which audiences pay attention to caveats. A number of assertions have been made. Pollack (2003), for example, has suggested that scientists, when they say they don’t know everything, can be interpreted as suggesting they don’t know anything (Corbett & Durfee, 2004). But is this so? And when, if at all, does the public even attend to statements in news stories about what is unknown or uncertain? Answers to these questions might put scientists in a better position to know which caveats to emphasize in their interactions with journalists.

We began this chapter by spelling out our assumptions, including our view that journalists’ treatments of scientific complexities and uncertainties really do matter to audiences—not only with respect to their understanding of science but also with respect to their use of science in decision making. While our assumptions have face validity, little formal evidence exists to support them. Clearly, there is work to do.

Conclusions

Anyone who has read this far has likely at some point found behavioral science news, particularly its treatment of scientific complexities and uncertainties, unappetizing fare. Perhaps, in the heat of conflict, the unknowns and uncertainties have been exaggerated, or the complexities, including research methods and context, have been tossed out like so many leftovers. Or perhaps uncertain scientific claims have, as they made their way into the news media, hardened into overly simplified claims of knowledge. Whatever the case, we have worked in this chapter toward research-based solutions that might have the effect of making behavioral science news more appetizing to those scientists who, consuming the news, have felt a little queasy.

Though we believe the suggestions we have offered to be reasonable, we would be remiss if we failed to point out that not all scientists or journalists are going to agree with our proposals. Psychologist Bennett I. Bertenthal, for one, might argue that we have been entirely too finicky, at times offering suggestions for action and research that only scientists and a few among the lay public will have the stomach to digest.

In an article in the American Psychologist, Bertenthal (2002) has written,

Overemphasis on getting the specific details straight is misguided because these details usually require a level of understanding reserved for the expert but surely not available to the novices. The key is to motivate the interest of the public by helping them to understand why and how psychological research is meaningful; focusing too intensely on specific details is likely to obfuscate and confuse rather than help. (p. 217)

Perhaps for similar reasons, a television journalist surveyed by the Kinsey Institute advised sex researchers to “keep it basic. We don’t need every little detail on how you came to your findings; we just want the findings. Save the details for the actual textbooks and longer forms” (Sparks, 2006, p. 18).

If we were to take these views to heart, behavioral scientists would supply journalists with the basic ingredients when the latter are cooking up stories about their research, but that is all. To try to do much more—and to expect much more—would be only a recipe for misery.

Is this right? For many behavioral scientists and journalists, we think not, but it is hard to say. Only time—and a converging body of evidence—will tell.