B. J. Fogg, Elissa Lee, & Jonathan Marshall. The Persuasion Handbook: Developments in Theory and Practice (James Price Dillard & Michael Pfau, Eds.). Sage Publications, 2002.
Since the advent of modern computing in 1946, the uses of computing technology have expanded far beyond their initial role of performing complex calculations (Denning & Metcalfe, 1997). Today, computers are not just for scientists; they are an integral part of workplaces and homes. The diffusion of computers has led to new uses for interactive technology, including one that computing pioneers of the 1940s never imagined: using computers to change people’s attitudes and behavior—in a word, persuasion.
Recent research and new computing products illustrate interactive technology’s potential for motivating and influencing people (see, e.g., Dijkstra, Liebrand, & Timminga, 1998; Fogg, 2002; King & Tester, 1999; Lieberman, 1992; Schumann & Thorson, 1999). However, scholarly research on how computer technologies persuade is only beginning to gain momentum (Crawford, 1999). Often referred to as “captology” (Anthes, 1999; Berdichevsky & Neuenschwander, 1999; Caruso, 1997, 1998; Fogg, 1998; Khaslavsky & Shedroff, 1999), the study of computers as persuasive technologies is a relatively recent endeavor, yet interest in this area is likely to increase, expanding our theoretical understanding of how computers can persuade as well as showing the many possibilities for using computing systems to motivate and influence people.
Research on persuasive technologies is an interdisciplinary pursuit. Because computing technologies are so versatile and diverse, no single theory or academic perspective adequately captures the persuasive possibilities of interactive technologies. As a result, theories and methods from psychology, communication, design, human-computer interaction, and other disciplines inform the study of persuasive technologies. Captology not only crosses academic boundaries but also brings together academic researchers and industry innovators, creating understanding that is at once theoretical and practical.
In this chapter, we present research and perspectives about computers designed to change people’s attitudes and behaviors. We first offer definitions and examples and then present a framework that illustrates various ways in which technology can motivate and influence people. We next review the existing work on computer credibility. We conclude the chapter by outlining future directions for increasing understanding of computers as persuasive technologies.
Persuasive Technology: A Definition and Two Examples
A “persuasive technology” is any type of computing system, device, or application that was designed to change a person’s attitudes or behavior in a predetermined way (Berdichevsky & Neuenschwander, 1999; Fogg, 1998; King & Tester, 1999). One illuminating example is a product named “Baby Think It Over.” A U.S. company (http://www.btio.com) designed this computerized doll to simulate the time and energy required to care for a baby, with the purpose of persuading teens to avoid becoming parents prematurely. Used as part of many school programs in the United States, the Baby Think It Over infant simulator looks, weighs, and cries something like a real baby. The computer embedded inside the doll triggers a crying sound at random intervals; to stop the crying sound, the teen caregiver must pay immediate attention to the doll. If the caregiver fails to respond appropriately, the computer embedded inside the doll records the neglect. After a few days of caring for the simulated infant, the teenager generally reports less interest in becoming a parent in the near future (see http://www.btio.com/btiostud.htm), which—along with reduced teen pregnancy rates—is the intended outcome of the device.
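To make the doll’s mechanism concrete, the following sketch models the logic just described: crying episodes occur over several days, and any cry the caregiver fails to answer is recorded as neglect. This is a minimal illustration only; the class name, the number and frequency of cries, and the response probability are our own assumptions, not details of the actual product.

```python
import random

class InfantSimulator:
    """A minimal sketch of the doll's embedded logic; all values are assumed."""

    def __init__(self, days=3, cries_per_day=8, response_rate=0.9):
        self.days = days                    # length of the caregiving assignment
        self.cries_per_day = cries_per_day  # assumed frequency of crying episodes
        self.response_rate = response_rate  # chance the teen responds in time
        self.neglect_events = 0

    def run(self):
        # Each episode, the doll cries; an unanswered cry is logged as neglect.
        for _ in range(self.days * self.cries_per_day):
            if random.random() >= self.response_rate:
                self.neglect_events += 1    # the doll records the neglect
        return self.neglect_events

if __name__ == "__main__":
    doll = InfantSimulator()
    print(f"Neglect events recorded over {doll.days} days: {doll.run()}")
```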
Another example of a persuasive technology is a CD-ROM titled “5 a Day Adventures” (http://www.dole5aday.com). Created by Dole Foods, this computer application was designed to persuade kids to eat more fruits and vegetables. Using 5 a Day Adventures, children enter a virtual world with characters such as “Bobby Banana” and “Pamela Pineapple,” who teach kids about nutrition and coach them to make healthy food choices.
Not all computing technologies are persuasive; in fact, only a small subset of today’s computing technologies fit into this category. A computer qualifies as a persuasive technology when the people who create the product do so with an intent to change attitudes or behaviors in a predetermined way (Berdichevsky & Neuenschwander, 1999; Fogg, 1998, 2002). This point about intentionality may seem subtle, but it is not trivial. Intentionality determines whether behavior or attitude change is a side effect or a planned effect of a technology. While acknowledging unintended side effects, captology focuses on the planned persuasive effects of using computer technologies.
Applications of Persuasive Technology Systems
Using computing technology to change attitudes and behaviors has applications in various domains. Areas that have shown considerable promise include health interventions (Binik, Westbury, & Servan-Schreiber, 1989; Bosworth, Gustafson, & Hawkins, 1994; Lieberman, 1992, 1997; Marshall & Maguire, 1971; Parise, Kiesler, Sproull, & Waters, 1999; Reardon, 1987; Schneider, Walter, & O’Donnell, 1990; Street, Gold, & Manning, 1997), business applications such as buying and branding (Anthes, 1999; Davis, 1999; Demaree, 1987; Feldstein & Kruse, 1998; Gopal, 1997; Henke, 1999; Karson, 1998; Lohse & Spiller, 1998; Nowak, Shamp, Hollander, & Cameron, 1999; Risden, 1998; Rowley, 1995; Schlosser & Kanifer, 1999), and education and training (Hennessy & O’Shea, 1993; Lester, 1997; Sampson, 1992; Stoney & Wild, 1998). Less developed application areas for persuasive technologies include ecology, personal management and improvement, occupational productivity, social activism and politics, and safety (Fogg, 1998; King & Tester, 1999).
Although the applications for persuasive technologies are diverse, the computing products for these applications share predictable similarities. The framework we present in the next section provides one way to identify the similarities and differences among persuasive technologies. In a larger sense, the framework provides a way to organize research and understanding in this domain.
A Framework for Persuasive Technology: The Functional Triad
Computers play many roles, some of which go unseen and unnoticed. From a user’s perspective, computers function in three basic ways: as tools, as media, and as social actors. During the past two decades, researchers and designers have discussed variants of these functions, usually as metaphors for computer use (e.g., Kay, 1984; Verplank, Fulton, Black, & Moggridge, 1993). However, these three categories are more than metaphors; they are basic ways in which people view or respond to computing technologies.
Described in more detail elsewhere (Fogg, 1999, 2002), the Functional Triad is a framework that makes explicit these three computer functions: tools, media, and social actors. First, as this framework suggests, computer applications or systems function as tools, providing users with new abilities or powers. Using computers as tools, people can do things they could not do before or can do things more easily.
The Functional Triad also suggests that computers function as media, a role that grew dramatically during the 1990s as computers became increasingly powerful in displaying graphics and in exchanging information over a network such as the World Wide Web. As a medium, a computer can convey either symbolic content (e.g., text, data graphs, icons) or sensory content (e.g., real-time video, virtual worlds, simulation).
Finally, computers also function as social actors. Empirical research demonstrates that people form social relationships with technologies (Reeves & Nass, 1996). Although the precise causal factors for these social responses have yet to be outlined in detail, one could reasonably hypothesize that users respond socially when computers do at least one of the following: (a) adopt animate characteristics (e.g., physical features, emotions, voice communication), (b) play animate roles (e.g., coach, pet, assistant, opponent), or (c) follow social rules or dynamics (e.g., greetings, apologies, turn taking).
The Functional Triad is not a theory; it is a framework for analysis and design. In all but the most extreme cases, a single interactive technology is a mix of these three functions, combining them to create an overall user experience. In captology, the Functional Triad is useful because it helps to show how computer technologies can employ different techniques for changing attitudes and behaviors. For example, computers as tools persuade differently from computers as social actors. The strategies and theories that apply to each function differ. This chapter next describes aspects of persuasive technology, organizing the content according to the three elements in the Functional Triad.
Computers as Persuasive Tools
In general, computers as persuasive tools induce attitude and behavior changes by increasing a person’s abilities or making something easier to do (Tombari, Fitzpatrick, & Childress, 1985). Although one could propose numerous possibilities for persuasion in this manner, here we suggest four general ways in which computers persuade as tools: by (a) increasing self-efficacy, (b) providing tailored information, (c) triggering decision making, and (d) simplifying or guiding people through a process.
Computers That Increase Self-Efficacy. Computers can increase self-efficacy (Lieberman, 1992), an important contributor to attitude and behavior change processes. Self-efficacy describes individuals’ beliefs in their ability to take successful action in specific domains (Bandura, 1997; Bandura, Georgas, & Manthouli, 1996). When people perceive high self-efficacy in a given domain, they are more likely to take action. And because self-efficacy is a perceived quality, even if individuals merely believe that their actions are more effective and productive (perhaps because they are using a specific computing technology), they are more likely to perform a particular behavior (Bandura, 1997; Bandura et al., 1996). As a result, functioning as tools, computing technologies can make individuals feel more efficient, productive, in control, and generally effective (DeCharms, 1968; Kernal, 2000; Pancer, George, & Gebotys, 1992). For example, a heart rate monitor may help people to feel more effective in meeting their exercise goals when it provides ongoing information on heart rate and calories burned. Without the heart rate monitor, people could still take their pulse and calculate calories, but the computer device—whether it be worn or part of the exercise machinery—makes these tasks easier. The ease of tracking heart rate and calories burned likely increases self-efficacy in fitness behavior, making it more likely that individuals will continue to exercise (Brehm, 1997; Strecher, DeVellis, Becker, & Rosenstock, 1986; Thompson, 1992).
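As a concrete illustration of the kind of ongoing feedback such a monitor provides, here is a small sketch that turns heart-rate samples into a calorie estimate. The linear conversion and its constant are simplified assumptions made for illustration, not a published formula; real monitors fit models to age, weight, sex, and fitness level.

```python
def estimated_calories(heart_rates_bpm, minutes_per_sample=1.0, kcal_per_beat=0.1):
    """Roughly estimate energy burned from a series of heart-rate samples.

    kcal_per_beat is an illustrative constant, not a validated coefficient.
    """
    total_beats = sum(hr * minutes_per_sample for hr in heart_rates_bpm)
    return total_beats * kcal_per_beat

# A 30-minute workout in which heart rate climbs from 90 to 148 bpm.
workout = [90 + 2 * minute for minute in range(30)]
print(f"Estimated calories burned: {estimated_calories(workout):.0f} kcal")
```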
Computers That Provide Tailored Information. Next, computers act as tools when they tailor information, offering people content that is pertinent to their needs and contexts. Compared to general information, tailored information increases the potential for attitude and behavior change (Beniger, 1987; Dijkstra et al., 1998; Jimison, Street, & Gold, 1997; Nowak et al., 1999; Strecher et al., 1999; Strecher et al., 1994).
One notable example of a tailoring technology is the Chemical Scorecard Web site (http://www.scorecard.org), which generates information according to an individual’s geographical location in order to achieve a persuasive outcome. After people enter their zip code into this Web site, the Web technology reports on chemical hazards in their neighborhood, identifies companies that create those hazards, and describes the potential health risks. Although no published studies document the persuasive effects of this particular technology, outside research and analysis suggest that making information relevant to individuals increases their attention and arousal, which can ultimately lead to increased attitude and behavior change (Beniger, 1987; MacInnis & Jaworski, 1989; MacInnis, Moorman, & Jaworski, 1991; Strecher et al., 1999).
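A minimal sketch of this tailoring pattern appears below: look up content keyed to the user’s location and fall back to generic content when no match exists. The data, company names, and function are hypothetical; the chapter does not describe how the Scorecard site is actually implemented.

```python
# Hypothetical data; the real site draws on environmental databases.
HAZARDS_BY_ZIP = {
    "94305": [("Acme Plating Co.", "chromium compounds", "respiratory risk")],
    "60614": [("Lakeside Refinery", "benzene", "carcinogen")],
}

def tailored_report(zip_code):
    """Return content tailored to the user's location, or a generic fallback."""
    hazards = HAZARDS_BY_ZIP.get(zip_code)
    if not hazards:
        return f"No local hazard data on file for {zip_code}."
    lines = [f"Chemical hazards reported near {zip_code}:"]
    for company, chemical, risk in hazards:
        lines.append(f"  - {company}: releases {chemical} ({risk})")
    return "\n".join(lines)

print(tailored_report("94305"))
```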
Computers That Trigger Decision Making. Technology can also influence people by triggering or cueing a decision-making process. For example, today’s Web browsers launch a new window to alert people before they send information over insecure network connections. The message window serves as a signal to consumers to rethink their planned actions. A similar example exists in a very different context. Cities concerned with automobile speeding in neighborhoods can use a standalone radar trailer that senses the velocity of an oncoming automobile and displays that speed on a large screen. This technology is designed to trigger a decision-making process regarding driving speed.
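The radar trailer’s persuasive move is simply to mirror a measurement back to the driver at the moment a decision can still be made. A sketch of that feedback logic follows; the message wording and threshold behavior are our own assumptions, not a description of any particular trailer.

```python
def trailer_display(measured_mph: int, posted_limit_mph: int) -> str:
    """Compose the feedback a radar trailer might show an oncoming driver."""
    message = f"YOUR SPEED: {measured_mph} MPH / LIMIT: {posted_limit_mph} MPH"
    if measured_mph > posted_limit_mph:
        message += "  --  SLOW DOWN"  # the cue meant to trigger reconsideration
    return message

print(trailer_display(41, 25))
```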
Interactive technologies that provide cues at strategic places and times fit well within the cognitive and affective response systems that humans already possess. Research has shown the effect of cues in attitude formation (Petty & Cacioppo, 1986) as well as how people use cues to assess their environment and their own feelings quickly (Petty, Cacioppo, Sedikides, & Strathman, 1988; Petty, Schumann, Richman, & Strathman, 1993).
Computers That Simplify or Guide People Through a Process. By facilitating or simplifying a process for users, technology can minimize barriers that may impede a target behavior. For example, in the context of Web commerce, technology can simplify a multi-step process down to a click of the mouse. Typically, to purchase something online, a consumer needs to select an item, place it in a virtual shopping cart, proceed to checkout, enter personal and billing information, and verify an order confirmation. Amazon.com and other e-commerce companies have simplified this process by storing customer information so that consumers need not reenter information for every transaction. By lowering the time commitment and reducing the steps needed to accomplish a goal, these companies have reduced the barriers to purchasing products from their sites. The principle used by Web and other computer technology (Todd & Benbasat, 1994) is similar to the dynamic that Ross and Nisbett (1991) discussed on facilitating behaviors through modifying the situation.
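The simplification described here amounts to caching what the user has already supplied so that later purchases collapse into a single action. Below is a minimal sketch of that idea; the types, field names, and flow are illustrative assumptions, not Amazon.com’s actual design.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    shipping_address: str
    payment_token: str  # stored from an earlier transaction

def one_step_purchase(customer: Customer, item: str) -> dict:
    """Reuse stored details so a repeat purchase takes a single action."""
    return {
        "item": item,
        "ship_to": customer.shipping_address,  # no address re-entry
        "charge": customer.payment_token,      # no billing re-entry
    }

alice = Customer("Alice", "123 Main St, Palo Alto, CA", "tok_1a2b3c")
print(one_step_purchase(alice, "paperback book"))
```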
In addition to reducing barriers for a target behavior, computers can lead people through processes to help them change attitudes and behaviors (Muehlenhard, Baldwin, Bourg, & Piper, 1988; Tombari et al., 1985). For example, a computer nutritionist can guide individuals through a month of healthy eating by providing recipes for each day and grocery lists for each week. In general, by following a computer-led process, users (a) are exposed to information they might not have seen otherwise and (b) are engaged in activities they might not have done otherwise (Fogg, 2002).
Computers as Persuasive Media
The next area of the Functional Triad deals with computers as persuasive media. Although media can mean many things, here we focus on the power of computer simulations. In this role, computer technology provides people with experiences, either firsthand or vicarious. By providing simulated experiences, computers can change people’s attitudes and behaviors. Outside the world of computing, experiences have a powerful impact on people’s attitudes, behaviors, and thoughts (Reed, 1996). Experiences offered via interactive technology have similar effects (Bullinger, Roessler, & Mueller-Spahn, 1998; Fogg, 2002). In what follows, we describe three types of persuasive computer simulations.
Computers That Simulate Cause and Effect. One type of computer simulation allows users to vary the inputs and observe the effects (Hennessy & O’Shea, 1993)—what we call “cause-and-effect simulators.” The key to effective cause-and-effect simulators is their ability to demonstrate the consequence of actions immediately and credibly (Alessi, 1991; Balci, 1986, 1998; Crosbie & Hay, 1978; de Jong, 1991; Hennessy & O’Shea, 1993; Zietsman & Hewson, 1986). These computer simulations give people firsthand insight into how inputs (e.g., putting money into a savings account) affect an output (e.g., accrued retirement savings). By allowing people to explore causes and effects of situations, these computer simulations can shape attitudes and behaviors.
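The savings example lends itself to a short worked sketch of a cause-and-effect simulator: the user varies an input (the monthly deposit) and immediately sees the effect on the output (the projected balance). The compounding model below is the standard textbook one; the specific rates and amounts are arbitrary illustrations.

```python
def retirement_balance(monthly_deposit: float, annual_rate: float, years: int) -> float:
    """Project savings growth with monthly deposits and monthly compounding."""
    balance = 0.0
    monthly_rate = annual_rate / 12
    for _ in range(years * 12):
        balance = balance * (1 + monthly_rate) + monthly_deposit
    return balance

# Vary the input; show the consequence immediately and credibly.
for deposit in (100, 200, 400):
    print(f"${deposit}/month for 30 years at 6%: "
          f"${retirement_balance(deposit, 0.06, 30):,.0f}")
```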
Computers That Simulate Environments. A second type of computer simulation is the “environment simulator.” These simulators are designed to provide users with new surroundings, usually through images and sound. In these simulated environments, users have experiences that can lead to attitude and behavior change (Bullinger et al., 1998), including experiences that are designed as games or explorations (Lieberman, 1992; Schlosser & Kanifer, 1999; Schneider et al., 1990; Woodward, Carnine, & Davis, 1986).
The efficacy of this approach is demonstrated by research on the Tectrix Virtual Reality Bike (an exercise bike that includes a computer and monitor that shows a simulated world). Porcari, Zedaker, and Maldari (1998) found that people using an exercise device with a computer simulation of a passing landscape exercised harder than those who used an exercise device without simulation. Both groups, however, felt that they had exerted themselves a similar amount. This effect of simulating an outdoor experience mirrors findings from other research: people exercise harder when outside than when inside a gym (Ceci & Hassmen, 1991).
Environment simulators can also change attitudes. Using a virtual reality environment in which people saw and felt a simulated spider, Carlin, Hoffman, and Weghorst (1997) were able to decrease participants’ fear of spiders. In this research, participants wore a head-mounted display that immersed them in a virtual room, and they were able to control both the number of spiders and their proximity. In this case study, Carlin and colleagues found that the virtual reality treatment reduced the fear of spiders in the real world. Other similar therapies have been used for fear of flying (Klein, 1999; Wiederhold, Davis, Wiederhold, & Riva, 1998), agoraphobia (Ghosh & Marks, 1987), claustrophobia (Bullinger et al., 1998), and fear of heights (Bullinger et al., 1998), among others (Kirby, 1996).
Computers That Simulate Objects. The third type of computer simulation is the “object simulator.” These are computerized devices that simulate an object (as opposed to an environment). The “Baby Think It Over” infant simulator described at the beginning of this chapter is one such device. Another example is a specially equipped car, created by Chrysler Corporation, designed to help teens experience the effect of alcohol on their driving. Used as part of high school programs, teen drivers first navigate the special car under normal conditions. Then the operator activates an onboard computer system that simulates how an inebriated person would drive—sluggish brakes, inaccurate steering, and so on. This computer-enhanced car provides teens with an experience designed to change attitudes and behaviors about drinking and driving. Although the sponsors of this car do not measure the impact of this intervention, the anecdotal evidence is compelling (Machrone, 1998).
Computers as Persuasive Social Actors
The final corner of the Functional Triad focuses on computers as persuasive social actors, a view of computers that has only recently become widely recognized. Past empirical research has shown that individuals form social relationships with technology even when the stimulus is rather impoverished (Fogg, 1997; Marshall & Maguire, 1971; Moon & Nass, 1996; Muller, 1974; Nass, Fogg, & Moon, 1996; Nass, Moon, Fogg, Reeves, & Dryer, 1995; Nass & Steuer, 1993; Nass, Moon, Morkes, Eun-Young, & Fogg, 1997; Parise et al., 1999; Quintanar, Crowell, & Pryor, 1982; Reeves & Nass, 1996). For example, individuals share reciprocal relationships with computers (Fogg & Nass, 1997a; Parise et al., 1999), can be flattered by computers (Fogg & Nass, 1997b), and are polite to computers (Nass, Moon, & Carney, 1999).
Laboratory experiments have demonstrated how computers can be persuasive social actors (Fogg, 1997; Fogg & Nass, 1997a, 1997b; Nass, Fogg, & Moon, 1996). In particular, computers as social actors can persuade people to change their attitudes and behaviors by (a) providing social support, (b) modeling attitudes or behaviors, and (c) leveraging social rules and dynamics.
Computers That Provide Social Support. Computers can provide a form of social support in order to persuade, a dynamic that has long been observed in human-human interactions (Jones, 1990). While the potential for effective social support from computer technology has yet to be fully explored, a small set of empirical studies provides evidence for this phenomenon (Fogg, 1997; Fogg & Nass, 1997b; Nass, Fogg, & Moon, 1996; Reeves & Nass, 1996). For example, computing technology can influence individuals by providing praise or criticism, thus manipulating levels of social support (Fogg & Nass, 1997b; Muehlenhard et al., 1988).
Outside of the research context, various technology products use the power of praise to influence users. For example, the Dole 5 a Day Adventures CD-ROM, discussed earlier, uses a cast of more than 30 on-screen characters to provide social support to users who perform various activities. Characters such as Bobby Banana and Pamela Pineapple praise individuals for checking labels on virtual frozen foods, for following guidelines from the food pyramid, and for creating a nutritious virtual salad.
Computers That Model Attitudes and Behaviors. In addition to providing social support, computer systems can persuade by modeling target attitudes and behaviors. In the natural world, people learn directly through firsthand experience and indirectly through observation (Bandura, 1997). When a behavior is modeled by an attractive individual or is shown to result in positive consequences, people are more likely to enact that behavior (Bandura, 1997). Lieberman’s (1997) research on a computer game designed to model health maintenance behaviors shows the positive effects that an on-screen cartoon model had on those who played the game. In a similar way, the product “Alcohol 101” (http://www.centurycouncil.org/underage/education/a101.cfm) uses navigable on-screen video clips of human actors dealing with problematic situations that arise during college drinking parties. The initial studies on the Alcohol 101 intervention show positive outcomes (Reis, 1998). In the future, computer-based characters, whether artistically rendered or video images, are increasingly likely to serve as important models for attitudes and behaviors.
Computers That Leverage Social Rules and Dynamics. Computers have also been shown to be effective persuasive social actors when they leverage social rules and dynamics (Fogg, 1997; Friedman & Grudin, 1998; Marshall & Maguire, 1971; Parise et al., 1999). These rules include turn taking, politeness norms, and sources of praise (Reeves & Nass, 1996). The rule of reciprocity—that we must return favors to others—is among the most powerful social rules (Gouldner, 1960) and is one that has been shown to also have force when people interact with computers. Fogg and Nass (1997a) showed that people performed more work and better work for a computer that assisted them on a previous task. In essence, users reciprocated help to a computer. On the retaliation side, the inverse of reciprocity, the research showed that people performed lower quality work for a computer that had served them poorly in a previous task. In a related vein, Moon (1998) found that individuals followed rules of impression management when interacting with a computer. Specifically, when individuals believed that the computer interviewing them was in the same room, they provided more honest answers than did individuals who interacted with a computer believed to be a few miles away. In addition, participants were more persuaded by the “proximate” computer.
The preceding paragraphs outline some of the early demonstrations of computers as social actors that motivate and influence people in predetermined ways, often paralleling findings from long-standing research on human-human interaction.
Functional Triad Summary
The Functional Triad is a useful framework for the study of computers as persuasive technologies. It makes explicit how a technology can change attitudes and behaviors—either by increasing a person’s capability, by providing users with an experience, or by leveraging the power of social relationships. Each of these paths suggests related persuasion strategies, dynamics, and theories. One element that is common to all three functions is the role of credibility. Credible tools, credible media, and credible social actors all will lead to increased power to persuade. In the next section, we discuss the elements of computer credibility.
Computers and Credibility
One key issue in captology is computer credibility, a topic that suggests questions such as “Do people find computers to be credible sources?” “What aspects of computers boost credibility?” and “How do computers gain and lose credibility?” Understanding the elements of computer credibility promotes a deeper understanding of how computers can change attitudes and behaviors, as credibility is a key element in many persuasion processes (Gahm, 1986; Lerch, Prietula, & Kulik, 1997).
In this section, we address two aspects of computer credibility. First, we discuss computer credibility in general—what computer credibility is and what the existing literature says about this topic. Next, we look specifically at computer credibility as it relates to the World Wide Web, an area of increasing importance as more computing information and experiences become Web based (Caruso, 1999).
What is Credibility?
Credibility has been a topic of social science research since the 1930s (for reviews, see Petty & Cacioppo, 1981; Self, 1996). Virtually all credibility researchers have described credibility as a perceived quality made up of multiple dimensions (e.g., Buller & Burgoon, 1996; Gatignon & Robertson, 1991; Petty & Cacioppo, 1981; Self, 1996; Stiff, 1994). This description has two key components germane to computer credibility. First, credibility is a perceived quality; it does not reside in an object, a person, or a piece of information. Therefore, in discussing the credibility of a computer product, one is always discussing the perception of credibility for the computer product.
Next, researchers generally agree that credibility perceptions result from evaluating multiple dimensions simultaneously. Although the literature varies on exactly how many dimensions contribute to the credibility construct, the majority of researchers identify trustworthiness and expertise as the two key components of credibility (Self, 1996). Trustworthiness, a key element in the credibility calculus, is described by terms such as well-intentioned, truthful, and unbiased. The trustworthiness dimension of credibility captures the perceived goodness or morality of the source. Expertise, the other dimension of credibility, is described by terms such as knowledgeable, experienced, and competent. The expertise dimension of credibility captures the perceived knowledge and skill of the source.
Extending research on credibility to the domain of computers, it has been proposed that highly credible computer products will be perceived to have high levels of both trustworthiness and expertise (Fogg & Tseng, 1999). In evaluating credibility, a computer user assesses the computer product’s trustworthiness and expertise to arrive at an overall credibility assessment.
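One way to make this two-dimensional account concrete is a toy model that combines the two perceived qualities into a single judgment. The equal weighting below is purely our illustrative assumption; the chapter specifies no formula, saying only that users evaluate both dimensions.

```python
def perceived_credibility(trustworthiness: float, expertise: float) -> float:
    """Combine the two dimensions (each rated 0 to 1) into one judgment.

    The equal weighting is an illustrative assumption, not a finding.
    """
    return (trustworthiness + expertise) / 2

# A product seen as well-intentioned (0.9) but inexperienced (0.4):
print(perceived_credibility(trustworthiness=0.9, expertise=0.4))  # -> 0.65
```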
Overview of Computer Credibility Research
The research relating to computer credibility is often obscured by semantic issues (Fogg & Tseng, 1999). A number of studies do not use the term credibility but instead use phrases such as “trust in the information,” “believe the output,” and “trust in the advice” (see, e.g., Kantowitz, Hanowski, & Kantowitz, 1997; Muir, 1994; Muir & Moray, 1996). These phrases are essentially synonymous with credibility; they refer to the same psychological construct.
In some situations, computer credibility is not an issue for those who use computers. Sometimes, the computer system is invisible to users (e.g., a fuel injection system), or users do not question the device’s competence or bias (e.g., a pocket calculator). But in many situations, computer credibility matters a great deal (Sampson et al., 1992). Tseng and Fogg (1999) proposed that computer credibility matters when computing products do one of eight things: act as knowledge sources, instruct or tutor users, act as decision aids, report measurements, run simulations, render virtual environments, report on work performed, or report about their own state.
Although computer credibility is relevant to computer users in these eight areas, a relatively small body of research addresses perceptions of credibility in human-computer interactions. In what follows, we draw on the work of Fogg and Tseng (1999) to describe how previous research on computer credibility clusters into six domains.
Cluster 1: The Credible Computer Myth
One cluster of research investigates the notion that people automatically assume computers are credible. In framing these studies, the authors state that people perceive computers as “magical” (Bauhs & Cooke, 1994; Hennessy & O’Shea, 1993) with an “‘aura’ of objectivity” (Andrews & Gutkin, 1991), as having a “scientific mystique” (Andrews & Gutkin, 1991), as “awesome thinking machines” (Pancer et al., 1992), as “infallible” (Kerber, 1983), as having “superior wisdom” (Sheridan, Vamos, & Aida, 1983), and as “faultless” (Sheridan et al., 1983). In sum, researchers have long suggested that people generally are in “awe” of computers (Honaker, Hector, & Harrell, 1986) and that people “assign more credibility” to computers than to humans (Andrews & Gutkin, 1991). In addition, anecdotal experience suggests that—at least during one period in history—computers were perceived by the general public as virtually infallible (Pancer et al., 1992; Sheridan et al., 1983).
But what does the empirical research show? Most studies that directly examine assumptions about computer credibility conclude that computers are not perceived as more credible than human experts (Andrews & Gutkin, 1991; Honaker et al., 1986; Matarazzo, 1986; Northcraft & Earley, 1989; Pancer et al., 1992). In some cases, computers may be perceived as less credible (Lerch & Prietula, 1989; Rieh & Belkin, 1998; Waern & Ramberg, 1996). Although anecdotal evidence suggests that people perceive computers as more credible than humans in some situations, little solid empirical evidence supports this notion (for exceptions, see Dijkstra et al., 1998; Ingle, 1975). Future research is needed to reconcile anecdotal experience with the majority of research findings, which have largely failed to document that people assume computers to be highly credible.
Cluster 2: Dynamics of Computer Credibility
Another cluster of research examines the dynamics of computer credibility—how it is gained, how it is lost, and how it can be regained. Some studies demonstrate what is highly intuitive: Computers gain credibility when they provide information that users find accurate or correct (Amoroso, Taylor, Watson, & Weiss, 1994; Hanowski, Kantowitz, & Kantowitz, 1994; Kantowitz et al., 1997; Muir & Moray, 1996); conversely, computers lose credibility when they provide information that users find erroneous (Kantowitz et al., 1997; Lee, 1991; Muir & Moray, 1996). Although these conclusions seem obvious, we find this research valuable because it represents the first empirical evidence for these ideas. Other findings on the dynamics of credibility are less obvious; we summarize them in the following paragraphs.
Effects of Computer Errors. A few studies have investigated the effects of computer errors on perceptions of computer credibility. Although researchers acknowledge that a single error may severely damage computer credibility in certain situations (Kantowitz et al., 1997), no study has clearly documented this effect. In fact, in some research, error rates as high as 30% did not cause users to dismiss an on-board automobile navigation system (Fox, 1998; Hanowski et al., 1994; Kantowitz et al., 1997). In other situations, an error rate of this size would likely not be acceptable.
Impact of Small Errors. Another research area has been the effects of large and small errors on credibility. Virtually all researchers agree that computer errors damage credibility—at least to some extent. One study demonstrated that large errors hurt credibility perceptions more than did small errors, but not in proportion to the gravity of the error (Lee, 1991; Lee & Moray, 1992). Another study showed no difference between the effects of large and small mistakes on credibility (Kantowitz et al., 1997). Findings from these studies and other work (Muir & Moray, 1996) suggest that small computer errors have disproportionately large effects on perceptions of credibility.
Regaining Credibility. Researchers have also examined how computer products can regain credibility (Lee & Moray, 1992). Two paths are documented in the literature. First, the computer product may regain credibility by providing good information over a period of time (Hanowski et al., 1994; Kantowitz et al., 1997). Or, the computer product may regain some credibility by continuing to make the identical error; users then learn to anticipate and compensate for the persistent mistake (Muir & Moray, 1996). In either case, regaining credibility is difficult, especially from a practical standpoint. Once users perceive that a computer product lacks credibility, they are likely to stop using it, which provides no opportunity for the product to regain credibility (Muir & Moray, 1996).
Cluster 3: Situational Factors that Affect Credibility
The credibility of a computer product does not always depend on the computer product itself. The context of computer use can also affect credibility. The existing research shows that three related situations increase computer credibility. First, in unfamiliar situations, people give more credence to a computer product that orients them (Muir, 1987). Next, computer products have more credibility after people have failed to solve a problem on their own (Waern & Hagglund, 1992). Finally, computer products seem more credible when people have a strong need for information (Hanowski et al., 1994; Kantowitz et al., 1997). Other situations are likely to affect the perception of computer credibility as well, such as situations with varying levels of risk, situations with forced choices, and situations with different levels of cognitive load. However, research is lacking on these points.
Cluster 4: User Variables that Affect Credibility
Although individual differences among users likely affect perceptions of computer credibility in many ways, the extant research allows us to draw two general conclusions. First, users who are familiar with the content will evaluate the computer product more stringently (Honaker et al., 1986; Kantowitz et al., 1997; Lerch & Prietula, 1989). Conversely, those who are not familiar with the subject matter are more likely to view the computer product as more credible (Waern & Hagglund, 1992; Waern & Ramberg, 1996). These findings match credibility research outside of human-computer interaction (Gatignon & Robertson, 1991; Self, 1996; Zajonc, 1980).
Next, researchers have investigated how user acceptance of computer advice changes when users understand how the computer arrives at its conclusions. One study showed that knowing more about the computer actually reduced users’ perception of computer credibility (Bauhs & Cooke, 1994). However, other researchers have shown the opposite to be the case; users were more inclined to view a computer as credible when they understood how it worked (Lee, 1991; Lerch & Prietula, 1989; Miller & Larson, 1992; Muir, 1987).
Cluster 5: Visual Design and Credibility
Another line of research has investigated the effects of interface design on computer credibility (Friedman & Grudin, 1998; Kim & Moon, 1997). These experiments have shown that—at least in laboratory settings—certain interface design features, such as cool (as opposed to warm) color tones and balanced layout, can enhance users’ perceptions of interface trustworthiness. Although these design implications may differ according to users, cultures, and target applications, this research sets an important precedent in studying the effects of interface design elements on perceptions of trustworthiness and credibility.
Cluster 6: Human Credibility Markers in Human-Computer Interaction
An additional research strategy has been to investigate how credibility findings from human-human interactions apply to human-computer interactions. Various researchers have taken this approach (Burgoon et al., 2000; Fogg, 1997; Kim & Moon, 1997; Muir, 1987; Quintanar et al., 1982; Reeves & Nass, 1996), as discussed earlier in the section on computers as persuasive social actors. A handful of such studies have measured credibility as an outcome of various experimental manipulations. In what follows, we describe two lines of research as they relate to the credibility of technology devices.
Affiliation Effects. In most situations, people find members of their “in-groups” (e.g., those from the same company or the same team) to be more credible than people who belong to “out-groups” (Mackie, Worth, & Asuncion, 1990). Researchers demonstrated that this dynamic also held true when people interacted with a computer they believed to be a member of their in-group (Fogg, 1997; Nass, Fogg, & Moon, 1996). Specifically, users reported the in-group computer’s information to be of higher quality, and they were more likely to follow the computer’s advice.
Labeling Effects. Titles that denote expertise (e.g., Dr., Professor) make people seem more credible (Cialdini, 1993). Applying this phenomenon to the world of technology, researchers labeled a technology as a “specialist.” This study showed that people perceived the device labeled as a specialist to be more credible than the device labeled as a generalist (Nass, Reeves, & Leshner, 1996; Reeves & Nass, 1996).
In addition to the preceding lines of research, other human-human credibility dynamics are likely to apply to human-computer interaction. Outlined elsewhere (Fogg, 1997), the possibilities include the following principles to increase computer credibility:
- Physical attractiveness (Byrne, 1971; Chaiken, 1979): making the computing device or interface attractive
- Association (Cialdini, 1993): associating the computer with desirable things or people
- Authority (Gatignon & Robertson, 1991; Zimbardo & Leippe, 1991): establishing the computer as an authority figure
- Source diversification (Gatignon & Robertson, 1991; Harkins & Petty, 1981): using a variety of computers to offer the same information
- Nonverbal cues (Larson, 1995): endowing computer agents with nonverbal markers of credibility
- Familiarity (Gatignon & Robertson, 1991; Self, 1996; Zajonc, 1980): increasing the familiarity of computer products
- Social status (Cialdini, 1993): increasing the status of a computer product
Research has yet to specifically show how the preceding principles—which are powerful credibility enhancers in human-human interactions—might be implemented in computing systems (Fogg, 1998).
We now turn our attention to credibility perceptions of the World Wide Web, an increasingly important aspect of computers as persuasive technologies.
Credibility and the World Wide Web
The nearly nonexistent barriers to publishing material on the World Wide Web have made the Internet a repository for all types of information, including misinformation. As a result, credibility has become a major concern for those seeking or posting information on the Web (Caruso, 1999; Johnson & Kaye, 1998; Kilgore, 1998; McDonald, 1999; Nielsen, 1997). During the second half of the 1990s, librarians, designers, and researchers addressed these problems in different ways. The existing literature on Web credibility can therefore be divided into three categories: (a) evaluation guidelines, (b) design guidelines, and (c) research findings on credibility evaluations.
Evaluation Guidelines on Web Credibility. The first category of Web credibility literature is clearly the most plentiful: guidelines on how to evaluate sources. Often discussed under the label of “information quality,” this aspect of Web credibility has been embraced by librarians and others. They see themselves as having key skills to evaluate Web sources and to train others to do so (Tillman, 2000). As a result, many excellent guides exist to help students and researchers evaluate the information they find online (e.g., Caywood, 1999; Grassian, 1998; Rosenfeld, 1994; Smith, 1997; Stoker & Cooke, 1995; Tate & Alexander, 1996; Tillman, 2000; Wilkinson, 1997). The creators of these guidelines often have adapted evaluation strategies for other media and applied them to the Web, an approach that includes examining elements such as purpose, authority, scope, audience, cost, and format (Katz, 1992).
Design Guidelines on Web Credibility. The second category of Web credibility literature takes a different approach. While the information is still prescriptive in nature, the aim is to help designers create Web sites that convey maximum credibility to users. In essence, these are design guidelines. For example, in his online column (http://www.useit.com), Nielsen has addressed the issue of designing for Web credibility (e.g., Nielsen, 1997; Nielsen, 1999a, 1999b). Other designers and researchers have also suggested approaches to make Web sites more credible or trustworthy (Cheskin Research & Studio Archetype, 1999; Johnson, 1999). There is no universal consensus on how to design for credibility, but most sources discuss the importance of elements such as attractive layouts, intuitive navigation systems, and clear presentation of material (Dormann, 1997).
Research on How People Assess Web Credibility. The third category of Web credibility literature is also the least common: research studies that examine how people evaluate the credibility of Web sites (Cheskin Research & Studio Archetype, 1999; Critchfield, 1998; Eighmey, 1997; Fogg et al., 2000; Rieh & Belkin, 1998). In one small study, notable because so few exist, Critchfield (1998) tentatively concluded that users’ “perception of the credibility of a resource was influenced by an aesthetically pleasing, usable Web site design.” With similar intent, Morkes and Nielsen (1997) conducted a study examining how writing style on the Web affected user responses, including credibility impressions. Although not statistically based, this work concluded that objective writing (as opposed to promotional writing) enhances credibility.
A larger study by Cheskin Research and Studio Archetype (1999), two commercial firms in the Silicon Valley area of California, examined “e-commerce trust”—a related, but not identical, construct to Web credibility. This study consisted of 138 participants and found six important elements that gave people confidence to transact business with Web sites: (a) brand (“the company’s reputation”), (b) navigation (“ease of finding what the user seeks”), (c) fulfillment (“the process users experience from when they begin a purchase until they receive a shipment”), (d) presentation (“how the site communicates meaningful information”), (e) technology (“ways in which the site functions technically”), and (f) seals of approval (“symbols that represent companies that assure the safety of Web sites”).
Building on research described previously, Fogg and colleagues (2000) collaborated with industry partners to conduct an online study focusing on perceptions of Web credibility (http://www.webcredibility.org). This study consisted of more than 1,400 participants and examined 51 elements relating to credibility evaluations. The data suggest five major conclusions: (a) Web sites gain credibility when they convey a real-world presence (e.g., listing a physical address or a phone number); (b) even small errors (e.g., typos, broken links) hurt credibility substantially; (c) ease of navigation leads to enhanced perceptions of credibility; (d) Web ads that distract or confuse reduce credibility, while other ads can enhance credibility; and (e) technical problems weaken credibility.
Taken together, these research studies suggest similar findings, but further research is needed to understand deeply what leads people to believe—or not believe—what they find on the Web. Further insight into Web credibility will contribute significantly to the study of computers as Persuasive technologies.
Key Questions and Future Directions
This chapter has provided definitions and a framework for better understanding computers as persuasive technologies. Although knowledge about the theory, design, and analysis of persuasive technology continues to increase, many key questions in captology remain unanswered. They include the following:
- What are the best applications of persuasive technologies?
- What are the potentials of persuasive technologies?
- What are the limits of persuasive technologies?
- What are the effects and side effects of using persuasive technologies?
- What are the ethical implications of persuasive technologies? (Berdichevsky & Neuenschwander, 1999; Friedman & Grudin, 1998)
Although the extant literature that focuses directly on computers as persuasive technologies is relatively small, the future possibilities are large. To help move work forward in this area—both in research and in design—in what follows, we suggest future directions for captology in terms of who, what, how, and why.
Who is Best Positioned to Research Captology?
The study of computers as persuasive technologies is an interdisciplinary endeavor by definition. As a result, captology does not fit neatly into a single academic department. Those who research computers and persuasion are likely to be individuals or teams with interdisciplinary interests, combining social science approaches with technology and design insights.
Interdisciplinary Academics. Some academic departments are better suited for captology research than are others. For example, departments of communication have a history of using social science methods to study the impact of new technologies; this is likely to be a good fit. Many psychology researchers have relevant skills for research in captology. However, traditional psychology departments have been slow to study new technologies, and they may fail to reward people who make this an area of research. Fortunately, some institutions have interdisciplinary programs, such as symbolic systems and human-computer interaction, that bring together areas germane to persuasive technologies.
Industry Researchers. Industry researchers are in a good position to study persuasive technologies. Because the ability to influence is a core competency of—and presents a strategic advantage to—many companies, captology has been a good fit with industry researchers. The major disadvantage of industrial research in this area is that, for the most part, the research findings are not publicly shared. This approach, then, makes little contribution to a wider understanding of persuasive technologies.
Industry and Academic Partnerships. A third approach, which seems to be the most promising, is collaborative research between academics and industry players. Each party can bring what it does well to the endeavor. If academics are slow to partner with industry in this research, it is likely that academics will be left behind in understanding persuasive technologies. The persuasive devices and Web sites launched during the past few years have been well ahead of most academic understanding in this area. Academics would do well to partner with industry in order to move quickly, staying abreast of new developments.
What Should We Focus on in Captology?
Because captology is relatively uncharted territory, many paths will offer new insights and understanding. However, not all paths have equal potential or value. In what follows, we describe directions we deem most profitable, organized into the categories of dependent variables and independent variables.
Dependent Variables
Although most of the psychology literature on persuasion is based on measuring attitude formation and change, people involved in captology would do well to focus on behavior change as the principal dependent variable for persuasive technologies. Behavior change is a more compelling metric than attitude change for at least three reasons: (a) behavior change is thought to be more difficult to achieve than attitude change (Larson, 1995; Zimbardo & Leippe, 1991), (b) behavior change is more useful to people concerned with real-world outcomes (Graeff, Elder, & Booth, 1993; Street et al., 1997), and (c) researchers can measure behavior change without relying on self-reports.
Our bias for behavioral measures is not intended to discourage research with attitudinal measures or other types of data collection. We simply propose that a focus on behavior change in captology will give clear evidence of how computers can motivate and influence people.
In addition, we advocate studying the planned effects of technology, not the side effects. Although the side effects of technology use are an important area of inquiry, they are not central to the study of persuasive technologies. The core of captology deals with planned effects—the attitude or behavior changes that were anticipated and intended. By researching these planned effects, we will be better able to build a body of knowledge about technologies designed to influence and motivate.
Independent Variables
What variables are most profitable to manipulate in the study of persuasive technologies? A wealth of possible research directions awaits captology researchers. In what follows, we propose some paths we view as most important to increasing our shared understanding of persuasive technology.
Technology Forms. From our vantage point, many persuasive technologies of the future will be specialized, distributed, or embedded computing systems—what some call “pervasive” or “ubiquitous” computing (Weiser, 1991). Ubiquitous computing systems, which might not look anything like today’s desktop computers, hold special implications for the study of persuasive technologies. Because persuasive situations occur most frequently in the context of normal life activities—not when people are seated at their desktop computers—we advocate researching the impact of different technology forms on persuasion. This line of research examines, in part, the differential persuasive outcomes of using an identical computing application in different formats, for example, a handheld device versus a wearable computer versus a desktop machine. With computing technology moving toward portable and wearable devices, it is important to understand how these new forms change the persuasive potentials of interactive technology.
Same Strategy, Different Manifestations. Another profitable path for those studying persuasive technologies is to focus on a single persuasion strategy and vary how a computing device can implement that strategy. For example, positive feedback (e.g., praise) is a persuasion strategy that computers can manifest in various ways—a text message, a human voice, a musical passage, and so on. By keeping the strategy constant and varying the manifestations, researchers can learn about the impact of each manifestation type. Over time, we then may be able to draw general conclusions about the persuasive impact of different manifestations (e.g., how a voice from a computer persuades vs. how text messages from a computer persuade).
Same Manifestation, Different Strategies. A complementary approach is to hold the manifestation constant in research while varying the persuasion strategy the computer uses. For example, the computer could always use voice but could vary the persuasive strategy used (e.g., compare praise, criticism, threats, and promises). By keeping the manifestation of the strategy constant and varying the strategies, researchers can, over time, theorize and generalize how persuasive strategies function in a computing system.
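As a concrete illustration of these two complementary designs, the sketch below lays out experimental conditions: one design holds the strategy constant and varies the manifestation, the other does the reverse, and a full crossing supports both comparisons at once. The specific strategies and manifestations listed are illustrative assumptions, not conditions from any study the chapter cites.

```python
import itertools

strategies = ["praise", "criticism", "threat", "promise"]
manifestations = ["text message", "human voice", "musical passage"]

# Design A: same strategy (praise), different manifestations.
design_a = [("praise", m) for m in manifestations]

# Design B: same manifestation (voice), different strategies.
design_b = [(s, "human voice") for s in strategies]

# A full factorial crossing supports both comparisons in one experiment.
full_design = list(itertools.product(strategies, manifestations))
print(f"Design A: {design_a}")
print(f"Design B: {design_b}")
print(f"Full crossing: {len(full_design)} conditions")
```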
Avoiding Cross-Media Comparisons. An ongoing issue is the comparative effectiveness of persuasive media types, for example, print versus video versus interactive technologies. These cross-media comparisons have limited usefulness (for a longer discussion, see Koumi, 1994). It is rare that a cross-media comparison study has been able to generalize its conclusions beyond the specific stimuli used in the particular study (for an exception, see Kernal, 2000). Although a researcher can clearly determine that Computer Program X is more persuasive than Video Y or Pamphlet Z, these results hold only for Artifacts X, Y, and Z—not for comparing computers, videos, and pamphlets in general. Too many variables are at play in cross-media studies; as a result, no useful theoretical understanding comes from this type of research (Nass & Mason, 1990).
How Should We Study Persuasive Technologies?
Like most research endeavors, the study of computers as persuasive technologies lends itself to various research methodologies.
Quantitative Research. Studying persuasive technologies from a quantitative point of view, using methods such as experiments and surveys, can produce conclusions supported by statistical evidence. Many studies discussed in this chapter are quantitative in nature, but we acknowledge that other approaches add significant value in researching persuasive technologies.
Qualitative Research. Studying persuasive technology products from a qualitative standpoint can offer insights not available through quantitative means. Participant-observer research, content analyses, heuristic analyses, and focus groups can be helpful. There are at least three reasonable outcomes to this type of research: generating rich insight into a particular persuasive technology (e.g., the strengths and weaknesses of a product), generating insight into a particular user group for a persuasive technology (e.g., a target group’s biases and reactions to the product), and creating hypotheses for future research and design efforts. All of these outcomes are valuable contributions.
Literature Reviews. In addition to the methods just discussed, our understanding of persuasive technologies can be enhanced by careful reviews of literature from diverse fields. For example, Aristotle certainly did not have computers in mind when he wrote about the art of persuasion, but his work on rhetoric can broaden and deepen our understanding of how computers can motivate and influence people. In general, we can speed our understanding of persuasive technologies by gleaning the relevant work from other fields. The field of psychology—both cognitive and social—has a tradition of examining different types of persuasion and influence. The theories and methods from psychology transfer well to captology. In addition, the field of communication has a history of examining the persuasive effects of media and other types of message sources. Specifically, the applied domain of public information campaigns has a set of theories and practices that can provide insight into the study of persuasive technologies.
Why Should We Research Persuasive Technologies?
In addition to the who, what, and how of future research on persuasive technologies, we also outline the “whys,” or motives, for engaging in this work.
Commercial Applications. The commercial possibilities for persuasive technologies will continue to generate research for the foreseeable future. As corporations learn to create interactive technologies that influence individuals, they will most likely profit financially or gain a market advantage. However, commercial applications are unlikely motivators for academics who study persuasive technologies.
Theoretical Understanding. One compelling reason to study captology, from an academic’s perspective, is to increase knowledge about the theory and application of persuasive technology. As with other academic pursuits, the process of research and the insights gained can be intrinsically rewarding. The theoretical understanding not only can form a foundation for subsequent research in persuasive technologies but also can enhance research in other areas.
Prosocial Interventions. Another motive for researching persuasive technology is the potential for positive outcomes. Because many social problems can be minimized by changing attitudes and behaviors, persuasive technologies have a place in prosocial interventions. Many examples exist, addressing social issues that range from environmental concerns to HIV transmission.
Impact of Persuasive Technologies
As computing technology becomes ubiquitous, we will see more examples—both good and bad—of computers designed to change attitudes and behaviors. We will see computers playing new roles in motivating health behaviors, promoting safety, promoting eco-friendly behavior, and selling products or services. To be sure, persuasive technologies will emerge in areas we cannot yet predict.
To some people, this forecast may sound like bad news—a world full of inescapable computer technology constantly prodding and provoking us. While it could happen, this “dystopian” scenario seems unlikely. We propose that in many cases, people will choose the technologies they want to influence them—just as people can choose a personal trainer at the gym or a tutor for their children. And even though certain types of persuasive technologies will be imposed on people—by corporations and government institutions—people will learn to recognize and respond appropriately to these persuasive appeals. In extreme cases, we—as an association of persuasion scholars—will need to help create public policy to influence the design and uses of computers as persuasive technologies. However, to effectively shape the landscape of persuasive technologies, we need to educate ourselves and others about the potentials and pitfalls of this domain. In this way, we can leverage the power of persuasive technologies to improve our lives, our communities, and our society.