Peter Danielson. Information Security and Ethics: Concepts, Methodologies, Tools, and Applications. Editor: Hamid Nemati. Volume 1. Hershey, PA: Information Science Reference, 2008.
While “digital morality” and “digital ethics” may sound strange, the technologies that drive digital government and democracy operate as well in these less formal areas of social regulation. Information technologies can affect morality and ethics at several levels: facilitating compliance with moral rules, altering the formation of norms and rules, and aiding the ethical assessment of rules. This article sketches an account of ethical decision-making that lets us explore some threats and opportunities of the emerging technologies of digital morality and ethics.
The focus of this article is how new communication technology affects ethical decision-making. Since ethics has a large and controversial literature, we will simplify. First, while disagreement about substantive issues is widespread, there is greater agreement about the process of ethical decision-making. Most writers on ethics agree on what counts as an ethical agent (i.e., most people, with minor disagreement about young children, some animals, and organizations). Further, there is wide agreement on the kind of decision-making broadly characterized as ethical. We summarize this agreement in terms of ethical decision-making having three components: compliance with moral rules, discovery of moral norms, and critical ethics. Second, while ethics has many dimensions, there is broad agreement in the social sciences that morality and ethics are coordination mechanisms. Agents who can discover the local moral norms and use them to govern their behavior can solve the coordination problems endemic to social life. The ability to critically assess alternative moral rules helps to solve the higher-level coordination problem of moral disagreement.
Although the terms “morals” and “ethics” are used in a variety of ways, we shall use them to distinguish these two levels of coordination and decision-making: lower and higher, respectively.
Rationality and Morality
Moral and ethical agents are a subset of rational agents, whose behavior tracks their values. Rational agents must be able to consider alternative courses of action and their outcomes, rank these outcomes in terms of relevant values, and select the most valued option. Therefore, moral agents inherit the problems of rationality: uncertainty and time constraints, problems of self-control (Rachlin, 2000), and framing and other decision biases (Tversky & Kahneman, 1981). On most accounts, moral agents are distinguished from rational agents by a broader set of pro-social or altruistic values and a commitment to following moral rules. These features bring new problems specific to moral decision-making, such as balancing self and others (Schmidtz, 1998) and hypocrisy.
In addition, moral decision-making has a distinctly social component. Morality depends on moral norms, a subset of social norms that influence individual decisions. Social norms go under the name “conventions” in some literatures and “social equilibria” in others, and refer to existing institutions, rules, traditions, or practices (Binmore, 2004). All involve some coordination: strategic situations where most agents value doing what (most of) the others are doing. In addition, moral norms involve special motivations. Deviant behavior typically invokes both psychological (shame) and social (blame) sanctioning.
The social component of moral decision-making can easily go wrong for lack of information, or due to misinformation. In several well-studied cases (college drinking in the US is the most thoroughly documented), behavior is in a mistaken equilibrium (Greenberg, Dodd, & David, 1999; Perkins & Berkowitz, 1986). Believing that most other students drink heavily, many drink to excess to comply with the norm. Their beliefs rest on a self-confirming estimation of the group’s behavior, which should be amenable to correction by new information.
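This mistaken-equilibrium mechanism can be made concrete with a toy model. The sketch below is hypothetical and not drawn from the cited studies; the 30% private preference rate and the 80% believed rate are illustrative assumptions. Each agent drinks heavily either because it privately prefers to or because it believes a majority of peers do:

```python
import random

def drinking_rate(n_agents=1000, informed=False, seed=1):
    """Toy model of a mistaken norm equilibrium (pluralistic ignorance).

    An agent drinks heavily if it privately prefers to, or if it believes
    most peers drink heavily. Uninformed agents overestimate peer drinking
    (they believe 80% drink heavily); informed agents learn the true share
    of agents who actually prefer heavy drinking.
    """
    rng = random.Random(seed)
    # Illustrative assumption: only ~30% privately prefer heavy drinking.
    prefers = [rng.random() < 0.3 for _ in range(n_agents)]
    true_preference_rate = sum(prefers) / n_agents
    # Illustrative assumption: the uninformed believe 80% drink heavily.
    believed_rate = true_preference_rate if informed else 0.8
    # Norm compliance: drink if you prefer to, or if you believe most do.
    drinks = [p or believed_rate > 0.5 for p in prefers]
    return sum(drinks) / n_agents
```

Under the mistaken belief, everyone complies and the heavy-drinking rate is 100%, which in turn confirms the belief; once agents learn the true preference rate, only those who actually prefer heavy drinking continue, and the norm's force collapses.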
Ethics aims at critically evaluating morality. Obviously, partisans of two competing norms in a society should not simply appeal to what their own norms require. They need to appeal to “higher” standards: human harm or benefit, rights, progress, national solidarity, tradition, or other ideals. Failures in ethical decision-making combine the problems surveyed for rationality and morality. Ethical decision-making has an ideal element that links it to other normative ideals, such as deliberative democracy.
Computer-Mediated Opportunities and Threats
Having resolved the field of ethics (broadly construed) into three components, we can see more clearly how computer-mediated technologies can change, perhaps threaten, and hopefully improve each of them.
Computerization can assist rationality in myriad ways—from calculators through spreadsheets and databases—beyond the scope of this article. We will mention a few examples of aids relevant to the rationality of moral and ethical agents.
Visualization software is a major innovation relevant to normative rationality. These programs allow us to see how our values map onto the world; examples are maps of political preferences and scorecards ranking firms and mutual funds by “ethical” criteria. Calculators allow us to evaluate our choices in terms of our values. For example, greenhouse gas emissions calculators help us decide among our energy-intensive options and so exercise personal responsibility, if desired (Danielson, 1993). Simulators allow us to think through values and choices in complex technical and social environments; Epstein and Axtell (1996) offer the most developed academic example of a simulator designed to increase insight into elementary social science. Of course, computer-mediated communication can also threaten moral rationality. New media can add new distractions and sources of poor-quality information, especially until our information filters catch up with the technology. Simulators may embed biases in ways that are difficult to counter. Violent electronic games that give the thrill of combat, aggressive driving, and street crime shorn of all consequences are prominent examples of morally dubious simulators (McCormick, 2001). Some criticize even the more pro-social SimCity series for the weight it gives entertainment as contrasted with educational values (Starr, 1994).
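The core of such a values calculator can be sketched in a few lines. This is a minimal hypothetical example, not any actual calculator; the emission factors are placeholders, whereas real tools use published, region-specific figures:

```python
# Hypothetical emission factors (kg CO2e per unit of activity).
# Real calculators draw these from published, region-specific datasets.
EMISSION_FACTORS = {
    "car_km": 0.20,          # kg CO2e per km driven
    "flight_km": 0.25,       # kg CO2e per km flown
    "kwh_electricity": 0.5,  # kg CO2e per kWh consumed
}

def annual_footprint(activities):
    """Estimate total emissions for a dict of {activity: annual amount}."""
    return sum(EMISSION_FACTORS[name] * amount
               for name, amount in activities.items())

def compare(options):
    """Rank named lifestyle options by estimated footprint, lowest first."""
    return sorted(options, key=lambda name: annual_footprint(options[name]))
```

For instance, `compare({"drive": {"car_km": 12000}, "fly": {"flight_km": 8000}})` ranks the two options by estimated emissions, letting a user see which choice better matches the value of reducing personal emissions.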
Moral decision-making is based on moral rules. Compliance is difficult when the rules are complex, unclear, and various. Research ethics provides a good example of this problem and of the promise of digital technology to mitigate it. Human “gene banks” are collections of genetic data (or tissue samples), clinical data, and environmental data. Genomic scientists see great promise in research linking this data across large populations. Unfortunately, the moral rules governing who can access which data vary across jurisdictions and are often unclear (Maschke, 2005). Note that this problem arises in spite of the relatively formal nature of the rules and their institutionalization by ethics review boards. Recent work by bioinformaticians involves applying digital technology to this problem. Wilkinson (2003) suggests that “ethics ontologies” can allow automated “agents” to navigate the rules governing access to various sources of data. This computer-mediated resolution of moral uncertainty yields the direct benefit of allowing researchers to access only the data they morally ought to be permitted to use. Indirectly, it may allow research subjects who are deciding whether to grant or withhold consent to a particular use to better understand the consequences of their decisions.
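The idea can be pictured as a machine-readable rule set that an automated agent checks requests against. The sketch below is a deliberately simplified hypothetical, not Wilkinson's actual ontology: the rule fields, the `Request` type, and the `permitted` function are all illustrative assumptions, and a real ethics ontology would encode far richer structure (consent scopes, review-board approvals, purpose hierarchies, and so on):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRule:
    """One published access rule for a data source (hypothetical schema)."""
    data_type: str           # e.g., "genetic", "clinical", "environmental"
    jurisdiction: str        # where the data are held
    requires_consent: bool   # is subject consent needed for this use?
    allowed_purposes: frozenset  # e.g., frozenset({"population_research"})

@dataclass(frozen=True)
class Request:
    """A researcher's request to use one kind of data for one purpose."""
    data_type: str
    jurisdiction: str
    has_consent: bool
    purpose: str

def permitted(request, rules):
    """Return True iff some published rule permits this request."""
    return any(
        rule.data_type == request.data_type
        and rule.jurisdiction == request.jurisdiction
        and (request.has_consent or not rule.requires_consent)
        and request.purpose in rule.allowed_purposes
        for rule in rules
    )
```

An automated agent running such a check before releasing data would give researchers access only to data the rules permit, and would let prospective subjects trace exactly which uses their consent decision enables or blocks.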
As the most social of our three factors, morality is the most subject to change with the introduction of new computer-mediated technologies. We consider three ways.
- Knowledge of Norms: Most obviously, Web technology has made simple polling very easy. For example, a Victoria, Canada, radio news station runs a different public affairs poll every day on its Web site. These polls attract about 275 responses a day. Notice that polls like this are likely to serve as moral convention amplifiers rather than ethical instruments, for several reasons. First, the poll page displays a tally of previous responses, so participants’ answers are not independent. Answers are subject to an information cascade effect that reinforces the power of existing norms (Hirshleifer, 1995). Second, participants are self-selected, inviting interested parties to skew the sample. Third, Web surveys collect superficial “top-of-the-head” opinions. However, we need not restrict ourselves to simple polls. More sophisticated computerized surveys allow us to support more complex moral decision-making (Danielson, Ahmad, Bornik, Dowlatabadi, & Levy, in press). Since moral norms, as social equilibria, are a function of agents’ knowledge and expectations, new information can change norms. Recall the example of campus drinking. Were students to realize that most others are not drinking excessively, they might drink less. In this case, more information weakens the force of the norm that supported drinking.
- Changing Social Networks: Computer-mediated communication can change the social basis of norms by facilitating non-spatial social networks. Consider the example of “apotemnophilia”, an attraction to the idea of being an amputee that leads healthy people to seek amputations, publicized by bioethicist Carl Elliott. It is a good example for several reasons. First, the desire is baffling: “Why would anyone want an arm or a leg cut off? Where does this sort of desire come from?” A surgeon who has performed such operations has said that the request initially struck him as “absolutely, utterly weird” (Elliott, 2003, p. 209). Second, it is a disturbing example, which helps to overcome the prejudice that something as recent and superficial as computer-mediated communication cannot make a real moral difference. Third, the reader is unlikely to have come across this new community, which helps make the point about coordinating out of the public eye. Fourth, apotemnophilia has been made acceptable due to computer-mediated communication. “By all accounts, the Internet has been revolutionary for wannabes. It took me months to track down even a handful of scientific articles on the desire for amputation. It took about ten seconds to find dozens of Web sites devoted to the topic. Every one of the wannabes and devotees I have talked with about the Internet says that it has changed everything for them. Because the desire is so rare, it is unlikely that most wannabes would ever spontaneously meet another wannabe. But the Internet brings [them] together, online if not in person” (Elliott, 2003, pp. 217, 219). Another example of the Internet bringing together groups that validate otherwise morally impermissible behavior is Japanese “suicide clubs” (Harding, 2004).
- Anonymity: While communicating anonymously or under a pseudonym in political debate has a long history, computer-mediated communication has greatly increased its importance (Danielson, 1996). Elliott brings this out as he elaborates on his case study: “Many wannabes participate in online groups anonymously. So they get both the comfort and satisfaction of being part of a group, and knowing they are not alone in the world, while also avoiding the potential shame of actually having to reveal themselves to anyone else face-to-face” (Elliott, 2003, p. 219).
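The cascade effect noted under Knowledge of Norms can be illustrated with a stylized simulation in the spirit of the information cascade literature. The margin-of-two threshold and the 70% signal accuracy are modeling assumptions, not data from any actual poll: each respondent has a fairly reliable private opinion but defers to the visible tally once it leads by two.

```python
import random

def run_poll(n_voters, p_signal=0.7, true_answer=True, seed=None):
    """Stylized information cascade at a Web poll with a visible tally.

    Each voter receives a private signal matching the true answer with
    probability p_signal, but also sees the running tally. Once either
    side leads by two or more, voters follow the crowd and ignore their
    own signals; otherwise they vote their signal.
    """
    rng = random.Random(seed)
    yes, no = 0, 0
    for _ in range(n_voters):
        signal = (rng.random() < p_signal) == true_answer
        if yes - no >= 2:
            vote = True       # cascade: follow the "yes" majority
        elif no - yes >= 2:
            vote = False      # cascade: follow the "no" majority
        else:
            vote = signal     # tally inconclusive: follow private signal
        yes, no = yes + vote, no + (not vote)
    return yes, no
```

Running this over many random seeds shows the amplifier effect: a noticeable fraction of polls lock onto the wrong answer after just two early mistaken votes, even though each individual signal is 70% reliable. Independent responses (hiding the tally) would almost never produce a wrong majority with 100 voters.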
At the ethical level, we act as legislators, taking various feasible moralities as the laws amongst which we choose (Rawls, 1955). Computer-mediated communication can aid this activity in several ways already covered above. We can collect more information about our own moral norms and other groups’ norms, which may be feasible alternatives for us. In addition, we can calculate and simulate how these proposed norms might work for us, and even how our own values might change if we make certain choices.
Turning to distinctly ethical factors, computer-mediation offers new ways to implement the ideals of deliberative democracy. However, the question of whether, and if so when, computer-mediation improves deliberation remains open. Scott (1999) summarizes the recent research on face-to-face versus computer-mediated deliberation as yielding “mixed and inconclusive findings”. In a welcome development, the leading theorist of deliberative democracy, James Fishkin, is actively experimenting with computer-mediated variations of traditional face-to-face meetings (Fishkin, 2004). His group identifies the main drawback of online deliberation: “The major liability of the online model concerns the representativeness of the participant pool. Access to technology remains closely dependent on socio-economic standing, and there is no reason to suppose that the ‘digital divide’ will disappear in the ordinary course of events. The digital divide might, therefore, compromise the ability of online researchers to draw representative samples in the absence of special interventions. But interventions—in the form of free access to technology—are possible” (Iyengar, Luskin, & Fishkin, 2004, p. 4).
Straightforward benefits of online deliberation are low cost and flexible access. A third feature is more complex: “the online process offers greatly improved metrics for determining exactly what the participants are doing, what aspects of the experimental treatment they are making use of, which parts of the briefing documents or the answers to questions they are reading. Hence online Deliberative Polling opens up new possibilities for understanding the mediators of the treatment effects (what exactly is causing the opinion changes), and whether there are inequalities in participation in specific aspects of the process” (Iyengar et al., 2004, p. 4). Here the benefit accrues to the researchers; the deliberation is treated as an experiment. While it is important at this early stage to learn more about alternative deliberative processes, there is a tension between experimentation and ethics here, unless we make the deliberators themselves aware of the new information gained about their process. This is not an objection to the experiment but a pointer to the richness of problems that lie ahead. While the natural information constraints of face-to-face deliberation limit this process, we do not yet know how to use the rich resources of computer-mediated communication in a fully open and reflexive way.
Another feature of deliberation is “the phenomenon of group polarization. The idea is that after deliberating with one another, people are likely to move toward a more extreme point in the direction to which they were previously inclined, as indicated by the median of their predeliberation judgments. With respect to the Internet, the implication is that groups of people, especially if they are like-minded, will end up thinking the same thing that they thought before—but in more extreme form” (Sunstein, 2001). The concern here is that the Internet facilitates association with the like-minded (Sunstein, 2000).
Digital morality and ethics are new modes of social regulation, charged with controlling power. Therefore, they are subject to the same forces as more familiar political mechanisms. A report on a participatory experiment in local government observes, “… teledemocracy initiatives potentially perturb the power balance. This effect is likely to be most evident in groups where information is held by a few. Frequently, barriers to communication are clearly more social and political than technological. Also, reluctance to adopt might be ascribed to differences in status (a source of influence) among organization members and, in addition, reflect the values of different political parties and structures” (Watson, Akselsen, Evjemo, & Aarsaether, 1999, p. 61).
Second, we should expect these new communication media to be subject to the pressures found in our traditional fora and venues: the press, the media, libraries, and so on. Ithiel de Sola Pool opens his groundbreaking book, Technologies of Freedom, with a dramatic warning about “the right of people to speak and print freely, unlicensed, uncensored, and uncontrolled. But new technologies of electronic communication may now relegate old and freed media such as pamphlets, platforms, and periodicals to a corner of the public forum. Electronic modes of communication that enjoy lesser rights are moving to center stage. The new communication technologies have not inherited all the legal immunities that were won for the old” (Pool, 1983, p. 1). Generalizing, traditional modes carry norms—not only legal protections but also communities and professionals such as journalists and librarians—and communal memories. New technologies will not automatically pick up these norms, nor maintain continuity with these communities. A search engine like Google may not distinguish content from advertising in the same way as a newspaper, or indeed at all. Nor, evidently, do even electronic voting machines, subject to scrutiny as they are, meet the same standards as more traditional voting practices (Larsen, 1999). Therefore, we need to be cautious about what we may lose in our enthusiasm to embrace new technologies of collective self-governance.
Most generally, we should expect unanticipated consequences: digital morality and ethics are new developments. In particular, technological innovations have fostered optimism. Langdon Winner’s essay “Mythinformation” reminds us that other technologies, such as television, were welcomed—indeed hyped—with great social and political expectations (Winner, 1986). Moreover, most digital morality and ethics innovations are small research projects. They may be utopian; their small scale cannot explore the range of unintended consequences that would likely follow general introduction. Until digital morality and ethics are played out by real social and political agents, their full set of consequences, many of them unanticipated, is practically impossible to discover and assess. They remain beyond our rational or ethical planning horizon.
While digital morality and ethics are new topics, they point to real and important developments. Digital technology has the capacity to change our moral and ethical decision-making at several levels, from implementing existing moral rules to forming new social norms and ethically evaluating our norms.