Adele Santana & Donna J. Wood. Ethics and Information Technology. Volume 11, Issue 2. June 2009.
Despite the many benefits of Internet technology, including the democratization and cost reduction of information provision, the growing popularity of open-source information sites, in particular Wikipedia, raises some thorny difficulties that have yet to be resolved. Although Wikipedia promises transparency and, to a large extent, has very open and transparent processes, there are serious issues with the credibility of information provided or edited by unaccountable anonymous users. Furthermore, when assessed by standards of corporate social responsibility and performance, Wikipedia’s owner, the Wikimedia Foundation, falls short on one essential dimension—its lack of attention to the moral agency of contributors.
Our article is organized as follows. First we examine the general context of higher education and briefly report the history of Wikipedia. Next, we analyze the relationship between the degree of transparency in Wikipedia and a number of critical social and ethical issues for the production and dissemination of information, including:
- The credibility, legitimacy, and accountability of the providers,
- The validity of the information being transmitted, and
- The social responsibility and performance of Wikipedia’s sponsors as reflected in the incomplete transparency of the medium’s writing and editing processes.
These issues are, of course, tightly intertwined, and so we structure our discussion by addressing each issue briefly and then showing their interactions. We argue that in order for democratic information processes to produce valid, reliable information, transparency must be achieved along all relevant dimensions, not just a few.
Background: Higher Education and Wikipedia
With the introduction of the Internet and the World Wide Web, the traditional prerogative of professionals—to create and transmit valid, reliable information—has begun to yield to a more fluid, unstable, and democratic form of information production and transmission. Wikipedia, the open-source on-line encyclopedia, embodies this trend. Although Wikipedia’s processes are in part transparent and self-correcting, they do not depend upon expert knowledge, critical analysis is actually forbidden, and there is no link between information credibility and the personal reputations of posters or editors. These process flaws yield unknown and potentially very large risks in terms of the reliability and truth of information, and the benefits appear primarily in terms of ease of access.
On January 15, 2001, Wikipedia was launched by founders Jimmy Wales, Larry Sanger, and others as an “open, less formal encyclopedia project,” based on the “wiki” technology which allows an unlimited number of users to add, delete, or edit content and which tracks the history of entries. In June 2003, the Wikimedia Foundation was created to own the website. Wikipedia describes itself as follows:
Wikipedia … is a multilingual, web-based, free content encyclopedia project. Wikipedia is written collaboratively by volunteers from all around the world. With rare exceptions, its articles can be edited by anyone with access to the Internet, simply by clicking the edit this page link. The name Wikipedia is a portmanteau of the words wiki (a type of collaborative website) and encyclopedia. Since its creation in 2001, Wikipedia has grown rapidly into one of the largest reference Web sites.
In every article, links will guide the user to associated articles, often with additional information. Anyone is welcome to add information, cross-references or citations, as long as they do so within Wikipedia’s editing policies and to an appropriate standard. One need not fear accidentally damaging Wikipedia when adding or improving information, as other editors are always around to advise or correct obvious errors, and Wikipedia’s software, known as MediaWiki, is carefully designed to allow easy reversal of editorial mistakes.
Because Wikipedia is an ongoing work to which, in principle, anybody can contribute, it differs from a paper-based reference source in important ways. In particular, older articles tend to be more comprehensive and balanced, while newer articles may still contain significant misinformation, unencyclopedic content, or vandalism. Users need to be aware of this to obtain valid information and avoid misinformation that has been recently added and not yet removed (see Researching with Wikipedia for more details). However, unlike a paper reference source, Wikipedia is continually updated, with the creation or updating of articles on topical events within minutes or hours, rather than months or years for printed encyclopedias.
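The revision tracking that makes this ‘easy reversal’ possible can be made concrete with a minimal sketch, written here in Python rather than as MediaWiki’s actual implementation; the class and user names are hypothetical. The essential idea is that every edit is appended to a public history, so any earlier version can be inspected and a mistake is undone simply by re-posting an older revision.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List


@dataclass
class Revision:
    editor: str          # a user name or pseudonym; identity is not verified
    text: str            # full article text after this edit
    timestamp: datetime
    comment: str = ""


class WikiArticle:
    """Toy model of a wiki page whose entire edit history is retained."""

    def __init__(self, title: str, initial_text: str, editor: str):
        self.title = title
        self.history: List[Revision] = [
            Revision(editor, initial_text, datetime.now(timezone.utc), "created")
        ]

    @property
    def current(self) -> str:
        return self.history[-1].text

    def edit(self, editor: str, new_text: str, comment: str = "") -> None:
        # Nothing is overwritten in place; every change is appended to the log.
        self.history.append(
            Revision(editor, new_text, datetime.now(timezone.utc), comment)
        )

    def revert_to(self, index: int, editor: str) -> None:
        # 'Easy reversal': restoring an older version is itself just another edit.
        self.edit(editor, self.history[index].text, f"revert to revision {index}")


page = WikiArticle("Example topic", "First draft.", "AnonUser1")
page.edit("AnonUser2", "First draft, now defaced.", "vandalism")
page.revert_to(0, "AnonUser3")
print(page.current)  # -> "First draft."
```

Because the whole history remains public, the editing process is transparent in this respect; what the history does not record, as we argue below, is any verified identity behind the pseudonyms.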
The concept of Wikipedia is breathtaking. Scholar Melanie Remy (2002, p. 434), for example, writes that Wikipedia “embodies some of the brightest promises of the Web—collective intellectual enterprise, informed consensus, unrestricted access to knowledge, and the free sharing of information and software—while challenging commonly-held notions of authorship and editorial control.” But it is exactly this challenge that gives us pause.
Wikipedia claims to have 75,000 “active contributors” and hundreds of thousands of site visitors who can and often do edit its ten million entries, of which 2.8 million are in English as of this writing. While traditional print encyclopedias have been careful to acquire the writing and editorial services of legitimate, credentialed professionals, Wikipedia does not demand any credentials of its contributors at all: “Visitors do not need specialized qualifications to contribute, since their primary role is to write articles that cover existing knowledge; this means that people of all ages and cultural and social background can write Wikipedia articles.”
To one who is professionally trained or has acquired expertise, the task of writing “articles that cover existing knowledge” is not one taken lightly. To produce a good (i.e., reliable, comprehensive, valid) review article involves a great many skills: knowing what encompasses “existing knowledge” and how to access it, understanding technical or arcane terminology, being able to discriminate inadequate or unacceptable material from logically and empirically sound contributions, grasping the debates and competing interpretations within a field of study, and so much more. To Wikipedia’s non-expert founders, contributors, and users, such skills may seem like so many glass bead games played by otherwise unemployable wizards, as in Hermann Hesse’s classic novel, Magister Ludi.
Deborah Johnson (1997) argues that web-based problems are no different from other societal issues, except that web communication tends to intensify certain aspects and diminish others. The e-problems she sees are human and cultural problems, not problems per se of web communication. In line with Johnson’s thinking, it seems clear that the Wikipedia phenomenon represents an attempt at a quantum shift in how information is produced, transmitted, and used, and that it also reflects sociocultural differences in generations of information users. Gen-Xers and their parents know what it is like to have books around—how they look, how they smell, how they feel. They are trained to respect the printed word and they have learned how to evaluate the sources of those words for credibility, validity, and legitimacy. Today’s college-age adults do not have this same experience with printed forms of information. They are computer literate from toddlerhood, and, although they may read books, they are much less likely to use print media and much more likely to rely on on-line sources for their information.
The current generation of college students is unlike any generation before them in their comfort with computer technology. Students are accustomed to sourcing their term papers and class projects from the Internet, and ‘googling’ is the method of choice for gaining information on virtually any topic. Wikipedia often holds the top spot in Google searches; professors are finding that students have no hesitation in using Wikipedia as an authoritative source and do not recognize a problem when challenged on this behavior (see, e.g., Waters 2007). Badke (2008), for example, writes:
If the average university student can safely go to Wikipedia instead of consulting a specialized print reference source, then academia is broken. It is a finger in the eye of the whole academic enterprise. It’s as if our students are saying, “We don’t care if it breaks the rules, deceives us, or is dumber than print reference books. We like Wikipedia, and it rarely lets us down.”
As they get used to Wikipedia, students refer less to other, more reliable sources of information, such as peer reviewed publications, books, and traditional encyclopedias. Students tend to choose the easier path of searching the Internet and, of course, it is not always easy to tell if one has arrived at reliable sources. Wikipedia has been the locus of practical jokes, arcane antagonisms, smears, falsifications, and rumors. One contributor to Wikipedia, for example, has recently been accused of false identity—a case we discuss below that illustrates the problems of information credibility faced by such Internet sources. This situation exemplifies the unintended consequences of anonymity on Wikipedia—the risk of accessing unreliable or invalid content that nevertheless has apparent face validity.
Transparency Issues for Wikipedia
Transparency of process and information is an issue at least as old as the Industrial Revolution and the development of capitalist theory. In fact, capitalist economics requires that marketplace actors (whether shareholders, producers, employees, or consumers) have full and accurate information available on which to base their decisions.
A great deal of theoretical and empirical scholarship exists on information asymmetry and its consequences. The issue is exemplified by the old adage, ‘Knowledge is power.’ When a better-informed social actor (whether a person or an organization) withholds, distorts, or falsifies information needed by a less-informed actor who is dependent on that information for rational decision making, then a power asymmetry is created. The better-informed actor has power to exercise self-interested influence over the less-informed actor’s decisions.
This line of thinking is extended into the e-world by a number of scholars. Fleischmann and Wallace (2005), for example, argue that user awareness is needed to solve web problems, and that such awareness happens via transparency and education. The problem is one of a power asymmetry between the software/system designer and the product’s user. To achieve a power balance, the software or system must not be a ‘black box,’ but must be understandable by the user. These authors offer parameters for transparency: models should be thoroughly documented, so that the user can understand them; models should explicitly state their assumptions and values, so that validity can be tested; and users should have access to the content of models, so that meaning of content can be analyzed.
Vaccaro (2006) was one of the first scholars to explicitly add transparency to the categories of ethical problems involving information and communications technology (ICT). In his analysis of ICT ethical issues for firms, he identified privacy and security/safety as the ‘traditional’ categories, adding internal and external transparency issues as a third category. His analysis yields a set of six questions that specify particular arenas of ethical concern over the uses of ICT. Vaccaro and Madsen (2009b) extend the earlier analysis to propose managerial practices related to “dynamic transparency,” a two-way communication process between organizations and their stakeholders. Vaccaro and Madsen (2009a), studying ICT transparency issues in a European nongovernmental organization, point out the social and political constraints that such organizations may face in using transparent processes, but also claim that transparent information disclosure yields stakeholder empowerment.
Despite traditional capitalist theory’s reliance on transparency and full information disclosure, information asymmetry is one of the chief means by which market inequities occur. Companies often resist providing full and accurate information for reasons of cost, marketing, and competitive advantage; users of such information, in turn, do not necessarily have the background knowledge, ability, or motivation needed to search out and interpret the meaning of such information for their own decisions. In the Ford Explorer/Firestone tire controversy, for example, the issue was not just finding out about the rollover problem, but tracing the problem back to its source. It appears that Firestone did not want to tell consumers everything it knew about the tires, because of fears that consumers would choose other companies’ products (Tapscott and Ticoll 2003). Furthermore, users knew there was some problem with Firestone tires on SUVs, but for a long time it was not clear whether the problem was high speed in hot weather, or under-inflation, or a too-high SUV center of gravity, or a defect in the tires’ materials or workmanship. Users had lots of information, but little ability to turn that information into rational action.
For e-transparency to occur, the user must be able to know how the product works, how it is constructed and changed (Fleischmann and Wallace 2005). Wikipedia’s processes are very transparent. Clear policies are easily accessible on the Wikipedia.org website, and the process of becoming a contributor or editor is equally transparent. The primary feature of Wikipedia that is not transparent is that contributors, editors, and administrators may use pseudonyms and do not need to make their true identities known.
More traditional processes of information development and transmission also have transparency issues. The typical double-blind review processes of scholarly journals, for example, hide the identities of reviewers and authors from one another, and reviewers remain unknown even after an author is revealed through publication of the reviewed article. Double-blind review is intended to remove political and other idiosyncratic influences from the publication process, but of course such influences can circumvent the controls and play a role anyway. Publications in the natural and physical sciences do not typically use double-blind reviewing, so that reviewers and authors are known to each other and the attendant reputations and expertise can be brought to bear on publication decisions. Wikipedia’s publication processes are transparent to the extent that editing histories are published, but the lack of identity for authors and editors makes it impossible for users to refer to reputation, credentials, position, or expertise as ways of validating the information.
According to Wallace (1999), anonymity refers to “non-co-ordinatability of traits in a given respect” (p. 23):
Anonymity is a kind of relation between an anonymous person and others, where the former is known only through a trait or traits which are not coordinatable with other traits such as to enable identification of the person as a whole. (p. 23)
In the case of Wikipedia’s authors and editors, if the history of their contributions is traced back—which is possible to some extent—patterns of thought and interest, as well as the overall quality of contributions, might be identified. However, these traits cannot be coordinated with other traits of importance, particularly the accountability that identified authors typically have.
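As a brief illustration of what such tracing can and cannot reveal, consider the following sketch, a hypothetical example with invented pseudonyms and article titles rather than actual Wikipedia data. Topical patterns and volume of activity can be recovered from a public edit log, but nothing in the log coordinates a pseudonym with a verifiable, accountable identity.

```python
from collections import Counter
from typing import Dict, List, Tuple

# (pseudonym, article title) pairs as they might appear in a public edit log;
# the entries are invented purely for illustration.
edit_log: List[Tuple[str, str]] = [
    ("UserA", "Canon law"), ("UserA", "Catholic Church"),
    ("UserA", "Canon law"), ("UserB", "Rogers Commission"),
]

def interest_profile(log: List[Tuple[str, str]]) -> Dict[str, Counter]:
    """Group edits by pseudonym to surface each contributor's apparent interests."""
    profiles: Dict[str, Counter] = {}
    for user, article in log:
        profiles.setdefault(user, Counter())[article] += 1
    return profiles

print(interest_profile(edit_log))
# Patterns of interest and activity emerge; credentials, occupation, and
# accountability do not.
```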
A case can be made that this lack of identity transparency in Wikipedia is minor, even insignificant, and does not create a power imbalance between users and creators. Contributors must cite published sources for their entries, and any entry or source can be challenged by other contributors, editors, or mere readers. Challenges are discussed on-line among the various actors—contributor, editor, challenger, and others—and an arbitration process is in place to decide on a proper course of action when consensus cannot be reached.
A case can also be made that Wikipedia’s lack of identity transparency is a critical flaw in its value as a credible source of information. Claimed credentials—what Mitnick (1999, 2000) calls ‘testaments’—can be used to sway opinion and argument. If those credentials do not exist or are falsely stated, as in the Essjay case detailed below, then institutional credibility is being brought to bear where it is not warranted, and a new power asymmetry is being created. Trust in any system is broken when users discover that their trust has been misplaced or violated.
Johnson (1997) argues that trust in modern society is changing its character, becoming less of a personal identity and relationship issue, and more reliant upon credentials and ‘testaments.’ Yet these are difficult to establish and to verify with respect to e-information vehicles. She says:
… trust in the information we use in decision making and trust in the individuals with whom we have relationships seems crucial to our way of being. Yet trust is difficult to develop in an environment in which one cannot be sure of the identities of the people with whom one is communicating. It is difficult to develop a reliable history of experiences with specific people (p. 62).
Johnson argues further that transparency per se is less important than users’ awareness of and agreement to the degree of transparency of the product. She says (1997, p. 65):
…what seems most important for computer networks is that individuals be informed about what to expect when they enter an online environment and that the environment be what it purports to be.
We can have a wide variety of forms of online communication with a high level of trust if the rules are known or explained to individuals before they enter an environment. We can have environments in which there is a high degree of anonymity, environments in which an operator goes to great lengths to check and verify the identity (and even the credentials) of participants before allowing them to participate, and environments between these two extremes. We can have filtered and unfiltered discussions, discussions filtered by diverse criteria. The important thing is that individuals know what they are getting into before they enter.
Johnson is applying a standard ethical criterion of free choice under conditions of perfect information, and if both of these assumptions were met, we would have little or no objection to Wikipedia. However, while Wikipedia writers and editors do indeed ‘know what they are getting into before they enter,’ users likely do not. And it is users, ultimately, who decide the meaning of a product or system like Wikipedia. If users believe that Wikipedia is a credible, reliable source of information, then that will be its practical identity, regardless of what its contributors believe or do. Users’ beliefs, then, can distort and ultimately destroy the standard information production processes based on expertise, training, and other processes of legitimation and verifiability.
In support of hidden on-line identities, it can be argued that issues ranging from harassment to identity theft to threats to physical safety arise when full identities are posted on-line. In the early days of personal websites, for example, people often posted family pictures in an apparent effort to establish the sorts of human connections that lead to trusting relationships. This practice became much less common, however, when website owners became aware that they could be putting their children at risk from would-be drug-dealers, sex offenders, or kidnappers. On the other hand, without knowing the identities of Wikipedia posters, readers have no access to the shorthand modes of credibility establishment that are offered by degrees, professional affiliations, and other credentials. And providers have no reason to hold themselves accountable to the users of the information they provide.
While Johnson’s (1997) argument that varying degrees of transparency are fine as long as users know and agree seems valid in theory, it does not always work well in practice. Many times users are ill-informed, lazy, and/or satisficing. As Weil et al. (2006) affirm,
Because of limited time and cognitive energy, information users acting rationally to advance their various, usually self-interested, ends may not seek out all of the information necessary to make optimal decisions. Instead, they seek information to make decisions that are good enough, using time-tested rules of thumb or ‘satisficing’ behavior (p. 158).
And yet, the democratization of information that the Internet provides is a huge, potentially positive change in global equality of opportunity. Undue constraints on this process would slow the pace of positive change. Wikipedia, put together by volunteers who claim to transmit existing knowledge to those who have never before had access, can be an amazing tool. Yet the information that Wikipedia transmits may be incomplete, biased, and even inaccurate. As Wikipedia continues to evolve, the transparency and accountability issues of pseudonymic contributors and editors must be addressed in a way that establishes the credibility of the information they present.
Transparency and Credibility: The Essjay Saga
In March 2007, a 24-year-old community college dropout, going by the screen name Essjay, confessed to fabricating an identity that lent weight and credibility to his Wikipedia postings and edits. Ryan Jordan, the person behind Essjay, had told the Wikipedia-world that he “held doctoral degrees in theology and canon law, and worked as a tenured professor at a private university.” With input into 16,000 articles, Essjay was one of Wikipedia’s foremost editors and administrators. Jimmy Wales, Wikipedia founder, initially commented that despite Wikipedia’s failure to verify his identity, Essjay had been an “excellent editor with an exemplary track record.” He later rescinded his support and publicly announced that he hadn’t known what was going on and had asked Essjay to resign (Cohen 2007).
On his user space, Essjay wrote,
My comments here will be short and to the point: I’m no longer taking part here. I have received an astounding amount of support, especially by email, but it’s time to go … Many of you have written to ask me to not leave, to not give up what I have here, but I’m afraid it’s time to make a clean break.
His supporters were devastated and not reluctant to say so, as this sampling of their posted comments (reported verbatim) shows:
- Essjay retired because of all the dilemma [sic] and it’s so sad that we are losing a very dedicated Editor.
- Thank you for your service. I was debating retirement less than an hour ago because of a melting pot of issues, but they are nothing compared to the hell you’ve been through. You will be missed by those who rightfully see the service outweighs the scandal.
- You will be sorely missed my friend. While you probably won’t get this, my best wishes go with you wherever you are. Good luck with Wikia, and to Mia 🙂 it is a shame the trolls won the battle. They won’t win the war.
Then again, deep into the posted comments, there’s this:
I have to admit that I am surprised that everything but the gushing ‘oh my, I am so sorry that you are leaving’ comments have been edited. While plainly offensive remarks should be removed, critical comments should be allowed to stay. So again: leaving is an honorable decision. Off with you. Dpilat 04:43, 4 March 2007 (UTC)
All the negative comments were edited out? Oh, right; that’s how Wikipedia works!
A Newsweek article cited a critic writing about Web 2.0 as claiming that “sites like Wikipedia, along with blogs, YouTube and iTunes, are rapidly eroding our legacy of expert guidance in favor of a ‘dictatorship of idiots’” (Levy 2007, p. 16). The Newsweek writer, however, begged to differ. The Internet, he pointed out, can be compared to the introduction of the printing press and the devastation it wrought on the overwhelming authority of the Catholic church, even as it also unleashed vast human creativity (Levy 2007, p. 16). Just because something is authoritative, the author seems to imply, doesn’t mean that it’s true, or even useful.
The credibility and legitimacy of information providers is not just an arcane subject of interest only to those providers themselves. Johnson (1997) observes that e-communications have a scope not matched by any other mode of human communication. She defines scope as the combination of “vastness of reach, immediacy, and availability to individuals for interactivity” (p. 60), and points out that the enormous scope of e-communications lends them tremendous power. She writes,
… we generally expect those engaged in powerful activities to take greater care. We restrict who can use powerful technologies, for example, by licensing their use … We expect and require those who use more powerful, especially dangerous, technologies to take more precautions and exercise greater care than those who use less powerful technologies … Indeed, we often hold individuals legally liable for the effects of their actions when they use powerful technologies recklessly (Johnson 1997, pp. 60-61).
We will return to this issue of power and its careful use when we examine issues of social responsibility and performance, later in this article.
Transparency and Validity of Information
Where information is concerned, it is quite possible to have very credible and legitimate providers and full transparency, and yet arrive at invalid, illogical, dangerously incomplete, misleading, or untrue information. Process matters, but content does too.
Wikipedia’s formal content policy for articles applies the following standards to all posted materials:
- Neutrality in point of view, “representing fairly and without bias all significant views (that have been published by reliable sources).”
- Verifiability—”The threshold for inclusion in Wikipedia is verifiability, not truth. ‘Verifiable’ in this context means that any reader should be able to check that material added to Wikipedia has already been published by a reliable source.”
- No original research, meaning “unpublished facts, arguments, concepts, statements, or theories. The term also applies to any unpublished analysis or synthesis of published material that appears to advance a position.”
These standards, we argue, are simply not adequate to ensure that Wikipedia transmits valid information.
Neutrality
An anonymous editorial on The Economist’s website, commenting on Wikipedia’s Essjay scandal, remarked that “anonymity creates a phony equality, which puts cranks and experts on the same footing. The same egalitarian approach starts off by regarding all sources as equal, regardless of merit” (Anonymous 2007, p. 1). Consider, as an example, that both right-wing and left-wing editorial magazines can be considered ‘reliable sources’ for promoting particular political views and for identifying and interpreting facts and events within those points of view. As sources of information, these magazines can tell readers how the right or the left is thinking on particular issues. However, merely presenting published views on the left and the right does not necessarily constitute a ‘neutral’ or balanced presentation. As another example, consider a major event that receives considerable attention, say, the Challenger explosion of 1986. Wikipedia’s standard would seem to require that an article include every published theory, explanation, and idea about what happened. A sabotage theorist’s blog would be ‘balanced’ with the careful work of the Rogers Commission in explaining the disaster. From the point of view of information validity, this makes no sense.
It could also be argued that there is no such thing as a truly neutral point of view. All theory and research is based upon a variety of spoken and unspoken assumptions, any one of which, if challenged successfully, could change what is believed to be known or reasonably concluded. History is based on historians’ ideas of what is important and worth saving, and of course on what remains of the events to be explained. Mainstream philosophy is based on the assumption that reasoning (not intuition or revelation) is the path to knowledge. And so on. Information requires assumptions, and those assumptions necessarily skew the direction that information will take. The biases and assumptions of anonymous writers and editors are difficult if not impossible for users to discern.
Verifiability
For Wikipedia writers and editors, verifiability simply means that posted material can be found in some other published source; it does not mean that the material is, or has to be, true or generally accepted. This criterion for inclusion does not guarantee accuracy and quality of information. Publishing is no longer the preserve of an elite few; virtually anyone can publish virtually anything whatsoever, so the mere fact of publication in no way guarantees the accuracy or relevance of the material. And, in scholarly, scientific, and other professional domains, published materials often reflect ongoing arguments about what is reasonable, valid, or appropriately constructed or interpreted. To assume that ‘published’ material is factual and inclusive is simply naive.
Even worse, because of Wikipedia’s ever-shifting content, it is next to impossible to track all the changes that have been made in an article. By the time a reader wants to check our Wikipedia sources in this article, the pages will likely no longer be available and the reader will not be able to verify our interpretation of things. College students seem to have difficulty understanding this criterion of information use—that it be verifiable, especially if they have not been taught very strictly about plagiarism and referencing. They have grown up with the fluid information of the Internet, and they tend not to experience such fluidity as a problem.
Several studies have been done of the accuracy (in terms of breadth, depth, and factual content) of Wikipedia articles, with wildly varying results. An anonymous report in The Quill (2008) noted that a “panel of experts” rating the accuracy of five randomly chosen entries resulted in assessments including “pointless,” “puzzling,” “inaccurate,” “largely accurate,” and “distinctly biased.” Fiedler (2008) details a case in which a false biographical entry in Wikipedia was merely lifted from blogs and left uncorrected for months. Luyt et al. (2008) tested the hypothesis that older edits were more trustworthy than newer ones, but found that early edits accounted for about 20 percent of errors in the entries they examined and so were not especially trustworthy. Kirtley (2006) details several cases of political, personal, or historical smear campaigns conducted anonymously in Wikipedia articles. Schweitzer (2008), by contrast, reported that Wikipedia’s “coverage of psychological topics was comprehensive and prominently displayed on the major search engines.”
Verifiability is a legitimate criterion in determining the value of some body of information, and the accuracy with which information is transmitted is a sister criterion. Wikipedia so far does not offer reliable mechanisms for ensuring the verifiability and accuracy of information presented in its articles.
No Original Research
This criterion is supposed to protect the user from material that has not been legitimated by peer-reviewing or traditional publishing processes. It represents the rejection of a layer of evaluation and analysis added to the piece of information being reported. Further, the requirement of published information increases the chance that the material has been earlier either incorporated into or rejected by thinkers in the field. Nevertheless, the problems with neutrality, validity, and verifiability make the ‘no original research’ requirement almost irrelevant.
What difference does it make if transmitted information is valid? The short answer is that false information causes untold harm. For example, the tobacco industry claimed for decades that nicotine was not addictive and smoking was not harmful to health. These claims were false, and were made by companies and individuals who knew of the falsity. The resulting harms to smokers and those around them have been enormous.
The deeper answer to the relevance question might be based in John Rawls’s (1971) claim that ‘honesty is the first principle of a just society.’ A just, or fair, society is one in which everyone has access to the processes by which the society’s benefits and burdens are distributed, and no one is unfairly burdened. Almost by definition, access to processes requires accurate, sufficient information about them. This requirement can only be achieved if honesty is an infallible behavioral guide. In a fair or just society, participants may reasonably trust that they have accurate, adequate information to fully participate in society’s processes. In a Wikipedia-type system, trust may exist, but there is no substantial reason for it. This type of system is therefore too fragile to be sustained in the face of some participants’ inevitable opportunistic behaviors.
Following Vaccaro (2006), the question to be asked here would be this: ‘Under the veil of ignorance, what should be the criteria for knowledge [information] sharing on the Internet’? Rawls’s (1971) ‘veil of ignorance’ is a thought experiment in which people are to imagine that they have entered a new society and must agree to rules about how that society will be run. The catch is that they have no idea about where they will rank in that society, or how any personal attributes will be valued. They do not know if things like height, weight, intelligence, skin color, racial origin, sex, age, talents, interests, or anything else will give a person higher or lower status, more or fewer privileges. Vaccaro’s (2006) argument focuses on user privacy and safety issues involved in information technology. For our purposes, considering the Rawlsian question might lead to a rule something like this: ‘Information producers shall tell the truth, the whole truth, and nothing but the truth as best they can.’
In traditional information-production systems, invalid assumptions, methods, findings, and conclusions can be overturned by an inability of others to replicate the information-producing process, by presentation of new or contrary evidence, by analyzing and rearguing, or by a number of other challenges that can be issued and must be addressed. In Wikipedia-type settings, where anything can be set forth as ‘information,’ anyone can not only issue a challenge but actually change the presentation itself.
Consensus-building
Wikipedia processes assume that the quality of the articles improves with the number of revisions, and that controversies among editors can be resolved through documented discussion logs. However, as the number of revisions increases and the discussions taper off, there is a greater chance that information will rely on what Kamm (2007) calls “the wisdom of the crowds”:
There is no reason that Wikipedia’s continual revisions enhance knowledge. It is quite as conceivable that an early version of an entry in Wikipedia will be written by someone that knows the subject, and later editors will dissipate whatever value is there. Wikipedia seeks not truth but consensus, and like an interminable political meeting, the end result will be dominated by the loudest and most persistent voices.
This concept of ‘knowledge-by-crowd’ becomes more vivid considering the fact that the identities of Wikipedia collaborators are hidden. The possibility of being one voice that makes a difference in the collective would be fascinating and very appealing, if it were not for the credibility issue. The difference between credible and non-credible information transmission relies heavily on the identity, legitimacy, and accountability of the provider.
The Social Responsibility / Performance and Transparency Connection
Johnson and Powers (2005) argue that issues of responsibility in ICT are complex because many individuals participate in the processes involved in these technologies:
The many hands include modelers, coders, testers, documentation writers, system administrators, and users. When something goes wrong, or even when there is some preventative action to be taken to avoid untoward events in the future, questions arise as to whether and how responsibility should be distributed among these ‘many hands.’ (p. 99)
The authors refer to responsibility on the individual level of analysis. In addition to the individual level, responsibility also exists on the organizational and institutional levels of analysis. The field of business & society/business ethics (B&S/BE) has a great deal to offer the ongoing discussion of ICT ethics and responsibility in general, and open-source information production in particular. Corporate social responsibility is defined as “the set of duties that companies owe to their stakeholders and to society” (Wood 2007, p. 43). Corporate social performance, a broader and more inclusive term, is defined as “a business organization’s configuration of principles of social responsibility, processes of social responsiveness, and observable outcomes as they relate to the firm’s societal relationships” (Wood 1991, p. 693). Although these concepts were originally developed for and applied to businesses, they can be extended to nonprofit organizations. Wood’s (1991) corporate social performance model offers a theoretical basis for examining these issues of Wikipedia’s information production from the standpoint of the social responsibility exercised by the owner organization, Wikimedia Foundation, and by the loose organizational system that is Wikipedia itself.
Analyzing Wikipedia and the Wikimedia Foundation from a social performance perspective requires first determining whether or not they abide by the principles of social responsibility, which, in brief, involve responsible use of power, responsiveness to stakeholder and societal interests, and the exercise of moral autonomy by actors within the organization or system. We have seen that Wikipedia is acquiring immense power to define information. Whether it makes wise use of that power is still an open question. On the second principle, Wikipedia seems to have been extraordinarily responsive to most stakeholders’ interests in the sense that editing processes are transparent and available to anyone who has an interest in a topic. On the third principle, however, Wikipedia and its owner are failing miserably. The principle of managerial discretion requires that every actor act from a sense of duty to exercise moral autonomy and choice in responsible ways. When Wikipedia’s editors and administrators remain anonymous, this criterion is simply not met. It is assumed that everyone is behaving responsibly within the Wikipedia system, but there are no monitoring or control mechanisms to make sure that this is so, and there is ample evidence that it is not so.
Exercising moral responsibility means, in part, being willing to accept the consequences of one’s acts and to ‘stand up and be counted.’ This is the essence of accountability. Accountability is not necessary for the exercise of moral responsibility in some cases; one may fulfill a perceived moral duty by giving to charity anonymously, for example. However, when actions have the potential for negative effects, accountability becomes more critical. Harms done by anonymous others leave no room for moral redress.
Abrams et al. (2003) argue that organizational information networks depend on the existence and nurturance of trust that participants are both benevolent (mean no harm) and competent (skilled and knowledgeable). Based on their research in organizations, they suggest that mechanisms for holding participants accountable are essential in establishing and maintaining such trust. Wikipedia has no such mechanism.
Because Wikipedia’s processes are not as transparent as the owners claim, the medium fails on grounds of the moral autonomy principle. This lack of transparency in terms of anonymous providers (writers, editors, administrators) yields unintended but potentially severe consequences for users and others: users treat Wikipedia as an authoritative, factual, and sufficiently complete source of information; users do not verify Wikipedia’s information or cross-check it with other more legitimate and verifiable sources; users therefore may act upon information that is incomplete, misrepresented, or untrue; those actions may result in unintended harms to users and to others.
Two scholars in particular have articulated ethical positions that are relevant to the issue of author/editor anonymity on the Internet. Floridi (1999) argues that computer ethics (CE) is based on a broader theory of information ethics (IE), and that IE presents challenges that conventional ethical approaches cannot handle. Dreyfus (1999) argues that when anonymity substitutes for personal commitment, as is true of Wikipedia writers, editors, and users alike, then none of these actors can find any substantial, lasting meaning to their actions or their lives. Below we present these two perspectives in more detail.
Hubert L. Dreyfus (1999) applies Kierkegaard’s mid-19th-century analysis of the ‘dangers and opportunities’ of the press to the modern use of the World Wide Web for educational purposes. Kierkegaard, writing of the widespread influence of mass media, outlined three sequential ‘spheres of existence,’ only the last of which was capable of producing a meaningful life. Dreyfus’s adaptation of these spheres to Internet usage is captured in the following summary.
The aesthetic sphere is characterized by curiosity and a continuous search for enjoyment. Surfing the Web to find interesting sites and information, participating in chat rooms, and so on are typical behaviors of people in this sphere of existence. Ultimately this approach fails to create meaning because the user remains anonymous and takes no risks, and furthermore has no standards to distinguish among information, except entertainment value, which over time naturally lessens.
The ethical sphere is marked by processes that turn information into knowledge according to the user’s chosen perspective. Information is sought not for entertainment value but for serious purposes—to guide action or to learn something important. Adopting a perspective, say, that of a trained anthropologist observing and studying culture, allows one to transform information into knowledge and thereby attain competence in a chosen field. It does not, however, allow mastery and ultimate satisfaction, which requires entry into the third sphere.
The religious sphere in this analysis is not necessarily godly; instead it requires an incontrovertible, unconditional commitment to one world view or perspective. Such a commitment is life-changing and life-defining; it sets standards for judging the trivial and the profound; it is identity-based and risky. The Internet, with its blatant anonymity and impermanence, does not and cannot sustain such commitment and so, ultimately, it cannot provide the meaning that humans seek.
So, in Dreyfus’s view, the entertainment value of the Web inevitably leads to boredom, and if one tries to use the Web from within a chosen perspective, there is ultimately no defense for choosing that perspective instead of another. He writes, “The ethical [sphere] breaks down because the power to make commitments and so to choose what information to seek out undermines itself. Any choice I make does not get a grip on me, so it can always be revoked” (1999, p. 18). Finally, because the Internet does not support the unconditional commitment necessary for the religious sphere of existence, no sustained meaning can be wrenched from Internet experiences. Dreyfus concludes with the gloomy view that students using the Internet are not likely to progress past the state of despair that is the natural consequence of living in the aesthetic sphere. These are serious consequences indeed!
Floridi (1999) attempts to apply the philosophy of information ethics to justify his contention that computer ethics (CE) is worthy of philosophers’ time and attention. Privacy, accuracy, intellectual property, access, security, and reliability, he argues, represent longstanding ethical issues that have been so transformed by information technology advances that they require new ways of thinking to discriminate between ethical and unethical uses and behaviors.
Floridi explains that the usual ethical approaches—deontology, contractualism, and consequentialism—do not adequately encompass these CE issues because technology itself is often an implied moral actor, despite its lack of consciousness, motivation, and intention (p. 39):
Two possible forms of distortion … are the projection of human agency, intelligence, freedom and intentionality (desires, fears, expectations, hopes, etc.) onto the computational system, and the tendency to delegate to the computational system as an increasingly authoritative intermediary agent (it is not unusual to hear people dismiss an error as only the fault of a computer). In both cases, we witness the erosion of the agent’s sense of moral responsibility for his or her actions.
CE, he explains, is different: “Without information there is no moral action, but information now moves from being a necessary prerequisite for any morally responsible action to being its primary object” (p. 43). Or in different words, information ethics (IE) “suggests that there is something even more elementary and fundamental than life and pain, namely being, understood as information, and entropy” (p. 45). The idea here is that the state of being is even more basic than the state of pain, which drives most traditional ethical thinking, and that when ethics approaches are life-and-consciousness-centered, rather than being-centered, they overlook entire universes—inanimate objects, processes, information—that can also be subject to moral reasoning and action.
Transparency, as we now see, is beneficial across many layers of analysis, including the deeply philosophical, and yet it is not the entire answer to the information issues posed by Wikipedia-type sources. Transparent processes of accessibility permit a democratization of information production and usage, but, as is also true of political democracy, the ‘voter’ (user, producer) need not be knowledgeable, honest, or even paying attention. Transparent processes of information production and transmission allow greater capacity for validation and replicability, but when the processes emphasize only values such as neutrality, inclusiveness, and prior publication, there is no requirement for truth or credibility. Finally, the anonymous production and use of information prevents human users from achieving the deepest possible meanings in life, and violates as well the ethical principle of integrity of information. In addition, anonymous providers need not exercise moral responsibility for there is no accountability—no need to accept consequences of one’s acts. Full transparency of access and production processes would certainly help to solve some of the problems Wikipedia generates.
Conclusions
Increased access to information via the Internet in the last decade is a phenomenon that is transforming societies worldwide. Anyone with access to the Internet can read the world’s major newspapers or search for information on literally any subject online at no or minimal cost. The democratization of information has no historical equal—individuals from all over the world, assuming Internet access, are now able to access the same vast array of information. This opens the possibility of reducing the inequality of access to information and its resulting power asymmetry—which, as we pointed out earlier, is one of the major problems with the free market concept (Friedman 2005; Stiglitz 2002; Weimer and Vining 1992). Further, as Johnstone (2007) argues in her discussion of capabilities theory, computer technologies empower individuals, contributing to “people’s abilities to define and lead lives that they value” (p. 86). Wide access to information and ease in communication across the globe also can make civil society, business, and government more transparent, in response to an increasing public demand for accountability from institutions and individuals.
The democratization of information production and use is not an unmixed blessing. Because the stakes are very high, it is crucial to develop some generally accepted understandings about the validity, accuracy, and verifiability of information, and about the accountability and moral responsibility of providers. To this end, it is essential to have information processes that are not just transparent, but that also acknowledge and rely upon expertise, verifiability, and credibility.
Wikipedia is a new process and may turn out to be very valuable in bringing information to the world’s fingertips. Some scholars argue that professors can fruitfully seize “teachable moments” from students’ inevitable Wikipedia use. Badke (2008), for example, suggests that Wikipedia entries and editing histories can be used in teaching students how to evaluate written material, that students could themselves engage in writing and editing Wikipedia articles as class projects, or that students could learn to find and use other, more legitimate on-line sources of information.
Wikipedia offers many benefits to the user: ease of use, no costs, a wide scope of accessible information, and hyperlinks that direct the user to other areas of Wikipedia, opening possibilities of enrichment. Individuals may contribute to Wikipedia in a relatively easy interactive process. For the individual citizen, the easy access at no cost can translate into more use of information, and a deeper understanding of topics of interest. In addition, individuals may find, in the possibility of contributing information to Wikipedia, an opportunity to develop skills and understandings that would not be available otherwise.
But there must be valid and reliable mechanisms that legitimate this new information transmission form. Citizendium (http://en.citizendium.org), an offshoot of Wikipedia driven by founder Larry Sanger, aims to correct many of the problems we and others have identified with open-editing encyclopedias. Authors and editors use their real names and credentials on-line, and the site claims to “aim at reliability and quality, not just quantity.” Alas, from its start-up in 2006 to the March 2009 date when we checked, Citizendium had about 10,400 articles in English, compared with Wikipedia’s 2.8 million. Expert information production, even on-line, is a slower, more tedious process than free-for-all writing and editing.
Perhaps ultimately what will bring legitimacy and validity to Wikipedia is the complete transparency of its process. For this to happen, Wikipedia must move away from anonymous and pseudonymic postings toward full accountability for those who post and edit articles, using real names, occupations, credentials, and affiliations. As a new process, Wikipedia will undoubtedly change and develop, and when it ‘grows up,’ it must offer reliable ways for users to determine the accuracy and reliability of the information presented.