Brendesha Tynes. Handbook of Children, Culture, and Violence. Editors: Nancy E. Dowd, Dorothy G. Singer, Robin Fretwell Wilson. Sage Publications. 2008.
As hate groups have proliferated in the United States over the last decade, there has been a corresponding increase in the number of online environments where their adherents congregate. Scholars have argued that a “virtual culture” of racism is now forming as a result (Back, 2002; Zickmund, 1997). While most members of such groups are adults, the widespread adoption of computer technology by children and teens has made it easier for hate groups to reach out to young people. Recent surveys suggest that some 97% of children and adolescents in the United States have Internet access (Center for the Digital Future, 2004). Nearly three-quarters of these youth use the Internet as an information source in doing their schoolwork (Lenhart, Simon, & Graziano, 2001). While parents may be pleased that their children are finding help with mathematics, science, or social studies, evidence is mounting that some are studying a new online subject: hate.
Hate groups and racist individuals carefully construct their online sales pitches to appeal to youth, using multimedia and other persuasive tactics (McDonald, 1999). In some research, children are portrayed as passive victims who might stumble upon online hate while doing homework or surfing the Internet (Perry, 2000). But the exposure of children to online hate sites run by adults is only part of the problem. Children are also becoming perpetrators. Recent news articles and studies have shown that children and adolescents are increasingly involved in online hate speech (Gerstenfeld, Grant, & Chiang, 2003; Greenfield, 2000; McKelvey, 2001; Tynes, Reynolds, & Greenfield, 2004). Acting both alone and in conjunction with peers or family members, children as young as 11—and perhaps younger—are active agents in the creation of hate materials and the use of deprecating speech online. Encouraged by the interactivity and anonymity of cyberspace, children and adolescents from around the world can lead their own personal hate movements, all in the privacy of their homes.
The potential impact on children is worrisome at several levels. First, hate speech can directly hurt youngsters who are members of the target groups when they encounter this material online. Second, by making their messages attractive to children and adolescents, hate groups are able to recruit, teach, and maintain alliances with youth, swelling the ranks of potential supporters and members. And finally, there is the possibility that hate speech online may directly incite or otherwise enable youth to commit violent acts offline. Perhaps the best-known example is the Columbine school massacre of 1999, where the young killers used the Internet to learn how to make pipe bombs, to plan their strategy (using an online video game), and to post online threats to their classmates (Biegel, 1999; Greenfield & Juvonen, 1999). One of the perpetrators, Eric Harris, also used the Internet to communicate with others who shared his admiration for Hitler’s philosophy.
This chapter begins by describing the global environment of online hate and the computer tools used to disseminate hate materials, especially to children and adolescents. It then analyzes the discourse strategies that both adults and children use to recruit, socialize with, and persuade others within the online hate movement and to derogate targets outside it. Next, the chapter briefly discusses why some children rather than others participate in online hate and the risk factors for involvement. The final sections look at the potential impact of these discourses on young people and suggest policy implications.
Methodology
The analysis in this chapter draws on literature from various disciplines dealing with racist and anti-Semitic online hate. A search was conducted of major online social science databases, including PsycINFO, JSTOR, ISI Web of Science, Linguistics and Language Behavior Abstracts, Sociological Abstracts, and Communication Abstracts. Keywords used in the search were “online hate and racism,” “Internet and racism,” “Internet and anti-Semitism,” and “children, Internet and racism.” A search of major composition journals was also conducted using the same keywords. Articles resulting from each of these searches were gathered and their reference lists were used to find other studies. This chapter includes both examples drawn from the studies reviewed and examples retrieved directly from online sites.
Because the literature has focused primarily on adults, selected Web sites, discussion boards, and chat rooms geared toward youth were also monitored and used to provide examples of hate speech, both targeting young people and carried out by them. Three discussion boards were chosen: the racism board of the online version of a widely distributed teen magazine, and two extremist discussion boards linked to the Resistance Youth and Panzerfaust Web sites. Also examined was Stormfront.org for Kids, the youth section of the extremist Stormfront Web site. Mainstream teen-monitored chat rooms on a popular paid service were followed as well. Monitored chat rooms have a trained adult host who intermittently monitors the room and electronically “evicts” participants who violate chat rules, while unmonitored chat rooms have no monitors and no such rules. These sites provide a window into the range of spaces where online hate is present, from forums directed to extremists to those open to the general public, both monitored and unmonitored.
Each example of hate speech is transcribed here just as it was written by the perpetrator. The language is extreme and includes racially degrading terms. In presenting these examples, no attempt has been made to omit or modify offensive words, as mitigating the language could “convey unwittingly that the material is less extreme, both politically and morally, than it actually is” (Billig, 2001, p. 272). Misspellings and other errors in the originals have been retained as well.
The Global Environment of Online Hate
Online hate, also known as cyber racism (Back, 2002), includes hate speech and so-called persuasive rhetoric. There is no generally agreed-upon definition of hate speech, but many scholars refer to Matsuda (1993), who defines it as “speech that has a message of racial inferiority; is directed against a member of an historically oppressed group; and is persecutory, hateful, and degrading” (Boeckmann & Turpin-Petrosino, 2002, p. 209). Online hate speech may appear in the form of text, music, online radio broadcasts, or visual images that directly or indirectly exhort users to act against target groups (Thiesmeyer, 1999). While most hate speech is protected under the First Amendment, speech may rise to the level of hate crime when it creates clear danger of imminent lawless action and when it constitutes “fighting words” or defamation (Leets, 2001).
Online hate speech simultaneously targets two different audiences: on one hand, potential members and adherents of hate groups; and on the other hand, the targets of the hatred, usually minorities. While the Internet is used to promote all types of bigotry, this chapter focuses on racism and anti-Semitism disseminated by White racialists. The term White racialist refers broadly to groups or individuals who subscribe to ideologies of White racial superiority. The literature uses various terms to describe this population, including far-right extremists, White nationalists, White supremacists, and White racists. When discussing a specific author’s research, this chapter uses the term chosen by the researcher. Racism and anti-Semitism underlie most of the hate crimes White racialist groups commit offline; accordingly, the focus here is on parallel online phenomena.
A number of sociologists and criminologists have conducted content and social network analyses in an attempt to document the nature and extent of online hate (Donelan, 2004; Gerstenfeld, Grant, & Chiang, 2003; Mann, Sutton, & Tuffin, 2003). Their findings indicate that the online hate movement is promoted by a loose global network of racist organizations that use Web sites, along with discussion boards, chat rooms, and other interactive media, to create an online presence. This network appears to be growing, both offline and on. The Southern Poverty Law Center (2004a, 2004b) reported a resurgence in hate group activity in 2003, with several racist groups more than doubling their number of chapters from the year before. The organization’s Intelligence Report (2004b) listed 497 hate Web sites based in the United States alone that year, a 12% increase over 2002. While it is impossible to get a definitive count of the sites that exist, some estimates are well into the thousands.
A handful of prominent hate groups have played key roles in setting the ideological tone for the movement (Anti-Defamation League, 2001a; Gerstenfeld et al., 2003; Southern Poverty Law Center, 2004a, 2004b). They include the Ku Klux Klan; Skinheads; and neo-Nazi groups such as Aryan Nations, Christian Identity, and Holocaust Denial. The Ku Klux Klan is one of the oldest hate organizations, founded in 1865 by veterans of the Confederate Army. The organization has died out and reemerged a number of times, but its ideology of White superiority over other races has remained constant. The organization now has a number of different factions throughout the United States. Skinheads, with roots in Britain, and neo-Nazi groups share a belief in Hitler’s anti-Semitic, racist philosophy. Skinheads, however, often have a younger membership and advocate violence more vehemently than most groups (Gerstenfeld et al., 2003). Christian Identity groups believe that Anglo-Saxons are the true Jews of the Bible, that the Jews of today are descendants of Satan, and that all other minorities are inferior “mud people” (Anti-Defamation League, 2001a).
There are hundreds of different branches and chapters of these organizations and similar ones in the United States, with corresponding Web sites (Southern Poverty Law Center, 2004b). Membership in the groups is fluid, with people moving in and out and often maintaining affiliations with several groups simultaneously (Perry, 2000). These complex linkages are also present online. Gerstenfeld, Grant, and Chiang’s (2003) content analysis of 157 online hate sites showed that more than 80% of the sites had external links to other hate sites. No one group dominates the cyber world of hate. In a social network analysis of 80 White supremacist organizations with a presence on the Internet, Burris, Smith, and Strahm (2000) found that the online movement is decentralized, with multiple centers of influence.
There is little evident division between groups along doctrinal lines. Though they may differ in their practices and national visibility, the groups share a common goal that might be broadly described as ridding the world of “cultural pollution” (Perry, 1999). This pollution is defined differently by different groups, but it is most often equated with racial, ethnic, and religious minorities, with most groups espousing an anti-Semitic and White supremacist platform (Perry, 2000). Burris et al. (2000) confirm that strong links have been forged between organizations and suggest that ideologies are merging, with many groups undergoing “Nazification” or adopting neo-Nazi ideologies. Other researchers have argued that Christian Identity beliefs are increasingly shared among groups (Sharpe, 2000). In many cases, boundaries between groups are no longer sharply defined.
Recently, hate group activities offline have become increasingly mainstream. Louis Beam’s essay Leaderless Resistance (1992) encourages White racialists to infiltrate mainstream institutions. Beam also suggests in this essay that White racialist groups become more decentralized, and that members wear less conspicuous dress so that they will not be easily identifiable. This melding into the fabric of society occurs even more seamlessly on the Internet.
While organized hate groups and their adherents may account for a significant proportion of those involved in online hate, a person does not have to be affiliated with any group to espouse hateful ideologies and commit racist acts online. One only needs access to the electronic tools often used to spread hate.
Tools of Online Hate
Web sites, e-mail, chat rooms, multiuser domains (MUDs), discussion boards, music, video games, audiotapes and videotapes, games, and literature are some of the most common tools used to disseminate online hate (Donelan, 2004; Gerstenfeld et al., 2003; McDonald, 1999; Schafer, 2002). Most of these are widely known; however, chat rooms, MUDs, and discussion boards may be less familiar to some readers and therefore require some explanation.
Chat rooms directly link the senders and recipients of text messages. Participants engage in what is called synchronous communication, in which comments of all participants may be viewed in real time. This differentiates chat rooms from e-mail exchanges, where messages can languish for hours or days before being read. Users may enter a goal-oriented chat room, where the discussion centers around one topic, or an open-topic chat room, where the discussion is freewheeling. In either case, they partake in the online equivalent of a dinner party conversation with one or hundreds of friends—or strangers—at once. The language produced forms a multidimensional text that juxtaposes dissimilar lines of conversation. These various dialogues float toward the top of the screen and then off at a speed that depends on the number of people in the chat room and how fast they type.
Multiuser domains are similar to chat rooms, but they often allow users to create graphic representations or “avatars” of themselves. Participants communicate with one another through a virtual body graphically represented on the screen, which may take the form of a human or, in some cases, an animal or monster.
Discussion boards are a type of conferencing system used to discuss a wide range of topics in an asynchronous format. Like e-mail messages, discussion board messages may be read and responded to in a matter of seconds, hours, or months. An important distinction between e-mail and discussion boards, however, is that e-mail is a “push” medium—messages can be sent to people who have not solicited them. In contrast, discussion boards are a “pull” medium—people must select groups and messages they want to read and actively request them (Smith & Kollock, 1999). The participant who engages in discussion boards has sought out others who are interested in the same issues (e.g., a teen health bulletin board; see Suzuki & Calzo, 2004). Unlike many chat rooms that are open-topic, most discussion boards are created to discuss specific subject areas. Posted messages and their responses are clearly demarcated by topic in what is known as a thread.
Chat rooms and discussion boards are key tools in the dissemination of online hate. They are often linked to Web sites. Schafer’s (2002) content analysis of 132 extremist Web sites found that 18.2% of the sample offered either a discussion board or chat room to the site’s visitors. These tools allow for interactivity in ways that other materials may not and help to facilitate the formation of a global “community.” While some of these online spaces are created specifically for hate group members and have strict guidelines for entry and participation, the bulk of online forums where hate speech takes place are open to the general public. One of the earliest types of discussion boards, Usenet newsgroups, still serves as a space for the recruitment of new members, dissemination of racist materials, and bullying of target groups (Mann, Sutton, & Tuffin, 2003). In addition to maintaining these targeted online spaces, White racialists have developed a strategy of infiltrating mainstream chat rooms and discussion boards (Whine, 1999).
How Hate Sites Target Children
Hate groups use a number of strategies to reach out to young people online. First, they create Web pages that are specifically geared to children and teens. Of the 157 extremist sites examined by Gerstenfeld et al. (2003), 7% had children’s pages. On these pages, ideas may be worded in ways that are easily understood by a younger audience. They may even feature messages by youth, directed to other youth. For example, on Stormfront.org for Kids, the children’s section of the Stormfront Web site, the creator of the page, 15-year-old Derek Black, introduces himself:
Hello, welcome to my site, I can see by the fact that you have visited my page that you are interested in the subject of race. I will start by introducing myself, my name is Derek. I am fifteen years old and I am the webmaster of http://www.kids.stormfront.org
Web pages may also be organized in such a way as to appear legitimate to a younger audience. Site creators often invite children and adolescents to use the information to complete homework assignments. Further along, for example, visitors may encounter a link to http://www.martinlutherking.org. The viewer is then redirected to that site and can click on a number of links, including one that reads, “The truth about Martin Luther King.” Those who go there encounter a myriad of disparaging articles, speeches, and “facts” about this historical figure. Since children and adolescents are generally less able than adults to differentiate between truth and fiction online, hate messages masquerading as facts may be interpreted as truth.
In addition to creating sections of Web sites geared toward youth, hate groups use multimedia, including video games, regular games, and music, to appeal to young audiences (Gerstenfeld et al., 2003). For example, sites may allow viewers to order the video game Ethnic Cleansing. A promotion for the game reads,
The Race War has begun. Your skin is your uniform in this battle for the survival of your kind. The White Race depends on you to secure its existence. Your peoples enemies surround you in a sea of decay and filth that they have brought to your once clean and White nation. Not one of their numbers shall be spared. (Gerstenfeld et al., 2003, p. 39)
White power and “Oi” music, a sub-genre of punk rock created by working-class British youth in the 1980s, often convey a similar message. For example, a song called “Aryan Rage” calls on all White men to get off the fence because “it’s time to kick some ass, pulverize the niggers, trash the fuckin fags. Grab yourself a club and beat down a lousy Jew” (“William Pierce,” 2000). One researcher suggests that listening to Oi music is the single greatest predictor of racist beliefs and violent behavior in hate group members (Hamm, 1999).
Children may also access entire works of racist literature through Web sites. One of the most commonly encountered is The Turner Diaries: A Novel, written by William Pierce under the pseudonym Andrew Macdonald (1980). Considered the bible for many White racialists, this fictional account features a race war, the overthrow of the U.S. government, and the establishment of an Aryan world (Leets, 2002). A copy of the book was found in Timothy McVeigh’s car after the Oklahoma City bombing, and it is widely believed that he drew inspiration from its message. Downloaded from the Web, the novel may effectively provide blueprints for violence for young people, who need not be members of an organized hate group to retrieve this information, but need only a computer with Internet access.
Another way that hate groups ensnare children is by giving Web sites ambiguous titles, so that they are easily mistaken for more innocuous sites. Children can unknowingly stumble upon these sites while searching for information online. A high school student researching weather systems, for example, might access http://www.stormfront.org (Perry, 2000). In search engines like Google, all sites resulting from online searches appear one after the other and often for several pages. Hate sites compete with credible sources of information for the browser’s attention. As hate sites continue to proliferate, it will likely become increasingly difficult to distinguish them from other more reputable sites.
White racialists also use a number of “foot-in-the-door” techniques to get the attention of young browsers. McDonald’s (1999) content analysis of 30 racist/White nationalist sites revealed five such techniques, including warnings, disclaimers, objectives/purposes, social approaches, and more sophisticated counterargument strategies. Twenty percent of the sample warned of the offensive content on the site, using language such as “Warning! This site contains white nationalist views. If you do not have an open mind, do not enter.” Such announcements serve a dual purpose: warning casual browsers, but also enticing them to read further. Thirty-seven percent of McDonald’s sample attracted attention by stating the group’s objectives, standards, values, credo, or manifesto. Many others used a “social approach,” adding jokes, quotes, prayers, symbols, photos, or cartoons to appeal to viewers’ emotions. The most prevalent strategies in the sample, however, were the objective approach and counterargument strategies. Sites using an “objective” approach stated their views neutrally, free of overtly derisory comments, in an effort to appear rational. Those using the counterargument strategy attempted to change potential constituents’ views about White nationalism and racism by explicitly countering mainstream conceptions of their group, often using poems, sayings, or historical facts as “moderating symbols.”
Discourse Strategies and Mechanisms for Expressing Online Hate
The computer technologies used to teach hate are in themselves neutral. Their negative power comes from discourse whose overarching goal is to establish the racial supremacy, superiority, separation, and preservation of an in-group and the racial “otherness” of one or more out-groups (Back, 2002; Back, Keith, & Solomos, 1998). This is accomplished with a repertoire of discourse strategies that fall into two broad categories: hate speech and persuasive rhetoric.
Hate speech includes verbal harassment and intimidation; displaying of racist symbols such as swastikas; expressive action, such as cross-burning; conveying an intention to discriminate or incitement to discriminate; fighting words, threats, and incitement to violence; and epithets or group libel (Goldschmid, 2001). The most common forms occurring online include threats, epithets, and an additional form gleaned from Internet research—cybertyping, or online stereotyping. Persuasive rhetoric includes such tactics as moral appeals, historical revisionism, and humor. All these forms of discourse may be found in any of the previously described online modes of communication. However, overtly racist hate speech is more likely to be encountered in music, videos, discussion boards, and chat rooms, while persuasive rhetoric is more likely to be seen on Web pages. As will become clear, the same strategies are used to communicate with both adults and youth.
Hate Speech
The presence of hate speech on the Internet is a burgeoning topic of study, engaging scholars in fields as diverse as law, communications, and psychology. Legal scholars have been by far the greatest contributors, with much of their work focused on which types of speech give rise to legal liability and which enjoy constitutional protection, and on the legal establishment of harm. The less extensive contributions from the social sciences focus primarily on White racialists and their use of hate speech online (Sharpe, 2000; Whine, 1999; Zickmund, 1997), or on measuring the harm hate speech inflicts (Lee & Leets, 2002).
Sharpe’s (2000) qualitative analysis of the Christian Identity movement’s print and Internet materials details how hate speech is used to establish the beliefs, doctrines, and practices of this group. Blacks are referred to as “talking apes” or “talking beasts.” Anti-Semitism, however, is the most powerful theme in the Christian Identity doctrine. The Kingdom Identity Ministries Doctrinal Statement of Beliefs proclaims the following: “We believe in an existing being known as the Devil or Satan and called the Serpent (Gen. 3:1; Rev. 12:9), who has the literal seed or posterity in the earth (Gen. 3:15) commonly called Jews today (Rev. 2:9, 3:9; Isa. 65:15)…. The ultimate end of this evil race whose hands bear the blood of our Savior (Matt. 27:25) and all the righteous slain upon the earth (Matt. 23:35), is Divine judgment (Matt. 13:38–42, 15:13; Zech. 14:21)” (quoted in Sharpe, 2000, p. 611; also see the Kingdom Identity Ministries Web site). Adherents to this doctrine cite these principles as divine justification for acts of violence against Jews and people of color. Those who commit these acts, specifically murder, are considered heroes of the faith, honored for their deeds (Blazak, 2001; Sharpe, 2000).
In much of the White racialist discourse online, notably in Usenet discussions, minorities are portrayed as a cultural disease or social contaminant. These forums typically depict African Americans as savages and subhuman destroyers of cities, the “counter-image” of civilized, productive society (Zickmund, 1997, p. 193). Jews, on the other hand, often are purported to be conspiring to take over the world.
These themes that are so prevalent on adult hate sites are increasingly surfacing in youth chat rooms and discussion boards. In the excerpt below from the Panzerfaust discussion board for teens, a youth expresses his disdain for “wiggers” (combination of the words white and nigger) who may date interracially and listen to hip-hop music. Using the screen name Onewiggerhater88, he explicitly calls Black people racially inferior in his rant, using epithets such as “smelly,” “buck nigger,” and “monkeys.” He claims that hip-hop music has changed the dating patterns of White females and predicts that if the pattern persists, the racially superior White man will be “bred out of existence.”
Was I absent from school the day the teacher told everyone its okay to go and fuck a smelly nigger? That shit is the worst thing a girl can do, its the lowest life thing in the world, worse than bestiality, which is really what it is: an extreme form of bestiality. All because of this lame, illiterate, non-musical, hip hop shit. How is a buck nigger talking fast about his automatic weapons and his “hennessey” and weed such a powerful phenomenon that it changes the breeding habits of an entire race of females, and makes them breed with the racially inferior? … The nigger of course, is trying to genetically better himself, no matter how it justifies it. Also, how come if they hate whitey so much, they will only fuck white women? Charles Darwin would be as perplexed as I am…. I am so sick of all this shit. Know this, it will only get worse. This shit has been building up since 1986, when those sell outs Aerosmith made that fucking video with RunDmc of Walk This Way … the NIGGERS [WILL] CONTINUE TO BREED THE WHITE MAN OUT OF EXISTENCE WITH THEIR FUCKING HIP HOP RAP SHIT, they will only grow in number. These monkeys are incapable of advancing to a higher level of expression. Rap is here to stay. When I look at white kids, in the FIRST GRADE wearing this hip hop “gear,” I see where it is going. It is getting stronger and stronger and reaching deeper and deeper. The gene pool is moving towards this shit. The wiggers are getting younger and younger. This did not happen with any other cultural trend, like disco, or new wave, or flower power. I fear that this shit will only continue to grow, WHILE WE GET BRED OUT OF EXISTENCE. (http://www.panzerfaust.com/forum)
This teen’s message parallels those espoused in adult discussion boards. Like adult White racialists, he asserts that the out-group is innately inferior. However, his message is tailored toward his peer group. Hip-hop music is a global phenomenon, with over 75% of its album sales to White suburban youth (Reese, 1998). Onewiggerhater88 links its popularity to the disintegration of society and argues that the White people who sympathize with Blacks and adopt cultural styles that might be considered Black are getting younger and younger. He is, in a sense, calling youth to arms, warning that Black cultural practices and race-mixing have the potential to strip White youth of their White identities.
His message can be categorized as flaming—hostile, aggressive online language characterized by profanity, obscenity, and insults. Flaming is often seen on discussion boards and is a central mechanism through which hate speech is spread. White racialists may flame among themselves or go into mainstream discussion boards and start flame wars. Other aggressive tactics developed to harass out-groups include the “mail bomb,” an e-mail sent to one or many targets who have demonstrated liberal sentiments or ideologies that counter those of the perpetrator (Back, Keith, & Solomos, 1998). Nineteen-year-old Richard Machado, a student at the University of California at Irvine, sent the equivalent of a mail bomb to 59 Asian students on campus in 1996. The message was titled “FUck You Asian Shit” and read:
As you can see in the name, I hate Asians, including you. If it weren’t for asias at UCI, it would be a much more popular campus. You are responsible for ALL the crimes that occur on campus. YOU are why I want you and your stupid ass comrades to get the fuck out of UCI. IF you don’t I will hunt you down and kill your stupid asses. Do you hear me? I personally will make it my life carreer to find and kill everyone one of you personally. OK?????? That’s how determined I am.
The message was signed, “Get the fuck out, MOther FUcker (Asian Hater)” (http://ComputingCases.org, n.d.).
Glaser, Dixit, and Green’s (2002) semi-structured interviews of 38 seemingly adult participants (no demographic data were obtained) in White racist Internet chat rooms highlight possible motivations behind racist acts such as these. Interviewees were asked questions related to interracial marriage, minority in-migration, and job competition. Respondents felt most threatened by interracial marriage; Blacks moving into White neighborhoods came second. Questions about in-migration incited strong emotions as participants described the actions they would take if minorities moved in nearby. One respondent said he would “run the niggers and all non-whites oit of my city … kill some nigger ass.” Another said he would “spraypaint ‘niggers beware’ on the door before they even move in” (Glaser et al., 2002, p. 185).
Such comments are not exclusive to online spaces that are geared toward extremists. Studies of chat rooms for general use by children and teens document aggressive language in these spaces as well (Bremer & Rauch, 1998; Tynes et al., 2004). However, studies do show significant differences between monitored and unmonitored teen chat rooms, with more negative race-related language in the unmonitored forums (Tynes et al., 2004). This suggests that without social controls, latent negative attitudes are more likely to surface among youth.
This becomes evident in monitored chat rooms when monitors leave the room, as negative race-related language and hate speech increase dramatically in their absence. For example, in the transcript below from a monitored teen chat room on May 25, 2003, the monitor (HostTeens) is out of the room until midway through the conversation. This transcript was included in the quantitative analysis in a previous study (Tynes et al., 2004), but has not previously been published. Black Cord is mistaken for a Black person and is threatened for using the word “cracker” (line 106). In the transcript, only those lines that pertain to this conversation are shown.
106. Black Cord: yo snappy cracker
109. Star: you start in on crackers and i’ll break ur face
113. Black Cord: home fry on the french fri
117. Black Cord: im hip
119. Black Cord: im cool
121. Black Cord: im a dude
124. Star: cord u wanna get beat to death
131. Black Cord: u gotta find me first
134. Star: ur black in a white mans world my friend
136. Black Cord: whos gonna beat my ass
140. Black Cord: whos gonna beat my ass
141. Zora: I am
143. HostTeens: — © — © —
144. HostTeens: @ —> —§ Hey Everyone §— <—[@
145. HostTeens: — © — © —
146. Star: me and my white cracker ass and shoe
148. HostTeens: hmmmm
149. Black Cord: î no
150. NUT: NO ITS A BLACK MANS WORLD
151. Zora: no not really
152. Black Cord: wait
160. HostTeens: actually it’s a purple alien’s world 😀
166. Star: ok host that was a HI gay there
172. Zora: yo who said its a white mans world who said black mans world all I know is God that made black and white said it was his world
177. HostTeens: Star, Maybe … except it won’t start a race war 🙂
178. Honey: Amen Zora!
179. Black Cord: i’m not blak
182. Black Cord: i like pearl jam
236. Star: race wars rock
—(from a paid monitored chat service)
Offended that the term “cracker” is apparently being used by a person of color, Star replies “you start in on crackers and i’ll break ur face” (line 109). Rather than responding to Star’s comments, Black Cord continues to antagonistically joke that he’s “hip” (line 117) and “cool” (line 119). Star then threatens to beat Black Cord to death (line 124).
The message of out-group inferiority reverberates here as it does in adult online spaces. In an effort to affirm his superiority over Black Cord, Star says that Black Cord is Black in a White man’s world. Note that prior to the host reentering the room, the hate messages include overt threats of physical violence. When the host reenters (lines 143–145) and attempts, although weakly, to quell the intensity of the conflict by saying it’s a “purple alien’s world,” the conversation becomes slightly less vitriolic. When Black Cord says he is not Black, the “race war” peters out and Star makes the final statement that “race wars rock” (line 236).
Placing ethnic or racial identifiers in screen names is a common practice in teen chat, but one that can actually invite conflict (Tynes et al., 2004). By including the word “Black” in his screen name, Black Cord made himself a target of racial attack. Such attacks can take place not only in chat rooms, but also on discussion boards and in instant messages (private online messages sent back and forth between two or more individuals in real time).
Another mainstream forum where hate speech can sometimes be found is the racism message board from an online version of a popular teen magazine. This is an open forum where teens go to discuss their attitudes about racism, get advice about interracial dating, and make connections with people from other cultural groups. While much of the communication is positive, it is often interlaced with negative stereotypes. For example, Dirtbiker2002 posted a message that read,
black people always cause mischeif and steal, they are very rotten how they have their stupid acents i mean come on man “axe” its ask. however i admit there is one black boy in my school and he is ok. (online magazine)
Dirtbiker2002 invokes images of Blacks as thieves and uneducated people, even while acknowledging that an individual he knows personally does not fit the norm described. Nakamura (2002) coined the term cybertypes to describe the distinctive images of race and racism that are propagated and disseminated through the Internet. Although cybertyping often occurs through interaction in multiuser domains, it takes place in other forums as well. The appearance of such language on this message board, a space where many teens claim to have moved past the racism of their parents, suggests the resilience and power of racial stereotypes on the Internet (Burkhalter, 1999).
Persuasive Rhetoric
A second type of discourse used in online hate is indirect hate speech, also called persuasive rhetoric (Thiesmeyer, 1999). These techniques of persuasion are geared mainly, though not exclusively, toward potential hate group recruits, and are also used to further indoctrinate less fervent hate group members. Being tailored to these specific audiences, persuasive rhetoric is often more subtle and less explicitly offensive than the hate speech presented in the previous section. In his study of rhetoric on neo-Nazi Web sites, Thiesmeyer (1999) lists seven strategies of persuasion, including (1) pedantism or preaching; (2) urgency; (3) historicism, fake tradition, and folk etymology; (4) delegitimization of other discourses; (5) use of virtual community to create the illusion of “real community”; (6) production of different materials for those who are already a part of the movement and for those who must be convinced; and (7) factualization, or phrasing of a perception as though it were an established fact (p. 120).
According to Lee and Leets (2002), storytelling is one of the most powerful tools for persuasion found in online environments. These scholars differentiate between so-called high narratives, which have well-developed stories with plots and characters, and low narratives, which have less-developed narrative content. “Explicit messages” within these narratives are aligned with the speaker’s intentions and convey only one meaning, while “implicit messages” may be inconsistent with the speaker’s intentions and convey multiple meanings. Lee and Leets asked adolescent respondents ages 13 to 17 to rate the persuasiveness of stories from hate group Web sites. The stories focused on five topics: interracial dating, being born White, immigration, joining a White supremacist group, and The Turner Diaries. The authors found that neutral, or uncommitted, youths found implicit messages more persuasive than explicit ones. Those youths already predisposed to accept the ideas found explicit messages more persuasive than implicit ones. High-narrative, implicit messages had more effect in the short term, but low-narrative, explicit messages had more lasting effects that persisted or even increased over time. Though this may seem counterintuitive, it has important implications for the potential ability of online hate to effect lasting changes in attitudes and beliefs.
In addition to these various forms of persuasive rhetoric, several others deserve special mention: appeals to morality or tradition, paradoxical language, historical revisionism, and comedic racism.
Moral appeal
Virtually all adult hate sites make reference to fairness, justice, and morality. Groups often seek to legitimize their activity on the grounds that they are “God’s chosen people” and have been denied justice (Duffy, 2003). A Web site visitor is solicited as a potential hero who can bring justice to his or her race and restore the world to its “proper” order after joining the hate group. Similar appeals are made on children’s sites. A site for the Ku Klux Klan’s “Youth Corp” explains that
kids are in the KKK because they want to learn about their heritage and they want to help make the world a better place. Men join because they want to protect their families and their Christian friends and neighbors from being destroyed in the future. We are all working together because we love Jesus and we want to help our people and the world. (http://www.kkk.bz/just_for_kids.htm)
Paradoxical language
In George Orwell’s book 1984, the characters speak a language called Newspeak, in which the true meanings of words and phrases are veiled through the use of paradoxical language. Truths are distorted into half-truths and in some cases, the polar opposite of truth, as in the slogan “Freedom Is Slavery.” Similar language is common on hate Web sites. In one study, as many as 21.7% of racist sites claimed to be non-racist, with some even going so far as to claim that minority groups are the real racists (Gerstenfeld et al., 2003). The KKK, for example, vehemently denies that it is a hate group. Its youth site offers a list of “bad things” people say about the KKK, such as that they burn crosses, to which the Web site responds, “This couldn’t be further from the truth.” Cross burning is a celebrated religious ceremony, the site claims, and not an act of terror. In fact, it argues that Klansmen “love the cross and would never desecrate it.” The KKK is thus portrayed as a group of love rather than a hate group.
Historical revisionism
Another key strategy used to deprecate racially oppressed groups is historical revisionism. This form of discourse is used to reconstruct an event or, in some cases, deny that it has occurred in order to further racist agendas. Extremist groups often promote revisionist rhetoric about the Holocaust in order to promote anti-Semitism. This typically includes claims that Jews were not killed in gas chambers on any significant scale, that the number of Jews murdered was substantially lower than reported, and that the Holocaust is a myth invented during the war in order to finance the state of Israel (Levin, 2001, 2002).
While the literature has documented the existence of Holocaust revisionism on the Internet (Tadmor-Shimony, 1995), we know very little about the role children and adolescents may play in advancing these ideologies. The example below is drawn from the Resistance Youth message board. On it, more senior members of the message board (as indicated underneath their screen names) tutor junior members in hate ideology. In a thread titled “Holahoax in English class,” Pinsafety, a junior member, poses a question to the group:
[W]e are studying about the biggest lie ever and i need good questions and comments to ask/say to make this left wing commie wench look stupid. I have allready gotten 2 other kids to say it never happened but the mo[r]e movies and crap she shows the more the kids are like poor jews. I need something to say that will unpanzifiy these people and make her look uncretitable.
A senior member named Wolfwoman then posts an extended reply that includes more than 10 questions for Pinsafety to ask his teacher, among them,
The Nazi’s wanted to kill all the Jews? Then why would they, during a war, with limited fuel supply, gather up all the Jews, ship them in railroads hundreds of miles, to camps they built specifially to house them, to shave and clothe them, to tattoo them for identification, only to kill them?
How could Anne Frank have written her diary, which was mostly in ball point pen, when she died in 1945 of Typhus and the ball point pen wasnt invented till like 1949–50???
IF 6 million were killed, then that is 3000 people a day. Is that really possible??
How come it is illegal in some countries, like Canada and most Euopean countries to even ask questions about the numbers of Jews who died in the Holocaust? What about freedom of speech? What are they afraid of ? If these revionists are full of crap then why not just let them spout thier crap and prove them wrong in trial? (http://www.resistance.com/forum/viewforum.php?f=1)
Through this exchange, we may speculate, Pinsafety is further persuaded that his preconceived beliefs are true. Because of the instructive nature of Wolfwoman’s questions, Pinsafety is not only learning how to argue against the existence of the Holocaust, but also gaining strategies for instructing others in his class. This suggests that transmission of cultural knowledge is taking place, with anti-Semitic and racist hate learned online subsequently taught offline. Other researchers have found that violence committed offline mimics the discourse of online hate (Thiesmeyer, 1999).
Comedic racism
Another means of teaching hate is through parody and humor. Ronkin and Karn (1999) examined text-based parodies of Ebonics (African American English) on the Internet. They found that beliefs about the inferiority of this language, and by extension its speakers, are expressed through mocking discourse. Billig (2001) has explored the seemingly innocuous practice of telling jokes and found that it is common on Web sites, in chat rooms, and on discussion boards. His study of the sites Nigger Jokes KKK, Nigger Jokes, and Nigger Joke Central suggests that joking provides a way to breach the constraints of political correctness, since jokes may not be condemned as readily as outright racist statements. He argues that taboos against race have replaced the Victorian taboos against sex and that joking permits their infringement (Billig, 1999). Racist attitudes and stereotypes are easily disguised as “just jokes,” ostensibly not to be taken seriously as indicators of racist beliefs.
Comedic racism is popular in online spaces for youth. The following jokes appeared on the Resistance Youth Web site, reportedly posted by teens participating in the discussion board:
- You hear they were improvong transportation in harlem? yea they planted more trees.
- What don’t you want to call a black person that starts with n and ends with r? Neighbor.
- whats the difference between a pizza and a jew (pizza doesnt scream in the oven)
- How many Jews does it take to change a Light Bulb? 1, but he’ll swear blind it was 6 million!!! (http://www.resistance.com/forum/viewforum.php?f=1)
Youth and Online Hate: Why and Who?
Aside from slight differences in the themes and tools, the discourses of hate practiced by children, adolescents, and adults are very similar. The differences, as might be expected, reflect the melding of youth culture with the culture of online hate. For example, adolescents and young adults are more likely than older adults to listen to and create White power music, which is heavily distributed online. They are also more likely to perceive racial differences through the lens of youth culture. As in the Panzerfaust message cited above, for instance, some teens are concerned that hip-hop culture will influence young White people to form alliances with Blacks and that this will ultimately lead to the extinction of the White race.
Why do children come to adopt or express such extreme beliefs? Part of the answer may lie in the effectiveness of the Internet as a medium for reaching and persuading computer-savvy youth. Messages online often achieve a level of legitimacy based solely on the fact that they appear on the Internet. Indeed, the information found online is often taken for literal truth (Perry, 2000). Children are not usually taught to be critical of the text that they read online and may be unable to filter out that which may be untrue. In addition, hate groups and racist individuals package their online messages in a visually persuasive manner. The engaging, interactive nature of the medium, along with its perceived credibility, may desensitize young people and make them particularly vulnerable to hate messages (Duffy, 2003).
Some researchers have used symbolic convergence theory to explain how a young person might become indoctrinated into online hate. This theory posits that groups create and exchange fantasies about themselves and others, thereby co-constructing a shared reality. In-group members tell stories and perform rituals with other group members that ultimately lead to a shared understanding of the group and of the behaviors of typical group members (Bormann, Cragan, & Shields, 1994; Duffy, 2003). These stories create a rhetorical vision of key events in history as well as of the future. If this vision speaks to an individual’s current state of mind, that person may adopt it as a credible interpretation of reality (Duffy, 2003).
Because many of the physical cues present in face-to-face interaction are absent online, participants in interactive online environments often have to recreate their physical bodies in their text. Social identity theorists would argue that the mere act of having to construct online bodies, to categorize both the self and others, leads to much of the out-group derogation that occurs in online contexts. Individuals strive to maintain a positive social identity for their own group, and the in-group must be positively differentiated from out-groups (Tajfel & Turner, 1979). Categorization is a normal process that facilitates information processing, but it may not always be based on real similarities or differences and may ultimately lead to ethnocentrism (Devine, 1995).
Anonymity is another factor that may lead to increased expressions of online hate. The facelessness of the Internet is disinhibiting and often causes people to write things they would not consider saying in face-to-face settings. Alonzo and Aiken’s (2002) study of flaming explains this behavior with “uses and gratifications theory”—the idea that people use certain media to satisfy certain needs (p. 3). These researchers argue that flaming may fill cognitive and affective needs by providing an outlet for stimulation, tension reduction, expression, and assertion. People can fulfill these needs at little risk to themselves because of the anonymity of many cyberspace applications.
Although the computer as a medium may encourage the uninhibited expression of hate, the computer itself clearly does not “make” children and adolescents—or anyone else—commit racist acts. Multiple factors contribute to antisocial online behaviors. Research on children in offline environments has consistently shown that young people are far from color-blind. Indeed, even toddlers may exhibit indicators of racial bias (Doyle & Aboud, 1995; Katz, 2003). The origins of this bias are multiple and may include children’s cognitive abilities, their peer group, their social environment, and their parents’ behaviors and values (Katz, 2003). As children grow into adolescents, research shows that many of the same predictors determine their racial attitudes (Fishbein, 2002). While the Internet is a powerful tool, there are many social and psychological factors that may influence children’s and adolescents’ attitudes about their own and other ethnic groups, as well as their decisions about whether to act on these beliefs.
Risk Factors for Involvement
Who engages in online hate? One might expect the most common profile to be that of a Southern, White, adult male. Some of the most prominent leaders of hate movements fit this description, including David Duke, former national director of the Knights of the KKK; William Pierce, author of The Turner Diaries; and Donald Black, creator of Stormfront.org, one of the first racist Web sites. With the proliferation of online hate and the widespread adoption of computer technology by young people, however, the face of hate is changing (Perry, 2000).
Youths of all regional, educational, and socioeconomic backgrounds are susceptible to engaging in hate group activity (Turpin-Petrosino, 2002). Those involved are found in every part of the country, although there may be higher concentrations along the East Coast (Southern Poverty Law Center, 2004b). There is evidence, however, that youth in areas experiencing economic distress may be more open to accepting hate messages (Blazak, 2001). In fact, a key strategy for offline recruiting is to visit schools in towns that have recently experienced an economic downturn, such as a factory closing (Southern Poverty Law Center, n.d.).
In addition, certain characteristics appear to place some children at greater risk. A study of youth involved in White extremist organizations found that 62.2% of the sample came from single-parent homes (McCurrie, 1998). Alienation from school and family is one predictor of whether young people will access violent media content, including hate Web sites (Slater, 2003). Similarly, youths who experience psychological distress as a result of factors such as blocked goal attainment and a lack of established social norms are particularly vulnerable (Blazak, 1995). Anger and frustration in these individuals can ultimately lead to group delinquency (Slater, 2003).
A number of other personal characteristics may contribute to making individuals of any age more likely to engage in online hate. Higher levels of disinhibition (or sensation-seeking behavior) are positively correlated with verbal attacks online. Aggression (Slater, 2003) and anxiety have also been implicated in these practices. In some instances, people who have experienced social deprivation, or economic, racial, or gender-based threats, may be drawn to hate-group ideology and activities (Blazak, 2001; Turpin-Petrosino, 2002). Others may perceive “genetic” threats in the form of interracial dating as a key reason to advocate racist violence (Glaser, Dixit, & Green, 2002).
Impact: From Speech to Action
It is nearly impossible to establish with any certainty the precise impacts of online hate speech involving youth. But research suggests that effects may be occurring on at least three levels.
First, out-group members, typically minorities, may be directly harmed by witnessing online hate directed toward their group or toward them as individuals. Leets and colleagues have researched hate speech and the harm it inflicts both on- and offline (Lee & Leets, 2002; Leets, 2001, 2002; Leets & Giles, 1997). They found that members of different racial and ethnic groups may have different perceptions of and reactions to the language on hate sites. Participants in one study generally considered hate Web sites to be harmful, but people of color perceived more social harm than Whites (Leets, 2001). This may be attributable to the fact that messages were not directed toward Whites. When a person witnesses his or her own social or ethnic group being attacked, there are likely to be, at the very least, short-term emotional effects; these may include mood swings, guilt, anger, loneliness, and fear. For example, a Latina participant in an unmonitored chat group, confronted with death threats against her ethnic group and stereotypes of Mexicans as “dumb,” commented, “I have heart probs [problems]… and I don’t want to end up in the hospital because of these muther fucker” (transcript collected May 2, 2003, Tynes et al., 2004). Long-term attitudes and behavior may also be influenced, with victims developing more defensive or vigilant attitudes and behaviors (Leets, 2002). The effects of victimization may linger for months or even years (Bard & Sangrey, 1986; see also Leets, 2002).
One of the earliest studies of computer messages and their influence on young people’s violent behavior suggested that being able to voice hatred through the medium of technology might actually be beneficial (Hamm, 1993, 1999). Online hate was seen as a substitute for action, allowing “the unfettered enjoyment of racial desires and fantasies … to be expressed through psychological catharsis, thereby quelling the impulse toward physical aggression” (Hamm, 1999, p. 9). Even if this were true for online spaces that are restricted to people who adhere to racist ideology, the message of hate is now spreading into mainstream online groups where the targets of these racial attacks are more likely to be present (Beckles, 1997). The psychological and emotional pain inflicted by this deprecating speech can often be as devastating as physical pain and material loss (Leets & Giles, 1997). Online hate, rather than being seen as an alternative to physical violence, may be viewed as violence, albeit in written form.
At a second level of impact, Internet hate strengthens the work of organized right-wing extremists. It does so by providing anonymous access to propaganda that guides criminal activity, by helping members to effectively coordinate their activities, and by creating venues for generating both legal and illegal income (Wolf, 2004). Children may be drawn into the network of hate groups by online materials. Resistance Records, a leading online distributor of White power music, sells most of its products to teens and young adults. This music may serve as the main point of entry into the online and offline hate movement for these young people (Blazak, 2001).
Finally, online hate may be a precursor to violent crime offline. Though a causal link between Internet hate and physical violence has not been proven empirically, scholars have argued that hate messages on the Internet incite hatred and promote harmful action against racial, ethnic, and religious minorities (Biegel, 1999). The Columbine case and countless hate crimes in recent years demonstrate that the Internet is, at the very least, an important educational and recruitment tool for perpetrators (Greenfield & Juvonen, 1999).
A key element in this incitement to violence is the dehumanization of target groups, making action against them seem justifiable. Minorities are dehumanized through “invalidation myths”—statements alleging that they are innately inferior (Kallen, 1998, p. 6). Kallen conceptualizes this dehumanization as a three-stage process: an invalidation myth is put forth, a theory of vilification is developed, and a course of action is planned that includes incitement to hatred and harm. These hateful messages accumulate and contribute to an atmosphere in which power structures are created and maintained, and in which violence against target groups appears socially sanctioned. Repeated often enough, these messages can eventually lead to physical harm (Calvert, 1997).
Policy Implications
Parents, Internet service providers (ISPs), the courts, and the federal government have all taken steps to protect children from hate on the Internet, but these strategies face significant obstacles. Some parents use filtering software programs including NetNanny, CyberPatrol, and Filter Logix to block access to inappropriate material. While some filters work via artificial intelligence, instantaneously scanning sites for offensive content, others work by using URL and keyword lists for filtering. Both types of filters work crudely at best: they may block access to important sites while allowing access to illicit ones (Mossberg, 2004). And because they screen only for text, graphic images of violent acts can be freely accessed. Filtering software programs created by ISPs are more effective, but still not perfect solutions. Additional attempts to protect minors from online hate include installing monitors in chat rooms. But this also has limitations, since it can be effective only when the monitor is actually present and decisively enforces chat rules.
While other developed countries such as Canada and Germany have established regulations against hateful content on the Internet, the United States has not done so, because regulation is quickly challenged as a violation of free speech. Because First Amendment doctrine does not treat hate speech as a category of speech exempt from constitutional protection, regulation is extremely difficult. Attempts to subject Internet communication to stricter regulation than is applied to public communication in more traditional arenas, such as the broadcast media or the public square, have been struck down (Partners Against Hate, n.d.).
Legal prosecution of online hate speech is also problematic. Most of the blatant statements of hatred and prejudice described in this chapter enjoy constitutional protection under the First Amendment (Leets, 2001). However, there are several types of speech that do not, including speech that threatens specific individuals. Any online communication that expresses an intention or threat to commit an unlawful act against a specific person is punishable by law (Anti-Defamation League, 2001b). The precedent for this was set with the 1998 case of United States v. Machado, which followed the arrest of the college student who sent e-mail threats to Asians. After serving a 1-year jail term, Machado was fined $1,000 and sentenced to a year of probation (Deirmenjian, 2000). Other speech that is not protected includes persistent harassment aimed at specific persons. However, to be prosecutable, the harassment cannot be an isolated instance, but must be an established trend (Anti-Defamation League, 2000). In addition, statements must be traceable to specific individuals or organizations. With the proliferation of new ways to send anonymous messages through cyberspace, this can prove very difficult.
Recently, legislation has been passed to protect children from many of the negative aspects of the Internet. The Dot Kids Implementation and Efficiency Act of 2002 mandated the establishment of a secondary-level Internet domain name within the “us” domain for children 13 and under (Saunders, 2003). The new domain is “kids.us,” and only material appropriate for children may be posted there. Links outside the domain are prohibited, as are multiuser interactive services, unless they show compliance with the goals of the statute (Saunders, 2003). While this act may protect children from inappropriate material, it does not protect them from other children. It is likely that even those multiuser services that comply with the statute would have difficulties monitoring every child at all times.
Stemming the spread of online hate and reducing the involvement of children and adolescents will require more than new laws or a new domain, though these can be part of the answer. The entire community must work collaboratively toward this goal. A first step is to get a better grasp on what really goes on in cyberspace, and how young people are involved as targets and perpetrators. Just as the government now monitors hate crimes offline, a task force should be established to monitor hate online, with special attention to sites frequented by youth.
Second, steps should be taken to determine what constitutes an online hate crime and to take these crimes seriously with appropriate enforcement (Deirmenjian, 2000). Criminal activity on the Internet is often regarded as beyond the bounds of detection and enforcement because of its “virtual” status (Williams, 2000, p. 95). For this reason, law enforcement agencies should receive special training to enable them to recognize and handle these types of cases (Deirmenjian, 2000).
Even with such measures, however, the nature of cyberspace makes it unlikely that effective controls can be imposed on the content of all, or even much, online communication. In the end, the best hope of limiting the damage to young people and the larger society from hate speech may lie in establishing a counterdiscourse of tolerance. Speaking to a United Nations seminar on the Internet and racial discrimination, the Global Internet Liberty Campaign (1997) asserted that “when encountering racist or hateful speech, the best remedy to be applied is generally more speech, not enforced silence.” More spaces both online and offline should be created where children of different backgrounds can learn together about other cultures and openly discuss their racial and ethnic similarities and differences. It has been argued that these integrated discussions can reduce prejudice (Burnette, 1997; Kang, 2000). Ultimately, such open communication may benefit both young people and society at large by helping to promote a culture of online tolerance that stands in clear opposition to the culture of online hate.