David Norman Smith. Handbook of Social Problems: A Comparative International Perspective. Editor: George Ritzer. Sage Publications. 2004.
Genocide, a term coined during the Nazi Holocaust in 1944, is perhaps the most popular social-scientific coinage of the past century. Yet the reality it names has been less carefully studied than many far lesser phenomena. Adopted by the United Nations (UN) in 1948 as a legal concept offering a ground for collective action to avert mass murder, the term genocide has since become a staple of partisan rhetoric—yet until the wake of the Rwandan massacres of 1994, the UN had failed to define even a single instance of post-Holocaust slaughter as genocide. A defining global social problem, genocide has nonetheless flown beneath the radar of international scholarship in most areas of study.
Like the proverbial enigma wrapped in a riddle, genocide has thus remained unknown in many ways. This is not, I contend, simply a function of the very real difficulty of the subject of mass destruction but also reflects the contradictory origin and role of the concept itself. “Genocide,” from the start, has been as much a political volleyball as an analytic category. It entered the world as a legal principle crafted to authorize breaches of state sovereignty by the global community. For three centuries, the sovereignty of independent states had been effectively sacrosanct in the Euro-American realm. But the stunning horror of the Holocaust called the old norms into question. A window of opportunity opened, briefly, for humanitarian legislation, and the global institutions of the postwar era—above all the UN—acted swiftly to pass a succession of new laws intended to prevent states from abusing civilians, whether in peace or in war, at home or abroad.
At the Nuremberg War Crimes Tribunal of 1946, the concept of war crimes was enlarged, under the rubric “crimes against humanity,” to include not only battlefield crimes but also the political, religious, ethnic, and national persecution of civilians (Ferringer 1989:87). Soon afterward, a flurry of further laws followed. The Convention on the Prevention and Punishment of the Crime of Genocide was adopted by the UN on December 9, 1948; the Universal Declaration of Human Rights passed the next day; the Geneva Conventions were enacted in 1949; and the European Convention on Human Rights followed in 1950 (Donnelly 1995:123; Jackson 1995:68). But the bipolar division of the world during the ensuing Cold War between the United States and the Soviet Union impeded the implementation of these principles. The Genocide Convention, in particular, was effectively paralyzed by the enmity and self-interest of the opposing powers, neither of which (in an era of great power involvement in wars in Vietnam, Cambodia, Afghanistan, Ethiopia, Angola, and Nicaragua) could afford the risk of politically tinged charges of mass murder. With the fall of the Soviet Union in the early 1990s, however, a decisive new phase began, the ultimate tendencies of which remain uncertain. On the one hand, the UN was freed from the Cold War constraints that had inhibited interventions—but so too was the United States.
Interventionism is now clearly on the rise, but with two rival centers of gravity: the international community, represented by the UN, and the United States as the dominant state and potential global hegemon. In this moment of tension and doubt, the legitimation of interventions acquires a new urgency, and genocide acquires a new geopolitical salience. “The abridgement of ‘human rights’… even by repressive policies would not be sufficient to justify interventions, but clear genocidal activities would be,” writes Kratochwil (1995:39-40). Hence, the world’s heightened sensitivity to genocide in the wake of mass killings in Rwanda and the former Yugoslavia in the 1990s is as complex politically as it is morally and intellectually.
My aim in what follows is to map these complexities, first by tracing the origin of the normative idea of genocide, then by situating this norm in the interstate framework that gives it political meaning.
Genocide as the Spur to a New Legal Order
In October 1933, provoked by the persecution of Jews and political dissidents in Germany, the League of Nations convened a special meeting in Madrid to consider strengthening legal protections for minorities. As the French scholar Georges Scelle observed at the time, the ensuing debate revealed a world divided on first principles.
The German delegate, flying the flag of racial “dissimilarity,” defended what Scelle called a Volkish “new edition of the imperialist interpretation of the nationality principle,” insisting that “states should have the right to maintain ties with ethnically similar elements living beyond their borders,” even as they suppress minorities themselves. This provoked an opposite reply from the Dutch delegates, whom Scelle quotes as saying that, given “the indissoluble tie between internal and international public order,” the League had a “right of intervention” to preserve peace “even during civil wars” (1934:255-56).
The latter position, Scelle stressed (1934:256), “is both super-statist and monist.” The implication was that international law supersedes national German law and that, contrary to the thesis of irreducible racial, moral, and legal “dissimilarities” between peoples, law is both unitary and universal in its ultimate principles.
A Polish contributor to this debate took an even stronger stand in favor of the superétatique view. This was the jurist Raphaël Lemkin, who, in 1944, would coin the term genocide and then, four years later, would persuade the UN to adopt the Genocide Convention. Fatefully, however, Lemkin’s efforts in 1933 were far less successful. Disturbed by the “alarming increase of barbarity” (Lemkin 1944:xiii) that had accompanied Hitler’s rise to power just months earlier—and well aware that Nazi terror could escalate radically if no preventive action were taken—Lemkin proposed a treaty to outlaw the “crime of barbarity,” which he defined as “the premeditated destruction of national, racial, religious and social collectivities.” But the forces arrayed against this kind of proposal were formidable. The Polish foreign minister, Beck, fearing Hitler’s ill will, refused Lemkin’s request to travel to Madrid to present his proposal in person. After it was read aloud, Beck accused him of “insulting our German friends.” Soon afterward, Lemkin was fired as deputy public prosecutor “for refusing to curb his criticisms of Hitler” (Power 2002:22). The first and best opportunity to tie Hitler’s hands had been lost.
At a deeper level, the problem in 1933 was that proposals like Lemkin’s cut against the grain of international norms. From 1648 until the Paris peace of 1919, private citizens had been regarded as being “under the exclusive control of States” (Cassese 1988:99-103). This precept was called into question when the excesses of the First World War came to light—above all, the extermination of Armenians and the mass killing of noncombatants—and it was further shaken when anti-Jewish pogroms erupted in Poland at the war’s end. But artful diplomacy soon rendered the reforming protocols of the 1919-1921 period moot (Macartney 1934; Sharp 1978). Hence, until the Holocaust, the “sovereign” rights of states continued to be jealously guarded, while citizens and minorities were left to fend for themselves.
The consequences were predictable. Between 1933 and 1939, Germany was allowed to infringe the rights of Jews and radicals on a vast and growing scale, “within view of the whole world” and “without any attempt by Germany at concealment” (Fein 1979:193). All the future Allied Powers sat idly by, shielded by conventional legal norms from any sense of responsibility for German affairs. While “the British were well aware of Nazi violence and discrimination against German Jewry before the war, they operated on the assumption that this anti-Semitism was simply a core feature of Nazism ‘and must be accepted as such. As long as the majority of German people supported Nazism and as long as Nazi anti-Semitism was legislative and nonviolent, then Britain had no right or obligation to intervene’” (Gannon 1971:277, cited by Fein 1979:183). This policy became “appeasement” only when Hitler’s military exported his terror across national boundaries. Until then, it was just business as usual, protected from critical scrutiny by custom and law.
Later, when Hitler’s anti-Jewish policy turned decidedly violent, concern for his civilian victims remained modest. Long after the Jewish Socialist Party of Poland released the first fully authenticated report of Nazi massacres in May 1942—at a time when fewer than 15 percent of the ultimate victims had perished—the Allied Powers consistently refused to take special steps to halt the Holocaust (Feingold 1970; Wyman 1984; Wyman and Medoff 2002; Wyman and Rosenzweig 1996). Not until 1944 did mounting domestic pressure induce the previously indifferent Roosevelt administration to take baby steps to encourage third-party rescue attempts—and even then, with the death camps working overtime, the Allies continued to resist calls to bomb the rail lines leading to the camps (Fein 1979:193). By the time Auschwitz was finally liberated by Soviet troops in 1945, roughly 70 percent of European Jewry had been exterminated—some 5.1 to 6.1 million people in all (Fein 2001:49). Also slaughtered, and by the same methods, were between 250,000 and a million Gypsies.
“What renders this situation so horrifying,” Hayim Fineman of the Labor Zionist Bloc remarked in 1946, “is the fact that this tragedy was not unavoidable. Many of those who are dead might have been alive were it not for the refusal and delays by our own State Department … and other agencies” (cited by Hilberg 1967:671). The truth of this comment is plain.
Equally plain is the truth of a remark by the British delegate to the UN that the League of Nations’ failure to ban genocide before the war gave the Nazis impunity for peacetime atrocities (Power 2002:52).
The Genocide Convention, in other words, was an afterthought. Crimes of a barbarity hitherto unknown had been committed on a scale hitherto unimagined, under the protective cover, in part, of a venerable but now suspect ideal of inviolable state sovereignty. To respond, the world needed new ideas. Among these, the most influential proved to be Lemkin’s concept of genocide, which he introduced in a study of Axis policy (1944) and which was first applied by the Nuremberg tribunal in 1946. “Sovereignty,” Lemkin had long believed, “cannot be conceived of as the right to kill millions of innocent people” (cited in Chalk 1994:47). Late in 1946, the UN officially condemned genocide for the first time. Two years later the UN passed the Genocide Convention, which went into effect on January 12, 1951, after 20 nations had ratified it. A half century later, the grand total of signatory nations had risen to 130 (Power 2002:54, 59, 64; Ratner and Abrams 2001:27-8).
Since 1951, there have been many opportunities to save lives under the aegis of the Genocide Convention. If the UN had acted with humanitarian resolve, many crimes of states against peoples could have been halted or contained—in Cambodia, Bangladesh, Burundi, Indonesia, East Timor, South Africa, and elsewhere. But the Genocide Convention quickly fell victim to the realpolitik of the Forty Years Cold War. The United States refrained from ratifying the Convention until the eve of the Soviet collapse, and the UN managed to evade the issue even in obvious cases of planned mass slaughter (as in Cambodia). Imperial interest and state sovereignty took precedence over human rights in case after case. When, in 1994, the UN did finally recognize the state-led killing of Tutsis and dissidents in Rwanda as genocide, this came, once again, after the fact. Well over a half-million people had already perished, and the genocide had already ended, halted not by the UN (which had removed peacekeepers from Rwanda when the massacres began), but by a largely Tutsi army from neighboring Uganda (Des Forges 1999; Melvern 2000). On balance, the Genocide Convention seemed to be a monument to good intentions gone awry.
Since 1994, however, it has become apparent that imperial interest may be served more by the selective abridgement of state sovereignty than by its unconditional defense. Allegations of actual and potential “mass destruction” have become the rallying cry for an aggressive new interventionism, manifest, above all, in the so-called Bush Doctrine of preemptive intervention to block mass destruction. This gives the Genocide Convention a new profile. Given “a growing consensus among American policymakers that military intervention should be launched to stop genocide,” the lesson that many have drawn from the Rwandan debacle is highly activist: “To be more successful, a lower threshold for action would have been required, perhaps authorizing intervention as soon as the risk of genocide was deemed sufficiently high” (Kuperman 2002:100; my italics). How and by whom this risk is defined, of course, is the decisive question.
The world, it seems, is impaled on the horns of a dilemma. Minority rights violations of the gravest kind, including vicious state-sponsored massacres, have led many to doubt the tenability of old norms of state sovereignty. And few would challenge the view that peacekeeping forces should restrain genocides whenever possible—by bombing the rails to Auschwitz or disarming Rwandan génocidaires. But in an unequal world, the formula of “humanitarian intervention” is susceptible to self-interested manipulation. Powerful states can use the mere appearance of genocidal intent on the part of weaker states—allegedly revealed by their possession of mass-destructive weapons—to infringe their sovereignty, with or without international support.
Genocide, in other words, remains as indelibly political as ever. As a principle of international law and legitimacy, it is a thread in the tapestry of the state system. Hence, to grasp the evolving geopolitics of genocide, we need to understand its role in the evolving nexus of states.
The Antinomies of Sovereignty
The roots of the current tension between state and superstate authorities lie in the remote past, with the simultaneous crystallization, in the early medieval era, of Italian city-states and the “Holy Roman” structures of church and empire. Despite the conflicts between them—which grew acute in the later eleventh century—both the popes and emperors sought, and received, formal recognition from their subject realms for several centuries after Charlemagne was anointed the first Imperator Romanus by Pope Leo III in 800 C.E. But as the Italian city-states outgrew their dependence on spiritual and temporal overlords, ritual recognition yielded to a legal doctrine of nonrecognition, which, in turn, laid the ideological groundwork for the ultimate assertion of state sovereignty. The driving forces that led to the affirmation of state sovereignty were, of course, economic and political, not merely ideological. But “sovereignty” is irreducibly an ideological construct and hence requires explanation at this level as well as structurally.
In the early days of the Carolingian empire, the liturgy of king-making included, besides anointment and coronation, the acclamation of the emperor by the nobility (Canning 1996:55-63). This liturgy was thought to confer power upon the ruler constitutively, not simply to acknowledge his preexisting eminence. Over time, as acclamation evolved into the semilegal ritual of recognitio, the residually democratic aspect of this ritual (the lingering awareness that the emperor’s power derived from the voluntary submission of his subjects) became “grating and disturbing” to the increasingly theocratic rulers. In 1270 C.E., the Frankish kings, the original locus of the empire, excised the recognition ceremony in a vain effort to rid themselves of the “populist” implication that what is recognized today can be renounced tomorrow (Ullmann 1966:147, 202-203). But the risk of recognition and its withdrawal cannot be overcome so easily. When, in the thirteenth and fourteenth centuries, the German-born emperors began to lose their grip on the dynamic commercial cities of northern Italy (Venice, Genoa, Florence, Pisa, and Milan), French and Italian jurists formulated a new doctrine of the “nonrecognition” of empire (Ercole 1915, 1931; Sereni 1943). Bartolus, the most influential jurist of the age, gave this doctrine its canonical form: “City-states recognize no superiors.” At first, this was simply a statement of fact. In the late medieval period, as states gathered strength and structure, the overarching power of the empire clearly waned. But when the empire reasserted its claims—especially in the early 1500s when the Habsburgs united Spain to central Europe under Charles V—the nonrecognition doctrine became a principle of open defiance. Just as the papacy was confronted by the de facto nonrecognition of Protestant reformers in the same century, so the empire confronted de jure declarations of independence from rising national states. The sixteenth-century jurists Suarez, Victoria, and Gentili “denied… the claim of the emperor to exercise temporal jurisdiction over princes” (Gross 1948:32). Gentili went further, applying the same principle to the Pope, whose authority, he said, remained valid only for the faithful “who still recognized his rule” (Sereni 1943:64).
In the world of practical politics, this lesson was learned only after a long series of chaotic and bloody conflicts, culminating in the Thirty Years War, which, from 1618 to 1648, drew all of Europe into a maelstrom of antagonisms, old and new. As the ascending forces of capitalism and colonialism and the nation-state called every previous structure and identity into question, violence tore the social fabric at every seam, from class to nation to religion. The Habsburg version of the empire, which had stretched from Spain and Holland to Austria and beyond, was decisively defeated, and its core German territories were bled white. Millions died, and millions more were displaced. It became plain that the old powers were incapable of assuring either peace or security. The leaders of the victorious anti-imperial camp, most notably France and Sweden, proclaimed a new age of national independence, freed from Habsburg overlordship. On this ground, in 1648, the Peace of Westphalia established the legal foundation for a new European order. The principle of this order was the simultaneous independence and interdependence of sovereign states.
The new states no longer recognized the authority of the universal institutions of church and empire. The emperor had been “forced to abandon all pretenses on the field of battle” (Gross 1948:28), and the pope, Innocent X, was left to fulminate vainly against the hubris of states. Where, previously, the pope and the emperor had enjoyed a monopoly of sovereign legitimacy, now states claimed sovereignty for themselves. Instead of recognizing their ostensible superiors, they recognized other states, their avowed equals. And since they no longer deferred to overarching institutions, the only “higher” power they could accept was the emerging international concert of states.
The new state system had begun to assume distinctively modern form in the early sixteenth century. The older roots of this system can be traced far earlier—according to Tilly (1990), to the immediate wake of the Carolingian era in 990 C.E. And in the succeeding half millennium, many steps were taken toward the modern system, particularly in the diplomatic relations among the Italian city-states: the recognition of the rights of foreign merchants and merchandise, the elaboration of a rudimentary system of treaties and embassies, and so on. But the scholarly consensus is that this state system first crystallized in the Buda peace treaty of 1503. Since 1494, a widening circle of wars had broken out, prompted, initially, by a French invasion of Italy (Grewe 2000:13). Of these wars, the most decisive pitted Venice against the Ottoman Empire, with the auxiliary involvement of most of the powers of the day: Portugal, Spain, the papacy, England, Bohemia-Hungary, Poland-Lithuania, and others. The Buda settlement that closed this chapter in European history is widely regarded as the first major multilateral peace settlement of the modern type (Tilly 1990:162). Not long after, jurists began to deduce the latent reality and sovereignty of a body of international law. Thus, Victoria concluded that states cannot “refuse to be bound by international law, the latter having been established by the authority of the whole world” (1532, cited by Gross 1948:32). Gentili held, similarly, that all the world’s peoples, “united in the societas humana,” are rightly “governed by the jus gentium, or international law” (1598, cited by Sereni 1943:64; cf. p. 103f).
The laws of the state system were thus accorded a kind of sovereignty all their own. These laws, of course, did not wield the directly interventionist power of the pope or the emperor, and in this sense the “sovereignty” they embodied was more ideal than real. The actual locus of power had shifted from the commanding heights of the universal institutions to the bold new national states, especially Spain, France, and England. But the vertical tension between states and the system they formed remained plain. However much they might want to deny the disembodied social reality of the international community, the new states were hardheaded enough to grasp that they could not violate its norms with impunity. A dialectic of reciprocal claims to sovereignty bound states not only to each other but also to the larger system of states.
One of the most striking aspects of the Westphalian peace settlement was the fact that the assembled delegates—145 in all, representing every corner of Europe—adopted a virtual constitution, giving all signers the right to intervene to enforce its provisions. One of the subsidiary Westphalian treaties warns aggressors that “all and every one of those concern’d in this Transaction shall be oblig’d to join the injur’d party, and assist him with Counsel and Force to repel the Injury” (Treaty of Münster, cited by Gross 1948:25); and rulers were warned that any effort to redraw the religious map would be grounds for their overthrow (Tilly 1990:160).
Just about every conflict in this period, whatever its ultimate economic or political roots, ultimately acquired such strong religious overtones that interconfessional toleration soon became a byword for peace. As early as 1555, a Catholic-Lutheran peace treaty signed at the Imperial Diet of Augsburg assigned “each religion its own region” and asked the faithful to resettle among their brethren, to reduce the risk of violence. Just 10 years earlier, sectarian violence in Provence had grown so fierce that a pamphleteer, seeking a word adequate to convey the idea of mass murder, took massacre from the butcher’s vocabulary. Calvin’s successor, Beza, clarified the logic of the new term when he deplored “the savage massacre that the judges at Aix perpetrated… not upon one or two individuals but upon the whole population, without distinction of age or sex, burning down their villages as well” (cited by Greengrass 1999:69).
The implication of the charge of massacre was that heresy hunters were moved by genocidal intent. In the sectarian traumas of the ensuing decades, which peaked in 1572 in the St. Bartholomew’s Day massacre of Huguenots in Paris, the word massacre became a virtual synonym for genocide, and loomed particularly large in the Protestant vocabulary.
In 1648, after the horror of the Thirty Years War, both Catholics and Protestants resolved to break the cycle of violence. It was in this spirit that the Westphalian envoys agreed to work together to repress future religious atrocities. But the will to intervention soon weakened. The appeal of raison d’état and national sovereignty proved too strong. By the end of the seventeenth century, European states had effectively attained the monopoly of legitimate violence in their own territories that Max Weber defined as the core principle of state identity. How states wielded this monopoly—in particular, whether they deployed violence against their own citizens—was their own affair. Hence, in 1758, reviewing a century of experience post-Westphalia, the jurist Vattel was able to conclude that in the concert of European states, “The only universal law is that states have an equal right not to be hindered in their internal affairs” (Philpott 1996:40; cf. Glete 2002). Nonintervention, Vattel argued, is a logical corollary of sovereignty (Onuf 1995:43).
There is a certain irony in the fact that nonintervention was enshrined as a “law” in this way at a moment when the slave trade and colonial empire building were disrupting cultures at an epic pace. It was in the 1750s, for example, that England wrested control over India from its rivals. In the same decade, Portuguese slave traders became so worried over the costs incurred when slaves died en route to the Americas—“the slave is a thing that may die” at any time, one warned—that they forced their African and Brazilian trading partners to assume financial responsibility for all who died on the way (Miller 1988:658ff.). In Europe, however, “nonintervention” remained the order of the day. Wars—between Spain, France, England, and the Netherlands in particular—periodically adjusted the balance of power, but left the great states undisturbed internally.
Everything changed in 1789. The Jacobin and Napoleonic leaders of the French Revolution sought to revise every social relation, not only in France, but across Europe. So profound was the challenge, so great the trauma for every ancien régime, that in 1815, when Napoleon was finally defeated, three of his victorious enemies—reactionary Prussia, Russia, and Austria—formed a “Holy Alliance” to make the world safe for monarchy. Their goal was to crush future revolutions. An accord signed in Troppau in 1820 enunciated this clearly:
States that have undergone a change of government due to revolution, the results of which threaten other states, ipso facto cease to be members of the European Alliance…. If, owing to such alterations, immediate danger threatens other states, the parties bind themselves, … if need be by arms, to return the guilty state into the bosom of the Great Alliance. (Cited by Krasner 1995:243-44)
In effect, the Alliance sought to shield itself from revolutionary interventions by counterrevolutionary interventions. Sovereignty, in its popular form, was viewed as anything but sacred. Like Napoleon, but with an aristocratic bias, the Alliance members sought to render their national monopolies of legitimate violence international. Their formula for achieving this was both legal and military: to withdraw recognition and send in troops.
Britain opposed the interventionism of the Troppau accord, but with little success. In 1821, the Alliance authorized its leading spirit, Metternich, to send Austrian troops into Naples and Piedmont to prop up weak monarchies. Soon after, France won support for an invasion to restore Spanish absolutism. By 1850, antipopular interventions of this kind had become a leading theme in European politics. Between 1815 and 1900, the reactionary powers intervened in the domestic affairs of other states (mainly Spain, Germany, and Italy) 17 times. Sixteen of these interventions occurred before 1850, and in almost every case the motive was to repel popular sovereignty (Leurdijk 1986, cited by Krasner 1995:245-46).
As we saw earlier, there appears to be an elective affinity between interventionism and the idea of genocide. Hence, it is not surprising that in this vortex of uprisings and interventions, hints of the idea of genocide began to circulate widely. The Jacobin terror sparked a wave of criticism and analysis. Gracchus Babeuf, the most egalitarian of French revolutionaries, offered a harsh indictment of Robespierre’s “exterminating wheel,” which received its animating impulse, he said, from a “system of government” that was also a “system of extermination.” This system culminated in the vicious repression of the Vendéan peasant rebels, in outright “populicide” ([1794] 1987:96-8). The privileged orders were even more fearful and enraged, responding to the revolution both with conspiracy theories, revealing a streak of paranoia (Cohn [1967] 1996:25-46), and with dark warnings of a genocidal Prometheus unbound. The great Prussian philosopher Hegel spoke for many when, in his Philosophy of Right ([1821] 1991), he decried the kind of destructive “fanaticism” that is bent, he wrote, on “demolishing the whole existing social order, eliminating all individuals regarded as suspect by a given order, and annihilating any organization that attempts to rise up anew” (p. 38).
Meanwhile, counterrevolution also inspired intimations of genocide. In 1831, a decade after Hegel’s prophecy and Metternich’s invasion of Italy, the poet August von Platen, in his fiery Polenlieder, called the repression of the Polish revolution by reactionary Russia a case of Völkermord—that is, the outright “murder of a people” (cited by Jonassohn 1998:140; cf. Henry 2001:9f.).
Sovereignty and Humanitarianism
Escalating conflicts, in other words, combined with bureaucratic and technical advances in mass destruction, gave nineteenth-century Europe a dawning awareness of the growing potential for ethnopolitical mass murder. But this awareness faded in the century of British hegemony from 1815 to 1914. The Holy Alliance collapsed in 1825, and although Metternich and his allies remained active, it was ultimately the British-led counteralliance that placed its imprimatur on the age. This gave the dialectic of national and international sovereignty a new impetus, since Britain—which was almost unchallenged in its global dominion for an extraordinarily long period—took a strongly humanitarian and anti-interventionist stance. The brusque cynicism of the Holy Alliance gave way to a Victorian discourse of peace. Rhetoric and reality in this phase were often jarringly discordant, given the scale and brutality of nineteenth-century colonial violence in British India, China, and Jamaica. But a humanitarian ethic took root that was not altogether disingenuous.
Giovanni Arrighi traces this ethic, in part, to the exigencies of free trade capitalism, which could not abide the chaotic disruptions of trade and profit making that accompanied the Napoleonic wars and counterrevolutionary machinations. By staking its claim to global legitimacy on the alleged ethical and historical necessity of free markets, Britain put the state system on a new footing:
The Westphalia system was based on the principle that there was no authority … above the inter-state system. Free trade imperialism, in contrast, established the principle that the laws operating within and between states were subject to the higher authority of a new, metaphysical entity—a world market ruled by its own “laws”—allegedly endowed with supernatural powers greater than anything pope and emperor had ever mastered in the medieval system of rule. (Arrighi 1994:55)
Seeking both cultural and military hegemony, Britain became the voice of the market as well as ruler of the seas. In both spheres, it defended the categorical imperative of free trade and the rule of law and the market. “By presenting its world supremacy as the embodiment of this metaphysical entity, the United Kingdom succeeded in expanding its power in the inter-state system well beyond what was warranted by the extent and effectiveness of its coercive apparatus” (Arrighi 1994:55).
The humanitarian ethic has many other roots as well, including, not least, Puritanism. But it is difficult to deny that peace is good for business. Many movements, antislavery as well as antiwar, arose in part on this foundation. Other humanitarian currents arose as well, calling attention to issues of health, mental health, hospital reform, prison reform, and animal protection, among others. And the newborn labor movement extended the circle of humanitarian concerns still further, inspiring poor relief, minimum-wage laws, unemployment and health insurance, factory regulations, limits on child labor, and much else. Almost all such reforms first gained real momentum in this period.
Organized peace campaigns also arose on a large scale for the first time in the nineteenth century. There had been precursor movements, but only when pacifism became profitable did it become widely popular. The initial momentum in favor of generalized peace had come in the eighteenth century, stimulated by growing international trade and the desire to unify Europe against the Ottomans. Vattel, Rousseau, and Kant all took Saint-Pierre’s slogan of perpetual peace seriously and found an audience for this slogan among the burgeoning commercial classes of the later eighteenth century.
The organized peace movement, like so much else, was an immediate outgrowth of the Napoleonic Wars. The first formal peace groups were founded in the United States in 1815, in Britain in 1816, and in France in 1821. Such groups grew prolifically and ultimately convened major international peace conferences in London (1843), Brussels (1848), Paris (1849), and elsewhere. Now as before, peace was urged on economic as well as on moral grounds. Leading French and British free traders (including Bastiat, Say, Bright, and especially Cobden, who was closely identified with the peace movement) argued that protective tariffs provoke wars. The Brussels conference linked peace to the standardization of weights, measures, and coinage.
By the start of the twentieth century, there were 425 peace organizations worldwide. Crowning this movement were the two Hague Peace Conferences, of which the second (in 1907) produced what appeared to be definitive results. In 1910, in the aptly named Great Illusion, the influential peace advocate Norman Angell was sufficiently emboldened to predict “that war had become so destructive of all economic values that nations would never again engage in it” (as paraphrased by Northedge 1967:66).
The ever-greater destructiveness of war, in other words, was increasingly viewed as a legitimate ground for preventive action both by the state system and by international civil society, which began to crystallize in this period. Domestic issues were less likely to attract reforming attention, but the exceptions to this rule were significant. Antislavery campaigns, which were perhaps the most dynamic and deeply rooted of the interventionist movements, were fueled by the conviction that free labor is superior to slavery not only morally but also economically (Smith 2001). Here, as in the antiwar movement, the spur to reform was moral zeal coexisting, uneasily but productively, with the profit motive. A similar duality of altruism and self-interest marked the early minority rights movement as well.
By World War I, the theme of minority rights had begun to emerge from the shade into which it had been willed by diplomats eager to defend state sovereignty. The consensus until then had been that the dominion of states over subjects is immune to outside interference, an inalienable part of their domaine réservé. But since Augsburg in 1555, religion had figured as the one grand exception to this rule. And since Westphalia a century later, the rights of religious minorities had become a virtual template for human rights in the later, modern sense. This is true both in terms of the intrinsic moral content of the call for minority rights and also in the ulterior expediency that so often lurked behind this call.
What began as a principle of the truce between Catholics and Protestants soon became a bargaining chip in the relations between Christian Europe and the Ottoman Empire. Beginning in 1615, when Austria concluded a treaty with the Ottomans guaranteeing religious freedom to all those in the Muslim realm “who profess themselves the people of Christ” (cited by Macartney 1934:161), Austria entered into a competition with its French and Russian rivals to be recognized as the protector of Ottoman Christians. In 1649, the new Westphalian ideal of toleration gave France an impetus to renew a long-forgotten medieval pledge to protect Lebanese Christians, and in 1673, France won concessions for Jesuits and Capuchins as well. In 1699, Austria won the right to intervene in Ottoman territory itself—a major break from tradition, which was subsequently reaffirmed in 1718, 1739, and 1791. Russia, jealous of Austrian prerogatives, won concessions for Orthodox believers in treaties negotiated with the Ottomans between 1774 and 1829. In 1853, Russia challenged Ottoman sovereignty so fundamentally that in the ensuing Crimean War, England and France sided with Turkey, resolving that “no State shall claim a protectorate over the subjects of the Sultan”; rather, they said, “the Great Powers shall see to the guarantee of the privileges granted to the Christians without infringing the Sultan’s sovereign rights” (cited by Macartney 1934:162-3). When Austria too threatened the use of force to defend Turkish sovereignty, Russia yielded.
At this stage, Western Europe was too concerned about the danger of Russian expansionism to press its advantage against the declining Ottomans. England in particular was firmly resolved to stand with Turkey against Russia. Sovereignty, not interventionism, was the principle of the hour. But the shoe was on the other foot in May 1876 when the Ottomans brutally crushed a revolt by Bulgarian Christians in east Rumelia, killing 15,000 people. British opposition politicians led by Gladstone seized on a current of popular humanitarian outrage, mixed with anti-Islamic bias, to take power away from Disraeli’s pro-Turkish government (Saab 1991). Ultimately, the traditional foreign policy elite was able to contain the interventionist impulses catalyzed in this way by the Bulgarian agitation. But the anti-Muslim valence of this agitation strengthened Britain’s hand against Ottoman Turkey, which could now be portrayed as a kind of rogue state avant la lettre. Turkey, with continuing brutality—often directed against its Armenian minority—gave the world every reason to accept this portrait.
In 1877, just a year after the Bulgarian atrocities, the Turks massacred Armenians as well. The European Powers responded by forcing the Turks to sign a treaty pledging to respect minority rights; but this was soon forgotten by almost everyone involved. In this case, as in others, the threat of humanitarian intervention proved expedient and popular; but Britain and its allies were flexible—many said cynical—about whether they would live up to their threats. They wasted no sympathy on the Turks, but they did not rush to help the Armenians. When it served their interests, they were entirely capable of turning a blind eye to Ottoman misdeeds.
In 1894, pressed on all fronts and worried about growing Russian influence among Christian minorities, the Ottoman sultan Abdul Hamid II ordered the slaughter of more than 10,000 Armenians by Kurdish death squads, the Hamidiye, which he armed with repeating rifles (Reid 1992:37). Further violence of this kind simmered until Abdul Hamid was overthrown by Young Turk nationalists in 1908, and then flared up again sharply in 1909. This was a portent of dire things to come. In 1915 and 1916, under the cover of war, the Young Turks embarked on mass murder, killing over a million Armenians.
This, scholars agree, is the first indisputable genocide of the modern era. A persecuted ethnic-religious minority was singled out for extermination by a regime capable of acting on its intent. Turkey, pressured for centuries to honor minority rights, chose to reassert its sovereignty over minorities in the most intransigent and definitive way possible. Aware that humanitarian rhetoric often serves self-interest—and knowing that self-interest dictates inaction more often than the reverse—the Young Turks chose to ignore the claims of international law. They believed that the world would ignore their transgressions.
They were right. “Who, after all, speaks today of the annihilation of the Armenians?” This was Hitler’s question to his commanders when, on the eve of invading Poland in 1939, he explained why he believed that Germany could take Lebensraum from the Poles with impunity (cited by Fein 1979:4).
Sovereignty in Extremis
When states fear the loss of their sovereignty, they can react with extreme nationalism. When they feel humiliated, they can react with extreme violence. Turkey and Germany proved the truth of these propositions in the waning years of Britain’s century of dominance.
At least since 1815, Britain’s sheer power and the growing influence of its free trade norms had become increasingly oppressive. This was especially true for faltering powers, like the Ottoman Empire, and upstarts, like the German nation, which found themselves blocked and frustrated by Anglo-French priority in capitalist enterprise and the global scramble for empire. A defiantly and romantically protectionist, chauvinist, antibourgeois, antihumanitarian, and authoritarian reaction set in. Universal law and morality were belligerently rejected. Racial-ethnic war was declared a natural necessity. Peace, tolerance, and charity were scorned as the virtues of weaklings and hypocrites.
Ultimately, this antihumanist worldview bore fruit in actual violations of international law and morality and actual ethnonational war and murder. It bears careful scrutiny.
In the Turkish case, we can take the writings of Ziya Gökalp as archetypal. Gökalp, the main Young Turk theorist, was a key figure in the crusade to transform Turkey from the nucleus of a declining empire into a modern state and nation (Akural 1978; Smith 1995). His views changed over time and were never entirely cohesive; but they capture the spirit of Young Turk nationalism.
Gökalp advocated an ethnically Turkish nation, which he opposed to the Ottoman ethnic mélange. “Why,” he asked, “is everything Turkish so beautiful and everything Ottoman so ugly?” He imputes to the Turks a “national soul,” a “metahistorical” identity. The Turks, he writes, are Nietzsche’s supermen, “whiter and more handsome than Aryans.” It was thanks to Turkish culture that the Ottomans had never definitively lost to the West—that, indeed, “they expelled the British and the French from the Dardanelles [and] defeated the British-armed and financed Greeks and Armenians and thus, indirectly, the British also.” Nor will Turkey allow universalist ethics to tie its hands: “If there are no higher values than the good of a particular society, then society is not subject to any moral obligations regarding its relations to other societies.” The same applies to the claims of individuals: “Do not say ‘I have rights.’ There is only duty, no right.” Sovereignty is elitist, not popular: “Democracy is not the rule of the ignorant masses (avam) but of the elite.” Indeed, the people are merely “malleable raw material” to be shaped by the ruling “saviors,” “heroes,” and “geniuses” of the nation—a category, Gökalp wrote in 1915, that includes Talat Pasha and Enver Pasha, who were then presiding over a genocide.
Not welcome in the sacred Turkish circle are the non-Muslim minorities, who had embraced Western values “just like a man who buys ready-made suits.” By 1914, Gökalp had concluded “that only a State consisting of one nation can exist.” Western culture is “rotten” and will be replaced by a civilization created “by the Turkish race which has not, like other races, been demoralized by alcohol and licentious living, but has been strengthened and rejuvenated in glorious wars.” Insults will not go unpunished. “As individuals, we are not vindictive/But we do not forget national revenge.”
This is not just nationalism spiced with ethnic pride and prejudice. It is, rather, as Durkheim held with respect to similar German views, a “pathological” reaction to national frustrations in the realm of state pride and sovereignty. Skeptics might doubt, Durkheim adds, that the state is concrete enough “to have made any deep impression upon men’s minds,” but in fact many people now hold deeply belligerent convictions about states and their rivalries. In the German case, Durkheim says, this belligerence has achieved a kind of “morbid enormity.” This, in turn, has inspired a series of vain but enormously destructive attempts to defy the laws of social gravity.
“There is no state so powerful,” Durkheim writes, “that it can govern eternally against the wishes of its subjects and force them, by purely external coercion, to submit to its will. There is no state that is not submerged in the vaster milieu formed by the ensemble of other states, that does not, in other words, take part in the great human community to which it is subject and owes its respect. There is a universal conscience and world public opinion, and it is no more possible to escape their empire than to escape the empire of physical laws; for they are forces which react against those who transgress them. A state cannot maintain itself when all humanity is arrayed against it” (1916:45-6, 7). Germany, however—like Turkey—had proven unwilling to bow to this reality. “To affirm its power more impressively, we shall even find it exciting the whole world against itself, and lightheartedly braving universal anger.” International law is scorned, treaty obligations dismissed, individual and minority rights rejected, morality defied, and war exalted. The pivot of this mentality, Durkheim concludes, is the wish “to rise ‘above all human forces,’ to master them and exercise an absolute sovereignty over them. It was with this word ‘sovereignty’ that we began our analysis, and it is to this that we must now return and conclude, for it sums up the ideal we are offered” (1916:45).
Like Lemkin three decades later, Durkheim sees the will to unlimited sovereignty as the crux of an irrationalist raison d’état. Taking the works of Heinrich Treitschke as the locus classicus of this outlook, Durkheim stresses that Treitschke’s views are representative of widely held opinions, however far they stray from the reality principle. In actuality, Durkheim says, sovereignty is always relative, a function of relations with other, legally equivalent states. Interdependence is the rule, not sheer independence. States normally depend on each other, on treaties, and on domestic and foreign opinion. “Exaggerate, on the contrary, [state] independence, release it from all limitation and reserve, raise it to an absolute, and then you will grasp Treitschke’s idea of the state” (1916:7). This proud and autarkic state refuses to recognize any higher jurisdiction, or even to “permit a contrary will to express itself, for any attempt to apply pressure to it is a denial of its sovereignty. It cannot have even the air of yielding to any kind of exterior constraint, without enfeebling and diminishing itself” (1916:8). For Treitschke, war and weapons decide all truly contested questions, not law. “Without war, the state is inconceivable, and indeed, the fundamental attribute of sovereignty is the right to make war at will” (1916:11). Universal ethical claims are nugatory: For Treitschke, “the State is not an Academy of Arts. When it sacrifices its power to the ideal aspirations of humanity, it contradicts itself and goes to its ruin.” Nor can the state tolerate even the slightest insult. “The State is not a violet, which blooms in the shade; its power should be proudly displayed in the full light of day, and must not permit itself to be challenged, even symbolically. When its flag is insulted, the State’s duty is to demand satisfaction, and, if this fails, to declare war” (Treitschke, cited by Durkheim 1916:14, 9).
War, whatever its cause, is sacred and ennobling. “It is war,” as Durkheim paraphrases Treitschke, “which compels men to master their natural egoism; it is war which raises them to the majesty of the supreme sacrifice, the sacrifice of self” (1916:12). Sovereignty, monopolized in this way by the German state, is thus denied in principle to all others: “Universal hegemony is an ultimate ideal for the state” (1916:43). So, too, “Germany has never recognized the right of peoples to self-determination” (1916:40). Nor does the German state recognize humanity per se: “For the state humanity is nothing. The state is its own end, and beyond it there is nothing that commands its loyalty” (1916:23). Hence, Africans and Asians should expect no mercy from their colonial rulers: “In those lands,” Treitschke says, “he who knows not how to terrorize is lost” (cited by Durkheim 1916:25). Specifically, Treitschke praises “the example of the English, who, over 50 years ago, bound Hindu rebels to the mouths of cannon and then scattered their bodies to the winds” (cited by Durkheim 1916:25).
Analytically, Durkheim finds the Achilles’ heel of Treitschke’s worldview in his steadfast refusal to accept the reality of the world community and its train of laws, norms, beliefs, institutions, and myths. For Treitschke, physical force is everything, culture evanescent at best. Recalling the example of Steinthal and others, Durkheim says that, formerly, German scholars deserved credit for stressing the “impersonal, anonymous and obscure forces which are not the least important factors in history” (1916:27). So, too, many historians have said that “the state is a result rather than a cause” (1916:34) and that statecraft is a comparatively superficial aspect of social life—yet for Treitschke, the state is power, and power is all.
The problem with Treitschke’s perspective is not just that he exaggerates the independence of sovereign states, but, rather, that he misconstrues sovereignty altogether. States are more than merely physical forces. They exist socially and morally; they have legal rights and obligations; they occupy statuses. A nation acquires legal reality as a state when it wins recognition from other states. Recognition is thus “constitutive” in the sense that it gives the nation a legal status that it would not otherwise have; it confers legitimacy, sovereignty, rights, and obligations in the wider state system.
The Treitschkean error, in short, is to exaggerate the state’s independence not merely from other states but also from the very web of legal, moral, and social ties that give the state its power in the first place. The state’s monopoly of domestic violence, for Durkheim as for Weber, springs from its legitimacy—that is, from public willingness to recognize its claims. Treitschke, and others like him, mistakenly think that the state is so innately powerful that it compels loyalty, when in reality its power is a function of public loyalty. And he wrongly imagines that the German state can thrive even without the recognition and support of the international community. This overvaluation of the state and its power proved fatal to Germany, and its victims, in two world wars.
Genocide and Sovereignty
When Durkheim’s pamphlet first appeared, it would have been easy to doubt whether mentalité is truly as historically significant as he implies. In the aftermath of the Holocaust, however, it is hard to think otherwise. A pathological mentality of “morbid enormity” came to the fore once again, seeking unlimited sovereignty with all the intensity of wounded narcissism. “In the future,” Durkheim wrote, “historians and sociologists will have to establish the causes” (1916:46) of this simultaneously aggressive and submissive mentality. But its content, he was certain, was a fetishistic overvaluation of state sovereignty.
Raphaël Lemkin agreed. While his overall account of genocide was multidimensionally sociological—encompassing social structure, institutions, geopolitics, culture, and economics (see Smith 2000)—Lemkin was also mindful of the centrality of intentional actors, from Hitler and Himmler to the Einsatzgruppen. To explain the mentality that spurred the systematic murder of Jews, Gypsies, Poles, Serbs, Bolsheviks, the mentally ill, and the infirm, Lemkin invokes a range of psychological tendencies: Treitschkean “exaggerated pride” and self-righteousness, a tendency to “glory” in wartime barbarity, contempt for victims, cynicism, denial, and a cold resolve to follow leaders and orders (1992:189-235). Lemkin was sternly determined to stress the connection between this toxic mentality and the issue of sovereignty. State power, ratcheted to dizzying and transgressive heights, was Hitler’s end as well as his means. The sacralization of German sovereignty at the expense of other states and nations was central to Nazism. Accordingly, Lemkin’s principal aim was to place restraints on the sovereignty of genocidal states, preferably before they took action. Genocide, in this vision, would be a legal cause for intervention by the world community. Sovereignty, in other words, would be self-canceling when it turned cancerous.
This vision is plausible only if international law can be enforced, and that, in turn, requires an effectively institutionalized state system. Lemkin did indeed repose a fair amount of hope in international law and the UN. But he was hardly a legal utopian. On the contrary, Lemkin was keenly aware that international law is highly limited in its potential efficacy. Imperial rivalries, clashing cultures, bureaucratic indecision, corporate venality, and political cowardice all limit the extent to which the world community can act to halt genocide. But the only alternative to a superstate system is reliance on states—and nothing could be more utopian than to count on states to restrain themselves. Hence Lemkin’s guarded hope that the world community, despite its severe limits, would accept the call to ethical responsibility in the case of genocide. Hence, also, his dismay when the Universal Declaration of Human Rights (UDHR) was enacted by the UN just one day after the passage of the Genocide Convention.
Many people have found Lemkin’s resistance to the UDHR mystifying. But it is, in fact, entirely consistent with his realism about the limits of international law. If the global community is ever to take the fateful step of breaching state sovereignty, the precipitating cause must be overwhelmingly clear and significant. The UN cannot be asked to act in every instance of injustice; such an expectation would gravely dilute its criteria for action and reduce the chance that it would ever act, or act consistently and without prejudice. Yet that is precisely what the human rights charter demanded, Lemkin said:
The same provisions apply to mass beatings in a concentration camp and to the spanking of a child by its parents. In brief, the dividing line between the crime of Genocide, which changes the course of civilization on one hand, and uncivilized behavior … disappears. (Cited by Power 2002:75)
International law could not be expected to eradicate prejudice and discrimination in the myriad forms specified in the human rights charter. “History has tried to achieve this task through a combined travail of revolution and evolution, but never before has a … lawyer dreamed … of replacing a historical process by fiat”—that, truly, is utopian, and “Utopia belongs to fiction and poetry and not to law” (cited by Power 2002:76). Lemkin was scathing, furthermore, when it was suggested that genocide could be subsumed under the rubric of discrimination. “Genocide,” he wrote, “implies destruction, death, annihilation, while discrimination is a regrettable denial of certain opportunities of life. To be unequal is not the same as to be dead” (cited by Power 2002:75).
Drawing a clear line between genocide and other forms of violence and oppression is crucial to Lemkin’s intent and, indeed, to the entire enterprise of legislating against action intended to physically destroy an entire social group. It is also pertinent with respect to the analytic and definitional problems that have plagued the comparative study of genocide.
The general assumption in the study of genocide seems to be that, since genocide is a word everyone knows, it must have a coherent meaning. “We have defined genocide,” Chalk and Jonassohn (1990) say in a standard work, “because we assume that it is a definable form of human behavior” (pp. 26-7). In reality, however, the word is jural in origin and import, and it cannot be readily applied outside the realm of law. And even in law, major problems arise.
The most often noted problem with the term “genocide” is precisely the one that Lemkin feared: semantic stretch. “Virtually everything but genocide, as Raphaël Lemkin first defined it, is called genocide!” (Fein 1994:95). This is obviously the case in ordinary partisan politics, where charges of genocide have become so routine that they no longer carry much sting. Abortion is called “the genocide of the unborn,” the drug trade is called “genocidal” for communities blighted by drug use, the Marxian goal of a classless society is said to be “genocidal” toward the bourgeoisie, and so on. But semantic stretch also informs much of the academic discourse on genocide. Israel Charny, one of the principal figures in the relatively small world of comparative genocide studies, widens the “definitional matrix” to include, in addition to “intentional genocide,” mass death resulting from ecological negligence or abuse, “the overuse of force or cruel and inhuman treatment,” mass murder “incidental” to the prosecution of war, and “cultural genocide” that destroys culture or language without direct loss of life (1994:76-7). More recently, the anthropologist Nancy Scheper-Hughes has suggested the concept of “a genocide continuum” to cover the spectrum of “small wars and invisible genocides” that she says occur in everyday settings (schools, hospitals, clinics, courts, etc.) and that reveal our fatal capacity “to reduce others to nonpersons, to monsters, or to things” by acts of “social exclusion, dehumanization,” and so on (2002:369). Leo Kuper, often called the “doyen of genocide studies” (Charny 1994:64; cf. Chalk and Jonassohn 1990:16), also favored a loose definition. Given that the Genocide Convention defines genocide ambiguously as action intended to destroy a group either “in whole or in part,” without clarifying the phrase “in part,” Kuper extended the category to include “massacres” of relatively small size and uncertain intentionality, thereby pushing the notion of genocide into gray areas that the Convention leaves unclear. And though Kuper formally advised against “new definitions of genocide, when there is an internationally recognized definition” (1981:39) affirmed by the Genocide Convention, he nevertheless amended the official definition, in practice, in one basic and quite arbitrary respect, while ignoring other proposed revisions.
The work informed by these wide-angled definitions is well-intentioned and often valuable. But defining genocide so broadly deprives the term of its power and specificity. Scholars, reluctant to omit anything from their taxonomies, have increasingly tended to say that genocide is simply a human universal and constant, found in every corner of the world and in all periods (Jonassohn 1990:415, 1998:2, 10; Kuper 1981:11). The method this typically inspires is to trawl through history for cases of mass violence, which are then classified under the headings of one typology or another. Such typologies and taxonomies can be modestly fruitful in analytic terms, but politically and legally they must be handled with great care. Widening the definitional circle, however morally understandable, threatens to convert genocide into a legitimating rhetoric for great power interventionism.
Consider what would happen, for example, if the international community were to define genocide as a “continuum” running from social exclusion to ethnic cleansing. That would justify interventions almost anywhere, on almost any grounds. Several critics of Michael Walzer’s famous Just and Unjust Wars (1977) have proposed essentially this, saying that even “under ordinary oppression peoples’ socially basic human rights are violated… systematically enough to… justify intervention” (Luban 1980:396; cf. Beitz 1980; Doppelt 1980).
This is extremely dangerous. Indeed, precisely because arguments of this kind are motivated by the best intentions, they offer moral cover for amoral realpolitik. That is the predicament faced by human rights nongovernmental organizations (NGOs). They want to see wrongs righted, and they feel pained when the great powers fail to help. But when great powers are asked to intervene, one danger is precisely that they might intervene. As Weiss and Chopra observe, the main problem with the doctrine of humanitarian intervention is its “vulnerability to abuse”—the fact that “powerful states with ulterior motives would be able to intervene in weaker states on the pretext of protecting human rights” (1995:90).
Currently, given the asymmetry of power between the United States and the rest of the world—the fact, as Michael Mann remarked a decade ago, that “American power is [now] qualitatively greater than anything known in this area of civilization since the Romans” (1990:7)—the danger of this kind of abuse is far greater than ever before. Interventions by the United States and by the UN have both soared in number since 1989 (Gurr and Harff 1994; Knutsen 1997), and the United States is clearly en route to a kind of hegemonic status, with unilateralist tendencies restrained only in part by countervailing global forces. The problem before us, then, is to find ways to prevent genocide and encourage human rights without yielding the mantle of humanitarianism or the legitimate monopoly of global violence to the new hegemon and its legions.
This, of course, is no easy task. Since 1988, as Levene (2001) has incisively shown, the great powers have more often permitted or even encouraged genocide than prevented it. The international community has been irresolute at best, and states continue to value sovereignty over solidarity. What, then, is still possible?
Few would now dispute that genocide is a sufficient ground for genuinely humanitarian intervention, and it appears, at present, that the only body capable of interceding without primarily venal motives is, despite everything, the UN. Grave doubts about the UN’s will and power are, of course, warranted. But there appear to be simply no alternatives. So NGOs that hope to prevent future genocides will have to find ways to influence the UN. That, in turn, is likely to require grassroots efforts to sway the UN’s member states.
The UN should also be asked to continue to define genocide precisely enough to permit delimited legal action. Otherwise, genocide will remain more a trope than an article of law. Considerations of this kind came into play as early as 1948, when the UN chose to omit the idea of “cultural genocide” from the Genocide Convention. It was clear even then that this category was too vague and elastic to be “susceptible to adequate definition, thereby potentially giving rise to abusive and illegitimate claims of genocide” (Ratner and Abrams 2001:31). More recently, the genocide tribunals for Rwanda and the former Yugoslavia have echoed this point, emphasizing, rightly, that they are “guided by the Convention’s primary focus on preventing the physical destruction of groups” (Ratner and Abrams 2001:32-3; cf. Magnarella 2000). Narrow, focused definitions of this kind will remain necessary in the future as well.
What, then, of less-than-genocidal human rights violations? Here, too, I would argue that grassroots action is key. Consider labor rights, for example. Plainly, no government or international body will intervene militarily to assure workers in China, Cameroon, or Burma the right to strike. But cross-border solidarity efforts to provide aid and resources—from the global labor movement, supportive NGOs, and others of goodwill—are an elementary necessity. And if, over time, efforts of this kind succeed in eliciting support from national or international bodies as well, all the better. But it would be highly unrealistic to imagine that action by states or the world community would prove decisive in cases of this type. Only in extreme cases of genocide-like gravity, and then only rarely, can we credibly hope to exert influence at the level of the state system per se.