Robert Bates Graber. 21st Century Anthropology: A Reference Handbook. Editor: H James Birx. Volume 2. Thousand Oaks, CA: Sage Reference, 2010.
Social evolution refers to social or cultural change over relatively long periods of time. By social is meant having to do with two or more organisms of the same species engaged—directly or indirectly—in patterned interaction; by cultural is meant having to do with the way of life of a social group, insofar as that way of life is a social rather than a biological acquisition.
If the price of gasoline goes up, that is a social change (specifically, an economic one). If the price goes high enough and stays high, people’s travel habits may actually change; that would be a cultural change. If the means of transportation themselves undergo significant change, that would be a cultural evolutionary change. These distinctions, though not exact, are clear enough to prove useful.
Note that “social,” as generally used, is a more encompassing term than is “cultural.” Since its culture clearly is a characteristic of a social group, the cultural realm can be considered a subset of the social realm. There seem to have been few if any attempts to systematically distinguish between social and cultural evolution; many scholars, finding it more useful to integrate rather than differentiate the two, have used the hybrid term “sociocultural evolution.”
Ancient treatments of social evolution still worth reading were produced by the Roman writer Lucretius and more than a millennium later by the Saracen Ibn Khaldun. Famous French writers of the 18th century, including Montesquieu, Turgot, and Condorcet, offered optimistic analyses tending to celebrate what was seen as the inexorable march of reason; notably more scientific was Scottish writer John Millar in his 1771 book, Observations Concerning the Distinction of Ranks in Society (Harris, 1968, pp. 48-52). More closely associated with social evolutionism today, however, are 19th-century writers. Most often identified explicitly as social evolutionists are the British writers Herbert Spencer and Edward B. Tylor and the American Lewis Henry Morgan; less often classified as social evolutionists but making major contributions nonetheless are England’s Robert Malthus and two German authors, Karl Marx and Theodor Waitz.
Spencer, Tylor, and Morgan
When people hear the phrase “survival of the fittest,” they are likely to think of the great biologist Charles Darwin; the phrase in fact appears to have been coined by a contemporary of Darwin’s, the philosopher Herbert Spencer.
Spencer (1897, 1851/1969) thought of evolution as involving much more than biology. For him, evolution pervaded the inorganic as well as the organic realm. His voluminous work also treated “superorganic evolution” (which we today would term social evolution) and evolution of “superorganic products” (what we call cultural evolution). Much as cells combine to make up organisms, organisms themselves combine in some species to make up “superorganisms,” or societies. The comparison of societies to organisms has roots in ancient Greece, but Spencer elaborated this idea in greater detail than anybody else before or since. He emphasized three developmental tendencies shared by societies and organisms: (1) growth in size, (2) increasing complexity of structure, and (3) differentiation of function. Generally speaking, larger life-forms, unlike smaller ones, have several types of tissues and organs, each suited to perform its special function; similarly, larger societies, unlike smaller ones, have specialized arrangements for performing different functions. Examples include factories, stores, schools, and churches; less concrete arrangements, such as economic and political systems; the occupational division of labor; and the division of society into rich and poor, powerful and powerless.
Yet this analogy, like any, has its limits—some of which Spencer recognized and discussed, others of which he overlooked or ignored. He admitted, for instance, that the parts of an organism are in direct contact, while the members of a society are not; but he argued that communication considerably reduced this difference. He seems not to have confronted the related—and scientifically awkward—fact that societies, having no membrane or skin, are less clearly bounded entities than organisms. Spencer’s work had a political as well as a scientific dimension. Unfortunately, he regarded the “survival of the fittest” as a sort of guide for governmental policy, which often led him to oppose programs to assist the poor. His skepticism about the ability of government to do more good than harm—concerning not only poverty but also quite generally—has made him an important inspiration of what today is called libertarianism. Also unfortunately, these rather extreme political views helped cause Spencer’s more scientific writings, such as Principles of Sociology (1897), to fall into neglect for several decades. Since the revival of cultural evolutionism in the mid-20th century, however, Spencer has been rediscovered; much of his most valuable work appears in two excellent anthologies (Carneiro, 1967; Peel, 1972).
Spencer’s greatest contribution perhaps was to encourage people to try thinking of society and culture, no less than stones and pinecones, as belonging to the natural world. “Civilisation,” he declared, “is a part of nature; all of a piece with the development of the embryo or the unfolding of a flower” (Spencer, 1851/1969, p. 65).
Edward B. Tylor
If any one person deserves recognition as the founder of anthropology, it is Edward B. Tylor. To students wondering why they should be expected to learn yet another science, Tylor (1881/1909) suggested that anthropology resembled a backpack. A backpack adds yet more physical weight to be carried, but it more than pays for itself by making everything else so much easier to carry; likewise, he suggested, anthropology more than pays for itself by tying together diverse subjects, thereby making the educational load not harder but easier to bear (p. v).
Anthropology, in the United States, has four subfields: biological, archaeological, cultural, and linguistic. Together they comprise the “study of humanity.” Largest of the four subfields, in number of anthropologists specializing in it, is cultural anthropology. People have followed different ways of life in different times and places; making sense of this diversity is the central task of cultural anthropology. Its key concept is culture itself, for which Tylor (1871/1924) furnished the most famous definition. Culture, he wrote, is “that complex whole which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by man as a member of society” (p. 1).
Cultural evolutionism’s great early achievement was the defeat of degenerationism. According to this theory, human culture had originated at a fairly “high” level after which some cultures “degenerated” to “lower” levels while others “rose” to yet “higher” ones. Foremost among scholars putting degenerationism to rest was Edward B. Tylor himself. Using his extensive knowledge of the anthropological evidence that already had accumulated by around 1865, Tylor (1865/1964) showed that high cultures quite certainly had originated in a state resembling that of the low cultures still observable in some parts of the world and that there was no evidence that any of the latter had come into being by degeneration from a higher condition of culture.
Evidence overwhelmingly favors the conclusion that up until only 10,000 or 15,000 years ago all humans had lived, from our very beginnings as a distinct form of life, in small, nomadic bands that survived by hunting and gathering the wild food sources around them. In view of the ingenuity and durability of foraging culture, anthropologists no longer call it low, our own culture high; but looking past the ethnocentric terminology, we can see that the conclusion drawn by Tylor and others has been reinforced by all subsequent findings. Social evolution surely began everywhere with very small societies; and culture has been transformed in those times and places where, for reasons still being vigorously investigated, societies grew into villages, chiefdoms, nations, and empires.
Though degenerationism had been motivated by religion (especially the story of the Tower of Babel in the Book of Genesis), it did have testable implications; therefore, it could be—and was—rejected through the application of reason to empirical evidence. The defeat of degenerationism was a great step in science.
Lewis Henry Morgan
The surprising facts that a few basic patterns occur over and over and that each pattern has a logical structure of its own were among the discoveries of Lewis Henry Morgan, a prosperous attorney who lived in 19th-century Rochester, New York. As a young man Morgan had taken great interest in the Iroquois Indians of New York, and in 1851 he published a book about them that is highly respected even today. His anthropological interests reached full flower, however, in Ancient Society (1877/1985). To organize the growing body of knowledge about human cultures of the past and present, Morgan carefully defined three main cultural stages: savagery, barbarism, and civilization. Savagery and barbarism he divided into substages of lower, middle, and upper for a total of seven stages. These terms sound ethnocentric, or culturally biased, to us today; but in Morgan’s time and certainly in his usage, they were technical terms carrying but little of the pejorative connotation they later acquired. It is not sufficiently appreciated that Morgan, especially for his time and place, had a very tolerant, sympathetic attitude toward cultural differences. The preface to his book on the Iroquois (Morgan, 1851/1901), for example, declares that “the public estimation of the Indian, resting, as it does, upon an imperfect knowledge of his character, and tinctured, as it ever has been, with the coloring of prejudice, is universally unjust” (pp. ix-x).
Morgan’s seven stages (1877/1985), partly for reasons of convenience and clarity, were defined mainly with reference to specific elements of technology. Barbarism, for example, was distinguished from savagery by the presence of pottery. This had some strange consequences. Peoples of Polynesia, for example, despite living in large chiefdoms consisting of several permanent villages, were classified as “savages” along with small, nomadic bands of foragers simply because they happened to lack pottery. Yet the criticism later heaped on Morgan for this and other failings was not quite fair; he himself had looked forward to a time when fuller evidence would allow more satisfactory classifications than his own. Recognizing the limitations of both his own scheme and the Stone Age/Bronze Age/Iron Age scheme introduced by Danish archaeologists, he suggested that general subsistence patterns rather than specific artifacts or materials ultimately would provide “the most satisfactory bases for these divisions. But investigation has not been carried far enough in this direction to yield the necessary information” (p. 9).
This was an accurate prediction. Anthropologists now agree that it is not ground-stone tools or pottery that most basically signal the Neolithic but the transition from foraging for food to growing it. The archaeological appearance of pottery around the world correlates fairly well with this transition probably because pottery is too heavy and fragile to have been of much use before people began settling into villages and growing food some 10,000 years ago; so even in emphasizing pottery, which got him into such trouble over the Polynesians, Morgan had not been too wide of the mark.
Malthus, Marx, and Waitz
Thomas Robert Malthus’s father held liberal views and was optimistic about the prospects for the betterment of human society. Malthus himself, even in his youth, suspected that social problems were deeply rooted and that there must be real limits to the improvement of society. One theological argument, of course, is that social problems are inevitable because human nature is sinful. One might think this should have settled the matter for Robert, who after all had been ordained at around age 22 in the Church of England. But he also had studied mathematics and shown scientific inclinations; neither he nor his father seems to have regarded the issue as essentially theological.
Searching constantly for stronger arguments, Robert Malthus eventually hit on an idea so persuasive that in 1798 he published an essay laying it out in detail: An Essay on the Principle of Population. Humans, he noted, along with other forms of life, tended to reproduce in numbers greater than could be easily supported by available resources. Population inevitably would be “checked,” whether by higher mortality (due to famine, disease, or war) or lower fertility (due to abstinence, contraception, or nonreproductive forms of sexual behavior, such as masturbation and homosexuality). These checks on population all entailed either misery or—in Malthus’s judgment—wickedness (“vice”); and even at that, they left the population so high, relative to resources, that social problems, such as poverty and crime, would be chronic. True, periods of relief occasionally would occur after plagues had sharply reduced the population or technological breakthroughs had abruptly increased available resources (especially food); but the power of population was so great that soon there once again would be too many people. Population again would stabilize via the miserable and vicious checks, but it would do so at a level above that of “easy support”; relief from social problems, then, would be rare and temporary.
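Malthus’s argument rested on a quantitative contrast stated early in the Essay: population, when unchecked, increases geometrically, while subsistence can at best be made to increase arithmetically. A minimal sketch (the starting values and rates below are purely illustrative, not Malthus’s own figures) shows how quickly the gap opens:

```python
# Illustrative sketch of Malthus's ratios: unchecked population doubles each
# generation (geometric), while food supply gains only a fixed increment
# (arithmetic). All starting values and rates are hypothetical.

def malthusian_gap(generations, pop0=1.0, food0=1.0, growth=2.0, increment=1.0):
    """Return a list of (population, food) pairs, one per generation."""
    pop, food = pop0, food0
    trajectory = []
    for _ in range(generations):
        trajectory.append((pop, food))
        pop *= growth        # geometric: 1, 2, 4, 8, ...
        food += increment    # arithmetic: 1, 2, 3, 4, ...
    return trajectory

for gen, (pop, food) in enumerate(malthusian_gap(6)):
    print(f"generation {gen}: population {pop:g}, food {food:g}")
```

By the sixth generation in this toy run, population has outstripped the food supply more than fivefold; the widening gap is what Malthus argued the “checks” must close.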
A key point in Malthus’s argument (1798/1993)—which has been under nearly nonstop debate for two centuries now—has been widely overlooked: his claim that population tended to stabilize at a level greater than what resources could easily support meant that there would be not only social problems but also nearly constant pressure for culture change. As he wrote toward the end of An Essay on the Principle of Population, the press of population “is constantly acting upon man as a powerful stimulus, urging him to the further cultivation of the earth, and to enable it, consequently, to support a more extended population” (p. 147). The idea that population pressure could help explain cultural evolution would lie in neglect until the 1960s; when recovered, it was labeled rather misleadingly “anti-Malthusianism.”
“So what do you want to be when you grow up?” Pressure to find an occupation begins early in our own enculturation. In the small societies in which all people lived until around 10,000 years ago, such a question made no sense; all people grew up to engage in pretty much the same range of activities (except for differing sex roles). The young Karl Marx (Tucker, 1972) hoped—and believed—that society was evolving toward a way of life that would be far more fulfilling than either the modern or ancient situation—a society in which the individual, far from being forced to become one certain “thing” in the division of labor, would be free “to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, shepherd or critic” (p. 24).
Marx and his collaborator Friedrich Engels (Tucker, 1972) believed that conflict between haves and have-nots was the prime mover of social evolution; and it was class struggle, they thought, that soon would usher in a very different kind of society. The productive powers of industrialization harnessed to a centrally planned economy would ensure that the merely animal needs of all people were efficiently met. People then would work not out of “mere animal necessity” but to fulfill their essential nature. Marx and Engels despised Malthus for his pessimism about improving society; in their view, ideas like his hindered positive social change by making excuses for the status quo. On the other hand, they found some inspiration in the work of Lewis Henry Morgan; they particularly liked his emphasis on the importance of technological changes in the human past. They too constructed a set of stages; it differed somewhat from Morgan’s, however, and has been less influential.
When moralists condemn modern society for its “materialism,” they usually are referring to love of physical things—cars, clothes, boats, or condominiums. In intellectual history, however, the word materialism has a rather different meaning. The materialist perspective is well suggested by a declaration of the ancient Greek philosopher Democritus: “Nothing exists but atoms and the void.” In this spirit, Karl Marx thought that the mental world created by a human society must be understood in terms of how the organisms composing the society were managing to survive and reproduce. As Marx wrote in 1859, “It is not the consciousness of men that determines their being, but, on the contrary, their social being that determines their consciousness” (Marx as cited in Tucker, 1972, p. 4).
Marx and Engels believed that materialism had both political and scientific implications. Though they considered these deeply intertwined, their subordination of the scientific to the political is clear. “The philosophers have only interpreted the world, in various ways; the point, however, is to change it” (Marx as cited in Tucker, 1972, p. 109). It remained for the Russian leader V. I. Lenin to reach the extreme conclusion that the scientific search for truth must not be allowed to stand in the way of political revolution. In anthropological theory, the main political and scientific developments of materialism sometimes are distinguished respectively as dialectical materialism and cultural materialism. Cultural materialism refuses to subordinate scientific analysis to political agendas (Harris, 1979, pp. 157-158). Our understanding of social evolution has been invigorated by the cultural-materialist attempt to understand culture and culture change as reflecting the actual conditions—demographic, environmental, and technological—in which people as creatures struggle to survive and reproduce.
Theodor Waitz was the precocious descendant of a long line of German Protestant preachers and teachers. His intellectual maturation took him from theology to philosophy, then to psychology, and finally to incipient anthropology. Showing superb intellectual judgment, Waitz (1859/1975) proved ahead of his time (1) in conceiving of general anthropology as a new empirical science, (2) in stressing the biological unity of humankind as one species, and (3) in arguing that what biological differences there were between human populations (“races”) were not relevant to accounting for their observable cultural differences. The usual idea, he noted, was that a people’s conditions of culture reflected their basic capacities; he reversed this by arguing that the relevant capacities actually resulted from the conditions of culture. Franz Boas, often credited with this breakthrough, acknowledged that it was Waitz who actually had been first to express clearly what would become fundamental to modern anthropological research. Robert Carneiro’s Evolutionism in Cultural Anthropology (2003) is helping secure for Waitz the credit he deserves in this regard. When Waitz died of typhoid fever at only 44, anthropology lost perhaps its greatest mind.
In the early 20th century, anthropology turned away from cultural evolutionism. Though the 19th-century evolutionists had been among the most enlightened people of their time, their work inevitably was tainted by the prevailing interpretations of reality, including assumptions of racial and cultural superiority. Even the great evolutionists Spencer, Tylor, and Morgan were disparaged as “armchair speculators”; what was needed, it was asserted, was actual fieldwork in order to learn firsthand about the history and functioning of small-scale societies before they disappeared from the face of the earth forever.
The emphasis on fieldwork produced mountains of new and better information about the cultures of the world; yet the urge to make systematic sense of all this new, chiefly descriptive material soon gave rise to a resurgence of cultural evolutionism. Early contributors—most notably, perhaps, Spencer—were resuscitated, their more promising ideas reconsidered, reformulated, and extended. This movement, sometimes termed neo-evolutionism, was led by two American anthropologists, Julian Steward and Leslie A. White. Most of Steward’s work (e.g., 1955) had the modest goal of elucidating the effects of specific environments on the cultures of the people inhabiting them (cultural ecology). White’s work (e.g., 1949) offered more ambitious generalizations about the course of human culture as a whole; he was impressed especially by the relationship between cultural evolution and how—and how much—energy was used by human societies. Steward and White and their followers engaged in vigorous debates, which, though initially fresh and illuminating, eventually seemed to be generating more heat than light. In 1960, a small but influential book of essays titled Evolution and Culture argued cogently that the approaches of Steward and White were better seen as complementary than as opposed (Sahlins & Service, 1960). Though this cleared the air, it pointed no new direction.
In 1965, a new path was opened when American anthropologist Don E. Dumond and Danish economist Ester Boserup independently proposed that population growth under certain conditions could be an important cause of certain kinds of culture change. (That they had been unaware of each other’s work was confirmed in conversation with Dumond in 1994. Their insight had been hinted at as early as 1798 by Malthus himself; but the scholarly world had long overlooked this, in part because what Malthus stressed had been the dependence of population on cultural conditions rather than the converse.) Impressive theories soon were proposed for the major transformations of cultural evolution. Though “population pressure theory” attracted the most attention in the 1970s, it continues to be a significant and promising specialty within sociocultural evolutionism.
The old theory of agriculture was that it was a difficult invention, which once achieved spread rapidly from a single origin because it made life much easier and more secure. Anthropological progress in the 20th century made this less and less tenable. In several parts of the world, archaeologists accumulated evidence that the domestication of plants and animals had been a long, gradual process of change in which species found wild in local environments came slowly to resemble the domesticated forms of today.
Cultural anthropologists, meanwhile, were learning that hunting-gathering peoples possess extensive knowledge of the plant and animal life around them. The fact that plants grow from seed, for example, was not a profound mystery but common knowledge. Furthermore, the foraging life, even in marginal environments, such as the Kalahari Desert of southern Africa, proved to be considerably less difficult than had been believed. Neither the archaeology nor the ethnology seemed to fit with the old theory. If foraging for food was usually a relatively easy lifestyle, why did people ever begin growing food? And why, when they finally did (after millions of years of foraging), did it happen so slowly and in so many places?
The pieces of the puzzle were assembled neatly by the archaeologist Mark Nathan Cohen. Influenced by earlier writers, especially Ester Boserup, Cohen (1977) proposed population pressure as the key. The beginnings of agriculture some 10,000 years ago approximately coincided, Cohen pointed out, with the end of the long process of human expansion throughout the habitable portions of the planet. As population continued to grow with nowhere new to go, global density would have begun to increase rapidly; wild plant and animal food sources gradually became ever less sufficient for human survival. Our ancestors took up farming only when and to the extent that they had to.
An especially nice feature of this theory is that it explains why, after several million years of human existence, agriculture cropped up in so many places within a mere few thousand years. Study of recent foragers demonstrates that individuals move rather freely between bands and that the bands themselves move frequently over the landscape. Both kinds of movements often are in response to resource distributions. (Among the Mbuti pygmies, for instance, newlyweds go to live with either the bride’s or groom’s band, depending usually on where food is most plentiful at the time.) These “flux” mechanisms, then, distribute population relative to resources (Turnbull, 1968). During the human expansion out of the tropics into the rest of the world, an expansion that began 1 to 2 million years ago, our ancestors had been foragers too; it therefore is a safe bet that flux mechanisms operated day in and day out over the millennia, constantly distributing and redistributing population relative to food resources. When expansion at last had to end but population kept growing, the pressure on wild resources would have increased sharply all over the world. Cohen’s (1977) theory thus tied together findings of archaeologists and cultural anthropologists to produce the best general theory we have of this great transformation in cultural evolution.
A genuine problem, however, is the fact that archaeological evidence of hunter-gatherers throughout the New World is not as ancient as the theory implies. However, if new findings continue pushing back our estimates for the peopling of the New World, Cohen’s (1977) theory will be more strongly confirmed.
Before 1965, social scientists long had thought of agricultural change as a cause of population increase rather than as an effect of it. More food would mean more people—that was the whole story, and its author was believed to have been Malthus. In 1965, however, Danish economist Ester Boserup argued that the evolution of agriculture was less a cause than an effect of increasing density of human population.
Boserup (1965) noticed that small-scale societies growing food without plows and draft animals often resisted this seemingly superior technology even when it was offered free by the government. She suspected that this was more than blind adherence to traditional ways. The technology being offered could squeeze more food from a given amount of land; but what if traditional methods yielded more food for a given amount of labor? Boserup’s detailed study of several forms of agriculture convinced her that those involving simple tools and long periods of fallow indeed were more labor efficient. The greatest single reason seemed to be that at low population densities, people could afford to farm only a small portion of their land each year. By the time they returned to a given plot to plant a garden, forest had reclaimed it. This seeming disadvantage was in fact a huge advantage: Forested land could be cleared well enough for planting merely by slashing down the young forest vegetation, burning it, and sowing seed in the ash. Stumps of larger trees were simply planted around. Neither plowing nor even hoeing was necessary. Such “low-tech” farming paid off handsomely for relatively little labor. Fire was the key, and low population density was the necessary precondition.
So why had agriculture in fact changed in some parts of the world? Probably, Boserup (1965) wrote, due to increasing density of human population. When land had to be planted before forest had had time to reclaim it, fire was less effective for clearing because fire does not destroy the roots of thick grass (not a problem once forest has taken over). Hoes and eventually plows had to be adopted due to increasing population pressure.
Before Boserup (1965), scientists had thought of land as either cultivated or uncultivated. An important feature of her analysis was a five-stage sequence based on the frequency with which land was cultivated. The least intensive stage involved cropping about once every 20 years, the most intensive more than once a year, which usually required much labor for fertilizing and irrigating. In presenting this set of progressive stages, Boserup’s work harked back to a 19th-century approach used by such scholars as Lewis Henry Morgan that had fallen into prolonged disfavor among anthropologists. But Boserup’s stages allowed land use to be measured more accurately than ever before; her work deservedly had a large and continuing effect on anthropology in general and on sociocultural evolutionism in particular.
For most of the human past, people lived in small bands, each containing no more than a few dozen individuals controlling their own affairs entirely locally. Even when people in some places began settling in villages some 10,000 years ago, the local community remained self-governing. Perhaps 7,000 or 8,000 years ago, there arose the first multicommunity societies: chiefdoms in which one person had achieved effective political control over two or more villages. In the following millennia, some of the chiefdoms coalesced into states: multicommunity societies with a central government strong enough to tax, draft, and legislate. With this came social cleavages—familiar to us today—between town and country, rich and poor, rulers and ruled. The cultures of state-level societies differ greatly from the cultures of band and village societies; they differ much less among themselves. When Cortés first encountered the Aztecs, for example, he found that their society reminded him of life back in Spain: fields, markets, churches, and—sad mark of civilization!—beggars in the streets. Social growth indeed causes culture to evolve dramatically in certain definite ways, as Herbert Spencer had insisted; to this extent, cultural evolution may be considered a function (in the mathematical sense) of social evolution.
Chiefdoms and especially states developed independently in several places around the world; but in most places, humans continued living in bands or villages. What made the difference? In 1970, Robert L. Carneiro identified three kinds of circumstances that seemed to foster political evolution. The first is environmental circumscription—fertile land more or less hemmed in by mountains, deserts, or water. Here, as agricultural intensification made land ever more scarce, defeat in war increasingly would leave the losers with nowhere to go to escape subjugation. Chiefdoms and eventually states would result. A second circumstance is resource concentration—productive resources, such as lakes or streams rich in seafood, so attractive that people try to stay near them. A third circumstance is social circumscription—being hemmed in not by geographical features but by other societies.
The term social circumscription is one Carneiro borrowed from Napoleon Chagnon (1974), who had observed that population growth among the Yanomamo Indians of the Amazon Basin led to villages splitting and spreading more deeply into the tropical forest around them. Due to such splitting, the average village size—around 100 people—seemed fairly stable through time. At any given time, though, the more centrally located villages were the largest. Perhaps, Chagnon suggested, this was because, being surrounded by other villages (usually hostile), central villages were less able than peripheral ones to resolve internal conflicts by splitting.
It is tempting to think that human societies, like complex organisms, have a natural tendency to grow larger. Chagnon’s (1974) work suggests instead that the natural tendency is for societies to stay about the same size, even when overall population is growing, due to splitting. It seems possible that the tendency to resolve social conflict by splitting is a deep and universal propensity that had to be suppressed before large societies ever could evolve in the first place. If a growing population is surrounded by rich, unoccupied territory, it will expand easily into that territory; but it will do so Yanomamo-like, and the average society size will remain approximately constant due to splitting. Increase in this average size would be expected only when the opportunity to expand was somehow inhibited. Inhibited expansion would lead to inhibited splitting; if societies could no longer split fast enough to offset population growth, larger societies—chiefdoms, states, empires—eventually would be forged, and culture would have to be transformed concomitantly. This process would tend to unfold in precisely the kinds of conditions Carneiro had identified: environmental circumscription, social circumscription, and resource concentration.
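The fission dynamic described above lends itself to a toy simulation. In the sketch below, every numeric choice (the growth rate, the fission threshold of 150, the uneven split) is a hypothetical parameter chosen only for illustration; the point is simply that total population can grow many-fold while average village size stays roughly constant:

```python
import random

# Toy model of village fission: total population grows steadily, but any
# village exceeding a threshold splits into two unequal daughter villages,
# holding average village size roughly constant. All parameters hypothetical.

def simulate(steps, growth_rate=0.03, threshold=150, seed=0):
    """Return the list of village sizes after the given number of steps."""
    rng = random.Random(seed)
    villages = [100]                     # start with a single village of 100
    for _ in range(steps):
        # every village grows by roughly growth_rate per step
        villages = [int(v * (1 + growth_rate)) for v in villages]
        next_gen = []
        for v in villages:
            if v > threshold:            # fission: split into unequal halves
                a = rng.randint(v // 3, 2 * v // 3)
                next_gen.extend([a, v - a])
            else:
                next_gen.append(v)
        villages = next_gen
    return villages

villages = simulate(100)
avg = sum(villages) / len(villages)
print(f"{len(villages)} villages, average size {avg:.0f}, total {sum(villages)}")
```

Run for 100 steps, the total population grows severalfold while the average village stays near the 100-person scale Chagnon reported; only by inhibiting the splitting step does the model produce ever-larger societies.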
It is possible to formulate mathematical definitions for inhibition of both geographical expansion and political splitting. The assumption that splitting would not be inhibited until expansion was inhibited proves fruitful and leads to an exact mathematical theory of the relationship between population density and political evolution (Graber, 1995). This theory fits, in a general way, with evidence not only on prehistory but also on the 20th century (Graber, 2006); but this general fit scarcely constitutes a rigorous test of the theory.
Coal might appear obviously superior to firewood as an energy source for an industrializing society; but historically, industries initially shifted to coal only because firewood grew scarce. In some industries—not all—the transition was easy to make, and in some, coal ultimately proved superior in certain respects; but the shift originally was prompted not by coal's superiority but by firewood's scarcity (Wilkinson, 1973).
Rural life, requiring much physical labor, might appear intrinsically harder than urban life; one therefore might assume that people gladly took advantage as industrialization afforded growing opportunities to move from country to city. Yet British historian Joan Thirsk showed that the earliest industrial centers in England took root in regions in which agricultural populations had grown so dense that families no longer could survive on their meager acreages. As centuries passed, people grew accustomed to having to hold a job to earn a living and generally accepted it as a fact of life (with occasional objectors, including the young Karl Marx). But the “opportunity” to become paid workers rather than self-sufficient farmers appears to have been, in the first place, a matter not of preference but of survival.
Poverty in the countryside, however, was only part of the story. The other part was capitalists who could employ those needing work, and make other investments of resources, without fear of being taxed or "plundered" to death by authoritarian governments as soon as big profits began rolling in—as seems to have happened, Marx and others have suggested, in the states and empires of antiquity, such as China.
Ancient civilizations also had had a plentiful supply of both poor people and profit seekers. Why had industrialization not occurred? Possibly because most governments, by controlling the huge irrigation systems on which everyone’s survival depended, had nearly absolute power; and they could not resist using this power to limit commerce whenever doing so was in their short-term interest. In western Europe, however, agriculture was based not on irrigation but on rainfall, which no government could control. Accordingly, commerce could flourish (Harris, 1977).
As Richard G. Wilkinson argued in his 1973 book, Poverty and Progress, it appears that industrialization had important roots in population pressure: pressure on firewood caused the turn to fossil fuel; pressure on land, the turn from farm to factory. Industrialization has transformed how humans live—our culture. In its dependence on population pressure, this cultural evolutionary transformation resembles earlier key transformations: agricultural origins, agricultural intensification, and political evolution.
Sociocultural evolution has not stopped; here are four important ongoing processes, the first of which, however, is not itself essentially sociocultural.
The roots of cultural evolutionism are intertwined with the biological theory of natural selection—a theory arrived at independently by A. R. Wallace and Charles Darwin and made famous by Darwin's book of 1859, On the Origin of Species. Yet biological and cultural evolution each have "rules of their own"; confusing the two is a grave error—one that marred the work of thinkers such as Herbert Spencer, and that has reappeared in recent decades.
A key difference is that once a species is intelligent enough for its ways of life to depend greatly on learning, those ways of life can change far faster than can the species's biological makeup; the steam engine, the automobile, and the computer clearly did not need to wait on biological evolution in order to transform our way of life. Artifacts, customs, and ideas can spread rapidly within a generation; biological evolution happens only over generations. Biological evolution can occur rapidly but only in simple life-forms, such as microorganisms, that have very short generation times. Indeed, the rapid evolution of microbes is what causes our antibiotics to "wear out" so quickly. By filling certain microbes' environment (our own bodies) with a drug, we wipe out all those that have no resistance to it; but if even a single "bug" contains a gene making it resistant, that is precisely the one that will survive and reproduce, giving rise to a new strain—a resistant strain, for which a new antibiotic will have to be sought. Thus, there is an ongoing war between us and the germs that infect us, a war in which they fight with the weaponry of biological evolution, and we, of cultural evolution.
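The selection process just described—drug treatment sparing a rare resistant mutant—can be illustrated with a deliberately simple two-lineage model. The growth and kill rates below are invented round numbers, not clinical figures:

```python
def treat(generations, n0=1_000_000, resistant0=1,
          growth=2.0, kill_susceptible=0.999, kill_resistant=0.0):
    """Toy selection model: each generation both lineages double,
    then the drug kills nearly all susceptible microbes while
    sparing resistant ones. Returns the resistant fraction."""
    susceptible = float(n0 - resistant0)
    resistant = float(resistant0)
    for _ in range(generations):
        susceptible *= growth * (1 - kill_susceptible)
        resistant *= growth * (1 - kill_resistant)
    return resistant / (susceptible + resistant)

resistant_fraction = treat(10)
# One resistant cell in a million: after ten generations under the drug,
# the population is almost entirely the resistant strain.
```

This is the cultural-versus-biological arms race in miniature: each new drug resets the selective environment, and the microbes' short generation time lets biological evolution answer within days.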
Yet biological evolution is a continuing process even within large, slowly reproducing species like our own. No generation has exactly the same genetic makeup as did the previous generation; chance alone is sufficient to guarantee this. Life, Darwin wrote, is somewhat like a great, ancient tree in which existing species are the green buds. Wherever the tree is growing, evolution is occurring. But are biological changes taking human evolution in any particular direction? This is a difficult question. Scientists used to speculate that humans of the distant future might be small-bodied, swollen-headed, toothless, hairless creatures with senses so weak that everyone would require extensive artificial assistance of the kind pioneered by eyeglasses and hearing aids. Yet such speculations seem outmoded now that genetic engineering is beginning to bring our genetic makeup as a species under our own conscious control—a rather forbidding responsibility.
It is tempting to believe that history could easily have developed in a way quite different from what it has. According to this view, we would not now face the unsettling prospect of engineering human genes had biology's history been different—if, for example, Darwin had died, as many did in his time, of some childhood disease. Social evolutionism, however, offers a different perspective. It is well to remember, after all, that another biologist, Alfred Russel Wallace, quite independently hit on basically the same theory at around the same time. And in fact, history presents many examples of similar occurrences, treated in the writings of William F. Ogburn, A. L. Kroeber, and Gerhard Lenski. Social evolutionists find the dependence of individuals on cultural conditions a more illuminating perspective than the conventional one according to which great individuals mysteriously "produce" history and culture as if by magic.
Destruction of the ozone layer, pollution of the environment, reduction of biodiversity—while these ominous processes seem to continue, the number of human beings keeps growing. Are we indeed, as some would have it, a kind of “cancer” on the planet, an uncontrolled malignancy that destroys the “healthy tissue” around it?
This gloomy image is quite misleading. Though human population is still growing, it is doing so at a slower rate. As industrialization spreads, children change economically from being valuable assets to being expensive liabilities; accordingly, people have fewer of them. Dozens of advanced industrial societies already are producing too few children to maintain their populations. Meanwhile, many social problems have causes unrelated to so-called overpopulation. Some of the poorest people in the United States occupy some of the least populated areas, and some of the world's poorest nations are among the least densely populated.
It might be objected that even if population growth is slowing and even if it alone is not responsible for all social problems, things might be better if there were fewer people. Some people, on hearing that the world’s population is expected to increase by about 50% in the coming decades, may be inclined to declare, “Our systems simply cannot support that many people!” Such people, however, are forgetting a key fact: Culture evolves. The population of the future will be supported not by our way of life but by theirs. Artifacts, customs, and beliefs will have changed; and for all we know, those future people will be better off, on the whole, than we are today.
Sociocultural evolution provides some reason to suspect that a stable population, which sounds so good to most people, would deprive human culture of its greatest single source of dynamism—population growth itself. The origins of agriculture, agricultural intensification, political evolution, industrialization—all appear indebted to population growth. However, population growth’s role in a few major cultural transformations of the past does not mean that it is essential for all culture change; it scarcely seems likely that people would stop seeking better cures for disease, for example, simply because population had stabilized. Furthermore, the absence of population growth does not necessarily mean the absence of population pressure. Indeed, Thomas Robert Malthus (1798/1993) believed that populations, when they do stabilize, tend to do so at a level too high to be easily supported by existing resources, creating constant pressure for culture change. If he was right, then even a population stable numerically is inherently unstable culturally.
Technology keeps changing; the evidence surrounds us. One need not have lived long, for example, to have seen the spread of personal computers, microwave ovens, and cell phones. These innovations alter how we live just as life in the early 20th century was altered by, say, automobiles and telephones. Of course, some innovations prove more far-reaching than others; the microwave oven, a welcome convenience, seems unlikely to reshape life to the extent that the automobile has.
Is it possible to identify a class of innovations that are of particular cultural evolutionary significance? Some anthropologists, following Leslie A. White (1949), believe that breakthroughs in energy use are just such a class. From this perspective, four milestones of technological evolution can be identified: use of tools, control of fire, application of the steam engine, and control of nuclear power (Asimov, 1972). Human societies always have relied heavily for survival on the making and using of tools. Stone tools date back over 2 million years; tools of softer materials, such as digging sticks, spears, and clubs, may be much older still. Indeed, it is quite possible (as Darwin himself speculated) that an increasing reliance on tools created the evolutionary pressure that made us into upright bipeds: Tool users with hands and arms less used for locomotion would have survived and reproduced better than tool users who continued getting around on all fours.
Tools allow the body’s energy to be concentrated on small areas—points and edges that can cut and penetrate where the unaided body would fail. Teeth are hard and sharp but very limited. Consider the difference between a stone chopper held in your hand and an incisor tooth held in your gums. The power of your swinging arm can be transferred to the edge of the chopper; but if you tried to swing your head in a similar arc to cut a branch from a tree or smash open a bone, the results would be less satisfactory.
The energy driving hand tools is generated in the human body. Use of energy generated outside our bodies originated when humans got control of fire—a milestone probably reached by half a million years ago. Getting energy by burning firewood is chemically similar to getting energy by eating food; but the former allowed our ancestors to expand out of our tropical evolutionary cradle.
Useful as fire was (especially in helping trigger the Bronze Age and Iron Age a few thousand years ago), it began to significantly replace animal—including human— muscle power only with the refinement of the steam engine in the 1700s. As firewood became scarce in England, people turned to coal; and as coal mines deepened, water seepage and flooding of the mines became a chronic problem. Pumping water from coal mines was the necessity that mothered the refinement of the steam engine; because steam engines increasingly were fueled by coal, coal mining itself helped increase the demand for coal. The steam engine, meanwhile, found innumerable industrial applications by converting heat energy to mechanical energy, as in textile factories, where it was used to drive huge looms; indeed, the steam engine often is considered to have been the most important single innovation for industrialization.
When we burn either firewood or fossil fuel (coal, oil, natural gas), we are indirectly using solar energy “trapped” by photosynthesis. (The same applies when we energize our bodies by eating food.) The first genuine departure in this respect occurred only with control over nuclear power achieved a few decades ago. Fission, used to generate electricity in coal-poor societies, is efficient; but fuel is expensive, and hazardous wastes are produced. Nuclear fusion—the means by which stars produce heat and light—is even more efficient; moreover, the fuel is cheap, and wastes are not hazardous. The problem is containing the extremely high temperatures involved: So far, fusion literally remains too hot for us to handle. In future decades, however, controlled fusion could become a practical energy source. Cultural evolutionism can suggest that so fundamental a change in energy use could usher in a new stage of cultural evolution; foreseeing what that stage would be like, however, is well beyond its current ability. Meanwhile, we must cope with the fact that nuclear power’s unprecedented potential for destruction places our very survival in doubt. As Leslie A. White wrote in 1949, concerning the advent of nuclear power,
The mastery of terrestrial fire was tolerable, but to create energy by the transformation of matter is to play with celestial fire. Whether it can be done with impunity remains to be seen. The new Prometheus may also be the executioner. (p. 362)
Mammals are ubiquitous. After the dinosaurs died out some 65 million years ago, mammals underwent a spectacular process of expansion into different environments. Many lived on the ground; some burrowed into it. Some, like the whale, returned to the water where life had originated eons earlier. Others took to living in the trees; these are the primates from which we humans recently sprang. (Our grasping hands, rotating arms, and stereoscopic vision all reveal our tree-dwelling roots.) Mammals even took to the air, as the bats of today remind us.
When a life-form expands into a new environment, any traits that happen to help individual organisms survive and reproduce there will grow more common as generations pass. Assuming the form does not die out, then, it will be modified by natural selection for living ever more effectively in the new environment. When a single form of life successfully expands into many environments, the process is termed adaptive radiation. Adaptive radiation ordinarily involves the development of new species in the new environments: the cumulative effects of natural selection eventually make the populations in different environments so different from one another that interbreeding between them is no longer possible—they have become separate species.
A few human traits, such as skin color, do appear to reflect natural selection in different environments. Near the equator, where the sun’s rays strike the earth directly year-round, a dark skin aids survival and reproduction by affording protection from skin cancer; far from the equator, though, a light skin seems to offer protection against rickets, a bone disorder, which can result from too little exposure to sunlight. Yet this and other geographically based differences between human populations are quite superficial; the ultimate biological proof of this lies in the fact that a healthy male and female from anywhere in the world are capable of mating to produce fertile offspring. Clearly, humans have managed to go “all over the place” while remaining a single species.
That a single species—especially a large-bodied one—should have done this is remarkable indeed from a zoological and ecological standpoint. Other large-bodied species remain confined to relatively narrow environmental ranges. Chimps and gorillas, our closest living kin, still inhabit the tropical forests of our early ancestors.
Clearly, the secret of our success is culture. Humans have adapted to new environments, for the most part, not biologically but culturally. Culture allows us to create, within hostile environments, a “little environment” friendly to us. Control of fire, for example, meant we could create little enclaves of warmth in the coldest corners of the earth. Now, half a million years later, we live with the fish, not by evolving fins and gills but by surrounding ourselves with submarines; and we are venturing into airless space, not by evolving the ability to do without oxygen but by surrounding ourselves with space shuttles and stations. It even is conceivable that we will be able to modify other planets to suit our needs.
Human social evolution up to the present dramatically suggests the value of culture in promoting the survival and reproduction of culture bearers. Biological evolution adapts species to environments; cultural evolution adapts environments to species. Long before the advent of space stations, Herbert Spencer (1897) had seen deeply into the profound significance of culture as an adaptation allowing our species to venture into new environments by creating the following:
A secondary environment, which eventually becomes more important than the primary environments—so much more important that there arises the possibility of carrying on a high kind of social life under inorganic and organic conditions which would originally have prevented it. (Vol. 1, pp. 13-14)
As an approach, social evolutionism is historically associated especially with the 19th century; but the approach is still alive because social evolution itself is a continuing process.