Sociobiology: Nature and Nurture

Christian Spahn. 21st Century Anthropology: A Reference Handbook. Editor: H James Birx. Volume 1. Thousand Oaks, CA: Sage Reference, 2010.

Ever since the publication of Charles Darwin’s On the Origin of Species in 1859, many attempts have been made to apply the insights of evolutionary biology to the study of humans. The first edition of On the Origin of Species gives only a small hint at the possible impact that the theory of evolution could have on anthropology:

In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history. (Darwin, 1859)

But within little more than a decade, Darwin himself had given a first account of The Descent of Man, and Selection in Relation to Sex (Darwin, 1871) and had developed an evolutionary explanation of the modes of expression in The Expression of the Emotions in Man and Animals (Darwin, 1872). In the 1876 edition of On the Origin of Species, Darwin praised, in the very passage quoted above, Herbert Spencer’s attempt to found psychology on the principles of evolutionary biology (Spencer, 1873). From Darwin’s time on, the application and extension of evolutionary biology to the study of human nature has gone through changing tides of “biologization” and antibiological demarcation, resulting in the nature-nurture distinction and in the debate over how much of human behavior is innate or inherited (and thereby a possible candidate for biological explanation) and how much is shaped by nurture (and therefore explicable only by reference to cultural and intellectual influences).

From the first attempts until today, the application of evolutionary biology to explain not only human morphological traits but also the human mind and human behavior has been under attack from social scientists and biologists alike, but it has also found enthusiastic support. Dawkins (1976), quoting G. G. Simpson (1966), states that any effort to grasp human nature and humans’ reason for existence undertaken before 1859 should be ignored, thereby putting the strongest possible emphasis on a Darwinian understanding of nature in the nature-nurture distinction. Tooby and Cosmides (1992) argue that the social sciences, as long as they were pursued without incorporating evolutionary biology, were extraordinarily unsuccessful as sciences precisely because they ignored evolved human nature.

On the other hand, scholars in cultural studies and sociology, but also biologists, have always emphasized that only nurture can explain the richness and variety of human culture and behavior, and they have attacked biocentrism or gene-centrism as an oversimplification that lacks explanatory power and may even be politically dangerous. These critics include Sahlins (1976); Lewontin, Rose, and Kamin (1984); Gould (1978, 1981, 1997); Jablonka and Lamb (2005); Richerson and Boyd (2005); and Wilson and Wilson (2007).

Since nearly all important realms of human behavior are affected by the nature-nurture debate, from gender roles, aggression, love, altruism and egoism, the acquisition of knowledge, and the malleability of character to, ultimately, questions of freedom, responsibility, and individuality, this debate is and will remain a controversial issue in biology, the social sciences, and even in everyday life and politics.

Nature and Nurture: Darwinism and the Nature of Nature

Darwin’s theory of evolution by natural selection (Darwin, 1859) describes the world of organisms as a world of competition for survival and replication. Given the high fertility of most populations, the world would be overcrowded by most species within a few generations, yet in most cases we observe an almost steady state of population. From this observation, Darwin infers that there must be strong competition for the resources organisms need; that is, not all individual organisms that are born are in fact able to survive and to reproduce (the struggle for existence). The next step is the fact that individuals of the same kind differ slightly in their qualities and that certain variations are often inherited (Darwin refers to knowledge from the field of breeding). In the struggle for existence, those qualities that lead to better chances of survival (natural selection in a narrower sense) and higher chances of reproduction (sexual selection) will thereby necessarily be more often present in the next generation than maladaptive or disadvantageous traits.

Darwin labeled this process, in analogy to human selection in breeding, natural selection. Natural selection “chooses” from the occurring differences (mutations) those features that tend to increase fitness, and Darwin infers that, over long stretches of time, the accumulation of small differences leads to the origin of new species and to the astonishingly complex and highly functional adaptations (designs) that we find everywhere in nature. Darwin was of course not the first scientist to hold the view that species have evolved, but he was the first to discover, independently of but consistent with the ideas of Wallace (Darwin & Wallace, 1858), the causal mechanism (natural selection) that is responsible for the generation of all the observed adaptations of organisms.
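The cumulative logic of this process can be illustrated with a minimal numerical sketch (a toy model in Python; the parameter values are illustrative and not drawn from Darwin): even a small heritable advantage, compounded over generations, carries an initially rare variant to dominance.

```python
# A minimal sketch (illustrative, not from the source): one-locus selection
# on a heritable variant with relative fitness 1 + s. Even a small advantage,
# compounded over generations, carries an initially rare variant to dominance.

def next_frequency(p: float, s: float) -> float:
    """Frequency of the variant after one generation of selection."""
    return p * (1 + s) / (p * (1 + s) + (1 - p))

p, s = 0.01, 0.02  # a rare variant with a 2% fitness advantage
for _ in range(1000):
    p = next_frequency(p, s)
print(f"frequency after 1,000 generations: {p:.4f}")  # approaches 1.0
```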

The later combination of Darwin’s theory with the discovery of the true mechanism of inheritance and with new insights from population genetics and cell biology led to a powerful new explanatory framework in biology, labeled the synthetic theory (Huxley, 1942; Mayr, 1942). Within post-Darwinian biology, many features of Darwin’s theory have been and are being critically discussed: What is the level or unit of selection? To what degree is, for instance, group selection possible (e.g., Dawkins, 1976, 1982, vs. Richerson & Boyd, 2005; Wilson & Wilson, 2007)? How much emphasis should we place on the adaptive side of the process, rather than stressing contingency and internal constraints within the development of organic systems? In the process of evolution, do randomness and contingency prevail, or are there certain trends and specific trajectories of adaptation and evolutionary pressure that make certain “inventions” more likely or even probable (e.g., Gould & Lewontin, 1979, vs. Conway Morris, 2003)? Is it true that there is no way of inheriting individually acquired traits, and is every mutation random, or can we revive some elements of Lamarckism? Is it reasonable to see genes as the ultimate unit of selection and as a causal power using individual organisms as their vehicles (Dawkins, 1976, 1982), or must we consider a complex interplay of evolution and environment, of the extraction of genetic information and independent laws and factors of development (e.g., Jablonka & Lamb, 2005)?

All these discussions, however, are discussions within the Darwinian framework; they all accept the idea that the combination of blind mutation, inheritance, and competition will lead to differential survival (natural selection). They all agree, therefore, that knowledge about the mechanisms and logic of natural selection is in fact the key to understanding the biological world. As Dobzhansky famously put it, “Nothing in biology makes sense except in the light of evolution” (1964, p. 499; 1973). The modern picture of nature in the nature-nurture distinction is thereby profoundly shaped by Darwin’s theory of evolution.

Nature Versus Nurture: The Case for Nurture

Both before and after the rise of sociobiology in the 1970s, two ways of dealing with the biological side of human nature can be distinguished in the social sciences and in philosophical anthropology.

First, humans may be regarded as a very special animal that has been equipped by evolution with a very peculiar nature. Since this peculiar nature has been brought forth by the forces of evolution, biological knowledge can to some degree be fruitfully applied in the enterprise of understanding humans (see Gehlen, 1940/1993; Plessner, 1928, 1940; Scheler, 1925, 1928). Since, however, human nature differs distinctly from animal nature, there are in this view limits to this application, and there is a clear distinction between the realm of animals and the human sphere: Evolution itself has led to a creature that has left the realm of pure biological determination.

Second, against this cautious incorporation of biological knowledge into the humanities, a strong antibiological tendency can be found in the doctrine of the antiuniversalistic character of human cultures and in the idea of the autonomy of cultural processes, an idea put forward early on by Alfred Kroeber in his theory of the “superorganic” nature of cultural processes (Kroeber, 1917). Cultural processes that shape human behavior are considered to be not at all biologically determined, and they might even be independent of the individual psychology of human beings (Durkheim, 1895/1982; Geertz, 1973; Kroeber, 1916, 1952; Sahlins, 1976; see also Lewontin, Rose, & Kamin, 1984; Richerson & Boyd, 2005).

The Case for Nurture: Nature Requires Nurture

Scheler, Plessner, and Gehlen all tried to give accounts of the difference between human actions and abilities and animal behavior by looking at the peculiar evolved nature of human beings.

Although Scheler praised the new insights into the abilities and capacities of animal intelligence, he insisted that the human mind could not be understood in terms of pure, survival-oriented “strategic rationality.” This capacity, which underlies the human talent for technical invention, differs only quantitatively between humans and animals, and animals in this realm might even be more securely guided by instincts (Scheler, 1925). The new feature that places humans apart from and above nature and biology is their ability to grasp objective reasons and values and to contemplate which values might be preferred to others. (Even abstract values might be preferred to the value of personal survival.) Thus, humans seem to have freed themselves from the instinct-driven means-end rationality of other animals (Scheler, 1925, 1928).

Plessner (1928) emphasizes the “ex-centric” nature of humans that places them partly outside the realm of biology. He argues in favor of leaving the “body versus mind” or “nature versus human” distinction behind: Humans must place themselves within nature (they are not bodiless rational beings), but without betraying their peculiar position or special nature. Plessner tries in his philosophy of biology to capture the different modes of life in the three realms of plants, animals, and humans. While animals are centric insofar as they are guided by nature through their instincts, humans have the ability to consciously relate to themselves and to distance themselves from their nature. Humans therefore are in an ex-centric position; that is, they have an open, not biologically fixed, relation to the world. Consequently, due to the reduction of instincts, humans must heavily rely on nurture (culture and institutions) in order to survive. Plessner (1928) puts this in the famous phrase of humans’ natürliche Künstlichkeit (“natural artificiality”): By their very nature, humans must rely on human-made products, nurture, and culture.

Gehlen (1940/1993) follows Herder when he interprets the human being primarily as an instinct-deprived animal, as a Mängelwesen (“deprived creature,” Homo inermis) that bears the stamp of an overall retardation. Human infants are born prematurely, go through what might be called an extra-uterine embryonic year (compared to other primates), and have a very long phase of neoteny. Compared to other animals with their high degrees of specialization, humans are morphologically weak: They lack natural weapons and seem less specialized to cope with specific environments or to fulfill certain tasks. In order to compensate for this reduction of instincts and physical strength, humans must exert their rational capacities and intellect in order to survive. Technological tools can be regarded as “external organs” that help them. Gehlen calls this “the need for institutions,” and his definition of institution includes all kinds of nonbiological intelligent survival techniques, from tool use to social institutions. It is precisely humans’ extraordinarily weak or deprived nature that compels them, more than any other animal, toward nurture and culture. Gehlen backs his thesis with contemporary scientific insights (Lorenz, 1943; Tinbergen, 1952), according to which there is a fixed nature of stimulus-response mechanisms in animals that cannot be the predecessor of the flexible human mind, which is not bound to specific contents. Whereas animal cognition seems to be constrained to certain well-defined stimulus patterns, humans are weltoffen (“open to the world”) and can think about all kinds of objects, theories, and even fictional events.

The common assumption of these views is the emphasis on “the reduction of nature” (fewer instincts, less physical power), which, in a situation of Darwinian competition, led to the urge to compensate for these disadvantages through the further and further development of reasoning and intelligence. Humans must invent tools, devise clever hunting strategies, compensate for the lack of guidance through instincts, and so forth, thereby developing the capacity for cognitive problem solving to such a high degree that this capacity itself becomes the basic foundation for nurture, and even for the overcoming of nature.

In contrast to older, pre-Darwinian dualistic views, the specialness of human nature (its rationality) is not given by a divine creator, nor is it a sign of human superiority; it is itself the product of a special evolutionary pressure that operates on the basis of the given human weakness and physical inferiority. Nevertheless, this view emphasizes a distinction between humans and animals, nature and nurture, claiming that through reasoning humans eventually leave nature behind. The process of leaving nature behind may in this view be described in Darwinian terms, “through the logic of selection pressure,” but the result is viewed as a state in which humans have freed themselves from biological nature via “rational nurture.” This view became dominant in the European and Anglo-American social sciences and shaped their accentuation of nurture over nature.

The Case for Nurture: The Autonomy and Pluralism of Cultures as a Counterargument against the Influence of Nature

Modern anthropological and ethnographic researchers have been impressed by the vast variety of and innumerable differences among human cultures all over the world, and by the distinctiveness of human culture, behavior, and society in comparison to animal behavior. Modern ethnography and cultural studies have, from the beginning of the 19th century onward, put forward a strong antiuniversalistic stance, dismissing and heavily criticizing the idea of a single common human nature or common human culture.

In this view, there simply exists no such thing as one human culture; there are only human cultures. Geertz (1973), following Kroeber, even famously calls the so-called anthropological universals such as religion, marriage, trade, and property “fake universals” (p. 39). Mead (1949), too, stresses in her early studies the cultural differences in gender roles; Whorf (Whorf & Carroll, 1964) famously claimed that the Hopi Indians did not possess a concept of time analogous to that of “Western thought”; and Darwin’s view of a common human way of expressing emotions seems to be refuted by the richness of human habits and customs. Generally, modern studies along the lines of “postcolonial” approaches try to avoid at all costs measuring different cultures and habits against one common universal standard. (For a critical overview, see Brown’s important book on human universals [1991].)

Since a common biological nature of humans cannot, it seems at first sight, account for this pluralism of cultures among humans, the differences must be culturally induced; therefore nurture and cultural learning become in these approaches the most important factors in understanding human behavior. Human nature, it seems, only equips us with flexibility and does not bind us to a specific expression of behavior.

The alternative view, that these differences might be traced back to a different biological or genetic setup of different groups, was heavily criticized by Kroeber (1916) from early on and can now be considered totally refuted. (See Tooby & Cosmides, 1990, for the Darwinian argument against the view that there could be fundamental biological differences between different human “races.”) Kroeber (1916) argues that, if Weismann’s (1885) neo-Darwinian doctrine is right, acquired cultural habits, customs, and so forth cannot be inherited, and the idea of a biological basis for cultural differences is, by the biological neo-Darwinian standard itself, absurd (see also Geertz, 1973).

Nevertheless, racist tendencies and ideas of biological determinism that gave rise to social Darwinism and eugenic ideologies were popular in science at the beginning of the 20th century. It is to the credit of Boas and his pupils, among them Alfred Kroeber, Margaret Mead, and Ruth Benedict (1934), that they emphasized the value of cultural plurality. They insisted from early on that this pseudoscientific thinking could not be rooted in neo-Darwinian evolutionary biology and that the idea of a racial determinism of cultural behavior is completely misguided.

If differences in culture cannot be traced back to nature in this way, it is then, as argued above, plausible to consider them a product of nurture. This insight, together with the misuse of biological views in the age of racism and eugenics, has led to a strong disregard of human nature in the social sciences. The difference between human capacities and those of an animal mind (conceptualized as being guided by stimuli and responses, narrow behavioral patterns, and content specificity) obviously seems to call for a strong emphasis on nurture in order to account for human culture. The study of culture alone therefore becomes the main, sometimes the only, focus in the social sciences and anthropology. In some of these views, human nature becomes almost completely irrelevant and negligible; human nature is important only insofar as it enables flexibility and cultural diversity. In order to do so, it cannot be regarded as fixed or as determining human action.

This line of reasoning became even more dominant in the school of behaviorism (Skinner, 1938, 1957), which describes the human mind as a black box that can, in an astonishingly flexible way, be adjusted to almost any task simply by learning through reinforcement. While Pavlov developed the idea of classical conditioning (given stimulus-response reactions can be triggered by a different stimulus through association), Skinner expanded this idea into the more universal method of operant conditioning: Animal behavior can be influenced in a flexible way through positive or negative reinforcement (reward or punishment), leading to completely new stimulus-response connections. This idea led to the concept of a great flexibility of cognition, expressed in Watson’s famous words that describe a strong dominance of nurture (training/conditioning) over nature:

Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief, and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors. (Watson, 1925, p. 82)

Although this view does not claim that the human mind is a tabula rasa, the general ability to learn seems to make the human mind similar to a general-purpose computer that can be programmed by nurture for almost any task, from riding bicycles to solving theoretical problems in plasma physics. Only the assumption that nurture is much more important than nature could, it seems, do justice to the generality, flexibility, and diversity of human actions and capacities.

The Case for Nature: Ethology

In strong opposition to the autonomy-of-culture thesis, ethologists, sociobiologists, and evolutionary psychologists have stressed the dependence and deep connection between nature and nurture. The insights of Tinbergen and Lorenz into animal behavior and cognition corrected the view of a simple stimulus-response model. Lorenz’s work on the biological function of aggression is a good example of the ethological approach to behavior. Lorenz argues that he cannot accept the dualism of a life-fostering drive (eros) and a drive for destruction (thanatos) that Freudian psychology postulates. From a biological point of view, even the negative drive of destruction must have a positive or adaptive function for organisms and cannot be seen as an antibiological drive of (self-)destruction.

Lorenz tries to give an account of how aggressiveness can have evolved and what its adaptive function in the animal realm is (Lorenz, 1963). Thereby he claims that aggression is simply a necessary feature of life, not something that is externally induced by frustration or stimuli: Since, for Lorenz, drives have a certain autonomy, one cannot avoid aggression simply by avoiding frustration or aggression stimuli. Lorenz draws conclusions for humanity: We should not try to overcome aggression but rather deal with it in a productive way. Lorenz hereby challenged the mainstream view of his time—that any education that avoids frustration will lead to nonaggressive personalities.

This inference from biological insights to the realm of human education has, however, met heavy criticism from the social sciences, and the school of ethology has also been criticized by biologists for internal Darwinian problems. Lorenz analyzes the positive function that aggression has for the preservation of the species, thereby postulating a drive to do what is good for the species. Such a drive, however, can only be assumed by making implausible claims about group selection. This problem led to the rise of the new paradigm of sociobiology, which replaced the understanding of behavioral adaptation dominant in ethology (“do what is good for the group”).

The Case for Nature: Sociobiology—Theory and Application

Interestingly enough, the sociobiological application of evolutionary biology to the realm of humans was in part made possible by a view dominant in the social sciences that still stood in the tradition of stressing the distance between humans and animals: Human beings are considered to be the rational animal. Rational choice theory and the theory of games developed in economics (von Neumann & Morgenstern, 1944) model human beings as making rational choices: No one wants, for instance, to buy a product for a higher price than necessary; nobody wants to sell it for a lower price than possible. It is clear that the situation in the free market resembles a Darwinian scenario: It is a situation of competition for scarce resources (customers), in which any company that wants to survive must improve its products in direct competition or find its own ecological (economic) niches.

Looking at rational strategies of market participants therefore intrinsically resembles the task of searching for evolutionarily stable strategies, that is, for inheritable or programmed animal behavior that will increase the reproductive fitness of the animal using the strategy. An evolutionarily stable strategy is a strategy that, if the majority of a population uses it, cannot be outcompeted by any other strategy (Maynard Smith, 1982; Maynard Smith & Price, 1973). The same mathematical models and the same types of problems and questions (when and why cooperate, when and why “cheat”?) arise in both fields. Lewontin (1961), Hamilton (1967), and Maynard Smith (1982) successfully applied the models of game theory to biology.
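The idea of an evolutionarily stable strategy can be made concrete with the hawk-dove game of Maynard Smith and Price. The following Python sketch (using the standard textbook payoff structure with illustrative parameter values, not taken from this chapter) verifies that, when the cost of injury C exceeds the value of the resource V, playing hawk with probability V/C makes hawks and doves exactly equally fit, so neither can invade:

```python
# Sketch of an evolutionarily stable strategy in the classic hawk-dove game
# (after Maynard Smith & Price); payoff values are the standard textbook
# parameterization and are purely illustrative.

V, C = 2.0, 4.0  # V: value of the contested resource; C: cost of injury

def payoff(me: str, other: str) -> float:
    """Expected payoff of a single contest."""
    if me == "hawk":
        return (V - C) / 2 if other == "hawk" else V
    return 0.0 if other == "hawk" else V / 2  # dove

def fitness(me: str, p_hawk: float) -> float:
    """Expected payoff in a population playing hawk with frequency p_hawk."""
    return p_hawk * payoff(me, "hawk") + (1 - p_hawk) * payoff(me, "dove")

p_star = V / C  # candidate ESS when C > V: play hawk with probability V/C
print(fitness("hawk", p_star), fitness("dove", p_star))  # equal payoffs: 0.5 0.5
```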

Interestingly, both views were challenged by empirical facts. From an evolutionary viewpoint, it seems that programs for maximizing egoistic fitness should outcompete more altruistic strategies. But the animal world is in fact full of examples of mutual cooperation, even of acts of self-sacrifice: Parents sacrifice themselves for their offspring, mutual cooperation in groups is not rare, birds give alarm calls, thereby exposing themselves to danger, and so forth. Within the realm of the social sciences, evidence was found that people deviate from the ideal egoistic strategies in “dictator games” and “ultimatum games.” People in an ultimatum game are willing to altruistically punish certain behavior of others: They are willing to invest resources in a non-profit-maximizing way just to punish others who deviate from expected moral norms (Fehr & Fischbacher, 2003; Henrich et al., 2004).

How can this contradiction between the expected egoism in a game of competition and the observed cooperation and altruism be reconciled? How, in both Darwinian situations of fitness maximizing and rational (strategic) choice making, can altruism occur?

Darwin noted that there can be a conflict between activities that are good for the group and those that are beneficial for the individual organism. For a group of monkeys, for instance, it might be good to have courageous individuals that are willing to fight aggressively against other groups (for instance, in order to defend territory and resources). Such a group will have an advantage over a defenseless group. For the individual, however, it is better to let others do the violent and potentially dangerous job (see Darwin, 1871, p. 161). Freeloaders always have a higher chance of reproduction, simply because they avoid putting themselves in danger; therefore, this strategy must be much more successful and must, over generations, replace the strategy of defending the group in an altruistic fashion. If one does not want to impose implausible group-selectionist theories or evoke the notion of a drive to preserve the species, a theoretical solution must be found to explain the existence of altruistic behavior in the animal realm. Put more dramatically, this problem amounts to a Darwinian paradox. A biological definition of altruism would be the following: An altruistic act is an act by which an organism enhances the chances of survival and reproduction of another organism while decreasing its own chances of reproduction. Acts of this kind exist, but at first sight they cannot evolve within the logic of Darwinism if one assumes that behavioral strategies must be genetically inherited. Altruistic organisms (in this sense) tend by definition to leave fewer offspring than egoistic organisms; therefore, it seems, they cannot have a chance in the course of evolution.
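The paradox can be made vivid with a toy calculation (a minimal sketch with arbitrary parameter values, not a model from the source): if altruists pay a fitness cost and the benefit they produce is spread over the whole population regardless of kinship, the altruists are outbred no matter how numerous they start out.

```python
# A toy model of the Darwinian paradox (illustrative values): altruists pay a
# fitness cost c, while the benefit b they produce is shared with the whole
# population regardless of kinship. The altruists are then always outbred.

def next_altruist_frequency(p: float, b: float, c: float) -> float:
    w_altruist = 1.0 + b * p - c  # shares the benefit, pays the cost
    w_egoist = 1.0 + b * p        # shares the benefit, pays nothing
    mean_w = p * w_altruist + (1 - p) * w_egoist
    return p * w_altruist / mean_w

p, b, c = 0.9, 0.5, 0.05  # even a 90% altruist majority erodes
for _ in range(500):
    p = next_altruist_frequency(p, b, c)
print(f"altruist frequency after 500 generations: {p:.6f}")  # near zero
```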

There are three possible answers to this problem. First, one can try to unmask acts of altruism as being in fact egoistic. The gazelle might not jump in a self-sacrificing way to draw the attention of a lion away from a weak conspecific; the jumping may be a selfish act displaying its own strength and health, thereby signaling that the weaker conspecific is a much easier target (Dawkins, 1976). Similarly, giving an alarm call might have egoistic advantages: The bird’s call might have the effect of asking the other birds to be quiet, thereby increasing its own chances of survival; it might also fly away with them, so that its chances of being caught are lower than in a solitary attempt to escape (Dawkins, 1976). But there are many cases where it is implausible to find such an individual egoistic advantage. A parent sacrificing itself for its offspring hardly has any individual egoistic advantage in terms of further reproductive success.

A second solution is to shift the perspective from the individual level to the gene level. This is one of the basic original insights of sociobiology in comparison to the group-level orientation of early ethology: If the level of selection is not the group and not the individual but the genes, then altruism toward relatives is likely to occur. Hamilton’s rule names the factors that make altruism in the animal realm likely to occur: If the cost of an altruistic act to the actor is smaller than the benefit to the recipient weighted by the degree of relatedness between actor and beneficiary, then selection should favor altruistic strategies (Hamilton, 1964). In this view, it is not important that the individual organism enhance its own fitness and reproductive success but that it increase the overall fitness of its kin, for many of its genes are also present in its kin. One can therefore foster the replication of one’s genes either directly or by helping one’s kin to reproduce.
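Hamilton’s rule is compact enough to state in a single line of code. The sketch below (a minimal illustration; the numeric values are ours, not from the chapter) uses the standard coefficients of relatedness r and the condition rB > C:

```python
# Hamilton's rule in code: kin selection favors an altruistic act when
# r * B > C. The numeric examples are purely illustrative.

def hamilton_favors(r: float, benefit: float, cost: float) -> bool:
    """True if the rule r * B > C is satisfied."""
    return r * benefit > cost

# Paying a cost of 1 to confer a benefit of 3:
print(hamilton_favors(r=0.5, benefit=3.0, cost=1.0))    # full sibling: True
print(hamilton_favors(r=0.125, benefit=3.0, cost=1.0))  # first cousin: False
```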

The fact that altruism in the animal realm is indeed very often linked to kin, and the fact that different social strategies (the astonishing social altruism of worker bees, for example) can be traced back to different mechanisms of inheritance, indicate the great success of these theories in evolutionary biology. Insects of the order Hymenoptera, for example, are haplodiploid. This leads, of course, to different degrees of relatedness than in diploid species; it might therefore be much more “rational” for a worker bee to invest in the offspring of its mother (r = 3/4) than in its own reproduction (Trivers & Hare, 1975). By shifting the focus in biology away from the idea that the individual is the unit of selection to the idea that the gene is the unit, many behavioral phenomena can be mathematically explained as a natural outcome of selection. Altruism on the level of the organism must be traced back to “egoism” on the level of genes in order to solve the Darwinian paradox of altruism. This also leads to interesting biological research questions: If it is necessary to limit altruism to kin, what mechanisms of kin detection can we find? What different strategies apply to those species that are able to individually recognize conspecifics, in comparison to those that cannot? What counterstrategies can freeloaders find to exploit a rule of kinship detection? What refinements could counter the counterstrategy?
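The relatedness arithmetic behind the worker bee example above is simple enough to spell out (a sketch of the standard textbook calculation, not code from the source): because a haplodiploid father is haploid, all of his sperm are genetically identical, so full sisters share the entire paternal half of their genomes.

```python
# The standard relatedness arithmetic behind the haplodiploidy argument.
# A haplodiploid father is haploid, so all his sperm are identical and
# full sisters share the entire paternal half of their genomes.

r_siblings_diploid = 0.5 * 0.5 + 0.5 * 0.5      # maternal 1/4 + paternal 1/4 = 1/2
r_sisters_haplodiploid = 0.5 * 0.5 + 0.5 * 1.0  # maternal 1/4 + paternal 1/2 = 3/4
r_own_offspring = 0.5                           # the same in both systems

# A worker bee is more closely related to a full sister than to her own young:
print(r_sisters_haplodiploid > r_own_offspring)  # True
```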

Given this view, one can, for example, ask what would be a good strategy of parental investment. Trivers (1972) and Maynard Smith (1977, 1982) regard the relation between the sexes as an “economic” enterprise: Each party must try to invest as little as possible while gaining the highest possible rate of replication of its own genes. Because of anisogamy, there is from the beginning an advantage on the male side: He brings 50% of his genes into the offspring, but he invests less than 50% of the energy, especially in those cases where conception takes place within the female. In some fish, by contrast, it is the male who can be left in a cruel bind: The female fish spawns the eggs some seconds before the male fertilizes them and can use this head start to escape the task of raising and protecting the young. The male fish must then decide whether to raise the offspring (defend the eggs), since he has already made a parental investment, or whether to start a new investment elsewhere. When conception takes place within the female, the advantage is at first sight on the male side.

Further interesting strategic questions can be asked: Should males be more cautious when investing in their offspring, because they are less certain than females that the offspring are in fact their own? Should parents invest more in healthy offspring, neglecting the weak ones? Is it in the interest of a “stepparent” to kill its step offspring, as we see with lions? Should sexual strategies differ fundamentally between the sexes? Does animal behavior in fact match these predictions? All these questions are useful tools and guides for giving a truly evolutionary account of animal behavior, and the explanatory power, even if sometimes overstressed and even if many examples are still in debate, is impressive.

The first two steps, however, are limited to altruism toward kin; they cannot explain cooperation or symbiotic relations that extend beyond kin. Altruism beyond kin can be interpreted as a malfunction of the kin-detection mechanism (exploited, e.g., by cuckoos, which lay their eggs in the nests of other birds), but cooperation, altruism, sociality, and group living extend, in many cases in the animal realm, beyond the circle of close kin. The advantages of group living are obvious (for a discussion, see Lorenz, 1963), but again the freeloader problem emerges: For altruism at this level, the gene view must be left behind, and the focus must be set on insights from general game theory (the theoretical analysis of successful strategies in different situations of competition).

Therefore, the third step to explain the emergence of altruism in the Darwinian scenario of sociobiology is the idea of reciprocal altruism and the search for evolutionarily stable strategies (Axelrod, 1984; Axelrod & Dion, 1988; Axelrod & Hamilton, 1981; Trivers, 1971). The results of the empirical research along this line were famously summarized by E. O. Wilson (1974, 1975). Hamilton’s mathematical approach can in this way be further generalized: If one regards the individual organism as a vehicle for executing selfish gene programs (Dawkins, 1976, 1982), then other contingent facts of evolution might also be explainable as consequences of the struggle of genes to maximize their frequency in the gene pool by finding the most successful strategy for balancing cooperation and defection.

An act of altruism among nonrelated organisms may in fact be likely if it is reliably reciprocated. These considerations, however, lead to the well-known problems of strategic behavior that have long been discussed in game theory and in the theory of the rational agent noted above. Since there is usually a time span between the altruistic act and the act of reciprocity, it is tempting to defect. A betrayer naturally always gets the highest payoff: He receives the benefit of the altruistic act but does not bear the cost of reciprocation. This situation is known in game theory as the prisoner’s dilemma: Both parties would be better off if they cooperated; each side would individually profit more from defection (if the other cooperates); and each side risks the highest costs if it cooperates but the other side betrays it. If you assume that your aim is to maximize your profit, you will defect, and so should the other player if he is a rational (strategic) agent.
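The dilemma is visible directly in the payoff structure. The following sketch (using the conventional textbook ordering T > R > P > S; the particular numbers are standard illustrations, not taken from the chapter) checks that defection is the better reply to either move, even though mutual cooperation beats mutual defection for both players:

```python
# The one-shot prisoner's dilemma with the conventional payoff ordering
# T > R > P > S; the numbers are the usual textbook values.

PAYOFF = {  # (my move, other's move) -> my payoff
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

# Whatever the other player does, defection yields the higher payoff ...
for other in ("C", "D"):
    assert PAYOFF[("D", other)] > PAYOFF[("C", other)]
# ... even though mutual cooperation beats mutual defection for both sides:
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
```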

How can cooperation among nonrelatives arise in a Darwinian scenario if there is always a higher possible payoff for betrayers? Trivers, Axelrod, and Hamilton worked out mathematical models for these situations and searched for strategies that are sustainable, that is, strategies that are able to evolve and survive in a Darwinian competition of strategies. The most striking example is the extraordinary success of the tit-for-tat strategy. In this conditional strategy, the choice between cooperation and defection depends on the other player. (Obviously, this strategy presupposes the ability to recognize individual players; it can therefore be applied only to certain species.) If the other player cooperated in the last encounter, then you cooperate; if he defected, you defect. Tit for tat is in many scenarios an evolutionarily stable strategy (Axelrod, 1984). Surprisingly, strategies that include cooperation might in many scenarios fare much better than purely egoistic strategies.
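A few lines of code suffice to replay the core of Axelrod’s finding in miniature (a sketch in the spirit of his computer tournaments; the strategies, payoffs, and round count are illustrative assumptions): tit for tat earns the full cooperative payoff against itself and loses only the very first round to an unconditional defector.

```python
# A miniature tournament in the spirit of Axelrod (1984); strategies,
# payoffs, and round count are illustrative.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first; afterwards mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each player sees the other's past moves
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (199, 204): exploited once, then defects
```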

Due to the mathematical generality of this approach, it can also be used to describe cultural evolution in analogy to the biological situation. The theory of memes claims that “cultural units” (songs, ideas, theories, institutions, called memes in analogy to genes) are likewise in competition for instantiation. Cultural history thus resembles a Darwinian scenario: Ideas or memes are more or less faithfully copied from generation to generation; better-adapted memes outcompete others, and so forth. This extension of Darwinism can be understood in biological or in structural terms. Evolutionary psychology would emphasize the fitness of certain memes by pointing out that they fit with certain evolved biological traits of cognition: Memes that help the individual to gain a fitness advantage will be more likely to succeed. But this extension can also be interpreted in the strict sense of a structural analogy. Even if nonbiological entities such as robots had culture, cultural memes would have to be replicated, thereby being in competition for attention, and the mathematics of game theory and evolutionary biology would still apply. This view can stress the independence of meme evolution; memes are for Dawkins (1976) in a certain sense replicators sui generis. A meme might lead to a fitness disadvantage for the individual (the meme for celibacy or self-sacrifice) but can nevertheless be successful. It can, in Dawkins’s terms, use the individual as a vehicle, just as a gene can; it can therefore override or free humans from gene determinism, if only (in this view) at the price of meme determinism (see also Blackmore, 1999). However, since memes must fit with brains as their environment, Dawkins and Blackmore stress that memes are most likely to be selected if they increase, and do not decrease, the inclusive fitness of the individual.

All these examples and considerations highlight the basic tenet of sociobiology: Seeking egoistic advantages is seen as the natural outcome of evolution; however, since gene replication is the crucial goal, altruistic compromises may be possible or even necessary, but whenever the circumstances allow it, these compromises will be broken. This framework can claim to be truly Darwinian and can give good explanations for both altruism and egoism on the level of individual organisms, without demanding that altruism and cooperation evolve against the laws and probabilities of natural selection. It is also clear that from this perspective, the interpretation of culture and humans in terms of adaptive, inclusive-fitness-maximizing behavior becomes an important key to understanding human beings.

The Case for Nature: The Shift to Evolutionary Psychology

Evolutionary psychology shifts the focus from the sociobiological analysis of the fitness-maximizing success of behavioral strategies to an inquiry into the underlying evolved psychological mechanisms. Rather than finding out how far certain cultural behavior leads to increased inclusive fitness, the question is whether we can find basic psychological features of the human mind that might have been adaptive during the phase of hominization, especially in the context of hunter-gatherer societies, even if they lead to disadvantageous behavior under modern circumstances. Tooby and Cosmides (1992) claim that evolutionary psychology is thereby the missing link (Cosmides & Tooby, 1987) between biology and the social sciences: Society and culture are produced by humans, and humans are a product of biological evolution; thus, psychology links these fields. The program and basic tenets of this approach are summarized in Buss (1995) and Tooby and Cosmides (1992); the necessity of this shift after the dominance of sociobiology is argued in Barkow (1978, 1984).

Two developments in the empirical sciences have led to the rise of evolutionary psychology. First, the conception of human cognition as a general-purpose computer has come under attack. Results from the analysis of the visual system, insights from research on artificial intelligence, and studies on language acquisition suggest that cognition is a content-specific, modular process. (For cognition in general, see Fodor, 1983.) Chomsky influentially attacked Skinner’s view of learning as a general mechanism, leading to speculations about a special adaptation or an evolved module for language acquisition (Chomsky, 1959, 1975, 1980; Pinker, 1984, 1994). Research on artificial intelligence stressed the idea that much more innate knowledge is needed for problem solving: To keep the computational search space reasonably small, artificial cognitive systems must already have knowledge and biases built in. (For an overview, see Tooby and Cosmides, 1992, pp. 106 ff.)

Experiments with animals have further supported this idea of cognitive preparedness. A monkey raised in captivity can be trained (via a videotape showing a conspecific’s panic reaction to a snake) to show a panic reaction to snakes; however, it cannot be trained to show this reaction to flowers (Mineka, Keir, & Price, 1980). Rats can learn avoidance only in a module-specific way: They can associate nausea with food and taste, and shock with sounds and lights, but not vice versa (Garcia & Koelling, 1966). The facts that humans, too, learn certain things more easily than others, that all children are equipped to learn language, that they seem to have an innate physics (expectations about the behavior of objects), and so forth render it plausible that human cognition is in fact not based on a reduction of instincts at all but relies heavily on instinctive knowledge, which can be recombined and generalized. Gould and Marler (1987) coined the phrase “learning by instinct,” and Lorenz (1973) developed a complex account of the steps from animal to human cognition beyond the stimulus-response model, distinguishing between open and closed instinct programs. It is not the reduction of innate programs but the strong reliance on them that makes learning possible. Accordingly, the human mind can no longer be conceptualized as a content-free general-purpose computer (see Buller, 2005; Carruthers & Chamberlain, 2000; Carruthers, Laurence, & Stich, 2005).

Second, the idea of “human universals” has become popular again, following the refutations of the claims of Mead (see Freeman, 1983, and Gewertz, 1981) and of Whorf (see Malotki, 1983), and following the evidence against the cultural dependence of the expression of emotions (see Ekman, 1973, 1992; Ekman & Friesen, 1986; Ekman, Sorenson, & Friesen, 1969; Ekman et al., 1987). Berlin and Kay (1969) discovered culturally independent ways of color classification. These results and new cross-cultural studies (summarized in Brown, 1991) led to the distinction of “innate evolved universals” with different cultural manifestations, in analogy to the genotype-phenotype distinction (see Cosmides & Tooby, 1989; Tooby & Cosmides, 1989a, 1989b, 1992). If one distinguishes innate universals from their local modes of expression (their “display rules”), many differences seem to vanish. Culture may shape the expression of certain universals, but the underlying cognitive mechanism might be the same in all humans (Brown, 1991).

These developments strengthened the research program of evolutionary psychology: Just as the study of the human visual system reveals a marvelously complex computational and highly specific adaptive structure (Marr, 1982), might it not be plausible to search for complex information-processing modules or “mental organs” (Chomsky, 1975, 1980) in the human mind that have been selected as solutions to recurring adaptive problems such as finding mates, detecting kin, and so forth? (For an overview, see Tooby & Cosmides, 1992, pp. 101 ff.) The questions asked in this approach are the following: What adaptive problems did humans face during the phase of hunter-gatherer societies? What cognitive information processes could evolution have implemented in human cognition? Does human cognition show traces of this kind of adaptation? Symons (1979), Buss (1994), and Miller (2000) asked these questions concerning human sexuality, mating strategies, and gender roles. Pinker (1984, 1994) pursued this inquiry for the evolution of language. Even questions of aesthetic taste and moral judgment, prejudices, and preferences can be addressed in the same fashion. (See the selection of essays and topics in Barkow, Cosmides, & Tooby, 1992, and see Buss, 1999, 2005, and Buller, 2005.)

The difference from the sociobiological approach, again, is that the question is not whether these mental organs or domain-specific cognitive mechanisms are adaptive today but whether they were adaptive during the course of human evolution. The difference from the standard social science model (Tooby & Cosmides, 1992) is the search for a universal human nature beyond the cultural differences.

Conclusion and Future Directions

The view of nature and nurture in sociobiology and evolutionary psychology is and will remain a controversial issue within science. Since almost all fields of human activity can be situated in relation to the nature-nurture distinction, it is not possible here even to attempt an overview of the state of the art of the ongoing discussions in the fields of human intelligence, aggression, emotions, gender roles, life history, evolutionary esthetics, ethics, the cognitive sciences, and so forth.

However, while most of the empirical insights of evolutionary psychology and sociobiology in all the named fields cannot be ignored, their importance and fruitfulness for a full-fledged understanding of human nature have to be evaluated on a case-by-case basis (Kitcher, 1985). There is no clear method for giving a precise and general account of what in humans is mere nature and what can be attributed to mere nurture. Furthermore, all the important debates within the framework of Darwinism mentioned above affect the relevance and shape of sociobiological insights into human nature. Two important avenues of criticism can, however, be highlighted.

First, after a strong emphasis on the possibilities of biological reductionism, a countertendency can be observed that again distances humans from biological or genetic determination (Lewontin, Rose, & Kamin, 1984). “Gene determinism” has come under attack from social scientists and biologists alike. Alternative theories to explain eusociality have been proposed, and it might even be necessary to “rethink the theoretical foundation of sociobiology,” as Edward O. Wilson himself put it (Wilson & Wilson, 2007, p. 327). New theories suggest that it is a complex interplay of environment, development, and selection at the gene level, the individual level, and even the group or cultural level that shapes evolution (Jablonka & Lamb, 2005; Kerr & Godfrey-Smith, 2002; Okasha, 2006; Wilson & Wilson, 2007). These new multilevel approaches render a monocausal view more and more implausible; thereby, they also try to leave the dualistic nature-nurture divide and gene determinism behind (Lewontin et al., 1984; Oyama, 2000). If some of these theories can be sustained, then it might also be false to view culture as a merely evoked manifestation, because culture is then itself a causal factor, as expressed in the theories of gene-culture coevolution or dual inheritance (see Boyd & Richerson, 1985; Lumsden & Wilson, 1981; Richerson & Boyd, 2005). Since the first cultural exchanges in human history took place a very long time ago, it is furthermore plausible to assume that human nature has in part been shaped by human culture and that a causal interdependence or coevolution was in fact crucial in the evolution of humankind (see also Geertz, 1973, Chapters 2 and 3).

Second, along these lines we see a revival of the basic idea of the nature-requires-nurture theorem: Human nature may not be driven by a reduction of instincts, but it remains a plausible possibility that evolution itself has brought forth peculiar features of human nature that led to a take-off of cultural evolution. Humans’ ability to frame a complex theory of mind and to engage in an explicit “we intentionality” (Searle, 1983) of doing things intentionally and jointly together might be a special ability that enables cultural evolution, as comparative studies of human and primate cognition seem to suggest (Tomasello, 1999; Tomasello et al., 2005). It is plausible to assume that social cognition and the challenges of group life, along with the expansion of the human brain, neoteny, and so forth, gave rise to the peculiar and growing human dependence on culture. It may well be that a special ultrasociality or special tribal instincts fostered the shift to cultural evolution (see Richerson & Boyd, 2005). These approaches emphasize again the very special social or altruistic nature of humans, thereby distancing themselves from the selfish gene view (see also Sober & Wilson, 1998). To identify those uniquely human traits that enable the uniquely human cultural process, and to explain how these peculiar traits might have evolved given the logic of selection, is one of the most important tasks for future research.

Cultural evolution might then, nevertheless, be spelled out according to a Darwinian logic (analogical or biocentric), but it is clear that cultural evolution is in fact a much more Lamarckian process: Knowledge acquired by individuals can be faithfully transmitted and perfected over generations in a cumulative process (a “ratchet effect”); different traditions can be combined; new ideas can be brought forth not by the chance of random mutation but through thoughtful guesses and intentional advancement; and human technology might free us more and more from the competition for survival. All of these factors allow a much faster pace of change than biological evolution (Tomasello, 1999).

However, it remains true that shifting the power away from genes toward a somewhat more autonomous understanding of cultural evolution is not in itself a claim for any autonomy of human behavior. Cultural determinism might be just as dubious a view as biological determinism. Only a theory that incorporates the peculiar ability of humans to follow and question reasons, to live out and to distance themselves from their own drives, preferences, prejudices, influences, and predeterminations might be compatible with the fact that sociobiology, evolutionary psychology, cultural determinism, and so forth are themselves theories: theories that appeal to us because they might be true and convincing, and because we might be able to follow their reasoning.