James Allan Cheyne. Skeptic. Volume 15, Issue 2. 2009.
Over the course of the past century there have been three curious phenomena in the social sciences that are both significant and related: (1) the rise in IQ scores in developed countries, (2) the decline in religious belief and commitment in these same countries, and (3) the negative correlation between intelligence and religious belief. The connections among these three will seem obvious to some from the outset: people are getting smarter, smart people tend to reject irrational beliefs, hence with increasing intelligence more people become nonbelievers. Synoptically, I think this is about right. A closer examination of the data for each of the three phenomena and their historical trends, however, leads to a more complex and, I think, much more interesting understanding of intelligence, religious belief, and their relation.
The Flynn Effect
The rise in IQ scores in the developed nations over the 20th century has been dubbed the Flynn effect, after the social scientist J. R. Flynn, who first recognized it. The Flynn effect has now been observed in over 30 nations and appears to be consistent across age and levels of intelligence. The effect is remarkably strong, amounting to an average increase of approximately a third of a point of IQ per year, or about three points every decade. The Flynn effect is so strong that accepting it at face value leads to the apparent absurdity that our great-grandparents were so intellectually challenged that they were barely capable of independent living.
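The source of that apparent absurdity is a back-of-the-envelope extrapolation (purely illustrative, assuming the gain of about three points per decade held constant over a full century and taking today’s norms as centered on a mean of 100):

\[
3\ \text{points/decade} \times 10\ \text{decades} \approx 30\ \text{points}, \qquad 100 - 30 = 70 .
\]

Measured against today’s norms, an average adult of a century ago would thus score near 70, the conventional threshold for classifying intellectual disability.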
Flynn has, however, recently offered a hypothesis for the effect that eliminates this absurd deduction. I will argue that this new hypothesis is also capable of explaining the negative correlation between intelligence and belief, as well as at least part of the decline of religious belief and the growth of atheism over the same historical period.
Declining Religious Commitment and the Rise of Atheism
Although surveys on religion are fraught with methodological complications, findings documenting the collapse of religious belief and commitment in the developed world in the 20th century have been remarkably consistent. Religious belief, commitment, involvement, and influence waned substantially over the past hundred years. The results are often startlingly dramatic. In England, for example, church attendance dropped to less than one-third of prior levels.
Atheism is no longer rare. According to the sociologist P. Zuckerman, based on estimates from a number of international studies, “we can deduce that there are approximately 58 times as many atheists as there are Mormons, 41 times as many atheists as there are Jews, 35 times as many atheists as there are Sikhs, and twice as many atheists as there are Buddhists.” He also cites data suggesting that nonbelievers as a group have become the fourth largest belief group, world-wide, after Christianity, Islam, and Hinduism. (The growth of religious skepticism may be even greater because it turns out that although by definition atheism and unbelief are the same, people seem much more reluctant to self-describe as atheist. In a number of national surveys, roughly twice as many people state that they do not believe in God as describe themselves as atheists.) Perhaps most revealing is the loss of commitment to specific religions. Those who are not abandoning religion altogether are switching religious affiliation with a frequency bordering on the downright fickle.
IQ and Religion
Correlations between measures of intelligence and reported religious belief are remarkably consistent. Approximately 90% of all the studies ever conducted have reported a negative correlation. That is, as intelligence goes up, religious belief goes down. Moreover, not only does belief decrease from childhood to adolescence—suggesting a negative association between intelligence (or, strictly speaking, mental age) and religiosity—but the negative correlation also increases with age.
The Disease of the Learned
Atheism has, since the early years of the Enlightenment, been considered a “disease of the learned” and has been on the rise among the most learned and intelligent segments of the population. In a classic series of studies early in the 20th century, the psychologist James Leuba reported that scientists tended to be particularly irreligious. In the late 20th century Edward Larson and Larry Witham replicated Leuba’s study and found that the trend reported by Leuba had continued to the point where the function appears to have effectively reached its asymptote at around 7% believers among elite scientists (members of the National Academy of Sciences). Thus, one can conclude that theism has all but disappeared—or at least been reduced to marginality—in this highly intelligent group. Not only are NAS members intelligent, but I will argue they have developed their intelligence in very particular ways that explain the negative correlation between measured intelligence and religious belief.
Kinds of Minds
A clue to the resolution of the absurdity of intellectually challenged ancestors implied by a simple-minded interpretation of the Flynn effect lies in another rather odd observation, itself somewhat paradoxical at first glance. One test that most consistently and strongly reveals the IQ score increase is the Raven’s Progressive Matrices test, which was designed specifically to be “culture-free.” But the increase in IQ scores over the course of a single century must be a cultural effect. How can this be? Another test that shows strong IQ gains over time is the Similarities subtest from the Wechsler intelligence scales for adults and children (WAIS/WISC; e.g., how are dogs and rabbits alike?).
On the other hand, other subtests measuring factual knowledge and computational skills, such as Information (What is the capital of Canada?), Vocabulary, and Arithmetic, do not show very big gains. Thus, the Flynn effect does not appear to rest simply on better developed basic knowledge or improved computational and language skills. What is it about the Raven’s and the Similarities tests that leads to these differences? Well, they require the ability to deal with abstract categories and to think hypothetically, rather than simply displaying previously acquired knowledge and skills. Indeed, the attempt to make the Raven’s test “culture-free” by eliminating all culturally specific content produced an extremely abstract set of questions involving geometric forms. Ironically, as I will argue, this biases the test in favor of those who possess some very culturally specific intellectual abilities.
Flynn’s hypothesis is that succeeding generations in the developed world acquired increasing expertise in a particular kind of intelligence over the last century. He begins by considering the following classic study. In the years following the Russian Revolution, two Soviet psychologists, Lev Vygotsky and Alexander Luria, attempted to assess the impact of the introduction of different levels of education on the thinking of peasants. To do this, Luria visited several remote and previously largely illiterate villages of Uzbekistan and neighboring areas. Formal education at several levels was being introduced at that time in some of the “liberated” portions of the then new Soviet Union, providing an opportunity for a “natural experiment.” Luria subsequently provided detailed verbatim accounts of the reactions of the peasants (particularly those who had not been exposed to any of the new forms of education) to his questions.
In a typical exchange the questioner asks: “In the Far North, where there is snow, all bears are white. Novaya Zemlya is in the Far North and there is always snow there. What color are the bears there?” One peasant answers: “I don’t know. I’ve seen a black bear, I’ve never seen any others … We don’t talk about what we haven’t seen.” Exchanges of this sort could be repeated at length. In essence, the peasants refused, or were unable, to reason hypothetically. Similarly, when asked about similarities between objects, they tended to group them by similar or related use rather than by abstract categories. For them, a saw and a hatchet go together because they are both needed to make firewood, not because they are both tools (and, moreover, a log needs to be included in the group for utilitarian completeness). The people tested had adequate vocabularies and detailed knowledge about their world. The exchanges with the testers revealed that they were often quick-witted, clear thinkers. They were, however, not comfortable with abstract or hypothetical thinking and found such thinking to be alien. In their world, abstract categories and hypothetical thinking were, frankly, not perceived to be very useful, and even faintly preposterous. Sometimes their answers implicitly said as much. Even if such habits of thought had been potentially useful, no one was disadvantaged because no one else in the community thought in such ways either. Not having such habits of thought, they did not develop expertise in the abstract, categorical, and hypothetical (ACH) thinking assessed by the Raven’s and Wechsler Similarities tests.
Historically, neither peasants, nor laborers, nor tradespeople nor, indeed, practically anyone anywhere had much use for such skills prior to the 20th century, except philosophers, scientists, and perhaps a few others. Over the course of the 20th century, however, the widespread need for ACH thinking in developed countries increased steadily—as steadily, indeed, as the increase in measured intelligence. The only way one can develop expertise in this sort of intelligence is through a particular kind of formal education. The public schools, despite their limitations, are likely responsible for the Flynn effect.
As Flynn notes, our great-grandparents were, of course, not intellectually challenged and were, on average, about as good at tasks requiring practical intelligence as their great-grandchildren are now. Their ACH thinking was not, however, as well developed. To the extent that they did receive formal science education, it was certainly less than succeeding generations received and was almost certainly of poorer quality generally. More importantly, it was different in kind. Emphasis was placed on developing concrete intelligence (e.g., the three Rs). To the extent that our great-grandparents did get some training in ACH thinking, most of them did not need it for their daily lives. They knew it and their teachers knew it; but as the 20th century wore on, more and more of us needed it. Fewer and fewer among us could get along simply with reading, writing, and arithmetic. The nature of work changed in ways that required that we take ACH habits of thought seriously and continue to employ them in at least limited ways beyond the classroom and into the workplace. The new workplaces provided contexts for exercising the new habits of thought acquired in schools.
Science’s Most Dangerous Idea
It is important to stress that when we describe the Flynn effect as arising from a change in our habits of thought we are not arguing for a relativistic equating of all forms of thinking. If developing greater ability for ACH thinking helps meet cognitive challenges, then it is a real gain in intelligence. In the modern world, ACH reasoning is superior—and nations that have developed it are called the developed nations, those that are now beginning to provide it are called the developing nations, and those that fail to provide it are and will continue to be considered failed nations. The major area in education where students receive training in this form of thinking is science training. Admittedly, science training is hardly exemplary throughout the developed world. Yet poor as it may often be, it does generally encourage students to develop basic ACH thinking skills. Science, perhaps most fundamentally, rests on abstract, categorical and, most especially, hypothetical thinking. Science training has, moreover, increasingly emphasized going beyond mere hypothetical reasoning to conducting empirical tests of propositions: if we do X then we find Y, but not Z.
This last move is momentous. It opens the potentially gut-wrenching possibility that we may be wrong and that we may be found to be wrong. This is, I suggest, the route to true skepticism; not the dismissive rejection of somebody else’s beliefs on purely rationalistic grounds, but facing the possibility that you or, horrible to contemplate, even I might be empirically, factually, wrong. This is what the scientific experiment is all about. This is not to say that science requires us to be motivated to challenge our presuppositions. Most scientists are likely motivated by the prospect of corroborating their favored hypotheses. But the logic of the experiment inevitably raises the possibility of falsification. There is nothing quite so bracing as the thought that what one is about to do might actually prove one wrong. Like the prospect of hanging, it focuses the mind wonderfully. This challenge is seldom acknowledged directly, except occasionally among professional scientists. Yet it is implicit in hypothetical thinking.
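The underlying inference pattern is simply modus tollens; as a minimal sketch (with H standing for the hypothesis under test and Y for the observation it predicts, symbols chosen here only for illustration):

\[
H \rightarrow Y, \quad \neg Y \ \vdash\ \neg H
\]

However strongly an experimenter hopes to corroborate H, running the test commits her in advance to abandoning H should the predicted Y fail to appear.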
The Age of Reason and the History of Its Cultivation
Recall the finding mentioned earlier that the negative correlation between IQ and religious belief reaches adult levels around 15-18 years of age. This is the age range within which what the developmental psychologist Jean Piaget called formal thinking arises. Formal reasoning as Piaget discussed it is nothing more nor less than ACH thinking, and Piaget’s illustrations of formal thinking tended to be essentially experimental science demonstrations. We have now learned, contrary to Piaget’s initial intuitions, that such thinking requires a particular educational environment, one that was developed gradually over the course of the 20th century.
Now, there was little in the way of a science curriculum before 1900, and such explicit science training as existed was limited to a relatively small elite. Moreover, when the early science curricula were developed they initially emphasized “rote learning of facts and principles and were seen as concrete exercises for the hands, eyes, and senses.” Gradually, however, science education began to emphasize hypothesis formulation and experimental testing of those hypotheses. This trend increased over time, in part because of the substantial influence of Dewey, then Piaget, and later Vygotsky, on the development of science curricula, most notably in the direction of increasing emphasis on ACH thinking. Perhaps most important, these developments occurred as the school-leaving age was rising, ensuring that more students remained in school throughout the ages (adolescence) most receptive to the new training in ACH thinking.
Extending the Reach of Science
Also over the past century, scientific hypothetical thinking using abstract categories has been applied to broader and more varied domains, most significantly (for present purposes) by the social sciences in addressing cultural, environmental, and even interpersonal problems. Scientific thinking became less and less the exclusive domain of traditional physical science. This broadening of the application of ACH thinking provided opportunities for the new habits of thought to be carried over into virtually all aspects of life. The new habits of thought enabled us to understand and solve problems that were previously beyond us—problems that would never even have occurred to us without ACH thinking. As noted, scientists are famously, or notoriously, religious skeptics and, consistent with the present thesis, those who apply scientific thinking to social, cultural, and psychological problems are even more so.
ACH Thinking and Religious Belief
ACH thinking is a habit of thought that is inimical to accepting received knowledge of all sorts. I propose that, to the extent science teaching encourages the development of hypothetical thinking, we become true skeptics; not mere scoffers at the foibles of others, but those who truly know that we may all have something new to learn from the next new discovery. Entertaining doubts about the correctness of our own beliefs will not, by itself, guarantee that we will reject only false beliefs. Rather, all forms of understanding become vulnerable. This applies to scientific claims as well. Science itself was being critiqued in scientific style (if not substance) by postmodern arguments by the latter part of the 20th century. Nonetheless, science does have an advantage over other forms of understanding when questioned in this manner, as it is itself based on ACH thinking, in especial contrast to religious modes of thought. Science thrives in a cognitive environment of ACH thinking.
By contrast, mainstream religions lost much ground over the 20th century, often replaced by forms of belief that were equally irrational and equally vulnerable to the same doubts. This led to the instability of religious commitment observed earlier. ACH thinking therefore tends to have a leveling effect on religious hegemonies. This, I suggest, is behind the recent Pew survey report that people are changing religious affiliations at an unprecedented rate. Moreover, even those who remain within particular religious communities seem less inclined to accept, for example, blanket authoritarian pronouncements on moral and social problems.
Future of ACH Thinking
There are some indications that the Flynn effect (increasing IQ) has reached its limit in the developed world, but it may continue in the developing world, leading to the prediction that religion in the developing world may decline over the next few decades—at least in those nations in which ACH thinking is cultivated and also applied beyond the physical sciences. What the limits of ACH thinking are in this respect remains to be seen. Such limits will depend, in part, on just how deeply theistic thinking is grounded in human nature, as well as upon numerous practical and political events. The claim is sometimes made that there is a certain inevitability to religious and superstitious belief at a basic, intuitive level, that the intuitions of religion can be overcome only through explicit rational work, and that we are therefore always in danger of lapsing into superstitious modes of cognition. That may well be, but I do think that a figure of only 7% theistic belief in any group, such as that of eminent scientists, should give pause to those who think that the religious perspective is inevitably predetermined by human nature.