Cosmology and the Anthropic Principle

Victor J. Stenger. Science, Religion, and Society: An Encyclopedia of History, Culture, and Controversy. Editors: Arri Eisen and Gary Laderman. Volume 1. Armonk, NY: M.E. Sharpe, 2006.

Does Earth, and with it human life, possess a unique place in the cosmos? Or does human life exist within an unexceptional, almost negligible pinpoint of space and time? In recent years, new arguments have emerged that aim to establish human beings as the focus of existence. The anthropic principle, generally speaking, claims that the universe appears to be highly fine-tuned for human life.

The only form of life in the universe of which we are aware is that found on our home planet. That life is based on the chemical element carbon, whose four valence electrons and other properties make it particularly well suited as the framework on which molecules containing many atoms, with a wide range of complex shapes and properties, can be assembled. While other elements, such as silicon and germanium, have similar structures, and indeed are used in semiconductor technology to manufacture devices with complex properties of their own, carbon seems best suited for a form of life to evolve under the conditions that exist on Earth and perhaps elsewhere in our universe.

Whatever elements serve as the building blocks of complex structures, they did not exist in nature when our universe first formed 14 billion years ago. Cosmologists and physicists now have a reliable theoretical picture of those early stages. Beginning in a tiny region of space far smaller than an atomic nucleus, the universe emerged as an expanding ball of hot gas and radiation through a process known as the Big Bang. Within a few minutes, the nuclei of the lightest elements had formed; several hundred thousand years later, the gas had cooled to the point where atoms could hold together without being ionized by the radiation. Almost all of those primordial atoms were hydrogen, with a smaller amount of helium and a trace of lithium. These are the first three elements of the periodic table that hangs on the wall of many science classrooms. Over a period of perhaps 100 million years, these elements were gathered by gravity into the first stars.

Another billion years or so was needed for stars to produce the carbon and other ingredients needed for the evolution of life. The main source of energy production in stars is the fusion of hydrogen nuclei into helium. The larger a star, the faster it evolves. When its hydrogen is used up, other nuclear processes take over and synthesize carbon and the other heavier elements. If the star is at least ten times as massive as our sun, it will produce a gigantic explosion called a supernova, in the process blasting the newly made elements into interstellar space. Once there, this matter can be assembled, by gravity, into planets like Earth. And so, Earth formed about 4.5 billion years ago, more than 9 billion years after the Big Bang, with a heavy core of iron and a surface containing enough carbon, oxygen, and other substances needed for life to form. In 1953, astronomer Fred Hoyle calculated that sufficient carbon would not be produced in stars unless the nucleus of carbon contained a previously unknown excited state of a specific energy. A laboratory experiment proposed by Hoyle soon confirmed the existence of this state.
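The chain of reactions Hoyle was analyzing, known as the triple-alpha process, can be sketched as follows; the numerical values are the modern ones and are shown only for orientation.

```latex
% The triple-alpha process. Two helium nuclei fuse into unstable beryllium-8,
% which must capture a third helium nucleus before it falls apart.
\begin{align*}
  {}^{4}\mathrm{He} + {}^{4}\mathrm{He} &\;\rightleftharpoons\; {}^{8}\mathrm{Be}
      && \text{(lifetime of order } 10^{-16}\ \text{s)} \\
  {}^{8}\mathrm{Be} + {}^{4}\mathrm{He} &\;\rightarrow\; {}^{12}\mathrm{C}^{*}
      && \text{(the excited state Hoyle predicted, near 7.65 MeV)} \\
  {}^{12}\mathrm{C}^{*} &\;\rightarrow\; {}^{12}\mathrm{C} + \gamma
      && \text{(a small fraction cascades down to stable carbon)}
\end{align*}
```

Because the excited state lies just above the combined energy of a beryllium-8 nucleus and a helium nucleus, the second step proceeds as a resonance, greatly enhancing the rate at which stars make carbon.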

The properties of atomic nuclei are determined by fundamental constants of nature, such as the masses of the proton and neutron and the strength of the nuclear force. The values of these constants were already set billions of years before Earth formed, and indeed long before the formation of the first star. Yet these constants seem specially selected to allow for the eventual development of carbon-based life.

The Anthropic Coincidences

For many years, physicists have pondered why the constants of physics have the particular values they do. Perhaps the biggest puzzle is the huge difference between the strengths of the gravitational and electromagnetic force. Consider the hydrogen atom, which is composed of a proton and electron of equal and opposite electric charge. The electrical attraction between these two particles is thirty-nine orders of magnitude greater than the gravitational attraction. Why thirty-nine orders of magnitude? Why not 58 or 137?
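The thirty-nine orders of magnitude can be checked directly from the measured constants. The following minimal sketch, which assumes the CODATA values bundled with the SciPy library, computes the ratio of the electric to the gravitational attraction between the proton and the electron; because both forces fall off as the square of the distance, the ratio does not depend on how far apart the two particles are.

```python
# Ratio of the electric to the gravitational attraction between the proton
# and electron in a hydrogen atom. Both forces vary as 1/r^2, so the ratio
# is independent of the separation r.
from scipy.constants import e, G, m_e, m_p, epsilon_0, pi

coulomb = e**2 / (4 * pi * epsilon_0)   # numerator of Coulomb's law (SI units)
gravity = G * m_p * m_e                 # numerator of Newton's law of gravity

print(f"electric / gravitational = {coulomb / gravity:.2e}")   # about 2.3e+39
```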

What if the two forces—electrical attraction and gravitational attraction—were equal? A star is maintained in equilibrium by a balance between the attractive force of gravity and the pressure of the outgoing electromagnetic radiation that is produced by the nuclear processes going on at the star’s core. If gravity were the same strength as electromagnetism, a star would quickly collapse—long before any heavy elements could be made. So, once again, we seem to have a tuning of the parameters of physics to allow time for the elements of life to be fabricated and spread throughout space. The seeming connections between physics parameters and life are called the anthropic coincidences.

In their 1986 book The Anthropic Cosmological Principle, physicists John Barrow and Frank Tipler assembled a large number of examples to illustrate how the laws and constants of physics appear to be fine-tuned for the evolution of life as we know it. In many cases, changing a constant by a tiny amount is sufficient to make life as we know it impossible.

For example, the element-synthesizing processes in stars depend sensitively on the properties and abundances of deuterium (heavy hydrogen) and helium produced in the early universe. Deuterium would not exist if the difference between the masses of a neutron and a proton were just slightly displaced from its actual value. The relative abundances of hydrogen and helium also depend strongly on this parameter. These abundances also require a delicate balance of the relative strengths of gravity and the weak nuclear force, which is responsible for energy production in stars. With a slightly stronger weak force, the universe would be 100 percent hydrogen. In that case, all the neutrons in the early universe would have decayed, leaving none around to be saved in deuterium nuclei for later use in the element-building processes in stars. With a slightly weaker weak force, few neutrons would have decayed, leaving about the same numbers of protons and neutrons. In that case, all the protons and neutrons would have been bound up in helium nuclei, with two protons and two neutrons in each. This would have led to a universe that was 100 percent helium, with no hydrogen to fuel the fusion processes in stars. Neither of these extremes would have allowed for the existence of stars and life based on carbon chemistry.
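The role of the neutron-to-proton ratio can be made concrete with a standard bookkeeping estimate rather than a full nucleosynthesis calculation: if essentially every neutron that survives ends up locked inside a helium-4 nucleus, the helium mass fraction of the universe is fixed entirely by the neutron-to-proton ratio at the time the nuclei form. The sketch below shows the observed case and the two extremes described above.

```python
# Helium mass fraction Y implied by the neutron-to-proton ratio x = n/p at
# the time of nucleosynthesis, assuming every surviving neutron is bound
# into a helium-4 nucleus (two protons plus two neutrons): Y = 2x / (1 + x).
def helium_mass_fraction(n_to_p):
    return 2 * n_to_p / (1 + n_to_p)

print(helium_mass_fraction(1 / 7))  # 0.25: roughly the observed universe
print(helium_mass_fraction(0.0))    # 0.0:  all neutrons decayed, pure hydrogen
print(helium_mass_fraction(1.0))    # 1.0:  equal numbers, pure helium
```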

The electron also enters into the tightrope act needed to produce the heavier elements. Because the mass of the electron is less than the neutron-proton mass difference, a free neutron can decay into a proton, electron, and antineutrino. If this were not the case, the neutron would be stable and most of the protons and electrons in the early universe would have combined to form neutrons, leaving little hydrogen to act as the main component and fuel of stars. It is also essential that the neutron be heavier than the proton, but not so much heavier that neutrons cannot be bound in nuclei, where conservation of energy prevents the neutrons from decaying.
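The energy bookkeeping behind this statement is simple enough to verify. A minimal sketch, again assuming the CODATA values from SciPy:

```python
# Energetics of free-neutron decay: n -> p + e- + antineutrino.
# The decay is allowed only because the neutron outweighs the proton by
# more than one electron mass (the antineutrino mass is negligible).
from scipy.constants import m_n, m_p, m_e, c, e

MeV = 1e6 * e  # one mega-electron-volt expressed in joules

mass_gap = (m_n - m_p) * c**2 / MeV   # about 1.29 MeV
electron = m_e * c**2 / MeV           # about 0.51 MeV

print(f"neutron-proton mass difference: {mass_gap:.3f} MeV")
print(f"electron rest energy:           {electron:.3f} MeV")
print(f"energy released in the decay:   {mass_gap - electron:.3f} MeV")
```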

The Cosmological Constant

Another puzzling example of an anthropic coincidence is the cosmological constant problem. Soon after Einstein wrote down his equations of general relativity in 1915, he realized that they allowed for the possibility of gravitational energy stored in the curvature of empty space-time. This vacuum curvature is expressed in terms of what is called the cosmological constant. The familiar gravitational force between material objects is always attractive. A positive cosmological constant produces a repulsive gravitational force. That is, anti-gravity is allowed by Einstein’s theory.

At the time, Einstein and most others assumed that the stars formed a fixed, stable system, or in the language of the Bible, a stable firmament. A stable firmament is not possible with attractive forces alone, so Einstein thought that the repulsion provided by the cosmological constant might balance things out. However, when Edwin Hubble discovered that the universe was not a stable firmament but expanding, the need for a nonzero cosmological constant was eliminated. Over the years since, the data gathered by astronomers have indicated that the cosmological constant is at most very small and very possibly zero.

Elementary particle physicists, however, have found that they cannot understand why the cosmological constant should be so small—the corresponding energy density is far lower than expected from theoretical estimates. Calculations indicate that the resulting vacuum energy density is at least 120 orders of magnitude higher than the astronomically observed upper limit for this energy. Those calculations are clearly wrong.
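A back-of-the-envelope version of those estimates shows where the mismatch comes from: a naive quantum estimate places the vacuum energy density near the Planck density, while the astronomical upper limit is of the order of the critical density of the universe, roughly 10⁻⁹ joules per cubic meter. The sketch below, assuming that round number for the upper limit, reproduces a gap of more than 120 orders of magnitude.

```python
# Crude illustration of the cosmological constant problem: compare the Planck
# energy density (a naive estimate of the vacuum energy density) with an
# assumed astronomical upper limit of about 1e-9 J/m^3.
from math import sqrt, log10
from scipy.constants import hbar, c, G

planck_energy  = sqrt(hbar * c**5 / G)               # about 2e9 J
planck_length  = sqrt(hbar * G / c**3)               # about 1.6e-35 m
planck_density = planck_energy / planck_length**3    # about 5e113 J/m^3

upper_limit = 1e-9                                   # J/m^3, assumed round number

print(f"mismatch: about 10^{log10(planck_density / upper_limit):.0f}")  # ~10^123
```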

Dark Energy

In 1998, two research groups studying distant supernovae were astonished to discover, against all expectations, that the expansion of the universe is accelerating. More recent observations have confirmed this result. The universe, to put it simply, is falling up! This can only be explained by a gravitational repulsion. While the notion of antigravity may seem like science fiction, gravitational repulsion is allowed by general relativity. Einstein’s equations give a gravitational repulsion whenever the pressure of a gravitating medium is sufficiently negative. A positive cosmological constant is one way to achieve this—but not the only way.

The observations of an accelerating universe indicate that whatever is producing this repulsion represents 70 percent of the total mass-energy of the universe. This component has been dubbed dark energy to distinguish it from the gravitationally attractive dark matter that constitutes another 26 percent of the mass-energy. Neither of these ingredients is visible, nor can they be composed of ordinary atomic and subatomic matter like quarks and electrons. Familiar luminous matter, as seen in stars and galaxies, makes up only 0.5 percent of the total mass-energy of the universe, with the remaining 3.5 percent residing in ordinary but nonluminous matter like planets.

This result makes the cosmological constant problem even more problematic. The energy densities of dark energy, dark matter, and familiar matter are currently of the same order of magnitude. If the dark energy resides in a cosmological constant, the corresponding energy density is constant. On the other hand, the energy densities of matter and radiation fall as the universe expands. So we have another anthropic coincidence, in which the cosmological constant appears to have been fine-tuned to single out our particular epoch, a window of a few billion years in the 14 billion years of our universe’s existence. Thus, more ammunition is supplied to those who would see humanity as occupying a special place in the cosmos.
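The “why now” character of the coincidence follows from how each energy density scales with the expansion. In the sketch below the scale factor a is set to 1 today, the matter density is diluted as the cube of a while a cosmological constant stays fixed, and the starting values are the rough present-day fractions quoted above; the two densities are then comparable only in a narrow window of cosmic history around the present.

```python
# How the dark-energy and matter densities compare at other epochs, if the
# dark energy is a cosmological constant. Densities are in units of the
# present critical density; a = 1 corresponds to today (rough values assumed).
rho_lambda_now, rho_matter_now = 0.70, 0.30

def density_ratio(a):
    """Dark energy over matter at scale factor a: constant versus diluted by volume."""
    return rho_lambda_now / (rho_matter_now / a**3)

for a, label in [(1 / 1100, "recombination"), (0.1, "early galaxies"),
                 (1.0, "today"), (10.0, "far future")]:
    print(f"{label:>15}: dark energy / matter = {density_ratio(a):.1e}")
```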

Versions of the Anthropic Principle

In 1974, Brandon Carter introduced what he, to his later regret, called the “anthropic principle.” He proposed two versions. His weak anthropic principle states: “We must be prepared to take into account the fact that our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers.” His strong anthropic principle says: “The Universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage.” These ideas offered an explanation, of sorts, of the anthropic coincidences. If the universe were not the way it is, we would not be here to talk about it.

However, many authors have read deeper meanings into the anthropic principle and presented their own versions. Barrow and Tipler, for example, proposed three versions. The first two rephrase Carter’s wording. Their weak anthropic principle reads: “The observed values of all physical and cosmological quantities are not equally probable but take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirement that the Universe be old enough for it to have already done so.” Note that Barrow and Tipler require the existence of “carbon-based life” while Carter simply refers to the existence of “observers.” Barrow and Tipler’s strong anthropic principle reads: “The Universe must have those properties which allow life to develop within it at some stage in its history.” Note that all three authors say that the universe “must” have the properties that allow for the creation of life, or at least observers. Thus, the strong anthropic principle seems to imply some intent or purpose within the universe.

Barrow and Tipler offer three possible reasons for the strong anthropic principle:

(A) There exists one possible Universe “designed” with the goal of generating and sustaining “observers.”

(B) Observers are necessary to bring the Universe into being.

(C) An ensemble of other different universes is necessary for the existence of our Universe.

Reason (A) has two further possibilities, depending on the meaning of “designed.” Many authors with religious agendas have interpreted the designer as a creator God, although nothing in the above discussion requires that this God be the one of any particular faith. Indeed, “design” might be interpreted as a purely natural process, perhaps an evolutionary one akin to the design inherent in Darwinian natural selection or just some structure built into the universe that science has not yet explained.

Reason (B) arises from a mystical misinterpretation of quantum mechanics that has formed the basis of a large literature in recent years. In quantum mechanics, the detection apparatus plays a large role in determining the outcome of an experiment. When you measure a particle property, such as a localized position, then the object being measured is described as a particle. When you measure a wave property, such as wavelength, the object being measured is described as a wave. This has led some to suggest that a human observer creates her or his own reality. So why not the universe itself?

However, nothing in experiment or theory supports this interpretation. No human need be involved in physical processes and no incompatibility exists in the wave-particle descriptions used in quantum theory. Obviously the act of observation requires an interaction with the system being observed, and the theory must properly take that into account when, as occurs on the atomic and subatomic scale, this interaction cannot be neglected.

Reason (C) introduces the notion that multiple universes exist and that we simply happen to live in the one that is suitable for the evolution of our kind of life.

Barrow and Tipler round out their various anthropic principles with the final anthropic principle: “Intelligent information processing must come into existence in the Universe, and, once it comes into existence, it will never die out.”

How Fine-tuned?

Many scientists find the whole anthropic argument circular, or at least posed in a rather twisted way. Of course the constants of nature are suitable for our form of life. If they were not, we would not be here to talk about it.

Consider the fact that we live on Earth, rather than Mercury, Venus, Mars, or some other planet in the known solar system. The temperature range on Earth is just right for life, while Mercury and Venus are too hot and Mars is too cold. Mercury has no atmosphere, while the atmosphere of Venus is too thick for the sun’s rays to penetrate to the surface, and the atmosphere of Mars is too thin to provide sufficient oxygen and water.

Earth’s atmosphere is transparent to the same spectrum of light to which our eyes are sensitive. Anthropic reasoning would have it that the atmosphere was fine-tuned so that humans and other animals could see at a distance. The transparency also happens to match the spectral regions within which the electromagnetic radiation from the sun is maximal. Again, anthropic reasoning would attribute this to design with humans in mind.

But, rather obviously, life evolved on Earth because conditions here were right. The type of life that evolved was suitable for those conditions. Life did not evolve, to the best of our current knowledge, anywhere else in the solar system because of the unsuitability of any of the other planets that orbit our sun. But with 100 billion stars in 100 billion galaxies in the visible universe, and countless others likely to lie beyond our horizon, the chances seem good for some form of life developing on some planets somewhere. Indeed, many of the chemical ingredients of life, such as complex molecules, have been observed in outer space. Of course, we will not know for sure until we find such life.

Still, we expect any life found in our universe to be carbon based, or at least based on heavy element chemistry. The fine-tuning argument implies that this is the only form of life that is possible, and that is a huge assumption. Even if all the forms of life in our universe turn out to be of this basic structure, it does not follow that life is impossible under any other arrangement of physical laws and constants.

Some physicists imagine that someday we will be able to derive all physical laws and parameters from a few basic principles, that only one set will be shown to be possible in an ultimate “theory of everything.” We have not reached that stage, and another possibility has also been widely discussed.

Many “laws” of physics can be shown to follow naturally from basic symmetries of space and time and the requirement that physical theories must first of all be objective, that is, they must not be formulated in such a way that they single out the point of view of a particular observer. These laws include some of the most important: the great conservation principles such as energy conservation, Newton’s laws of motion, special and general relativity, and quantum mechanics. Both gravity and electromagnetism also follow from this simple principle. Thus these principles can be expected to apply in any conceivable universe that would be formed in the absence of any external agency.
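The link between symmetry and conservation law appealed to here is Noether's theorem: every continuous symmetry of the laws of physics implies a conserved quantity. The standard correspondences, stated informally:

```latex
% Noether's theorem: each continuous symmetry implies a conservation law.
\begin{align*}
  \text{invariance under translation in time}  &\;\Longrightarrow\; \text{conservation of energy} \\
  \text{invariance under translation in space} &\;\Longrightarrow\; \text{conservation of momentum} \\
  \text{invariance under rotation}             &\;\Longrightarrow\; \text{conservation of angular momentum} \\
  \text{invariance under a change of gauge}    &\;\Longrightarrow\; \text{conservation of electric charge}
\end{align*}
```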

Furthermore, some of the fundamental constants that are often used in the fine-tuning arguments are conventions whose values are arbitrarily chosen in order to define the system of units being used. For example, the speed of light in a vacuum defines the unit of length in terms of the international standard unit of time, the second. Similarly, the Planck constant and Newton’s gravitational constant are arbitrary and so cannot be fine-tuned.
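One way to see why these particular constants cannot be fine-tuned is that they can always be absorbed into the choice of units; only dimensionless combinations, such as the fine-structure constant or the proton-to-electron mass ratio, have values that are independent of any unit convention. A minimal sketch computing two such combinations from the SciPy constants:

```python
# Dimensionless combinations of constants are the only candidates for
# fine-tuning; c, h, and G by themselves can each be set to 1 by a suitable
# choice of units.
from scipy.constants import e, hbar, c, epsilon_0, pi, m_p, m_e

alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)           # fine-structure constant
print(f"fine-structure constant:    1/{1 / alpha:.1f}")  # about 1/137
print(f"proton/electron mass ratio: {m_p / m_e:.1f}")    # about 1836
```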

On the other hand, certain fundamental parameters may be the random, uncaused result of a process called spontaneous symmetry breaking. This includes the masses of the elementary particles, the relative strengths of the forces by which they interact, and other details that enable the universe to develop complex structures. In this picture, our universe starts out in a highly symmetric state but certain symmetries are broken as the universe expands and cools.

The mechanism is essentially that of the familiar phase transitions that occur as a gas cools to a liquid and then a solid. At each transition, the system moves from a more symmetric to a less symmetric state, but one with more structural complexity and apparent organization. For example, as the temperature decreases, a cloud of water vapor composed of randomly moving molecules becomes a liquid with more orderly molecules, and then a solid crystal of ice in which the molecules occupy fixed positions.

One symmetry that is expected to have existed in the very early stages of the universe was a balance between matter and antimatter. If that symmetry had been maintained, the current universe would be almost pure radiation resulting from matter-antimatter annihilation. No structures such as stars, planets, and living organisms would exist. In fact, the early universe contained roughly one extra particle of matter for every billion particles of antimatter; that tiny excess survived the annihilation, and the structures it formed face no further threat of annihilation. To achieve this, the symmetry between matter and antimatter had to have been broken within the first few minutes of the Big Bang. This might have happened by means of some dynamic process that could be described as lawlike. However, spontaneous symmetry breaking—random rather than dynamic—remains a viable alternative.

The highly successful standard theory of elementary particles and forces contains about twenty parameters that are neither arbitrary nor predetermined by known fundamental principles but must currently be inferred from experiments. However, only a few parameters are needed to specify the broad features of the universe. One of these features, which we may reasonably deem as necessary for the ultimate evolution of life, is the existence and long lifetime of stars. Recall that large stars need to live about a billion years or more to allow for the fabrication of heavy elements. Smaller stars, such as our sun, also need about a billion years to allow life to develop within their systems of planets.

In my own research, I have studied how the minimum lifetime of a typical star depends on three parameters: the masses of the proton and electron and the strength of the electromagnetic force. If we vary these parameters by ten orders of magnitude around their present values, we find that over half of the resulting universes will have stars with lifetimes exceeding a billion years. Anthony Aguirre has examined the universes that result when six cosmological parameters are varied by orders of magnitude and found that they do not preclude the existence of intelligent life.

One of the mistakes made by those who claim that the constants of nature are fine-tuned for life is to vary one constant while holding all the others fixed. As Aguirre and I have independently shown, changes in other parameters may compensate for the change in a selected parameter, allowing more room for a viable, livable universe than might otherwise be suspected. We and others have concluded that the fine-tuning is not as fine as some have argued.
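The point about compensation can be illustrated with a deliberately artificial example. The quantity examined below is the electric-to-gravitational force ratio from the earlier sketch; treating it as the thing that must be preserved is only a stand-in for the real stellar-lifetime calculations described above, but it shows how a large change in one constant can be undone by a change in another.

```python
# Toy illustration of compensating parameter changes (not the actual
# calculations by Stenger or Aguirre). The figure of merit is the
# electric-to-gravitational force ratio in hydrogen, which depends on the
# constants only through the combination e^2 / (G * m_p * m_e).
from scipy.constants import e, G, m_e, m_p, epsilon_0, pi

def force_ratio(charge, gravity):
    return charge**2 / (4 * pi * epsilon_0) / (gravity * m_p * m_e)

baseline = force_ratio(e, G)

# Strengthen gravity ten-thousandfold with everything else fixed:
# the ratio drops by four orders of magnitude.
print(f"{force_ratio(e, 1e4 * G) / baseline:.0e}")        # 1e-04

# Make the same change to gravity but also increase the electric charge a
# hundredfold: the ratio returns to its original value.
print(f"{force_ratio(100 * e, 1e4 * G) / baseline:.0e}")  # 1e+00
```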

Quintessence

The cosmological constant problem has a very specific, plausible solution that does not require fine-tuning. As we have seen, the cosmological constant problem is really a vacuum energy problem. Estimates made for the vacuum energy density exceed the empirical upper limit by over 120 orders of magnitude. We can safely conclude that those estimates are grossly wrong. In fact, a closer look at these calculations reveals that they are very crude and based on unsupportable assumptions. Furthermore, until we have a working theory of quantum gravity, we have no way of making such a calculation reliable. Certainly this is a “problem,” but not anything so anomalous as to require drastic solutions at our current level of knowledge. There are even some good reasons for thinking that the vacuum energy density is in fact zero.

In that case, what is the dark energy? Theoretical physicists have proposed models in which the dark energy is not a vacuum energy, but rather a dynamic, material energy field dubbed quintessence. This field has negative pressure and repulsive gravity. The energy density of quintessence is not constant but evolves along with the other matter/energy fields of the universe and, thus, need not be fine-tuned. While the questions of the nature and amount of dark energy and vacuum energy in the universe are still open, they do not point to any particularly strong need to invoke anthropic arguments.

The Multiverse

Usually we think of the “universe” as encompassing all that is. However, modern usage of this term is gradually changing to refer to all that arose from the original Big Bang 14 billion years ago. This includes all that is within our “horizon,” that is, objects near enough to Earth that their light can reach us within the age of the universe. It also includes anything that might be beyond that horizon but arose from the same source. Indeed, if modern inflationary cosmology is correct, the part of our universe within our horizon, all 100 billion galaxies, is an almost negligible fraction of the total.

Stupendous as this seems, there may be much more. Nothing in our current knowledge requires that this universe, our universe, is all there is. While we cannot see through the chaos of the first moments of the Big Bang, the same equations that describe that event and the expanding universe that resulted work equally well for the time before what we call “t = 0.” Another universe could very well have existed at that time, contracting from our point of view to the tiny region that then exploded into our universe.

In fact, since the direction of time is not an inherent property of nature, appearing in no physics equations but merely a convention we choose as the direction of increasing entropy, time would run in the opposite direction to ours in that universe on “the other side of time.” And given that so much of what happens in our universe is the result of random, spontaneous processes, we would not expect that universe to be exactly like ours.

But this is not all. Modern cosmology suggests that the same process that produced our universe could happen in many other places outside our universe. None of these universes, which form what is called the multiverse, would be identical. They would be expected to have many of the same basic “laws” as ours, such as energy conservation, which follow from symmetries. But we can conjecture that these other universes will differ in those properties that are accidental, the result of spontaneous symmetry breaking.

In this case, the anthropic coincidences are readily explained by Barrow and Tipler’s proposal (C) discussed above. There exists an ensemble of universes with a wide range of properties, and we are in the one that had those properties necessary for carbon-based life to form. In short, the universe is not fine-tuned for us. We are fine-tuned to our universe.