Geologic Ages and Dating Techniques

David King Jr. Scientific Thought: In Context. Editor: K Lee Lerner & Brenda Wilmoth Lerner. Volume 2. Detroit: Gale, 2009. 


Earth is about 4.5 billion years old. Geologists divide this age into major and minor units of time that describe the kinds of geological processes and life forms that existed in them. Earth’s geologic record was formed by constant change, driven by processes much like those that occur routinely today. Though some events were catastrophic, much of Earth’s geology was influenced by normal weather, erosion, and other processes spread over very long geologic ages. Accurate dating of the geologic ages is fundamental to the study of geology and paleontology, and provides important context to the life sciences, meteorology, oceanography, geophysics, and hydrology.

Historical Background and Scientific Foundations

In the mid-seventeenth century, James Ussher (1581-1656), Archbishop of Armagh in Ireland, compiled a chronology of Earth by adding up the generations named in the Bible. He determined that Earth was created the night before October 23, 4004 BC. This would make the world about 5,650 years old in Ussher’s day and about 6,000 years old now.

Although Ussher based his calculations on a painstaking analysis of many literary sources in addition to the Bible, his was not a scientific investigation. His chronology represented the common belief among Christians of his time that biblical events, including the creation account in Genesis, happened exactly as they were written. By the nineteenth century it had become a popular opinion among scientists and scholars that Earth was created in a single event and that its short history was altered only by the great biblical flood.

Ussher’s chronology was widely accepted at the time that early geologic investigators began their work. In the 1750s, however, Giovanni Arduino (1714-1795), an Italian professor studying mining and surveying, began to realize that different kinds of rocks had been deposited at different times in history. He divided the many different kinds of rocks that he studied into four broad categories: Rocks from the primary age, consisting of igneous or metamorphic rocks at the cores of mountains, were the first to be deposited. Rocks from the secondary age were sedimentary layers deposited on the sides of mountains, on top of primary layers. Deposits from the tertiary age were made up of hills of gravel and sand, i.e., broken pieces of rocks that were formed in earlier ages. The quaternary age is the current era, in which rocks and soils are still being deposited.

Arduino’s work did not create an absolute time scale, but more of a relative chronology, a sequence of events without distinct time spans imposed on it. He described which deposits had to be older than others by understanding that sedimentary rocks, gravels, and sands are made up of older rocks that have been broken apart to make up new materials. Although Arduino’s chronology was broad and relative, it is still generally accepted as correct and has been incorporated into modern methods of geologic dating.

The nineteenth century was an important period in the study of geology. Many important discoveries, particularly in Britain, laid the foundation for a scientific investigation of Earth. A Scottish scholar and farmer named James Hutton (1726-1797) introduced many of the fundamental geological concepts that led to a scientific understanding of Earth’s age and development.

A curious man, Hutton sought to understand how the geological features that he saw around him had come to be. He recognized that for one type of rock to intrude on another, the first rock formation must have been molten. This violated the prevailing theory that most rocks were deposited from the waters of the biblical flood. Hutton called his theory that all rocks were the result of volcanic activity “Plutonism,” after Pluto, the Roman god of the underworld. Scientists know today that only igneous rocks are formed from a molten state, and only sedimentary rocks are deposited under bodies of water.

One enduring idea that James Hutton introduced, however, was the principle of uniformitarianism. He argued that processes such as volcanism, weather, and erosion that happen today have been occurring at the same rate and intensity throughout Earth’s history. This led him to realize that Earth must be much older than the 6,000 years calculated by Ussher. Uniformitarianism was an important breakthrough in the study of Earth’s geologic ages, since it means that natural laws can be determined and expected to apply in the same ways over time.

The law of superposition had been formulated in the seventeenth century by Danish geologist Nicolaus Steno (1638-1686); Hutton applied this principle when studying Siccar Point, a place where upturned shale layers were covered with a horizontal layer of sandstone. Since the sandstone must have formed after the shale solidified, the sandstone must be younger than the shale. This law is central to understanding geologic dating, because it can be applied to individual layers, kinds of fossils, or even whole geologic ages: Newer material is deposited on top of older. Therefore, since fossils of trilobites are found in layers far below those of birds, it can be concluded that trilobites lived in an age long before birds appeared.

The law of superposition ties in tightly with the law of floral and faunal succession, which states that different kinds of plants and animals (flora and fauna) occurred in a specific and identifiable order wherever they are found. The English geologist William Smith (1769-1839) developed this law while working as a surveyor for mining and canal-building companies in the late 1700s and early 1800s. By descending into mines, Smith was able to study the rock layers, or strata, noting their composition and the kinds of fossils preserved in them. This led him to conclude that the order of strata and fossils was consistent even in different places. Though not all layers were visible in any one place, Smith, traveling widely as a surveyor, pieced together data from different locations. This allowed him to establish a relative geological chronology and create his greatest work, a geological map of England. However, the development of an absolute chronology relating the distant past to the present was not yet possible.

In the nineteenth century, using the work of Hutton, Smith, Arduino, and others, it became possible to divide geologic time into eons, eras, periods, and epochs based on the kinds of rock present, the layers they fell into, and the kinds of life present in them. Though geologists and paleontologists could put most of the ages of Earth’s past in order, they still had to estimate how long ago those events happened and how long they lasted. The development of a measured absolute time scale would have to wait until the twentieth century.

Currently, the International Commission on Stratigraphy recognizes four eons and more than 70 eras, periods, and epochs within them. Eons are usually agreed to be the largest division of time on the geologic scale. Until recently, the three most distant eons, the Hadean, the Archean, and the Proterozoic, were collectively referred to as the Precambrian eon. Use of this term has since been officially discontinued, but still occurs occasionally, especially in older texts.

The Hadean eon began 4.5 billion years ago, when Earth formed from debris as the solar system coalesced, and ended about 3.8 billion years ago. At the beginning of this eon, Earth was molten. As it cooled, it separated into layers, producing the core, mantle, and crust that exist today. A favored scientific theory holds that the moon was also formed at this time, when a massive object the size of Mars collided with Earth. Shortly thereafter, Earth was pelted with meteorites during the late heavy bombardment, increasing the environment’s hostility to life. Chemical analyses have supported the theory that oceans formed during the Hadean eon, although because of atmospheric conditions they were more acidic than they are today. There is no evidence of life from the Hadean eon, and it is so distant in time that only a few scattered samples of minerals, and no complete rock formations, survive.

The Archean eon, extending from 3.8 to 2.5 billion years ago, was characterized by extreme geologic activity, which formed the first protocontinents. Because radioactive materials were present in amounts far greater than today, the heat from their decay drove volcanism and other geologic activity at extraordinary rates. Geologists do not agree on whether plate tectonics were active during the Archean eon or whether Earth’s crust was even made up of plates at this time. However, smaller land masses were developing and being destroyed and rocks were forming at much higher temperatures than are possible today. The atmosphere was radically different as well, containing very little oxygen. Certain kinds of iron ores formed that would not have been possible had oxygen been present in the atmosphere. Most modern life forms could not have survived in the oxygen-poor atmosphere of the Archean eon.

Despite the lack of oxygen, the first kinds of life arose in the Archean: anaerobic bacteria that formed into mats, columns, or cones called stromatolites. Many different kinds of fossilized stromatolites have been found in different rock formations around the world, giving geologists a great deal of information about the Archean eon. They show that different kinds of bacteria lived together in an ecosystem, that some bacteria used photosynthesis to generate energy from the sun while others relied on different sources, and that areas of both shallow and deep water were present. Although the fossil evidence from the Archean is limited, all the life forms discovered so far have been single-celled prokaryotes that lack a nucleus.

The Archean was followed by the Proterozoic, occurring between 2.5 billion and 545 million years ago. During this eon life began to transform into types that we recognize today, changing Earth along with it. Shallow seas formed and the atmosphere began to change as well. During the Paleoproterozoic era (the earliest part of the Proterozoic eon) an event known as the oxygen catastrophe occurred: a relatively sudden increase in the amount of available oxygen which was the result of a complex chain of events. Since the Archean eon, early bacteria had been excreting oxygen as a waste product. Initially, most of the oxygen was consumed in the oxidation of minerals and metals such as iron. As the amount of unoxidized iron began to decrease, the amount of oxygen in the atmosphere increased. This poisoned some types of anaerobic Archean bacteria, but spurred others to use oxygen in their metabolism, a much more efficient way of processing energy. Aerobic organisms became dominant in the Proterozoic eon.

The Mesoproterozoic era (the middle part of the Proterozoic eon) saw the development of eukaryotes, single-celled organisms with a nucleus. During the end of the Neoproterozoic era (the most recent part of the eon), in a division known as the Ediacaran period, the earliest complex multicellular organisms appeared. These soft-bodied creatures appear to have lived on the bottom of shallow seas, not unlike modern corals or sponges. They were diverse in size, structural complexity, shape, and symmetry. The Ediacaran period is the most recently recognized of all the eons, eras, and periods, named for the Ediacara area in Australia, where many of the fossils have been found.

Alongside the rapidly changing life forms of the Proterozoic eon, significant geological processes were occurring. The supercontinent called Rodinia formed at the end of the Stenian period in the Mesoproterozoic. The first ice ages occurred during the Proterozoic eon. At times ice may have covered Earth entirely, a hypothesis known as “snowball Earth.”

The end of the Proterozoic is marked by a dramatic event in the fossil record known as the Cambrian explosion. At this time, a remarkable increase in the numbers and types of species is seen, as well as the first hard-bodied animals, i.e., those with shells. The Cambrian period marks the beginning of the Phanerozoic eon, the eon of “visible life” spanning the past 545 million years and continuing today. During this time, life evolved from the simplest sponges, jellyfish, and worms to include almost everything we can think of that is alive today.

Geological periods during the Phanerozoic are divided into smaller epochs based on changes in the kinds of life that appear in the fossil record. The larger number of fossilized species present and the relatively short period of time since their deposit allow this more precise dating. The largest divisions of the Phanerozoic eon are the Paleozoic, Mesozoic, and Cenozoic eras. Each lasted for millions of years and each is broadly characterized by the degree of development that the life within it has undergone.

The Paleozoic is divided into the Cambrian, Ordovician, Silurian, Devonian, Carboniferous (which is sometimes divided into the Mississippian and Pennsylvanian subperiods), and Permian periods. Each of these is further divided into several epochs, some named for places where their major characteristics were discovered, others simply divided into early, middle, and late epochs. During the Paleozoic era, insects, plants, the first vertebrate animals, amphibians, reptiles, fish, sharks, and corals all appeared. Often, it is the changes in the kinds of animals and plants that are used to decide boundaries between the different periods.

Despite the emphasis on life in describing the various ages of the Paleozoic, geologic processes were still under way. Supercontinents formed and broke apart, several ice ages advanced and retreated, temperatures fluctuated, and sea levels rose and fell. These diverse processes influenced the many changes in life that are recorded in the fossils of the era—coal deposits in Europe laid down during the Carboniferous period are one of its more famous features. At the end of the Paleozoic era, a disastrous event known as the Permian-Triassic extinction led to the destruction of almost all Paleozoic species. Though there have been efforts to link this extinction to a meteorite impact, no convincing evidence of a large enough collision during this time period has been found.

Dinosaurs appeared during the Mesozoic era. The names of the periods in the Mesozoic era may sound familiar: Triassic, Jurassic, and Cretaceous. During this 180 million-year era, all the familiar dinosaurs such as triceratops, tyrannosaurus, stegosaurus, diplodocus, and apatosaurus flourished at different times. Some modern animals have ancestors that first appeared during the Mesozoic era, including birds, crocodiles, and mammals. Plants continued to develop, and the first flowering plants appeared.

The end of the Mesozoic era can be seen clearly in some rock layers. Known as the K-T (Cretaceous-Tertiary) boundary, this dark line of sediment is rich in the element iridium. Another massive extinction of species occurred at this time, possibly because of one or more meteorite impacts along with a period of intense volcanic activity. This would have decreased the amount of sunlight reaching Earth’s surface, killing plants and, eventually, animals. Not all geologists and paleontologists are convinced that the K-T extinction was a catastrophic event; some argue that it occurred over a few million years after slower climate changes.

The Cenozoic era, the current era of geologic time, is divided into the Paleogene and Neogene periods, and further into the Paleocene, Eocene, Oligocene, Miocene, Pliocene, Pleistocene, and Holocene epochs. During the Cenozoic, the last fragments of the supercontinent Gondwana separated, and the continents reached their current positions. Several ice ages occurred, and the poles became ice-covered. The first mammals began to flourish in the Paleocene; the first apes appeared in the Miocene; and the first human ancestors in the Pliocene. Modern humans, along with large animals such as mammoths and wooly rhinoceroses, appeared in the Pleistocene. The Holocene epoch, currently ongoing, began with the end of the last ice age, less than 10,000 years ago.

Though this vast span of time was largely understood by the end of the nineteenth century, geologists, paleontologists, and scientists of other disciplines were still curious about Earth’s absolute age, using different approaches to tackle the problem. In the 1860s William Thomson (1824-1907), more commonly known as Lord Kelvin, applied his theories of thermodynamics to determine Earth’s age. He surmised that Earth was between 20 and 40 million years old by calculating the time it should take for it to cool from a liquid to a solid. Though his calculations and some of his assumptions were correct, he failed to account for heat added by radioactivity. Around the turn of the twentieth century, Irish geologist John Joly (1857-1933) estimated Earth’s age by analyzing the salt content of the seas. He assumed that the oceans had started off as freshwater, and that all their salt had washed into them from the land. His method also relied on the assumptions that the rate of salt coming into the oceans was constant and that no salt had ever been removed from the seas. By this calculation he arrived at an age of about 100 million years.

Scientists needed a method that relied on something measurable over Earth’s entire lifespan. In rocks older than about 600 million years, it becomes impossible to use fossils to calculate their age because very few, if any, exist in these rocks. There are, however, a number of naturally radioactive elements that have been decaying since the formation of Earth. With the discovery of radiation and the calculation of half-lives in the twentieth century it finally became possible to determine the age of Earth’s oldest rocks.

Radioactive decay is the spontaneous transformation of an atom’s nucleus through the emission of particles or energy. The decaying atom becomes either a different isotope of the same element or, more often, a different element altogether. The ratio of the original parent isotope to the daughter isotope produced by decay indicates how long the material has been decaying. The half-life of an isotope is the amount of time it takes for half of a sample to decay. In 1906, New Zealand-born British physicist Ernest Rutherford (1871-1937) discovered that uranium and thorium decayed into isotopes of lead. By 1907 Bertram Boltwood (1870-1927), an American chemist studying radioactive materials, had calculated the age of certain rocks based on analysis of their radioactivity.
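The link between the parent-to-daughter ratio and elapsed time follows directly from the definition of a half-life. A minimal sketch of the arithmetic (the function name and sample values are illustrative, not from the original text):

```python
import math

def age_from_ratio(daughter_to_parent: float, half_life: float) -> float:
    """Elapsed time, given the measured ratio of daughter atoms to
    surviving parent atoms, assuming every daughter atom came from decay
    and none were added or removed.

    Each half-life halves the parent population:
        parent_now = parent_0 * (1/2) ** (t / half_life)
        daughter   = parent_0 - parent_now
    so daughter/parent = 2 ** (t / half_life) - 1, which we solve for t.
    """
    return half_life * math.log2(1.0 + daughter_to_parent)

# After exactly one half-life, half the parent atoms have decayed,
# so the daughter-to-parent ratio is 1.
print(age_from_ratio(1.0, 5730))  # one carbon-14 half-life: 5730.0 years
print(age_from_ratio(3.0, 5730))  # ratio 3 = two half-lives: 11460.0 years
```

The same relation works for any parent/daughter pair; only the half-life changes.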

Radiometric dating, a well-regarded way to establish the age of rocks, is still based on the same principles laid out by Rutherford and Boltwood. It assumes that the half-lives of elements do not change over time, and that the sample has not been contaminated by the addition or removal of radioactive material. Zircon crystals are usually analyzed because they trap uranium in their structure. Analyzing the decay of uranium to lead is useful because the half-life of uranium-235 is about 704 million years. Even longer dates can be measured with potassium-to-argon decay, with a half-life of 1.3 billion years, and rubidium-to-strontium decay, with a half-life of about 50 billion years. Carbon-14 dating is useful for measuring very short ages on the geologic time scale. With a half-life of 5,730 years, carbon-14 decay can measure dates up to about 70,000 years. This makes the method particularly useful for dating samples from the Holocene and late Pleistocene epochs.
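The practical ceiling on carbon-14 dating comes out of the same half-life arithmetic: after a dozen half-lives, too little of the isotope survives to measure reliably. A brief sketch (constant and function names are illustrative):

```python
import math

C14_HALF_LIFE = 5730.0  # years

def c14_age(remaining_fraction: float) -> float:
    """Age of a sample from the fraction of its original carbon-14
    that remains: remaining = (1/2) ** (t / half_life), solved for t."""
    return -C14_HALF_LIFE * math.log2(remaining_fraction)

print(round(c14_age(0.5)))   # half remaining = one half-life: 5730
print(round(c14_age(0.25)))  # a quarter remaining = two half-lives: 11460

# At 70,000 years (about 12 half-lives), only ~0.02% of the original
# carbon-14 is left -- near the limit of what instruments can detect.
print(f"{100 * 0.5 ** (70000 / C14_HALF_LIFE):.3f}% remaining")
```

Isotopes with far longer half-lives, such as uranium and rubidium, extend the same logic to rocks billions of years old.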

Modern Cultural Connections

Radiometric dating is the key to developing and understanding an absolute time scale of Earth and its geologic ages. When geological events, rock formations, and individual species can be placed accurately in time, it becomes possible to understand their relationships to each other and to events and circumstances present today. Many scientific disciplines rely on an understanding of the geological past to make accurate observations and predictions. Some of these sciences, like meteorology, hydrology, and oceanography have important roles to play in understanding and possibly mitigating the effects of global climate change and population growth. By studying climate changes in the past or uncovering the reasons for mass extinctions, it might be possible to foresee disasters and figure out how to avert them.