Gauging Earthquake Hazards with Precariously Balanced Rocks

James N Brune, Matthew D Purvance, Abdolrasool Anooshehpoor. American Scientist. Volume 95, Issue 1. Jan/Feb 2007.

Wandering through the hills, the smell of sage lingering in the air, we scour the terrain for clues, for indications. This is a reconnaissance mission, one of hundreds we’ve mounted over the past 15 years through the sparsely populated, arid regions of southern California and Nevada. Right now we are near Palmdale, California, just east of the San Andreas fault, where the North American and Pacific tectonic plates grind sideways past each other with a stick-slip motion that every once in a while creates a large earthquake. A roadrunner, trailing off toward the horizon, leads our attention to a distant rock outcrop. As we approach it, we cannot help thinking what it would have been like to be standing here on January 9th, 1857.

Back on that day, the Fort Tejon earthquake, a magnitude-8 (or so) temblor, ripped past this spot. During this great quake, the two sides of the fault lost their hold and suddenly slipped several meters over a distance of some 400 kilometers. Models used by the U.S. Geological Survey (USGS) imply that the ground shook violently at this locale, 18 kilometers from the San Andreas—so violently that, had we been standing here at the time, we might have been thrown off our feet. Yet that conclusion is hard to reconcile with the dozens of precariously balanced boulders we see still standing in the area. Presumably such tippy rocks would have tumbled over had the ground motions been that severe.

We have tested this commonsensical surmise about such boulders, here and elsewhere, with experiments and numerical simulations, all in an effort to make the observations of precariously balanced rocks a quantitative tool for assessing seismic hazards. We are now quite confident that such stones can indeed provide valuable information, once seismologists like ourselves figure out how to get these age-old witnesses of past earthquakes to reveal their secrets.

Mapping Hazards

Over the years, many scientists have sought to predict earthquakes by investigating premonitory signs, such as fluctuations in the level of water in wells, variations in the magnetic field, even the anomalous behaviors of animals. None has been truly successful. Instead, seismologists and engineers have grown to rely on an approach that C. Allin Cornell of Stanford University proposed in 1968. He suggested that investigators focus on estimating the level of ground shaking that one can expect to take place at a site over an extended time period, a technique called a probabilistic seismic-hazard analysis. The USGS has used this method to construct seismic-hazard maps for all of the United States. Such maps don’t say when the next earthquake will happen, but they are nevertheless enormously helpful.

Suppose that you own a piece of land in the Los Angeles basin and want to build a house on it. Under the newest building codes, your proposed dwelling must meet certain guidelines; in general, it must be able to withstand the maximum level of ground shaking that your property can be expected to experience over a 2,500-year period. How do you know what level that is?

Without the USGS maps, that task would be formidable. First, you would need to figure out which faults could produce earthquakes that would affect your location. You needn’t consider faults more than, say, 500 kilometers away, because the amplitude of ground shaking diminishes with distance from the source. Accounting for the exact location of the site of interest, namely your building lot, is important, because it is at different distances from the various regional faults and because it has a specific geological setting. If the ground below your proposed house is hard (that is, if bedrock comes near the surface), the shaking from an earthquake will generally be less there than if your property is underlain by unconsolidated materials. (Unfortunately for you, the Los Angeles basin is filled with soft sediments and acts just like a big bowl of jelly when shaken, which will jiggle your house-to-be quite a bit.) With these pieces of information, it is possible to estimate the amplitude of ground shaking to expect from a given earthquake using various formulas, ones empirically derived from recordings of quakes throughout the world.

The basic idea behind probabilistic seismic-hazard analysis is that, with certain simplifying assumptions, you can determine the number and magnitude of earthquakes that can be expected to take place on each of the relevant faults over time. But how? A good starting point might be to measure the absolute locations of two points on either side of a fault where the two flanks are being dragged relative to each other. The rate of their separation is called, naturally enough, the slip rate. In very rare spots where the two sides of the fault slip freely past each other, there is little potential for spawning earthquakes. The danger comes in places where the fault is locked, the usual case. Here elastic-strain energy accumulates, energy that can prove quite destructive when it gets released all at once during an earthquake.

Let’s assume that such catastrophes indeed happen from time to time on the fault of interest. The total amount of slippage between the two sides of this fault in, say, 500 years must take place during the various earthquakes that occur during that period. But will there be lots of tiny earthquakes or just a few big ones? This uncertainty may seem insurmountable, but here empirical observations again come to the rescue. In the 1940s, Caltech seismologists Beno Gutenberg and Charles Francis Richter found a relation between the frequency of earthquakes and their magnitudes: roughly speaking, each one-unit drop in magnitude corresponds to a tenfold increase in the number of earthquakes. So it is possible to use this knowledge, along with the slip rate and some generalizations about faulting mechanics, to infer the rate of earthquakes as a function of magnitude. Doing so for all the relevant faults allows you finally to determine the rate at which ground motions of certain intensities will happen on your land (using accepted models for how the amplitude of ground shaking diminishes with distance from an earthquake source).
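This relation, now known as the Gutenberg-Richter law, says that the logarithm of the number of earthquakes falls off linearly with magnitude. A few lines of Python make the bookkeeping concrete (the constants a and b below are hypothetical, chosen only for illustration; real values must be fit to a regional earthquake catalog):

```python
# Illustrative sketch of the Gutenberg-Richter relation:
#   log10 N(M) = a - b*M,
# where N(M) is the annual number of earthquakes of magnitude >= M.
# The values a = 4.0 and b = 1.0 are made up for this example.

def annual_rate(magnitude, a=4.0, b=1.0):
    """Expected number of earthquakes per year at or above `magnitude`."""
    return 10 ** (a - b * magnitude)

for m in (4, 5, 6, 7):
    rate = annual_rate(m)
    print(f"M >= {m}: about {rate:g} per year (one every {1 / rate:g} years)")
```

With b near 1, as is typical, the tenfold trade-off is built in: each step up in magnitude makes events ten times rarer, which is why a fault’s slip budget can be spent either on many small quakes or a few large ones.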

As you might guess, there is considerable uncertainty in working out the many components of such a probabilistic seismic-hazard analysis. For example, the faults contributing to the risk you’re trying to calculate are not necessarily all known. Indeed, recent evidence suggests that there is a buried fault right beneath the Los Angeles metropolitan area, termed the Puente Hills blind-thrust fault. Yet current seismic-hazard assessments for the Los Angeles basin do not account for this possible earthquake source. In fact, from the ground-motion attenuation models to the knowledge of slip rates to the use of the Gutenberg-Richter relation, uncertainties abound, and those uncertainties will translate into dollars when you finally build your Los Angeles dream house.

The study of precariously balanced rocks provides an opportunity to test the predictions of probabilistic seismic-hazard assessments. The field observations will never replace such assessments, but they should spur refinements to that methodology, and improve the understanding of earthquakes in general.

Rocky Start

Back in the desert, we scan the granitic outcrop before us, spotting a few candidates: slender silhouettes of stone highlighted by the early morning sun. Well before these rocks ever saw the light of day, they were shaped by weathering processes deep underground. Water percolating through fractures in what was once a relatively intact piece of buried bedrock preferentially broke down the minerals adjacent to the cracks. The ever-present forces of erosion eventually lowered the ground level and removed this weathered material, leaving individual stones or stacked piles of rocks, which in the desert can remain standing for many millennia. On occasion, one of these boulders has just the right size, orientation and shape to end up standing precariously.

Countless passers-by have noticed oddly balanced boulders, and more than a few people (including the authors when younger) have tried, often successfully, to tip such rocks over. There’s no doubt that many observers have correctly guessed that strong earthquake shaking would have toppled such stones, which goes to show that a sufficiently large earthquake has not recently occurred nearby. The paleontologist and geologist Nathaniel Southgate Shaler of Harvard University, a student of the famous scientist Louis Agassiz, suggested just this in his early work Aspects of the Earth: A Popular Account of Some Familiar Geological Phenomena, which was published in 1890. Prior to 1991, when one of us (Brune) started studying the subject, such surmises lacked a clear understanding of the time scale involved and the potential value of such sightings for quantifying seismic hazards.

The work began after Brune observed a number of precariously balanced rocks while seeking out suitable sites for placing seismometers near the designated nuclear-waste repository at Yucca Mountain in Nevada. Following these field excursions, Brune attended a professional meeting of geologists and was struck by a presentation that Charles D. Harrington of Los Alamos National Laboratory and John W. Whitney of the USGS gave on erosion rates at Yucca Mountain. These investigators studied the clay, manganese and iron oxides that are deposited by wind on rocks in arid environments—something called “desert varnish.” Harrington and Whitney indicated that the presence of desert varnish is a clear sign that a rock has been exposed on the surface of the earth for thousands of years. It hit Brune immediately that the precarious rocks he had observed were covered with these dark coatings, suggesting that such stones might provide important clues about the long-term seismic hazards at Yucca Mountain.

Thus began the search for these elusive objects near that site. Not long after, Brune realized it would also be worthwhile to look for precarious rocks in seismically active parts of southern California.

To gauge how long such boulders had survived, he, along with colleagues John W. Bell of the Nevada Bureau of Mines and Geology, Tanzhuo Liu of Columbia University, Marek Zreda of the University of Arizona, Tucson, and James C. Yount of the USGS, considered both the character of the rock varnish and the deposition of certain special nuclides, ones that result from the constant barrage of cosmic rays on Earth’s surface. After erosion exhumes buried stones, they are exposed to these high-energy particles, producing uncommon nuclides such as ¹⁰Be, ²⁶Al or ³⁶Cl, from which geologists can estimate exposure ages and even the erosion rate. Determining exposure ages involves sampling both precarious rocks and their pedestals. In California’s Mojave Desert and near Yucca Mountain these ages have invariably proved to be older than 10,500 years, and several of these rocks have been standing upright for more than 30,000 years. In addition, evidence from this study suggests that these precarious rocks have not changed shape (which is related to their stability when shaken) significantly over the past 10,500 years. Contrast these ages with the approximately 100-year instrumental record of earthquakes, and you can imagine the way that Brune felt back in 1991 when he realized the potential these rocks held for gauging long-term seismic hazards.

The next task was to determine just what kind of shaking was necessary to topple a given boulder. One can think of precariously balanced rocks as rigid blocks that can tip to and fro about their points of contact with the supporting surface. There is a rather rich and interesting history of research focused on the analysis of blocks overturned during earthquakes. The Irish physicist and seismologist Robert Mallet attempted to use fallen objects to investigate the 1857 Neapolitan earthquake in his 1862 treatise. He found that the more slender the object, say, a stone monument or a brick column, the easier it is to overturn, which of course makes sense. Shortly thereafter in Japan, the seismologists John Milne and Fusakichi Omori shook carts on which slender columns were placed, calibrating their potential for overturning. In 1927, physicist Paul Kirkpatrick showed experimentally and theoretically that the formula Mallet used does not reveal the ground acceleration required for overturning but, instead, yields the minimum acceleration required to force a rigid block into rocking motion.
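Kirkpatrick’s distinction is easy to quantify for an idealized rectangular block: rocking begins only when the ground acceleration exceeds g times the ratio of the block’s half-width to the height of its center of mass; overturning requires still more. The short sketch below (our own illustration, with made-up dimensions) shows why slender objects are so much more sensitive:

```python
# A minimal sketch of Kirkpatrick's point: for a rigid block of
# half-width b and center-of-mass height h, the ground must
# accelerate past g*(b/h) merely to start the block rocking.
# The dimensions below are hypothetical.

G = 9.81  # gravitational acceleration, m/s^2

def min_rocking_acceleration(half_width, cm_height):
    """Peak ground acceleration (m/s^2) needed to initiate rocking."""
    return G * (half_width / cm_height)

# A slender monument: 0.2 m half-width, center of mass 1.5 m up.
slender = min_rocking_acceleration(0.2, 1.5)
# A squat block of the same height but with a much wider base.
squat = min_rocking_acceleration(0.6, 1.5)
print(f"slender: {slender:.2f} m/s^2, squat: {squat:.2f} m/s^2")
```

Tripling the width of the base triples the acceleration needed to start the block rocking, which matches Mallet’s intuition even though his formula described the onset of rocking rather than outright overturning.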

Up to that point, workers had implicitly assumed that no energy is lost as a block transitions from rocking about one bottom edge to the other, an assumption that, it turns out, violates the conservation of momentum. In 1934, two Japanese investigators, H. Kimura and K. Iida, corrected this error, although their results, published in Japanese, were overlooked in the West. In the early 1960s, George W. Housner, an engineer at Caltech, independently derived Kimura and Iida’s result in an effort to investigate the possibility of estimating the intensity of ground shaking from the toppling of various man-made objects: for example, the large transformers used for the distribution of electric power or elevated water tanks seen in many Midwestern towns. Two of us (Anooshehpoor and Brune) worked with Baoping Shi (formerly of the University of Nevada, Reno) to generalize some of Housner’s overturning results.

The nonlinear, nonsmooth equations governing rocking motion are not directly solvable without simplifying assumptions that may not be applicable to earthquakes. In 1980, Solomon C. Yim, now a professor of civil engineering at Oregon State University, and two coworkers proposed a practical way around this thorny problem: numerical simulations. These investigators shook blocks virtually on the computer, applying ground motions of increasing intensity. Strangely enough, finding that a block topples after exposure to a given history of ground acceleration does not guarantee overturning when it is shaken by exactly the same pattern of movement with time but with a larger peak amplitude. Indeed, the back-and-forth motion of the base coupled with the complex to-and-fro rocking behavior of the block produces extraordinarily complex responses. Hence stability does not scale simply with the amplitude of ground shaking. How then can one ever hope to use the observations of precariously balanced rocks to say anything about the size of the earthquakes they have survived?

Playing the Odds

The key to moving forward was to consider more than just the problem of a given block (or rock) and a given pattern of shaking. One has to consider a wide range of possible ground motions and to work out the odds that the block in question will topple. When they did so, Yim and his coworkers found that the overturning probability indeed increases with increasing amplitude of shaking.

Yim’s work, though suggestive of a method to obtain practical information from precarious rocks, was limited in scope by the paltry computing power available at the time. Such calculations are, of course, much more practical to carry out nowadays. In this vein, one of us (Purvance) carried out extensive numerical simulations of dozens of blocks exposed to more than 1,000 synthetic shaking histories, a task that took months of number crunching on a network of computers. Purvance investigated all sorts of characteristics that might contribute to overturning potential. Along with the peak acceleration of the ground and the geometrical parameters of the block, he found that the probability of tipping could be ascertained from the ratio of the peak ground velocity to the peak ground acceleration.

The reason for this result is not hard to understand. A freestanding object falls over when its center of mass shifts sideways beyond the points supporting it (the rocking points), unless the base on which it rests quickly displaces to return its center of mass to a safe configuration (just imagine balancing a baseball bat vertically on the palm of your hand). To get a block to fall over, the ground must accelerate in one direction for a relatively long time, which allows the velocity and overall displacement to build up, even if the peak acceleration is small. Conversely, a motion that has a large peak value of acceleration is not likely to topple a block when the direction of motion switches back and forth so rapidly that velocity and displacement never grow very large. Keeping track of the ratio of the peak ground velocity to peak ground acceleration thus proves a convenient way to describe the propensity of ground shaking to overturn a block.
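A pure harmonic motion makes the point concrete: if the ground accelerates as a sine wave, the ratio of peak velocity to peak acceleration is simply the reciprocal of the angular frequency, so slow, sustained shaking automatically has the larger ratio. The sketch below (an illustration with arbitrary frequencies, not data from any particular earthquake) computes this:

```python
import math

# For harmonic shaking a(t) = A*sin(2*pi*f*t), the peak velocity is
# A/(2*pi*f), so PGV/PGA = 1/(2*pi*f) regardless of amplitude:
# lower-frequency motion always has the larger ratio.
# The frequencies below are illustrative only.

def pgv_over_pga(frequency_hz):
    """PGV/PGA ratio (in seconds) for a pure sinusoidal ground motion."""
    return 1.0 / (2.0 * math.pi * frequency_hz)

for f in (5.0, 1.0, 0.2):  # from a rapid rattle to a slow, sustained push
    print(f"{f:>4} Hz: PGV/PGA = {pgv_over_pga(f):.3f} s")
```

A 0.2-hertz push thus carries twenty-five times the PGV/PGA ratio of a 5-hertz rattle with the same peak acceleration, which is why it is so much more dangerous to a balanced rock.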

What do motions with either a high or low value of this ratio feel like? A good way to answer the question is with a “shake table,” a stainless-steel platform riding on a bed of pressurized air and driven by hydraulic actuators. These engineering marvels are used to test the response of structures to a wide range of earthquake ground motions. Mimicking the magnitude 7.6 Chi-Chi earthquake, which struck Taiwan in 1999, would illustrate things nicely.

As that earthquake propagated from south to north, there was a substantial increase in the ratio of peak ground velocity to peak ground acceleration. So if one programmed a shake table to simulate the motion experienced in the south, the platform would rattle around as if you were standing on the top of a washing machine, but you would have no trouble remaining upright. Program the table to reproduce the shaking up north and things would be very different. The ground motion there contained lower-frequency shaking, with significantly longer periods during which the acceleration remained in one direction. So before you begin the next simulation, brace yourself; this one might be a bit rough. As those low-frequency oscillations hit, it may feel as though that playground bully from your childhood pushed you in the chest, sending you onto your rear.

We have carried out just such shake-table exercises with the assistance of Patrick Laplace and Paul Lucas at the University of Nevada’s Large Scale Structures Laboratory. Experiments with regularly shaped objects were invaluable for validating the parameterization that Purvance had worked out earlier from his computer simulations. We also tested three large hunks of granite to investigate the overturning of real rocks. These small boulders were asymmetric and had irregular bases. We found that these stones fell over more readily than predicted on the basis of the numerical calculations for regular blocks, because the little bumps on the bottom allowed rocking motion to initiate more easily than if it only took place about the outermost basal edges.

Instead of repeating the numerical simulations with more realistic rock shapes, we decided that the best way to proceed was to measure the bumps and try to make a rock-specific correction. Happily, we found that when posed in this probabilistic fashion, the overturning behavior of a complex block is equivalent to that of a more slender block with simple contact conditions. We merely had to use the distance between rocking points, rather than the actual width of the boulder, to obtain a favorable comparison with the numerical simulations. With this correction in hand, our previously derived parameterization predicts the overturning responses of real boulders quite well.

But how can one determine where the supporting bumps are without disturbing a precariously balanced rock in the field? After all, we don’t want to topple them to map out their bumps, because doing so would ruin their usefulness for future generations of seismologists. We eventually figured out a solution. Slowly tipping a regularly shaped block results in a restoring force that is large at the start but then decays with increasing angle of tilt until the center of mass moves over the rocking point, when the force goes to zero. With real rocks, however, the restoring force does not diminish in such a smooth fashion. Instead, the force may vary, increasing and decreasing erratically until tipping takes place on the outermost rocking point. So by pushing on a rock and measuring the restoring force, one can determine the location of the outer rocking points without having to push the thing all the way over.
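For an idealized rectangular block, this restoring behavior can be written down exactly, which is what makes the deviations measured on a real boulder so informative. The sketch below (purely illustrative geometry and mass, not a field measurement) shows the smooth decay that a regular block would exhibit; a real boulder’s force curve instead jumps erratically as contact shifts from bump to bump:

```python
import math

# Quasi-static tilt of an idealized rectangular block of mass m,
# half-width b and center-of-mass height h. The restoring moment
# about the bottom edge at tilt angle theta is
#   M(theta) = m * g * R * sin(alpha - theta),
# with R = sqrt(b**2 + h**2) and alpha = atan(b/h). It decays
# smoothly to zero when the center of mass passes over the edge.
# All dimensions below are made up for illustration.

G = 9.81  # m/s^2

def restoring_moment(mass, half_width, cm_height, theta):
    """Restoring moment (N*m) at tilt angle theta (radians)."""
    R = math.hypot(half_width, cm_height)
    alpha = math.atan2(half_width, cm_height)
    return mass * G * R * math.sin(alpha - theta)

# A 2,000-kg boulder, 0.5 m half-width, center of mass 1.2 m up:
for theta in (0.0, 0.2, 0.39):
    m = restoring_moment(2000, 0.5, 1.2, theta)
    print(f"theta = {theta:4.2f} rad -> restoring moment = {m:8.1f} N*m")
```

Because the moment falls monotonically to zero at the tipping angle, a force-versus-tilt curve that instead rises and dips part-way through betrays interior bumps, and the outermost rocking points can be located without ever pushing the rock past the point of no return.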

This approach seemed great to us in theory, but we were a little worried about what might happen when we tried it in the field. One of us (Anooshehpoor) has contributed fundamentally to this effort, which involves measuring the restoring force with a digital load cell and the tilt with an inclinometer. The real trick is figuring out how to push and pull on massive rocks with wire cables, nylon straps, chains, pulleys, winches, hydraulic pistons, the kinds of ground anchors used to hold down circus tents, 4×4 blocks of wood and various other pieces of heavy-duty paraphernalia. Every case is different, and each requires a great degree of ingenuity and caution, especially because most of the precariously balanced rocks we study are at sites that are inaccessible by car. Does science get any better than this?

Once the measurements of restoring force are done, we can say something quantitative about ground motions that would overturn a precariously balanced boulder. In general, a large number of earthquakes have shaken the ground under such a rock over the many millennia it has been standing. Suppose for a moment that large earthquakes, such as the Fort Tejon event, dominate the hazard from ground shaking at sites such as the one we visited near Palmdale. Suppose, in addition, that earthquakes of this magnitude recur once every 200 years on average, which is perhaps an underestimate of their true frequency. And let’s say that the precarious rocks we found near Palmdale have been in place for at least 10,000 years. Then these rocks must have been shaken by at least 50 large earthquakes during their long lives. Suppose, finally, that these 50 earthquakes have all moved the ground in different ways (so that these episodes of shaking can be considered to be independent of one another in a probabilistic sense) but that the peak ground motion amplitudes were the same.

For a given rock, call it boulder A, the probabilities that each of the ground motions from these 50 earthquakes would knock it over are nearly the same. We’ll start with a simple example, supposing that there is a 50-50 chance that each of the ground motions would overturn boulder A. Determining the probability that this rock would survive these 50 earthquakes without falling over is equivalent to flipping a coin 50 times and getting 50 heads in a row. The probability of such an occurrence is very small indeed, around 0.0000000000001 percent.

It would be logical to conclude that ground motions with such a chance of knocking over boulder A have not happened 50 times over the past 10,000 years. Indeed, had boulder A been shaken this hard only seven times, the probability that it would have survived upright is less than 1 percent. So one must conclude that the 50 earthquakes this site experienced during the past 10 millennia were somewhat smaller, perhaps with amplitudes that were sufficient to knock boulder A over 1 out of 10 times. The probability that this rock would remain standing in the face of such shaking would still be very small, less than 1 percent. For there to be an even chance that boulder A would survive 50 earthquakes, each must be no bigger than what is required to overturn boulder A about 1 out of 100 times.
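This survival arithmetic is simple enough to check with a few lines of Python (an illustration of the reasoning above, not part of our field analysis):

```python
# If each of n independent earthquakes topples a boulder with
# probability p, the chance that it is still standing afterward
# is (1 - p)**n, just like flipping n heads in a row.

def survival_probability(p_topple, n_quakes):
    """Probability a boulder survives n independent shakings upright."""
    return (1.0 - p_topple) ** n_quakes

print(survival_probability(0.50, 50))  # 50-50 shaking, 50 quakes
print(survival_probability(0.50, 7))   # same shaking, only 7 quakes
print(survival_probability(0.10, 50))  # 1-in-10 shaking, 50 quakes
print(survival_probability(0.01, 50))  # 1-in-100 shaking, 50 quakes
```

Running this gives survival probabilities of roughly 9 × 10^-16, 0.008, 0.005 and 0.6, respectively: only the last case, shaking just strong enough to topple the boulder about 1 time in 100, leaves it an even chance of still standing after 50 earthquakes.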

Reality Check

What these precarious rocks were telling us initially seemed consistent with the established seismic-hazard assessments, so long as we considered only the earthquakes that might have struck during the past few centuries. But when we considered longer intervals (commensurate with the age of these boulders), something seemed definitely amiss: The hazard assessments often suggested that an earthquake of sufficient size to topple the rocks we observed should have occurred at least once in the past few millennia. Yet the standing stones showed that none had. Later, using our more refined methods, we were able to make much more quantitative comparisons. But still we found that there was often significant disagreement, with precarious rocks suggesting that zones near major faults may be less prone to intense shaking than the USGS estimates imply.

Part of the mismatch can be attributed to the way the USGS investigators calculated the attenuation of ground motion with distance from the causative fault. The current seismic-hazard maps are based on various ground-motion models, including a scheme Norman A. Abrahamson (of the Pacific Gas and Electric Company) and Walter J. Silva (of Pacific Engineering Analysis) published in 1997. But recent earthquakes in Turkey, Taiwan and Alaska produced ground motions that were smaller than what was predicted by this and the other models that the USGS used, suggesting that the current maps may overestimate the seismic hazard.

Abrahamson and others are now developing new ground-motion models based in part on data from recent earthquakes that predict less intense shaking close to faults. Presumably future USGS seismic-hazard estimates will use Abrahamson’s new model (or something like it), in which case they may become more consistent with the constraints we have derived from our observations.

In the meantime, we are trying to formulate ways to augment existing seismic-hazard assessments based on our finding precarious rocks at sites where current hazard maps suggest such stones should not be found. This work may help to determine the level of danger that ground shaking from earthquakes truly poses for these, and by inference, other locales.

There is much still to be done, but we are convinced of the value of this approach. Without many more near-source recordings of very large earthquakes, which may take many decades to collect, precariously balanced rocks will remain one of the few independent ways to test seismologists’ calculations of seismic hazard.