Mason A Porter, Norman J Zabusky, Bambi Hu, David K Campbell. American Scientist. Volume 97, Issue 3. May/Jun 2009.
Four years ago, scientists around the globe commemorated the centennial of Albert Einstein’s 1905 annus mirabilis, in which he published stunning work on the photoelectric effect, Brownian motion and special relativity—thus reshaping the face of physics in one grand swoop. Intriguingly, 2005 also marked another important anniversary for physics, although it passed unnoticed by the public at large. Fifty years earlier, in May 1955, Los Alamos Scientific Laboratory (as it was then known) released technical report LA-1940, titled “Studies of Nonlinear Problems: I.” Authored by Enrico Fermi, John Pasta and Stanislaw Ulam, the results presented in this document have since rocked the scientific world. Indeed, it is not an exaggeration to say that the FPU problem, as the system Fermi, Pasta and Ulam studied is now universally called, sparked a revolution in modern science.
Time After Time
In his introduction to the version of LA-1940 that was reprinted in Fermi’s collected works in 1965, Ulam wrote that Fermi had long been fascinated by a fundamental mystery of statistical mechanics that physicists call the “arrow of time.” Imagine filming the collision of two billiard balls: They roll toward each other, collide, and shoot off in other directions. Now run your film backwards. The motion of the balls looks perfectly natural—and why not: Newton’s laws, the equations that govern the motion of the balls, work equally well for both positive and negative times.
Now imagine the beginning of a game of billiards—actually, American pool—with the 15 balls neatly racked up in a triangle and the cue ball hurtling in to send them careening all over the table. If we film the collision and the resulting havoc, no one who has ever held a pool cue would mistake the film running forward for it being run in reverse: The balls will never regain their initial triangular arrangement. Yet the laws governing all of the collisions are still the same as in the case of two colliding billiard balls. What then gives the arrow of time its direction?
For reasons that we will explore further below, Fermi believed that the key was nonlinearity—the departure from the simple situation in which the output of a physical system is linearly proportional to the input. He knew that it would be far too complicated to find solutions to nonlinear equations of motion using pencil and paper. Fortunately, because he was at Los Alamos in the early 1950s, he had access to one of the earliest digital computers. The Los Alamos scientists playfully called it the MANIAC (MAthematical Numerical Integrator And Computer). It performed brute-force numerical computations, allowing scientists to solve problems (mostly ones involving classified research on nuclear weapons) that were otherwise inaccessible to analysis. The FPU problem was one of the first open scientific investigations carried out with the MANIAC, and it ushered in the age of what is sometimes called experimental mathematics.
The phrase “experimental mathematics” might seem like an oxymoron: Everyone knows that the validity of mathematics is independent of what goes on in the physical world. Nevertheless, FPU’s original investigation can very reasonably be described as the birth of experimental mathematics, by which we mean computer-based investigations designed to give insight into complex mathematical and physical problems that are inaccessible, at least initially, using more traditional forms of analysis.
Today, computational studies of complex (typically nonlinear) problems are as commonplace as they are essential, and the computer has taken its rightful place alongside physical experiment and theoretical analysis as a tool to study myriad phenomena throughout the sciences, engineering and mathematics. Rigorous mathematical proofs, such as the one for the famous “four-color problem,” have now been carried out with the aid of computers. In fluid dynamics, computer-generated visualizations of complex, time-dependent flows have been crucial to extracting underlying physical mechanisms. Modern experiments in condensed-matter physics, observations in astrophysics and data in bioinformatics would all be impossible to interpret without computers. Things have come a long way since FPU’s study, and in this light it becomes especially important to understand how their pioneering work unfolded.
With Pasta and Ulam, Fermi proposed to investigate what he assumed would be a very simple nonlinear dynamical system—a chain of masses connected by springs for which motion was allowed only along the line of the chain. FPU’s idealized set of masses and springs experienced no friction or internal heating, so they could oscillate forever without losing energy. The springs of this theoretical system were, however, not the kind studied in introductory physics courses: The restoring force they produced was not linearly proportional to the amount of compression or extension. Instead, FPU included nonlinear components in the mathematical relation between amount of deformation and the resulting restoring force.
The key question FPU wanted to study was how long it would take the oscillations of the string of masses and nonlinear springs to come to equilibrium. The equilibrium they expected is analogous to the state of thermal equilibrium in a gas. In a monatomic gas, such as helium, the thermal (kinetic) energy of the molecules at equilibrium is equally partitioned among the three possible components of motion they can have: along the x, y or z axes. For example, there won’t be more atoms bouncing up and down than bouncing to the left and right.
This notion of sharing energy evenly among different modes of motion is fundamental. The precept, known as the equipartition theorem of statistical mechanics, can be extended to molecules that are more complicated than billiard-ball-like helium and that can partition energy into rotational or vibrational movements as well. Application of the equipartition theorem allows physicists to calculate such things as the heat capacity of a gas from basic theory.
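For readers who like to see the precept in symbols (the notation here is ours, not the article's), the equipartition theorem assigns each quadratic degree of freedom an average thermal energy

$$\langle E \rangle = \tfrac{1}{2}k_B T,$$

so a monatomic gas of $N$ atoms, each free to move along three axes, has total kinetic energy $\tfrac{3}{2}N k_B T$ and heat capacity $C_V = \tfrac{3}{2}N k_B$.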
FPU’s premise was that they could start their system off with the masses in just one simple mode of oscillation. If the system had linear springs (and no damping forces), that one mode would continue indefinitely. With nonlinear springs, however, different modes of oscillation can become excited. FPU expected that the system would “thermalize” over time: The vibrating masses would partition their energy equally among all the different modes of oscillation that were possible for this system.
Visualizing the possible modes of oscillation is a little tricky for FPU’s string of masses, but it’s easy to see how different modes of vibration arise in, for example, a plucked violin string. One mode corresponds to the fundamental tone, in which the string shifts up and down the most at the center and progressively less as you approach its fixed ends. Another mode is the first harmonic (an octave higher), in which one half of the string moves up while the other moves down, and so forth. A vibrating string has an infinite number of modes, but FPU’s system has a finite number (equal to the number of masses present).
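In the linear limit, these modes can be written down exactly. With $N$ unit masses, unit spring constants (a normalization chosen here purely for convenience) and fixed ends, mode $k$ displaces mass $n$ in proportion to

$$\sin\!\left(\frac{nk\pi}{N+1}\right)$$

and oscillates at frequency

$$\omega_k = 2\sin\!\left(\frac{k\pi}{2(N+1)}\right), \qquad k = 1, 2, \ldots, N.$$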
To conduct their study, FPU (along with Mary Tsingou, who, although not an author on the report, contributed significantly to the effort) considered different numbers of masses (16, 32 or 64) in their computational experiments. They then numerically solved the coupled nonlinear equations that govern the motion of the masses. (They could easily derive these equations from their nonlinear spring function and Newton’s famous law f = ma.) In this way, FPU used the MANIAC to compute the behavior for times corresponding to many periods of the fundamental mode in which they started the system. They were absolutely astonished by the results.
Initially, energy was shared among several different modes. After more (simulated) time elapsed, their system returned to something that resembled its starting state. Indeed, 97 percent of the energy in the system was eventually restored to the mode they had initially set up. It was as if the billiard balls had magically reassembled from their scattered state to the perfect initial triangle!
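Curious readers can rerun a version of this experiment on a laptop in seconds. The sketch below (in Python; the parameters are illustrative stand-ins rather than the exact values used in LA-1940) integrates a chain with a quadratic spring nonlinearity, pours all of the energy into the lowest mode, and prints how that energy spreads among the first few modes:

```python
import numpy as np

# A minimal sketch of an FPU-style chain with a quadratic ("alpha")
# nonlinearity. Parameter values are illustrative, not those of LA-1940.
N = 32          # number of moving masses
alpha = 0.25    # strength of the nonlinear spring term
dt = 0.05       # time step for the velocity-Verlet integrator
steps = 200000  # total number of integration steps

n = np.arange(1, N + 1)
k = np.arange(1, N + 1)
omega = 2.0 * np.sin(k * np.pi / (2 * (N + 1)))           # linear-mode frequencies
modes = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(n, k) * np.pi / (N + 1))

def accel(x):
    """Net force on each mass from its two neighbors (ends held fixed)."""
    xp = np.concatenate(([0.0], x, [0.0]))                # pad with fixed walls
    d = np.diff(xp)                                       # spring extensions
    f = d + alpha * d**2                                  # nonlinear spring force
    return f[1:] - f[:-1]

x = 1.0 * np.sin(n * np.pi / (N + 1))                     # all energy in mode 1
v = np.zeros(N)
a = accel(x)
for step in range(steps):
    v += 0.5 * dt * a                                     # velocity-Verlet update
    x += dt * v
    a = accel(x)
    v += 0.5 * dt * a
    if step % 5000 == 0:
        A, Adot = modes.T @ x, modes.T @ v                # project onto modes
        E = 0.5 * (Adot**2 + (omega * A)**2)              # energy in each mode
        print(f"t={step*dt:8.1f}  E1..E4:", np.round(E[:4], 4))
```

With these choices the printout shows energy leaking from mode 1 into its neighbors; run longer, and vary the amplitude and alpha, to hunt for the near-recurrence that so astonished FPU.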
Of course, not everybody was convinced by these computations. One popular conjecture was that FPU had not run the simulations long enough, or perhaps the time required to achieve equipartition for the FPU system was simply too long to be observed numerically. However, in 1972 Los Alamos physicist James L. Tuck and Tsingou (who at that point was using her married name, Menzel) put these doubts to rest with extremely arduous numerical simulations that found recurrences on such amazingly long time scales that they have sometimes been dubbed “superrecurrences.” This research made it clear that equipartition of energy wasn’t hidden from FPU by computer simulations that were too short—something more interesting was indeed afoot.
1 + 1 = 3
Why did FPU think that nonlinear springs would ensure an equipartition of energy in their experiment? And what is this strange concept of nonlinearity anyway? Obviously, the term refers to a departure from linearity, which we’ve discussed thus far only in terms of the proportionality of inputs and outputs.
Students of physics study linear systems in introductory classes because they are much easier to analyze and understand. When a mass is connected to a linear spring and given a shove, its subsequent behavior is very simple: It will oscillate back and forth at the system’s resonant frequency, which depends only on the size of the mass and the spring constant (the factor that relates the amount of extension or compression to the restoring force). With a nonlinear spring, however, things become much messier. For example, the frequency of oscillation depends on the amplitude. Give it a gentle nudge, and it will oscillate at one frequency; kick it hard and it will oscillate at another.
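A quick numerical experiment (our own toy, not one from the article) makes the amplitude dependence concrete: put a unit mass on a hardening spring with restoring force $-x - \beta x^3$ and time a full oscillation at two different starting amplitudes.

```python
# Measure the oscillation period of a mass on a hardening nonlinear spring,
# F = -x - beta*x**3 (unit mass; beta is an illustrative made-up parameter).
def period(amplitude, beta=1.0, dt=1e-4):
    x, v, t = amplitude, 0.0, 0.0
    while True:
        v += (-x - beta * x**3) * dt      # semi-implicit Euler step
        x += v * dt
        t += dt
        if v >= 0.0:                      # reached the opposite turning point,
            return 2.0 * t                # so t is half of one full period

print(period(0.1))   # gentle nudge: period close to the linear value 2*pi
print(period(2.0))   # hard kick: a noticeably shorter period
```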
When one first studies physics, it’s easy to get the impression that nonlinear systems are anomalous. But nonlinear interactions are actually much more characteristic of the real world than are linear ones. For this reason, physicists have been known to quip that the term “nonlinear science” makes about as much sense as saying “non-elephant zoology” (a joke that is sometimes, incorrectly, attributed to Ulam).
How do nonlinear systems differ from linear ones, aside from having amplitude-dependent oscillation frequencies? With a linear system, doubling the input will yield a doubling of the output, as we have discussed. Suppose someone sings twice as loudly into the microphone at a karaoke club—the amplified crooning will be twice as loud when it comes out of the speakers. Similarly, if two people sing a duet, the output will be just the sum (or “superposition”) of what would have come out had each one sung his part separately. Also, if everything is truly linear, voices won’t become distorted. The frequencies that come out (that is, the notes that are heard) will be just the ones the duet put in, regardless of amplitude.
With nonlinear systems, things are far more complicated. For example, the superposition principle doesn’t apply. Additionally, the output frequencies aren’t limited to the input frequencies. Screaming into a karaoke mic, for example, can overload the amplifier, forcing it into a nonlinear regime. What comes out of the speakers is then highly distorted, containing frequencies that were never sung. Much more subtle effects can also take place.
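The distortion effect is easy to demonstrate numerically (the numbers below are made up for illustration): feed a pure 440-hertz tone into a hard clipper, the crudest caricature of an overloaded amplifier, and examine which frequencies emerge.

```python
import numpy as np

# Harmonic generation by a nonlinearity: clip a pure tone, as an overdriven
# amplifier would, and inspect the output spectrum. Illustrative numbers.
fs, f0 = 8000, 440                     # sample rate and input frequency (Hz)
t = np.arange(fs) / fs                 # one second of samples
tone = 3.0 * np.sin(2 * np.pi * f0 * t)
clipped = np.clip(tone, -1.0, 1.0)     # the nonlinearity: hard clipping

spectrum = np.abs(np.fft.rfft(clipped)) / len(clipped)
freqs = np.fft.rfftfreq(len(clipped), 1 / fs)
print(freqs[spectrum > 0.01])          # 440 Hz plus odd harmonics (1320, 2200, ...)
```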
One of the subtle effects of nonlinear physics was first observed in the 1830s, when a young engineer named John Scott Russell was hired to investigate how to improve the efficiency of barge designs for the Union Canal near Edinburgh, Scotland. In a fortuitous accident, a rope pulling a barge gave way. Russell described what ensued:
I was observing the motion of a boat which was rapidly drawn along a narrow channel by a pair of horses, when the boat suddenly stopped—not so the mass of water in the channel which it had put in motion; it accumulated round the prow of the vessel in a state of violent agitation, then suddenly leaving it behind, rolled forward with great velocity, assuming the form of a large solitary elevation, a rounded, smooth and well-defined heap of water, which continued its course along the channel apparently without change of form or diminution of speed. I followed it on horseback, and overtook it still rolling on at a rate of some eight or nine miles (14 km) an hour, preserving its original figure some thirty feet long and a foot to a foot and a half in height. Its height gradually diminished, and after a chase of one or two miles (3 km) I lost it in the windings of the channel.
This strange wave did not act like an ordinary wave on the surface of the ocean. Water waves on the sea (and many other familiar kinds of waves) travel at speeds that depend on their wavelengths. This phenomenon is called dispersion. A disturbance like the one created in front of Russell’s barge can be envisioned as the superposition of purely sinusoidal waves, each with a different wavelength. However, if a compact disturbance forms on the surface of the open ocean, each of the component waves will travel at a different speed. As a result, the initial disturbance won’t maintain its shape. Instead, such a wave will be stretched and distorted.
Having an inquiring mind, Russell followed up his serendipitous discovery with controlled laboratory experiments and quantified the phenomenon that he had discovered in an 1844 publication. There he showed, for example, that large-amplitude solitary waves in a channel move faster than small ones—a nonlinear effect.
In 1895, Dutch physicist Diederik Korteweg and his student Gustav de Vries derived a nonlinear partial differential equation, now known as the Korteweg-de Vries (KdV) equation, that, they argued, could describe the results of Russell’s experiments. This equation shows that the rate of change in time of the wave’s height is governed by the sum of two terms: a nonlinear one (which gives rise to amplitude-dependent velocities) and a linear one (which causes wavelength-dependent dispersion). In particular, Korteweg and de Vries found a solitary-wave solution that matched the strange wave Russell had followed on horseback. This solution arises as a result of a balance between nonlinearity and dispersion. The Dutch physicists also found a periodic solution to their equation, but they were unable to produce general solutions.
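In one common modern normalization (ours, not Korteweg and de Vries's original notation), the KdV equation for the wave height $u(x,t)$ reads

$$u_t + 6\,u\,u_x + u_{xxx} = 0,$$

and the solitary-wave solution is

$$u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left[\frac{\sqrt{c}}{2}\,(x - ct)\right],$$

a single smooth hump that travels at speed $c$ with a height proportional to that speed: Taller waves move faster, just as Russell had measured.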
Their work and Russell’s observations both fell into obscurity and were ignored by the mathematicians, physicists and engineers studying water waves until the early 1960s when one of us (Zabusky) and the late Martin Kruskal of Princeton University began studying FPU chains. They started from FPU’s model but used, in essence, infinitesimally small springs and masses to represent a continuous line of deformable material rather than a series of discrete masses. This approach allowed them to examine situations with long wavelengths and yielded a partial differential equation that matched the usual one describing linear waves except for its modified dispersion. To represent progressive waves in the system, Kruskal derived from this equation what he and Zabusky later realized was the KdV equation. It seemed intractable analytically, so they (with the assistance of Gary Deem, who was then at Bell Telephone Laboratories) used numerical simulations to observe a near-recurrence to initial conditions. To describe their solutions to the KdV equation, they invented what has become a widely used term for the solitary-wave phenomenon: soliton.
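Their numerical experiment is easy to reproduce today. The sketch below uses a version of their leapfrog finite-difference scheme for the equation in the form they studied, $u_t + uu_x + \delta^2 u_{xxx} = 0$ with $\delta = 0.022$; the grid size and time step are our own illustrative choices, not theirs.

```python
import numpy as np

# A sketch of the 1965 Zabusky-Kruskal computation: a smooth cosine profile
# evolving under u_t + u*u_x + delta^2 * u_xxx = 0 on a periodic domain
# steepens and breaks up into a train of solitons.
M, delta = 128, 0.022
dx, dt = 2.0 / M, 2.0e-4
x = dx * np.arange(M)
u_prev = np.cos(np.pi * x)                # smooth initial profile

def bracket(u):
    """Two-step leapfrog increment: 2*dt*(u*u_x + delta^2*u_xxx), discretized."""
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    up2, um2 = np.roll(u, -2), np.roll(u, 2)
    adv = (up1 + u + um1) * (up1 - um1) * dt / (3.0 * dx)
    disp = delta**2 * (up2 - 2.0 * up1 + 2.0 * um1 - um2) * dt / dx**3
    return adv + disp

u = u_prev - 0.5 * bracket(u_prev)        # Euler start-up step
for _ in range(int(3.6 / np.pi / dt)):    # run to t = 3.6/pi, past the breakup
    u_prev, u = u, u_prev - bracket(u)

print(f"initial max height 1.00 -> final max {u.max():.2f}")
# The tallest emerging soliton stands well above the initial cosine.
```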
They discovered that solitons would evolve from an initial state and then travel to the left and right until they exchanged their relative positions and refocused almost exactly at another location in space. This work (and the work of many subsequent investigators) has contributed to a huge number of analytical, theoretical and experimental advances in myriad areas of mathematics and physics.
While Zabusky, Kruskal and Deem were busy with the FPU problem, Japanese mathematical physicist Morikazu Toda investigated a similar nonlinear system and proved mathematically that it could never show any chaos. There was clearly something especially subtle about the FPU chain.
Minions of Chaos?
Solitary waves can indeed produce some surprisingly regular behavior, but the motion of an FPU system can also be quite chaotic. Indeed, even very simple dynamical systems typically support intricate mixtures of regular and chaotic behavior.
Here we are using the word chaotic in its scientific sense. We do not mean randomness. The outcome in the FPU problem is governed by Newton’s laws, which exactly determine all future motion—there are no random events. Yet after a while, the motions can indeed seem very jumbled and erratic. Moreover, the state of FPU’s system of springs and masses after a given amount of time is very sensitive to its initial setup: Change the initial conditions ever so slightly, and the outcome some time later will be completely different. Many systems—including the atmospheric variations that give rise to the changing weather—show this property and are thus considered chaotic, even though their motion over a short period of time might appear reasonably regular. In fact, as the FPU problem itself shows, the motion over even exceedingly long periods of time can be quite regular!
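The weather connection can be made concrete with a standard demonstration (using the Lorenz equations, the textbook model of chaotic sensitivity, rather than the FPU chain itself): two simulations that begin a hair's breadth apart soon disagree completely.

```python
import numpy as np

# Sensitive dependence on initial conditions in the Lorenz "weather" model.
def deriv(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt = 1e-3
a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-10, 0.0, 0.0])       # a microscopically different start
for step in range(1, 40001):
    a = a + dt * deriv(a)                 # simple Euler integration
    b = b + dt * deriv(b)
    if step % 10000 == 0:
        print(f"t={step * dt:5.1f}  separation={np.linalg.norm(a - b):.3e}")
# The gap grows from 1e-10 to order 10: the two "forecasts" no longer agree.
```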
To determine whether the motion of a given system is regular or chaotic (given particular initial conditions) over the long term, it is helpful to plot changes in the configuration of the system over time. The problem is that even a seemingly simple dynamical system—consisting of only a single mass—has six variables to plot: both the positions and velocities of the mass in x, y and z.
Plotting all six values for the mass as it undergoes its trajectory typically leads to a visual jumble that can be very difficult to interpret. However, plotting an intelligently chosen subset of points (those that satisfy a specific, physically motivated condition, such as the instants at which one particular velocity variable is zero) makes it easier to interpret what is going on. Such plots are called Poincaré sections, in honor of the French physicist and mathematician Jules Henri Poincaré.
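As a sketch of how such a section is computed, here is a minimal example. It uses the Hénon-Heiles oscillator, a standard two-degree-of-freedom system (not the FPU chain), and records two variables each time a third one crosses zero:

```python
import numpy as np

# Poincare section for the Henon-Heiles system: record (y, py) whenever the
# trajectory crosses the plane x = 0 while moving in the +x direction.
def accel(q):
    x, y = q
    return np.array([-x - 2.0 * x * y, -y - x * x + y * y])

def section(q, p, dt=1e-2, steps=500000):
    pts, a = [], accel(q)
    for _ in range(steps):
        x_old = q[0]
        p += 0.5 * dt * a; q += dt * p    # velocity-Verlet integration
        a = accel(q); p += 0.5 * dt * a
        if x_old < 0.0 <= q[0] and p[0] > 0.0:
            pts.append((q[1], p[1]))      # crossing: store a section point
    return np.array(pts)

pts = section(np.array([0.0, 0.1]), np.array([0.4, 0.0]))
print(f"{len(pts)} section points; plotted, smooth curves mean regular "
      "motion and scattered dots mean chaos")
```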
Regular trajectories are as predictable as the orbits of planets around the Sun or a suburbanite’s daily routine. They can be tracked in time with a great deal of precision. On the other hand, chaotic trajectories are extremely irregular. They tend to wander like drunken sailors and are constrained only by the amount of energy available to them.
Chaos is important to the FPU problem because if it is sufficiently strong, it will mix energy between modes of oscillation. That is, chaos can bring about the partitioning of energy in such a system. Although neither FPU nor Tuck and Menzel had found equipartition, in their 1967 study Zabusky and Deem did, after performing simulations of an FPU system for which the initial motions of the masses had a short wavelength and a large amplitude. By 2006, others had confirmed this equipartition with more comprehensive simulations and analysis.
Building from late-1960s research on chaos and equipartition by the late Boris Chirikov, Eddie Cohen of the Rockefeller University and several collaborators recently investigated the FPU system at high energy. Exploring this issue systematically, they demonstrated the existence of two thresholds (as a function of energy per oscillator) in the dynamics of the FPU system. At the first threshold, the motion transitions from being completely regular to weakly chaotic—there is some chaotic behavior, but things are still very regular the overwhelming majority of the time. Above a second, higher threshold, strong chaos sets in, allowing energy to be quickly distributed between modes.
Cohen and his collaborators also found that equipartition occurs faster when there are more masses. As the number of nonlinear oscillators becomes infinite (that is, in the real-life situations that FPU were trying to model), equipartition does indeed arise for any level of energy input. The initial conditions that FPU used in their numerical simulations were, however, below the threshold for chaos, which prevented them from seeing the equipartition of energy among the different modes of oscillation. FPU would have observed equipartition had they used either stronger nonlinearities (yielding stronger interactions between different modes) or initial pulses with more energy. We should be thankful that they did not, given how much interest and understanding has arisen as a result.
One example can be seen in studies of heat conduction. (The subject of heat conduction was a key motivation for FPU’s study.) Early in the 19th century, the French mathematician J. B. Joseph Fourier introduced a simple phenomenological law to describe the flow of heat in solids. Yet in the two centuries that have since elapsed, scientists have been unable to derive that law directly using first principles. Attempts to do so date at least as far back as Peter Debye’s 1914 studies of heat conduction in dielectric crystals. He suggested that the finite conductivity of such crystals arises from nonlinear interactions in their lattice vibrations—exactly the sort of phenomenon that FPU’s approach was designed to probe.
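In modern notation (not Fourier's own), the law states that the heat flux $J$ through a material is proportional to the local temperature gradient,

$$J = -\kappa\,\frac{\partial T}{\partial x},$$

where the constant of proportionality $\kappa$ is the material's thermal conductivity.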
Much work on heat conduction has since been done using FPU-like models with each end of the chain soaked in a “heat bath” (one end hot and the other cold) and with each mass experiencing forces in addition to those that come from its neighbors. For example, these models have been used to examine the way heat conductivity depends on both the number of masses and how much chaos there is in the system. Although many important insights have been obtained, the complete set of necessary and sufficient conditions for the validity of the Fourier law remains unknown. Physicists would dearly love to resolve this embarrassing situation.
Every Breath You Take
Decades later, the FPU problem continues to inspire studies of many other fascinating nonlinear systems, such as the atomic lattices of solid-state physics. Until the late 1980s, it was taken for granted that the vibrations of those lattices had to extend over distances that are very large compared with the spacing of atoms. The only recognized exceptions were those that came from defects that destroyed the regular arrangement of atoms in the lattice—say, from contaminants or disruptions in an otherwise pure crystal. The accepted wisdom was that only such irregularities could cause vibrations to become localized (although Zabusky and Deem’s earlier work had hinted otherwise).
This perspective was turned inside out by the discovery of localized modes of vibration in perfect lattices. Such modes, known as intrinsic localized modes (ILMs) or discrete breathers, can arise in strongly nonlinear, spatially extended lattices and (roughly speaking) play a role similar to that of solitons in continuous physical systems. Unlike solitons, though, ILMs don’t have to propagate: They can just vibrate in place. Physicists have now observed ILMs experimentally in a diverse collection of physical systems, including charge-transfer solids, Josephson-junction arrays, photonic crystals, micromechanical-oscillator arrays and Bose-Einstein condensates.
How can nonlinearity produce a localized mode of oscillation in a lattice? To get a feel for this, consider two nonlinear oscillators that can interact weakly. Recall that because these oscillators are nonlinear, the frequency of their vibrations depends on their energy. Imagine starting one oscillator off with a strong excitation and the other with a weak one, so that most of the system’s energy starts in the first oscillator. In principle, one can choose those initial excitations so that their oscillations are incommensurate (making the ratio of their oscillation frequencies an irrational number). Consequently, after starting both oscillators at their maximum amplitude, they will never again get back in sync. This prevents the vibrations of the first oscillator (or any of its harmonics) from resonating with any of the modes of the second oscillator, which makes it very difficult to transfer energy between the two oscillators.
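A toy computation (with parameters invented purely for illustration) makes the point. Couple two such oscillators weakly, excite one strongly and the other feebly, and watch how little energy ever crosses over:

```python
import numpy as np

# Two hardening oscillators (force -x - x**3) joined by a weak linear spring
# of strength eps. All parameter values are illustrative.
eps, dt, steps = 0.01, 1e-3, 1000000

def accel(x):
    return -x - x**3 + eps * (x[::-1] - x)    # own spring plus weak coupling

x = np.array([2.0, 0.01])                     # strong and feeble excitations
v = np.zeros(2)
a = accel(x)
for _ in range(steps):
    v += 0.5 * dt * a; x += dt * v            # velocity-Verlet integration
    a = accel(x); v += 0.5 * dt * a

E = 0.5 * v**2 + 0.5 * x**2 + 0.25 * x**4     # per-oscillator energy
print(E)   # (coupling energy, of order eps, is neglected) nearly all of
           # the energy is still held by the first oscillator
```

Because the strongly excited oscillator vibrates at a frequency well away from its neighbor's, the weak coupling never finds a resonance, and the lopsided energy distribution persists.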
Now consider a chain with a large number of oscillators. Set one of them vibrating with relatively large amplitude and at a frequency that is incommensurate with the frequency of the smaller vibrations that the other oscillators are undergoing. That one special oscillator now has a difficult time transferring any of its energy to its neighbors. So this oscillator, and perhaps a small number of neighbors, maintains a large-amplitude oscillation for a long time, yielding an ILM.
In 1988, Albert Sievers (Cornell University) and Shozo Takeno (Kyoto Technical University) showed that ILMs can arise in an FPU lattice. This idea continues to be pursued actively and has led to exciting new developments. In particular, in a series of papers starting in 2005, Sergej Flach of the Max Planck Institute for the Physics of Complex Systems and his collaborators used this perspective to provide a new take on the FPU recurrences, which they view as resulting from the existence of objects called breathers. One of the most active research problems in nonlinear science is to reconcile Flach’s approach to understanding FPU dynamics with the earlier soliton perspective.
Somewhere, Over the Rainbow…
As we’ve discussed in gory detail, a lot of very smart people have covered considerable ground in myriad investigations of the FPU problem and related systems over the past half-century. During this process, concepts like chaos, solitons and breathers have been invented, developed, refined and applied to a number of real-world systems.
The FPU problem touches on a remarkably broad range of topics in nonlinear dynamics, statistical mechanics and computational physics. Yet these broad categories represent only a small fraction of the research literature that the original FPU paper has spawned. New studies of the FPU problem are still being published today, 54 years after the original Los Alamos report. We fully expect that work of this kind will keep researchers busy long after scientists celebrate the centennial of the FPU problem in 2055.