Inverse Boundary-Value Problems

Margaret Cheney. American Scientist. Volume 85, Issue 5. Sep/Oct 1997.

For the oil industry, 1986 was the worst of times. Between January and April, the price of crude oil plummeted from $26 per barrel to less than $10. In the “oil patch” states of Texas, Oklahoma and Louisiana, the wealthy went broke and the not-so-wealthy lost their jobs. Rolls Royce and Mercedes automobiles popped up in used-car lots. The black license plates of Michiganders who had swarmed into Texas during the oil boom started heading north. In all, the oil industry lost a half-million jobs in the second half of the 1980s, more than the entire labor force of the automotive industry.

Few people knew it at the time, but the seeds had already been planted for a revival of the oil business in the Gulf of Mexico. Ten years later, mathematics, geology, geophysics and computers have combined to enhance the power of seismic prospecting: hunting for oil-bearing rock formations with sound waves.

By the mid-1980s geologists, for the first time, had computers with enough speed and memory to take into account the full complexity of the mathematical equations that describe how sound waves travel through rock. Mathematicians, accustomed to proving theorems about equations no one could hope to solve with paper and pencil, found themselves developing practical ways to estimate the solutions with a computer. Now, in the 1990s, prospectors can use the stunning three-dimensional pictures produced by “prestack depth migration” to locate possible oil deposits underneath salt domes and sills where they could not look a decade ago. The first oil fields discovered in this way have begun commercial production, and more subsalt wells are on the way. The Paris-based International Energy Agency has projected that the annual oil output of the Gulf of Mexico will double between 1995 and 2000.

Seismic prospecting is one of a whole class of problems in which scientists gather information about an inaccessible region by probing it with fields, waves or particles from outside. A physician probes a patient with electric fields to determine the volume of air in the lungs. An astrophysicist measures the gravitational field around a planet in order to determine its mass distribution. A baby “probes” the region around its crib with photons from the light bulb in the ceiling, in a process known as vision. The physician, the astrophysicist and the baby reconstruct the properties of unknown regions from their observations. All of them, except the baby, use mathematics. (We do not really know how the baby does it, but it is far from easy. It takes an infant several months to learn how to interpret what it sees, and a large fraction of the human brain is dedicated to visual processing.)

Mathematicians call these situations inverse boundary-value problems. They are “inverse” because the order of cause and effect is reversed. The scientist observes an effect and tries to deduce the cause. The corresponding “forward” or “direct” problems would be: How does a given volume of air in the lungs alter the electric field measured at the skin when a current is applied; what kind of gravitational field is produced by a planet with a certain density; and what image do the baby’s toys form on its retina? In each of the above examples, measurements made at an edge of a region, or “boundary values,” are used to learn more about its interior.

Inverse boundary-value problems are usually formulated mathematically as differential equations: equations in which the unknown quantity is not a number (the electric potential at one point), but a function (the electric potential at every point). They are generally far too difficult to solve by paper and pencil, except in very simple and idealized cases, but the solutions can be approximated by computers. Far from replacing mathematicians, computers have given them a new set of challenges: to come up with appropriate strategies or algorithms for a computer to use. There is no “right” or “best” way to solve these equations, unlike the equations of high school mathematics. The mathematician’s art lies in understanding the strengths and drawbacks of each method, knowing which one to apply in which setting, devising improvements and developing new algorithms.

Just for the Oil of It

Oil companies look for a likely spot to drill for oil in the same way that homeowners look for a spot to drive a nail: by tapping on the wall and listening for the right kind of reverberation. But to find oil under 1,000 feet of water and several thousand feet of rock, they have to tap much louder and listen more carefully. Seismic ships tow two- to three-mile-long streamers of airguns and hydrophones. The airguns are metal canisters that shoot large volumes of air into the water. These create a burst of sound that propagates outward until it reflects off the bottom of the body of water or off the various subsurface formations. When the sound waves reach the surface again, the hydrophones record them.

Before the hydrophone traces are processed mathematically, they are a confusing jumble of primary signals, ghosts, multiple reflections and random noise. From this hodgepodge, the seismologist must deduce the configuration of the rock strata. In other words, exploring for oil poses an inverse boundary-value problem.

Acoustic waves in a fluid, such as water or even rock (which acts enough like a fluid for these purposes), are simply local oscillations in the pressure. Just like air currents in the atmosphere, particles in a fluid tend to move from high-pressure regions to low. At the same time, if particles flow into a region, the pressure there must increase over time. These two phenomena work at cross purposes: Particles flow toward a low pressure, which then becomes a high pressure that causes particles to flow away again. Mathematically, this creates a dynamic interaction of the spatial rate of change of pressure and its temporal rate of change. The result is called the acoustic equation:

$$\nabla^2 p(x,t) \;-\; \frac{1}{c^2(x)}\,\frac{\partial^2 p(x,t)}{\partial t^2} \;=\; 0$$

In this equation, p(x, t) represents the pressure in the rock at any point x and at any time t. The function $c^2(x)$ represents the square of the speed of sound at any point. The symbol $\nabla^2$, read “del-squared” (or “nabla-squared” in the old days when mathematicians had to speak to typesetters), represents the spatial variation in the pressure, iterated twice because of the feedback described above. The symbol $\partial^2/\partial t^2$ represents the derivative, or temporal rate of change, of the pressure, also iterated twice. Because the two rates of change oppose each other, as noted above, the solutions to the acoustic equation tend to oscillate, producing sound waves. The zero on the right-hand side of this equation reflects an assumption that there is no energy source inside the rock.

In our everyday life, we think of the speed of sound as a constant, but in rock it is not. Sound travels at around 2,500 meters per second inside sedimentary rock, but around 4,500 meters per second inside a salt dome. Hence, though both p(x, t) and c(x) are unknowns, the latter is the information a seismologist really wants to know, because it helps determine which materials sit in which locations.
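
To see how the two opposing rates of change make a pulse travel, one can march the acoustic equation forward in time on a computer. Here is a minimal one-dimensional sketch (the grid sizes and source are invented for illustration; real seismic codes are three-dimensional and far more careful about sources, boundaries and accuracy), using the two sound speeds just mentioned:

```python
import numpy as np

# A 1-D finite-difference sketch of the acoustic equation
#   (1/c^2) d^2p/dt^2 = del-squared p,
# with illustrative numbers only.

nx = 400                    # grid points, 5 meters apart
dx = 5.0
c = np.full(nx, 2500.0)     # sedimentary rock: about 2,500 m/s
c[250:] = 4500.0            # salt below 1,250 meters: about 4,500 m/s
dt = 0.4 * dx / c.max()     # time step small enough for stability

p_prev = np.zeros(nx)       # pressure one step in the past
p = np.zeros(nx)            # pressure now
p[50] = 1.0                 # an impulsive "airgun" burst near the top

for _ in range(2000):       # march forward in time (about 0.9 seconds)
    lap = np.zeros(nx)      # the discrete del-squared of the pressure
    lap[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
    # The second time derivative pushes against the spatial one,
    # so the solution oscillates -- a propagating sound wave.
    p, p_prev = 2.0 * p - p_prev + (c * dt)**2 * lap, p

print(p[:100])              # the wavefield near the "surface", where a
                            # reflection off the salt interface returns
```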

The data at the seismologist’s disposal are the hydrophone traces, which indicate the pressure at the surface at any time t. Is this information sufficient to compute the speed of sound everywhere below the surface? Is it reasonable to expect that only a single solution will match the given data, or might there be many different solutions? Mathematicians refer to this question as the uniqueness problem. A good general rule is that, in order to determine a function of n variables, one should collect data that also depend on n variables. Here, the seismologist wishes to determine the acoustic sound speed as a function of position in three-dimensional space (n = 3). According to the rule of thumb, the pressure as a function of time (one variable) at all points on the surface of the water (two more variables) should provide enough data to do the inversion.

For a case where more than one solution fits the data, consider a satellite sent out to determine the mass distribution of a planet. The satellite orbits for several years, covering the whole surface (as the Mars Global Surveyor mission is expected to do around Mars) and recording the gravitational field everywhere it goes. It would acquire a two-dimensional data set (the gravity at the surface), not enough to map a three-dimensional mass distribution. This may seem surprising: After all, the gravitational field is completely determined by the mass distribution, so it might seem that the reverse would also be true. But Isaac Newton, as early as the 17th century, showed that different planets could have identical gravitational fields. In fact, if Mars were perfectly symmetrical, Mars Surveyor would be unable to determine-from gravitational data alone-whether it was solid or hollow!

Another source of difficulty in the acoustic inverse problem is its nonlinearity. In other words, the superposition principle does not hold: If you bounce sound waves off two objects, the measured data are not the sum of the data for the individual objects. When both objects are present, the sound waves reverberate between them-an effect that does not occur with only one. In precomputer days, seismologists learned to recognize this “ringing” in their hydrophone traces. The nonlinearity can also be seen in the acoustic equation, where the unknown pressure p is divided by the square of the unknown sound speed, c. In differential equations, as in high school algebra, a linear equation is one in which the unknown quantities appear one at a time, and are raised only to the first power.

Most students in high school learn that linear equations are much easier to solve than nonlinear (for example, quadratic) ones. The same principle applies to inverse boundary-value problems. Mathematicians have developed a great deal of theory for linear problems. Because of the superposition principle, such problems can often be broken down into simpler pieces that can be solved individually, and then the results can be added together. Moreover, nonuniqueness causes little trouble for linear problems. Frequently there are straightforward tests to determine whether the solution is unique. If not, the structure of the alternative solutions is easily described.

The full, nonlinear complexity of the acoustic inverse problem is still beyond the reach of computers. Most seismic data-processing techniques still rely heavily on clever approximations. Seismologists assume, for example, that sound travels along rays instead of as waves, allowing them to use the geometry of straight lines.

However, computers have changed the realm of possibility in one important respect. Between the 1920s, when reflection seismology was introduced in the United States, and the 1980s, when computer speeds in gigaflops (billions of floating-point operations per second) and core memories in gigabytes first became available, seismologists avoided dealing with the three-dimensional nature of the earth’s subsurface. They considered only the two-dimensional plane lying beneath one streamer of airguns, ignoring the effects of rock outside that plane. Now, good approximate solutions of the wave equation in three dimensions, obtained by computer, enable seismologists to visualize geological formations that were once too complex to model, including the salt domes of the Gulf of Mexico. If one could strip away the surrounding rock, these domes would strongly resemble the biblical pillars of salt, except that these pillars sometimes develop mushroom-shaped heads, which can fuse into salt sills. In 1993, under one of the salt sills, Phillips Petroleum struck the Mahogany oil deposit, estimated to contain 100 million barrels of oil, and that discovery ignited the new oil-prospecting boom in the Gulf of Mexico.

Through a Glass Fuzzily

Last January, the National Institutes of Health sponsored a conference to resolve the dispute over whether women between 40 and 50 years old should get a mammogram every year. Some organizations, including the American Cancer Society, have long recommended an annual mammogram; others, including the American College of Physicians, argued that it has not been proven to reduce breast-cancer death rates. Although the doctors could not reach consensus, politicians had no such trouble. By a vote of 98-0 the U.S. Senate called on the NIH to recommend an annual mammogram for women in this age group. President Clinton also weighed in with his opinion in favor of the test.

Nevertheless, it is impossible to legislate away some of the legitimate scientific concerns about mammography. The first problem with mammograms is that the malignant tissue itself is often not visible on an x-ray image. It is not uncommon for a woman to find a lump in her breast that cannot be seen on a mammogram, even after the lump is known to be malignant. To the extent that this delays the proper diagnosis, such women may have to undergo more aggressive chemotherapy later on.

What is visible on a mammogram is frequently not the tumor, but clusters of small calcium deposits that form around it. The physical property that x rays detect is density, and many tumors do not differ in density from normal tissue. If a tumor does not happen to produce any “microcalcifications,” then it may not show up at all. On the other side of the coin, microcalcifications can also appear in normal breast tissue-in fact, they are more often benign than not. Only an experienced radiologist can distinguish suspicious deposits from benign ones.

Along with several colleagues at Rensselaer Polytechnic Institute, I am investigating electrical-impedance tomography (EIT), an alternative that may detect tumors that mammography misses. In the laboratory, scientists have found that breast and other tumors conduct electricity more than four times better than normal tissue. (No one knows yet why tumors are often more conductive.) In the clinic, of course, doctors cannot hook up a battery to a suspected tumor and run a current through it. They need a noninvasive test. Like oil prospectors, they have to probe an inaccessible region from outside.

In Rensselaer’s Adaptive Current Tomography system, we attach 32 electrodes around the body. (To diagnose breast cancer, we might place them around the breast instead.) We apply a small electric current through these electrodes, creating a mild electric field throughout the body. We cannot measure the field inside the body, but we can measure the voltages at the electrodes. From these measurements, we try to determine the body’s electrical conductivity. A hot spot of increased conductivity where there should not be one may be the location of a tumor.

The inverse boundary problem for EIT, like the acoustic problem, involves a differential equation:

$$\nabla \cdot \bigl(\sigma(x)\,\nabla u(x)\bigr) \;=\; 0$$

In this electrical-impedance equation, there are two unknown functions: $\sigma(x)$, the electric conductivity at any point x in the body, and $u(x)$, the electric potential, or voltage. To understand where the equation comes from, it helps to read it from the inside out. The inside “nabla” takes the gradient of the potential $u(x)$, computing the direction in which electrons will tend to flow, as well as the rate of change of voltage in that direction. In electric circuits, the conductance of a wire times the change in voltage gives the current passing through the wire. The tissues inside the human body act like an electrical resistor, so the same principle applies: $\sigma(x)$ times the gradient of $u(x)$ represents the current at point x. Finally, the outer “nabla” computes the divergence of the current, a measure of its tendency to flow into or out of one spot. As long as no charge is building up inside the body (a reasonable assumption), the divergence equals zero.
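
To make this concrete, the forward version of the problem, computing the potential from a known conductivity, can be solved on a grid by simple relaxation. The following sketch is only illustrative: the 50-by-50 grid, the conductive square and the voltage drive are all invented, and a real EIT system applies currents rather than voltages. Still, it shows the structure of the equation:

```python
import numpy as np

# A minimal sketch of the EIT *forward* problem: solve
#   div( sigma(x) grad u(x) ) = 0
# on a square grid with prescribed boundary voltages, by Jacobi relaxation.

n = 50
sigma = np.ones((n, n))        # background conductivity
sigma[20:30, 20:30] = 4.0      # a "tumor": about 4x more conductive

u = np.zeros((n, n))
u[:, 0], u[:, -1] = 1.0, -1.0  # drive a voltage across two opposite sides

for _ in range(5000):          # relax until (roughly) converged
    # conductivity on the four edges around each interior node
    sr = 0.5 * (sigma[1:-1, 1:-1] + sigma[1:-1, 2:])
    sl = 0.5 * (sigma[1:-1, 1:-1] + sigma[1:-1, :-2])
    su = 0.5 * (sigma[1:-1, 1:-1] + sigma[:-2, 1:-1])
    sd = 0.5 * (sigma[1:-1, 1:-1] + sigma[2:, 1:-1])
    # current balance: each node's voltage is the conductance-weighted
    # average of its neighbors (the discrete divergence being zero)
    u[1:-1, 1:-1] = (sr * u[1:-1, 2:] + sl * u[1:-1, :-2] +
                     su * u[:-2, 1:-1] + sd * u[2:, 1:-1]) / (sr + sl + su + sd)

# The potential u varies smoothly even though sigma jumps abruptly --
# the smoothing that makes the inverse problem so ill-posed.
print(u[n // 2, ::10])
```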

Like the acoustic problem, the inverse electrical-impedance problem is nonlinear, because the unknown conductivity and potential are multiplied together. But the electrical-impedance equation differs from the acoustic equation in some important ways. Because we use a steady current at a fixed frequency, there is no time dependence in the electrical-impedance equation. Another fundamental difference is that the electrical-impedance equation lacks the tension between two opposing principles that characterizes the acoustic equation. This opposition makes the acoustic equation “hyperbolic,” a type of equation whose solutions naturally act like waves. Discontinuities and fine details inside the rock propagate outward to the boundary, and little information is lost. Lacking the opposing time-derivative term, the electrical-impedance equation is “elliptic.” In this kind of equation, abrupt changes in the electric field inside the body tend to get smoothed out between there and the surface, until eventually they cannot be distinguished from random noise. This makes the problem of reconstructing the interior more difficult-or, as mathematicians would say, more ill-posed.

In an ill-posed problem (the antonym of “well-posed,” a term coined by the French mathematician Jacques Hadamard around the turn of the century), small changes in the observations may correspond to big changes in the phenomenon being observed. For example, two fuzzy red blobs seen through a frosted glass may look similar but correspond to very different objects, say an apple and a rose. The frosted glass smooths out the differences.

Ill-posed problems cause particular anxiety for mathematicians because the algorithms to solve them tend to be unstable. In the frosted-glass example, one might imagine constructing a sensitive and very expensive machine that could tease out the subtle differences between fuzzy red blobs and tell whether an apple or a rose was behind the glass. But if the machine is sensitive to small differences, it is sensitive to small errors as well. A tiny glitch in the data-say, a bubble in the glass-might make the machine conclude that the apple is a rose.

One cure for instability is prior information about the unknown region or object. In medical inverse problems, we expect the body to contain certain organs that have certain known ranges of conductivity. We can rule out a reconstruction that gives the patient two hearts or three kidneys.

More data can sometimes improve the reconstruction. With more electrodes and more-precise measurements, we can reduce the size of the smallest detectable objects in the body. But more data, especially in an unstable problem, can be a double-edged sword. Suppose, in our previous example, we had decided that the object behind the frosted glass was a rose. We might try to confirm it, and get a better picture of the unknown object, by looking from another angle. But our unstable machine might decide that, from this new angle, the red blob looks like an apple! Then we would have a new problem: how to explain an object that looks like a rose from one angle and an apple from another. This is the problem of data consistency.

The symptoms of nonlinearity, nonuniqueness, instability and inconsistency are more or less endemic to inverse boundary-value problems. Some applications are less afflicted than others. For example, the inverse problems associated with CAT (computed axial tomography) scans turn out to be linear, because x-ray photons blast right through tissue, traveling mainly in a straight line. The same property that makes x rays dangerous-their high energy-also makes them produce images that are crisper and easier to interpret. The low-energy electrons in EIT, on the other hand, meander all over the place, depending on the local electrical properties.

Since x-ray tomography is mathematically simpler, it became a practical tool in medicine before EIT. Nevertheless, EIT has already been used successfully to image respiration and gastric emptying, where the images can be compared over time. Because EIT can show the location of blood within the body, it may also become useful for diagnosing bleeding in the brain or blood clots in the lungs. (The latter condition, a common complication of surgery, currently can be diagnosed only by means of radioactive tracers or by injecting contrast media.) EIT may be able to do these tasks at perhaps one-hundredth of the cost of x-ray tomography without exposing patients to high-energy, and potentially cancer-causing, radiation.

Methods of Solution

So far I have explained why inverse boundary problems are difficult, but I have not said much about how to solve them. Here mathematicians can take advantage of the tremendous computing power that has become available in the past few years, but harnessing the computer in an effective way still calls for human ingenuity. I shall not attempt an exhaustive survey of the methods that have been used; what follows is a selection, including some popular and time-tested methods and some that are more experimental.

In the idealized version of the electrical-impedance equation, the function we seek (the electrical conductivity of tissues in the body) is supposed to be evaluated at every point in the unknown region: in theory, infinitely many points. However, no computer has been or will ever be made that can store an infinite amount of data. Hence the first step in many approaches is to make the problem discrete. One way to do this is to divide the unknown region into a grid, and specify the value of the function on each grid element. This is, for example, how computers display images.

In a computer model we have developed at Rensselaer, we represent a two-dimensional slice of the body by a concentric grid with 496 rectangles. (The graduate student who wrote our computer code nicknamed this grid “the Joshua tree,” both because it resembled tree rings and because he was a fan of the rock group U2, which recorded an album by that name.) The number 496 is chosen to match the number of variables in the model with the amount of data, as in the earlier rule of thumb. With 32 electrodes, we can apply 31 different patterns of currents to the body before we start getting redundant data. (For example, a one-ampere current from electrode 1 to electrode 2, or a one-ampere current from electrode 2 to electrode 3 and so on.) That gives us 32 x 31 = 992 voltage measurements, but symmetry considerations make half of them redundant. From the remaining 496 independent voltage measurements, the rule of thumb says we should be able to reconstruct 496 conductivities.
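
The arithmetic of that count is easy to check (a sketch; the halving reflects the reciprocity symmetry of resistive circuits):

```python
# Counting the data in the 32-electrode system described above.
electrodes = 32
patterns = electrodes - 1            # 31 current patterns before redundancy
voltages = electrodes * patterns     # 32 x 31 = 992 measurements
independent = voltages // 2          # symmetry (reciprocity) halves them
print(independent)                   # 496, matching the 496 grid rectangles
```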

If we number the rectangles in the grid from 1 to 496, then we can use $\sigma_1$ to denote the conductivity of tissue in the first rectangle, $\sigma_2$ for the conductivity in the second rectangle and so on up to $\sigma_{496}$. Similarly, the voltage measurements can be labeled from $U_1$ through $U_{992}$.

We assume that we have a way to solve the forward problem. That is, starting with any input values for the conductivities $\sigma_1$ to $\sigma_{496}$, we can compute the output voltages $U_1$ to $U_{992}$ that we would measure at the electrodes if the body actually had these conductivities.

Of course, human beings do not have the same body plan as a Joshua tree. So our discrete model will not be accurate, and our simulated output will not be identical to the voltages that the electrodes actually record. Thus we need some way to compare the simulated measurements with the real ones. This is done by defining a “distance” between two sets of measurements: for example, by adding up the squares of the differences between corresponding measurements. If the actual voltages measured by the electrodes on the patient are $\bar U_1$, $\bar U_2$ and so on, then the “distance function” is

$$(\bar U_1 - U_1)^2 + (\bar U_2 - U_2)^2 + \cdots + (\bar U_{992} - U_{992})^2.$$

This distance function gives us a quantitative measure of the error in our discrete model. It is not the only way to measure the error, but it is a popular and convenient one.

As explained above, it is unlikely that we will be able to get the error down to zero. The idea of the optimization method for solving inverse boundary-value problems is to look for the conductivities $\sigma_1$ through $\sigma_{496}$ that make the error in the voltages as small as possible. That set of conductivities, then, is our best estimate of what is really happening inside the patient’s body. Mathematicians have a variety of techniques to find which discrete-data model has the smallest possible error. Generally we start with a guess, and choose a method of improving the guess to decrease the error.
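
In outline, the optimization method can be sketched in a few lines. The forward model below is an invented nonlinear map with 16 unknowns, a stand-in only; a genuine code would solve the electrical-impedance equation on the 496-element grid:

```python
import numpy as np
from scipy.optimize import least_squares

# The optimization method in miniature, with a toy forward model.
rng = np.random.default_rng(0)
n = 16
A = rng.standard_normal((n, n)) / np.sqrt(n)

def forward(sigmas):
    v = A @ sigmas
    return v + 0.1 * v**2            # an invented, mildly nonlinear "physics"

truth = np.ones(n)
truth[5:8] = 4.0                     # a conductive anomaly to find
measured = forward(truth)            # stand-in for the electrode data

def residual(sigmas):
    # least_squares minimizes the sum of the squares of these
    # differences -- precisely the distance function defined above
    return forward(sigmas) - measured

fit = least_squares(residual, np.ones(n), bounds=(1e-3, np.inf))
print(np.round(fit.x, 2))            # should recover the anomaly (for this toy)
```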

Optimization methods are commonly used. If one needs an answer to a real inverse problem, optimization is a good technique to try. The ideas apply to almost every possible problem, and the methods can generally be stabilized. They produce a reconstruction that is reasonably consistent with the data.

However, optimization methods tend to require a great deal of computation, and hence are slow and expensive. Also, they can be fooled by “local” minima that do not represent the smallest possible error. For example, a person seeking to minimize his altitude above sea level might move to the bottom of a valley, but it would be difficult, without a topographical map, to tell whether that location was lower than the bottom of the next valley over.

A second classical approach to nonlinear inverse problems is linearization: substituting a “nearby” linear problem as a proxy for a more difficult nonlinear one. After making that substitution, we solve the linear problem using the measured data. This sounds like a fudge, but there are grounds for expecting it to work in some cases. For example, linearization tends to be very useful when the problem is known to be close to a well-understood one. In the acoustic case, if we understand how sound reflects off a perfectly round sphere, then we can use a linearized acoustic equation to model how it reflects off a sphere with small bumps. The nonlinear reverberations between the bumps can be ignored because they are so small.

When is linearization likely to give good results? If there is reason to think that the true configuration is a small perturbation of a known one, then linearization is the method of choice. Algorithms based on linearization are often faster and require less computer memory than optimization methods. Moreover, a study of a linearized problem often gives insight into the corresponding nonlinear problem. Unfortunately, linearization is often used-for lack of a better method-even in the case of large perturbations. In that case, it is not clear how the answers are related to the true solution of the problem.
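
Here is linearization in miniature, with an invented two-parameter forward map F so the linear algebra stays visible (real problems have hundreds or thousands of unknowns):

```python
import numpy as np

# Linearization sketch. F is an invented stand-in for the map from
# sound speeds or conductivities to boundary measurements; sigma0 is a
# known nearby reference configuration.

def F(s):
    return np.array([s[0]**2 + s[1], s[0] + 3*s[1]**2, s[0]*s[1]])

def jacobian(s, h=1e-6):
    # finite-difference Jacobian: the linear map that best
    # approximates F near the reference point
    J = np.empty((3, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (F(s + e) - F(s - e)) / (2 * h)
    return J

sigma0 = np.array([1.0, 1.0])          # the well-understood configuration
data = F(np.array([1.1, 0.95]))        # "measured" data from a nearby truth

# Replace F by its linearization and solve the *linear* least-squares
# problem  J (sigma - sigma0) = data - F(sigma0).
delta, *_ = np.linalg.lstsq(jacobian(sigma0), data - F(sigma0), rcond=None)
print(sigma0 + delta)                  # about (1.10, 0.95): one step suffices
```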

A newer approach to solving inverse boundary-value problems goes by the name of layer-stripping. We reconstruct the desired quantities on the top surface of the unknown region, then mathematically synthesize the measurements that would have been made a small distance below the top if the top layer had been absent. The synthesized measurements are now treated as boundary values, and the process is repeated until we have simulated the whole region. In essence, we propagate the boundary values downward as if they were a wave.

Layer-stripping methods are appealing because they are based on a simple idea, they are fast and they are not restricted to small perturbations. On simulated data, layer-stripping sometimes works quite well. But on real data, with measurement and modeling errors, these methods have, so far, proven to be too unstable. Also, unlike the optimization approach, layer-stripping does not guarantee that the reconstructed image will be consistent with the measured data.
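
For a one-dimensional medium made of equal-travel-time layers, layer-stripping can be written out exactly as a textbook recursion on the reflection response. The sketch below (an idealization, not a production seismic code) builds a synthetic response from three reflection coefficients and then peels them back off:

```python
import numpy as np

# R(z) is the reflection response as a power series in z, one round-trip
# delay per layer. add_layer is the forward problem; strip_layer undoes it.

def add_layer(R, k):
    # response with one more layer (reflection coefficient k) on top:
    # (k + z R) / (1 + k z R), expanded by long division of the series
    n = len(R)
    num = np.zeros(n); num[0] = k; num[1:] = R[:-1]        # k + z R
    den = np.zeros(n); den[0] = 1.0; den[1:] = k * R[:-1]  # 1 + k z R
    out = np.zeros(n)
    out[0] = num[0]
    for t in range(1, n):
        out[t] = num[t] - np.dot(den[1:t+1], out[t-1::-1])
    return out

def strip_layer(R):
    # read off the top reflection coefficient, then synthesize the
    # response that would be measured just below the stripped layer
    k = R[0]
    n = len(R)
    num = R.copy(); num[0] = 0.0                           # R - k
    den = -k * R; den[0] += 1.0                            # 1 - k R
    T = np.zeros(n)
    for t in range(1, n):                                  # T = (R-k)/(1-kR)
        T[t] = (num[t] - np.dot(den[1:t+1], T[t-1::-1])) / den[0]
    return k, np.append(T[1:], 0.0)                        # divide by z: shift

ks = [0.5, -0.3, 0.2]            # true reflection coefficients, top down
R = np.zeros(16)                 # a featureless halfspace at the bottom
for k in reversed(ks):           # build the synthetic medium bottom-up
    R = add_layer(R, k)

recovered = []
for _ in ks:                     # peel the "measured" response top-down
    k, R = strip_layer(R)
    recovered.append(round(k, 6))
print(recovered)                 # [0.5, -0.3, 0.2]
```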

A more general method of solution is called successive approximation. There are many ways to make successively better reconstructions of an unknown region. One approach is to linearize the problem about some reference problem, solve the linearized problem, and then use the reconstruction as a new reference problem. This procedure can be repeated as often as desired.
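
A sketch of that loop, reusing the invented two-parameter forward map from the linearization example (restated so the fragment stands alone), this time aiming at a large perturbation:

```python
import numpy as np

# Successive approximation by repeated linearization: solve the
# linearized problem, adopt the answer as the new reference, repeat.

def F(s):
    return np.array([s[0]**2 + s[1], s[0] + 3*s[1]**2, s[0]*s[1]])

def jacobian(s, h=1e-6):
    J = np.empty((3, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (F(s + e) - F(s - e)) / (2 * h)
    return J

data = F(np.array([1.6, 0.4]))         # a *large* perturbation this time
sigma = np.array([1.0, 1.0])           # the initial reference problem
for _ in range(20):                    # re-linearize about each new answer
    step, *_ = np.linalg.lstsq(jacobian(sigma), data - F(sigma), rcond=None)
    sigma = sigma + step
print(sigma)                           # approaches (1.6, 0.4) when it converges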

Another class of successive approximation methods, also called continuation methods or homotopy methods, involves the dependence of the problem on a parameter. One value of the parameter may correspond to a reference problem that is known, and another value corresponds to the problem one wants to solve. The method of successive approximations consists of changing the parameter in small increments, and making corresponding small changes in the reconstruction.

To apply the homotopy method to the acoustic problem, we could take the parameter to be the frequency. Oil prospectors use frequencies that vary from 5 to 80 cycles per second. We could use linearization to solve the acoustic equation at very low frequencies (long wavelengths). In general, the smallest details one can hope to resolve are about the same size as one wavelength. Thus the linearized model would reveal the broad features of the underground landscape, omitting anything smaller than half a kilometer. As the frequency is increased (and the wavelengths get shorter), the data would contain progressively more information about details of the rock formation. The computer could gradually fill in details until the picture had a resolution of about 30 meters.
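
The following toy calculation (all numbers invented) shows why the sweep matters. Estimating a travel-time delay at 80 cycles per second alone fails, because delays that differ by a whole period fit the data equally well; sweeping upward from 5 cycles per second keeps each local search inside the correct valley:

```python
import numpy as np

# A toy frequency-continuation sketch: estimate a travel-time delay tau
# from waveform data, sweeping through the seismic band.

t = np.linspace(0.0, 1.0, 500)
true_tau = 0.04                      # the delay we pretend not to know

def misfit(tau, f):
    model = np.cos(2 * np.pi * f * (t - tau))
    data = np.cos(2 * np.pi * f * (t - true_tau))
    return np.sum((model - data) ** 2)

# Refine tau by a purely *local* search at each frequency: look only
# within half a period of the last estimate, then raise the frequency.
tau = 0.0                            # a poor initial guess
for f in [5, 10, 20, 40, 80]:
    window = np.linspace(tau - 0.5 / f, tau + 0.5 / f, 2001)
    tau = window[np.argmin([misfit(w, f) for w in window])]
print(tau)                           # close to 0.04: each step stays in
                                     # the right valley

# Jumping straight to 80 cycles per second "cycle-skips": a delay that
# is off by a whole period fits the high-frequency data perfectly.
window = np.linspace(-0.5 / 80, 0.5 / 80, 2001)
print(window[np.argmin([misfit(w, 80) for w in window])])   # wrong valley
```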

Successive approximation methods do address the full nonlinearity of the problem. However, they tend to be computationally intensive. Also, it is not clear under what circumstances the iterative process will converge to the correct answer.

Conclusion

In a broad sense, humans have been solving inverse problems-reasoning from effects to causes-for a very long time, perhaps as long as the species has existed. But inverse boundary-value problems, nevertheless, have a very modern feel to them. As geology probes ever more remote regions of the earth, or medicine seeks more-sensitive tests of the body that do no harm, the importance of measurement at a distance seems likely to increase. Fortunately, along with our increasing interest in these problems, computers have dramatically enhanced our ability to solve them. But we should not forget that the key technology-both for telling the computer what to do and interpreting what it tells us-is still mathematics.