Matthew Stanley. Scientific Thought: In Context. Eds. K. Lee Lerner & Brenda Wilmoth Lerner. Vol. 1. Gale, 2009.
Stellar astronomy is the study of the sun and the stars: what they are, how they move, and how they are born, live, and die. The major challenge of this field is the staggering distance to even the closest stars. Scientists had great difficulty understanding stars without being able to touch them and experiment upon them. This difficulty required radical, creative thinking and a constant willingness to embrace new ideas and techniques.
Historical Background and Scientific Foundations
The motion of the sun, both on a daily and annual basis, is one of the most basic human observations of the sky. Most ancient civilizations developed techniques of varying sophistication for tracking, recording, and predicting these motions for agricultural and ritual purposes. Some devices have survived in the form of structures, such as Stonehenge in England.
Greek astronomers investigated the stars chiefly as fixed points against which to measure the motion of the planets. Despite their limited tools, they were able to make impressive estimates of the size of, and distance to, the sun. Aristarchus of Samos (c.310-c.230 BC) used the geometry of eclipses to calculate that the sun was roughly 19 times as far from Earth as the moon, and about 6¾ times the size of Earth. These numbers are far smaller than modern values, but they represent a significant mathematical achievement for their day.
Physical understandings of the sun and stars were codified by the Greek philosopher Aristotle (384-322 BC), whose cosmological system remained intact and in use for nearly two thousand years. Aristotle described the universe as a series of nested spheres, with Earth at the center and the fixed stars at the outside. A critical element of Aristotle’s system was the lunar boundary: below the boundary was the world of earth, air, fire, and water—a world that was marked by constant change—and above it everything was perfect and unchanging in every way. Thus both the sun and the stars, being above the boundary, were thought to not change over time (other than their circular motion). Any changes in the sky, such as meteors, were assumed to be atmospheric phenomena.
The Sun at the Center
Two factors, conceptual and observational, combined in the sixteenth and seventeenth centuries to make the sun a focus of interest. The first was the emergence of the idea that the sun was in some way special. The stimulus for this elevation of the sun’s significance is usually attributed to the Polish astronomer Nicolaus Copernicus (1473-1543), whose cosmological system placed the sun at the center of the universe. Copernicus’ motivations were complex, but he argued that the sun was the most important celestial body and that a central location was more fitting: “At rest, in the middle of everything, is the Sun. For in this most beautiful temple, who would place this lamp in another or better position than that from which it can light up everything at the same time?”
One of Copernicus’ most ardent followers, the German astronomer Johannes Kepler (1571-1630), thought the central location of the sun had important physical and theological significance. He thought it might provide a physical force that moved the planets. He made little progress with this idea, but it provided the foundational concept that the sun had physical effects on the planets. From his cultural background, Kepler also saw Christian theological meaning in the structure of the Copernican cosmos.
The Parallax Problem
Few of the early Copernicans gave serious thought to the nature of the stars, but their cosmology did make an important prediction about their appearance. If, as Copernicus claimed, the Earth moved, then astronomers should be able to observe a phenomenon known as stellar parallax. Parallax is the apparent motion of a nearby object relative to a distant one as viewed by an observer who is also in motion. An observer on a moving Earth should be able to see nearby stars moving relative to distant stars on an annual basis. This was not observed, however, and that meant either that Copernicus was wrong, or the universe was vastly larger than previously assumed, since parallax decreases with the distance of the observed objects.
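The scale of the problem can be made concrete with the modern small-angle relation: the parallax angle is set by the ratio of Earth's orbital radius to the star's distance. A minimal sketch using modern values (none of which were available to the Copernicans; the numbers are purely illustrative):

```python
import math

AU = 1.496e11              # Earth-sun distance in meters (modern value)
LIGHT_YEAR = 9.461e15      # meters
ARCSEC_PER_RADIAN = 206265

def parallax_arcsec(distance_m):
    """Annual parallax: the angle subtended by Earth's orbital
    radius as seen from the star, in arcseconds."""
    return math.atan(AU / distance_m) * ARCSEC_PER_RADIAN

# Even the nearest star system (Alpha Centauri, ~4.37 light-years)
# shifts by well under one arcsecond over the year -- far below what
# pre-telescopic instruments could detect.
p_near = parallax_arcsec(4.37 * LIGHT_YEAR)
# A star ten times farther shifts ten times less.
p_far = parallax_arcsec(43.7 * LIGHT_YEAR)
```

The sixteenth-century failure to detect any such shift thus implied that the stars, if Earth moved, had to be at distances enormously larger than anything in the accepted cosmos.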
In the years between Copernicus and Kepler, dramatic advances were made in astronomical technology and technique that helped overturn the Aristotelian understanding of the stars and the sun. One of the major figures was the Danish astronomer and aristocrat Tycho Brahe (1546-1601). Tycho was granted an entire island by the Danish king and used tremendous resources to build a sophisticated observatory with massive versions of traditional instruments such as the quadrant. His observations were vastly more accurate than any made before, and two in particular made an impact. The first was in 1572, when he saw that a new star had appeared in the sky (a nova); the second was a comet in 1577. Using his uniquely precise tools and methods, Tycho was able to convincingly demonstrate that both of these phenomena must have been above the moon. This indicated that the celestial realm was not unchanging, as Aristotle had claimed.
It is interesting to note that the 1572 nova was the first recorded in Western astronomy, although several earlier novae had been observed and recorded by Chinese astronomers. This is a striking example of how people, even trained astronomers, sometimes see only what they expect to see: since Aristotle said changes among the stars were impossible, observations of such changes were simply rejected.
A generation after Tycho, the Italian astronomer Galileo Galilei (1564-1642) improved the design of the telescope, a new tool invented in the Netherlands. He turned his telescope to the night sky, observing a variety of phenomena not previously recorded. He found that where the planets appeared as disks in the telescope, the stars remained as tiny points. He interpreted this to mean that they were much farther away than the planets, which he used to explain his failure to detect the stellar parallax expected by Copernicus’ theory. Through his telescope he also saw new stars that dramatically expanded the scope of the universe.
While examining the sun, Galileo found that it was not perfect, as the Aristotelian system demanded. In the summer of 1612 he followed in the footsteps of the Jesuit astronomer Christoph Scheiner (1573-1650) and others by using his telescope to explore the surface of the sun. All of these men saw strange dark patches on the sun, which came to be called sunspots. These spots traveled across the sun, sometimes breaking into smaller spots or disappearing entirely. Galileo argued that this was evidence that the sun was imperfect and changing, while Scheiner provided a variety of alternative interpretations.
Galileo’s investigations persuaded many people that the Aristotelian explanations of the sun and stars were lacking. The seventeenth century saw a number of important figures advancing completely new views regarding the composition of the sun and of its place in the solar system. Most important was the French philosopher René Descartes (1596-1650) who postulated that the sun sat at the center of an enormous vortex of moving particles that carried the planets in their orbits. He also proposed that the stars seen in the night sky were identical to our sun and sat at the center of their own vortices filled with planets. Descartes’ vision expanded the scope of the universe to infinity, even implying that there could be many inhabited worlds around other stars.
Precise measurements of the sun’s distance and size became increasingly possible with improved instrumentation and theory in the late seventeenth and eighteenth centuries. The 1769 transit of Venus established the solar distance as 24,000 Earth radii and the solar radius as 112 Earth radii. Earlier, the English physicist and mathematician Isaac Newton (1642-1727) had used his new mathematical physics to estimate the sun’s mass as 28,700 times that of Earth; the figure was improved in the eighteenth century to 357,000 Earth masses, within 10% of the modern value.
These observations and calculations, while impressive, composed essentially the only facts known about the sun until the middle of the nineteenth century: the largest and most prominent body of the solar system was still a highly mysterious object.
Even with the powerful tools of the telescope and Newtonian physics, it proved difficult to make significant progress in the study of the distant stars. Astronomy as a discipline focused its attention on the planets, whose changing appearances and surface features could actually be seen through a telescope, whereas the stars remained mere points of light. Accordingly, the stellar realm was left largely to amateurs such as the innovative English astronomer William Herschel (1738-1822).
Herschel, a German-born musician who emigrated to England and eventually took a job as an organist in Bath, was self-taught in astronomy and natural philosophy. He learned to make his own precision telescopes and developed novel observing techniques. Assisted by his sister Caroline, a gifted observer in her own right, he charted the night sky in extraordinary detail. He tried to measure stellar parallax via a method suggested by Galileo: observing two stars very close to each other in the sky (so-called double stars), so that their motion relative to each other would be especially noticeable. This method assumed double stars were just an optical trick (stars that happened to line up in the sky), but Herschel’s careful observations revealed that many double stars actually revolved around each other. These gravitationally bound pairs demonstrated the universality of Newtonian gravity, but true parallax remained elusive.
Herschel also followed up on another unexpected result of the search for parallax. In 1718 the English Astronomer Royal, Edmond Halley (1656-1742), found that a number of stars had moved an observable distance since antiquity. These changes were called the proper motions of the stars, and they seemed to indicate that the stars were not fixed in place as had been assumed, but were in detectable motion. Herschel realized that some of these motions could result from the sun’s own movement through a group of stars and tried to calculate the direction of that motion. He found that the sun could be described as moving toward a point in the sky (the solar apex) in the constellation Hercules.
Unable to directly measure the distance to the stars, Herschel tried an indirect technique that he called star gauging. He assumed that all stars were similar to the sun and of uniform intrinsic brightness, and thus inferred that dimmer stars were more distant. Because stars in fact differ widely in brightness and in other important attributes, Herschel’s results were not entirely correct, but his assumption of the sun as a typical star was a necessary tool for the stellar astronomy of the day.
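Herschel's uniform-brightness assumption amounts to applying the inverse-square law of light: if every star emits the same amount of light, apparent brightness falls off as the square of distance. A small illustration of that reasoning (a sketch of the logic, not Herschel's actual procedure):

```python
def relative_distance(apparent_brightness_ratio):
    """Under the assumption that all stars are equally luminous,
    relative distance follows from the inverse-square law:
    brightness ~ 1/d**2, so d ~ 1/sqrt(brightness)."""
    return apparent_brightness_ratio ** -0.5

# A star appearing 1/100th as bright as a reference star would be
# judged 10 times more distant.
d = relative_distance(0.01)
```

The weakness of the method is visible in the assumption itself: since real stars range over many orders of magnitude in luminosity, a dim star may simply be faint rather than far.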
The Nebular Hypothesis
A critical theoretical development in understanding stars was put forward in 1796 by Pierre-Simon Laplace (1749-1827), a French mathematician of extraordinary skill. His main concern was to show how the solar system could remain stable over long periods of time (an unresolved problem from Newton’s era), but he also proposed a powerful and controversial hypothesis: that the sun and the planets could have condensed from a primordial cloud of luminous matter. This nebula would have collapsed under its own gravity, and its rotation would have spun it into a large, thin disk. The central bulge of the disk would have formed the sun, and conglomerations of matter at different distances from the center would have formed the planets. Laplace thought this explained several features of the solar system: all the bodies in the solar system (planets and moons) orbit in virtually the same plane, and all revolve in the same direction. He calculated that the odds of such coincidences arising by chance were vanishingly small and concluded that they must be the result of a process guided by physical laws.
By the middle of the nineteenth century, stellar astronomy was still in its infancy. Almost nothing was known about what the stars were made of or why they shone. Most astronomers dealt with the sun only when calculating gravitational effects. One of the few clues about the sun’s nature was the study of sunspots. Many thought the spots were cool areas on the sun; Herschel proposed that the sun was a dark, solid body surrounded by a luminous atmosphere, and that sunspots were holes in the top layer revealing the cool, probably inhabited, regions underneath. In 1851, a retired German pharmacist, Heinrich Schwabe (1789-1875), demonstrated a ten-year cycle of sunspot activity. Many looked for correlations between the spots and terrestrial phenomena such as magnetic storms and crop prices. The British amateur astronomer Richard Carrington (1826-1875) found in 1858 that the spots showed differential rotation (spots near the solar equator move faster), suggesting that the sun was not solid. Other sources of information about the sun’s nature were observations made during solar eclipses. With the sun’s disk blocked by the moon, a variety of unusual phenomena could be seen, including the corona and enormous prominences extending from the sun’s surface. These observations strongly suggested that the sun had a very large, complex atmosphere of some sort, but the details remained elusive.
To many it seemed that humanity would simply never be able to learn more about the sun, given our inability to experiment on it directly. The French positivist philosopher Auguste Comte (1798-1857) declared the chemical composition of a star to be the very definition of unattainable knowledge. Rejecting this idea required the development of both new forms of instrumentation and a confidence that those instruments were reliable.
The key to exploring the sun as a physical object was found in an odd feature of the solar spectrum (the rainbow of light formed by passing the sun’s rays through a prism). Hundreds of strange black lines of varying widths and intensities in the spectrum were charted in elaborate detail by the Bavarian glassmaker Joseph Fraunhofer (1787-1826) as part of his efforts to make very high quality glass. He made no attempt to explain or interpret the lines, and many contemporary natural philosophers were even unsure whether they should be associated with the sun or with the terrestrial atmosphere.
Around the same time, chemists noticed that many elements produced bright colored lines when placed in a flame viewed through a prism. Many hoped that these lines could be associated uniquely with individual elements, allowing “spectroscopy” to easily determine the composition of a substance placed in a flame. This was the goal of the German chemist Robert Bunsen (1811-1899) in collaboration with the Prussian physicist Gustav Kirchhoff (1824-1887). In 1859 they were finally able to demonstrate what many had suspected: that the positions of the dark lines in the solar spectrum overlapped precisely with the bright lines of certain elements (notably sodium). Kirchhoff proposed a theoretical law that described gases as absorbing certain colors from light passing through them (producing dark lines) and also emitting those same colors when heated. This explained why lines were sometimes dark and sometimes bright, but further details would have to wait for the development of quantum mechanics.
Discovering Commonality in the Universe
By 1861 Kirchhoff had used the new spectroscope to find terrestrial elements (iron, sodium, etc.) in the solar spectrum. In Britain, the wealthy amateur astronomer William Huggins (1824-1910), collaborating with a chemist, turned his spectroscope to the stars. In 1864 he was able to successfully detect terrestrial elements in two bright stars. These observations indicated that the sun and stars were made out of the same elements as Earth, a finding that had tremendous implications for contemporary physical and biological theory. The nebular theory had suggested that the sun and planets formed from the same primordial cloud, so the similarity of their composition was seen as good support. This in turn gave support to other theories that relied on gradual development via natural law (such as Darwinian evolution). Further, the detection of terrestrial elements in the stars gave a great boost to the idea that planets and life were common in the universe.
Spectroscopy also forced astronomers interested in the sun to learn laboratory techniques from chemistry, a dramatic change from traditional astronomy. The astronomers who did learn these new techniques also benefited from another technological revolution of the nineteenth century: photography. Early photography was quite difficult, and the sun and stars were particularly challenging subjects due to Earth’s rotation. The sun was first successfully photographed in 1845, and in 1860 solar prominences were recorded. Stellar spectra were first photographed in 1872, making their analysis much easier and quicker. Similarly, photography allowed precision positional measurement of faint stars for the first time and helped establish a reliable system for quantifying the brightness of stars. Solar and stellar astronomy became increasingly dependent on spectroscopy and photography. At the end of the century, the American astronomer George Ellery Hale (1868-1938) made a tremendous contribution by convincing philanthropists to donate the money for enormous new observatories such as Mount Wilson. These new institutions and instruments revitalized the field of solar physics and became the exemplars for future research on the sun.
A “Burning” and “Shrinking” Sun
Astronomers and physicists interested in the sun were challenged when they tried to apply one of the great triumphs of nineteenth century science: the law of the conservation of energy. This law states simply that energy can be neither created nor destroyed, but only converted from one type to another (such as electrical energy to light energy, or chemical energy to heat). The acceptance of this law in the nineteenth century raised the natural question of the source of the sun’s energy: it produces huge quantities of heat and light, so where does that energy come from?
It seemed clear that the sun could not simply be burning in the chemical sense. It was calculated in 1833 that even if the sun were made completely of coal (the most concentrated fuel then known), it could only burn for a few thousand years. It had clearly been shining for longer. Julius Mayer (1814-1878), a German doctor, and J.J. Waterston (1811-1883), a Scottish engineer, independently suggested that meteorites falling onto the surface of the sun could heat it through the conversion of their kinetic and gravitational energy. However, it was found that the sun’s mass would then increase by a significant amount each year: so much that the change in the sun’s gravity would measurably alter Earth’s orbit. As no such change was observed, this solution to the problem of the sun’s energy was discarded.
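The coal argument can be reproduced with modern figures (the 1833 calculation used different numbers; this is an order-of-magnitude sketch): dividing the chemical energy stored in a solar mass of coal by the sun's rate of energy output gives a lifetime of only a few thousand years.

```python
SOLAR_MASS = 1.99e30        # kg (modern value)
SOLAR_LUMINOSITY = 3.83e26  # watts, the sun's total power output (modern value)
COAL_ENERGY = 3.0e7         # J/kg, roughly the heat of combustion of coal
SECONDS_PER_YEAR = 3.15e7

# If the entire sun were coal and burned completely:
burn_time_years = SOLAR_MASS * COAL_ENERGY / SOLAR_LUMINOSITY / SECONDS_PER_YEAR
# On the order of a few thousand years: too short even for recorded
# history, let alone geological time.
```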
A more promising solution came from the German scientist Hermann von Helmholtz (1821-1894), who was inspired by Laplace’s nebular theory. Helmholtz realized that the initial nebular collapse would liberate a large amount of energy through gravitational contraction—enough to heat the entire solar system to millions of degrees. The sun would thus begin as an extremely hot body and would continue to heat as it contracted, producing energy from its own collapse. This was seen as a convincing solution, as it explained the source of energy but required no new physics. It did predict that the sun should be shrinking steadily, but the predicted decrease was just below what could have been seen, so it could not be immediately proven or refuted.
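Helmholtz's contraction mechanism can likewise be checked with a rough modern calculation. The gravitational energy released in assembling the sun is of order GM²/R; dividing by the sun's luminosity gives the so-called Kelvin-Helmholtz timescale. This is an order-of-magnitude sketch with modern constants, not the nineteenth-century derivation:

```python
G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
SOLAR_MASS = 1.99e30        # kg
SOLAR_RADIUS = 6.96e8       # m
SOLAR_LUMINOSITY = 3.83e26  # watts
SECONDS_PER_YEAR = 3.15e7

# Gravitational energy released by contraction, to within a
# factor of order unity:
gravitational_energy = G * SOLAR_MASS**2 / SOLAR_RADIUS  # joules

kelvin_helmholtz_years = gravitational_energy / SOLAR_LUMINOSITY / SECONDS_PER_YEAR
# Tens of millions of years: vastly longer than chemical burning,
# but still far shorter than the ages geologists would soon demand.
```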
Foundations of Theoretical Astrophysics
A breakthrough came with the work of the English astrophysicist Arthur Stanley Eddington (1882-1944). In 1916 he developed a theoretical model of a star (a series of equations describing its structure, mass, temperature, etc.). Other attempts to do this had always foundered on two severe problems: no one knew the nature of the interior of a star, and no one knew the energy source. Eddington, however, made a series of clever approximations, picking very simple possibilities to make his calculations easier, even when the approximations were not likely. Much to his surprise, his models worked very well, validating his initial guesses. He was able to describe the internal structure of stars, predict how bright and roughly how large stars should be (including the “Eddington limit,” an upper bound on how massive a stable star can be), and suggest a relationship between stars’ masses and luminosities (the mass-luminosity relation), all of which were verified observationally.
Eddington still did not know the energy source. He proposed two possibilities that he thought reasonable in the light of the German-American physicist Albert Einstein’s (1879-1955) prediction of mass-energy equivalence: annihilation (electrons and protons destroying each other) and fusion (small atoms combining to form larger atoms, sacrificing some mass in the process). No one knew whether either of these processes occurred in nature; Eddington applied them because they were plausible and they made his models work. His methods helped found the discipline of theoretical astrophysics (using a combination of physics, astronomy, and mathematics to interpret and explain celestial phenomena), which relied on a sophisticated use of approximation and sometimes completely unknown factors.
Most astronomers hoped that the evolutionary history of stars could be explained by examining the spectral types of the 100,000 stars that had been studied closely by the beginning of the twentieth century. Between 1905 and 1914, the Danish astronomer Ejnar Hertzsprung (1873-1967) and the American astronomer Henry Norris Russell (1877-1957) hoped to contribute to solving this problem by looking for a relationship between the luminosity and color of stars. This required knowing the absolute luminosity (the actual brightness of a star, as opposed to how bright it appears from Earth) of a number of stars, which in turn required exact distances. The first stellar distance had been measured via parallax only in 1838, by the German astronomer F.W. Bessel (1784-1846), and in the intervening decades astronomers had managed to measure the parallax, and thus the distance, of only a few hundred stars.
Both Hertzsprung and Russell independently arranged these stars on a diagram with luminosity on the y-axis and spectral type (color) on the x-axis. The stars fell into a pattern called the “backward seven.” This pattern indicated a fundamental division in stars between so-called giants (the upper, horizontal branch) and dwarfs (the lower, sloping branch). The diagram showed that there were stars with the same color, and therefore temperature, but different luminosities. This can only be explained if the brighter stars are larger than their dimmer but similarly colored cousins. Many astronomers were skeptical of the apparently tremendous size of giant stars (larger than the orbit of Earth), but it was confirmed by Eddington’s theoretical calculations and by clever observations made with the interferometer of the American physicist Albert Michelson (1852-1931).
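The inference that giants must be enormous follows from the Stefan-Boltzmann law, L = 4πR²σT⁴: at a fixed surface temperature, luminosity scales with surface area, so radius scales as the square root of luminosity. A brief sketch with illustrative numbers:

```python
def radius_ratio(luminosity_ratio):
    """For two stars of the SAME surface temperature, the
    Stefan-Boltzmann law gives L ~ R**2, so R ~ sqrt(L)."""
    return luminosity_ratio ** 0.5

# A giant 10,000 times more luminous than a dwarf of the same color
# (and hence the same surface temperature) must have 100 times its
# radius -- easily larger than Earth's orbit for the brightest giants.
ratio = radius_ratio(1e4)
```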
Russell interpreted the diagram as showing the evolutionary sequence of any given star: they begin on the upper right, heat as they contract and move to the left, then cool and move down the dwarf branch (later known as the “main sequence”). This was the “giant-dwarf” theory of stellar evolution, which seemed to fit the observational data and the contraction theory of stellar energy very well. Unfortunately, Eddington’s theoretical work suggested that the giant-dwarf theory was untenable, and the Hertzsprung-Russell (HR) diagram was viewed as a body of empirical data rather than an evolutionary track. A full understanding of the evolution of stars would have to wait for progress in theoretical physics.
The evolutionary processes of stars were finally understood after developments in quantum mechanics, notably the Indian physicist Meghnad Saha’s (1894-1956) ionization theory, which for the first time allowed astronomers to estimate the abundances of different elements in stars. The American astronomer Cecilia Payne (later Payne-Gaposchkin, 1900-1979), working at Harvard in 1925, found that stars were over 90% hydrogen, with helium composing most of the rest. This information was critical to the calculations devised around 1930 showing that fusion of hydrogen atoms in stellar interiors was plausible. The theory was developed in detail by the German physicist Hans Bethe (1906-2005) in 1939, when he was a refugee from Nazi Germany living in the United States. Once the basic mechanism of nuclear fusion in stars was understood, astrophysicists were able to calculate how stars changed as they burned their hydrogen fuel over their lifetimes. A star’s position on the main sequence of the HR diagram is now understood to be determined by its starting mass: it stays there until it burns up its core hydrogen, then moves off the main sequence completely.
Fusion, the process that fuels the heat and light of the sun, describes how small, light atoms (such as hydrogen) combine to form larger, heavier elements (such as carbon). Because the nuclei of atoms are all positively charged and therefore repel each other strongly, fusion was long thought to be theoretically impossible, until the discovery of quantum tunneling, which allows nuclei to slip through the repulsive barrier at energies below the classical threshold. Even with tunneling, nuclei must collide at very high speeds, so only very hot, very dense environments (such as the center of a star) allow fusion to occur.
The details of these fusion processes are immensely complex, and astrophysicists relied on computer simulations for many of their results. Heavier elements are constantly being created from lighter ones in stars by a process called nucleosynthesis. In the 1950s, the English astrophysicist Fred Hoyle (1915-2001) and others had great success in showing that stellar nucleosynthesis could explain the amount of heavy elements found in the universe. This work was made possible not just by theoretical insight, but also by the progress of the Cold War. The nuclear reactions that power stars are the same ones that power hydrogen bombs, and many astrophysicists both worked on developing nuclear weapons and used the results of nuclear tests to provide critical empirical data.
One of the peculiar questions raised by these sophisticated evolutionary models was the end points of stars. Dwarf stars like our sun were found to burn their hydrogen over billions of years and swell into red giants, then collapse into a strange object called a white dwarf star. This is a hot, small (not much larger than Earth), super-dense (tons per cubic centimeter) star held up by a quirk of quantum physics called degeneracy pressure. The first known white dwarf, Sirius B, was detected by its gravitational pull on its much larger companion star (Sirius A, also known as Alpha Canis Majoris A, the Dog Star). The physics of white dwarfs was described in 1935 by the Indian-American astrophysicist Subrahmanyan Chandrasekhar (1910-1995), who showed that there was a maximum mass for such objects.
A giant star, however, would burn up its hydrogen much faster than a dwarf (over a period of millions of years) and would die a violent death in a huge explosion called a supernova. Supernovae are an important part of modern astronomy: They are visible at great distances (the new star seen by Kepler in 1604 was a supernova) and one type is used as a standard candle to establish the distances to faraway galaxies. Supernovae also hurl the heavy elements made in fusion processes out into the universe, distributing the solid matter of which we are made, and sometimes result in the formation of a neutron star (an even denser degenerate star). Neutron stars were first observed in the 1960s in the form of pulsars, swiftly rotating stars that emit powerful pulses of radio waves. Some extremely large stars form even stranger objects during their final collapse: black holes. A black hole is the remains of a star with a gravitational pull so strong that it actually warps space and time, not even allowing light to escape.
Stellar astronomy remains a very active field in the twenty-first century. The births and deaths of stars are investigated in ever more detail and astronomers are particularly interested in how the very first stars in the universe came to form. The study of stars also connects closely to other fields: astronomers interested in the development of planets need to understand the evolution of stars, and many physicists study degenerate stars and black holes from their own perspectives. Stars are the most numerous objects in the universe, and their study will remain critical for the advance of science.
Modern Cultural Connections
The energy from fusion relies on a strange fact of the atomic world: the whole is sometimes less than the sum of its parts. For example, a helium nucleus, which has two protons and two neutrons, has less mass than the combined masses of two protons plus two neutrons. This missing mass is called the mass defect. The mass defect is converted into energy during the fusion process, as described by Einstein’s famous equation E = mc². In this way, as hydrogen in the center of a star is turned into helium, huge amounts of energy are produced. Helium is then further fused into larger elements such as carbon, oxygen, and nitrogen. Nearly all of the elements in the universe heavier than lithium and lighter than lead are formed by nuclear reactions in the centers of stars, then hurled out into space when a star dies in a huge explosion called a supernova. The very heaviest elements, such as uranium, are formed only during this explosion. Therefore, every piece of solid matter was at some point in the past formed in the center of a star and dispersed during its death throes.
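The helium example can be worked through with measured nuclear masses (modern laboratory values, quoted here to illustrate the scale): the defect is under one percent of the mass, released as roughly 28 MeV per helium nucleus formed.

```python
# Masses in unified atomic mass units (u), modern measured values:
PROTON = 1.007276
NEUTRON = 1.008665
HELIUM4_NUCLEUS = 4.001506

MEV_PER_U = 931.494  # energy equivalent of 1 u, via E = mc^2

# The whole weighs less than the sum of its parts:
mass_defect_u = 2 * PROTON + 2 * NEUTRON - HELIUM4_NUCLEUS

energy_mev = mass_defect_u * MEV_PER_U                        # ~28 MeV per nucleus
defect_fraction = mass_defect_u / (2 * PROTON + 2 * NEUTRON)  # under 1% of the mass
```

Scaled up to the 10³⁸ fusion reactions per second occurring in the sun, this fraction of a percent of mass accounts for the sun's entire energy output.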
Political, Social, and Religious Implications
The theories of the origins and evolution of stars were seen to have great political, social, and religious implications. In fact, the nebular theory remained widespread and influential despite its technical problems, in large part due to these implications. Laplace himself was adamant that the theory showed that the universe could be explained solely through natural processes without any divine intervention or creation (when asked why God did not appear in his work, he replied “I have no need of that hypothesis”). This rejection of religious explanations fit well with the anti-clericalism of the French Revolution, and Laplace became an influential figure in the Napoleonic state.
Not everyone thought the nebular theory was necessarily atheistic. Decades before, the German philosopher Immanuel Kant (1724-1804) proposed an idea very similar to Laplace’s (though it was never widely publicized). Kant concluded that the formation of the sun and planets from a nebula was an indication of divine action. He argued it would be impossible for natural processes by themselves to produce order from chaos, which showed that a divine being planned for such development to occur.
Many people also saw political significance in the nebular theory. Political reformers in early nineteenth-century Britain argued that progressive change in nature, such as nebular collapse, showed that progress should also be sought in politics. Change was justified as natural. The Scottish astronomer John Pringle Nichol (1804-1859) stated: “In the vast Heavens, as well as among phenomena around us, all things are in a state of change and progress….” Natural law was thus linked to political and moral law, and physical science was used to justify and better understand human institutions.
The most important impact of the nebular theory in the eighteenth and nineteenth centuries was its contribution to evolutionary world views. It trained scientists to think in terms of extremely long periods of time and in terms of gradual change due to natural causes. This helped ease the reception of Darwin’s ideas in the middle of the century, since much of the anxiety over the implications of progressive change had already been resolved.
Astronomy Provides a Boost to Women in Science
Astronomy was one of the first fields of physical science that had substantial numbers of women. This was due to some surprising developments resulting from the technical challenges of stellar classification. By the 1860s it was clear that stars varied widely in spectra, and astronomers attempted to classify them and understand their variation. By 1910 there were twenty-three different classification systems for stars, causing great confusion. Edward Charles Pickering (1846-1919), director of the Harvard College Observatory, based his system on very large, very accurate samples of data (eventually over 350,000 stars). The result was the Henry Draper Catalogue and the Harvard System, which brought many innovations. New spectral techniques allowed the capture of multiple spectra on one photograph, making the gathering of data much faster. However, this also required a much faster system for processing the data, as there were simply not enough trained astronomers to analyze such massive amounts of information.
Pickering’s solution was to hire a large number of women (often slightingly called “Pickering’s Harem”) and train them to accomplish the tedious task of examining and classifying the photographs. Up to this time there were very few women involved in astronomy; most universities did not even grant degrees to women, because women were erroneously thought to be unable to think analytically or creatively. Instead, cultural preconceptions held that women were more patient than men and more willing to sit still for the monotonous hours needed to analyze the spectra. This crude gender bias actually proved helpful to women because it provided an invitation into the observatory and a chance to work on important scientific projects. Further, the women at the HCO were resourceful in finding ways to make important scientific contributions despite many institutional and social barriers.
For example, Annie Jump Cannon (1863-1941) came to Harvard in 1896 to work as a classifier. She was instructed to use Pickering’s classification system, which tried to arrange the stars in order of temperature (A through J) by measuring the strength of their hydrogen absorption lines. Cannon, however, was much more familiar with the actual data than Pickering, since she handled the photographic plates on a daily basis. She rearranged Pickering’s categories to provide a more reliable temperature sequence, which has survived to the present (OBAFGKM). Henrietta Leavitt (1868-1921) also made important discoveries while at Harvard.
A Cosmos at Odds with Fundamentalist Religious Views
Stellar astronomy speaks to some of the most important issues about the nature of the world and the place of humans in it. The nebular theory and its implications were important parts of the debates over Darwinian evolution in the nineteenth century. William Thomson asserted that the hundred million years allowed by the gravitational contraction theory was simply not long enough for humans to have evolved. In the twentieth century the use of stellar astronomy to date astronomical objects became an important part of establishing that the universe is many billions of years old. It is therefore attacked by some religious fundamentalists who argue that scripture dictates a far younger Earth.
The birth and evolution of stars are also fundamental to the question of life elsewhere in the universe. The most basic requirement for extraterrestrial life is the existence of life-supporting planets outside our solar system. Astronomers’ knowledge of planets is still quite tentative, and so far the only discernible planets are inhospitable. Therefore, our knowledge of whether life-supporting planets are common or rare still relies almost completely on our understanding of how stellar nebulae develop into stars and planets. Further investigation of the physics underlying the evolution of stars will remain scientists’ best way to investigate the possibility (and some scientists would argue “probability”) of extraterrestrial life.
Seeking the Energy of the Stars
Rather surprisingly, stellar astronomy has had some effect on technological developments. In the period after World War II, knowledge of the interiors of stars led to the development of nuclear weapons, commonly known as hydrogen bombs or H-bombs, based on fusion. Physicists have devoted a great deal of effort to developing techniques that would allow the use of fusion for peaceful energy production, but have yet to make any real progress. If the process of stellar fusion can be harnessed successfully, it would provide a clean, nearly limitless source of power.
Primary Source Connection
The following article was written by Gail Russell Chaddock, the congressional correspondent for the Christian Science Monitor. Founded in 1908, the Christian Science Monitor is an international newspaper based in Boston, Massachusetts. The article describes the explosion of scientific knowledge in the twentieth century, along with its context in history and the new millennium.
THE MAP OF THE MILLENNIUM IS DRAWN BY THINKERS
BOSTON—If it were necessary to define the modern era in one word, technology would be that word. We live in a world pulsing with technology—digital phones and cameras, the Internet, the space shuttle, cloned sheep, and countless other wonders. But the scientific theories upon which human invention rests often get short shrift and are little understood. Today, in the first of the Monitor’s special millennium reports, we look back over the past 1,000 years and chronicle the major scientific theories and the changes they brought. Technological progress, or advances in the human condition, guided our selections.
On July 4, 1054, a great light from an exploding star appeared in the constellation Taurus. Observers in China, Japan, and the Middle East recorded this bright star, or supernova, which created the Crab Nebula and was visible even in daylight. The Anasazi, or “ancient ones,” in the American Southwest painted it on a canyon wall.
To these skywatchers at the beginning of the millennium, the heavens determined the fate of men. The Anasazi aligned their great houses and kivas with sacred celestial objects, whose movements determined when to plant, sacrifice, or start a pilgrimage. Chinese court astrologers kept precise records of celestial events to alert the emperor to good or bad times to come.
Such close observations of the night skies marked a giant step away from peering into the entrails of sacrificed animals to predict the future—the “science” of many ancient peoples.
But observation alone did not produce the surge of scientific discovery that marks the end of this millennium, especially in the West. That required new instruments and an openness to the culture of science, especially a capacity to share information and bring observation under the discipline of mathematics.
Other civilizations made important scientific advances, but failed to develop them. China built an astronomical clock in 1090, centuries before anything comparable in the West. Chinese astronomers recorded observations of supernovae as early as AD 185, but emperors insisted that astronomical records be kept strictly a state secret. (Those who could read the skies might use this knowledge to unsettle the empire.) And by the mid-15th century, China had sealed itself off from outside contact.
Similarly, up to about 1500, Islam had a much higher level of scientific achievement than the West. Arab thinkers developed trigonometry and algebra and made important advances in optics and chemistry. Their astronomers developed instruments to use the stars to determine absolute direction, to ensure that all mosques faced Mecca.
By the 10th century, Muslims could calculate the exact time and had observed all that could be observed in the skies without a telescope, including sun spots. Their astronomical tables were the textbooks for Europe’s great astronomers.
But the Arab world was not of one mind on the usefulness of wide-ranging scientific pursuits. “The Arabs built on what they got from the Greeks, but their natural philosophers that were doing this were under a cloud and were not fully accepted,” says Edward Grant, distinguished professor emeritus in the history and philosophy of science at Indiana University, who recently completed a second book on this subject.
European universities taught natural philosophy, but a student in the Arab world had to seek out a teacher. “They called the Greek sciences the ‘foreign’ sciences. Natural philosophers were often physically or verbally attacked by those that felt that pure Islam was under threat,” he adds.
However, without the contribution of Arab thought, including passing along Greek ideas, the subsequent breakthroughs in science in the West would have been inconceivable, he says.
On Nov. 11, 1572, Danish astronomer Tycho Brahe noticed that the brightest star in the night sky had never appeared before. The new star, a supernova in the constellation Cassiopeia, prompted a very different reaction than the exploding star that dazzled observers at the beginning of the millennium. This was no sign that the gods were restless or that a plague, famine, or war was about to break out on earth. For Tycho, the observation was a new fact that didn’t fit expectations—and something that a curious mind could try to explain.
According to the prevailing view of cosmology, the earth was the fixed center of the universe, and the stars moved on a rotating spherical dome above it. In this view, there could be nothing new in the heavens—not even a bright new star.
Tycho had studied the Arab astronomical tables, and he knew the night sky. Two years before this sighting, he had invented a new device to measure the position of stars—a 30-foot-high quadrant of oak and brass. Unlike other astronomers in Europe, he didn’t need to guess if the new star was fixed or moving. (It was fixed.)
The quality of his observations set a new standard of scientific work, and word of his analysis spread fast. As a result, the Danish king diverted nearly a third of the national budget to finance the most advanced research complex in Europe for the world’s most celebrated new astronomer.
It would take Galileo Galilei and the invention of the telescope 27 years later to settle the question of what the new view of the universe should be. But Tycho’s celebrity signaled that a new scientific culture was taking hold in Europe—a culture of close observation and analysis that could capture the interest of a broad public and the backing of ambitious states.
The shared belief of science
“Europeans in the Renaissance began to share a belief that science can explain the world better than any other way—that we can explain, understand, and control the physical world. A thousand years ago, most people wouldn’t have made those statements,” says Michael Sokal, program director for science and technology studies at the National Science Foundation.
There’s no one reason that Europe—“a little peninsula on the cape of Asia”—should have launched such a scientific revolution. Some historians note that Europe’s dominance in science coincided with its conquest of the seas and a vast expansion in commerce. Others cite the importance of a vibrant print culture or Europe’s concentration of cities, which allowed new ideas to circulate quickly.
Most scholars still support some version of German sociologist Max Weber’s claim at the turn of the 20th century that Europe’s dynamism was related to a change in thought, especially the Protestant ethic. The collapse of religious consensus in 16th-century Europe, and the wars that grew out of it, convinced many that mankind should shift from disputing God’s Word to investigating His works.
“Protestant scientists like Robert Boyle defined the scientific method—the cautious, careful, tedious discipline that science requires—as a form of work ethic,” says Margaret Jacob, a professor of European history and science at the University of California, Los Angeles.
Science and the Protestant Reformation
“Science became an expression of piety and very much a part of Protestant English households in the 17th century. Educated men and women came to believe that they should know this. They became interested in home experiments and bought orreries [mechanical models of the heavens] for their living rooms,” she adds.
Latin was replaced by mathematics as the new language of science. New scientific journals spread discoveries fast. By the end of the millennium, gains in science and technology were piling up exponentially.
Scientists call the 20th century the golden age of mathematics, because so many other disciplines began reducing their work to mathematical expression.
“There is no doubt that science is the most extraordinary collective achievement of the human intellect in the 20th century,” says Martin Rees, Royal Society professor at Cambridge University and Britain’s astronomer royal.
“At the beginning of the century, people had no concept of just how much progress the world was going to make,” he adds.
Astronomers are exploring deep connections between stars and atoms. Powerful new telescopes like the Hubble Space Telescope (1990), the Extreme Ultraviolet Explorer (1992), and the Chandra X-ray Telescope (1999), allow scientists to study stars deep in space and along the whole energy spectrum. To scientists at the end of the millennium, supernovae are more than a curious bright light. Their densities, dust disks, and massive stellar winds are opening windows on the origins of the universe.
The dominance of Europe and, later, the United States in scientific discovery carried through the millennium. Since the prize was first awarded, nearly 3 in 4 Nobel laureates in science have been scientists based in the United States, Britain, or Germany. Experts say this is due, at least in part, to the investments needed to sustain big science.
“Science has become quite expensive in the modern age. You need strong economies to put the resources together to keep launching satellites and more powerful telescopes,” says Mario Livio, senior astrophysicist at the Space Telescope Science Institute in Baltimore, Md.
A social elite based on knowledge?
But this brilliance in science at the top of a handful of American universities does not extend deep into American public schools or popular culture. American high school students ranked poorly compared with the rest of the world on a recent international survey of math and science.
“It’s clear that science is at the center of our culture. But we’re having difficulty educating our students to become participants,” says Timothy Ferris, a professor at the University of California, Berkeley and a science writer.
In this effort, the resources of the Internet could be as important as movable type in opening the possibilities of science to a wider audience. A modem can bring the latest images from new telescopes or unmanned space probes to labs or to laptops on a kitchen table. A fifth-grader could play a part, or be the first to recognize intelligent life in space in the data being processed on his classroom’s computer.
“It hasn’t begun to dawn on people what unmanned [space] exploration means when combined with an Internet world. Live images from the surface of Mars or Venus can be accessed simultaneously at 100 million nodes around the world. When you go to your science class, there is a possibility that you will be the person to discover extraterrestrial life on Europa or a fossil on Mars,” says Professor Ferris.
Bob Evans, a retired Methodist minister in Hazelbrook, Australia, still studies the night sky any chance he can. He knows it well. Since 1981, he has discovered 32 supernovae—more than any other private individual has—using a 10-inch telescope that he can carry around in the back of his car.
“You have to learn where the galaxies are and look at them regularly. You learn what they’re supposed to look like. If you see something new, it gives you a surprise. You just hope you’re the first to see it,” he says.
The Rev. Mr. Evans is one of about 50,000 serious amateur astronomers who regularly report results of sightings to professional groups, via the Internet.
“Our members observe the stars, and when an explosion occurs on a specific star, we inform the professional astronomers, and they turn the satellite to look at it. It’s a partnership that has been extremely beneficial,” says Janet Mattei, director of the American Association of Variable Star Observers, based in Cambridge, Mass.
But Evans says that what drives him is a love for observing night skies.
“I started off as a kid looking up at the stars, and it fascinated me. It makes you feel humble about how big nature is. And if a person has the feeling that the universe is created by God, it gives you a sense of how great God is, too,” he says.