G J Weisel. New Dictionary of the History of Ideas. Editor: Maryanne Cline Horowitz. Volume 5. Detroit: Charles Scribner’s Sons, 2005.
A full understanding of the history of physics must take account of its institutional, social, and cultural contexts. Physics became a scientific discipline during the nineteenth century, gaining a clear professional and cognitive identity as well as patronage from a number of institutions (especially those pertaining to education and the state). Before the nineteenth century, researchers who did work that we now refer to as physics identified themselves in more general terms—such as natural philosopher or applied mathematician—and discussion of their work often adopts a retrospective definition of physics.
For researchers of the nineteenth century, physics involved the development of quantifiable laws that could be tested by conducting experiments and taking precision measurements. The laws of physics focused on fundamental processes, often discovered in particular areas of research, such as mechanics, electricity and magnetism, optics, fluids, thermodynamics, and the kinetic theory of gases. The various specialists saw physics as a unified science, since they shared the same concepts and laws, with energy becoming the central unifying concept by the end of the century. In forming its cognitive and institutional identity, physics distinguished itself from other scientific and technical disciplines, including mathematics, engineering, chemistry, and astronomy. However, as we will see, the history of physics cannot be understood without considering developments in these other areas.
Middle Ages
The Middle Ages inherited a wealth of knowledge from antiquity, including the systematic philosophy of Aristotle (384-322 B.C.E.) and the synthesis of ancient astronomy in the work of the Hellenistic astronomer Ptolemy (fl. second century C.E.). In agreement with those before him, Aristotle maintained that the terrestrial and celestial realms, separated by the orbit of the Moon, featured entirely different physical behaviors. His terrestrial physics was founded on the existence of four elements (earth, water, air, and fire) and the idea that every motion requires the specification of a cause for that motion. Aristotle considered two broad classes of motion: natural motion, as an object returned to its natural place (as dictated by its elemental composition), and violent motion, as an object was removed forcibly from its natural place. Because the natural place of the element earth was at the center of the cosmos, Aristotle’s physics necessitated a geocentric, or Earth-centered, model of the heavens.
Whereas the terrestrial realm featured constant change, the heavenly bodies moved in uniform circular orbits and were perfect and unchanging. Starting from an exhaustive tabulation of astronomical data, Ptolemy modeled the orbits of each heavenly body using a complex system of circular motions, including a fundamental deferent and one or more epicycles. Often, Ptolemy was forced to make additions, including the eccentric model (in which the center of rotation of the orbiting body was offset from Earth) and the equant model (in which a fictitious point, also not located at Earth, defined uniform motion).
Despite the great value of this work, the West lost a good portion of it with the erosion of the Roman Empire. Luckily, a number of Islamic scholars took an interest in the knowledge of the ancients. In addition to translating Aristotle and Ptolemy (among others) into Arabic, they commented on these works extensively and made a number of innovations in astronomy, optics, matter theory, and mathematics (including the use of “Arabic numerals,” with the zero as a placeholder). For example, al-Battani (c. 858-929) made improvements to Ptolemy’s orbits of the Sun and Moon, compiled a revised catalog of stars, and worked on the construction of astronomical instruments. Avempace (Ibn Badja, c. 1095-1138 or 1139) developed a position first staked out by the Neoplatonist philosopher John Philoponus (fl. sixth century C.E.), arguing that Aristotle was wrong to claim that the time for the fall of a body was proportional to its weight. After the reconquest of Spain during the twelfth century, ancient knowledge became available once again in the Latin West. Arab commentators such as Averroes (Ibn Rushd, 1126-1198) became influential interpreters of an Aristotle that was closer to the original texts and quite different from the glosses and explanatory aids that the West had grown accustomed to.
During the late Middle Ages, there was a general revival of learning and science in the West. The mathematician Jordanus de Nemore (fl. thirteenth century C.E.) pioneered a series of influential studies of static bodies. In addition to studying levers, Jordanus analyzed the (lower) apparent weight of a mass resting on an inclined plane. Despite the church’s condemnation of certain radical interpretations of Aristotelianism during the late thirteenth century, there followed a flowering of activity during the fourteenth century, particularly concerning the problem of motion. Two important centers of activity were Merton College (at Oxford), where a group of mathematicians and logicians included Thomas Bradwardine (c. 1290-1349) and Richard Swineshead (d. 1355), and the University of Paris, which included John Buridan (c. 1295-1358) and Nicole Oresme (c. 1325-1382).
The scholars at Merton College adopted a distinction between dynamics (in which the causes of motion are specified) and kinematics (in which motion is only described). The dynamical problems implied by Aristotelian physics, especially the problem of projectile motion, occupied many medieval scholars (see sidebar, “Causes of Motion: Medieval Understandings”). In kinematics, the release from the search for causation encouraged a number of new developments. The Mertonians developed the concept of velocity in analogy with the medieval idea of the intensity of a quality (such as the redness of an apple), and distinguished between uniform (constant velocity) and nonuniform (accelerated) motion. They also gave the first statement of the mean speed theorem, which offered a way of comparing constant-acceleration motion to uniform motion.
While the Mertonians presented their analyses of motion through the cumbersome medium of words, other scholars developed graphical techniques. The most influential presentation of the mean speed theorem was offered by Oresme, who recorded the duration of the motion along a horizontal line (or “subject line”) and indicated the intensity of the velocity as a sequence of vertical line segments of varying height. Figure 1 shows that an object undergoing constant acceleration travels the same distance as if it were traveling for the same period of time at its average velocity (the average of its initial and final velocity). Although this work remained entirely abstract and was not based on experiment, it helped later work in kinematics, most notably Galileo’s.
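In modern algebraic notation (a reconstruction; Oresme worked geometrically, without symbols of this kind), the mean speed theorem states that a body accelerating uniformly from an initial velocity $v_0$ to a final velocity $v_f$ over a time $t$ covers the distance

```latex
d = \bar{v}\,t = \frac{v_0 + v_f}{2}\,t ,
```

which for a body starting from rest ($v_0 = 0$) reduces to $d = \tfrac{1}{2} v_f t$.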
Following Aristotle’s physics, medieval scholars pictured the celestial realm as being of unchanging perfection. Each heavenly body (the Sun, the Moon, the planets, and the sphere of the fixed stars) rotated around Earth on its own celestial sphere. Ptolemy’s addition of epicycles on top of Aristotle’s concentric spheres led medieval astronomers to speak of “partial orbs” within the “total orb” of each heavenly body. The orbs communicated rotational movement to one another without any resistive force and were made of a quintessence or ether, which was an ageless, transparent substance. Beyond the outermost sphere of the fixed stars was the “final cause” of the Unmoved Mover, which was usually equated with the Christian God. Buridan suggested that God impressed an impetus on each orb at the moment of creation and, in the absence of resistance, they had been rotating ever since. Both Buridan and Oresme considered the possibility of a rotating Earth as a way of explaining the diurnal motion of the fixed stars, and found their arguments to be promising but not sufficiently convincing.
Sixteenth and Seventeenth Centuries
The period of the scientific revolution can be taken to extend, simplistically but handily, from 1543, with the publication of Nicolaus Copernicus’s De revolutionibus orbium coelestium, to 1687, with the publication of Isaac Newton’s Philosophiae naturalis principia mathematica, often referred to simply as the Principia. The term “revolution” remains useful, despite the fact that scholars have suggested that the period shows significant continuities with what came before and after. Copernicus (1473-1543) was attracted to a heliocentric, or Sun-centered, model of the universe (already considered over one thousand years before by Aristarchus of Samos) because it eliminated a number of complexities from Ptolemy’s model (including the equant), provided a simple explanation for the diurnal motion of the stars, and agreed with certain theological ideas of his own regarding the Sun as a kind of mystical motive force of the heavens. Among the problems posed by Copernicus’s model of the heavens, the most serious was that it contradicted Aristotelian physics.
Heliocentrism was pursued again by the German mathematician Johannes Kepler (1571-1630). Motivated by a deep religious conviction that a mathematical interpretation of nature reflected the grand plan of the Creator and an equally deep commitment to Copernicanism, Kepler worked with the Danish astronomer Tycho Brahe (1546-1601) with the intention of calculating the orbits of the planets around the Sun. After Brahe’s death, Kepler gained control of his former associate’s data and worked long and hard on the orbit of Mars, eventually to conclude that it was elliptical. Kepler’s so-called “three laws” were identified later by other scholars (including Newton) from different parts of his work, with the elliptical orbits of the planets becoming the first law. The second and third laws were his findings that the area swept out by a line connecting the Sun and a particular planet is the same for any given period of time; and that the square of a planet’s period of revolution around the Sun is proportional to the cube of its distance from the Sun.
The career of Galileo Galilei (1564-1642) began in earnest with his work on improved telescopes and using them to make observations that lent strength to Copernicanism, including the imperfections of the Moon’s surface and the satellites of Jupiter. His public support of Copernicanism led to a struggle with the church, but his greater importance lies with his study of statics and kinematics, in his effort to formulate a new physics that would not conflict with the hypothesis of a moving Earth.
His work in statics was influenced by the Dutch engineer Simon Stevin (1548-1620), who made contributions to the analysis of the lever, to hydrostatics, and to arguments on the impossibility of perpetual motion. Galileo also repeated Stevin’s experiments on free fall, which disproved Aristotle’s contention that heavy bodies fall faster than light bodies, and wrote about them in On Motion (1590), which remained unpublished during his lifetime. There, he made use of a version of Buridan’s impetus theory (see sidebar, “Causes of Motion: Medieval Understandings”), but shifted attention from the total weight of the object to the weight per unit volume. By the time of Two New Sciences (1638), he generalized this idea by claiming that all bodies—of whatever size and composition—fell with equal speed through a vacuum.
Two New Sciences summarized most of Galileo’s work in statics and kinematics (the “two sciences” of the title). In order to better study the motion of bodies undergoing constant acceleration, Galileo used inclined planes pitched at very small angles in order to slow down the motion of a rolling ball. By taking careful distance and time measurements, and using the results of medieval scholars (including the mean speed theorem), he was able to show that the ball’s instantaneous velocity increased linearly with time and that its distance increased according to the square of the time. Furthermore, Galileo proposed a notion of inertial motion as a limiting case of a ball rolling along a perfectly horizontal plane. Because, in this limiting case, the motion of the ball would ultimately follow the circular shape of the earth, his idea is sometimes referred to as a “circular inertia.” Finally, Galileo presented his analysis of parabolic trajectories as a compound motion, made up of inertial motion in the horizontal direction and constant acceleration in the vertical.
The French philosopher René Descartes (1596-1650) and his contemporary Pierre Gassendi (1592-1655) independently came up with an improved conception of inertial motion. Both suggested that an object moving at constant speed and in a straight line (not Galileo’s circle) was conceptually equivalent to the object being at rest. Gassendi tested this idea by observing the path of weights dropped from the mast of a moving ship. In his Principia philosophiae (1644), Descartes presented a number of other influential ideas, including his view that the physical world was a kind of clockwork mechanism. In order to communicate cause and effect in his “mechanical philosophy,” all space was filled with matter, making a vacuum impossible. Descartes suggested, for example, that the planets moved in their orbits via a plenum of fine matter that communicated the influence of the Sun through the action of vortices.
Building on work in optics by Kepler, Descartes used the mechanical philosophy to derive the laws of reflection and refraction. In his Dioptrics (1637), he proposed that if light travels at different velocities in two different media, then the sine of the angle of incidence divided by the sine of the angle of refraction is a constant that is characteristic of a particular pair of media. This law of refraction had been discovered earlier, in 1621, by the Dutch scientist Willebrord Snell, though Descartes was probably unaware of this work. In 1662, the French mathematician Pierre de Fermat recast the law of refraction by showing that it follows from the principle that light follows the path of least time (not necessarily the least distance) between two points.
The study of kinematics yielded various conservation laws for collisions and falling bodies. Descartes defined the “quantity of motion” as the mass times the velocity (what is now called “momentum”) and claimed that any closed system had a fixed total quantity of motion. In disagreement with Descartes, Gottfried Wilhelm von Leibniz (1646-1716) suggested instead the “living force” or vis viva as a measure of motion, equal to the mass times the square of the velocity (similar to what is now called “kinetic energy”). For a falling body, Leibniz asserted that the living force plus the “dead force,” the weight of the object times its distance off the ground (similar to “potential energy”), was a constant.
The culmination of the scientific revolution is the work of Isaac Newton. In the Principia (1687), Newton presented a new mechanics that encompassed not only terrestrial physics but also the motion of the planets. A short way into the first book (“Of the Motion of Bodies”), Newton listed his axioms or laws of motion:
- Every body perseveres in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by forces impressed …
- A change in motion is proportional to the motive force impressed and takes place along the straight line in which that force is impressed …
- To any action there is always an opposite and equal reaction; in other words, the actions of two bodies upon each other are always equal and always opposite in direction … (1999, pp. 416-417)
The first law restates Descartes’s concept of rectilinear, inertial motion. The second law introduces Newton’s concept of force, as an entity that causes an object to depart from inertial motion. Following Descartes, Newton defined motion as the mass times the velocity. Assuming that the mass is constant, the “change of motion” is the mass (m) times the acceleration (a); thus the net force (F) acting on an object is given by the equation F = ma. Analyzing the motion of the Moon led Newton to the inverse-square law of universal gravitation. Partly as a result of a debate with the scientist Robert Hooke (1635-1703), Newton came to see the Moon as undergoing a compound motion, made up of a tangential, inertial motion and a motion toward Earth due to Earth’s gravitational attraction. The Dutch physicist Christiaan Huygens (1629-1695) had suggested that there was a centrifugal force acting away from the center of rotation, which was proportional to v²/r, where v is the velocity and r is the distance from the center of rotation. Newton had derived this result before Huygens but later renamed it the centripetal force, the force that is required to keep the body in orbit and that points toward the center of rotation. Using this relation and Kepler’s finding that the square of the period was proportional to the cube of the distance (Kepler’s “third law”), Newton concluded that the gravitational force on the Moon was proportional to the inverse square of its distance from Earth.
Newton presented his law of universal gravitation in the third book of the Principia (“The System of the World”), and showed that it was consistent with Kepler’s findings and the orbits of the planets. Although he derived many of these results using a technique that he invented called the method of fluxions—differential calculus—Newton presented them in the Principia with the geometrical formalism familiar to readers of the time. He did not publish anything of his work on the calculus until De Analysi (1711; On analysis) during a priority dispute with Leibniz, who invented it independently.
Eighteenth Century
It is helpful to identify two broad tendencies in eighteenth- and nineteenth-century physics, which had been noted by a number of contemporaries, including the German philosopher Immanuel Kant (1724-1804). On the one hand, a mechanical approach analyzed the physical universe as a great machine and built models relying on commonsense notions of cause and effect. This sometimes required the specification of ontological entities to communicate cause and effect, such as Descartes’s plenum. On the other hand, the dynamical approach avoided mechanical models and, instead, concentrated on the mathematical relationship between quantities that could be measured. However, in avoiding mechanical models, the dynamical approach often speculated on the existence of active powers that resided within matter but could not be observed directly. Although this distinction is helpful, many scientists straddled the divide. Newton’s physics, and the general Newtonian scientific culture of the eighteenth century, utilized elements of both approaches. It held true to a mechanical-world picture in analyzing macroscopic systems involving both contact, as in collisions, and action at a distance, as in orbital motion. But it also contained a dynamical sensibility. Regarding gravity, Newton rejected Descartes’s plenum and speculated that gravity might be due to an all-pervasive ether, tantamount to God’s catholic presence. Such reflections appeared in Newton’s private notes and letters, but some of these became known during the 1740s.
The development of mechanics during the eighteenth century marks one place where the histories of physics and mathematics overlap strongly. Mathematicians with an interest in physical problems recast Newtonian physics in an elegant formalism that took physics away from geometrical treatment and toward the reduction of physical problems to mathematical equations using calculus. Some of these developments were motivated by attempts to confirm Newton’s universal gravitation. The French mathematician Alexis-Claude Clairaut (1713-1765) used perturbation techniques to account for tiny gravitational forces affecting the orbits of heavenly bodies. In 1747 Clairaut published improved predictions for the Moon’s orbit, based on three-body calculations of the Moon, Earth, and Sun, and, in 1758, predictions of the orbit of Halley’s comet, which changed slightly each time that it passed a planet. Some years later, Pierre-Simon Laplace produced a five-volume study, Celestial Mechanics (1799-1825), which showed that changes in planetary orbits, which had previously appeared to be cumulative, were in fact self-correcting. His (perhaps apocryphal) response to Napoléon’s question regarding the place of God in his calculations has come to stand for eighteenth-century deism: “Sire, I had no need for that hypothesis.”
The most important mathematical work was the generalization of Newton’s mechanics using the calculus of variations. Starting from Fermat’s principle of least time, Pierre-Louis Moreau de Maupertuis (1698-1759) proposed that, for a moving particle, nature always sought to minimize a certain quantity equal to the mass times the velocity times the distance that the particle moves. This “principle of least action” was motivated by the religious idea that the economy of nature gave evidence of God’s existence. The Swiss mathematician Leonhard Euler (1707-1783) recast this idea (but without Maupertuis’s religious motivations) by minimizing the integral over distance of the mass of a particle times its velocity. The Italian Joseph-Louis Lagrange (1736-1813) restated and clarified Euler’s idea, by focusing on minimizing the vis viva integrated over time. His Mécanique analytique (1788) summarized the whole of mechanics, for both solids and fluids and statics and dynamics.
In its high level of mathematical abstraction and its rejection of mechanical models, Lagrange’s formalism typified a dynamical approach. In addition to making a number of problems tractable that had been impossible in Newton’s original approach, the use of the calculus of variations removed from center stage the concept of force, a vector quantity (with magnitude and direction), and replaced it with scalar quantities (which had only magnitude). Lagrange was proud of the fact that Mécanique analytique did not contain a single diagram.
Newton’s physics could be applied to continuous media just as much as systems of masses. In his Hydrodynamica (1738), the Swiss mathematician Daniel Bernoulli used conservation of vis viva to analyze fluid flow. His most famous result was an equation describing the rate at which liquid flows from a hole in a filled vessel. Euler elaborated on Bernoulli’s analyses and developed additional formalism, including the general differential equations of fluid flow and fluid continuity (but restricted to the case of zero viscosity). Clairaut contributed to hydrostatics through his involvement with debates regarding the shape of the earth. In developing a Newtonian prediction, Clairaut analyzed the earth as a fluid mass. After defining an equilibrium condition, he showed that the earth should have an oblate shape, which was confirmed by experiments with pendulums at the earth’s equator and as far north as Lapland.
The study of optics inherited an ambivalence from the previous century, which considered two different mechanical explanations of light. In his Opticks (1704), Newton had advocated a corpuscular, atomistic theory of light. As an emission of particles, light interacted with matter by vibrating in different ways and was therefore either reflected or refracted. In contrast with this, Descartes and Huygens proposed a wave theory of light, arguing that space was full and that light was nothing more than the vibration of a medium.
During the eighteenth century, most scientists preferred Newton’s model of light as an emission of particles. The most important wave theory of light was put forward by Euler, who hypothesized that, in analogy with sound waves, light propagated through a medium, but that the medium itself did not travel. Euler also associated certain wavelengths with certain colors. After Euler, considerable debate occurred between the particle and wave theories of light. This debate was resolved during the early nineteenth century in favor of the wave theory. Between 1801 and 1803, the English physician Thomas Young conducted a series of experiments, the most notable of which was his two-slit experiment, which demonstrated that two coherent light sources set up interference patterns, thus behaving like two wave sources. This work was largely ignored until 1826, when Augustin-Jean Fresnel presented a paper to the French Academy of Science that reproduced Young’s experiments and presented a mathematical analysis of the results.
Electrical research was especially fruitful in the eighteenth century and attracted a large number of researchers. Electricity was likened to “fire,” the most volatile element in Aristotle’s system. Electrical fire was an imponderable fluid that could be made to flow from one body to another but could not be weighed (see sidebar, “Forms of Matter”). After systematic experimentation, the French soldier-scientist Charles-François Du Fay (1698-1739) developed a two-fluid theory of electricity, positing both a negative and a positive fluid. The American statesman and scientist Benjamin Franklin (1706-1790) proposed a competing, one-fluid model. Franklin suggested that electrical fire was positively charged, mutually repulsive, and contained in every object. When fire was added to a body, it showed a positive charge; when fire was removed, the body showed a negative charge. Franklin’s theory was especially successful in explaining the behavior of the Leyden jar (an early version of the capacitor) invented by Ewald Georg von Kleist in 1745. The device was able to store electrical fire using inner and outer electrodes, with the surface of the glass jar in between. Franklin’s interpretation was that the glass was impervious to electrical fire and that while one electrode took on fire, the other electrode expelled an equal amount (see Fig. 3).
After early efforts by John Robison and Henry Cavendish, the first published precision measurements of the electric force law were attributed to the French physicist and engineer Charles-Augustin de Coulomb (1736-1806). Coulomb used a torsion balance to measure the small electrostatic force on pairs of charged spheres and found that it was proportional to the inverse square of the distance between the spheres and to the amount of charge on each sphere. At the close of the century, Cavendish used a similar device to experimentally confirm Newton’s universal law of gravitation, using relatively large masses.
Nineteenth Century
The development of physics during the nineteenth century can be seen as both a culmination of what went before and as preparing the stage for the revolutions in relativity and quantum theory that were to follow. The work of the Irish mathematician and astronomer William Rowan Hamilton (1805-1865) built on the analytical mechanics of Lagrange and Laplace to establish a thoroughly abstract and mathematical approach to physical problems. Originally motivated by his work in optics, Hamilton developed a new principle of least action. Instead of using Lagrange’s integral of kinetic energy, Hamilton chose to minimize the integral of the difference between the kinetic and the potential energies. In applying this principle in mechanics, Hamilton reproduced the results of Euler and Lagrange, and showed that it applied to a broader range of problems. After his work was published, as two essays in 1833 and 1834, it was critiqued and improved upon by the German mathematician Carl Gustav Jacob Jacobi (1804-1851). The resulting Hamilton-Jacobi formalism was applied in many fields of physics, including hydrodynamics, optics, acoustics, the kinetic theory of gases, and electrodynamics. However, it did not achieve its full significance until the twentieth century, when it was used to buttress the foundations of quantum mechanics.
Work on magnetism was encouraged by Alessandro Volta’s (1745-1827) development, in 1800, of the voltaic pile (an early battery), which, unlike the Leyden jar, was able to produce a steady source of electric current. Inspired by the German philosophical movement of Naturphilosophie, which espoused that the forces of nature were all interrelated in a higher unity, the Danish physicist Hans Christian Ørsted (1777-1851) sought a magnetic effect from the electric current of Volta’s battery. Ørsted’s announcement of his success, in 1820, brought a flurry of activity, including the work of Jean-Baptiste Biot and Félix Savart, on the force law between a current and a magnet, and the work of André-Marie Ampère, on the force law between two currents. The magnetic force was found to depend on the inverse square of the distance but was more complex due to the subtle vector relations between the currents and distances. For the analysis of inverse-square force laws, the German mathematician Carl Friedrich Gauss (1777-1855) introduced, in 1839, the concept of “potential,” which could be applied with great generality to both electrostatics and magnetism. This work grew from Gauss’s efforts in measuring and understanding the earth’s magnetic field, which he undertook with his compatriot Wilhelm Eduard Weber (1804-1891).
The most significant work in magnetism was done by Michael Faraday (1791-1867) at the Royal Institution of London. By 1831, Faraday had characterized a kind of reverse Ørsted effect, in which a change in magnetism gave rise to a current. For example, he showed that this “electromagnetic induction” occurred between two electric circuits that communicated magnetism through a shared iron ring but otherwise were electrically insulated from one another (an early version of the transformer). Faraday also made the first systematic measurements of magnetic materials, characterizing diamagnetic, paramagnetic, and ferromagnetic effects (though this terminology is due to the English mathematician William Whewell). Finally, Faraday pioneered the concept of the field, coining the term “magnetic field” in 1845. He saw the “lines of force” of magnetic or electric fields as being physically real and as filling space (in opposition to the idea of action at a distance).
One of the pinnacles of nineteenth-century physics is the theory of electromagnetism developed by the Scottish physicist James Clerk Maxwell (1831-1879). Maxwell brought together the work of Coulomb, Ampère, and Faraday, and made the crucial addition of the “displacement current,” which acknowledged that a magnetic field can be produced not only by a current but also by a changing electric field. These efforts resulted in a set of four equations that Maxwell used to derive wave equations for the electric and magnetic fields. This led to the astonishing prediction that light was an electromagnetic wave. In developing and interpreting his results, Maxwell sought to build a mechanical model of electromagnetic radiation. Influenced by Faraday’s rejection of action at a distance, Maxwell attempted to see electromagnetic waves as vortices in an ether medium, interspersed with small particles that acted as idle wheels to connect the vortices. Maxwell discarded this mechanical model in later years, in favor of a dynamical perspective. This latter attitude was taken by the German experimentalist Heinrich Rudolph Hertz (1857-1894), who, in 1886, first demonstrated the propagation of electromagnetic waves in the laboratory, using a spark-gap device as a transmitter.
During the eighteenth century, most researchers saw the flow of heat as the flow of the imponderable fluid caloric. Despite developments, such as Benjamin Thompson’s cannon-boring experiments, which suggested that heat involved some sort of microscopic motion, caloric provided a heuristic model that aided in the quantification of experimental results and in the creation of mathematical models. For example, the French engineer Sadi Carnot (1796-1832) analyzed the efficiency of steam engines, work that led to the theory of the thermodynamic cycle, as reported in his Reflections on the Motive Power of Fire (1824). A purely mathematical approach was developed by Jean-Baptiste-Joseph Fourier, who analyzed heat conduction with the method of partial differential equations in his Analytical Theory of Heat (1822).
Carnot’s opinion that caloric was conserved during the running of a steam engine was proved wrong by the development of the first law of thermodynamics. Similar conceptions of the conservation of energy (or “force,” as energy was still referred to) were identified by at least three different people during the 1840s, including the German physician Julius Robert von Mayer (1814-1878), who was interested in the human body’s ability to convert the chemical energy of food to other forms of energy, and the German theoretical physicist Hermann Ludwig Ferdinand von Helmholtz (1821-1894), who gave a mathematical treatment of different types of energy and showed that the different conservation laws could be traced back to the conservation of vis viva in mechanics. The British physicist James Prescott Joule (1818-1889) did an experiment that measured the mechanical equivalent of heat with a system of falling weights and a paddlewheel that stirred water within an insulated vessel (see Fig. 4).
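Joule's paddlewheel experiment can be summarized in modern notation (a sketch; the symbols and the modern value of the mechanical equivalent are supplied here, not drawn from the original text):

```latex
% Work done by a weight of mass m falling through a height h:
W = mgh
% Heat gained by a mass m_w of water with specific heat c:
Q = m_w \, c \, \Delta T
% Joule's ratio of mechanical work to heat, in modern units:
\frac{W}{Q} \approx 4.18~\mathrm{J/cal}
```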
In his Hydrodynamica (1738), Bernoulli had proposed the first kinetic theory of gases, by suggesting that pressure was due to the motion and impact of atoms as they struck the sides of their containment vessel. The work of the chemists John Dalton (1766-1844) and Amedeo Avogadro (1776-1856) indirectly lent support to such a kinetic theory by casting doubt upon the Newtonian program of understanding chemistry in terms of force laws between atoms. After John Herapath's 1820 work on the kinetic theory was largely ignored, Rudolf Clausius published two papers, in 1857 and 1858, in which he sought to derive the specific heats of a gas and introduced the concept of the mean free path between atomic collisions. James Clerk Maxwell added the idea that the atomic collisions would result in a range of velocities, not an average velocity as Clausius thought, and that this would necessitate the use of a statistical approach. In a number of papers published from 1860 to 1862, Maxwell completed the foundations of the kinetic theory and introduced the equipartition theorem, the idea that each degree of freedom (translational or rotational) contributed the same average energy, which was proportional to the temperature of the gas. The work of Clausius and Maxwell in kinetic theory was tied to their crucial contributions to developing the second law of thermodynamics (see sidebar, "Second Law of Thermodynamics").
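In modern notation (supplied here, not from the original text), the equipartition theorem and Maxwell's statistical approach can be sketched as:

```latex
% Equipartition theorem: each degree of freedom contributes an average
% energy proportional to the absolute temperature T:
\langle E \rangle = \tfrac{1}{2} k_{B} T \quad \text{per degree of freedom}
% Hence a monatomic gas of N atoms (three translational degrees of
% freedom each) has total internal energy
U = \tfrac{3}{2} N k_{B} T
% Maxwell's insight: collisions produce a distribution of velocities,
% e.g. for a single velocity component,
f(v) \propto e^{-m v^{2}/2 k_{B} T}
```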
End of Classical Physics
By the close of the nineteenth century, many physicists felt that the accomplishments of the century had produced a mature and relatively complete science. Nevertheless, a number of problem areas were apparent to at least some of the community, four of which are closely related to developments mentioned above.
New rays and radiations were discovered near the end of the century, which helped establish (among other things) the modern model of the atom. These included the discovery (by William Crookes and others) of cathode rays within discharge tubes; Wilhelm Conrad Röntgen’s finding, in 1895, of X rays emanating from discharge tubes; and Antoine-Henri Becquerel’s discovery in 1896 that uranium salts were “radioactive” (as Marie Curie labeled the effect in 1898). Each of these led to further developments. In 1897, Joseph John Thomson identified the cathode rays as negatively charged particles called “electrons” and, a year later, was able to measure the charge directly. In 1898, Ernest Rutherford identified two different kinds of radiation from uranium, calling them alpha and beta. In 1902 and 1903, he and Frederick Soddy demonstrated that radioactive decay was due to the disintegration of heavy elements into slightly lighter elements. In 1911, he scattered alpha particles from thin gold foils and explained infrequent scattering to large angles by the presence of a concentrated, positively charged atomic nucleus.
The study of blackbody radiation (radiation from an idealized heated object that absorbs, and therefore emits, at all wavelengths) yielded results that were crucial to the early development of quantum mechanics. In 1893 Wilhelm Wien derived a "displacement law" giving the wavelength at which a blackbody radiates at maximum intensity; however, his subsequent distribution law for the full intensity curve failed to match precision data at long wavelengths, while classical theory proved unable to model the curves at short wavelengths. In 1900 the German theoretical physicist Max Planck (1858-1947) derived the intensity curve using the statistical methods of the Austrian physicist Ludwig Eduard Boltzmann (1844-1906) and the device of counting the energy of the oscillators of the blackbody in increments of hf, where f is the frequency and h is a constant (now known as "Planck's constant"). Despite achieving excellent fits to data, Planck was hesitant to accept his own derivation, owing to his aversion to statistical methods and atomism.
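Planck's result, written in modern notation (a sketch; the symbols are supplied here, not from the original text):

```latex
% Planck's law for the spectral intensity of blackbody radiation:
B(f,T) = \frac{2hf^{3}}{c^{2}}\,\frac{1}{e^{hf/k_{B}T} - 1}
% At high frequencies (hf >> k_B T) this reproduces Wien's form;
% at low frequencies it recovers the classical result,
B(f,T) \approx \frac{2f^{2}}{c^{2}}\,k_{B}T,
% which by itself diverges at short wavelengths.
```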
It is doubtful that Planck interpreted his use of energy increments to mean that the energy of the oscillators and radiation came in chunks (or "quanta"). However, this idea was clearly enunciated by Albert Einstein in his 1905 paper on the photoelectric effect. Einstein explained in this paper why the electrons that are ejected from a cathode by incident light do not increase in energy when the intensity of the light is increased. Instead, the fact that the electrons increase in energy when the frequency of the light is increased suggested that light comes in quantum units (later called "photons"), each with an energy given by Planck's relation, E = hf.
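Einstein's argument is often summarized by the following relation (modern notation; the work-function symbol is supplied here, not from the original text):

```latex
% Photoelectric relation: the maximum kinetic energy of an ejected
% electron depends on the light's frequency f, not its intensity.
K_{\max} = hf - \phi
% \phi is the "work function," the minimum energy needed to free an
% electron from the cathode material; below the threshold frequency
% f_0 = \phi / h, no electrons are ejected at any intensity.
```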
Electromagnetic theory, though one of the most important results of nineteenth-century physics, contained a number of puzzles. On the one hand, electromagnetism sometimes gave the same result for all reference frames. For example, Faraday's induction law gave the same result for the current induced in a loop of wire in two situations: when the loop moves relative to a stationary magnet and when the magnet moves (with the same speed) relative to a stationary loop. On the other hand, if an ether medium were introduced as the carrier of electromagnetic waves, then the predictions of electromagnetism should, in general, change from one reference frame to another. In a second paper from 1905, Einstein reinterpreted attempts by Henri Poincaré (1854-1912) and Hendrik Antoon Lorentz (1853-1928) to answer this puzzle, by insisting that the laws of physics should give the same results in all inertial reference frames. This, along with the principle of the constancy of the speed of light, formed the basis of Einstein's special theory of relativity.
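The transformation at the heart of this reinterpretation can be stated in modern notation (Lorentz and Poincaré had arrived at the equations; Einstein rederived them from his two postulates):

```latex
% Lorentz transformation between inertial frames in relative motion at
% speed v along the x-axis:
x' = \gamma\,(x - vt),
\qquad
t' = \gamma\left(t - \frac{vx}{c^{2}}\right),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
% For v << c, \gamma \approx 1 and the classical (Galilean)
% transformation is recovered.
```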