David Kaiser. American Scientist. Volume 93, Issue 2. March 2005.
George Gamow, the wisecracking theoretical physicist who helped invent the Big Bang model of the universe, was fond of explaining what he liked best about his line of work: He could lie down on a couch and close his eyes, and no one would be able to tell whether he was working or not. A fine gag, but a bad model for thinking about the day-to-day work that theoretical physicists do. For too long, physicists, historians and philosophers took Gamow’s joke quite seriously. Research in theory, we were told, concerns abstract thought wholly separated from anything like labor, activity or skill. Theories, worldviews or paradigms seemed the appropriate units of analysis, and the challenge lay in charting the birth and conceptual development of particular ideas.
In the accounts that resulted from such studies, the skilled manipulation of tools played little role. Ideas, embodied in texts, traveled easily from theorist to theorist, shorn of the material constraints that encumbered experimental physicists (tied as they were to their electron microscopes, accelerators or bubble chambers). The age-old trope of minds versus hands has been at play in our account of progress in physics, which pictures a purely cognitive realm of ideas separated from a manual realm of action.
This depiction of what theorists do, I am convinced, obscures a great deal more than it clarifies. Since at least the middle of the 20th century, most theorists have not spent their days (nor, indeed, their nights) in some philosopher’s dreamworld of disembodied concepts; rather, their main task has been to calculate. Theorists tinker with models and estimate effects, always trying to reduce the inchoate confusion of experimental and observational evidence and mathematical possibility into tractable representations. Calculational tools mediate between various kinds of representations of the natural world and provide the currency of everyday work.
In my research I have adopted a tool’s-eye view of theoretical physics, focusing in particular on one of theorists’ most important tools, known as the Feynman diagram. Since the middle of the 20th century, theoretical physicists have increasingly turned to this tool to help them undertake critical calculations. Feynman diagrams have revolutionized nearly every aspect of theoretical physics. Of course, no tool ever applies itself, much less interprets the results of its usage and draws scientific conclusions. Once the Feynman diagram appeared in the physics toolkit, physicists had to learn how to use it to accomplish and inform their calculations. My research has therefore focused on the work that was required to make Feynman diagrams the tool of choice.
The American theoretical physicist Richard Feynman first introduced his diagrams in the late 1940s as a bookkeeping device for simplifying lengthy calculations in one area of physics—quantum electrodynamics, or QED, the quantum-mechanical description of electromagnetic forces. Soon the diagrams gained adherents throughout the fields of nuclear and particle physics. Not long thereafter, other theorists adopted—and subtly adapted—Feynman diagrams for solving many-body problems in solid-state theory. By the end of the 1960s, some physicists even used versions of Feynman’s line drawings for calculations in gravitational physics. With the diagrams’ aid, entire new calculational vistas opened for physicists. Theorists learned to calculate things that many had barely dreamed possible before World War II. It might be said that physics can progress no faster than physicists’ ability to calculate. Thus, in the same way that computer-enabled computation might today be said to be enabling a genomic revolution, Feynman diagrams helped to transform the way physicists saw the world, and their place in it.
Stuck in the Mud
Feynman introduced his novel diagrams in a private, invitation-only meeting at the Pocono Manor Inn in rural Pennsylvania during the spring of 1948. Twenty-eight theorists had gathered at the inn for several days of intense discussions. Most of the young theorists were preoccupied with the problems of QED. And those problems were, in the understated language of physics, nontrivial.
QED explains the force of electromagnetism—the physical force that causes like charges to repel each other and opposite charges to attract—at the quantum-mechanical level. In QED, electrons and other fundamental particles exchange virtual photons—ghostlike particles of light—which serve as carriers of this force. A virtual particle is one that has borrowed energy from the vacuum, briefly shimmering into existence literally from nothing. Virtual particles must pay back the borrowed energy quickly, popping out of existence again, on a time scale set by Werner Heisenberg’s uncertainty principle.
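The time scale that Heisenberg's uncertainty principle sets can be made concrete with a one-line estimate: a virtual photon that borrows an energy ΔE can survive for roughly Δt ≈ ħ/ΔE. A minimal sketch (the function name and the 1 MeV example are illustrative choices, not from the article):

```python
# Rough lifetime of a virtual particle from the energy-time
# uncertainty relation: dt ~ hbar / dE.
HBAR_MEV_S = 6.582e-22  # reduced Planck constant, in MeV * seconds

def virtual_lifetime(delta_e_mev: float) -> float:
    """Seconds a virtual particle can exist after borrowing delta_e_mev of energy."""
    return HBAR_MEV_S / delta_e_mev

# A photon borrowing 1 MeV must repay it within ~6.6e-22 seconds;
# borrowing more energy shortens the allowed time proportionally.
print(virtual_lifetime(1.0))
print(virtual_lifetime(100.0))
```

The larger the energy debt, the sooner it must be repaid, which is why even the "infinite energy" loans that appear in the equations come attached to vanishingly short times.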
Two terrific problems marred physicists’ efforts to make QED calculations. First, as they had known since the early 1930s, QED produced unphysical infinities, rather than finite answers, when pushed beyond its simplest approximations. When posing what seemed like straightforward questions—for instance, what is the probability that two electrons will scatter?—theorists could scrape together reasonable answers with rough-and-ready approximations. But as soon as they tried to push their calculations further, to refine their starting approximations, the equations broke down. The problem was that the force-carrying virtual photons could borrow any amount of energy whatsoever, even infinite energy, as long as they paid it back quickly enough. Infinities began cropping up throughout the theorists’ equations, and their calculations kept returning infinity as an answer, rather than the finite quantity needed to answer the question at hand.
A second problem lurked within theorists’ attempts to calculate with QED: The formalism was notoriously cumbersome, an algebraic nightmare of distinct terms to track and evaluate. In principle, electrons could interact with each other by shooting any number of virtual photons back and forth. The more photons in the fray, the more complicated the corresponding equations, and yet the quantum-mechanical calculation depended on tracking each scenario and adding up all the contributions.
All hope was not lost, at least at first. Heisenberg, Wolfgang Pauli, Paul Dirac and the other interwar architects of QED knew that they could approximate this infinitely complicated calculation because the charge of the electron (e) is so small: e² ≈ 1/137, in appropriate units. The charge of the electrons governed how strong their interactions would be with the force-carrying photons: Every time a pair of electrons traded another photon back and forth, the equations describing the exchange picked up another factor of this small number, e². So a scenario in which the electrons traded only one photon would “weigh in” with the factor e², whereas electrons trading two photons would carry the much smaller factor e⁴. This event, that is, would make a contribution to the full calculation that was less than one one-hundredth the contribution of the single-photon exchange. The term corresponding to an exchange of three photons (with a factor of e⁶) would be ten thousand times smaller than the one-photon-exchange term, and so on. Although the full calculations extended in principle to include an infinite number of separate contributions, in practice any given calculation could be truncated after only a few terms. This was known as a perturbative calculation: Theorists could approximate the full answer by keeping only those few terms that made the largest contribution, since all of the additional terms were expected to contribute numerically insignificant corrections.
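The truncation logic can be mimicked in a few lines. A toy sketch, assuming (as the perturbative scheme does) that each additional exchanged photon simply multiplies a term's weight by another factor of e² ≈ 1/137:

```python
# Toy perturbative bookkeeping: each additional exchanged photon
# multiplies a term's weight by e^2 ~ 1/137, so the full (infinite)
# series can be truncated after a few terms.
E_SQUARED = 1 / 137  # QED's small expansion parameter

def term_weight(n_photons: int) -> float:
    """Relative weight of the n-photon-exchange contribution: (e^2)**n."""
    return E_SQUARED ** n_photons

# The two-photon term is less than 1/100 of the one-photon term, and
# the three-photon term is roughly ten thousand times smaller than it,
# matching the estimates in the text.
for n in (1, 2, 3):
    print(n, term_weight(n) / term_weight(1))
```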
Deceptively simple in the abstract, this scheme was extraordinarily difficult in practice. One of Heisenberg’s graduate students had braved an e⁴ calculation in the mid-1930s—just tracking the first round of correction terms and ignoring all others—and quickly found himself swimming in hundreds of distinct terms. Individual contributions to the overall calculation stretched over four or five lines of algebra. It was all too easy to conflate or, worse, to omit terms within the algebraic morass. Divergence difficulties, acute accounting woes—by the start of World War II, QED seemed an unholy mess, as calculationally intractable as it was conceptually muddled.
In his Pocono Manor Inn talk, Feynman told his fellow theorists that his diagrams offered new promise for helping them march through the thickets of QED calculations. As one of his first examples, he considered the problem of electron-electron scattering. He drew a simple diagram on the blackboard, similar to the one later reproduced in his first article on the new diagrammatic techniques. The diagram represented events in two dimensions: space on the horizontal axis and time on the vertical axis.
The diagram, he explained, provided a shorthand for a uniquely associated mathematical description: An electron had a certain likelihood of moving as a free particle from the point x₁ to x₅. Feynman called this likelihood K₊(5,1). The other incoming electron moved freely—with likelihood K₊(6,2)—from point x₂ to x₆. This second electron could then emit a virtual photon at x₆, which in turn would move—with likelihood δ₊(s₅₆²)—to x₅, where the first electron would absorb it. (Here s₅₆ represented the distance in space and time that the photon traveled.)
The likelihood that an electron would emit or absorb a photon was eγµ, where e was the electron’s charge and γµ a vector of Dirac matrices (arrays of numbers to keep track of the electron’s spin). Having given up some of its energy and momentum, the electron on the right would move from x₆ to x₄, much the way a hunter recoils after firing a rifle. The electron on the left, meanwhile, upon absorbing the photon and hence gaining some additional energy and momentum, would scatter from x₅ to x₃. In Feynman’s hands, then, this diagram stood in for the mathematical expression (itself written in terms of the abbreviations K₊ and δ₊).
In this simplest process, the two electrons traded just one photon between them; the straight electron lines intersected with the wavy photon line in two places, called “vertices.” The associated mathematical term therefore contained two factors of the electron’s charge, e—one for each vertex. When squared, this expression gave a fairly good estimate for the probability that two electrons would scatter. Yet both Feynman and his listeners knew that this was only the start of the calculation. In principle, as noted above, the two electrons could trade any number of photons back and forth.
Feynman thus used his new diagrams to describe the various possibilities. For example, there were nine different ways that the electrons could exchange two photons, each of which would involve four vertices (and hence their associated mathematical expressions would contain e⁴ instead of e²). As in the simplest case (involving only one photon), Feynman could walk through the mathematical contribution from each of these diagrams, plugging in K₊’s and δ₊’s for each electron and photon line, and connecting them at the vertices with factors of eγµ.
The main difference from the single-photon case was that most of the integrals for the two-photon diagrams blew up to infinity, rather than providing a finite answer—just as physicists had been finding with their non-diagrammatic calculations for two decades. So Feynman next showed how some of the troublesome infinities could be removed—the step physicists dubbed “renormalization”—using a combination of calculational tricks, some of his own design and others borrowed. The order of operations was important: Feynman started with the diagrams as a mnemonic aid in order to write down the relevant integrals, and only later altered these integrals, one at a time, to remove the infinities.
By using the diagrams to organize the calculational problem, Feynman had thus solved a long-standing puzzle that had stymied the world’s best theoretical physicists for years. Looking back, we might expect the reception from his colleagues at the Pocono Manor Inn to have been appreciative, at the very least. Yet things did not go well at the meeting. For one thing, the odds were stacked against Feynman: His presentation followed a marathon day-long lecture by Harvard’s Wunderkind, Julian Schwinger. Schwinger had arrived at a different method (independent of any diagrams) to remove the infinities from QED calculations, and the audience sat glued to their seats—pausing only briefly for lunch—as Schwinger unveiled his derivation.
Coming late in the day, Feynman’s blackboard presentation was rushed and unfocused. No one seemed able to follow what he was doing. He suffered frequent interruptions from the likes of Niels Bohr, Paul Dirac and Edward Teller, each of whom pressed Feynman on how his new doodles fit in with the established principles of quantum physics. Others asked more generally, in exasperation, what rules governed the diagrams’ use. By all accounts, Feynman left the meeting disappointed, even depressed.
Feynman’s frustration with the Pocono presentation has been noted often. Overlooked in these accounts, however, is the fact that this confusion lingered long after the diagrams’ inauspicious introduction. Even some of Feynman’s closest friends and colleagues had difficulty following where his diagrams came from or how they were to be used. People such as Hans Bethe, a world expert on QED and Feynman’s senior colleague at Cornell, and Ted Welton, Feynman’s former undergraduate study partner and by this time also an expert on QED, failed to understand what Feynman was doing, repeatedly asking him to coach them along.
Other theorists who had attended the Pocono meeting, including Rochester’s Robert Marshak, remained flummoxed when trying to apply the new techniques, having to ask Feynman to calculate for them since they were unable to undertake diagrammatic calculations themselves. During the winter of 1950, meanwhile, a graduate student and two postdoctoral associates began trading increasingly detailed letters, trying to understand why they each kept getting different answers when using the diagrams for what was supposed to be the same calculation. As late as 1953—fully five years after Feynman had unveiled his new technique at the Pocono meeting—Stanford’s senior theorist, Leonard Schiff, wrote in a letter of recommendation for a recent graduate that his student did understand the diagrammatic techniques and had used them in his thesis. As Schiff’s letter makes clear, graduate students could not be assumed to understand or be well practiced with Feynman’s diagrams. The new techniques were neither automatic nor obvious for many physicists; the diagrams did not spread on their own.
Dyson and the Apostolic Postdocs
The diagrams did spread, though—thanks overwhelmingly to the efforts of Feynman’s younger associate Freeman Dyson. Dyson studied mathematics in Cambridge, England, before traveling to the United States to pursue graduate studies in theoretical physics. He arrived at Cornell in the fall of 1947 to study with Hans Bethe. Over the course of that year he also began meeting with Feynman, just at the time that Feynman was working out his new approach to QED. Dyson and Feynman talked often during the spring of 1948 about Feynman’s diagrams and how they could be used—conversations that continued in close quarters when the two drove across the country together that summer, just a few months after Feynman’s Pocono Manor presentation.
Later that summer, Dyson attended the summer school on theoretical physics at the University of Michigan, which featured detailed lectures by Julian Schwinger on his own, non-diagrammatic approach to renormalization. The summer school offered Dyson the opportunity to talk informally and at length with Schwinger in much the same way that he had already been talking with Feynman. Thus by September 1948, Dyson, and Dyson alone, had spent intense, concentrated time talking directly with both Feynman and Schwinger about their new techniques. At the end of the summer, Dyson took up residence at the Institute for Advanced Study in Princeton, New Jersey.
Shortly after his arrival in Princeton, Dyson submitted an article to the Physical Review that compared Feynman’s and Schwinger’s methods. (He also analyzed the methods of the Japanese theorist, Tomonaga Sin-itiro, who had worked on the problem during and after the war; soon after the war, Schwinger arrived independently at an approach very similar to Tomonaga’s.) More than just compare, Dyson demonstrated the mathematical equivalence of all three approaches—all this before Feynman had written a single article on his new diagrams. Dyson’s early article, and a lengthy follow-up article submitted that winter, were both published months in advance of Feynman’s own papers. Even years after Feynman’s now-famous articles appeared in print, Dyson’s pair of articles were cited more often than Feynman’s.
In these early papers, Dyson derived rules for the diagrams’ use—precisely what Feynman’s frustrated auditors at the Pocono meeting had found lacking. Dyson’s articles offered a “how to” guide, including step-by-step instructions for how the diagrams should be drawn and how they were to be translated into their associated mathematical expressions. In addition to systematizing Feynman’s diagrams, Dyson derived the form and use of the diagrams from first principles, a topic that Feynman had not broached at all. Beyond all these clarifications and derivations, Dyson went on to demonstrate how, diagrams in hand, the troubling infinities within QED could be removed systematically from any calculation, no matter how complicated. Until that time, Tomonaga, Schwinger and Feynman had worked only with the first round of perturbative correction terms, and only in the context of a few specific problems. Building on the topology of the diagrams, Dyson generalized from these worked examples to offer a proof that problems in QED could be renormalized.
More important than his published articles, Dyson converted the Institute for Advanced Study into a factory for Feynman diagrams. To understand how, we must first step back and consider changes in physicists’ postdoctoral training during this period. Before World War II, only a small portion of physicists who completed Ph.D.’s within the United States went on for postdoctoral training; it was still common to take a job with either industry or academia directly from one’s Ph.D. In the case of theoretical physicists—still a small minority among physicists within the U.S. before the war—those who did pursue postdoctoral training usually traveled to the established European centers. It was only in Cambridge, Copenhagen, Göttingen or Zurich that these young American theorists could “learn the music,” in I. I. Rabi’s famous phrase, and not just “the libretto” of research in physics. On returning, many of these same American physicists—among them Edwin Kemble, John Van Vleck, John Slater and J. Robert Oppenheimer, as well as Rabi—endeavored to build up domestic postdoctoral training grounds for young theorists.
Soon after the war, one of the key centers for young theorists to complete postdoctoral work became the Institute for Advanced Study, newly under Oppenheimer’s direction. Having achieved worldwide fame for his role as director of the wartime laboratory at Los Alamos, Oppenheimer was in constant demand afterward. He left his Berkeley post in 1947 to become director of the Princeton institute, in part to have a perch closer to his newfound consulting duties in Washington, D.C. He made it a condition of his accepting the position that he be allowed to increase the numbers of young, temporary members within the physics staff—that is, to turn the institute into a center for theoretical physicists’ postdoctoral training. The institute quickly became a common stopping-ground for young theorists, who circulated through what Oppenheimer called his “intellectual hotel” for two-year postdoctoral stays.
This focused yet informal haven for postdocs proved crucial for spreading Feynman diagrams around. When Dyson arrived in the fall of 1948—just one year after Oppenheimer became director and began to implement his plan for theorists’ postdoctoral study at the institute—he joined a cohort of 11 other junior theorists. One of the new buildings at the institute, which was supposed to contain offices for the new visitors, had not been completed on time, so the entire crew of theory postdocs spent much of that fall semester huddled around desks in a single office. The close quarters bred immediate collaborations. Very quickly, Dyson emerged as a kind of ringleader, training his peers in the new diagrammatic techniques and coordinating a series of collaborative calculations involving the diagrams.
One of the most famous of these calculations was published by two of Dyson’s peers at the institute, Robert Karplus and Norman Kroll. After Dyson got them started, they pursued the e⁴ corrections to an electron’s magnetic moment—that is, to the measure of how strongly a spinning electron would be affected by an external electromagnetic field. This was a monumental calculation involving a long list of complicated Feynman diagrams. Tracing through each diagram-and-integral pair as Dyson had taught them, the postdocs demonstrated that an electron should have a magnetic moment of 1.001147 instead of 1 (in appropriate units), an answer whose six-place accuracy compared incredibly well with the latest experimental measurements. Following “much helpful discussion with F. J. Dyson,” Karplus and Kroll thus showed how Feynman diagrams could be put to work for calculations no one had dreamed possible before.
The Princeton postdocs, personally tutored in the niceties of diagrammatic calculations by Dyson, soon left the institute to take teaching jobs elsewhere. More than four-fifths of all the articles that used Feynman diagrams in the main American physics journal, the Physical Review, between 1949 and 1954 were submitted by these postdocs directly, or by graduate students (and other colleagues) whom they trained on arriving at their new jobs. The great majority of the 114 authors who made use of the diagrams in the Physical Review during this period did so because they had been trained in the new techniques by Dyson or by one of Dyson’s newly minted apprentices. (All but two of the remaining authors interacted directly with Feynman.) The acknowledgments in graduate students’ dissertations from geographically dispersed departments in places including Berkeley, Chicago, Iowa City, Bloomington, Madison, Urbana, Rochester and Ithaca confirm the role of the institute postdocs in taking the new techniques with them and teaching their own recruits how to use them. In this way, Feynman diagrams spread throughout the U.S. by means of a postdoc cascade emanating from the Institute for Advanced Study.
Years later, Schwinger sniffed that Feynman diagrams had “brought computation to the masses.” The diagrams, he insisted, were a matter at most of “pedagogy, not physics.” They certainly were a matter of pedagogy. Looking at the authors of all these diagrammatic articles, the institute postdocs’ pedagogical mission becomes clear: More than 80 percent of the authors were still in the midst of their training when they began using Feynman diagrams, either as graduate students or as postdocs. Most of the others began using the diagrams while young instructors or assistant professors, less than seven years past their doctorates. Older physicists simply did not “re-tool.”
All the same, the diagrams did not spread everywhere. Individuals and even entire departments that remained out of touch with the newly dispersed postdocs failed to make any use of the diagrams, even years after detailed instructions for their use had been in print. One of Dyson’s first converts at the institute, Fritz Rohrlich (who went on to publish one of the first textbooks on the new diagrammatic techniques), had to advise a graduate student at the University of Pennsylvania that he should choose a different dissertation topic or drop out of graduate school altogether; without any representatives from the Princeton network in town, the student simply would not be able to get up to speed with the diagrammatic methods.
As physicists recognized at the time, much more than published research articles or pedagogical texts was required to spread the diagrams around. Personal mentoring and the postdocs’ peripatetic appointments were the key. Very similar transfer mechanisms spread the diagrams to young theorists in Great Britain and Japan, while the hardening of the Cold War choked off the diagrams’ spread to physicists in the Soviet Union. Only with the return of face-to-face workshops between American and Soviet physicists in the mid-1950s, under the “Atoms for Peace” initiatives, did Soviet physicists begin to use Feynman diagrams at anything resembling the pace in other countries.
The Diagrams Dominate
Schwinger’s disparaging comments aside, the efficiency of using Feynman diagrams for perturbative calculations within QED was simply undeniable. Given the labyrinthine nature of the correction terms in these calculations, and the rapidity and ease with which they could be resolved using the diagrams, one would have expected them to be dispersed and used widely for this purpose. And yet it didn’t happen. Only a handful of authors published high-order perturbative calculations akin to Karplus and Kroll’s, trotting out the diagrams as bookkeepers for the ever-tinier wisps of QED perturbations. Fewer than 20 percent of all the diagrammatic articles in the Physical Review between 1949 and 1954 used the diagrams in this way.
Instead, physicists most often used the diagrams to study nuclear particles and interactions rather than the familiar electrodynamic interactions between electrons and photons. Dozens of new nuclear particles, such as mesons (now known to be composite particles that are bound states of the nuclear constituents called quarks and their antimatter counterparts), were turning up in the new government-funded particle accelerators of postwar America. Charting the behavior of all these new particles thus became a topic of immense experimental as well as theoretical interest.
Yet the diagrams did not have an obvious place in the new studies. Feynman and Dyson had honed their diagrammatic techniques for the case of weakly interacting electrodynamics, but nuclear particles interact strongly. Whereas theorists could exploit the smallness of the electron’s charge for their perturbative bookkeeping in QED, various experiments indicated that the strength of the coupling between nuclear particles (g²) was much larger, between 7 and 57 rather than 1/137. If theorists tried to treat nuclear particle scattering the same way they treated photon-electron scattering, with a long series of more and more complicated Feynman diagrams, each containing more and more vertices, then each higher-order diagram would include extra factors of the large number g². Unlike the situation in QED, therefore, these complicated diagrams, with many vertices and hence many factors of g², would overwhelm the lowest-order contributions. Precisely for this reason, Feynman cautioned Enrico Fermi late in 1951, “Don’t believe any calculation in meson theory which uses a Feynman diagram!”
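The contrast with QED can be seen numerically. A toy sketch, again treating each successive term as carrying one more power of the coupling, with g² = 7 taken from the low end of the measured range:

```python
# Toy perturbative series in two theories: QED's small coupling makes
# successive terms shrink; the strong nuclear coupling makes them grow.
def term_weights(coupling_sq: float, n_terms: int) -> list:
    """Relative weights coupling_sq**n for n = 1..n_terms."""
    return [coupling_sq ** n for n in range(1, n_terms + 1)]

qed = term_weights(1 / 137, 3)   # each term ~1/137 of the one before
strong = term_weights(7.0, 3)    # 7, 49, 343: each term swamps the last

# With g^2 ~ 7, every "correction" dwarfs the term it corrects, so
# chopping the series after a few diagrams is meaningless -- the
# breakdown behind Feynman's warning to Fermi.
print(qed)
print(strong)
```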
Despite Feynman’s warning, scores of young theorists kept busy (and still do!) with diagrammatic calculations of nuclear forces. In fact, more than half of all the diagrammatic articles in the Physical Review between 1949 and 1954 applied the diagrams to nuclear topics, including the four earliest diagram-filled articles published after Dyson’s and Feynman’s own. Rather than discard the diagrams in the face of the breakdown of perturbative methods, theorists clung to the diagrams’ bare lines, fashioning new uses and interpretations for them.
Some theorists, for example, began to use the diagrams as physical pictures of collision events in the new accelerators. Suddenly flooded by a “zoo” of unanticipated nuclear particles streaming forth from the big accelerators, the theorists could use the diagrams to keep track of which particles participated in what types of interactions, a type of bookkeeping more akin to botanical classification than to perturbative calculation. Other theorists used the diagrams as a quick way to differentiate between competing physical effects: If one diagram featured two nuclear-force vertices (g²) but only one electromagnetic-force vertex (e), then that physical process could be expected to contribute more strongly than a diagram with two factors of e and only one g—even if neither diagram could be formally evaluated. By the early 1960s, a group centered around Geoffrey Chew in Berkeley pushed the diagrams even further. They sought to exhume the diagrams from the theoretical embedding Dyson had worked so hard to establish and use them as the basis of a new theory of nuclear particles that would replace the very framework from which the diagrams had been derived.
Throughout the 1950s and 1960s, physicists stretched the umbilical cord that had linked the diagrams to Dyson’s elegant, rule-bound instructions for their use. From the start, physicists tinkered with the diagrams—adding a new type of line here, dropping an earlier arrow convention there, adopting different labeling schemes—to bring out features they now deemed most relevant. The visual pastiche did not emerge randomly, however. Local schools emerged as mentors and their students crafted the diagrams to better suit their calculational purposes. The diagrams drawn by graduate students at Cornell began to look more and more like each other and less like the diagrams drawn by students at Columbia or Rochester or Chicago. Pedagogy shaped the diagrams’ differentiation as much as it drove their circulation.
Theorists pressed on, further adapting Feynman diagrams for studies of strongly interacting particles even though perturbative calculations proved impossible. One physicist compared this urge to use Feynman diagrams in nuclear physics, despite the swollen coupling constant, to “the sort of craniometry that was fashionable in the nineteenth century,” which “made about as much sense.” A rule-bound scheme for making perturbative calculations of nuclear forces emerged only in 1973, when H. David Politzer, David Gross and Frank Wilczek discovered the property of “asymptotic freedom” in quantum chromodynamics (QCD), an emerging theory of the strong nuclear force, a discovery for which the trio received the Nobel Prize in 2004. Yet in the quarter-century between Feynman’s introduction of the diagrams and this breakthrough, with no single theory to guide them, physicists scribbled their Feynman diagrams incessantly—prompting another Nobel laureate, Philip Anderson, to ask recently if he and his colleagues had been “brainwashed by Feynman?” Doodling the diagrams continued unabated, even as physicists’ theoretical framework underwent a sea change. For generations of theorists, trained from the start to approach calculations with this tool of choice, Feynman diagrams came first.
The story of the spread of Feynman diagrams reveals the work required to craft both research tools and the tool users who will put them to work. The great majority of physicists who used the diagrams during the decade after their introduction did so only after working closely with a member of the diagrammatic network. Postdocs circulated through the Institute for Advanced Study, participating in intense study sessions and collaborative calculations while there. Then they took jobs throughout the United States (and elsewhere) and began to drill their own students in how to use the diagrams. To an overwhelming degree, physicists who remained outside this rapidly expanding network did not pick up the diagrams for their research. Personal contact and individual mentoring remained the diagrams’ predominant means of circulation even years after explicit instructions for the diagrams’ use had been in print. Face-to-face mentoring rather than the circulation of texts provided the most robust means of inculcating the skills required to use the new diagrams. In fact, the homework assignments that the postdocs assigned to their students often stipulated little more than to draw the appropriate Feynman diagrams for a given problem, not even to translate the diagrams into mathematical expressions. These students learned early that calculations would now begin with Feynman diagrams.
Meanwhile local traditions emerged. Young physicists at Cornell, Columbia, Rochester, Berkeley and elsewhere practiced drawing and interpreting the diagrams in distinct ways, toward distinct ends. These diagrammatic appropriations bore less and less resemblance to Dyson’s original packaging for the diagrams. His first-principles derivation and set of one-to-one translation rules guided Norman Kroll’s students at Columbia, for example, but were deemed less salient for students at Rochester and were all but dismissed by Geoffrey Chew’s cohort at Berkeley. Mentors made choices about what to work on and what to train their students to do. As with any tool, we can only understand physicists’ deployment of Feynman diagrams by considering their local contexts of use.
Thus it remains impossible to separate the research practices from the means by which various scientific practitioners were trained. Within a generation, Feynman diagrams became the tool that undergirded calculations in everything from electrodynamics to nuclear and particle physics to solid-state physics and beyond. This was accomplished through much pedagogical work, postdoc to postdoc, mentor to disciples. Feynman diagrams do not occur in nature; and theoretical physicists are not born, they are made. During the middle decades of the 20th century, both were fashioned as part of the same pedagogical process.