Gary J Weisel. Scientific Thought: In Context. Editor: K Lee Lerner & Brenda Wilmoth Lerner, Volume 2, Gale, 2009.
Semiconductors are crystalline solids (such as silicon and gallium arsenide) with electrical properties between those of insulators and conductors. A typical semiconductor’s conductivity (the degree to which it conducts electricity) depends on imperfections in the material (such as lattice defects), added impurities, and external conditions (such as applied electric fields). The growing ability to control the degree of conduction in semiconductors lies at the heart of the information revolution of the late twentieth and early twenty-first centuries.
Historical Background and Scientific Foundations
Although the term semiconductor was first used around 1911, the properties of the materials we now call semiconductors had been studied beginning about 80 years earlier. The English physicist and chemist Michael Faraday (1791-1867) did the earliest known work during the 1830s with his research on silver sulfide, the conductivity of which increased as it was heated (exactly the opposite of most other materials, such as metals). In 1873 the English engineer Willoughby Smith (1828-1891) published a paper on what we now call photoconductivity, showing that selenium’s conductivity increased when light shined on it. Three years later, professor of natural philosophy William Grylls Adams (1836-1915) and his student Richard Evans Day discovered the photovoltaic effect when they reported that selenium also produced electricity when light was shined on it. Beginning in 1874, the German physicist Karl Ferdinand Braun (1850-1918) published a series of papers on the electrical behavior of the junctions formed when thin metal wires were pressed against the surfaces of metallic sulfides. He found that the current running through these devices depended on the polarity of the voltage across them, making him the discoverer of the rectifying contact, or diode.
The search for semiconducting materials grew during the early twentieth century. After testing thousands of materials, American engineer Greenleaf Whittier Pickard (1877-1956) found a way to detect radio signals using a slender metal wire, which he called a “cat’s whisker,” pressed against the surface of a silicon crystal (similar to the devices tested by Ferdinand Braun). In 1906 Pickard patented his crystal detector, which became a basic component of early radio sets. By conducting current more in one direction than in the other, the crystal detector converted the alternating current originating from a radio antenna into the simpler signals required for the listener’s headphones. Nine years later, American physicist Manson Benedicks developed a crystal device based not on silicon but on germanium. Neither Pickard nor Benedicks could explain why their devices worked, but both produced solid empirical evidence of their behavior.
Although popular with radio hobbyists, crystal detectors were finicky and unreliable. In 1904, two years before Pickard’s patent, the English electrical engineer and physicist John Ambrose Fleming (1849-1945) invented the thermionic valve, an electronic radio-wave rectifier. Building on the “Edison effect,” in which current flows from the heated filament of American inventor Thomas Edison’s (1847-1931) light bulb to a nearby electrode, Fleming placed two electrodes inside a partially evacuated glass (vacuum) tube. By carefully shaping the electrodes—the negative cathode and the positive anode—he found that alternating current entering the tube was “rectified,” or converted into direct current. This thermionic valve, or Fleming valve, was the first vacuum-tube diode.
In 1907 American inventor Lee De Forest (1873-1961) invented the audion, a new type of vacuum tube, by inserting a third electrode, called the grid, between the two electrodes of Fleming’s rectifier. De Forest found that a small voltage applied to the grid controlled, and could greatly amplify, the current flowing between the other two electrodes. This provided a cost-effective way to amplify voice transmissions via radio. AT&T also modified the audion in 1914 for use as a signal amplifier on long-distance telephone lines.
Although the redesigned audion was a success in the transcontinental phone line, it was clear that vacuum tubes were prone to failure. AT&T hoped that a new device might be developed that was based on semiconductors. This was the start of a commitment to industrial research and development that would, some 40 years later, lead to the invention of the transistor.
Semiconductors and Quantum Mechanics
Until the 1920s, work on semiconductors was primarily empirical and practical. After the discoveries of Austrian physicist Erwin Schrödinger (1887-1961) and German physicist Werner Heisenberg (1901-1976), quantum mechanics was applied to the developing field of solid-state physics (of which semiconductor work was one part). The first move toward a quantum theory of solids was an analysis of electrons in metals. German physicist Arnold Sommerfeld (1868-1951), working at the University of Munich, started from a classical model in which the electrons are treated as a noninteracting gas and added so-called Fermi-Dirac statistics, in which no two electrons can share the same quantum state.
Although Sommerfeld’s theory improved agreement with experimental data, there were still notable failings, including its inability to explain how electrons could have a large mean free path in crystals (that is, travel such long distances between collisions). In 1928 one of Heisenberg’s students, the Swiss-born American theorist Felix Bloch (1905-1983), answered this question in his doctoral thesis by viewing the metal as a three-dimensional lattice. After Bloch used a periodic potential to represent the atoms making up the crystal lattice, he was able to assume that the function describing the location of the electrons took a form reflecting the pattern of the lattice potential. A long mean free path for electrons followed naturally from this assumption. More important, instead of finding discrete energy levels as in individual atoms, Bloch found that electrons in metals had a band, or range, of allowed energies.
Bloch’s use of quantum theory made it possible to predict the conductivity of conductors, but it could not tell the difference between insulators, conductors, and semiconductors. This was accomplished by the English physicist Alan Wilson (1906-1995), whose papers on band theory almost single-handedly made the study of semiconductors, and solid-state physics in general, recognized fields of study. In addition to Bloch’s formalism, Wilson used the ideas of German-born British physicist Rudolf Peierls (1907-1995), who found that vacancies in almost-filled electron bands could be considered “holes” that behaved as though they were positively charged carriers. Peierls also found that electron bands were often discontinuous in solids, where the bands of allowed energies were broken up by regions of forbidden energies, or “band gaps.”
Wilson combined these ideas into a simple and convincing explanation for the differences among materials. Conductors were materials in which the electrons filled states up to a level that still lay within a continuous band of energies, leaving them free to move from atom to atom. Insulators were materials in which the electrons completely filled one band of energies, after which there was a large energy gap to the next available band. Even if an insulator were heated to high temperatures, the electrons would not be able to jump across the gap into the next band, where they could move around the solid. Semiconductors were simply insulators with a smaller energy gap. When heated, electrons could jump out of the valence band and into the conduction band. This produced a free electron in the conduction band and a hole, or empty space, in the valence band.
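Wilson’s classification can be made roughly quantitative with a Boltzmann factor: the likelihood that thermal energy promotes an electron across a gap of width E<sub>g</sub> scales approximately as exp(−E<sub>g</sub>/2kT). A sketch, using approximate band-gap values (the specific materials and figures are illustrative, not from the original account):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def excitation_factor(band_gap_ev, temp_k):
    """Relative likelihood of thermally promoting an electron across
    the band gap; proportional to exp(-Eg / 2kT)."""
    return math.exp(-band_gap_ev / (2 * K_B * temp_k))

# Approximate band gaps in eV; conductors effectively have no gap.
for name, gap in [("diamond (insulator)", 5.5),
                  ("silicon (semiconductor)", 1.12),
                  ("germanium (semiconductor)", 0.67)]:
    print(f"{name}: {excitation_factor(gap, 300):.2g} at 300 K, "
          f"{excitation_factor(gap, 600):.2g} at 600 K")
```

The numbers show why heating matters for a semiconductor but not an insulator: silicon’s factor grows by many orders of magnitude between 300 K and 600 K, while diamond’s remains vanishingly small at both temperatures.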
The conductivity of pure (or intrinsic) semiconductors could be controlled by adding impurities whose atomic energy levels fell between the valence and conduction bands, a process called doping. This could be done in two ways. Adding impurities that contain one more electron than the original semiconductor results in an n-type material (such as phosphorus-doped silicon); the extra electrons in the conduction band, the majority carriers, are available for conduction, while the holes are minority carriers. Adding impurities that contain one less electron than the original semiconductor results in a p-type material (such as boron-doped silicon); in this case, the holes in the valence band are the majority carriers available for conduction.
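The balance between majority and minority carriers after doping follows the textbook mass-action law, n·p = n<sub>i</sub>², where n<sub>i</sub> is the intrinsic carrier density. A minimal sketch, assuming full ionization of the dopants and using an illustrative room-temperature value for silicon:

```python
import math

def carrier_densities(n_i, donors=0.0, acceptors=0.0):
    """Electron and hole densities (per cm^3) from the mass-action law
    n * p = n_i**2, assuming full ionization of the dopants."""
    net = donors - acceptors  # net donor concentration
    # Solve n - p = net together with n * p = n_i**2
    n = (net + math.sqrt(net**2 + 4 * n_i**2)) / 2  # electrons
    p = n_i**2 / n                                   # holes
    return n, p

N_I = 1e10  # illustrative intrinsic density for silicon at 300 K, per cm^3
n, p = carrier_densities(N_I, donors=1e16)  # phosphorus-doped: n-type
# Electrons (majority, ~1e16) vastly outnumber holes (minority, ~1e4).
```

The asymmetry is striking: a donor concentration only a millionth of the atom density shifts the electron-to-hole ratio by twelve orders of magnitude, which is why doping gives such fine control over conduction.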
Semiconductor theory also benefited from the study of rectifying contacts. One of the most important contributions came in 1939 from German theorist Walter Schottky (1886-1976), working at the German electronics firm Siemens. He studied industrial rectifiers used in power applications that were based on junctions between copper and cuprous oxide, a p-type semiconductor. After a discussion with Peierls, Schottky noted that a small potential difference occurs at the surface of the semiconductor, partly sweeping out its free charge carriers. This leaves a depletion region of low conductivity at the surface, where no carriers are available for conduction. After calculating the depletion region’s shape and width, Schottky found that if a positive bias (positive voltage) is applied to the semiconductor side, the depletion region is eliminated and a current flows. If positive bias is applied to the metal side, however, the normal potential difference between the materials increases, along with the width of the depletion region, and no current flows.
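The depletion-layer picture Schottky developed can be sketched numerically. The standard one-sided formula gives the depletion width as W = √(2ε(V<sub>bi</sub> − V<sub>a</sub>)/qN); the constants and values below are illustrative silicon-like figures, not Schottky’s copper-oxide calculations:

```python
import math

Q = 1.602e-19    # elementary charge, C
EPS = 1.04e-12   # permittivity of silicon, F/cm (approximate)

def depletion_width_cm(built_in_v, doping_cm3, applied_v=0.0):
    """One-sided depletion width W = sqrt(2*eps*(Vbi - Va) / (q*N)).
    Forward bias (positive applied_v) narrows the region; reverse
    bias widens it, choking off the current."""
    return math.sqrt(2 * EPS * (built_in_v - applied_v) / (Q * doping_cm3))

w_equilibrium = depletion_width_cm(0.7, 1e16)    # roughly a third of a micron
w_reverse = depletion_width_cm(0.7, 1e16, -5.0)  # wider: junction blocks
w_forward = depletion_width_cm(0.7, 1e16, 0.5)   # narrower: junction conducts
```

The bias dependence of the width is the whole story of rectification in this model: one polarity thins the insulating layer until carriers flow, the other thickens it.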
These breakthroughs in solid-state theory enabled the development of devices that would have been beyond imagining in the late nineteenth and early twentieth centuries. One of the most important uses of this science in World War II came from the development of radar. In 1917 Nikola Tesla (1856-1943) pioneered the first use of electromagnetic energy to detect objects at a distance; by the mid-1930s an international effort had emerged, led mainly by British scientist Robert Watson-Watt (1892-1973), who patented the first workable system in 1935. In 1940 the United States agreed to assist the British war effort by establishing a sort of Manhattan Project for radar, the Radiation Laboratory (Rad Lab) at the Massachusetts Institute of Technology.
The British shared a powerful new source of electromagnetic radiation with their American allies, a device called the cavity magnetron. Detecting the reflected microwave signals required rectifiers that could operate at very high frequencies, a role filled by crystal rectifiers based on semiconductor materials, particularly silicon. The Rad Lab enlisted American scientist Frederick Seitz (1911-2008) and his solid-state physics group at the University of Pennsylvania to produce purer initial samples of silicon and control subsequent doping. Seitz, who had published an influential textbook summarizing then-current knowledge in solid-state physics, now led his research team in experiments that purified samples of silicon through repeated melting, then carefully doped them with boron.
Other new developments were encouraged by the war effort but arrived too late to affect its outcome. One group associated with the Rad Lab, at Purdue University, was headed by Austrian-born American physicist Karl Lark-Horovitz (1892-1958), who produced high-purity germanium samples doped with tin. The resulting rectifiers were able to withstand about ten times the voltage of silicon devices. The germanium rectifiers came off the Western Electric production lines in early 1945, not quite in time to affect the deployment of wartime radar systems, but making a considerable impact on general electronics.
Another wartime development was a new type of semiconductor diode discovered by American scientist Russell Ohl (1898-1987) at Bell Laboratories in 1939. Ohl accidentally found a junction between p-type and n-type regions while testing a silicon sample. Upon further investigation, he found that this p-n junction acted as a rectifier (current passed when a positive potential was applied to the p-type side of the diode but not when it was applied to the n-type side), and that it produced a relatively large voltage when light was shined on it via the photovoltaic effect, a type of photoelectric effect.
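The one-way behavior Ohl measured in his p-n junction is captured by what later became known as the ideal (Shockley) diode equation, I = I<sub>s</sub>(e^(qV/kT) − 1). A sketch with an illustrative saturation current (the parameter values are assumptions for demonstration, not Ohl’s data):

```python
import math

def diode_current(v, saturation_current=1e-12, temp_k=300):
    """Ideal (Shockley) diode equation: I = I_s * (exp(V / Vt) - 1),
    where Vt = kT/q is the thermal voltage (~26 mV at room temperature)."""
    v_thermal = 8.617e-5 * temp_k  # kT/q in volts
    return saturation_current * (math.exp(v / v_thermal) - 1)

forward = diode_current(0.6)   # positive bias on the p-side: milliamps flow
reverse = diode_current(-0.6)  # bias reversed: only picoamps leak through
```

The nine-orders-of-magnitude asymmetry between the two bias directions is exactly the rectifying action that made the p-n junction useful.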
After the war, Bell Labs continued to search for devices that could replace the expensive and unreliable tube-based amplifiers. American scientist Walter Brattain (1902-1987) won a position in the company’s new solid-state division along with John Bardeen (1908-1981), who left academia for industrial research and development. The two worked with American physicist William Shockley (1910-1989), one of the division’s directors.
The group first attempted to realize Shockley’s concept of a field-effect transistor (FET), in which two electrodes (later named a source and a drain) were placed at either end of a piece of semiconducting material and an external electrode (the gate) was placed above. Shockley hoped that appropriate voltage on the external electrode would produce a charge layer to connect the input and output electrodes. Controlling current flow in this way made the device similar in operation to a triode vacuum tube.
Unfortunately, the first attempts in 1945 failed. Bardeen and Brattain suspected that rogue quantum states at the surface of the semiconductor were trapping charge carriers and forming an electric shield that canceled the external electrode’s influence beyond the surface. To check this hypothesis, they conducted a number of experiments, eventually fixing the problem by placing an electrolyte (e.g., water) between the external electrode and the semiconductor.
This led the team to a new semiconductor design that used a slab of n-type germanium and three contacts: one to the bottom of the slab and two to the top. Bardeen and Brattain found that the first versions of this device gave small but promising amplification. At this point they made a crucial and unexpected discovery: When the two surface contacts were sufficiently close together, the operation of the device improved significantly.
Bardeen and Brattain then built yet another device using two rectifying surface contacts positioned close together with a clever spring-loaded assembly. On December 16, 1947, this point-contact transistor gave strong amplification. The team determined that when one rectifying point contact (the emitter) was given forward bias (in this case, positive voltage), it injected holes into the n-type germanium, which were then drawn to the other point contact (the collector), since it was reverse biased (given negative voltage). Because the holes, the minority carriers inside n-type material, were as important as the electrons themselves (the majority carriers), the device was referred to as a bipolar transistor.
Shockley had not been greatly involved in the point-contact transistor and was eager to make his own contribution. In fact, a difference of opinion regarding the interpretation of the point-contact transistor led him to an even better transistor design. While Bardeen and Brattain thought that the minority carriers were confined to the transistor’s surface and could not travel through the bulk of the semiconductor, Shockley took the opposite view, proposing a transistor that would involve the transport of minority carriers within the bulk.
This bipolar junction or npn transistor consisted of three layers of semiconductor: the outer two (the emitter and collector) were n-type; the middle layer (the base) was p-type. Unlike the point-contact transistor, in which the emitter and collector were placed very close together at the surface of a large base, in the junction transistor the base had to be made very slender to bring the emitter and collector close enough together. The work of American physical chemist Gordon Teal (1907-2003) and American engineer Morgan Sparks (1916-2008) on growing germanium p-n junctions made such slender base regions technically possible; the new junction transistor was conclusively demonstrated on April 12, 1950. Two years later Henry Theuerer’s work on growing highly purified silicon made it possible to construct silicon junction transistors.
During the 1950s and 1960s, miniaturization in electronics was motivated by consumer products such as radios and hearing aids, military applications such as missile systems, and many areas of scientific research. The idea of a monolithic integrated circuit, one that combined many components in a single part, spurred much research, and it was first realized in working devices by two people at roughly the same time. At Texas Instruments, American engineer Jack Kilby (1923-2005) recognized in September 1958 that resistors, capacitors, and transistors could all be made out of silicon and combined on a single piece of material. However, because Texas Instruments had not yet developed the silicon technology that he needed, Kilby’s first all-in-one circuits were germanium based, including an oscillator and a switching circuit called a flip-flop.
A few months later, at the start-up company Fairchild Semiconductor, American engineer Robert Noyce (1927-1990) came up with an even better approach to monolithic integration. In 1958 Fairchild began to manufacture discrete silicon transistors with the “mesa” approach pioneered at Bell Labs: a pattern was defined on each thin silicon wafer with a carefully applied wax patch, and the exposed area was then etched with acid, leaving raised “mesa” surfaces where the wax had been. Soon, instead of wax patches, Fairchild turned to photolithography, in which a photosensitive material was applied to the silicon wafer and exposed to an image of the circuit. A weak acid then uncovered the desired areas, which were subjected to etching and eventual doping in diffusion ovens.
Another Fairchild scientist, Jean Hoerni (1924-1997), introduced the concept of planar manufacturing, in which many devices were constructed on a silicon wafer and connected by metal strips, again using photolithography. During various steps the wafers were given coverings of silicon dioxide to aid the etching and diffusion and to provide a protective coating for the finished product. Fairchild’s new technology worked wonderfully for the construction of integrated circuits; the company won a patent for it in 1961.
Integrated circuit manufacturing technologies led to the development of new devices, the most important of which was actually an old one: Shockley’s 1945 idea of a field-effect transistor (FET). In 1960 Korean-born American physicist Dawon Kahng (1931-1992) and Egyptian-born American engineer Martin Atalla of Bell Labs developed a variation on Shockley’s idea in which the drain and source electrodes were connected or not connected depending on whether the voltage on the gate electrode produced an inversion layer of minority carriers. For example, when a p-type substrate is subjected to a positive gate voltage, an inversion layer of electrons connects the n-type source and drain. The new planar technology made it relatively easy to manufacture the metal-oxide-semiconductor FET (or MOSFET), especially since the silicon surfaces that had given Shockley so much trouble could now be rendered harmless by the “passivation” of a final silicon dioxide covering. The MOSFET, in turn, enabled a new surge of miniaturization and, because it was cheaper to fabricate, all but replaced junction transistors (though these were still manufactured for special applications).
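The gate-controlled switching described above is often caricatured with the textbook long-channel “square-law” model of an n-channel MOSFET. A sketch, with all parameter values illustrative:

```python
def mosfet_drain_current(v_gs, v_ds, v_threshold=0.7, k=1e-3):
    """Textbook long-channel square-law model of an n-channel MOSFET.
    Below the threshold voltage no inversion layer forms, so the source
    and drain stay disconnected; above it, current flows."""
    if v_gs <= v_threshold:
        return 0.0                      # off: no inversion layer
    v_ov = v_gs - v_threshold           # overdrive voltage
    if v_ds < v_ov:                     # triode (resistive) region
        return k * (v_ov * v_ds - v_ds**2 / 2)
    return 0.5 * k * v_ov**2            # saturation region

off_state = mosfet_drain_current(0.5, 1.0)  # gate below threshold: no current
on_state = mosfet_drain_current(1.7, 5.0)   # gate above threshold: conducts
```

The model’s hard on/off boundary at the threshold voltage is the property that made the MOSFET an ideal building block for digital logic.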
Modern Cultural Connections
Miniaturization was crucial to the computing revolution that occurred in the years following the MOSFET’s invention. Two of the most important new devices were developed by Intel, a company founded in 1968 by scientists including Robert Noyce (1927-1990) and Hungarian-born American chemical engineer Andrew Grove (1936-2016). In 1970 Intel produced the first random access memory (RAM) chip using silicon-based processing (the Intel 1103). One year later it manufactured the first microprocessor (the Intel 4004). During the next decade, Intel continued to improve its products and, in 1981, benefited greatly when IBM chose Intel’s 8088 microprocessor for its personal computer. Though not the first personal computer (Xerox and Commodore had introduced their own some years earlier), IBM’s Intel-powered model dominated the market.
Since the invention of the integrated circuit, computer parts have become denser by virtue of their smaller feature size. Microprocessors constructed in the early 1970s contained about 5,000 transistors and had feature sizes (the width of silicon or metal lines) of about 10 microns. Intel’s Pentium microprocessors of the early 2000s contained about 30 million transistors and had feature sizes below 0.2 microns. At this writing, nanotechnology, the design and manufacture of functional electronic systems at the molecular level, is being explored by academic researchers and industry executives alike. Some forecasters suggest that nanolithography will produce integrated circuits with features below the 0.05 micron level. Others suggest that nanotechnology will move computer hardware away from semiconductor-based technologies altogether, toward entirely new methods of storing information.
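The transistor counts quoted above imply a doubling rate close to the famous Moore’s law cadence. A quick check, with the exact years assumed as stand-ins for “early 1970s” and “early 2000s”:

```python
import math

# Transistor counts quoted in the text; the years are assumed
# stand-ins for "early 1970s" and "early 2000s".
years = 2002 - 1971
doublings = math.log2(30_000_000 / 5_000)
print(f"{doublings:.1f} doublings in {years} years: "
      f"one every {years / doublings:.1f} years")
```

The result, roughly one doubling every two to three years, matches the historical pace of transistor-count growth over that period.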