Computers and Anthropology

Stefan Artmann. 21st Century Anthropology: A Reference Handbook. Editor: H James Birx. Volume 2. Thousand Oaks, CA: Sage Reference, 2010.

Human beings are toolmakers. The history of their civilization is strongly influenced by technological innovations that, to an ever-greater extent, make the use of matter and energy for human purposes possible. From the building of huts and the making of a fire to the construction of skyscrapers and the fusion of nuclei, all cultures have been transforming the resources of nature to reproduce and improve the basis of their own survival. Besides matter and energy, there exists another source of supply that has been exploited technologically since time immemorial—information. Whoever used, for the first time, a sharp edge to leave a durable mark that could express excitement, appeal to demons, or represent a bagged animal, stood at the beginning of a process that led to our modern technologies for storing, transmitting, and processing information. Writing, printing, telegraphy, telephony, radio, and television opened new ways of storing and transmitting information; digital computers also revolutionized the processing of information. Although other innovations of the 20th century, such as the industrial production of artificial fertilizers, might have a greater impact on the biological survival of the human species, the computer is the key technology of today when it comes to the emergence of a globalized information society.

The computer is an artifact that has manifold and far-reaching repercussions on current anthropological research. Apart from being a very useful tool for scientific data management and analysis, the computer serves as a model of the human mind and shapes the understanding of information society. The individual and culture are interwoven when the future of humankind is discussed in terms of the relation between human beings and the technologies they invent: Does cultural evolution lead to super-intelligent artifacts, or to yet unknown kinds of hybrid systems that will take the place of Homo sapiens?

It is necessary for an anthropologist to have at least a basic understanding of the technological history and the conceptual foundations of the computer. In the following, historical landmarks, which are representative of the cultural role of digital calculating technology, are described, and future trends, which are already discernible today, are presented. Then, the fundamental concept of computation is introduced, together with the simplest form of an abstract computing machine. Finally, the focus shifts to the science called artificial intelligence, which builds a bridge between computer science and anthropology and thus allows the field of computational anthropology to be established.

From Abacuses to PCs and Beyond: A Very Short History of Digital Calculating Technology

The desire to create artifacts that have all the abilities human beings possess is documented by testimonies handed down to us from antiquity. The legendary king Pygmalion of Cyprus, whose story Ovid tells, carved a statue of his ideal woman because real women did not live up to his expectations. After he had fallen in love with his work of art, Aphrodite breathed life into the statue so that Pygmalion could marry her. Real technologies, however, were largely bound to mimic and enhance the physical capacities of human beings and other organisms. None of the Greek gods gave assistance in creating objects with cognitive skills. The long history of digital devices for calculating with numbers, which eventually led to the development of computers, clearly shows the difficulties human inventors had to overcome in constructing intelligent artifacts. Digital, from the Latin digitus, meaning “finger” (the principal corporeal counting aid), means that these devices are based on counting discrete units as representations of numbers, in contrast to analog instruments, which are based on measuring continuous physical quantities.

Digital Calculators

The Abacus

Although the development of intelligent machines is still a matter of front-line research, digital instruments that help human beings with routine mental activities, such as doing arithmetic, were already used in early but advanced cultures. Most widespread was the abacus (the Latin loan word of a Greek expression meaning “slab”), which has been used in various forms, for example, in Mesopotamia, Persia, Greece, Rome, India, China, and Central America. Typically, it consists of a stable structure, such as a tablet inscribed with geometrical markings or a frame holding parallel rods, in which small objects (calculi, Latin for “pebbles”), such as stones or beads, are moved as counters.

Strictly speaking, an abacus does not calculate. It is a number-representing mnemonic tool, the functioning of which is completely dependent on manual activity. An abacus assists in a calculation; its stable structure makes its user follow the internal order of a particular numerical system, and the actual positions of its counters store the partial results of an ongoing calculation.

Mechanical Calculators

The next decisive step, beyond number-recording aids, was the invention of mechanical calculators in early modern times, when thinking in terms of mechanisms flourished. These calculators could perform more and more complex arithmetical tasks with less and less human intervention. In 1623, the first one was built for astronomical tasks by the German theologian and scientist, Wilhelm Schickard (1592-1635). It was called a “calculating clock,” could add as well as subtract, and—what is most important—had a decimal carryover mechanism: If the sum of two digits exceeded nine, a one was carried over to the next column of numbers to the left.
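
To make the mechanism concrete, the following is a minimal sketch in Python (all names are illustrative, not taken from any historical source) of column-wise decimal addition with carryover, the operation Schickard’s gears performed mechanically.

```python
def add_with_carry(a_digits, b_digits):
    """Add two numbers given as lists of decimal digits (least significant first,
    both lists of equal length), carrying a one into the next column whenever a
    column sum exceeds nine -- the operation Schickard's gears performed mechanically."""
    result, carry = [], 0
    for a, b in zip(a_digits, b_digits):
        column_sum = a + b + carry
        result.append(column_sum % 10)   # the digit that stays in this column
        carry = column_sum // 10         # the carried-over one (0 or 1)
    if carry:
        result.append(carry)
    return result

# 47 + 85 = 132, represented least-significant digit first
print(add_with_carry([7, 4], [5, 8]))   # [2, 3, 1]
```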

Two famous philosophers, both of whom were also pioneers in mathematics, pushed the development of calculators forward. In the early 1640s, the young Blaise Pascal (1623-1662) invented, for his father, a tax collector, a machine that could add and subtract with automatic carryover. Thirty years later, Gottfried Wilhelm Leibniz (1646-1716) formulated, within the context of his Enlightenment project to free humankind from the tiring burden of tedious mental activity, the design principles of calculators that implement all four basic rules of arithmetic. Leibniz’s so-called “stepped reckoner mechanism” was used until the 20th century. Great technological progress was made, of course, with respect to the complexity, accuracy, speed, user-friendliness, and durability of such calculators.

What the calculators mentioned so far have in common is that the level of difficulty of the tasks they can perform is quite strictly limited by their mechanical structure. Ideas for overcoming those limits evolved during the 19th century. A legendary calculator of that time, the analytical engine, was planned from 1834 onward by the British mathematician Charles Babbage (1791-1871). Although he never realized it, not least because of the inadequate mechanical engineering of those days, the design of the analytical engine marked a great advance by introducing the idea of programming. Babbage wanted to use perforated cards, which had been employed for the control of mechanical looms since the 18th century, to instruct the analytical engine to work through a sequence of simple calculations that, taken together, perform a complex mathematical task. In this scheme, each simple calculation depended on the result of the previous calculation in the sequence. Had Babbage succeeded, a great step toward the full mechanization of arithmetic would have been taken. With his failed project, mechanical calculators entered, at least conceptually, the age of automation.

The First Digital Computers

The rapid development of science and technology, with which the industrial revolution went hand in hand, led to a strong demand for calculating machines and people who could operate them fast and reliably, in order to solve mathematical equations. These people were called computers—a word derived from the Latin computare, “to calculate” (originally, to cut numerals into a piece of wood). However, machines were not called “computers” before the end of World War II. General technological prerequisites for their development included the use of electricity (not only as a power source but also as a carrier of information, e.g., in Morse code telegraphy), the development of relays as electromagnetic switches that can control the flow of electricity, and the refinement of punched card and paper tape technology, for example, in tabulating systems that were used for the storage and statistical analysis of huge volumes of data.

The first to design digital computers in the 1930s, and to build them in the 1940s, were physicists and engineers in Germany, the United States, and Great Britain. Their developments were carried out in parallel and, for the most part, independently of each other. Although the literature on the history of early computers is sometimes biased by national and institutional interests, the following selection of four machines is intended simply to show the variety of technology and purpose that characterizes the first generation of computers.

Pioneering Machines: Z3, ASCC, ENIAC, and Colossus

The first functioning general-purpose digital computer, the Z3, was built by the German civil engineer, Konrad Zuse (1910-1995), in 1941. Following an idea of Leibniz, Zuse based his machine on the binary system—in hindsight, a natural consequence of using relays (the Z3 contained about 2,600). These electromechanical switches can be in two different states, which represent zero and one, respectively. Zuse implemented logical operations (the connectives AND and OR as well as the negation NOT) in electrical circuits and used them to calculate with binary numbers.
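
The following is a minimal sketch in Python, illustrative only and not a model of Zuse’s actual relay circuits, of the principle at work: binary addition composed entirely of the logical operations AND, OR, and NOT.

```python
# Sketch of the principle behind relay-based binary arithmetic: a one-bit
# "full adder" built only from the logical operations AND, OR, and NOT.
def xor(a, b):
    # Exclusive OR expressed through AND, OR, and NOT
    return (a or b) and not (a and b)

def full_adder(a, b, carry_in):
    """Add two bits plus an incoming carry; return (sum bit, outgoing carry)."""
    s = xor(xor(a, b), carry_in)
    carry_out = (a and b) or (carry_in and xor(a, b))
    return s, carry_out

def add_binary(x_bits, y_bits):
    """Add two binary numbers given as equal-length bit lists, least significant first."""
    result, carry = [], False
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 3 (binary 11) + 1 (binary 01) = 4 (binary 100), least significant bit first
print(add_binary([True, True], [True, False]))   # [False, False, True]
```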

Zuse was not encouraged much by the official institutions of Nazi Germany; after the war, he set up the leading computer company of West Germany. In Great Britain and the United States, the situation was completely different. The development of computers, at first particularly for military purposes, had a lot of support from companies, scientific institutions, and governments. Thus, it cannot come as a surprise that the first large-scale computer was built in the United States. In 1937, a Harvard engineer, Howard H. Aiken (1900-1973), proposed to construct an electromechanical computer on the basis of the available technology of decimal-system calculating machines. With the support of IBM, this machine became the automatic sequence controlled calculator (ASCC), or Harvard Mark I, and it was put into operation in 1944. Aiken’s calculator—a large-scale machine indeed: 50 feet long and nearly 10 feet high, made of 750,000 parts and 500 miles of wire—realized Babbage’s dream of a programmable machine for number crunching, in this case a special-purpose one for the numerical solution of differential equations. A giant of the preelectronic epoch, the ASCC was in use until 1959.

Only 2 years after Aiken’s machine, America’s first large-scale electronic computer was completed. The ENIAC, an acronym for electronic numerical integrator and computer, was built by J. Presper Eckert (1919-1995), an electrical engineer, and John W. Mauchly (1907-1980), a physicist, at the University of Pennsylvania between 1943 and 1945. Whereas the ASCC had been the result of an ingenious combination of traditional technology, the ENIAC was based on state-of-the-art electronics: high-speed vacuum tubes that were not as reliable as relays but could switch 1,000 times faster. Its 18,000 tubes made it by far the fastest machine built up to that time, yet programming it took quite a long time, since rewiring was necessary for each new problem. Like the ASCC, the ENIAC was used mainly to calculate (in the decimal system) solutions of differential equations for military and civil purposes, until 1955.

Not until about 30 years after the end of World War II did it become known that the ENIAC was not the world’s first large-scale electronic computer. Beginning in the 1970s, declassified information from the British intelligence services revealed, step by step, the existence and specifications of the Colossus machines, a series of computers the first of which had been completed in 1943. They were special-purpose machines used to discover the settings of the code wheels of German cipher machines, so that intercepted encrypted messages could be decrypted. Encoding and decoding are forms of symbol manipulation—and even more general ones than calculating, since they are not restricted to numerals. Because cryptanalysis must happen as fast as possible, particularly in times of war, the electrical engineer Thomas H. Flowers (1905-1998), who worked for the now-famous codebreaking center at Bletchley Park during World War II, decided to automate it by using computers that were based on vacuum tubes (about 1,500 in the first Colossus, about 2,500 in the later machines) and operated in binary logic.

Triumph of the Computer

After World War II, the computer started its triumphal march and became the key technology of modern developed societies. This success story has been made possible by an impressive series of technological breakthroughs, some of which will be highlighted below.

Technological Progress on the Basis of the von Neumann Architecture

None of the four pioneer machines introduced above had an internal memory that could store programs. The first more-than-experimental computers that incorporated such storage were the electronic discrete variable automatic computer (EDVAC), built by Mauchly and Eckert between 1945 and 1951 as a smaller and faster successor to their ENIAC, and the IAS computer, named after the Institute for Advanced Study in Princeton, New Jersey, where its construction was started in 1946 and finished in 1952. What seemed to be just a rather innocuous technological improvement was, in fact, a great conceptual step toward the realization of a truly all-purpose computer. Storing the program (or software) that controls the working of the computer in an internal memory made it easier to change instructions—not only for the human programmer, but also for the computer itself—even during operation. From that time on, programs were considered symbol structures that could serve as data for other programs.

The general design principles of the IAS computer were developed by the Hungarian-born mathematician John von Neumann (1903-1957), one of the greatest scientists of the 20th century. Not only did he know the need for fast, reliable, and easily programmable computers very well from his wartime work, for example, on the atomic bomb, but he also had the ability to recognize very quickly the formal structures underlying a tremendous variety of scientific and engineering problems and to find ingenious solutions for them. After Mauchly and Eckert had told von Neumann about their ideas for the EDVAC, he began to tackle the formal problem of conceiving a general architecture for the all-purpose computer. The solution he found—now called, in his honor, the von Neumann architecture—is the abstract organization according to which ordinary computers are designed to this day. It distinguishes the five essential material (hardware) components of a computer:

  • A memory unit, whose locations store data and programs so that they can be read and rewritten arbitrarily
  • A central arithmetic unit, which implements the fundamental rules of arithmetic
  • An input unit, which allows the user to feed the computer new data and programs
  • An output unit, which transmits the results of computations and other information about the computer to the user
  • A central control unit, which manages the sequential execution of programs by coordinating the information flow in the computer
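
As an illustration of how these five units cooperate, here is a minimal sketch in Python of a fetch-execute cycle over a single memory that holds both program and data; the instruction names are hypothetical, the input unit is omitted (the program is simply preloaded), and real machines are of course vastly more complex.

```python
# Illustrative sketch of the von Neumann fetch-execute cycle
# (hypothetical instruction set; not a model of any real machine).
memory = [            # one memory holds the program *and* its data
    ("LOAD", 6),      # 0: load the value stored at address 6 into the accumulator
    ("ADD", 7),       # 1: add the value stored at address 7
    ("STORE", 8),     # 2: write the accumulator back to address 8
    ("PRINT", 8),     # 3: output unit: show the value at address 8
    ("HALT", 0),      # 4: stop
    0,                # 5: (unused)
    40, 2, 0,         # 6-8: data cells
]

accumulator = 0            # register of the central arithmetic unit
program_counter = 0        # position of the central control unit in the program

while True:                # the control unit's sequential execution of the program
    opcode, address = memory[program_counter]
    program_counter += 1
    if opcode == "LOAD":
        accumulator = memory[address]
    elif opcode == "ADD":
        accumulator += memory[address]
    elif opcode == "STORE":
        memory[address] = accumulator
    elif opcode == "PRINT":
        print(memory[address])     # the output unit reports a result (here: 42)
    elif opcode == "HALT":
        break
```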

The IAS machine served as the paradigm for a number of influential noncommercial and commercial computers, both inside and outside the United States. The implementation of von Neumann’s general architecture has, of course, benefited from incredible technological progress, which concerns all components of the computer. The most evident trend is miniaturization. Whereas the pioneer machines of the 1940s and early 1950s used electromechanical relays and vacuum tubes as switches, the second generation of computers, from the mid-1950s on, experimented with transistors, a solid-state technology invented in 1947. A few years later, the third generation used integrated circuits, or chips (invented 1957-1958), which combine transistors and other semiconductors in prewired configurations and are fabricated from a single piece of silicon. This process led, in the early 1970s, to the implementation of the arithmetic and the control unit (plus working memory) on one chip, the microprocessor. According to Moore’s Law, which was postulated by the engineer Gordon E. Moore (b. 1929) in 1965 and has held to this day, the number of transistor functions on a chip doubles about every 18 months.
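
Taken at face value, a doubling every 18 months means exponential growth of the form N(t) = N0 × 2^(t/1.5), with t in years; the following back-of-the-envelope sketch in Python (the starting figure is purely illustrative) shows how quickly such growth compounds.

```python
# Back-of-the-envelope illustration of Moore's Law as stated above:
# transistor functions per chip doubling every 18 months (1.5 years).
start_count = 2_300   # illustrative starting value, on the order of an early-1970s microprocessor
for years in (0, 15, 30):
    count = start_count * 2 ** (years / 1.5)
    print(f"after {years:2d} years: about {count:,.0f} transistor functions")
```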

The progress of switching technology led to a spectrum of machines that implemented the von Neumann architecture on different scales. In the 1960s, mainframe computers, such as the IBM System/360, continued the tradition of the pioneer machines. These room-filling and expensive systems were built for the high-speed processing of very large data sets and were operated by specialists. Minicomputers, such as the DEC PDP-8, appeared in the second half of the 1960s. They were smaller and less expensive machines made possible by transistors. Not only did these machines allow their users to interact with them directly, but their standard design also had to be adapted individually to particular applications. In the second half of the 1970s, microprocessor-based personal computers, such as the Apple II, began to fascinate hobbyists, and since the 1980s these computers have been welcomed by small businesses and ordinary consumers. Such innovations—for example, the implementation of graphical user interfaces—made the use of the computer ever easier in the office and at home, and the personal computer and its mobile siblings became an integral part of everyday life.

Besides miniaturization and increasing user-friendliness, the spread of computer networks is another very important trend of the last 25 years. Starting in the mid-1980s, personal workstations, as parts of local networks that connect computers of any size, made the idea of distributed systems familiar in many professional contexts. The 1990s and the beginning of the third millennium saw the rise of global networking, via the Internet, and especially the World Wide Web, in all areas of society; the transmission and processing of information, communication, and computation began to fuse together in such a way that a truly global information society is technologically possible now.

SAGE: The System Dynamics of Information Society in a Nutshell

From the perspective of cultural anthropology, the most significant tendency in the history of digital computing, since World War II, is the development of an ever-stronger integration of computers into all fields of human activity. They are used in offices (e.g., for accounting), in factories (e.g., for control of manufacturing machines), in laboratories (e.g., for data analysis), in the armed forces (e.g., for early warning), in the infrastructure (e.g., for traffic-flow control), and at home (e.g., for playing).

What is quite remarkable about computers, compared to other artifacts, is their astonishing capacity to induce, owing to their information-processing abilities, the emergence of new sociotechnological systems in all contexts of use. Such systems are purposefully ordered sets of equipment and processes on the one hand and ordered sets of institutions and persons who conceive, realize, and use technologies on the other. The task of creating and developing those complex organizations, and of controlling them through networked computers, forced engineers and scientists to think in general terms about systems as functional units of the information society.

To show the impact of that thought on modern culture, it is advisable to have a closer look at the prototype of a computerized sociotechnological system, the Semi-Automatic Ground Environment (SAGE). It served, from the 1950s to the 1980s, as a command, control, and communications system for the continental air defense of the United States. SAGE automated the early warning about, and interception of, attacking aircraft as a complex process that involves radar detection, the computer processing of incoming data, the countrywide communication of analyzed data, and the direction of intercepting aircraft. Ground-based and airborne radar systems, planes, ships, missile sites, command stations, and a hierarchy of military decision makers were to be so connected that a fast flow of reliable information was guaranteed.

A necessary condition for gaining an information advantage in air defense was the availability of computers that could process information in real time, that is, so fast that the results could lead to successful action in the situation depicted by the initial data. The Whirlwind machines, built at the Massachusetts Institute of Technology between 1945 and 1953 under the direction of Jay W. Forrester (b. 1918), were the progenitors of the SAGE computers. They constituted the nodes not only of a nationwide network of telephone-line information transmission channels but also of local clusters of cathode-ray terminals that had access to a central computer in time-sharing mode. The terminals displayed radar information graphically and were operated interactively with a light pen. These innovations meant that programs of a length and complexity unknown until then had to be developed, which included the definition of new high-level programming languages. Since human beings were needed to accomplish these feats, scientists, engineers, and operating crews had to be trained, and psychological barriers among academia, the military, and civil companies needed to be overcome. To solve these problems, technological progress required institutional innovation: the nonprofit MITRE Corporation was founded, a group of research, development, and implementation organizations that worked only for governmental clients, both military and civil.

SAGE required interacting in real time with a complex technological system that contributed to the stability of deterrence during the cold war. People who were responsible for its development, use, and maintenance had to learn the management of complex systems. Thus, it cannot come as a surprise that many SAGE professionals set up, or became managers in, companies that were pioneers of the computer industry. The best example of how the SAGE experience influenced the emerging information society is Forrester himself, who, as an engineer, invented a new type of storage device—the magnetic core memory, which was faster and more reliable than the vacuum tubes used before. Forrester became a professor of management at MIT, started research on the computer-based modeling of complex social systems, and provided the scientific basis of the famous Club of Rome reports in the 1970s.

SAGE already represented, a few decades ago, those important aspects of the interaction between technology and culture that characterize the internal dynamics of organizations in today’s information society. It could acquire this symbolic quality because it contributed to a most important public good, national security, during an era in which that security was seriously threatened, so that technological as well as institutional innovation in this field was strongly supported. The development of SAGE is thus a main event in the origin of the information society and deserves close attention from cultural anthropologists.

Today’s Information Society

The social life of organisms is based on communication, the exchange of information by means of the transport of matter and energy. In this sense, any society in the history of life is an “information society.” What cultural anthropologists mean when they use this term, which was coined by Japanese journalists in the mid-1960s (joho shakai), is essentially two things: First, the velocity and direction of societal change in today’s developed countries can to a large degree be explained only by taking into account the central role these countries allow the development of information technologies to play as a source of material and intellectual wealth; second, the great importance of information technologies is a key element in how members of developed countries describe the society they live in, and they acknowledge that those technologies are a powerful motor of their productivity. From both points of view, it follows that scientific, economic, political, and juridical decisions about information technology are not only a most important factor in the transformation of all areas of society but are also recognized as such. The computer becomes the preferred medium for influencing social and cultural change.

Some Trends Toward Human-Computer Symbiosis

It is difficult to give a reasonably sensible forecast of the volume and direction of growth of information technology, even for the near future. Twenty years ago, few would have predicted how tremendous the impact of the Internet on everyday life would become. Extrapolating progress from the status quo too conservatively is one widespread error in prediction; underestimating the difficulties of technological innovation is another. Nevertheless, promising candidates for trends in computing that are likely to become very important in the near future are as follows:

Pervasive and ubiquitous computing: The difference between computers and other artifacts of everyday life, such as kitchen utensils and clothes, will vanish almost completely. What is more, those computer-equipped objects can be connected with each other so that a continuous flow of information between them will be possible.

Unconventional computers: Beyond the von Neumann architecture, many other design principles for computers are being developed. These include, for example, the construction of a quantum computer whose processor could, according to the strange laws of quantum physics, work through a program simultaneously on a very great number of different input data. Another important research direction experiments with organic matter (e.g., DNA), and sometimes even whole organisms (e.g., slime molds), as computational media.

Agent software: Programs are increasingly conceived as agents that are allowed by their principals to perform simple or complex tasks independently, at least with respect to certain types of behavior. A mobile agent, for example, is sent out to search a computer network for information that might be helpful in solving a problem, and its route through the network is not determined by its user.

Autonomous robots: Computer-controlled automata that process sensory input from their environments, move in real space-time, and autonomously carry out specific functions in complex situations will be mass-produced. Future children will grow up with such robots, and when the behavioral complexity of the latter becomes high enough, intelligence and an ethical status will be ascribed to them.

Altogether, these technological developments will converge on a new quality of technological existence, which might be called “human-computer symbiosis.” This concept was introduced by the American psychologist Joseph C. R. Licklider (1915-1990) as early as 1960. In biology, the Greek symbiosis (literally, “living together”) basically means that at least two organisms of different species continuously cooperate in a way that benefits all involved. As regards the interaction of human beings and computers, a symbiosis that enriches the physical and intellectual life of humans, as well as fosters autonomy in computers, is accompanied by astounding phenomena at the intersection of biology, sociology, and engineering, for example, virtualization and hybridization. (Network-based communities of software agents will strongly influence how human individuals form their identities by communicating through their agents, and ontologically new types of entities, hybrid systems that synthesize evolved and engineered components, will emerge). Anthropology has to become anthropotechnology.

Turing’s Universal Machine and the Concept of Computation

The impact of computers on modern society since World War II is obvious, and their future significance can hardly be overestimated. To grasp the influence of the computer on an anthropological understanding of the human being, a look at the conceptual basis of the von Neumann architecture, the abstract technological scheme according to which nearly all of today’s computers are built, is a prerequisite.

Most generally, computers are machines; thus, it must be asked what is meant by “machine.” Computers constitute a special class of machines, so the difference between them and other such classes has to be described. This involves stating the meaning of “calculation” more precisely, which can be done very elegantly by introducing an abstract automaton, the Turing machine, which in turn helps define the concept of a computer as it is understood conventionally. An analysis of this machine shows the fundamental limits of mechanical computations.

Machines: Particular and Universal

A physical object is considered a machine if the explanation of its behavior supposes that a mechanism is working inside it in order to generate the behavior. Moreover, the mechanism is thought to perform its task neither by chance nor by some mysterious power. Instead, how the internal mechanism generates the system’s external behavior is assumed to be explainable by a process that runs, in time and space, according to laws of nature, which take effect under boundary conditions set by the organization of the system.

More generally, a machine implements, through its mechanism, a rule leading to a certain result (behavior) when given a certain input. A machine not only fulfills a concrete function (a task that it shall perform) but also realizes an abstract function (a formal rule that maps inputs onto outputs). The behavior of a car, for example, can be explained as a movement in a coordinate system, with time and space axes, that results according to natural law from the interaction of many independent variables: the design of the car, the actions of its driver, and environmental conditions.

Most machines, such as planes, television sets, and refrigerators, perform just one or very few specific tasks; they are particular machines. If machines for many functions are constructed, the guiding principle is to let the machine use a part of its input as a specification of the behavior it shall show next. By extending the set of alternatively possible behaviors further and further, one arrives at the universal machine. It can perform an infinite number of formally describable tasks, given arbitrary input, if the correct mechanism is chosen for processing the input. From the abstract point of view introduced above, the universal machine can realize an infinite class of rules that map inputs onto outputs. That should sound familiar; it is a reformulation of the idea of the all-purpose calculator, the computer. Thus, it must be possible to give a precise meaning to the idea of calculation by describing the basic architecture of the universal machine, and to define, in terms of this abstract automaton, what any particular computer does: calculate an output given an input and a program.
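
A toy sketch in Python (the rules are invented for illustration) of the principle just described: one and the same machine treats part of its input as a specification of the rule it applies to the rest.

```python
# Toy illustration of the step from particular to universal machines:
# part of the input ("spec") tells the machine which rule to apply to the rest.
def universal(spec, data):
    rules = {
        "double": lambda x: 2 * x,
        "square": lambda x: x * x,
        "negate": lambda x: -x,
    }
    return rules[spec](data)

print(universal("double", 21))   # 42 -- the same machine...
print(universal("square", 6))    # 36 -- ...behaves differently, steered by its input
```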

The Turing Machine

John von Neumann described the basic architecture according to which ordinary computers are designed to this day. Yet, more than 10 years before von Neumann wrote down his ideas, a British mathematical genius, Alan M. Turing (1912-1954), had already invented a very simple abstract automaton that was able to do anything an all-purpose digital information processor can do. In retrospect, his 1936 paper “On Computable Numbers, With an Application to the Entscheidungsproblem,” which introduced the machine that is now named after Turing, is the birth certificate of theoretical computer science. During World War II, Turing acted as chief scientist for the Bletchley Park cryptanalysts and designed electromechanical devices, predecessors of the Colossus machines, for decrypting intercepted German messages. After the war, he was the guiding spirit of the development of the first British all-purpose electronic computer, the automatic computing engine (ACE), whose design was far ahead of that of contemporary machines and shifted priority from complex hardware to complex software.

The Turing machine integrates all components of the von Neumann architecture in an astonishingly simple way:
  • The memory unit is an infinitely long tape that consists of linearly ordered discrete squares, each containing a zero or a one.
  • The combined input and output unit is a head that can read and overwrite the content of one square of the tape at a time.
  • The combined central control and arithmetic unit is a look-up table that contains information on what the read/write head should do next according to the internal state (represented by a row of the look-up table) the machine is in. It reads the symbol in the square on which the head is positioned at any given time, overwrites it with a symbol, and moves one square to the left or the right. The look-up table says what the new state of the machine will be after the head’s action is executed.

The Turing machine works as follows: The machine starts out in its initial state. The input, to be processed by the automaton, is written out as a sequence of binary digits on the tape. The head reads the symbol in the square on which it is positioned. The look-up table says what the head should do, given the initial state of the machine and the symbol read. This is done (e.g., the head writes a zero and moves to the left), and the internal state of the machine is updated according to what the look-up table says, given the symbol read. Then the operational cycle starts again: The machine reads the symbol in the square on which the head is now positioned, the look-up table says what the head should do, and so on. The machine stops if, given its present internal state and the symbol just read, the look-up table sends the machine into the halting state after the head has executed its operation. The machine will not do anything further. Then, the sequence of symbols on the tape is the result of the calculation the machine has done, given the initial contents of the tape and the particular look-up table.
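
The operational cycle just described can be written down almost verbatim; the following is a minimal sketch in Python of a Turing machine simulator together with an illustrative look-up table (not one from Turing’s paper) that flips every bit of its input and then halts.

```python
# Minimal Turing machine simulator following the cycle described above.
# The look-up table maps (state, read symbol) -> (symbol to write, move, next state).
def run_turing_machine(table, tape, state="start", head=0, max_steps=10_000):
    tape = list(tape)
    for _ in range(max_steps):
        symbol = tape[head]                          # the head reads one square
        write, move, state = table[(state, symbol)]  # the look-up table decides
        tape[head] = write                           # the head overwrites the square
        head += 1 if move == "R" else -1             # and moves one square right or left
        if state == "halt":                          # halting state: computation is done
            break
    return tape

flip_bits = {
    ("start", 0): (1, "R", "start"),
    ("start", 1): (0, "R", "start"),
    ("start", None): (None, "R", "halt"),   # a blank square marks the end of the input
}

# Input 1 0 1 1 (followed by a blank) becomes 0 1 0 0 (plus the trailing blank)
print(run_turing_machine(flip_bits, [1, 0, 1, 1, None]))
```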

The look-up table of Turing’s ingenious automaton is analogous to the program of a digital computer. The Turing machine can, thus, be programmed, by an appropriately completed look-up table, to perform a particular calculation using a series of symbols on the tape as input. What is more, the Turing machine can be programmed to interpret a part of its input as a description of another Turing machine whose program it shall execute. Such a programmable Turing machine is equivalent to a digital computer, even though it takes comparatively many more steps to perform even simple calculations; it is a universal Turing machine that can implement all possible programs any computer based on the von Neumann architecture is able to process. Thus, the universal Turing machine defines abstractly what a computer is.

Turing introduced his universal machine in order to prove that it defines the class of all processes that can be carried out by working stepwise through a sequence of well-defined rules, that is, by an algorithm (from the name of the 9th-century mathematician Al-Khwarizmi, who wrote the oldest known work on algebra). Anything that is calculable by such a process—a mathematician would speak of general recursive functions—is computable by a universal Turing machine. The vague concept of calculation can thus be replaced by Turing’s more precise concept of mechanical computation by means of his universal machine. This proposal is called the Church-Turing hypothesis—Alonzo Church (1903-1995) being an American logician who worked on a clarification of the concept of calculation at the same time as Turing. It is just a hypothesis, since it cannot be proven to be true; it is a well-founded proposal to state a vague concept more precisely. Unconventional computers of the future, for example, quantum and organic computers, might make it necessary to revise the Church-Turing hypothesis.

Mechanically Undecidable Problems

Turing also showed mathematically that it is possible to confront his universal machine with problems that it cannot, in principle, solve in finite time. Such problems are called mechanically undecidable and represent the limits of what can be done by computers.

A famous example, the “halting problem,” has been proven unsolvable by the American mathematician Martin Davis (b. 1928): the problem of programming a Turing machine so that it decides, in a finite lapse of time, whether any given Turing machine will ever stop processing an arbitrary input. By using proof techniques known from metamathematics, Davis showed that there does not exist a Turing machine that implements a general decision procedure for solving the halting problem. The simplest solution would seem to be to let a tested machine run and see whether it stops processing a given input. Yet this would not help in the case of very long computations, since the testing Turing machine does not know whether it has waited long enough; the tested machine might stop just one second after the testing machine has ended the test run. The general idea of the proof of undecidability involves the construction of a self-referential Turing machine that applies the supposed test to itself and then does the opposite of what the test predicts: it does not halt when the test says it halts, and vice versa.
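
The diagonal construction can be sketched as follows in Python; this is purely conceptual, since the hypothetical halts function is precisely what the proof shows cannot exist.

```python
# Conceptual sketch of the diagonal argument described above (not a working
# decision procedure -- the whole point is that "halts" cannot exist in general).
def halts(machine, data):
    """Hypothetical general test: True if machine(data) eventually stops."""
    raise NotImplementedError   # no Turing machine can implement this for all cases

def contrary(machine):
    """The self-referential machine used in the proof by contradiction."""
    if halts(machine, machine):   # if the test says "it halts on its own description"...
        while True:               # ...then loop forever;
            pass
    return "halted"               # ...otherwise, halt immediately.

# Feeding contrary its own description yields the contradiction:
# contrary(contrary) halts if and only if halts(contrary, contrary) says it does not.
```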

Turing’s ideas on abstract universal machines seem far away from anthropological considerations on computers, yet the reverse is true. That there are principal and mathematically provable limits on what computers can do is of utmost importance when it comes to computational models of human thinking and behavior. This is most evident in the science of artificial intelligence, one of whose founding fathers was Turing.

Artificial Intelligence as Computational Anthropology

Anthropologists, like most other scientists, apply the computer as a tool for scientific data management and analysis, or simply for text processing. As early as the 1950s, the French ethnologist Claude Lévi-Strauss (b. 1908) insisted that, without the intensive use of computers for documentation and analysis, anthropologists could no longer handle their large volumes of collected data. Yet, the computer is also used as an explanatory model of human thinking and behavior, most explicitly in artificial intelligence (AI), a research program that emerged just a few years after the first digital computers had been constructed. Its name was invented by an American computer scientist, John McCarthy (b. 1927), as the title of a workshop at Dartmouth College in 1956; this event went down in the annals of AI as the founding event of the discipline.

The research program of AI can be construed in both a narrow and a broad sense. Its aim may expressly be an anthropological one: to understand human intelligence, so that computer scientists are called upon to construct artifacts whose internal mechanisms and external behaviors come closer and closer to the cognitive and behavioral patterns of Homo sapiens. AI may, on the other hand, be indifferent to whether the artifacts computer scientists construct resemble human beings. Then, AI is considered the general science of all possible intelligent systems, which need not measure up to humans either in their externally observable behavior or in their internally detectable information processing. Yet, even if AI is regarded as an anthropological science, its constructions are possible intelligent systems in a particular sense: After they have been successfully designed and people have accustomed themselves to interacting with them over a considerable lapse of time, AI artifacts might be accepted as possessing humanlike, though of course not real human, intelligence.

The Turing Test, or How to Make the Concept of Human Intelligence Operational

Whether AI is construed in the narrow or the broad sense, it needs a catalogue of criteria that a system must fulfill if it is to be recognized as being intelligent. In an experimental science, those criteria should take on the form of operationalizable sufficient conditions of ascribing intelligence. This was stated clearly by Turing in his 1950 paper “Computing Machinery and Intelligence,” a historic document of AI’s beginnings that still is hotly disputed.

Turing proposed a game in which a computer communicates with a human being in order to convince the latter that it, too, is human. This so-called imitation game can be transformed into a test that is nowadays named after Turing and basically requires the following components: two computer terminals, a connection between them, a human being who operates one of the terminals, a digital computer that operates the other, and a partition wall between the human and the computer so that the former cannot see the latter. The human starts the test by typing a question into the terminal. This question is sent to the other terminal, into which the computer then keys an answer. The human receives the answer, asks another question, and so on. After some time, the conversation is interrupted, and the human must decide whether he or she has been communicating with another human or with a computer. The computer passes the Turing test when it succeeds in deceiving the human, a feat that, as Turing hoped, would be performed by a digital computer in the near future.
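
For illustration only, the bare skeleton of the procedure can be written down as follows in Python; all names are hypothetical, and the respondent’s program, which is the whole difficulty, is left abstract.

```python
# Skeleton of the imitation-game setup described above (illustrative only).
def turing_test(interrogator, hidden_respondent, judge, rounds=5):
    """The interrogator converses with an unseen respondent through typed text
    alone; afterward, the judge must decide: human or machine?"""
    transcript = []
    for _ in range(rounds):
        question = interrogator(transcript)     # typed at one terminal
        answer = hidden_respondent(question)    # typed at the other, behind the partition
        transcript.append((question, answer))
    return judge(transcript)                    # verdict: "human" or "machine"
```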

The Turing test suggests that one may ascribe intelligence to a computer when its communicative behavior is not distinguishable from that of an intelligent human in the same situation. This reasoning is based on an analogy that invites one to infer an externally unobservable cognitive competence from an observable behavior known to be shown by humans who are considered intelligent.

The Need for a Full Structural Comparison of Humans With Computers

The criterion of intelligence provided by the Turing test, or by similar experimental procedures, does not refer to the internal information processing of possible intelligent systems. This is an important shortcoming if AI is to be construed as the science of humanlike intelligent systems; the comparison of humans with artifacts with respect to their intelligence must then also refer to the mechanisms of information processing working inside them.

On the hardware level of, roughly speaking, the human brain and the microprocessor of the computer, differences prevail, as von Neumann showed in his lectures on the computer and the brain: The brain is not a purely digital machine, and statistical methods of data analysis that are not implemented in the integrated circuits of conventional computers are hard-wired into its neurophysiology. Yet, if human beings and computers are compared on the software level, similarities might be discernible. The conceptual basis of such a comparison is given by Turing’s structural definition of computation: It does not need to specify whether a computing system has been evolved by natural selection, educated by cultural interaction, or engineered by scientific construction. Thus, the science of computing might apply to digital computers as well as to human beings, and it is an empirical question whether human cognition can be described by computational models as a form of programmable processing of digital information. If the answer is affirmative, then the concept of computing becomes a most important supportive part of a general framework for anthropology that also includes an engineering part: AI or, as this discipline might then more adequately be called, “computational anthropology.”

The Physical Symbol System Hypothesis: A Paradigm of Computational Anthropology

The hypothesis that has been most influential in the history of AI was developed by two American computer scientists, Allen Newell (1927-1992) and Herbert A. Simon (1916-2001), who did research also in psychology, economics, political science, and philosophy. Simon is the only computer scientist so far to become a Nobel laureate; he was awarded the 1978 prize in economics.

The unifying idea of Newell and Simon’s work is the physical symbol system hypothesis. It says that, for any system that shows general intelligent action, it is a necessary and sufficient condition to be a physical symbol system. By “general intelligence,” Simon and Newell mean that such a system is comparable to a human being as regards the realization of its aims in complex situations and the adaptation of its aims to the environment. Against the backdrop of Turing’s structural theory of computation, it is easy to explain what symbol systems are: They are identical to universal machines. Newell and Simon decided to introduce a new name for Turing’s abstract automaton, since they realized that the mechanism by which a Turing machine interprets its input is necessarily based on symbol-based designation: A particular symbol on the tape stands, relative to a well-defined internal state of the machine, for a certain operation of its read/write head and a change of its internal state. The same is true for the interpretation of symbols on the tape as representations of other Turing machines—the essential feature of computational universality.

The reason Simon and Newell speak of physical symbol systems is that, as empirical scientists, they are interested in really existing symbol systems. Any symbol system in the physical world, such as a personal computer, is not an unrestrictedly universal machine; in contrast to Turing’s abstract machine, it cannot possess, for example, an infinitely large memory. Differences between real symbol systems are thus due to their respective material constitution, which appears as a limitation on information processing when compared to a universal Turing machine. Another way of expressing this idea is to say that the behavior of all physical symbol systems can exhibit general intelligence, because it is generated by a universal machine, but each system does so on a particular material basis that results in its specific form of bounded rationality (e.g., cells in the human brain and integrated circuits in digital computers).

These remarks show that the physical symbol system hypothesis is not to be interpreted as being incompatible with more biologically inspired approaches to intelligent behavior (such as neural net modeling, artificial life research, and evolutionary robotics), most of which have their intellectual origins in cybernetics (from the Greek kybernetes, “steersman”), the structural science of communication and control founded by the American mathematician Norbert Wiener (1894-1964). The idea of a computational anthropology based on the physical symbol system hypothesis does not exclude those approaches; it integrates them. Any anthropological model of intelligent behavior and cognitive processes that can be expressed in the form of a program (which is the case in the approaches mentioned above) is in principle executable by a symbol system (i.e., a universal Turing machine). Biological research is, in fact, of utmost importance for Newell and Simon’s conception of AI; the difference in computational power between a program that simulates the cognitive mechanisms and generates the intelligent behavior of an average human being in everyday situations, on the one hand, and the universal Turing machine, on the other, is due to the physical constraints that the human body places on its internal information processing.

An intensely debated question is whether human beings have cognitive and behavioral abilities that cannot be formalized algorithmically. If no such skills exist, then the principal limits of computability are also the anthropological limits of human thought and action. Otherwise, anthropologists would be well advised to expect the construction of new, unconventional types of computers beyond the von Neumann architecture; such automata might then help the anthropologist understand what human beings are by analyzing what they can engineer and how they think about the limits of their artifacts.

Conclusion

The enormous impact of computers on modern society urges anthropologists, engineers, and philosophers to discuss today how traditional ideas of the human condition will fare in the technological world of tomorrow. Humanism has been proposing, since the Renaissance, that the way toward a humane existence leads neither through religious belief nor through ideologies of collective welfare, but through the education of the individual. “Education” here means a process of developing oneself into a spiritually and corporeally refined, ethically autonomous, and socially agreeable being. Traditional humanism, which recommends the study of classical works of art as the best way toward independent individuality, usually spreads the fear of technology, the latter being considered the quintessence of standardization and heteronomy. However, anthropologists may help undermine this misleading confrontation: Humans are toolmakers from the beginnings of their history, and progress in science and engineering makes the individualization of artifacts possible. Information technology is not only a tool adaptable to individual needs and abilities but also a medium for developing them.

Particularly in respect to the computer, it is of great importance for anthropologists to listen to a warning of the Spanish philosopher José Ortega y Gasset (1883-1955): The real danger of our time is that human beings are becoming too lethargic and unimaginative to introduce unusual uses for their tools, machines, and automata or to invent new technologies. One reason for this threatening weakness might be the still widespread attitude of traditional humanism that an eternal essence of humankind must be defended against technology. Instead, anthropology should consider the computer to be not only an adaptable tool of and a developmental medium for humanity but also an existential challenge to humanity, in the sense of a venturesome call to discover the unexplored possibilities of human existence. Since technology has always been powerful in transcending given circumstances, an anthropologically enlightened humanistic perspective on today’s information technology might properly be called “transhumanism.”