Patrice Flichy, in Leah A. Lievrouw & Sonia Livingstone (eds), Handbook of New Media: Social Shaping and Consequences of ICTs. Sage Publications, 2002.
Many histories of information and communication could be written: the history of institutions and firms, the history of programmes and creative works, the history of techniques, and the history of practices and uses which, in turn, can be related to that of work and leisure but also to that of the public sphere. All these histories are related to very different fields of social science, and the information and communication sector is far too vast to present all these perspectives here. I have chosen to take as the main theme of this chapter the question of relations between ICTs and society. With this point of view we are at the heart of a number of debates: debate on the effects of communication, which for a long time mobilized sociologists of the media; extensive debate on determinism among historians of techniques; and debate around the sociotechnical perspective which has now been adopted by most sociologists of science and technology.
We shall focus on three points in particular: the launching and development of ICTs, their uses in a professional context and, lastly, their uses in a leisure context.
Innovation in ICTs
Does Technology Drive History?
In many histories of computing the transistor, then the microprocessor, are considered to be the determining elements. Behind this very common theory lies the idea that technical progress is inevitable and globally linear. Electronic components and basic technologies such as digitization are seen as determining the form of the technical devices we use. Similarly, these new machines have been expected to determine the organization of work (Friedmann, 1978), our leisure activities and, more broadly, our ways of thinking (McLuhan, 1964) or even society at large (Ellul, 1964).
By contrast, other researchers such as the historian David Noble (1984) clearly show, through the case of automatically controlled machine tools, that there is no one best way, and that the effect of a technique cannot be understood without simultaneously studying its use and the choices made by its designers. After the Second World War two alternatives were explored to automate production: record-playback (analogue) machines and numerical control machines. The former recorded the movements of a skilled operator machining a part and then automatically reproduced them. The numerical machine, by contrast, did not need to memorize human know-how: design and production could be programmed directly. If the numerical option triumphed, it was not because it was more reliable or easier to implement, but because it corresponded to the representations that designers and corporate managers (future buyers) had of automated industrial production.
Numerical control was always more than a technology for cutting metals, especially in the eyes of its MIT designers who knew little about metal cutting: it was a symbol of the computer age, of mathematical elegance, of power, order and predictability, of continuous flow, of remote control, of the automatic factory. Record-playback, on the other hand, however much it represented a significant advance on manual methods, retained a vestige of traditional human skills; as such, in the eyes of the future (and engineers always confuse the present and the future) it was obsolete. (Noble, 1979: 29-30)
Studying Successes and Failures
One conclusion can be drawn from Noble’s study: there is never a single technical solution; as a rule, several solutions are studied in parallel. The historian has to study these different solutions and analyse both successes and failures (Bijker et al., 1987). The RCA videodisk is a good example of failure. Margaret Graham (1986) showed that the television corporation RCA had everything it needed to launch its new product successfully: one of the best US research laboratories, positive market research, support from the media and, lastly, access to a very large programme catalogue. Despite all these assets, RCA sold no more than 500,000 copies of its videodisk player in three years. Another technical system, the VCR, developed in Japan, was to dominate the market, with a resulting loss for RCA of $600 million. The company that had launched television in the US was eventually bought out by General Electric and then by the French company Thomson. Thus, as many computer manufacturers were to discover, a company can pay very dearly for a failure. The main lesson that Graham draws from this case is that technical or commercial competencies are not enough if they are not properly coordinated. That was the underlying cause of the videodisk’s failure. As sociologists of technology have shown: ‘rather than rational decision-making, it is necessary to talk of an aggregation of interests that can or cannot be produced. Innovation is the art of involving a growing number of allies who are made stronger and stronger’ (Akrich et al., 1988: 17; see also Latour, 1987). This strategy of alliances was what enabled France Télécom to successfully launch its videotex system (the Minitel) in the early 1980s. A few months before the new medium was launched, this telematic project ran into virulent media and political opposition. In fact, France Télécom, which at the time was still the French Post Office (PTT), wanted to create and run the whole system on its own.
It hoped not only to provide the network and terminals but also to install all the information to be offered to the public on its own servers. Other European post and telecommunications authorities had opted for the same schema. However, faced with intense opposition France Télécom backed down and decided to move from a closed to an open system in which any firm could become a service provider. France Télécom simply transported the information and took care of the billing (with a share of the revenue paid back to the supplier). Owing to partnerships with service providers and especially with the press, 20 per cent of French households adopted the new service (Flichy, 1991).
Behind the questions of coordination of designers within a company or of partnerships with the outside lies the question of the mobilization of the different parties concerned by innovation: R&D engineers, marketing specialists, salespeople, repairers, partner companies (manufacturers of components, content providers, etc.) but also users. In an interactionist approach, sociologists have used the notion of boundary objects. These are objects situated at the intersection of several social worlds, which meet the needs of all those worlds simultaneously. ‘They are objects which are both plastic enough to adapt to local needs and the constraints of the several parties employing them, yet robust enough to maintain a common identity’ (Star and Griesemer, 1989: 393). A boundary object is the result of complex interaction between the different actors concerned. This is the exact opposite of the naive idea of innovation spawned ready-made by the inventor’s mind. The history of the Macintosh clearly illustrates this point. Two computers using the principle of graphic windows were produced concurrently at Apple: Lisa, a failure, and Macintosh, the Californian company’s leading machine. Lisa was designed in a rather cumbersome organizational frame, with a strict division of tasks between teams. Macintosh, by contrast, was developed by a small, tightly knit team in which choices made by individuals were always discussed collectively. The software developers gave their opinions on the hardware and vice versa. Moreover, the people in charge of building the factory and of marketing and finance were also included in these discussions. This continuous negotiation caused the project to be amended more than once. The machine, originally designed for the general public, eventually replaced Lisa as an office computer. The simplicity of its use, imagined from the outset, was one of this computer’s most attractive features (Guterl, 1984).
The Macintosh is a computer situated on the boundary between hardware and software, between the computer specialist and the layperson.
While the perspective of boundary objects is very useful for the study of specific cases, other perspectives are needed to analyse more long-term phenomena. The path-dependency concept devised by economist and historian Paul David is particularly illuminating. Through the paradigmatic example of the typewriter keyboard which has not evolved since its invention, David (1985) builds a model that compares dynamic growth to a tree. At each branch the actors face a choice. This choice may sometimes be linked to a minor element, but once a solution has been chosen by a large number of actors it becomes relatively stable. The outcome of initial choices is relatively unpredictable but a time comes when a technique or an industrial mode of organization is imposed, with a resulting phenomenon of lock-in.
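David's branching model lends itself to a simple simulation (a hypothetical illustration, not from the chapter): if each new adopter chooses between two intrinsically identical techniques with probability proportional to their installed bases, small early accidents are amplified until one option locks in. The `simulate_adoption` function below is a Pólya-urn-style sketch of this dynamic; the name and parameters are mine, not David's.

```python
import random

def simulate_adoption(steps=10000, seed=None):
    """Each new adopter picks a technique with probability proportional
    to its current installed base (increasing returns to adoption)."""
    rng = random.Random(seed)
    base = {"A": 1, "B": 1}  # both techniques start with one adopter
    for _ in range(steps):
        total = base["A"] + base["B"]
        choice = "A" if rng.random() < base["A"] / total else "B"
        base[choice] += 1
    return base

# Different random histories lock in to very different market shares,
# even though the two techniques are intrinsically identical.
for seed in range(3):
    base = simulate_adoption(seed=seed)
    share_a = base["A"] / (base["A"] + base["B"])
    print(f"run {seed}: share of A = {share_a:.2f}")
```

Each run converges to a stable but essentially arbitrary share, mirroring David's point that the outcome of initial choices is relatively unpredictable yet, once settled, locked in.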
IBM’s entry into the PC market is an interesting example of path dependency. Big Blue was aware that this was an entirely new market and that it needed to find a specific mode of organization to produce a fundamentally different computer. A task force was set up to produce and market this microcomputer, independently of the rest of the company. Unlike Apple and in the true IBM tradition, it chose an open architecture not protected by patents. This meant that users could buy peripherals and software from other companies. The task force also decided to break away from IBM’s traditional vertical integration. The central processing unit would thus be bought from Intel and the operating system from a startup, Microsoft. There were two main reasons for this policy: the desire to be reactive and to short-circuit the usual functioning; and the desire to show the Department of Justice that IBM had stopped its monopolistic behaviour. The strategy was a success. In 1982, the first full year of production, IBM’s microcomputer revenues totalled $500 million and in 1985 $5.5 billion, ‘a record of revenue growth unsurpassed in industrial history’ (Chandler, 2000: 33). But it also had a whole series of unexpected effects. While the appearance of rivals (producers of clones and specialized software) was part of the game, the key position that this afforded Intel and, to an even greater extent, Microsoft (Cusumano, 1995) was far less predictable. These two companies rapidly became the two main players in the microcomputing industry, at IBM’s expense.
The Internet is another case of a path-dependent history. It has sometimes been said that ARPANET, the ancestor of the network of networks, was built so that the US Army could maintain communication links in case of a Soviet attack. In fact, this network had a far more modest aim: it was to link the computing departments of universities working for ARPA, the US Defense Department’s advanced research agency (Hafner and Lyon, 1996: 41, 77). Telecommunication companies and especially AT&T refused to build this new data network. It was therefore designed by computer specialists who had a new view of computing, suited to communication between machines with the same status. This computer-mediated communication was profoundly different from IBM’s centralized and hierarchized computing system or from a telephone network. From the outset ARPANET was a highly decentralized network which stopped at the university entrance. This technical architecture left a large degree of leeway to each computing site which could organize itself as it wished as regards hardware and software and could create its own local area network. The only constraint was the need to be able to connect to an interface. These technical choices in favour of decentralization were also to be found in the organization of work needed to develop the network. Construction of the network was entrusted to a small company closely linked to MIT. This company dealt with no technical problems posed by data exchange beyond the interface, considering that they were the universities’ responsibility. Unlike the time-sharing system developed by IBM, in particular, where the central computer is in a master-slave position to the terminals, in ARPANET host computers were on an equal footing with terminals.
Whereas ARPANET was launched and coordinated by ARPA, Usenet, which constituted another branch of what was to become the Internet, was developed cooperatively by research centres not linked to ARPANET. Usenet had no financing of its own. The administrators of the system were computer scientists who participated on a voluntary basis, making space on their hard disks to record news and transmitting it by telephone.
The Internet was designed in the second half of the 1970s as an ‘internetwork architecture’, that is, a metaprotocol for interaction between networks built on different principles. The idea of an open architecture leaves total autonomy to each network. Since the metaprotocol manages only interaction between networks, each individual network can maintain its own mode of functioning. Furthermore, the Internet has no central authority. The Internet Society is only an associative coordination structure. Applications on the ARPANET or Internet (e-mail, newsgroups, database sharing and, later, the World Wide Web) were proposed by different designers and can be used by anyone who wants to (Norberg and O’Neill, 1996; Abbate, 1999).
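The open-architecture idea can be sketched in code (a toy illustration with made-up frame formats, not the actual TCP/IP design): each member network keeps its own internal framing, and gateways only wrap and unwrap a shared datagram, so no network has to change its internal workings in order to interoperate.

```python
import json

def make_datagram(src, dst, payload):
    """The internetwork 'metaprotocol': a format every gateway understands."""
    return {"src": src, "dst": dst, "payload": payload}

# Each member network keeps its own internal framing; gateways only
# wrap and unwrap the common datagram, never touching network internals.
def net1_frame(datagram):                 # network 1 frames as JSON text
    return json.dumps(datagram)

def net1_unframe(frame):
    return json.loads(frame)

def net2_frame(datagram):                 # network 2 frames as key=value pairs
    return ";".join(f"{k}={v}" for k, v in datagram.items())

def net2_unframe(frame):
    return dict(item.split("=", 1) for item in frame.split(";"))

# A message crosses two incompatible networks via the shared datagram.
dgram = make_datagram("hostA", "hostB", "hello")
hop1 = net1_unframe(net1_frame(dgram))    # carried over network 1
hop2 = net2_unframe(net2_frame(hop1))     # carried over network 2
print(hop2["payload"])                    # -> hello
```

The two networks never agree on anything except the datagram itself, which is the sense in which the metaprotocol leaves each network its autonomy.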
The two main principles of decentralization and free access in which the Internet is grounded stem essentially from the academic functioning of its founders. When the Internet subsequently became a system of communication for the general public, these two principles were perpetuated to a large extent. The network is still not managed by a single operator, and a large amount of software, especially browsers, circulates freely on the web, at least in its most basic form.
The initial choices that will profoundly influence the trajectory of a technology are related not only to contingent decisions but also to the representations of the designers. Thus, the founding fathers of the Internet, such as Licklider or Engelbart, thought that computing was not only a calculation tool but also a means of communication. Hiltz and Turoff considered that once computer-mediated communication was widespread ‘we will become the Network Nation, exchanging vast amounts of both information and social-emotional communications with colleagues, friends and strangers who share similar interests, who are spread out all over the nation’ (1978: xxvii-xxiv). This theme of the creation of collective intelligence through networking was to mobilize many computer specialists in the 1970s and 1980s, and to appeal strongly to users. The Californian bulletin board the WELL, for example, functioned around that idea (Rheingold, 1994).
We similarly find a common project among hackers, those young computer enthusiasts who were to play an essential part in the design of the first microcomputers (Freiberger and Swaine, 1984). They shared the same principles:
- access to computers should be unlimited and total
- all information should be free
- mistrust authority, promote decentralization
- hackers should be judged by their hacking, not bogus criteria such as degrees, age, race or position
- you can create art and beauty on a computer
- computers can change your life for the better. (Levy, 1985: 40-5)
They considered that computing was a device to be made available to all, which would help to build a new society.
Technological imagination is a key component of the development of technology. Without the myths produced by the American counter-culture in the early 1970s, the microcomputer would probably have remained a mere curiosity. The ideology of computing for all, in a decentralized form, suddenly lent a whole new dimension to hackers’ tinkering in their garages. However, we can talk of the influence of the counter-culture on the microcomputing project only if we consider that there were hackers who chose to associate the values of the counter-culture with their passion for computing. They defined the problems that they chose to grapple with. The community ideology alone did not create the microcomputer; at best, it produced a mythical frame of use. The basis of hackers’ activity was immersion both in the counter-culture and in the world of computer tinkering; these two components were not only juxtaposed but also very closely linked. The ties needed for the establishment of a permanent sociotechnological frame were built by the actors; the technological or social dreams thus had no power other than supplying resources for the action.
This type of study of the representations of designers of information tools has also been made by Simon Schaffer (1995) on Babbage’s calculating engine, generally considered to be the computer’s mechanical ancestor. Babbage, a Victorian mathematician, also conducted research on the division of labour observed in factories and at the Greenwich Royal Observatory, where dozens of employees did calculations from morning till night to prepare mathematical tables. To make the production of goods and calculations more efficient, it was necessary to automate the factory system. Just as Jacquard had mechanized weaving with his famous loom, so too Babbage wanted to mechanize the production of numerical tables. He hoped to speed up the production of tables that were perfectly accurate and free of all human error. But Schaffer does more than point out the analogy between Babbage’s machine and the factory system. He draws a map of the places and networks in which the credibility of Babbage’s machine was regularly evaluated. Although the machine was never totally operational, it was exhibited often enough to contribute towards the production of a need that it could not entirely fulfil.
Representations of the Media
Representations of techniques are interesting to observe, not only in inventors but also in the first users. Susan Douglas studied enthusiasts of the wireless, whose passion for this new technology was to popularize Marconi’s invention and attract the attention of the press. A short story from 1912 is a good example of these representations of the new medium. In it Francis Collins describes the practices of new users of the wireless:
imagine a gigantic spider’s web with innumerable threads radiating from New York more than a thousand miles over land and sea in all directions. In his station, our operator may be compared to the spider, sleepless, vigilant, ever watching for the faintest tremor from the farthest corner of his invisible fabric … These operators, thousands of miles apart, talk and joke with one another as though they were in the same room. (Douglas, 1987: 199)
This is clearly a new imaginary type of communication that Collins is proposing. But this utopian discourse, which not only described potential users but also indicated everything that the wireless could do for society and individuals, was becoming a reality. Everyone could communicate instantly and independently with persons very far away, whenever they wanted to. Communication was free in that it did not depend on telegraph or telephone operators and therefore did not have to be paid for. As Susan Douglas says: ‘The ether was an exciting new frontier in which men and boys could congregate, compete, test their mettle, and be privy to a range of new information. Social order and social control were defied’ (1987: 214).
After the First World War a new ‘wireless mania’ appeared. This time it concerned wireless telephony, soon to become radio broadcasting. This new device was also to create a new community feeling. As ‘The social destiny of radio’, an article quoted by Douglas, put it: ‘How fine is the texture of the web that radio is even now spinning! It is achieving the task of making us feel together, think together, live together’ (Douglas, 1987: 306).
Internet utopias, which in a sense are related to those of radio, also changed when the new technology left the world of designers in universities and groups of hackers (Flichy, forthcoming). For example, the first manuals for the public at large gave a fairly coherent representation of the net that combined characteristics of scientific communities and those of communities of electronics enthusiasts. One of these guides considered that internauts ‘freed from physical limitations … are developing new types of cohesive and effective communities – ones which are defined more by common interest and purpose than by an accident of geography, ones on which what really counts is what you say and think and feel, not how you look’ (Gaffin and Kapor, 1991: 8-9). Rheingold considers that virtual communities bring together individuals from all corners of the globe who exchange information or expertise. More broadly, they build ties of cooperation and develop conversations that are as intellectually and emotionally rich as those of real life. It is a world of balanced interaction between equals. In short, the net can make it possible not only to work collectively but also to re-establish a social link that is slackening, and to breathe life into public debate and democratic life (Rheingold, 1994).
Later the press started to write extensively about the Internet. In 1995 an editorial in Time noted that:
most conventional computer systems are hierarchical and proprietary; they run on copyrighted software in a pyramid structure that gives dictatorial powers to the system operators who sit on top. The Internet, by contrast, is open (non-proprietary) and rabidly democratic. No one owns it. No single organization controls it. It is run like a commune with 4.8 million fiercely independent members (called hosts). It crosses national boundaries and answers to no sovereign. It is literally lawless … Stripped of the external trappings of wealth, power, beauty and social status, people tend to be judged in the cyberspace of Internet only by their ideas. (Time, special issue, March 1995: 9)
Newsweek made 1995 Internet Year. It opened its year-end issue with the following phrase stretched across four pages: ‘this changes … everything’ (Newsweek, special double issue, 2 January 1996). The editorial noted that the Internet is ‘the medium that will change the way we communicate, shop, publish and (so the cybersmut cops warned) be damned’.
Thus, with the Internet, as with the wireless, we find two examples of the rhetoric of the ‘technological sublime’ that Leo Marx (1964) already identified with regard to the steam engine. This rhetoric nevertheless takes on a specific form with communication systems, for it concerns not only a particular domain of productive activity but also social links and, more generally, the way society is constituted. The communication utopias of the wireless and the Internet are therefore fairly similar. They successively refer to interpersonal communication, group communication and, later, mass communication. In so far as both technologies soon acquired an international dimension, so that they could cover the entire planet, the utopias also granted much importance to the feeling of ubiquity, found less strongly in other electric or electronic media. Finally, these utopias emphasized the principles of liberty and free access that characterize the development of these two technologies. They were defined in contrast to the main technical systems of the time: the telegraph and telephone in the case of wireless technology; and the incompatibility of computing systems, along with the centralized and hierarchical view of data networks that IBM embodied, in the case of the Internet.
ICTs in the Professional Sphere
Historical comparisons between the nineteenth and twentieth centuries are equally enlightening as regards the role of ICTs in economic activity. James Beniger shows that the industrial revolution was accompanied by what he calls ‘the control revolution’. Changes in the scale of productive activity necessitated new modes of organization, for example the bureaucratic organization developed by railways, the organization of material processing established in industries, telecommunications used by distribution, and mass media used by marketing. For him, information is at the centre of this control revolution which continued developing in the second half of the twentieth century. ‘Microprocessing and computing technology, contrary to currently fashionable opinion, do not represent a new force only recently unleashed on an unprepared society but merely the most recent instalment in the continuing development of the Control Revolution’ (Beniger, 1986: 435).
The First Generation of Office Machines
Let us consider in more detail these nineteenth-century information and communication tools that were to facilitate the organization of business and markets. The telegraph was a decisive element in the organization of markets, first of all the stock market. In Europe the transmission of stock market information was the first use to which the telegraph was put, in the 1850s, and it facilitated the unification of share prices (Flichy, 1995: 46-50). In the United States the telegraph, in conjunction with the railroad, made it possible to unify regional markets and to create a large national market from east coast to west (DuBoff, 1983). It also facilitated the creation of large-scale business enterprises which made huge economies of scale and scope possible. The telegraph and railway companies were the prototype of such enterprises. Circulating messages or goods from one end of the North American continent to the other demanded complex coordination and could not be organized by the market alone. It necessitated the creation of large multisite corporations that decentralized responsibilities and simultaneously created functional coordination in finance and technical research (Chandler, 1977). The railway companies that used the telegraph to control traffic processed these data and thus developed the first management methods subsequently applied in other industrial sectors. With the increase in the size of firms the coordination of work was profoundly modified. In small pre-industrial firms the owner and a few skilled artisans organized the work verbally. Writing was reserved for communication with the outside. By contrast, in large industrial firms writing was to be used extensively in internal coordination. Managers were thus to standardize production processes through handbooks or circular letters. Sales manuals were given to sales agents in order to standardize prices and the sale process itself.
The development of accounting, which made it possible to calculate costs precisely and to better determine prices, also used writing (Yates, 1989). In industrial work, this use of the written document was to be systematized, at the end of the century, in Taylorism. The existence of written instructions given by the methods department to each worker was one of the basic elements of the scientific organization of work (Taylor, 1911). Office work also increased substantially. In the US the number of clerical workers rose from 74,000 in 1870 to 2.8 million in 1920 (Yates, 2000: 112).
To produce and manage this paper, various tools appeared: the typewriter, the Roneo duplicator, the calculator, but also furniture to file and store documents, and so on. All these devices were, in a sense, the first generation of data processing machines. We thus witness several parallel evolutions: growth in the size of firms, increase in the number of written documents, and the appearance of new machines. But JoAnne Yates (1994) has clearly shown that these evolutions would not necessarily have been linked to one another without a locus of mediation and incentives to change. The managerial literature that developed at the time, as well as the first business schools, proposed management methods and recommended the use of new office machines. For this new managerial ideology, writing was the most appropriate procedure for setting up efficient coordination between the different actors in the firm. This systematic management also proposed management tools.
Mainframe and Centralized Organization
Computing, designed to meet the needs of scientists and the military (Campbell-Kelly and Aspray, 1996), did not immediately find its place in managerial activity. Ten years after the advent of computing, a pioneer such as Howard Aiken could still write: ‘if it should ever turn out that the basic logics of a machine designed for the numerical solution of differential equations coincide with the logics of a machine intended to make bills for a department store, I would regard this as the most amazing coincidence that I have ever encountered’ (quoted by Ceruzzi, 1987: 197). The shift from calculating machines to management machines was made by companies that already had extensive technical and commercial experience in punch card tabulating technology, such as NCR, Burroughs and of course IBM (Cortada, 1993).
So, when IBM built the first mainframe computers it knew business organization so well that it could conceive a structure well adapted to the dominant form of organization: the multidivisional functional hierarchy. A database system like IMS (Information Management System) clearly reflected this organizational hierarchy (Nolan, 2000: 220).
This parallel between technical and organizational structures was also observed by French sociologists of labour, who noted that, despite the utopian discourse of computer specialists on the structuring effects of computers in the processing and circulation of information, computing introduced no change in the firm; on the contrary, it reproduced the existing order (Ballé and Peaucelle, 1972). To understand this, we need to examine in detail the work of computer specialists in firms. To rationalize the circulation and processing of information they started by drawing up computer guidelines. They first analysed the existing functioning but, since few of them had any particular skills regarding the best modes of organization, they simply formalized existing written rules. When there were no such rules they had them formulated by the hierarchy. The computer program was thus to incorporate all these rules into an automatic data processing and circulation system. The bending of rules that characterizes any bureaucracy thus became more difficult. In this way computerization rigidified procedures rather than renewing them. We can consider, with Colette Hoffsaes, that:
faced with the inability to know the rules of functioning of organizations, the past was repeated … Computer specialists had a project for change but they were not in a position to change the aims that were known by and embodied in the operatives. Rather, they were to help them to do better what they already did. They thus intervened at the process level, the stability of which they tended to increase. (1978: 307)
Computing did nevertheless bring some changes. The role of those that Peter Drucker (1974) calls ‘knowledge workers’ or ‘professional managers’ increased. Their new mission in the firm was no longer to control others; on the contrary, they defined themselves by their own work. Moreover, execution tasks became more routine and less skilled. The division of labour was accentuated and a Taylorization of office work emerged. While data were codified by the departments that produced them (pay, accounting, etc.), they were then captured in large centralized workshops where keypunch operators and then verifiers processed them. The punch cards were then processed by machine operators in large computer rooms. Lastly, the data were returned to the service that had produced them.
Data telecommunications and the use of ‘dumb terminals’ were to modify the process a little. They made it possible to integrate on a single machine the codification and keying in of data as well as their updating. Big computing workshops were consequently fragmented and data capture activities, which in the meantime had become more qualified, moved closer to the data production services or in some instances even became part of them. In parallel, the number of junior management posts was sharply reduced. The division of work and its control were henceforth done by the computing system rather than by the hierarchy. Yet corporate organization changed little. Those first computing networks were extremely hierarchized, and the new device had been designed and organized centrally by the data processing division.
From the managers’ point of view, the situation was somewhat different. Owing to the possibilities afforded by management computing, each manager, at corporate management level and in the divisions and even the departments, had a financial statement at regular intervals. With quasi-immediate access to this information, the US corporate model, in which each manager is responsible for the revenue generated by his/her unit, was able to function efficiently (Nolan, 2000: 229).
Microcomputing and the Temptation to Decentralize
While mainframe computing was initiated and developed by managers in order to automate routine data processing tasks, microcomputing was adopted at grassroots level. Another current in the French sociology of labour, one which studies not phenomena of reproduction but elements of transformation, facts which bear on the future, closely studied the diffusion of microcomputing. These sociologists have shown that it was often employees who enjoyed a fair amount of autonomy in the organization of their work (personal secretaries, archivists, etc.) who seized on this new tool and proposed small applications adapted to their immediate environment: management of leave, budget monitoring, bibliographic databases, etc. (Alter, 1991).
These innovative people later became experts in microcomputing and thus acquired a new legitimacy and a little more power. In a new technical world, which was particularly uncertain because users received no support from the data processing divisions (opposed to PCs at the time), these first users soon became resource persons who not only mastered the new technology but were also capable of using it to improve their own productivity. This situation corresponds to an innovation model studied by von Hippel (1988), who considers that end users are often essential innovators. They do not need much technical expertise to find new uses; their strength derives from close contact with the daily problems that the new devices have to solve. This model of uncontrolled diffusion spread particularly fast because investments in microcomputers were small enough for many departments to take purchasing decisions on their own. Contact was thus made directly between sellers and users, along the lines of a consumer market rather than a business market.
Yet these innovators did encounter a good deal of opposition. Apart from the data processing divisions which saw the PC as a technological alternative which they did not control, middle management also saw it as a potential challenge to the organization, with greater autonomy for employees. By contrast, top management saw microcomputers as an opportunity to create a counter-power vis-à-vis the data processing division. It therefore left local initiatives to develop, as an experiment. Initially it accepted a diversity of models of computerization in the various services.
While this fragmented innovation model made it possible to mobilize multiple initiatives, it was potentially also a source of disorder and inefficiency. Thus, in a second stage, top management took over the reins, in collaboration with the data processing divisions, which were forced to include the PC in their plans. The project was to replace partial, island-by-island computerization with a totally integrated information system (Cerruti and Reiser, 1993). The setting up of such a system posed not only technical but also organizational problems, for the new machines allowed not only for automation but also for a different way of thinking and managing. Zuboff (1988) has clearly shown that informating and automating are the two faces of computers in firms.
Digital Network and Interactive Deployment
Mainframe computing, we can roughly say, developed in a centralized way, while microcomputing started off decentralized; intranets and networked data communications correspond to a more interactive mode of development. Bar et al. (2000) have thus constructed a cyclical model of the development of digital networks. Initially, the intranet was used to automate existing work processes, for example to organize the circulation of documents in a firm. It was a way of enhancing efficiency and improving productivity. Once the network was available, it could be used for other purposes, such as on-demand information searches or simple administrative procedures (ordering supplies, requesting leave, etc.). Through this experimental phase, the new communication technologies fitted better into the organization. The third phase could then begin, in which the firm reorganized itself and modified its digital network simultaneously.
Research on French firms has reached similar conclusions (Benghozi et al., 2000). These authors also note a development in phases, with computerization initiated either by local experimentation or by a decision of top management. Yet the network expanded fully only if the two approaches were articulated: the idea was to construct a learning device in a context of controlled disorder. If we now review the setting up of digital networks, we note that the effects on organization were diverse. Within a single business sector, such as the press, diametrically opposed organizational choices have been observed. At a newspaper in the west of France, local editorial staff devote as little time as possible to computing, which is left to the employees previously responsible for typesetting the newspaper. This is a case where computerized workflow has not undermined the existing division of work. By contrast, at a newspaper in the north, the introduction of an integrated computing system radically modified the organization of work: in local offices the same person does both the investigative work and the typesetting (Ruellan and Thierry, 1998). These two contrasting models clearly show that the introduction of network computing does not, in itself, induce a new organization. Even if there is no technical determinism, then, is there organizational determinism? It would appear so, given that in many cases the setting up of an intranet, or of cooperative devices in smaller businesses, is related to substantial organizational change. In reality, such reorganization is an opportunity to introduce these new tools.
The various studies cited, both in the US and in France, are thus grounded in a conception where technology coevolves with the organization and its members (Leonard-Barton, 1988; Orlikowski, 1992). They therefore contrast with current discourse on the revolution in the organization of work generated by digital networks.
ICTs and Private Life
The Gradual Slide from Public to Private Sphere
The history of information technology in the private sphere, as in the business world, originates in the nineteenth century. Public life changed profoundly during that period. Richard Sennett (1977) considers that it lost its character of conviviality and interaction, to become a space in which people mix together in silence. With regard to this ‘public private life’ he talks of ‘daydreaming’, referring to the same phenomenon as did Edgar Allan Poe in ‘The man of the crowd’ (1971) or Baudelaire (1978) in his study of the stroller. In that work Baudelaire presented an individual who is both out of his home and at home everywhere. This articulation between public and private is also characteristic of the theatre. For a good part of the century, the theatre was above all a place for social interaction. The box was a sort of drawing room where people could converse, observe others and watch the show. Gradually, it became more usual to switch off the lights in the hall and to focus on the stage. Audiences had to listen in silence. New theatres were designed to enable audiences to see the stage, above all. Theatres thus received a solitary crowd; the public was an entity in which individuals experienced their emotions separately (see Flichy, 1995: 152-5).
We also find this dialectic between public and private spheres with the beginnings of photography. In the mid nineteenth century the photo portrait, a private image, became the main use of the new medium. But photographers who took these pictures set themselves up in the most frequented urban places. Their studios became veritable urban attractions. Inside, they were decorated like parlours; outside, collections of portraits several metres high were displayed on the pavement. In their windows copies of diverse portraits were exhibited, for example, crowned heads, artists, individuals who by nature have to show themselves, but also portraits of ordinary people (Mary, 1993: 83-4). Thus, if the man in the street had his portrait taken to give to his family, his picture also became public.
This play between public and private images appeared in several respects. Ordinary people were photographed in stereotyped poses; they chose the décor from a catalogue and the photographer often touched up the picture to make it closer to current standards of beauty. Important people, on the other hand, displayed details of their private lives in addition to their official poses. Furthermore, with the multiplication of copies photography was no longer used only for private souvenirs; it became a medium. People usually had between 10 and 100 copies of their ‘portrait card’ or ‘photo visiting card’ printed, but in some cases tens of thousands of copies were made. These photo portraits were put into albums with not only photographs of family and friends but also portraits of celebrities bought from specialized publishers. The binding of the albums played on the secret/representation ambiguity; they were often beautifully decorated and sometimes had a lock to protect the owner’s privacy.
The debate between private and collective media also appeared with the beginnings of the cinema. Edison thought his kinetoscope would be installed in public places, to be used by individuals (Clark, 1977). The success of the projector proposed by Lumière and other inventors was, by contrast, partly owing to the fact that it fitted into a tradition of collective shows. The content proposed, like that of the other visual media, turned around the dual attraction of daily life and fantasy. The idea was to show something that surprised by both its familiarity and its strangeness. Lumière focused more on the former, with scenes from daily life both in the private sphere (‘Le déjeuner de bébé’, ‘La dispute de bébé’, etc.) and in public (‘La sortie des usines Lumière’, ‘L’arrivée du train en gare de la Ciotat’, etc.). Lumière’s first projectionists were also cameramen: they filmed scenes in the towns they visited and showed them the same evening to an audience that was likely to recognize itself in the film.
This fairground cinema made people familiar with the new medium, but it eventually bored them. The cinema’s real success came when it started telling stories, that is, when it became a narrative medium. The success of narrative filmmakers such as William Paul in England and Charles Pathé in France was based on their entry into an industrial economy. The French industrialist soon discovered that it was in his interest to make a large number of copies of each film and to distribute them throughout the world. He developed a system of industrial production similar to that of the press, producing one film per week and later several (Abel, 1995).
Going to the cinema soon became a regular habit. Audiences behaved as they did at café concerts. For example, in 1912 a Venetian journalist wrote:
the most beautiful cinema is that of Sant Margeria, where working class women go … Oh! How they applaud in certain scenes. Hordes of children sometimes go to see series of landscapes and discover vulgar scenes: ‘one doesn’t kiss on the mouth’ and the public answers ‘si’ (‘yes one does’); ‘no one touches my lips’ and the public again choruses ‘si’. (Turnaturi, 1995)
In the United States the new entertainment medium attracted a large immigrant population in particular (Sklar, 1975). The narrative cinema was to become a big consumer of scenarios, sometimes found in legitimate literature. That was how the cultured classes were to become interested in this new medium that gradually became the seventh art. As the Italian historian Gabriella Turnaturi (1995) notes, on the eve of the First World War ‘a slow and difficult process of unification of customs, culture and traditions took place through the learning of a common language in the obscurity of cinema halls’.
While the cinema was part of the emergence of collective urban entertainment, the late nineteenth century also saw the advent of entertainment at home. The withdrawal into the home noted by historians of private life was reflected mainly in the appearance of a private musical life, which adopted the piano as its main instrument. The piano became an emblematic element of middle-class furniture. It brought public music into the domestic sphere, a transformation achieved through a very specific activity in the writing of music: reduction. Composers arranged scores written for orchestra into pieces for piano. The same phenomenon of adaptation can be found at the beginnings of jazz in the United States, where publishers simplified the rhythmic complexity of ragtime. In the tradition of the Frankfurt School, this activity of adaptation and reduction was often denounced: the capitalist market was killing art. Can we not consider, on the contrary, that these scores were the instrument of mediation between public and private music?
But the music played by amateur pianists was not only art music. A very large market developed for the sheet music of songs; in some cases over a million copies of the so-called ‘royalty songs’ were printed. Singing these songs, accompanied on the piano, was an important feature of the domestic entertainment of the upper and middle classes. It is estimated that by 1914 one quarter of English homes had a piano (Ehrlich, 1976).
It was in this context that the phonograph appeared. This device soon found a place in the domestic sphere. In the United States 3 per cent of all homes had one by 1900, 15 per cent by 1910 and 50 per cent by 1920. At first the catalogue consisted of songs, popular ballads or a few well-known symphonies. A second catalogue was later created, consisting of major operatic arias (Gelatt, 1965). While some singers were starting to be very popular, phonographic recordings were to enable people to remember tunes and popular songs they had already heard. As Walter Benjamin said, ‘the art of collecting is a practical form of re-remembering’ (1989: 222). This taste for collection concerned not only records but also photographs and postcards. The latter were used to send pictures of historical buildings to those who did not travel. The same applied to records which found a public among those who could not go to the opera and could thus listen to ‘the most enchanting selection of the world’s greatest singers’ at home.
Thus the phonograph, like the piano, was not only the instrument that allowed a private musical activity to be substituted for a public one. It was also owing to this device that music for shows was arranged for music at home (see Flichy, 1995: 67-75).
Radio and Television, Family Media
Between the two wars, radio was to replace records to a large extent. The new medium was presented as a tool enabling people to listen to plays or to music at home. Advertisements often presented the radio set as a theatre in the living room. Sets were ‘designed to fit into all contexts of family life and the privacy of the home’. Of the receiver it was said that ‘all the waves in the world come to nest in it’. We thus witness a privatization of the public sphere of entertainment. As many documents of the time show, reception was a family activity (Isola, 1990), a sort of ritual. It was often the father who tuned the radio and silence was demanded to listen to it. C.A. Lewis (1942), the first programme manager at the BBC, considered that:
broadcasting means the rediscovery of the home. In these days when house and hearth have been largely given up in favour of a multitude of other interests and activities outside, with the consequent disintegration of family ties and affections, it appears that this new persuasion may to some extent reinstate the parental roof in its old accustomed place.
This family medium enjoyed considerable success. In the US in 1927, five years after the first broadcasts, 24 per cent of all households had a wireless; by 1932, despite the economic crisis, the figure had risen to 60 per cent. In the UK, 73 per cent of all households had a set on the eve of the Second World War, and listened to the radio for an average of four hours a day (Briggs, 1965: 253).
From the end of the 1940s television entered the domestic sphere. Yet its introduction into the home took place differently to that of the phonograph and radio. Whereas in the Victorian world there was a profound break between the public and private spheres, and the home was designed to be a closed space, the domestic space of post-war North American suburbs was built on a complex interaction between public and private. According to Lynn Spigel, ‘in paradoxical terms, privacy was something which could be enjoyed only in the company of others’. Middle-class suburbs described in magazines at the time ‘suggested the new form of social cohesion which allowed people to be alone and together at the same time’ (1992: 6).
A typical feature of the model home of that period was the large open living room with its ‘American kitchen’. Rooms opened onto the outside, giving the impression that public space was a continuation of domestic space. It was this new conception of space, articulating public and private, that prevailed at the birth of television. In 1946 Thomas Hutchinson published Here is Television: Your Window on the World, an introduction to the new medium for the general public. Four years later, in another book for the general public, Radio, Television and Society, Charles Siepmann wrote: ‘television provides a maximum extension of the perceived environment with a minimum of effort … It is bringing the world to people’s door’ (Spigel, 1992: 7).
This theme can also be found in advertisements from that period. While adverts for the phonograph focused on the decorative aspect of the machine as a part of the furnishings of a room, those for radio emphasized the same point but added the idea that the set brought sounds of the world into the home (‘all the world’s roads lie before you, it’s a beautiful, splendid adventure … right in your armchair!’). Advertisements for television showed the world. Photographs of a TV set featured the Eiffel Tower, Big Ben or the Statue of Liberty, for example. One advertisement showed a baseball field with an armchair and a TV set on it, with a baseball player on the screen.
This association of the domestic sphere with historical buildings or famous cities is also found in the first television series. In ‘Make Room for Daddy’, the hero’s apartment has a splendid view of the New York skyline. One of the key characters in ‘I Love Lucy’ sees the hills of Hollywood from his bedroom window. These characters thus act against a backdrop of prestigious sites (Spigel, 1992: 10). Another theme runs through advertising during that period: the dramatization of the domestic sphere. Certain advertisements referred to television as a ‘family’, ‘chairside’ or ‘living room’ theatre (1992: 12). Some showed a woman in a black evening gown watching television at home. But it was also the great ritual events of our societies that television brought into the home: national celebrations, coronations and major sports events (Dayan and Katz, 1992).
The association between daily life and the entertainment world was furthermore found in new ways in which the American middle classes decorated their homes. What in Europe is called an American kitchen enabled the housewife to prepare meals while watching television. Very popular TV programmes in the 1950s also show the wish to include the entertainment world in daily life. In certain episodes of ‘I Love Lucy’ famous Hollywood actors are invited to dinner in ordinary homes (Mann, 1992: 44).
By contrast, television in the 1980s no longer tried to mix the two or to orchestrate major societal rituals. Instead, small family rituals were organized. In this new form that Italian sociologists have called ‘neo-television’ (Eco, 1985; Casetti and Odin, 1990), the content of interaction or the personality of the participants is of little consequence; the only thing that matters is their presence. Television displays daily life but also becomes a reference to serve as a standard for daily life. We thus witness a constant play in which television and society mirror each other.
Living Together Separately
As the family gathered together around television to watch it collectively (Meyrowitz, 1985), it abandoned radio which became an individual medium. Owing to the portability and low cost of transistor radios, everybody could listen alone in their bedroom while doing something else. This new family could ‘live together separately’ (Flichy, 1995: 158). It was probably with rock music, which appeared at that time, that use of the transistor and the related medium, the record player, appeared in full force. Whereas in the 1950s collective listening around the jukebox was decisive, listening was subsequently done at home, especially by teenage girls whose culture became a ‘culture of the bedroom, the place where girls meet, listen to music and teach each other make-up skills, practise their dancing’ (Frith, 1978: 64). Rock music listened to on a transistor radio or record player gave teenagers the opportunity to control their own space. This behaviour was part of the ‘juxtaposed home’; it allowed teenagers to remove themselves from adult supervision while still living with their parents (Flichy, 1995: 160-5). This shift in listening from the living room to the bedroom was therefore far more than a new mode of listening, it was a way of asserting oneself, of creating a peer subculture. Was this individualization of listening to become generalized and affect television? In the mid 1980s that was far from being the case. David Morley’s ethnographic study shows the family television to be a perpetual source of tension within the family between men, women, parents and children (Morley, 1986). As families acquired more and more sets, living room wars (Ang, 1996) were expected to disappear. Yet the TV set did not, as in the case of the transistor radio, disappear from the living room. It has remained the main set around which the family gathers, while sets in bedrooms are used less.
What about the mobile telephone, the PC and the Internet? Research in these areas is still limited and final conclusions can hardly be drawn, but the current literature does allow us to formulate some hypotheses. In Europe the mobile phone seems to be the communication device that has developed fastest. Like radio in the 1950s, it has spread rapidly among the young. Once again, the device has enabled them to become more autonomous vis-à-vis the family unit: to live in it but to be elsewhere. Yet the cell phone, which is above all an individual communication device, does not seem to be causing the disappearance of the fixed phone, which remains the family phone (Heurtin, 1998).
Despite discourse on the information society revolution, one has to admit that the microcomputer, followed by its connection to the Internet, has not spread through homes as fast as radio and television did. In the US, for example, it took about 20 years before households were equipped with at least one computer (more than one in the home remains rare). It seems that in many cases one member of the family tends to appropriate the machine. A French study shows that in wealthy families it is often the father, whereas in working-class families it is a child, often the oldest son, in which case the machine is installed in his bedroom (Jouët and Pasquier, 1999: 41). Yet, contrary to widespread belief, the computer does not isolate its user from others. It is characterized by a high level of sociability within peer groups: young people exchange software and various tips for more efficient use of hardware and software, and video games are often played collectively. Alongside these horizontal social networks, Kirsten Drotner sees the emergence of vertical networks that function between generations: skills are passed on no longer from oldest to youngest but from teenagers to adults (1999: 103-4).
Although computing and the Internet, unlike radio and television and like the mobile phone, are personal tools, they are nevertheless characterized by complex social relations in peer groups and between generations. Sociability among internauts has a further degree of complexity: an individual can communicate anonymously behind a pseudonym, and can simultaneously maintain several pseudonyms and therefore several personalities. French sociologists had already noticed this phenomenon with the minitel. Users of minitel forums wanted not only to give themselves a new identity but also to take advantage of their mask to behave differently, to reveal other facets of themselves and thus to better display their true identity (Jouët, 1991; Toussaint, 1991). Sherry Turkle studied the question of multiple identities on the Internet, especially in MUDs. One of her interviewees declared: ‘I’m not one thing, I’m many things. Each part gets to be more fully expressed in MUDs than in the real world. So even though I play more than one self on MUDs, I feel more like “myself” when I’m MUDding’ (Turkle, 1997: 185). These different lives can be lived out simultaneously in different windows of the same computer screen. ‘I split my mind,’ says another player. ‘I can see myself as being two or three or more. And I just turn on one part of my mind and then another when I go from window to window’ (Turkle, 1996: 194). Are we heading for an identity crisis? Or can we consider, like Turkle, that this coexistence of different identities is one of the characteristics of our postmodern age?
Today the Internet constitutes the latest phase in the history of information and communication technologies. The network of networks is also a particularly interesting case, since it enables us to recapitulate most of the points considered in this chapter. The Internet is simply one of the possible versions of a digital network, the result of a long history that started in the late 1960s. While IBM and AT&T had the competencies to launch a digital network, it was the collaboration of academics and hackers, with military funding, that spawned this one. The initial choices were profoundly marked by the representations of these actors, who dreamed of a communicating, free, universal and non-hierarchical network. It was this same utopia that was spread by the media in the early 1990s. In the corporate world, by contrast, the diffusion of the new technology combines the centralized model inherited from mainframe computing with the open model launched by microcomputing. At home, the Internet combines the characteristics of several means of information and communication. It is a tool for interpersonal interaction and for collective communication in virtual groups, but also a new medium with multiple sources of information. This multiform information and communication tool also tends to undermine the separation between the professional and private spheres: it makes it easy to work from home and, sometimes, to attend to personal things at work. Unlike radio and television on the one hand and the telephone on the other, which were quickly standardized around an economic model and a media format, the Internet is fundamentally heterogeneous, and this diversity is a key asset. Its use cannot be unified around a single economic model or communicational format. It is not a medium but a system, one which is tending to become as complex as the society of which it is claimed to be a virtual copy.