Peter Lovelock & John Ure. Handbook of New Media: Social Shaping and Consequences of ICTs. Editor: Leah A Lievrouw & Sonia Livingstone. Sage Publications. 2002.
During the year when this chapter was written, the first phase, or lurch, of the Internet economy had played itself out with the bursting of the dot.com bubble. Claims, hardly theories, of a ‘new economy’ backed by a ‘new economics’ or ‘silicon economics’ came thick and fast during this first phase, driven in the popular imagination by the very ‘new media’ that were part and parcel of the phenomenon. It was also promoted for all it was worth by the IT companies and telecom equipment manufacturers who stood to gain the most from the boom. Suddenly the Internet was fashionable, even in the literal sense as IT business people started ‘dressing down’ to more closely resemble their web-culture counterparts. A whole new vocabulary started up. We learned that Internet startups were faced with ‘burn rate’ problems owing to ‘customer acquisition costs’, and that industries no longer ‘transform’, let alone ‘change’: they ‘morph’. Peals of laughter would greet any banker who had the temerity to ask a dot.com startup when the black ink would appear on the balance sheet, an attitude brilliantly captured time and again in the textbook of the Internet, the cartoons of Doonesbury. There really seems to have been little more substance to the decision-making processes inside many of these companies than to the characters brought to life by Garry Trudeau.
Yet something important survives—something that is a new permanent feature of the political economy landscape. Whether it constitutes a ‘new paradigm’ or just an evolving feature of the rapid developments in communications, computers and the media content sector—all of which in fact have a longer history than the commercialization of the Internet—is subject to debate. (See the discussion of the Internet as a general-purpose technology below.) What does seem to be obvious is that a larger absolute number of people are more mobile across boundaries—geography, skill, language, industry and so on—than ever before, that new (IT) skills are in high demand, that the idea of lifetime employment with one organization is dying out, and that this economic and demographic flux advantages and disadvantages different sets of people and communities. But in principle all of this has happened before. Perhaps the pace and scope—without getting into the globalization debate—of change are new, and the factors that have explanatory power clearly include the Internet and the rise of the ‘new media’ industries as part of a structural shift within economies.
The term ‘new media’, which has displaced ‘multimedia’, has no universally accepted meaning, but at least has the advantage of focusing attention on what constitutes ‘new’ as opposed to ‘multi’, where examples can be traced back to early cinema, if not earlier. The Internet, and its association with the World Wide Web, are the most in-your-face examples of the ‘new’, but of course they rest upon precedent developments in telecommunications and computer networking, as well as the media. We would emphasize that these precedent developments include policy, regulatory and market developments as well as technological ones. Emerging from the technological synergies and business convergence of these sectors is electronic commerce, and this is the climacteric, the critical turning point, for the apostles of the new economy viewpoint.
It raises many interesting questions. Does the rapid diffusion of the Internet give overwhelming advantages to the first mover, the United States, in areas such as electronic business, or is it a potential equalizer, offering small and medium enterprises (SMEs) in developing economies an equal chance to enter the world market? Will early American dominance of the web homogenize American English as the world’s lingua franca, or will non-English-language websites and voice-activated and translation software have other effects? Will the spread of the Internet exacerbate or alleviate the digital divide between those with affordable universal access and those without? These questions are important for social and political as well as economic and cultural reasons, but they are raised here only to underscore the unknown. In this chapter we are more narrowly focused on what in the classical tradition would be called issues of political economy. Our aim is to review how the growth and development of the Internet and the closely related developments in telecommunications and electronic commerce, which are both underpinnings to and part of the new media industries, are being researched by scholars and industry specialists. This means that when considering the types of questions raised above we are, in the first instance, more interested in how, methodologically, such questions are being approached than in the questions or answers as such. The implication is that if a question provokes serious methodological interest, it is probably a question of interest to scholarship, and the converse may also be true.
Before leaving the subject of our biases as social scientists, we may add that in our view all materially grounded science early in its development, or early in a new line of inquiry, relies upon empirical data and the processes of collection, classification, measurement and so on. These processes are themselves theoretically informed: the collection process often involves a selection process; the classification system is not arbitrary; and the choice of metrics, and the role assigned to measurement and to the subsequent use of quantitative techniques, is quite critical. These short points are worth making because, in the early emergence of studies of the Internet, telecommunications and electronic commerce, the gap between systematic scholarly research and what is often termed ‘analysis’ in the multitude of public and private media now available can be substantial.
A number of factors can be offered in explanation, including the commercialization of large swathes of higher education, giving rise to one of the foremost components of the new ‘knowledge-based’ economy, the education industry. But we would also point out that the pace of development and radical change in the areas of the Internet, telecommunications and electronic commerce has been rapid and recent, leaving scholars breathless to catch up. As a consequence, many scholars, for example in the field of economics, are inclined at first to regard these developments as outgrowths of existing industries or commercial practices which can be accommodated within existing theory, even if the models have to be tweaked a bit. Others are economists who view the new developments as genuinely new, in the sense of setting a foundation for a ‘new economy’ with a ‘new economics’ based upon rather old principles of ‘network economics’ and ‘increasing returns’. Both approaches have something to offer, in the sense that healthy scepticism is the basis of good science, but so also is a willingness to recognize a shift of parameters. Our point is simply that the phenomenon under scrutiny is in its formative stages, reliable time series and panel data are often not available, and scholars who must feel their way in the dark will look for any source of enlightenment they can find.
In this chapter we first examine the concept of a new economy, and its relation to ‘new media’, the Internet and computer networking. We then consider in greater detail the research issues around the Internet, telecommunications networking and electronic commerce, using a political economy perspective throughout.
A New Economy?
Despite the lack of any clear definitions of the new economy, its main attributes, according to the OECD in A New Economy? The Changing Role of Innovation and Information Technology in Growth (2000), are (1) higher rates of non-inflationary growth, (2) lower rates of unemployment associated with business cycles, and (3) new sources of growth, including areas of increasing returns to scale, network effects and externalities. Among these new sources of growth are several observed factors, popularly labelled ‘laws’.
Moore’s Law, Metcalfe’s Law, and Gilder’s Law
Moore’s law is the observation first made by Gordon Moore, then chairman of Intel, that every 18 months it is possible to double the number of transistor circuits etched on a computer chip. This ‘law’ has prevailed for the past 40 years. Moore’s law implies a tenfold increase in memory and processing power every five years, a hundredfold every ten years, a thousandfold every 15. This is one of the most dramatic rates of sustained technical progress in history.
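The arithmetic behind these figures is easy to verify. The sketch below simply compounds the 18-month doubling period; the function name and round numbers are our illustration, not industry data.

```python
# Illustrative sketch of Moore's law arithmetic: doubling transistor
# counts every 18 months compounds to roughly tenfold growth in five
# years, a hundredfold in ten, and a thousandfold in fifteen.

def moore_factor(years, doubling_period=1.5):
    """Growth factor in transistor count after `years` years."""
    return 2 ** (years / doubling_period)

for years in (5, 10, 15):
    print(f"after {years:2d} years: ~{moore_factor(years):,.0f}x")
```

Note that the five-year figure works out at just over ten, which is why the popular tenfold/hundredfold/thousandfold shorthand holds.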
As extraordinary as Moore’s law has proven, of even greater impact has been Metcalfe’s law. Bob Metcalfe, inventor of Ethernet, observed that the value of a network is proportional to the square of the number of people using it. Thus, Metcalfe’s law, which is a modern version of the old law of increasing returns, says that the value of a network increases in direct proportion to the square of the number of machines that are on it. This is also known as the ‘network effect’. The value to any one individual of a telephone or fax machine, for instance, is proportional to the number of friends and associates who have phones or faxes. Double the number of participants, therefore, and the value to each participant is doubled, and the total value of the network is multiplied fourfold.
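A toy calculation makes the ‘fourfold’ claim concrete. Counting possible pairwise links and valuing each equally is our illustrative assumption, not a model from the sources, but it reproduces the n-squared growth Metcalfe described.

```python
# Toy version of Metcalfe's law: n participants can form n*(n-1)/2
# pairwise links, so total network value grows roughly as n squared.
# Valuing every link equally is an illustrative assumption.

def metcalfe_value(n, value_per_link=1.0):
    """Total network value under the equal-value pairwise-link assumption."""
    return value_per_link * n * (n - 1) / 2

# Doubling membership roughly quadruples total value.
print(metcalfe_value(200) / metcalfe_value(100))
```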
These advances have pulled through extraordinary rates of associated innovation, and have led to communications capacity exploding at a rate that now dwarfs even Moore’s law. The result has become known as ‘Gilder’s law’, named after the high-tech futurologist George Gilder (1997), which forecasts that total bandwidth will triple every year for the next 25 years. Improvements in data compression, amplification and multiplexing now permit a single fibre-optic strand to carry 25 terabits of information per second, that is, 25 times more information than the average traffic load of the entire world’s communications networks put together.
Network Economics and a New Economy?
But does the power of these technological and business forces bring into being a new economy driven by a new economics? Two converts to the idea that the ‘acceleration of productivity growth, driven by information technology, is the most remarkable feature of the US growth resurgence’ of the late 1990s are Jorgenson and Stiroh (2000). But they firmly focus on the price-performance indicators of the computer industry itself and the diffusion of these gains to sectors that have undergone a restructuring by substituting lower-cost capital services to take advantage of information technology. They see ‘little support for the “new economy” picture of spillovers cascading from information technology producers onto users of this technology’ (2000: 4). On the contrary, others such as Bresnahan (1999) have taken the view that the networking aspect of computers is becoming decisive in raising productivity through its effects on restructuring bureaucracies and organizations, substituting for many moderate skill white-collar jobs and combining higher levels of managerial, professional and technical skills.
An early collection of papers was Kahin and Keller’s Public Access to the Internet (1995), which surveyed a range of issues from macro information policy to networked community issues to the microeconomics of the Internet. Internet Economics (McKnight and Bailey, 1997) followed, but again the economic focus is micro, on how the economics of the Internet works out, for example how to price Internet services. This quasi-business approach also runs through the other major output of the period, Shapiro and Varian’s Information Rules: a Strategic Guide to the Network Economy (1999), a book with references but without footnotes. Here the thesis ‘is that durable economic principles can guide you in today’s frenetic business environment’ (1999: 1). We may assume that some less frenzied business people have had time to reflect on their mistakes since the dot.com mania came to an end; if not, they had an alternative in Kevin Kelly’s New Rules for the New Economy (1999). This is the ‘you ain’t seen nothing yet’ school of new economics, and its first sentence begins: ‘No one can escape the transforming fire of machines.’ There is an inexorability about this version of the new economics, yet Kelly does also raise a crucial issue: uncertainty. Nothing is for certain when technology changes so rapidly, and business models based upon one generation of technology are forced to give way to others based on untried and untested marketing strategies.
For Kelly, founding father of Wired, the magazine which epitomized, if not the new economy thinking, certainly the new economy lifestyle of the 1990s, the foundation of the new economics is the phenomenon of increasing returns to scale, particularly as propounded in the works of Stanford University professor Brian Arthur (1994). This version of the new economics was regarded as the consequence of Metcalfe’s law. As Kelly explained it, ‘The prime law of networking is known as the law of increasing returns. Value explodes with membership … The law of increasing returns is far more than the textbook notion of economies of scale … industrial economies of scale increase value linearly, while the prime law increases value exponentially—the difference between a piggy-bank and compound interest’ (www.wired.com/archive/5.09/netrules.html, 1997). For more, see Kelly (1998).
Despite these claims for the networked economy, for a time there was a conspicuous lack of evidence of productivity in the figures. Yet there was a lot of evidence that the new economy gave businesses much greater flexibility in labour markets, including substituting information technology for labour, employing more workers on flexible hours and wage rates, and outsourcing to companies who did likewise. Professional economists like Krugman (1997) refuted the new economy on just these grounds, suggesting that an old-fashioned wage squeeze rather than a new-fashioned economics was at work. More specifically, Krugman took issue with the proposition that increasing returns were becoming a defining characteristic of the new economy: ‘Against Metcalfe’s Law must be set DeLong’s Law (after Berkeley’s Brad DeLong): in building a network, you tend to do the most valuable connections first. Is the net effect increasing or diminishing returns? It can go either way’ (‘Networks and increasing returns: a cautionary tale’, http://web.mit.edu/krugman/www/metcalfe.htm).
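Krugman’s caution can be sketched numerically. In the toy model below the k-th link added is worth k to the power minus alpha, capturing the idea that the most valuable connections are built first; the decay exponent alpha is entirely our illustrative assumption, not anything taken from the sources.

```python
# Sketch of the Metcalfe-versus-DeLong question: if the most valuable
# connections are built first, the k-th link is worth less than the
# first. Whether the network shows increasing or diminishing returns
# then depends on how fast marginal link value decays (alpha below).

def network_value(n_links, alpha):
    """Total value of n_links whose marginal value decays as k**(-alpha)."""
    return sum(k ** (-alpha) for k in range(1, n_links + 1))

for alpha in (0.0, 0.5, 1.5):
    ratio = network_value(200, alpha) / network_value(100, alpha)
    print(f"alpha={alpha}: doubling links multiplies total value by {ratio:.2f}")
```

With alpha at zero every link is equally valuable and total value simply doubles with the links; with a steep decay the extra links add almost nothing. As Krugman says, it can go either way.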
While the debate has identified problems with popular economic prognostications ungrounded in rigorous theory, there remain serious questions as to how ‘network economics’ plays out in practice, although Shapiro and Varian (1999: 182) emphasize the ‘double whammy’ effects on both the supply and demand sides of the economy. Are the effects of networking principally on the structure of the economy, as could be read into the sort of discussion above? Structural shifts do reallocate resources from low- to high-productivity sectors as well as shift resources from labour to capital. How far are they also endogenous, simultaneously raising productivity across the board? How far are the effects instrumental on the velocity of circulation of money and capital, and how do these effects play through the demand and supply sides of the economy?
Understanding the Internet’s morphology provides a basis from which to begin examining the implications of its development and diffusion for industries such as telecommunications, as well as for public policy. This is the approach taken by Denton et al. (2000) in a study for the Asia-Pacific Economic Council (APEC) of international charging arrangements for Internet services (ICAIS). According to Denton et al., the frequent description of the Internet as ‘a network of networks’ simply ‘confuses more than it clarifies’. The point is that the Internet is not so much a ‘thing’ or ‘collection of things’ (networks, computers, access devices) as a means of global communications (or ‘global information system’—see below) using a protocol, or a set of compatible protocols, principally transmission control protocol (TCP) and Internet protocol (IP), otherwise shortened to TCP/IP. This permits computers of all kinds, independent of their internal architectures, to talk to each other across any suitably digitalized connected networks, private or public, wireline or wireless, by breaking traffic into small packets of data. Each packet carries a header containing an address that is intelligible to the routers on the networks, which function like the switches on the circuits traditionally used by telephone companies. Because a packet-switched system lacks any one-to-one circuit connectivity, it is technically referred to as ‘connectionless’.
Denton et al. helpfully cite the resolution of the US Federal Networking Council, an authoritative body of Internet architects, announced on 24 October 1995:
Resolution: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term ‘Internet’.
‘Internet’ refers to the global information system that
- is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons;
- is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and
- provides, uses or makes accessible, either publicly or privately, high-level services layered on the communications and related infrastructure described herein.
It is useful to consider the distinction between a ‘means of global communications’ and a ‘global information system’. First, the term ‘information’ is being used here in an engineering sense, without implications as to its veracity or use, rather like the scientific use of the term ‘event’ to describe any discrete or measurable occurrence, observation or happening. Second, a means of communications implies an accessible mode of protocol conversion (such as analogue to digital) and traffic transmission, such as a telecommunications system, whereas the term ‘information system’ as adopted by the FNC implies specifically the role of content on the web or over the Internet. This is indicated by the reference to services layered on the communications infrastructure in the third clause of the resolution.
A seven-layered network protocol architecture, known as the open systems interconnection (OSI) reference model, was developed from 1977 by the International Organization for Standardization (ISO). Each layer in the stack constitutes the ‘client’ of the layer below it and the ‘server’ to the layer above it.
Layers provide agreements among people—and the machines that they build and program—as to who will do what, and when. As such, layers are standards. Knowledge of the existence of layers is therefore fundamental to understanding how the Internet works, and why it works differently from previous signal transport media. While the Internet uses a layered signal architecture, the founders of the Internet decided not to conform to the ISO seven-layer model. Rather, TCP/IP takes the top three layers, five through seven, and combines them into one—the application layer.
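The correspondence just described can be laid out explicitly. The mapping below is a common textbook rendering of how the OSI layers line up with the TCP/IP stack; layer names and the exact correspondence vary somewhat between sources.

```python
# Common textbook mapping of the seven OSI layers onto the TCP/IP
# stack: OSI layers five to seven collapse into a single application
# layer, as described in the text.

OSI_TO_TCPIP = {
    7: ("Application",  "Application"),
    6: ("Presentation", "Application"),
    5: ("Session",      "Application"),
    4: ("Transport",    "Transport (TCP)"),
    3: ("Network",      "Internet (IP)"),
    2: ("Data link",    "Link"),
    1: ("Physical",     "Link"),
}

for number in range(7, 0, -1):
    osi_name, tcpip_name = OSI_TO_TCPIP[number]
    print(f"OSI layer {number}: {osi_name:<12} -> TCP/IP: {tcpip_name}")
```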
The result is that the Internet is an open system where new protocols may be added by a process of consensus within the industry. The effect of these protocols can be to change how the system of signal transmission works. Protocols can be introduced without any money being spent on changing any physical object within the signal transmission system. Thus, to understand why the Internet is driving technological and business change so effectively, one needs to understand the function of layers.
The Internet as a GPT?
The Internet has proved all-pervasive and this has naturally led scholars of the process of innovation to identify the Internet as a general-purpose technology (GPT), after Bresnahan and Trajtenberg (1992). Lipsey et al. (1998) define a GPT as having three characteristics: pervasiveness, technological dynamism and innovational complementarities, where the last is associated with research and development and economic growth connected with a series of complementary goods and services. In the same volume, Harris (1998) focuses on the Internet as a GPT which can bring benefit to small regional economies through the externalities associated with ‘network economics’, although there are losers in his analysis: unskilled workers unable to adjust.
The advantage of treating the Internet as a ‘radical innovation’, a ‘macro-invention’ or a GPT, is to open ways to explore its impact on industrial restructuring, productivity and economic growth of complementary sectors in light of the discussion above. In this regard, the facilitation of electronic commerce and related data transfers may be the key to the productivity question. Certainly the impact on the demand for greater investment in telecommunications bandwidth alone has been remarkable.
A related but separate issue of research concerns the diffusion process, the ways in which the rate of adoption of the Internet is driven and how it is in turn driving the networked economy. Diffusion studies are the subject of a vast literature; if only one reference is to be made, it is worth mentioning the compilation edited by Stoneman (1995), and in particular Karshenas and Stoneman (1995), who provide valuable insights into analytical research methods and results. Of particular relevance is their stress on the need for methodologies that do not necessarily privilege endogenous factors in the diffusion process. Their own analytical preference is for discrete choice models which render tidy equilibrium outcomes, so in looking at the diffusion process they distinguish between the ‘desire to acquire’ and the ‘decision to acquire’. The former may be influenced by learning, peer pressure, and so forth (leaving the process open to epidemic models and continuous logistic curve analysis) but the latter is an either/or decision based upon objective criteria, such as price and disposable income (open to the use of cost-benefit, probit and discrete choice models). Here perhaps is an opportunity for the economics, anthropology, sociology, psychology and cultural studies professions to work more closely together on future research agendas.
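The ‘epidemic’ and logistic-curve approaches mentioned above reduce to a simple functional form. The parameter values in this sketch are arbitrary illustrations, not estimates from any diffusion study.

```python
import math

# The logistic S-curve underlying 'epidemic' diffusion models:
# adoption is slow at first, accelerates as adopters and non-adopters
# mix, and saturates at a ceiling. All parameter values here are
# illustrative only.

def logistic_adoption(t, ceiling=1.0, rate=1.0, midpoint=10.0):
    """Fraction of potential adopters who have adopted by time t."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in (0, 5, 10, 15, 20):
    print(f"t={t:2d}: adoption = {logistic_adoption(t):.3f}")
```

The same curve can be fitted to observed adoption data, which is how the ‘desire to acquire’ side of diffusion is often analysed empirically.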
The National Information Infrastructure
The ‘information superhighway’, rather like the Internet, has a Platonic quality about it: once named it exists in the collective mind and popular imagination of society. The national information infrastructure (NII) gained currency when entered into the initial draft of the High Performance Computing Act (HPCA) of 1991 by Al Gore, then a US senator, with metaphorical reference to the interstate superhighway programme of the 1950s authored and championed by Senator Albert Gore Sr (Gore, 1991; HPCCI, 1994). It went on to become a central plank of the Clinton-Gore administration’s platform for the US in late 1993 (Clinton and Gore, 1993). As with the Internet itself, the concept’s robustness came from its embrace of a meshing of disparate public and private transmission and switching systems—the highways, junctions and toll booths—and access points and devices—the local loop telephone networks, telephones, computers, cable TVs, cell phones, VSATs (very small aperture terminals)—as well as content.
The goals of the NII were laid out in detail in The National Information Infrastructure: Agenda for Action by the 1993 commission established by Vice-President Gore (Executive Office of the President, 1993). Part of its appeal was the appearance of a radical departure in national policy initiative towards information and communications technologies, but one entirely grounded on incremental development. The NII was about promoting integration through interconnection, not about building anew or reallocating resources, and its strategic thrust was about international competitiveness: ‘The NII will enable US firms to compete and win in the global economy, generating good jobs for the American people and economic growth for the nation.’ While it did contain a positive commitment to social objectives, notably to broaden the concept of ‘universal service’ to bring affordable Internet access to schools and rural communities, its central mantra was: ‘The private sector will lead the development of the NII.’ The NII was envisioned as a catalyst to innovation, research and development, where federal funds would focus on research left untouched by private investment, as well as providing matching funds for private sector R&D, and the Clinton-Gore vehicle for this was the High Performance Computing and Communications Program (HPCCP) born of the 1991 Act.
The Global Information Infrastructure
It is important to recount the founding history of the NII and recent United States telecommunications policy because of their global impact. But this has to be put into perspective. During the second half of the twentieth century the trajectory of telecommunications policy was roughly similar in developed and developing economies. State-run or private monopolies gave way to a measure of deregulation, which included the separation of posts from telephony functions and their corporatization (Petrazzini, 1995; ITU, 1998a; Lovelock, 1998a), a step which went some way towards removing investment and financial issues from the constraints of national budgets. It also opened the way for the start of privatization programmes and public listings, and the early liberalization of some markets, usually starting with so-called ‘value-added’ services. The term ‘value-added’ was in reality, as Cowhey and Aronson (1988) pointed out, not so much a technological or economic one as a regulatory means of distinguishing what would and would not be open to competitive entry. In truth, even a basic real-time analogue telephone call is ‘value-added’, as it transforms voice into electromagnetic pulses.
But the timing and scale of the reforms in the US and the size of the information and communications technology sector were bound to be agenda-setting for the rest of the world, not least through global initiatives such as the General Agreement on Trade in Services (GATS) of the GATT, the establishment of the World Trade Organization (WTO) and the achievement of the first WTO services agreement, the Basic Agreement on Telecommunications (BAT) in February 1998. Very much part of the agenda-setting process was Vice-President Gore’s 1994 proposal to the International Telecommunication Union (ITU) meeting in Buenos Aires for a global information infrastructure (GII), a ‘planetary information network’ which, alongside social, educational, health and cultural advantages, would help create a ‘global information marketplace’ (Macgregor Wise, 1997).
The response of many developing economies, especially those experiencing the economic boom in Asia, was a judicious renaming of existing projects, lending them a renewed sense of purpose. These Asian economies were already embarked, in their own ways, on developing information infrastructures. Asian states in the 1990s were encouraging a vigorous outward expansion of their own national companies, meeting the Western and Japanese multinationals at the regional level. Despite the setback of the Asian economic crisis after 1997, no Asian country today wants to proceed slowly in developing its own version of the information superhighway (Langdale, 1997; Noam et al., 1994; Petrazzini, 1995; Ure, 1993; 1995; 1997; Singh, 1999). Malaysia’s multimedia supercorridor just to the south of the capital Kuala Lumpur is one example; Singapore’s intelligent island project IT2000 is another; the South Korean KII, the Japanese JNII, mainland China’s CNII and Taiwan’s NII, which is to support an ambition to become a regional operating centre for telecoms and information services, are other examples (Hukill, 2000; Lovelock, 1998b).
The Changing Industrial Structure of Telecommunications
The telecommunications infrastructure is the taproot of the NII, and as such was an important part of the policy agenda of the 1993 commission’s report. The subsequent Telecommunications Act of 1996 was really the culmination of several decades of legal judgements and Federal Communications Commission (FCC) rulings, during which time wireless operators were recognized as legitimate public and private service providers in long-distance markets; and, through the rulings known as Computer I, Computer II and Computer III (see Bruce et al., 1988; 1994), computer-based data communications and information service providers were likewise legitimated. The 1996 Act took the further step of recognizing the public interest in having cable TV and telecommunications services converge, clearing the last regulatory roadblocks to the seamless information superhighway. For an exegesis of the Act, see Huber et al. (1996). For a more recent critical economic assessment of its impact, see Kahn et al. (1999).
In terms of its wider, global impact on policy-making, the 1996 Act should be compared to the 1982 Modification of Final Judgment (MFJ), the consent decree that divested AT&T of its regional Bell operating companies. Similar ground-breaking events in Britain and Japan echoed the judgement. The coincidence of these events—for a critical contemporary assessment see Hills (1986)—in the world’s three largest telecommunications markets marked the decisive global turning point of telecommunications from public utility status to commercial commodity status, a shift that resonated with the growing strategic importance of telecommunications within the world of commerce and industry, multinational manufacturing and trading, and especially in the 1980s within the world of global finance.
On the demand side, the growing use of computers in the 1980s, notably minicomputers and personal computers, the growing abundance of satellite communications, and the first commercially marketed mobile cell phones were complementary developments spurring on the use of both data and voice transmissions. Equally dramatic were developments on the supply side. Digital switching and transmission began replacing analogue, and the operation of Moore’s law, seen in falling microchip prices and the phenomenal increase in the processing power of the computers used for switching, shifted the underlying economics of telecommunications and, with it, the entire structure of the industry.
Huber (1987) imaginatively captured this in a progress report on the divestiture of AT&T for the Department of Justice. Huber noted that the differential effect of the fall in switching costs relative to transmission costs was driving the migration of small-scale switches along the lines of transmission to public, and increasingly private, network nodes or exchange points. In the process the pyramid architecture of the traditional telecommunications network was compressed into a topology more typical of an IT network of distributed nodes:
When switching is expensive and transmission is cheap, the efficient network looks like a pyramid. One hundred million telephones converge into twenty thousand end-office switches, which converge into a thousand tandem switches, and so on up to a handful of regional master switches at the apex. The system has comparatively few switches; it has many lines. By contrast, when switching is cheap and transmission expensive, the efficient network is a ring. The nodes (switches or computers) are connected along a ‘geodesic’—a path of minimum length. (1987: 1.3)
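Huber’s argument lends itself to a toy cost model. Everything below, the cost figures and the assumption that adding switches shortens total line length, is our illustrative construction, not Huber’s own calculation, but it shows how the ratio of switching to transmission prices flips the efficient architecture.

```python
# Toy cost model of Huber's point: expensive switching favours a
# 'pyramid' with few big switches and long lines; cheap switching
# favours many distributed nodes with short lines. The cost figures
# and the line-length scaling are illustrative, not Huber's data.

def network_cost(n_switches, switch_cost, line_cost, subscribers=100_000):
    # Assume pushing more switches toward the network edge shortens
    # total line length (the square-root scaling is a simplification).
    total_line_length = subscribers / n_switches ** 0.5
    return n_switches * switch_cost + total_line_length * line_cost

few, many = 20, 2000  # candidate numbers of switching nodes

# Expensive switching, cheap transmission: the pyramid (few switches) wins.
print(network_cost(few, 1000.0, 1.0) < network_cost(many, 1000.0, 1.0))

# Cheap switching, expensive transmission: distributed nodes win.
print(network_cost(many, 1.0, 1000.0) < network_cost(few, 1.0, 1000.0))
```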
Huber et al. (1993) followed up with an equally challenging second progress report. In it they conclude that economies of scale in the then new technology of optical fibre cable reinforced an element of natural monopoly in long-distance traffic haulage, while the other innovative technology of the period, the civilian use of cellular mobile telephony, was allowing new levels of competitive entry in the local loop.
By 1982, the lawyers and economists had fully grasped the importance of microwave technology in the long-distance market. But they ignored fiber optics. By 1982, the lawyers and economists thought they understood the wire in the local exchange. But they ignored radio. The result was a divestiture decree that was obsolete almost from the day it went into effect. (1993: 1.42)
In other words, the Consent Decree divesting AT&T of its regional Bell operating companies, leaving them as effective monopolies over vast swathes of state territory, and opening AT&T’s long-distance markets to competitive entry, was in logic, and with hindsight, exactly 180 degrees the wrong way about. The point is laboured here to underscore an important aspect in the recent history of policy and regulatory reform. On the one hand, the 1984 divestiture, and the privatization and liberalization moves in Britain and Japan, set precedents and an agenda for most other nations, developing as well as developed, despite the diversity of local circumstances. This is well reflected in a series of World Bank and International Finance Corporation papers: for example, see Ambrose et al. (1990) for the IFC, and Wellenius et al. (1989) and Smith and Staple (1994) for the World Bank.
On the other hand, the work of Huber and many, many others raised a far greater general awareness of the importance of critical and well-grounded thinking for the design of policy and regulation, and issues such as information asymmetry within the industry, the dangers of regulatory ‘capture’, interconnection issues, paying for universal service obligations, and so forth. The volume edited by Brock (1995) captures the mood. Policy liberalization and new market entry also necessitated new or reformed regulatory structures—quasi-independent regulators who could draw upon industry expertise and the professional assistance of accountants, economists and lawyers. Bruce et al. (1988; 1994) and Crandall and Waverman (1995) provide reviews of the policy and regulatory structures and economic debates that serve as benchmarks for subsequent changes.
Leading exponents of liberalization, such as Beesley and Littlechild (1989), devised forms of ‘incentive regulation’ such as price capping, designed to encourage dominant carriers to act in efficient market-like ways until such time as new entry could establish genuinely competitive markets. Wenders (1987), Mitchell and Vogelsang (1991) and Einhorn (1991), among numerous others, provide useful guides and references to these schemes and debates, many of which first appeared in the Bell Journal of Economics and Management Science, later the Bell Journal of Economics and more recently the RAND Journal of Economics. Debates and analysis among economists appear in most of the leading journals and overlap with legal perspectives in journals such as the Columbia Law Review, the Harvard Law Review and the Yale Journal on Regulation. Specialist technical journals, such as the International Telecommunications Society’s Information Economics and Policy and policy journals like Telecommunications Policy, Prometheus and more recently Info, are regular sources of analysis and informed discussion. For a review of earlier research literature, see Snow (1986) and for a more recent review of reform issues see Melody (1997).
The highly technical nature of the telecommunications business has always meant that engineers and engineering data have been crucial information sources for analysis by economists and lawyers alike, and from the legendary Bell Labs downwards (or outwards) telephone companies and equipment manufacturers have supplied their own research findings. The London-based Institution of Electrical Engineers (IEE) has been an especially influential source of analysis, commissioning studies as early as Morgan’s Telecommunications Economics (1958), followed by Elements of Telecommunications Economics by Littlechild (1979) and most recently World Telecommunications Economics by Wheatley (1999).
In preparing its defence in the late 1970s, AT&T employed the skills of leading economists, notably Professor William Baumol (see Sharkey, 1982), to explore the economic and social inefficiencies associated with the potential loss of economies of scale and scope. From the work of Baumol et al. (e.g. 1988) came contestability theory, which posited that the mere threat of competitive entry could induce a monopolist to simulate competitive levels of pricing and output. But equally the prospect remained that the entry of discrete stand-alone telecommunications operators serving selective profitable markets could render the monopoly, and the associated economies of scale and scope, unsustainable. These arguments survived the breakup of AT&T in the form of the efficient component pricing rule (ECPR), advocated strongly by Baumol and Sidak (1994) but vigorously opposed on practical grounds by Albon (1994) and, along with stand-alone cost arguments, on theoretical grounds by Kahn (1998), whose two-volume Economics of Regulation (Kahn, 1970-1) remains the seminal work on public sector regulation.
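The rule itself is simple arithmetic: the access charge equals the incumbent’s incremental cost of the bottleneck component plus the retail margin it forgoes when an entrant wins the customer. A minimal sketch, with hypothetical prices and costs (not taken from Baumol and Sidak’s own examples):

```python
# Illustrative ECPR arithmetic (all figures hypothetical).
retail_price = 10.0   # incumbent's end-to-end retail price
cost_access  = 3.0    # incremental cost of the bottleneck (access) component
cost_retail  = 4.0    # incremental cost of the competitive (retail) component

# The margin the incumbent forgoes when it sells access instead of retail.
forgone_margin = retail_price - cost_access - cost_retail   # 3.0
access_charge  = cost_access + forgone_margin               # 6.0

# An entrant breaks even at the access charge plus its own retail cost,
# so only an entrant more efficient than the incumbent can undercut it.
efficient_entrant   = access_charge + 3.5   # retail cost 3.5 < incumbent's 4.0
inefficient_entrant = access_charge + 4.5   # retail cost 4.5 > incumbent's 4.0
assert efficient_entrant < retail_price     # 9.5 < 10.0: efficient entry viable
assert inefficient_entrant > retail_price   # 10.5 > 10.0: inefficient entry blocked
```

This is precisely the property critics attacked: by building the incumbent’s existing margin into the access price, the rule guarantees the incumbent its profit whether or not that margin was ever competitive.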
Sidak and Spulber (1998) perhaps represent the dying shots of the argument, maintaining that incumbents should be fully compensated for the unforeseen consequences of regulatory-led liberalization. In his review of their position, Trebing (2000) argues, with an eye to the mergers, amalgamations and alliances being formed across the converging sectors of ‘new media’, that there is no clear-cut evidence that the post-regulated monopoly era will end up as anything other than a market of competitive oligopolies.
Telecommunications Trade in Services, Growth and Development
The argument is not quite dead in many developing countries, where concerns over cross-subsidies and universal service, or at least service to the key political constituents within the country, have always been important. The traditional argument for monopolies, or rather of monopolies, has been the ability to transfer price according to policy requirements, which can include social objectives such as serving rural communities. The sad twofold fact is that, first, in the case of many of the poorest countries the objectives have shown little sign of being met under the state or private monopoly model (ITU, 1984), and second, the traditional source of subsidy, namely long-distance and international traffic, is inexorably disappearing. Everything from callback to Internet telephony, and from refile to international simple resale, operates to bypass the high-priced routes. Bypass proved so irresistible and unstoppable that incumbent carriers increasingly adopted these same means just to keep their business, but two developments swung the arguments beyond dispute by the turn of the century. The first was the global spread of the Internet and the World Wide Web, and the second was the spread of broadband technologies. Together they challenge the fundamentals of monthly rentals, call charges and value-added service charges, the traditional bread-and-butter telecommunications service revenues (Ure, 2000).
Much of the work on the relationship between telecommunications and development has been done under the auspices of either the International Telecommunication Union (ITU) or the World Bank. These include ITU (1986; 1988a; 1988b). For helpful reviews of the literature see Saunders et al. (1983; 1994) for the World Bank.
Numerous case studies attempted to measure the economic benefits to rural and agricultural communities of access to telephony, but the benefits already suppose a significant level of integration into a market economy which in turn presupposes a value to be imputed to market information. Where anything beyond simple commodity production exists, markets exist and a value for telecommunications can be derived. This raises interesting possibilities of ways to give local communities and entrepreneurs means to foster local or regional networks at rates of return which would be unacceptable to corporate capital. This is an important area of research still to be developed. The point here also is that the social aim to provide universal service—or universal access, which is not quite the same thing (ITU, 1998b)—sometimes needs to be justified on grounds other than economic development, and this has implications for the ways and means of funding such a commitment. For example, a national network of communications can serve the political aspirations of nation-state builders, can have the effect of opening remote tribal areas and rainforests to stronger outside influences, can aid in disaster relief work, can bring educational or medical benefits to a wider population, and so forth. It cuts many different ways, not all of which are necessarily developmental.
Throughout the 1980s the ITU produced a steady stream of studies on development indicators and by the mid 1990s was still producing charts arguably showing a strong correlation between GDP per capita growth and the number of telephone lines per 100 of the population, known as the teledensity. The statistical analysis often represented no more than an averaging process, and was certainly incapable of supporting an argument for causation (ITU, 1993). Indeed in some cases, for example Indonesia, it could be argued that few of the staple occupations were hindered by the lack of telecommunications, and few would have been significantly enhanced by access to telecommunications, with one or two notable exceptions, such as the hotel and tourism trade. Under these conditions, economic growth was more likely to be cause than consequence.
Jipp (1963), who proposed a simple regression of teledensity (mainlines per head of population) on domestic per capita income, may be rightly considered the pioneer of the econometrics that underlay much of this work. Others followed, but the econometric studies also ran into methodological problems, as Roller and Waverman (1996) have pointed out. The first is the problem of simultaneity: that is, the supply of telecommunications (or other public infrastructure) services can cause economic growth, but simultaneously economic growth can cause the increase in demand for telecommunications services. So which is the chicken and which the egg? Second, the cause of economic growth may be the cumulative effect of fixed assets, for example in R&D, with which telecommunications investment is closely correlated. When these factors are accounted for, most of the income growth effects from telecommunications investment in previous studies disappear.
Roller and Waverman explicitly tackle these two issues, and then, most interestingly, test for the effects of ‘network externalities’ beyond some critical mass of network connections. First, following the postulates of new growth theory, which seeks to model endogenous sources of growth, they internalize the role of telecommunications investment by building a micro-model of its supply and demand. They then estimate the micro-model simultaneously with a macro-model of the relationship between GDP growth and telecommunications, using data from 21 OECD countries and 14 developing countries for 1970-90. They control for country-specific ‘fixed effects’ by estimating a separate intercept for each country, and then test for network externalities by regressing not only on each country’s penetration rate (the ITU’s ‘teledensity’) but also on the penetration rate squared, thereby introducing an indicator of network scale. Finding a negative coefficient on the former and a positive coefficient on the latter, they take this as evidence suggestive of network externalities, and go on to estimate for the OECD as a whole. The OECD average teledensity was 30 (that is, 30 mainlines per 100 population), and from a 10 per cent increase in teledensity they found a GDP growth effect of 2.8 per cent, that is, an elasticity of 0.28. For the US, with a teledensity of 40, they found a growth effect of 7.8 per cent, and for Germany, with a teledensity of 32, a growth effect of 3.7 per cent.
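The sign pattern at the heart of the test can be illustrated with a minimal sketch. This is synthetic data with made-up coefficients, not Roller and Waverman’s actual model (which is a simultaneous micro-macro system with country fixed effects); it shows only how a negative linear and positive quadratic teledensity term are recovered by least squares:

```python
# Illustrative only: simulate GDP growth as a quadratic function of
# teledensity and recover the coefficients by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
teledensity = rng.uniform(1, 60, size=500)   # mainlines per 100 population

# Assumed data-generating process (hypothetical coefficients): growth falls
# with teledensity at first, then rises past a critical mass.
growth = 2.0 - 0.05 * teledensity + 0.002 * teledensity**2 \
         + rng.normal(0, 0.1, size=500)

# Regress growth on [1, teledensity, teledensity^2].
X = np.column_stack([np.ones_like(teledensity), teledensity, teledensity**2])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)

print(beta)  # roughly [2.0, -0.05, 0.002]
assert beta[1] < 0 and beta[2] > 0  # the sign pattern read as network externalities
```

The quadratic term is what lets the marginal growth effect of an extra line rise with network scale, which is the operational content of the externality claim.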
It is too early to conclude that the case for network effects is proven, although it has intuitive appeal; but then so did the idea that telecommunications investment was a cause of higher economic growth, when the most that could safely be assumed was that an efficient telecommunications infrastructure was an enabler, its existence the absence of a constraint. Roller and Waverman’s work does seem to pose a serious challenge to the long-held and cherished beliefs of the ITU and the World Bank that investment in traditional telecommunications in developing countries brings economic returns far beyond the financial returns enjoyed by commercial investors, and this is clearly an area that will attract more careful research. To complicate matters, the growing importance of trade in services, many of which directly or indirectly rely upon telecommunications, will have to be factored into the growth accounting. And finally, if network effects are supported by further research, this will also have a bearing on debates about the new or networked economy.
The Broadband Age
The shift from the analogue to the digital to the Internet era is all about a shift from low to higher degrees of uncertainty (see Ure, 2000), and nothing illustrates it better than the emerging age of broadband. Until the late 1990s broadband was a backbone technology for aggregating traffic, but the Internet age opens up demand for broadband in the access networks. End users almost always require more downstream than upstream capacity, so numerous asymmetric access technologies, among them asymmetric digital subscriber line (ADSL) over copper wireline, third-generation wireless, fixed wireless, free-space laser links and hybrid fibre-coaxial cable systems, are competing as modes of access. The interesting underlying business issue is: what will access networks sell? Unlike predominantly voice systems, where the demand for the network arose from the nature of the network’s service and connectivity, the demand for ‘always-on’ broadband access networks is essentially a derived demand for the services, content and applications of Internet providers. Will customers pay for plain old telephone services in the future? ‘Always-on’ seems to preclude simple metering models, and Internet telephony seems to preclude any form of separate charging for voice traffic. What then is the future of the telephone company? In an era of uncertainty the restructuring of the industry will remain a major topic of research interest.
The earliest forms of computer-based electronic commerce date from the late 1960s. In those days, they served a variety of distinct purposes, such as: time-sharing of mainframe computing CPU cycles; packet-switched express mail delivery; data transfer delivery; and business-to-business facsimile transmission. Businesses acquired early e-commerce services by leasing the value-added networking services of international telephone companies, or by acquiring leased computer time and network access offered by the large in-house shops of GE, IBM, McDonnell Douglas and EDS (Kimberley, 1991; Wigand and Benjamin, 1995).
Through the 1970s and 1980s, businesses began extending their networks to reach out to customers and business partners by electronically sending and receiving purchase orders, invoices and shipping notifications. The result was a proliferation of electronic data (or document) interchange (EDI) transmitted over value-added networks (VANs). In the 1980s, vendors such as McDonnell Douglas and General Motors introduced computer-aided design (CAD), engineering and manufacturing over these communications networks, which allowed managers, engineers and users to collaborate on design and production.
The consequences of such laissez-faire development were felt throughout the 1970s and 1980s. Bewildering arrays of proprietary networks, computer architectures and clumsy text-based computer interfaces of proprietary software were haphazardly grouped together with vendor hardware, making e-commerce and computing both labour and capital intensive. None of this came cheaply and all but the largest firms were locked out of e-commerce technologies by their sheer cost and scale. In response, service bureaus grew out of the internal corporate computing operations of larger firms. They used their already substantial economies of scale to offer network and computing services of greater reliability and lower cost than even large firms could develop internally.
Retrospectively—see Clark and Westland (1999) and Kalakota and Whinston (1996)—we can see that the market for e-commerce services over the three decades up to 1990 could be encapsulated in five broad divisions:
- Electronic mail, providing store-and-forward services for the business-to-business exchange of information. Mailbox services transferred information directly from the sender to the receiver; gateway services transferred information only as far as a corporate server.
- Enhanced fax, providing point-to-point delivery of documents encoded as fax rather than e-mail, which usually implied that there was non-text information that needed to be encoded.
- Electronic data interchange, providing computer-to-computer exchange of information using standardized transaction formats. These transactions typically involved purchase or sales functions.
- Transaction processing, supporting credit, claims, payment authorization and settlement of transactions. Transaction processing services often involved collaboration between an information transport service and an authorization provider, such as a bank.
- Groupware, employed within a secure, managed environment, which supported e-mail, calendaring, scheduling, real-time conferencing, information sharing and workflow management.
Internet e-commerce took off with the arrival of the Mosaic browser and the World Wide Web in the early 1990s, helped on its way by the liberalization process in telecommunications (see above) and a range of technical and networking innovations. In retrospect, the late 1990s can be seen as the initial phase of a long, and potentially profound, transition. Just like traditional commerce, electronic commercial activities involve four basic levels: a communications infrastructure, carrying messages about prices, quantities, service or product characteristics; a marketplace, the market coordination environment in which buyers meet sellers and negotiate (this, of course, encompasses intermediaries, allowing sellers to transact business with buyers); transaction mechanisms to send, execute and settle orders (including payments); and deliverables, the service or merchandise being exchanged (see Bar and Murasse, 1999; Pico et al., 1999).
What is meant by e-commerce? Part of the problem in coming to terms with e-commerce—and, indeed, one of the problems in measuring e-commerce—is the fuzziness of the concept. Sterret and Shah, for example, describe it as ‘a broad, somewhat vague term, that essentially represents any transaction handled electronically… it includes, but is not limited to, transactions on the Internet’ (1998: 43). Part of the problem is that early attempts to define the concept were provided by those who had very specific, and at times relatively narrow, research agendas. In the introductory paper inaugurating a new business journal devoted to e-commerce, Zwass (1996) defined electronic commerce as ‘the sharing of business information, maintaining business relationships, and conducting business transactions by means of telecommunications networks’—thereby reflecting the business school bent of the readership. Somewhat similarly, Applegate et al. (1996) reflect their MIS perspective when they define electronic commerce as the use of network communications technology to engage in a wide range of activities up and down the value-added chain both within and outside the organization.
In an effort to reflect the wider economic and societal impact of electronic commerce, the OECD (1997) defined e-commerce as generally referring ‘to all forms of transactions relating to commercial activities, including both organizations and individuals, that are based upon the processing and transmission of digitized data, including text, sound, and visual images’. And in attempting to incorporate the broader social impact, the European Commission (1997) went even further in scope:
Electronic commerce is about doing business electronically. It is based on the electronic processing and transmission of data, including text, sound, and video. It encompasses many diverse activities including electronic trading of goods and services, online delivery of digital content, electronic funds transfers, electronic share trading, electronic bills of lading, commercial auctions, collaborative design and engineering, online sourcing, public procurement, direct consumer marketing, and after sales service. It involves both products (e.g. consumer goods, specialized medical equipment) and services (e.g. information services, financial and legal services); traditional activities (e.g. healthcare, education) and new activities (virtual malls).
What we are seeing then is that doing business electronically will eventually encompass the same areas as ‘traditional’ business. Organizations are still exchanging information, marketing their products and services, buying and selling, recruiting new employees, gathering research, and providing customer service; but now, to a greater extent, paper-based and face-to-face transactions are being augmented, if not actually being replaced, by electronic means. So, while it is increasingly accepted that the Internet will transform business transactions and consumer life, the question becomes: how do we measure it?
In 1995, the American Electronics Association estimated that electronic commerce over the Internet totalled around US$200 million. By 1997, Nielsen Media was suggesting Internet-based sales had reached US$21 billion, and that 10 per cent of US companies were already offering products online. The WTO (1998), on the other hand, estimated the figure for e-commerce worldwide to be a more realistic, although still impressive, US$8 billion by 1998. If these figures seem disparate, they pale in comparison to the forecasts that followed. In 1999, the US Department of Commerce (1999) predicted that by 2002 the Internet would be used for more than US$300 billion worth of commerce between businesses alone. By 2000 they were estimating that the amount of business conducted directly on the Internet had increased in revenue to $171.4 billion from $99.8 billion a year earlier, as traditional brick-and-mortar retailers embraced Internet sales and companies began buying their supplies online. These estimates for e-commerce have varied for a variety of reasons, but mostly the differences are attributable to the definitional problems flagged above. Some estimates calculate only online transactions, while others include web-initiated transactions.
Nevertheless, five broad themes can be discerned in the emergence of electronic commerce. The first is the relative importance of time. Many of the routines that help define the ‘look and feel’ of an economy and society are a function of time: mass production is the fastest way of producing at the lowest cost; one’s community tends to be geographically determined because time is a determinant of proximity. And while ‘Internet time’ became a catchcry of the new economy, implying a greater premium on timeliness, electronic commerce also reduced the importance of time by speeding up production cycles, and enabling around-the-clock transactions.
The second theme is the disappearance of geographic (and economic) boundaries. The ability to communicate and transact business anywhere, anytime, changes not merely how business is done but where it is done and the extent of the potential market, through the erosion of economic and geographic boundaries.
The third theme is disintermediation. Traditional intermediary functions are being redefined and even replaced. In some cases this is leading to new and far closer relationships between business and consumers. In others it is leading to opportunities for new intermediaries to emerge and provide market coordination.
The fourth theme is open source. The widespread adoption of the Internet as a platform for business is due to its non-proprietary standards and open nature as well as to the huge industry that has evolved to support it. Openness has also emerged as a business strategy. This has led to a shift in the role of consumers; an expectation of openness is building on the part of consumers/citizens, which will cause transformations, for better (e.g. increased transparency, competition) or for worse (e.g. potential invasion of privacy), in the economy and society.
The fifth and final theme is a catalytic effect. Electronic commerce is serving to accelerate, and diffuse more widely, the changes that are already under way in the economy, such as the reform of regulations, the establishment of electronic links between businesses (EDI), the globalization of economic activity and the demand for higher-skilled workers. Likewise, many sectoral trends already under way, such as electronic banking, direct booking of travel and one-to-one marketing, are being accelerated because of e-commerce. This is changing the organization of work structures, increasing interactivity and the number of channels for knowledge diffusion in the workplace.
Broadly speaking, Internet-based electronic commerce can be broken down into three broad categories: business-to-consumer (B2C), business-to-business (B2B) and business-to-government. Taxonomies of emerging e-commerce models are beginning to appear (see, for example, Harrington and Reed, 1996; Kalakota and Whinston, 1996; Sawhney and Kaplan, 1999; Timmers, 1998). However, as we noted above in trying to assess the size and scope of the market, the emerging nature of the market itself makes it difficult to define, and existing taxonomies accordingly tend to blur the line between B2B and B2C e-commerce. Given how vague the boundary between the two fields is, this is hardly surprising.
Many of the advantages of e-commerce were first exploited by retail ‘e-businesses’ such as Amazon.com, eTrade and Auto-by-tel that were created as Internet versions of traditional bookstores, brokerage firms and auto dealerships. But in most parts of the world outside the United States, online consumer-oriented activity is still mostly informational rather than transactional, and although electronic payment systems have now appeared in most countries, traditional means of payment still dominate commercial transactions. Establishing a significant e-commerce presence remains costly, potentially risky, and therefore a barrier for companies—particularly small and medium enterprises (SMEs). This distinction between those that can afford to play the web globally and those that cannot may become an increasingly important issue, and may yet put paid to the concept of the Internet as the great leveller of opportunity.
Business-to-consumer electronic commerce can be broadly classified into two categories: the retailing of tangible and intangible goods. In the initial phase of e-commerce development, the goods retailed online were specific niche products such as computer software (US Department of Commerce, 1998). Intangible products and services sold and distributed online included software, music, magazine articles, news broadcasts, securities brokerage, airline tickets and insurance policies. However, the advent of the Internet has revolutionized the way transactions are conducted in industries from entertainment to banking, travel and insurance by commoditizing information (see Shapiro and Varian, 1999). Where this had its biggest early impact, though, was in the business-to-business or, perhaps, computer-to-computer arena.
As we have seen, B2B e-commerce, defined here as restricted to computer-to-computer commerce, actually began in the mid 1960s with EDI inspiring a rash of predictions about the advent of the paperless office. By the 1970s, EDI allowed businesses to exchange documentation remotely and securely. It was foreseen—correctly—that this would speed up transaction time and minimize transaction costs.
In the first wave of Internet e-commerce most of the attention focused upon the high-profile vendor examples such as Amazon.com, CDNow or eBay. And, even when attention began focusing on the B2B realm, it was on prominent, well-established firms such as Cisco and Dell that eliminated ‘old economy’ middlemen and sold directly to business consumers. But the real impact of B2B e-commerce has been taking place on a broader scale.
B2B e-commerce is transforming from simple buy-side and sell-side solutions to a world of electronic markets that enable companies to streamline their commercial processes and reduce operational costs by linking multiple buyers and sellers via the Internet. There are a growing number of electronic exchanges that serve specific industry communities. These online exchanges, or marketspaces, provide access to information about products from a variety of vendors, along with objective industry information (industry forms, research papers, newspaper and trade journal reviews, etc.). The rise of these collaborative online communities is fuelling the growth of online procurement and selling systems, as companies move to get on board with their particular industry site. The integration of online auctions to these sites will further increase the attractiveness of such communities.
The Role of Government
Governments have increasingly come to realize that they, too, have a major role to play in the development of e-commerce, by placing their own procurement procedures and government services online, thereby helping to increase adoption and dissemination. But to achieve even this level of e-commerce promotion and facilitation, governments have to rethink the way they are organized and the manner in which they respond to the economic and social needs of the community. The first reaction of policy-makers to the growth of the Internet was, in essence, to do nothing. The lagging development of new laws or ‘cyberlaw’ for cyberspace, however, merely exacerbated the problem, and the inability to keep pace with technological developments led to uncertainty and in many instances conflict.
As a result, there has been increasing pressure on governmental agencies, industry bodies, administration agencies, standards organizations and professional and public interest groups to come together to form uniform, universal standards that provide businesses and individuals with better guidance on what they should and should not do on the Internet; and provide a framework for dispute resolution and, eventually, security and redress. It is these areas of public policy and technical standards that present the greatest challenge for the adoption and implementation of electronic commerce on a global basis. Research is emerging which identifies many such issues related to global electronic commerce (GEC). These include: the impact on traditional business (Applegate et al., 1996), electronic payments solutions and security issues (Farhoomand et al., 1998), universal communications protocols and security concerns (Rietveld and Janssen, 1990), linguistics (Barnett and Choi, 1995), taxation laws and currency exchange (Deans and Kane, 1992), intellectual property and intrafirm transborder data flows (Rayport and Sviokla, 1994), shifting legal and social standards (Holbrook, 1997), and data security, privacy, technical security, and legal security, and the imposition of certain rules and obligations to allow business to function (Angelides, 1997).
Some of these challenges to e-commerce have already begun to be addressed by the public sector. The United States, the European Union, Japan and the OECD (1999) agree in their global e-commerce framework proposals on a number of the major policy challenges to be dealt with. These include the legal framework for Internet transactions (e.g. commercial code, intellectual property/copyright and trademarks, domain names, privacy and security); the financial framework (e.g. customs, taxation, electronic payments); and market access and trade logistics (e.g. market access to the Internet, access for suppliers over the Internet, content, shipping of goods, etc.).
The United States government in particular, in recognizing the need to address such issues, has actively sought to prepare a strategy which will accelerate the growth of GEC, and more particularly the Internet. In A Framework for Global Electronic Commerce (White House, 1997), an Interagency Working Group on Electronic Commerce under the guidance of US Vice-President Gore (see earlier) established a set of five principles to guide policy development. The framework suggested that: (1) the private sector should lead; (2) governments should avoid undue restrictions on electronic commerce; (3) where governmental involvement is needed, its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce; (4) governments should recognize the unique qualities of the Internet; and (5) electronic commerce on the Internet should be facilitated on a global basis.
It also made recommendations in respect of the following: tariffs and taxation, advocating that the Internet should be a tariff-free environment; electronic payment systems, and the need for flexibility in terms of any regulations imposed; the development of a universal commercial code for electronic commerce, which permits parties to do business with each other under the terms and conditions they agree on; intellectual property protection, and associated protection and authentication of products purchased and sold using electronic commerce; privacy, to ensure that people are comfortable doing business across this new medium; security and reliability of the telecommunications infrastructure; and information technology, as it relates to content and technical standards.
The following points summarize the issues and areas of government concern.
One major effort has been the Model Law on Electronic Commerce, established by the United Nations Commission on International Trade Law (UNCITRAL) to provide national governments with a framework for eliminating legal barriers to e-commerce. Its intention is to make electronic documents, such as those exchanged via EDI and e-mail, as legally valid as signed paper documents. In addition, international efforts in the area of authentication and certification technologies are continuing.
On the question of customs, it has been agreed by the major developed nations that, as far as possible, zero tariffs should be maintained for goods and services delivered over electronic means. However, existing tariffs would apply for physically delivered goods, even though they were purchased over the Internet. On taxation an uneasy consensus has emerged between the US and Europe that a moratorium on taxing electronic transactions should be adopted. Enforcing taxation once such a moratorium breaks down raises a host of jurisdictional dilemmas.
Electronic commerce is being shaped by, and increasingly will help to shape, modern society as a whole, especially in the areas of education, health and government services. Societal factors will merit attention from a public policy standpoint, two of which are first, access and its determinants (e.g. income) and constraints (e.g. time) (see Hudson in this volume for a discussion of universal access), and second, confidence and trust.
One of the key features of electronic commerce is the potential system-wide gains in efficiency to be reaped when firms are linked across industries. This suggests the need to widen the notion of ‘innovation’ from a focus on high technology in manufacturing to include consumer goods and services and to adopt a more systemic perspective. (See also the emphasis that Lamberton in this volume places on the organizational effects of information in the process of industrial change.)
Electronic commerce will increase international trade, particularly in electronically delivered products, many of which are services that have not yet been exposed to significant international trade but have been ‘traded’ through foreign direct investment or have operated on a global level only for large corporate clients. This change may come as a shock to sectors that have been sheltered by logistical or regulatory barriers. In addition, it will generate pressures to reduce differences in regulatory standards—accreditation, licensing, restrictions on activity—for newly tradable products.
Many electronic commerce products benefit from non-rivalry (one person’s consumption does not limit or reduce the value of the product to other consumers), network externalities (each additional user of a product increases its value to other users), and increasing returns to scale (unit costs decrease as sales increase). These factors create an environment where producers may engage in practices designed to establish themselves as the de facto standard. This can hinder innovation and competition. Another form of anti-competitive strategy is an attempt to restrict access to services through technological gateways. For example, throughout the 1990s cable and direct-to-home satellite television companies vied with each other to control what is termed within the industry ‘conditional access’ to the customer.27
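These properties can be sketched in simple stylized terms. The functional forms below are standard textbook illustrations rather than formulations drawn from the literature cited above: the first captures a Metcalfe-style network externality, in which a network’s value grows roughly with the number of possible user-to-user connections; the second captures increasing returns, in which a large fixed cost F is spread over output q with a small marginal cost c.

```latex
% Network externality (Metcalfe-style heuristic): value grows with
% the number of distinct user pairs in an n-user network
V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2}

% Increasing returns to scale: average cost falls as output rises,
% since the fixed cost F is spread over more units
AC(q) \;=\; \frac{F}{q} + c,
\qquad
\frac{d\,AC}{dq} \;=\; -\frac{F}{q^{2}} \;<\; 0
```

When F is large and c is close to zero, as is typical of digital goods, average cost falls toward c as output grows, which helps explain why early market leadership in such products tends to be self-reinforcing.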
Electronic commerce calls into question the applicability of retail regulations designed for a ‘bricks-and-mortar’ world, such as restrictions on the size of stores and opening hours, limitations on pricing and promotions, granting of monopolies for the sale of certain products (e.g. liquor) and permit and licensing requirements. In addition, regulations governing the cost and availability of non-discriminatory access to information and communications technologies (ICTs) are required if e-commerce is to flourish.
Conventional economic theory would suggest that governments should only subsidize basic ICT research. However, the experience of the past three decades shows that most of the major ICT innovations (e.g. time-sharing, networking, routers, workstations, optic fibres, semiconductors (RISC, VLSI), parallel computing), many of which are more applied or developmental in nature, are the result of government-funded research or government programmes. Another area of government concern is government itself going online. Online government procurement is widely regarded as a catalyst for promoting electronic commerce, while government providing services online to citizens not only can educate and promote e-commerce in society, but offers the potential to make government itself more accessible and open.
Conclusion: Productivity and Growth
A key economic impact of electronic commerce today is the reduction of firms’ production costs, which is identified as a factor that will spur the spread of e-commerce within and between businesses. By the very nature of the technologies that enable electronic commerce to take place, many of these businesses will be in the heartland of the ‘new media’ industries.
Although there are measurement problems associated with capturing the quality changes inherent in many of these activities, it is assumed that e-commerce will result in productivity gains (see earlier). Given that e-commerce is more a way of doing business than a sector, these gains could be distributed widely across OECD economies—including in the services sector, which has not enjoyed significant, measurable productivity gains in the past—and could help to enable long-term growth. And if this growth is widely distributed across the global economy, the ‘new economy’ will have acquired a sustainable material base.
As e-commerce evolves, it is likely to follow the ‘reverse product cycle’, in which process efficiency gains are followed by quality improvements to existing products and then the creation of new products. (Many of these will be part of the ‘new media’ sector: see Cooke in this volume.) Typically, it is in this final stage that significant economic growth occurs. E-commerce has the potential to be a platform from which significant new products emerge, many of which will be digital and delivered online. New products have a tendency to beget more new products and processes in a virtuous spiral, just as Edison’s electric lamp led to the development of power generation and delivery, which led to other electrical products. From a political economy perspective, the ultimate research question arising from the convergence of the technologies and commercial activities behind new media is whether the results will be as socially transformative as agriculture and manufacturing, or whether the new economy stops at the high street shop.