Stop Being Evil: A Proposal for Unbiased Google Search

Joshua G Hazan. Michigan Law Review. Volume 111, Issue 5. March 2013.

Since its inception in the late 1990s, Google has done as much as anyone to create an “open internet.” Thanks to Google’s unparalleled search algorithms, anyone’s ideas can be heard, and all kinds of information are easier than ever to find. As Google has extended its ambition beyond its core function, however, it has conducted itself in a manner that now threatens the openness and diversity of the same internet ecosystem that it once championed. By promoting its own content and vertical search services above all others, Google places a significant obstacle in the path of its competitors. This handicap will only be magnified as search engines become increasingly important and the internet continues to expand.

In order to mitigate the potential damage to competition, we must prevent Google from leveraging its power in core search to steal market share for its downstream vertical search services. Requiring Google core search to integrate its competitors’ vertical offerings would promote competition without intrusive administrative interference. But action must come soon. The next generation of online search is taking shape very quickly, and once it emerges, the dominant players will have already cemented their positions. Let us hope that when the dust settles it isn’t too late.

Introduction

Imagine a query for “Michigan Wolverines Football” that yields jerseys from Amazon, game statistics and the team injury report from ESPN, Coach Hoke’s latest press conference from YouTube, and game tickets from StubHub all on the same page. Such a search experience would be truly universal. Indeed, Google is increasingly moving toward a universal model with its “core” search, but all of its integrated “vertical” services are published by Google itself. Because Google wields such great market power in core search, this integration raises concerns that Google is leveraging its dominance to gain market share in vertical search.

Since its inception, Google has been largely celebrated for its contributions toward the progress of the internet. Indeed, Google has been a great pioneer in the digital age by making the internet’s vast stores of information accessible to average users. A loyal contingent of users has rewarded Google for its tremendous innovation, making it the most frequently visited website in the world. Its name has become a generic verb, synonymous with “to search the internet.” It ranks among the twenty most valuable public companies in the world, and it continues to grow at a torrid pace. Yet the past year has brought Google unwanted attention as well. It has begun to face scrutiny for the anticompetitive nature of some of its business practices, both in the United States and in the European Union. In particular, Google has come under fire for giving preferential treatment to its own proprietary services over those of its downstream competitors. This favoritism creates the potential for foreclosure.

Antitrust law promotes free market competition by regulating anticompetitive conduct by companies in positions of power. Its overarching purpose in doing so is to increase consumer welfare. In the United States, anticompetitive conduct by a single firm is governed primarily under § 2 of the Sherman Act, which prohibits monopolization and attempts to monopolize. Although the Federal Trade Commission (“FTC”) recently dropped its investigation, competition authorities around the world continue to investigate Google because of concerns that its practice of favoring its own products will foreclose existing and potential competitors, thus allowing Google to maintain and extend its market power.

This Note argues that by favoring its own proprietary (“vertical”) services in its general (“core”) search results, Google violates the spirit, if not the letter, of U.S. competition laws. Part I explores Google’s role within the greater internet ecosystem and weighs the potential consequences of Google hard-coding its own services at the top of the results page. Part II explains how Google’s conduct does in fact violate § 2 of the Sherman Act and § 5 of the FTC Act, despite the FTC finding otherwise, and identifies instances in which analogous conduct has been found illegal in the past. Part III proposes a remedy to Google’s search bias modeled after the Department of Justice’s settlement with Microsoft, whereby Google is required to allow competing publishers to integrate their services into a Google core search when a user so desires. Finally, Part IV examines the criticisms that policing Google is unnecessary and concludes that they are unconvincing.

I. Google’s Ability to Direct Internet Traffic

The advent of Google substantially improved the searchability and usability of the internet and thereby decentralized the flow of information. This Part argues, however, that Google has since become a bottleneck for the flow of information on the internet, and that it has exploited this power to disproportionately direct internet users to Google’s own content.

In the 1990s, the internet was extremely disorganized and its potential unrealized. There was a limited quantity and variety of content available, and accessing that content was often an arduous task. The search engines of the day, such as Lycos and WebCrawler, brought users results based on the number of times their keywords appeared, which severely limited their effectiveness. For instance, a search for “Microsoft” might have shown the websites of vendors who sold Microsoft products before Microsoft’s own website. Thus, the relatively tiny internet remained highly fragmented and information was often difficult to find. Internet users often needed to know what was available before even commencing their searches, and web pages drew traffic by proliferating their brands through conventional media.

Larry Page and Sergey Brin, Google’s cofounders, changed all this. They developed an algorithm that ranks web results based on “relevance,” which represents some combination of the web page’s presence in links from other sites and its popularity with users searching similar queries. Introduced in beta form in 1998, Google quickly drew praise for returning better results, despite indexing just a fraction as many web pages as its competitors. Its subsequent success has been well documented. By 2000, Google had indexed over a billion web pages, and by the time of its initial public offering (“IPO”) in 2004, it was responsible for processing 84.7 percent of all search queries on the internet. Google helped pioneer a world where internet users don’t need to know exactly what they are searching for before they search, and the company now enjoys the third-highest market capitalization among U.S. tech companies because of it. Yet at the same time, Google’s corporate pledge—called “Don’t Be Evil”—promises to emphasize ethics over profits. The preface of the pledge specifically states that Google’s services should give users “unbiased access to information.”
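To make the link-analysis idea concrete, the following toy Python sketch implements the core of the PageRank computation that Brin and Page described in their published 1998 paper. It is a simplified illustration with an invented three-page link graph, not Google’s production ranking code:

```python
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """Toy power-iteration PageRank: a page is important if important
    pages link to it (Brin & Page, 1998)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # a dangling page spreads its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Example: two pages linking to "c" make it the top-ranked result.
print(pagerank({"a": ["c"], "b": ["c"], "c": ["a"]}))
```

Even this toy version captures the key insight: a page becomes important when important pages link to it, a signal far harder for publishers to game than the raw keyword counts used by Lycos and WebCrawler.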

Despite the internet’s promise as a vehicle for decentralized speech, “bottlenecks” have emerged through which internet traffic must pass. Internet service providers (“ISPs”) like Comcast, which own and operate the physical bandwidth through which data transmission occurs, were the first such bottlenecks. There was a real danger that ISPs would slow down or block data transmission for entities that competed with them, until the Federal Communications Commission implemented “net neutrality” regulations proscribing ISPs from granting preferential transmission to those who pay for it. Google, in fact, spearheaded the net neutrality movement, thus ensuring that ISPs couldn’t decide the internet’s winners and losers. The following comes from Google’s own Public Policy Blog:

[I]nnovation has thrived online because the Internet’s architecture enables any and all users to generate new ideas and technologies, which are allowed to succeed based on their own merits and benefits. Some major broadband service providers have threatened to act as gatekeepers, playing favorites with particular applications or content providers, demonstrating that this threat is all too real. It’s no stretch to say that such discriminatory practices could have prevented Google from getting off the ground—and they could prevent the next Google from ever coming to be.

More recently, search engines have emerged as a new form of bottleneck. The owners of these bottlenecks can be likened to gatekeepers who “directly manipulate the flow of information—suppressing some sources while highlighting others—whether on the basis of intrinsic preferences or in response to inducements or pressures by others.” The unstructured nature of the internet makes this so. Google estimates that there are over one trillion unique URLs in existence with that number growing by several billion every day. Indeed, there is so much content that without an easy way to navigate through it, the vast majority of content would never be found. This would neutralize much of the internet’s potential; but search engines fill the internet’s structural void. A quick Google search for “Nike” yields links to sites selling Nike products, a map of local dealers, a compilation of news articles, and a stock quote for the company. Such a service is invaluable to internet users and publishers alike. Yet that same necessity gives Google its power.

In the United States, Google is used for nearly two-thirds of all internet search queries. In Europe, that number is closer to 95 percent. This dominance puts Google in a unique position to direct internet traffic. In fact, a page’s rank within Google’s core search results is strongly correlated with that page’s web traffic. Google’s defenders suggest that this strong correlation is simply indicative of Google’s superior “business acumen”—that is, it places the most relevant sites first. However, this ignores the strong determinative effect a page’s ranking can have on its traffic. Most users likely assume that the first few results for a given query are the most relevant ones and do not bother to question this assumption unless the link they choose differs dramatically from the content that they expected. Because the first few results attract the vast majority of clicks, Google has the ability to direct traffic on the internet. And since advertising revenue—the lifeblood of most web pages—is tied to a website’s traffic, Google is in a position to decide whose content flourishes and whose flounders. Oren Bracha and Frank Pasquale assert that with so many billions of web pages, “where both commercial and non-commercial speakers place great weight on attracting users’ attention, a high [search] ranking is critical to success.”

Of course, this is not entirely a bad thing. Google’s ability to help internet users discern the best sites from the rest is precisely what gives the search engine its value. We want Google to pick the winners and losers for us so that we don’t have to waste time sifting through the losers ourselves. But we also need to be able to trust that Google is showing us the most “relevant” results and not engaging in something more nefarious, which is where Google leads its users astray. Google leads its users to believe that its “organic” search results are based on “math” (algorithmic relevance), but this is in many cases false. Google determines relevance by combining algorithmic and editorial decisions, which means that many factors, including subjective ones, can influence a website’s ranking on the results page. For instance, a site can be demoted for punitive reasons, such as trying to manipulate the search engine through a practice called “search engine optimization.” Its ranking could also theoretically be altered for political reasons, such as censorship. But perhaps the most dangerous and the most realized risk is of Google manipulating rankings based on its own commercial interests. This manipulation frequently occurs when Google elevates its own proprietary services to the top of a search query, despite the fact that another result fits better with conventional notions of “relevance.” For example, as Benjamin Edelman and Benjamin Lockwood have demonstrated, searches for “email,” “maps,” and “video” bring up Google’s Gmail, Google Maps, and Google Videos applications as the first result for each. Similarly, it is no coincidence that the results for “Nike,” mentioned earlier in this Part, yielded a local map through Google Maps, store listings through Google Places, and news results through Google News. Indeed, in 1998, years before Google’s initial public offering, its founders themselves acknowledged the potential for commercial interests to bias search engine results.

These editorial decisions, which “hard-code” Google’s own services above those of its competitors, often go against conventional notions of relevance and can help defeat a competitor with a superior product. For instance, MapQuest was once the most popular service for internet maps, holding 57 percent of the online mapping market in 2007. By most conventional notions of relevance it should have been the first mapping site to appear. However, immediately following Google’s integration of Google Maps into its core search results, Google Maps’s market share skyrocketed, primarily at the expense of MapQuest’s. MapQuest was decimated almost instantly, not necessarily because Google’s product was better on the merits but because Google Maps enjoyed superior exposure on Google’s core search. And MapQuest is far from an isolated victim. When Google launched Google Finance in 2008, Yahoo! Finance was by far the most popular source for financial information on the internet. Google Finance did not even rank among the top ten but was listed atop Google’s search results for ticker symbols and has since surged through the ranks at the expense of sites like MSN Money and Forbes. Google Shopping was similarly struggling until Google pinned it to the top of its core search results.

Google imposes algorithmic penalties on legitimate websites that compete with Google’s services—particularly “vertical search” sites. By giving preference to its own products in the search rankings, Google is able to dominate markets for services for which there are much more popular, and in turn presumably much more relevant, competitors and industry leaders. Given how strongly a Google ranking not only correlates with but also determines a website’s traffic, this advantage is likely unassailable. Further, Google’s dominant search position allows it to topple established industry leaders, like MapQuest, virtually overnight.

Recently, Google has faced a wave of unwelcome attention both in the United States and Europe over the threat posed by search bias. In response to complaints by competing publishers, the European Commission and the Federal Trade Commission launched large-scale investigations into Google for anticompetitive practices. The FTC has since closed its investigation, citing insufficient evidence to support an antitrust case, but its European counterparts remain undeterred. Google also faced a number of congressional hearings in the United States during 2011. As Senator Mike Lee of Utah, the ranking Republican on the Senate antitrust subcommittee, explained, Google has “cooked it” so that its own products and services are given priority in its search results over those of competing companies. Former Senator Herb Kohl of Wisconsin, a Democrat and, until January of this year, chairman of the antitrust subcommittee, voiced similar concerns regarding Google’s recent acquisitions of content providers such as ITA travel software and restaurant-review company Zagat. He asked, in light of these acquisitions, “Does Google’s transformation create an inherent conflict of interest which threatens to stifle competition?”

I would answer Senator Kohl with a resounding “yes.” Google’s desire to obtain more traffic by prioritizing its own services in its search results threatens to stifle competition and ultimately hurt consumers. Today, Google’s own words about net neutrality can just as easily describe its own exclusionary conduct in search.

II. Applicability of Antitrust Law

This Part argues that Google’s practice of prioritizing its own services in its search results runs afoul of federal antitrust law. Section II.A explains the requirements of § 2 of the Sherman Act, which governs monopolization, and § 5 of the FTC Act, which is at least coextensive with § 2. Section II.B contends that Google’s conduct violates these laws.

A. Sources of Antitrust Law

The Sherman Act is the foundation for most competition law in the United States. Its fundamental purpose is to promote “a market-based economy that increases economic growth and maximizes the wealth and prosperity of our society.” It achieves this purpose by preserving competition, which spurs companies to reduce costs, improve the quality of their products, and innovate for future products, all of which improve consumer welfare.

The abuse of monopoly power is governed by § 2 of the Sherman Act. Section 2 says it shall be unlawful for any person to “monopolize, or attempt to monopolize, or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce among the several States, or with foreign nations.” Monopolies are undesirable because they have a tendency to raise prices while reducing output, quality, and innovation, which all harm consumers.

In order to violate § 2 of the Sherman Act, a firm’s unilateral conduct must meet two requirements. First, the firm must already possess or have “a dangerous probability of achieving monopoly power.” From an economic standpoint, monopoly power means “the power to control prices or exclude competition.” Second, the firm must acquire or attempt to acquire monopoly power through improper means, since “size does not determine guilt” on its own under the Sherman Act. This makes sense, because as the Supreme Court established in United States v. Grinnell Corp., monopoly power can result from “growth or development as a consequence of a superior product, business acumen, or historic accident.”

An important factor in determining whether a firm possesses monopoly power is the firm’s share in the relevant product market. The relevant product market is defined as the market for all products that are “reasonably interchangeable by consumers for the same purposes.” Though there is no definitive threshold for the market share of a monopoly, courts have estimated that a market share between 70 and 90 percent is indicative of a monopoly, while 50 percent is the absolute minimum for a finding of monopoly. Of course, market share is not an absolute litmus test for a firm’s ability to control prices and exclude competitors. Therefore, dominant market share is never a sufficient condition for a finding of monopoly power. For example, if smaller firms in the market can readily ramp up production in response to a reduction by the high-share firm, the high-share firm is unlikely to be found to possess market power because it will not be able to readily control prices.

The second requirement for a § 2 violation—anticompetitive conduct—may take a variety of forms, but tying and bundling are particularly relevant to this Note. Tying occurs when a firm sells a product in one (“tying”) market in which it has market power, only on the condition that the buyer also purchases that firm’s offering in a separate (“tied”) product market. Tying hurts competition in the tied market because firms in the tied market are unable to compete for customers who purchase the tying product. Similarly, “bundling” is a strategy by which a firm operating in two distinct product markets—dominant in at least one of them—prices a bundle of products from both markets such that it does not make economic sense for consumers to purchase from the firm’s “equally efficient” single-market competitors.
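A stylized numeric sketch may clarify why bundling can exclude rivals. The figures below are invented, and the calculation follows the “discount attribution” approach the Ninth Circuit adopted in Cascade Health Solutions v. PeaceHealth for testing bundled discounts:

```python
# Hypothetical prices: the firm is dominant in market A and merely
# competitive in market B. All numbers are invented for illustration.
price_a_alone = 100.0   # standalone price of the dominant product
price_b_alone = 50.0    # standalone price of the competitive product
bundle_price = 120.0    # discounted price for buying A and B together

# Discount attribution: assign the entire bundle discount to the
# competitive product B and ask what B's effective price becomes.
total_discount = price_a_alone + price_b_alone - bundle_price   # 30.0
effective_price_b = price_b_alone - total_discount              # 20.0

# A rival that sells only B, with costs assumed identical to the
# firm's own (here 30), cannot profitably match an effective price
# of 20, so the "equally efficient" competitor is foreclosed.
rival_cost_b = 30.0
print(effective_price_b < rival_cost_b)  # True: foreclosure concern
```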

The foregoing is not an exhaustive list of the many forms that anticompetitive conduct may take, as the Sherman Act does not specify exactly which types of conduct should be found exclusionary. Nevertheless, “[w]ithin the context of § 2 claims, the Supreme Court has recognized the impropriety of monopoly leveraging, i.e., the use of monopoly power in one market to strengthen a monopoly share in another market.” A claim of monopoly leveraging exists when the plaintiff can show that the defendant “threatens the [second] market with the higher prices or reduced output or quality associated with the kind of monopoly that is ordinarily accompanied by a large market share.”

Congress, through § 5 of the FTC Act, gave the Commission the power to detect and prohibit “unfair methods of competition.” U.S. courts have interpreted § 5 as enabling the FTC “to proscribe behavior beyond conduct prohibited by the other federal antitrust statutes, including Section 2.”

Until recently, “the Commission ha[d] not pursued free-standing unfair method of competition claims outside of the most well-accepted areas.” However, the Commission has “seen an increasing amount of potentially anticompetitive conduct that is not easily reached under [§ 2].” In response, the FTC has demonstrated a willingness to assert its § 5 mandate beyond conduct traditionally encompassed by the Sherman Act. For instance, the FTC recently settled a case with Intel, in which it alleged that Intel had fallen behind Advanced Micro Devices in the race for technological superiority for microprocessors and had resorted to anticompetitive practices like loyalty rebates for equipment manufacturers in order to stall competitors. The conduct did not fall neatly within a classic § 2 theory, but the Commission was able to extract concessions nonetheless.

According to Commissioner Rosch, § 5 also extends beyond § 2 in that it authorizes the FTC to protect consumer choice independent of the conduct’s price effects. That is to say that maintaining variety is an independent objective of competition law. This goal is not universally accepted, but it is potentially the most applicable to Google.

B. Application to Google

Google possesses a dominant position in the market for core internet search and uses its power to direct consumers to Google’s other proprietary services. This conduct violates § 2 of the Sherman Act and, alternatively, constitutes “unfair competition” under § 5 of the FTC Act.

Google meets the first requirement for finding a § 2 violation: possession of monopoly power. It processes nearly two-thirds of all internet queries domestically and, perhaps more importantly given the highly globalized nature of internet commerce, more than 92 percent of all queries around the world. Its deep integration with its own mobile operating system Android and with Apple’s iOS has allowed it to obtain a 97 percent share of the rapidly growing mobile segment of internet search, which is projected to make up more than half of all web searches by 2014.

When measured in terms of search advertising revenue—search engines’ primary source of income—Google is just as critical. It earns approximately 82 percent of all search advertising revenue. This is an important consideration in assessing Google’s market power because the company operates in a complex two-sided market. The services appear free to consumers but in fact are funded by advertising on the other side of the market. Like television and newspaper sponsors, Google’s advertisers also buy a service from the company: exposure on the internet. Google’s incredible share of advertising revenue clearly indicates its indispensability to online search advertisers and serves as a good proxy for its power over them.

If one uses the number of competitors as a metric of market concentration, Google is also dominant. It faces significant competition from just two other search engines—Bing and Yahoo!—and really only one considering that Bing now powers Yahoo!. Further, Microsoft is hemorrhaging money on Bing, which suggests that the continued viability of even this limited competition is hardly a foregone conclusion. The barriers to entry are incredibly high, which means that if existing competitors fold, no new rivals are likely to rise up to challenge Google in the future. Former Assistant Attorney General of the DOJ’s Antitrust Division Tom Barnett agreed:

If you have an 80 percent share of the market with barriers to entry, you have monopoly power.

Those barriers don’t come from the supposed cost of switching or clicking to another site. The barriers come from building an effective search engine. You need the scale, the volume of traffic that Google has to tune the engine, and it’s an ongoing process. Nobody else is going to catch Google, even if you had access to their algorithm today. They have market power.

Even Google’s own chairman and former chief executive, Eric Schmidt, has acknowledged the extent of Google’s market power. When Senator Herb Kohl asked Schmidt during a Senate antitrust subcommittee hearing in September 2011 whether Google is a monopolist in online search, Mr. Schmidt replied, “I would agree, Senator, that we’re in that area.”

By guiding users to Google’s own content regardless of whether a competitor’s content might be more relevant, Google meets the second requirement for a § 2 violation: anticompetitive conduct. To be sure, Google has not always done this. At the time of its IPO in 2004, Google was simply an internet search provider and featured little content of its own.

Since then, Google’s business model has changed significantly. The company now features a wide variety of proprietary content both developed internally and acquired elsewhere, including Google Finance, Google Maps, Google News, Google Travel, Google Flight Search, Google Places, Google Plus, Google Product Search, YouTube, and Zagat. These services perform specialized functions not interchangeable with Google’s core search function and therefore exist in separate product markets.

The services, many of them commonly known as “vertical search” sites, generally face much stiffer competition in their respective markets than Google does in core search. Google Finance competes with Yahoo! Finance, Google Maps with MapQuest and Bing Maps, and Google Flight Search with Expedia, Travelocity, Priceline, Orbitz, and a host of others. Yet because Google stood to profit more by keeping users on its own sites than by sending them away, it began to engage in the same practices its founders once ridiculed.

Despite the fact that Google represents itself to users as an unbiased search engine that ranks results purely based on relevance, Google executives have admitted otherwise. Marissa Mayer, a former Google vice president, explained in a 2007 speech that, in the past, Google ranked links in the following manner:

Based on … popularity … when we roll[ed] out Google Finance, we did put the Google link first. It seems only fair, right? We do all the work for the search page and all these other things, so we do put it first … And after that it’s ranked usually by popularity.

Ten years ago, some might have been inclined to agree with Ms. Mayer’s notion of fairness. For a search engine as powerful as Google, however, the practice is highly problematic. This is because, as discussed in Part I, consumers are increasingly reliant on search engines to navigate the ever-growing internet. Over 90 percent of consumers use search engines to navigate the more than one trillion existing web pages, and 88 percent of them click on one of the first three links in a set of results. Google’s own sites, rather than the most relevant sites, commonly occupy the most valuable real estate on a results page. By elevating its own content to the top of its results without necessarily “earning” that position based on the algorithm, Google generates traffic for its own services (and enjoys the resulting advertising revenue on those pages) while simultaneously depriving its competitors of that traffic.

Thus, competition between Google search verticals and its competitors is no longer based solely on the merits of the products that the companies have built. Google has created a built-in advantage for its vertical search services based on superior exposure, and it’s an advantage no other company can match. A competitor who develops a superior shopping comparison service, for instance, must still rely on searchers to look beyond Google’s “higher ranked” Product Search service to win business. The competitor’s handicap is further magnified in the maps market, where the Google Map appears at the top of the results page but the competing map requires at least a click-through and likely more data entry on a new page. This type of conduct, where Google owes the success of its search verticals not to “superior product, business acumen, or historic accident” but to the mere fact that they are subsidiaries of “the biggest kingmaker on this Earth,” is anticompetitive.

The notion that biased search constitutes anticompetitive conduct is hardly revolutionary. There is highly analogous precedent for such a finding in the government’s investigation of airline computer reservation systems (“CRSs”) in the 1980s. CRS software, which allowed travel agents to search for flights, was often owned by individual airlines and displayed that airline’s flights first, even if other carriers offered lower prices or nonstop service. Despite the fact that the travel agents were free to use a different CRS or run more searches to find other flights, the Department of Transportation promulgated rules prohibiting the CRSs from ordering flights based on any factor related to carrier identity.

The same principle applies here: Google ought not to rank results by any metric that distinctively favors Google. The consequences of doing so could be very harmful to competition and, in turn, consumers. For instance, companies like Yelp! and Nextag, which rely on Google to drive 75 and 65 percent of their traffic respectively, might not remain viable if they lose this traffic to Google’s proprietary offerings.

Admittedly, antitrust law exists to protect competition, not individual competitors. Nevertheless, these firms would likely “have to spend much more on advertising to make up for lost traffic coming from Google queries,” which is money they could otherwise put toward product development or other productive uses. Moreover, Google’s favoritism could threaten innovation generally. It is easy to see how an inventor could be deterred from investing his blood, sweat, and tears into his idea for a product with the looming threat of Google usurping it soon after. Indeed, the CEOs of both Nextag and Yelp! testified before the Senate Antitrust Subcommittee that neither would attempt to launch their companies today in light of Google’s current business practices. Google’s anticompetitive conduct has a chilling effect on competition that could resonate throughout future generations of the internet.

Google’s practices will result in an internet with fewer choices for consumers and businesses, higher prices, and less innovation. Obviously, for a sector of our economy that boasts 240 million domestic users and $170 billion in annual commerce, this is highly undesirable. Unfortunately, since the FTC abandoned its investigation, a direct settlement with the government in the near-term is unlikely. But the issue remains far from dead. A court could still impose a remedy through an injunction in a private lawsuit brought by Google’s vertical competitors. And the FTC or the DOJ can always reopen the investigation at a later date as competitive harms become more readily observable, as this Note anticipates. Part III of this Note proposes a remedy that will mitigate the anticompetitive effects of Google’s favoritism.

III. A Familiar Solution

This Part argues that the optimal solution to remedy the antitrust violations set forth in Part II is for Google to publish its Application Programming Interfaces (“APIs”). Other search verticals could then compete with Google on the merits of their services by integrating into Google core search. This type of solution, first mentioned by Edelman, finds precedent in Microsoft’s antitrust settlement over browser interoperability.

In United States v. Microsoft Corp., the government accused the software giant Microsoft of bundling its dominant operating system Windows with its web browser Internet Explorer, which had been competing primarily with Netscape. In order to prevent Netscape from gaining a foothold in the market for browsers, Microsoft entered into a number of agreements with original equipment manufacturers (“OEMs”) to install Internet Explorer in a prominent location on the desktop while either relegating Netscape’s browser to a less conspicuous location or excluding it altogether. The government also accused Microsoft of using the Windows source code to advantage Microsoft’s other offerings by allowing those offerings to access features that third-party software could not. The government alleged this was an instance of anticompetitive tying, as Microsoft was leveraging its position as a monopolist in the market for operating systems to advantage its other software in competitive markets.

Microsoft’s market power in operating systems was almost indisputable. Windows was installed on nearly nineteen out of every twenty Intel-based PCs and its market share was well fortified. In order for consumers to switch operating systems, they needed to spend hundreds, if not thousands, of dollars on new software or equipment, as well as put in the requisite time to learn how to operate the new system. This meant that the barriers to entry for competitors were enormous given the cost of developing an operating system and the arduous task of convincing consumers to migrate.

The Department of Justice worried that Microsoft would stifle innovation by handicapping competing application developers and limiting their ability to integrate with Windows. And so, as part of its settlement in 2001, Microsoft signed a consent decree in which it agreed to publish all the Windows Application Programming Interfaces that its other software used. This ensured that virtually all programs could integrate with Windows in all the same ways that Microsoft’s own products could. It effectively prevented Microsoft from using its operating system dominance to steal market share for its other product offerings.

This Note proposes a similar solution for Google. The anticompetitive threat is Google foreclosing equally or more efficient competitors by advantaging its own offerings on the Google core search platform in a way that competitors can’t possibly match. The solution is to neutralize the advantage to ensure that an equally efficient competitor can compete. In order to accomplish this, antitrust authorities should require Google to document its own APIs and allow competing content providers like search verticals to develop applications that integrate into a Google core search.

An API is a set of standardized requests that allows software programs to communicate with each other. Specifically, when APIs are defined for the host software, they allow other programs to call upon the host to “request services” that the calling programs could not otherwise duplicate themselves. “By providing a means for requesting program services, an API is said to grant access to or open an application.” It is analogous to “an electrical socket, which allows outside products (in this case, applications) to plug into the electrical system (in this case, [Google]).” With a common plug shape, devices from any manufacturer can draw on the system’s resources.
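To illustrate, the sketch below imagines what a minimal published contract for embedded maps might look like in Python. Every name in it (MapRequest, MapProvider, render_map, the example URL) is a hypothetical invented for this Note, not an actual Google interface:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class MapRequest:
    """A standardized request the host (core search) sends to a map provider."""
    query: str        # e.g., "Outback Steakhouse"
    latitude: float   # user's approximate location
    longitude: float

class MapProvider(ABC):
    """The published contract (the "electrical socket") that any qualified
    publisher could implement to plug into the results page."""
    @abstractmethod
    def render_map(self, request: MapRequest) -> str:
        """Return embeddable HTML for the search results page."""

class MapQuestMaps(MapProvider):
    """A hypothetical third-party "plug" conforming to the contract."""
    def render_map(self, request: MapRequest) -> str:
        return f'<iframe src="https://mapquest.example/embed?q={request.query}"></iframe>'
```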

In the Microsoft case, Windows was the host program (operating system) with which outside programs (Netscape) needed to communicate in order to operate. By withholding the operating system API that defined how a web browser calls upon Windows functionality, Microsoft was able to protect its own downstream product (Internet Explorer). Here, Google core search is the equivalent of a host program. Google’s downstream services “plug in” to its search functionality, which is how it knows which map to display when a person searches for a nearby restaurant. Under the status quo, Google is able to ensure that only its own downstream offerings are integrated into Google’s core search. But what if Google published APIs and made them accessible to third parties? History tells us that we could very well see the emergence of a vibrant network of developers building “applications” on top of Google’s core search. In the years after Microsoft agreed not to restrict its APIs, software like Apple’s iTunes has challenged the reign of Microsoft’s Windows Media Player while Mozilla Firefox and Google Chrome have eaten away at Internet Explorer’s dominance. Just as an open Windows helped drive innovation in software, an open Google could do something similar on the web.

Today, when a user searches for the name of a nearby restaurant, a Google Map might appear among the search results. With an open interface, Google’s advanced search algorithm would still determine when an embedded map would be appropriate, but the map would not necessarily be Google’s. For instance, MapQuest could develop a Google application that would display a MapQuest map on the Google search results page in place of the Google map. Google would not be required to display the MapQuest map on its own, but users would be able to select MapQuest as a “preferred provider” of sorts in their Google account settings. For that user, any search that would ordinarily yield an embedded Google map would produce an embedded MapQuest map instead. This would ensure that Google users who prefer MapQuest maps are not driven to switch to Google Maps merely because Google Maps requires fewer clicks from a search page. The same goes for Bing Maps and Yahoo! Maps. If one of them implements a new feature that distinguishes it from Google Maps and that users find desirable, it will be able to compete directly on the merits with Google Maps and no longer have to overcome the inconvenience factor for users who prefer Google for their core search needs.
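Continuing the hypothetical MapProvider sketch above, the preferred-provider mechanism might work roughly as follows. The provider registry and the settings lookup are assumptions for illustration only, not a description of Google’s systems:

```python
class GoogleMaps(MapProvider):
    """Google's own offering, implementing the same published contract."""
    def render_map(self, request: MapRequest) -> str:
        return f'<iframe src="https://maps.google.example/embed?q={request.query}"></iframe>'

# Hypothetical registry of all qualified "maps" applications, keyed by
# name and listed alphabetically so users can easily find their choice.
PROVIDERS: dict[str, MapProvider] = {
    "Google Maps": GoogleMaps(),
    "MapQuest": MapQuestMaps(),
}

def embedded_map(user_settings: dict[str, str], request: MapRequest) -> str:
    """Core search still decides WHEN a map is relevant; the user's account
    settings decide WHOSE map fills that slot."""
    choice = user_settings.get("maps", "Google Maps")  # passive default
    return PROVIDERS[choice].render_map(request)

# Example: a user who selected MapQuest as her preferred provider.
settings = {"maps": "MapQuest"}
print(embedded_map(settings, MapRequest("Outback Steakhouse", 42.28, -83.74)))
```

Note the design choice embedded in the sketch: the default remains Google’s own service, so the remedy changes nothing for users who never touch their settings.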

One can imagine how an open Google could lead to fierce competition in the downstream markets and, in turn, a thriving network of preferred providers. Consumers could designate their favorite offerings across a variety of categories of downstream services. For instance, today a Google search for “flights to Cancun” might yield results from Google Travel, Google Flight Search, and Google Places all on one page. In the future, however, the same search could yield flights from Expedia, hotel reviews from Priceline, and Cancun restaurant reviews from Yelp!. If Google publishes the best vertical offerings, then few users will switch vertical search providers and Google will be rewarded with the bulk of the vertical search traffic.

This solution also incorporates aspects of Microsoft’s 2009 settlement with the European Commission regarding abuse of its dominance in the market for web browsers. In that case, the European Commission alleged that Microsoft “distort[ed] competition on the merits between competing web browsers insofar as it provide[d] Internet Explorer with an artificial distribution advantage” over competitors. The terms of the settlement called for Windows to present users with a list of browsers in random order, including Internet Explorer, to choose from. By requiring users to make an active choice, the tie between Internet Explorer and Windows was cut. My proposal, similarly, would present users with the option of choosing their favorite map publisher, travel metasearch, shopping site, et cetera.

Unlike the EU Microsoft model, the selection screen in my proposal would not be limited to the most popular offerings arranged in a randomized order, the approach suggested by Edelman. Instead, it would display all qualified applications for that API. This avoids the hypocrisy of punishing Google for stunting competition in downstream markets only to “lock in” market share for an oligopoly of players in that same market. It also maintains enough flexibility to accommodate asymmetrical consumer preferences. For example, while users in Detroit will have the Detroit Free Press among their favorite newspaper providers, they are unlikely to be very interested in the San Francisco Chronicle and vice versa. In the market for news applications, a comprehensive list will ensure that everyone receives the benefits of integrated search instead of only those who share the most common tastes.

The solution I put forth offers several other advantages over Edelman’s as well. Unlike the browsers in Europe, the options here would be ordered alphabetically instead of randomly to ensure that users could easily locate their preferred provider. The alternative—theoretically unlimited applications in randomized order—could be so chaotic for consumers that it would diminish their added utility from this system and could potentially deter use of the system entirely. Further, the selection mechanism would not be an intrusive pop-up like the browser selection process Edelman suggests, but rather would be accessed through a user’s account settings.

The subtle selection process is actually a very important component of integrating vertical providers in order to avoid compromising the user’s experience of the Google platform itself. Under Edelman’s more extreme proposal, a user who does not actively make the selections will presumably receive no vertical search results—only options for providers—every time he runs a search without logging in first (as he might if searching from a different computer, for instance). This could actually drive users from Google’s core search altogether and sap public support for intervention. With a passive selection process it is fair to assume that fewer users will actively switch providers, but as long as the option is available to them it is difficult to say that the competition is unfair.

Some critics dismiss vertical search interoperability as unduly intrusive. They compare it to the integrated, multifunctional iPhone and conclude that it would be ridiculous to “require these innovators to reengineer their devices so that iPhone users can swap in a Canon or Sony camera instead of Apple’s chosen camera, or swap in a Zune or Rio music player instead of Apple’s iPod.” This analogy, however, is disingenuous. One reason my plan is desirable is that it would probably not be cost prohibitive for Google. Indeed, leading antitrust scholars explain that the Microsoft court was sensitive to the low cost of publishing APIs. That we have yet to see a cell phone with a swappable camera, but can find embedded third-party content virtually everywhere on the internet, illustrates how far-fetched the analogy is. Microsoft is a far better analogy.

An important qualification to ensure minimal intrusiveness is that Google would only be required to publish the “external” APIs of its core search (which allow it to communicate with other programs) and not “internal” ones (which are used exclusively by the core search service itself). Microsoft drew the same distinction in its own consent decree, and it would ensure that the remedy doesn’t burden Google’s innovation in core search itself. Google should also be permitted to maintain reasonable prohibitions on what sorts of programs can integrate with the core service in order to maintain its high-quality user experience. For instance, programs that compromise user or server security or run afoul of Google’s privacy policy could be blocked.
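A rough sketch of where the external/internal line might fall, using Python’s underscore convention for private helpers; the function names and the toy classification logic are invented for illustration:

```python
def embedded_result(query: str, user_settings: dict[str, str]) -> str:
    """External API: documented, stable, and open to third-party callers."""
    vertical = _classify_query(query)
    return _dispatch(vertical, query, user_settings)

def _classify_query(query: str) -> str:
    """Internal: proprietary relevance logic that would never be published."""
    return "maps" if "near me" in query else "web"

def _dispatch(vertical: str, query: str, user_settings: dict[str, str]) -> str:
    """Internal: plumbing that routes the query to the user's chosen provider."""
    return f"[{vertical} result for {query!r}]"  # placeholder behavior
```

Only the first function would need to be documented; the underscored helpers could change with every algorithm update, which is what keeps the remedy from burdening innovation in core search.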

The remedy would theoretically be limited in duration, as is typical in antitrust cases. A fixed time limit is not ideal in this case, however, since Google’s importance to the development of the internet is more likely to increase than decrease over time. Instead, the remedy could be conditioned on other factors. For instance, if Google’s share of core search were to fall below a certain threshold in the future (50 percent, for example), the requirements could sunset automatically. Or if technology changes in such a way that Google is no longer critical to online exposure, the company could seek to terminate the interoperability rules.

Such a system would maximize consumer welfare during each search (in the short run) by showing users the results from all of their most trusted search verticals, all without compromising the convenience of the current integrated, “universal search” format. More importantly, it would maximize consumer welfare in the long run by ensuring that Google has to compete with other search verticals on the merits of their offerings in order to earn consumers’ business. Thus, competition, and in turn innovation, will thrive. Ultimately, antitrust law in the United States is most concerned with the consumer. This remedy would take an important step toward protecting consumers far into the future, no matter how the internet may develop going forward.

IV. Response to Criticisms

This Part responds to three separate counterarguments put forth by opponents of neutral search. In response to the claim that competing sites are just “one click away,” Section IV.A asserts that users are unlikely to be aware of superior content and that the two-sided structure of the search market means Google’s advertising customers will still suffer. Section IV.B refutes the claim that Google is only one entity and thus is not “leveraging” its power in an anticompetitive way. Finally, Section IV.C responds to critics who question the wisdom of governmental meddling in such a dynamic industry by establishing that my solution will not detract from Google’s incentive to innovate.

A. “Just One Click Away”

Many critics of search neutrality attack the very notion that Google has monopoly power. The key refrain, initiated by Google and endorsed by many commentators, is that regardless of its market share, Google can’t possibly have monopoly power because competing products are “just one click away.” For its search engine, this means that if Google’s core search gives a lackluster performance, Bing and Yahoo! are easily accessible to users seeking more relevant results. For its search verticals like Google Maps, there are no obstacles to an unsatisfied user typing MapQuest’s website into his browser’s address bar. Unlike Microsoft’s Windows, which could not easily be abandoned by an unsatisfied user, Google’s audience is not captive.

Geoffrey Manne, one of the most outspoken opponents of search neutrality, asserts that there is no single, correct way to ascertain a website’s relevance, stating that “[r]elevance is a slippery and subjective concept, different for every user and every query, and there is no a priori way to define it.” He is correct. Google helps us differentiate between websites, which is exactly why we use a search engine in the first place. If Google did not discriminate, the search engine would be nothing more than a phone book for the internet. Thus, the extent to which a user agrees with Google’s rankings is precisely what gives Google its value, and if users disagree with Google’s determination, Bing and Yahoo! are “one click away.” For this reason, Manne concludes that regulation is unnecessary because the market dictates that Google will regulate itself. Since Google’s utility to users and, in turn, its attractiveness to advertisers depend on a high level of relevance, it undermines Google’s profitability to show less relevant sites first.

However, the “one click away” argument overlooks some significant points. First, it presumes that each user has a predefined conception of relevance. In reality, it seems much more likely that a user’s conception of relevance is influenced by the search results. Most people are likely to believe that the first listing they see is the most relevant listing. Therefore, instead of switching to Bing or Yahoo! if the results do not appear in the order the user expects, the user simply modifies his perception of the website’s relevance. Daniel Crane admits, “Empirical work shows that users place a large degree of trust in Google’s perceived neutrality in ranking relevance to queries, often substituting Google’s algorithmic judgment of relevance for their own evaluation of search result abstracts.” Bracha and Pasquale similarly conclude that manipulation is not likely to correlate with user defection. In fact, this entire argument depends on the highly questionable assumption that consumers even have the ability to detect the manipulation. A search engine as readily embraced as Google would likely need to establish a long pattern of highly irrelevant results to lose the esteem of its users.

Second, the “one click away” argument ignores the fact that Google’s customers are not only internet users but also advertisers. Advertising is Google’s primary revenue stream. This makes advertisers some of its most important customers and they are anything but “one click away.” If advertisers want to maintain a visible online presence, they are beholden to Google. If Google shuts out competing search verticals (whose ad space competes with Google’s), it could become the exclusive outlet for online advertising in sectors like online maps and travel search. The diminished competition would lead to higher prices for this often overlooked group of customers.

As a result, it matters little whether Google’s customers are truly captive in the same way Microsoft’s users were, and it would be a mistake to focus on that aspect of the analogy. Instead, the key focus should be on Google’s ability to lock out its competition, rather than lock in its users. Google may not be able to prevent its users from accessing its competitors’ offerings, but it can actively steer them toward its own. Consumers may prefer features of a competitor’s service, but that preference may not be enough to overcome the convenience of Google’s service integrated into their main internet portal. And for upstart sites that have yet to achieve widespread familiarity with the public and that need exposure to do so, Google’s favoritism may forever relegate them to the internet’s B-list regardless of their quality.

With search engine exposure likely to become even more critical to success in the future, the risk of the above scenario coming true is high. And in turn, the incentive to invest in a great idea is diminished. Perhaps the “next” Google—a highly successful business that earned its popularity through cutting-edge innovation and an even online playing field—would fail to materialize. This is bad for consumers in the long run, regardless of how many “clicks away” competitors are today. They would be better off, both in the long run and short run, if competing search verticals were available no clicks away, as described in the solution in Part III.

B. One Service

The second assertion Google’s defenders make is that Google Maps, Google Finance, Google Flight Search, and YouTube are not actually separate products from Google’s core search. The future of search, they contend, is more than just ranking web pages. The industry is moving toward “universal search,” where information of all kinds is available through a single query. According to Chairman Schmidt, “the question of whether we ‘favor’ our ‘products and services’ is based on an inaccurate premise. These universal search results are our search service—they are not some separate ‘Google content’ that can be ‘favored.’” To illustrate, when a consumer searches for Outback Steakhouse, he does not just want to see a list of websites related to his favorite purveyor of Australian cuisine. Instead, he wants to see the restaurant’s website listed next to a map featuring all the nearest locations, with user-written restaurant reviews for each. Likewise, he wants a search for “trip to Ann Arbor” to give him flight options, local hotel information, and Zagat’s recommendations for the best local brewery.

Search is moving away from the “ten blue links” display. Given the progression toward universal search, Google’s defenders contend that Google’s conduct is not anticompetitive with respect to search verticals and content providers. Rather, it is procompetitive with respect to Google’s core-search competitors like Bing and Yahoo!, which integrate their own proprietary offerings just like Google.

If we accept Google’s contention that all its services are merely components of one Google search, it becomes difficult to make the claim that Google is leveraging its power in one market to drive out competitors in other markets. Yet the idea of a single, integrated product market defies conventional notions of product markets, which require “reasonable interchangeability” between the competing products. For example, if a user is unsatisfied with the results for “pet food” on Google’s core search, does Google really expect her to switch to … Travelocity?

Commentators also disagree with the characterization of Google’s portfolio of services as a unitary product. Even the company contradicts this characterization by publishing an extensive list of “products” on its website. Thus, the unitary product contention ultimately appears to be a disingenuous decoy argument over semantics.

Further, acceptance of Google’s perspective could have disastrous effects for consumer choice in the internet ecosystem of the future. In Schmidt’s eyes, Google seems to be competing with only Bing and Yahoo! to determine who can amass the strongest network of content and vertical search services that consumers want to use for all their information needs. Since other third-party verticals are not really “competing,” they are merely collateral damage in a war of attrition between the search giants. Left unrestrained, it is not hard to see Google leading us into a world where maps, travel, video, shopping, and possibly even news are all found primarily through the “universal” networks of the major search providers. If you want a map, search Google. If you aren’t happy with Google, try Bing. Sure, competing products will still be “one click away,” but there very well may be only one competitor for any product offering. And it would likely never change, since the massive scale and capital required to compete in universal search would make new entry impossible. The “open internet” dream would die.

This is admittedly a somewhat extreme, apocalyptic prophecy for the open internet, but the risk of diminished content diversity on the web is very real. Universal search is undoubtedly a good thing for consumers. But it would be shortsighted to excuse Google from fair competition with services it admittedly competes with simply because it happens to compete with bigger fish too.

Finally, Google’s ambition in universal search is easily reconcilable with the proposal in Part III to make it compete fairly with vertical search services. The proposal simply calls for Google to allow competing search verticals to integrate into its core search in the same way Google’s own offerings can. Google could still provide a truly universal search experience. The only difference is that the content could come from sources other than Google if the consumer so desired. In fact, if Google’s endgame really is the market for universal search, implementing such a system might even be in its best interest. Consumers would love it, and Bing and Yahoo! would need to work diligently to catch up.

C. Don’t Stifle a Dynamic Industry

Finally, critics of search neutrality contend that, even if Google has monopoly power and even if it is using that power in a way that seems anticompetitive on the surface, we should not attempt to apply traditional antitrust principles to an industry as rapidly changing as the search industry. Google may be an empire today, but it did not even have a registered domain name a little more than fifteen years ago. In another fifteen, who knows what the landscape of the web will look like? We should not assume that just because Google dominates search today, it will dominate other forms of content in the future. After all, people shared the same concerns about lost diversity of content when America Online (“AOL”) was set to merge with Time Warner in 2000. Less than a decade later, the feared media dominance had failed to materialize, the marriage was over, and AOL had been relegated to a skeleton of its former self.

In the same vein, search-neutrality critics argue that in such a dynamic sector, government intervention to protect innovation might actually end up impeding it. Crane would attribute this to heavy-handed government regulation, as oversight over the “thousands of complex decisions” that Google makes each day would require “an army of bureaucrats … dwarfing the one mandated by the Microsoft consent decree.”

Manne’s camp views the problem slightly differently. Google itself is a great innovator, it argues, and has proven so time and again. Manne believes that Google will continue to innovate, but neutrality rules that would limit its potential advertising revenue (by preventing it from directing consumers to its sites) and put its source code at risk (by requiring it to publish its algorithm) would serve to reduce Google’s incentive to innovate.

These arguments all have merit. However, they are not compelling enough to warrant abandoning our long-established reverence for competition, especially given the manageable requirements of my solution. First, although we admittedly cannot foretell exactly how far Google’s dominion will reach, that is not a good reason to overlook the risk. Just because AOL’s widespread dominance failed to materialize does not mean Google’s will too, particularly because Google has a substantial amount of control over how the internet develops going forward.

Second, the burdensome regulatory regime that Crane anticipates will not arise under my proposed solution. Unlike Edelman’s solution, which calls for a blanket prohibition on favoritism, my proposal would not require an army of technical bureaucrats. This is because Google would only have to publish enough of its protocols to allow competing search verticals to plug into Google’s core search. The industry would therefore regulate itself, as competitors would monitor compliance. If Google does not engage in any foul play, it would likely be able to make its “thousands of complex decisions about the ordering of search results” unimpeded.

And while Manne is correct to note that Google has been one of the internet’s preeminent innovators, its accomplishments do not afford it special treatment under antitrust law. Google’s incentive to continue innovating would not be diminished under the proposal in Part III because if it stagnated in vertical search, competitors’ offerings would steal its market share. Likewise, if it failed to continue innovating in core search, it would lose market share to Bing and Yahoo!. In fact, Google’s motivation would likely be greater than ever because if it wanted to continue to earn advertising revenue from its search verticals, it would have to rely on the merits of its products instead of an artificial distributional advantage. Manne’s contention that forced publication of its algorithm would discourage Google from investing in core search is probably true, but not relevant in this case because this Note does not advocate such publication.

Conclusion

Since its inception in the late 1990s, Google has done as much as any company to create an “open internet” with its impressive innovation and its advocacy on behalf of net neutrality. However, as Google extends its ambition beyond its core function, the company has begun to threaten the very openness and diversity it once championed. Senator Richard Blumenthal likened Google to a racetrack owner. “You run the racetrack, you own the racetrack,” he told Schmidt, and now that Google owns some of the horses, “you seem to be winning.” In order to mitigate the damage to competition, Google must be prevented from leveraging its power in core search to steal market share for its downstream vertical search services. For now, the FTC has decided to kick the can down the road, but this Note expects that Google has not heard the last from the antitrust authorities. When a remedy does come, whether from the agencies or through the courts, we must be careful to avoid remedies like simple, blanket prohibitions on self-promotion that risk stalling the internet in the era of “ten blue links.” Requiring Google to integrate its competitors into its core search results would promote competition without intrusive administrative interference, and it would protect consumer choice while allowing online search to continue on its path toward becoming truly universal.