Andrew McAfee & Erik Brynjolfsson. Foreign Affairs. Volume 95, Issue 4. July/August 2016.
The promises of science fiction are quickly becoming workaday realities. Cars and trucks are starting to drive themselves in normal traffic. Machines have begun to understand our speech, figure out what we want, and satisfy our requests. They have learned to write clean prose, generate novel scientific hypotheses (that are supported by later research), compose evocative music, and beat us, quite literally, at our own games: chess, poker, and even Go.
This technological surge is just getting started, and there’s much more to come. For one thing, the fundamental building blocks that launched it will continue to improve rapidly. The costs of processing, memory, bandwidth, sensors, and storage continue to fall exponentially. Cloud computing will make all these resources available on demand across the world. Digital data will become only more pervasive, letting us run experiments, test theories, and learn at an ever-greater scale. And the billions of humans around the world are growing increasingly connected; they’re not only tapping into the world’s knowledge (much of which is available for free) but also expanding and remixing it. This means that the global population of innovators, entrepreneurs, and geeks is growing quickly and, with it, the potential for breakthroughs.
Most important, humanity has recently become much better at building machines that can figure things out on their own. By studying lots of examples, identifying relevant patterns, and applying them to new examples, computers have been able to achieve human and superhuman levels of performance in a range of tasks: recognizing street signs, parsing human speech, identifying credit fraud, modeling how materials will behave under different conditions, and more.
Building machines that can learn on their own is critical, because when it comes to accomplishing many tasks, we humans “know more than we can tell,” as the scientist and philosopher Michael Polanyi put it. Historically, this served as a hard barrier to digitizing much work: after all, if no human could explain all the steps followed when completing a task, then no programmer could embed those rules in software. Recent advances mean that “Polanyi’s paradox” is not the barrier it once was; machines can learn even when humans can’t teach them.
As a result, jobs that involve matching patterns in particular, from customer service to medical diagnosis, will increasingly be performed by machines. Because U.S. companies are both the world’s most prolific producers and the world’s most enthusiastic consumers of technology, many of the effects of the digital revolution will likely be seen first in the United States. Low-wage jobs are especially at risk: in its 2016 report to the president, the U.S. Council of Economic Advisers estimated that 83 percent of jobs paying less than $20 per hour could be automated.
Such a radical reshaping of work will call for new policies to protect the vulnerable while reaping the gains of the new age. The choices made now will prove particularly consequential. The wrong interventions will hurt the economic prospects of millions of people around the world and leave them losing a race against the machines, while the right ones will give them the best chance of keeping up as technology speeds forward.
How to tell the difference? Two basic principles should guide decisions: allow flexibility and experimentation instead of imposing constraints, and directly encourage work instead of planning for its obsolescence.
A More Flexible Economy
In times of rapid change, when the world is even less predictable than usual, people and organizations need to be given greater freedom to experiment and innovate. In other words, when one aspect of the capitalist dynamic of creative destruction is speeding up (in this case, the substitution of digital technologies for cognitive work), the right response is to encourage the other elements of the system to move faster as well. Everything from individual tasks to entire industries is being disrupted, so it’s foolish to try to lock in place select elements of the existing order. Yet often, the temptation to try to preserve the status quo has proved irresistible.
Even though the times call for flexibility, policymakers seem to be moving in the opposite direction. In recent decades in the United States, business dynamism and labor-market fluidity have in fact decreased. Entrepreneurship, job growth within young companies, and worker moves from one job or city to another have all shown steady declines that predate the Great Recession.
The decay of business dynamism appears to be the result of what the economist John Haltiwanger has characterized as “death by a thousand cuts.” Many of these cuts are restrictions placed on some kinds of work. According to the economist Morris Kleiner, whereas only around five percent of American workers in the 1950s were required to have a state license to do their jobs, by 2008, the figure had climbed to almost 30 percent. Some of the requirements are plainly absurd: in Tennessee, a hair shampooer must complete 70 days of training and two exams, whereas the average emergency medical technician needs just 33 days of training. As Jason Furman, chair of the Council of Economic Advisers, said in 2015, “Licensing may be contributing to a range of challenges facing labor markets, including reduced labor force participation, higher long-term unemployment, and higher part-time employment.”
Some states are already taking action. In early 2016, legislators in North Carolina proposed eliminating 15 licensing boards, including those for irrigation contractors and pastoral counselors. Such efforts should be expanded. It is far from clear how large the gains from easing excessive requirements would be, but it’s well worth finding out.
Leveling the Playing Field
Some of the barriers facing young, fast-growing, technology-centric companies today illustrate another kind of inflexibility: entrenched interests working to preserve their positions. Tesla sells its popular electric cars at fixed prices with no haggling, but laws preventing automakers from acting as retailers bar the company from doing so at its own facilities in six states, which together account for 18 percent of the U.S. new-car market. The ride-hailing company Uber has had to fight taxi regulators in city after city, even though customers clearly value its convenience and safety and drivers value its income and flexibility. These battles provide strong evidence of “regulatory capture,” a phenomenon in which agencies act on behalf of special interests instead of the public. Startups should certainly pay their fair share of taxes and operate safely, but they shouldn’t be kept out of markets by incumbents’ machinations.
In the regulatory wars between start-ups and incumbents, defenders of the status quo often claim to be fighting to maintain a level playing field. But today’s playing fields are far from level; they’re often tilted toward established companies. More fundamentally, many decades-old regulations designed to protect consumers from so-called information asymmetries no longer make sense in the information age. When it comes to many goods and services, consumers now know more than ever, from the exact route a Lyft driver took to the previous guests’ ratings of an Airbnb host.
The ability to rate Uber and Lyft drivers after every trip goes a long way toward explaining why they often take such care to keep their cars clean, and it provides an efficient way to weed out drivers who are less customer-oriented. Even the most diligent taxicab regulator would find it impossible to conduct meaningful observations that frequently. As Eric Spiegelman, the president of the Los Angeles Taxicab Commission, has admitted, “Uber’s method is better for passengers.” In more and more markets, as digital technologies make relevant information widely available, the need for centralized regulation should go down, not up.
Similar breakthroughs in transparency have transformed other parts of the economy, from ski resorts that cannot exaggerate their snowfall to airlines that cannot hide their record of on-time arrivals. There is little need for lemon laws, after all, when everyone knows which cars are the lemons. As technology races ahead, there will be substantial opportunities to relevel the playing fields on which businesses compete. The innovation surge that is under way now will highlight the stark differences between actual capitalism, where competition among companies yields great gains for people, and crony capitalism, in which incumbents and their allies in government strive to avoid disruptions. It’s clear which is the better type, and so policy should promote it.
Flexibility will also require better data, since experimenting works only if one knows whether a given effort is having the desired effect. It is unfortunate, then, that the U.S. Congress cut the budget for the Bureau of Labor Statistics by 11 percent in real terms between 2010 and 2015. Businesses, policymakers, and academics all make heavy use of the evidence collected by the federal government about the U.S. work force.
A much more encouraging development is President Barack Obama’s Open Government Initiative. In 2013, Obama signed an executive order making open and machine-readable data “the new default for government information.” At all levels of the government, the United States needs more such efforts, which would prove helpful to all sorts of decision-makers. As more and more digital data become available, there will be even more opportunities for sharing information to improve policy, and the government should play a key role in this process. As Larry Summers, a former secretary of the treasury, recently put it, “Data is the ultimate public good.”
The relationship between employers and the people who work for them is another area where the United States faces choices between rigidity and flexibility. Today, companies must designate their workers as either employees or contractors. This classification, which is overseen by the Internal Revenue Service, affects whether workers receive overtime pay, are eligible for compensation for on-the-job injuries, and have the right to organize into unions.
The last decade has seen a substantial rise in various forms of contracting. According to the economists Lawrence Katz and Alan Krueger, the percentage of American workers in “alternative arrangements,” including temporary staffing, contracting, and on-call work, increased from ten percent in 2005 to 16 percent in 2015. This trend should accelerate with the continued growth of the “on-demand economy,” epitomized by Uber and Lyft and by the freelancer marketplaces TaskRabbit and Upwork. Although only about 0.4 percent of the U.S. work force (about 600,000 people) currently earns a primary living through these digital intermediaries, this figure will likely grow rapidly.
These significant shifts in the nature of employment have prompted calls for rethinking the way workers are classified. Krueger and Seth Harris, a former deputy secretary of labor, have proposed the creation of a new “independent worker” designation. These workers would not be eligible for overtime pay or unemployment insurance. But they would enjoy the protection of federal antidiscrimination statutes and have the right to organize, and their employers, whether online or offline, would withhold taxes and make payroll tax contributions. Proposals such as this deserve serious consideration, including careful thought about how to implement them without making decisions about worker classification even more difficult. In fact, a more flexible approach might be to eliminate as many arbitrary distinctions between employees and independent workers as possible by making benefits portable rather than tightly linked to any particular employer.
It is tempting to protect the kinds of full-time salaried jobs that gave rise to the United States’ large and prosperous middle class. But policymakers should keep two things in mind. First, not everyone wants a classic industrial-era job. Second, it simply isn’t possible to regulate the postwar middle class back into existence. Attempts to do so (for example, by making it more difficult for companies to hire anyone except full-time salaried employees) will only result in a protected community of jobholders that shrinks over time and an ever-growing group excluded from participation.
More broadly, as technology transforms the economy, policymakers will face all manner of new and unpredicted choices. In making them, they should return to the basics: remove rigidities, provide flexibility, and boost resilience. Should schools have greater freedom to reward their best-performing teachers and remove their worst? Yes, especially in light of research showing how much teacher quality influences lifetime student earnings. Should entry-level workers have to sign restrictive noncompete agreements? No. Should the federal government experiment with extending student loan guarantees to nontraditional job-preparation programs, such as “nanodegree” courses and “coding boot camps,” even if they’re offered by unaccredited institutions? Yes.
Of course, flexibility and dynamism do not trump all other goals. Workplace health and safety are essential, as are clear property rights and legal protections that make it possible to assign responsibility for harms. The key is to distinguish legitimate protections from those that are designed primarily to protect incumbents and impede change.
Money for Nothing?
The second principle, that policy should directly encourage labor, has a straightforward justification: work’s value both for individuals and for communities goes well beyond its financial role. As Voltaire put it, “Work saves us from three great evils: boredom, vice, and need.” But isn’t work itself becoming passé, thanks to automation? A 2013 study by Carl Benedikt Frey and Michael Osborne of Oxford University, which predicted the automation of nearly half of U.S. jobs, would certainly seem to call for radical policy changes.
The most widely discussed of these nowadays is the provision of a universal basic income: a cash award given by the government to all citizens, regardless of need. A universal basic income has attracted broad support in the past, from Martin Luther King, Jr., to President Richard Nixon, and its popularity is once more on the rise. The governments of Finland and Switzerland, as well as several Dutch cities, have made moves toward rolling out a universal basic income. In the United States, the idea boasts a diverse group of champions, including the libertarian social scientist Charles Murray, the technology entrepreneur Sam Altman, and the former service employees’ union president Andy Stern.
A universal basic income has obvious appeal in a job-light future where a great many people can’t earn a living from their labor, but it would be prohibitively expensive to provide even a small universal income to a population as large as that of the United States. In 2014, there were about 134 million households in the country, averaging 2.6 people each. The federal poverty level that year for a household of that size was approximately $18,000 per year. A universal basic income of that amount, then, would cost about $2.4 trillion per year, or more than 75 percent of all federal tax receipts in 2014.
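The back-of-the-envelope arithmetic can be checked directly. The household and poverty-level figures below are the ones cited above; the figure for federal tax receipts (roughly $3.0 trillion in fiscal year 2014) is not stated in the text and is included here only for comparison:

```python
# Back-of-the-envelope cost of a universal basic income,
# using the 2014 figures cited in the text.
households = 134_000_000        # U.S. households in 2014
benefit_per_household = 18_000  # approx. federal poverty level for a 2.6-person household

annual_cost = households * benefit_per_household
print(f"Annual cost: ${annual_cost / 1e12:.2f} trillion")  # about $2.4 trillion

# Federal tax receipts in fiscal year 2014 (approximate; not stated in the text).
federal_receipts = 3_000_000_000_000
print(f"Share of receipts: {annual_cost / federal_receipts:.0%}")  # more than 75 percent
```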
At current levels of national income, this kind of universal basic income is unworkable. As a result, most realistic proposals for one today are far more modest and often not truly universal, since they would extend the cash award only to low-income groups. It is hard to see how less ambitious versions of the policy would mitigate the effects of large-scale, technology-induced joblessness.
Back to Work
Fortunately, there is no need for policies for a jobless economy yet, for the simple reason that the era of mass technological unemployment is not imminent. The Frey and Osborne study and the analysis in the Council of Economic Advisers’ report offered no time horizon for their job-loss forecasts. And as the authors of the underlying research acknowledge, its methodology relies on subjective judgments about jobs’ susceptibility to automation and makes no attempt to estimate any technology-enabled job gains. Nor is there any sign that the United States is currently approaching “peak jobs.” From the end of the recession in July 2009 to March 2016, the country saw net gains of, on average, more than 160,000 jobs per month. Over that time, the unemployment rate fell from a high of ten percent to five percent.
Despite this strong and consistent job growth, however, there are clear signs that this is an atypical recovery and that significant weaknesses remain in the labor market. The headline unemployment rate is so low in part because it is calculated based on the number of people who are actually participating in the labor force (that is, working or looking for work), and labor-force participation fell sharply during the recession and has been very slow to recover afterward. Since 2011, less than 82 percent of working-age Americans have participated in the labor force, a level last seen more than 30 years ago, when women had not yet begun working outside the home in large numbers. Unsurprisingly, wage growth has also remained anemic since the end of the recession.
Declining work-force participation is troubling not only because work provides income but also because it gives people meaning. The sociologist William Julius Wilson has argued that “the consequences of high neighborhood joblessness are more devastating than those of high neighborhood poverty,” and a great deal of research supports his view. As employment prospects have dimmed in recent years for the United States’ least educated workers, Robert Putnam, Murray, and other social scientists have documented troubling results: declines in social cohesion and civic participation and increases in divorce rates, absentee parenting, drug use, and crime. In 2015, the economists Anne Case and Angus Deaton published the alarming finding that although death rates in the United States have fallen steadily for most demographic groups, they have risen for middle-aged whites, and especially for those with less than a high school education (a group facing particularly sharp employment challenges). The increased mortality among this group was almost entirely due to three factors: suicide, cirrhosis and other chronic liver diseases, and acute alcohol and drug poisoning.
Of course, these social woes stem from many sources. But unemployment and underemployment no doubt contribute, and troubled communities would certainly benefit from more opportunities and incentives for work. As President Franklin Roosevelt once said, “Providing useful work is superior to any and every kind of dole.”
Because work provides benefits to individuals, households, and communities that go far beyond the money earned, policy should encourage employment. Unlike a universal basic income, wage subsidies do just that. In the United States, the Earned Income Tax Credit, which is administered through annual tax returns, offers a maximum yearly benefit of $6,242 for a family with three or more children. Whereas a universal basic income would be given unconditionally, the EITC is available only to people with wage income and therefore provides a direct incentive to work.
An experiment from the late 1960s and early 1970s offered clear evidence of the importance of such an incentive. Thousands of households in Denver and Seattle received differing combinations of a relatively generous basic income and a wage subsidy. The results were clear and consistent: in both cities, once the assistance started, both men and women worked fewer hours, and their marriages were more likely to dissolve. These declines were significantly associated with the basic income, but not with the wage subsidy, suggesting that it was the arrival of income without work that made things worse. Wage subsidies, by contrast, encourage people to work more hours (and increase their tax credit), as the economists Raj Chetty, John Friedman, and Emmanuel Saez have found of the EITC.
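The incentive structure of a wage subsidy like the EITC can be sketched as a simple piecewise-linear schedule: the credit grows with earnings, plateaus, and then phases out. The rates and thresholds below are hypothetical round numbers, not the actual EITC parameters:

```python
def wage_subsidy(earnings, phase_in_rate=0.40, max_credit=6_000,
                 phaseout_start=18_000, phaseout_rate=0.21):
    """Stylized EITC-like wage subsidy. All parameters are illustrative,
    not the real EITC schedule."""
    # Credit rises with earnings until it hits the maximum...
    credit = min(earnings * phase_in_rate, max_credit)
    # ...then shrinks once earnings pass the phase-out threshold.
    if earnings > phaseout_start:
        credit -= (earnings - phaseout_start) * phaseout_rate
    return max(credit, 0.0)

# In the phase-in range, each extra dollar earned raises the credit,
# so the subsidy directly rewards additional work:
print(wage_subsidy(5_000))   # 2000.0
print(wage_subsidy(10_000))  # 4000.0
# At higher earnings the credit phases out entirely:
print(wage_subsidy(50_000))  # 0.0
```

An unconditional basic income, by contrast, would pay the same amount at every earnings level, which is precisely the feature the Denver and Seattle results call into question.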
But for now, efforts to raise the minimum wage enjoy more popular momentum. At a time when the federal minimum wage stands at $7.25 per hour and no state has one higher than $10 per hour, many states and localities are facing loud calls to raise the minimum wage all the way to $15. Some of these efforts have been successful; New York and California are slated to raise their minimum wages to $15 in 2018 and 2022, respectively.
Raising the rewards for work is a laudable goal, but significantly higher minimum wages are not the best way to accomplish it. When labor becomes more expensive, companies tend to use less of it, all else being equal. It is true that across the large amount of research on minimum-wage hikes, the average finding is that they at most reduce total employment only slightly. But it is also true that estimates of the effects vary widely and that most of this research has examined only modest increases.
There is reason to believe that minimum-wage increases of 50 percent or more, even if phased in gradually, would worsen job prospects for the least affluent and least skilled workers, an especially undesirable outcome at a time of low work-force participation. As Arindrajit Dube, an economist who has studied previous minimum-wage hikes, has put it, “If you’re risk-averse, this would not be the scale at which to try things.” The safest combination of policies, therefore, is a moderate minimum wage together with a substantially expanded EITC or similar wage subsidy. Just as individuals should be encouraged to seek work, employers should be encouraged to provide it, and much higher minimum wages have the opposite effect.
Ever-smarter machines will prove transformative, just as electrification, internal combustion, and steam power were in earlier eras. New technology will create opportunities for vastly greater productivity and wealth but will also upend the labor market.
In times of disruption, it is impossible to predict exactly how the work force will be affected. The best strategy is not to try to slow the technology but to strive for flexibility, so that people, organizations, and institutions can learn and grow their way into a healthy future. Furthermore, given the importance of work beyond the income it generates, policy should encourage work rather than assuming we live in a world without the need for it.
It’s easy to be pessimistic about whether any of the proposed policies will be enacted. Polarization in Congress is at a postwar high, the 2016 presidential candidates have largely dodged fundamental questions about the challenges facing the economy, and the forces of inertia, as ever, remain strong. Policymaking will no doubt lag behind the technology.
But there are a few hopeful signs. One is that the EITC enjoys bipartisan support, with both Obama and Paul Ryan, the Republican Speaker of the House, in favor of making it more generous and extending it to younger workers. Both sides of the aisle appear to support policies that directly encourage work, perhaps because doing so comports well with the American preference for industriousness that has struck observers from Alexis de Tocqueville onward. It’s worth undertaking more experimentation in this area, in order to better understand the tradeoffs and incentive effects of variations of these policies.
The other principle, that policy should promote flexibility, is also gaining traction, albeit in a more piecemeal way. Some cities and states are working to ease job licensing restrictions and other rigidities and are growing more receptive to the companies and practices of the on-demand economy. Because regulations and policies exist at multiple independent levels (federal, state, and local), advocates of flexibility should probably not expect that fast and systematic action will bring it about. They can, however, continue to highlight its importance and conduct research to better understand why business dynamism is declining.
The rise of intelligent computers can and should be good news for the economy. It will bring great material prosperity, better health, and other benefits that can’t be foreseen. But a broadly shared prosperity is not automatic or inevitable. In the new age of machines, it will take humans to achieve that.