Michael K Gusmano. Bioethics. Editor: Bruce Jennings, 4th Edition, Volume 3, Macmillan Reference USA, 2014.
By any measure, the United States has the most expensive health care system in the world. Even with its enormous investment in health care, more than 48 million residents lack health insurance protection, and access to primary and specialty care is inequitable. US residents suffer from high rates of mortality and hospitalizations that could have been avoided with timely and appropriate access to care, and medical costs are often a source of financial strain for people with serious illnesses (Hacker 2008; Nolte and McKee 2012). Decades of evidence from health services and policy research suggest large regional variations in medical practice and medical spending that cannot be explained by the needs of patients or differences in outcomes (Wennberg and Gittelsohn 1973; Zuckerman, Waidmann, Berenson, and Hadley 2010; Skinner 2011). Together, these findings are evidence that the US health care system squanders resources and fails to address adequately the health needs of the country. Not surprisingly, public opinion polls regularly find that medical professionals and the public are deeply dissatisfied with the system and believe that major change is necessary.
Most countries in the developed world have either a national health service or some form of national health insurance system. In Australia and Canada, national health insurance takes the form of a government-run, “single-payer” system, but the more common model is a multipayer “Bismarckian” system. In 1883 Otto von Bismarck, the chancellor of Germany, expanded an existing sickness fund that had evolved from medieval guilds, made membership mandatory among employees earning under a certain salary, and enacted regulations that standardized the funds. Although there are important differences among them, all Bismarckian systems have multiple sickness funds that operate within a system of all-payer rate regulation. The government establishes the basic benefit packages that are covered by all funds. Payments to hospitals and physicians are established through annual negotiations between the sickness funds and providers, and the government establishes an annual budget or budget targets for the system. Single-payer national health insurance systems are financed primarily through general revenues, but Bismarckian systems rely primarily on dedicated payroll contributions from employers and employees (White 1995). Because all of these systems involve universal, or near universal, participation, they are able to pool the lifetime risk of poor health and the need for health care across the entire country and treat medical care as a “special” good that should be available on the basis of need, not ability to pay. In countries with national health services, most hospitals are public, specialists work as salaried employees in hospitals, and primary care physicians are private contractors. In countries with national health insurance systems, the health care delivery system includes a mix of public and private providers, but there is public health insurance based on social insurance principles.
The United States is unique in that it has neither a national health service nor a national health insurance system. Instead, it relies on a patchwork of public and private insurance with large gaps. It uses a social insurance model to finance care for older people and people with permanent disabilities (Medicare); a social welfare model for some people with low incomes (Medicaid and CHIP [Children’s Health Insurance Program]); and a subsidized employer-based health insurance system for a large, but shrinking, percentage of people in the workforce. Along with these public and private insurance programs, the US government operates a number of smaller health care programs, including the military health care system, the Veterans Administration health system, and the Indian Health Service for Native American and Alaska Native peoples, which are organized along the lines of a national health service (Oliver 2007). Even among people with insurance, the US system calls for large and growing out-of-pocket costs to guard against so-called moral hazard—the idea that health insurance, because it insulates people against the price of care, encourages them to consume health care beyond the point at which its marginal benefit equals its marginal cost (Stone 2011), as sketched below. Compared with the other wealthy nations of the Organisation for Economic Co-operation and Development (OECD), the United States has the highest per capita health care expenditures—public and private combined—and spends the highest percentage of its GDP on health care, yet it has the lowest public share of total health expenditures.
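The logic of the moral hazard argument can be stated in a standard textbook form; the notation here is introduced only for illustration and is not drawn from the sources cited above. Let MB(q) denote the declining marginal benefit of the q-th unit of care and p its price, assumed equal to its marginal cost:

\[
\text{uninsured: } MB(q_0) = p, \qquad
\text{insured, coinsurance rate } c \in (0,1): \; MB(q_1) = c\,p < p
\;\Longrightarrow\; q_1 > q_0 .
\]

The units consumed between \(q_0\) and \(q_1\) carry marginal benefits below marginal cost; this overconsumption is what cost sharing is intended to discourage.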
In contrast to the nation’s system of health care financing, the federal and state governments have a well-developed public health infrastructure (Gusmano, Rodwin, and Weisz 2010). At the federal level, this includes the US Food and Drug Administration (FDA), the Centers for Disease Control and Prevention (CDC), and the office of the US Surgeon General. State and local public health departments conduct epidemiological surveillance and operate a number of primary and secondary prevention programs.
The Patient Protection and Affordable Care Act (ACA), signed into law by President Barack Obama in 2010, marks the largest expansion of government in health care since the adoption of Medicare and Medicaid in 1965 and the first time the federal government adopted a law with the goal of providing (nearly) all citizens with health insurance. Although it closes many of the gaps in health insurance coverage, the ACA leaves the basic patchwork nature of the insurance system in place, is expected to leave more than 20 million residents without health insurance, and is not likely to control health care costs (Gusmano 2011). Although the law offers important insurance protection to millions of Americans who lack health insurance and helps to minimize regional inequalities in access to public insurance for the poor, the adequacy of this protection, in light of anticipated cost increases, is in doubt and the system remains fragile.
Health Policy and American Values
To what extent does the lack of national health insurance reflect American values? The lack of national health insurance, and the public sector’s relatively small role compared with its role in other developed countries, are often explained by reference to unique American values. Critics of expanding the role of government in health care—greater government involvement in and oversight of health insurance protection, increased regulation of the prices of medical goods and services, or enhanced efforts to address the social and economic determinants of health—argue that doing so is inconsistent with liberty. Scholars and elected officials who cite the relatively limited role of government, including the lack of national health insurance, as an American tradition often point to liberalism grounded in the ideas of the English philosopher John Locke (1632-1704). Locke, who argued for limiting the power of government in order to protect individual liberty, influenced the framers of the US Constitution (Locke 1924 [1690]).
Yet a focus on Lockean values alone is insufficient. The absence of national health insurance is consistent with the claim that Americans favor limited government, but the fact that the US incarceration rate is higher than that of any other OECD nation is not (Morone 2004). “Negative” liberty—the absence of obstacles—is a value that helps to shape US health policy, but this competes with other deeply held American values, including a belief in responsible stewardship and civic virtue. Indeed, Medicare, which covers older people and citizens with permanent disabilities, is the largest public health insurance program in the world. Carolyn Tuohy (1999) rejects the “American values” explanation of why the United States does not have a universal health system. Instead, she argues that, in the period following World War II, Americans, Canadians, and the British held comparable values with regard to health care but that the “accidents” of particular political decisions at critical moments in history propelled each country’s legal regimes governing health care in wildly different directions.
A more complete explanation must acknowledge opposition by powerful interest groups in the context of fragmented institutions along with a public that can be skeptical of big-government solutions (Steinmo and Watts 1995; Brown 2008). The existence of powerful interests that oppose health reform is not unique to the United States, but US reformers must combat these interests in a political system designed to limit major policy change. As Richard Neustadt puts it, the United States has a “government of separated institutions sharing powers” (1960, 28-29); built into the system are a host of veto points, or checks, that make it difficult to enact major policy change (Steinmo and Watts 1995). The ACA, like previous health reform proposals, had to run a legislative gauntlet that included three House committees, the House Rules Committee, and two Senate committees (Morone 2010). President Obama enjoyed significant Democratic majorities in both houses of Congress, but a complete lack of support by Republicans and divisions within the Democratic Party meant that the law passed in large part because of the administration’s efforts to “work around” the veto points created by the constitutional system.
What are the consequences of a fragmented health system for the US political economy? Does the absence of national health insurance undermine the political system? Advocates for national health insurance argue that health care inequalities undermine the goal of creating a republic because they weaken mutual respect among citizens. The founders of the American republic did not believe that civic virtue alone was a sufficient basis for republican government, but they clearly thought it was necessary. In the Federalist Papers, Alexander Hamilton argued that government was instituted because “the passions of men will not conform to the dictates of reason and justice without constraint” (no. 15). James Madison wrote: “Is there no virtue among us? If there be not, we are in a wretched situation. No theoretical checks, nor form of government, can render us secure” (quoted in Elkin and Soltan 1993, 139).
The communal provision of social goods, such as health care and education, helps to lessen the impact of material inequality. Vast material inequality can lead, and often has led, to moral condemnation of the poor, who are blamed for their poverty as though it were the result of their own moral failings (Weir et al. 1988). At a minimum, the organization of social welfare policy should avoid exacerbating material inequalities produced by market competition. Communal provision fosters deliberation and public-spirited thinking by reminding us that we are more than strangers bargaining with each other over the distribution of goods; we are a community (Elkin 1987). An extensive welfare state involving the communal provision of some public goods is the institutional structure that is needed to provide balance in a society with a large market-based economy. Markets help to limit government and provide some level of freedom, but they also tend to atomize and fracture society (Hayek 1944). The communal provision of public goods works to counter this effect by creating a sense of solidarity (Beauchamp 1988).
Health Care Financing and Delivery: An Overview
Since the late 1950s, employer-sponsored private health insurance has been the most common type of health insurance for Americans under the age of sixty-five. The creation of an employer-based health insurance system was an unintended consequence of several administrative, legal, and legislative decisions (Howard 1993). Because the United States relies on “tax expenditures” to subsidize private welfare benefits like employer-sponsored health insurance, the true scope of the US welfare state is “hidden” (Howard 1993). Employer-based coverage expanded between 1950 and the 1970s, but since that time the system has been eroding. In addition, employers have reduced or eliminated retiree health benefits.
Medicare, Medicaid, and CHIP
The publicly subsidized system of employer-based private health insurance does not address the health needs of people who are outside the workforce and who are not, typically, the spouses or dependents of individuals in the workforce. The adoption of Medicare and Medicaid in 1965 was an effort to fill in these gaps. After a nearly fifteen-year struggle that began during the Harry S. Truman administration, Medicare was passed in 1965, along with a “welfare bill” known as Medicaid to be administered by the states. The Medicare program includes hospital insurance, physicians’ insurance, and, since 2003, a separate prescription drug benefit. Medicare Part A, which follows President Lyndon Johnson’s original proposal, is a hospital insurance program built on the Social Security contributory model. Medicare Part B is a voluntary supplementary medical insurance program funded through beneficiary premiums and federal general revenues. In 1972 the program was expanded to include the disabled and people with end-stage renal disease. By 2013 Medicare provided health insurance to about 48 million Americans.
The 2003 Medicare Modernization Act, signed into law by President George W. Bush, brought significant change to the program’s scope and structure. It created Medicare Part D, which offers a prescription drug benefit. Beneficiaries who want drug coverage must enroll in either a stand-alone prescription drug plan or a private Medicare Advantage (MA) plan that includes Part D prescription drug coverage. Along with the creation of a prescription drug benefit, the 2003 act also provides incentives to encourage beneficiaries to select MA plans over the traditional Medicare program. By 2012 about 27 percent of Medicare beneficiaries (13 million people) were enrolled in MA plans, an increase of 10 percent over the year before (Gold, Jacobson, Damico, and Neuman 2013).
Medicaid is a social welfare program that is jointly financed and jointly administered by the federal and state governments. The federal government matches state spending based on a formula in which lower-income states receive a higher Medicaid matching rate than higher-income states. States that agree to participate in the program must meet minimum federal standards for eligibility and coverage, but for most of its history the law has allowed for enormous variation among the states. By 1990 the federal government had expanded Medicaid to include all pregnant women and children with incomes below 133 percent of the federal poverty level and required states to phase in coverage of all children in families with incomes below the poverty level.
The Affordable Care Act will expand Medicaid by an estimated 41 percent starting in 2014 (Holahan, Buettgens, Carroll, and Dorn 2012). It also creates more uniform national criteria for program eligibility. In 2012 the Supreme Court ruled that states do not have to participate in the ACA Medicaid expansion; states that do agree to participate will be required, starting in 2014, to offer coverage to everyone with incomes up to 138 percent of the federal poverty level. The federal government will cover 100 percent of the costs associated with the Medicaid expansion between 2014 and 2016 and no less than 90 percent of these costs in subsequent years.
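The two matching rates described above can be illustrated with a short Python sketch. The regular rate follows the statutory FMAP formula (1 minus 0.45 times the square of the ratio of state to national per capita income, bounded between 50 and 83 percent); the expansion rate follows the simplified schedule described in the text. The income figures below are hypothetical.

def regular_fmap(state_pci, us_pci):
    """Federal share of regular Medicaid spending (FMAP).

    Statutory formula: 1 - 0.45 * (state per capita income /
    national per capita income) ** 2, with a 50% floor and an 83% ceiling.
    """
    share = 1 - 0.45 * (state_pci / us_pci) ** 2
    return min(max(share, 0.50), 0.83)


def expansion_fmap(year):
    """Federal share for the ACA expansion population, using the
    simplified schedule described in the text: 100% for 2014-2016,
    90% (or more) thereafter."""
    return 1.00 if 2014 <= year <= 2016 else 0.90


# A lower-income state receives a higher match than a higher-income state.
print(round(regular_fmap(32_000, 45_000), 3))      # 0.772
print(round(regular_fmap(55_000, 45_000), 3))      # 0.5 (statutory floor)
print(expansion_fmap(2015), expansion_fmap(2020))  # 1.0 0.9

As the example shows, the formula redistributes federal funds toward lower-income states, while the floor guarantees that even the wealthiest states receive at least a 50 percent match.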
In 1997 the State Children’s Health Insurance Program (SCHIP), now known as CHIP, was created to expand public insurance coverage for children in lower-income families who are not eligible for Medicaid. It was established as a separate program with a funding model similar to that of Medicaid in that it provides federal matching funds for state programs over which states have significant discretion. The federal government provided states with $40 billion over ten years to provide expanded child coverage. As of 2007, CHIP covered 8 million children (Centers for Medicare and Medicaid Services 2012).
The Health Care Safety Net
Despite the patchwork of public and private insurance described above, as of 2013 more than 48 million Americans did not have health insurance. To care for the uninsured, the United States relies on a patchwork “system” of safety-net providers, including public and not-for-profit hospitals, federally qualified community health centers, school-based health centers, municipal/local health clinics, nonprofit Visiting Nurse Associations, family planning clinics, and public dental clinics. Along with public funding for these institutions, there are laws designed to increase access to care for uninsured patients. For example, in 1986 Congress enacted the Emergency Medical Treatment and Active Labor Act (EMTALA) as part of the Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA) (Pub. L. 99-272). The law provides patients with access to emergency medical care and prevents hospitals from “dumping” medically unstable patients who cannot afford to pay for their care (Zibulewsky 2001). Under the law, “any patient arriving at an Emergency Department (ED) in a hospital that participates in the Medicare program must be given an initial screening, and if found to be in need of emergency treatment (or in active labor), must be treated until stable” (Section 1867 of the Social Security Act, 42 U.S.C. § 1395dd). EMTALA does not, however, require these facilities to provide treatment beyond the point of stabilization.
Together, these institutions and policies play a vital role in providing access to the uninsured and Medicaid clients, but the growth in the number of uninsured and reductions in payments from public and private payers have undermined their ability to do so. In 2000 the Institute of Medicine concluded that the safety net was “intact but endangered.” In a 2011 article Mark Hall argued that these institutions may be politically vulnerable after the implementation of health reform because a larger percentage of the patients they treat may be undocumented immigrants, who are excluded from US public insurance programs.
Health Care Workforce
In most OECD health care systems, at least half of the physicians are in primary care. By contrast, in the United States about 70 percent of physicians are specialists, and only about 30 percent are in primary care. This matters because an effective system of primary care improves coordination and continuity of care (Starfield, Shi, and Macinko 2005). Health care systems with a higher concentration of primary care physicians enjoy higher life expectancy at birth, lower infant mortality, lower mortality from all causes, lower disease-specific mortality, lower rates of avoidable mortality, lower rates of hospitalization for avoidable conditions, and higher self-reported health status. A high concentration of primary care doctors is also associated with lower health inequalities.
Evolution of Health Care Policy since 1960
There is a long history of failed efforts to adopt some form of national health insurance in the United States. In the early 1960s, liberal advocates stepped away from calls for national health insurance in favor of a program that would cover older people who qualified for Social Security. Until 1965 the efforts of organized medicine, combined with a lack of political will on the part of Congress and successive presidents, worked to defeat proposals for old-age hospital insurance during the Truman (1945-1953), Dwight D. Eisenhower (1953-1961), and John F. Kennedy (1961-1963) administrations. But the 1964 election of Lyndon Johnson, along with a huge Democratic majority in Congress, led to what Theodore Marmor in a 1973 book called “the politics of legislative certainty” for health care reform, and Medicare finally passed the following year.
President Richard Nixon’s Comprehensive Health Insurance Plan (CHIP) was one of the most prominent efforts to reform the health insurance system in the second half of the twentieth century (Kingdon 1995). By the time Congress was ready to hold hearings on health care reform, however, the Watergate scandal, which began in 1972, had engulfed Nixon’s presidency, preventing him from dedicating time or energy to negotiating an agreement with key congressional leaders. President Jimmy Carter (1977-1981) also took up the cause of national health insurance. The Carter principles for health reform included comprehensive coverage, freedom of choice, “aggressive” cost containment, a “significant” role for the private insurance industry, and “appropriate” government regulation. Carter, however, was more focused on cost control and did not pursue national health insurance aggressively. He attempted to bridge the differences between liberal and conservative factions in Congress by proposing an expansive version of catastrophic coverage that included expanded benefits for women and children and also relied on hospital cost containment regulation. Hearings were held in both the House and the Senate during the 96th Congress, but the proposal was never reported out of committee.
After more than a decade of dormancy during the Ronald Reagan (1981-1989) and George H. W. Bush (1989-1993) presidencies, the issue of national health insurance reemerged in the fall of 1993. On September 22, 1993, President Bill Clinton presented his health reform proposal to a joint session of Congress. This reform initiative was the most comprehensive since Truman’s failed attempts to pass compulsory national health insurance. The debate began with an air of inevitability. The precise direction of reform was in question, but it appeared that health care reform was an issue whose time had come. After a series of missteps by the administration and concerted opposition by the plan’s detractors, Senate Majority Leader George Mitchell announced on September 26, 1994, that the Senate was abandoning efforts to enact comprehensive health care reform for the year (Priest 1994). Health care reform died and remained off the policy agenda until the presidential election campaign of 2008.
In contrast to President Clinton, President Obama did not develop a detailed health reform plan. Instead, he outlined eight broad principles for health reform in his fiscal year 2010 budget: (1) protect families’ financial health; (2) ensure affordable, quality health coverage for all Americans; (3) provide portability of coverage; (4) guarantee choice of doctors; (5) invest in prevention and wellness; (6) improve patient safety and quality of care; (7) end barriers to coverage for people with preexisting medical conditions; and (8) reduce the long-term growth of health care costs for businesses and government (Orszag 2009). The administration and Democratic congressional leaders also met with officials from several stakeholder groups in an effort to bring about consensus in support of an individual mandate and other dimensions of the Democratic health reform ideas (Pear 2009). Equally important, President Obama strove to co-opt stakeholders, particularly those who had worked against reform in the past. He was able to avoid some of the negative reaction faced by the Clinton plan because he relied on a long implementation period rather than on provider fee regulation or some other mechanism that would have threatened entrenched interests. The ACA was signed into law on March 23, 2010, with its most costly provisions set to be implemented beginning in 2014.
Policy Responses to the Issue of Cost
Expanding health insurance coverage has not been the only focus of federal policy makers. By 1970 both liberal and conservative critics of the system began to point fingers at providers as the cause of the health care system’s problems and the major obstacle to its reform (Starr 1982). Although the language of “crisis” dominated the health care debate from 1969 on, concerns with cost began almost immediately after the Medicare and Medicaid programs were adopted. In 1966 the Congress passed the Comprehensive Health Planning Program to fund voluntary planning agencies. These public-private partnerships were dominated by physicians and hospital representatives who had no interest in controlling costs and were thus ineffective (Morone 1990).
During the Nixon administration, a number of programs, based on both regulatory and competitive models, were adopted to contain costs. The first regulatory effort was the establishment of professional standards review organizations (PSROs). In addition to controls placed on capital expenditure, the 1972 amendments to the Social Security Act mandated the establishment of PSROs to monitor the utilization and quality of local medical services. These efforts were severely limited because the legislation did not permit anyone but physicians to participate in PSROs. As a result, these organizations focused their attention on promoting quality rather than policing fellow providers for engaging in “inappropriate” utilization (Morone 1990).
The National Health Planning and Resources Development Act of 1974 established local health systems agencies (HSAs) and required all states to adopt certificate-of-need programs. The HSAs were supposed to help coordinate existing public and private health care resources and were under tremendous pressure from the Department of Health, Education, and Welfare (HEW) to do something about health care costs. The department restricted the ability of the health care industry to appoint HSA board members and threatened to terminate HSAs if costs were not cut. Despite the great hopes for the HSA program, these institutions were not successful (Starr 1982; Morone 1990).
The second strategy used by Nixon was to promote the use of health maintenance organizations (HMOs) to introduce “competition” into the health care system. HMOs avoided the pitfalls of fee-for-service reimbursement, which in effect penalizes providers for returning patients to health; instead, they encouraged physicians to find lower-cost methods of treatment and to promote health. In 1971 Nixon announced a “new national health strategy,” the centerpiece of which was the establishment of grants to encourage the development of HMOs. The business community, as well as many Republican governors, including Ronald Reagan in California and Nelson Rockefeller in New York, began to promote the use of HMOs as a solution to the health care “crisis” (Starr 1982).
In 1973 Congress passed the Health Maintenance Organization Act, designed to aid the development of HMOs. It required businesses with more than twenty-five employees to offer at least one qualified HMO to their employees. It also provided grants to develop new HMOs. This effort to engage in what Lawrence Brown, in his 1987 book, calls “decentralized market building” was not successful. According to Brown, the proliferation of HMOs was much too slow to have an impact on health care costs, which prompted renewed calls for more centralized market building during the Carter and Reagan administrations.
By the late 1970s the urge to contain health care costs had clearly eclipsed the desire to expand access. President Carter’s 1977 Hospital Cost Containment Act called for limiting increases in hospital charges, and he asked Congress for quick passage of the legislation. The Carter administration “made hospital cost containment [its] top priority in health and one of [its] top priorities in domestic policy” (Kingdon 1995, 23). Eventually, a version of the Carter bill passed the Senate, but after a massive lobbying effort by the American Hospital Association (AHA) and the American Medical Association (AMA), it died in the House in 1978.
President Reagan hoped to address costs through the use of market-based reform. Ironically, however, the most significant and long-lasting change to the health care system adopted during the Reagan administration was actually a complex, highly regulatory system of hospital reimbursement. The adoption of Medicare’s prospective payment system (PPS), which uses diagnosis-related groups (DRGs) to reimburse hospitals under Part A of the program, represented a radical change in the relationship between the federal government and health care providers. Under the system adopted in 1983, Medicare sets standard payments for hospitalization based on the patient’s diagnosis, after adjusting for the average cost of care in the area. This system is supposed to force hospitals to become more efficient. If a hospital is able to treat a patient at a cost that is less than the amount for which it is reimbursed, it is allowed to keep the difference; if it costs more to treat a patient than the DRG rate, the hospital loses money. Although PPS has not solved the problem of health care costs in the United States, it represents one of the more successful efforts to limit the rate of inflation in Medicare.
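As a rough illustration of this payment logic, the Python sketch below computes a DRG payment as a standardized base rate, adjusted for local labor costs, multiplied by the DRG’s relative weight. All figures and the labor-share parameter are hypothetical, and the real PPS includes further adjustments (for teaching hospitals, disproportionate-share payments, outlier cases, and so on) that are omitted here.

def drg_payment(base_rate, drg_weight, wage_index, labor_share=0.62):
    """Simplified DRG payment: the labor-related portion of the
    standardized base rate is adjusted by the local wage index, and
    the adjusted base is scaled by the DRG's relative cost weight."""
    adjusted_base = base_rate * (labor_share * wage_index + (1 - labor_share))
    return adjusted_base * drg_weight


# Hypothetical figures: a $6,000 base rate, a DRG weight of 1.5,
# and a wage index of 1.1 for a relatively high-cost labor market.
payment = drg_payment(6_000, 1.5, 1.1)
cost_of_care = 9_800

print(round(payment, 2))                 # 9558.0
print(round(payment - cost_of_care, 2))  # -242.0: the hospital absorbs a loss

Because the payment is fixed in advance for a given diagnosis, a hospital that treats the patient for less than the DRG amount keeps the surplus, while one whose costs exceed it, as in the example, absorbs the loss.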
The ACA and the Effort to Achieve “Value for Money”
Although Medicare’s prospective payment system has helped slow the growth of hospital expenditures, the overall health care inflation rate continues to outpace the general rate of inflation. Thus controlling health care costs remains high on the national policy agenda. There has, however, been an effort to redefine the issue of cost to focus on generating greater value for money (Gusmano and Callahan 2011). The new focus can be attributed, in part, to research suggesting that the benefits of some health care interventions far exceed their costs (Cutler and McClellan 2001). Rather than reduce spending on health care indiscriminately, many policy makers are calling for an appropriate evidence base to compare the effectiveness of various health care interventions and select those that generate the largest return on investment.
Interest in comparative effectiveness research (CER) grew throughout the 2008 presidential campaign, and the ACA included significant federal funding to expand it (Iglehart 2010). To address concerns that this analysis would lead to rationing of care on the basis of cost, the ACA prohibits the federal government from considering costs when conducting CER, and CER cannot be used for coverage, reimbursement, or other policy recommendations.
Along with the development of CER, the ACA encourages the Centers for Medicare and Medicaid Services to promote the creation of accountable care organizations (ACOs). Like the original HMOs, ACOs are affiliations of health care providers that accept risk for a population of patients. They are an organizational mechanism designed to give health care providers an incentive to offer high-quality care at the lowest possible price (Greaney 2011). In addition, the ACA promotes primary care medical homes, bundled payment, and pay-for-performance initiatives in the hopes of generating value for money. Despite the array of initiatives supported by the ACA, there is little expectation that any of these reforms is likely to “bend the cost curve” (Oberlander 2011).
Is the failure to adopt more aggressive cost control measures a problem that needs to be solved? Most economists argue that there is no “right” amount of social resources to dedicate to health care. Indeed, some argue that it may be reasonable for the United States to spend an even higher percentage of its resources on health care (Fogel 2000). Once we recognize that the amount of resources a country spends on health or any other good is a political question, it is crucial to assess whether our decisions about health care reflect the preferences of citizens or of special interests and political institutions incapable of making difficult choices and acting in the public interest.