Jim Baumohl & Jerome H Jaffe. Encyclopedia of Drugs, Alcohol, and Addictive Behavior. Editor: Rosalyn Carson-DeWitt. 2nd Edition, Volume 3, Macmillan Reference USA, 2001.
The history of the treatment of alcohol and other drug problems is often assumed to be a straightforward story of progress—moralism, neglect, and brutality were displaced by scientific knowledge, medical activism, and professional civility; a view that the addict exercised free will in choosing to use drugs was succeeded by an understanding of how a “disease” or “disorder” could overrule the capacity to choose.
This assumption is historically incorrect. First, it neglects the coexistence and mutual influence of views emphasizing free will or social or biological determinism. While one view may have enjoyed greater influence at a given time, its competitors have never been vanquished. No generation has any more solved the puzzle of addiction than it has resolved the related enigmas of the relationship between mind and body, choice and compulsion. Second, it is equally incorrect to associate condemnation and neglect with the free-will position or kindness and activism with the determinist perspective. The truth is more complicated.
As various studies have demonstrated, there is a tenacious American folk wisdom about addiction. Simply put, it goes as follows: While addicts experience a compulsion to take a drug, this develops as the result of repeated bad choices that are socially influenced; further, addicts can rid themselves of compulsion only by developing self-discipline, perhaps with some skilled influence in the form of treatment. Thus, in our culture, and despite the modern message that “addiction is a disease like hypertension or diabetes,” addicts are understood to be both sick and immoral, blameless and culpable, free and determined. In the popular mind, and among treatment professionals, addicts are ambiguous characters.
The history of treatment in the United States reflects this cultural dilemma. Cultures limit the range of possible responses to a problem, and because they tend to change very slowly in fundamental ways, to the extent that an important problem recurs or remains unsolved, the range of possible responses will be explored repeatedly as new generations search for fresh insights and effective methods of intervention. At various times, treatment has embraced exhortation and coercion, sermons and miracle drugs, democratic mutual aid, and autocratic professional prerogative—often simultaneously.
The Premodern Era
Modernity has different meanings with respect to the treatment of habitual drunkenness and drug addiction. In the case of habitual drunkenness, the modern era is traceable to the birth of Alcoholics Anonymous (AA) in 1935. In the case of drug addiction, delineating historic periods is more difficult, but we will mark the modern era by the introduction of methadone maintenance (for heroin dependence) in 1965 and passage of the federal Narcotic Addict Rehabilitation Act (NARA) in 1966.
We should also clarify our choices of terminology. The terms alcoholism and alcoholic date from the middle of the nineteenth century, but they did not come into common professional use until the early twentieth century and were not embedded in the American vernacular until after the rapid growth of AA during the 1940s. The more common professional terms in the premodern era were inebriety and inebriate, but as these often were used to refer to a heterogeneous group now called “substance abusers,” we will use the durable term drunkard when writing about this era. Similarly, the term drug addict was not in common use until the early 1900s; before this time habitual users of drugs were known as “morphinists,” “cocainists,” or sometimes, “dope fiends.” In order to speak generally and to avoid pejorative (if historically accurate) terminology, we will use drug addict, and we will use addict and addiction when speaking of both habitual drunkards and drug addicts.
The Treatment of Habitual Drunkards
The Tradition of Mutual Aid
The organized, specialized effort to help habitual drunkards began with the Washington Total Abstinence Movement in 1842. This Washingtonian Movement stands at the head of a tradition of mutual aid that developed throughout the 1800s in close connection to American Protestantism, particularly its evangelical expressions. The Salvation Army, which traces its American incarnation to the mid-1870s, is also in this line, and so are AA and the many other “Anonymous” fellowships it inspired.
Washingtonian societies were dedicated to sobering up hard drinkers, usually (but not always) men. The societies intended to foster a solidarity based on shared experience with suffering that transcended profound social divisions. (They were neutral on the divisive question of prohibition.) Although some famous teetotalers like Abraham Lincoln were members, the societies included the disreputable, the unlettered, and sometimes non-whites and women as equals. Their motives were couched in terms of Christian charity, economic self-improvement, and democratic principles.
The hallmark of mutual aid is the banding together of people in similar circumstances to help one another. (The popular term “self-help” is thus misleading.) The Washingtonians and their successors did not invent the methods by which they fostered solidarity and mutual support. However, in adapting the voluntary association to the reform of drunkards, the Washingtonians introduced new elements.
Owing its provenance to the revival meeting, the most striking and controversial (some found it distasteful) Washingtonian innovation was the confession of drunkards before their peers, and sometimes before a general audience. We are familiar with its contemporary form: “I am Jim B, and I am an [alcoholic, drug addict, etc.]”; but the practice dates from Washingtonian “experience lectures,” forums for the telling of “drunkard’s tales,” stories of degradation, struggle, and redemption through sobriety. These introduced the drunkard’s tortured inner life to the polite public. “You all know me and what I used to be,” Salvation Army lecturers often began.
Some Washingtonian societies also established temporary homes, or refuges, for drunkards. These were places where drunkards could live for a short time while they sobered up and were introduced to the Washingtonian fellowship, whose members found them jobs and other necessities. A century later, AA would reinvent this institution (the recovery home) as part of its twelfth-step work—the commitment to help other drunks.
Although not continuous with these early refuges, beginning in Boston (1857), San Francisco (1859), and Chicago (1863), a number of formal inebriate homes were established to treat drunkards in the Washingtonian tradition. Typically, these were small institutions (fewer than 50 beds), operated as private charities, sometimes under religious or temperance auspices. They relied on the voluntary cooperation of their residents and used temperance fellowship as a form of what we now call aftercare. They were located in urban environments and did not isolate their residents from community life. Although they often were superintended by physicians, residence rarely exceeded three weeks and medical treatment was considered important only in managing withdrawal symptoms or Delirium Tremens (DTs). The terms disease and “vice,” cure and “reformation” were used interchangeably, and sober outcomes were attributed to the influences of family, friends, and the fellowship, not to medical intervention. Inebriate homes practiced a profoundly social (and sometimes spiritual) form of treatment based on the belief that the human capacity for transformation was never extinguished, no matter how “despotic” the “appetite” for alcohol.
For those in the Washingtonian line, the source of such optimism was their belief in the presence of an immortal God in the human mind. The mind, they believed, was distinct from the brain and other corruptible flesh and was formed in God’s image. By the mid-1800s, the image of God was far more benign and rational than the often wrathful, finally inscrutable deity of even the early 1700s. This gradual change in the conception of God owed much to the spread of the market as arbiter of economic affairs and social relations. The rigorous logic of the market reordered economics from the academy to the workshop. In its train, a disciplined, optimistic rationalism—and the ideas of moral progress and human perfectibility—suffused popular culture and theology.
At the same time, another form of rationalism, that of natural science, was pervading popular discourse and causing tumult in seminary and pulpit. Science did not overthrow religion so much as assume a place alongside it. For believers, scientific order was a wonder of the divine plan. The natural “laws of health,” as various rules of disciplined self-denial were known, were signals of divine intent, of God’s ideas about right living. The drunkard was therefore both sinful and sick, having contracted the disease as the result of moral transgression. (A common analogy of the time was to syphilis; today, some religious leaders speak similarly of AIDS.) Thus, while Washingtonians and their successors spoke of addiction as a disease—by which they meant an organically based compulsion—they also employed clerical images, for they believed in the power of the divinely inspirited human mind to choose the rational good (total abstinence from alcohol) and to thus achieve health. In the Washingtonian tradition, the languages of morality and disease became assimilated, and remain so in the many contemporary Anonymous fellowships’ claim that addiction is in part a “spiritual disease.”
Although the Washingtonian Movement as such was defunct by 1850, Washingtonianism was extremely influential until about 1865. The tradition did not disappear, but in the decades following the Civil War (1861-1865), profound changes in American culture and society, and related changes in the temperance movement, blunted Washingtonian influence and gave new prominence to a competing philosophy of treatment and its attendant practices and institutional embodiment. The philosophy was that of biological determinism, or “somaticism,” and its institutional expression was the “inebriate asylum.”
The Asylum Tradition
In 1810, Benjamin Rush, a Philadelphia physician, signer of the Declaration of Independence, and first formulator of a disease theory of addiction (though not the inventor of the idea), proposed “sober houses” for drunkards. However, Samuel Woodward, a Massachusetts insane-asylum superintendent and temperance orator, was the father of institutional treatment based on a somatic explanation of habitual drunkenness. In a tract written in 1835, Woodward contributed two critical ideas to what would become the inebriate asylum movement of the nineteenth and early twentieth centuries. The first was that drunkards could not be treated successfully on a voluntary basis. The second, which flowed from the first, was that they needed legal restraint in a “well-conducted institution”—by which Woodward meant something like the insane asylum that he superintended.
The line of thinking staked out by Rush and Woodward had no institutional realization until an inebriate asylum subsidized by the State of New York opened in Binghamton in 1864. Another was opened in Kings County, New York, in 1869. In subsequent decades, pursuant to arduous promotion by the American Association for the Cure of Inebriates (AACI, founded in 1870), public inebriate asylums opened in Massachusetts (1893), Iowa (1904), and Minnesota (1908). Other jurisdictions chartered inebriate asylums but never built them (Texas and Washington, D.C.), and in California an inebriate asylum chartered in 1888 was converted to an insane asylum before the facility opened in 1893. Indeed, Binghamton was converted to an insane asylum in 1879. By the advent of Prohibition in the United States in 1920, all public inebriate asylums had been closed or converted to other use.
The inebriate-asylum movement spawned dozens of private sanitaria that treated well-to-do drunkards and, by the 1890s, drug addicts. However, judged by its manifestation in brick and mortar, the movement for public treatment was a failure. For two related reasons, the AACI was notably unsuccessful in converting legislatures to its cause. First, its physician members never could produce a strictly medical “cure” for addiction. Although its theorist-practitioners developed rigorously somatic explanations of addiction that dispensed with will power, spirituality, and the therapeutic necessity of fellowship, they relied on recuperation by bed rest, a healthy diet, and therapeutic baths (hydrotherapy), followed by the discipline of useful labor. This regime was highly structured (military analogies were popular) and medically supervised, and was set in a context of prolonged legal restraint (involuntary commitment). However, there was nothing particularly innovative or medical about this approach. Its methods already were the staples of lunatic asylums (called mental hospitals in most states after about 1900), almshouses, and county jails, institutions that managed huge numbers of habitual drunkards and, after the 1880s, drug addicts. Second, the inebriate asylum was an ambitious undertaking: like the insane asylum, it was to accommodate several hundred patients on a sequestered rural estate. Few legislatures could be persuaded that such costly new institutions were worth the price. In a word, the inebriate asylum was viewed as redundant.
The ideology of the inebriate-asylum movement—its adherents’ view of the world—was shaped by two profound, contemporaneous developments in American culture and society: (1) the rising esteem and secularism of science and (2) the growing disorder and complexity of American society after the Civil War. The movement reflected the grand aspirations of Gilded Age science, whose practical applications were transforming American life: railroads and streetcars, the telephone, gas and electrical lighting—all attested to the power of science and human ingenuity. It was a time when “scientific” understanding became the basis for professional standing, not only for medicine, but for all manner of professional groups, from proto-social workers to plumbers. The metaphor of disease, and the optimistic message implicit in its use—that all defects could be cured—became popular among forward thinkers. In the most widely read book of its time, the utopian novel, Looking Backward (1888) by Edward Bellamy, the author characterized all sorts of misconduct as disease, and his near-perfect world of the year 2000 cured its rare wayward citizens in public hospitals.
If Washingtonians assimilated the languages of morality and disease, the rising generation of inebriate-asylum enthusiasts radically separated them, and often reduced human volition to a by-product of neurology. In the United States and Europe, they initiated research on the biology (and later, the genetics) of addiction. Primitive by today’s standards, it nonetheless established a robust tradition of inquiry that remains lively.
The inebriate-asylum movement appealed to American aspirations to create a better world through science, but it also addressed growing fears of social disorder. The extent of such disorder should not be exaggerated, however; pre-industrial America was more disorderly than nostalgic chroniclers have made it seem, and urbanization and industrialization were less chaotic than critics sometimes contend. On the whole, though, life after the Civil War was more complex, more anonymous, and less certain.
Immigration from abroad was an important fuel for such change and promoted the (American) nativist fears that accompanied it. In the 1830s, free Americans were overwhelmingly Anglo-Saxon in origin and Protestant in belief. By the 1880s, this was changing dramatically. Burgeoning northern and western cities were becoming testing grounds for the promise and limits of diversity—indeed, for explanations of diversity. Amid glaring inequality of wealth and opportunity, cultural conflicts often were played out around practices of consciousness alteration. Protestant, native-born Americans (including African Americans) were remarkably abstemious (a notable success of the Protestant-driven temperance movement); the mostly Roman Catholic Italians and French were daily wine drinkers; Poles, Germans, and some Scandinavians drank large quantities of beer (some on Sunday—in public beer gardens).
Of Irish Catholics, who had a large temperance movement of their own but also a penchant for drunkenness (what is known as a “bi-modal distribution” of drinking habits), a California temperance editor wrote in 1883: “They are by far the worst and meanest material in which to store whisky.” Native Americans had been introduced to alcohol by traders and government agents from colonial times, so “firewater” became a factor in the westward movement and the ensuing Indian Wars. The “idolatrous” (non-Judeo-Christian) Chinese introduced opium smoking to America, a practice that crossed the color line during the 1870s and became popular among young white men and women during the 1880s. Then from 1900 to 1920, Mexicans became associated with Cannabis (marijuana) use in the West and Southwest. In the South, African-American men frequently were accused of the riotous use of cocaine, with subsequent designs on white women.
The increasingly diverse backgrounds of the U.S. population became a source of conflict and disorder; the rollercoaster ride of industrial capitalism was another. The United States experienced two prolonged economic depressions (then called “panics”) between the Civil War and the turn of the century—from 1873 to 1878 and from 1893 to 1898. In between, a short but sharp slump during the mid-1880s took its toll on stability. During these years, the noun “tramp” entered the American language; the country experienced its first pronounced labor violence and political bombings (dynamite being an 1860s product of scientific ingenuity); in the spring of 1894, “armies of the unemployed” converged on Washington, D.C., from all over the country.
This era of mounting diversity and instability was marked by a failing faith in exhortation (verbal appeal) as a method to achieve social regulation and by a concomitant exaltation of coercive means (force). Although never abandoning altogether its sympathy for drunkards, the temperance movement made securing prohibitionist measures its primary objective. Although never withdrawing its support from surviving Washingtonian institutions, temperance adherents simultaneously supported the more stringent regime promoted by inebriate asylum enthusiasts, some of whom believed that an orderly, peaceful society required the lifetime detention of incurable addicts. Indeed, the temperance movement helped to popularize theories that purported to demonstrate a biological basis for the failure of certain racial and ethnic groups to live up to the abstemious standard of so-called native stock—or to benefit from treatment. In the name of “prevention,” such views justified not only prohibition laws but also statutes that in a few states permitted the forced sterilization of addicts.
In sum, the legacy of the inebriate-asylum movement was the biologically based approach to understanding addiction, the corollary claim that addiction is the special province of medicine and physicians, the notion that successful treatment requires legal coercion, and the assertion that treatment is both a responsibility of government and a commodity to be sold on the market. These ideas endure as part of the complex intellectual, professional, and political fabric of treatment.
The Tradition of Mental Hygiene
The mental hygiene movement, customarily dated from the 1908 publication of Clifford Beers’ A Mind That Found Itself, represented a departure from the somatic tradition of thought about mental disorder and addiction. At the same time, it did not appeal to spiritual explanations nor did it dwell on will power. Rather, mental hygienists employed a socio-biological determinism: Although addiction could result from a hereditary biological defect, and could be incurable, its origins were mainly familial and social, and the condition, if addressed early on, could be arrested. Mental hygienists stressed the important roles of family, friends, and occupation in creating a salubrious environment for an addict’s continuing sobriety. Mental hygiene did not speak the language of mutual aid, but it was similarly environmental in outlook. This was the beginning of what later would be called community mental health, and its point of view virtually defines what we understand to be “modern” about treatment and the biopsychosocial perspective.
The environmentalism of mental hygiene challenged the rationale of the asylum model of treatment. Mental hygienists criticized the asylum’s lack of connection with community life and its reliance on involuntary treatment, claiming that only voluntary access to free or inexpensive care would attract patients in the early stages of drinking or drug-taking careers. The history of the Massachusetts Hospital for Dipsomaniacs and Inebriates (1893-1920) illustrates well the influence of mental hygiene philosophy and practice. Between 1893 and 1907, the hospital was run on the asylum model. After a complete reorganization in 1908, it followed a mental hygiene course: Most of its admissions were legally voluntary; the hospital established a statewide network of outpatient clinics; it worked closely with local charities, probation offices, employers, and the families of patients. Known finally as Norfolk State Hospital, it was a preview of what treatment was to become, beginning in the 1940s.
Even so, Norfolk created on its campus a “farm” for the long-term detention of “incurables.” The mental hygiene movement modified the emphasis of the asylum tradition but did not entirely abandon its practices. Indeed, under the banner of mental hygiene, between 1910 and 1925, many local governments across the United States established “farms” to segregate repeated public drunkenness offenders and drug addicts. Some of these persisted until the 1960s, and some have been reopened in recent years to accommodate homeless people with alcohol and drug problems. As discussed below, the asylum tradition remained particularly important in the treatment of drug addicts.
Although tobacco use is now widely considered in the United States to be a problem akin to drug dependence, for most of the twentieth century it was not treated as such by either the medical or criminal-justice establishment. However, nineteenth-century temperance groups saw tobacco use as another form of inebriety. As far back as the 1890s, advertisements for patent medicines claimed to help people break the tobacco habit. In the great temperance upsurge of the early twentieth century, more than twenty states passed tobacco prohibition laws, but most of these were quickly repealed. Public concern with habitual tobacco use declined dramatically from the 1920s through the 1950s, and cigarette smoking (over smokeless tobacco, pipes, or cigars) became normative behavior among men and grew steadily among women. This situation changed abruptly with the publication of the 1964 Report of the U.S. Surgeon General that linked cigarette smoking to cancer. Since then, increasing attention has been paid to the tobacco habit, or tobacco dependence, and to treatment for it. Treatment approaches are at least as varied as those described here for alcohol and other drugs. Pharmacological treatments, such as nicotine chewing gum and skin patches, have been used, as have acupuncture, hypnosis, mutual aid, aversive electric shock, and other techniques. While many people advocate that government or private insurance should pay for treatment of this addiction, to date there have been no suggestions that tobacco addicts should be treated on a compulsory basis, although the number of places where it is legal to smoke has been diminishing.
The Treatment of Drug Addicts
Although the San Francisco Home for the Care of the Inebriate (1859-1898) treated a few opium addicts as early as 1862, Washingtonian institutions mainly treated drunkards. Similarly, although a few reborn drug addicts were among the legions of the Salvation Army and other urban missions by 1900, they were vastly outnumbered by reformed drunkards. Until the organization of what is today Narcotics Anonymous (NA) in 1953, there was no large or well-defined group of addicts involved in the practices of mutual aid, and there were a variety of reasons for this.
Drug addiction was not a matter of widespread concern until after the Washingtonian philosophy had been eclipsed by the asylum model of treatment. Further, drug addicts were quickly perceived to be more exotic and ominous than habitual drunkards. Although there were many people addicted to morphine as a result of ill-advised medical treatment or attempts at self-treatment during the late 1800s, this more or less respectable population declined after the turn of the century as physicians and pharmacists reformed their dispensing practices and new laws required the disclosure of the content of patent medicines and nostrums. At the same time, a growing number of urban young people began to experiment with drugs, especially smoking opium, morphine, and cocaine. By 1910, drug addiction was popularly associated with petty thieves, dissipated actors, gamblers, prostitutes, and other nightlife aficionados, and with racial minorities and dissolute youth. Unlike habitual drunkards, drug addicts never were caricatured as boisterous and occasionally obstreperous nuisances or buffoons; especially after 1900, they usually were portrayed as dangerous predators and corrupters of society, alternating between drug-induced torpor (in the case of opiates) or hyperactivity and hallucination (in the case of cocaine) and a craving that propelled them on relentless and unscrupulous searches for drugs and the means to buy them.
The “criminal taint” of drug addiction, and the widespread view that most addicts were incurable and would do anything to alleviate withdrawal symptoms, provided a powerful rationale for their prolonged confinement under strict conditions. Even the mental hygienists at Norfolk State Hospital had no expectation that addicts would remain sober and favored incarcerating them in the Massachusetts State Farm at Bridgewater, a correctional facility. Indeed, state hospitals were generally more opposed to admitting addicts than habitual drunkards, preferring to have them incarcerated in jails. Even more than drunkards, addicts disturbed the routine and good order of state hospitals, in no small part because they were, as a group, considerably younger and less conventional than other hospital patients. They pursued sexual liaisons in violation of institutional rules against fraternization; they smuggled drugs into the hospitals; and once through withdrawal, they escaped in droves.
Nor were jails and prisons eager to take in addicts, mainly because of the problem of smuggling. By the late 1880s, opium was a customary (though illicit) medium of exchange at San Quentin Prison in California, and it was routinely available in the big county jails of the United States at the turn of the century. As state laws against the sale or possession of opiates and cocaine proliferated in the 1890s, and as they began to be more strictly worded and enforced after 1910, county jails and state prisons faced a major problem of internal order. This intensified with the implementation of the federal Harrison Narcotics Act (passed in 1914 to take effect in March 1915), particularly after a U.S. Supreme Court decision in 1919 made it illegal for physicians to prescribe opiates for the purpose of maintaining an addict’s habit. The vast majority of drug offenders, even those arrested by federal agents, were prosecuted under state drug and vagrancy laws and sent to state and county lockups. The resulting crisis led jailers to support two related treatment strategies.
The first of these was the creation of special institutions for drug addicts. Thus the county farms mentioned earlier in this essay were created, or laws were passed to allow addicts to be committed to existing state or county hospitals with wards designated for this purpose. Mendocino State Hospital in California, Worcester State Hospital in Massachusetts, Norwich State Hospital in Connecticut, and Philadelphia General Hospital, to name a few, treated significant numbers of addicts in the 1910s and 1920s. Later, California (1928) and Washington (1935) opened state-sponsored variations on the jail farm, though under the auspices of their state hospital systems.
The growing number of addict-prisoners in the federal system also led to their segregation, first at Leavenworth, Kansas (mainly), and then at two narcotic hospitals opened at Lexington, Kentucky (1935) and Fort Worth, Texas (1938). Operated by the U.S. Public Health Service, these hospitals were in fact more like jails, although they were authorized to admit voluntary patients of “good character” whose applications were approved by the U.S. Surgeon General. Initially, these patients were kept involuntarily once they had been admitted, but a federal district court ruling in 1936 affirmed that voluntary patients could leave after giving notice. Before they were closed in the 1970s, the two facilities had admitted more than 60,000 individuals comprising over 100,000 admissions.
Jailers were also an important part of local political coalitions in support of a short-lived and controversial treatment strategy of the early 1920s—drug dispensaries for registered addicts. At least forty-four such clinics were established nationwide, most in late 1919 or early 1920, following the Supreme Court’s antimaintenance ruling.
In principle, these were not to be maintenance clinics. Addicts initially were to receive their customary dosages of morphine (occasionally heroin, and very rarely, smoking opium), and were then to be “reduced” over a short time to whatever dosage prevented withdrawal. At this point, abstinence was to be achieved.
In practice, few of the clinics worked this way. Many clinic operators believed that their primary aim was to mitigate drug peddling by supplying addicts through medical channels. This implied a maintenance strategy at odds with the Supreme Court’s interpretation of the Harrison Act and with some earlier state laws forbidding maintenance (in California and Massachusetts, e.g.). Further, most clinic operators agreed with the American Medical Association (AMA) that dispensaries could only work effectively within the law if prolonged institutional treatment was available once the addict’s dosage had been reduced to the brink of withdrawal. In the absence of such institutional capacity, reduction was useless, and so clinic doctors rarely bothered. The Prohibition Unit of the U.S. Department of the Treasury (which enforced the Harrison Act), state boards of pharmacy (which typically enforced state drug laws), and local medical societies and law enforcement agencies regarded the clinics as stop-gaps, valuable only until adequate public hospitals could be opened.
In the midst of the inflation following World War I, localities looked to the states to finance such institutions and states looked to the federal government, particularly the U.S. Public Health Service, which had operated hospitals for merchant mariners since 1792. But legislation to create a federal treatment program failed to pass and the states were thrown on their own resources. The Prohibition Unit, convinced that the clinics were doing more harm than good, moved to close them, threatening dispensing physicians with prosecution. The clinics closed rapidly. The last one, at Shreveport, Louisiana, closed in 1923. Addicts were consigned to their customary ports of call in jails, prisons, or for the fortunate few, private sanitaria.
The controversy over maintenance did not disappear, however, particularly on the West Coast, where efforts to loosen its prohibition in the states of California and Washington continued until the United States entered World War II (1941). Further, both federal and state governments permitted the maintenance of a small number of addicts, usually of middle age or older, suffering from severe pain related to a terminal illness or an incurable condition. However, the period from 1923 through 1965 was generally characterized by the strict enforcement of increasingly severe laws against drug possession and sales, by relentless opposition to maintenance, and by treatment that was essentially in the asylum tradition, supplemented by the mental hygiene innovation of supervised probation. In 1961, California passed legislation permitting the compulsory treatment of drug addicts (including marijuana users) and established the California Civil Addict Program within the Department of Corrections. From 1962 to 1964, more than 1,000 people were committed to a 7-year period of supervision, which typically involved an initial year of residential treatment in a facility surrounded by barbed wire to discourage premature departure. In 1964, New York passed similar legislation but assigned its implementation to a special commission rather than to the Department of Corrections. As in California, New York’s residential treatment facilities were “secure.” As late as 1966, the federal Narcotic Addict Rehabilitation Act (NARA), in most respects a piece of “modern” legislation, nonetheless provided for the compulsory treatment of addicts and made the hospitals at Lexington and Fort Worth into the institutional bases of the NARA program.
The Modern Era
The modern history of alcohol and drug treatment has been shaped by the therapeutic pluralism descended from the mutual-aid, asylum, and mental hygiene traditions; the growing prestige of clinical and basic medical research; the coexistence of public and private sectors of treatment; and an increasingly complex field of interorganizational relationships involving several layers of government and substantial fragmentation within each layer.
The influence of Alcoholics Anonymous can hardly be exaggerated. Whatever its therapeutic success—a point of warm debate among scholars—AA has profoundly affected the treatment of people now regularly known as alcoholics. AA’s impact has been both ideological and institutional; that is, its promotion of “disease theory” within the mutual-aid tradition has changed how recent generations think about excessive or problem-causing alcohol consumption and treatment methods, and the penetration of policymaking bodies and treatment institutions by people recovering from alcoholism has shaped the funding and practices of treatment.
AA’s impact was facilitated by the growing influence of the mental hygiene movement during the 1920s and 1930s, for AA provided the critical therapeutic bridge between the segregating institution and the community at large. This was recognized quickly by men like Clinton Duffy, the great “reform” warden of San Quentin, who encouraged the establishment of AA groups in his prison in 1942. Much early twelve-step work was done in U.S. county jails. Harvard psychiatrist Robert Fleming opined in 1944 that the prolonged institutionalization of alcoholics was no longer necessary; a week’s medical care in a general hospital followed by community-based psychotherapy and AA participation was his new prescription. The growth of AA permitted the first substantial stirrings of community care since the Washingtonian Movement.
During the early 1960s, some state hospitals, particularly in Minnesota, incorporated recovering alcoholics and the principles of AA into their treatment programs. What became known as the Minnesota model of short-term inpatient care (usually 28 days) and subsequent AA fellowship and recovery-home living spread slowly but discernibly among private treatment providers such as the Hazelden Foundation, also in Minnesota, and the Mary Lind Foundation in Los Angeles. Across the country, local councils on alcoholism, dominated by people recovering from alcoholism and encouraged by the National Council on Alcoholism and Drug Dependence and the National Institute of Mental Health (NIMH, created in 1946, was an ardent promoter of community psychiatry), began to press states and localities for outpatient clinics, diversion of alcoholics from jail, and other methods consistent with the traditions of mutual aid and mental hygiene. Even so, treatment resources for alcoholics did not expand dramatically. A survey in 1967 found only 130 outpatient clinics and only 100 halfway houses and recovery homes dedicated to serving alcoholics. Alcoholics continued to be barred from most hospital emergency rooms.
All this advocacy and organizing activity were propelled by the concept of “alcoholism as a disease,” a proposition given its most systematic modern exposition by E. M. Jellinek in The Disease Concept of Alcoholism (1960). Jellinek was more provisional in his use of the term than most of his readers appreciated, but he understood the important strategic value of such a claim. In the first instance, the language of disease challenged the legal and correctional system’s jurisdiction over alcoholics; in addition, it provided a rationale for the increased availability of services for alcoholics within established medical facilities and under the aegis of public health. Jellinek was widely read in the literature of the earlier inebriate asylum movement, and although he disparaged its science he understood and sympathized with its aims. He fully understood that whatever its equivocal status as scientific truth, the assertion that alcoholism is a disease carries important implications for treatment policies.
Several important court decisions in the 1960s endorsed the view that alcoholism was a disease; in 1967, a presidential commission on law enforcement concluded that it was both ineffective and inhumane to handle public drunkenness offenders within the criminal-justice system and recommended creating a network of detoxification centers instead. In 1970, Congress passed the Comprehensive Alcohol Abuse and Alcoholism Prevention, Treatment and Rehabilitation Act (the “Hughes Act”). Senator Harold Hughes, a former governor of Iowa, was a recovering alcoholic. A persuasive speaker, Hughes became the conscience of the Congress in developing support for a more humane and decent response to people with alcoholism and related problems. He was supported in these efforts by Senator Harrison Williams, Congressman Paul Rogers, and several advocacy groups led by the National Council on Alcoholism and the North American Association on Alcohol Problems. While Hughes’s early efforts had been supported by President Lyndon Johnson and Assistant to the President Joseph Califano, it was President Richard M. Nixon who signed the legislation establishing the National Institute on Alcohol Abuse and Alcoholism (NIAAA). This legislation made federal funds available for the first time specifically for alcoholism treatment programs.
The Hughes Act accomplished three goals of the modern alcoholism treatment movement. First, it effectively redefined alcoholism as a primary disorder, not a symptom of mental illness. Second, and based on this distinction, it created the federal agency—NIAAA—that would not be dominated by the mental-health establishment competing for the same resources. Finally, and of great practical importance, the Hughes Act established two major grant programs in support of treatment. One authorized NIAAA to make competitive awards (grants and contracts) directly to public and nonprofit agencies; the other was a formula-grant program, which allocated money to states based on a formula accounting for per capita income, population, and demonstrated need.
NIAAA aggressively sought state adoption of the model Uniform Alcoholism and Intoxication Treatment Act, first drafted in 1971 by the National Conference of Commissioners on Uniform State Laws. Section 1 of the Uniform Act, as it was known, stated that “intoxicated persons may not be subject to criminal prosecution because of their consumption of alcoholic beverages but rather should be afforded a continuum of treatment.” By 1980, thirty states had adopted some version of the Uniform Act, thereby decriminalizing public drunkenness.
The thrust of federal and state grant making was to create an effective system of community-based alcoholism treatment services. This occurred in tandem with the deinstitutionalization process that was rapidly depopulating state mental hospitals. Although we customarily think of deinstitutionalization as affecting only the mentally ill, in fact it had an important impact on alcoholics. In 1960, a decade before deinstitutionalization began in earnest, thirty-six states had provisions specifically for the involuntary hospitalization of “alcoholics,” “habitual drunkards,” and “inebriates.” In addition, many states had voluntary-admission statutes. By the mid-1970s, however, these laws were history. Prepared or not, local communities had to provide.
The alcoholism-treatment field was not static during the 1980s. The federal “block grant system,” stringent drunk-driving laws, and the rise of Employee Assistance Programs (EAPs) and insurance coverage for treatment, all important developments, will be discussed following a description of the modern era of drug treatment.
Even by the late 1950s, the tough-law, anti-maintenance consensus of an earlier era of drug control and treatment was breaking down. A joint report of the American Bar Association and the American Medical Association in 1958, finally published in 1961, cautiously favored outpatient treatment and limited opioid maintenance as alternatives to “threats of jail or prison sentences.” In 1962, appealing to disease theory, the U.S. Supreme Court struck down a California statute that made drug addiction per se a crime. Medical treatment, not the “cruel and unusual punishment” of incarceration, was the Court’s desideratum. In 1963, the President’s Advisory Commission on Narcotic Drug Abuse made substantially similar recommendations.
It was the experimental success of methadone maintenance that finally altered the discussion of opioid maintenance. Methadone, a synthetic drug with opioid properties, was invented by German pharmacologists during World War II and had been used at the U.S. Public Health Service hospital at Lexington to block addicts’ withdrawal symptoms. In 1963 and 1964, with the support of the prestigious Rockefeller University, medical researchers Vincent Dole and Marie Nyswander began to study its wider use in the treatment of heroin addiction. Their research proceeded despite opposition from the federal Bureau of Narcotics and was first published in 1965. The remarkable changes they observed in their patients soon were replicated by other scholars. Methadone maintenance attracted considerable notoriety and generated new enthusiasm for maintenance as a strategy of treatment.
Methadone maintenance did not become widespread overnight, however, and it has never been without controversy. The most fundamental criticism of maintenance has always been that it presumes “incurability,” encourages users to continue to rely on a narcotic medication, and thereby undermines abstinence-based approaches. During the 1960s, and especially during the 1970s, when methadone maintenance programs expanded dramatically, this criticism came mainly from two sources: (1) abstinence-based programs run by recovering addicts more or less in the mutual-aid tradition and (2) minority poverty activists who saw in methadone a palliative strategy to treat what they saw as a symptom of economic deprivation without addressing its causes.
Opposition from those working in the mutual-aid tradition came chiefly from veterans of therapeutic communities inspired by Synanon (established in Southern California in 1958) and Daytop Village (opened in New York City in 1964). While most therapeutic communities saw addiction primarily as a result of characterological deficits and immaturity, some drew financial support from the Office of Economic Opportunity (OEO), the short-lived, principal arm of the War on Poverty, and relied on an analysis of heroin addiction that located its social sources in adaptations to poverty. This was an important theme of much scholarship on addiction during and after the late 1950s. In this analysis, still vital today, no form of treatment is effective without job and community development to support aftercare and prevent relapse. Descending from the mental hygiene tradition, this view provided a rationale for great skepticism about any narrow medical approach that was proclaimed as a “solution” rather than as a first step. There was (and remains) no inherent contradiction between maintenance and antipoverty strategies, and many workers in antipoverty programs embraced methadone as a viable and useful treatment. But many did not, and the result was an uneasy pluralism in drug-treatment approaches. In 1966, when New York City launched a major expansion of treatment for drug addiction, it chose to make drug-free therapeutic communities the centerpieces of its effort.
The middle to late 1960s were marked by a modest expansion of publicly supported programs for drug addiction, characterized by competition among a variety of distinct and sometimes incompatible treatment philosophies: therapeutic communities; methadone maintenance programs; compulsory treatment with prolonged residential components; twelve-step programs; overtly religious programs; and a number of traditional mental-health approaches offering detoxification followed by supportive psychotherapies.
Despite the variety of approaches, accessibility to voluntary treatment remained limited throughout the 1960s. In 1968, NIMH undertook a survey to identify every private or public program focused on the treatment of drug addiction in the United States; it located only 183. Most of these were in New York, California, Illinois, Massachusetts, Connecticut, and New Jersey. Of these, 77 percent had been open for less than 5 years. Only the federal hospitals at Lexington and Fort Worth had been in operation for 20 years or more.
In addition to establishing the federal civil commitment program, the Narcotic Addict Rehabilitation Act of 1966 authorized NIMH to make grants to establish community-based treatment programs. The first of these were awarded in 1968; they provided federal support for therapeutic communities and methadone maintenance. This expansion of treatment capacity was also notable for its attention to problems associated with a variety of drugs. It came at a time of sharp increase in marijuana use among middle-class youth, an epidemic of amphetamine use, growing experimentation with LSD, and media preoccupation with the counterculture, or the “youth revolt.” Thus, the political urge to provide treatment was fueled by two enduring concerns of Americans—unconventional and disorderly behavior by young people and minority group members; and the connection between drug use and crime. Anything that might work was tried.
The administration of President Richard M. Nixon took office in 1969 and made the connection between drugs and crime a priority, concentrating first on law enforcement, federal legislation (the Controlled Substances Act of 1970), and a reorganization of federal enforcement agencies. In 1970, while the administration was beginning to consider the role of treatment in its overall strategy, heroin use among service personnel in Vietnam captured media attention. In response, on June 17, 1971, Nixon declared a War on Drugs and created, by executive order, the Special Action Office for Drug Abuse Prevention (SAODAP) within the executive office of the president. He appointed as director Dr Jerome H. Jaffe, a psychiatrist and pharmacologist from the University of Chicago and the director of the Illinois Drug Abuse Programs. SAODAP was the first in a two-decade series of differently named White House special offices concerned with the drug problem; Jaffe was the first in a series of so-called Drug Czars (though the title might most appropriately fit Harry Anslinger, autocratic boss of the Bureau of Narcotics for over 30 years).
The creation of SAODAP marked the federal government’s first commitment to make treatment widely available. Indeed, SAODAP’s goal was to make treatment so available that addicts could not say they committed crimes to get drugs because they could not obtain treatment. Over the next several years, a variety of community-based programs were initiated and/or expanded. The major modalities were drug-free outpatient programs, methadone maintenance, and therapeutic communities. SAODAP deliberately deemphasized hospital-based programs, allowing the civil commitment program under NARA to wither away. Even so, the need to expand treatment for veterans resulted in funding Veterans Administration (VA) hospitals to use their beds for both detoxification and rehabilitation. SAODAP fully supported methadone maintenance, regarded as experimental by NIMH and federal law-enforcement agencies, and became a focal point of controversy as it presided over the dramatic growth of methadone programs beginning in the early 1970s. Treatment within the military also was legitimized as an alternative to court martial.
SAODAP was given a legislative basis in 1972. The same legislation, the Drug Abuse Office and Treatment Act, also created a formula grant program for drug treatment comparable in intent to that for alcoholism treatment. The legislation required the production of a written National Strategy, and authorized establishment of the National Institute on Drug Abuse (NIDA), analogous to NIAAA. Like NIAAA, NIDA was lodged within NIMH.
During its first two years, SAODAP directed an unprecedented expansion of treatment. In early 1971 there were 36 federally funded treatment programs in the United States. By January 1972 there were 235, and by January 1973, almost 400. For a brief, 3-year period, the federal resources allocated to treatment, prevention, and research exceeded those allocated to law enforcement, actually comprising two-thirds of the drug resources in the 1973 federal budget.
In 1973, Dr Robert DuPont, also a psychiatrist, succeeded Jaffe at SAODAP. DuPont had established and directed a treatment program in Washington, D.C., and had extensive experience with methadone treatment. He extended the work of SAODAP and then provided for continuity of policy when he became the first director of NIDA.
During the administration of President Gerald R. Ford (August 1974-January 1977), the sense of urgency about drug problems declined. This was not due to indifference; it reflected a belief that the metaphor of war was not appropriate to a problem that might be controlled but was unlikely ever to be eliminated. The recent lesson of Vietnam—that wars must be quickly won to be popular—was not lost on Ford’s advisors. Thus, Ford did not appoint a Drug Czar, leaving coordination of drug activities to a unit within the Office of Management and Budget. There were no sharp changes in policy, but the treatment budget was substantially reduced from the high-water mark of the Nixon era.
The administration of President Jimmy Carter heightened the expectations of those interested in expanded and improved treatment. One of Carter’s close advisors, Dr Peter Bourne, was a psychiatrist who had established treatment programs in Georgia and who had worked briefly in SAODAP during the Nixon administration. Bourne enjoyed more White House influence than any previous presidential advisor on drug issues. However, Bourne resigned in July 1978, and in the wake of his resignation, drug issues resumed their low profile. Resources for treatment from 1978 to 1980 were stagnant despite an unprecedented inflation rate.
Measured in 1976 dollars, the level of federal support for treatment was cut almost in half between 1976 and 1982. The Ford, Carter, and Reagan administrations all presided over this decline. At the same time, as the result of the impact of inflation on the cost of state and local government, these jurisdictions also curtailed their support, thus aggravating the impact of federal reductions.
However, the Reagan administration was ideologically different from its predecessors—it was characterized by considerable skepticism about federal activism in general and about the efficacy of drug treatment in particular. Although it increased resources for law enforcement and supply control and introduced a stringent policy of zero tolerance that filled American prisons and newly popular (though hardly innovative) therapeutic boot camps with drug offenders, the Reagan administration downplayed treatment in favor of prevention—especially First Lady Nancy Reagan’s “Just Say No” campaign and the president’s public advocacy of widespread drug testing of employees in industry and government. The 1981 reorganization of the federal block grant program that supported both alcohol and drug treatment combined these funds into an Alcohol, Drug Abuse and Mental Health Services (ADMS) block grant and turned these funds over to the states. In the process, overall funding was reduced from 625 million to 428 million dollars and federal oversight was virtually abandoned. After 1984, federal regulation required that a certain percentage of these funds be spent on prevention rather than treatment. The Institute of Medicine estimated that the proportion of the ADMS block funds available to support drug treatment fell from 256 million dollars in 1980 to 93 million dollars in 1986—and this estimate did not account for inflation.
In spite of the Reagan administration’s lack of interest in drug treatment, congressional interest was rekindled. It was apparent by 1984 that HIV was being transmitted among drug injectors and by drug injectors to others, especially their female partners and unborn children. Crack, an extremely potent and inexpensive form of smokable cocaine, was being aggressively marketed in areas of concentrated poverty, although it took the deaths of several prominent athletes, particularly Len Bias, a first-round draft choice of the Boston Celtics, to pique concern about the growing use of cocaine. Prodded by Congress, the second Reagan administration, in its closing years, did increase funding for both research and treatment. However, according to the Institute of Medicine, these increases did not compensate for the effects of previous budget cuts and inflationary erosion. Adjusted for inflation, public funding for drug treatment in 1989 (the last Reagan budget) was substantially below the level of 1972 through 1974, the opening years of Nixon’s War on Drugs.
Even so, the Reagan administration retained its emphasis on law enforcement and prevention. To better focus on prevention, in 1987 it created the Office for Substance Abuse Prevention (OSAP), placing it within the Alcohol, Drug Abuse, and Mental Health Administration (ADAMHA). Most prevention activities carried out by the National Institute on Drug Abuse (NIDA) were transferred to OSAP, the first director of which was Dr Elaine Johnson.
In 1989, President George H. W. Bush reinvigorated the position of Drug Czar when he appointed Dr William Bennett, former secretary of education in the Reagan administration, to head his new White House drug policy office, the Office of National Drug Control Policy (ONDCP). ONDCP was charged with coordinating demand-side (prevention and treatment) and supply-side (law enforcement) matters relating to drugs. There were increases in resources for treatment—and even more substantial increases in law-enforcement efforts. Although Bennett had recruited a noted drug-abuse scholar, Dr Herbert Kleber, as his deputy for demand-side activity, the ONDCP chief and his staff remained skeptical about the value of treatment, continuing the decade-long policy of emphasizing prevention and law enforcement.
Later in 1989, much of the authority and funding for drug treatment was transferred from NIDA to another new agency created within ADAMHA, the Office for Treatment Improvement (OTI). Dr Beny J. Primm, a major figure in drug treatment, was recruited to organize OTI and to be its first director. OTI was given responsibility for oversight of the block (formula) grant for drug and alcohol treatment and prevention and was given new authority and budget resources to make grants for treatment-demonstration projects.
In 1992, Congress decided that the placement of OTI and OSAP within ADAMHA, which also housed NIDA, NIAAA, and NIMH, was leading to conflicts between the missions of research and those of treatment and prevention. In still another reorganization, the three research institutes—NIDA, NIAAA, and NIMH—were transferred to the National Institutes of Health (NIH), and the remaining service functions were incorporated into a new agency, the Substance Abuse and Mental Health Services Administration (SAMHSA). SAMHSA was composed of three centers: the Center for Substance Abuse Prevention (CSAP), consisting primarily of the former OSAP; the Center for Substance Abuse Treatment (CSAT), consisting primarily of the former OTI; and the Center for Mental Health Services (CMHS), consisting of the service-demonstration grant projects that were formerly within NIMH.
President Bill Clinton, who succeeded President Bush in January 1993, appointed Dr Lee Brown as his Drug Czar. Brown, a criminologist by academic training, had been a police chief in New York and Texas. Although there were some signs within the administration that drug treatment was understood to be an important part of attacking persistent joblessness and welfare dependency, the early Clinton budgets made only slight shifts in resource allocation. Further, as Clinton’s health-care reform, welfare reform, and crime and employment strategies became hostage to management of the national budget deficit and partisan politics, no major initiatives specifically on drug treatment were introduced during the first two years he was in office. Some provisions for more treatment within the criminal-justice system were part of the original crime bill. As a result of the recession of the early 1990s, and faced with the necessity of accommodating in their jails and prisons huge numbers of drug offenders incarcerated on mandatory sentences, states and counties also failed to restore the support an earlier era provided for treatment. In some cases, they retrenched considerably.

In 1996, Brown was succeeded as Drug Czar by General Barry McCaffrey. Although McCaffrey signaled an early intent to shift federal resources toward the treatment of America’s “three million hard-core users” (as he put it during his confirmation hearing), his performance in office took quite a different turn. By 1998, it was clear that McCaffrey’s principal concern was interdiction, especially in Mexico, and his budgets reflected this continuing emphasis. Although the 1999 federal drug budget included a $143 million increase in the federal block grant for drug treatment, two-thirds of the funds remained committed to supply reduction.
A Two-Tiered System
Beginning in the 1970s and promoted by NIAAA, NIDA, and a few insurance industry leaders like The Travelers, health insurance policies began to provide coverage for the treatment of alcohol and drug dependence. Sometimes this was the result of labor negotiations; sometimes it was the result of state insurance commission mandates for its inclusion. In response to the availability of support, private hospitals (both nonprofit and for-profit) expanded their treatment capacities dramatically. There had been no such growth in the private-treatment sector since the boom of the inebriate asylum era.
Commonly, treatment programs within the private sector were based on the Minnesota model, emphasizing twelve-step principles and employing recovering people. Such programs typically consisted of a brief period of inpatient detoxification followed by several weeks of inpatient rehabilitation. Twenty-eight days was such a common duration of inpatient care that the programs often were referred to as 28-day programs. The posthospital phase of treatment usually consisted of participation in AA, Narcotics Anonymous, or Cocaine Anonymous.
Such programs—often called chemical-dependency programs because they admitted people with drug and alcohol problems—catered almost exclusively to those with health insurance. (In many instances, they represented important profit centers for medical institutions needing to subsidize financial losses from other services, like emergency rooms.) Those without insurance either had no access to treatment or made use of the network of publicly supported programs—a network that became increasingly thin during the 1980s and increasingly under pressure to find sources of funds other than public grants and contracts and payments from medical programs for the indigent (such as Medicaid). Sliding fee scales became more commonly used, and in some places scarce public treatment slots were absorbed by fee-paying drinking drivers mandated to treatment by stricter penalties for drunk driving and more systematic enforcement of such laws.
The growth of the private sector was spurred as well by Employee Assistance Programs (EAPs), efforts to intervene in alcohol and/or drug problems at places of employment. This strategy goes back at least to the Washingtonian movement, but formal EAPs date from the 1940s. Their ranks swelled during the 1970s and 1980s. Generally, EAPs referred people with more serious alcohol and drug problems to formal—usually private—treatment programs, which were paid primarily by fees derived from third-party payers, such as insurance companies, who in turn derived their funds from policies paid for or subsidized by employers. The sharply rising cost to employers of providing alcohol and drug treatment was a major factor in the rise of managed care, which was aimed initially at controlling the cost of mental health and alcohol and drug treatment. The major mechanism by which the managed-care industry addressed the cost of treatment was to challenge the practice of using several weeks of inpatient care as the initial phase of treatment for alcohol and drug dependence. In practice, treatment providers were told that inpatient treatment beyond a few days could not be justified and would not be paid for under the insurance policy.
The success of managed care in reducing costs by constraining the use of inpatient treatment resulted in a dramatic growth of managed-care organizations and an equally significant contraction and restructuring of the private alcohol and drug treatment system. By the early 1990s, a number of states had obtained federal permission to use managed-care approaches to contain the costs of treatment for individuals covered by federal programs like Medicaid. The future of funding for treatment, the various public grant and contract programs notwithstanding, is inseparable from the broader national debate on the financing of health care.
In 1990, the Institute of Medicine described U.S. treatment arrangements as a two-tiered system, composed of public and private sectors, in which the private sector served 40 percent of the patients but garnered 60 percent of total treatment expenditures. Although the ratio of patients to revenues cannot be known for earlier eras, this two-tiered structure is a creature of the nineteenth century, when treatment was established both as a public good and as a commodity. Barring some revolution in the organization of U.S. health care, this is unlikely to change soon. What remains to be seen is what the balance of public and private treatment will be, and what innovations or reinventions will be born of financial necessity or of the pressures exerted by homeless addicts and a groaning correctional system. History allows us to predict the likely questions, but it is not a very reliable guide to specific answers.