Vicki L Lamb. Handbook of Death and Dying. Editor: Clifton D Bryant. Volume 1. Thousand Oaks, CA: Sage Reference, 2003.
The purpose of this chapter is to provide an overview of the major trends in causes of death in the United States from colonial times to the end of the 20th century. During this period, many changes have occurred in the types of diseases and conditions that have caused death, differentials in patterns of death, and life expectancy. The chapter is divided into five major sections. The first three of these address particular time periods: the colonial period through approximately 1790, the 19th century, and the 20th century. Each of these sections begins with a discussion of available sources of mortality data and then covers the major causes of death and the differences (e.g., by age, sex, race, social class) in death rates. Each section also presents information on the role and contribution of medicine to the death rates in each time period. The fourth major section examines two additional topics that span the entire U.S. historical period: war casualties and deaths, and disaster deaths. In the final section, I discuss the historic changes in the U.S. patterns of death within the context of the theory of epidemiologic transition.
Brief definitions of a number of the demographic terms used throughout the chapter are in order. The terms death rate and mortality rate, which are used interchangeably, refer to the number of deaths per unit of the population, such as deaths per 100,000 persons. Case-fatality rate refers to the proportion of persons having a particular disease who die from that disease. For example, the case-fatality rate for smallpox might be 15% in a particular year. Life expectancy is the number of years an individual is expected to live from birth based on the age-specific death rates of that time or year. Thus the lower life expectancy estimates for the earlier periods are partially due to extremely high death rates for infants and children, in addition to higher death rates at the older ages.
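As a brief illustration of these measures (the figures below are hypothetical, chosen only to show the arithmetic, not drawn from the historical record), the standard formulas can be written as follows:

```latex
% Crude death rate: deaths per 100,000 population in a given year (hypothetical figures)
\mathrm{CDR} = \frac{\text{deaths in a year}}{\text{midyear population}} \times 100{,}000
  \qquad \text{e.g., } \frac{8{,}740}{1{,}000{,}000} \times 100{,}000 = 874 \text{ per } 100{,}000

% Case-fatality rate: share of persons with a given disease who die of it (hypothetical figures)
\mathrm{CFR} = \frac{\text{deaths from the disease}}{\text{cases of the disease}} \times 100\%
  \qquad \text{e.g., } \frac{45}{300} \times 100\% = 15\%
```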
The Colonial Period through the New Republic: 1600-1790
Records of mortality rates, causes, and differentials are limited in coverage for the United States before the late 1800s. Information on deaths and causes of death in colonial America comes from personal journals, diaries, and letters as well as from newspaper accounts and other public records. Thus information about mortality trends from colonial times through the mid to late 19th century is based on data collected in smaller geographic units, such as cities and reporting states. Deaths were not regularly reported or recorded in all areas, particularly infant and child deaths. As a result, we have little understanding of infant and child death rates in colonial times.
The available resources indicate that there were regional differences in mortality patterns within the colonial states, and it is unlikely that there was a national mortality pattern for colonial America. The major causes of death were infectious diseases; thus the differences in death rates across the colonies were due to differences in rates of disease transmission and survival, immunity to disease, and the use of effective methods of treating or preventing the spread of disease.
Between 1600 and 1775, smallpox was both universal and fatal in Europe and North America; it was a major cause of death in colonial America (Duffy 1953). Smallpox is a highly communicable viral disease that causes 3 to 4 days of high fever and rapid pulse with intense headache and back pain, followed by skin eruptions that eventually develop into pustules. Once infected, the person either dies or survives with an extended period of immunity. The virus typically is passed from host to host, but it can also remain infectious for months on inanimate objects, including bedding and clothing.
In the urban centers of the Old World—Europe, Africa, and Asia—children were most susceptible to the spread of smallpox. In Britain, it was considered a childhood disease (Fenn 2001; Marks and Beatty 1973). Persons who reached adulthood were usually immune and less likely to be affected by the spread of the disease. Smallpox was brought to the colonies primarily from the British Isles and the West Indies. Port settlements, such as Boston and Charleston, were particularly vulnerable to outbreaks of smallpox. The irregular arrival of trade and passenger ships meant that the spread of the disease was sporadic, resulting in periods in which the disease was not present in regions of colonial America. The periodic smallpox epidemics in North America affected persons of all ages, not just children, and the disease was greatly feared.
At the time, the common perception was that smallpox was more deadly in America than it was in Europe. However, historic evidence indicates that death rates from smallpox were actually lower in the colonies than in England (Duffy 1953). One of the factors contributing to the lower death rate was the more widespread use of variolation, or inoculation, which was first used in 1720 (Duffy 1953). Variolation was the introduction of pus from persons infected with the disease into incisions in healthy persons. The healthy person would typically contract a relatively mild case of smallpox. Chances for survival were much greater for those inoculated; their case-fatality rate was 2-3%, compared with an estimated 10-50% death rate for those who contracted the disease naturally (Duffy 1953). Unfortunately, inoculated persons were fully contagious with the smallpox virus, and unrestricted variolation could lead to greater spread of the disease, particularly in unaffected areas. The practice was used on a very limited basis in England, and thus the case-fatality rate there was higher.
The use of variolation was a point of controversy and debate in the colonies. Cotton Mather (1663-1728), a Bostonian theologian interested in scientific and medical matters, was highly supportive of the practice of variolation. To demonstrate the benefits of the practice, Mather and others collected data on a widespread smallpox epidemic in Boston in 1721. Mather reported to the Royal Society of London, of which he was a member, that of those who contracted the disease naturally, 1 in 6 had died, whereas of the almost 300 who were inoculated, only 1 in 60 had died (Duffy 1953).
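Restated as case-fatality rates, Mather’s reported figures amount to the following comparison (a simple rearrangement of the numbers above, not additional data):

```latex
\mathrm{CFR}_{\text{natural}} = \frac{1}{6} \approx 16.7\%
\qquad
\mathrm{CFR}_{\text{inoculated}} = \frac{1}{60} \approx 1.7\%
```

By Mather’s own count, then, natural infection was roughly ten times as lethal as inoculation.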
Smallpox was not the only disease that threatened the American colonists. There were periodic outbreaks of yellow fever in coastal regions, brought from Africa to the West Indies and then to North America. Yellow fever is an infectious disease transmitted by mosquitoes that were not native to North America and probably bred in water barrels on slave ships from the West Coast of Africa (Duffy 1953). The disease is characterized first by high fever and flushed face, lips, and tongue. Within a few days, the victim’s temperature drops below normal, the skin takes on a yellowish hue, and bloody, black vomiting occurs. Death occurs due to liver and kidney failure and extreme toxemia. According to Duffy (1953), the case-fatality rate from yellow fever in colonial America varied between 12% and 80%. The first outbreak of the disease in North America probably occurred in 1693, brought to Boston by a British ship from Barbados (Duffy 1953; Marks and Beatty 1973). Charleston, Philadelphia, and New York were struck with yellow fever numerous times in the 1700s. In 1796, New Orleans had its first outbreak of yellow fever. The disease was puzzling to the colonists because infected persons who moved to new locations did not transmit the disease. In addition, the disease would “miraculously” disappear during the cooler fall and winter months, particularly in the northern colonies. Although some had suggested a connection between mosquitoes and the transmission of yellow fever (and other diseases) at earlier times, it was not until the end of the 19th century that experiments by the U.S. Army Yellow Fever Commission proved this to be so (Marks and Beatty 1973).
Other contagious and infectious diseases that caused sickness and death in colonial times included malaria, dysentery, respiratory diseases (pneumonia, influenza, and other respiratory infections), typhoid fever, typhus, diphtheria, scarlet fever (which was often confused with diphtheria), measles, whooping cough, mumps, and venereal diseases. Only a few trends are mentioned here. Malaria, spread by mosquitoes, was endemic and appeared annually in the warm months. The disease caused great sickness, debilitation, and death, but the death rate for malaria was much lower than that for smallpox or yellow fever. Dysentery also was endemic to the American colonies, resulting in debilitating sickness and death for persons of all ages. Respiratory diseases occurred consistently in the winter months, although few data are available to allow us to document their negative effects on human life. Diphtheria primarily affected young children. Between 1735 and 1740, there was a widespread epidemic of a deadly form of diphtheria that killed hundreds of children and numerous adults in New England, New York, and New Jersey. Throughout the second half of the 18th century, waves of diphtheria threatened the colonies. Measles epidemics were noted in New England from 1759 to 1772, in Charleston in 1722, and in Philadelphia in 1778.
Infections associated with cuts, amputations, and other medical “care” were a constant problem and cause of death during the colonial period due to lack of knowledge about germs and the spread of disease. Another common cause of death was newcomers’ failure at “seasoning”—that is, at adapting to the new disease environment of the colonies. This was a notable problem in the southern colonies; it has been estimated that in the 1600s as many as 40% of new arrivals in that region did not survive their first year (Gemery 2000).
Colonial Response to Infectious Disease Epidemics
Because epidemics were so widespread when they occurred, they severely disrupted the colonists’ daily lives. For example, Duffy (1953) describes what happened when the 1721 smallpox epidemic was sweeping through Boston: “For the next few months, business was at a standstill. All intercourse was shunned, and the streets were deserted with the exception of wagons carrying the dead and of a few doctors and nurses visiting the victims” (p. 50). Public offices, such as those that housed general assemblies, legislatures, and various courts, were closed for the duration of an epidemic or moved to outlying areas, away from the urban centers. Legislation was passed to address the seriousness of outbreaks of obviously contagious diseases, primarily through the isolation of sick or infected individuals. Eventually all the colonies enacted laws requiring quarantine periods of 10 to 20 days for ships with affected passengers. Laws also were passed that required those who were affected by contagious disease to be quarantined at home or in quarantine hospitals or “pest houses” so that they were isolated from the general population. Massachusetts was the most successful colony in regulating and enforcing such isolation, and thus in reducing the spread of contagious diseases (Duffy 1953).
Colonial physicians did little to help those suffering from infectious and epidemic diseases because such conditions were widely considered to be God’s punishment for sinful ways. For this reason many doctors argued against the practice of variolation to quell the fatal effects of smallpox. During epidemics it was not uncommon for public and/or religious officials to designate a day for public prayer and fasting to appeal to God’s grace to spare the population from further spread of disease.
There was little formal medical education in the colonies before the Revolutionary War. Most medical practitioners received training through apprenticeships. The few who had formal medical degrees were primarily educated in Europe. The Medical School of the College of Philadelphia, which later became the Medical School of the University of Pennsylvania, was founded in 1765. In 1768, a medical school was established in King’s College in New York City, which later became Columbia University. Toner (1874) estimated that at the beginning of the Revolutionary War there were approximately 3,500 medical practitioners in the colonies, of whom only 400 had received formal medical training and only 50 had received M.D. degrees from the two medical schools that had been established in the colonies.
Native North American Mortality Trends
There were few serious epidemic diseases in North America prior to European settlement (Thornton 2000). One possible reason for this is that the Native North Americans had few domesticated animals, to which a number of the Old World infectious human diseases can be traced. In addition, there were no large centers of population concentration, which means that the relatively high overall population density necessary for the transmission and survival of epidemic diseases did not exist (Thornton 2000). Some infectious diseases were present among Native North Americans, including tuberculosis and treponemal infections, such as syphilis. It has been estimated that the life expectancies of Native North Americans were similar to those of people living in Europe at the time. According to Thornton (2000), “Life expectancies for Native Americans—generally in the 20s and 30s—were kept relatively low by famine, nutritional deficiency diseases (e.g., pellagra), warfare, parasites, dysentery, influenza, fevers, and other ailments, besides tuberculosis and treponemal infections” (p. 15).
The colonization of North America brought a host of new diseases to the continent that the native population had never before experienced, including smallpox, typhoid, diphtheria, scarlet fever, whooping cough, pneumonia, malaria, and yellow fever (Duffy 1953; Thornton 2000). Having no natural immunity, the Native North American populations were devastated by the introduction of Old World infectious diseases, particularly during the 16th and 17th centuries.
A number of direct and indirect depopulation effects were associated with European colonization. Estimates vary widely regarding the numbers and proportions of the native population killed directly by the introduction of Old World infectious diseases. Indirect postepidemic effects included declines in births due to lower fecundity because of disease and loss of marital partners. Death rates may also have increased after epidemics due to food shortages (especially if epidemics occurred during important agricultural periods) and to mothers’ inability to feed and care for their infants and other children (Thornton 2000). Nondisease mortality effects of European colonization on the native population included deaths via wars and genocide, enslavement, tribal removals and relocations, and the resultant changes in Native American culture, organization, and subsistence patterns (Duffy 1953; Thornton 2000).
African American Mortality Trends in Colonial America
Information on the demographic trends of Africans brought to the colonies, and their offspring, is limited. Because of the slave trade, the records of Africans brought to the colonies are more complete than information about European immigration. However, much less is documented regarding the births, deaths, and marital unions of Africans in the colonies during this time. Slaveholders, when documenting economic transactions associated with slavery, did not identify individuals by name, or may have recorded only given names (Walsh 2000). Marital unions between slaves were not recognized in any colony except Massachusetts. Some plantation records listed minor children with mothers, but husbands or fathers were rarely identified (Walsh 2000).
The data on mortality trends during the colonial period are particularly sparse. It is generally believed that mortality rates for Africans were higher than those for white colonists. As noted above, new arrivals were vulnerable to deaths due to “seasoning,” and, according to Walsh (2000), “the traditional belief was that one-third of all black immigrants died during their first three years in the New World” (p. 206). Other factors associated with higher death rates for Africans and African Americans include insufficient nutrition, shelter, and access to medical care. Death rates for Africans tended to be lower than those for whites in the summer and fall, presumably due to the Africans’ greater genetic protection against malaria (Walsh 2000), but winters were particularly hard on Africans, especially in the northern colonies, in part due to their vulnerability to respiratory diseases, such as pneumonia, influenza, and pleurisy.
Regional Mortality Trends
General mortality trends varied during the colonial period. In New England during the early 1700s, death rates were much higher in urban areas than they were in small towns, with estimated annual deaths per 1,000 persons ranging from the 30s to the 40s in urban areas and from 15 to 25 in small towns (Gemery 2000). Urban death rates declined to converge with rural death rates in the early 19th century.
In terms of regional differences, mortality rates were lowest in New England and highest in the South. The middle colonies had mortality trends that closely resembled those of New England. The lower death rates in the New England colonies were due in part to the colonists’ success in legislating and enforcing quarantine laws to reduce the spread of infectious and contagious disease. The higher mortality rates in the South reflect the warmer disease environment and the greater number of immigrants, among whom there were many deaths due to “seasoning” (Gemery 2000; Wells 1992).
Population Growth during the Colonial Period
Although epidemics and other factors brought death to many, the North American population grew at a steady rate. As Benjamin Franklin observed, compared with Europeans, the colonists tended to marry at younger ages and to have larger families. Such patterns of marriage and fertility led to steady population growth. Life expectancy for the colonists has been estimated to be higher than that of their European counterparts because of lower population density (lessening the spread of epidemics), less travel and migration (reducing exposure to new contagious diseases), and better nutrition and standard of living. Studies comparing the stature of native-born American adult males with that of adult males in European countries in the mid-1700s point to sufficient nutrition: the native-born Americans had a height advantage of 5 to 7 centimeters (Steckel 1999). Evidence also indicates that there were virtually no social class differences in adult height by the time of the Revolutionary War, which suggests that everyone in the American colonies, including the poor and slaves, had adequate access to nutrition (Steckel 1999). There were much greater class differences in stature in Europe at the time. Indeed, the average adult male in colonial America was taller than the average upper-class male in England.
Through the 19th Century: 1790-1900
The first U.S. Census, taken in 1790, marked an important advance in the routine collection of demographic data in the United States. However, the census did not collect mortality information until the mid-1800s, and even then the information was “incomplete, biased, and uneven” (Haines 2000:328). The U.S. population continued to increase at a rate of around 3% annually from 1790 to 1860 due to immigration, high fertility, and declining mortality (Haines 2000). During the rest of the 19th century the population growth rate slowed to approximately 2.3%, primarily due to reduced fertility rates. In 1800, the average number of births per white woman was 7.04; by 1900, it had declined to 3.56 (Haines 2000:308, table 8.2).
During the 19th century the major causes of death continued to be infectious and contagious diseases. The threat of smallpox was reduced with the discovery of the benefits of vaccination using the relatively safe cowpox virus, credited to Edward Jenner in the late 1700s. Yellow fever was ever present during the summer months; it reached epidemic proportions in the 1840s and 1850s and in 1878. Typhus, typhoid fever, scarlet fever, and tuberculosis continued to be major causes of death and sickness in the 19th century.
The United States had its first cholera epidemic in 1832, brought by immigrants from England. Cholera is caused by bacteria that are usually spread through feces-contaminated water and food. Extreme diarrhea, vomiting, stomach cramps, and extreme dehydration characterize the disease, and the case-fatality rate is very high when treatment is unavailable. Cholera epidemics occurred in the United States from 1832 through 1873, in part due to filth and poor public sanitation, especially in urban areas.
Studies of human stature and other data (e.g., genealogical records and death registration systems in several large cities) suggest an increase in mortality rates in the 1840s and 1850s. One factor that contributed to the increased death rates was the transportation revolution that was occurring in the United States between 1800 and 1860. During this time, interregional migration and trade increased, there was a trend toward greater urbanization along with a shift from farming to employment in factories, and the public school system expanded (Haines 2000; Steckel 1999). The result of these changes was a disease environment in which there was increased interpersonal contact and greater exposure to infectious and contagious diseases. Many of the diseases prevalent at the time (e.g., cholera, diarrhea, tuberculosis, most respiratory infections, measles, and whooping cough) can negatively affect nutritional status, particularly among children, which in turn can inhibit the body’s ability to grow and develop at normal levels and remain healthy. Average adult height in the United States declined, beginning with men born in the 1820s and 1830s, and did not recover until the latter part of the 19th century; this trend points to reduced nutrition (Haines 2000; Steckel 1999). Class differences in height were evident among Union soldiers in the Civil War, suggesting socioeconomic differences in nutritional support (Margo and Steckel 1983). Farmers had the greatest height advantage, and laborers had the least.
Because the available mortality data are inadequate or incomplete, there has been some debate among demographers regarding when a pattern of sustained mortality decline began during the 19th century. Most of the evidence points to a trend of mortality decline beginning in the latter part of the 19th century, after the end of the Civil War. At that time, death rates began a continuous decline with few fluctuations. Advances in medical science did not have great impacts on this initial decline in mortality rates; their major contributions came in the 20th century. Instead, the observed increase in life expectancy has been attributed to an improved standard of living (better diet, nutrition, and shelter) and to advances in public health practices. For example, in 1854, John Snow identified a public water pump as the source of a cholera outbreak in London. This discovery led municipal authorities to work toward greater access to pure water and sewage disposal. Boston and New York City began piping in water via aqueducts prior to the Civil War, and other large municipalities instituted public works programs at the end of the 19th century to improve water supplies (Haines 2000). The new science of bacteriology contributed to the understanding of the importance of water filtration and the addition of chemicals to purify public water. Because large cities could command the resources necessary to institute public health reforms, by 1900 the 10 largest cities in the United States had better mortality rates than did cities with populations of 25,000 (Haines 2000). However, better water and sewer systems were not the sole cause of the decline in mortality in the United States. As Haines (2000) notes, there were many factors:
Other areas of public health activity from the late nineteenth century onward included vaccination against smallpox; the use of diphtheria and tetanus antitoxins (from the 1890s); more extensive use of quarantine (as more diseases were identified as contagious); the cleaning of urban streets and public areas to reduce disease foci; physical examinations for school children; health education; improved child labor and workplace health and safety laws; legislations and enforcement efforts to reduce food adulteration and especially to obtain pure milk; measures to eliminate ineffective or dangerous medications (e.g., the Pure Food and Drug Act of 1906); increased knowledge of and education about nutrition; stricter licensing of physicians, nurses, and midwives; more rigorous medical education; building codes to improve heating, plumbing, and ventilation systems in housing; measures to alleviate air pollution in urban settings; and the creation of state and local boards of health to oversee and administer these programs. (Pp. 336-37)
Mortality Differentials
During the 19th century there were significant differentials in mortality rates in the United States. For most ages, males had higher death rates than did females. However, female deaths sometimes exceeded male deaths between ages 20 and 50 due to the hazards of childbearing, frontier life, and vulnerability to disease-causing organisms (Haines 2000). Urban death rates were higher than those in rural areas due to greater population density and crowding, the greater probability of unsafe drinking water, and the accumulation of garbage and waste. These mortality differences began to diminish at the end of the 19th century with the institution of public health reforms.
There is evidence of emerging social class differences in mortality rates at the end of the 19th century, with the lowest death rates for white-collar types of occupations and the highest for laborers and servants (Haines 2000). During the 1800s, the mortality rate for adult African slaves was close to that of the free population, indicating that adult slaves received adequate nutrition and support (Steckel 2000). However, the children of slaves fared much worse. Estimates based on information from plantation records and other documents indicate that infant and child death rates for slaves were roughly double the rate for the free population (Steckel 2000). The major causes for the excess infant and child deaths were poor prenatal conditions, low birth weights, inadequate diets, and childhood diseases.
There is little information available on the health of free blacks, but the evidence indicates that winters were particularly harsh for African Americans, especially in the North, due to their “maladaptation to cold and little inherited immunity to respiratory infections such as pneumonia and tuberculosis” (Steckel 2000:461). In the latter part of the 19th century, when the United States was experiencing a long-term decline in mortality rates, African American adult mortality showed little improvement. African American child mortality rates, however, were greatly reduced.
20th-Century Trends: 1900-2000
The individual states and the federal government were slow to develop vital registration systems to record demographic events such as marriages, births, and deaths. In 1900, the federal government began publishing annual mortality statistics based on the newly established death registration system (Hetzel 1997). The initial death registration area included 10 states and the District of Columbia, plus 153 cities not in the death registration states. The population covered by the 1900 death registration area included greater proportions of urban and foreign-born persons than did the U.S. population as a whole (Haines 2000). Through the years, additional states were added to the death registration system, and by 1933 the system covered all of the United States (Hetzel 1997). The data presented in this section are based on those collected in the federal death registration system.
The most dramatic improvements in mortality rates that have taken place in the United States occurred during the 20th century. Figure 1 shows the remarkable decline in death rates from 1900 to 2000. In 1900, the crude death rate was estimated to be 1,719 deaths per 100,000 persons; by 2000, that figure had dropped to 874 deaths. The most prominent spike in the graph was caused by the influenza pandemic, which in 1918 resulted in a death rate of 1,810 deaths per 100,000 persons. This pandemic resulted in the death of half a million Americans (Kolata 1999:5).1 Other spikes visible in Figure 1 reflect smaller influenza outbreaks in the early part of the 20th century. With the decline in death rates came a corresponding increase in life expectancy, which reflects the remarkable mortality improvement in the United States over the 20th century. In 1900, the average life expectancy at birth was 47.3 years, whereas in 2000 it was estimated to be 76.9 years.
For the first half of the 20th century, the long-term decline in mortality rates was paralleled by a trend in decline of deaths due to infectious diseases, whereas mortality due to noninfectious diseases remained constant. Declines in infectious disease mortality rates were due primarily to declines in deaths due to influenza, pneumonia, and tuberculosis, which together accounted for 60% of infectious disease deaths (Armstrong, Conn, and Pinner 1999). During the second half of the century, declines in infectious disease deaths slowed to an annual rate of 2.3% until about 1980, when there was a reversal (Armstrong et al. 1999). Deaths due to infectious diseases increased at an annual rate of 4.8% until the mid-1990s, when they again began to decline. The AIDS epidemic was the reason for the increase in infectious disease deaths that began in 1980.
Public health advances in the 20th century continued to reduce the spread of infectious disease. In addition, medical science played a significant role in changing the most prevalent causes of death and extending life expectancy. Germ theory, advanced by Pasteur in the 1860s, became an accepted part of medical science, and surgical practices became safer as a result of Halsted’s work at Johns Hopkins University at the end of the 19th century. As Easterlin (2000) has stated, “In the last half of the nineteenth century there was a revolution in knowledge of methods of disease control” (p. 637).
The most common causes of death shifted from infectious and contagious diseases to debilitating and chronic conditions over the course of the 20th century. Heart disease became the leading cause of death in the United States in 1910 and has remained so (except during the influenza pandemic); in the 1930s, cancer emerged as the second most common cause of death, with stroke as the third. The death rate for heart disease was highest in the 1960s and 1970s, which accounts in part for the stagnant trend in life expectancy during that period. Since that time, heart disease death rates have been declining.
Mortality Differentials
During the first half of the 20th century mortality rates dropped for all ages, with strong health improvements at the younger ages. With the control of infectious and contagious diseases, children gained a greater probability of reaching adulthood by the 1950s. Indeed, much of the increase in life expectancy in the 20th century can be attributed to reductions in infant and child mortality. Arriaga (1984) estimates that reductions in mortality for persons up to age 20 contributed almost 60% of the gains in life expectancy. Mortality rates for young to middle-aged adults also improved over the 20th century; mortality improvements at ages 20 to 39 are estimated to have contributed 17% of the increase in life expectancy.
Since the late 1960s, death rates for American adults age 65 years and over have experienced a rapid decline. The most notable mortality improvements have been for persons 85 years old and older. The decline in heart disease deaths has been a contributing factor in the positive trend in old-age mortality. In the second half of the 20th century, medical science shifted its research focus to a greater understanding of diseases of old age. The resulting research has led to greater awareness of disease causes and symptoms, and this has had two direct effects. First, medical knowledge and practice have greatly improved. Tests are now available to detect the early onset of numerous chronic diseases, such as heart disease and various types of cancers, as well as to determine genetic predispositions for certain degenerative conditions. In addition, advances have been made in therapies to treat or slow the progress of chronic diseases, such as medication to control high cholesterol and hypertension. Second, medical research has uncovered lifestyle factors that are associated with certain diseases and chronic conditions. Thus public awareness of the ill effects of particular lifestyle behaviors—such as tobacco use, high-fat diets, and lack of exercise—has grown. An additional factor that has contributed to the improvement in old-age mortality is the establishment of Medicare in 1965. Virtually all Americans aged 65 years and older are eligible for health insurance coverage through the federally supported Medicare program, which covers hospitalization (Part A) and can also include medical insurance for outpatient and physician services (Part B).
Figure 4 shows the differences in life expectancy by sex for Americans through the 20th century. Throughout this period, life expectancy for females was higher than that for males. In 1900, females had a 2-year life expectancy advantage over males. The differential increased on a fluctuating basis up to 5.6 years in 1917 and 1918, after which it dropped to just 1 year in 1920. After fluctuating until about 1950, the differential steadily widened until it reached 7.8 years in the late 1970s. More recently, differentials in male and female life expectancy have been declining.
The small difference between male and female life expectancy at the beginning of the 20th century was due to females’ vulnerability to infectious diseases and high maternal mortality rates. At the beginning of the century the maternal mortality rate in the United States was much higher than that in European countries, but the U.S. rates include indirect causes of maternal death, such as pneumonia during pregnancy (Loudon 2000). The U.S. rates were highest between 1915 and 1920, peaking at 916 maternal deaths per 100,000 births in 1918 (Guyer et al. 2000). The influenza pandemic contributed to this high rate. Maternal mortality rates remained high in the 1920s and early 1930s, primarily because many women received no care or inappropriate care for birth complications or were subject to inappropriately performed medical interventions. Between 1936 and 1956, maternal mortality declined steeply, from 582 to 40 deaths per 100,000 live births (Guyer et al. 2000). The reduction in maternal deaths has been attributed to improvements in obstetrical care and the use of drugs to fight infection, as well as to the development of more appropriate methods for dealing with delivery problems in hospital births and the use of trained midwives for home births (Guyer et al. 2000; Loudon 2000).
The widening gap between female and male life expectancy later in the 20th century was due to increased male disadvantage in deaths due to heart disease and, to a lesser extent, cancer, in addition to the decline in maternal mortality (Nathanson 1984; Waldron 1993). Explanations for the female mortality advantage tend to be of two types: biological and behavioral/environmental (Nathanson 1984). For example, females have protective hormonal advantages regarding heart disease, and men are more likely to engage in risky behaviors or lifestyles that lead to higher death rates due to lung cancer, accidents, and other violent deaths (Waldron 1993). At the end of the 20th century, American males continued to have higher mortality rates, although the differentials are narrowing. In 1999, the age-adjusted mortality rates for males were higher than those for females for 12 of the top 15 causes of death. The mortality rates were similar for stroke and kidney disease, and females had higher death rates due to Alzheimer’s disease. The narrowing of the gap between males and females in terms of life expectancy was due to greater male improvements in mortality for heart disease, cancer, suicide, and homicide (Hoyert et al. 2001).
Throughout the 20th century there were large race differentials in mortality rates, although the trend was toward a reduction of these differences. Figure 5 shows the trends in life expectancy for whites and nonwhites from 1900 and for blacks from 1970 (when data on blacks began to be collected and estimated routinely for mortality statistics). In 1900, average life expectancy for whites was 47.6 years and that for nonwhites was 33.0 years, a difference of 14.6 years. By 1990, life expectancy for whites was estimated to be 76.1 years, whereas it was 71.2 years for all nonwhites (a 4.9-year difference) and 69.1 years for blacks (a 7.0-year difference). The black-white differential in 1999 was 5.9 years, and in 2000 black life expectancy was estimated to be 5.6 years less than that for whites. As these figures show, the black-white gap in life expectancy has been declining. The most recent declines have been due to greater improvements in black mortality rates for homicide, cancer, stroke, and HIV disease. However, the continued gap between whites and blacks in life expectancy exists because of excess mortality for blacks compared with whites in the top five leading causes of death, in addition to deaths due to homicide, hypertension, septicemia, kidney disease, and diabetes (Hoyert et al. 2001).
Part of the black-white life expectancy gap is also due to differences in survival during the first year of life. Birth certificates have recorded the race of the child since the early part of the 20th century. The infant mortality rate (IMR) is the number of deaths of infants before their first birthdays per 1,000 live births. Figure 6 shows the trend of black and white IMRs for the period 1916 to 2000. In 1916 there were more than 85 excess black infant deaths per 1,000 live births compared with white infant deaths. The differential has declined over the century, but there remains a disproportionate number of black infant deaths. Recent research has found that much of the black-white gap in infant mortality is due to differences between blacks and whites in terms of sociodemographic factors (such as income and mother’s education), maternal health, prenatal health care, and birth outcomes (such as low birth weight) (Hummer 1993; Hummer et al. 1999).
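As a minimal worked example of this measure (the figures below are hypothetical, not taken from Figure 6):

```latex
% Infant mortality rate: infant deaths (under age 1) per 1,000 live births (hypothetical figures)
\mathrm{IMR} = \frac{\text{deaths of infants under age 1 in a year}}{\text{live births in that year}} \times 1{,}000
  \qquad \text{e.g., } \frac{700}{100{,}000} \times 1{,}000 = 7.0
```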
Sex and race differentials in life expectancy at birth since 1970 show that white females have the longest life expectancy, hovering around 80 years since 1997. Life expectancy for black females is second highest, although the rates are converging with those of white males. Black males are the most disadvantaged in terms of life expectancy. The trend in life expectancy for black males stagnated and declined somewhat during the late 1980s and into the 1990s. Only in recent years has there been a marked improvement in black male life expectancy. The black male disadvantage is primarily due to excess deaths among youth, teens, and adults up to age 65, for whom the age-specific black male death rates are roughly double those for white males (Anderson 2001). In 1999, homicide and unintentional accidents were the first and second most common causes of death for black males in the age groups 15-19, 20-24, and 25-34.
War Casualties and Deaths
A separate topic affecting U.S. mortality trends not yet addressed is deaths associated with wars waged by Americans. Since the beginning of settlement in the New World, Americans have fought in wars and service members have died. Table 1 presents the estimated numbers of battle deaths associated with all U.S. wars from the American Revolution to the first Gulf War, as well as noncombat deaths of service members during wartime. The deadliest war, in terms of the proportion of combatants killed in battle, was the Civil War, in which Americans fought Americans: an estimated 6.34% of Union soldiers and 7.10% of Confederate soldiers died in combat.
Before the end of the 19th century, wars were fought without the benefit of much scientific preventive medical support or practice, because medical knowledge had not progressed regarding the understanding of germ theory (Bayne-Jones 1968). However, medical personnel did try to influence the health and well-being of troops. Because hundreds of soldiers died of smallpox during the first 2 years of the Revolutionary War, General George Washington and the Continental Congress decided that all new recruits should be inoculated against the disease, and smallpox deaths were greatly reduced. Washington, Dr. Benjamin Rush, and others emphasized the importance of cleanliness of person and environment, yet most soldiers did not heed their recommendations. Many died from diseases that were caused by filthy encampments and inadequate food and clothing. It has been estimated that for every soldier who died in combat in the Revolutionary War, 10 died from disease (Bayne-Jones 1968). The causes of death were primarily infectious and contagious diseases, including typhus, dysentery, and smallpox (until inoculation was enforced).
By the War of 1812 all U.S. soldiers were vaccinated against smallpox with the milder cowpox virus. However, throughout the 19th century many soldiers died during wartime from diseases such as typhus, dysentery, diarrhea, and pneumonia due to unsanitary camp conditions and unsafe food and water. It has been estimated that for every soldier who died in combat during the Mexican War, 7 died of disease, primarily dysentery (Bayne-Jones 1968).
During the Civil War, disease casualties were vast for both the Union and Confederate armies. As was the case in earlier wars, such casualties were primarily due to unsanitary conditions, inadequate food and clothing, and the soldiers’ lack of immunity to contagious diseases. Many disease-related deaths were caused by intestinal disorders, such as diarrhea, dysentery, and typhoid fever. Other causes of death included respiratory diseases, such as pneumonia and bronchitis, as well as measles (and secondary complications resulting from measles), smallpox (among volunteers who were not vaccinated), and tuberculosis. During the short Spanish-American War, close to 2,000 persons died from disease, primarily typhoid fever, compared with fewer than 400 deaths in battle (Bayne-Jones 1968).
The knowledge and practice of preventive medicine flourished at the beginning of the 20th century, resulting in better control of the personal and environmental conditions associated with the occurrence and spread of contagious and infectious diseases. Sanitary conditions and preventive health measures were much improved by World War I, yet even then the total number of soldiers’ deaths due to disease was equivalent to the total number of battle deaths, because of the worldwide influenza pandemic of 1918 at the end of the war.
Disaster Deaths
In addition to diseases and war, American mortality trends have been influenced by natural and human-made disasters. Table 2 presents selected data on deaths due to disasters in the United States. Hurricanes are an annual threat to the Gulf and East Coasts. The deadliest hurricane on record, which killed an estimated 6,000 to 8,000 people, struck Galveston, Texas, in 1900. Tornadoes cause many problems in the southern, central, and midwestern states. In 1884, an estimated 60 tornadoes struck from Mississippi to Indiana, causing 800 deaths. Earthquakes have also caused numerous deaths. The deadliest earthquake to date, which left 500 persons dead or missing, took place in San Francisco in 1906. The most devastating disaster the United States has experienced was the series of terrorist attacks on September 11, 2001, in which hijacked airplanes were crashed into the World Trade Center towers in New York City, the Pentagon in Arlington, Virginia, and a field near Shanksville, Pennsylvania. In all, close to 3,000 persons were killed.
Conclusion
Omran (1971) has formalized the patterned shifts in disease trends and causes of mortality, and the resulting impacts on life expectancy and population growth, in the theory of epidemiologic transition. This transition consists of three major stages: (a) “the age of pestilence and famine,” when mortality is high and fluctuating; (b) “the age of receding pandemics,” when mortality declines progressively as epidemics decrease in frequency and magnitude; and (c) “the age of degenerative and man-made diseases,” when mortality continues to decline and eventually approaches stability at a relatively low level (Omran 1982:172).
In the United States, the first stage of the epidemiologic transition extended until the middle of the 19th century. Up to that time, unpredictable and somewhat uncontrollable epidemics were the major causes of death. Other causes of death and disease included parasitic and deficiency diseases, pneumonia, diarrhea and malnutrition complexes among children, and tuberculosis and childbirth complications among women. The early phase of the second stage was characterized by fewer peaks and fluctuations in mortality rates, although mortality levels remained quite elevated. Infant and childhood mortality rates remained high, as did mortality for female adolescents and women of childbearing age. As this stage continued, death rates began a steady decline. Declines in U.S. mortality rates occurred in the late 1800s. Through the 20th century, improvements were first gained in the reduction of infectious disease deaths. Later, maternal mortality rates as well as infant mortality rates progressively fell, and reductions also were realized in childhood mortality rates.
The third stage reflects even further improvements in survivorship, especially in the advanced older ages. Infant mortality rates drop, and childhood mortality accounts for less than 10% of total mortality, whereas deaths to persons over the age of 50 represent at least 70% of total deaths. The U.S. entered the third stage during the early part of the 20th century. The epidemiologic transition favors the young over the old and females over males. Survival chances improve markedly for children of both sexes and for females through their childbearing ages. By the third stage, the female age-specific mortality risks are lower than those for males for all ages. An outcome of these mortality differentials is a further imbalance in the sex ratio, with fewer males than females, especially at the older ages.
A question that remains is whether the United States has entered a fourth stage of the epidemiologic transition, with further declines in deaths due to chronic diseases and continued increases in life expectancy, particularly at the older ages. Omran did not anticipate such dramatic mortality improvements in his original framework.
An additional factor that possibly points to a new stage is the increasing presence of infectious diseases. There are unprecedented numbers of new infectious diseases (e.g., Legionnaires’ disease, HIV/AIDS, Ebola virus, and severe acute respiratory syndrome, or SARS), infections such as tuberculosis and malaria are reemerging, and some reemerging pathogens are generating antimicrobial-resistant strains (Barrett et al. 1998). Factors that have contributed to the recent rise and spread of infectious diseases include increased urbanization and crowding, greater travel to Third World countries and remote areas, global climate changes, and the overuse of antibiotics and pesticides (Olshansky et al. 1997). The escalation of infectious diseases is a growing concern at both the national and global levels because the long-term impacts of this trend are unknown.