The Social and Cultural Construction of American Childhood

Steven Mintz. Handbook of Contemporary Families. Editors: Marilyn Coleman & Lawrence H. Ganong. Sage Publications. 2004.

Childhood is not an unchanging, biological stage of life, and children are not just “grow’d,” like Topsy in Harriet Beecher Stowe’s Uncle Tom’s Cabin. Rather, childhood is a social and cultural construct. Every aspect of childhood, including children’s relationships with their parents and peers, their proportion of the population, and their paths through childhood to adulthood, has changed dramatically over the past four centuries. Methods of child rearing, the duration of schooling, the nature of children’s play, young people’s participation in work, and the points of demarcation between childhood, adolescence, and adulthood are products of culture, class, and historical era (Heywood, 2001; Illick, 2002).

Childhood in the past was experienced and conceived of in quite a different way than today. Just two centuries ago, there was far less age segregation than there is today and much less concern with organizing experience by chronological age. There was also far less sentimentalizing of children as special beings who were more innocent and vulnerable than adults. This does not mean that adults failed to recognize childhood as a stage of life with its own special needs and characteristics. Nor does it imply that parents were unconcerned about their children and failed to love them and mourn their deaths. Rather, it means that the experience of young people was organized and valued very differently than it is today.

Language itself illustrates shifts in the construction of childhood. Two hundred years ago, the words used to describe childhood were far less precise than those we use today. The word infancy referred not to the months after birth but to the period in which children were under their mother’s control, typically from birth to age 5 or 6. The word childhood might refer to someone as young as 5 or 6 or as old as the late teens or early 20s. The word was vague because chronological age was less important than physical strength, size, and maturity. Instead of using our term adolescence or teenager, Americans two centuries ago used a broader and more expansive term, youth, which stretched from the preteen years until the early or mid-20s. The vagueness of this term reflected the amorphousness of the life stages. A young person did not achieve full adult status until marriage and establishment of an independent farm or entrance into a full-time trade or profession. Full adulthood might be attained as early as the mid- or late teens but usually did not occur until the late 20s or early 30s (Chudacoff, 1989; Kett, 1977).

How, then, has childhood changed over the past 400 years? The transformations might be grouped into four broad categories. The first involves shifts in the timing, sequence, and stages of growing up. Over the past four centuries, the stages of childhood have grown much more precise, uniform, and prescriptive. Demography is a second force for change. A sharp reduction in the birthrate has more rigidly divided families into distinct generations and allowed parents to lavish more time, attention, and resources on each child. A third major shift involves the separation of children from work, the equation of childhood with schooling, and the increasing integration of the young into a consumer society. The fourth category is attitudinal. Adult conceptions of childhood have shifted profoundly over time, from the 17th-century Puritan image of children as depraved beings who needed to be restrained, to the Enlightenment notion of children as blank slates who could be shaped by environmental influences, to the Romantic conception of children as creatures with innocent souls and redeemable, docile wills, to the Darwinian emphasis on highly differentiated stages of children’s cognitive, physiological, and emotional development, to the Freudian conception of children as seething cauldrons of instinctual drives, to contemporary notions that emphasize children’s competence and capacity for early learning.

Caricature and History

Our images of childhood in the past tend to be colored by caricature and stereotype. According to one popular narrative, the history of childhood is essentially a story of progress: a movement away from ignorance, cruelty, superstition, and authoritarianism to enlightenment and heightened concern. From a world in which children were treated as little adults, grown-ups only gradually came to recognize children as having special needs and a unique nature. In this Whiggish narrative, the New England Puritans play a pivotal role. These zealous Calvinist Protestants serve as the retrograde symbols of a less enlightened past.

The Puritans are easily caricatured as an emotionally cold and humorless people who regarded even newborn infants as embodiments of sin, who terrorized the young with threats of damnation and hellfire, and who believed that the chief task of parenthood was to break children’s sinful will. Yet if we are to truly understand the changes that have occurred in young people’s lives, it is essential to move beyond the misleading stereotype of stern Puritan patriarchs bent on crushing children’s will through fear of damnation. The fact is that the Puritans were among the very first people in the Western world to seriously and systematically reflect on the nature of childhood. Surviving evidence suggests that many Puritan parents cherished their children, expressed deep concern for their salvation, and were convinced that the prospects of their movement ultimately depended on winning the rising generation’s minds and souls.

The Puritans did, however, conceive of children very differently than we do today. They regarded young children as “incomplete adults” who were dangerously unformed, even animalistic, in their inability to speak and their impulse to crawl and who needed to be transformed as rapidly as possible into speaking, standing human beings. This conception of childhood shaped Puritan child-rearing practices. Instead of allowing infants to crawl on the floor, they placed them in “walking-stools” or “go-carts” similar to modern-day “walkers.” They placed even very young girls in corsets to make them appear more adultlike, and for the same reason they sometimes placed a rod along the spine of children of both sexes to ensure an erect posture (Calvert, 1992).

Though Puritan parents and ministers expressed concern about sin, their primary stress was not on repressing immorality but rather on instilling piety within children and promoting their spiritual conversion. Surviving evidence suggests that Puritan American parents were in closer and more constant contact with their children and interacted more often and more deeply with them than their counterparts in Europe. When the Puritans did seek to combat childish and youthful sins, they relied less on physical punishment than on communal oversight, education, work, and an unusually intense family life. Instead of emphasizing physical punishment and external constraints, Puritan parents stressed internalized restraints as they sought to instill within their children feelings of unworthiness and of vulnerability to divine judgment. Within their home, Puritan children were taught the necessity of recognizing their sinfulness and striving for repentance and salvation (Demos, 1970; Morgan, 1966).

Until surprisingly recently, it was taken for granted that 17th-century New England did not recognize a stage of development comparable to modern adolescence and lacked anything remotely resembling a modern youth culture. We now know that this view is wrong. Contrary to the older notion that the ready availability of land and a shortage of labor accelerated the passage to adulthood, it is clear that childhood in New England, as in old England, was followed by a protracted transitional period of “semidependence” called youth. Not wholly dependent on their parents for economic support, yet not in a position to set up an independent household, young people moved in and out of their parental home and participated in a distinctive culture of their own. As early as age 7 or 8 (but usually in their early teens), many young people lived and worked in other people’s homes as servants or apprentices.

Diversity in Colonial America

Diversity from one region to another was a hallmark of life in colonial America. New England was extremely healthy and stable; Maryland and Virginia were deadly and chaotic. New England was settled primarily by families, the Chesapeake region predominantly by single young males in their teens and 20s.

In 17th-century New England, a healthful environment and a balanced sex ratio encouraged the establishment of households that were more stable and patriarchal than in England itself. Plentiful land allowed early New Englanders to maintain inheritance patterns that kept children close to home. The church and community strongly reinforced paternal authority, and fathers intervened in their offspring’s lives even after their offspring had reached adulthood, for example, by having legal authority to consent, or deny consent, to marriages.

In the colonial Chesapeake, in contrast, a high mortality rate and a sharply skewed sex ratio inhibited the formation of stable families until the late 17th or 18th centuries. Stepparents, stepchildren, stepsiblings, and orphans were common, and parents rarely lived to see their grandchildren. The defining experience in the lives of immigrant youth in the 17th-century Chesapeake region was the institution of indentured servitude.

Approximately three quarters of English migrants to the Chesapeake arrived as indentured servants, agreeing to labor for a term of service, usually 4 to 7 years, in exchange for passage and room and board. Before 1660, approximately 50,000 English immigrants arrived in Virginia. Most were young, single, and male, usually in their late teens or early 20s. More than half of all indentured servants died before their term of service expired. In England, service was often described as a familylike relationship in which servants were likened to children. In the Chesapeake, in contrast, servants were treated more like chattel property, at risk of physical abuse and disease (Mintz & Kellogg, 1988).

In the Middle Colonies of New York, New Jersey, Pennsylvania, and Delaware, and particularly among the Quakers, conceptions of family and childhood first arose that anticipated those characteristic of the 19th and 20th centuries. Unlike households in New England and the Chesapeake, which often sheltered servants, apprentices, laborers, and other dependents within their walls and which often lived in close proximity to extended relatives, households in the Middle Colonies tended to be private families, consisting exclusively of a father, mother, and children bound together by ties of affection. No longer essentially a unit of production or a vehicle for transmitting property and craft skills, the private family was regarded as an instrument for educating children and forming their character. Quaker parents, in particular, emphasized equality over hierarchy, gentle guidance over strict discipline, and relatively early autonomy for children (Levy, 1988).

Far from being a static or slowly changing era, the colonial period was a time of dynamic and far-reaching changes. Even before the American Revolution, the progression of young people toward adulthood had grown increasingly problematic. In the face of rapid population growth, there was no longer sufficient land to sustain the older distribution of social roles within local communities. Economic, demographic, and other social changes weakened parental and communal control over the young, a development manifest in a dramatic increase in rates of illegitimacy and of pregnancies contracted before marriage. The rate of premarital pregnancy (measured by brides who bore children less than 8 months after marriage) rose from under 10% of first births at the end of the 17th century to about 40% by the late 18th century. At the same time, the older system of indentured servitude declined. By the end of the 18th century, apprenticeship, instead of being a system of training in a craft skill, was becoming a source of cheap labor. Choice of a vocation was becoming more problematic, and the teen years were becoming a period of increasing uncertainty (Greven, 1970; Rorabaugh, 1986).

The Late-18th- and Early-19th-Century Paradox: The Sentimentalization and Regimentation of Childhood

The Enlightenment and the Romantic movement produced two new conceptions of childhood and youth that were to have vast repercussions for the future. From the Romantic movement came a conception of the child as a symbol of organic wholeness and spiritual vision, a creature purer and more sensitive and intuitive than any adult. From the Enlightenment came a celebration of youth as a symbol of innovation and social change, an image captured in Eugene Delacroix’s painting Liberty Leading the People, in which a pistol-waving boy charges beside the female figure of Liberty (Coveney, 1967; Gillis, 1974). The 19th-century urban middle class would simplify and popularize these notions, depicting the child in terms of asexual innocence, sinless purity, and vulnerability and as a symbol of the future. This attitude could be seen in the practice of dressing young girls and boys in identical asexual smocks or gowns and leaving young boys with long curls. It was also evident in a mounting effort to shelter young people from the contamination of the adult world (Calvert, 1992; Ryan, 1981).

But the urban middle class also embraced a somewhat contradictory set of values. If childhood was to be a carefree period of play, it was also to serve as a training ground for adulthood. By the early 19th century, the process of socialization had grown problematic in new ways. Unable to transmit their status position directly to their children by bequeathing them family lands, middle-class parents adopted a variety of new strategies to assist their children. They sharply reduced their birthrates, which fell from an average of 7 to 10 children in 1800 to 5 in 1850 and just 3 in 1900. Reduced birthrates, accomplished through a mixture of abstinence and coitus interruptus, supplemented by abortion, allowed parents to devote more attention and resources to each child’s upbringing. Meanwhile, industrialization and urbanization took the urban middle-class father out of the home, leaving the mother in charge, a situation that led to new child-rearing techniques. There was a greater emphasis on the maternal role in shaping children’s moral character and on the manipulation of guilt and the withdrawal of affection rather than physical coercion. At the same time, young people’s residence in the parental home became more prolonged and continuous, usually lasting until their late teens or early 20s. Perhaps the most important development was the emergence, beginning in Massachusetts in the 1830s, of a new system of public schooling emphasizing age-graded classes, longer school terms, and a uniform curriculum (Cott, 1977; Reinier, 1996).

The Persistent Importance of Class, Ethnicity, Gender, and Region

During the 19th century, there was no single, common pattern of childhood. Rather, there was a multiplicity of childhoods, differentiated by class, ethnicity, gender, and geographical location. Indeed, at no point in American history was childhood more diverse than in the mid- and late 19th century. Gender, ethnicity, race, region, and, above all, social class helped determine the length of childhood, the duration of schooling, the age of entry into work, and even the kinds of play that children took part in, the toys they acquired, and the books they read (Clement, 1997).

The children of the urban middle class, prosperous commercial farmers, and southern planters enjoyed increasingly long childhoods, free from major household or work responsibilities until their late teens or 20s, whereas the offspring of urban workers, frontier farmers, and blacks, both slave and free, had briefer childhoods and became involved in work inside or outside the home before they reached their teens. Many urban working-class children contributed to the family economy by scavenging in the streets, vacant lots, or back alleys, collecting coal, wood, and other items that could be used at home or sold. Others took part in the street trades, selling gum, peanuts, and crackers. In industrial towns, young people under the age of 15 contributed on average about 20% of their family’s income. In mining areas, boys as young as 10 or 12 worked as breaker boys, separating coal from pieces of slate and wood, before becoming miners in their mid- or late teens. On farms, children as young as 5 or 6 might pull weeds or chase birds and cattle away from crops. By the time they reached the age of 8, many tended livestock, and as they grew older they milked cows, churned butter, fed chickens, collected eggs, hauled water, scrubbed laundry, and harvested crops. A blurring of gender roles among children and youth was especially common on frontier farms (Clement, 1997; Nasaw, 1985; Stansell, 1986).

Schooling varied as widely as did work routines. In the rural North, the Midwest, and the Far West, most mid- and late-19th-century students attended one-room schools for 3 to 6 months a year. In contrast, city children attended age-graded classes taught by professional teachers 9 months a year. In both rural and urban areas, girls tended to receive more schooling than boys (Clement, 1997).

Child Protection

As early as the 1820s, middle-class reformers were shocked by the sight of gangs of youths prowling the streets, young girls selling matchbooks on street corners, and teenage prostitutes plying their trade in front of hotels or in theaters’ third tiers. Concerned about the “deviant” family life of the immigrant and native-born working classes, reformers adopted a variety of strategies to address problems of gangs, juvenile delinquency, and child abuse, neglect, and poverty in the nation’s rapidly growing cities. Child savers, as these youth workers and reformers were known, were determined to save children from the moral and physical dangers of city streets, poverty, and parental ignorance and abuse.

During the 19th century, reformers’ interest in improving the well-being of American children led them to create age-specific programs and institutions for delinquent, disabled, and dependent young people, from public schools, orphan asylums, and foundling homes to YMCAs, children’s hospitals, and reform schools. Child savers tended to believe that moral training would alleviate many of the problems associated with urbanization, industrialization, and immigration. The number of institutions that the child savers created is astounding. By 1900, a directory of public and private charities for young people and their families was 620 pages long. Meanwhile, African Americans, Irish and Italian Catholics, eastern European Jews, and other ethnic groups established their own child-saving institutions and agencies, rooted in their own needs and attitudes toward childhood, poverty, and relief. African Americans devised an informal system of adoption that was later institutionalized within the black community itself. Concern with children was a trans-Atlantic phenomenon. Indeed, the inspiration for such innovations as tax-supported public schools and kindergartens (the first of which appeared in St. Louis in 1873), and for such reform movements as raising age-of-consent laws, came from abroad. Britain enacted at least 79 statutes on the subject of child welfare and education between 1870 and 1908 (Ashby, 1997).

Child saving, a phrase initially associated with Elbridge Gerry’s New York Society for the Prevention of Cruelty to Children (founded in 1874), evolved through a series of overlapping phases. The first phase, which we might term “child rescue,” began during the 1790s and involved the establishment by private philanthropists of congregate institutions for poor, dependent, and delinquent children, ranging from Sunday schools and orphan asylums to houses of refuge and reform schools. Instead of assisting poor children in their own homes or relying on a system of indentures to handle problems of delinquency, dependency, and poverty, reformers generally viewed congregate institutions as the most effective and cost-efficient way to address problems that were more visible in an urban setting. These institutions sought to instill “middle class values and lower class skills” (Platt, 1977, p. 176), teaching inmates to internalize the values of order and self-discipline while instructing male inmates in manual skills and simple crafts and female inmates in knitting, sewing, and housework (Ashby, 1997; Holloran, 1989).

The child-saving agencies, which were generally staffed by untrained, underpaid caretakers or political appointees, tended to blur the distinction between dependent children, delinquents, and potential delinquents. These institutions, which served both as schools and as prisons, adopted expansive definitions of delinquency, including acts that would not be crimes if committed by adults, such as truancy, incorrigibility, disobedience, and running away. Definitions of deviance were significantly influenced by gender. Females, but not males, were institutionalized for sexual promiscuity. Fixated on urban problems, the institutions tended to neglect rural children, who were frequently confined with adults in county poorhouses. Even when institutions were built in rural areas, the inmates were overwhelmingly urban in origin. It is important, however, to recognize that these new institutions were not simply imposed upon the poor. Parents in poverty often used these institutions for their own purposes. In times of crisis, orphan asylums served as temporary boardinghouses (Ashby, 1997; Brenzel, 1983; Schneider, 1992).

This first phase of child saving set important precedents for the future. It established two crucial legal principles: “the best interests of the child,” an expansive concept that gave judges broad discretion to make decisions regarding children’s custody, and parens patriae, the legal doctrine that gives the government authority to serve as a child’s guardian. The doctrine of parens patriae had two sides. On the one hand, it held that the state had the legal authority to intervene in cases where families had failed, and it therefore mandated public intervention to rectify parental failure, abuse, or neglect. On the other hand, the doctrine implied that the state should intervene only in the most extreme instances of abuse or neglect. This meant that in the interest of respecting family privacy, public authorities would ignore all but the most severe problems (Sutton, 1988).

By the 1850s, a reaction against congregate institutions had begun to set in as the prisonlike character of asylums and orphanages became increasingly self-evident. A number of reformers responded by creating smaller, more familylike “cottages,” while the new children’s aid societies inaugurated a program of orphan trains, fostering out poor city children to farm families in the Midwest and later in the Far West. By 1929, the aid societies had transported over 200,000 children and adolescents from eastern cities. Driven by a mixture of charitable and economic motives, the aid societies hoped to remove poor children from pernicious urban influences and supply workers for labor-short rural areas. Despite efforts to find alternatives to the institutional care of dependent and delinquent children, congregate institutions continued to dominate the care of destitute, delinquent, dependent, and disabled children well into the 20th century. Many working-class and immigrant parents were simply unable to maintain their children during periods of crisis. In addition, it proved impossible to place many infants or sick or disabled children in foster homes. In some cases, children voluntarily returned to institutions after harrowing experiences in foster homes (Ashby, 1997; Holt, 1992; O’Connor, 2001).

A second phase of reform, which we might call “child protection,” was sparked by the “discovery” of child abuse in the 1870s, when doctors, crusading journalists, humanitarian reformers, and urban elites began to turn their attention to the problems of children of the immigrant poor. Patterned after reformers’ earlier campaigns against cruelty toward animals, these campaigns sought to rescue innocent children who had been mistreated or abandoned. Concern with child abuse led to investigation of other forms of mistreatment, such as the phenomenon of “baby farming,” the practice of sending unwanted infants off to boarding homes where they were badly neglected or simply allowed to die. Other abuses that aroused the concern of child protectors were claims that children were being murdered for insurance money and that young girls were being sold into the “white slave trade” of prostitution. As early as the 1870s, child protectors campaigned to move children out of poorhouses (which often meant separating children from their parents). In subsequent years, reformers called for regulating or abolishing baby farming, raising the age of consent for sexual relations, and establishing day nurseries for working mothers. Societies for the Prevention of Cruelty to Children played a crucial role in the expansion of state power to regulate families. Even though they were private agencies, they were granted authority to search homes to investigate suspected cases of abuse and to remove children from their parents. The “Cruelty” was often accused of breaking up poor families on flimsy grounds, but one of the most striking findings of recent scholarship is that much of the demand for state intervention came from family members themselves. Initially led by gentleman amateurs, the child protection organizations gradually came under the administration of professional middle-class female social workers (Gordon, 1988; Pleck, 1987).

The third and most far-reaching phase in the history of child welfare is associated with the Progressive Era. Beginning in the 1890s, professional charity and settlement house workers, educators, penologists, and sociologists called for expanded state responsibility and professional administration to assist dependent and delinquent children. They championed campaigns against child labor and for compulsory education laws, juvenile courts, kindergartens, the playground movement, and public health measures to reduce child mortality. They sought to keep poor children with their parents and out of massive, regimented, ineffective institutions, off crowded and perilous streets, and away from exploitative and dangerous sweatshops, mines, and factories (Macleod, 1998; Tiffin, 1982).

An especially important constituency for child welfare initiatives came from activist women organized into clubs and federations throughout the country. Among the first women to attend college, these clubwomen organized at the grassroots level and succeeded in winning laws limiting work hours for women; establishing a federal Children’s Bureau; and enacting laws, subsequently overturned by the courts, abolishing child labor. Less suspicious of government corruption than easterners, midwesterners, such as Nebraska-born Grace and Edith Abbott, Homer Folks from Michigan, and Edwin Witte of Wisconsin, took the lead in pressing for a federal role in child welfare. The 1909 White House Conference on Children and establishment of the U.S. Children’s Bureau in 1912 demonstrated that child welfare had become a national concern.

Some of the Progressive Era’s greatest successes involved children’s health, aided by the discovery of the germ theory of disease and the pasteurization of milk. In New York City, the infant death rate fell from 144 per 1,000 in 1908 to less than 50 per 1,000 by 1939. Deaths from infectious disease also dropped sharply. In New York, the mortality rate from tuberculosis dropped 61% between 1907 and 1917 (Lindenmeyer, 1997; Prescott, 1998).

Another significant advance involved enactment of mothers’ pensions, which allowed impoverished mothers to care for children in their own homes. Illinois adopted the first mothers’ pension law in 1911; 8 years later, 39 states and the territories of Alaska and Hawaii had enacted similar laws. However, the mothers’ pension laws discriminated against mothers of color and provided benefits only to women deemed respectable. Benefits were so inadequate that most recipients had to supplement them with wage labor. The Social Security Act of 1935 added federal funds to the state programs. But the programs remained stingy and stigmatizing, as aid was means tested and morals tested. Recipients of aid were required to prove that they were completely impoverished; they were also subject to surprise visits to ensure their respectability. A short-lived achievement was enactment of the Sheppard-Towner Act, which disseminated information about pre- and postnatal care for mothers and infants and provided home visits for mothers, mainly in rural areas. This latter triumph was reversed by the end of the 1920s by opposition from the medical profession and the Public Health Service, which resented encroachment into their domains. Child labor was finally abolished by the Fair Labor Standards Act of 1938 (Lindenmeyer, 1997; Tiffin, 1982).

One of the most lasting achievements of the Progressive Era child savers was the creation of the modern juvenile justice system. A loss of confidence in the ability of reform schools to rehabilitate youthful offenders, combined with a Progressive faith in the ability of professionals to assist young people, inspired the creation of juvenile courts and the probation system. In the new juvenile justice system, youthful offenders were treated as delinquents rather than as criminals; proceedings took place in private, without a jury or a transcript; and a juvenile court judge had the discretion to commit delinquents to an institution or to probation for the remainder of their childhood and adolescence. What was most distinctive about the juvenile court was its emphasis on probation and family-centered treatment. The alleged purpose of the juvenile court was to protect and rehabilitate youthful offenders, not to punish them. But because these were not viewed as adversarial proceedings, due process protections did not apply. In addition, young people could be brought before juvenile courts for status offenses that would not be crimes for adults. In most jurisdictions, juvenile offenders were not entitled to receive prior notice of charges against them, to be protected against self-incrimination, to cross-examine witnesses, or to have an attorney defend them (Getis, 2000).

Following World War I, emphasis on child saving declined. Attention was directed away from the economic threats to children’s welfare and focused instead on individual psychology. Child guidance clinics sought to address the problems of maladjusted, rebellious, and predelinquent children. The declining emphasis on children’s welfare reflected a variety of factors: the waning influence of the women’s movement; social workers’ embrace of psychoanalytic theories that emphasized individual adjustment; and the Great Depression and World War II, which diverted public attention away from children’s issues. One positive effect of this shift in focus was that it challenged an earlier overemphasis on eugenics and encouraged a recognition of the importance of the environmental factors influencing children’s development. Institutions like the Iowa Child Welfare Research Station, the world’s first institute to conduct scientific research on children’s development, and rival centers at Berkeley, Minnesota, and Yale demonstrated the decisive importance of a child’s experiences during the first years of life, helping to pave the way for early childhood education programs such as Head Start (Cravens, 1993; Jones, 1999).

Child saving raises difficult issues of evaluation. The child savers have been accused of class bias, discrimination against single mothers, imposition of middle-class values on the poor, and confusion of delinquency and neglect with survival strategies adopted by the poor. There can be no doubt that many of the policies adopted by the child savers broke up families and criminalized behavior that had not been regarded as illegal in the past. Meanwhile, there was a marked disjuncture between the reformers’ heady aspirations and their actual achievements. Hobbled by legislative stinginess, many of their reforms had negligible effects (Hawes, 1991).

Universalizing Middle-Class Childhood

A revolution in the lives of young people began toward the end of the 19th century. Among this revolution’s defining characteristics were the development of more elaborate notions of the stages of childhood development, prolonged education, delayed entry into the workforce, and the increasing segregation of young people in adult-sponsored, adult-organized institutions ranging from junior high and high schools to the Boy and Girl Scouts (Macleod, 1983, 1998). Partly the result of demographic and economic developments, which reduced the demand for child labor and greatly decreased the proportion of children in the general population (from half the population in the mid-19th century to a third by 1900), these changes also reflected the imposition by adults of new structures on young people’s lives as well as a new conception of children’s proper chronological development (Kett, 1977).

Before the Civil War, young people moved sporadically in and out of their parental home, schools, and jobs, an irregular, episodic pattern that the historian Joseph F. Kett (1977) termed “semidependence.” A young person might take on work responsibilities, within or outside the home, as young as age 8 or 9 and enter an apprenticeship around age 12 or 14, returning home periodically for briefer or longer periods. The teen years were a period of uncertain status and anxiety marked by a jarring mixture of freedom and subordination. During the second half of the 19th century, there were heightened efforts to replace unstructured contacts with adults with age-segregated institutions. Lying behind this development was a belief that young people would benefit from growing up with others their own age; that youth should be devoted to education, play, and character-building activities; and that maturation should take place gradually, inside a loving home and segregated from adult affairs. Urbanization also contributed to this development, as more same-aged children congregated in cities. Universalizing the middle-class pattern of childhood was the product of protracted struggle. Not until the 1930s was child labor finally outlawed (by the Fair Labor Standards Act of 1938), and not until the 1950s did high school attendance become a universal experience (Kett, 1977; Macleod, 1983; Storrs, 2000; Trattner, 1970).

The impact of this revolution in young people’s lives remains in dispute. On the positive side, it greatly expanded educational opportunities and reduced the exploitation of children in factories, mines, and street trades. But this revolution also entailed certain costs. The adult-organized institutions and organizations that were developed around the turn of the 20th century promoted norms of conformity and anti-intellectualism and made it more difficult for young people to assert their growing maturity and competence outside the realm of sports. More uniform and standardized age norms also made it more difficult for those who could not adapt to a more structured sequence of growing up. Ironically, the creation of adult-organized institutions allowed young people to create new kinds of youth cultures that were at least partially free of adult control and supervision (Graebner, 1990; Kett, 1977).

In the wake of the Darwinian revolution, educators, psychologists, church workers, youth workers, and parents themselves began to pay increasing attention to the stages of children’s physiological and psychological development and to develop new institutions that were supposed to meet the needs of young people of distinct ages. Before the Civil War, and especially before the 1840s, chronological age was only loosely connected to young people’s experience. Physical size and maturity were more important than a young person’s actual age. Schools, workplaces, young men’s organizations (such as volunteer military and fire companies and literary or debating societies), and even colleges contained young people of widely varying ages. But beginning with the establishment of age-graded school classrooms in the 1840s and 1850s, age consciousness intensified. During the mid- and late 19th century, there was a growing concern with the proper chronological development of young people and a growing abhorrence of “precocity.” One of the first signs of this shift in attitude took place in the 1830s and 1840s, when children as young as 3, 4, 5, and even 6 were expelled from public schools, a development partially reversed in the late 1880s with the first public funding of kindergartens (Chudacoff, 1989; Kett, 1977).

A key contributor to the heightened sensitivity to age was the educational psychologist G. Stanley Hall. His survey of children entering Boston schools in 1880 inspired the “child study” movement, which encouraged teachers and parents to collect information about the stages of child development. The years surrounding puberty were singled out for special attention. The publication in 1904 of Hall’s two-volume work Adolescence would give reformers, educators, and parents not only a label but also an explanation for the unique character of this age group. Hall argued that children recapitulated the stages of evolution of the human race, from presavagery to civilization. To become happy, well-adjusted adults, children had to successfully pass through each of these stages. Adolescence, between 13 and 18 years of age, was particularly crucial. “The dawn of puberty,” Hall wrote, “is soon followed by a stormy period when there is a peculiar proneness to be either very good or very bad.” A period of awkwardness and vulnerability, adolescence was a time not only of sexual maturation but also of turbulent moral and psychological change (Kett, 1977; Ross, 1972).

Even before Hall popularized the term adolescence, religious, health, and educational concerns had led reformers to focus on the years surrounding puberty. Fearful that the early teen years brought a falling away from religious faith, religious groups placed a new emphasis on rituals that coincided with the onset of puberty, such as confirmation and the Jewish bar mitzvah. Adult-sponsored organizations emphasizing “muscular Christianity,” such as Christian Endeavor, the Epworth League, and the YMCA, were founded or expanded to meet young people’s health and religious needs (Macleod, 1983; Putney, 2001).

By the end of the 19th century, anxieties about the teen years had further intensified. The term adolescence was no longer merely descriptive; it had become prescriptive. Worries about precocious sexual activity among girls led reformers, beginning in the 1880s, to lobby to raise the age of consent and to stringently enforce statutory rape laws. Concern that young people would be contaminated by exposure to commercial entertainments at night led many cities to adopt curfews. Fear that young men who entered the workforce too soon, without adequate schooling, would find themselves stuck in dead-end jobs stimulated demand for child labor legislation, a demand that received support from labor unions concerned about the substitution of teen laborers for adult workers (Kett, 1977; Macleod, 1998; Odem, 1995).

In 19th-century working-class and farm families, children were valuable contributors to the family economy, whereas urban middle-class children were increasingly sheltered from the world of work. Beginning in the late 19th century, social reformers demanded that the protections for middle-class children apply to working-class and immigrant children as well. All children, regardless of their family’s circumstances, had a right to an education and a safe and protected childhood. Children were transformed from objects of utility into objects of sentiment (Zelizer, 1994). Bitter political and legal battles erupted over what a childhood should be. Whereas many poor and immigrant parents clung to the notion that children should be economically useful, middle-class child savers saw this as exploitation of children. The organized working class had long been opposed to child labor, but certain employers, especially southern mill owners, fostered the view that work was good for children (Macleod, 1998).

During the 20th century, the process of growing up gradually grew more uniform. All young people, irrespective of class, ethnicity, gender, and region, were expected to pass through the same institutions and experiences at roughly the same age. Perhaps the most important development was the transformation of the high school from an institution for college preparation for the few to one preparing all young people for life (Chudacoff, 1989; Macleod, 1998).

Children as Active Agents

Children are not passive recipients of the broader culture. They are adaptive within the limits of the environment in which they find themselves. Sometimes young people have exhibited a collective power that is remarkable. In 1899, newsboys in New York formed a union and staged a largely successful strike against William Randolph Hearst and Joseph Pulitzer (Nasaw, 1985).

In the late 19th century, the children of the “New Immigrants” from eastern and southern Europe often served as crucial cultural intermediaries. Age relationships within families were often inverted, as young people, who often picked up English and American customs more easily than their parents, helped negotiate relationships between their families and landlords, employers, and government bureaucrats (Berrol, 1995).

In the 20th century, young people repeatedly served as a cultural avant-garde, playing a pivotal role in the process of cultural change. The development of dating, which began to appear in the 1910s, illustrates the ability of young people to create a culture apart from that imposed by adults. Around the same time, working-class children and youth quickly embraced the expanding world of commercial entertainment (penny arcades, movies, and amusement parks), soon making up a large share of the audience for commercial amusements. At least since the 1920s, the teen years have often been a period of intercultural mixing, as young people have absorbed and revised clothing and musical styles from groups across ethnic and class lines (Peiss, 1986).

Finally, throughout the 20th century young people played political roles that have often been forgotten or marginalized. During the late 1950s and 1960s, young people stood at the forefront of efforts to integrate public schools and to protest the Vietnam War. Though we generally associate the student protests of the 1960s with college students, there were massive demonstrations, involving tens of thousands of high school students, demanding more equitable funding of public education, bilingual education, a more relevant curriculum, and smaller classes.

Children’s Rights

During the turbulent 1960s and early 1970s, several influential social critics such as Edgar Z. Friedenberg, Paul Goodman, Jules Henry, and Kenneth Keniston argued that postwar society’s methods of child rearing and socialization interfered with young people’s central developmental tasks. The postwar young grew up in a world of contradictions. Middle-class society valued independence but made the young dependent on adults to fulfill their needs; it stressed achievement but gave the young few avenues in which to achieve. By adhering to a romantic view of childhood innocence, middle-class society denied young people their freedom and their rights. American society had segregated children into a separate category and failed to recognize their growing competence and maturity; in its concern for protecting childhood innocence, it had limited the responsibilities given to young people and punished them severely when they fell from that state of innocence (Holt, 1975).

The concept of children’s rights was one of the most significant outgrowths of the liberation struggles of the 1960s. It was also a product of a significant demographic development, the postwar baby boom, which dramatically altered the ratio of adults to children, shifting cultural influence to the young. The phrase children’s rights was not new. As early as 1905, the Progressive Era reformer Florence Kelley asserted that a right to childhood existed. During the late 1940s, a number of books invoking the phrase appeared. But in general, the phrase involved enumerating children’s needs, such as a right to an education, a right to play, and a right to be loved and cared for. These early defenses of children’s rights emphasized children’s vulnerable status and their need for a nurturing environment, and their proponents wanted to permit the state to assume a broader role in intervening in families in cases of need (Hawes, 1991).

Advocates of children’s rights during the 1960s and 1970s had a different goal in mind. They wanted to award minors many of the same legal rights as adults, including the right to make certain medical or educational decisions on their own and a right to have their voice heard in decisions over adoption, custody, divorce, termination of parental rights, or child abuse. The Supreme Court rulings in the 1969 Tinker case, which guaranteed students the right to free speech and expression, and the 1967 case In re Gault, which granted young people certain procedural rights in juvenile court proceedings, marked the beginning of a legal revolution in the rights of children. A major arena of legal conflict involved the explosive issue of teenage sexuality. The most controversial issue was whether minors would be able to obtain contraceptives or abortions without parental consent. In Carey v. Population Services International (1977), the Supreme Court invalidated a New York law that prohibited the sale of condoms to adolescents under 16, concluding that the “right to privacy in connection with decisions affecting procreation extends to minors as well as adults.” The Court held that the state interest in discouraging adolescents’ sexual activity was not furthered by withholding from them the means to protect themselves.

In subsequent cases, courts struck down state laws requiring parental notice or consent before minors could obtain contraceptives. In Planned Parenthood Association v. Matheson (D. Utah 1983), for example, a federal district court recognized that teenagers’ “decisions whether to accomplish or prevent conception are among the most private and sensitive” and concluded that “the state may not impose a blanket parental notification requirement on minors seeking to exercise their constitutionally protected right to decide whether to bear or beget a child by using contraceptives.” The two most important sources of federal family planning funds in the nation, Title X of the Public Health Service Act of 1970 and Medicaid (Title XIX of the Social Security Act of 1965), require the confidential provision of contraceptive services to eligible recipients, regardless of their age or marital status. By 1995, condom availability programs were operating in at least 431 public schools.

Schools became a central battlefield in the children’s rights controversies. In the majority opinion in the Tinker case, Associate Justice Abe Fortas wrote that schools were special places and that civil liberties had to be balanced against “the need for affirming the comprehensive authority of the states and of school officials, to prescribe and control conduct.” In subsequent cases, the Court has sought to define this balance. In the 1975 case of Goss v. Lopez, the Court granted students the right to due process when they were threatened with a suspension of up to 10 days and declared that a punishment cannot be more serious than the misconduct. But the Court, fearful of undercutting principals’ and teachers’ authority, announced that schools needed to provide only informal hearings, not elaborate judicial procedures. And students, the Court went on to say, did not have a right to a hearing for a minor punishment such as a detention or if they posed a danger to other students or school property. In other cases, the justices ruled that school officials could search student lockers, but only when they had grounds for believing that a specific locker contained dangerous or illegal items. The Court permitted administrators to impose random drug tests, but only on students engaging in extracurricular activities, and allowed school authorities to censor school newspapers only when these were sponsored by the school itself (Board of Education of Independent School District No. 92 of Pottawatomie County v. Earls, 2002; Hazelwood School Dist. v. Kuhlmeier, 1988; New Jersey v. T.L.O., 1985).

Gender equity for girls and young women offered yet another front in the battle for children’s rights. The basic legal tool for attaining gender equity was Title IX of the Education Amendments of 1972, which prohibited sex discrimination in any federally funded educational program or activity. It required schools to grant female students equal academic and athletic opportunities. Academic opportunity was the initial concern, but athletics quickly became the most visible field of contention. In 1971, 3.7 million boys and just 294,015 girls participated in high school sports. By 2000, boys’ participation had risen to 3.9 million and girls’ to 2.7 million, a nearly tenfold increase.

Since the mid-1970s, utopian visions of children’s liberation have been displaced by a preoccupation with child abuse and protection and with punishing juveniles who commit serious crimes as adults. A series of “moral panics” over children’s well-being fueled this cultural shift. Over the past three decades, American society experienced a series of highly publicized panics over teen pregnancy; stranger abduction of children; ritual sexual abuse in day care centers; youthful smoking, drinking, and illicit drug use; youth gangs and juvenile predators; and school shootings (Jenkins, 1998).

Panics about children’s well-being are nothing new. During World War II, there was an obsession with latchkey children and fear about a purported explosion in juvenile delinquency. After the war, there were panics over youth gangs and, most remarkably, over the supposedly deleterious effects of comic books. But there seems little doubt that the panics that have spread since 1970 have been more widely publicized and have had a greater impact on public perception and policy. In retrospect, it seems clear that the waves of public hysteria over these problems were truly “moral panics” in a sociological sense. That is, they were highly exaggerated. They were inextricably linked to anxieties over profound changes in adults’ lives, especially the increase in married women’s participation in the workforce and in family instability, shifts in sexual mores, and the growing prevalence of drug use. By virtually every statistical measure, young people are better off today than in any previous generation. They are better educated. The gains are especially pronounced among girls, ethnic minorities, and children with disabilities. Yet even as their condition has improved, public anxiety has increased. There is little doubt that public concern simply represents the latest example of American nostalgia for a mythical golden age (Gilbert, 1986; Males, 1996, 1999).

A Return to Little Adulthood

What is a contemporary child? Few would describe a child in Victorian terms, as an innocent, asexual creature with a nature fundamentally different from that of adults. Nor would many define a child as some children’s rights advocates did in the late 1960s or early 1970s, as a rights-bearing individual who should have precisely the same privileges and freedoms as adults. Our society still regards children as special beings with distinct needs, but more than ever before we also see children in other ways: as individuals who are capable of learning at an early age, as more precocious, knowledgeable, and independent than any recent generation, and as independent consumers to be sold to.

We live in a time of profound uncertainty about what constitutes a child. Contemporary society has blurred the distinctions between childhood, adolescence, and adulthood, dressing children in adult-style clothes, ascribing to them adult thoughts, and treating them like grown-ups in miniature. We no longer have a consensus about the proper dividing line between childhood, adolescence, and adulthood. There is great division within our society about when a young person is old enough to have sex or smoke or drink or do paid work or take full responsibility for criminal behavior.

It is a common lament that children today are growing up too fast and that our culture is depriving them of the carefree childhood they deserve. Children today watch television shows and movies saturated with sexual innuendo and violence. Many of their play activities are organized by adults or are highly individualized and technologically mediated by computers and video games; as a result, they have less opportunity for unstructured group play. Meanwhile, they engage in sex at a younger age, spend less time with adults, and are heavily influenced by peers and by a commercial culture. Of course, we have not returned to the premodern world of childhood. We have something that is very new. Though young people are no longer treated as the binary opposite of adults and have become independent consumers and avid patrons of mass culture, they remain segregated in age-graded schools and dependent in their parental homes, and they are regarded as having a nature different from that of grown-ups.

There is a widespread view that young people are caught between two conflicting trends: a riskier, more toxic social environment and less parental support. But there is no consensus about what, if anything, to do about this. The dominant strategy has been to try to preserve childhood as a time of innocence through public policy. By installing V-chips in television sets, imposing curfews and school dress codes, and using restrictions on drivers’ licenses to enforce prohibitions on teen smoking and drinking, adult society seeks to empower parents and to reassert childhood as a protected state. Such policies as random drug tests for students in extracurricular activities, abstinence-only sex education programs, and more stringent enforcement of statutory rape laws represent attempts to counteract the impact of permissive culture on young people’s lives.

But if there is any lesson that the history of childhood can teach us, it is the error of thinking that we can radically separate the lives of children from those of adults. Young people’s behavior tends to mimic that of their parents and the adults who surround them. The best predictor of whether a young person will smoke, take drugs, or engage in violent activity is his or her parents’ behavior. Restrictions on young people’s behavior have often proven to be ineffective or counterproductive.

Nostalgia provides no substitute for effective public policies that address the real problems of poverty, family instability, health care, and education that many young people confront. We could not return to the 1950s even if we really wanted to. A century ago, a small group of “child savers” awakened their society to problems of child poverty, abuse, and neglect, ill health, and inadequate schooling that were far greater and far less tractable than any of the problems that we now confront. It is easy to condemn these earlier reformers for their paternalism and interest in social control, but for all their biases and limitations, they demonstrated a creativity and energy that we can only admire. The challenge of our time is to duplicate their passion and their achievements while overcoming their limitations.