Accounting for Disease and Distress: Morals of the Normal and Abnormal

Margaret Lock. Handbook of Social Studies in Health and Medicine. Editors: Gary L. Albrecht, Ray Fitzpatrick, Susan C. Scrimshaw. Sage Publications, 2000.

Introduction

Consideration of normality and abnormality in connection with health and illness inevitably raises questions for social scientists about just how this distinction is conceptualized and then reproduced in social practices. Further, how does the labeling of body states, or of the behavior of individuals or groups of individuals, as ‘normal’ or ‘abnormal’ affect their lives? Alternatively, how does the idea of being designated ‘at risk’ for future abnormalities affect daily life? Of even more interest is why the creation of a moral discourse is so often associated with ideas about normal and abnormal, even though the language and practices of biomedicine are usually assumed to be predominantly rational and free of censure. In this chapter, by drawing on examples from around the world, I will show how ideas about normality and abnormality are culturally constructed and intimately associated with the social, political, and moral order, with profound consequences for individual well-being, and frequently for the allocation of responsibility in connection with the onset or persistence of disease and illness.

A brief example at the outset will illustrate some of the complexities with which we must grapple when discussing this subject. The process of labor and birth has become progressively regulated over the past 20 years, so that this supremely subjective experience has been transformed into a statistically constructed process whereby the duration of the stages of delivery and the timing of the transitions from one stage to another must occur within medically established parameters. In almost all hospital births, in EuroAmerica at least, an ‘active management of labor’ based on the Friedman curves is now carried out. The result of this normalization of birth has been increased pressure placed on many women, particularly Caucasian women, to ‘speed up’ the process of birth. By contrast, in the Canadian north, common knowledge, shared both by Inuit women and health-care professionals familiar with that environment, has it that ‘normal’ labor among the Inuit is remarkably rapid. Inuit women themselves take a certain pride in quick deliveries, to which diet and lifestyle may well contribute.

In an effort to try to reduce infant mortality, the Canadian government implemented a policy in the early 1980s whereby all Inuit women were to be ‘evacuated’ and flown south to give birth. This policy caused great unrest, not only because women were isolated from their families, but also because they were systematically subjected to technological interventions in hospitals in the Canadian south, where it became regular practice to slow down what were designated as abnormal labor experiences. These practices quickly led to disputes and active resistance on the part of women to the evacuation policy. In the intervening years, some policy modifications have been made, and increasingly, well-trained midwives work across northern Canada, but for a large number of Inuit women the process of giving birth commences with an aeroplane flight of more than 1000 miles so that labor and birth may be technologically managed in the tertiary care hospitals of urban Canada, where, more often than not, management is dominated by efforts to prolong labor and birth (Kaufert and O’Neil 1990).

This example illustrates a situation with which we have become familiar over the course of this century, namely an increase in interventions, both medical and psychological, into various stages of the life cycle by health-care professionals (Conrad 1992). As is well known, birth was one of the first life cycle events to be medicalized, and there is no doubt that this process has lowered both infant and maternal mortality rates to some extent, although changes in lifestyle factors have made a greater contribution to these improved statistics (Macfarlane and Mugford 1984).

What I want to emphasize here is not only the discrepancy between authoritative and subjective accounts about what constitutes a ‘normal’ birth, but further, the assumption made on the part of the majority of obstetricians that their knowledge can be applied without modification to all births, regardless of marked cultural, social, and economic differences among pregnant women. This assumption persists even though there is considerable evidence to suggest that cultural and lifestyle factors influence pregnancy, the process of birth, and its outcomes (see McClain 1982). Can this ‘boxification’ (Kaufert 1990) of birth into normal and abnormal cases, based on a systematic setting aside of all apparent variation, be justified? Do we have any evidence as to what might constitute ‘normal’ variation in the process of birth, wherever in the world it takes place? Should the ‘average’ Caucasian body be taken as the standard around which variation is established, whether this be in connection with birth or other health-related events?

Inventing the Normal

Until well into the last century, use of the term normal was virtually limited to the fields of mathematics and physics. It was not until certain ideas about pathology took hold in the 1820s that arguments about the relationship between normal and abnormal biological states were seriously debated for the first time. Auguste Comte, writing in 1851, noted a major shift in conceptualization that had taken place 30 years previously, when the physician Broussais had first argued that the phenomena of disease are essentially of the same kind as those of health, and thus health and disease differ from each other only in ‘intensity.’

Before Broussais, the dominant approach to disease in Europe had been one in which it was conceptualized as regulated by entirely different laws from those that govern health. Although the early Galenic idea of a healthy body being one of balance among excesses and deficiencies, hot and cold states, and so on remained important, in the late eighteenth century a notion of pathological organs was superimposed on earlier thinking, and medicine became preoccupied with a study of sick organs, rather than with variations in the condition of individual patients, as had previously been the case (see also Canguilhem 1991; Hacking 1990).

Although clinical medicine, until the present time, has remained focused on organ pathology, and Broussais himself was deeply immersed in theories about organ pathology (Porter 1998), he nevertheless postulated that normality could be understood as being on a continuum with pathology, and further that the ‘normal’ is the center from which all deviation departs (Hacking 1990: 164). This theme was taken up and expanded upon by several influential thinkers during the course of the nineteenth century, among them Auguste Comte and Claude Bernard. In the 1960s, Georges Canguilhem, in writing a synthesis of the work of the previous century in connection with normality, concluded that ‘strictly speaking … there is no biological science of the normal. There is a science of biological situations and conditions called normal’ (1991: 228). Canguilhem argued further that normality can only be understood in context, ‘as situated in action,’ and moreover that diversity does not imply sickness, nor does ‘objective’ pathology exist.

The systematization of disease categories and the ordering by governments, public health officials, and others of information on disease incidence became a social preoccupation from the end of the last century (Foucault 1979). In the interests of the ‘surveillance’ of society, what formerly had been an interest in variation around the norm was gradually reformulated so as to make categorical, classifiable distinctions between normal and pathological. Normal and abnormal were now conceptualized as a dichotomy.

The philosopher Ian Hacking, less interested than the physician Canguilhem in clinical medicine, seeking to document the formation of the science of probability, and influenced to some extent by Foucault, argues that our present understanding of the idea of normal is a key concept in what he labels ‘the taming of chance.’ Hacking notes that for a good number of years use of the normal/pathological continuum postulated by Broussais was confined to medicine, but then towards the end of the nineteenth century, ‘it moved into the sphere of almost everything. People, behavior, states of affairs, diplomatic relations, molecules: all these may be normal or abnormal’ (Hacking 1990: 160). Hacking argues that we talk freely today about ‘normal’ people, and, of even more importance, we often go on without a second thought to suggest that this is how things ought to be. Thus, the idea of normality is frequently used to close the gap between ‘is’ and ‘ought,’ and so has a moral quality built into it. Hacking traces our current expanded understanding of normal directly back to Comte. He describes the way in which Comte, perhaps inspired by a personal brush with mental illness, moved normality out beyond the clinic into the political sphere, at which point ‘normal ceased to be the ordinary healthy state; it became the purified state to which we should strive, and to which our energies are tending. In short, progress and the normal state became inextricably linked’ (Hacking 1990: 168), and further, not only individuals, but aggregates of individuals could be labeled as normal or otherwise.

Thus, a fundamental tension was introduced into the idea of normal, which now contains both the meaning of an existing average and that of a state of perfection towards which individuals or societies can strive. Both the idea of a deviation by degree from a norm and the idea of a perfect state are encapsulated in the one term. Following Durkheim, normal can be understood as that which is right and proper. In this case, efforts to restore normality entail a return to a former equilibrium, to a status quo, but taken further, normal can be interpreted as only average, and hence is something to be improved upon. In its most extreme form, argues Hacking, this interpretation can lead all too easily to eugenics. Two ideas, therefore, are contained in the one concept of normal: one of preservation, the other of amelioration. As Hacking aptly puts it: ‘Words have profound memories that oil our shrill and squeaky rhetoric’; the normal now stands, at once, ‘indifferently for what is typical, the unenthusiastic objective average, but it also stands for what has been, good health, and for what shall be, our chosen destiny’ (1990: 169). Hacking concludes that this benign and sterile-sounding word, normal, has become one of the most powerful [ideological] tools of the twentieth century.

Disease and the Normal

It is generally agreed that the idea of disease as deviation from a biological norm dominates medical thinking and practice at the present time. Although social and cultural contributions to the incidence of disease may be acknowledged, their effect is usually understood simply as contributing either directly (through diet and individual behavior) or indirectly (through a lack of sanitation, a polluted work environment, or stress, for example) to a ‘final common pathway’ leading to pathological changes in biology wherein lies the ‘real’ disease. Factors extraneous to the body are made accessories before the fact of disease.

In recent years, many social scientists and psychiatrists have taken a critical stance toward this type of argument, one in which they question the epistemologically neutral claims inherent in the biomedical sciences. Mishler et al., for example, in tune with Canguilhem, made it clear long ago that there is no way to define a biological norm or deviations from that norm without reference to specific populations and their sociocultural characteristics (1981: 4). They cite Redlich, who insists that one must ask ‘normal for what?’ and ‘normal for whom?’ In other words, assertions about the normality of biological functioning, or about the normal structure of an organ, must be based on the relationship between the observed instance and its distribution in a specified population (Redlich 1957). Further, implicit in any specified norm is a set of presupposed standard conditions with regard to when, how, and on whom measurements are made.

A major difficulty arises because the average value for a variable of some specified population may not correspond to an ideal standard, ensuring that ‘specific characteristics of populations and their life situations are critical to understanding and interpreting the significance of average values and of “deviations” from universal or ideal standards of health’ (Mishler et al. 1981: 4). A classic study carried out by Ryle illustrates this difficulty. In a clinical and epidemiological study of adolescents in populations living on different diets, he found considerable variability in the size of thyroid glands. Ryle (1961) concluded that the presence of ‘visible glands’ in a population where this phenomenon is common cannot be interpreted as a meaningful clinical sign or precursor of a goitre in later life, as physicians are taught to believe. Ryle argues that this ‘symptom’ may represent a normal adaptation to a specific environment rather than a deviation from a universal standard of healthy thyroid function.
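The statistical point being made here lends itself to a small illustration. The following sketch, written in Python purely for exposition, uses entirely hypothetical thyroid-volume figures (none of these numbers come from Ryle or from Mishler et al.) to show that a ‘normal’ range defined as the central spread of one reference population can classify the very same measurement as unremarkable or as deviant, depending on which population supplies the standard.

```python
import statistics

def reference_range(values, k=2.0):
    """Return a (low, high) interval of mean +/- k standard deviations,
    mirroring the convention of defining 'normal' relative to the spread
    of measurements in a specified reference population."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return mean - k * sd, mean + k * sd

# Hypothetical thyroid-volume measurements (ml) from two populations on different diets.
population_a = [9, 10, 11, 10, 12, 9, 11, 10]
population_b = [14, 16, 15, 17, 15, 16, 14, 18]
observed = 15.0  # a single adolescent's measurement

for name, pop in [("A", population_a), ("B", population_b)]:
    low, high = reference_range(pop)
    verdict = "within" if low <= observed <= high else "outside"
    print(f"Population {name}: 'normal' range {low:.1f}-{high:.1f} ml; "
          f"{observed} ml falls {verdict} it")
```

Nothing in the measurement itself settles the question; the verdict changes with the reference population, which is precisely what asking ‘normal for whom?’ draws attention to.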

A classic anthropological study supports Ryle’s argument. While working among the Subanum of Mindanao, in the Philippines, Charles Frake, an ethnolinguist, made a classification of diseases using the local taxonomy (1977). He found that the Subanum have an exceedingly elaborate taxonomy of skin conditions based on their astute observation of the numerous skin changes commonly visible on their bodies, changes intimately associated with life in a damp, tropical environment. Frake, following the lead of his informants, interpreted the majority of these changes as ‘normal,’ even though they might at times need medication. To a biomedically trained dermatologist virtually all of these changes would no doubt have signalled disease, although it is likely that very few dermatologists would have great facility either with making specific diagnoses of these conditions or with treating them.

Yet another classic study, published in the 1930s, this time from the United States, provides evidence of the extent to which subjective assessment and prior expectations can be involved in making judgments about what is normal and abnormal. More than 1000 American schoolchildren were examined by physicians to determine whether they should have their tonsils removed. It was found that 600 children had already had this surgical procedure, and they were therefore removed from the study. Of the remaining 400, it was recommended that 45 per cent of them have a tonsillectomy, implying that the other 55 per cent fell within the bounds of normal. However, when these ‘normal’ children were examined by a second set of physicians, they recommended that 46 per cent of this group have their tonsils removed. A third group of physicians, who were not informed of the earlier recommendations, examined the children who had survived the first two rounds, and they recommended that 44 per cent of them have their tonsils removed. In all, after three successive rounds of examinations, only 44 children out of the original 1000 had not had a tonsillectomy recommended for them (Wennberg and Gittelsohn 1982: 130). When we recall that today tonsillectomies are avoided if at all possible, this telling example suggests two things: first, that decisions about diagnosing pathology made by individual physicians can be highly subjective, and therefore, by implication, that ideas about the normal are also subjective; second, that in addition to variation in assessments among individual physicians, fashions in surgical and medical procedures also contribute to interpretations of normality.

Situating the Abnormal

The examples cited thus far involve value judgements about physical signs and symptoms that are visible, or can be made visible, and are then subjected to assessment and measurement. However, it is in cultural psychiatry and cross-cultural studies in connection with issues relating to mental health that concepts of the normal have been most systematically challenged. One impetus from the outset for this kind of research has been the question of cultural relativism: whether major mental disorder is the result of universal biological abnormalities or, alternatively, whether culture makes such a major contribution that certain diseases familiar to biomedicine do not occur in all settings. An extension of this approach, following labeling theory, argues that if certain types of behavior are neither labeled as abnormal nor stigmatized, then this may serve as a major form of protection against the appearance of serious or chronic pathology.

In her careful study among both the Bering Sea Eskimo (Inuit) and the Yoruba of Nigeria, Jane Murphy (1976) concluded that virtually all symptoms that would be labeled by a psychiatrist as signs of schizophrenia would, under certain conditions, similarly be regarded as abnormal in both these cultural settings, that is, as signs of ‘craziness.’ However, local responses to crazy people often buffered the effects of the illness. Murphy went further, and pointed out that certain classical symptoms of schizophrenia, such as dissociation, would not be recognized as abnormal if the affected individual could ‘control’ the episodes at will, especially if these episodes were integral to religious or sacred activities. On the contrary, they would probably be highly valued as essential to cultural continuity. Murphy’s work was among the first of numerous anthropological studies that demonstrated enormous difficulties in attempting to translate concepts of mental disorder and ideas about abnormality across cultures (see also Field 1960; Rivers 1924).

In recent years, a more critical approach has been taken by a number of social scientists working in this area, one in which the disease categories of biomedicine are no longer assumed to be above epistemological scrutiny. The emphasis has shifted away, therefore, from the exoticisms of other cultures to a more balanced approach in which all knowledge about normal and abnormal is interpreted in cultural context. Good and Kleinman pose the problem as follows:

How are we to know whether the clinical syndromes identified in research in the United States and Europe are universal diseases, linked to discrete biological disorders, or culture-specific forms of illness behavior resulting from complex interactions among physiological, psychological, social and cultural variables? Are they universal patterns, representing ‘final common pathways’ to be identified through studies of neurotransmitters and neuroendocrinology, or are they culture-specific syndromes, linked to underlying psychophysiological processes but produced as final common ethnobehavioral pathways (Carr 1978)? (Good and Kleinman 1985: 297)

Good and Kleinman reviewed research that reveals that anxiety disorders are present, as far as can be ascertained, in all societies, but that the phenomenology of such disorders, ‘the meaningful forms through which distress is articulated and constituted as social reality,’ varies in significant ways across cultures (1985: 298). These authors argue that we must take care not to fall into the trap of creating a ‘category fallacy,’ in which the nosological categories developed for particular EuroAmerican populations are then applied indiscriminately to all human populations (Kleinman et al. 1977).

Robert Barrett takes up the challenge of interpreting the disease of schizophrenia as it is understood today as, in effect, a culture-specific syndrome, a product of the recent history of EuroAmerican thought about mind and body, individualism, and modernity. In a review article he shows how the institutional practices of psychiatry first created in the nineteenth century made possible the production of a new category of knowledge: schizophrenia. Prior to institutionalization, the kind of ‘crazy’ behavior involving disorders of cognition and perception that we now associate with schizophrenia would have elicited a range of responses, not all of them indicating that pathology was involved. As with the Inuit and Yoruba, the specific circumstances would have been crucial in passing judgement. Barrett interprets schizophrenia as we know it today as a ‘polysemic symbol’ in which various meanings and values are condensed, including stigma, weakness, inner degeneration, a diseased brain, and chronicity. Without this associated constellation of meanings, schizophrenia as we understand it would not exist. Barrett goes on to argue that the individualistic concept of personhood, so characteristic of EuroAmerica, has also contributed to our understanding of this disease. He shows how a theme of a divided, split, or disintegrated individual runs through nineteenth-century psychiatric discourse and continues to the present day. Of course, schizophrenia is not the only disease associated with splitting and dissociation, but it has nevertheless been the prototypical example of such a condition. The perceived loss of autonomy and boundedness taken as characteristic of schizophrenia is a sign of the breakdown of the individual, and thus of the person. Further, the classification and treatment of schizophrenic patients as broken people with ‘permeable ego boundaries’ profoundly influences the subjective experience of the disease (Barrett 1988).

Barrett, himself a psychiatrist, argues that categorizing patients as suffering from schizophrenia implies a specific ideological stance that may highlight, problematize, and reinforce certain experiences, such as auditory hallucinations, for example. Barrett’s argument is neither one of simple social construction, nor of schizophrenia as a myth, but a much more subtle argument in which he does not dispute at all the reality of symptoms, or the horror of the disease. He points out, however, that a careful review of the cross-cultural literature indicates that some of the constitutional components of what we understand as schizophrenia may be virtually absent in certain non-Western settings: ‘Thus, in some cultures, especially those which do not employ a concept of “mind” as opposed to “body,” the closest equivalents to schizophrenia are not concerned with “mental experiences” at all, but employ criteria related to impairment in social functioning or persistent rule violation’ (Barrett 1988: 379). Craziness need not be conceptualized, therefore, as a state of mind, but as a loss of a capacity to be effectively social. In other words, where there is less of a focus on individual autonomy and the splitting of personality, and where symptoms of what, in psychiatry, is classified as dissociation are not necessarily regarded a priori as pathological, the meaning of symptoms will be interpreted differently, and carry a different moral valence, with implications for patient well-being. In cultural settings where schizophrenia-like symptoms are not stigmatized, for example, chronicity has been shown to be less severe (Waxler 1979).

Similar arguments to that of Barrett have been developed for clinical depression as we currently define it, that is, as a psychiatric ethnocategory characteristic of EuroAmerican society, and, further, as one particularly associated with middle and higher socioeconomic groups. Translation of meaning in connection with depressive emotional states is extremely problematic both within and across cultures (Kleinman and Good 1985; see also Kirmayer 1984).

Negotiating Interpretations of Normal and Abnormal

Susan Sontag (1977) warned us, when she contracted breast cancer more than a decade ago, about the ‘punitive and sentimental fantasies’ concocted in connection with certain illnesses. She was concerned about the way in which images about illnesses are put to social use, and about the various stereotypes and moralizing that are associated with certain images throughout history. Sontag insisted that the ‘most truthful way of regarding illness—and the healthiest way of being ill—is the one most purified of, most resistant to, metaphoric thinking.’ She was particularly concerned, not surprisingly, with certain research, current at that time, that claimed to have established a statistically significant association between a given personality type and the incidence of breast cancer.

Sontag’s exhortation to confine our interpretations about illness to the material is, it seems, easily justifiable as a means to eliminate stigmatization and to make disease morally neutral. However, failure to pay attention to the moral discourse associated with illness usually forces premature closure about the social dimensions of suffering (Kleinman et al. 1997). Further, and not unrelated to social suffering, this approach leaves unproblematized interpretations of the normal and abnormal. This is where the meaning-centered approach characteristic of much medical anthropology comes into its own: an approach that takes the lived experiences and local knowledge of involved individuals as its point of departure, and then situates these data in cultural and political context. Many researchers then go further, to reflect on the unexamined assumptions present in biomedicine, in light of the findings obtained from meaning-centered research projects.

While doing research among Greek immigrants in Montreal in the late 1980s, I found that the complaint most commonly expressed by women, particularly those working in the notoriously exploitative garment industry, was of nevra (nerves). Nevra is associated with a frightening loss of control, and is described as an experience of powerful feelings of ‘bursting out,’ ‘breaking out,’ or ‘boiling over,’ in other words, a sense of disruption of normal body boundaries. Once the condition becomes chronic, headaches, chest pain, and other pains radiating out from the back of the neck also characteristically become part of nevra symptomatology. This experience is so common that at least one major Montreal hospital uses nevra as a diagnostic category. Some women, when they visit a doctor, are diagnosed as clinically depressed and given antidepressants, but the majority do not meet the usual criteria for depression, and a tendency exists for certain physicians to dismiss the patient’s complaints as being ‘all in her head’ (Lock and Dunk 1990).

The Greek concept of nevra is part of a larger family of similar conditions commonly experienced in the Arab world, the southern Mediterranean, and in Central and Latin America (where it appears to have been transported from Europe). The condition is also present in isolated parts of North America, including the Appalachians and Newfoundland (Low 1985), suggesting that it was formerly widespread throughout Europe. Further, nevra or nervios (in Spanish) is just one among many ‘culture-bound’ or ‘culturally interpreted’ syndromes located around the world. Byron Good (1977) formulated the concept of a ‘semantic illness network,’ in which popular categories of illness, including the culture-bound syndromes, can be understood as representing ‘congeries of words, metaphors, and images that condense around specific events.’ These conditions are frequently characterized in the medical literature as ‘somatization,’ and treated as evidence of a psychological disorder that manifests itself physically.

Among Montreal immigrants it is considered normal to experience nevra in daily life; it is only when symptoms become very disabling that a woman will visit her doctor. If episodes become very frequent or oscillate rapidly back and forth between stenohoria, in which an individual feels confined and depressed, and agoraphobia, in which one feels overwhelming anxiety at the thought of going out, then the condition is assumed to have become an illness. Some women are believed to be more constitutionally vulnerable to attacks than others, and men are not entirely immune from them. Further, nevra is associated by all women with the immigrant experience, and many of them, when interviewed, linked it explicitly to the abusive working conditions they were subjected to in Montreal.

This ever-present stress is punctuated by precipitating events, ranging from crises such as being fired or laid off from work, to family quarrels or, at times, spousal abuse (Lock 1990). It is in situations such as these that the term nevra is used to describe a conjunction of destructive social events, uncontrollable emotional responses, and culturally characteristic disabling physical symptoms. In order to better appreciate the cultural significance of nevra, a brief digression into the structure of Greek family life, as it was until recently, is necessary.

In common with many other societies of the world (Griaule 1965; Hugh-Jones 1979), Greeks relate a healthy and ‘correct’ human body to a clean and orderly house, and this is, in turn, associated with moral order in society at large. The house is the focus of family life, not only because it furnishes all the physical and social needs of family members, but also because it is a spiritual center, replete with icons and regular ritual activity, where family members seek to emulate the Holy Family (DuBoulay 1986). Management of the house is the special responsibility of the woman, who is both functionally and symbolically associated with it (Dubisch 1986). Cleanliness and order in the house are said to reflect the character of the housewife, and a discussion of private, inside family matters should not cross the threshold into the threatening domain of the outside world. Ideally, a woman should never leave the house for frivolous or idle reasons or venture outside where dirt and immorality abound. A woman who spends too much of her time outside of the house can be accused of damaging the all-important social reputation of her family.

Just as a distinction is made between inside and outside the house, so too is a distinction made between the inner and outer body (Dubisch 1986). Contact between what enters the body and what leaves it must be avoided. Dirty clothes and polluting human products must be strictly segregated from food preparation. Although fulfillment of male sexual needs is considered imperative, a woman’s life is hedged with taboos around menstruation, marriage, the sexual act, and childbirth, all designed to confine any illicit desires she may have, and to contain the polluting nature of her bodily products.

A woman’s task is to bind the family together, to keep it ritually pure, and to protect it from the potentially destructive outside world. This task, together with the raising of children, has traditionally been the prime source of self-esteem for Greek women. While men must protect the family honor in the outside world, women have been required to exhibit modesty at all times, and their bodies were symbols of the integrity and purity of the family as a whole. Emotional stability is valued in this situation, and any sign of loss of control on the part of women is worrisome.

This is the normative state, an idealized situation that in daily life is, of course, often not lived up to, or alternatively may be deliberately flouted. Nevertheless, this has been the value system, the standard, against which Greek women have measured their lives until recently. As is usual among immigrant populations, values, particularly those pertaining to family life, tend to persist after migration; the uncertainties produced by a new way of life may actually promote and harden them. In Montreal, Greek immigrant women complain that they seldom have an opportunity to go out of the house unaccompanied by their husbands unless it is to go to work. A Greek-Canadian physician described many of his patients as suffering from what he called the ‘hostage syndrome,’ the result of vigilant husbands protecting their family honor in unfamiliar surroundings.

Abiding by traditional codes of conduct, once a source of pride for Greek women, can become crippling after immigration. When a harsh climate, cramped apartment life with few friends or relatives nearby, language difficulties, and debilitating working conditions are taken into account, it is hardly surprising that so many women experience acute isolation, physical suffering in the form of nevra, and serious doubts about the worth of their lives. However, because a negative moral discourse is closely associated with nevra, women are often ambivalent about a frank discussion of symptoms. It is clear that many of those women who visit a doctor for medication do so with the hope that their physical distress will be legitimized, and at the same time relieved, through medicalization. Like Sontag, the majority of these women want their illness purified of metaphoric thinking and recognized as thoroughly material, thus ensuring that their suffering is both individualized and depoliticized. When activists among immigrant women in Montreal have focused on the social and political origins of nevra and other illness states, their plea has usually fallen on stony ground, but this is not because the women do not explicitly recognize that bouts of nevra may be directly related to working conditions or near-poverty. Their financial insecurity and status as immigrants mean that these women cannot risk even contemplating political action. Above all, they want prompt and effective action taken to relieve their individual suffering, and they want it made quite clear to husbands and other involved observers that their suffering is both ‘real’ and painful.

The literature of medical anthropology is replete with similar telling examples in which, at the most fundamental level, arguments about bodily ills are essentially moral disputes about the boundaries between normal and abnormal and their social significance. Ong (1988), for example, interpreted attacks of spirit possession on the shop floors of multinational factories in Malaysia as complex and ambivalent but not abnormal responses of young women to violations of their gendered sense of self, difficult work conditions, and the process of modernization. The psychologicalization and medicalization of these attacks by consultant medical professionals permitted a different moral interpretation of the problem by employers: one of ‘primitive minds’ disrupting the creation of capital.

Similarly, the refusal of many Japanese adolescents to go to school is labeled by certain psychiatrists in that country (but not all) as deviant, but this behavior can also be understood as an individualized, muted form of resistance to manipulation by families, peers, and teachers. Japanese themselves debate in public as to whether this behavior is indeed abnormal or, on the contrary, positively adaptive, given the highly competitive, exhausting school system, characteristic of their society today, in which bullying is frequent. Such a situation is noted by many to be one result of the heavy price of Japanese modernization (Lock 1991). Similarly, the Kleinmans have analyzed narratives about chronic pain in China as, in effect, normal responses to chaotic political change at the national level. These changes are associated with collective and personal delegitimation of the daily life of thousands of ordinary people, and with the subjective experience of physical malaise that, in the clinical situation, is interpreted as, and reduced to, physical disorder. Swartz and Levett note that, not surprisingly, ‘psychological sequelae’ have frequently been reported in connection with the impact of massive long-term political repression of children in South Africa. They go on to argue that this psychologicalization is too narrowly defined, and that ‘the costs of generations of oppression of children cannot be offset simply by interventions of mental health workers’ (Swartz and Levett 1989: 747). Further, these researchers argue, ‘it is a serious fallacy to assume that if something is wrong within the society, then this must be reflected necessarily in the psychopathological make-up of individuals’ (1989: 747, emphasis added). In common with the authors cited above, Swartz and Levett oppose the normalization and transformation of political and social repression into individual pathology and its management solely through medical interventions. They are particularly concerned that even when certain patients are labeled as ‘victims,’ and thus a moral and political component in addition to ‘pathology’ is in theory acknowledged, societal dynamics working to repress memories of the past ensure that the bodies of individuals, rather than the body politic, are made the focus of attention (see also Melish 1998, in connection with slavery in the United States). Furthermore, the plight of thousands of children whose suffering is chronic, but not exemplified by major traumatic episodes that bring them to the attention of mental health workers, goes largely unnoticed.

Allan Young, researching the invention of post-traumatic stress disorder, shows just how powerful the current psychiatric model has been in the creation of this new disease. Psychiatrists assume that the uncovering and reliving of a single traumatic episode during the course of therapy will open the door to relief from chronic debilitating stress and from postulated pathological changes in the neuroendocrinological system (Young 1995). Thus, even the atrocities of the Vietnam War, and moral condemnation of them, are individualized and depoliticized. The violent and repressive behavior of powerful forces in society is rarely labeled as abnormal, nor is its long-lasting effect on the daily lives of millions of people explicitly acknowledged; rather, the physical manifestations of distress in those relatively few individuals who come to the attention of the medical world are interpreted as pathologies that must be purged.

Making Health a Matter of Morals

Historical and anthropological research suggests that all societies create concepts about what constitutes a well-functioning social, political, and moral order. These concepts are intimately associated with what is assumed to be the health and well-being of the individuals who form any given society (Janzen 1981). It must be kept in mind, of course, as Marx (1967), Mumford (1963), and more recently Comaroff (1985), Fanon (1967), Lock and Kaufert (1998), Scheper-Hughes (1992), and others have shown, with special emphasis on ethnicity and gender differences, that the well-being of some individuals may be exploited in any given society for the sake of those with power, and that this may, in effect, go unnoticed and be considered normal. Further, within any given society, dominant values and ideologies are contested, and they change over time. Nevertheless, a close association between the moral order and ideas about the health or ill health of society and of the individuals who compose that society persists in one form or another. By extension, a ‘sick society’ is one in which the moral order is thought to be under threat.

The dominant metaphor for more than 2000 years in China, and for many hundreds of years in Korea and Japan, for example, has been that of harmony, implying the self-conscious contribution by individuals to a harmonious social and political order. Health is understood in the East Asian philosophical and medical system as being on a continuum with illness, and not diametrically opposed to it. Moreover, individuals are recognized as having relative amounts of health, depending on such factors as the season of the year, their occupation, their age, and so on, as opposed to a finite presence or absence of health. Thus, health can only be understood with respect to the location of the microcosm of the individual in the macrocosm of the social order, and the physical and mental condition of individuals is conceptualized as inextricable from that of their surroundings, social and environmental (Lock 1980).

East Asian medicine is often described as holistic by its aficionados, but in practice it is the bodies of individuals that are manipulated, and not facets of the social order. Thus, in early Chinese history the health of the entire polity, for which the Emperor’s body was a living synecdoche, was dependent upon the moral and healthy behavior of his subjects (similar ideas were evident in medieval Europe). Individual concerns and interests are by definition suppressed in a Confucian ideology for the sake of society, and this attitude extends to the management of bodies. Thus, individuals are expected to ‘bend’ to fit the standards of society and, should illness occur, resort is made to herbal medication, acupuncture, and other therapies to bring the mind/body back into harmony with the macrocosm. The objective of such efforts is to return the individual to active participation in society. This system is therefore inherently conservative, and locates responsibility for the occurrence of health and illness firmly with individuals. Even though the demands posed by society on individual health are freely acknowledged, and it is sometimes recognized that they induce illness, they are nevertheless considered unavoidable and remain essentially unchallenged.

Turning to another example, the concept of health widely used by the majority of indigenous peoples of North America, prior to colonization, was one in which a healthy person was understood as inseparable from his relationship to the land. The Cree concept, ‘being-alive-well,’ suggests that individuals must be correctly situated with respect to the land; a ‘sense of place’ is inherent to the continuance of health in both the family and the community, and therefore to individual health (Adelson 1991). This concept is, of course, an abstract ideal, one that is currently being self-consciously rethought among the Cree, in part as a response to the massive disruptions caused in their communities by, most recently, the building of the James Bay hydroelectric dam, followed by the threat for many years of the building of a second dam. The mobilization of tradition is also part of a movement across North America, among aboriginal peoples, to take back full control of their communities, and to eradicate the postcolonial situation of forced dependence and discrimination so evident for many years.

These two examples can be compared with the emergence of what Becker has described as the ‘new health morality’ (1986) in North America. Becker described an exceedingly individualized approach to health, produced by our historical and philosophical heritage and fostered by both governments and the medical profession (Lock 1998a), that has transformed individual health into ‘the moral’ (Conrad 1994). Wellness, the avoidance of disease and illness and the ‘improvement of health,’ has become a widespread ‘virtue,’ especially among the middle classes, and for some appears to take on the aura of a secular path to salvation. Preservation of individual health has thus become an end in itself rather than a means to some other objective, an objective often understood, as with the East Asian and Cree examples, as contributing to society at large.

On the basis of empirical research, Crawford (1984) constructed a ‘cultural account of health’ as constituted in contemporary middle-class North America. The results of open-ended interviews carried out in the Chicago area with 60 adults, female and male, revealed two oft-repeated themes in the accounts of respondents. One theme was that of self-control, together with a cluster of related concepts including self-discipline, self-denial, and will power. A second, complementary set of themes was grouped around the idea of release and freedom. Individuals repeatedly expressed the idea that working out, eating well, giving up smoking and alcohol use, and so on, are essential to good health and a normal life; moreover, such activities were taken to be evidence of willpower and self-control. Making time to be healthy was spontaneously ranked highly by the majority of informants, who also noted that such behavior exhibited an active refusal to be coopted by the unhealthy, pathological society in which they found themselves. Crawford summed up his findings as follows:

The practical activity of health promotion, whereby health is viewed as a goal to be achieved through instrumental behaviors aimed at maintaining or enhancing biological functioning, is integral to an encompassing symbolic order. It is an order in which the individual body, separated from mind and society, is ‘managed’ according to criteria elaborated in the biomedical sciences. Medical knowledge, internalized and reproduced in our everyday discourse, represents a distinct, although by no means universal, way of experiencing our ‘selves,’ our bodies and our world. (Crawford 1984: 73)

One of the master symbols of contemporary medicine and of North American society as a whole is, of course, that of control. Crawford argues that by taking personal responsibility for health we are displaying not only a desire for control, but an ability to seize it and enact it. We cooperate in the creation of normal, healthy citizens, thus validating the dominant moral order. He goes on to suggest that in this time of severe economic cut-backs, individual bodies—‘the ultimate metaphor’—refract the general mood, as we attempt to control what is within our grasp (Crawford 1984: 80). Although it is the economically deprived who are the most affected by budget constraints, Crawford argues that the middle class reaffirm their relatively protected status through personal discipline designed, above all, to maintain health.

When interviewed by Crawford, many people expressed the idea that control must be tempered by release, usually through the fulfillment of instant desire and consumption. Crawford argues that it is not surprising, therefore, that bulimia, characterized by alternating behaviors of gorging and purging, has emerged as one of the most common eating disorders of our time. The body is not only a symbolic field for the reproduction of dominant values and conceptions, ‘it is also a site for resistance to and transformation of those systems of meaning’ (Crawford 1984: 95; see also Lock 1990, 1993a). In sickness, this struggle may be expressed (often unconsciously) in forms that replicate the tensions present in society at large. Crawford concludes his study by considering what political implications might be drawn from the current fitness movement. Is the taking of individual control and responsibility for health indeed a step to personal ‘empowerment,’ as many fitness advocates claim, or is it only part of the answer? Are individual lifestyle changes precisely what ‘power’ requires of us at this historical moment, while little is done about the social determinants of ill health, in particular about discrimination and poverty? Is well-being as virtue being transformed into a dangerous fetish, as Illich has suggested (1992), while governments limit their domain of responsibility to economic development, frequently ignoring the cost to the well-being of large segments of society?

The Protean Nature of Abnormality: Gender and Aging

I will return now to the medicalization of the life cycle and, in particular, its linkage to the ‘aging society,’ in which the economic burden that the elderly are assumed to pose is currently a cause for great concern (Lock 1993b). In recent years we have witnessed the medicalization of aging on an unprecedented scale. The very process of aging has been widely reinterpreted as deviation from the normal, a process against which individuals and their physicians should take major precautions.

In North America, and in Europe to a slightly lesser extent, discourse about women as they approach the end of their reproductive years, whether that of the medical world or popular accounts, focuses obsessively on menopause and the supposed long-term consequences of an ‘estrogen-starved body’ for health in later life. Medical literature, with only a few exceptions, is overwhelmingly concerned with pathology and decrepitude associated with aging (although recently the strident tone characteristic of earlier decades has been modified). Thus, the end of menstruation is described as the consequence of ‘failing ovaries’ (Haspels and van Keep 1979: 59), or the ‘inevitable demise’ of the ‘follicular unit’ (London and Hammond 1986: 906). There are other, more positive ways to interpret these biological changes (Wentz 1988), but the dominant discourse is about loss, failure, and decrepitude (Martin 1987), and menopause is widely understood as a deficiency disease, one in which depleted estrogen supplies should be replaced to attain the levels found in younger, fertile women.

Why should there be such an emphasis on female decrepitude? Surely, aging is an unavoidable, ‘normal’ process common to both men and women. Clearly, the increased proportion of the elderly in society is one source of concern. An article by Gail Sheehy gives us a clue as to why this concern has become so pressing. Sheehy states: ‘At the turn of the century, a woman could expect to live to the age of forty-seven or eight’ (Sheehy 1991: 227), a sentiment widely expressed not only in popular literature but also in scientific articles. Gosden, for example, writing a text for biologists, is explicit that the very existence of ‘postmenopausal’ women is something of a cultural ‘artifact,’ the result of our ‘recent mastery of the environment’ (Gosden 1985: 2). Although the majority of authors who make arguments like those of Sheehy and Gosden believe that their conclusions are unbiased, it is clear that their reading of the evidence is selective. Demographers have convincingly shown that high rates of infant and maternal mortality served until well into this century to keep mean life expectancies low, thus masking the presence of older people in all societies. When remaining life expectancy for those aged 45 or older is examined, it is evident that people aged 60 and over have been part of all human groups for many hundreds and possibly thousands of years. It is the case, of course, that many more people now live to old age than was formerly the case, and a concern about their health is clearly justified, but to talk of older women as artifacts or as ‘unnatural,’ as does Gosden, is misleading, especially when claims such as his are then used to justify the administration of medication to all women on a lifetime basis once they approach menopause.
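The demographic reasoning at stake can be made concrete with a toy calculation. The figures below are hypothetical and illustrative only, drawn from none of the sources cited in this chapter; they simply show how heavy infant mortality drags mean life expectancy at birth down toward the ‘forty-seven or eight’ cited by Sheehy, even when most people who survive childhood live well into old age.

```python
# Illustrative only: a hypothetical cohort with heavy infant mortality.
# None of these figures are taken from Sheehy, Gosden, or the demographic literature.
cohort = 1000
infant_deaths = 250              # deaths in the first year of life
survivors = cohort - infant_deaths
mean_age_at_infant_death = 0.5   # years
mean_age_at_adult_death = 65.0   # most who survive childhood reach old age

life_expectancy_at_birth = (
    infant_deaths * mean_age_at_infant_death
    + survivors * mean_age_at_adult_death
) / cohort

print(f"Mean life expectancy at birth: {life_expectancy_at_birth:.1f} years")  # ~48.9
print(f"Expected age at death for those surviving infancy: {mean_age_at_adult_death:.1f} years")
```

A low average at birth is therefore entirely compatible with the routine presence of people in their sixties and beyond; it is this masking effect that undermines readings of low historical life expectancies as evidence that the postmenopausal woman is a recent ‘artifact.’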

Coupled with this inattention to demography on the part of those who support the hypothesis of female aging as cultural artifact and abnormality is a second assertion, namely that the human female is the only member of the class Mammalia to reach reproductive senescence during her lifetime. As the gynecologist Dewhurst puts it:

The cessation of menstruation, or the menopause, in the human female is … a relatively unique [sic] phenomenon in the animal kingdom. With increasing longevity modern woman differs from her forebears as well as from other species in that she can look forward to 20 or 30 years … after the menopause. (Dewhurst 1981: 592)

This type of argument corroborates those based on misleading demographic estimates in creating an image of older women as going against nature’s purpose, women whose very existence is, in effect, abnormal. An assumption embedded in arguments that compare women to apes and other animals who continue to menstruate until they die would seem to be that female life is about the reproduction of the species, and that the nonreproductive postmenopausal woman is a perambulating anomaly. However, biologists whose specialty is aging make the claim that the maximum human life-span potential is somewhere between 95 and more than 100 years. Further, all the systems of the human body, with the sole exception of the female reproductive organs, age in such a way that they may survive to 80 years of age or older, unless pathology strikes (Leidy 1994). In addition, an emerging literature in biological anthropology argues that menopause in human females evolved approximately one and a half million years ago, most probably as a biological adaptation to the long-term nurturance necessary for highly dependent human infants, a dependency not found in apes. The hypothesis is that it is biologically more advantageous to the survival of human offspring to have both the mother and the grandmother providing care. Only under these circumstances could helpless infants be comparatively safe and weaned successfully onto solid foods that had to be collected by foraging (Peccei 1995). Investment in numerous infants produced by all women over the entire life span probably proved to be less biologically advantageous than intense investment placed in fewer infants, the majority of whom survived (a situation in which mothers and grandmothers cooperated rather than competed with each other). If this was the case, then women who ceased to menstruate early would have been selected for over the course of evolutionary time.

Despite these data from the basic sciences, middle-aged women are explicitly contrasted with the animal world, and found wanting, by many members of the health-care profession. They are also compared with younger, fertile women whose bones and hearts show few signs of degeneration and who are taken as the standard for women of all ages (Lock 1993b: 38). Of even more significance, perhaps, is that women’s bodies differ from those of men. Simone de Beauvoir argued that woman is constructed as ‘other,’ and Haraway, writing about nineteenth-century Europe and North America, asserts that ‘the “neutral,” universal body was always the unmarked masculine’ (Haraway 1989: 357). Obviously, older people, both men and women, are at an increased risk for various diseases associated with aging, but there has been a tendency to conflate women’s aging with the end of menstruation. Given the climate created by the arguments outlined above about older women as anomalies, it is not surprising that female aging has come to be understood primarily as inevitable pathology.

The Management of Aging

It has been postulated since classical times that ovarian secretions produce a profound effect on many parts of the female body, and although explanations have changed over the years as to just how this effect is produced, this interpretation is not disputed (Oudshoorn 1994). However, medical interest in failing ovaries and dropping estrogen levels is no longer confined, as was formerly the case, to what are often thought of among physicians as the rather inconsequential symptoms usually associated with menopause, in particular the hot flash. Interest has turned to the postmenopausal woman, her 30 years of remaining life, and the ‘management’ of her deviant body, necessitated by her being at an increased risk of broken bones and a failing heart. Recently, the estrogen deficiency of postmenopause has also been associated with an increased risk of Alzheimer’s disease by certain researchers.

Some of the current literature goes further and describes medicalization with hormone replacement therapy not merely as a prophylactic against future disease, but as a positive enhancement to well-being and longevity (Utian and Jacobowitz 1990). Normality is not simply the mean, but also something that can be improved upon. The assumption is that virtually all women will benefit from replacement therapy, and any individuals who may be placed ‘at risk’ by taking this medication are thought of as ‘outliers’ and as so much variation around the norm. It is not surprising, therefore, that it is recommended in professional journals that virtually all women of menopausal age be considered as candidates for replacement therapy, with the possible exception of those who are considered to be at high risk of breast cancer (SOGC Policy Statement 1995).

Of course, this discourse of medicalized health maintenance has been challenged by individual women, by organizations such as the National Women’s Health Network in Washington (1989, 1995), by social scientists (Palmlund 1997), by a good number of physicians (Love 1997; Price and Little 1996), and by certain feminists (Friedan 1993; Greer 1991). An assumption could have been made that a decline in estrogen levels is ‘normal’ and that this decline functions as a protective device against cancer and other diseases associated with aging, as a few professionals have argued, but this suggestion barely sees the light of day. The pathology of deficiency associated with aging dominates.

Palmlund has studied the marketing of estrogens and progesterones over the past three decades. Following Bourdieu (1977), he argues that economic, cultural, and social capital have all been invested in constructing menopause as abnormal, as a condition of risk that marketed hormones promise to subvert (Palmlund 1997). Promotion of the long-term use of replacement therapy is in large part a reflection of the relationship between the medical profession and the major pharmaceutical companies, but concern about the economic ‘liability’ of an aging female population is also evident. In the medical and epidemiological literature on menopause, it is common to start out with a rhetorical flourish that sets the stage for coming to terms with the superfluity of older women and their potential cost to society should their health fail:

It is estimated that every day in North America 3500 new women experience menopause and that by the end of this century 49 million women will be postmenopausal. (Reid 1988: 25)

In recent years, the medicalization of the aging male has commenced (Oudshoorn 1997), indicating just how powerful the economic incentives to commodify the latter part of the life cycle have become. This second wave of medicalization suggests that economic gain, rather than a patriarchal ideology, is perhaps the principal force driving this megaindustry, but it remains based on unexamined assumptions about the universality of aging, understood as that part of the life cycle inextricably associated with pathology.

Extensive survey research carried out in the mid-1980s shows that, compared with Americans and Canadians, Japanese women experience remarkably few symptoms at the end of menstruation, including those considered to be universally characteristic of menopause, namely hot flashes and night sweats (Lock 1993b). It is notable that there is no word in Japanese that specifically signifies a ‘hot flash.’ On the basis of these findings, I have argued for a recognition of ‘local biologies’ (Lock 1993b). In other words, sufficient variation exists among biological populations that the physical effects of lowered estrogen levels on the body, characteristic of the female mid-life transition, are not the same in all geographical locations. There is evidence from other parts of the world, in addition to Japan, of considerable variation in symptom reporting at menopause (Beyene 1989; Lock 1994). This variation, to which genetics, diet, environment, and other variables no doubt contribute, accounts for marked differences in subjective experience and associated symptom reporting at this stage of the life cycle. The differences between North American and Japanese women are sufficient to produce an effect on (but not determine) the creation of knowledge, both popular and professional, about this life cycle transition. Konenki (the Japanese term glossed as menopause) has never been thought of as a disease-like state, nor even equated closely with the end of menstruation, even by Japanese medical professionals. The symptoms most closely associated with konenki are shoulder stiffness and other similar culturally specific sensations, including a ‘heavy’ head (Lock 1993b).

Japanese physicians keep abreast of the medical literature published in the West, and so one might expect that, living as they do in a country actively dedicated to preventive medicine, they would have some incentive to make use of hormone replacement therapy (HRT), as is the case in EuroAmerica. However, this is not so, first because, as we have seen, symptom reporting is different and very few women consult gynecologists at this stage of the life cycle. In addition, local biology plays a part in other ways: mortality from coronary heart disease among Japanese women is about one-quarter that of American women (WHO 1990), and it is estimated that although Japanese women become osteoporotic twice as often as do Japanese men, this is nevertheless approximately half as often as in North America (Ross et al. 1991). These figures, combined with a mortality rate from breast cancer about one-quarter that of North America and the longest life expectancy in the world for Japanese women, have meant that there is relatively little pressure for Japanese gynecologists to enter into the international arena of debate about the pros and cons of long-term medication with HRT, something about which many of them are, in any case, decidedly uncomfortable because of a pervasive concern about iatrogenesis. When dealing with healthy middle-aged women, the first resort of Japanese doctors is usually to encourage good dietary practices and plenty of exercise. For those few women with troubling symptoms, herbal medicine is commonly prescribed, even by gynecologists (Lock 1993b). Use of HRT has increased in Japan over the past few years, but not to the extent that is common in Europe or North America.

These findings, necessarily presented in a rather superficial fashion here, suggest that it is important to decenter assumptions about biological universalism. The margins between nature and culture, and between normal and abnormal, are cultural constructs. Obviously, aging cannot be avoided, but the power of both biology and culture to shape the experience of aging, and the meanings (individual, social, and political) attributed to this process, demands fine-grained, contextualized interpretations in which we must reconsider that which we take to be normal and abnormal.

Eliminating the Mistakes of Nature

With the development of molecular genetics and the mapping of the human genome, genes have become knowable entities, subject to manipulation. This knowledge permits us to think in entirely new ways about what is to be taken as normal with respect to human bodies and behavior. Mapping the human genome has been likened to the Holy Grail of biology; one scientist declared in the mid-1980s that the Human Genome Project was the ultimate response to the commandment, ‘Know thyself’ (Bishop and Waldholz 1990). While certain members of the scientific community have been actively opposed to the genome project, in large part because it consumes a vast amount of resources that could otherwise be used for other kinds of research, many scientists have been very vocal about the benefits that society will receive from completing this project. Daniel Koshland, until recently the editor of Science, stated, for example, that withholding support from the Human Genome Project is to incur ‘the immorality of omission—the failure to apply a great new technology to aid the poor, the infirm, and the underprivileged’ (Koshland 1989). Robert Plomin, in supporting the project, notes that, ‘Just fifteen years ago, the idea of genetic influence on complex human behavior was anathema to many behavioral scientists. Now, however, the role of inheritance in behavior has become widely accepted, even for sensitive domains such as IQ’ (Plomin 1990).

The historian of science Edward Yoxen points out that we are currently witnessing a conceptual shift that was not present in the language of geneticists prior to the advent of molecular genetics. While the contribution of genetics to the incidence of disease has been recognized throughout this century, it is only in the past two decades that the notion of ‘genetic disease’ has come to dominate discourse, such that other contributory factors are often obscured from view (Yoxen 1984). Fox Keller argues that it was this shift in discourse that made the Human Genome Project both reasonable and desirable in the minds of many of the researchers involved (Fox Keller 1992). In mapping the human genome, the objective is to create a baseline norm for our shared genetic inheritance. However, the map that will be produced, based almost entirely on samples taken from a Caucasian population, with a few Asian samples included, will correspond to the actual genome of no living individual, and we will all, in effect, be deviants from this norm (Lewontin 1992).

Moreover, with this map in hand, the belief is that we will then rapidly move into an era in which we will be able to ‘guarantee all human beings an individual and natural right, the right to health’ (Fox Keller 1992: 295). Fox Keller cites a 1988 report put out by the Office of Technology Assessment in the United States in which it is argued that ‘new technologies for identifying traits and altering genes make it possible for eugenic goals to be achieved through technological as opposed to social control.’ The report discusses what is described as a ‘eugenics of normalcy,’ namely ‘the use of genetic information … to ensure that … each individual has at least a modicum of normal genes’ (1988: 84, emphasis added). This report concludes that ‘individuals have a paramount right to be born with a normal, adequate hereditary endowment’ (1988: 86).

The suggestion that emerges from this report is that for at least certain advocates of the new genetics, the idea of amelioration, of improving the quality of the gene pool, is looming large on the horizon. However, as Fox Keller and others have pointed out, the language used is no longer one that supports the implementation of eugenics via government-supported social policies for the good of society, the species, or even of the collective gene pool, as was the case earlier this century (1992: 295). We are now in an era dominated by the idea of individual choice in connection with decisions relating to health and illness. Thus, genetic information will simply furnish the knowledge that individuals need in order to realize their inalienable right to health. ‘Geneticization’ is the term coined by Lippman (1992) to capture this tendency to distinguish people one from another on the basis of genetics, and increasingly to define disorders, behaviors, and physiological variation as wholly or in part due to genetic abnormalities.

One major disadvantage of this utopian type of talk to date, aside from the fact that it is inherently eugenic, blatantly reductionistic, and often wildly inaccurate, is that as yet we do not have therapeutic techniques available to manipulate the genes of individuals, although the time is rapidly approaching when experiments in utero with gene therapy may be implemented. Further, we have definitive diagnostic capabilities only for those relatively uncomplicated (although often devastating) diseases that follow Mendelian inheritance patterns. We are not able to predict with any certainty how and when multifactorial diseases such as breast and prostate cancer and Alzheimer’s disease (some forms of which are now associated with genetics) will occur. We know even less about the so-called behavioral disorders such as addictions or attention deficit disorder. Scientists critical of the hubris so often associated with the new genetics are careful to point out that only those with a mindset that assumes human behavior is determined by genetics could entertain the idea that we will soon be able to make diagnoses about the presence or absence of certain genes that determine individual behavior (Lewontin 1997).

Given the present level of knowledge in the new genetics, it takes little insight to realize that the burden of decision making in connection with genetic testing and screening, for the immediate future at least, will fall on women of reproductive age and their partners, and that the ‘choice’ they will be expected to make concerns abortion. The only alternative at present is to undergo expensive IVF treatment and to select for implantation into the woman’s uterus those fertilized embryos that have been ‘screened’ for certain diseases. It is clear that even when labeled as being ‘at risk’ of carrying a fetus with a major genetic disorder, not all women are willing to avail themselves of new reproductive technologies (Beeson and Doksum 1999; Lock 1998b). It is equally clear that women are already making decisions about pregnancies and abortion on the basis of information that they have been given by genetic counselors and geneticists, and that this information is couched in the language of risk and probabilities (Lock 1998c; Rapp 1988, 1990).

Mary Douglas has characterized the idea of ‘risk’ as a central cultural construct of our time (Douglas 1990), a construct that did not exist in a technical sense prior to the end of the last century. The ‘philosophy of risk,’ as Ewald notes, incorporates a secularized approach to life, in which God is removed from the scene, leaving the control of events entirely in human hands. This approach is a logical outcome of understanding life as a rational enterprise to be actively orchestrated by societies and individuals (Ewald 1991). Obviously, a rational approach to the management of disease is not at issue here, nor are the enormous advantages that have accrued from the systematization of disease categories and from research into the abnormal and the pathological. However, understanding disease in terms of risk inevitably raises some difficulties. Douglas argues that use of the word ‘risk’ rather than ‘danger’ or ‘hazard’ has the rhetorical effect of creating an aura of neutrality, of cloaking the concept in scientific legitimacy. Paradoxically, this permits statements about risk to be readily associated with moral opprobrium. Danger, reworded as risk, is removed from the sphere of the unpredictable, the supernatural, and the divine, and is placed squarely, in EuroAmerica at least, at the feet of responsible individuals, as the research of Crawford has shown. Risk becomes, in Douglas’s words, ‘a forensic resource’ whereby individuals can be held accountable (1990). However, as Francis Collins, the current director of the National Center for Human Genome Research in Washington, points out, in the world of genetics ‘we are all at risk for something,’ and thus we are all, in effect, potentially abnormal (Beardsley 1996: 102).

Dorothy Nelkin has recently documented a case of what she describes as the ‘growing practice of genetic testing in American society,’ in this instance for the gene associated with Fragile-X syndrome, which produces certain physical and behavioral disorders among children (Nelkin 1996). Guidelines for testing were issued in 1995 by the American College of Medical Genetics, and included a recommendation that those asymptomatic individuals deemed to be ‘at risk’ from this disease should be tested, in addition to children already exhibiting characteristic symptoms. The incidence of this disease, associated with mental impairment among other things, is estimated to be about one per 1500 males and one per 2500 females. In common with a good number of other so-called genetic diseases, the genes involved exhibit ‘incomplete penetrance,’ that is, not all individuals with the genotype will manifest the disease. It is estimated that about 20 per cent of males and 70 per cent of females with the mutation express no symptoms, making the designation of ‘at risk’ extremely problematic. Moreover, the severity of symptoms varies enormously and cannot be predicted.

The first testing program, developed by an industry-university consortium, was organized in 1993 in the Colorado public school system as a prototype for developing a national program. The project was funded by Oncor, a private biotechnology company, and was explicitly designed to save later public expenditure on children with behavioral problems. The research team tested selected children and developed a checklist of ‘abnormal’ behavioral and physical characteristics associated with the disease, including hyperactivity, learning problems, double-jointed fingers, prominent ears, and so on. After 2 years, the program had failed to turn up the anticipated number of cases, was deemed uneconomic, and was suspended (Nelkin 1996: 538). Nelkin notes that testing was not done in a clinical setting, that it was driven by economic and entrepreneurial interests, and that there are no known therapeutic means to change the condition of the children identified. However, the impact on the lives of those children who tested positive was significant, not least in the form of discrimination against them by health insurance companies. Nelkin points out that many of the parents involved not only cooperated with but actively encouraged the promotion of testing for the Fragile-X gene. She goes on to state that a significant number of parents, in particular mothers, experienced relief once their child’s so-called behavioral problem was identified as genetic, because the mothers could no longer be held responsible for the condition of their child.

Conclusion

The final two examples in this chapter, of the medicalization of menopause and of the move toward widespread genetic testing and screening, both clearly indicate that political and entrepreneurial interests are, above all, driving what is defined as abnormal today. In our present mood we are not willing to tolerate individuals who are liable, as we understand it, to place a financial burden on society, and their condition of being ‘at risk’ is treated as though it were pathological. Georges Canguilhem’s maxim that normality can only be understood ‘as situated in action,’ and moreover, that diversity does not imply sickness, nor does ‘objective’ pathology exist, has been entirely abandoned. We are no longer in a mood where normal means average; we are in an era of amelioration, enhancement, and progress through increasing intervention into the ‘mistakes’ of nature. However, in this climate the environmental, social, and political factors that, rather than genes, contribute to so much disease are eclipsed and tend to be removed from professional and public attention. Research in connection with these factors remains relatively underfunded. Basic medical science has made enormous strides and brought about insights in connection with any number of diseases, but when, under the guise of health promotion, individual bodies and individual responsibility for health are made the cornerstone of health care, moral responsibility for the occurrence of illness and pathology is often diverted from where it belongs (on perennial problems of inequality, exploitation, poverty, sexism, and racism) and inappropriately placed at the feet of individuals designated as abnormal, or at risk of being so, because of their biological make-up.