Medical Uncertainty Revisited

Renée C Fox. Handbook of Social Studies in Health and Medicine. Editor: Gary L Albrecht, Ray Fitzpatrick, Susan C Scrimshaw. Sage Publications, 2000.


Uncertainty is inherent in medicine. Scientific, technological, and clinical advances change the content of medical uncertainty and alter its contours, but they do not drive it away. Furthermore, although medical progress dispels some uncertainties, it uncovers others that were not formerly recognized, and it may even create new areas of uncertainty that did not previously exist.

The theme of medical uncertainty pervades the medical literature. It is also a major motif in the medical sociological literature. In both contexts, uncertainty is not only regarded as a challenging and problematic constant, but also as a matter of serious concern because of the adverse ways it affects the work and role responsibilities of physicians and the fate of patients. Uncertainty complicates and curtails the ability of physicians to prevent, diagnose, and treat disease, illness, and injury, and to predict the evolution and outcome of patients’ medical conditions and the results of the medical decisions and actions taken on their behalf. The implications are more than scientific and intellectual. Medical uncertainty raises emotionally and existentially charged questions about the meaningfulness as well as the efficacy of physicians’ efforts to safeguard their patients’ well-being, relieve their suffering, heal their ills, restore their health, and prolong their lives. It intersects with the risks and limitations and their accompanying ambiguities that medicine and its practice entail, and it evokes the inescapably tragic dimension of medicine: the fact that all patients, and all physicians as well, are mortal.

The primary goal of this chapter is to reconsider the forms of uncertainty that I have previously identified and analyzed (Fox 1957; Fox 1980) in light of the pertinent medical, social, and cultural developments that have occurred during the 1980s and 1990s. My empirical referents are drawn mainly from American milieux.

Previous Sociological Discussions of Medical Uncertainty

It was Talcott Parsons who first made medical uncertainty amenable to sociological analysis (Parsons 1951: 428-70). He emphasized ‘the great importance of … the element of uncertainty’ in the role of the physician and ‘the situation of medical practice.’ He identified some of the sources and forms of medical uncertainty, and he linked them to the ‘factors of known [and unknown] impossibility’ and the ‘limits of control’ that physicians continually encounter. ‘The remarkable advances of medicine,’ he observed, have ‘by no means eliminated’ these components of medicine from its scientific matrix or its clinical application. Rather, in significant ways, they have ‘increased awareness of the vast amount of human ignorance’ about health and illness that still exists. This seeming paradox is not unique to medicine, Parsons recognized; it is a generic characteristic of scientific progress. However, what makes medicine special in this and in other regards, he stated, are the distinctive features of medical work: the fact that health, illness, and medicine are associated with basic and intimate aspects of the human body, psyche, and story, and with transcendent and ultimate aspects of the human life cycle, with the privileged and penetrating access to patients’ bodies and their private lives that physicians are accorded as part of their work, and with the role that physicians play both in fending off, and in pronouncing, the death of patients. These attributes of physicians’ responsibilities, Parsons contended, and ‘the magnitude and character’ of what they entail augment the meaning and enhance the stress of the medical uncertainties and impossibilities that they face. Invoking anthropologist Bronislaw Malinowski’s insights, Parsons pointed out that the ‘combination of uncertainty and [the physician’s] strong emotional interests’ in successfully diagnosing and treating the patient’s condition are conducive to magical modes of thought and action. 
The scientific tradition of modern medicine may ‘preclude outright magic,’ he went on to say, but the medically ‘ritualized optimism’ and the ‘bias in favor of active intervention’ with which many physicians (especially American physicians) are inclined to respond in these conditions contain elements within them that are covertly and functionally magical.

Parsons’s perspective on medical uncertainty has had an enduring influence on how sociologists have dealt with this phenomenon in their research and writing. It has been a central source of my own sustained interest in medical uncertainty, my outlook on it, and my first-hand studies of its presence and implications in a variety of medical contexts. These contexts have included a metabolic research ward (Fox 1959), a number of medical schools (Fox 1957, 1978a, 1978b, 1978c, 1989a; Lief and Fox 1963), an array of medical research settings (especially those where patient-oriented clinical research is being conducted) in Belgium, as well as in the United States (Blumberg and Fox 1985; Fox 1959, 1962, 1964, 1976, 1978d, 1996a; Swazey and Fox 1970), and a wide cross section of American and continental European medical centers in which organ transplantation, the artificial kidney, and various models of an artificial heart have been pioneered, developed, and deployed (Fox and Swazey 1974, 1992). My research has consistently focused on the social, cultural, emotional, and moral and spiritual meaning of medical uncertainty for physicians and patients, and on their collective ways of responding to it.

My essay on the ‘training for uncertainty’ of medical students, published in 1957, appears to have captured and conceptualized a quintessence of becoming a physician. In spite of all the changes in medical knowledge and the reforms in medical school curricula that have occurred during the past four decades, medical students still give spontaneous testimony to the importance of training for uncertainty in the professional education that they undergo. In their 1994 appraisal of the New Pathway curriculum of Harvard Medical School, for example, students affirmed that none of its achievements is ‘more impressive … [than the] process of wrestling with uncertainty’—the way that it taught them that ‘medicine is filled with uncertainty—indeed, that uncertainty rather than certainty tends to be the norm’ (Silver 1994: 127-28).

There are three basic types of uncertainty around which the process of ‘training for uncertainty’ in medical school centers. These are the uncertainties that originate in the impossibility of commanding all the vast knowledge and complex skills of continually advancing modern medicine, the uncertainties that stem from the many gaps in medical knowledge and limitations in medical understanding and effectiveness that nonetheless exist, and the uncertainties connected with distinguishing between personal ignorance and ineptitude, and the lacunae and incapacities of the field of medicine itself. The neophyte status of medical students, I found, increases their awareness of these forms of uncertainty in knowledge. In the words of one student, discerning whether the uncertainty they encounter is ‘their fault’ or ‘the fault of the field’ is the problem, and the anxieties they experience in medical school are heightened by their anticipatory concern about its implications for how knowledgeably and competently they will be able to care for patients when they graduate into the role of physician, with the responsibilities of clinical practice.

The uncertainty that medical students encounter is not confined to the intellectual, scientific, and technical dimensions of medicine. In the contexts of dissecting a cadaver in the anatomy laboratory, participating in autopsies, assisting at births, and through their contacts with sick children, patients in pain, and those who are terminally ill (among others), students come face to face with what might be termed existential uncertainty. In other words, they are faced with critical problems of meaning, and ‘ponderably imponderable’ questions about the ‘whys’ and the mysteries of life and death that are at once integral to medicine and that transcend it.

The collective modes of coming to terms with the multiple uncertainties of medicine that I have watched medical students develop are harbingers of some of the key ways in which many physicians handle uncertainty (Fox 1978c). These include those listed below.

  • A process of intellectualization that entails achieving as much cognitive command of the situation as possible through the acquisition of greater knowledge and skill, and increasing mastery of the probability-based logic with which medicine approaches the uncertainties of diagnosis, therapy, and prognosis, and of the clinical judgment that lies at their heart. To a degree, it also involves defining and operationalizing medical problems in strict, scientific terms that siphon off some of their affectivity and reduce some of their complexity.
  • The attainment of a more detached kind of concern about uncertainty (Lief and Fox 1963) by muting awareness of its constant presence in medical work, pushing strong feelings about the most emotionally evocative issues it raises below the surface of consciousness, not displaying uncertainty, and shrouding it in silence. This complex of responses to uncertainty is influenced and structured by the professional socialization process that medical students undergo. This socialization process consists mainly of the largely latent ‘messages’ they receive from their teachers and that they reinforce in one another about what medically capable and emotionally mature physicians ought and ought not to admit, exhibit, and discuss with colleagues and with patients.
  • The employment of a special genre of medical humor—counterphobic and ironic, infused with bravado and self-mockery, often impious and macabre—that is centered on the uncertainties and limitations of medical knowledge, medical errors, the side effects of medical and surgical interventions, the failure to cure, and death. Ostensibly, the capacity to joke about medical uncertainty in its various guises indicates an attitude of relative ease with its presence. On closer inspection, however, the tightly patterned character of this joking, the fact that it resembles what Sigmund Freud called ‘gallows humor’ and also front-lines-of-the-battlefield trench humor, and the difficulties that many students and physicians experience in talking seriously about medical uncertainty, all suggest that this humor is far from nonchalant. Rather, it seems to be impelled and shaped by a considerable amount of dissembled stress.

Building on my work, sociologist Donald Light set out to discover what kinds of uncertainty newly graduated physicians encounter after their medical school years, and how they deal with the quandaries that these uncertainties present. In the context of his first-hand study of the education and socialization of psychiatrists during their residency training, Light identified a cluster of clinical uncertainties surrounding diagnosis, treatment, and patient responses that cross-cut the uncertainties of knowledge I delineated (Light 1980). He found that the need of these young physicians to control uncertainty grew as their clinical responsibilities increased, so that progressively, ‘training for uncertainty [became] training for control’ (Light 1979: 320). He identified two characteristic ways that physicians gained control over their work: through the assertion and exercise of ‘individual clinical judgements’ based on their personal experience; and by ‘acquiring a treatment philosophy’ premised on the espousal of a particular ‘paradigm’ or ‘approach.’ In adopting these means, Light cautioned, physicians ‘[ran] the danger of gaining too much control over the uncertainties of their work by becoming insensitive to the complexities of diagnosis, treatment, and client relationships.’

Like Donald Light (on whose writings he drew), psychiatrist Jay Katz contends that some of the mechanisms for coping with uncertainty that physicians learn during their medical school and postgraduate years make it possible for them to ‘disregard’ uncertainty in clinical situations (Katz 1984). In his view, ‘once one leaves the arena of laboratory and clinical experimentation, there is little evidence that physicians … consciously take uncertainty into account either in their self-reflections or in their interactions with patients.’ Their tendency to ‘avoid’ uncertainty, he alleges, is buttressed by the profession’s demand for ‘conformity and orthodoxy,’ and by specialization that ‘narrows diagnostic vision,’ and ‘fosters belief in the superior effectiveness of treatments prescribed by one’s [field]’ (Katz 1984: 165-206).

Katz and Light, along with sociologist Paul Atkinson, assert that issues of certainty and uncertainty are intricately entwined (Atkinson 1984: 954). Most importantly, they emphasize how the ‘training for uncertainty’ trajectory can insulate physicians from medical uncertainty in ways that make them less able to acknowledge it. Thus, an unanticipated and unintended outcome of their professional socialization is that it may inadvertently lead to ‘training for certainty,’ and beyond that, to ‘training for over-certainty.’

The article on ‘The evolution of medical uncertainty’ (Fox 1980) began with a micro-dynamic account of the cumulative insights that my research on uncertainty in various medical settings had yielded over the course of some 30 years. This provided the background from which I ventured a more macroscopic set of observations and reflections on what appeared to be the growing attention and significance that issues of medical uncertainty were being accorded on the larger American scene.

From the vantage point of a continuous medical uncertainty watcher, I had the impression that a more pervasive societal interest in this phenomenon, and greater professional and public concern about its concomitants and consequences, had been developing throughout the 1960s and 1970s. Health, illness, and medicine seemed to have become foci of heightened anxiety about uncertainty and amplified awareness of it—centering on known and unknown risks, hazards, errors, limitations, and harm that such medical uncertainty could engender. By the end of the 1970s, medical intellectuals as astutely perceptive as physician-essayist Lewis Thomas and Nobel Laureate in Medicine Andre Cournand were taking note of what they each regarded as this at once notable and perplexing American malaise:

As a people, we have become obsessed with Health…. We have lost all confidence in the human body. The new consensus is that we are badly designed, intrinsically fallible, vulnerable to a host of hostile influences inside and around us, and only precariously alive…. The new danger to our well-being … is in becoming a nation of health hypochondriacs, living gingerly, worrying ourselves half to death… (Thomas 1979: 47-50)

The American public is being swept by a medical epidemic characterized by doubt of certitude, recognition of error, and discovery of hazard. (Cournand 1977: 700)

The expanding professional and public interest in medical uncertainty, and the apprehension that accompanied it, were concentrated both on the human diseases—especially cancer—that still elude scientific understanding and clinical control despite all the medical progress that has been made in the course of the century, and on the potentially dangerous and noxious side effects that advances in the diagnosis, treatment, and prevention of disease and illness have brought in their wake. In addition, research with recombinant DNA (the compound deoxyribonucleic acid) had triggered great worry in the scientific community as well as among the lay public about the ‘unexpectedly bad consequences’ that this new technology might have for human health and well-being, for example, ‘through the creation of new types of organisms never yet subjected to the pressures of evolution and which might have disease-causing potentialities that we do not now have to face’ (Watson 1976: 3).

These forms of medical uncertainty had meta-medical implications. Cancer was not only portrayed as a set of malignant diseases with which biology and medicine were still unable to deal knowledgeably and effectively, but also as one of the most pernicious and lethal types of suffering to which human beings are subject. The controversy that erupted over DNA technology brought forth feelings of dread over the dangers—even the monsters—that the dawning capacity of humankind to intervene in the evolution of all forms of life on this planet, including and especially its own, might produce. Indignation over the continuing inability of modern medicine to deal with unsolved problems of health and illness coexisted with anxiety about the medical ‘hubris’ and the ‘nemesis’-borne side effects of biomedical attempts to master these problems (Illich 1976). This highly ambivalent outlook was suggestive of a more diffuse societal ‘uncertainty about uncertainty,’ as if we were culturally unsure about how to approach the kinds of medical uncertainty now before us.

During the two decades that have ensued since ‘The evolution of medical uncertainty’ (1980), some of the developments that have occurred in medical science and technology, in the practice of medicine, and in the social and cultural conditions surrounding them, have contributed to the appearance of new elements of medical uncertainty. Many of these, however, were foreshadowed by previous manifestations of uncertainty, and all of them are compatible with the uncertainties in medical knowledge that were identified in the original ‘Training for uncertainty’ essay.

By and large, the new forms of uncertainty that have come into view in the past 20 years have not been extensively described or analyzed by social scientists. Therefore, they will be considered here largely through the medium of scientific and medical literature, and the insights that sociological reflection on that literature yields.

Uncertainty, ‘Medicine and Molecules’

To begin with, some of the major advances in medicine have done more than produce knowledge and techniques that further enlarge the enormous amount that physicians were already called upon to learn. Cumulatively, they have also resulted in basic changes in some of the underlying assumptions and modes of thought of present-day medicine. Foremost among these are the transformations in the cognitive framework of modern medicine that have occurred since 1953, when Francis Crick and James Watson published articles in which they announced their discovery of the self-complementary, double helix structure of DNA and their hypothesis of ‘a possible copying mechanism for the genetic material’ (Watson and Crick 1953a, 1953b). This discovery, and Watson and Crick’s subsequent work that showed the way toward analysis of the genetic code and understanding of how genetic material directs the synthesis of proteins, ushered in the so-called ‘biological revolution’ in which the ‘new’ molecular and cell biology, with its genetic focus, became ascendant. A veritable explosion of information and knowledge has been unleashed, epitomized by the Human Genome Project, a massive, international scientific program to achieve nothing less than mapping and sequencing all the genes in the human body and the noncoding regions of all the DNA contained in the human genes as well.

The nature of this knowledge, however, is highly reductionistic. It disaggregates biological systems by breaking them into smaller and smaller parts. As sociologist Howard Kaye states in his analysis of ‘the social meaning of modern biology,’ it concentrates attention on genes rather than on individual organisms (Kaye 1986). Medical educators like Daniel Tosteson (the former Dean of Harvard Medical School) point out that there is an unfulfilled need for a conceptual framework within which this kind of micro-knowledge can be synthesized, integrated, and made pertinent to the organismic, pathophysiological level of clinical medicine. A unifying system does not yet exist, he maintains, that would enable physicians ‘to think about their patients in ways that permit appropriate access to molecular detail when such knowledge is crucial for diagnostic, preventive, or therapeutic action, without the burden of such a ponderous accumulation of facts that it will impede analysis and decision’ (Tosteson and Goldman 1994: 175).

In this respect, the intellectual gap that exists between ‘medicine and molecules’ constitutes a paradigmatic source of medical uncertainty and limitation, despite the regnant conviction that the new molecular knowledge will soon transform the practice of medicine by illuminating the etiology and mechanisms of human diseases and providing the basis for more potent and rational therapies. For example, at this stage in the development of somatic gene therapy, ‘clinical efficacy in human patients has [still] not been definitively demonstrated,’ or even ‘clearly established in any gene therapy protocol’ (Orkin and Motulsky 1995, non-paginated text). However, as of June 1995, more than 100 clinical protocols involving gene therapy had already been approved and initiated by the US National Institutes of Health (NIH) Recombinant DNA Advisory Committee. Some 597 human subjects had undergone gene transfer experiments under these auspices, approximately $200 million per year for this research was being provided by NIH, and a larger amount of funding was forthcoming from industrial sources. According to a committee appointed by the Director of NIH, Harold Varmus, to assess the current status and promise of gene therapy, this lack of clinical efficacy in human patients is due, on the one hand, to major difficulties with current gene transfer vectors and in understanding their biological interaction with the host; on the other, to the inadequate attention that has been accorded to studies of disease pathophysiology; and to the challenging problem of bridging the two ‘at the interface of frontier science and patient care’:

As the field of gene therapy expands [the committee’s report stated], the need for appropriately trained personnel, including basic scientists with familiarity of disease pathophysiology and medical scientists and physicians with an appreciation of the complex basic science issues will become even greater. (Orkin and Motulsky 1995)

The persistence of conceptual, basic scientific, technical, and clinical bases of medical uncertainty notwithstanding, the atmosphere that pervades the field of molecular biology tends to be so exuberant that the authors of the NIH report on gene therapy believed it important to make the following admonitory observations:

Expectations of current gene therapy protocols have been oversold. Overzealous representation of gene therapy has obscured the exploratory nature of initial studies, colored the manner in which the findings are portrayed to the scientific press and public and led to the widely held, but mistaken perception that clinical gene therapy is already highly successful … We cannot predict when the benefits of gene therapy will be realized. (Orkin and Motulsky 1995)

Nor is such unbridled optimism and certitude confined to the realm of gene therapy. It was conspicuously present, for example, at the workshop on xenotransplantation (animal-to-human organ transplantation) held by the US Institute of Medicine on 17 July 1996 (Fox 1996b: 9-11; Institute of Medicine 1996). The molecular and cell biologists present, and also some of the immunologists, were so enthusiastic about the experiments they were conducting in the laboratory with xenografts of certain animal-to-animal cells and tissues that they were inclined to overestimate the degree of control that currently exists with regard to the transplantation of solid, human-to-human organs, and to underplay the even more vigorous rejection reaction that whole organ grafts between phylogenetically distant species are likely to elicit. Rather ironically, they were more prone to imply that enough is now known to make animal-to-human transplants clinically feasible than the several transplant surgeons participating in the workshop who had done pioneering clinical trials with baboon-to-human transplants.

What ‘best characterizes molecular biology,’ Howard Kaye states, is its ‘aggressive, simplifying, reductionist approach, … attitude, [and] research strategy,’ and beyond that, its ‘world view,’ articulated by its founding practitioners and leading theorists. In this ‘aggressively reductionistic’ and ‘deterministic’ perspective, ‘culture is reduced to biology; biology, to the laws of physics and chemistry at the molecular level; mind, to matter; behavior, to genes; organism, to program; the origin of species, to macromolecules; life, to reproduction’ (Kaye 1986: 55-7). Although it is powerfully and brilliantly generative of new biological knowledge, it is also an outlook that greatly simplifies the complexity of the phenomena it observes and analyzes. This attribute accounts in part for the tendency toward hyper-certainty that is visible in newly developing and still experimental areas, such as gene therapy and xenotransplantation, where molecular biology and genetics play a pivotal role. Historians of medicine and science Robert L. Martensen and David S. Jones remind us that, in addition, the fact that nowadays many physicians and researchers believe that ‘molecular medicine’ will satisfy the yearning for medicine to be an ‘exact science’ is part of a much older process of ‘searching for medical certainty’ (Martensen and Jones 1997).

Uncertainty and the ‘Emergence’ and ‘Reemergence’ of Infectious Diseases

A second change in the cognitive base of contemporary medicine that has opened up new areas of uncertainty, and reopened old ones, is related to what is called in the medical literature, ‘the emergence and reemergence of infectious diseases.’ These terms refer to diseases that have ‘newly appeared in the population, or are rapidly expanding their range,’ those that are ‘already widespread but, while not new in the human population, are newly recognized,’ and to the resurgence of old scourges in new, more severe forms (Morse 1993: 10-11). The pathogenic microbes are often viruses, but bacteria and parasites are also involved in these outbreaks of infections. The spectrum of ‘new’ and ‘old’ diseases that they cause ranges from HIV/AIDS, Ebola hemorrhagic fever, Legionnaires’ disease, Lyme disease, and bovine spongiform encephalopathy (‘mad cow’ disease), to cholera, dengue, yellow fever, and tuberculosis, among many others.

To a sobering degree, the occurrence of infectious diseases and their spread are precipitated by human conditions and behavior, for example, by changes in patterns of agriculture and irrigation, massive rural-to-urban population movement, increasing population density in cities, global travel and trade, immigration, warfare, refugee migration and internment, economic crises, political upheavals, famine, poverty, and homelessness. Even more humbling, as historian William McNeill points out, is the fact that our human attempts to ‘make things the way we want them, and, by skill, organization and knowledge, to insulate ourselves from local and frequent disasters … change natural ecological relationships.’ In turn, this creates ‘new situations that become unstable … [and] new vulnerabilities to some larger disaster’ (McNeill 1993: 5-6). Such ecosystem disruptions are as true of medical interventions as of other forms of supposedly ameliorative action. An important and threatening example of this phenomenon is the fact that microbes, such as certain strains of Staphylococcus aureus, Streptococcus pneumoniae, Mycobacterium tuberculosis, and Neisseria gonorrhoeae, have become resistant to a substantial proportion of antibiotic drugs considered first-line treatments, partly because they have been extensively, often excessively, used in humans and also ‘in veterinary medicine, animal husbandry, agriculture, and aquaculture’ (Tenover and Hughes 1996: 303).

At this historical juncture, most medical and public health professionals have distanced themselves from the previous certainty (voiced as recently as 1969, by the then US Surgeon General) that ‘western scientific medicine can [and/or has] overcome pathogenic agents’ (Porter 1998: 491-2). They are poised somewhere between a reawakened realization that ‘the more we drive infections to the margins of human experience, the wider we open a door for a new catastrophic infection’ (McNeill 1993: 36), and the determined conviction that ‘because we now understand many of the factors leading to [emerging diseases] … we should be in a position to circumvent [them] at fairly early stages, [through] sophisticated surveillance with clinical, diagnostic, and epidemiological components on an international scale’ (Morse 1993: 26).

Uncertainty and Prognosis (Christakis 1995, 1999)

Still another conceptual shift occurring in present-day medicine is the increasing importance that medical prognosis has assumed—a development that has accentuated problems of uncertainty often faced by physicians when they are called upon to make explicit predictions about the outcome of a patient’s illness or state. Physician-sociologist Nicholas A. Christakis has shown that ‘diagnosis and therapy [have always received] more attention than prognosis in patient care, medical research, and medical education.’ In his view, this is partly a consequence of ‘the contemporary dominance of an ontological [conception] of disease … in which disease is seen as generic and generally independent of its expression in an individual’:

Making a diagnosis has become the central concern of the clinical encounter because prognosis and therapy are seen to follow necessarily and directly from it. The ontological perspective is further reinforced when an effective therapy for a disease exists because effective therapy further narrows the range of possible outcomes a disease might have. Once a diagnosis is made and effective therapy is initiated, the clinical course of a disease is presumed to be relatively fixed, non-individualistic, and standardized. (Christakis 1999)

Even if a patient has a condition that is generally amenable to existing therapy, this does not inevitably mean that his/her medical history will unfold in the usual way, or result in a favorable outcome. Explicit prognostication becomes both more difficult and more necessary in such instances. Although it may be a means of gaining some degree of control over the unfolding clinical situation, prognosticating under these circumstances is likely to be threatening both to the physician and the patient, because it reveals not only medical uncertainty and limitation, but also medical fallibility.

Prognosis comes into special prominence, too, when a patient is facing imminent death in spite of all the means of remedying illness and prolonging life that modern medicine and its practitioners command. Predicting whether a patient will soon die, when, and how, and conveying this information with discernment to the patient and family is one of the physician’s most solemn obligations. It is also a way of structuring and managing a situation that challenges the physician’s mastery, and that evokes the mortality that he ultimately shares with all patients.

‘A close study of physician attitudes and behavior reveals a dread of prognostication [Nicholas Christakis writes]—whether accurate or inaccurate … favorable or unfavorable. Physicians would rather not formulate or discuss prognosis’ (Christakis 1999). This is because they associate prognosis with the limits of their diagnostic and therapeutic powers, and with the grave illnesses and impending deaths of patients. In addition, Christakis has discovered, they have a shared inclination to believe that any negative predictions they make about patients’ conditions and their outcomes may have ‘self-fulfilling prophecy’ effects, whether or not they communicate their somber expectations to patients. For these reasons, physicians not only have a tendency to skew prognosis-setting in a positive, optimistic direction, but also to play down, and if possible avoid, medical forecasting.

Their apprehension notwithstanding, current and pending developments in medical science and technology, and in the social settings in which medicine is practiced, are making overt prognostication more important and more difficult for physicians to eschew than in the past. Christakis has identified a number of such developments. First and foremost, he contends, is the increasing prevalence of chronic disease, in which the diagnosis is already known and therapy mainly entails the continuation of previously initiated interventions. In chronic illness, the chief clinical encounters and challenges entail anticipating, forestalling, and mitigating adverse new events stemming from the disease itself, or from cumulative side effects of what is being done to treat it.

The invention and utilization of new forms of medical technology, Christakis avers, is another set of factors contributing to the growing relevance of prognosis. Notable among these are genetic testing methods that can reveal whether an asymptomatic person will or will not develop a genetically based disease such as Huntington’s chorea or, through the analysis of both an individual’s genes and those of her spouse, testing that can yield a probabilistic prediction about the chances of the couple giving birth to a baby with particular, genetically borne disorders. The emergence of novel reproductive technologies such as prenatal ultrasound and amniocentesis provides information about a pregnancy and the condition and development of the fetus that have postnatal import. As more technologies are invented that either directly or indirectly produce prognostic data, physicians will be under greater pressure to make clinical predictions. In turn, they will be confronted with added problems of uncertainty and limitation, such as what to tell the parents about the future clinical course of a baby diagnosed in utero with polycystic kidney disease, or what to offer a person certain to develop in mid-life a fatal, degenerative neurological disease such as Huntington’s chorea for which no therapy exists.

What Christakis terms ‘the increasingly bureaucratic structure of American medical practice’ is also focusing more attention on prognostic judgements. In the wake of the accelerating growth of managed care, an expanding percentage of US physicians are becoming salaried employees in large, formal medical organizations. In these milieux, where physicians’ practice styles and behavior are reviewed and regulated by others (physician and nonphysician), cost containment, the economic allocation of scarce resources, and efficacy are likely to be emphasized. Within the framework of these structures and norms, physicians are being asked to base clinical decisions, such as the timing of hospitalization, the duration of hospital stay, and the referral of patients for terminal hospice care, on prognostic assessments of the course of the illnesses involved.

Finally, the intensifying interest in ethical (or so-called bioethical) parameters of medical care, and the concern about them that have gained momentum in public as well as professional arenas of American life since the early 1970s, have played a role in accentuating the importance of prognosis. For example, the greater insistence on the ethical imperative of informed, voluntary consent from patients for the diagnostic and therapeutic measures they undergo not only entails explaining to them what these interventions are, but also telling them what they are expected to accomplish and what risks and negative side effects may be involved.

End-of-life medical care is another important area of bioethical deliberation to which prognosis is integral for physicians, patients, and their families. It profoundly affects the tone and the content of the discussion they have with one another about such care, and the decisions that are made about whether to initiate, forego, or terminate the life-sustaining treatment of patients who are critically ill. Making the kinds of forecasts about suffering and pain, and about the quality of life and of death, that this implies carries all the participants in such decisions beyond medicine and into the realm of questions of meaning and of spiritual beliefs and uncertainties.

Uncertainty and the Irony of Iatrogenesis: Side Effects and Error

As the foregoing discussions of prognosis, emerging infectious diseases, and the advent of molecular biology suggest, a continual source of medical uncertainty is the unwanted, sometimes predictable, and frequently unpredictable, side effects of the technology, procedures, and drugs that physicians use to diagnose and treat patients’ disorders. Throughout the history of medicine, the actions that physicians have taken on behalf of their patients have always had a mixture of beneficial and harmful consequences. Paradoxically, the impressive advances of modern medicine have in certain ways augmented its iatrogenically induced adverse effects on patients. As the modes of diagnosing and treating disease and illness have become more powerful and efficacious, they have also grown more dangerous, exposing patients to more potential risk, suffering, and harm through their anticipated and unanticipated negative consequences.

The current armamentaria of cancer treatments, for instance, consist of surgical procedures, radiotherapy, and chemotherapy regimens that, however meliorative or curative, are highly invasive, in some cases mutilating, causing physically and psychologically painful symptoms such as fever, infection, anemia, severe fatigue, hair loss, incontinence, impotence, and premature menopause. To cite another instance, the skilled use of various combinations of highly active, antiretroviral drugs, including protease inhibitors, has recently brought about what appears to be a dramatic improvement in the symptoms, daily round, and life span of persons infected with HIV. However, these drug ‘cocktails’ are so ‘expensive and complex, with [such] high pill burdens, numerous adverse effects … myriad drug interactions,’ and oppressive ‘quality of life issues,’ that many recipients of the battery of drugs find it difficult to adhere to the regimen necessary for optimal results (Cohen and Fauci 1998: 87). This can cause the HIV virus to mutate into drug-resistant strains, with grave consequences for the patients taking the drug, and in the long run, public health.

Furthermore, whether the drugs are intended for therapy for HIV/AIDS, cancer, or other disease conditions, in spite of all the pharmacological progress that has been made, no satisfactorily encompassing, overall theory of drug action has as yet been developed. This makes it difficult for physicians to foretell how favorable and/or unfavorable an individual patient’s responses to particular drugs will be, and to apprise the patient of possible adverse reactions without unduly alarming him or contributing to the occurrence of negative, placebolike effects.

There is nothing new in medicine about error causing serious injurious consequences, but the increasingly hazardous and intricate character of the instrumentalities that present-day medicine deploys enhances the potential seriousness of the errors that take place. Pediatric cardiac surgeon Marc de Laval depicts the ‘high technology’ area of medicine in which he works, for example, as ‘a complex socio-technical system’ that ‘shares many similarities with high hazard enterprises, such as the aviation industry, nuclear power plants, marine and railroad transportation, chemical plants and the like’ (de Laval 1996). Using psychologist James Reason’s conceptual framework for examining human errors that occur in high-risk systems (Reason 1990), de Laval identifies and analyzes examples of the kinds of error that he has observed or experienced in his own practice. He terms these active or latent failures: skill-, rule-, and knowledge-based mistakes; accidents that may result from ‘the combination of high operational hazards (an intramural coronary artery) and human fallibility’; those that emanate from the ever-increasing amount of ‘hardware’ utilized in high-technology medicine (such as ‘diagnostic equipment, anesthetic equipment, perfusion equipment, monitoring equipment, drug delivery systems, [and] cardiomechanical assistance devices’); and those that happen at the ‘interface’ between ‘hardware’ and what he calls ‘liveware.’ ‘A few months ago, during a repair of tetralogy of Fallot,’ de Laval writes illustratively:

there was a power cut in central London. The hospital generator went on and activated two pumps of the extracorporeal circuit but failed to activate the main head pump. The perfusionist immediately noticed the technical failure and used the handle to activate the pump manually. Unfortunately, he turned it clockwise instead of anticlockwise, and air traveled into the arterial line. This is a good example of technical failure, human error at the hardware/liveware interface, but also a latent failure arising from the company that made the hardware, which should have been equipped with a device preventing such an accidental happening. (de Laval 1996)

He also describes personal examples wherein the physical, administrative, financial, social structural, interpersonal, or cultural ‘environment’ in which he and his colleagues work can affect surgical performance (its excellence and fallibility) and surgical outcomes, including the way in which errors are dealt with when they occur (de Laval 1996; de Laval et al. 1994).

de Laval is one of a number of physicians who have become sufficiently interested in medical and surgical error, and concerned about it, to try to study its origins, dynamics, and consequences. One of their common conclusions is that if doctors were more open about their fallibility—more able to discuss mistakes both with colleagues and with patients—they would not only be relieved of some of ‘the burden of perfection,’ and its isolating anguish, they might also be able to establish better communication with patients—especially with regard to their complaints, matters of medical uncertainty, and the conveying of ‘bad news’—and feel more willing to avail themselves of opportunities for improving their knowledge, skill, and performance. Thus, greater physician openness about errors could lead to a reduction in their frequency and in the incidence of malpractice suits as well (Christensen et al. 1992; Levinson et al. 1997; Royal College of Physicians of London 1997).

Uncertainty and Individually Focused versus Collectivity-Oriented Medicine

Reconciling and integrating the one-on-one, doctor-patient relationship of clinical medicine with population-based reasoning and action is a long-standing cognitive problem in modern medicine, fraught with uncertainty, that also evokes strong sentiments about physicians’ role responsibilities and value commitments. Although the tensions between these two orientations and modalities of thought are not new, they have been increased by a number of converging factors that include the emergence and reemergence of infectious diseases and the persistent, even mounting, epidemiological tendency of chronic diseases to take their greatest toll on the health and life expectancy of persons in the lowest and poorest strata of advanced modern societies like the United States and the United Kingdom. Other factors include the growing importance of managed care organizations in the United States, and their enrolled patient populations, and the burgeoning emphasis on practicing what is termed ‘evidence-based medicine,’ with interventions and outcomes that are clinically appropriate, efficacious, and cost-effective. Each of these developments invites a more aggregate-based, collectivity-oriented perspective than is usually characteristic of the individually focused physician-patient dyad of clinical practice. This raises difficult methodological, attitudinal, and professional questions about how the two approaches, and their implications for the handling of medical uncertainty, can be reconciled.

For example, according to what might be called its official definition, ‘the practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence,’ derived from the basic sciences of medicine, and from patient-centered clinical research conducted via large, randomized, controlled clinical trials, or from the systematic review (including meta-analysis) of a number of smaller, more disparate published clinical studies (Sackett et al. 1997: 2). David Sackett, one of its founding fathers and chief codifiers, and his co-authors declare that, ‘Evidence-based medicine is not “cook-book” medicine. Because it requires a bottom-up approach that integrates the best external evidence and patient choice, it cannot result in slavish cookbook approaches to individual patient care’ (Sackett et al. 1997: 3-4).

However, there are numerous thoughtful British and American physicians and some social scientists of medicine who regard evidence-based medicine with skepticism and apprehension. They invoke the very epistemological, philosophical, practical, and policy concerns that Sackett and colleagues dismiss. Evidence-based medicine, they say, is ‘bias[ed] toward a narrow scientism’ and empiricism and a kind of ‘biomedical positivism’ whose goal is ‘a science-based rationalization of health services research, … health care … and, by extension, health policy’:

[It] makes a spurious claim to provide certainty in a world of clinical uncertainty. The dilemma facing policy makers, managers and practitioners, as well as the public in general, is that in most cases we are not dealing with a clear-cut question of whether treatment is effective or ineffective. Rather, the questions are how effective, and to what degree of probability? (Hunter 1996: 6)

In the final analysis, what is ‘appropriate care’?, a physician asks:

It depends [he answers], on which clinicians are questioned, where they live and work, what weight is given to different types of evidence and end points, whether one considers the preferences of patients and families, the level of resources in a given health system, and the prevailing values of both the system and the society in which it operates. (Naylor 1998: 1920)

Such physician-critics of evidence-based medicine feel that it may misconstrue clinical expertise by ‘reducing the complexity of clinical decision making to the simple matter of following the results of relevant, rigorously controlled trials in its quest for a particular kind of certainty’ (Hurwitz 1997a):

To varying degrees, the judgments required of clinicians in discrete areas of medicine such as diagnosis, the treatment of some chronic conditions, or the management of anticoagulation, can be more or less successfully objectified, but this does not reduce clinical judgment to nothing more than a form of “decisional algebra” that can be encapsulated in expert systems, algorithms, protocols, or guidelines.

Making judgments about complex individual circumstances in the context of different degrees of uncertainty, where opinions differ, and where the authority of one’s senses, perceptions and intuitions frequently play interacting roles is the routine reality of such medical practice. Opinionated judgments, grounded in clinical experience, counterweighted by knowledge of scientific findings, and modified by respect for patients’ wishes are not necessarily simple transductions of input information which result in output decisions. (Hurwitz 1997b)

Physicians who have this perspective on the clinical encounter do not believe that all the variation that exists in medical practice is either surprising or necessarily a state of affairs that can, or should, be remedied through the formulation and application of clinical guidelines derived principally from the results of randomized, controlled clinical trials. It is important to first study these variations, they insist. Soundly based guidelines, they concede, can help to ‘focus such variation, especially where there is both considerable certainty about efficacious treatment strategies (based on scientific evidence or expert opinion), and where significant departure from these strategies occurs without valid justification’ (Hurwitz 1997b). However, they insist that this is not equally true of clinical situations in which there are ‘inherent uncertainties.’ Nor is it the case when ‘the evidence derived from patients enrolled in published trials is [not] relevant to the patient one is agonizing over—a circumstance that is both frequent and serious in a field like geriatrics; for example, wherein too many RCTs [randomized controlled trials] have excluded older, and particularly older and iller [sic] patients’ (Grimley-Evans 1995: 461). In circumstances like these, to place too much credence in evidence-based medicine, standardized clinical guidelines, or average outcomes in the population may eventuate in approaching patients in a pseudo-scientific, ‘evidence-biased’ way that pays insufficient attention to the individual particularities of their states of health, illness, and well-being (Grimley-Evans 1995: 461-2 [italics added]).

Physicians with this intricate view of clinical observation and reasoning are also troubled by the extent to which evidence-based medicine appears to be contributing to the ‘fragmenting and shifting away [of] clinical expertise … from its previous locus with the practicing physician … towards corporate entities such as expert panels, consensus conferences, clinical guideline development groups, and experts in data extraction and analysis [whose] skills are not necessarily similar to those required by the physician’ (Hurwitz 1997a).

Other patterns of tension between population-based and individual patient-focused medical reasoning and commitment have arisen in the field of organ transplantation, both with regard to successive retransplants and the transplanting of organs from animals to humans (xenotransplants).

Despite advances in the biology of organ and tissue rejection, and the development of new immunosuppressive drugs, which attenuate or retard the immune reactions responsible for the rejection of transplanted organs and that prolong their survival in recipients, the rejection reaction continues to be a major cause of graft failure. This means that virtually all transplant recipients will eventually reject the organ or organs they have received and become potential candidates for retransplants. Their eligibility for repeated transplants is thrown into question by what transplant physicians term the ‘shortage’ of donated transplantable organs, and consequently the thousands of patients with end-stage diseases waiting for transplants who may never receive them. In addition, the results of retransplants, with regard to graft survival and patient mortality, are generally far less favorable than the outcome of first-time transplants. (This is more true of heart and liver, than of kidney retransplants.)

Transplant clinicians are very reluctant to accept and follow a rule of one organ per recipient. They are also reluctant to concede to bioethicists that because they have a ‘moral duty to direct scarce lifesaving resources to those most likely to benefit from them,’ primary transplant candidates should be given a better chance of receiving organs than retransplant candidates, and the number of times that transplants are offered to patients should be limited (Ubel et al. 1993). This resistance of transplant clinicians stems from the duration and strength of the relationships they form with these very sick patients, whom they have already wrested from death through organ transplantation. To use their own language, many transplanters feel that they are ‘abandoning’ their patients if they do not seek a retransplant for them when graft failure takes place (Fox 1997).

Xenotransplantation—the second sphere of transplantation that juxtaposes medical uncertainty, physicians’ concerns about the particular patients for whom they care, and physicians’ responsibility to a larger collectivity—‘promises great benefit to some patients,’ on the one hand, but presents ‘the possibility of a new disease entering the human population,’ on the other (Bach et al. 1998: 142). There are biomedically sound bases for supposing that the potential for transmission of infectious agents from animal donors to human transplant recipients may be greater than in human-to-human transplants. Some of the organisms carried by a xenograft may be unknown human pathogens, and they may include ‘xenotropic’ organisms that are not threatening to the animal donor species, but can cause disease in a human recipient. Such infectious diseases may not only have the potential to infect individual organ recipients, but also to spread to the general population. Nobody knows how big this risk is, but all medical scientists and physicians agree that ‘it is unequivocally greater than zero’ (Institute of Medicine 1996: 92). The ambiguity and possible gravity of the collective risks of xenotransplantation are magnified by the threat to human health posed by emerging viruses and other microorganisms, many of which are thought to be transmissible from animals to humans. In addition, there is the acute realization that some of these diseases—of which human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) is the most harrowing example—can become epidemic, even pandemic.

Transplant physicians do not deny that there is a potential risk of infection to organ recipients and to the community at large. They agree with such bodies as the US Public Health Service and Food and Drug Administration (USA Public Health Service 1996), and the UK Xenotransplantation Interim Regulatory Authority and Ministry of Health, that it is advisable to have special guidelines, rules, and comprehensive mechanisms for the close monitoring and continuing surveillance of xenograft recipients, the family members with whom they have intimate contact, and the health professionals who care for them. At the same time, however, transplanters are disposed to playing down the uniqueness and seriousness of the risks associated with xenotransplantation and to minimizing the dangers to the public’s health that it might unleash. They are more inclined to dwell on its potential lifesaving benefits, and on what they believe is an obligation to respond to the suffering and need of patients awaiting transplants by augmenting the organs available to them in this way.

Epistemological Uncertainty

‘These are strange times, when we are healthier than ever but more anxious about our health,’ writes social historian of medicine Roy Porter in the introduction to his panoramic ‘medical history of humanity,’ The Greatest Benefit to Mankind (1998). He thereby echoes, at the end of the 1990s, comments that were made by observers like Lewis Thomas and Andre Cournand twenty years earlier.

In myriad ways [Porter goes on to say], medicine continues to advance, new treatments appear, surgery works marvels, and (partly as a result), people live longer…. Yet few people today feel confident about their personal health or about doctors, health-care delivery and the medical profession in general….

Medicine is … going through … a fundamental crisis, the price of progress and its attendant inflated expectations….[It] has become the prisoner of its success. Having conquered many grave diseases and provided relief from suffering, its mandate has become muddled. What are its aims? Where is it to stop?

‘[M]edicine’s finest hour is the dawn of its dilemmas,’ Roy Porter concludes. ‘For centuries medicine was impotent and thus unproblematic…. Today, with “mission accomplished”, its triumphs are dissolving in disorientation…. It is losing its way, or having to redefine its goals’ … (1998: 3-4, 716-18).

This end-of-the-twentieth-century anxiety, ambivalence, and perplexity about the successes and failures of Western medicine, its progress and impasses, capacities and limits, its sense of direction and of future goals, are subterranean motifs in all the phenomena surrounding medical uncertainty discussed here. This cultural mood is a pervasive, contextual part of the issues associated with the bridging of molecules and medicine, the tenacity of certain chronic diseases, the resurgence of infectious disease, the intellectual and emotional difficulties posed by medical prediction and prognosis, the tensions between individual- and population-oriented medicine, and the iatrogenic effects of the procedures, machines, and pharmacopoeia that are integral to present-day processes of medical diagnosis, therapy, prognosis, and prevention.

I would venture to go several steps further than Roy Porter in interpreting the medico-centric state of ‘anomie’ that he has identified. In the medical literature published during the 1990s and used as research for this chapter, there are consistent indications that what Porter refers to as the ‘disorientation’ of medicine at the turn of the millennium not only involves its clinical accomplishments, limitations, liabilities, and overall sense of direction, but also its fundamental way of thought. Whether they deal with phenomena associated with HIV/AIDS, cancer, or inflammatory bowel disease, for example, infectious or chronic syndromes, processes of diagnosis, prevention, treatment, care, or prognosis, or methods of collecting and analyzing medical data, many recent journal articles express concern about current problems of epistemological uncertainty:

… Big gaps remain in our knowledge of HIV, and it may be that we need a more complex response in terms of therapeutic approaches…. Similarly, prevention has focused largely on fairly simple psychological approaches. … The gaps are even bigger in determining how to prevent a million people from becoming infected with AIDS this year … and at the same time to care for nearly 30 million people with HIV living in developing countries….(Piot 1998: 1844-5)

In a recent commentary on AIDS therapy, the phrase “Failure isn’t what it used to be … but neither is success” was coined (Cohen 1998)…. Failure has generally been defined in virological terms—the inability to achieve complete suppression of viral replication…. However, treatment failure is not only viral resistance. In fact, definition of failure or success of treatment is a far more complex phenomenon. (Perrin and Telenti 1998: 1871)

Renal cell carcinoma continues to fool internists and noninternists alike…. One source of error [is] the clinicians’ overreliance on the use of patterns…. Pattern recognition greatly simplifies problem solving…. Occasionally, however, we rely on pattern recognition to a fault, trying to fit square pegs into round holes….(Saint et al. 1998: 381)

Diseases like inflammatory bowel disease that have systemic manifestations can pose daunting diagnostic challenges. … The focus and training that physicians bring to a clinical case typically create cognitive expectations that determine their attention to and interpretation of events….[T]hese elements can be important to reasoning in the presence of uncertainty while also being a source of error in diagnostic interpretation. (Berkwits and Gluckman 1997: 1683-4)

The National Institutes of Health convened a consensus conference in January 1997 to examine new evidence on the effectiveness of mammographic screening for breast cancer for women ages 40 to 49 years…. Critics of the panel stated resoundingly that it had reached the “wrong” conclusion, understating the effectiveness of mammography, exaggerating the potential harms of false-positive results, and raising unnecessary fears about the safety of mammography. The implication that the panel should not have had these concerns or expressed them perpetuates the notion that there is only one correct way to interpret evidence. Who can say when evidence is “good” enough? (Woolf and Lawrence 1997: 2105-6)

Two articles in this issue reach apparently conflicting conclusions regarding the safety of the short postpartum hospital stays that are now … standard for apparently well mothers and newborns….[S]cience does not and probably can not supply airtight evidence that longer stays are more effective…. In the absence of an adequate base of scientific knowledge about [how] to achieve the best health outcomes, it appears rational and ethical to be guided by a combination of good judgment, caution, and compassion in weighing the best evidence available. (Braveman et al. 1997: 334-46)

It is impossible to say, on the basis of recent evidence alone, whether the results of a large randomized, controlled trial or those of a meta-analysis of many smaller studies are more likely to be close to the truth…. We never know as much as we think we know. (Bailar 1997: 559-60)

Embedded in such journal passages are basic questions of epistemology (Hamlyn 1967), involving the nature of medical knowledge, where and how it is generally found and obtained, the role that observation, reason, and experience play in this process, how much of what medical scientists and physicians think they know is real knowledge (certain enough, or based on sufficiently good grounds for this claim to be made), what the connections between medical knowledge, judgment, and belief are, and ought to be, and how errors of cognition, perception, judgment, and belief can be recognized, analyzed, and reduced, if not eliminated. In addition to these classical issues, questions that are more specific to medicine are raised about the intricate relations that exist between scientific and important, nonscientific aspects of medicine and the implications for the “scientific-ness” of the field. Questions are also raised about the relationship between simplicity and complexity in medical accuracy, understanding, and effectiveness, as well as between the scientific base of medicine, its clinical application to diagnosis, therapy, prognosis, and prevention, and to the formulation and implementation of health policy. These articles also consider the way in which what physician-scientist Ludwik Fleck termed the characteristic ‘thought-style’ of medicine (Lowy, 1990: 215-27) contributes both to the pattern-recognition and clinical problem-solving capacities of physicians, and to the built-in biases that result from their internalized conceptions and preconceptions. The articles are also concerned with problems of achieving consensus among medical professionals when they disagree in clinical and policy contexts, and with how better to join and more fruitfully integrate patient-, population-, and globally oriented medicine and attention to the disparate health, illness, and medical conditions and needs in the ‘two worlds’ of developed and developing countries.

There is an implicit sense in which the evidence-based medicine movement (invoked by a number of the articles I sampled) is as much an indicator of this epistemological uncertainty and searching as a response to it. It implies that a great deal that medicine professes to know is neither strongly supported by reliable and valid scientific evidence, nor clinically efficient and efficacious. Although the ways of determining the effects of medical interventions that the evidence-based medicine approach prescribes (randomized trials, meta-analyses, and systematic reviews) are respected by physicians, they are not viewed as conceptual, methodological, or empirical panaceas for the cognitive challenges, problems, and deficiencies with which modern medicine is presently grappling.

Vocational Uncertainty

The future for US physicians is full of uncertainty—but also full of opportunities. Tomorrow’s doctors should not be unemployed; they should be redefined. (Konner 1998)

Along with the kinds of uncertainty that are associated with the conceptual framework, knowledge base, and technological armamentarium of medicine, physicians are also facing uncertainty in their professional status and roles. The main precipitants of this uncertainty in the United States are the nationwide restructuring of health insurance and reorganization of health-care delivery that are taking place as the country moves rapidly toward a predominantly managed care system. What consequences this will have for the employment of physicians; the fields within medicine that they select and deselect; their conditions of work; their incomes; the scope, continuity, and quality of care they offer to patients, and the relationships they establish with them; the sorts of professional decision-making and autonomy they will and will not be able to exercise; and the meaning, fulfillment, and frustration they will experience in their chosen careers are among the serious vocational questions that doctors, medical students, and 'pre-med' students alike are facing.

A somewhat anomalous situation exists in the United States in this regard. Although young persons interested in becoming physicians are keenly aware of these uncertainties (Pulse 1997, 1998), an unprecedented number of college and university students have been seeking admission to medical schools throughout most of the 1990s. These aspirants include many daughters and sons of physicians who have advised their children not to enter the profession under the present circumstances, and have urged them to consider other professional or business fields. We need more knowledge and understanding of the young men and women opting for medical careers at this time of transition and indeterminacy in the profession. What are their conceptions of being a physician? What motivates them to become doctors? How do they see and expect to handle the changing organizational and economic, social and psychological conditions under which they will practice medicine?

‘Bioethical’ Uncertainty

Finally, medicine is at the center of a larger, more far-reaching form of uncertainty that underlies American bioethics. This area of reflection, inquiry, and action that surfaced at the beginning of the 1970s has grown progressively more prominent ever since (Fox and Swazey 1984; Fox 1989b, 1990, 1994).

‘Bioethics is not just bioethics’; it pertains to more than medicine and to more than ethics. Using biology and medicine as a metaphorical language and a symbolic medium, concentrating on the problematic consequences of particular biomedical advances, and drawing predominantly on the logico-rational principles of analytic philosophy, US bioethics implicitly deals with uncertainty-fraught questions of value, belief, and meaning that are as religious and metaphysical as they are medical and moral.

What is life? What is death? When does a life begin? When does it end? What is a person? What is a child? What is a parent? What is a family? Who are my brothers and my sisters, my neighbors and my strangers? Is it better not to have been born at all than to have been born with a severe genetic defect? How vigorously should we intervene in the human condition to repair and improve ourselves? And when should we cease and desist? This at once elemental and transcendental questioning, coded into the deep structure of American bioethics, is indicative of the magnitude of the foundational change that not only medicine, but also the society and culture of which it is an integral part, are undergoing.

Medical Uncertainty and Change: Overview

The modalities of uncertainty discussed here are closely connected with a variety of changes that are occurring both within and around medicine at this historical juncture. The gamut of these changes is broad; they are scientific and technological, cognitive and ethical, conceptual and empirical, methodological and procedural, and social and cultural in nature, and they have ramifying implications for the way of thought, the value system, and the practice of medicine that affect how it is delivered and experienced by health professionals and patients.

Several general characteristics of these uncertainty-accompanied changes are particularly notable. A number of them—such as mutations and developing drug resistance of certain pathogens—emanate from unintended consequences and iatrogenic side effects of efficacious medical actions. In addition, to an increasing degree, the change-related uncertainties that medicine is currently facing ask physicians to bridge and try to coordinate micro-, macro-, individual, and collective entities that range in size and scope from molecules and genes, organs and organ systems, to embodied patients and large patient populations. This calls for very different angles of vision that not only pose major scientific problems, but also raise important moral issues. For example, within the more corporately organized US system of health care that is unfolding, how can physicians abide by both an ‘individual ethic’ and a ‘distributive ethic’ that will enable them to ‘provide optimal care for each of their patients and … for all patients within a group … at the same time?’ (Kassirer 1998: 197). As this question suggests, in the changing ethical, social, and scientific situations in which they find themselves, physicians are encountering a considerable amount of uncertainty about how they should practice medicine.

As the grounding of medicine shifts in multidimensional ways, long-standing sources and manifestations of uncertainty have been reactivated, accentuated, or modified and new ones have formed. It is with extensive uncertainty about its state of knowledge and accomplishments, its future directions and limitations, and with a mixture of confidence and insecurity, that modern Western medicine is approaching the twenty-first century.