Water

Christopher Hamlin. Cambridge World History of Food. Editors: Kenneth F. Kiple & Kriemhild Conee Ornelas. Volume 1. Cambridge, UK: Cambridge University Press, 2000.

The ingestion of water in some form is widely recognized as essential for human life. But we usually do not consider water a food because it does not contain any of those substances we regard as nutriments. Yet if its status as a foodstuff remains ambiguous, it is far less so than it has been through much of human history. Water (or more properly “waters,” for only in the last two centuries has it really been viewed as a singular substance) has been considered a food, a solvent for food, a pharmaceutical substance, a lethal substance, a characteristic physiological state, and a spiritual or quasi-spiritual entity.

This chapter raises questions about what sort of substance water has been conceived to be and what nutritional role it has been held to have. It also explores what we know of the history of the kinds of waters that were viewed as suitable to drink—with regard to their origins, the means used to determine their potability, and their preparation or purification. Finally, it has a little to say about historical knowledge of drinking-water habits (i.e., how much water did people drink at different times and in different situations?) and about water consumption as a means of disease transmission.

What Water Is

Modern notions of water as a compound chemical substance more or less laden with dissolved or suspended minerals, gases, microorganisms, or organic detritus have been held at best for only the last two centuries. Even earlier ideas of water as one of the four (or five) elements will mislead us, for in many such schemes elements were less fundamental substances than dynamic principles (e.g., in the case of water, the dynamic tendency is to wet things, cool them, and dissolve them) or generic labels for regular combinations of qualities. In one strand of Aristotelianism, for example, water can be understood as matter possessing the qualities of being cold and wet; thus whenever one finds those characteristics, one is coming across something more or less watery. It may even be inappropriate to think of water as wholly a natural substance; as we shall see, springs and wells (if not necessarily the water from them) often held sacred status. The primacy of water in the symbolism of many of the world’s religions as a medium of dissolution and rebirth invites us to recognize water as numinous in a way that most other foodstuffs are not (Eliade 1958; Bachelard 1983).

At the very least, it is clear that through much of Western history “water” referred to a class of substances. “Waters” varied enormously both in terms of origin (rainfall, snowmelt, dew, and pond, spring, and river water were seen to be significantly different) and from place to place, just as climate and other geographical characteristics—vegetation, soil, topography—vary. Whereas for most of us the modern taxonomy of water quality includes only two classes (pure and impure), in the past subtle and complicated characterizations were nearly universal, especially with regard to water from springs, a matter that fascinated many writers. Indeed, the uniqueness of a water is a key attribute of place, and waters are linked to places in much the same way in which we now associate the characteristics of wines with their places of origin. This idea is evident in the title of the famous Hippocratic treatise “On Airs, Waters, and Places”; it is a central theme in the treatments of post-Hippocratic classical authors like Pliny the Elder and Marcus Pollio Vitruvius, and much the same sensibility seems apparent in Celtic, Teutonic, and Chinese perspectives.

Water varied from place to place in many ways, but one can generalize about the kinds of qualities that interested classical authors. Many did mention taste, and they usually held that the less taste the water had, the better. Taste, in turn, was associated with a host of other qualities that linked the immediate sensory experience of drinking a water with the effects of its continued consumption on health and constitution. Other key factors were coldness, lightness, and heaviness.

Coldness did not necessarily refer to temperature, which in any case could not be measured except subjectively and indirectly. In Chinese cosmology coldness and hotness were part of the universal system of polarities, applied to foodstuffs as well as to many other substances and activities. Water was by definition cool; steeping or cooking things in it (even boiling them) was accordingly a cooling process (Simoons 1991: 24). For the mechanical philosophers of early eighteenth-century Europe, “cold” and “hot” had become terms of chemical composition: Sulfurous water was “hot,” that containing niter or alum was “cold” (Chambers 1741).

Lightness appears usually as a subjective quality akin to ease of digestibility. Pliny, in an unusual outburst of skepticism, observed that unfortunately “this lightness of water can be discovered with difficulty except by sensation, as the kinds of water differ practically nothing in weight” (Pliny 1963: book 31, chap. 21). But occasionally lightness was a parameter that could be objectively measured: Equal areas of tissue were wetted in different waters and their weights compared to determine which water was the lighter, and hence the better to drink (Lorcin 1985: 262).

The Classes of Waters

In general, desirable or undesirable properties were associated with the source of the water one obtained, and there was general agreement on the ranking of these sources. Rainwater was usually held to be the best water even though it was also regarded as the quickest to become putrid (though this in itself was not problematic, since the water might still be used after it had finished putrefying). Though some wealthy Romans made much of its virtues, water from melted snow or ice was generally viewed as harmful, possibly because it was associated with goiter, but more likely because, being deaerated, it tasted flat and led to a heaviness in the stomach (Burton 1868: 241; Vitruvius 1960: 239; Pliny 1963: book 31, chap. 21; Diderot and D’Alembert 1969; Soyer 1977: 296; Simoons 1991: 491-2).

Most deemed water from mountain streams (particularly on north-facing slopes) better than water from wells or streams in hot plains because it was held that the heat of the sun was likely to drive off the lighter or best parts of the waters (though Burton, summarizing classical authors, insisted that waters from tropical places were “frequently purer than ours in the north, more subtile [sic], thin, and lighter” [Burton 1868: 241]). Running waters were to be preferred to stagnant waters; waters stored in cisterns were undesirable because they accumulated “slime or disgusting insects” (Pliny 1963: book 31, chap. 21). But none of these generalizations obviated the need to characterize each individual source of water because, as Pliny noted, “the taste of rivers is usually variable, owing to the great difference in river beds. For waters vary with the land over which they flow and with the juices of the plants they wash” (1963: book 31, chap. 32).

Water and Health

Such descriptions clearly indicate that classical authors were much concerned with the effects of waters on health. One can understand their views in terms of a four-part scheme for classifying waters. Some waters are seen as positively beneficial to health, medicaments to be taken to cure various maladies. Others are viewed as good, “sweet” (the term has long been used to describe waters) waters, of acceptable taste and suitable for dietetic use. Still others are regarded as having undesirable qualities as a beverage. Accordingly, they are to be used sparingly, only after treatment, or with compensatory foodstuffs. Finally, authors recognize some waters as pathogenic in some sense, even lethal.

Beyond taste and health effects, waters (particularly spring waters) were characterized in terms of a host of bizarre properties they were believed to have. Pliny tells us (as does Vitruvius) of springs that turn black-wooled sheep into white-wooled sheep, that cause women to conceive, that endow those who drink of them with beautiful singing voices, that petrify whatever is dipped into them, that inebriate those who drink from them or, alternatively, make those who drink of them abstemious (Pliny 1963: book 31, chaps. 3-17). To this one might add the Levitical “bitter waters of jealousy,” which possessed the property of identifying adulteresses.

Sometimes authors were explicit in attributing the properties of springs to a concept of chemical admixture: The water had the properties it had owing to what had happened to it, such as the kinds of mineral substances it had encountered underground. Yet the earth was seen as in some sense alive, and we would accordingly be unwarranted in assuming that a modern concept of solution is implied (Eliade 1978). Waters were earth’s vital fluids. The Roman architect Vitruvius noted, for example:

[T]he human body, which consists in part of the earthy, contains many kinds of juices, such as blood, milk, sweat, urine, and tears. If all this variation of flavors is found in a small portion of the earthy, we should not be surprised to find in the great earth itself countless varieties of juices, through the veins of which the water runs, and becomes saturated with them before reaching the outlets of springs. In this way, different varieties of springs or peculiar kinds are produced, on account of diversity of situation, characteristics of country, and dissimilar properties of soils. (Vitruvius 1960: 241-2)

It is probably right to see this linkage of macrocosm and microcosm as something more than analogical; such linkages would remain a part of popular understanding even after the rise of a mechanistic cosmology in the seventeenth century.

The properties of waters might also be understood as manifesting the spirits or resident divinities of springs, because many springs and rivers were thought of as home to (or the embodiment of) a divinity. Such views were held in many premodern cultures, although perhaps best known are the 30,000 nymphs associated by Hesiod with springs in Greece (their brothers were the rivers) (Hesiod 1953: 337-82; Moser 1990; Tölle-Kastenbein 1990). In many, particularly rural, places in France, Britain, Germany, and elsewhere, worship of such divinities persisted well into Christian, and even into modern, times. Periodic efforts of the medieval Roman Catholic church to halt such worship usually failed and, in fact, led to wells and springs becoming venerated as sites of miracles linked with particular saints (Hope 1893; Hoffmann-Krayer and Bächtold-Stäubli 1927; Vaillat 1932; Guitard 1951; Bord and Bord 1986; Guillerme 1988).

Where water cults were restricted to specific springs, it is difficult to answer questions of why springs were worshiped and what the rituals of worship signified. R. A. Wild has argued that the Nile-worshiping cult of Isis and Sarapis (important and widely distributed during the early Roman empire) simply understood Nile water as the most perfect water; it was associated with fecundity, for humans as well as for crops, and was known as a fattening water (Wild 1981). One may speculate that in a similar sense the worship of a local water source symbolized the dependence of a community on that water. Mineral springs may also have come to be worshiped for their health-giving properties; equally, a spring’s reputation as sacred was an asset to a local economy and made clear to local residents that their locality had a privileged cosmic status (Hamlin 1990a; Harley 1990).

Water as Drink

Water, because it possessed such a broad range of significant and powerful properties, was thus to be used with care in a diet. In the tradition of the Hippocratic writers, authors of medical treatises on regimen had much to say about the conditions of waters and about the circumstances in which water was to be drunk. We should first note that most authors were unenthusiastic about the drinking of water. However much it might seem the natural drink of the animal kingdom, it was also viewed as having a remarkable power to disturb the stability of the human constitution. Summing up the views of classical antiquity on water as beverage, the nineteenth-century chef Alexis Soyer wrote, “water is certainly the most ancient beverage, the most simple, natural, and the most common, which nature has given to mankind. But it is necessary to be really thirsty in order to drink water, and as soon as this craving is satisfied it becomes insipid and nauseous” (Soyer 1977: 299).

The principal later medieval medical text, Avicenna’s Canon of Medicine, for example, advised one not to drink water with a meal, but only at the meal’s end, and then in small quantities. Water taken later, during digestion, would interrupt that process. One was also not to drink water while fasting, or after bathing, sex, or exercise. Nor should one give in to night thirst. To do so would disrupt digestion and would not quench the thirst for long. For Avicenna, a water’s temperature was also a crucial determinant of its physiological effect. Too much cold water was harmful, whereas “tepid water evokes nausea.” Warm water acted as a purgative; yet too much of it weakened the stomach (Gruner 1930: 228, 401, 407-8; Lorcin 1985). The effects of habitually imbibing certain waters could be cumulative. The Hippocratic text “On Airs, Waters, and Places,” for example, held that cold waters had a detrimental effect on women’s constitutions: Menstruation was impaired and made painful; breast feeding was inhibited (Hippocrates 1939: 22).

Such a sensitivity to the careful use of water within the diet is also evident in premodern Chinese writings on diet. There, too, one finds recognition of an extraordinary range of properties possessed by different waters (drips from stalactites were seen to enhance longevity) and, accordingly, great interest in classifying sources of water. In China, the preference was for warm (or boiled) water, possibly, though not necessarily, in which vegetable substances had been steeped (i.e., tea). Cold water was deemed to damage the intestines (Mote 1977: 229-30; Simoons 1991: 24, 441, 463).

Although modes of pathological explanation changed over the centuries, concern with the role of waters in regimens remained important up to the mid-nineteenth century and the onset of a medicine more oriented to specific diseases. The Enlightenment authors of the Encyclopédie proposed to determine, through a sort of clinical trial, the full physiological effects of water, but they noted that such a project was impossible because one could not do without water (in some form): One could detect only the differential effects of water and other drinks. (They were particularly interested in the claim that water drinking enhanced male sexual performance [“très vigoureux”]; they thought it probable that such tales reflected only the incapacitative effects of alcoholic drink, not the positive effects of water [Diderot and D’Alembert 1969: entry on “eau commune”]). Late-eighteenth-century British medical men were still stressing the emetic and diluent properties of water. Much food with much water could provoke corpulency; too much water with too little food could promote a diet deficient in nutrients, since food would move too quickly through the digestive tract. For some foods, water was not a sufficient solvent; successful digestion of meats, for example, required fermented beverages (though the alcohol in such beverages was seen as a dangerous side effect). Care was to be taken in quenching thirst, which might not simply signal too little internal moisture but instead indicate too much food or the wrong kinds of food (Encyclopedia Britannica 1797: entry on “drink”; Rees Cyclopedia 1819: entry on “diet”).

In antiquity, as in the eighteenth century, the wise physician recognized that general rules like those just mentioned might need modification: One modified them according to a sophisticated explanation of the particular nutritive and other functions of water within the body (further adapted in accord with the constitution and condition of the individual who was to drink it), and according to a knowledge of the particular water and the modes of preparation it had undergone. Within the body, water was understood to have a number of effects, both gastronomic and pharmaceutical, but as M. T. Lorcin has noted, in such regimen literature this distinction is inappropriate; health consisted of the proper cultivation of the constitution, and all ingesta contributed to it (Lorcin 1985: 268).

The chief medical functions of water were as a diluent of food and as a coolant. It acted also as a solvent of food, as an initiator of digestive and other transformations, and as a tonic (a substance that strengthened or gave tone to one’s stomach and/or other fibers). It was seen also as a mild purgative (Chambers 1741).

The Treatment of Water

To augment some of the functions just mentioned and retard others, waters might be treated or purified. Some of the harmful qualities of water were understood to be susceptible to neutralization or purification. Avicenna recommended boiling as a good means of purification; he held that the mineral residue left behind contained the congealed “coldness” that was the impurity (Gruner 1930: 223). Water might also be made to pass from a container, by capillary action, along a wick of fleece; the drops falling from the end would be assumed to have been purified. Harmful qualities could also be removed by addition of vinegar or wine or by soaking in the water some substance, such as pearl barley, onions, or wax, which would absorb or counteract injurious matters.

One might also shake a suspect water with sand (a technique remarkably similar to that used by Eduard Buchner to obtain bacteria-free water for the experiments that led to the concept of the enzyme). Finally, classical and medieval authors recognized the value of filtration, whether a natural filtration through soil, or an artificial filtration through wool, bread crumbs, or cloth (“in order to make sure there are no leeches or other creatures in it”) (Gruner 1930: 222, 454-5; Baker 1948: 1-8; Lorcin 1985: 263). The great potency of water for good and ill, along with its considerable variability from place to place, made it crucial for travelers to be especially careful of the waters they drank: “The traveller is more exposed to illness from the diversity of drinking water than he is from the diversity of foods. … it is necessary to be particular about correcting the bad qualities of the drinking water, and expend every effort in purifying it” (Gruner 1930: 454).

Thus, long before there was a clear concept of waterborne disease, there was a great deal of appreciation, shared by cultures in many parts of the world, of the various characters of waters and of their manifold effects on health. How assiduously people followed hygienic advice about which waters to drink and how to prepare them, and how far following such advice would have been adequate to prevent waterborne diseases, is not clear, but it is clear that in cases where public waterworks existed, such as the aqueducts that supplied Rome, those in charge of their administration were supposed to be concerned, in part, with quality. It seems evident that humans have been subject to waterborne diseases throughout recorded history, and, thus, it is remarkable that there is little mention of epidemics (or even cases) of waterborne diseases prior to the nineteenth century (but see Ackerknecht 1965: 24, 41-2, 47, 134-6; Janssens 1983; Jannetta 1987: 148-9; Grmek 1989: 15-6, 346-50).

One might attribute this lack of waterborne epidemics to relatively low population density or a magnitude of travel that was usually too low to sustain outbreaks of diseases caused by relatively fragile bacteria. In this connection, it is notable that many of the records do deal with diseases that we might now attribute to the hardier parasites. Yet it is surely also the case that a population, aware of the dangers of water and possessing an impressive armamentarium of techniques for improving that water, did much to prevent waterborne disease outbreaks. Even if one does not see the Chinese preference for warmed (ideally boiled) water as representing hygienic consciousness, it surely was beneficial in relatively heavily populated areas where paddy cultivation, with night soil as fertilizer, was customarily practiced (but see Needham 1970). In other cases, as in the addition of wine to water in early modern France, the action was explicitly a purification, with greater or lesser amounts of wine added according to the estimated degree of impurity of the water (Roche 1984).

The Reclassification of Water

Most clearly for Pliny, but also for many classical, medieval, and early modern authors, “waters” were “marvels,” each unique, whether owing to the mix of natural agency to which it had been exposed or to its intrinsically marvelous character (Pliny 1963: book 31, chap. 18). By the seventeenth century, European writers on waters had come to emphasize a binary classification: Water was either common (more or less potable) water or mineral water. “Mineral waters” was the collective term for the remarkable springs Pliny had described. Less and less did they represent the mark of the “hand of providence” on a particular locale; increasingly their properties were understood in terms of the salts or gases dissolved in them (Brockliss 1990; Hamlin 1990a, 1990b; Harley 1990; Palmer 1990).

Far from being unique, any mineral spring could be understood as belonging to one of a few general types. These included chalybeate, or iron-bearing waters, drunk to treat anemia; sulfurous waters, good for skin problems; acidulous waters, full of carbonic and other acids that gave the stomach a lightness; and saline waters, which served usually as purgatives. A good many springs, with a wide variety of constituents (and a few with no unusual chemical constituents at all), were also held to be cures for infertility and other diseases of women (Cayleff 1987).

Springs varied in temperature, which might or might not be significant. That waters in some springs were to be bathed in and that waters from others were to be taken internally was a less formidable distinction than it seems to us now. Bathing was not simply a treatment of the skin; the water (or its essential qualities) was understood to be able to enter the body through the skin or to be able to cause significant internal effect in some other way. Some therapeutic regimens, such as hydropathy, popular among educated Americans and Europeans in the mid-nineteenth century, integrated a wide variety of external and internal applications of water to produce improvements in health, which clients (like Charles Darwin) regarded as dramatic indeed (Donegan 1986; Cayleff 1987; Vigarello 1988; Brockliss 1990; Browne 1990).

The characterization of springs in terms of chemical constituents was not so much a consequence of the maturation of chemical science as one of the sources of that maturation. Such characterizations were necessary for the proprietors of mineral springs to compete in a medical marketplace. People from the rising middle classes, who increasingly patronized mineral waters, were no more willing to trust in miracles in taking the waters than in any other aspect of business. Every spring had its testimonials, miracles, and claims of excellent accommodations and exalted society for its visitors. Thus, the chemical composition of a spring seemed the only reliable basis for deciding whether to patronize a lesser-known resort nearby or to undertake a lengthy journey (Guitard 1951; Hamlin 1990b; but see Brockliss 1990). Chemistry also provided a means of bringing the spa to the patient through medicinal waters, which could be bottled for widespread distribution (Kirkby 1902; Coley 1984). Following the discovery of the means to manufacture carbonated water by Joseph Priestley and Torbern Bergman, such enterprise gave rise to the soft-drink industry (Boklund 1956).

One effect, of course, of making water part of the domain of chemistry was to reduce “waters” to a mixture of a simple substrate, “water” (whose composition as a compound of hydrogen and oxygen had been recognized by the end of the eighteenth century) with various amounts of other chemical substances. This conceptual transformation was not achieved without resistance, particularly from physicians (often with practices associated with particular springs), who saw their art threatened by the reductionism of chemistry and continued to maintain that each spring had a peculiar “life” that no chemist could imitate and whose benefits could only be obtained if its waters were drunk on-site (Hamlin 1990a, 1990b).

If, in the light of the new chemistry, mineral waters were no more than mixtures of simple substances, common water was even simpler and less interesting. To medical men and to chemists, this eau commune was to be evaluated as belonging to one of two mutually exclusive categories: It was either “pure” or “impure.” The terms did not refer to the ideal of chemical purity, which was recognized as practically unattainable; they were simply used to indicate whether the water was suitable or unsuitable for general domestic use (including direct consumption).

At the beginning of the nineteenth century, the chemists’ chief conception of impurity in water was hardness, the presence of dissolved mineral earths. Initially, this new focus supplemented, rather than replaced, the sophisticated classical taxonomy of waters (no one championed soft water that was obviously foul), and the quantification of hardness (expressed as degrees of hardness) was simply a valuable service that chemists could (easily) provide. Hardness was an industrially significant distinction; for steam engine boilers and for brewing, tanning, and many textile processes, the hardness of water was the key criterion. It seemed a medically significant criterion, too: Just like the steam boiler, the drinker of hard water could clog up with bladder stones or gout, conditions that attained remarkable prominence in medical practice, at least in eighteenth-century England (British Cyclopedia 1835: s.v. “water”; Hamlin 1990b).

Waters in the Industrial World

The nineteenth century saw great changes in views of drinkable water and equally great changes in predominant notions of who was competent to judge water quality. Despite the chemists’ infatuation with hardness, traditional senses-based approaches to judging water still prevailed in Europe at the beginning of the century. In choosing waters, ordinary people continued to be guided by tradition, taste, and immediate physiological effects. In many cases the standards they used were those found in the classical literature: Stagnant, “foul” water was to be avoided; clear, light, “bright” water was to be desired.

After midcentury, however, expert definitions prevailed. Often experts would insist that water that looked, smelled, and tasted good, and that had perhaps been long used by a local population, was actually bad. Indeed, in some situations, experts’ standards were virtually the opposite of lay standards. Light, sharp water had those qualities because it contained dissolved nitrates, which, in turn, were decomposition products from leaking cesspools. The best-tasting well water might, thus, be the most dangerously contaminated (Hardy 1991: 80-1). Less often, experts would insist that waters that laypeople found objectionable (perhaps because they had a strong taste of peat or iron) were wholly harmless.

No longer were chemists restricting themselves to determinations of hardness. Even though the techniques at their disposal did not change significantly (hardness and other forms of mineral content remained the only characteristics they could determine with reasonable effectiveness), chemists increasingly claimed that they had defined, and had the means to quantify, what was objectionable in a water beyond its dissolved minerals. They could, they insisted, measure the qualities that had been the basis of the classical water taxonomy better than the senses could detect them.

The key quality that interested them, and which at first supplemented and then displaced concern with hardness or softness, was putridity. “Putridity,” while a vague concept, had been the centerpiece of an approach to evaluating water based on one’s subjective repugnance to it—owing to its odor, appearance, taste, and associations (the German word Fäulnis better embodies such a combination of the visceral and technical). Chemists replaced the senses-based definition of putridity with more arcane indicators of putridity or potential putridity. For much of the nineteenth century, however, they were not in agreement about what precisely these arcane indicators were or the best ways to measure them. Some felt it sufficient to determine the quantity of “organic matter,” even though they admitted that this parameter was in some sense an artifact of analytical instrumentation.

Such an approach was contrary to the belief that it was some unknown qualitative factor of organic matter (and not such matter itself) that was associated with putridity and disease. In any case, “foulness” or “putridity” ceased to be a physical state of water and instead became an expert’s concept indicating an amount or a presumed condition of “organic” matter. This determination, in turn, was usually believed to correspond to a presumed fecal contamination. Henceforth, the repugnance that “putrid” or “foul” conjured up was to operate through the imagination, rather than directly through the senses.

The champion of this novel perspective was the English chemist Edward Frankland, the leading international authority on water analysis in the 1870s and 1880s. Frankland took the view that it was foolish to try to detect quantities of some unknown disease-generating agency. It was much better simply to try to discover whether water had been subject to contamination in its course through or over the ground. The possibility of dangerous contamination was sufficient reason for public authorities to avoid such supplies of water, and the idea of contamination was to be sufficient to compel ordinary people to avoid its use (Hamlin 1990b).

The shift in approaches to the assessment of water that Frankland exemplifies is a far-reaching one. Associating what might be wrong in water with the presumed commission, at some time past, of an act of contamination made the religious term “pollution,” in its traditional sense of desecration, the primary construct for a discussion of water quality, and it came to replace “foul” and “putrid” (Douglas 1966). A presumed act done to the water thus replaced a manifest condition of the water. Although laypersons had once been able to know whether water was “foul,” it was up to experts to say whether water had been “polluted.” Consequently, water became (and remains) one of very few “foods” whose most important qualities were defined wholly by experts, and whose consumption, accordingly, marked complete trust of the individual in some outside institution: a government, a bottled-water company, or the maker of a filter.

There were, of course, good reasons for such a transformation, and underwriting it was the fear of waterborne (or water-generated) disease. That dense urban environments were dangerous to health was a long-standing medical truth, and water was implicated in this danger: Standing surface water, particularly in marshes, was believed to interact in some way with town filth to generate both fever (particularly malaria) and chronic debilitation. Although consumption of water was not the focus of concern, there was medical consensus that drinking such stuff could not be beneficial to health. Yet at the beginning of the nineteenth century, the doctors were unable to say much about how and in what ways such water was bad, or how it became bad, or how serious a problem bad water might be. Some held that water became harmful by absorbing harmful elements from a filthy urban atmosphere and was simply another means of communicating that state of air. Keeping the water covered would keep it pure, they believed. (Others thought that the putridity was inherent in the water itself and infected the atmosphere.)

Whatever the mechanism, the increasing frequency of epidemic disease was evident in the newly industrialized cities of the nineteenth century. They were swept repeatedly by waves of Asiatic cholera, as well as by typhoid fever (clinically distinguished from other forms of continued fever only in the 1840s) and other enteric infections (less clearly identified but no less deadly) (Ackerknecht 1965; Luckin 1984, 1986).

Not until after 1850 were these diseases commonly associated with fecally contaminated water, and even that recognition did not provide unambiguous guidelines for determining water quality because such contaminated water sources only rarely caused severe outbreaks of disease. One might assume that they did so only when contaminated with some specific substance, but as that substance was unknown, it could not be measured, nor was there a clear correlation between the quantity of contamination and the amount of disease. Water that was evidently transmitting cholera was, according to the most sophisticated chemical measures available, substantially purer than water that evidently caused no harm. Some, like Frankland, held that any water that had ever been subject to such contamination should be avoided, but in heavily populated areas, where rivers were essential sources of water, this recommendation seemed impracticable.

Although these contradictions demonstrated the inadequacy of the lay determinations of water quality, the techniques of the experts were little better prior to the twentieth century. Nonetheless, in the nineteenth century, judging waters became a consummately expert task, so much so that the European colony in Shanghai felt it necessary to send water samples all the way to London to be analyzed by Frankland (MacPherson 1987: 85). And even after the microbes responsible for cholera and typhoid were identified in the 1880s and means were developed for their detection, many experts remained skeptical, unwilling to accept negative findings of their analyses (Hamlin 1990b). But by the early twentieth century, the institution of chlorination, more carefully monitored filtering, and a better understanding of the microbe-removing actions of filters finally led to a widely shared confidence in the safety of urban water supplies (Baker 1948). Such confidence, however, appears to have peaked, and is now in decline.

Town Supplies—Water for All

The fact that cities and towns throughout the world recognize the provision of piped-in, potable water to dwelling houses as an essential component in achieving an acceptable standard of living is remarkable indeed. It involves, in fact, two kinds of public decisions: first, a recognition of the need for a supply of water to be readily available to all settled areas, and second, a recognition of the need for a supply of water to be piped into each dwelling unit.

We usually regard both these features (household water supplies from a public waterworks) as exemplifying the organizational genius of Imperial Rome and lament that it was only in the nineteenth century that authorities, guided by new knowledge of disease transmission and new standards of public decency, again acknowledged water supply as a public duty. Yet a “hydraulic consciousness” was well developed in many medieval and early modern European towns (as well as existing far beyond the Roman empire in the ancient world) (Burton 1868: 241). This consciousness manifested itself in the building of public fountains and pumps, the diversion of brooks for water supply, and even the use of public cisterns and filters, as in Venice. All of these means were used to supply water for industrial purposes, for town cleansing, and for the fighting of fires, as well as for domestic use, but it would appear that the piping of water into individual homes was not felt to be important (Baker 1948: 11-17; Guillerme 1988; Vogel 1988; Dienes and Prutsch 1990; Grewe 1991). High-volume domestic uses of water did not exist; water closets would not become popular until the nineteenth century; bathing was, at least in early modern France, seen to be dangerous to health; and clothing (other than linen) was rarely washed (Vigarello 1988).

By no means was the provision of good drinking water atop this list, but that it was on the list at all is remarkable. How much water people drank, when and where they got it, and how public authorities assessed the need for drinkable water and understood their role in supplying it are all questions about which far too little is known. Summarizing pre-nineteenth-century sources, M. N. Baker presented evidence to suggest that urban dwellers did not expect to find raw water drinkable and that knowledge of effective means for treating waters was widespread.

These means ranged from simply allowing sediment in water to settle and then decanting the water to the addition of purifiers (vinegar or wine) or coagulants (alum), or to drinking water only in boiled forms, like tea (Baker 1948: 24-5). The excessive consumption of alcohol was also seen by nineteenth-century temperance advocates as a public response to the unavailability, particularly in poor neighborhoods, of drinkable water (Chadwick 1965: 135-50). It was probably a prudent response: Beer, in particular, was cheap, usually made with a higher-quality water than that readily available, and often more accessible to the poor than water. In Britain it was drunk in hospitals and schools, not just in taverns (Harrison 1971: 37-8, 298-9).

Temperance advocates were often among the champions of public water supplies. But other kinds of reformers became involved, too, sometimes for curious reasons. Those concerned with the morals of the poor worried that a central pump or well was often a locus for the spread of immorality. Children, waiting to fill water containers (sometimes for several hours, if the sources are to be believed), were exposed to bad language, immoral activity, and dangerous ideas. An in-house supply of water could prevent all that. A public drinking fountain movement, begun in Britain in the late 1850s (initially supported by brewers and, surprisingly, not by temperance advocates), received critical support from the Royal Society for the Prevention of Cruelty to Animals, which was concerned about thirsty animals ridden or driven into towns that lacked facilities for watering animals (Davies 1989: 19).

In the century from 1840 to 1940, in almost all of the industrialized world, a public responsibility for providing town dwellers with in-home water was recognized. The timing and circumstances of that recognition varied from society to society (and, significantly, sometimes from town to town), with “public health” considerations usually providing the warrant for that recognition. Adequate sanitary provisions came to include the provision of a water closet in some form and a continuous supply of water that could be drunk without treatment. Whatever its merit on epidemiological grounds, this notion of sanitary adequacy represented the successful promulgation of an ideology of cleanliness and decency that was quite new, and in this transition the status of water changed. No longer was it an aliment whose quality one judged independently for oneself, nor was it something one had to hunt for and sometimes secure only after much labor (Chadwick 1965: 141-2). Instead it was (or was supposed to be) truly a “necessity” of life, something easily and immediately available, nearly as available (and often almost as cheap) as breathable air.

Although the image of water as a public good essential for meeting universal standards of health and decency usually supplied the rationale for undertaking water-supply projects, the ulterior motives of private interests were often more important in actually getting waterworks built. Perhaps the most significant of these private interests were industrial users. Many industries required large quantities of relatively high-quality water, and the capital costs of obtaining such supplies were prohibitive for individual firms. Consequently, they sought to obtain those supplies (sometimes at subsidized prices) through the sanitary betterment of society. In port cities with much warehouse space, the threat of fire was another underlying incentive for a public water supply.

Some towns took early action to secure control of important watersheds, either with the expectation of profitably selling water to their neighbors or of acquiring commercial advantage. Investors found waterworks projects attractive for a number of reasons, among them a steady dividend and the possibility of selling land or shares at inflated prices. New York’s first waterworks project attracted speculators because it functioned as a nonchartered, and hence unofficial, bank. Some of the capital raised was used to build a waterworks; the rest went into the general capital market (Blake 1956). It need hardly be said that contractors, plumbers, and lawyers were delighted to support waterworks projects. Water is not usually viewed as an article of commerce in the way that most foods are, yet once it had been defined as a public necessity, there was plenty of money to be made from it (Blake 1956; Hassan 1985; Brown 1988; Goubert 1989).

Although this transformation in the availability of water made water drinking much more convenient, the resultant technologies were by no means regarded as an unmitigated benefit. Networks of water mains (and sewer lines) linked people physically across classes and neighborhoods in ways that they had resisted being linked and sometimes in ways that proved hazardous. A common complaint about sewer systems was that they spread disease rather than prevented it because sewer gas frequently rose through poorly trapped drains into houses. It was believed that one was exposed to any infection that the occupants of any other dwelling on the sewer line permitted to go down the drain. More serious, from the perspective of modern epidemiology, was the potential of water mains to distribute infection—precisely what took place in the 1892 Hamburg cholera epidemic (Luckin 1984, 1986; Evans 1987).

Yet for most of the twentieth century, events like the 1892 outbreak of cholera in Hamburg have been rare in the industrialized world. When properly maintained and supervised, the water networks work well. By the end of the nineteenth century, water engineers, finally possessing the torch of bacteriological analysis to illuminate their work, made the filtering of water a dependable operation, even when the water was heavily contaminated. In the first two decades of the twentieth century, they acquired an even more powerful technique in chlorination. Initially used only when source waters were especially bad or in other unusual circumstances, chlorination quickly became a standard form of water treatment. Even if it was almost always unnecessary, and merely supplementary to other modes of purification, chlorination provided a measure of confidence. That it interfered with (ruined, many might say) the taste of water was no longer of much importance (Baker 1948: 321-56; Hamlin 1990b; O’Toole 1990). Thus, the concept of water as a substance that was necessary to ingest occasionally (even if it was potentially a mode of disease transmission) had, in much of the modern world, very nearly displaced the older concept of “waters” as unique substances, varying from place to place, some of them downright harmful, others with nearly miraculous healthful qualities.

Water in the Present

In recent years, there have been signs that a further transformation of the status and concept of water is under way. In the United States, the authorities responsible for supplying drinkable water are no longer as trusted as they once were (in many parts of the world, of course, such authorities have never known that degree of trust). In some cases that loss of trust reflects a real inability to maintain standards of water quality. But it also reflects public concern about new kinds of contaminants, such as toxic organic chemicals, viruses, and Giardia (McCleary 1990; Hurst 1991). In some cases the effects of these contaminants may become manifest only after many years and only through use of the most sophisticated epidemiological techniques (Hand 1988). Nor are customary methods of water analysis or approaches to water purification yet well adapted to such contaminants.

The response of the public has been to revert to technologies, such as the home water filter and other purification devices, that had been popular in the nineteenth century before water authorities were trusted. For some, drinking water has again become a commodity that we think we must go out of our way to secure, something that we haul home from the supermarket in heavy, fat bottles. Yet these responses are not adequate to the problem of trustworthiness. The capabilities of domestic water-purification devices vary enormously, as does the quality of the product sold by the bottled-water industry and the degree of inspection it receives (“Fit to Drink?” 1990). Indeed, these responses say less about our need for water we can trust than they do about the institutions we trust.

The rise of the elite bottled mineral waters industry is a reversion too. Pliny tells us that the kings of Persia carried with them bottled water taken from the River Choaspes (Burton 1868: 242); Herodotus and Plutarch referred to an export trade in bottled Nile water—some of it used by devotees of the cult of Isis and Osiris (Wild 1981: 91-4). Such trade was still widespread in the seventeenth and eighteenth centuries. Then, as now, quality control was a problem, and customers complained about the excessive price (Kirkby 1902; Boklund 1956; Coley 1984).

The revival of this industry makes it easier for us to appreciate the fine distinctions among waters made by Pliny, Vitruvius, and the medieval and early modern therapists of the regimen. Modern elites have agreed with their predecessors that the taste (can one say bouquet?) of a water really is important, and that through the drinking of fine waters one can cultivate one’s health in ways far more delicate than simply keeping one’s insides moist and avoiding cholera.