Energy-Water Nexus: Head-On Collision or Near Miss?

Kristen Averyt. American Scientist. Volume 104, Issue 3. May/Jun 2016.

One year ago, U.S. Secretary of Energy Ernest Moniz warned that the ongoing drought in California could bring brownouts, and that climate change could create more challenges for power plants. Moniz linked this risk to hydropower. But the reliability of energy production and its connections with drought and climate are far more complex than his remarks suggest.

Since the onset of the California drought in 2011, the share of the state’s electricity generated by hydropower has declined from 23 percent to 9 percent. To make up the difference, wind power doubled its contribution to 8 percent by 2014 and utility-scale solar power increased to 5 percent, but electricity production from natural gas also increased. The result was an 8 percent rise in the state’s carbon emissions from 2011 to 2014: Burning natural gas emits carbon dioxide, and the fuel itself, which is mostly methane, is a potent greenhouse gas when it escapes unburned.

Agriculture complicates predictions about energy production and water use. Through 2015, the industry had suffered losses of over $2.7 billion in revenue since the onset of the drought. Of that amount, $590 million can be attributed to the cost of the energy needed to pump groundwater as surface water availability has declined. Those costs have been passed on to consumers in food prices across the United States.

For parched Californians who need drinking water, a desalination plant that turns seawater into freshwater just began delivering up to 189,000 cubic meters of water per day to the San Diego region. Although it is the most efficient desalination plant on the planet, it will still use some 300 million kilowatt-hours of energy each year and will increase the amount of carbon emissions attributable to production of the state’s water supply.

Energy production requires water, and water treatment and distribution require energy. Both energy and water demands are stressed by climate change and population growth, but efficiency in one sector does not necessarily translate to efficiency in the other. As the problems with rising temperatures, increasing droughts, growing energy demands, and escalating water needs collide, it becomes clear that solutions to each problem must consider cascading effects on the others.

By 2050, the world will be a fundamentally different place than it is today: The population on our planet could exceed 9.7 billion people, and global temperatures are expected to be about 1 degree Celsius hotter than today. Those changes, in turn, will lead to many others, because the water cycle will be different and because more people could mean more energy use.

The way that water is cycled among atmosphere, land, and water bodies will change, because as temperatures increase, the atmosphere holds more water, causing a shift in both the Earth’s energy balance and the relative distribution of water among the components of its cycle. This shift will drive expansion of the latitudinal boundaries of the planet’s deserts, change precipitation patterns, and decrease water availability across much of the planet.

More people could mean increased demand for energy. Indeed, per capita energy use varies from 0.8 megawatt-hours in India to 3.8 megawatt-hours in China to as much as 5.4 megawatt-hours in the United States. But a consistent trend is that access to electricity is key to combating poverty and malnutrition. With more intense and more frequent heat waves, we will need even more power to manage public health and safety during the summer months.

Decision makers operating at every scale of governance are working toward creating water and energy systems that are resilient to a hot and crowded future. But just as scientists tend to consider climate impacts as isolated sectors, so do those managing resources and assessing vulnerabilities. When each is considered through a single lens, the full range of risks and prospects may not be apparent, leading to surprise impacts and missed opportunities.

The so-called energy-water nexus illustrates how one sector, pursuing in isolation what it deems an optimal strategy for the future, may compromise the efforts of another. The term energy-water nexus was coined in the 1990s by a small group of scientists working in national laboratories tasked with assessing what the Department of Energy identified as an emerging field of risk: interdependencies between water and power.

Water for Energy

Water is required at each step in the process of energy production. From resource extraction through refining to the transportation of fuels and the generation of electricity, it is really water that powers the planet. By far the most water is used during thermal generation, when heat is converted to electric power: The water required to run thermoelectric power plants accounts for the largest part of energy’s water footprint. A key challenge for the future, then, is to design an electricity system that will meet the power demands of a growing population during heat waves and droughts, when energy demand for air conditioning is high and water supply is low.

Globally, 15 percent of all the water used supports electricity generation. In the United States, because we use more electricity per capita than most countries (5.4 megawatt-hours), 45 percent of all the water withdrawn in a given year goes to energy production: Roughly 161 billion gallons per day are used to run power plants, more than the 117 billion gallons used daily to grow food and to feed livestock.

Power plants use all this water because most of them make electricity from heat and therefore need water for cooling. These thermoelectric power plants turn thermal energy into work: They burn a fuel, such as coal or natural gas, to heat a reservoir of water. As the water boils, the steam produced rotates a turbine, which drives a generator to produce electricity. Next, the steam is condensed so that it can be reheated to generate even more steam, and the cycle continues. The most efficient way to condense the steam is to pass cold water through the system.

That demand for cooling water accounts for over 95 percent of all the water used to produce energy. Water is optimal for cooling because of its high heat capacity: Hydrogen bonds allow water molecules to hold a relatively large amount of energy, making the introduction of lots of cold water into the system the most efficient way to move heat out.
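
To get a sense of the quantities involved, here is a minimal back-of-the-envelope sketch in Python. The 10-degree-Celsius warming of the cooling water matches the average effluent temperature rise cited later in this article; the 33 percent thermal efficiency is an assumed, typical value for an older steam plant rather than a figure from the article.

```python
# A rough estimate of once-through cooling water needed per kilowatt-hour.
# Assumption (not from the article): 33 percent thermal efficiency, typical
# of an older steam plant. The 10-degree-Celsius rise in cooling-water
# temperature matches the average effluent warming cited later in the text.

SPECIFIC_HEAT_WATER = 4186.0   # joules per kilogram per degree Celsius
JOULES_PER_KWH = 3.6e6

def cooling_water_per_kwh(thermal_efficiency=0.33, delta_t_celsius=10.0):
    """Kilograms (roughly liters) of water needed to carry away the waste
    heat released while generating one kilowatt-hour of electricity."""
    waste_heat_joules = JOULES_PER_KWH * (1.0 / thermal_efficiency - 1.0)
    heat_absorbed_per_kg = SPECIFIC_HEAT_WATER * delta_t_celsius
    return waste_heat_joules / heat_absorbed_per_kg

liters = cooling_water_per_kwh()
gallons = liters / 3.785  # one US gallon of water is about 3.785 kilograms
print(f"~{liters:.0f} liters (~{gallons:.0f} gallons) of cooling water per kWh")
# Prints roughly 175 liters (about 46 gallons) per kilowatt-hour, consistent
# with the tens of gallons per kilowatt-hour that once-through plants withdraw.
```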

The end result is that thermoelectric power generation depends on a continuous supply of cool water. In the United States, roughly 90 percent of our electricity comes from this type of power plant, which is why so much of the domestic water budget is used by the electricity sector. Nuclear power plants require even more cooling water than other thermoelectric plants do.

Discussions of the energy-water nexus tend to focus on thermoelectric plants, because hydropower does not “use” water in the same sense as these power sources, aside from the evaporation that occurs on reservoirs built to support hydroelectric dams. Still, hydropower is an important electricity source, particularly in the Pacific Northwest, where climate change is not expected to cause problems with drought but is expected to change the timing of water delivery as mountain snowpack melts.

Exactly how much water is used by any of the more than 1,700 operational thermoelectric power plants in the United States depends on several factors, of which the most important is how cooling water is continuously supplied to the power plant.

About 47 percent of our electricity comes from power plants that use what’s called a once-through process. These plants are located on rivers, streams, lakes, and coastlines; the nearby water flows through the power plant and then is returned to its source. Other power plants, particularly those located in arid regions, use a recirculating system. These systems rely on evaporation to remove heat from the cooling water after it has passed through the condenser, either in ponds, where evaporation occurs naturally, or in cooling towers, which accelerate the process.

Each technology has tradeoffs related to water withdrawals, consumptive use, and water quality. A recirculating plant consumes, through evaporation, 2 to 30 times more water per kilowatt-hour of electricity generated than a once-through facility does. On the other hand, a power plant using once-through cooling will withdraw as much as 60 times more water per kilowatt-hour than will a plant that uses evaporative cooling. Although the majority of the water from these power plants is returned to the source, it is, on average, 10 degrees Celsius warmer than when it came into the plant, making power plants the top source of aquatic thermal pollution in the United States.

Most of the once-through power plants in the nation are located in the eastern United States, where there are abundant surface water resources to support the large water withdrawal requirement. In the West, evaporative cooling is the predominant technology because of the lack of ample water, so these power plants pay the penalty of a larger consumptive footprint. These differences create distinctive vulnerabilities for each half of the country.

Over the past 10 years, power plants have encountered problems generating adequate power because of insufficient water. Not surprisingly, these collisions at the energy-water nexus have generally occurred during heat waves and droughts. Here’s what happens: When it’s hot outside, air conditioners are cranked up, and power plants go into high gear. Turning up the power means that more water moves through the plants. Problems emerge when there isn’t enough water to meet these elevated electricity demands. During the 2012 drought in Texas, a reservoir serving the 2,250-megawatt Martin Lake power plant dropped so low that the operating company rushed to complete a pipeline that brought in water from a river 8 miles away. Climate models predict both higher temperatures and more drought in many regions. Places like Texas made it through droughts such as the one in 2012 but may face future energy problems in addition to the more obvious water-supply problems.

Another issue has to do with water temperature. If the cooling water coming into a plant is too warm, the thermodynamic process is no longer efficient, and electricity production drastically declines. And in the case of a nuclear power plant, without sufficient cooling water to move heat away from the nuclear core, a nuclear meltdown can occur. Some eastern power plants routinely have to curtail production because of increased temperatures in cooling water, and nuclear plants in particular have been forced to shut down. This happened at the now-retired Vermont Yankee Nuclear Power Plant in July 2012. During that month, the facility had to limit electricity production multiple times because of low flows on the Connecticut River and high water temperatures.

Elevated temperatures are not just a problem for power-plant operations. If the water entering a plant is already warm, the effluent is even hotter, which can create problems for aquatic ecosystems. Some bass species can tolerate high temperatures, but a rainbow trout thrives in waters around 14 degrees Celsius and generally cannot survive in temperatures above 24 degrees Celsius.

In some states, there are limits on effluent temperatures to protect fish. But given the need to ensure public health during extreme heat waves, power plants are often granted waivers so that they can use warm water to operate, and the water that is returned is much hotter. For example, during the heat wave in 2012, the Braidwood Nuclear Power Plant outside Chicago was one of at least 29 power plants in the state of Illinois granted a variance allowing effluent temperatures to exceed 32 degrees Celsius, the limit set by state law. In the Southeast, striped bass kills on Lake Norman in North Carolina have been linked on multiple occasions to high water temperatures associated with nuclear power generation.

Keeping the power on is not only important so that we can charge our laptops. It is a matter of public health and safety. When the lights go out, those who rely on electronic medical devices and those vulnerable to the heat, including the elderly, are at significant risk. In August 2003 alone, almost 45,000 heat-related deaths occurred across Europe, in part because nuclear power plants were not able to sustain operations. The water was too hot, and the demand was too high. Another ominous climate trend: Heat waves become doubly dangerous when they also disrupt the power needed for air conditioning.

Newer cooling technologies that require no water address some of these problems, but they work best in very cold, dry regions with Siberian-type climates. Dry cooling circulates cold, dry air through the system to absorb heat and condense steam. Hybrid technologies that can switch between wet and dry systems are now sometimes used, particularly at new power plants being constructed in the western United States. But in operational settings, dry cooling is efficient only when outdoor temperatures are relatively low. Dry cooling is most needed in arid deserts, yet those same places tend to get hot in the summer months, so plants would still have to switch to cooling water when temperatures climb too high. Fortunately, newer dry and hybrid cooling technologies are emerging that do not have this problem.

Western thermoelectric power plants have generally avoided water-related curtailments, because they are fairly well adapted to their low-water environments. Almost all of them use evaporative cooling, so the immediate availability of large quantities of water is not as important as it is for a once-through facility. Also, a large proportion of western power plants are fueled by natural gas and renewables, whereas those in the East are more likely to use coal or nuclear power. And the fuel source matters for water use just as much as it does for greenhouse gas emissions.

To illustrate this concept, consider a hypothetical 250-megawatt coal-fired power plant that has a 75 percent capacity factor and uses evaporative cooling towers. That plant would withdraw approximately 6 million cubic meters of water per year and consume 4 million cubic meters. If the operating utility were to opt for a lower-carbon technology, it might consider nuclear power. But the water intensity of power generation in a nuclear plant would be about 10 percent greater for withdrawals than that of the original coal-fired plant, because more cooling water is required to pull intense heat away from the nuclear core. And, surprisingly, a wet-cooled concentrated solar power plant would use just as much water as a nuclear facility, if not more. Concentrating solar power, such as the 280-megawatt Solana Generating Station in Arizona, uses a thermoelectric process; therefore cooling, whether by water or cold air, is still necessary, even though the Sun provides the thermal energy source.

If that coal-fired power plant were to switch to a natural gas source that uses an integrated gasification combined cycle (which turns fuel into a clean gas that burns extremely efficiently), the water use (both in terms of withdrawals and consumption) at the plant would be cut by about 70 percent per unit of electricity produced, and carbon emissions would be cut by roughly 40 percent.
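
The per-megawatt-hour water intensities implied by this comparison follow from simple arithmetic on the figures above. The short Python sketch below works them out; the conversion factors are standard, and the only numbers used are the ones given in the two preceding paragraphs.

```python
# Water intensities implied by the hypothetical 250-megawatt coal plant above
# (75 percent capacity factor, evaporative cooling towers), plus the relative
# changes the article cites for nuclear power and combined-cycle natural gas.

HOURS_PER_YEAR = 8760

def annual_generation_mwh(capacity_mw, capacity_factor):
    """Electricity produced in a year by a plant running at the given capacity factor."""
    return capacity_mw * HOURS_PER_YEAR * capacity_factor

coal_generation = annual_generation_mwh(250, 0.75)    # about 1.64 million MWh per year

coal_withdrawal = 6_000_000 / coal_generation          # cubic meters per MWh, ~3.7
coal_consumption = 4_000_000 / coal_generation         # cubic meters per MWh, ~2.4

# Nuclear: withdrawals roughly 10 percent higher per megawatt-hour than coal.
nuclear_withdrawal = 1.10 * coal_withdrawal

# Combined-cycle natural gas: water use cut by about 70 percent per unit of electricity.
gas_withdrawal = 0.30 * coal_withdrawal
gas_consumption = 0.30 * coal_consumption

for label, value in [("coal withdrawal", coal_withdrawal),
                     ("coal consumption", coal_consumption),
                     ("nuclear withdrawal", nuclear_withdrawal),
                     ("natural gas withdrawal", gas_withdrawal),
                     ("natural gas consumption", gas_consumption)]:
    print(f"{label}: {value:.1f} cubic meters per MWh")
```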

The benefits of a switch from coal to natural gas have included a decline in annual greenhouse gas emissions by the United States. However, burning natural gas still emits carbon. And there are fugitive methane emissions at the wellhead and in the midstream sector. So, depending on how future energy demands evolve, a natural gas future has the potential to offset its current carbon benefits.

Researchers and technologists have been testing carbon capture and storage technologies, along with the practicality of coupling them with coal and natural gas facilities. Doing so would greatly reduce carbon emissions, but it could also more than double the water intensity of electricity generation, because the process by which carbon is captured is energy intensive, requiring even more cooling water. So when it comes to electricity, low-carbon choices are not necessarily low-water choices. But they can be.

Both wind and photovoltaic electricity sources require very little, if any, water to operate. A trivial amount of water may be used for washing solar panels and wind turbines, but in practice such cleaning is rarely required. Water is used in the production of wind and solar power largely at the front end, for mining, processing, and fabrication. That’s not to say that any of these technologies are perfect. They have their own set of challenges related to land use, wildlife habitat fragmentation, and optimization of grid integration. In the design of power plants that can handle the heat, there will be complex tradeoffs to consider in order to manage the cascading risks associated with ensuring a reliable electricity supply with limited water.

Energy for Water

As water scarcity grows in drier areas, transporting and treating water become more energy-intensive. Addressing this other side of the energy-water nexus is a greater challenge in a changing climate, because there is less water to go around.

Exactly how much power is used globally to ensure adequate water supplies is, at best, a guess. Even in the United States, the best estimates suggest that pumping, conveying, and cleaning water requires 3 percent of the total electricity supply (13 percent when heating water is included), compared with the 5 percent of the electricity supply used for air conditioning. But even those data are hard to corroborate. Of course, ensuring access to a clean, safe water supply requires energy. But the energy intensity of that water varies significantly, depending on location. For instance, supplying water to a New Yorker takes about 0.7 kilowatt-hours per cubic meter, whereas the energy embedded in the water supply of a resident of southern California is almost 5 times that amount.
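
In household terms, the gap looks like this. The sketch below uses only the intensities just cited; the assumed annual household water use of about 400 cubic meters (roughly 300 gallons a day) is a hypothetical round number for illustration.

```python
# Approximate energy embedded in a household's water supply, based on the
# energy intensities cited above. The household's annual water use is an
# assumed round figure, not a number from the article.

NEW_YORK_INTENSITY = 0.7                    # kWh per cubic meter of delivered water
SOCAL_INTENSITY = 5 * NEW_YORK_INTENSITY    # "almost 5 times" the New York value

HOUSEHOLD_USE_M3_PER_YEAR = 400             # assumption: roughly 300 gallons per day

for place, intensity in [("New York", NEW_YORK_INTENSITY),
                         ("southern California", SOCAL_INTENSITY)]:
    embedded_kwh = intensity * HOUSEHOLD_USE_M3_PER_YEAR
    print(f"{place}: ~{embedded_kwh:.0f} kWh per household per year")
```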

Let’s break that down a bit: For the New Yorker, the energy intensity of water is embedded primarily in the distribution of drinking water and in wastewater treatment. Across the country, there are approximately 160,000 publicly owned drinking water systems and another 16,000 wastewater treatment plants. At drinking water plants, 80 percent of the energy required goes to pumping, making power the second-highest budget item after labor. On the wastewater side, power is needed for aeration, pumping, and solids processing. For this reason, 25 to 40 percent of the operating costs of a wastewater utility go to energy. And the dirtier the water, the more power is necessary.

For a westerner, the energy intensity of water tends to be greater, because the energy required for basic water-plant functions is compounded by the need to store and then move water long distances from the source to population centers in dry places. In 1907, the federal government established what is today called the Bureau of Reclamation and gave it the mission of developing and managing the water resources of the West. Twenty years later, construction began on Hoover Dam, the Bureau’s first large-scale project. The dam blocked part of the Colorado River, creating what is still the largest reservoir in the country, Lake Mead. This marked the beginning, not of how the West was won, but of how it was plumbed.

This tradition of managing water to support development continues today. Over 4,800 kilometers of pipelines, canals, and aqueducts transport roughly the same amount of water that flows annually in the Colorado River: 14.8 cubic kilometers. Not all trips are equal, however. Conveying water across flat land requires very little power, and water flowing downhill can generate power, but moving water upward, either from the ground or over mountains, demands a lot of energy.

Although the energy penalty of groundwater withdrawals may seem trivial, the costs add up. As mentioned earlier, a case in point is California. Revenue losses there of $2.7 billion during recent dry years are not solely the result of crop losses; over 20 percent of that cost is due to added spending on the electricity needed to pump more groundwater as surface water supplies diminish.

Given the power required to move water a couple of hundred feet to get it out of the ground, it’s no surprise that the energy intensity of the Southwest’s large conveyance systems that pump water over mountains is far greater.
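
The physics behind that statement is straightforward: The energy needed scales with the height of the lift. The minimal sketch below assumes a 70 percent wire-to-water pumping efficiency and takes “a couple of hundred feet” as about 60 meters; the 915-meter figure is the full lift of the Central Arizona Project described in the next section, so it is an upper bound, since not every drop of water travels the entire route.

```python
# Idealized electricity needed to lift water, from the physics of pumping:
#   energy = density * g * lift_height * volume / pump_efficiency
# The 70 percent wire-to-water efficiency is an assumption for illustration.

DENSITY = 1000.0        # kilograms per cubic meter of water
G = 9.81                # meters per second squared
JOULES_PER_KWH = 3.6e6

def pumping_kwh_per_m3(lift_meters, efficiency=0.70):
    """Electricity (kWh) required to raise one cubic meter of water."""
    return DENSITY * G * lift_meters / efficiency / JOULES_PER_KWH

# A "couple of hundred feet" of groundwater lift, taken here as about 60 meters.
print(f"60 m groundwater lift: ~{pumping_kwh_per_m3(60):.2f} kWh per cubic meter")

# The full 915-meter rise of the Central Arizona Project, described below.
print(f"915 m mountain lift:   ~{pumping_kwh_per_m3(915):.2f} kWh per cubic meter")
```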

The large-scale surface-water complex that moves water, particularly from the Colorado River, through the Western landscape makes some of the region’s water providers among its largest users of electricity. In Arizona, the largest single user is the aqueduct called the Central Arizona Project (CAP), a perfect example of the potential conflict inherent in the energy-water nexus.

Over 20 percent of Arizona’s water supply is brought in from the Colorado River via CAP. Construction of the aqueduct began in 1973 and was largely complete by 1993, at a cost of $3.6 billion. The system uses a series of pumps to move water 541 kilometers (336 miles) over 915 meters (3,000 feet) of elevation, traveling from Lake Havasu up to Phoenix and then on to Tucson. In a given year, 2.8 million megawatt-hours of electricity are needed to run CAP and ensure water for these two desert cities. The irony is that 90 percent of that power comes from the Navajo Generating Station. In 2014, this coal-fired power plant on the banks of Lake Powell emitted over 17 million metric tons of carbon, making it one of the top three carbon emitters in the United States. Since 24 percent of the electricity generated at Navajo is used by the aqueduct, the carbon footprint of this water supply is among the highest in the country. And to complete the nexus, that plant uses a lot of water.

The biggest challenge for the West is that the region is growing faster than most other areas of the United States, and climate change is already diminishing water supplies. Water managers know this fact, and many places are considering adaptation strategies that include large-scale conveyance systems. Taken together, the major projects under consideration, and some under construction, would move an additional 5.4 cubic kilometers of water to water-poor areas. What is most sobering is that the estimated power intensity of many of these projects is even larger than the energy footprint of the CAP, but the total amount of water delivered is unlikely to be as large.

Greenhouse gas emissions drive global warming, and that warming increases water demand. Supplying water requires energy, and producing that energy emits greenhouse gases. Warmer temperatures also interfere with energy production, because power plants cannot operate optimally. We have to find a way to break out of this feedback loop. If the Southwest and California are to meet future water demands by investing in large water projects, they certainly ought to consider how they are going to power these systems. The choices they make will affect, one way or another, emissions targets and related mitigation policies.

The Energy-Water Future

There are unique vulnerabilities that emerge for both the energy and water sectors when the interconnections between the two are considered. These issues manifest very differently depending on location, and the risks will evolve as climate change and population growth drive the planet into a fundamentally different future.

There may be no perfect solution to these cross-sector issues, one that will truly satisfy all perspectives. But by understanding the cascading challenges and tradeoffs across multiple sectors, we have the opportunity to optimize our investments by considering the broad picture of risks and vulnerabilities. Many of us are looking at a future that will be hotter and drier, with more people and fewer resources to go around. But now that we know more about how energy, water, and climate intersect, we have the opportunity to plan, design, and innovate for what will be a very different planet. We can realize the co-benefits of embracing efficiencies in our water and energy supplies; we can optimize and implement the utility of the future, one that integrates management of water, energy, and air quality; and we can realize the value of our natural resources and how they support our way of life. Understanding these connections will help us to ensure the resilience of the entire system, whether it is an ecosystem, a city, a nation, or the planet.